How to load an updated Java class on an existing Ignite cluster? - ignite

I have an Ignite cluster of 2 or more nodes (max of 4) running in server mode.
Let's say I have an Ignite cache defined by a Java class called Employee (call this version 1) that is loaded and in use. If I update this Employee class with a new member field (version 2), how would I go about updating the loaded class with the new version (i.e. updating the cache definition)? How does Ignite handle cache records created previously from Employee version 1 versus new cache records created with Employee version 2? If I have SQL queries that use fields defined only in version 2, will they fail because the version 1 based cache records are not compatible with the newly defined field(s)?
I can delete the db folder from the working directory and reload the new class as part of restarting the Ignite service, but then I lose all previous data.
A cluster member with the updated Employee class definition will not join the other nodes that are still running the original version 1 class. Again, I need to shut down all members of the cluster, load the new Employee version, and restart all of them.

Ignite doesn't store code versions. The latest deployed class is in use.
In order to preserve the fields, Ignite builds binary metadata for a custom type and stores it for validation. If you are going to add new fields and leave the old ones untouched, Ignite will update the metadata automatically; there is nothing to configure or change. An old record will be deserialised with the new fields set to null.
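For illustration, a minimal sketch of what such a change could look like on the Java side (the field names here are made up, not taken from the question):

import java.io.Serializable;

// Employee.java, version 2. Version 1 had only the first two fields.
public class Employee implements Serializable {
    private String name;
    private int salary;

    // Added in version 2. Cache records written with version 1
    // deserialise with this field set to null after the class is updated.
    private String department;
}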
For SQL it's recommended to go with DDL to adjust the schema accordingly:
ALTER TABLE "schema".MyTable DROP COLUMN oldColumn;
ALTER TABLE "schema".MyTable ADD COLUMN newColumn VARCHAR;
You can check the available metadata using the control script's --meta command (not sure if it's available in your Ignite edition, though):
control.sh --meta list
Ignite won't propagate POJO changes automatically using peerClassLoading. You should either update the JARs manually or rely on some deployment SPI, like URL deployment.
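If you go the deployment SPI route, a rough sketch of a node configured with UriDeploymentSpi could look like the following (the URI is just a placeholder, and this assumes the ignite-urideploy module is on the classpath):

import java.util.Arrays;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

public class NodeStartup {
    public static void main(String[] args) {
        UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
        // Placeholder location that the SPI scans for packaged classes
        deploymentSpi.setUriList(Arrays.asList("file:///opt/ignite/deployment"));

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDeploymentSpi(deploymentSpi);

        Ignition.start(cfg);
    }
}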
Overall, you should not remove your db folder each time you are going to make changes to your POJOs/SQL tables. Adding new fields should be totally OK. Do not remove the old fields; it's better to mark them as deprecated.

Related

Not getting the column access in Ignite Cache created and loaded from Oracle

I am doing a POC to ingest data from Oracle into an Ignite cluster and fetch the data from Ignite in another application. When I created the model and cache, I specified the key as String and the value as a custom object. The data loaded into the cluster, but when I query "SELECT * FROM TB_USER" I get only two columns, i.e. _KEY and _VAL. I am trying to get all the columns from TB_USER. What configuration is required for this?
There are three ways of configuring SQL tables in Ignite:
DDL statements (CREATE TABLE). As far as I can see, you used something else.
QueryEntities. You should list all columns that you want to see in your table in the QueryEntity#fields property. All names should correspond to field names of your Java objects.
Annotations. Fields that are annotated with @QuerySqlField will become columns in your table. (Options 2 and 3 are sketched after this list.)
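A minimal sketch of options 2 and 3, assuming a hypothetical User value class (the cache name, class, and field names are placeholders):

import java.util.Collections;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class TbUserCacheConfig {

    // Option 3: annotate the fields that should become SQL columns.
    public static class User {
        @QuerySqlField
        private String userName;

        @QuerySqlField
        private String email;
    }

    public static CacheConfiguration<String, User> viaAnnotations() {
        CacheConfiguration<String, User> ccfg = new CacheConfiguration<>("TB_USER");
        // Scans the key/value classes for @QuerySqlField annotations.
        ccfg.setIndexedTypes(String.class, User.class);
        return ccfg;
    }

    // Option 2: list the columns explicitly via a QueryEntity.
    public static CacheConfiguration<String, User> viaQueryEntity() {
        QueryEntity entity = new QueryEntity(String.class.getName(), User.class.getName());
        entity.addQueryField("userName", String.class.getName(), null);
        entity.addQueryField("email", String.class.getName(), null);

        CacheConfiguration<String, User> ccfg = new CacheConfiguration<>("TB_USER");
        ccfg.setQueryEntities(Collections.singletonList(entity));
        return ccfg;
    }
}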

SSIS Migrating data to Azure from multiple sources

The scenario is this: we have an application that is deployed to a number of locations. Each application is using a local instance of SQL Server (2016) with exactly the same DB schema.
The reason for local-instance DBs is that the servers on which the application is deployed will not have internet access - most of the time.
We were now considering keeping the same solution but adding an SSIS package that can be executed at a later time - when the server is connected to the internet.
For now let's assume that once the package is executed - no further DB changes will be made to the local instance.
All tables (except for many-to-many intermediary) have an INT IDENTITY primary key.
What I need is for the table PKs to be auto-generated on the Azure DB, which I'm currently doing via the mapping property for the PK. However, I would also need all FKs pointing to that PK to reference the newly generated ID instead of the original ID.
Since data would be coming from multiple deployments, I want to keep all data as new entries - without updating / deleting existent records.
Could someone kindly explain or link me to some resource that handles this situation?
[ For future reference I'm considering using UNIQUEIDENTIFIER instead of INT, but this is what we have atm... ]
Edit: Added example
So for instance, one of the tables would be Events. Each DB deployment will have at least one Event, starting from Id 1. I'd like that when consolidating the data into the Azure DB, their original Id is ignored and they instead get an auto-generated Id from the Azure DB. That part is OK. But then I'd need all FKs pointing to EventId to point to the new Id, so instead of e.g. 1 they'd reference the new Id assigned by the Azure DB (e.g. 3).

SolrCloud update node by node after schema update

Is it possible to update nodes one by one in SolrCloud, so that after a Solr schema update you don't have to reindex all nodes at once and incur search downtime?
Current SolrCluster configuration
So basically can I:
Put one node into recovery mode
Update this node
Update schema.xml
Reindex Node
Bring this node as Leader
Put another node into recovery mode
Update this node too and launch it
Or I didn't get something?
Your question is how to update the schema.xml in SolrCloud.
This answer is about changes to schema.xml that do not require reindexing.
I also assume that you have created your collection with a named configuration in ZooKeeper (collection.configName).
In this case:
Upload your changed configuration folder to ZooKeeper.
Reload your collection.
Be aware that step 2 needs the name of the collection (it does not support an alias). Both steps are sketched below.
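A rough command-line sketch of those two steps (ZooKeeper address, paths, config name, and collection name are placeholders, and the zkcli.sh location varies by Solr version):

# 1. Upload the changed configuration folder to ZooKeeper
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig -confname myconf -confdir /path/to/changed/conf
# 2. Reload the collection through the Collections API
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"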

Liferay ServiceBuilder doesn't alter tables

Short story
When I modify the column widths in tables.sql (VARCHAR(4000)) generated by the service builder, redeploying the portlet does not cause Liferay to alter the db tables. How can I make sure that the column widths get expanded?
Long story
I have to make some changes to a Liferay 6.1.20 EE GA2 project developed by another contractor. The project uses maven as a build tool.
After adding some columns to service.xml and running mvn liferay:build-service, I noticed that portlet-model-hints.xml got overridden (see https://issues.liferay.com/browse/MAVEN-37) and reset to the default column widths.
There's a lot of data in the tables (it is running in production), so I cannot simply drop and recreate them.
So I manually modified the column width in the generated tables.sql and redeployed the portlet. The new columns are now present in the db tables, but the column widths were not altered.
Does Liferay alter column width or do I have to fire some sql statements against the database manually?
(We are working with an oracle 10g database)
If you want to change the column widths, you need to set them in portlet-model-hints.xml. For instance, to increase a field to 255 characters you would add a max-length hint, as sketched below. (It's important to run the service builder again after that change.)
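A minimal sketch of such a hint in portlet-model-hints.xml (the entity and field names here are placeholders):

<model-hints>
    <model name="com.example.mymodule.model.MyEntity">
        <field name="description" type="String">
            <hint name="max-length">255</hint>
        </field>
    </model>
</model-hints>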
ServiceBuilder doesn't do ALTER TABLE by itself - you'll have to write an UpgradeProcess for this yourself. Check this blog post or the underlying documentation.
In short: the update that can always be done automatically is of the type "DROP TABLE - CREATE TABLE", but, as you say, this is typically not desirable. Anything fancier needs to be done manually, and that's exactly what this mechanism is for.
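For reference, a rough sketch of such an upgrade step (class, table, and column names are invented; the exact ALTER syntax depends on your database - Oracle here - and the linked documentation covers how to register the process for your plugin):

import com.liferay.portal.kernel.upgrade.UpgradeProcess;

public class UpgradeMyEntityColumnWidth extends UpgradeProcess {

    @Override
    protected void doUpgrade() throws Exception {
        // Widen the column to match the new length declared in portlet-model-hints.xml
        runSQL("alter table MyPlugin_MyEntity modify (description VARCHAR2(4000))");
    }
}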

Doctrine schema changes while keeping data?

We're developing a Doctrine backed website using YAML to define our schema. Our schema changes regularly (including fk relations) so we need to do a lot of:
Doctrine::generateModelsFromYaml(APPPATH . 'models/yaml', APPPATH . 'models', array('generateTableClasses' => true));
Doctrine::dropDatabases();
Doctrine::createDatabases();
Doctrine::createTablesFromModels();
We would like to keep existing data and store it back in the re-created database. So I copy the data into a temporary database before the main db is dropped.
How do I get the data from the "old-scheme DB copy" to the "new-scheme DB"? (the new scheme only contains NEW columns, NO COLUMNS ARE REMOVED)
NOTE:
This obviously doesn't work, because the column count doesn't match:
INSERT INTO newscheme.Table SELECT * FROM copy.Table;
This does work, but it takes too much time to write for every table:
INSERT INTO newscheme.Table SELECT old.col, old.col2, old.col3, 'somenewdefaultvalue' FROM copy.Table AS old;
Have you looked into Migrations? They allow you to alter your database schema in a programmatic way, without losing data (unless you remove columns, of course).
How about writing a script (using the Doctrine classes, for example) which parses the YAML schema files (both the previous version and the "next" version) and generates the SQL scripts to run? It would be a one-time job and wouldn't require that much work. The benefit of generating manual migration scripts is that you can easily store them in version control and replay the version steps later on. If that's not something you need, you can just gather up the changes in code and apply them directly through the database driver.
Of course, the fancier your schema changes become, the harder the maintenance will get, e.g. column name changes, null to not null, etc.