Below are my key and value objects in Ignite.
The index key and affinity key are annotated in the Key object.
The GridGain Control Center UI is attached. The schema name is KNOWLEDGETIME.
VdtProto.KnowledgeTime.class is a Protobuf-generated object.
The key and value classes are Key.class and VdtProto.KnowledgeTime.class respectively.
I inserted a few records into the cache and am trying to query them with the SQL below, but no records are returned. Why? When I tried with a plain Java POJO as the value, it worked.
SELECT * FROM "test-client_null_cacheName10"."KNOWLEDGETIME"
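For context, here is a minimal sketch of what such an annotated Key class typically looks like in Ignite (the field names knowledgeId and regionId are placeholders, not the actual ones):

import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Key {
    // Hypothetical indexed field exposed as a SQL column.
    @QuerySqlField(index = true)
    private long knowledgeId;

    // Hypothetical affinity key used to colocate related entries on the same node.
    @AffinityKeyMapped
    @QuerySqlField
    private String regionId;
}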
I have an Ignite cluster of 2 or more nodes (a maximum of 4) in server mode.
Let's say I have an Ignite cache defined by a Java class called Employee (let's call this version 1), loaded and in use. If I update this Employee class with a new member field (version 2), how would I go about updating the loaded class with the new version (i.e. updating the cache definition)? How does Ignite handle objects (cache records) created previously based on Employee version 1 vs. new cache records created with Employee version 2? If I have SQL queries using new fields as defined in version 2, are they going to fail because the Employee version 1 based objects/cache records are not compatible with the new SQL using the newly defined field(s) in Employee version 2?
I can delete the db folder from the working directory and reload the new class as part of restarting the Ignite service, but then I lose all previous data.
A cluster member with the updated Employee class definition will not join other nodes in the cluster that are still loaded with the original Employee version 1 class. Again, I need to shut down all members of the cluster, reload the new Employee version, and restart all members.
Ignite doesn't store code versions; the latest deployed class is the one in use.
In order to preserve the fields, Ignite builds binary metadata for a custom type and stores it for validation. If you add new fields and leave the old ones untouched, Ignite will update the metadata automatically; nothing needs to be configured or changed. An old record will be deserialized with the new fields set to null.
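For illustration, a minimal sketch based on the Employee example from the question above (the added field name department is an assumption):

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Employee {
    // Existing version 1 field, left untouched.
    @QuerySqlField
    private String name;

    // New field added in version 2 (hypothetical name).
    // Records written with version 1 of the class are deserialized with department == null.
    @QuerySqlField
    private String department;
}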
For SQL it's recommended to go with DDL to adjust the schema accordingly:
ALTER TABLE "schema".MyTable DROP COLUMN oldColumn;
ALTER TABLE "schema".MyTable ADD COLUMN newColumn VARCHAR;
You can check the available metadata using the control script's --meta command (I'm not sure whether it is available in the Ignite edition, though):
control.sh --meta list
Ignite won't propagate POJO changes automatically via peerClassLoading. You should either update the JARs manually or rely on a deployment SPI, like URL deployment.
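As a rough sketch of the deployment SPI option, assuming it refers to UriDeploymentSpi from the ignite-urideploy module (the URI below is a placeholder):

import java.util.Arrays;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

public class NodeWithUriDeployment {
    public static void main(String[] args) {
        // Pull deployable code from a shared location instead of relying on peerClassLoading.
        UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
        deploymentSpi.setUriList(Arrays.asList("file:///opt/ignite/deployment")); // placeholder URI

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDeploymentSpi(deploymentSpi);
        Ignition.start(cfg);
    }
}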
Overall, you should not remove your db folder every time you make changes to your POJOs/SQL tables. Adding new fields should be totally fine. Do not remove the old fields; it's better to mark them as deprecated.
I'm trying to use Azure Data Factory (V2) to copy data to a MongoDB database on Atlas, using the MongoDB Atlas connector, but I have an issue.
I want to do an upsert, but the data I want to copy has no primary key, and as the documentation says:
Note: Data Factory automatically generates an _id for a document if an _id isn't specified either in the original document or by column mapping. This means that you must ensure that, for upsert to work as expected, your document has an ID.
This means the first load works fine, but then subsequent loads just insert more data rather than replacing current records.
I also can't find anything native to Data Factory that would allow me to do a delete on the target collection before running the Copy step.
My fallback will be to create a small Function to delete the data in the target collection before inserting fresh data, as below: a full wipe and replace. But before doing that, I wondered if anyone had tried something similar and could suggest something within Data Factory that I have missed that would meet my needs.
As per the documentation, you cannot delete multiple documents at once from MongoDB Atlas. As an alternative, you can use the db.collection.deleteMany() method in the embedded MongoDB Shell to delete multiple documents in a single operation.
It is recommended to use the Mongo Shell to delete via a query. To delete all documents from a collection, pass an empty filter document {} to the db.collection.deleteMany() method.
E.g.: db.movies.deleteMany({})
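If you do end up writing the small wipe function mentioned in the question, a minimal sketch using the MongoDB Java driver could look like the following (the connection string, database, and collection names are placeholders):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class WipeCollection {
    public static void main(String[] args) {
        // Placeholder Atlas connection string, database, and collection names.
        try (MongoClient client = MongoClients.create("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")) {
            MongoCollection<Document> collection =
                client.getDatabase("myDatabase").getCollection("myCollection");

            // An empty filter deletes every document, mirroring db.collection.deleteMany({}).
            long deleted = collection.deleteMany(new Document()).getDeletedCount();
            System.out.println("Deleted " + deleted + " documents");
        }
    }
}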
I am using Spring Data Redis and saving my data as a hash by putting the @RedisHash("myKey") annotation on my entity class. The data is getting inserted, but when I check the type of the data I have inserted, it shows SET.
I tried the following command:
TYPE myKey
Result : set
What changes do I have to make if I want the data to be saved as a hash and not as a set?
This is the definition I get for that annotation:
RedisHash marks Objects as aggregate roots to be stored in a
Redis hash.
You can try Hash mapping as described in the documentation.
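For what it's worth, the set you see at myKey is most likely the keyspace index of entity ids that Spring Data Redis maintains; the entities themselves are written as hashes under keys such as myKey:<id>. If you want to write a hash under your own key explicitly, a minimal sketch of Hash mapping with Jackson2HashMapper could look like this (the RedisTemplate wiring and the key/entity passed in are assumptions):

import java.util.Map;
import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.hash.Jackson2HashMapper;

public class EntityHashWriter {

    private final RedisTemplate<String, Object> redisTemplate;
    private final Jackson2HashMapper mapper = new Jackson2HashMapper(false);

    public EntityHashWriter(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void save(String key, Object entity) {
        // Convert the entity into a field/value map and store it as a Redis hash,
        // so TYPE <key> reports "hash" instead of "set".
        Map<String, Object> hash = mapper.toHash(entity);
        HashOperations<String, String, Object> ops = redisTemplate.opsForHash();
        ops.putAll(key, hash);
    }
}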
I am doing a POC to ingest data from Oracle into an Ignite cluster and fetch the data from Ignite in another application. When I created the model and cache, I specified the key as String and the value as a custom object. The data loaded into the cluster, but when I query "SELECT * FROM TB_USER" I get only two columns, i.e. _KEY and _VAL. I am trying to get all the columns from TB_USER. What configuration is required for this?
There are three ways of configuring SQL tables in Ignite:
DDL statements (CREATE TABLE). As far as I can see, you used something else.
QueryEntities. You should list all the columns that you want to see in your table in the QueryEntity#fields property. The names should correspond to the field names of your Java objects.
Annotations. Fields that are annotated with @QuerySqlField will become columns in your table; see the sketch after this list.
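A minimal sketch of the annotation approach, assuming a hypothetical value class for TB_USER (the field and cache names are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class TbUserCacheExample {

    // Hypothetical value class; every annotated field becomes a SQL column.
    public static class TbUser {
        @QuerySqlField(index = true)
        private String userId;

        @QuerySqlField
        private String userName;

        @QuerySqlField
        private String email;
    }

    public static void main(String[] args) {
        CacheConfiguration<String, TbUser> cfg = new CacheConfiguration<>("TB_USER");
        // Register the key/value pair for SQL so the annotated fields show up as columns
        // instead of just _KEY and _VAL.
        cfg.setIndexedTypes(String.class, TbUser.class);

        Ignite ignite = Ignition.start();
        IgniteCache<String, TbUser> cache = ignite.getOrCreateCache(cfg);
    }
}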
I'm running an instance of HSQLDB from inside a Java class: an instance of org.hsqldb.Server is initialized and set to be in-memory only, with no other configuration; it is then filled with data that is accessible from outside the running JVM.
Using SQuirreL set to "Read on, Block size", I connect to the HSQLDB server and query for data: it seems like all rows returned by the query are loaded into client memory and then displayed block by block. With Oracle, for example, I see the client downloading only the displayed rows; the others are downloaded only when the list is scrolled down. Is it possible to force the HSQLDB client to act in the same way?
The query is performed using a java.sql.Statement object. This has a setFetchSize(n) method that indicates the number of rows to fetch at a time. HSQLDB supports this when it is used in Server mode: it returns the rows in chunks of the indicated fetch size.
The application program, in this case SQuirreL, should explicitly call setFetchSize(n) on the Statement object.
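As a rough illustration of what the client has to do, here is a minimal JDBC sketch against an HSQLDB server (the connection URL, credentials, and table name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection URL and credentials for an HSQLDB server instance.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hsqldb:hsql://localhost/mydb", "SA", "");
             Statement stmt = conn.createStatement()) {

            // Ask the driver to return rows in chunks of 100 instead of all at once.
            stmt.setFetchSize(100);

            try (ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE")) {
                while (rs.next()) {
                    // process each row as it arrives
                }
            }
        }
    }
}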