I have an Ignite cache named "IgniteCache" on each node in a cluster of 2 servers, with local mode enabled. A certain number of entries are loaded into these local caches. I then started a separate client node which queries data from "IgniteCache" on the cluster. But whenever I query the data, I get a null result instead of data from both server nodes.
This happens because local caches are not distributed across nodes. When you query a local cache, you will only see data stored locally on the same node. Since the client holds no data, the result is empty.
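For illustration, here is a minimal sketch of switching the cache from LOCAL to PARTITIONED mode so that a client node can see data held on any server. The cache name comes from the question; the key/value types and the rest of the setup are assumptions:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheModeSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> cfg =
                new CacheConfiguration<>("IgniteCache");
            // PARTITIONED spreads entries across the server nodes,
            // so queries from a client node see the whole data set.
            cfg.setCacheMode(CacheMode.PARTITIONED); // instead of CacheMode.LOCAL

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
            cache.put(1, "value");
            System.out.println(cache.get(1)); // visible from any node in the cluster
        }
    }
}
```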
I have an Apache Ignite cluster with 5 nodes, running in PARTITIONED mode with 1 back-up copy for each primary partition (also configured to read from backup if it's on the local node).
Updates to the data in one of the caches are received from a Kafka topic; the updates are processed and the cache is reloaded as required.
However, I occasionally observe that when I request data from the cache, I get the correct updated data a handful of times, alternating with the stale pre-update data.
It seems to me that something fails when syncing between the primary and backup node upon update (the configuration is FULL_SYNC, so this is not an async issue). However, I can't spot any errors in the logs that would suggest this.
How can I determine whether this is the cause of the issue? What else could be going wrong to cause this behaviour?
Running on Ignite 2.9.1
Thanks
When I tested the expiration behaviour of an Infinispan clustered node cache, I found that when an entry reached the maximum idle time, the node did not fetch "the last time the entry was accessed" from the other nodes in the cluster, but directly invalidated its local copy of the entry. For example: I started two nodes, A and B, and set the maximum idle time of the cache to 10s. At the start of the test, I sent a request to node A to read a database record and write it to the cache; node A then replicated the cached entry to node B. At 5s I accessed the cache entry on node A, and after 10s I accessed it on node B. I found that the entry on node B had already been invalidated: node B re-read the record from the database, wrote it to the cache, and replicated it to the other nodes, instead of treating the cached entry as still valid.
Why is it different from the description in the document? http://infinispan.org/docs/stable/user_guide/user_guide.html#expiration_details
For the clustered cache expiration, I configured it as follows:

Configuration c = new ConfigurationBuilder()
    .expiration().enableReaper().wakeUpInterval(50000L).maxIdle(10000L).build();
It sounds like you are using an older version of Infinispan. Cluster-wide max-idle expiration wasn't introduced until 9.3 (https://issues.jboss.org/browse/ISPN-9003). If the issue still persists on 9.3 or newer, you can log a bug at https://issues.jboss.org/projects/ISPN.
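For reference, a rough sketch of what the configuration from the question could look like on Infinispan 9.3+, where the max-idle check consults the other owners of the entry before expiring it. The clustered cache mode is an assumption (the question does not state which mode was used); the expiration values are taken from the question:

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class ExpirationConfigSketch {
    public static Configuration build() {
        // On 9.3+, max-idle in a clustered cache checks the last-access
        // time across all owners before an entry is expired.
        return new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.REPL_SYNC) // assumed cache mode
            .expiration()
                .enableReaper()
                .wakeUpInterval(50000L) // reaper interval, ms (from the question)
                .maxIdle(10000L)        // max idle time, ms (from the question)
            .build();
    }
}
```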
I am trying to position Ignite as a query grid for databases such as Kudu, HBase, etc. All data silos would then be queried over Ignite with read/write-through. How is this possible? Are there any integrations with them?
The first time a SQL query runs, it would need to pull the data from such databases and create the key/value entries in Ignite.
Then, if one, two, or three nodes go down, the data stored in memory will eventually be lost. How is recovery done, or is it not possible?
Thanks
CK
Ignite SQL cannot load specific data from an external store by query; that is only possible for the API get()/getAll() operations. To be able to query data, you first need to load it into Ignite, for example with loadCache(). Internally, this function runs a query against the target database and transforms the response into key-value form.
BTW, if you enable native persistence in Ignite, it will know the structure of the data and will be able to query it even if not all entries are loaded into memory.
For the node-crash case, the traditional approach is data replication between nodes; in Ignite these replicas are called backups. If you lose more nodes than the configured number of backups, you will need to preload the data from the store again.
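To make that concrete, here is a minimal sketch of a cache with one backup and a warm-up via loadCache(). The cache name and types are made up for illustration, and the actual store implementation for Kudu/HBase would have to be wired in via setCacheStoreFactory():

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class LoadCacheSketch {
    public static void main(String[] args) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("personCache");
        cfg.setBackups(1);        // one extra copy per partition survives a single node loss
        cfg.setReadThrough(true); // get()/getAll() fall through to the external store
        cfg.setWriteThrough(true);
        // cfg.setCacheStoreFactory(...) would point at the Kudu/HBase-backed store.

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);
            // Pull everything from the store into memory so SQL queries can see it.
            cache.loadCache(null); // null predicate = load all entries the store returns
        }
    }
}
```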
We are using the Ignite JDBC thin driver to store 1 million records in a table in the Ignite cache.
Inserting 1 million records on a single node takes 60 seconds, whereas on a cluster of 2 nodes it takes 5 minutes, and the time grows exponentially as the number of nodes increases.
Attached is the Ignite log file showing where the time was spent on the cluster.
Attached is the configuration file for the cluster.
The log and configuration files are here.
Is there any additional configuration required to bring the insert time down over a cluster?
Please make sure that you always test networked configuration.
You should avoid testing the "client and server on the same machine" configuration because it cannot be compared with "two server nodes on different machines". And it certainly should not be compared with "two server nodes on the same machine" :)
I've heard that the thin JDBC driver is not yet optimized for fast INSERTs. Please try the client-node JDBC driver with batching (via PreparedStatement.addBatch()).
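A rough sketch of batched inserts over JDBC; the connection URL, table, and batch size are assumptions, and the same addBatch()/executeBatch() pattern applies when switching to the client-node driver:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and table; adjust to your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO CITY (id, name) VALUES (?, ?)")) {
            for (int i = 0; i < 1_000_000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "city-" + i);
                ps.addBatch();                 // queue the row instead of a round trip per INSERT
                if (i % 10_000 == 0)
                    ps.executeBatch();         // flush periodically to bound client memory
            }
            ps.executeBatch();                 // flush the remainder
        }
    }
}
```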
I have 2 nodes on which I'm trying to run 4 Ignite servers (2 per node) and 16 Ignite clients (8 per node). I am using replicated cache mode. I can see that the load on the cluster is not distributed evenly across all servers.
My intention in having 2 servers per node is to split the load of the 8 local clients across the local servers, with each server using write-behind to replicate the data to all other servers.
But I notice that only one server is taking the load, running at 200% CPU, while the other 3 servers sit at very low usage, around 20% CPU. How can I set up the cluster to distribute the client load evenly across all servers? Thanks in advance.
I'm generating the load by inserting the same value 1 million times and then reading it back using the same key.
Here is your problem: the same key is always stored on the same Ignite node, according to the affinity function (see https://apacheignite.readme.io/docs/data-grid), so only one node takes the read and write load.
You should use a wide range of keys instead.
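As a sketch, spreading puts over distinct keys lets the affinity function map them to partitions owned by different servers, so all nodes share the work. The cache name is made up for illustration:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class SpreadLoadSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("testCache");
            // Distinct keys hash to different partitions, so the write load
            // is spread across the primaries on all server nodes.
            for (int i = 0; i < 1_000_000; i++)
                cache.put(i, "value-" + i); // instead of put(FIXED_KEY, ...)
        }
    }
}
```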