So I'm totally new to Ignite. Is there any configuration or strategy to export all data present in the cache memory to the local hard disk in Ignite?
Basically, what I'm hoping for is some kind of logger/snapshot that shows the change in data when any kind of SQL update operation is performed on the data present in the caches.
If someone could suggest a solution, I'd appreciate it a lot.
You can create and configure a persistence store for any cache [1]. If the cluster is restarted, all the data will still be there and can be reloaded into memory using the IgniteCache#loadCache(..) method. Out of the box Ignite provides integration with RDBMS [2] and Cassandra [3]; a minimal sketch follows the links below.
Additionally, in one of the future versions (most likely the next one, 2.1) Ignite will provide local disk persistence storage, which will allow running with a cold cache, i.e. without explicit reloading after a cluster restart. I would recommend monitoring the dev and user Apache Ignite mailing lists for more details.
[1] https://apacheignite.readme.io/docs/persistent-store
[2] https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration
[3] https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra
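To make [1] concrete, here is a minimal Java sketch, assuming a hypothetical MyStore class and a made-up cache name "lookupData"; in practice you would plug in the ready-made JDBC [2] or Cassandra [3] store factory instead of this stub.

```java
// Hedged sketch only: "lookupData" and MyStore are made-up names; a real setup
// would use the JDBC or Cassandra store integrations referenced above.
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgniteBiInClosure;

public class CacheStoreExample {
    public static void main(String[] args) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("lookupData");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyStore.class));
        cfg.setReadThrough(true);   // load missing entries from the store on cache misses
        cfg.setWriteThrough(true);  // propagate puts/removes back to the store

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);

            // After a cluster restart the data is still in the store and can be
            // preloaded back into memory:
            cache.loadCache(null);
        }
    }

    /** Stub store; a real one would talk to your RDBMS, Cassandra, etc. */
    public static class MyStore extends CacheStoreAdapter<Long, String> {
        @Override public void loadCache(IgniteBiInClosure<Long, String> clo, Object... args) {
            // Iterate over the underlying storage and feed entries into the cache:
            // clo.apply(key, value);
        }
        @Override public String load(Long key) { /* SELECT by key */ return null; }
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) { /* UPSERT */ }
        @Override public void delete(Object key) { /* DELETE by key */ }
    }
}
```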
I'd like to upgrade the Redis Memorystore instance in our gcloud project because 5.x (at least on GitHub) appears to have reached its end of life. It's being used for simple key-value pairs, so I don't expect anything unexpected during the upgrade to 6.x. However, management is nervous and wants a way to roll back the upgrade if there are issues. Is there a way to do this? The documentation appears to say that rollback is not possible. I plan to do the usual backup and then upgrade. The instance is just the Basic Tier.
To upgrade the Redis Memorystore instance, follow the best practices mentioned in the public documentation:
We recommend exporting your instance data before running a version upgrade operation.
Note that upgrading an instance is irreversible. You cannot downgrade the Redis version of a Memorystore for Redis instance.
For Standard Tier instances, to increase the speed and reliability of your version upgrade operation, upgrade your instance during periods of low instance traffic. To learn how to monitor instance traffic, see Monitoring Redis instances.
The documentation also recommends enabling RDB snapshots:
Memorystore for Redis is primarily used as an in-memory cache. When using Memorystore as a cache, your application can either tolerate loss of cache data or can very easily repopulate the cache from a persistent store.
However, there are some use cases where downtime for a Memorystore instance, or a complete loss of instance data, can cause long application downtimes. We recommend using the Standard Tier as the primary mechanism for high availability. Additionally, enabling RDB snapshots on Standard Tier instances provides extra protection from failures that can cause cache flushes. The Standard Tier provides a highly available instance with multiple replicas, and enables fast recovery using automatic failover if the primary fails.
In some scenarios you may also want to ensure data can be recovered from snapshot backups in the case of catastrophic failure of Standard Tier instances. In these scenarios, automated backups and the ability to restore data from RDB snapshots can provide additional protection from data loss. With RDB snapshots enabled, if needed, a recovery is made from the latest RDB snapshot.
For more information, you can refer to the documentation related to version upgrade behavior.
What is Ignite maintenance mode, and how do I switch a node into this mode? I was stuck joining a node to the cluster: it complains about cleaning up the persistent data, yet the data can be cleaned (using control.sh) only in maintenance mode.
This is a special mode, similar to running Windows in safe mode after a crash or data corruption: most of the cluster functionality is disabled and the user is asked to perform some maintenance task to resolve the issue. The most straightforward example I can think of is cleaning (removing) some corrupted files on disk, just like in your question. You can refer to the IEP-53: Maintenance Mode proposal for the details.
I don't think there is a way to enter this mode manually unless you trigger some preconfigured condition, like stopping a node in the middle of checkpointing with the WAL disabled. Once the state is fixed, maintenance mode should be resolved automatically, allowing the node to join the cluster.
Also, from my understanding, this mode applies to a particular node rather than the whole cluster. I.e. you can have a 4-node cluster with only one node in maintenance mode; in that case, you have to run the control.sh commands locally on the concrete failed node, not from another healthy node. If that's not the case, please provide more details or file a JIRA ticket, because the reported behavior looks quite broken to me.
We can persist cache data in Apache Ignite by enabling the persistenceEnabled property. Is there a similar way to store audit events as well, i.e. so that when we restart the Ignite server, all cache events are also retained? They are currently lost on a server restart.
I am open to any other, better approach to auditing via Ignite. I basically want to store all audit operations (especially INSERT and UPDATE) so that we can review (fetch) them later.
You would need to implement your own EventStorageSpi.
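As an alternative sketch only (not the answer's EventStorageSpi route), one can listen for cache events and copy them into a persistence-enabled cache so they survive restarts. The cache name "auditCache", the chosen event types, and the key/value format below are assumptions for illustration.

```java
// Hedged sketch: record put/remove events into a persistent "auditCache" (made-up name).
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;

public class AuditEventsExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Native persistence so the audit records survive a server restart.
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storage);

        // Cache events are disabled by default and must be enabled explicitly.
        cfg.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT, EventType.EVT_CACHE_OBJECT_REMOVED);

        try (Ignite ignite = Ignition.start(cfg)) {
            // A persistent cluster starts inactive (activation API varies slightly across versions).
            ignite.cluster().state(ClusterState.ACTIVE);

            IgniteCache<Long, String> audit = ignite.getOrCreateCache("auditCache");

            ignite.events().localListen((CacheEvent evt) -> {
                // Don't audit writes to the audit cache itself.
                if ("auditCache".equals(evt.cacheName()))
                    return true;

                // The timestamp is used as the key only for brevity; a real audit trail
                // would need a unique key and a richer value object.
                audit.putAsync(evt.timestamp(), evt.cacheName() + " " + evt.name() + " key=" + evt.key());

                return true; // keep listening
            }, EventType.EVT_CACHE_OBJECT_PUT, EventType.EVT_CACHE_OBJECT_REMOVED);
        }
    }
}
```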
Is it possible to persist the Ignite cache on the local file system?
I need to perform cache operations like insert, update, and delete on my lookup data.
But this has to be persisted on the local file system of the respective nodes so that the data survives a restart of the Ignite cluster.
Alternatively, I was able to persist the data in a MySQL database.
But I'm looking for a persistence solution that works independently of databases and HDFS.
Ignite since version 2.1 has its own native persistence, which moreover has advantages over integration with 3rd-party databases; a minimal configuration sketch is shown after the link below.
You can read about it here: https://apacheignite.readme.io/docs/distributed-persistent-store
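A minimal Java sketch of enabling native persistence, assuming an example storage path and a single-node setup; adjust paths and baseline topology for a real cluster.

```java
// Hedged sketch: native persistence writes cache data to the local disk of each node.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NativePersistenceExample {
    public static void main(String[] args) {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        storage.setStoragePath("/var/lib/ignite/storage"); // example path on the local disk

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            // A persistent cluster starts inactive; activate it once the nodes are up.
            ignite.cluster().state(ClusterState.ACTIVE);

            // Regular cache operations are now written through to disk and
            // survive a full cluster restart.
            ignite.getOrCreateCache("lookupData").put(1, "value");
        }
    }
}
```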
I would like to know the answers to the questions below:
1) If the Ignite server is restarted, I need to restart the client (web applications). Is there any way the client can reconnect to the server on a server restart? I know that when the server restarts it allocates a different ID, and because of this the existing connection becomes stale. Is there a way to overcome this problem and, if so, which version of Ignite supports this feature? Currently I use version 1.7.
2) Can I have a client-side cache like the one Ehcache provides? I don't want a client cache that is merely a front-end to a distributed cache. When I looked at the Near Cache API, it doesn't have cache name properties like a cache configuration does, and it acts only as a front-end to a distributed cache. Is it possible to create a client-only cache in Ignite?
3) If I have a large object to cache, I find that serialization and deserialization take a long time in Ignite and retrieving it from the distributed cache is slow. Is there any way we can speed up the retrieval of large objects from the Ignite data grid?
This topic is discussed on the Apache Ignite users mailing list: http://apache-ignite-users.70518.x6.nabble.com/Questions-on-Client-Reconnect-and-Client-Cache-td10018.html
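For question 1 specifically, a hedged sketch of the usual reconnect pattern in more recent Ignite versions (the cache name is made up): a disconnected thick client throws a CacheException wrapping IgniteClientDisconnectedException, whose reconnectFuture() can be awaited before retrying.

```java
// Hedged sketch: wait for the client to rejoin a restarted cluster, then retry.
import javax.cache.CacheException;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteClientDisconnectedException;
import org.apache.ignite.Ignition;

public class ClientReconnectExample {
    public static void main(String[] args) {
        Ignition.setClientMode(true);

        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("lookup");

        try {
            cache.put(1, "value");
        }
        catch (CacheException e) {
            if (e.getCause() instanceof IgniteClientDisconnectedException) {
                IgniteClientDisconnectedException cause =
                    (IgniteClientDisconnectedException) e.getCause();

                // Blocks until the client has reconnected to the restarted cluster.
                cause.reconnectFuture().get();

                cache.put(1, "value"); // retry the failed operation
            }
        }
    }
}
```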