Apache Ignite cache persistence on local file system

Is it possible to persist the Ignite cache on the local file system?
I need to perform cache operations like insert, update, and delete on my lookup data.
But this has to be persisted on the local file system of the respective nodes so that the data survives a restart of the Ignite cluster.
Alternatively, I was able to persist the data in a MySQL database.
But I'm looking for a persistence solution that works independently of databases and HDFS.

Ignite, since version 2.1, has its own Native Persistence. Moreover, it has advantages over integrating with 3rd-party databases.
You can read about it here: https://apacheignite.readme.io/docs/distributed-persistent-store
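A minimal sketch of enabling Native Persistence in Java, assuming the DataStorageConfiguration API introduced in Ignite 2.3 (on 2.1/2.2 the equivalent is PersistentStoreConfiguration); the cache name and key/value types are placeholders:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NativePersistenceExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Turn on persistence for the default data region; data is written to the
        // node's local work directory, so it survives a cluster restart.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With persistence enabled the cluster starts inactive and must be
            // activated (the exact activation call varies slightly across 2.x versions).
            ignite.cluster().active(true);

            IgniteCache<Long, String> lookup = ignite.getOrCreateCache("lookupData");
            lookup.put(1L, "value"); // insert/update/delete work as usual and are persisted
        }
    }
}
```

With this enabled, cache operations go to the local disk of each node, so no external database or HDFS is required.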

Related

Is it possible to store cache events in Apache Ignite?

We can persist cache data in Apache Ignite by enabling the persistenceEnabled property. Is there a similar way to store audit events as well? That is, when we restart the Ignite server, all cache events should also be retained, as they are currently lost on a server restart.
I am open to any other, better approach for auditing via Ignite. I basically want to store all audit operations (especially INSERT and UPDATE) so that we can review (fetch) them later.
You would need to implement your own EventStorageSpi.
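A rough sketch of what such a custom EventStorageSpi could look like; the class name, the in-memory queue, and the audit-log comments are illustrative, and a real implementation would append events to durable storage in record():

```java
import java.util.Collection;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.stream.Collectors;

import org.apache.ignite.events.Event;
import org.apache.ignite.lang.IgnitePredicate;
import org.apache.ignite.spi.IgniteSpiAdapter;
import org.apache.ignite.spi.IgniteSpiException;
import org.apache.ignite.spi.eventstorage.EventStorageSpi;

// Illustrative custom SPI: every recorded event is kept (and would be appended
// to a durable audit log) so it can be queried later via localEvents().
public class AuditEventStorageSpi extends IgniteSpiAdapter implements EventStorageSpi {
    private final Queue<Event> events = new ConcurrentLinkedQueue<>();

    @Override public void record(Event evt) throws IgniteSpiException {
        events.add(evt);
        // A real implementation would also write evt to a file, database or persistent cache here.
    }

    @SuppressWarnings("unchecked")
    @Override public <T extends Event> Collection<T> localEvents(IgnitePredicate<T> p) {
        return events.stream().map(e -> (T) e).filter(p::apply).collect(Collectors.toList());
    }

    @Override public void spiStart(String igniteInstanceName) throws IgniteSpiException {
        // Open/restore the audit log here.
    }

    @Override public void spiStop() throws IgniteSpiException {
        // Flush and close the audit log here.
    }
}
```

It would then be plugged in on the server side with IgniteConfiguration#setEventStorageSpi(new AuditEventStorageSpi()).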

Ignite data backup in hard disk

So I'm totally new to Ignite here. Is there any configuration or strategy to export all data present in the cache memory to the local hard disk in Ignite?
Basically, what I'm hoping for is some kind of logger/snapshot that shows the change in data when any kind of SQL update operation is performed on the data present in the caches.
If someone could suggest a solution, I'd appreciate it a lot.
You can create and configure a persistence store for any cache [1]. If the cluster is restarted, all the data will still be there and can be reloaded into memory using the IgniteCache#loadCache(..) method. Out of the box, Ignite provides integration with RDBMS [2] and Cassandra [3].
Additionally, in one of the future versions (most likely the next one, 2.1) Ignite will provide a local disk persistence storage that will allow running with a cold cache, i.e. without explicit reloading after a cluster restart. I would recommend monitoring the dev and user Apache Ignite mailing lists for more details.
[1] https://apacheignite.readme.io/docs/persistent-store
[2] https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration
[3] https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra
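For reference, a compact sketch of the persistence-store approach from [1]. The in-memory "store" below is a made-up stand-in just to keep the example self-contained; a real store (JDBC from [2], Cassandra from [3], or a file-backed one) plugs in the same way:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgniteBiInClosure;

public class CacheStoreExample {
    // Stand-in for durable storage (file, RDBMS, Cassandra, ...).
    public static class DummyStore extends CacheStoreAdapter<Long, String> {
        private static final ConcurrentMap<Long, String> BACKEND = new ConcurrentHashMap<>();

        @Override public String load(Long key) throws CacheLoaderException {
            return BACKEND.get(key);
        }

        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) throws CacheWriterException {
            BACKEND.put(e.getKey(), e.getValue());
        }

        @Override public void delete(Object key) throws CacheWriterException {
            BACKEND.remove(key);
        }

        @Override public void loadCache(IgniteBiInClosure<Long, String> clo, Object... args) {
            BACKEND.forEach(clo::apply); // push every stored entry back into the cache
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("backedCache");
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DummyStore.class));
        ccfg.setReadThrough(true);   // load missing entries from the store
        ccfg.setWriteThrough(true);  // propagate updates to the store

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(ccfg);
            cache.put(1L, "a");      // written through to the store
            cache.loadCache(null);   // reload everything from the store, e.g. after a restart
        }
    }
}
```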

Ignite Client connection and Client Cache

I would like to know the answers to the questions below:
1) If the Ignite server is restarted, I need to restart the client (web applications). Is there any way the client can reconnect to the server on a server restart? I know that when the server restarts it allocates a different ID, and because of this the current existing connection becomes stale. Is there a way to overcome this problem, and if so, which version of Ignite supports this feature? I currently use version 1.7.
2) Can I have a client cache like the one Ehcache provides? I don't want the client cache as a front-end to a distributed cache. When I looked at the Near Cache API, it doesn't have cache name properties like a cache configuration does, and it acts only as a front-end to a distributed cache. Is it possible to create a client-only cache in Ignite?
3) If I have a large object to cache, I find that serialization and deserialization take a long time in Ignite, and retrieving it from the distributed cache is slow. Is there any way we can speed up retrieval of large objects from Ignite Data Grid?
This topic is discussed on Apache Ignite users mailing list: http://apache-ignite-users.70518.x6.nabble.com/Questions-on-Client-Reconnect-and-Client-Cache-td10018.html
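On the reconnect part (question 1): Ignite clients can rejoin a restarted cluster automatically, and the client can listen for the reconnect event to re-initialize its state. A hedged sketch, assuming the event type needs to be enabled in the configuration and with an illustrative listener body:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class ClientReconnectExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);
        // Assumption: the reconnect event type must be enabled to be delivered to listeners.
        cfg.setIncludeEventTypes(EventType.EVT_CLIENT_NODE_RECONNECTED);

        Ignite client = Ignition.start(cfg);

        client.events().localListen((IgnitePredicate<Event>) evt -> {
            // Re-create caches, continuous queries, etc. after the client rejoins the cluster.
            System.out.println("Client reconnected: " + evt.name());
            return true; // keep listening
        }, EventType.EVT_CLIENT_NODE_RECONNECTED);
    }
}
```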

Database vs cache management in deepstream

I was wondering how deepstream decides whether to store a piece of info in the cache vs the database if both of them are configured. Can this be decided by the clients?
Also, when using Redis, will it provide both cache and database functionality? I would be using Amazon ElastiCache with a Redis backend for this.
It stores it in both: first in the cache in a blocking way, and outside the critical path in the database in a non-blocking way.
Here's an animation illustrating this.
You can also find more information here: https://deepstream.io/tutorials/core/storing-data/

Fast Multitenant Caching - Local Caching in Addition to Distributed Caching. Or are they the same thing?

I am working on converting an existing single-tenant application to a multitenant one. Distributed caching is new to me. We have an existing primitive local cache system using the .NET cache that generates cloned objects from existing cached ones. I have been looking at using Redis.
Can Redis cache and invalidate locally in addition to over the wire, thus replacing all of the benefits of the primitive local cache? Or would a tiered approach be more appropriate, falling back to the Redis distributed cache when the local one doesn't have the objects we need? I believe the latter would require expiration notifications to be sent to the local caches when data is updated; otherwise servers may end up with out-of-date, inconsistent data.
It seems like a set of local caches with expiration notifications would also qualify as a distributed cache, so I am a bit confused about how Redis might be configured and whether it would be distributed across the servers serving the requests or live in its own cluster.
When I say local, I mean not having to go over the wire for data.
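The question is about .NET, but the tiered pattern it describes is language-agnostic: a per-server in-process cache in front of Redis, with invalidation messages published over Redis pub/sub so other servers drop their stale local copies. A rough sketch in Java with Jedis, where the channel name, key layout, and connection handling are all made up (the same idea maps to StackExchange.Redis on .NET):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class TieredCache {
    private final ConcurrentMap<String, String> local = new ConcurrentHashMap<>();
    private final Jedis redis = new Jedis("localhost", 6379);

    public String get(String key) {
        // Tier 1: local map (no network hop). Tier 2: Redis over the wire.
        return local.computeIfAbsent(key, redis::get);
    }

    public void put(String key, String value) {
        redis.set(key, value);
        local.put(key, value);
        // Tell the other servers to drop their stale local copy
        // (the publisher also evicts its own copy, which is harmless here).
        redis.publish("cache-invalidations", key);
    }

    // Each server runs a subscriber on its own connection/thread that evicts locally.
    public void listenForInvalidations() {
        new Thread(() -> new Jedis("localhost", 6379).subscribe(new JedisPubSub() {
            @Override public void onMessage(String channel, String key) {
                local.remove(key);
            }
        }, "cache-invalidations")).start();
    }
}
```

In this arrangement Redis itself can live in its own cluster (e.g. ElastiCache); the local tier is purely in-process and only exists to avoid the network hop for hot keys.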