I would like to know the answers to the questions below:
1) If the Ignite server is restarted, I have to restart the client (web applications). Is there any way the client can reconnect to the server after a server restart? I know that when the server restarts it is assigned a different ID, so the existing connection becomes stale. Is there a way to overcome this problem, and if so, which version of Ignite supports it? I currently use version 1.7 (see the reconnect sketch below).
2) Can I have a client-side cache like the one Ehcache provides? I don't want the client cache to be a front end to a distributed cache. When I looked at the Near Cache API, it doesn't have a cache name property the way a cache configuration does, and it acts only as a front end to a distributed cache. Is it possible to create a client-only cache in Ignite?
3) When I cache a large object, I find that serialization and deserialization take a long time in Ignite, and retrieval from the distributed cache is slow. Is there any way to speed up retrieval of large objects from the Ignite data grid?
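For question 3, one technique worth trying on 1.7 is binary-object access, which lets a get() return data without deserializing the whole value. A minimal sketch, assuming a running cluster, a cache named "bigObjects", and a field "name" (both names are made up for illustration):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.binary.BinaryObject;

    public class BinaryReadSketch {
        public static void main(String[] args) {
            Ignition.setClientMode(true);
            Ignite ignite = Ignition.start();

            // Binary view of the cache: values stay in serialized (binary) form.
            IgniteCache<Integer, BinaryObject> binCache =
                ignite.cache("bigObjects").withKeepBinary();

            // Read a single field without deserializing the whole large object.
            BinaryObject value = binCache.get(1);
            String name = value.field("name");
            System.out.println(name);
        }
    }

This helps most when you only need a few fields of a large value; if the whole object is always required, the cost is dominated by marshalling and network transfer either way.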
This topic was discussed on the Apache Ignite users mailing list: http://apache-ignite-users.70518.x6.nabble.com/Questions-on-Client-Reconnect-and-Client-Cache-td10018.html
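On question 1: as far as I know, automatic client reconnect was added in Ignite 1.5, so 1.7 should already support it. A client tries to rejoin a restarted cluster on its own, and operations issued while it is disconnected throw IgniteClientDisconnectedException, whose reconnect future you can wait on. A minimal sketch (the cache name is assumed):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteClientDisconnectedException;
    import org.apache.ignite.Ignition;

    public class ReconnectSketch {
        public static void main(String[] args) {
            Ignition.setClientMode(true);
            Ignite ignite = Ignition.start();
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            try {
                cache.put(1, "value");
            }
            catch (IgniteClientDisconnectedException e) {
                // Block until the client has rejoined the restarted cluster, then retry.
                e.reconnectFuture().get();
                cache.put(1, "value");
            }
        }
    }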
Related
We can persist cache data in Apache Ignite by enabling the persistenceEnabled property. Is there a similar way to store audit events as well, i.e. when we restart the Ignite server, can all cache events be retained? They are currently lost on a server restart.
I am open to any better approach for auditing via Ignite. Basically, I want to store all audited operations (especially INSERT and UPDATE) so that we can review (fetch) them later.
You would need to implement your own EventStorageSpi.
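A minimal sketch of what that could look like, keeping events in a queue; for real auditing you would append to a file or database inside record() so the events survive restarts (all names here are illustrative, not a ready-made implementation):

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import org.apache.ignite.events.Event;
    import org.apache.ignite.lang.IgnitePredicate;
    import org.apache.ignite.spi.IgniteSpiAdapter;
    import org.apache.ignite.spi.IgniteSpiException;
    import org.apache.ignite.spi.eventstorage.EventStorageSpi;

    public class AuditEventStorageSpi extends IgniteSpiAdapter implements EventStorageSpi {
        private final ConcurrentLinkedQueue<Event> events = new ConcurrentLinkedQueue<>();

        @Override public void record(Event evt) throws IgniteSpiException {
            // For durable auditing, append the event to a database/file here instead.
            events.add(evt);
        }

        @SuppressWarnings("unchecked")
        @Override public <T extends Event> Collection<T> localEvents(IgnitePredicate<T> p) {
            Collection<T> res = new ArrayList<>();
            for (Event evt : events)
                if (p.apply((T)evt))
                    res.add((T)evt);
            return res;
        }

        @Override public void spiStart(String igniteInstanceName) throws IgniteSpiException {
            // Open the underlying audit store here.
        }

        @Override public void spiStop() throws IgniteSpiException {
            // Flush and close the audit store here.
        }
    }

You would then register it via IgniteConfiguration#setEventStorageSpi(..) and enable the cache events you care about with IgniteConfiguration#setIncludeEventTypes(..), e.g. EventType.EVT_CACHE_OBJECT_PUT.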
Folks,
Does anyone know the behavior of Ignite near caches for the two items below?
Can the same distributed cache on the Ignite grid be configured as a near cache in one Ignite client and as a regular cache in another Ignite client at the same time? Hopefully this can be done.
Do near caches work with SQL queries (we use the Spring Data abstraction), or do they work only with JCache-based key-value access?
Thanks
lmk
I think you should be able to start a near cache on only a subset of clients. Have you tried that?
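If it helps, a minimal sketch of what that could look like on the client that wants the near cache; other clients would just call ignite.cache("myCache") and get plain remote access (the cache name is made up):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.NearCacheConfiguration;

    public class NearCacheClientSketch {
        public static void main(String[] args) {
            Ignition.setClientMode(true);
            Ignite ignite = Ignition.start();

            // This client fronts the existing distributed cache with a local near cache.
            IgniteCache<Integer, String> withNear =
                ignite.getOrCreateNearCache("myCache", new NearCacheConfiguration<>());

            // A client that skips getOrCreateNearCache(..) gets plain remote access:
            // IgniteCache<Integer, String> plain = ignite.cache("myCache");
            System.out.println(withNear.get(1));
        }
    }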
No, the SQL map phase will not happen on clients; it will happen on the primary server node(s).
I'm totally new to Ignite. Is there any configuration or strategy to export all the data present in cache memory to the local hard disk?
Basically, what I'm hoping for is some kind of logger/snapshot that shows the change in data whenever a SQL update operation is performed on the data in the caches.
If someone could suggest a solution, I'd appreciate it a lot.
You can create and configure a persistence store for any cache [1]. If the cluster is restarted, all the data will still be there and can be reloaded into memory using the IgniteCache#loadCache(..) method (see the sketch after the links below). Out of the box, Ignite provides integrations with RDBMSs [2] and Cassandra [3].
Additionally, in one of the future versions (most likely the next one, 2.1), Ignite will provide local disk persistence storage, which will allow running with a cold cache, i.e. without explicit reloading after a cluster restart. I would recommend monitoring the dev and user Apache Ignite mailing lists for details.
[1] https://apacheignite.readme.io/docs/persistent-store
[2] https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration
[3] https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra
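A rough sketch of the shape of [1], using an in-memory map where a real store would talk to a database (all names are illustrative):

    import java.util.concurrent.ConcurrentHashMap;
    import javax.cache.Cache;
    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.store.CacheStoreAdapter;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.lang.IgniteBiInClosure;

    public class PersistenceSketch {
        // Stands in for a real RDBMS/Cassandra-backed store.
        public static class MapBackedStore extends CacheStoreAdapter<Integer, String> {
            private static final ConcurrentHashMap<Integer, String> DB = new ConcurrentHashMap<>();

            @Override public String load(Integer key) { return DB.get(key); }

            @Override public void write(Cache.Entry<? extends Integer, ? extends String> e) {
                DB.put(e.getKey(), e.getValue());
            }

            @Override public void delete(Object key) { DB.remove(key); }

            // Called by IgniteCache#loadCache(..): pushes all stored entries into memory.
            @Override public void loadCache(IgniteBiInClosure<Integer, String> clo, Object... args) {
                DB.forEach(clo::apply);
            }
        }

        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("persistedCache");
            ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MapBackedStore.class));
            ccfg.setReadThrough(true);
            ccfg.setWriteThrough(true);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

            // After a cluster restart, warm the cache from the store:
            cache.loadCache(null);
        }
    }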
Ignite has two modes: server mode and client mode. I am reading https://apacheignite.readme.io/docs/clients-vs-servers, but I didn't get a good understanding of the two modes.
In my opinion, there are two use cases:
If Ignite is used as an embedded server in a Java application, then Ignite should be in server mode, that is, it should be started with:
Ignite ignite = Ignition.start(configFile);
If I have set up an Ignite cluster whose nodes run as standalone processes, then in my Java code I should start Ignite in client mode, so that the client-mode Ignite can connect to the cluster and perform CRUD operations on the cache data that resides in it:
Ignition.setClientMode(true);
Ignite ignite = Ignition.start(configFile);
Yes, this is the correct understanding.
Ignite client mode is intended as a lightweight mode that does not store data and does not execute compute tasks. A client node communicates with the cluster but should not spend its own resources on those tasks.
A client will not even start unless a server node is present in the topology.
To add to #Makros' answer: an Ignite client does store data if a near cache is enabled. This is done to improve the performance of cache retrievals.
Yes, you are right: an Ignite client uses IgniteConfiguration.setClientMode(true), while a server uses IgniteConfiguration.setClientMode(false), which is the default value. If you set IgniteConfiguration.setClientMode(false) in your code, or forget to call setClientMode() at all, the node will work as a server.
Currently I have the following setup:
A hardware load balancer directs traffic to two physical servers, each running 2 instances of WebLogic.
This works OK. I'd like to be able to shut down one of the servers without dropping active sessions. Right now, if I shut down one of the physical servers, any traffic that was going there gets bounced back to a login screen.
I'm looking for the simplest way of accomplishing this with the smallest performance hit.
Things I've considered so far:
1. See if I can somehow store the session information on the load balancer and, through some load-balancer magic, have it notice that a server is dead and try another one with the same session information (not sure this is possible).
2. Configure WebLogic clustering. Not sure what the performance hit would be. I'm guessing this is what I'll end up with, but I'm still fishing for alternatives.
3. ?
What I currently have is an over-engineered DR solution (which was the requirement), but I'd like to move it more in the direction of HA (for the flexibility).
Edit: Also, is it worthwhile to create 2 clusters and replicate the sessions between them (I was thinking one cluster per site; the sites are close enough)? This would cover the event of one cluster failing.
You could try setting up JDBC session storage, pointing (of course) both instances at the same data source without setting up a cluster, but I think the right approach would be to set up a WebLogic cluster (see the weblogic.xml sketch below).
A nice thing about clustering WebLogic Servers is that (from the link above, emphasis mine):
Sessions can be shared across clustered WebLogic Servers. Note that session persistence is no longer a requirement in a WebLogic Cluster. Instead, you can use in-memory replication of state. For more information, see Using WebLogic Server Clusters.
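For reference, the session persistence mode is chosen per web application in weblogic.xml; a sketch of the two options discussed here, with a made-up data source name (exact elements can vary by WebLogic version):

    <!-- weblogic.xml: in-memory session replication when running in a cluster -->
    <session-descriptor>
        <persistent-store-type>replicated_if_clustered</persistent-store-type>
    </session-descriptor>

    <!-- weblogic.xml: JDBC-backed sessions, no cluster required -->
    <session-descriptor>
        <persistent-store-type>jdbc</persistent-store-type>
        <persistent-store-pool>MySessionDataSource</persistent-store-pool>
    </session-descriptor>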
We've got a write-up of this on our blog, http://blog.c2b2.co.uk/2012/10/basic-clustering-with-weblogic-12c-and.html, which provides step-by-step instructions for setting up web session failover in a cluster.
Clusters are not heavyweight, assuming you don't store huge amounts of data in the session, since it will be replicated.