Does Apache Ignite support WAN replication?

I've been doing some experiments with Apache Ignite and have started to look into WAN replication. By this I mean there would be two (or more) data centres, each running an Ignite cluster, with some caches that I would like to keep in sync between the data centres.
Does Apache Ignite support this? If so, how is it configured? I can't find any mention of this in the documentation.

At the moment Ignite does not support caches spanning multiple clusters (nor cache mirroring). If, however, you mean a single Ignite cluster consisting of nodes located in different data centres (across a WAN), that would be possible, though most likely inefficient, since you would have to use replicated mode.
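
For context, a cache in replicated mode (every node holds a full copy of the data, which is the main reason a WAN-stretched cluster gets expensive) is configured roughly as in this sketch; the cache name is just a placeholder:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class ReplicatedCacheSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // REPLICATED mode keeps a full copy of every entry on every node,
            // so each write would have to cross the WAN link to the remote nodes.
            CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("wanCache");
            cacheCfg.setCacheMode(CacheMode.REPLICATED);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheCfg);
            cache.put(1, "value");
        }
    }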

GridGain provides asynchronous WAN replication on top of Ignite as part of their paid solution: https://www.gridgain.com/docs/latest/administrators-guide/data-center-replication/configuring-replication

Related

What is the difference between: Redis Replicated setup, Redis Cluster setup, Redis Sentinel setup and Redis with Master with Slave only? [REDISSON]

I've read https://github.com/redisson/redisson
and found out that there are several supported setups:
Redis Replicated setup (including support of AWS ElastiCache and Azure Redis Cache)
Redis Cluster setup (including support of AWS ElastiCache Cluster and Azure Redis Cache)
Redis Sentinel setup
Redis with Master with Slave only
I am not a big expert in clusters and I don't understand the difference between these setups.
Could you briefly explain the differences?
Disclaimer: I am an AWS employee.
I do not know how Redis Replicated Setup is different from Redis in Master-Slave mode. Maybe they mean cross-region replication?
In any case, I can try and explain setups I know about:
Redis with Master with Slave only - a single-shard setup where you create a primary together with one or more secondary (slave) replicas (let's hope the PC police won't arrest me). This setup is used to improve the durability of your in-memory store. It's not advised to use your secondaries for reads, because such a setup has eventual-consistency guarantees and your replica reads may be stale (depending on the replication lag).
Redis Cluster setup - the setup supported by cloud providers such as AWS ElastiCache. In this setup your workload can be spread horizontally across multiple shards, and each shard may have its own secondary replicas. Your client library must support this setup, since it requires maintaining multiple connections to several nodes at the client level. Moreover, there are some locality rules you need to follow in order to use cluster mode efficiently:
Keys with the foo{<shard>}bar notation are routed to their shard according to what is stored inside the curly brackets.
You cannot use mset, mget and other multi-key commands across shards. You can still use these commands if their keys contain the same {shard} part (a short sketch follows these notes).
There are additional cluster-mode admin commands exposed by Redis, but they are usually hijacked and hidden from users by cloud providers, since the providers use them to manage the Redis cluster themselves.
Redis Cluster has the ability to migrate part of your workload between shards, but it is still obliged to preserve correctness with respect to the {shard} notation. Since your client library is responsible for fetching data from a specific shard, it must handle the "MOVED" response, with which a shard may redirect it to another node.
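
As a hedged illustration of the {shard} rule using Redisson (the address and key names are placeholders): both keys below carry the same {user:42} hash tag, so they hash to the same slot and can be fetched together with a multi-key read.

    import java.util.Map;

    import org.redisson.Redisson;
    import org.redisson.api.RBuckets;
    import org.redisson.api.RedissonClient;
    import org.redisson.config.Config;

    public class HashTagSketch {
        public static void main(String[] args) {
            Config config = new Config();
            config.useClusterServers().addNodeAddress("redis://127.0.0.1:7000");
            RedissonClient client = Redisson.create(config);

            // Same {user:42} hash tag -> same slot -> same shard.
            client.getBucket("{user:42}:name").set("Alice");
            client.getBucket("{user:42}:email").set("alice@example.com");

            // The multi-key read works because both keys live on one shard.
            RBuckets buckets = client.getBuckets();
            Map<String, Object> values = buckets.get("{user:42}:name", "{user:42}:email");
            System.out.println(values);

            client.shutdown();
        }
    }
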
Redis Sentinel setup - uses an additional server that provides service-discovery functionality for Redis clusters. It is not strictly required and, I believe, is less popular among users. It serves as a single source of truth regarding each node's health and state, and provides monitoring, management, and service-discovery functions for your Redis cluster. Many Redis client libraries offer the option of connecting to Redis Sentinel nodes in order to achieve automatic service discovery and a seamless failover flow. One of the reasons this setup is less popular is that cloud services such as AWS ElastiCache provide this functionality out of the box.
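
To make the four options more concrete, here is a rough sketch of how each setup maps onto a Redisson Config; all addresses and the Sentinel master name are placeholders rather than recommendations.

    import org.redisson.Redisson;
    import org.redisson.api.RedissonClient;
    import org.redisson.config.Config;

    public class RedissonSetupsSketch {

        // Single master with one or more slaves.
        static Config masterSlave() {
            Config c = new Config();
            c.useMasterSlaveServers()
             .setMasterAddress("redis://10.0.0.1:6379")
             .addSlaveAddress("redis://10.0.0.2:6379");
            return c;
        }

        // Replicated setup: same node list, but Redisson polls the nodes to find
        // out which one is currently the master (useful for AWS ElastiCache /
        // Azure Redis Cache, where failover swaps the master behind the scenes).
        static Config replicated() {
            Config c = new Config();
            c.useReplicatedServers()
             .addNodeAddress("redis://10.0.0.1:6379", "redis://10.0.0.2:6379");
            return c;
        }

        // Cluster setup: data is sharded across several masters.
        static Config cluster() {
            Config c = new Config();
            c.useClusterServers()
             .addNodeAddress("redis://10.0.0.1:7000", "redis://10.0.0.2:7000");
            return c;
        }

        // Sentinel setup: Redisson asks the sentinels where the current master is.
        static Config sentinel() {
            Config c = new Config();
            c.useSentinelServers()
             .setMasterName("mymaster")
             .addSentinelAddress("redis://10.0.0.5:26379");
            return c;
        }

        public static void main(String[] args) {
            RedissonClient client = Redisson.create(cluster());
            client.shutdown();
        }
    }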

How to run multiple Ignite clusters on the same network

I have several machines on my intranet. If I start Ignite on two of them, they automatically discover each other and become part of a single cluster. If I start Ignite on a third machine, it automatically joins the cluster.
How can I prevent this?
Basically, I want to run two Ignite clusters on a single network. I have two testing environments and I want a separate Ignite cluster for each of them.
I suppose that you're using TcpDiscoveryMulticastIpFinder in your TcpDiscoverySpi configuration.
It's possible to achieve network isolation, but you should use TcpDiscoveryVmIpFinder instead of TcpDiscoveryMulticastIpFinder. An example configuration can be found here: https://apacheignite.readme.io/docs/tcpip-discovery#section-static-ip-finder.
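
As a sketch (the addresses are placeholders): give each test environment its own static address list, and a node will only ever join the cluster whose list it was started with.

    import java.util.Arrays;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class StaticDiscoverySketch {
        public static void main(String[] args) {
            // Only the hosts listed here can form this cluster, so each test
            // environment gets its own, non-overlapping address list.
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            ipFinder.setAddresses(Arrays.asList(
                    "10.0.1.10:47500..47509",
                    "10.0.1.11:47500..47509"));

            TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
            discoverySpi.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(discoverySpi);

            Ignite ignite = Ignition.start(cfg);
        }
    }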

Ignite Client connection and Client Cache

I would like to know the answers to the questions below:
1) If the Ignite server is restarted, I need to restart the client (web applications). Is there any way the client can reconnect to the server on server restart? I know that when the server restarts it gets a different ID, and because of this the existing connection becomes stale. Is there a way to overcome this problem, and if so, which version of Ignite supports this feature? Currently I use version 1.7.
2) Can I have a client cache like the one Ehcache provides? I don't want the client cache to be a front-end to a distributed cache. When I looked at the Near Cache API, it doesn't have cache-name properties like a cache configuration, and it acts only as a front-end to a distributed cache. Is it possible to create a client-only cache in Ignite?
3) If I have a large object to cache, I find that serialization and deserialization take a long time in Ignite and retrieving it from the distributed cache is slow. Is there any way to speed up retrieval of large objects from the Ignite data grid?
This topic is discussed on Apache Ignite users mailing list: http://apache-ignite-users.70518.x6.nabble.com/Questions-on-Client-Reconnect-and-Client-Cache-td10018.html
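
On the reconnect question specifically, Ignite thick clients can reconnect automatically after the servers come back; a minimal sketch of riding out the disconnect window (the cache name "myCache" is a placeholder) might look like this:

    import javax.cache.CacheException;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.IgniteClientDisconnectedException;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ClientReconnectSketch {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setClientMode(true);

            Ignite ignite = Ignition.start(cfg);
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            try {
                cache.put(1, "value");
            } catch (CacheException e) {
                // While the servers are down, cache operations fail with
                // IgniteClientDisconnectedException as the cause. The client
                // reconnects on its own; the future completes once it has rejoined.
                if (e.getCause() instanceof IgniteClientDisconnectedException) {
                    IgniteClientDisconnectedException cause =
                            (IgniteClientDisconnectedException) e.getCause();
                    cause.reconnectFuture().get();
                    cache.put(1, "value"); // retry after reconnect
                }
            }
        }
    }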

Ignite Server mode vs Client Mode

Ignite has two modes: server mode and client mode. I am reading https://apacheignite.readme.io/docs/clients-vs-servers, but didn't get a good understanding of these two modes.
In my opinion, there are two use cases:
If Ignite is used as an embedded server in a Java application, then Ignite should be in server mode, that is, Ignite should be started with
Ignite ignite = Ignition.start(configFile)
If I have set up an Ignite cluster that runs as standalone processes, then in my Java code I should start Ignite in client mode, so that the client-mode Ignite can connect to the cluster and CRUD the cache data that resides in it?
Ignition.setClientMode(true);
Ignite ignite = Ignition.start(configFile)
Yeah, this is the correct understanding.
Ignite client mode is intended as a lightweight mode which does not store data and does not execute compute tasks. A client node is meant to communicate with the cluster without committing its own resources to storage or computation.
A client will not even start if there is no server node present in the topology.
To further add to #Makros answer, an Ignite client does store data if a near cache is enabled. This is done to increase the performance of cache retrievals.
Yeah, you are right: an Ignite client uses IgniteConfiguration.setClientMode(true), and a server uses IgniteConfiguration.setClientMode(false), which is the default value. If you set IgniteConfiguration.setClientMode(false) in your code, or forget to call setClientMode() at all, the node will work as a server.
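
As a rough sketch (the cache name "myCache" is just a placeholder), the only difference between starting a client node and a server node is the flag on the configuration:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ClientModeSketch {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setClientMode(true); // false (the default) would start a server node

            // The client joins the existing cluster but holds no cache data itself
            // (unless a near cache is configured, as mentioned above).
            Ignite ignite = Ignition.start(cfg);
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
            cache.put(1, "hello");
            System.out.println(cache.get(1));
        }
    }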

Weblogic http session failover

Currently I have the following setup:
Hardware load balancer directing traffic to two physical servers, each running two instances of WebLogic.
It works OK. I'd like to be able to shut down one of the servers without dropping active sessions. Right now, if I shut down one of the physical servers, any traffic that was going there gets bounced back to a login screen.
I'm looking for the simplest way of accomplishing this with the smallest performance hit.
Things I've considered so far:
1. See if I can somehow store the session information on the load balancer and, through some load-balancer magic, have it notice a server is dead and try another one with the same session information (not sure this is possible).
2. Configure WebLogic clustering. Not sure what the performance hit would be. I'm guessing this is what I'll end up with, but I'm still fishing for alternatives.
3. ?
What I currently have is an over-designed DR solution (which was the requirement), but I'd like to move it more in the direction of HA (for the flexibility).
edit: Also, is it worthwhile to create two clusters and replicate the sessions between them (I was thinking one cluster per site; the sites are close enough)? This would cover the event of one cluster failing.
You could try setting up JDBC session storage, pointing (of course) both instances to the same datasource without setting up a cluster, but I think the right approach would be setting up a WebLogic cluster.
A nice thing about clustering WebLogic Servers is that (from the link above, emphasis mine):
Sessions can be shared across clustered WebLogic Servers. Note that session persistence is no longer a requirement in a WebLogic Cluster. Instead, you can use in-memory replication of state. For more information, see Using WebLogic Server Clusters.
We've got a write-up of this on our blog, http://blog.c2b2.co.uk/2012/10/basic-clustering-with-weblogic-12c-and.html, which provides step-by-step instructions on setting up web session failover in a cluster.
Clusters are not heavyweight, assuming you don't store huge amounts of data in the session, as that is what gets replicated.
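
For reference, and only as a hedged sketch: the per-application choice between in-memory replication and JDBC session storage is made in weblogic.xml. The store-type values below are standard WebLogic options; the datasource name is a placeholder.

    <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
        <session-descriptor>
            <!-- In-memory session replication when the servers run in a cluster: -->
            <persistent-store-type>replicated_if_clustered</persistent-store-type>
            <!-- Or, for JDBC session storage without a cluster (both instances
                 must point at the same datasource):
            <persistent-store-type>jdbc</persistent-store-type>
            <persistent-store-pool>MySessionDataSource</persistent-store-pool>
            -->
        </session-descriptor>
    </weblogic-web-app>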