Why Does a Redis Client Use Multiple Addresses in Cluster Mode? - redis

Why does a Redis client use multiple addresses in cluster mode when creating a connection? Is this to switch between addresses when one of them has failed?
Thanks.

The client uses the multiple addresses as seed nodes to discover all of the master and slave nodes available in the Redis cluster. The client never switches addresses on its own; it is the Redis cluster's responsibility to promote a slave node to master if a master fails. After that, subsequent requests can be served directly by the promoted node.
More details here: https://redis.io/topics/cluster-tutorial
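
As an illustration, here is a minimal sketch using the Jedis client for Java (the hosts and ports are placeholders for your own cluster's seed nodes). Any single reachable seed node is enough for the client to discover the full topology, but passing several makes the initial connection resilient if some of them happen to be down:

    import java.util.HashSet;
    import java.util.Set;

    import redis.clients.jedis.HostAndPort;
    import redis.clients.jedis.JedisCluster;

    public class ClusterSeedExample {
        public static void main(String[] args) {
            // Seed nodes: one reachable address is enough to discover the
            // whole cluster, but several make startup more resilient.
            Set<HostAndPort> seedNodes = new HashSet<>();
            seedNodes.add(new HostAndPort("127.0.0.1", 7000));
            seedNodes.add(new HostAndPort("127.0.0.1", 7001));
            seedNodes.add(new HostAndPort("127.0.0.1", 7002));

            try (JedisCluster cluster = new JedisCluster(seedNodes)) {
                // The client fetches the slot map once and then routes each
                // key directly to the node that owns its hash slot.
                cluster.set("greeting", "hello");
                System.out.println(cluster.get("greeting"));
            }
        }
    }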

Related

Redis sentinel implementation over the internet

I'm trying to implement Redis Sentinel in a setup where master and replica
Redis instances run in two separate environments, Primary and Backup, that
communicate over the internet. Each environment has 2 nodes, and each node
runs one pod containing the redis+sentinel processes. The following
architecture represents the same.
Let's consider a scenario: if the master Redis (Node 1) goes down, Sentinel
will invoke the failover process and promote one of the replicas to master.
Suppose the Node 3 replica becomes the master. So far everything works as
expected. Now, when Node 1 becomes available again, its Redis starts as a
master and, after the Sentinels communicate, it becomes a replica. Ideally,
this Redis should bind on 1.2.3.4:30001, but it is binding on the node's
private IP, i.e. 192.168.x.x.
My question is why this is happening. As per my understanding, Sentinel is
responsible for config rewrites and for asking the Node 1 Redis to become a
replica, so why is Sentinel using the private IP rather than the public IP?
Hopefully I have properly conveyed my problem to you. If you need any further
information, feel free to comment.
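
For what it's worth, a NAT setup like this is usually addressed with the announce directives rather than bind: each Redis and Sentinel advertises an explicitly configured address instead of the interface IP it auto-detects. A sketch, using the public address and Redis port from the question (the Sentinel port 30002 is a made-up placeholder; replica-announce-* requires Redis 5+, older versions use slave-announce-*):

    # redis.conf on Node 1: the address other nodes and Sentinels
    # should use to reach this instance
    replica-announce-ip 1.2.3.4
    replica-announce-port 30001

    # sentinel.conf on Node 1: the address other Sentinels and
    # clients should see for this Sentinel
    sentinel announce-ip 1.2.3.4
    sentinel announce-port 30002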

Redirecting redis client to slave if master is under a large transaction (redis cluster) and vice versa

I am trying to implement a 3-master, 3-slave architecture with Redis Cluster. I want to redirect my client to a slave if the master is blocked (for example, while executing a MULTI/EXEC transaction), or redirect it to the master if the slave is synchronising that transaction. Is there any way I can achieve this through Redis configuration, or do I need to implement this logic manually with the client library (redis-rb) I am using?
Thanks in advance.
As far as I know, there isn't any proxy or load balancing in Redis Cluster that you can control. In Redis Cluster, nodes don't proxy commands to the node in charge of a given key; instead, they redirect clients to the right node serving a given portion of the keyspace. So you can't control this from the configuration.
Maybe your MULTI/EXEC case can be handled by the client library, since it knows the full configuration of the master nodes.
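
I'm not aware of any client that detects a blocked master, but if the underlying goal is to serve reads from replicas, some cluster clients support that directly. The asker's library is redis-rb; purely as a sketch of the idea in Java with the Lettuce client (connection details are placeholders):

    import io.lettuce.core.ReadFrom;
    import io.lettuce.core.cluster.RedisClusterClient;
    import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;
    import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

    public class ReplicaReadExample {
        public static void main(String[] args) {
            RedisClusterClient client =
                    RedisClusterClient.create("redis://127.0.0.1:7000");
            try (StatefulRedisClusterConnection<String, String> conn =
                         client.connect()) {
                // Route reads to a replica when one is available, falling
                // back to the master otherwise; writes always hit the master.
                conn.setReadFrom(ReadFrom.REPLICA_PREFERRED);

                RedisAdvancedClusterCommands<String, String> commands = conn.sync();
                commands.set("key", "value");       // served by a master
                String value = commands.get("key"); // may be served by a replica
                System.out.println(value);
            } finally {
                client.shutdown();
            }
        }
    }

Note that replica reads are eventually consistent: a replica that is still applying a large transaction may return stale data, which is exactly the window the question is about.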

Redis - Tomcat Session Manager : Read from Slave

I am using Redis (Redis 3.1) as a session store for Tomcat (Tomcat 7). To ensure high availability, there is a Sentinel setup and two instances (master and slave) of the Redis server. The slave is configured as read-only. After running a few tests and verifying the statistics, I observed that no read requests are sent to the slave. All the read requests are processed by the master alone.
Could you please let me know how I can make the slave serve the read requests?
You could use the Redis-based Tomcat Session Manager provided by Redisson. It lets you control which type of node is used for read operations (master, slave, or both master and slave), and it works in both Sentinel and Cluster modes.
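As a sketch (the master name, address, and file path are placeholders): declare the manager in Tomcat's context.xml as <Manager className="org.redisson.tomcat.RedissonSessionManager" configPath="/path/to/redisson.yaml"/> and set readMode in the referenced Redisson config file:

    # redisson.yaml, referenced by configPath in context.xml
    sentinelServersConfig:
      masterName: "mymaster"
      sentinelAddresses:
        - "redis://127.0.0.1:26379"
      readMode: "SLAVE"    # send reads to slaves; MASTER_SLAVE uses both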

Failing over with single Replication Group on ElastiCache Redis

I'm testing out ElastiCache backed by Redis with the following specs:
Using Redis 2.8, with Multi-AZ
Single replication group
1 master node in us-east-1b, 1 slave node in us-east-1c, 1 slave node in us-east-1d
The part of the application that writes goes directly to the endpoint for the master node (primary-node.use1.cache.amazonaws.com)
The part of the application that only reads points to a custom endpoint (readonly.redis.mydomain.com) configured in HAProxy, which then points to the two read slave endpoints (readslave1.use1.cache.amazonaws.com and readslave2.use1.cache.amazonaws.com)
Now let's say the primary node (master) fails in us-east-1b.
From what I understand, if the master instance fails, I won't have to change the URL of the endpoint for writing to Redis (primary-node.use1.cache.amazonaws.com), but from there I still have the following questions:
Do I have to change the endpoint names for the read only slaves?
How long until the missing slave is added into the pool?
If there's anything else I'm missing, I'd appreciate the advice/information.
Thanks!
If you are using ElastiCache, you should make use of the "Primary Endpoint" provided by AWS.
That endpoint is actually backed by Route 53: if the primary (master) Redis goes down, then since you have Multi-AZ enabled, it will automatically fail over to one of the read replicas (slaves).
In that case, you don't need to modify your Redis endpoint.
I don't know why you have such a design; it seems you only want to write to the master but always read from the slaves.
For the HAProxy part, you should include a TCP check for ALL 3 Redis nodes, using their "Read Endpoint".
In HAProxy, you can check whether an endpoint is a SLAVE; if so, HAProxy should direct the read traffic to it.
Note that at the application layer, if your Redis driver doesn't support auto-reconnect, your script will fail to connect to the new master node.
In addition to auto-reconnect, since AWS uses Route 53 DNS for the failover, some libraries will NOT do the DNS lookup again, which means they keep resolving to the OLD IP, i.e. the old master.
Using HAProxy can solve this problem.
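
A sketch of such a health check, using a common HAProxy-with-Redis pattern (the backend name is made up; the server hostnames are taken from the question): each server is only considered up if it answers PING and reports role:slave in INFO replication, so a replica that gets promoted to master automatically drops out of the read pool.

    backend redis_readonly
        mode tcp
        option tcp-check
        tcp-check send PING\r\n
        tcp-check expect string +PONG
        tcp-check send info\ replication\r\n
        tcp-check expect string role:slave
        tcp-check send QUIT\r\n
        tcp-check expect string +OK
        server replica1 readslave1.use1.cache.amazonaws.com:6379 check inter 2s
        server replica2 readslave2.use1.cache.amazonaws.com:6379 check inter 2s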

Redis failover scenario

Currently I have a single Redis instance, and now I would like to make it more failure-proof.
Is it possible to achieve the following?
I connect to Redis with the ServiceStack library, and I want the client to switch to a failover server automatically when the main server is not available.
You should configure a second Redis instance as a slave of your master instance, either by using the SLAVEOF command or, more likely, by adding a slaveof directive to the configuration file (something like 'slaveof 127.0.0.1 6380'; look at the documentation for more info); then use Redis Sentinel to monitor the instances and promote the slave to master when the master fails.
Moreover, you either have to use a Redis client that supports Sentinel and handles the redirection when the slave is promoted to master, or use a network configuration (like a virtual IP) to make the redirection transparent to your application.
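
The asker is on ServiceStack, which has its own Sentinel support; purely as a sketch of what a Sentinel-aware client looks like, here is the Java equivalent with Jedis (the master name and Sentinel addresses are placeholders that must match your sentinel.conf):

    import java.util.HashSet;
    import java.util.Set;

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisSentinelPool;

    public class SentinelFailoverExample {
        public static void main(String[] args) {
            // Addresses of the Sentinel processes, not of Redis itself.
            Set<String> sentinels = new HashSet<>();
            sentinels.add("127.0.0.1:26379");
            sentinels.add("127.0.0.1:26380");

            // "mymaster" must match the master name monitored in sentinel.conf.
            try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels)) {
                try (Jedis jedis = pool.getResource()) {
                    // The pool always hands out a connection to the current
                    // master; after a failover it reconnects to the new one.
                    jedis.set("key", "value");
                }
            }
        }
    }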