Redis data replication

I am trying to set up data replication using a Redis server and the Jedis client. What I am asking may be completely wrong; I only started exploring this today.
I have configured 2 laptops in my network.
Config on node1 is the default config.
Config on node2:
updated redis.conf file to have this line:
slaveof <node1 ip> <port on which redis server is running>
I have a stand-alone java program on both of the nodes.
Case 1:
I did jedis.set("key1", "value1"); from node1 (master). I am able to retrieve key-value from node2.
Case 2:
I did jedis.set("key2", "value2"); from node2 (slave). But I'm not able to see that replicated on the master node.
My question is: is this expected behavior? My requirement is to keep the data in sync on all the nodes.
Thanks.

That is the expected behavior. Writes to the master propagate to the slave(s); writes to a slave do not propagate back to the master. If you want to use Redis replication, you will have to write to the master and read from the slave(s). See the Redis replication documentation.
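For illustration, here is a minimal Jedis sketch of that read/write split (the IP addresses are placeholders for the two laptops, not values from the question):

    import redis.clients.jedis.Jedis;

    public class ReadWriteSplit {
        public static void main(String[] args) {
            try (Jedis master = new Jedis("192.168.1.10", 6379);    // node1 (master)
                 Jedis replica = new Jedis("192.168.1.11", 6379)) { // node2 (slave)
                // All writes go to the master...
                master.set("key1", "value1");
                // ...and reads can be served by the replica once the write
                // has been asynchronously propagated to it.
                System.out.println(replica.get("key1"));
            }
        }
    }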

Related

Redis sentinel implementation over the internet

I'm trying to implement redis sentinel with two separate
environments in which the master and replica redis will be running. The two
environments, i.e. Primary and Backup, will communicate over the internet. Each
environment will have 2 nodes, and each node will have one pod containing
the redis+sentinel processes.
Let's consider a scenario: if the master Redis (Node 1) goes down, sentinel
will invoke the failover process and promote one of the replicas to master.
Suppose the Node 3 replica becomes the master redis. So far all works
as expected. Now, when Node 1 becomes available again, its redis will start as
a master and then, after the sentinels communicate, act as a replica. Ideally,
redis should bind on 1.2.3.4:30001, but it is binding on the node's private IP,
i.e. 192.168.x.x.
My question is: why is this happening? As per my understanding, sentinel is
responsible for the config rewrites and for asking Node 1's redis to become a
replica, so why is sentinel using the private IP rather than the public IP?
Hopefully I have conveyed my problem properly. If you need any further
information, feel free to comment.

Redis connect single instance slave (slave of) to cluster or sentinel

When running a single-instance redis, I can use "slaveof" to create a read-only replica (or as many as I like) of that one redis node.
When using redis cluster, I split my data into partitions (masters) and can create a slave for each partition.
Is it possible to treat this cluster as a single instance and connect a "slaveof" slave to it that holds a replica of all the data in the cluster, not just the partition of the connected node?
If this is not possible with redis cluster, might it be a working solution when using sentinel?
Our current problem:
We are using the "slaveof" feature together with keepalived to fail over our redis instance when the master has an outage.
But we have lots of "slaveof" slaves connected to the virtual IP of the failover setup, to deliver cached data.
Now, every time the system fails over (for maintenance reasons, e.g.), all connected slaves stall for up to 30 seconds while they resync their data with the new master.
We have already played with all possible redis config parameters but can't get this sync time any shorter (e.g. by relying on the replication backlog, which isn't available on the new master after the failover).
Does anyone have any ideas?
There is a very good doc here: http://redis.io/presentation/Redis_Cluster.pdf, here: http://fr.slideshare.net/NoSQLmatters/no-sql-matters-bcn-2014 (slide #9), and, even better, here: https://www.javacodegeeks.com/2015/09/redis-clustering.html
If you want a "slave" in Redis cluster mode, you need to set up replication for every node; there is no single replica of the whole cluster.
Well, I just read this article:
https://seanmcgary.com/posts/how-to-build-a-fault-tolerant-redis-cluster-with-sentinel
The author used a single master with Redis Cluster, with 2 slaves per master instead of one, and let Redis Sentinel take care of electing a slave to master when the master goes down.
You could experiment with this setup to see whether the election of a new master happens quickly. While it's happening, clients would be served by a slave and should experience no downtime.
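On the client side, Jedis offers a JedisSentinelPool that asks the sentinels for the current master, so a failover needs no client reconfiguration. A minimal sketch, assuming hypothetical sentinel addresses and the master name mymaster:

    import java.util.HashSet;
    import java.util.Set;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisSentinelPool;

    public class SentinelClient {
        public static void main(String[] args) {
            Set<String> sentinels = new HashSet<>();
            sentinels.add("10.0.0.1:26379");
            sentinels.add("10.0.0.2:26379");
            sentinels.add("10.0.0.3:26379");
            // The pool resolves the current master through the sentinels,
            // so after an election new connections go to the new master.
            try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
                 Jedis jedis = pool.getResource()) {
                jedis.set("key", "value");
            }
        }
    }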

Redis - Promoting a slave to master manually

Suppose I have [Slave IP Address] which is the slave of [Master IP Address].
Now my master server has been shut down, and I need to set this slave to be master MANUALLY (WITHOUT using sentinel automatic failover, WITH a redis command).
Is it possible to do this without restarting the redis service (and losing all the cached data)?
Use SLAVEOF NO ONE to promote a slave to master:
http://redis.io/commands/slaveof
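From a Java client this can be issued with Jedis' slaveofNoOne() helper; a minimal sketch, assuming a placeholder address for the slave to be promoted:

    import redis.clients.jedis.Jedis;

    public class PromoteReplica {
        public static void main(String[] args) {
            try (Jedis replica = new Jedis("192.168.1.11", 6379)) { // hypothetical slave
                // Equivalent to running SLAVEOF NO ONE in redis-cli: stops
                // replication and makes this node a master, keeping the
                // dataset it already holds. No restart is required.
                System.out.println(replica.slaveofNoOne()); // prints "OK" on success
            }
        }
    }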
It depends: if you are in a cluster, you are better off using the failover. You will need to use the FORCE option in the command:
http://redis.io/commands/cluster-failover
Is it possible to do this without restarting the redis service (and losing all the cached data)?
Yes, that's possible. You can use:
SLAVEOF NO ONE (without sentinel)
However, it is recommended to use sentinel to avoid data loss:
sentinel failover master-name (with sentinel)
This will force the sentinel to switch the master.
The new master will have all the data that was synchronized before the old master was shut down.
Redis will automatically choose the slave with the most data, which reduces the amount of data lost when switching masters.
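Jedis also exposes the sentinel commands, so the sentinel-driven switch can be scripted; a minimal sketch, assuming a hypothetical sentinel address and the master name mymaster:

    import redis.clients.jedis.Jedis;

    public class ForceFailover {
        public static void main(String[] args) {
            // Connect to a sentinel (not to the redis data node itself).
            try (Jedis sentinel = new Jedis("10.0.0.1", 26379)) {
                // Equivalent to SENTINEL FAILOVER mymaster: asks this
                // sentinel to start a failover and promote a replica.
                sentinel.sentinelFailover("mymaster");
            }
        }
    }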
The two options in step 3 below have helped me recover the cluster when a master node went down, its compute was replaced, or it was in some other unrecoverable state.
1. First you need to connect to the slave node; use redis-cli. Here is a link on how to do that: How to connect to remote Redis server?
2. Once connected to the slave node, run the command cluster nodes to validate that the master node is in the fail state; also run cluster info to see the overall state of your cluster (this is always a good idea).
3. On the slave node to be promoted, run the command cluster failover. In rare cases, when there are serious issues with redis, this command can fail and you will need to use cluster failover force or cluster failover takeover. Here is more info about the implications of those options: https://redis.io/commands/cluster-failover
4. Run cluster forget $old_master_id on all your cluster nodes.
5. Add a new node with cluster meet $new_node_IP $new_node_PORT.
6. Subscribe your new node to your brand new master: log in to the new node and run cluster replicate $master_node_id.
Steps 1-3 are required for the slave-to-master promotion, and steps 4-6 are required to leave the whole cluster in a healthy master-slave equilibrium.
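For reference, Jedis exposes each of these cluster commands as a method, so the same recovery can be scripted from Java. A minimal sketch, assuming hypothetical addresses and placeholder node IDs:

    import redis.clients.jedis.Jedis;

    public class ClusterRecovery {
        public static void main(String[] args) {
            try (Jedis replica = new Jedis("10.0.0.5", 6379)) { // slave to promote
                // Steps 1-2: inspect the cluster from the slave node
                System.out.println(replica.clusterNodes());
                System.out.println(replica.clusterInfo());
                // Step 3: promote this slave (CLUSTER FAILOVER)
                replica.clusterFailover();
                // Step 4: forget the dead master; repeat against every node
                replica.clusterForget("<old_master_id>");
                // Step 5: introduce the replacement node (CLUSTER MEET)
                replica.clusterMeet("10.0.0.9", 6379);
            }
            // Step 6: on the new node, replicate the freshly promoted master
            try (Jedis newNode = new Jedis("10.0.0.9", 6379)) {
                newNode.clusterReplicate("<master_node_id>");
            }
        }
    }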
As of Redis version 5.0.0 the SLAVEOF command is regarded as deprecated.
If a Redis server is already acting as replica, the command REPLICAOF NO ONE will turn off the replication, turning the Redis server into a MASTER.

Redis - Tomcat Session Manager : Read from Slave

I am using redis (Redis 3.1) as the session store for tomcat (Tomcat 7). To ensure high availability, there is a sentinel setup and two instances (master and slave) of the redis server. The slave is configured as read-only. After running a few tests and checking the statistics, I observed that no read requests are sent to the slave; all read requests are processed by the master alone.
Could you please let me know how I can make the slave serve the read requests?
You could use the Redis-based Tomcat Session Manager provided by Redisson. It allows you to manage which type of node is used for read operations (master, slave, or both master and slave), and it works perfectly in Sentinel/Cluster modes.
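The Tomcat manager itself is wired up in context.xml, but the read routing comes from the underlying Redisson configuration. A minimal programmatic sketch of that configuration, assuming hypothetical sentinel addresses; ReadMode.SLAVE directs reads to the slave:

    import org.redisson.Redisson;
    import org.redisson.api.RedissonClient;
    import org.redisson.config.Config;
    import org.redisson.config.ReadMode;

    public class RedissonReadFromSlave {
        public static void main(String[] args) {
            Config config = new Config();
            config.useSentinelServers()
                  .setMasterName("mymaster")
                  .addSentinelAddress("redis://10.0.0.1:26379", "redis://10.0.0.2:26379")
                  // Route read operations to the slave; writes still go to the master.
                  .setReadMode(ReadMode.SLAVE);
            RedissonClient redisson = Redisson.create(config);
            // ... the session manager then serves reads from the slave
            redisson.shutdown();
        }
    }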

Failing over with single Replication Group on ElastiCache Redis

I'm testing out ElastiCache backed by Redis with the following specs:
Using Redis 2.8, with Multi-AZ
Single replication group
1 master node in us-east-1b, 1 slave node in us-east-1c, 1 slave node in us-east-1d
The writing part of the application directly uses the endpoint for the master node (primary-node.use1.cache.amazonaws.com).
The read-only part of the application points to a custom endpoint (readonly.redis.mydomain.com) configured in HAProxy, which then points to the two read slave endpoints (readslave1.use1.cache.amazonaws.com and readslave2.use1.cache.amazonaws.com).
Now let's say the primary node (master) fails in us-east-1b.
From what I understand, if the master instance fails, I won't have to change the endpoint URL for writing to Redis (primary-node.use1.cache.amazonaws.com), but from there I still have the following questions:
Do I have to change the endpoint names for the read only slaves?
How long until the missing slave is added into the pool?
If there's anything else I'm missing, I'd appreciate the advice/information.
Thanks!
If you are using ElastiCache, you should make use of the "Primary Endpoint" provided by AWS.
That endpoint is actually backed by Route53: if the primary (master) redis goes down, then since you enabled Multi-AZ it will automatically fail over to one of the read replicas (slaves).
In that case, you don't need to modify your redis endpoint.
I am not sure why you have such a design; it seems you only want to write to the master but always read from the slaves.
For the HAProxy part, you should include a TCP check for ALL 3 redis nodes, using their "Read Endpoint".
In HAProxy, you can check whether an endpoint is a SLAVE and, if it is, have HAProxy redirect the traffic to it.
Note that at the application layer, if your redis driver doesn't support auto-reconnect, your script will fail to connect to the new master node.
In addition to "auto reconnect", since AWS uses Route53 DNS to perform the failover, some libraries will NOT do the DNS lookup again, which means the client is still pointing at the OLD IP, i.e. the old master.
Using HAProxy can solve this problem.
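If the client runs on the JVM, one mitigation for the stale-DNS problem is lowering the JVM's DNS cache TTL so the new Route53 record is picked up sooner. A minimal sketch (the 30-second TTL is an arbitrary choice for illustration):

    import java.security.Security;

    public class DnsCacheTtl {
        public static void main(String[] args) {
            // Some JVMs cache successful DNS lookups for a long time (forever
            // when a security manager is installed), so a client can keep
            // resolving the old master's IP after a Route53 failover.
            // This must be set before the first lookup is performed.
            Security.setProperty("networkaddress.cache.ttl", "30");
            // ... create the Redis client after this point
        }
    }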