Is there any mechanism to replicate the whole Redis cluster to another?
Example: I have Redis Cluster RC1 which has n master nodes and m slaves. I also have Redis Cluster RC2 with a similar configuration to RC1. Now I want to replicate data (whole/delta) from RC1 to RC2.
I have a Drupal cluster of 3 servers that use HAProxy (TCP) to handle communication with a Redis cluster of 3 nodes (used for caching), on which the Sentinel service is active as well.
The Redis cluster has 1 main (master) node and 2 secondary (slave) nodes in replication mode.
I recently noticed that the avg_ttl is zero on one of the secondary (slave) nodes.
This seems odd: the data is synced between these nodes, so they should hold the same keys.
I checked the configuration, and the redis.conf files are almost identical across the nodes.
Any idea what this could mean?
Thanks!
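For anyone comparing nodes, a quick way to read avg_ttl is the Keyspace section of INFO, run against each node individually (the numbers below are purely illustrative):

```
$ redis-cli -h <node-ip> -p 6379 INFO keyspace
# Keyspace
db0:keys=12345,expires=678,avg_ttl=3600000
```

Note that avg_ttl is a sampled estimate maintained per node, so some variance between master and replicas is normal; one common explanation for a constant zero is that the active expiration cycle that samples TTLs runs on the master, so replicas may simply never update the statistic.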
Consider the famous guestbook example:
https://github.com/kubernetes/examples/tree/master/guestbook
It will create the Redis master/slave Deployments and Services. It also has a subfolder named redis-slave, which contains a Dockerfile and a run.sh used to build a Docker image that runs the Redis replication command.
The question is: having deployed the Redis master and slave to the k8s cluster, how do you then run that command? Deploy a new container? That would have no relation to the slave container already deployed.
Is there a better way to do Redis replication between a master and slave running in a k8s cluster?
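For context, the replication command in question amounts to starting Redis as a replica of the master Service; a paraphrase of what the guestbook's run.sh does (not the verbatim file):

```bash
#!/bin/bash
# Start Redis as a replica of the redis-master Service
# (the Service DNS name comes from the guestbook manifests).
redis-server --slaveof redis-master 6379
```

Since this runs as the container's entrypoint, no extra command has to be executed after deployment; each slave pod configures itself at startup.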
One option you have is to use Helm to deploy the redis-ha app.
Info about helm: https://github.com/kubernetes/helm
The redis-ha helm app page: https://hub.kubeapps.com/charts/stable/redis-ha
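A minimal install sketch, assuming Helm is set up and the stable chart repository is configured (release name is illustrative, and chart names/repos may have moved since):

```
helm install my-redis stable/redis-ha
```

The chart is meant to deploy the master, replicas, and Sentinel for you, so you don't have to wire up the replication command yourself.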
Redis Sentinel is often suggested for simple master-slave replication and high availability.
Unfortunately, Sentinel does not fit the Kubernetes world well, and it also requires a Sentinel-aware client to talk to Redis.
You could try a Redis operator, which can be considered a Kubernetes-native replacement for Sentinel and lets you create a Redis deployment that survives most kinds of failures without human intervention.
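As a sketch of what that looks like in practice: the answer doesn't name a specific operator, so assuming the spotahome redis-operator as one example, you declare the desired topology in a custom resource and the operator maintains it.

```yaml
apiVersion: databases.spotahome.com/v1
kind: RedisFailover
metadata:
  name: redisfailover
spec:
  sentinel:
    replicas: 3   # operator-managed Sentinel quorum
  redis:
    replicas: 3   # one master plus two replicas, re-elected on failure
```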
Here is how you can set up a Redis HA master-slave cluster in Kubernetes/OpenShift OKD.
Basically, you use a ConfigMap and a StatefulSet in combination with VolumeClaims.
https://reachmnadeem.wordpress.com/2020/10/01/redis-ha-master-slave-cluster-up-and-running-in-openshift-okd-kubernetes/
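A minimal sketch of the StatefulSet part, assuming a ConfigMap named redis-config holds a redis.conf with the replication settings (names, image, and sizes here are illustrative, not the blog post's exact manifests):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:5
        command: ["redis-server", "/etc/redis/redis.conf"]
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: config
          mountPath: /etc/redis
        - name: data
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: redis-config   # assumed ConfigMap carrying redis.conf
  volumeClaimTemplates:        # one persistent volume per pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```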
I'm trying to set up a Redis cluster (3 masters, 3 slaves) using JedisCluster. How should I set up the configuration files for the nodes of the cluster?
Are JedisCluster's methods sufficient to set up the cluster?
JedisCluster is for communicating with a Redis cluster, not for creating one.
First, you have to set up the Redis cluster on your own. Several resources (tutorials, blog posts, etc.) are available online; Google for those. To begin, you can take a look at the Redis cluster tutorial.
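Following that tutorial, each node gets a minimal config along these lines (ports and paths are illustrative):

```
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```

After starting all six nodes, they are joined into a cluster with something like `redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1` (Redis 5+; older releases used the redis-trib.rb script instead).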
After setting it up, you can communicate with that Redis cluster via JedisCluster (as well as in many other ways).
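For example, a minimal Jedis client sketch (host/port values assume the local six-node cluster above):

```java
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class JedisClusterExample {
    public static void main(String[] args) throws Exception {
        // Seed with any subset of nodes; Jedis discovers the rest
        // of the cluster topology automatically.
        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("127.0.0.1", 7000));
        nodes.add(new HostAndPort("127.0.0.1", 7001));
        nodes.add(new HostAndPort("127.0.0.1", 7002));

        try (JedisCluster cluster = new JedisCluster(nodes)) {
            cluster.set("greeting", "hello");            // routed by key slot
            System.out.println(cluster.get("greeting")); // follows redirects
        }
    }
}
```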
We currently deploy Redis clusters in IDC1 and IDC2. The servers in IDC1 only access the Redis cluster in IDC1, and likewise for IDC2. We would like IDC1 and IDC2 to replicate to each other with eventual consistency.
We know about replication between individual Redis instances, and we currently use twemproxy for a single cluster. How can we replicate a whole Redis cluster across different IDCs?
I am planning on adding Redis to our application as a session and cache store. I have been looking at how to make Redis highly available on an on-premise hosted solution.
The standard approach appears to be to set up Redis as a three-node replica set and use Sentinel for monitoring and automatic failover.
Redis 2.8 introduces Redis Cluster. Does that mean it brings in automatic failover etc., and we no longer need to use Sentinel?
No, Cluster and Failover are different scenarios. Also Cluster is in 3.0, not 2.8.
The standard (and minimum) setup for HA is a master and one slave (aka "a pod"), with a separate set of three nodes which run Sentinel and monitor the pod.
This is to ensure failover of the server. However, either your client library has to support using Sentinel to discover the master and reconnect on failure, or you implement that in your own code, or you set up a TCP load balancer plus a Sentinel-monitoring daemon that updates the load balancer's configuration when a failover occurs, at which point the client code doesn't know or care about Sentinel.
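For reference, a Sentinel node's monitoring config is just a few lines (addresses and timeouts here are illustrative; the directives themselves are standard sentinel.conf options):

```
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

The trailing 2 is the quorum: how many Sentinels must agree the master is down before a failover starts.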
Cluster isn't there to provide HA; it is there for server-side sharding of data. For Cluster you're looking at six to seven nodes minimum (three masters and three slaves, plus ideally a spare), as well as Cluster support in the client, and restrictions on commands and Lua scripts that need to access multiple keys.
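To illustrate the multi-key restriction (an illustrative redis-cli session; key names are made up): keys in one command must hash to the same slot, which is usually arranged with hash tags.

```
127.0.0.1:7000> MGET user:1 user:2
(error) CROSSSLOT Keys in request don't hash to the same slot
127.0.0.1:7000> MGET {user}:1 {user}:2
1) "alice"
2) "bob"
```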