I have a Redis cluster deployed on Kubernetes: one master node with two slave nodes.
Because of a failover or something else, the master node has changed.
I know that apps can connect to Sentinel and discover the master node, but that is not my point.
I want to know if it's possible to change (choose) the master node by force using k8s?
No, it's not possible through k8s alone.
You have to call the Sentinel service first, which will return the master and slave service IPs.
However, if you set up a master-slave cluster and pin the master behind a Service of a fixed type, it will work.
https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
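For reference, here is a minimal sketch of driving Sentinel from redis-cli; the service host and the master name "mymaster" are illustrative placeholders:

# Ask Sentinel which node is currently the master (default Sentinel port is 26379):
redis-cli -h <sentinel-service> -p 26379 SENTINEL get-master-addr-by-name mymaster

# Sentinel can also be told to force a failover, promoting a replica to master:
redis-cli -h <sentinel-service> -p 26379 SENTINEL failover mymaster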
There are two things in Redis:
one with Sentinel and the other with sharding.
Choose between Redis™ Helm Chart and Redis™ Cluster Helm Chart

You can choose any of the two Redis™ Helm charts for deploying a Redis™ cluster. While the Redis™ Helm Chart will deploy a master-slave cluster using Redis™ Sentinel, the Redis™ Cluster Helm Chart will deploy a Redis™ Cluster with sharding. The main features of each chart are the following:
https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster#introduction, but each has limitations and the right choice depends on the use case.
By default only one service is exposed. You connect your client to that exposed service, regardless of whether you need to read or write. When a write operation arrives at a replica, the replica redirects the client to the proper master node.
https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster#cluster-topology
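A quick way to see this redirection in action, sketched with an illustrative service name; the output shown in the comments is made up for illustration:

# Connect with -c so redis-cli follows MOVED redirections automatically:
redis-cli -c -h <exposed-service> -p 6379
# Inside the session, a write whose hash slot lives on another node is redirected:
#   SET mykey hello
#   -> Redirected to slot [...] located at <master-ip>:6379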
Related
I found that Sentinel is mainly used for promoting a slave to master automatically when the master fails.
I also found that Redis 4.0.11's cluster mode seemingly has this capability built in as well.
So when I use Redis 4.0.11 in cluster mode, do I need Sentinel?
No, you don't need Sentinel in cluster mode.
When a master is down, the cluster will promote one of its slaves to be the new master automatically.
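You can observe this promotion from any node; the host names below are placeholders:

# List cluster nodes and their roles; after a master fails, one of its
# former slaves will be reported as "master" here, with no Sentinel involved:
redis-cli -h <any-cluster-node> -p 6379 CLUSTER NODES

# A manual, coordinated promotion can also be requested on a slave:
redis-cli -h <slave-node> -p 6379 CLUSTER FAILOVER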
Consider this famous guestbook example:
https://github.com/kubernetes/examples/tree/master/guestbook
It will create the Redis master/slave deployments and services. It also has a subfolder named redis-slave, which is used to build a Docker image that runs the Redis replication command.
Dockerfile
run.sh
The question is: once the Redis master and slave are deployed to the k8s cluster, how do I run that command? Deploy a new container? That would have no relation to the slave container already deployed.
Is there a better way to do Redis replication between a master and slaves running in a k8s cluster?
One option you have is to use helm to deploy the redis-ha app.
Info about helm: https://github.com/kubernetes/helm
The redis-ha helm app page: https://hub.kubeapps.com/charts/stable/redis-ha
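For example, a minimal install might look like this (Helm 2 syntax, matching the era of the linked docs; the release name is illustrative):

# Refresh the chart index, then install the redis-ha chart from the stable repo:
helm repo update
helm install --name my-redis stable/redis-ha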
Redis Sentinel is often suggested for simple master-slave replication and high availability.
Unfortunately, Sentinel does not fit the Kubernetes world well, and it also requires a Sentinel-aware client to talk to Redis.
You could try a Redis operator, which can be considered a Kubernetes-native replacement for Sentinel and lets you create a Redis deployment that survives most kinds of failures without human intervention.
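As a sketch, assuming the spotahome redis-operator (one popular choice; the CRD kind, API group, and field names below are taken from its docs and may differ in other operators or versions):

apiVersion: databases.spotahome.com/v1
kind: RedisFailover
metadata:
  name: redisfailover
spec:
  sentinel:
    replicas: 3   # operator-managed sentinels replace hand-run ones
  redis:
    replicas: 3   # one master plus two slaves; failover handled by the operator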
Here is how you can set up a Redis HA master-slave cluster in Kubernetes/OpenShift OKD.
Basically, you have to use a ConfigMap and a StatefulSet in combination with VolumeClaims; a minimal sketch follows the link below.
https://reachmnadeem.wordpress.com/2020/10/01/redis-ha-master-slave-cluster-up-and-running-in-openshift-okd-kubernetes/
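This is only a skeleton of that combination; the image, names, and sizes are illustrative, and the linked article adds the replication and Sentinel wiring on top of something like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis.conf: |
    appendonly yes
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:5
        command: ["redis-server", "/conf/redis.conf"]
        volumeMounts:
        - name: conf
          mountPath: /conf
        - name: data
          mountPath: /data
      volumes:
      - name: conf
        configMap:
          name: redis-config
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi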
I am trying to set up a Redis cluster on Kubernetes. One of my requirements is that my Redis cluster should be resilient in case of a Kubernetes cluster restart (due to issues like power failure).
I have tried Kubernetes statefulset and deployment.
In the case of a StatefulSet, on reboot a new set of IP addresses is assigned to the Pods, and since Redis Cluster works on IP addresses, the instances cannot reach each other and re-form the cluster.
In the case of services with static IPs in front of individual Redis instance deployments, Redis again stores the Pod IPs even though I created the cluster using the static service IPs, so on reboot the instances still cannot reach each other and re-form the cluster.
My redis-cluster statefulset config
My redis-cluster deployment config
Redis 4.0.0 solved this problem by adding support for announcing a cluster node's IP and port.
Set cluster-announce-ip to the static IP of the Service in front of each Redis instance's Kubernetes deployment, as in the sketch after the link below.
Link to setup instructions: https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md
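A minimal redis.conf fragment for this, where <service-ip> stands for the static ClusterIP of the Service in front of this instance:

cluster-enabled yes
# Advertise the stable Service address instead of the ephemeral Pod IP:
cluster-announce-ip <service-ip>
cluster-announce-port 6379
# The cluster bus port is conventionally the data port + 10000:
cluster-announce-bus-port 16379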
Are you able to use DNS names instead of IP addresses? I think that is the preferred way to route traffic to individual nodes in a StatefulSet:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
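Concretely, with a headless Service named redis governing the StatefulSet in the default namespace, each Pod gets a stable DNS name (names here are illustrative):

# Stable per-pod name: <pod-name>.<service-name>.<namespace>.svc.cluster.local
redis-cli -h redis-0.redis.default.svc.cluster.local -p 6379 ping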
When I provision a default Redis cluster on Google Compute Engine, there is one master and two read-only slaves, and Redis Sentinel is running on each machine. Given this cluster, I'd now like to use it in my ServiceStack service, but the Sentinel setting has me stumped. Typically I do something along the lines of:
container.Register<IRedisClientsManager>(c =>
    new RedisManagerPool(container.Resolve<IAppSettings>().GetString("Redis:Master")));
var cacheClient = container.Resolve<IRedisClientsManager>().GetCacheClient();
container.Register(cacheClient);
So a couple of things are incomplete with this setup, how do I specify the master and 2 read-only slaves, and configure Sentinel?
The RedisSentinel support in ServiceStack.Redis is available in the RedisSentinel class but as it's still being tested, it's not yet announced. You can find some info on how to use and configure a RedisSentinel in this previous StackOverflow Answer.
Configuring a RedisSentinel
When using Redis Sentinel, it's the external sentinel process that manages the individual master/slave connections, so you just need to configure the sentinel hosts and ignore the individual master/slave connections.
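A minimal sketch using the RedisSentinel class; the sentinel host names and the "mymaster" master name are illustrative:

var sentinel = new RedisSentinel(
    new[] { "sentinel1:26379", "sentinel2:26379", "sentinel3:26379" },
    masterName: "mymaster");

// Start() connects to the sentinels, resolves the current master/slaves and
// returns an IRedisClientsManager that follows failovers automatically:
container.Register<IRedisClientsManager>(c => sentinel.Start());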
Configuring a RedisClientManager
Alternatively, if you're using a Redis Client Manager you would do the opposite, i.e. ignore the sentinel hosts and configure the Redis Client Manager with the master and slave hosts. Only the PooledRedisClientManager supports configuring both read-write/master and read-only/slave hosts, e.g.:
// Example host lists; replace with your actual master and slave addresses:
var redisReadWriteHosts = new[] { "redis-master:6379" };
var redisReadOnlyHosts = new[] { "redis-slave-1:6379", "redis-slave-2:6379" };

container.Register<IRedisClientsManager>(c =>
    new PooledRedisClientManager(redisReadWriteHosts, redisReadOnlyHosts) {
        ConnectTimeout = 100,
        //...
    });
I am planning on adding Redis to our application as a session and cache store. I have been looking at how to make Redis highly available on an on-premise hosted solution.
The standard approach appears to be to set up Redis as a three-node replica set and use Sentinel for monitoring and automatic failover.
Redis 2.8 introduces Redis cluster. Does that mean it brings in automatic failover etc and we no longer need to use Sentinel?
No, Cluster and failover are different scenarios. Also, Cluster is in 3.0, not 2.8.
The standard (and minimum) setup for HA is a master and one slave (aka "a pod"), with a separate set of three nodes which run Sentinel and monitor the pod.
This is to ensure failover of the server. However, either your client library has to support using Sentinel to discover the master and reconnect on failure, or you implement that in your own code, or you set up a TCP load balancer plus a sentinel-monitoring daemon that updates the load balancer configuration when a failover occurs, at which point the client code doesn't know or care about Sentinel. A minimal sentinel.conf for the three monitoring nodes might look like the sketch below.
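This is only a sketch; "mymaster", the master address, and the timeouts are illustrative:

# Run on each of the three sentinel nodes; a quorum of 2 means two sentinels
# must agree the master is down before a failover starts:
sentinel monitor mymaster 10.0.0.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000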
Cluster isn't there to provide HA; it is there for server-side sharding of data. For Cluster you're looking at 6-7 nodes minimum (3 masters, 3 slaves, 1 spare), as well as Cluster support in the client and restrictions on commands and Lua scripts that need to access multiple keys.