Redis cluster on Kubernetes

I am trying to set up a Redis cluster on Kubernetes. One of my requirements is that the Redis cluster should be resilient to a Kubernetes cluster restart (due to issues like a power failure).
I have tried both a Kubernetes StatefulSet and a Deployment.
With a StatefulSet, the Pods are assigned a new set of IP addresses on reboot, and since Redis Cluster tracks nodes by IP address, the instances are not able to reach each other and form the cluster again.
With Services that have static IPs in front of individual Redis Deployments, Redis still stores the Pod IPs even though I created the cluster using the static Service IPs, so on reboot the instances are again not able to reach each other and form the cluster.
My redis-cluster StatefulSet config:
My redis-cluster Deployment config:

Redis 4.0.0 solved this problem by adding support for announcing a node's IP and port to the cluster (cluster-announce-ip and cluster-announce-port).
Set cluster-announce-ip to the static IP of the Service in front of each Redis instance's Kubernetes Deployment.
Link to setup instructions: https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md
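For illustration, a minimal sketch of the relevant redis.conf directives, assuming 10.0.0.5 stands in for the static ClusterIP of the Service fronting this node (the cluster bus port is conventionally the client port + 10000):

# redis.conf (Redis >= 4.0.0): announce the stable Service address instead of the Pod IP
cluster-enabled yes
# 10.0.0.5 is an assumed example: the static ClusterIP of the Service for this node
cluster-announce-ip 10.0.0.5
cluster-announce-port 6379
# cluster bus port, conventionally the client port + 10000
cluster-announce-bus-port 16379

With these set, the node advertises the stable Service address to its peers instead of its Pod IP, so the cluster can re-form after a restart.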

Are you able to use DNS names instead of IP addresses? I think that is the preferred way to route traffic to individual nodes in a StatefulSet: each Pod gets a stable DNS name of the form <pod-name>.<headless-service-name>.<namespace>.svc.cluster.local.
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id

Related

Why does a Redis client use multiple addresses in cluster mode?

Why does a Redis client use multiple addresses to create a connection in cluster mode? Is this to switch between addresses when one of them has failed?
Thanks.
The client uses multiple addresses to set up the application with all of the master and slave nodes available in the Redis cluster. The client never switches addresses on its own; it is the Redis cluster's responsibility to promote a slave node to master if a master fails. After that, subsequent requests can be served directly from the promoted node.
More details here : https://redis.io/topics/cluster-tutorial
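To make this concrete, here is a minimal sketch that passes several seed addresses to a cluster-aware client. It assumes the StackExchange.Redis client and made-up host names, since the question doesn't name a specific library:

using StackExchange.Redis;

// Any reachable seed node is enough for the client to discover the full cluster
// topology; listing several simply means startup still succeeds if one seed is down.
var muxer = ConnectionMultiplexer.Connect("redis-node1:6379,redis-node2:6379,redis-node3:6379");
var db = muxer.GetDatabase();

// The client routes each command to whichever node currently owns the key's hash slot,
// so after a failover it simply follows the newly promoted master.
db.StringSet("greeting", "hello");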

How to do Redis slave replication in a k8s cluster?

Based on the famous guestbook example:
https://github.com/kubernetes/examples/tree/master/guestbook
It creates the Redis master/slave Deployments and Services. It also has a subfolder named redis-slave, which is used to build a Docker image that runs the Redis replication command.
Dockerfile
run.sh
The question is: once the Redis master and slave are deployed to the k8s cluster, how do I run that command? Deploy a new container? That would have no relation to the slave container already deployed.
Is there a better way to do Redis replication between a master and slaves running in a k8s cluster?
One option you have is to use Helm to deploy the redis-ha chart.
Info about helm: https://github.com/kubernetes/helm
The redis-ha Helm chart page: https://hub.kubeapps.com/charts/stable/redis-ha
Redis Sentinel is often suggested for simple master-slave replication and high availability.
Unfortunately, Sentinel does not fit the Kubernetes world well, and it also requires a Sentinel-aware client to talk to Redis.
You could try the Redis operator, which can be considered a Kubernetes-native replacement for Sentinel and lets you create a Redis deployment that survives most kinds of failures without human intervention.
Here is how you can set up a Redis HA master-slave cluster in Kubernetes/OpenShift OKD.
Basically, you have to use a ConfigMap and a StatefulSet in collaboration with VolumeClaims:
https://reachmnadeem.wordpress.com/2020/10/01/redis-ha-master-slave-cluster-up-and-running-in-openshift-okd-kubernetes/
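Whichever of these routes you take, replication itself boils down to pointing each slave at the master. As a hedged sketch, assuming the master is exposed through a Service named redis-master, the slave-side directive would look like this (Redis 5+ also accepts replicaof as an alias):

# redis.conf on each slave Pod
# redis-master is an assumed Service name; substitute the DNS name of your master Service
slaveof redis-master 6379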

Kubernetes cluster internal load balancing

Playing a bit with Kubernetes (v1.3.2) I’m checking the ability to load balance calls inside the cluster (3 on-premise CentOS 7 VMs).
If I understand the 'Virtual IPs and service proxies' section of http://kubernetes.io/docs/user-guide/services/ correctly, and as I see in my tests, the load balancing is per node (VM). That is, if I have a cluster of 3 VMs and deploy a service with 6 pods (2 per VM), the load balancing will only be between the pods on the same VM, which is somewhat disappointing.
At least this is what I see in my tests: calling the service from within the cluster using the service's ClusterIP load-balances only between the 2 pods that reside on the same VM the call was sent from.
(BTW, the same goes when calling the service from outside the cluster (using NodePort): the request then load-balances between the 2 pods that reside on the VM whose IP address was the request's target.)
Is the above correct?
If yes, how can I make internal cluster calls load-balance between all 6 replicas? (Must I employ a load balancer like nginx for this?)
No, the statement is not correct. The load balancing should be across nodes (VMs). This demo demonstrates it: I ran it on a k8s cluster with 3 nodes on GCE. It first creates a service with 5 backend pods, then SSHes into one GCE node and hits the service's ClusterIP, and the traffic is load-balanced to all 5 pods.
I see you have another question, "not unique ip per pod", open; it seems you haven't set up your cluster network properly, which might be what caused what you observed.
In your case, each node will be running a copy of the service proxy and will load-balance across the nodes.

How to properly register Redis Master and Slaves with ServiceStack Client Managers?

When I provision a default Redis cluster on Google Compute Engine, there is one master and 2 read-only slaves, and Redis Sentinel is running on each machine. Given this cluster, I'd now like to use it in my ServiceStack service, but the Sentinel setting has me stumped. Typically I do something along the lines of:
container.Register<IRedisClientsManager>(c =>
    new RedisManagerPool(container.Resolve<IAppSettings>().GetString("Redis:Master")));
var cacheClient = container.Resolve<IRedisClientsManager>().GetCacheClient();
container.Register(cacheClient);
So a couple of things are incomplete with this setup: how do I specify the master and the 2 read-only slaves, and how do I configure Sentinel?
The RedisSentinel support in ServiceStack.Redis is available in the RedisSentinel class, but as it's still being tested, it's not yet announced. You can find some info on how to use and configure a RedisSentinel in this previous StackOverflow answer.
Configuring a RedisSentinel
When using Redis Sentinel, it's the external Redis Sentinel process that manages the individual master/slave connections, so you only need to configure the Sentinel hosts and can ignore the individual master/slave connections.
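For example, a minimal registration sketch using ServiceStack.Redis's RedisSentinel; the sentinel host names and the master name "mymaster" are placeholders for your actual deployment:

// Placeholder sentinel endpoints; replace with your three Sentinel machines.
var sentinelHosts = new[] { "sentinel1:26379", "sentinel2:26379", "sentinel3:26379" };

// RedisSentinel asks the sentinels for the current master/slaves and returns an
// IRedisClientsManager that keeps following the topology after a failover.
var sentinel = new RedisSentinel(sentinelHosts, masterName: "mymaster");
container.Register<IRedisClientsManager>(c => sentinel.Start());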
Configuring a RedisClientManager
Alternatively, if you're using a Redis Client Manager, you would do the opposite, i.e. ignore the Sentinel hosts and configure the Redis Client Manager with the master and slave hosts. Only the PooledRedisClientManager supports configuring both read-write/master and read-only/slave hosts, e.g.:
container.Register<IRedisClientsManager>(c =>
    new PooledRedisClientManager(redisReadWriteHosts, redisReadOnlyHosts) {
        ConnectTimeout = 100,
        //...
    });
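For completeness, a short usage sketch (assuming the registration above): GetClient() hands back a client from the read-write/master hosts, while GetReadOnlyClient() draws from the read-only/slave hosts:

var manager = container.Resolve<IRedisClientsManager>();

using (var redis = manager.GetClient())            // read-write client -> master host
{
    redis.SetValue("greeting", "hello");
}

using (var readOnly = manager.GetReadOnlyClient()) // read-only client -> slave hosts
{
    var greeting = readOnly.GetValue("greeting");
}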

Redis failover scenario

Currently I have a single Redis instance, and now I would like to make it more failure-proof.
Is it possible to achieve the following?
I connect to Redis with the ServiceStack library, and I want it to switch to a failover server automatically when the primary server is not available.
You should configure a Redis instance as a slave of your master instance, either using the SLAVEOF command or, more likely, by adding a slaveof directive in the configuration file (something like 'slaveof 127.0.0.1 6380'; look at the documentation for more info); then use Redis Sentinel to monitor the instances and promote the slave to master when the master fails.
Moreover, you either have to use a Redis client that supports Sentinel and handles the redirection when the slave is promoted to master, or use a network configuration (like a virtual IP) to make the redirection transparent to your application.
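As a minimal configuration sketch of the setup described above, assuming the master listens on 127.0.0.1:6380 as in the example directive and that three Sentinels vote with a quorum of 2 (tune the timeouts to your environment):

# redis.conf on the slave
slaveof 127.0.0.1 6380

# sentinel.conf on each Sentinel node
sentinel monitor mymaster 127.0.0.1 6380 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000

On the client side, a Sentinel-aware manager such as ServiceStack's RedisSentinel (shown in the previous question) discovers the newly promoted master automatically after a failover.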