Imagine I have two data centers with one Redis instance running in each of them. In addition, I have three Sentinel instances running across these two data centers, each on a separate machine, but two of them in the same data center.
Is this a problem?
In the worst case, if the first data center becomes unavailable, two of the three Sentinel instances plus one Redis instance shut down simultaneously. If that Redis instance was the master, there would be a failover to the other data center.
But what happens when data center 1 becomes available again? I'd guess this would be the new configuration:
Data center 1 - Sentinel 1 -> Points to master in data center 1
Data center 1 - Sentinel 2 -> Points to master in data center 1
Data center 2 - Sentinel 1 -> Points to master in data center 2
Will Redis make the instance in data center 1 the master again? If so, what happens to database changes that occurred in the meantime on the master in data center 2?
If the data center with two Sentinels goes down and the master Redis node goes down with it, then the remaining Sentinel won't be able to elect a new master or promote the Redis server in the other data center. A majority of the Sentinels has to agree that the master has failed, and they also have to elect one Sentinel process as the leader to promote the surviving Redis server; for that, too, a majority of Sentinel processes has to be reachable. In a deployment of 3 Sentinels the majority is 2, so when 2 of them are down, no failover of the master Redis will happen.
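To illustrate (the master name mymaster and the Sentinel port 26379 are just the usual defaults, not taken from this setup), you can ask the surviving Sentinel whether it could still authorize a failover:
# Check whether this Sentinel currently sees enough peers for the quorum
# and for the majority needed to authorize a failover.
redis-cli -p 26379 SENTINEL CKQUORUM mymaster
# Show which instance this Sentinel currently considers the master.
redis-cli -p 26379 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster
With two of the three Sentinels unreachable, the CKQUORUM check would report that a failover cannot be authorized, which is exactly why none happens.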
Related
In a 6-node Redis Cluster with 3 master nodes and 3 slave nodes, if, let's say, a master goes down, the corresponding slave will be promoted. When the old master comes back up, it will be a slave.
Is it possible to force it somehow, from the Redis config or otherwise, so that when it comes back up the old master is promoted to master again, as it was at the beginning?
Thank you!
If the old master comes back up with the setting:
slaveof no one
it will join the cluster as a master, but I don't think you would want to do that.
The 'old master' does not have the latest data; if you force it to become the master, there will be data loss.
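If the goal is simply to end up with the original node as master again, a safer route is a coordinated manual failover once the old master has rejoined as a slave and caught up. A sketch (the port 7000 is just an example):
# Confirm the old master has rejoined as a replica and resynced.
redis-cli -p 7000 ROLE
# Promote it back to master; CLUSTER FAILOVER coordinates with the current
# master so that no acknowledged writes are lost.
redis-cli -p 7000 CLUSTER FAILOVER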
We have a 3-node Redis cluster with Redis and Sentinel running on all three nodes.
One of the nodes is the master and the other two are replicas.
There are situations when one node goes down, and in those cases one of the replica nodes is promoted to master without any issue.
Now we have a use case where two nodes go down and we want the last remaining node to be promoted to master. We don't want to set the quorum to 1, as this may lead to unnecessary failovers. Please suggest possible solutions.
Assuming you run both a Sentinel and a Redis process on each of the 3 nodes, your deployment can handle the failure of a single node only.
This is because after two nodes go down, there is only one running Sentinel process left, which (as you said) can't form a quorum.
If you need to survive 2 concurrent node failures, you will need to increase the size of your deployment, and preferably also separate the Sentinel nodes from the Redis nodes.
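As a sketch of what that could look like (the master address 10.0.0.10, the name mymaster, and the timeouts are placeholders): keep the three existing Sentinels and add two more on hosts that carry no Redis data node, for 5 Sentinels in total.
# Minimal sentinel.conf for each of the two additional Sentinel-only hosts.
cat > sentinel.conf <<'EOF'
port 26379
sentinel monitor mymaster 10.0.0.10 6379 3
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
EOF
redis-sentinel ./sentinel.conf
With 5 Sentinels the majority is 3, so losing any 2 nodes still leaves enough Sentinels to agree on the failure and authorize the failover.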
I'm currently exploring Redis Cluster. I've started 6 instances on 3 physical servers (3 masters and 3 slaves) with persistence enabled.
I've noticed that when I kill one of the master instances, its slave is promoted to master after some time. However, it remains master even when I start the killed instance again.
Since Redis does asynchronous replication, I was thinking of a scenario where the master is killed immediately after persisting a write locally, i.e. before it was able to replicate that data.
Will this data get replicated to the new master (initially a slave) once the instance comes back up?
No. If the master hasn't replicated the data to its slave, that data will be lost. When the old master recovers, it will become a slave of another node according to the cluster's failover rules, and it will then replicate data from its new master, discarding its own unreplicated writes.
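If that window of unreplicated writes is a concern, two settings can narrow it, though not eliminate it. A sketch (the address 10.0.0.10 and the key are placeholders; older versions spell the settings min-slaves-*):
# Refuse writes on a master unless at least 1 replica is connected
# and its replication lag is at most 10 seconds.
redis-cli -h 10.0.0.10 -p 6379 CONFIG SET min-replicas-to-write 1
redis-cli -h 10.0.0.10 -p 6379 CONFIG SET min-replicas-max-lag 10
# Or, after a critical write, block until at least 1 replica has
# acknowledged it (or 100 ms have passed).
redis-cli -h 10.0.0.10 -p 6379 SET order:42 paid
redis-cli -h 10.0.0.10 -p 6379 WAIT 1 100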
I have 3 replicated Redis instances running on 3 different machines: A, B and C.
I initially chose A as my master.
I also have 3 sentinels (1 on each machine) monitoring A.
In case A goes down, I want sentinels to choose a specific master to failover to (say B).
Is there a way to choose a specific master instead of leaving it to the election mechanism of the sentinels?
Since I couldn't find this question anywhere, I reckon it's not standard procedure so I'll explain the reason behind it:
My application is running on A, B and C behind a load balancer.
The master uses its local Redis db which is replicated to the other two slaves.
When A fails, the load balancer could route traffic to B while the Redis Sentinels elect C as the new Redis master.
As I just said, I need the Redis instance to be local, which is why I need to specify B as the Redis master.
There's a Redis configuration setting called 'slave-priority' (renamed 'replica-priority' in newer versions) that may help you out.
Reference:
http://download.redis.io/redis-stable/redis.conf
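A sketch of how that could look here (the addresses are placeholders, and on older versions the parameter is slave-priority): give B a lower priority value than C, since Sentinel prefers the replica with the lowest non-zero priority when choosing the promotion target.
# On B: make it the preferred promotion target.
redis-cli -h 10.0.0.2 CONFIG SET replica-priority 10
# On C: keep it eligible but less preferred (0 would exclude it entirely).
redis-cli -h 10.0.0.3 CONFIG SET replica-priority 100
# Persist the change to redis.conf so it survives a restart.
redis-cli -h 10.0.0.2 CONFIG REWRITE
redis-cli -h 10.0.0.3 CONFIG REWRITE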
When using the replication features of Redis in an environment with slaves of slaves, would the connected_slaves counter increase for the master, the slave which is acting as a master, or both? Refer to the example diagram: https://imgur.com/Ge1WLzX
In the image, there is a master with two slaves, each slave having its own two slaves. In this instance would the master's connected_slaves value be 6? Would the connected_slaves value of the first pair of slaves be 2 each?
I have looked through Redis's documentation and have found nothing describing this.
To find the solution, I set up a virtual environment with seven Redis servers running simultaneously. They were set with sequential port numbers, with the default Redis port (6379) instance being the master. I configured two instances to be slaves to Instance_6379, and configured two more slaves each for Instance_6380 and Instance_6381, respectively. I then checked the redis-cli info output of each, taking note of the connected_slaves metric.
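Roughly, that setup can be reproduced like this (all on one host; the ports are illustrative, and older versions use --slaveof instead of --replicaof):
# Master and its two direct slaves.
redis-server --port 6379 --daemonize yes
redis-server --port 6380 --daemonize yes --replicaof 127.0.0.1 6379
redis-server --port 6381 --daemonize yes --replicaof 127.0.0.1 6379
# Two sub-slaves for each direct slave.
redis-server --port 6382 --daemonize yes --replicaof 127.0.0.1 6380
redis-server --port 6383 --daemonize yes --replicaof 127.0.0.1 6380
redis-server --port 6384 --daemonize yes --replicaof 127.0.0.1 6381
redis-server --port 6385 --daemonize yes --replicaof 127.0.0.1 6381
# Inspect the counters.
redis-cli -p 6379 INFO replication | grep connected_slaves
redis-cli -p 6380 INFO replication | grep connected_slaves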
Here is what I found:
Masters will only report those slaves that are DIRECTLY CONNECTED to them. Slaves of slaves will NOT count towards the total number of slaves connected to the master.
In the example image referenced in the question, the leftmost Redis server would have only two connected_slaves and each of its children would also show two connected_slaves.
I hope this answer will be of use to someone.