Changing the quorum in Redis Sentinel

I have 3 Sentinels monitoring a master/slave setup with a quorum of 2, and I would like to increase this to 5 Sentinels with a quorum of 3. However, when I run SENTINEL SET master quorum 3, the change is not propagated to the other 2 Sentinels. Is this expected? If a failover does happen, is the value of the last change the one that is used?

This is intended behavior. Master-level configuration commands must be sent to each Sentinel individually; a Sentinel does not propagate them to the other Sentinels. Send the command to each of your five Sentinels and you will get the effect you are after.
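For example, a minimal sketch using redis-cli, assuming five Sentinels at hypothetical hosts sentinel1 through sentinel5 on the default port 26379, and a master monitored under the name mymaster:

for host in sentinel1 sentinel2 sentinel3 sentinel4 sentinel5; do
    redis-cli -h "$host" -p 26379 SENTINEL SET mymaster quorum 3
done

You can then verify each Sentinel individually with SENTINEL MASTER mymaster, whose output includes the quorum that Sentinel is using.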

Related

Redis sentinel with multilevel replicas

I am using Sentinel as a high availability solution for redis.
I have a problem.
To reduce the replication pressure on the master, our Redis instances are arranged in multiple levels, as follows:
In the Sentinel documentation I found that it can monitor multiple masters, so I introduced it and hoped it would work as follows:
The second level of replicas also logically acts as "masters", so it needs to be monitored as well.
Instead I got the opposite of what I wanted: when the Sentinels started, they held elections and ended up treating each monitored name as an independent actual master (role: master), not as the logical masters I intended.
Q: So can Sentinels implement the monitoring topology in the figure above?
My main configuration is as follows:
sentinel monitor top-master xxx.x.x.x 6379 2
sentinel monitor second-level-first xxx.x.x.x 6379 2
sentinel monitor second-level-second xxx.x.x.x 6379 2
sentinel monitor second-level-third xxx.x.x.x 6379 2
IN BRIEF - NO
To answer this, you want to drill down into what Sentinel is doing.
It discovers all the slaves attached to the master it monitors.
It establishes pub/sub channels with those nodes.
When your actual master fails and another node becomes master, that change cannot be propagated down to a second level of replicas.
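You can inspect what a Sentinel has actually discovered by querying it directly. A minimal sketch, assuming a Sentinel listening on localhost:26379 and the service name top-master from the configuration in the question:

redis-cli -p 26379 SENTINEL master top-master
redis-cli -p 26379 SENTINEL slaves top-master

The slave list is built from the master's INFO replication output, which lists only its direct replicas, so slaves of slaves never become part of the monitored replica set.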
In fact, to dig further: can you please share the configuration of your slave nodes on level 1? This should not have been possible at all, and I am wondering how it worked.
If you can share the config files, I will go through them and update this answer accordingly.

Redis WAIT behavior upon failover

I have a Redis HA deployment with 3 nodes (1 master, 2 slaves) and a Sentinel running on every node. On the client side, I use WAIT 2 0 to block indefinitely until my write has reached the 2 slaves (and I am OK with that).
What would be the behavior of the WAIT command if:
1) a network partition isolates the master and the client from the 2 slaves, so my client is currently blocked by the WAIT
2) the majority of Sentinels elects one of the slaves as the new master (since there is still a quorum)
3) the network partition heals and the old master becomes a slave of the new one
Would the WAIT still be blocking? Or would it release the client, returning "0" slaves reached?
Many thanks
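For reference, WAIT takes a number of replicas and a timeout in milliseconds, and returns the number of replicas that acknowledged the write; a timeout of 0 blocks forever. A minimal sketch with redis-cli against the master:

redis-cli SET mykey myvalue
redis-cli WAIT 2 0     # block until 2 replicas acknowledge, no timeout
redis-cli WAIT 2 1000  # block at most 1 second, then return the count reached

With a finite timeout, WAIT always returns the number of replicas actually reached, which may be fewer than requested, so the client can decide how to handle an under-replicated write.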

Redis Sentinel with 2 masters after a multi-AZ netsplit

Hello stack community,
I have a question about Redis Sentinel for a specific problem case. I use AWS with multi-AZ to create a Sensu cluster.
On eu-central-1a I have a sensu+redis (master), a RBMQ+Sentinel, and 2 other Sentinels. The same on eu-central-1b, but the Redis there is my slave.
What happens if there is a problem and eu-central-1a cannot communicate with eu-central-1b? What I think is that the Sentinels on eu-central-1b should promote my Redis slave to master, because they cannot contact my Redis master. So I would have 2 Redis masters running together in 2 different AZs.
But when the link between the AZs is restored, I will still have 2 masters with 2 different data sets. What will happen in this case? Will one master become a slave, and will the data be replicated without loss? Do we need to restart a master so that it becomes a slave?
Sentinel detects changes to the master. For example, if the master goes down and is unreachable, one of the slaves is elected as the new master. This is based on the quorum, where multiple Sentinels agree that the master has gone down; the failover then occurs.
Once Sentinel detects the old master coming back online, it is reconfigured as a slave, I believe, and the new master continues. You will lose data in the switchover from the old master to the new one; that is inevitable.
If you lose the connection between the AZs, then yes, Sentinel won't work correctly, as it relies on multiple Sentinels agreeing that the Redis master is down. You shouldn't use Sentinel in a 2-Sentinel system.
A basic solution would be to put an extra Sentinel on another server, perhaps a client/application server that isn't running Redis/Sentinel. That way you can make use of the quorum, with the Sentinels agreeing that the master is down.
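One mitigation, described in the Redis replication docs, is to have the master stop accepting writes when it cannot see enough healthy replicas, which bounds how much data the isolated old master can accept and later lose when it is demoted. A sketch for redis.conf, with values to adapt to your setup (on older Redis versions the directives are spelled min-slaves-to-write and min-slaves-max-lag):

min-replicas-to-write 1
min-replicas-max-lag 10

With these settings, a master that cannot reach at least 1 replica with a lag under 10 seconds starts refusing writes, so a partitioned-off master stops diverging shortly after losing its slave.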

Unslave a redis slave

I have a setup of 3 instances in a failover cluster: one master and two slaves, all monitored by Sentinels. At one point I decide I don't need one of the slaves and want to reuse that Redis instance for something else. What commands do I issue?
I tried running SLAVEOF NO ONE on that slave, but it gets re-enslaved again within a few seconds.
Sentinels remember the slaves they have seen forever, so that they can reconnect them when they return after a crash or a network partition.
To make the Sentinels forget the slave you want to remove, the Redis documentation says "you need to send a SENTINEL RESET mastername command to all the Sentinels: they'll refresh the list of slaves within the next 10 seconds, only adding the ones listed as correctly replicating from the current master INFO output."
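Putting that together, a minimal sketch, assuming three Sentinels at hypothetical hosts sentinel1 through sentinel3 on port 26379, a master named mymaster, and the instance to repurpose at slave2:6379:

redis-cli -h slave2 -p 6379 SLAVEOF NO ONE      # detach it first so it stops replicating
redis-cli -h sentinel1 -p 26379 SENTINEL RESET mymaster
redis-cli -h sentinel2 -p 26379 SENTINEL RESET mymaster
redis-cli -h sentinel3 -p 26379 SENTINEL RESET mymaster

Detaching first matters: after the RESET, each Sentinel rebuilds its slave list from the master's INFO output, and a still-replicating instance would simply be rediscovered.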

Should Redis Sentinel Monitor Each Master in a cluster?

Does Sentinel need to monitor each master in the cluster under a distinct service name, or just one of the 3 masters in the cluster?
My current config is 3 masters, 3 slaves, and 3 Sentinel instances, with each Sentinel instance monitoring each of the masters: master1, master2, master3. I haven't seen any documentation that covers more than a single master, and the Redis documentation isn't really clear on this.
I found the answer by running a test myself: yes, in a cluster configuration you need to monitor each master in order for failover to occur.
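For example, a sketch of the relevant sentinel.conf lines, with hypothetical addresses, giving each cluster master its own service name so each can fail over independently:

sentinel monitor master1 10.0.0.1 6379 2
sentinel monitor master2 10.0.0.2 6379 2
sentinel monitor master3 10.0.0.3 6379 2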