How to run Redis Sentinel monitoring Redis servers

I have 3 Redis servers running, with a Sentinel on each host:
3 Redis + 3 Sentinels (3 hosts total)
Can I run Sentinel on a separate host, or should it always run alongside redis-server?
3 Redis on 3 hosts
3 Sentinels on 3 other hosts (6 hosts total)
Is it possible to monitor all 3 Redis servers with only one Sentinel?
3 Redis on 3 hosts
1 Sentinel on 1 host (3 or 4 hosts total)

You can run Sentinel on separate hosts or on the same hosts as Redis.
The benefit of running it on separate hosts is that the Sentinel instances will not be affected by load on the Redis instances.
The benefit of running it on the same hosts is mainly cost.
Monitoring all three Redis servers with a single Sentinel might be possible, but it doesn't make any sense.
The benefit of a Redis Sentinel deployment over a single-node Redis deployment is that it adds high availability (HA).
This means that in case of master failure one of the slaves will be promoted to master and the cluster will continue to function.
If you have only a single Sentinel instance, you don't have HA, since a failure of that Sentinel instance will cause the cluster to fail.
Therefore, to achieve HA you must have at least 3 Sentinel instances running on different physical nodes.
If you don't need HA, just run a single Redis instance without Sentinel.
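A minimal sketch of such a deployment (the master address 192.168.1.10 is a placeholder, and the replicas are assumed to be configured already): each of the three Sentinel hosts runs the same sentinel.conf, started with redis-sentinel /path/to/sentinel.conf:
port 26379
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
Only the master address needs to be listed; each Sentinel discovers the other Sentinels and the replicas through the master, and a quorum of 2 out of 3 Sentinels must agree before a failover is started.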

Related

Redis cluster failover: slave won't become master

I am trying to test my software's behavior during cluster failover, and for that reason I want to configure the simplest cluster: one master and two slaves. I have three files, 7000.conf - 7002.conf, with the following content:
port 7000
cluster-config-file nodes.7000.conf
appendfilename appendonly.7000.aof
dbfilename dump.7000.rdb
pidfile /var/run/redis_7000.pid
include cluster.conf
The content of cluster.conf:
cluster-enabled yes
appendonly yes
maxclients 100
daemonize yes
cluster-node-timeout 2000
cluster-slave-validity-factor 0
I then configured 7000 to serve all slots from 0 to 16383, with 7001 and 7002 as replicas of 7000:
XXX 127.0.0.1:7002 slave YYY 0 1511389011347 4 connected
YYY 127.0.0.1:7000 myself,master - 0 0 4 connected 0-16383
ZZZ 127.0.0.1:7001 slave YYY 0 1511389011246 4 connected
Then I try to get rid of 7000, either via the shutdown command or by killing the process. One of the slaves should promote itself to master, but none does:
ZZZ 127.0.0.1:7001 slave YYY 0 0 3 connected
YYY 127.0.0.1:7000 master,fail? - 1511389104442 1511389103933 4 disconnected 0-16383
XXX 127.0.0.1:7002 myself,slave YYY 0 1511389116543 4 connected
I've waited for minutes, but my slaves do not want to become master. If I force a slave to become master via cluster failover takeover, it is more than happy to do so (and if I restart the old master, it becomes a slave), but it never happens automatically.
I've tried to play with cluster-node-timeout, but it does not help.
Am I doing something wrong? Redis version is 3.2.11.
The issue is that a Redis cluster has a minimum size of 3 masters for automatic failover to work. It is the master nodes that watch each other and detect the failure, so with a single master in the cluster there is no running process able to detect that your one master is down. The minimum of three ensures that in the case of any downed node, a majority of the entire cluster still has to agree: with 3 masters, more than half of them remain available to reach a majority view after a single failure.
The Redis-cluster tutorial mentions this in the following section: https://redis.io/topics/cluster-tutorial#creating-and-using-a-redis-cluster
"Note that the minimal cluster that works as expected requires to contain at least three master nodes."
Please note that even with 3 masters, automatic failover is not guaranteed if the failure happens as in the following cluster layout (M - master / S - slave):
Node-1: M1 S3
Node-2: M2 S1
Node-3: M3 S2
Now if Node-3 fails, its slave S3 on Node-1 is promoted to master automatically. All is well, with the following status after Node-3 recovers:
Node-1: M1 M3 <----- Note the 2 masters on Node-1 now, with S3 having become M3 in the previous step.
Node-2: M2 S1
Node-3: S3 S2 <----- Note that the redis-server came up as a slave (it was M3 before).
Now you might think the cluster will continue to handle failures easily, since 3 masters are still present in this setup. However, if Node-1 fails, the cluster is DOWN because the quorum is not satisfied, and it never comes back up unless we make some manual adjustments.
Hope this helps.

Redis Query regarding slave to master promotion

I have the following Redis configuration:
Machine 1 - 1 redis master ; 1 redis sentinel
Machine 2 - 1 redis slave ; 1 redis sentinel.
When I shut down the Redis master on machine 1, the Redis slave on machine 2 gets promoted to the master role.
However, if I now restart the Redis server on machine 1 and shut down the Redis server on machine 2, Redis on machine 1 does not get promoted to master. Is this the intended behaviour, and if so, why?

Redis Cluster: No automatic failover for master failure

I am trying to implement a Redis cluster with 6 machines.
I have a vagrant cluster of six machines:
192.168.56.101
192.168.56.102
192.168.56.103
192.168.56.104
192.168.56.105
192.168.56.106
all running redis-server
I edited the /etc/redis/redis.conf file on all of the above servers, adding this:
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-slave-validity-factor 0
appendonly yes
I then ran this on one of the six machines:
./redis-trib.rb create --replicas 1 192.168.56.101:6379 192.168.56.102:6379 192.168.56.103:6379 192.168.56.104:6379 192.168.56.105:6379 192.168.56.106:6379
A Redis cluster is up and running. I checked manually by setting a value on one machine, and it shows up on the other machines.
$ redis-cli -p 6379 cluster nodes
3c6ffdddfec4e726f29d06a6da550f94d976f859 192.168.56.105:6379 master - 0 1450088598212 5 connected
47d04bc98ab42fc793f9f382855e5c54ab8f2e20 192.168.56.102:6379 slave caf2cec45114dc8f4cbc6d96c6dbb20b62a39f90 0 1450088598716 7 connected
040d4bb6a00569fc44eec05440a5fe0796952ccf 192.168.56.101:6379 myself,slave 5318e48e9ef0fc68d2dc723a336b791fc43e23c8 0 0 4 connected
caf2cec45114dc8f4cbc6d96c6dbb20b62a39f90 192.168.56.104:6379 master - 0 1450088599720 7 connected 0-10922
d78293d0821de3ab3d2bca82b24525e976e7ab63 192.168.56.106:6379 slave 5318e48e9ef0fc68d2dc723a336b791fc43e23c8 0 1450088599316 8 connected
5318e48e9ef0fc68d2dc723a336b791fc43e23c8 192.168.56.103:6379 master - 0 1450088599218 8 connected 10923-16383
My problem is that when I shut down or stop redis-server on any one machine that is a master, the whole cluster goes down, but if all three slaves die the cluster still works properly.
What should I do so that a slave becomes a master if a master fails (fault tolerance)?
I am under the assumption that Redis handles all those things and I need not worry about it after deploying the cluster. Am I right, or would I have to do things myself?
Another question: let's say I have six machines with 16GB RAM each. How much total data would I be able to handle on this Redis cluster with three masters and three slaves?
Thank you.
The setting cluster-slave-validity-factor 0 may be the culprit here.
From redis.conf:
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
In your setup, the slave of the terminated master considers itself unfit to be elected master, since the time it last contacted the master is greater than the computed value of:
(node-timeout * slave-validity-factor) + repl-ping-slave-period
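As a rough worked example of this formula (assuming the default repl-ping-slave-period of 10 seconds): with your cluster-node-timeout 5000 and slave-validity-factor 0, the allowed disconnection window is (5000 ms * 0) + 10000 ms = 10000 ms, whereas the suggested factor of 10 would widen it to (5000 ms * 10) + 10000 ms = 60000 ms.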
Therefore, even with a redundant slave, the cluster state changes to DOWN and the cluster becomes unavailable.
You can try a different value, for example the suggested default:
cluster-slave-validity-factor 10
This will ensure that the cluster is able to tolerate one random Redis instance failure (it can be a slave or a master instance).
For your second question: six machines with 16GB RAM each will be able to function as a Redis Cluster of 3 master instances and 3 slave instances, so the theoretical maximum is 16GB x 3 of data. Such a cluster can tolerate at most ONE node failure if cluster-require-full-coverage is turned on; otherwise it may still be able to serve data from the shards that remain available on the functioning instances.
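For that partial-availability behaviour, the relevant redis.conf setting is cluster-require-full-coverage (it defaults to yes); a sketch of the non-default value:
cluster-require-full-coverage no
With it set to no, the surviving masters keep serving the hash slots they still cover instead of marking the whole cluster down.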

Redis - configure sentinel to elect slave if master shutdown

Hi, I have created a Redis cluster with Sentinel, composed of 3 AWS instances. I have configured Sentinel to have an HA Redis setup and it works, but if I simulate a crash of the master (shutting down the master instance), the Sentinels installed on the slaves cannot locate the master's Sentinel and the election fails.
My sentinel configuration is:
sentinel monitor master ip-master 6379 2
sentinel down-after-milliseconds master 5000
sentinel failover-timeout master 10000
sentinel parallel-syncs master 1
The same file is used on all instances.
There are issues when running Sentinel on the same node as the master and attempting to trigger a failover. Try it without running Sentinel on the master. Ultimately this means not running Sentinel on the same nodes as the Redis instances.
In your case the dead-node simulation is showing why you should not run Sentinel on the same node as Redis: if the node dies, you lose one of your Sentinels. In theory it should still work, but as you and others have seen, it isn't certain to work. I have some theories why, but I've not yet confirmed them.
In a sense, Sentinel is partly a monitoring system. Running a monitoring solution on the same nodes as the ones being monitored is generally inadvisable anyway, so you should be using off-node Sentinels. As Sentinel is resource-efficient, you don't necessarily need dedicated machines or large VMs. Indeed, if you have a static set of application servers (where your client code runs), you should run Sentinel there, keeping in mind that you want 3 Sentinels at minimum and a quorum of 50%+1.
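For example, following the 50%+1 rule above: with 3 Sentinels you would configure a quorum of 2, and with 5 Sentinels a quorum of 3.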
Recent Redis versions introduced the protected-mode option, which defaults to yes.
With protected-mode set to yes, Redis instances without a password set will not allow remote clients to execute commands.
This also affects Sentinel master election.
Try setting protected-mode no in the sentinels; this will allow them to talk to each other.
If you don't want to set protected-mode to no, you can instead set masterauth myredis in redis.conf and use sentinel auth-pass mymaster myredis in sentinel.conf.
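A sketch of that auth-based route, reusing the master name and timeouts from the question (myredis is only a placeholder password, and requirepass is assumed to be set on the Redis instances so that the password actually applies):
# redis.conf on the master and each slave
requirepass myredis
masterauth myredis
# sentinel.conf on each Sentinel
sentinel monitor master ip-master 6379 2
sentinel auth-pass master myredis
sentinel down-after-milliseconds master 5000
sentinel failover-timeout master 10000
sentinel parallel-syncs master 1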

Redis Sentinel for Windows

I'm successfully using Redis for Windows (2.6.8-pre2) in a master-slave setup. However, I need to provide some automated failover capability, and it appears Sentinel is the most popular choice. When I run Redis in sentinel mode the Sentinel connects, but it always thinks the master is down. Also, when I run the sentinel master command it reports that there are 0 slaves (not true) and that there are no other sentinels (again, not true). So it's like it connects to the master, but not correctly.
Has anyone else seen this issue on Windows and, more importantly, is anyone successfully using sentinel in a windows environment? Any help or direction at all is appreciated!
I recommend using this:
1 master node redis server
1 slave node redis server
3 redis sentinels with a quorum of 2
It's very important to have at least 3 sentinels, an odd number, so that a quorum can be reached.
I made this configuration in Windows 7 and it's working well.
Example of sentinel conf:
port 20001
logfile "sentinel1.log"
sentinel monitor shard1 127.0.0.1 16379 2
sentinel down-after-milliseconds shard1 5000
sentinel failover-timeout shard1 30000
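The other two Sentinels would use the same monitor settings with only the port and logfile changed, for example (a sketch):
port 20002
logfile "sentinel2.log"
sentinel monitor shard1 127.0.0.1 16379 2
sentinel down-after-milliseconds shard1 5000
sentinel failover-timeout shard1 30000
and likewise port 20003 with logfile "sentinel3.log" for the third Sentinel.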