Redis sentinels on the same servers as master/slave?

I've been doing some reading on how to use Redis Sentinel, and I know it's possible to have 2 or more sentinels, and load balance between them when calling from the client side.
Is it good practice to have these 2 sentinels on the same servers as my master + slave? In other words, have 1 sentinel on the same physical server as the master, and another on the same physical server as the slave?
It seems to me that if the master server dies, the sentinel on the slave will simply promote the slave to a master. If the slave server dies, it doesn't matter because the master is still up.
Am I missing something? What are the downsides?
I'd rather have the sentinels on the same physical servers as the master/slave to reduce latency.

First, Sentinel is not a load balancer or a proxy for Redis.
Second, not all failures are the death of the host. Sometimes the server hangs briefly, sometimes a network cable gets unplugged, etc. Because of this, it is not good practice to run Sentinel on the same hosts as your Redis instances. If you're using Sentinel to manage failover, anything less than three sentinels running on nodes other than your Redis master and slave(s) is asking for trouble.
Sentinel uses a quorum mechanism to vote on a failover and on which slave to promote. With fewer than three sentinels you run the risk of split brain, where two or more Redis servers think they are master.
Imagine the scenario where you run two servers and run sentinel on each. If you lose one you lose reliable failover capability.
Clients only connect to Sentinel to learn the current master's connection information. Any time a client loses connectivity, it repeats this process. Sentinel is not a proxy for Redis - commands for Redis go directly to Redis.
The only reasonable reason to run Sentinel with fewer than three sentinels is for service discovery, which means not using it for failover management.
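To make the discovery step concrete, here is a minimal sketch using the redis-py client. The sentinel address, the master name "mymaster", and the key are assumptions, and the SENTINEL CKQUORUM check assumes a reasonably recent Sentinel version:

    # Minimal sketch: ask a sentinel where the master is (discovery only).
    # Hypothetical sentinel address 10.0.0.10:26379 and master name "mymaster".
    import redis

    sentinel = redis.Redis(host="10.0.0.10", port=26379, decode_responses=True)

    # Returns [ip, port] of the current master for the named service.
    master_ip, master_port = sentinel.execute_command(
        "SENTINEL", "get-master-addr-by-name", "mymaster"
    )
    print("current master:", master_ip, master_port)

    # Optional sanity check: reports whether the deployed sentinels can reach quorum.
    print(sentinel.execute_command("SENTINEL", "ckquorum", "mymaster"))

    # Actual Redis commands go directly to the master, not through Sentinel.
    master = redis.Redis(host=master_ip, port=int(master_port))
    master.set("example-key", "example-value")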
Consider the two host scenario:
Host A: redis master + sentinel 1 (Quorum 1)
Host B: redis slave + sentinel 2 (Quorum 1)
If Host B temporarily loses network connectivity to Host A in this scenario, Host B will promote itself to master. Now you have:
Host A: redis master + sentinel 1 (Quorum 1)
Host B: redis master + sentinel 2 (Quorum 1)
Any clients which connect to Sentinel 2 will be told Host B is the master, whereas clients which connect to Sentinel 1 will be told Host A is the master (which, if you have your Sentinels behind a load balancer, means half of your clients).
Thus, the minimum you need to run for acceptable, reliable failover management is:
Host A: Redis master
Host B: Redis Slave
Host C: Sentinel 1
Host D: Sentinel 2
Host E: Sentinel 3
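In that layout, each of the three sentinels watches the same master under the same name with a quorum of 2. That normally lives in each sentinel.conf as a "sentinel monitor" line; the sketch below uses the equivalent runtime SENTINEL MONITOR command via redis-py, with made-up addresses for Hosts A and C-E and the assumption that the sentinels are not already monitoring that name:

    # Sketch: point Sentinels 1-3 (Hosts C, D, E) at the master on Host A with quorum 2.
    # Normally this is a "sentinel monitor mymaster 10.0.0.1 6379 2" line in sentinel.conf;
    # the runtime command shown here is equivalent. All addresses are made up.
    import redis

    SENTINELS = [("10.0.0.3", 26379), ("10.0.0.4", 26379), ("10.0.0.5", 26379)]  # Hosts C, D, E
    MASTER_IP, MASTER_PORT, QUORUM = "10.0.0.1", 6379, 2                          # Host A

    for host, port in SENTINELS:
        s = redis.Redis(host=host, port=port, decode_responses=True)
        # Each sentinel independently monitors the same master under the same name.
        s.execute_command("SENTINEL", "MONITOR", "mymaster", MASTER_IP, MASTER_PORT, QUORUM)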
Your clients connect to the sentinels and obtain the current master for the Redis instance (by name), then connect to it. If the master dies the connection should be dropped by the client whereupon the client will/should connect to Sentinel again and get the new information.
How well each client library handles this is dependent on the library.
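As one example, redis-py ships a Sentinel helper that performs this lookup-and-reconnect dance for you. A minimal sketch, assuming the sentinel addresses from the layout above and a master named "mymaster":

    # Sketch of the client-side flow using redis-py's Sentinel helper.
    # Sentinel addresses and the master name "mymaster" are assumptions.
    from redis.sentinel import Sentinel

    sentinel = Sentinel(
        [("10.0.0.3", 26379), ("10.0.0.4", 26379), ("10.0.0.5", 26379)],
        socket_timeout=0.5,
    )

    # Where does Sentinel currently think the master is?
    print(sentinel.discover_master("mymaster"))   # e.g. ('10.0.0.1', 6379)

    # master_for() returns a regular Redis client that re-queries Sentinel
    # and reconnects if the master changes after a failover.
    master = sentinel.master_for("mymaster", socket_timeout=0.5)
    master.set("example-key", "example-value")

    # Reads can optionally be spread over replicas.
    replica = sentinel.slave_for("mymaster", socket_timeout=0.5)
    print(replica.get("example-key"))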
Ideally, Hosts C, D, and E are either the same hosts you connect to Redis from (i.e. the client hosts), or represent a good sampling of them. The main thrust here is to ensure you are checking reachability from wherever you need to connect to Redis. Failing that, place them in the same DC/rack/region as the clients.
If you want your clients to talk to a load balancer, try to have your Sentinels on those LB nodes if possible, adding additional non-LB hosts as needed to obtain an odd number of sentinels greater than two. An exception to this is if your client hosts are dynamic, in that their number is inconsistent (they scale up for traffic and down for slow periods, for example). In this scenario you pretty much must run your Sentinels on non-client and non-Redis-server hosts.
Note that if you do this, you will then need to write a daemon which monitors the Sentinel PUBSUB channel for the master switch event to update the LB - which you must configure to only talk to the current master (never try to talk to both). It is more work, but it does make the use of Sentinel transparent to the client, which only knows to talk to the LB IP/port.
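A rough sketch of such a daemon using redis-py pub/sub is below; update_load_balancer() is a hypothetical hook standing in for whatever actually rewrites and reloads your LB configuration, and the sentinel address and master name are assumptions:

    # Sketch of a daemon that watches Sentinel's +switch-master event and
    # repoints the load balancer. update_load_balancer() is a hypothetical hook.
    import redis

    def update_load_balancer(ip, port):
        # Placeholder: rewrite the LB config / call its API, then reload it.
        print(f"repointing LB at {ip}:{port}")

    sentinel = redis.Redis(host="10.0.0.3", port=26379, decode_responses=True)
    pubsub = sentinel.pubsub()
    pubsub.subscribe("+switch-master")

    for message in pubsub.listen():
        if message["type"] != "message":
            continue  # skip the subscribe confirmation
        # Payload format: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
        name, _old_ip, _old_port, new_ip, new_port = message["data"].split()
        if name == "mymaster":
            update_load_balancer(new_ip, int(new_port))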

It all depends on the level of disaster recovery you want to achieve. Let's assume you have the following components, independently of where they are hosted:
2 Sentinels
1 Master
1 Slave
One host scenario
Host fails: you lose everything; a bad replication scenario for most use cases.
Two host scenario
Host 1:
(Current elected) Master
1 Sentinel
Host 2:
Slave
1 Sentinel
It is true that in this scenario the hosts can fail one at a time, which gives you some level of safety. Just be clear about whether by "different servers" you mean physically different hosts. If these are just VMs on the same physical host, you do not get the same level of DR (disaster recovery).
Regarding your question:
I'd rather have the sentinels be on the same server as the master/slave to reduce latency.
Notice that Sentinels keep track of the current master and slaves, but Redis clients do not connect to the master via the Sentinels; they only learn from the Sentinels where the current master is. So in terms of reads and writes you are not looking at any considerable latency gains.
Configuration provider. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.
(see: http://redis.io/topics/sentinel)
The way I see it, the only gains you have in terms of latency are in the heartbeats sent between the master/slaves and the Sentinels. As long as you are not spreading your servers across the whole world, that should be OK.
It all depends on the use cases, but it seems you would do best to keep things as separate as possible if all other things are equal (costs, distance to clients, etc).

You can have sentinels on the same machines as the master/slave, but the number of sentinels must be odd (3/5/7). There should be at least three sentinels, and it is a must to have a dedicated machine for at least one of them.
If you have only two nodes, then in a split-brain (network disruption) situation the slave will be promoted to master. Both masters will now accept data from clients. However, when things come back to normal, one of the masters will be demoted to a slave. That node will lose all of its data, as it is a slave now and will replicate the data from the current master.
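If you want to observe this from the outside, each node reports its own view of the topology. A small sketch (node addresses are assumptions) that asks both nodes for their replication role:

    # Sketch: ask each node who it thinks it is. During a split-brain both report
    # "master"; after the partition heals one is demoted to "slave" and resyncs.
    # Node addresses are assumptions.
    import redis

    for host in ("10.0.0.1", "10.0.0.2"):
        node = redis.Redis(host=host, port=6379, decode_responses=True)
        info = node.info("replication")
        print(host, info["role"], "->", info.get("master_host", "-"))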
Check this for a good explanation of Redis architectural designs and split-brain:
https://web.archive.org/web/20170527053749/http://www.yzuzun.com/2015/04/some-architectural-design-concepts-for-redis/

It's certainly not a recommended approach.
The Redis Sentinel docs explain the tradeoffs pretty well. Hope this helps.
https://redis.io/topics/sentinel#example-sentinel-deployments

Related

Redis sentinel implementation over the internet

I'm trying to implement Redis Sentinel with two separate environments where the master and replica Redis will be running. The two environments, i.e. Primary and Backup, will communicate over the internet. Each environment will have 2 nodes, and each node will have one pod which contains the redis+sentinel processes. The following architecture represents the same.
Let's consider a scenario: if the master Redis (Node 1) goes down, then Sentinel will invoke the fail-over process and make one of the replicas the master Redis. In such a case, suppose the Node 3 replica becomes the master Redis. So far all works as expected. Now when Node 1 becomes available again, its Redis will start as master and, after the sentinels communicate with it, will act as a replica. Ideally, Redis should bind to 1.2.3.4:30001, but it is binding to the private IP of the node, i.e. 192.168.x.x.
My question is why this is happening. As per my understanding, Sentinel is responsible for the config rewrites and for asking the Node 1 Redis to become a replica, so why is Sentinel using the private IP rather than the public IP?
Hopefully I have properly conveyed my problem to you. If you need any further information, feel free to comment.
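One way to see which addresses Sentinel has actually recorded for the master and replicas (and will hand out to clients) is to ask a sentinel directly. A minimal sketch with redis-py, assuming a reachable sentinel and the master name used in your sentinel.conf:

    # Sketch: inspect the addresses Sentinel has recorded for the master and replicas.
    # The sentinel address/port and the master name "mymaster" are assumptions.
    from redis.sentinel import Sentinel

    sentinel = Sentinel([("1.2.3.4", 26379)], socket_timeout=0.5)

    print("master: ", sentinel.discover_master("mymaster"))   # (ip, port) Sentinel advertises
    print("replicas:", sentinel.discover_slaves("mymaster"))  # list of (ip, port) tuples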

Supporting Slave of Slave Replication with Redis Sentinel?

We have two datacenters, each with two Redis instances. Generally they are replicated as a chain.
NY1 (Master) --> NY2 (Slave) --> CO1 (Slave) --> CO2 (Slave)
NY is New York and CO is Colorado, our backup datacenter. In order to save bandwidth over the WAN, we don't want CO1 and CO2 connected to NY1. Rather, we want a chain configuration, where only one slave connects directly to the master, and the others are all "slaves of slaves".
Can this sort of replication layout be maintained using Sentinel? Or do all slaves have to be a slave of the master, and not a slave of a slave?
Currently this type of setup isn't possible with Sentinel because Sentinel rewrites the configurations of all monitored Redis systems.
For example, if you set up a system as you described and have sentinel monitoring all of the hosts, if the master goes down and forces a failover, each of the Redis hosts will be re-configured. One of the replicas (any of them) will become the new master, and the others will become replicas of the new master. When the old master comes back online, it will be re-configured to be a replica of the new master.
However, in general you can get Redis to work the way you want. You can have as many replicas of a replica as you need by setting the replicaof config value to a replica.
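Using the NY/CO naming from the question, the chain can be built by pointing each downstream node at the node above it, either with a replicaof/slaveof line in redis.conf or at runtime. A sketch with redis-py, with made-up addresses:

    # Sketch: build the chain NY1 -> NY2 -> CO1 -> CO2 by pointing each node at
    # the one above it (equivalent to "replicaof <ip> <port>" in redis.conf).
    # Host addresses are assumptions.
    import redis

    NY1, NY2, CO1, CO2 = "10.1.0.1", "10.1.0.2", "10.2.0.1", "10.2.0.2"

    redis.Redis(host=NY2, port=6379).slaveof(NY1, 6379)  # NY2 replicates the master NY1
    redis.Redis(host=CO1, port=6379).slaveof(NY2, 6379)  # CO1 replicates NY2 over the WAN
    redis.Redis(host=CO2, port=6379).slaveof(CO1, 6379)  # CO2 replicates CO1 locally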
Personally, I would still use Sentinel to monitor the master and the "prime" replicas (those that replicate from the master itself). This could result in one of the prime replicas becoming the new master, so I would enable the notification option. This tells Sentinel to call a script whenever a failover happens. In that script you can send an email, hit a Slack webhook, or whatever else you want to do with it. When I get the notification, I'd manually reconfigure the hosts back into the layout I want, but with the new master. It'd be a pain to do it this way, but I'd still get automatic failover of the master and prime replicas, so my apps would continue working.
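As a sketch of that notification hook: Sentinel invokes the configured notification-script with two arguments, the event type and the event description, so a small script along these lines could do the alerting. The webhook URL is a placeholder, and the sentinel.conf wiring shown in the comment is the usual "sentinel notification-script" directive:

    #!/usr/bin/env python3
    # Sketch of a Sentinel notification script, wired in sentinel.conf with e.g.
    # "sentinel notification-script mymaster /path/to/notify.py". Sentinel calls
    # it with two arguments: the event type and the event description.
    # SLACK_WEBHOOK_URL is a placeholder.
    import json
    import sys
    import urllib.request

    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/PLACEHOLDER"

    def main() -> None:
        event_type, event_description = sys.argv[1], sys.argv[2]
        payload = {"text": f"Redis Sentinel event {event_type}: {event_description}"}
        req = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5)

    if __name__ == "__main__":
        main()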

redis sentinel out of sync with servers in a cluster

We have a setup with a number of Redis (2.8) servers (let's say 4) and as many Redis sentinels. On startup of each machine, we set a pre-selected machine as master through the command line and all the rest as slaves of it, and the sentinels all monitor these machines. The clients first connect to the local sentinel, retrieve the master's IP address and then connect there.
This setup is trouble-free most of the time, but sometimes the sentinels go out of sync with the servers. If I name the machines A, B, C and D, the sentinels will think B is master while the Redis servers are all connected to A as the master. Bringing down the Redis server on B doesn't help either. I had to bring it down and manually run "SENTINEL failover" on A to fix the issue. My questions are:
1. What causes this to happen, and what's the easiest and quickest way to fix it?
2. What is the best configuration - is there something better than this?
The only time you should set a master is the first time. Once sentinel has taken over management of replication you should let it do it. This includes on restarts. Don't use the command line to set replication. Let sentinel and redis manage it. This is why you're getting issues - you've told sentinel it is authoritative, but you are telling the Redis servers to ignore sentinel.
Sentinel stores the status in its config file, so when it restarts it can resume the last configuration. So even on restart, let Sentinel do its job.
Also, if you have 4 servers (be specific, not "let's say"), you should be running a quorum of three on your monitor statement in Sentinel. With a quorum of two you can wind up with two masters.
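To check whether the sentinels and the servers agree, you can compare each sentinel's idea of the master with what each Redis server reports about itself. A diagnostic sketch; the host names and the master name "mymaster" are assumptions:

    # Diagnostic sketch: compare each sentinel's view of the master with each
    # Redis server's own replication role. Hosts and master name are assumptions.
    import redis

    SENTINELS = [("hostA", 26379), ("hostB", 26379), ("hostC", 26379), ("hostD", 26379)]
    SERVERS = [("hostA", 6379), ("hostB", 6379), ("hostC", 6379), ("hostD", 6379)]

    for host, port in SENTINELS:
        s = redis.Redis(host=host, port=port, decode_responses=True)
        ip, mport = s.execute_command("SENTINEL", "get-master-addr-by-name", "mymaster")
        print(f"sentinel on {host} says the master is {ip}:{mport}")

    for host, port in SERVERS:
        r = redis.Redis(host=host, port=port, decode_responses=True)
        print(f"server {host} thinks it is a {r.info('replication')['role']}")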

1 Redis sentinel vs multiple redis sentinels?

I've been reading about the use of Redis sentinel for failover. I plan to have 1 master + 1 slave, and turn the slave into a master if the master goes down for more than 1 minute. I know this is 100% possible with Sentinel.
However, I've seen documentation mention the use of multiple Sentinels. Let's assume this is not possible (i.e. budget or technical constraints). I assume I can have this configuration:
1 Sentinel in Server A
Master in Server B
Slave in Server C
What's the benefit of having multiple sentinels as opposed to 1? My app can only connect to 1 sentinel at a time, and even if there were 2 sentinels, my app can't rotate or switch between either of them if one goes down w/o some complicated logic in my app layer.
This configuration is possible only if the servers are in different locations. In that case there is no SPOF, because the chance that two servers fail at the same time is very low. If the Sentinel fails, you can quickly notice it and repair it or start a new one with Ansible.
This configuration worked for me for 2 years; HA and failover worked perfectly.

How to switch masters in this Redis Sentinel configuration?

I have the following Redis/Sentinel configuration:
Redis master A + N slaves
M sentinels watching A, named masterA
the client application queries the sentinels for masterA, then queries and modifies A
Now say A is outdated and I want to replace it with a new Redis master called B (with minimum downtime / data loss). At the end of the operation, I want this:
Redis master B + N slaves
the client application querying and modifying B
I could proceed as follows:
Have the sentinels start watching B, named masterB
Have each slave of A become a slave of B
From there, I am stuck because the client application still asks for masterA when talking to the sentinels. I have two questions:
Is there a way to switch master names, such that B becomes known as masterA to the sentinels, and therefore to the client application as well?
Is it better to modify the client application code to handle the switch from an old master to a new master?
One way of achieving your aim is to follow the age old solution of "adding another level of indirection".
A particularly effective method is to have your clients talk to a TCP proxy (e.g. HAProxy) and have it pass the traffic to the current master.
To keep the TCP proxy in sync you can do something similar to http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-check/, which makes HAProxy Sentinel-aware.
The major plus for this solution is that it makes your clients very simple - they only connect to one place and the traffic is always forwarded to the correct Redis instance.
One issue with this solution is that HAProxy's configuration DSL does not have the ability to deal with the period when a Redis server restarts and initially announces itself as a master before the sentinels make it a slave. This will lead to missed writes and inconsistent state, which, depending on your application, could be fine or maybe not.
To deal with this I have started to develop a "smarter" daemon to keep HAProxy in sync with the current master. My solution is at https://github.com/mdevilliers/redishappy.