I've been reading about the use of Redis sentinel for failover. I plan to have 1 master + 1 slave, and turn the slave into a master if the master goes down for more than 1 minute. I know this is 100% possible with Sentinel.
However, I've seen documentation mention the use of multiple Sentinels. Let's assume this is not possible (e.g. due to budget or technical constraints). I assume I can have this configuration:
1 Sentinel in Server A
Master in Server B
Slave in Server C
What's the benefit of having multiple Sentinels as opposed to 1? My app can only connect to 1 Sentinel at a time, and even if there were 2 Sentinels, my app couldn't rotate or switch between them if one went down without some complicated logic in my app layer.
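For what it's worth, most Sentinel-aware client libraries accept a list of Sentinel endpoints and try each one until they get an answer, so the rotation does not require custom logic in the app layer. A minimal sketch with redis-py, assuming hypothetical hostnames and a master named "mymaster":

    # The Sentinel helper takes a *list* of sentinel endpoints and tries each
    # one until it gets an answer, so the "rotation" logic lives in the
    # library, not in your app. Hostnames and "mymaster" are hypothetical.
    from redis.sentinel import Sentinel

    sentinel = Sentinel(
        [("server-a", 26379), ("server-d", 26379)],  # every sentinel you run
        socket_timeout=0.5,
    )

    # Ask whichever sentinel responds first for the current master's address.
    master_host, master_port = sentinel.discover_master("mymaster")
    print(master_host, master_port)

    # Or get a client that always routes commands to the current master.
    master = sentinel.master_for("mymaster", socket_timeout=0.5)
    master.set("greeting", "hello")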
This configuration works only if the servers are in different locations. In that case there is no SPOF, because the chances of two servers failing at the same time are very low. If the Sentinel fails, you can quickly notice it and repair it or start a new one with Ansible.
This configuration worked for me for 2 years; high availability and failover worked perfectly.
Related
We have 2 app/web servers running an HA application, and we need to set up Redis with high availability/replication to support our app.
We are considering the minimum Sentinel setup requirement of 3 nodes.
We are planning to prepare the first app server with the Redis master and 1 Sentinel; the second app server will have the Redis slave and 1 Sentinel; and we plan to add one additional server to hold the third Sentinel node, to achieve a Sentinel setup with a quorum of 2.
Is this a valid setup? What could be the risks?
Thanks.
Well, it looks like it's not recommended to put the Redis nodes on the app servers (whereas it is recommended to put the Sentinel nodes there).
We ended up with a setup based on KeyDB (a fork of Redis), which claims to be faster and to support high availability/replication (and much more), creating two nodes within the app servers.
Of course, we had to modify the client side a little to support some advanced Lua scripts (there was some binary serialized data not getting replicated to the other node).
But after some effort, it worked as expected.
Hope this helps ...
Hello stack community,
I have a question about Redis Sentinel for a specific problem case. I use AWS with Multi-AZ to create a Sensu cluster.
On eu-central-1a I have Sensu + Redis (master), RabbitMQ + a Sentinel, and 2 other Sentinels. Same on eu-central-1b, but the Redis there is my slave in that AZ.
What happens if there is a problem and eu-central-1a cannot communicate with eu-central-1b? I think the Sentinels on eu-central-1b should promote my Redis slave to master, because they cannot contact my Redis master. So I would have 2 Redis masters running at the same time in 2 different AZs.
But when the link between the AZs is restored, I will still have 2 masters, with 2 different data sets. What will happen in this case? Will one master become a slave and the data be replicated without loss? Do we need to restart one master so that it becomes a slave?
Sentinel detects changes to the master. For example:
If the master goes down and is unreachable, a slave is elected as the new master. This is based on the quorum, where multiple Sentinels agree that the master has gone down. The failover then occurs.
Once Sentinel detects the old master coming back online, it is reconfigured as a slave, I believe, and the new master continues in its role. You will lose some data in the switchover from the old master to the new master; that is inevitable.
If you lose the connection then yes, Sentinel won't work correctly, as it relies on multiple Sentinels agreeing that the Redis master is down. You shouldn't use Sentinel in a 2-Sentinel system.
A basic solution would be to put an extra Sentinel on another server, perhaps a client/application server that isn't running Redis/Sentinel. This way you can make use of the quorum, with the Sentinels agreeing that the master is down.
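As a sanity check after adding the extra Sentinel, you can ask any one Sentinel how it sees the master and how many peers it knows about. A rough sketch with redis-py; the hostname, port and master name are assumptions, and the exact dictionary keys may vary between redis-py versions.

    # Ask one sentinel how it sees the master and how many peer sentinels it
    # knows about, to confirm the quorum is actually achievable.
    # "app-server", port 26379 and the name "mymaster" are assumptions.
    import redis

    s = redis.Redis(host="app-server", port=26379, decode_responses=True)
    state = s.sentinel_master("mymaster")

    print("flags:", state.get("flags"))                       # e.g. 'master'
    print("quorum:", state.get("quorum"))                      # votes needed to declare the master down
    print("other sentinels:", state.get("num-other-sentinels"))
    # If num-other-sentinels + 1 < quorum, a failover can never be agreed on.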
We have a setup with a number of Redis (2.8) servers (let's say 4) and as many Redis Sentinels. On startup of each machine, we set a pre-selected machine as the master through the command line and all the rest as slaves of it, and the Sentinels all monitor these machines. The clients first connect to the local Sentinel, retrieve the master's IP address, and then connect there.
This setup is trouble-free most of the time, but sometimes the Sentinels go out of sync with the servers. If I name the machines A, B, C and D, the Sentinels will think B is the master while the Redis servers are all connected to A as the master. Bringing down the Redis server on B doesn't help either. I had to bring it down and manually run SENTINEL FAILOVER on A to fix the issue. My questions are:
1. What causes this to happen, and what's the easiest and quickest way to fix it?
2. What is the best configuration? Is there something better than this?
The only time you should set a master is the first time. Once Sentinel has taken over management of replication, you should let it do so. This includes restarts. Don't use the command line to set replication; let Sentinel and Redis manage it. This is why you're getting issues: you've told Sentinel it is authoritative, but you are telling the Redis servers to ignore it.
Sentinel stores its state in its config file, so when it restarts it can resume the last configuration. So even on restart, let Sentinel do its job.
Also, if you have 4 servers (be specific, not "let's say"), you should be running a quorum of three in your monitor statement in Sentinel. With a quorum of two you can wind up with two masters.
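When the Sentinels and the servers disagree like this, a quick way to see the disagreement is to compare what a Sentinel reports as the master with what each Redis server says about its own role. A rough diagnostic sketch with redis-py; the hostnames, ports and the master name "mymaster" are assumptions.

    # Compare the sentinel's idea of the master with what each Redis server
    # reports about itself via INFO replication.
    # Hostnames, ports and the master name "mymaster" are hypothetical.
    import redis
    from redis.sentinel import Sentinel

    REDIS_NODES = ["host-a", "host-b", "host-c", "host-d"]

    sentinel = Sentinel([("host-a", 26379)], socket_timeout=0.5)
    print("sentinel thinks the master is:", sentinel.discover_master("mymaster"))

    for host in REDIS_NODES:
        r = redis.Redis(host=host, port=6379, socket_timeout=0.5)
        try:
            info = r.info("replication")
        except redis.ConnectionError:
            print(host, "unreachable")
            continue
        # Slaves also report which master they are replicating from.
        print(host, info["role"], info.get("master_host"), info.get("master_port"))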
I've been doing some reading on how to use Redis Sentinel, and I know it's possible to have 2 or more sentinels, and load balance between them when calling from the client side.
Is it good practice to have these 2 sentinels in the same server as my master + slave? In other words, have 1 sentinel in the same physical server as master, and another in same physical server as slave?
It seems to me that if the master server dies, the Sentinel on the slave will simply promote the slave to master. If the slave server dies, it doesn't matter because the master is still up.
Am I missing something? What are the downsides?
I'd rather have the Sentinels on the same physical servers as the master/slave to reduce latency.
First, Sentinel is not a load balancer or a proxy for Redis.
Second, not all failures are the death of a host. Sometimes a server hangs briefly, sometimes a network cable gets unplugged, etc. Because of this, it is not good practice to run Sentinel on the same hosts as your Redis instances. If you're using Sentinel to manage failover, anything less than three Sentinels running on nodes other than your Redis master and slave(s) is asking for trouble.
Sentinel uses a quorum mechanism to vote on a failover and on which slave to promote. With fewer than two Sentinels you run the risk of split-brain, where two or more Redis servers think they are the master.
Imagine the scenario where you run two servers and run sentinel on each. If you lose one you lose reliable failover capability.
Clients only connect to Sentinel to learn the current master connection information. Anytime the client loses connectivity they repeat this process. Sentinel is not a proxy for Redis - commands for Redis go directly to Redis.
The only reliable reason to run Sentinel with less than three sentinels is for service discovery, which means not using it for failover management.
Consider the two host scenario:
Host A: redis master + sentinel 1 (Quorum 1)
Host B: redis slave + sentinel 2 (Quorum 1)
If Host B temporarily loses network connectivity to Host A in this scenario, Host B will promote itself to master. Now you have:
Host A: redis master + sentinel 1 (Quorum 1)
Host B: redis master + sentinel 2 (Quorum 1)
Any clients which connect to Sentinel 2 will be told Host B is the master, whereas clients which connect to Sentinel 1 will be told Host A is the master (which, if you have your Sentinels behind a load balancer, means half of your clients).
Thus, the minimum you need to run in order to obtain acceptable, reliable failover management is:
Host A: Redis master
Host B: Redis Slave
Host C: Sentinel 1
Host D: Sentinel 2
Host E: Sentinel 3
Your clients connect to the sentinels and obtain the current master for the Redis instance (by name), then connect to it. If the master dies the connection should be dropped by the client whereupon the client will/should connect to Sentinel again and get the new information.
How well each client library handles this is dependent on the library.
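In other words, the usual client pattern is: ask Sentinel for the master, use it until the connection drops, then ask again. A minimal sketch of that loop with redis-py (the Sentinel endpoints and the service name "mymaster" are assumptions); redis-py's Sentinel-backed connection pool already re-queries the Sentinels on reconnect, so the explicit retry below just makes that behaviour visible.

    # "Ask sentinel, connect, re-ask on failure" pattern.
    # Sentinel endpoints and the service name "mymaster" are hypothetical.
    import redis
    from redis.sentinel import Sentinel

    sentinel = Sentinel([("host-c", 26379), ("host-d", 26379), ("host-e", 26379)],
                        socket_timeout=0.5)
    master = sentinel.master_for("mymaster", socket_timeout=0.5)

    def set_with_retry(key, value, attempts=3):
        for _ in range(attempts):
            try:
                return master.set(key, value)
            except redis.ConnectionError:
                # The sentinel-backed connection pool re-queries the sentinels
                # for the current master address on the next attempt.
                continue
        raise RuntimeError("could not reach the current master")

    set_with_retry("counter", 1)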
Ideally, Hosts C, D, and E are either the same hosts you connect to Redis from (i.e. the client hosts) or represent a good sampling of them. The main thrust here is to ensure you are checking from wherever you need to connect to Redis from. Failing that, place them in the same DC/rack/region as the clients.
If you want your clients to talk to a load balancer, try to have your Sentinels on those LB nodes if possible, adding additional non-LB hosts as needed to obtain an odd number of Sentinels > 2. An exception to this is if your client hosts are dynamic, in that their number is inconsistent (they scale up for traffic and down for slow periods, for example). In this scenario you pretty much must run your Sentinels on non-client, non-Redis-server hosts.
Note that if you do this you will then need to write a daemon which monitors the Sentinel Pub/Sub channel for the master switch event in order to update the LB, which you must configure to only talk to the current master (never try to talk to both). It is more work, but it does make the use of Sentinel transparent to the client, which only knows to talk to the LB IP/port.
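Such a daemon essentially just subscribes to the Sentinel Pub/Sub feed and reacts to the +switch-master event. A bare-bones sketch with redis-py; update_load_balancer() is a hypothetical placeholder for whatever reconfigures your LB, and the Sentinel address is assumed.

    # Watch one sentinel's Pub/Sub feed for the +switch-master event and push
    # the new master address to the load balancer.
    # update_load_balancer() is a placeholder; host/port are hypothetical.
    import redis

    def update_load_balancer(new_host, new_port):
        print("reconfigure LB to point at", new_host, new_port)  # placeholder

    sentinel_conn = redis.Redis(host="sentinel-1", port=26379, decode_responses=True)
    pubsub = sentinel_conn.pubsub()
    pubsub.subscribe("+switch-master")

    for message in pubsub.listen():
        if message["type"] != "message":
            continue
        # Payload format: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"
        name, old_ip, old_port, new_ip, new_port = message["data"].split()
        update_load_balancer(new_ip, int(new_port))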
It all depends on the level of disaster recovery you want to achieve. Let's assume you have the following components, independently of where they are hosted:
2 Sentinels
1 Master
1+ Slaves
One host scenario
Host fails: you lose everything; a bad replication scenario for most use cases.
Two host scenario
Host 1:
(Current elected) Master
1 Sentinel
Host 2:
Slave
1 Sentinel
It is true that in this scenario the hosts can fail one at a time, which gives you some level of security. Just make sure that by "different server" you mean physically different hosts. If these are just VMs on the same host, you do not get the same level of DR (disaster recovery).
Regarding your question:
I rather have the sentinels be in the same server as the master/slave to reduce latency.
Notice that the Sentinels keep track of the current master and slaves, but the Redis clients do not connect to the master via the Sentinels; they only find out where the current master is from the Sentinels. So in terms of reads and writes you're not looking at any considerable latency gains.
Configuration provider. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.
(see: http://redis.io/topics/sentinel)
The way I see it, the only gains you have in terms of latency are for the heartbeats exchanged between the master/slaves and the Sentinels. As long as you are not spreading your servers across the whole world, that should be OK.
It all depends on the use cases, but it seems you would do best to keep things as separate as possible if all other things are equal (costs, distance to clients, etc).
You can have Sentinels on the same machines as the master/slave, but the number of Sentinels must be odd (3/5/7). There should be at least three Sentinels, and it is a must to have a dedicated machine for at least one of them.
If you have only two nodes, then in case of a split-brain (network disruption) situation, the slave will be promoted to master. Both masters will now accept data from clients. However, when things come back to normal, one of the masters will be demoted to a slave. That node will lose all of its data, because as a slave it will replicate the data from the current master.
Check this for a good explanation of Redis architectural designs and split-brain:
https://web.archive.org/web/20170527053749/http://www.yzuzun.com/2015/04/some-architectural-design-concepts-for-redis/
It's certainly not a recommended approach.
The Redis Sentinel docs explain the tradeoffs pretty well. Hope this helps.
https://redis.io/topics/sentinel#example-sentinel-deployments
I have the following Redis/Sentinel configuration:
Redis master A + N slaves
M sentinels watching A, named masterA
the client application queries the sentinels for masterA, then queries and modifies A
Now say A is outdated and I want to replace it with a new Redis master called B (with minimum downtime / data loss). At the end of the operation, I want this:
Redis master B + N slaves
the client application querying and modifying B
I could proceed as follows (a rough sketch of these two steps in code is shown below):
Have the sentinels start watching B, named masterB
Have each slave of A become a slave of B
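For reference, the two steps above might be issued like this with redis-py; the hostnames, ports, quorum value and slave list are hypothetical, and this only covers re-pointing the Sentinels and slaves, not the client-side switch asked about below.

    # Hypothetical sketch of the two preparatory steps: tell each sentinel to
    # monitor B under a new name, and re-point each slave of A at B.
    # Hostnames, ports, the quorum value and the name "masterB" are assumptions.
    import redis

    SENTINELS = [("sent-1", 26379), ("sent-2", 26379), ("sent-3", 26379)]
    B_HOST, B_PORT = "redis-b", 6379
    SLAVES_OF_A = [("slave-1", 6379), ("slave-2", 6379)]

    # Step 1: each sentinel starts watching B under the name "masterB".
    for host, port in SENTINELS:
        s = redis.Redis(host=host, port=port)
        s.sentinel_monitor("masterB", B_HOST, B_PORT, 2)  # quorum of 2 assumed

    # Step 2: make each former slave of A replicate from B instead.
    for host, port in SLAVES_OF_A:
        r = redis.Redis(host=host, port=port)
        r.slaveof(B_HOST, B_PORT)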
From there, I am stuck because the client application still asks for masterA when talking to the sentinels. I have two questions:
Is there a way to switch masters names, such that B becomes known as masterA for the sentinels, and therefore for the client application as well?
Is it better to modify the client application code to handle the switch from an old master to a new master?
One way of achieving your aim is to follow the age old solution of "adding another level of indirection".
A particularly effective method is to have your clients talk to a TCP proxy (e.g. HAProxy) and have it pass the traffic to the current master.
To keep the TCP proxy in sync, you can do something similar to http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-check/, which makes HAProxy Sentinel-aware.
The major plus for this solution is that it makes your clients very simple - they only connect to one place and the traffic is always forwarded to the correct Redis instance.
One issue with this solution is that HAProxy's configuration DSL does not have the ability to deal with the period when a Redis server restarts and initially announces itself as a master before the Sentinels make it a slave. This will lead to missed writes and inconsistent state, which, depending on your application, could be fine or maybe not.
To deal with this I have started to develop a "smarter" daemon to keep HAProxy in sync with the current master. My solution is at https://github.com/mdevilliers/redishappy.