When I check my slaves
redis-cli -p 26379 sentinel slaves mycluster
I get some IPs that have already been nuked.
Is there a way to take them out of Sentinel?
You can send the SENTINEL RESET <master-name> command to every sentinel to remove these stale slaves.
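For example, against the sentinel on port 26379 from the question above:
redis-cli -p 26379 sentinel reset mycluster
Note that SENTINEL RESET clears all state for the matching masters (including known slaves and sentinels), which Sentinel then re-discovers.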
Related
I am trying to get a list of all sentinels which are currently monitoring the redis master.
I know that if I have the address of one sentinel I can use sentinel sentinels mymaster, but if I don't have any of the sentinels' addresses, how can I get them?
There is no direct command to get the list of sentinels from a master/slave node. To get the list, subscribe to the "__sentinel__:hello" pub/sub channel on any node (master or slave doesn't matter) and wait for messages. The messages passing through that hello channel come from the sentinels monitoring that cluster; if you parse them, you get the sentinels' addresses. The messages have the form "sentinel_ip,sentinel_port,sentinel_runid,sentinel_current_epoch,master_name,master_ip,master_port,master_config_epoch" (e.g. 127.0.0.1,26380,07fabf3cbac43bcc955588b1023f95498b58f8f2,16,mymaster,127.0.0.1,6381,16). See https://redis.io/topics/sentinel#sentinels-and-slaves-auto-discovery for the details. If you want to learn more about how Sentinel works, take a look at https://github.com/antirez/redis/blob/unstable/src/server.c
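A quick way to watch that channel with redis-cli, using the master address from the example message above:
redis-cli -h 127.0.0.1 -p 6381 subscribe __sentinel__:hello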
To explore this instance further, you may want to try the following two commands:
SENTINEL slaves mymaster
SENTINEL sentinels mymaster
https://redis.io/topics/sentinel#asking-sentinel-about-the-state-of-a-master
After a Sentinel failover, my slave was promoted to the new master, but my old master couldn't be converted back to a slave. Sentinel's log below keeps repeating forever :)
26378:X 23 May 17:55:00.429 * +convert-to-slave slave 10.0.22.43:6379 10.0.22.43 6379 # master01 10.0.22.44 6379
I tested this with Redis v3.2.0 and v3.0.7 and got the same error. Am I missing something?
Hi, I have created a Redis cluster with Sentinel composed of 3 AWS instances. I configured Sentinel for an HA Redis setup and it works, but if I simulate a crash of the master (shutting down the master instance), the sentinels installed on the slaves cannot locate the master's sentinel and the election fails.
My sentinel configuration is:
sentinel monitor master ip-master 6379 2
sentinel down-after-milliseconds master 5000
sentinel failover-timeout master 10000
sentinel parallel-syncs master 1
The same file is used on all instances.
There are issues when running Sentinel on the same node as the master and attempting to trigger a failover. Try it without running Sentinel on the master. Ultimately this means not running Sentinel on the same nodes as the Redis instances.
In your case your dead-node simulation is showing why you should not run Sentinel on the same node as Redis: if the node dies, you lose one of your sentinels. In theory it should still work, but as you and others have seen, it isn't certain to work. I have some theories why, but I've not yet confirmed them.
In a sense Sentinel is partly a monitoring system. Running a monitoring solution on the same nodes it is monitoring is generally inadvisable, so you should be using off-node sentinels. As Sentinel is resource-efficient, you don't necessarily need dedicated machines or large VMs. Indeed, if you have a static set of application servers (where your client code runs), you could run Sentinel there, keeping in mind you want a minimum of 3 sentinels and a quorum of 50%+1.
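For example, with 3 off-node sentinels a quorum of 2 satisfies 50%+1 (the master name and address here are placeholders):
sentinel monitor mymaster 10.0.0.10 6379 2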
Recent Redis versions introduced the "protected-mode" option, which defaults to yes.
With protected-mode set to yes, Redis instances without a password set will not allow remote clients to execute commands.
This also affects the sentinels' master election.
Try setting "protected-mode no" in the sentinels' configuration; this will allow them to talk to each other.
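For example, in each sentinel.conf (a sketch, assuming no password is configured on the instances):
protected-mode no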
If you don't want to set protected-mode to no, you'd better set masterauth myredis in redis.conf and use sentinel auth-pass mymaster myredis in sentinel.conf.
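A minimal sketch of that password-based setup, reusing the example password myredis from the comment above; requirepass is added here on the assumption that the instances should actually be password-protected:
# redis.conf (master and slaves)
requirepass myredis
masterauth myredis
# sentinel.conf
sentinel auth-pass mymaster myredis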
I was testing Redis Sentinel's failover ability. It worked, and Sentinel added some lines to the conf files. It auto-discovered the other sentinels and slave replicas, but it added some weird IDs.
Can anyone tell me what those IDs represent? Since they come right after known-sentinel, I assume they are the IDs of those sentinels, but I can't be sure.
# Generated by CONFIG REWRITE
sentinel known-slave redis_master 127.0.0.1 6379
sentinel known-slave redis_master 127.0.0.1 6381
sentinel known-sentinel redis_master 127.0.0.1 26380 26f81b692201f11f0f16747b007da9d4f079d9d3 # this
sentinel known-sentinel redis_master 127.0.0.1 26381 0b613c6146bbf261f08c1b13f1d1b2dbc2f99413 # and this?
It's the run_id of the sentinel. Remember, a sentinel is a special Redis instance. Log into the sentinel and use "info server" to see its information, which includes the run_id, e.g.
redis-cli -h sentinel_host -p sentinel_port
info server
If you have multiple sentinels, you can use
sentinel sentinels mymaster (or redis_master in your situation)
to list all the other sentinels' information.
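A trimmed example of the relevant output, using the first sentinel from the config rewrite above:
redis-cli -h 127.0.0.1 -p 26380 info server
...
run_id:26f81b692201f11f0f16747b007da9d4f079d9d3
...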
I am working with two redis instances monitored by one sentinel.
When the master goes down and there is a "+sdown", I run a notification-script where I promote the slave to master using the following command on its redis-cli:
SLAVEOF NO ONE
It works fine.
My question is: it takes around 10 seconds for the slave to become master and for the application to continue working again.
How can I reduce this time?
Below is the sentinel config:
sentinel monitor mymaster 127.0.0.1 6379 1
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 900000
sentinel can-failover mymaster yes
sentinel parallel-syncs mymaster 1
sentinel notification-script mymaster /etc/init.d/config/script.sh
This is not a direct answer to your question, but a description of a different setup that avoids the need for a quick response (in some scenarios).
When we model a use case in UML, we never put Redis Sentinel in the default flow. The sentinel is our guard for situations where unknown errors occur: the exceptional flow.
If we know beforehand that the client needs to connect to a different Redis instance, we simply instruct the client to do so using Redis pub/sub (combined with low-resolution polling, since pub/sub traffic is not guaranteed delivery), as in the sketch below.
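A minimal sketch of that redirection channel with redis-cli; the channel name and addresses are hypothetical, not from an actual setup:
# the failover handler announces the new master address
redis-cli -h 10.0.0.5 -p 6379 publish app:redis-switch "10.0.0.6 6379"
# each client listens on the same channel (in addition to its low-resolution polling)
redis-cli -h 10.0.0.5 -p 6379 subscribe app:redis-switch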
Kind regards, TW
Try reducing sentinel down-after-milliseconds mymaster and failover-timeout mymaster.
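For example, lowering the values from the question's config (illustrative numbers only; tune them to your environment):
sentinel down-after-milliseconds mymaster 2000
sentinel failover-timeout mymaster 60000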