Redis cluster: 3 masters not working without any slaves

I have a Redis cluster with 3 master nodes, each having 1 slave.
When all 3 master nodes crash, the slaves take over as masters. However, in this state, when only the promoted slaves are present, the (Jedis) client stops working; essentially, set(key, value) fails.
I see the following error log on the console:
Error condition on socket for SYNC: Connection refused
Connecting to MASTER 127.0.0.1:7002
MASTER <-> REPLICA sync started
Error condition on socket for SYNC: Connection refused
...
Is it mandatory that each master have 1 slave for read/write operations to work? Shouldn't at least reads work?
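For reference, a minimal reproduction of the failing write, sketched with redis-py's cluster client (redis-py 4.x) instead of Jedis; the node address is an assumption taken from the log above:
from redis.cluster import RedisCluster
# Connect through any reachable node; the client discovers the rest of the
# cluster topology from it (127.0.0.1:7002 is taken from the log above).
rc = RedisCluster(host="127.0.0.1", port=7002)
# The write that reportedly fails once only the promoted replicas remain.
rc.set("key", "value")
print(rc.get("key"))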

Redis Sentinel Readonly Slave with no Auth

Setup: I have three machines with co-located Sentinel/Redis instances, exactly as in Example 2 of the Sentinel docs, configured with sentinel auth-pass, masterauth, and requirepass. Failover and read/write all work.
Requirement: I want to add a Redis slave (a 4th Redis instance) that does not require auth but replicates from the main cluster.
Issue: I set it up exactly as in the uncommon case where you need a slave that is accessible without authentication, i.e., Master >> Slave with replica-priority 0.
When using the redis-py client, I ask Sentinel for a slave to read from; when this no-auth slave is chosen, I receive the error Client sent AUTH, but no password is set and the connection is aborted. I believe the client is just passing back the error from the server.
I've tried:
Master >> Slave (replica-priority 0) >> Slave (replica-priority 0). All clients, servers, and syncing work (because Sentinel doesn't know about this read-only slave), except that I get +fix-slave-config entries every 10s in the Sentinel log. Not sure whether this is concerning?
Setup as defined in "the uncommon case..." with Master >> Slave (replica-priority 0), but I get the client error and am unable to proceed with the connection/request.
Questions:
Is Master >> Slave >> Slave with Sentinel +fix-slave-config entries OK?
Is Client sent AUTH, but no password is set a bug or a feature?
I would definitely prefer to have all Redis slaves known to Sentinel for HA, but mixing auth/no-auth doesn't work.
Redis 5.0.3, redis-py 3.2.0
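For context, a minimal sketch of the read path described above, using redis-py's Sentinel API (the address, service name, and password are placeholders). slave_for() applies the same password to whichever replica Sentinel hands back, so a replica with no requirepass set rejects the AUTH command with exactly the quoted error:
from redis.sentinel import Sentinel
# One of the three co-located Sentinels (placeholder address).
sentinel = Sentinel([("10.0.0.1", 26379)], socket_timeout=0.5)
# slave_for() asks Sentinel for a replica of "mymaster" and connects to it
# with the given password; if the no-auth replica is chosen, it rejects
# AUTH with: Client sent AUTH, but no password is set
replica = sentinel.slave_for("mymaster", password="secret", socket_timeout=0.5)
print(replica.get("somekey"))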

How to check the messages published from Redis Sentinel to the Redis master?

Question background:
I deployed Redis in a Kubernetes (k8s) cluster and use Redis Sentinel to implement HA for it. My Redis setup is structured as below:
one master
one slave
three sentinels (serving this specific Redis deployment)
When I log in to the container of one of the sentinels, I execute the command:
sentinel sentinels mymaster
Luckily, I get the expected output: the info of the other two sentinels. After a period of time, I execute sentinel sentinels mymaster again and find an additional sentinel, but I cannot locate this instance by its IP address or run ID.
I know that Sentinel discovers other sentinels, the master, and the slaves by subscribing to the __sentinel__:hello channel on the Redis master.
Question:
How do I check the messages published from Redis Sentinel to the Redis master? I have enabled logging on the master and set the log level to debug.
You can see Sentinel's activity (when it discovers a sentinel or a replica, fails over to a new master, etc.) in the sentinel log file, not the master's. If a sentinel is running on a host, its log will be in the same directory as the master or replica log file. For me on CentOS it's /var/log/redis/sentinel.log.
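If you want to see the raw hello messages themselves rather than Sentinel's log, you can subscribe to the channel directly on the master. A minimal redis-py sketch (the address is a placeholder); __sentinel__:hello is the documented channel each Sentinel publishes to every couple of seconds:
import redis
# Connect to the Redis master (placeholder address).
r = redis.Redis(host="127.0.0.1", port=6379)
# Each Sentinel periodically PUBLISHes its own address, run ID, and its view
# of the master to this channel; subscribing shows the raw payloads.
p = r.pubsub()
p.subscribe("__sentinel__:hello")
for message in p.listen():
    if message["type"] == "message":
        print(message["data"])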

Redis - Make failover master return to slave state and have the old master take up its master role

I have a Redis v4.0.7 cluster consisting of 4 servers. These 4 servers are all Ubuntu 17.10 64-bit virtual machines (in VirtualBox) running on my Windows PC. I have shifted all the slaves over by one server, and I will use M1 for master 1 and S1 for slave 1 in the following explanation of my "issue".
192.168.56.101 (with a master on port 7000 (M1) and slave on port 7001 (S4))
192.168.56.102 (with a master on port 7000 (M2) and slave on port 7001 (S1))
192.168.56.103 (with a master on port 7000 (M3) and slave on port 7001 (S2))
192.168.56.104 (with a master on port 7000 (M4) and slave on port 7001 (S3))
I am fiddling a little with the setup to check whether the failover works.
I have therefore tried shutting down M2, which means that S2 takes over and becomes the master. This works as intended. However, if I start the (old) M2 again, it is now a slave and remains one until I shut down S2, at which point it takes over the master role again.
I was wondering whether there is a command I can issue to the slave that has taken over the master role to make it return to its (old) slave role and hand the master role back to the (old) master, in this case M2.
I have tried googling the "issue", but to no avail.
You can do this by running:
redis-cli -h M2_IP_ADDRESS -p M2_PORT CLUSTER FAILOVER
The above command performs a manual failover. Note that CLUSTER FAILOVER must be sent to the node that should become the master, i.e. the old M2, which is currently a slave. M2 will become the master again and S2 will go back to being its slave.
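The same manual failover can also be issued from a client; a minimal redis-py sketch, assuming M2 is the 192.168.56.102:7000 instance from the list above:
import redis
# CLUSTER FAILOVER must be sent to the node that should become the master,
# i.e. the old M2, which is currently running as a slave.
old_master = redis.Redis(host="192.168.56.102", port=7000)
print(old_master.execute_command("CLUSTER", "FAILOVER"))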

Reconnect a Shut-Down Redis Instance Back to the Cluster

Given a Redis cluster with six nodes (3M/3S) on ports 7000-7005, with master nodes on ports 7000-7002 and slave nodes on the rest: master node 7000 is shut down, so node 7003 becomes the new master:
$ redis-cli -p 7003 cluster nodes
2a23385e94f8a27e54ac3b89ed3cabe394826111 127.0.0.1:7004 slave 1108ef4cf01ace085b6d0f8fd5ce5021db86bdc7 0 1452648964358 5 connected
5799de96ff71e9e49fd58691ce4b42c07d2a0ede 127.0.0.1:7000 master,fail - 1452648178668 1452648177319 1 disconnected
dad18a1628ded44369c924786f3c920fc83b59c6 127.0.0.1:7002 master - 0 1452648964881 3 connected 10923-16383
dfcb7b6cd920c074cafee643d2c631b3c81402a5 127.0.0.1:7003 myself,master - 0 0 7 connected 0-5460
1108ef4cf01ace085b6d0f8fd5ce5021db86bdc7 127.0.0.1:7001 master - 0 1452648965403 2 connected 5461-10922
bf60041a282929cf94a4c9eaa203a381ff6ffc33 127.0.0.1:7005 slave dad18a1628ded44369c924786f3c920fc83b59c6 0 1452648965926 6 connected
How does one go about [automatically] reconnecting/restarting node 7000 as a slave instance of 7003?
"Redis Cluster: Re-adding a failed over node" has a detailed explanation of what happens.
Basically, the node will become a slave of the slave (which is now a master) that replaced it during the failover.
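A quick way to verify this after restarting the 7000 instance with its original configuration, sketched with redis-py (host and port taken from the question):
import redis
# Once restarted, the old 7000 master should rejoin on its own and start
# replicating from the node that replaced it (7003).
r = redis.Redis(host="127.0.0.1", port=7000)
print(r.execute_command("ROLE"))              # first element should be b'slave'
print(r.execute_command("CLUSTER", "NODES"))  # 7000 now listed as a slave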
Have you seen the Redis Sentinel documentation?
Redis Sentinel provides high availability for Redis. In practical terms
this means that using Sentinel you can create a Redis deployment that
resists certain kinds of failures without human intervention.

Does Redis 3 Cluster support Unix socket connections?

I'm running redis-3.0.4 in cluster mode. The OS is CentOS 6.6 x86_64.
10.0.0.1:6379 master, 10.0.0.1:6380 slave
10.0.0.2:6379 master, 10.0.0.2:6380 slave
10.0.0.3:6379 master, 10.0.0.3:6380 slave
All six processes are listening on both TCP and a Unix socket.
When I connect to the cluster via TCP, it works; all operations are okay.
But when I save keys into the cluster via the Unix socket, it throws:
exception 'myfilename' with message 'Redis error: MOVED 6118 10.0.0.2:6379
I tested it on the command line via:
redis-cli -c -s /tmp/redis-6379.sock
redis /tmp/redis-6379.sock> set hello world
it throws out a lot of:
...
-> Redirected to slot [3300] located at 10.0.0.3:6379
...
(it doesn't stop until I press Ctrl+C)
How can I use a Unix socket to connect to the Redis 3 cluster?
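For reference, the same attempt from a client, as a minimal redis-py sketch using the socket path from the question. A plain (non-cluster) client connected over the Unix socket surfaces the MOVED reply as an error instead of following the redirect:
import redis
# Connecting to one node over its Unix socket only works for keys that hash
# to that node; any other key gets a MOVED reply pointing at a TCP address,
# like the redirects in the redis-cli session above.
r = redis.Redis(unix_socket_path="/tmp/redis-6379.sock")
r.set("hello", "world")  # may raise redis.exceptions.ResponseError: MOVED ...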