Is it possible to create a Redis cluster with 2 nodes, one acting as a master and the other as a slave?
I get the following error if I try with 2 nodes (one as master and the other as slave):
>>> Creating cluster
Connecting to node 127.0.0.1:6379: OK
Connecting to node 192.168.40.159:6379: OK
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 2 nodes and 1 replicas per node.
*** At least 6 nodes are required.
Yes. The requirement of at least 3 master nodes is enforced by the ruby script (redis-trib.rb), but it is not a hard limit of the cluster itself.
The first thing you need to do is send a single CLUSTER ADDSLOTS command that claims all 16384 slots, like
cluster addslots 0 1 2 3 ... 16383
to the cluster. Since there are far too many arguments to type manually in redis-cli, I suggest writing a program to do it: open a TCP socket to the Redis node, convert the command above into a Redis protocol string, and write it to the socket.
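A minimal Python sketch of such a program (the 127.0.0.1:6379 address is an assumption; point it at your cluster-enabled node):
import socket

# Build a RESP (Redis protocol) array for: CLUSTER ADDSLOTS 0 1 2 ... 16383
args = ["CLUSTER", "ADDSLOTS"] + [str(slot) for slot in range(16384)]
parts = ["*%d\r\n" % len(args)]
for arg in args:
    parts.append("$%d\r\n%s\r\n" % (len(arg), arg))
payload = "".join(parts).encode()

# Send it to the node and read the reply; a fresh node should answer +OK
sock = socket.create_connection(("127.0.0.1", 6379))
sock.sendall(payload)
print(sock.recv(1024))
sock.close()
If you already have the redis-py client installed, r.execute_command("CLUSTER", "ADDSLOTS", *range(16384)) achieves the same thing.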
The single-node cluster will be online a few seconds after you send the command. Then connect to the other node with redis-cli and type the following commands to make it a slave:
cluster meet MASTER_HOST MASTER_PORT
cluster replicate MASTER_ID
where MASTER_HOST:MASTER_PORT is the address of the previous node, and MASTER_ID is the ID of that node, which you can retrieve via a cluster nodes command.
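For example, running the following against the master (host and port as above; a sketch) prints one line per node, and the first field of the line flagged myself is the master's ID:
redis-cli -h MASTER_HOST -p MASTER_PORT cluster nodes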
For convenience I've written a Python tool for this kind of Redis cluster management; you can install it with
pip install redis-trib
For more details, please see https://github.com/HunanTV/redis-trib.py/
Redis-Cluster is not a fit for your use case.
For your use case, you need to configure one server (the master), then configure a second server and add the slaveof directive pointing it at the master. How you handle failover depends on your scenario, but I would recommend the use of redis-sentinel.
For a more detailed walkthrough, see the Redis Replication page.
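As a minimal sketch, the second server's redis.conf only needs a line like the following (MASTER_IP is a placeholder for your master's address, 6379 for its port):
# redis.conf on the second (slave) server
slaveof MASTER_IP 6379
Alternatively, you can issue SLAVEOF MASTER_IP 6379 from redis-cli at runtime to the same effect.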
Nope, it is not possible to create a Redis cluster with 1 master node; as suggested here, setting up a Redis cluster requires at least 3 master nodes.
The 'redis-trib.rb create' command requires at least 3 nodes.
This way you can make a 1-master, 1-slave Redis cluster.
Using redis-trib
$ redis-server 5001/redis.conf
$ redis-trib.rb fix 127.0.0.1:5001
so many messages ...
$ redis-server 5002/redis.conf
$ redis-trib.rb add-node --slave 127.0.0.1:5002 127.0.0.1:5001
>>> Adding node 127.0.0.1:5002 to cluster 127.0.0.1:5001
Connecting to node 127.0.0.1:5001: OK
>>> Performing Cluster Check (using node 127.0.0.1:5001)
M: 015bec64d631990b83ad63736d906cda257a762c 127.0.0.1:5001
   slots:0-16383 (16384 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 127.0.0.1:5001
Connecting to node 127.0.0.1:5002: OK
>>> Send CLUSTER MEET to node 127.0.0.1:5002 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 127.0.0.1:5001.
[OK] New node added correctly.
Using cluster commands
$ redis-server 5001/redis.conf
Using Ruby: addslots
$ echo '(0..16383).each{|x| puts "cluster addslots "+x.to_s}' | ruby | redis-cli -c -p 5001 > /dev/null
$ redis-server 5002/redis.conf
$ redis-cli -c -p 5002
127.0.0.1:5002> cluster meet 127.0.0.1 5001
OK
127.0.0.1:5002> cluster replicate 7c38d2e5e76fc4857fe238e34b4096fc9c9f12a5
OK
(the argument is the node ID of the 5001 instance, taken from cluster nodes)
Related
I have a Redis Sentinel configuration with one master, two slaves and 3 sentinels running. I noticed that at some point the sentinels may switch the master, electing one of the slaves as master. This is causing problems for an application which connects to the master node as a standalone client (I'm working on changing the code to use sentinels). I wanted to know if it is possible to switch the master by connecting to the sentinel, i.e. through redis-cli.
Can somebody let me know if there is a command that I can use to switch the master IP?
The client applications should use a client library that supports Sentinel, so that when a Redis master goes down and the sentinels select a new master, the clients follow along. I'm not sure how beneficial it is to have a Sentinel setup if your client applications are not taking advantage of it. A client application that supports Sentinel will query Sentinel for the master IP and should be somewhat tolerant of faults occurring with the master connection. You can trigger a manual failover like the other answer states:
redis-cli -h {sentinel-ip} -p {26379 or sentinel port} sentinel failover {mastername}
But you will not be able to pick which node it fails over to. You can control this with the slave-priority value in redis.conf so that Sentinel prefers a particular node over the rest. A description of the slave priority can be found here: https://redis.io/topics/sentinel
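For example, a hedged redis.conf sketch (values are illustrative): Sentinel prefers the replica with the lowest non-zero slave-priority, and a priority of 0 means the replica is never promoted:
# on the replica you would prefer to be promoted
slave-priority 10
# on replicas that should only be promoted as a last resort (default is 100)
slave-priority 100
# on a replica that must never be promoted
slave-priority 0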
You can manually trigger a failover by running:
redis-cli -a {password} -p {sentinel_port} SENTINEL failover {cluster_name}
If you are using the Lettuce client, you can use masterSlaveStatefulConnection and pass it the Sentinel URI; it will perform auto-discovery in the background and refresh the master node internally.
https://github.com/lettuce-io/lettuce-core/wiki/Master-Replica
I have set up a Redis master-slave configuration with one master (port 6379) and 3 slaves (ports 6380, 6381, 6382) running on the same machine. It looks like the cluster is set up properly, as I can see the following output when running the info command:
# Replication
role:master
connected_slaves:3
slave0:ip=127.0.0.1,port=6380,state=online,offset=29,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=29,lag=1
slave2:ip=127.0.0.1,port=6382,state=online,offset=29,lag=1
master_repl_offset:43
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:42
But whenever I try to add a new key on the master, I get the following error:
(error) CLUSTERDOWN Hash slot not served
Using redis-3.0.7 on Mac OS X Yosemite.
I had the same issue; it turned out I had forgotten to run the cluster creation script:
cd /path/to/utils/create-cluster
./create-cluster create
http://redis.io/topics/cluster-tutorial#creating-a-redis-cluster-using-the-create-cluster-script
To fix the slots issue during insertion:
redis-cli --cluster fix localhost:6379
You can use the Ruby script bundled with Redis to create clusters, as shown below:
/usr/local/redis-3.2.11/src/redis-trib.rb create --replicas 1 192.168.142.128:7001 192.168.142.128:7002 192.168.142.128:7003 192.168.142.128:7004 192.168.142.128:7005 192.168.142.128:7006
There is a hash slot not allotted to any master. Check the hash slots by looking at column 9 in the output of the following command (column 9 will be empty if that node has no hash slots):
redis-cli -h masterIP -p masterPORT CLUSTER NODES
The hash slots can be allotted using the following command:
redis-cli -h masterIP -p masterPORT CLUSTER ADDSLOTS SLOTNUMBER
But this has to be done for every slot number individually, without missing any. Use a bat script to run it in a for loop, something like:
for /L %a in (0,1,5400) Do redis-cli -h 127.0.0.1 -p 7001 cluster addslots %a
Also, this command works before assigning slaves to the master. After this ADDSLOTS step and completing the setup, SET and GET worked fine. Remember to use -c with redis-cli to enable cluster support.
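For example (the key name, value and port are illustrative; with -c, redis-cli follows MOVED redirections to the node that owns the slot):
redis-cli -c -p 7001 set mykey hello
redis-cli -c -p 7001 get mykey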
This issue comes up when one or more Redis nodes becomes corrupted and can no longer serve its configured hash slots.
You will have to bootstrap the cluster again to make sure the nodes agree on the hash slots to serve.
If the Redis nodes contain data or a key in database 0, you will have to clear this data before rerunning the bootstrap.
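A hedged sketch of that reset, assuming the data on each node can be discarded (the port is illustrative; repeat on every node, then re-run the cluster creation):
redis-cli -p 6379 flushall
redis-cli -p 6379 cluster reset hard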
I am trying to implement a Redis cluster with 6 machines.
I have a vagrant cluster of six machines:
192.168.56.101
192.168.56.102
192.168.56.103
192.168.56.104
192.168.56.105
192.168.56.106
all running redis-server
I edited the /etc/redis/redis.conf file on all of the above servers, adding this:
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-slave-validity-factor 0
appendonly yes
I then ran this on one of the six machines:
./redis-trib.rb create --replicas 1 192.168.56.101:6379 192.168.56.102:6379 192.168.56.103:6379 192.168.56.104:6379 192.168.56.105:6379 192.168.56.106:6379
A Redis cluster is up and running. I checked manually by setting a value on one machine; it shows up on the other machines.
$ redis-cli -p 6379 cluster nodes
3c6ffdddfec4e726f29d06a6da550f94d976f859 192.168.56.105:6379 master - 0 1450088598212 5 connected
47d04bc98ab42fc793f9f382855e5c54ab8f2e20 192.168.56.102:6379 slave caf2cec45114dc8f4cbc6d96c6dbb20b62a39f90 0 1450088598716 7 connected
040d4bb6a00569fc44eec05440a5fe0796952ccf 192.168.56.101:6379 myself,slave 5318e48e9ef0fc68d2dc723a336b791fc43e23c8 0 0 4 connected
caf2cec45114dc8f4cbc6d96c6dbb20b62a39f90 192.168.56.104:6379 master - 0 1450088599720 7 connected 0-10922
d78293d0821de3ab3d2bca82b24525e976e7ab63 192.168.56.106:6379 slave 5318e48e9ef0fc68d2dc723a336b791fc43e23c8 0 1450088599316 8 connected
5318e48e9ef0fc68d2dc723a336b791fc43e23c8 192.168.56.103:6379 master - 0 1450088599218 8 connected 10923-16383
My problem is that when I shut down or stop redis-server on any one machine that is a master, the whole cluster goes down, but if all three slaves die the cluster still works properly.
What should I do so that a slave turns into a master if a master fails (fault tolerance)?
I am under the assumption that Redis handles all those things and I need not worry about it after deploying the cluster. Am I right, or would I have to do things myself?
Another question: let's say I have six machines with 16GB RAM each. How much total data would I be able to handle on this Redis cluster with three masters and three slaves?
Thank you.
The setting cluster-slave-validity-factor 0 may be the culprit here.
From redis.conf:
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
In your setup, the slave of the terminated master considers itself unfit to be elected master, since the time it last contacted the master is greater than the computed value of:
(node-timeout * slave-validity-factor) + repl-ping-slave-period
Therefore, even with a redundant slave, the cluster state is changed to DOWN and becomes unavailable.
You can try a different value, for example the suggested default:
cluster-slave-validity-factor 10
This will ensure that the cluster is able to tolerate one random Redis instance failure (it can be a slave or a master instance).
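As a rough worked example of that formula (assuming the default repl-ping-slave-period of 10 seconds, the cluster-node-timeout of 5000 ms from your config, and a validity factor of 10): 5 s * 10 + 10 s = 60 s, so a slave that has not heard from its master for more than about a minute will refuse to start a failover.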
For your second question: six machines with 16GB RAM each will be able to function as a Redis Cluster of 3 master instances and 3 slave instances, so the theoretical maximum is 16GB x 3 of data. Such a cluster can tolerate a maximum of ONE node failure if cluster-require-full-coverage is turned on; otherwise it may still be able to serve data from the shards that remain available on the functioning instances.
Suppose I have a redis cluster with nodes 10.0.0.1, 10.0.0.2, 10.0.0.3 and 10.0.0.4, which I'm using as a cache.
Then, for whatever reason, node 10.0.0.4 fails and goes down. This brings down the entire cluster:
2713:M 13 Apr 21:07:52.415 * FAIL message received from [id1] about [id2]
2713:M 13 Apr 21:07:52.415 # Cluster state changed: fail
Which causes any query to be shut down with "CLUSTERDOWN The cluster is down".
However, since I'm using the cluster as a cache, I don't really care if a node goes down. A key can get resharded to a different node and lose its contents without affecting my application.
Is there a way to set up such an automated resharding?
I found something close enough to what I need.
By setting cluster-require-full-coverage to "no", the rest of the cluster will continue to respond to queries, although the client needs to handle the possibility of being redirected to a failing node.
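In redis.conf that is just the following line (set it on every node; I believe it can also be changed at runtime with CONFIG SET):
cluster-require-full-coverage no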
Then I can replace the broken node by running:
redis-trib.rb call 10.0.0.1:6379 cluster forget [broken_node_id]
redis-trib.rb add-node 10.0.0.5:6379 10.0.0.1:6379
redis-trib.rb fix 10.0.0.1:6379
Where 10.0.0.5:6379 is the node that will replace the broken one.
Assuming you have only master nodes in your current cluster, you will definitely get a CLUSTERDOWN error, because there is no replica of the down master; Redis considers the cluster unsafe and triggers an error.
Solution
Create a new node (create a redis.conf with the desired parameters).
Join that node to the cluster:
redis-trib.rb add-node 127.0.0.1:6379 EXISTING_MASTER_IP:EXISTING_MASTER_PORT
Make the node a slave of 10.0.0.4:
redis-cli -p 6379 cluster replicate NODE_ID_OF_TARGET_MASTER
To Test
First, be sure the cluster is in good shape (all slots are covered and the nodes agree about the configuration):
redis-trib.rb check 127.0.0.1:6379 (On any master)
Kill the redis-server process on 10.0.0.4.
Wait for the slave to become the new master (it happens quickly; the slots assigned to 10.0.0.4 will be taken over automatically by the slave).
Check the cluster and be sure all slots have moved to the new master:
redis-trib.rb check 127.0.0.1:6379 (On any master)
No manual actions are needed. Additionally, if you have more slaves in the cluster, they may be promoted as new masters for other failed masters as well (e.g. with a setup of 3 masters and 3 slaves: Master1 goes down and Slave1 becomes the new master; if Slave1 later goes down, its slave can be promoted to master in the same way).
Sorry, I should be shot for even having to ask this, but I've wasted a day on this and feel like I've read everything there is.
I can't create a cluster on my three EC2 instances, which are spread across three different regions. The hosts:
rabbit@ip-172-31-47-217
rabbit@ip-172-31-1-82
rabbit@ip-172-31-36-111
The initial state before trying to make the cluster:
ubuntu@ip-172-31-47-217:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@ip-172-31-47-217' ...
[{nodes,[{disc,['rabbit@ip-172-31-47-217']}]},
 {running_nodes,['rabbit@ip-172-31-47-217']},
 {partitions,[]}]
ubuntu@ip-172-31-36-111:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@ip-172-31-36-111' ...
[{nodes,[{disc,['rabbit@ip-172-31-36-111']}]},
 {running_nodes,['rabbit@ip-172-31-36-111']},
 {partitions,[]}]
ubuntu@ip-172-31-1-82:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@ip-172-31-1-82' ...
[{nodes,[{disc,['rabbit@ip-172-31-1-82']}]},
 {running_nodes,['rabbit@ip-172-31-1-82']},
 {partitions,[]}]
When I try to check the status of one server from another:
sudo rabbitmqctl status -n rabbit@ip-172-31-1-82
Status of node 'rabbit@ip-172-31-1-82' ...
Error: unable to connect to node 'rabbit@ip-172-31-1-82': nodedown
nodes in question: ['rabbit@ip-172-31-1-82']
hosts, their running nodes and ports:
- unable to connect to epmd on ip-172-31-1-82: timeout (timed out)
current node details:
- node name: 'rabbitmqctl3835@ip-172-31-36-111'
- home dir: /var/lib/rabbitmq
- cookie hash: 0tsf/OyQZI7zobmv1Ia97w==
All three servers have the same erlang cookie hash.
I can verify that the host names are set up properly:
host ip-172-31-36-111
ip-172-31-36-111.us-west-2.compute.internal has address 172.31.36.111
I know the ports are open:
netstat -plten | grep beam
I have opened all TCP and UDP ports at this point as a test; no change.
And finally, to see if this would behave differently given those failures:
sudo rabbitmqctl join_cluster --ram rabbit@ip-172-31-1-82
Clustering node 'rabbit@ip-172-31-47-217' with 'rabbit@ip-172-31-1-82' ...
Error: {cannot_discover_cluster,"The nodes provided are either offline or not running"}
Please help, I'm being driven insane by this.
The problem is that they are in different regions (presumably in EC2-Classic; you didn't mention whether you were using a VPC). This means they cannot communicate via their private IPs (see e.g. "Can EC2 instances in different regions communicate over their private IP addresses?").
ping 172.31.36.111
will fail from one of the other servers, for example. Pinging using the hostname will probably even fail on the DNS lookup.
Your options are:
Put them in separate availability zones in a single region (in EC2-Classic they will be able to communicate). You could also use a VPC in this case, putting them in separate subnets but allowing interconnections via appropriately set up security groups.
Set up /etc/hosts on each server to point at the relevant public IPs of the other servers (you could attach Elastic IPs to each server to ensure stability across server restarts); see the sketch after this list. You could also set the hostname of each server for clarity. Set up your security groups to allow access on the relevant ports that RabbitMQ uses. There may be security implications of doing this, since the data will be travelling over the public internet.
Set up a VPN between each server in the cluster. Amazon VPC has a VPN facility, but there are ways of setting it up yourself I think.
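For option 2, a hedged /etc/hosts sketch (the public IPs below are hypothetical placeholders; use the Elastic IPs you actually attach):
# /etc/hosts on every node
203.0.113.10  ip-172-31-47-217
203.0.113.11  ip-172-31-1-82
203.0.113.12  ip-172-31-36-111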
I think option 1 is the simplest. Option 2 has major security implications (I believe there are ways of securing the connection between the cluster servers, but they aren't documented on the RabbitMQ website as far as I can tell). Option 3 is complex but probably the best option if you need multiple regions.
Note that RabbitMQ clusters aren't meant to be run over wide geographical areas, since they aren't too reliable in the face of network partitions. See here: https://www.rabbitmq.com/clustering.html