I want to set up 1 master and 2 slaves, or 1 master and 1 slave, per server for a certain number of servers. So it will have 20 nodes and 45 instances. I'm wondering how I can set it up so that it looks like this:
"127.0.0.1",|1M2S|port 40000~40002 e.g. 127.0.0.1:4000(master), 127.0.0.1:4001(slave)127.0.0.1:4002(slave)
"127.0.0.2",|1M2S|port 40000~40002
"127.0.0.3",|1M2S|port 40000~40002
"127.0.0.4",|1M2S|port 40000~40002
"127.0.0.5",|1M2S|port 40000~40002
"127.0.0.6",|1M1S|port 40000~40001
"127.0.0.7",|1M1S|port 40000~40001
"127.0.0.8",|1M1S|port 40000~40001
"127.0.0.9",|1M1S|port 40000~40001
"127.0.1.0",|1M1S|port 40000~40001
I also want the slaves distributed across servers. I believe the --replicas option will automatically try to distribute the slaves so that a master and its slaves are not on the same server/IP address.
./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
But that only works when every master gets the same number of slaves. How do I make it put 2 slaves on servers 1~5?
I also have to set up another cluster that has 8M11S|40700~40718, so I would like to know if there is a way of distributing these simply, as opposed to manually getting the IDs of the masters from a different server and adding a slave to each.
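For reference, one way to build an uneven layout like this (a sketch; the IPs, ports and the master node ID below are placeholders) is to create the cluster with masters only and then attach each extra slave explicitly, either to a specific master by ID or by letting redis-trib pick a master for you:

./redis-trib.rb create 127.0.0.1:40000 127.0.0.2:40000 127.0.0.3:40000 127.0.0.4:40000 127.0.0.5:40000 127.0.0.6:40000 127.0.0.7:40000 127.0.0.8:40000 127.0.0.9:40000 127.0.1.0:40000
./redis-trib.rb add-node --slave --master-id <master-node-id> 127.0.0.2:40001 127.0.0.1:40000
./redis-trib.rb add-node --slave 127.0.0.3:40002 127.0.0.1:40000

Without --master-id, redis-trib attaches the new node as a replica of one of the masters with the fewest replicas; whether it also avoids placing a slave on the same IP address as its master should be verified for your redis-trib version.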
We have many datacenters, but datacenter1 is the main one.
The master in datacenter1 is being monitored by Sentinel, so if the master goes down one of the replicas will become master, and all data is being synced continuously.
We want to have one Redis replica in each datacenter that replicates all data from datacenter1 but does not have the ability to become master (they should always get data from datacenter1; only replica 1 should be able to become master, the other replicas must not be able to).
Is there a Redis config for this, or any other idea?
Redis Multi Datacenter
Redis config [1] has a replica-priority parameter which should serve your purpose.
The replica priority is an integer number published by Redis in the INFO
output. It is used by Redis Sentinel in order to select a replica to promote
into a master if the master is no longer working correctly.
A replica with a low priority number is considered better for promotion, so
for instance if there are three replicas with priority 10, 100, 25 Sentinel
will pick the one with priority 10, that is the lowest.
However a special priority of 0 marks the replica as not able to perform the
role of master, so a replica with priority of 0 will never be selected by
Redis Sentinel for promotion.
By default the priority is 100.
The idea would be to set a low replica-priority on the replica in datacenter1 that is allowed to be promoted, and replica-priority 0 on the replicas in the other datacenters so that Sentinel never promotes them.
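For example, a minimal redis.conf sketch (the non-zero priority value is illustrative):

# on the replica in datacenter1 that is allowed to be promoted
replica-priority 10

# on the replicas in the other datacenters, which must never be promoted by Sentinel
replica-priority 0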
[1] redis.conf file of Redis version 6.2.6: https://github.com/redis/redis/blob/6.2.6/redis.conf
I'm trying to set up Redis Sentinel.
I know that when a master goes down, Sentinel picks one of its slaves and promotes it to master.
I was wondering on which attributes the new master is selected from among the slaves, i.e., which slave gets picked to become the new master?
After the Sentinel election, the leader Sentinel performs the following steps:
Remove slaves already in down status from the slave list.
Remove slaves whose disconnection time is more than ten times down-after-milliseconds plus the master's down time.
Select slave(s) by replica-priority (configured on each slave); lower values are preferred and a priority of 0 is never selected.
If multiple slaves remain, sort them by replication offset and select the most in-sync slave (the one with the maximum offset).
If there are still multiple candidates, sort by run ID and select the one with the lexicographically smaller run ID.
So you can see the selection criteria are applied in the following order (an example of checking these attributes per replica follows the list):
Disconnection time
Priority
Replication offset
Run ID
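For example, you can check and adjust these attributes per replica with redis-cli (the hosts, ports and the master name mymaster below are placeholders):

redis-cli -h 10.0.0.11 -p 6379 config set replica-priority 10
redis-cli -h 10.0.0.11 -p 6379 info replication | grep -E 'slave_priority|slave_repl_offset|master_link_status'
redis-cli -h 10.0.0.20 -p 26379 sentinel replicas mymaster

The last command asks a Sentinel for the replicas it currently knows about for the monitored master, including their priorities and replication offsets (on older Redis versions the subcommand is sentinel slaves).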
I want to understand the behavior of Aerospike in its different consistency modes.
Consider an Aerospike cluster running with 3 nodes and replication factor 3.
AP mode is simple, and the documentation says:
Aerospike will allow reads and writes in every sub-cluster.
And the maximum number of nodes that can go down is < 3 (the replication factor).
For Aerospike strong consistency it says:
Note that the only successful writes are those made on replication-factor number of nodes. Every other write is unsuccessful
Does this really mean that no writes are allowed if the number of available nodes < replication factor?
And then the same document says:
All writes are committed to every replica before the system returns success to the client. In case one of the replica writes fails, the master will ensure that the write is completed to the appropriate number of replicas within the cluster (or sub cluster in case the system has been compromised.)
What does "appropriate number of replicas" mean?
So if I lose one node from my 3-node cluster with strong consistency and replication factor 3, will I not be able to write data?
For Aerospike strong consistency it says:
Note that the only successful writes are those made on replication-factor number of nodes. Every other write is unsuccessful
Does this really mean that no writes are allowed if the number of available nodes < replication factor?
Yes, if there are fewer than replication-factor nodes available, then it is impossible to meet the user-specified replication-factor.
All writes are committed to every replica before the system returns success to the client. In case one of the replica writes fails, the master will ensure that the write is completed to the appropriate number of replicas within the cluster (or sub cluster in case the system has been compromised.)
What does "appropriate number of replicas" mean?
It means replication-factor nodes must receive the write. When a node fails, a new node can be promoted to replica status until either the node returns or an operator registers a new roster (cluster membership list).
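For illustration, roster management in a strong-consistency namespace is done through asinfo; a sketch, where the namespace name and node IDs are placeholders and the exact syntax should be checked against your Aerospike version:

asinfo -v "roster:namespace=test"                 # show the current and observed roster
asinfo -v "roster-set:namespace=test;nodes=<node-id-1>,<node-id-2>,<node-id-3>"
asinfo -v "recluster:"                            # apply the newly set roster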
So if I lose one node from my 3-node cluster with strong consistency and replication factor 3, will I not be able to write data?
Yes, so having all nodes as replicas wouldn't be a very useful configuration. Replication-factor 3 allows up to 2 nodes to be down, but only if the remaining nodes are able to satisfy the replication-factor. So for replication-factor 3 you would probably want to run with a minimum of 5 nodes.
You are correct: with 3 nodes and RF 3, losing one node means the cluster will not be able to accept write transactions, since it wouldn't be able to write the required number of copies (3 in this case).
"Appropriate number of replicas" means a number of replicas that matches the configured replication factor.
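For reference, a minimal aerospike.conf namespace sketch for a setup like this (namespace name, sizes and the storage file path are illustrative, and parameter names vary somewhat between Aerospike server versions):

namespace test {
    replication-factor 3
    strong-consistency true
    memory-size 4G
    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 8G
    }
}

Following the answers above, running this namespace on at least 5 nodes lets it keep accepting writes with up to 2 nodes down.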
I am writing a script to monitor Redis replication latency in a group of Redis slaves managed using Sentinel. I am looking at the results of the INFO replication command, which look like this:
# Replication
role:master
connected_slaves:5
slave0:ip=x.x.x.x,port=6379,state=online,offset=22246539656,lag=0
slave1:ip=y.y.y.y,port=6379,state=online,offset=22246538633,lag=1
slave2:ip=z.z.z.z,port=6379,state=online,offset=22247193804,lag=0
slave3:ip=n.n.n.n,port=6379,state=online,offset=22246538633,lag=1
slave4:ip=m.m.m.m,port=6379,state=online,offset=22244239193,lag=1
master_repl_offset:22246539199
repl_backlog_active:1
repl_backlog_size:536870912
repl_backlog_first_byte_offset:21709668288
repl_backlog_histlen:536870912
I had thought that the offset for each slave was a measure of how much data had been replicated so far, so I could look at the difference between the master_repl_offset and the offset values for the various slaves to determine the amount of data not yet replicated. However, in the above output, the offsets for slave0 and slave2 are both higher than for the master. Have I misunderstood what these numbers mean?
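For reference, this is the kind of check the script performs (a minimal sketch; host and port are placeholders, and it simply subtracts each slave's offset from master_repl_offset):

redis-cli -h 127.0.0.1 -p 6379 info replication | tr -d '\r' | awk -F'[:,=]' '
  /^master_repl_offset:/ { master = $2 }
  /^slave[0-9]+:/ { name = $1; for (i = 1; i <= NF; i++) if ($i == "offset") offset[name] = $(i + 1) }
  END { for (s in offset) printf "%s bytes_behind=%d\n", s, master - offset[s] }'

With the output above, slave0 and slave2 would come out negative, which is exactly the puzzle described here.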
I have a Redis Cluster. I am using the JedisCluster client to connect to it.
My application is a bit complex and I want to control which partition the data from my application goes to. For example, my application consists of sub-modules A, B and C. I want all data from sub-module A to go to partition 1, data from sub-module B to go to partition 2, and so on.
I am using JedisCluster, but I can't find any API to write to a particular partition of my cluster. I am assuming the same partition names will exist on all my Redis nodes, that which data goes to which node will be handled automatically, and that which partition it goes to will be controlled by me.
I tried going through the JedisCluster lib at
https://github.com/xetorthio/jedis/blob/b03d4231f4412c67063e356a7c3acf9bb7e62534/src/main/java/redis/clients/jedis/JedisCluster.java
but couldn't find anything. Please help?
Thanks in advance for the help.
That's not how Redis Cluster works. With Redis Cluster, each master node (partition) serves a defined set of hash slots, and therefore a defined set of keys. Sending a command for a key to a master node that does not serve that key's slot results in the command being rejected with a MOVED redirection to the correct node.
From the Redis Cluster Spec:
Redis Cluster implements a concept called hash tags that can be used in order to force certain keys to be stored in the same node.
[...]
The key space is split into 16384 slots, effectively setting an upper limit for the cluster size of 16384 master nodes (however the suggested max size of nodes is in the order of ~ 1000 nodes).
Each master node in a cluster handles a subset of the 16384 hash slots.
You need to define, at the cluster configuration level, which master node exclusively serves a particular slot or set of slots. This configuration results in data locality.
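For example, slot ownership is assigned per master when the cluster is built or resharded (ports and slot ranges here are illustrative; redis-trib / redis-cli --cluster create normally does this for you):

redis-cli -p 7000 cluster addslots $(seq 0 5460)
redis-cli -p 7001 cluster addslots $(seq 5461 10922)
redis-cli -p 7002 cluster addslots $(seq 10923 16383)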
The slot is calculated from the key. The good news is that you can enforce a particular slot for a key by using key hash tags:
There is an exception for the computation of the hash slot that is used in order to implement hash tags. Hash tags are a way to ensure that multiple keys are allocated in the same hash slot. This is used in order to implement multi-key operations in Redis Cluster.
Example:
{user1000}.following
The content between { and } is used to calculate the slot. Key hash tags allow you to group keys on a particular node: all keys that share the same hash tag are allocated to the same slot, and therefore to the same master.
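For instance, you can verify which slot a hash-tagged key maps to (the key names and port are made up for illustration):

redis-cli -p 7000 cluster keyslot '{moduleA}:user:1'
redis-cli -p 7000 cluster keyslot '{moduleA}:order:42'

Both commands return the same slot number, because only the text between { and } is hashed; JedisCluster will therefore route both keys to whichever master serves that slot.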
You can also go a step further by using known hash tags that map to specific slots (you'd need to precalculate a table, or reuse an existing mapping of hash tags to slots). By using known hash tags that map to a specific slot, you're able to select the slot, and therefore the master node, on which the data is located.
Everything else is handled by your Redis client.