I have installed the latest version (6.0.8) of Redis on new CentOS servers D, E, and F. Now I want to add these new servers to the existing cluster A, B, C, which runs an older Redis version. My plan is to decommission the old servers after the new Redis servers have been added. Can anyone please guide me through the steps?
1. Set up your new Redis instance as a slave of your current Redis instance. To do so you need a different server, or a server that has enough RAM to keep two instances of Redis running at the same time.
2. If you use a single server, make sure that the slave is started on a different port than the master instance, otherwise the slave will not be able to start at all.
3. Wait for the initial replication synchronization to complete (check the slave log file).
4. Using INFO, make sure the master and the slave hold the same number of keys. Check with redis-cli that the slave is working as you wish and is replying to your commands.
5. Allow writes to the slave using CONFIG SET slave-read-only no.
6. Configure all your clients to use the new instance (that is, the slave). Note that you may want to use the CLIENT PAUSE command to make sure that no client can write to the old master during the switch.
7. Once you are sure that the master is no longer receiving any queries (you can check this with the MONITOR command), promote the slave to master using the SLAVEOF NO ONE command, and shut down your master.
You can follow the guide "Upgrading or restarting a Redis instance without downtime" in the Redis administration documentation.
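As a rough illustration, the redis-cli side of steps 4-7 could look like the following; the host names and the pause duration are placeholders, not values from the question:

redis-cli -h new-server INFO keyspace            # compare key counts with the master
redis-cli -h new-server CONFIG SET slave-read-only no
redis-cli -h old-server CLIENT PAUSE 10000       # suspend writes on the old master for 10 seconds
redis-cli -h new-server SLAVEOF NO ONE           # promote the replica to master
redis-cli -h old-server SHUTDOWN NOSAVE          # retire the old master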
I have 5 Redis servers:
2 of them run Redis in both master and slave roles (it looks like redis.conf was not set up manually but via some sort of process, because it has the following line at the bottom: Generated by CONFIG REWRITE)
From time to time I can see the master and slave switch roles automatically, with no human intervention.
3 of them run Redis Sentinel.
Question 1: I need to replicate this setup on 5 different systems, but I don't know how that "Generated by CONFIG REWRITE" portion is set up. Where and how is this automation configured?
Question 2: Why does /etc/redis/ have a 6329.conf file? I thought the Redis setup file is redis.conf...
Thanks
The config rewrites are all caused by Redis Sentinel. The 3 sentinels you have monitor the master and, in the event that enough sentinels think the master is down, they will force a failover by promoting an existing slave to the new master, and then reconfigure all other hosts to be slaves of the new master. You can read more about Redis Sentinel, including how to set it up for common scenarios, in the Sentinel documentation (docs page, examples section).
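The "Generated by CONFIG REWRITE" line is appended by the CONFIG REWRITE command that Sentinel issues whenever it reconfigures an instance. For reference, a minimal sentinel.conf for this kind of setup might look roughly like this; the master name, IP, and timeouts are illustrative placeholders:

sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1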
For the 6329.conf file: you can name the config files however you want, but however you start your Redis server has to reference the non-default file name. Here's the usage example from the --help option of redis-server:
Usage: ./redis-server [/path/to/redis.conf] [options]
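So, assuming the instance listens on port 6329 (naming the config file after the port is a common convention when several instances run on one host), it would be started with something like:

./redis-server /etc/redis/6329.conf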
Suppose I have [Slave IP Address], which is the slave of [Master IP Address].
Now my master server has been shut down, and I need to set this slave to be master MANUALLY (WITHOUT using Sentinel automatic failover, WITH a Redis command).
Is it possible to do this without restarting the Redis service (and losing all the cached data)?
Use SLAVEOF NO ONE to promote a slave to master.
http://redis.io/commands/slaveof
It depends: if you are in a cluster, you are better off using the failover. You will need to use the FORCE option in the command.
http://redis.io/commands/cluster-failover
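For example, with redis-cli (the address is a placeholder):

redis-cli -h <slave-ip> -p 6379 SLAVEOF NO ONE          # standalone replica, no cluster
redis-cli -h <slave-ip> -p 6379 CLUSTER FAILOVER FORCE  # in a cluster, run on the replica to promote

FORCE skips the handshake with the unreachable master, while TAKEOVER additionally skips the cluster-wide agreement; see the cluster-failover page above for the implications.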
Is it possible doing this without restarting the redis service? (and losing all the cached data)
Yes, that's possible. You can use
SLAVEOF NO ONE (without Sentinel)
But it is recommended to use Sentinel to avoid data loss:
sentinel failover master-name (with Sentinel)
This will force the Sentinel to switch the master.
The new master will have all the data that was synchronized before the old master shut down.
Sentinel will automatically choose the slave with the most data, which reduces the amount of data lost when switching masters.
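For example, issued against a Sentinel instance on its default port (the master name "mymaster" is a placeholder):

redis-cli -p 26379 SENTINEL FAILOVER mymaster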
The two options in step 3 below have helped me recover the cluster when a master node went down, its compute was replaced, or it was in some other unrecoverable state.
1.- First you need to connect to the slave node with redis-cli; here is a link on how to do that: How to connect to remote Redis server?
2.- Once connected to the slave node, run the command cluster nodes to validate that the master node is in fail state, and also run cluster info to see the overall state of your cluster (this is always a good idea).
3.- Inside the slave node to be promoted, run the command cluster failover. In rare cases, when there are serious issues with Redis, this command can fail and you will need to use cluster failover force or cluster failover takeover; here is more info about the implications of those options: https://redis.io/commands/cluster-failover
4.- Run cluster forget $old_master_id on all your cluster nodes.
5.- Add a new node with cluster meet $new_node_IP $new_node_PORT.
6.- Subscribe your new node to your brand new master: log in to the new node and run cluster replicate $master_node_id.
Steps 1-3 are required for the slave-to-master promotion, and 4-6 are required to leave the whole cluster in a healthy master-slave equilibrium.
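Put together, the whole recovery can be sketched with redis-cli as follows; $slave_ip stands for the replica being promoted, and the other variables match the steps above:

redis-cli -h $slave_ip -p 6379 cluster nodes                    # confirm the old master shows "fail"
redis-cli -h $slave_ip -p 6379 cluster info                     # overall cluster state
redis-cli -h $slave_ip -p 6379 cluster failover                 # add force/takeover only if this fails
redis-cli -h $slave_ip -p 6379 cluster forget $old_master_id    # repeat on every remaining node
redis-cli -h $slave_ip -p 6379 cluster meet $new_node_IP $new_node_PORT
redis-cli -h $new_node_IP -p $new_node_PORT cluster replicate $master_node_id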
As of Redis version 5.0.0 the SLAVEOF command is regarded as deprecated.
If a Redis server is already acting as replica, the command REPLICAOF NO ONE will turn off the replication, turning the Redis server into a MASTER.
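For example (the address is a placeholder):

redis-cli -h <replica-ip> -p 6379 REPLICAOF NO ONE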
I am using Redis (Redis 3.1) as a session store for Tomcat (Tomcat 7). To ensure high availability, there is a Sentinel setup and two instances (master and slave) of the Redis server. The slave is configured as read-only. After running a few tests and verifying the statistics, I observed that no read requests are sent to the slave. All the read requests are processed by the master alone.
Could you please let me know how I can make the slave serve the read requests?
You could use the Redis-based Tomcat Session Manager provided by Redisson. It allows you to choose which type of node is used for read operations (master, slave, or both master and slave). It works perfectly in Sentinel/Cluster modes.
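As a rough sketch of the idea (not a verified configuration), Redisson's YAML config in Sentinel mode has a readMode setting that can direct reads to the replicas; the master name and addresses below are placeholders:

sentinelServersConfig:
  masterName: "mymaster"
  sentinelAddresses:
    - "redis://127.0.0.1:26379"
    - "redis://127.0.0.2:26379"
  readMode: "SLAVE"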
We have a setup with a number of Redis (2.8) servers (let's say 4) and as many Redis Sentinels. On startup of each machine, we set a pre-selected machine as master through the command line and all the rest as slaves of it, and the sentinels all monitor these machines. The clients first connect to the local sentinel, retrieve the master's IP address, and then connect there.
This setup is trouble-free most of the time, but sometimes the sentinels go out of sync with the servers. If I name the machines A, B, C and D, the sentinels will think B is master while the Redis servers are all connected to A as the master. Bringing down the Redis server on B doesn't help either; I had to bring it down and manually run "sentinel failover" on A to fix the issue. My questions are:
1. What causes this to happen, and what's the easiest and quickest way to fix it?
2. What is the best configuration? Is there something better than this?
The only time you should set a master is the first time. Once Sentinel has taken over management of replication, you should let it do so. This includes on restarts. Don't use the command line to set replication; let Sentinel and Redis manage it. This is why you're getting issues: you've told Sentinel it is authoritative, but you are telling the Redis servers to ignore Sentinel.
Sentinel stores the status in its config file, so when it restarts it can resume the last configuration. So even on restart, let Sentinel do its job.
Also, if you have 4 servers (be specific, not "let's say"), you should be running a quorum of three on your monitor statement in Sentinel. With a quorum of two you can wind up with two masters.
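That is, the monitor line in each sentinel.conf would look something like this (the master name and address are placeholders):

sentinel monitor mymaster 10.0.0.10 6379 3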
I'm testing out ElastiCache backed by Redis with the following specs:
Using Redis 2.8, with Multi-AZ
Single replication group
1 master node in us-east-1b, 1 slave node in us-east-1c, 1 slave node in us-east-1d
The part of the application that writes goes directly to the endpoint for the master node (primary-node.use1.cache.amazonaws.com).
The part of the application that only reads points to a custom endpoint (readonly.redis.mydomain.com) configured in HAProxy, which then points to the two read-slave endpoints (readslave1.use1.cache.amazonaws.com and readslave2.use1.cache.amazonaws.com).
Now let's say the primary node (master) fails in us-east-1b.
From what I understand, if the master instance fails, I won't have to change the URL of the write endpoint (primary-node.use1.cache.amazonaws.com), but from there I still have the following questions:
Do I have to change the endpoint names for the read only slaves?
How long until the missing slave is added into the pool?
If there's anything else I'm missing, I'd appreciate the advice/information.
Thanks!
If you are using ElastiCache, you should make use of the "Primary Endpoint" provided by AWS.
That endpoint is actually backed by Route 53; since you enabled Multi-AZ, if the primary (master) Redis node goes down, it will automatically fail over to one of the read replicas (slaves).
In that case, you don't need to modify the endpoint of your Redis.
I'm not sure why you have such a design; it seems you only want to write to the master but always read from the slaves.
For the HAProxy part, you should include a TCP check for ALL 3 Redis nodes, using their "Read Endpoint".
In HAProxy you can check whether an endpoint is a SLAVE; if it is, HAProxy should redirect the read traffic to it.
Notice that at the application layer, if your Redis driver doesn't support auto-reconnect, your script will fail to connect to the new master node.
In addition to "auto reconnect": since AWS uses Route 53 DNS to do the failover, some libraries will NOT do a DNS lookup again, which means the client keeps pointing to the OLD IP, i.e. the old master.
Using HAProxy can solve this problem.
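For illustration, a read backend with the usual Redis role check might look roughly like this (assuming HAProxy's tcp-check syntax; the server names are taken from the question, and the timings are arbitrary):

backend redis_read
    mode tcp
    balance roundrobin
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:slave
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server readslave1 readslave1.use1.cache.amazonaws.com:6379 check inter 1s fall 3 rise 2
    server readslave2 readslave2.use1.cache.amazonaws.com:6379 check inter 1s fall 3 rise 2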