Currently I have a Redis instance, and now I would like to make it more failure-proof.
Is it possible to achieve the following?
I connect to Redis with the ServiceStack library, and I want the client to switch to the failover server automatically when the primary server is not available.
You should configure a Redis instance as a slave of your master instance, either by using the slaveof command or, more likely, by adding a slaveof directive to the configuration file (something like 'slaveof 127.0.0.1 6380'; see the documentation for more info). Then use Redis Sentinel to monitor the instances and promote the slave to master when the master fails.
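As a sketch, a minimal configuration might look like this (the master name "mymaster", the quorum of 2, and the timeouts are placeholder assumptions, not values from the question):

# redis.conf on the slave: replicate the master running on 6379
slaveof 127.0.0.1 6379

# sentinel.conf on each monitoring node: watch the master under the
# name "mymaster"; 2 Sentinels must agree before declaring it down
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000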
Moreover, you either have to use a Redis client that supports Sentinel and handles the redirection when the slave is promoted to master, or use a network configuration (like a virtual IP) to make the redirection transparent to your application.
Related
Why does a Redis client use multiple addresses in cluster mode to create a connection? Is this to switch between addresses when one of them has failed?
Thanks.
Redis uses multiple addresses to set up the application with all the master and slave nodes available in the Redis cluster. The client never switches addresses itself; it is the Redis cluster's responsibility to promote a slave node to master if one of the masters fails. After that, subsequent requests can be served directly by that node.
More details here: https://redis.io/topics/cluster-tutorial
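For illustration, the tutorial linked above creates a minimal six-node cluster roughly like this (ports are placeholders; the redis-cli --cluster subcommand exists from Redis 5 on, older versions used redis-trib.rb):

redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1

Any one of those addresses is enough for a cluster-aware client to discover the full topology; the rest serve as fallback entry points, which is why clients are typically given several.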
I am using Redis (Redis 3.1) as a session store for Tomcat (Tomcat 7). To ensure high availability, there is a Sentinel setup and two instances (master and slave) of the Redis server. The slave is configured as read-only. After running a few tests and checking the statistics, I observed that no read requests are sent to the slave; all read requests are processed by the master alone.
Could you please let me know how I can make the slave serve the read requests?
You could use the Redis-based Tomcat Session Manager provided by Redisson. It lets you choose which type of node to use for read operations (master, slave, or both master and slave), and it works in both Sentinel and Cluster modes.
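As a sketch of the relevant piece (the Sentinel address and master name are placeholder assumptions; see the Redisson docs for the full Tomcat setup), the Redisson configuration file referenced by the session manager can direct reads to the slave via readMode:

# redisson.yaml, referenced from the RedissonSessionManager entry in context.xml
sentinelServersConfig:
  masterName: "mymaster"
  sentinelAddresses:
    - "redis://127.0.0.1:26379"
  readMode: "SLAVE"   # read operations go to slave nodes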
When I provision a default Redis cluster on Google Compute Engine, there is one master and two read-only slaves, and Redis Sentinel is running on each machine. Given this cluster, I'd now like to use it in my ServiceStack service, but the Sentinel setup has me stumped. Typically I do something along these lines:
container.Register<IRedisClientsManager>(c =>
    new RedisManagerPool(container.Resolve<IAppSettings>().GetString("Redis:Master")));
var cacheClient = container.Resolve<IRedisClientsManager>().GetCacheClient();
container.Register(cacheClient);
So a couple of things are incomplete with this setup: how do I specify the master and the two read-only slaves, and how do I configure Sentinel?
The RedisSentinel support in ServiceStack.Redis is available in the RedisSentinel class but as it's still being tested, it's not yet announced. You can find some info on how to use and configure a RedisSentinel in this previous StackOverflow Answer.
Configuring a RedisSentinel
When using Redis Sentinel, it's the external Sentinel process that manages the individual master/slave connections, so you only need to configure the Sentinel hosts and can ignore the individual master/slave connections.
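Based on the linked answer, wiring it up might look something like this (the Sentinel host and the master name "mymaster" are placeholders; the exact RedisSentinel signature may differ in your version):

// Configure only the Sentinel host(s); Start() discovers the current
// master/slaves from Sentinel and returns a ready client manager.
var sentinel = new RedisSentinel(new[] { "127.0.0.1:26379" }, masterName: "mymaster");
IRedisClientsManager redisManager = sentinel.Start();
container.Register<IRedisClientsManager>(c => redisManager);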
Configuring a RedisClientManager
Alternatively, if you're using a Redis Client Manager, you would do the opposite: ignore the Sentinel hosts and configure the Redis Client Manager with the master and slave hosts. Only the PooledRedisClientManager supports configuring both read-write/master and read-only/slave hosts, e.g.:
container.Register<IRedisClientsManager>(c =>
    new PooledRedisClientManager(redisReadWriteHosts, redisReadOnlyHosts) {
        ConnectTimeout = 100,
        //...
    });
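For context, a minimal sketch of how the two pools are then used (the key and value are placeholders; GetClient() draws from the read-write hosts, GetReadOnlyClient() from the read-only hosts):

var redisManager = container.Resolve<IRedisClientsManager>();
using (var master = redisManager.GetClient())
{
    master.SetValue("greeting", "hello"); // writes go to a read-write (master) host
}
using (var replica = redisManager.GetReadOnlyClient())
{
    var greeting = replica.GetValue("greeting"); // reads can be served by a slave
}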
I am planning on adding Redis to our application as a session and cache store. I have been looking at how to make Redis highly available in an on-premises hosted solution.
The standard approach appears to be to set up Redis as a three-node replication setup and use Sentinel for monitoring and automatic failover.
Redis 2.8 introduces Redis Cluster. Does that mean it brings automatic failover etc., and we no longer need to use Sentinel?
No, Cluster and Failover are different scenarios. Also Cluster is in 3.0, not 2.8.
The standard (and minimum) setup for HA is a master and one slave (aka "a pod"), with a separate set of three nodes which run Sentinel and monitor the pod.
This ensures failover of the server. However, on the client side one of three things must be true: your client library supports using Sentinel to discover the master and reconnect on failure; you implement that discovery in your own code; or you set up a TCP load balancer plus a Sentinel monitoring daemon that updates the load balancer configuration when a failover occurs, at which point the client code doesn't know or care about Sentinel.
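For reference, the discovery step amounts to asking any Sentinel for the current master (26379 and "mymaster" are the conventional defaults, not values from the question):

redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster

The reply is the master's current IP and port; a client library or load-balancer script re-runs this (or subscribes to Sentinel's +switch-master event) and reconnects to whatever address comes back.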
Cluster isn't there to provide HA; it is there for server-side sharding of data. For Cluster you're looking at 6-7 nodes minimum (3 masters, 3 slaves, 1 spare), as well as Cluster support in the client and restrictions on commands and Lua scripts that need to access multiple keys.
According to the git commit messages, ServiceStack has recently added failover support. I initially assumed this meant that I could pull one of my Redis instances down and the pooled client manager would handle the failover elegantly, trying to connect with one of my alternate Redis instances. Unfortunately, my code just bugs out and says that it can't connect with the initial Redis instance.
I am currently running instances of Redis 2.6.12 on Windows, with the master at port 6379 and a slave at 6380, and sentinels set up to automatically promote the slave to master if the master goes down. I am currently instantiating my client manager like this:
PooledRedisClientManager pooledClientManager =
    new PooledRedisClientManager(new[] { "localhost:6379" },
                                 new[] { "localhost:6380" });
where the first array is read-write hosts (for the master), and the second array is read-only hosts (for the slave).
When I terminate the master at port 6379, the sentinels promote the slave to master. Now, when I try to run my C# code, instead of failing over to port 6380, it simply breaks and returns the error "could not connect to redis Instance at localhost:6379".
Is there a way around this, or will failover simply not work the way I want it to?
PooledRedisClientManager.FailoverTo allows you to reset which hosts are read/write vs. read-only and restarts the factory. This allows a quick transition without needing to recreate the clients.
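As a minimal sketch using the hosts from the question (the assumption being that some process watching Sentinel calls this when it observes the promotion):

// After Sentinel promotes localhost:6380, point both pools at it.
pooledClientManager.FailoverTo(
    new[] { "localhost:6380" },   // new read-write hosts
    new[] { "localhost:6380" });  // new read-only hosts

Clients subsequently resolved from the pool connect to the promoted master instead of the dead instance at 6379.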