I have a Redis cluster and use the Java Jedis library to perform reads and writes. In my use case, I don't mind reading from a slave if there is any issue with the master node.
So far I haven't found any way to read from a slave node using the Jedis APIs. Is there any other option available, like connecting to the slave node directly and fetching the value from there?
Or is changing the library the only way forward?
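For what it's worth, this is roughly what I mean by connecting to the slave directly; a minimal sketch with a placeholder host/port, assuming the Jedis version in use exposes the READONLY command as readonly():

```java
import redis.clients.jedis.Jedis;

public class ReplicaReadSketch {
    public static void main(String[] args) {
        // Placeholder host/port of a known replica in the cluster.
        try (Jedis replica = new Jedis("replica-host.example", 7001)) {
            // In cluster mode a replica redirects reads (MOVED) until READONLY is sent.
            replica.readonly();
            // This only succeeds if the key's hash slot belongs to this replica's shard.
            String value = replica.get("user:42");
            System.out.println(value);
        }
    }
}
```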
I have a Redis cluster of 3 master nodes and each master has corresponding slave nodes. I would like to acquire a lock on the cluster to perform some write operations and then release the lock.
From what I've read, to connect to a cluster we generally connect to one node in the cluster and perform all operations on that node, which in turn handles redirecting to the other nodes in the cluster.
Is it possible to acquire a lock on a Redis cluster? (P.S. I am using the Redisson client.)
From the examples in the Redisson client under MultiLock and RedLock (https://github.com/redisson/redisson/wiki/8.-Distributed-locks-and-synchronizers), they acquire a lock on individual nodes.
How do MultiLock and RedLock work on a cluster?
How and what kind of lock do I use if I have a Redis cluster?
Which library (Jedis/Redisson) do I use?
Jedis also seems to have support for locking on the cluster (https://github.com/kaidul/jedis-lock).
P.S: I've read extensively on this, but I've not been able to find clear answers on locking on a cluster. Would really appreciate some help.
I found a solution to my question above.
As long as we use the same key to acquire the lock across all client nodes, all attempts to acquire the lock will go to the same node in the Redis cluster. So you can just use a simple RLock from Redisson.
See the comments on this GitHub issue:
https://github.com/leandromoreira/redlock-rb/issues/63
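For completeness, a minimal sketch of what that looks like with Redisson (the node address and lock name are placeholders):

```java
import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class ClusterLockSketch {
    public static void main(String[] args) throws InterruptedException {
        Config config = new Config();
        // One seed node is enough; Redisson discovers the rest of the cluster.
        config.useClusterServers().addNodeAddress("redis://127.0.0.1:7000");
        RedissonClient redisson = Redisson.create(config);

        // All clients using the same lock name hash to the same master.
        RLock lock = redisson.getLock("orders:write-lock");
        if (lock.tryLock(5, 30, TimeUnit.SECONDS)) {  // wait up to 5s, auto-release after 30s
            try {
                // ... perform the write operations ...
            } finally {
                lock.unlock();
            }
        }

        redisson.shutdown();
    }
}
```

The lock key lives on whichever master owns its hash slot and is only replicated asynchronously to that master's slave; that replication gap during failover is what MultiLock/RedLock are meant to harden against.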
Can anyone suggest a way to intercept the communication between a Redis master and slave and use that buffered data to write to AWS ElastiCache?
The Redis replication protocol is open. You can write your own 'server' that appears to the master just like another slave, but pushes the data to ElastiCache instead.
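A rough sketch of that idea, assuming a standalone (non-clustered) master on localhost:6379; the listening-port value and class name are placeholders, and parsing the RDB payload and RESP command stream (and forwarding it to ElastiCache) is left out:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ReplicationTap {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 6379)) {
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            // The handshake a replica performs before it starts receiving data.
            send(out, "PING");
            send(out, "REPLCONF", "listening-port", "6380");
            send(out, "REPLCONF", "capa", "eof", "capa", "psync2");
            send(out, "PSYNC", "?", "-1");  // request a full resync

            // What follows on the socket: the handshake replies, then the RDB
            // snapshot, then the live stream of write commands. Just dump it here.
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
                System.out.flush();
            }
        }
    }

    // Encode a command as a RESP array, the same framing a real replica uses.
    private static void send(OutputStream out, String... parts) throws Exception {
        StringBuilder sb = new StringBuilder("*").append(parts.length).append("\r\n");
        for (String p : parts) {
            byte[] bytes = p.getBytes(StandardCharsets.UTF_8);
            sb.append('$').append(bytes.length).append("\r\n").append(p).append("\r\n");
        }
        out.write(sb.toString().getBytes(StandardCharsets.UTF_8));
        out.flush();
    }
}
```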
By the way, doesn't ElastiCache support Redis directly?
I have a basic question about Redis connection parameters from the CacheManager.NET perspective. In the case where we have a Redis cluster with a master and 2 slaves, and a quorum of sentinel processes, should we provide the IP:PORT combinations pointing to the sentinel processes or to the actual Redis server processes?
As suggested in https://seanmcgary.com/posts/how-to-build-a-fault-tolerant-redis-cluster-with-sentinel, it is advisable to ask the sentinel process for the actual master before making the connection. That probably lines up with Jedis, which provides JedisSentinelPool to do the initial lookup.
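For reference, this is roughly what that Jedis-side lookup looks like (master name and sentinel addresses are placeholders):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelLookupSketch {
    public static void main(String[] args) {
        // Point the pool at the sentinels, not at the Redis servers themselves.
        Set<String> sentinels = new HashSet<>(Arrays.asList(
                "10.0.0.1:26379", "10.0.0.2:26379", "10.0.0.3:26379"));

        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
             Jedis master = pool.getResource()) {
            // The pool asks the sentinels who the current master is and
            // re-resolves it after a failover.
            master.set("greeting", "hello");
        }
    }
}
```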
Essentially what we want is load balancing on reads (via CacheManager.NET), while writes should go to the current master node of the cluster.
CacheManager relies on StackExchange.Redis for the Redis implementation. Therefore, whatever this client library supports, CacheManager does, too.
Unfortunately, sentinel support is not implemented; there have been issues on GitHub about it for years.
That being said, I did some testing with a multi master/slave + sentinel setup. I added all the non-sentinel nodes as endpoints to the multiplexer configuration, and it kind of works because the Redis client knows how to handle multiple master/slave instances.
While switching to another master, the client might throw exceptions saying it cannot write to a read-only slave and such. CacheManager might retry those calls, and after a short amount of time, when the leader election is done, the call should go through.
But this is not 100% stable and I would not put that in production, as "official" support is still missing...
As an alternative to running with sentinels, you could run Redis in cluster mode, which should just work, or behind a proxy that deals with all the master/slave handling.
Twemproxy is one alternative.
I still have to add support for Twemproxy in CacheManager, as many features are simply not available there, like Lua scripting, getting a list of servers, or flush commands...
This will come in 1.0.2
Hope that helps.
Our current Redis setup is a web application client using Jedis to connect directly, with one JedisPool for writes to a single Redis master and a second JedisPool for reads from a single Redis slave. The slave is set up to replicate the master.
We are in the process of moving to using the JedisSentinelPool on the client and introducing Sentinel(s) to handle failover more cleanly. As far as I know, JedisSentinelPool only communicates with the currently elected master, so now all writes and reads go to the master, compared to before, when the reads could be distributed to the slave.
Is there any way, using JedisSentinelPool, to distribute the reads to the slave for load-balancing purposes? Or is it necessary to implement this manually with a JedisPool (as before)? In which case, if the master failed, the JedisSentinelPool would now point to the old slave (the new master), the JedisPool would still dumbly point to the old slave, and effectively the old slave (new master) would now be handling reads AND writes?
Does Redis Sentinel (or anything else) have any load-balancing (as opposed to failover) capabilities? We currently have only one slave; could adding more slaves help with load balancing? And if so, what are the recommended configurations?
Any advice, real-world experience here would be appreciated.
I wrote a new JedisSentinelPool that can read from slaves with load balancing and write to the master. It uses Redis subscribe to keep track of the slaves. I use it in my web application; see the code on GitHub: sentinel-slave-jedis-pool.
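Not that library's code, but the underlying idea can be sketched with plain Jedis: ask a sentinel for the current master (for writes) and for its slaves (for reads), and build pools from those addresses. The host/port values and master name below are placeholders:

```java
import java.util.List;
import java.util.Map;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class SentinelReadSplitSketch {
    public static void main(String[] args) {
        try (Jedis sentinel = new Jedis("10.0.0.1", 26379)) {
            // Current master address -> use this pool for writes.
            List<String> master = sentinel.sentinelGetMasterAddrByName("mymaster");
            JedisPool writePool = new JedisPool(master.get(0), Integer.parseInt(master.get(1)));

            // Known slaves -> pick one (round-robin, random, ...) for reads.
            List<Map<String, String>> slaves = sentinel.sentinelSlaves("mymaster");
            Map<String, String> slave = slaves.get(0);
            JedisPool readPool = new JedisPool(slave.get("ip"), Integer.parseInt(slave.get("port")));

            try (Jedis w = writePool.getResource(); Jedis r = readPool.getResource()) {
                w.set("counter", "1");
                System.out.println(r.get("counter"));
            }

            writePool.close();
            readPool.close();
        }
    }
}
```

In practice you would also subscribe to sentinel events such as +switch-master to rebuild the pools after a failover, which is presumably what the linked pool automates.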
This might be something simple, but I cannot find the answer anywhere (including in the codebase).
I have a simple Redis deployment with master + slave.
How am I supposed to configure JedisPool to use master for writes and slave/master for reads?
Everything I see now tells me that I have to configure JedisPool to connect to the master, but I don't see any logic that auto-detects slaves and sends "gets" there.
What am I missing?
Would appreciate your clarification. Thanks in advance.
Paul
There is a solution called jedis_failover: it was made specifically as a failover solution, but maybe you can extend the library for your own usage. JedisPool is just a pool of Jedis instances, and I don't think there is a way to tell Jedis to call another Redis server depending on the query, as a Jedis instance is defined by a single connection. You'll probably have to define your own facade over JedisPool which holds a list of JedisPools based on your topology of Redis servers.
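A minimal sketch of such a facade, with placeholder hosts, just to show the shape of it:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

// Hypothetical facade: one pool per Redis role, writes go to the master pool,
// reads go to the slave pool. Host names are placeholders.
public class ReadWriteSplitPool implements AutoCloseable {
    private final JedisPool masterPool = new JedisPool("redis-master.example", 6379);
    private final JedisPool slavePool  = new JedisPool("redis-slave.example", 6379);

    public void set(String key, String value) {
        try (Jedis jedis = masterPool.getResource()) {
            jedis.set(key, value);
        }
    }

    public String get(String key) {
        try (Jedis jedis = slavePool.getResource()) {
            return jedis.get(key);
        }
    }

    @Override
    public void close() {
        masterPool.close();
        slavePool.close();
    }
}
```

This does not handle failover by itself; if the master dies you have to re-point the pools yourself, which is what the jedis_failover/sentinel approaches below automate.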
Here are some interesting links on jedis_failover; note that you'll probably need ZooKeeper to manage this configuration.
https://github.com/officedrop/jedis_failover
http://fr.slideshare.net/ryanlecompte/handling-redis-failover-with-zookeeper
There is also Redis Sentinel, which is quite new and alpha/beta, but it is official.