I'm adding Redis support to an open-source project written in Go. The goal is to support all Redis topologies: server, cluster, sentinel.
I browsed the Go clients listed at redis.io/clients, and it seems that the github.com/go-redis/redis project is a viable option.
My main concern is that NewSentinelClient() accepts only a single sentinel address.
According to the Guidelines for Redis clients (redis.io/topics/sentinel-clients#guidelines-for-redis-clients-with-support-for-redis-sentinel), "the client should iterate the list of Sentinel addresses. "
How can SentinelClient iterate through the rest of sentinel instances, if it only has one sentinel address?
Am I missing something?
On the same topic, could someone recommend another Go Redis client that might be suitable for this scenario?
Use NewFailoverClient if you have multiple sentinels:
// import "github.com/go-redis/redis"
rdb := redis.NewFailoverClient(&redis.FailoverOptions{
    MasterName: "mymaster", // the master name the sentinels monitor
    SentinelAddrs: []string{
        "sentinel_1:26379",
        "sentinel_2:26379",
        "sentinel_3:26379",
    },
})
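To sanity-check that the failover client resolved the current master, here is a minimal follow-up sketch, assuming the same pre-context go-redis API as the snippet above and continuing from the rdb variable:

// Ping goes to whatever master the sentinels currently report;
// after a failover the client re-resolves the master on its own.
if err := rdb.Ping().Err(); err != nil {
    panic(err)
}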
I need an HA Redis solution instead of a single instance. Should I use Cluster or Sentinel? I have tried to find the difference between them, but there is no official document comparing the two. Thanks a lot.
Well, for an HA Redis solution, it depends on the number of nodes you want to configure.
According to the official Redis documentation on Redis Cluster and Redis Sentinel, both provide an HA solution, but...
Redis Sentinel provides high availability for Redis. In practical terms this means that using Sentinel you can create a Redis deployment that resists without human intervention certain kinds of failures.
Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes.
Redis Cluster also provides some degree of availability during partitions, that is in practical terms the ability to continue the operations when some nodes fail or are not able to communicate. However the cluster stops to operate in the event of larger failures (for example when the majority of masters are unavailable).
For more information please refer to the official docs :)
Cheers
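From the client side the two setups are also wired up differently. As a rough illustration, here is a minimal go-redis sketch (the addresses and master name are hypothetical placeholders):

// import "github.com/go-redis/redis"

// Sentinel: the client asks the sentinels for the current master
// and follows failovers transparently.
sentinelBacked := redis.NewFailoverClient(&redis.FailoverOptions{
    MasterName:    "mymaster",
    SentinelAddrs: []string{"sentinel_1:26379", "sentinel_2:26379"},
})

// Cluster: the client talks to the cluster nodes directly and
// follows the hash-slot sharding and MOVED redirects.
clustered := redis.NewClusterClient(&redis.ClusterOptions{
    Addrs: []string{"node_1:6379", "node_2:6379", "node_3:6379"},
})
_, _ = sentinelBacked, clustered

With Sentinel you still have a single dataset served by one master, while Cluster shards the keyspace across several masters.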
I have a basic question about Redis connection parameters from a CacheManager.NET perspective. When we have a Redis cluster with a master and 2 slaves, plus a quorum of sentinel processes, should we provide the IP:PORT combinations pointing to the sentinel processes, or to the actual Redis server processes?
As suggested in https://seanmcgary.com/posts/how-to-build-a-fault-tolerant-redis-cluster-with-sentinel, it is advisable to ask a sentinel process for the current master before making the connection. That is probably in line with Jedis, which provides JedisSentinelPool to do the initial lookup.
Essentially, what we want is load balancing on reads (via CacheManager.NET), while writes should go to the current master node of the cluster.
CacheManager relies on StackExchange.Redis for the Redis implementation. Therefore, whatever this client library supports, CacheManager does, too.
Unfortunately, sentinel support is not implemented; there have been issues on GitHub about that for years.
That being said, I did some testing with a multi master/slave + sentinel setup. I added all the non-sentinel nodes as endpoints to the Multiplexer configuration, and it kind of works because the Redis client knows how to handle multiple master/slave instances.
In the process of switching to another master, the client might throw exceptions saying that it cannot write to a read-only slave and such. CacheManager might retry those calls, and after a short amount of time, when the leader election is done, the call should go through.
But this is not 100% stable and I would not put that in production, as "official" support is still missing...
As an alternative to running with sentinels, you could run Redis in Cluster mode, which should just work, or behind a proxy which deals with all that master/slave stuff.
Twemproxy is one alternative.
I still have to add support for Twemproxy to CacheManager, as many features are simply not available, like Lua scripting, getting a list of servers, or flush commands...
This will come in 1.0.2
Hope that helps.
Is there a way for a client to get notified about failover events in the Redis cluster? If so, which client library would support this? I am currently using Jedis but have the flexibility to switch to any other Java client.
There are two ways that I can think of to check this. One of them is to grep for master nodes on the cluster, keeping track of their IDs; if the port changed for any of them, then a failover happened.
$ redis-cli -p {PORT} cluster nodes | grep master
Another way, though not as robust a solution, is to use the consistency-checker Ruby script, which will start showing write errors in its output. You can monitor those and send notifications based on them, since they appear when a slave is trying to take over its master's role.
Sentinel (http://redis.io/topics/sentinel) has the ability to monitor the cluster members and send a publish/subscribe notification upon failure. The link contains a more in-depth explanation and tutorial.
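If you would rather be pushed events than poll, note that a sentinel is itself a Redis server speaking the pub/sub protocol, so any client can subscribe to its event channels (Jedis supports pub/sub as well). A minimal sketch using go-redis, with a hypothetical sentinel address:

// import "github.com/go-redis/redis" and "fmt"
sentinel := redis.NewClient(&redis.Options{Addr: "sentinel_1:26379"})

// Sentinel publishes "+switch-master" when it promotes a new master;
// the payload is "<master-name> <old-ip> <old-port> <new-ip> <new-port>".
pubsub := sentinel.Subscribe("+switch-master")
defer pubsub.Close()

for msg := range pubsub.Channel() {
    fmt.Println("failover detected:", msg.Payload)
}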
I set up twemproxy (nutcracker) with 2 redis servers as backends including slaves, sentinel and failover.
As soon as I add another Redis server, some of the keys can no longer be read, probably because twemproxy routes them to a different Redis instance.
How do I add another redis instance without breaking the consistency?
I want to use the setup as a consistent and very fast database.
Here are my settings:
redis_cluster:
  auto_eject_hosts: false
  distribution: ketama
  hash: fnv1a_32
  listen: 127.0.0.1:6379
  preconnect: true
  redis: true
  servers:
   - 127.0.0.1:7004:1 redis_1
   - 127.0.0.1:7005:1 redis_2
I want sharding to remain the server's job and still be able to add instances. Do I need to use a different setup?
Twemproxy can't do that. You can use Redis Cluster, or, if you want to use Twemproxy, you have to use a technique called presharding: start directly with, say, 32 or 64 instances or so, even if they all run on the same host at first. Then move instances from one box to another in order to scale to multiple actual servers. The name to the right of each instance configured inside Twemproxy ("redis_1") is what gets used for hashing, so you can change the IP address when you move instances and the hashing for that server will stay the same.
Redis Cluster is at release candidate 2 at this point. While it needs more testing and deployments to be as battle-tested as Redis itself, it is already a viable product, so you may want to test it as well.
Assuming I have a Master-Slave deployment of Redis (1 master, 1 slave) and a client (webapp) that will manage Publish-Subscribe.
Can I Publish messages to the slave and will they be "seen" by the master?
Or should I use only the Master for Publish and the Slave for Subscribe commands?
I've been looking around but couldn't find the answer. Does anyone know?
EDIT: As @jameshfisher pointed out, the link below is regarding Redis Cluster. The comment from @lionello seems to be the correct answer:
Publishing to a slave will not propagate to the master, only the other way around.
The answer is on the cluster-spec docs:
Publish/Subscribe
In a Redis Cluster clients can subscribe to every node, and can also publish to every other node. The cluster will make sure that published messages are forwarded as needed.
The current implementation will simply broadcast each published message to all other nodes, but at some point this will be optimized either using Bloom filters or other algorithms.
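For the non-cluster master/slave setup from the question, the practical pattern is therefore: publish to the master, and subscribe on whichever node is convenient, since replication forwards published messages only from master to slave. A minimal go-redis sketch with hypothetical addresses (and ignoring the small replication delay):

// import "github.com/go-redis/redis" and "fmt"
master := redis.NewClient(&redis.Options{Addr: "master:6379"})
slave := redis.NewClient(&redis.Options{Addr: "slave:6379"})

// Subscribers can attach to the slave...
sub := slave.Subscribe("events")
defer sub.Close()

// ...as long as messages are published to the master, which replicates
// PUBLISH to its slaves. Publishing to the slave would only reach
// subscribers of that slave.
master.Publish("events", "hello")

if msg, err := sub.ReceiveMessage(); err == nil {
    fmt.Println(msg.Channel, msg.Payload)
}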
For the typical data you store in Redis, you should only write to the master.
From http://redis.io/topics/replication:
...writes [to slaves] will be discarded if the slave and the master will [sic] resynchronize, or if the slave is restarted...
In fact, starting from v2.6, you can put slaves in slave-read-only mode which would prevent the mistake of writing data to a slave.
The documentation does go on to mention a potential use case for writing data to slaves:
...often there is ephemeral data that is unimportant that can be stored
into slaves. For instance clients may take information about
reachability of master in the slave instance to coordinate a fail over
strategy.