'hiredis' is a minimalistic C client for Redis. Does anyone know whether it supports:
Redis Sentinel (the official high availability solution for Redis): https://redis.io/topics/sentinel
and Redis Cluster: https://redis.io/topics/cluster-tutorial
It is not clear from its GitHub page: https://github.com/redis/hiredis
YES and NO.
Since you can send any command to Redis with hiredis, you can get master/slave info from Redis Sentinel, or the slot info from Redis Cluster. So hiredis can work with Redis Sentinel and Redis Cluster.
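For example, a minimal sketch of asking Sentinel for the current master address with plain hiredis might look like this (the Sentinel address 127.0.0.1:26379 and the master name "mymaster" are just placeholders):
// Minimal sketch: query Redis Sentinel for the current master using plain hiredis.
// Assumes a Sentinel on 127.0.0.1:26379 monitoring a master named "mymaster".
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 26379);
    if (c == NULL || c->err) {
        fprintf(stderr, "connection error: %s\n", c ? c->errstr : "cannot allocate context");
        return 1;
    }

    // SENTINEL get-master-addr-by-name replies with a 2-element array: [host, port].
    redisReply *reply = redisCommand(c, "SENTINEL get-master-addr-by-name %s", "mymaster");
    if (reply != NULL) {
        if (reply->type == REDIS_REPLY_ARRAY && reply->elements == 2) {
            printf("master is %s:%s\n", reply->element[0]->str, reply->element[1]->str);
            // From here you would open a second redisConnect() to that host/port yourself.
        }
        freeReplyObject(reply);
    }

    redisFree(c);
    return 0;
}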
However, since hiredis doesn't have a high-level API for working with Sentinel and Cluster, you have to do a lot of work yourself. If you need a high-level API, you could try other libraries, for example:
If you're coding with C, you can try hiredis-vip, which supports Redis Cluster. But I'm not sure if it supports Redis Sentinel.
If you're coding with C++, you can try redis-plus-plus, which supports both Redis Cluster and Redis Sentinel, and has STL-like interfaces.
Disclaimer: I'm the author of redis-plus-plus.
// Example using redis-plus-plus with Redis Cluster
#include <iostream>
#include <sw/redis++/redis++.h>

using namespace sw::redis;

int main() {
    try {
        // Connect to one node; the client discovers the rest of the cluster.
        auto cluster = RedisCluster("tcp://127.0.0.1:7000");
        cluster.set("key", "val");
        auto val = cluster.get("key");
        if (val) std::cout << *val << std::endl;
    } catch (const Error &e) {
        std::cerr << e.what() << std::endl;
    }
    return 0;
}
I'm adding Redis support to an open-source project written in Go. The goal is to support all Redis topologies: server, cluster, sentinel.
I browsed the Go clients listed in redis.io/clients, and it seems that the github.com/go-redis/redis project is a viable option.
My main concern is that the NewSentinelClient() method accepts a single sentinel address.
According to the Guidelines for Redis clients (redis.io/topics/sentinel-clients#guidelines-for-redis-clients-with-support-for-redis-sentinel), "the client should iterate the list of Sentinel addresses."
How can SentinelClient iterate through the rest of the sentinel instances if it only has one sentinel address?
Am I missing something?
On the same topic, could someone recommend another Go Redis client that might be suitable for this scenario?
Use NewFailoverClient if you have multiple sentinels.
rdb := redis.NewFailoverClient(&redis.FailoverOptions{
MasterName: "mymaster",
SentinelAddrs: []string{
"sentinel_1:26379",
"sentinel_2:26379",
"sentinel_3:26379",
},
})
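As far as I can tell, the failover client takes the whole SentinelAddrs list, asks the sentinels for the current master of MasterName, and reconnects when a failover happens, so the value it returns can be used like a regular *redis.Client. NewSentinelClient, by contrast, is a lower-level client for talking to a single Sentinel instance directly, which is why it only takes one address.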
I have some code written with Redisson in my Spring Boot application, which works with my local Redis server (version 5+). But once the application is pushed to PCF and tries to use a Redisson lock, I get the following error:
org.redisson.client.RedisException: ERR unknown command `EVAL`, with args beginning with: `if (redis.call('exists', KEYS[1]) == 0) then redis.call('hset', KEYS[1], ARGV[2], 1); redis.call('pexpire', KEYS[1], ARGV[1]); r`, . channel: [id: 0x63facc9b, L:/10.248.253.128:35276 - R:xxxxx:xxxx] command: (EVAL), params: [if (redis.call('exists', KEYS[1]) == 0) then redis.call('hset', KEYS[1], ARGV[2], 1);
Possible reasons that I was able to find were:
A low Redis server version, which is not my case.
Some Redis cloud providers might not support the EVAL command, which is mandatory for Redisson.
The most relevant topic I was able to find is this one, but I'm still not familiar enough with this technology stack.
So, generally, my question is: does anyone have experience using Redisson with the PCF Redis On-Demand service, and can maybe help me understand the issue?
Redisson version is 3.12.0
UPDATE 1:
It worked on another PCF instance with the Redis On-Demand service, so the issue is definitely in the Redis On-Demand configuration. Just to confirm: you can use Redisson on PCF.
When I am doing a Redis upgrade, should I always keep my Sentinel version in step with my Redis version, i.e. upgrade (3.0 Redis + 3.0 Redis Sentinel) to (4.0 Redis + 4.0 Redis Sentinel)?
Will it work with a 3.0 Sentinel and a 4.0 Redis instance?
It doesn't seem to matter. Redis Sentinel is just a service that talks to Redis.
Currently there are no problems. Redis Sentinel acts as a client when connecting to Redis servers.
As we know, Redis Cluster has 16384 hash slots. When a Redis client connects to a Redis Cluster, how does the client find the actual node that stores the data for a given key?
The rule is quite simple: Redis uses CRC16(key) % 16384 to determine which slot a key goes into. If you want to do everything by yourself, you just need to calculate every key's CRC16.
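If you do want to do it by hand, here is a minimal sketch of the slot calculation in plain C (CRC16 in the XMODEM variant that Redis Cluster uses, taken modulo 16384); note that real clients also apply the hash-tag rule for keys containing {...}, which is omitted here:
// Minimal sketch: compute the Redis Cluster hash slot of a key.
// Redis Cluster uses CRC16 (XMODEM: poly 0x1021, init 0x0000) modulo 16384.
// The {hash tag} rule used by real clients is omitted for brevity.
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint16_t crc16_xmodem(const char *buf, size_t len) {
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((unsigned char)buf[i]) << 8;
        for (int bit = 0; bit < 8; bit++) {
            if (crc & 0x8000)
                crc = (uint16_t)((crc << 1) ^ 0x1021);
            else
                crc = (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void) {
    const char *key = "user:1000";  // placeholder key
    unsigned slot = crc16_xmodem(key, strlen(key)) % 16384;
    printf("key '%s' maps to slot %u\n", key, slot);
    return 0;
}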
But generally you don't need to do this yourself. At the moment, almost every well-known client, like Jedis, Predis, and StackExchange.Redis, supports cluster mode.
When I provision a default Redis cluster on Google Compute Engine, there is one master and 2 read-only slaves, and Redis Sentinel is running on each machine. Given the previous cluster, I'd now like to use this in my ServiceStack service, but the Sentinel setup has me stumped. Typically I do something along the lines of:
container.Register<IRedisClientsManager>(c =>
new RedisManagerPool(container.Resolve<IAppSettings>().GetString("Redis:Master")));
var cacheClient = container.Resolve<IRedisClientsManager>().GetCacheClient();
container.Register(cacheClient);
So a couple of things are incomplete with this setup: how do I specify the master and the 2 read-only slaves, and how do I configure Sentinel?
The RedisSentinel support in ServiceStack.Redis is available in the RedisSentinel class but as it's still being tested, it's not yet announced. You can find some info on how to use and configure a RedisSentinel in this previous StackOverflow Answer.
Configuring a RedisSentinel
When using Redis Sentinel, it's the external Redis Sentinel process that manages the individual master/slave connections, so you just need to configure the Sentinel host and ignore the individual master/slave connections.
Configuring a RedisClientManager
Alternatively, if you're using a Redis Client Manager, you would do the opposite, i.e. ignore the Sentinel hosts and configure the Redis Client Manager with the master and slave hosts. Only the PooledRedisClientManager supports configuring both read-write/master and read-only/slave hosts, e.g.:
container.Register<IRedisClientsManager>(c =>
new PooledRedisClientManager(redisReadWriteHosts, redisReadOnlyHosts) {
ConnectTimeout = 100,
//...
});
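With this configuration, my understanding is that GetClient() resolves a read-write client from the master hosts and GetReadOnlyClient() resolves one from the read-only hosts, so no Sentinel configuration is needed on the client side.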