Redis: terminate called after throwing an instance of 'sw::redis::MovedError'

I tried to build a Redis cluster following the cluster tutorial, then used the Redlock tutorial to write my code.
I got the following error:
terminate called after throwing an instance of 'sw::redis::MovedError'

Redlock does not work with Redis Cluster. Instead, it works with single Redis instances: you should run your code against N independent Redis instances, NOT a Redis Cluster.
Seems that you're using redis-plus-plus. You got this error because you're trying to send commands to a Redis Cluster with a Redis object; a cluster needs a RedisCluster object, which follows slot redirections.
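For example, with redis-plus-plus the fix looks like this (a minimal sketch; the node address is an assumption, and any master in the cluster will do):

#include <sw/redis++/redis++.h>

using namespace sw::redis;

int main() {
    // RedisCluster discovers the other nodes and routes each command
    // to the node that owns the key's slot, so no MovedError is thrown.
    auto cluster = RedisCluster("tcp://127.0.0.1:7000");

    cluster.set("key", "value");
    auto val = cluster.get("key");   // OptionalString
    return 0;
}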

Related

Redis Cluster Single Node issue

Can Redis run with cluster mode enabled but with a single instance? It seems we get the cluster status reported as fail when we try to deploy with a single node, because no slots are added to it.
https://serverfault.com/a/815813/510599
I understand we can manually add SLOTS after deployment (one way is shown below). But I wonder if we can modify the Redis source code so this happens automatically.
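For reference, assigning all slots to a single node can be done with redis-cli after deployment (a sketch; the port is an assumption):

# give the lone node ownership of all 16384 hash slots
redis-cli -p 6379 CLUSTER ADDSLOTS $(seq 0 16383)

Once every slot is owned, the cluster state reported by CLUSTER INFO changes from fail to ok.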

Unresponsive Redis: works fine after FLUSHALL

We observed a sporadic issue (once every month or two) with our Redis. Suddenly, Redis read/write calls from our application start returning failures. At that point we are still able to connect to Redis from redis-cli, and doing a FLUSHALL resolves the issue.
Has anyone encountered this kind of issue? Our Redis version is 4.0.9. Since we have no trouble connecting to Redis, we do not suspect an issue with the Jedis client.
What kind of redis calls? Something that writes the data or reads the data?
All in all, just as an idea, examine your redis.conf file.
It has a maxmemory configuration directive, and when Redis reaches the memory limit it acts according to its eviction policy.
If the policy is set to 'noeviction', the behavior sounds like what you've described (from the Redis documentation):
return errors when the memory limit was reached and the client is trying to execute commands that could result in more memory to be used (most write commands, but DEL and a few more exceptions).
You can configure Redis as a cache so that it will delete some entries by itself; read about this in the key-eviction section of the Redis documentation.
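As an illustration, a cache-style setup in redis.conf might look like this (the limit and policy here are example values, not recommendations):

# cap memory usage and evict least-recently-used keys when the cap is hit
maxmemory 2gb
maxmemory-policy allkeys-lru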

Is it safe to run SCRIPT FLUSH on Redis cluster?

Recently, I started to have some trouble with one of my Redis clusters: used_memory and used_memory_rss are increasing constantly.
After some Googling, I found the following discussion:
https://github.com/antirez/redis/issues/4570
Now I am wondering: is it safe to run the SCRIPT FLUSH command on my production Redis cluster?
Yes, you can run the SCRIPT FLUSH command safely in a production cluster. The only potential side effect is blocking the server while it executes. Note, however, that you'll want to run it on each of your nodes.
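For example, assuming six nodes listening on localhost ports 7000-7005 (hypothetical values), the flush could be scripted like this:

# run SCRIPT FLUSH against every node in the cluster
for port in 7000 7001 7002 7003 7004 7005; do
    redis-cli -p "$port" SCRIPT FLUSH
done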

How to do a redis FLUSHALL without initiating a sentinel failover?

We have a redis configuration with two redis servers. We also have 3 sentinels to monitor the two instances and initiate a fail over when needed.
We currently have a process where we periodically have to do a FLUSHALL on the Redis server. This is a blocking operation that takes longer than the time we have allotted for the sentinels to time out. In other words, we have our sentinel configuration with:
sentinel down-after-milliseconds OurMasterName 5000
and doing a redis-cli FLUSHALL on the server takes > 5000 milliseconds, so the sentinels initiate a fail over.
We acknowledge that doing a FLUSHALL isn't great, and we also know that we could increase down-after-milliseconds, but for the purposes of this question assume that neither of these is an option.
The question is: how can we do a FLUSHALL (or equivalent operation) WITHOUT having our sentinels initiate a fail over due to the FLUSHALL blocking for greater than 5000 milliseconds? Has anyone encountered and solved this problem?
You could just create new instances: if you are using something like AWS or Azure, then you have an API for creating a new Redis cluster. Start it, load it with data, and once it's ready just modify the DNS, again with an API call, so all of this can be handled by some part of your application. On premises, things get more complex, because it will require some automation with Ansible/Chef/Puppet.
The next best option you currently have is to delete keys in batches to reduce the amount of work done at once. You can build a list of keys, assuming you don't have one, using SCAN, then delete in whatever batch size works for you, as sketched below.
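A minimal sketch using redis-cli's --scan helper (the pattern and the batch size of 1000 are example values; on Redis >= 4.0, UNLINK reclaims memory in the background, so each call blocks less than DEL would):

# stream key names with SCAN and delete them 1000 at a time
redis-cli --scan --pattern '*' | xargs -r -n 1000 redis-cli UNLINK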
Edit: as you are not interested in keeping the data, disable persistence, delete the RDB file, then just restart the instance. This way you don't have to update Sentinel like you would if you provisioned new hosts.
Out of curiosity, if you're just going to be flushing all the time and don't care about the data as you'll be wiping it, why bother with sentinel?

How does ServiceStack PooledRedisClientManager failover work?

According to the git commit messages, ServiceStack has recently added failover support. I initially assumed this meant that I could pull one of my Redis instances down, and my pooled client manager would handle the failover elegantly and try to connect with one of my alternate Redis instances. Unfortunately, my code just bugs out and says that it can't connect with the initial Redis instance.
I am currently running instances of Redis 2.6.12 on Windows, with the master at port 6379 and a slave at 6380, and with sentinels set up to automatically promote the slave to a master if the master goes down. I am currently instantiating my client manager like this:
PooledRedisClientManager pooledClientManager =
    new PooledRedisClientManager(new[] { "localhost:6379" },
                                 new[] { "localhost:6380" });
where the first array is read-write hosts (for the master), and the second array is read-only hosts (for the slave).
When I terminate the master at port 6379, the sentinels promote the slave to a master. Now, when I try to run my C# code, instead of failing over to port 6380, it simply breaks and returns the error "could not connect to redis Instance at localhost:6379".
Is there a way around this, or will failover simply not work the way I want it to?
PooledRedisClientManager.FailoverTo allows you to reset which hosts are read/write vs. read-only and restarts the factory. This allows for a quick transition without needing to recreate clients.
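So when Sentinel promotes the slave, you can point the existing pool at it yourself (a sketch; it assumes localhost:6380 is the newly promoted master):

// redirect the pool to the promoted master without recreating clients
pooledClientManager.FailoverTo(
    new[] { "localhost:6380" },   // read-write hosts
    new[] { "localhost:6380" });  // read-only hosts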