I'm importing an old Redis backup in order to browse its databases.
However, upon launching Redis with the imported dump.rdb, a lot of data is missing.
My guess is that Redis immediately expires all of the old keys.
Is there a configuration setting to prevent Redis from expiring anything?
Redis doesn't have a setting for that, but you can try setting the server's clock back to a time before the keys' expirations, prior to loading the RDB.
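For example, on a systemd-based Linux host you could do something like this before starting the server (a rough sketch; the date and path are made up). Redis stores key expirations as absolute Unix timestamps in the RDB, so the clock only needs to be earlier than those timestamps:

sudo timedatectl set-ntp false   # keep NTP from correcting the clock again
sudo date -s "2015-01-01 00:00:00"
redis-server /path/to/redis.conf   # with the imported dump.rdb in the configured dir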
We have observed a sporadic issue (once every month or two) in our Redis. Suddenly we see that Redis read/write calls from our application return a failure. At that point we are still able to connect to Redis from redis-cli, and doing a FLUSHALL resolves the issue.
Has anyone encountered this kind of issue? Our Redis version is 4.0.9. Since we are not observing any issue connecting to Redis, we do not suspect a problem with the Jedis client.
What kind of redis calls? Something that writes the data or reads the data?
Just as an idea, examine your redis.conf file.
It has a maxmemory configuration directive, and when Redis reaches the memory limit it acts according to its eviction policy.
If the policy is set to 'noeviction', the behavior sounds like what you've described (from their documentation):
return errors when the memory limit was reached and the client is trying to execute commands that could result in more memory to be used (most write commands, but DEL and a few more exceptions).
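To check whether this is what is happening, you could inspect the live configuration and memory stats, for example:

redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy
redis-cli INFO memory | grep -E 'used_memory_human|maxmemory_human'
redis-cli INFO stats | grep evicted_keys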
You can also configure Redis as a cache so that it will evict some entries by itself.
Read about this here
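For instance, a cache-style setup in redis.conf might look like this (the limit and policy are only examples):

maxmemory 2gb
maxmemory-policy allkeys-lru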
I want to delete all keys from a Redis cluster using a fast process. I know how to delete the keys using "redis-cli FLUSHALL", but this command can be slow when the data set is large. I heard that all keys can be cleared from the Redis cache by restarting the Redis service. I am testing this process on my local Mac laptop. I am performing the following steps:
1. Setting a large number of keys on my local Redis server, for example with redis-cli SET mykey1 "Hello"
2. Restarting the Redis service with "brew services restart redis", in the hope that all keys will be deleted when the service comes back up
3. Listing the keys with "redis-cli KEYS '*'"
I still see the keys after step 3.
The keys are gone only when I run redis-cli FLUSHALL. How can I clear the keys by restarting the Redis service locally on my Mac laptop first, before I try this on the QA servers?
You see the keys after a restart because either RDB or AOF persistence is enabled. See https://redis.io/topics/persistence.
RDB is enabled by default. To disable persistence, you need to edit your redis.conf or start the server as redis-server --save "" --appendonly no
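The equivalent settings in redis.conf would look roughly like this:

save ""          # disable RDB snapshots
appendonly no    # disable the AOF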
See "Is there a way to flushall on a cluster so all keys from master and slaves are deleted from the db" for how to use redis-cli to send the command to all cluster nodes.
As dizzyf indicates, use FLUSHALL ASYNC to have the deletion performed in the background. This will create fresh hash maps for each database, while the old ones are deleted (memory reclaimed) progressively by a background thread.
In redis 4.0 and greater, the FLUSHALL ASYNC command was introduced, as a way to delete all the keys in a non-blocking manner. Would this solve your issue?
https://redis.io/commands/flushall
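For example (the cluster address below is a placeholder, and --cluster call requires redis-cli 5 or newer):

redis-cli FLUSHALL ASYNC
redis-cli --cluster call 127.0.0.1:7000 FLUSHALL ASYNC   # send it to every node of a cluster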
Thank you for the links. These were very helpful. I was able to achieve the result by making the suggested changes (redis-server --save "" --appendonly no). So after these changes, when I now restart the Redis service, nothing is saved.
I use Redis on an Ubuntu machine to store some keys with expiration, but on the testing server (Ubuntu) my keys get deleted/disappear before expiration. A key was set to expire in 24 hours but was already gone after 4 hours or so.
I know that the TTL is correct and set in seconds, and I do not see any key flush in the history.
I also checked Redis INFO: maxmemory:0 (not set) and evicted_keys:0, so I assume it is not a memory problem.
What could cause the removal of my keys?
Maybe there is some script on the server that deletes keys? If so, how could I track such a script?
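One way to watch for whatever is deleting the keys (a sketch, assuming you can enable keyspace notifications on the test server and that the keys live in database 0):

redis-cli CONFIG SET notify-keyspace-events gxE   # generic commands (DEL etc.) plus expired events
redis-cli PSUBSCRIBE '__keyevent@0__:del' '__keyevent@0__:expired'

If a key shows up on the del channel rather than the expired channel, something other than the TTL is removing it.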
I need to set up a Redis DB (2.8), which I intend to use as a queue, which means it must be fully persistent (no message can be missed).
I'm pretty new to Redis, and I would like to get a review of my configuration:
I want to use both the AOF and RDB persistence models, with always selected as the appendfsync policy. According to the documentation, always is not recommended, but I must select this option since I use Redis as a queue and I can't tolerate any missing messages (a sketch of these settings follows below).
I would like to create a Master-Slave-Slave setup using Sentinel with automatic failover.
Redis service will be automatically started after server boot.
Any comments and suggestions would be great. The administration point of view is most important to me (persistence, backup, restore, high availability, etc.).
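For reference, a minimal sketch of the persistence side of such a setup in redis.conf (the snapshot schedule is only an example):

appendonly yes        # enable the AOF
appendfsync always    # fsync on every write, at the cost of throughput
save 900 1            # keep RDB snapshots as well
save 300 10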
We have a Redis configuration with two Redis servers. We also have 3 sentinels to monitor the two instances and initiate a failover when needed.
We currently have a process where we periodically have to do a FLUSHALL on the Redis server. This is a blocking operation that takes longer than the time we have allotted for the sentinels to time out. In other words, we have our sentinel configuration with:
sentinel down-after-milliseconds OurMasterName 5000
and doing a redis-cli FLUSHALL on the server takes > 5000 milliseconds, so the sentinels initiate a fail over.
We acknowledge that doing a FLUSHALL isn't great, and we also know that we could increase down-after-milliseconds, but for the purposes of this question assume that neither of these is an option.
The question is: how can we do a FLUSHALL (or equivalent operation) WITHOUT having our sentinels initiate a fail over due to the FLUSHALL blocking for greater than 5000 milliseconds? Has anyone encountered and solved this problem?
You could just create new instances: if you are using something like AWS or Azure, then you have an API for creating a new Redis cluster. Start it, load it with data, and once it is ready just modify the DNS, again with an API call, so all of this can be handled by some part of your application. But on premises things can get more complex because it will require some automation with Ansible/Chef/Puppet.
The next best option you currently have is to delete keys in batches to reduce the amount of work done at once. You can build a list of keys, assuming you don't have one, using SCAN, then delete in whatever batch size works for you.
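A rough sketch of that approach with redis-cli (the pattern and batch size are arbitrary, and it assumes key names without spaces or newlines):

redis-cli --scan --pattern '*' | xargs -L 500 redis-cli DEL   # delete keys 500 at a time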
Edit: as you are not interested in keeping the data, disable persistence, delete the RDB file, then just restart the instance. This way you don't have to update Sentinel like you would if you provision new hosts.
Out of curiosity, if you're just going to be flushing all the time and don't care about the data as you'll be wiping it, why bother with sentinel?