Redis keys deleted before expiration

I use Redis on an Ubuntu machine to store some keys with expirations, but on the testing server (also Ubuntu) my keys disappear before they expire. A key set to expire after 24 hours was already gone after roughly 4 hours.
I know the TTL is correct and set in seconds, and I do not see any flush command in the history.
I also checked redis INFO: maxmemory:0 (not set) and evicted_keys:0, so I assume it is not a memory/eviction problem.
What could cause the removal of my keys?
Maybe there is some script on the server that deletes keys? If so, how could I track such a script down?
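One way to catch whatever is deleting the keys is to enable keyspace notifications and watch for del events, or to briefly log traffic with MONITOR. A minimal sketch, assuming the keys live in database 0:

    # Enable keyevent notifications for generic commands (g) and expirations (x)
    redis-cli CONFIG SET notify-keyspace-events gxE

    # In another terminal: fires on every DEL and every expiration in db 0
    redis-cli PSUBSCRIBE '__keyevent@0__:del' '__keyevent@0__:expired'

    # Alternatively, log every command the server receives (expensive, use briefly)
    redis-cli MONITOR | grep -iE 'del|flush|expire'

If your key shows up under the del event (rather than expired) well before its TTL is up, some client or cron job is deleting it explicitly, and MONITOR will show which connection sent the command.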

Related

Can Redis execute its own command when a key expires?

I want to increment another key when a key expires in Redis.
I found Redis Keyspace Notifications. However, it seems to only support client subscriptions.
Is it possible to set an action on expiration in redis-server itself?
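For reference, the client-side workaround the question is trying to avoid looks roughly like this (a sketch; the counter name expired:count and database 0 are assumptions):

    # Tell Redis to publish keyevent notifications for expirations
    redis-cli CONFIG SET notify-keyspace-events Ex

    # Subscribe and increment a counter once per expired key;
    # --csv makes each pmessage arrive as a single line
    redis-cli --csv PSUBSCRIBE '__keyevent@0__:expired' | while read -r line; do
      case "$line" in
        '"pmessage"'*) redis-cli INCR expired:count > /dev/null ;;
      esac
    done

Note that keyspace notifications are fire-and-forget pub/sub messages: if the subscriber is disconnected when a key expires, that event is lost.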

Can I prevent Redis from expiring any keys?

I'm importing an old Redis backup in order to browse its databases.
However, upon launching Redis with the imported dump.rdb, a lot of data is missing.
My guess is that Redis immediately expires all the old keys.
Is there a configuration setting to prevent Redis from expiring anything?
Redis has no such setting, but you can try setting the server's clock to some time in the distant past before loading the RDB.
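On Linux, one way to do this without touching the system clock is the libfaketime wrapper; a sketch, assuming the faketime utility is installed and the usual config path:

    # Run redis-server under a faked clock so old TTLs have not yet elapsed
    faketime '2015-01-01 00:00:00' redis-server /etc/redis/redis.conf

    # Browse from another terminal as usual
    redis-cli KEYS '*'

Pick a date earlier than when the backup was taken, so the absolute expiry timestamps stored in the RDB are still in the future.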

How to delete all keys from redis cluster

I want to delete all keys from a Redis cluster using a fast process. I know how to delete keys with "redis-cli FLUSHALL", but this command can be slow when the data set is large. I heard that all keys can be cleared from the Redis cache by restarting the Redis service. I am testing this process on my local Mac laptop, performing the following steps:
1. Set a number of keys on my local Redis server, e.g. with redis-cli SET mykey1 "Hello"
2. Restart the Redis service with "brew services restart redis", in the hope that all keys will be deleted when the service comes back up
3. Fetch the keys with "redis-cli KEYS '*'"
I still see the keys after step 3; they are gone only when I run redis-cli FLUSHALL. How can I clear the keys by restarting the Redis service, first locally on my Mac laptop and then on the QA servers?
You see the keys after restart because there is either RDB or AOF persistence enabled. See https://redis.io/topics/persistence.
RDB is enabled by default. To disable persistence, you need to edit your redis.conf or start the server as redis-server --save "" --appendonly no
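Concretely, the sequence might look like this (the Homebrew config path is an assumption; it varies by installation):

    # Option 1: start with persistence disabled on the command line
    redis-server --save "" --appendonly no

    # Option 2: put the equivalent in redis.conf
    #   (often /usr/local/etc/redis.conf for Homebrew installs)
    #   save ""
    #   appendonly no
    # If an old dump.rdb already exists it will still be loaded once,
    # so remove it too, then restart:
    brew services restart redis

    # Verify: with no RDB/AOF to reload, the keyspace is empty after restart
    redis-cli KEYS '*'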
See "Is there a way to flushall on a cluster so all keys from master and slaves are deleted from the db" for how to use redis-cli to send the command to all cluster nodes.
As dizzyf indicates, use FLUSHALL ASYNC to have the deletion performed in the background. This will create fresh hash maps for each database, while the old ones are deleted (memory reclaimed) progressively by a background thread.
In Redis 4.0 and greater, the FLUSHALL ASYNC command was introduced as a way to delete all the keys in a non-blocking manner. Would this solve your issue?
https://redis.io/commands/flushall
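On a cluster the command still has to reach every master; a sketch using redis-cli (the node address is a placeholder, and --cluster call requires redis-cli from Redis 5.0 or newer):

    # Non-blocking flush on a single node (Redis >= 4.0)
    redis-cli FLUSHALL ASYNC

    # Broadcast to every node in the cluster; replicas will reject the write,
    # which is harmless since they mirror their masters anyway
    redis-cli --cluster call 127.0.0.1:7000 FLUSHALL ASYNC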
Thank you for the links; these were very helpful. I was able to achieve the result by making the equivalent changes in my redis.conf file: save "" and appendonly no. After these changes, when I now restart the Redis service, nothing is saved.

FLUSHDB does not clear all keys in Redis?

It seems there are still some keys left after I ran the Redis shell command FLUSHDB.
What are these keys used for, and why does FLUSHDB not work?
When Redis runs the FLUSHDB command, it blocks any new writes to the database while it flushes all keys in it. However, once the FLUSHDB command finishes, Redis accepts writes again, i.e. other Redis clients can put new keys into the database.
In your case, I think there are other clients constantly writing to the database. So after you flush the database, new keys are put into Redis by those clients.
If you want to stop any further writing, you have to shut down the Redis server.
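To confirm this, you can look at who is connected and what they are sending; a sketch:

    # List connected clients, their addresses, and the last command each ran
    redis-cli CLIENT LIST

    # Watch the live command stream for writes (expensive; run only briefly)
    redis-cli MONITOR | grep -iE 'set|incr|push|hset'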

How to do a redis FLUSHALL without initiating a sentinel failover?

We have a redis configuration with two redis servers. We also have 3 sentinels to monitor the two instances and initiate a fail over when needed.
We currently have a process where we periodically have to do a FLUSHALL on the Redis server. This is a blocking operation that takes longer than the time we have allotted for the sentinels to time out. In other words, our sentinel configuration has:
    sentinel down-after-milliseconds OurMasterName 5000
and doing a redis-cli FLUSHALL on the server takes > 5000 milliseconds, so the sentinels initiate a fail over.
We acknowledge that doing a FLUSHALL isn't great, and we also know that we could increase down-after-milliseconds, but for the purposes of this question assume that neither of these is an option.
The question is: how can we do a FLUSHALL (or equivalent operation) WITHOUT having our sentinels initiate a fail over due to the FLUSHALL blocking for greater than 5000 milliseconds? Has anyone encountered and solved this problem?
You could just create new instances: if you are using something like AWS or Azure, then you have an API for creating a new Redis cluster. Start it, load it with data, and once it is ready just modify the DNS, again with an API call, so all of this can be handled by some part of your application. But on premises, things can get more complex, because it will require some automation with Ansible/Chef/Puppet.
The next best option you currently have is to delete keys in batches to reduce the amount of work done at once. You can build a list of keys, assuming you don't have one, using SCAN, then delete in whatever batch size works for you, as sketched below.
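A sketch of the batch approach with redis-cli (the pattern and the batch size of 100 are assumptions; UNLINK needs Redis 4.0+ and reclaims memory in the background):

    # --scan iterates the keyspace with SCAN, so it never blocks for long
    # (note: keys containing spaces or newlines would break the xargs split)
    redis-cli --scan --pattern '*' | xargs -n 100 redis-cli UNLINK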
Edit: as you are not interested in keeping the data, disable persistence, delete the RDB file, then just restart the instance. This way you don't have to update sentinel as you would if you provisioned new hosts.
Out of curiosity, if you're just going to be flushing all the time and don't care about the data as you'll be wiping it, why bother with sentinel?