It seems there are still some keys left after I ran the Redis shell command FLUSHDB.
What are these keys used for, and why does FLUSHDB not work?
When Redis runs the FLUSHDB command, it blocks any new writes to the database and flushes all keys in it. However, once FLUSHDB finishes, Redis accepts new writes again, i.e. other Redis clients can put new keys into the database.
In your case, I think there are other clients constantly writing to the database, so after you flush it, new keys are put into Redis by those clients.
If you want to stop any further writes, you have to shut down the Redis server.
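If you want to confirm that other clients are writing, you can watch the incoming commands for a moment (note that MONITOR adds noticeable overhead, so use it only briefly):
$ redis-cli CLIENT LIST    # lists the currently connected clients
$ redis-cli MONITOR        # streams every command the server receives; Ctrl-C to stop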
Related
I want to delete all keys from a Redis cluster using a fast process. I know how to delete the keys with "redis-cli FLUSHALL", but this command can be slow when the data set is large. I heard that all keys can be cleared from the Redis cache by restarting the Redis service. I am testing this process on my local Mac laptop, performing the following steps:
1. Setting a large number of keys on my local Redis server, e.g. redis-cli SET mykey1 "Hello"
2. Restarting the Redis service with "brew services restart redis", in the hope that all keys will be deleted when the service comes back up
3. Getting the keys with "redis-cli KEYS '*'"
I still see the keys after step 3.
The keys are gone only when I run redis-cli FLUSHALL. How can I clear the keys by restarting the Redis service locally on my Mac laptop first? Then I will try it on the QA servers.
You see the keys after restart because either RDB or AOF persistence is enabled. See https://redis.io/topics/persistence.
RDB is enabled by default. To disable persistence, edit your redis.conf or start the server as redis-server --save "" --appendonly no
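The equivalent redis.conf directives, if you prefer the config-file route, are:
save ""
appendonly no
With both disabled, a restarted instance has nothing to load and comes up empty.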
See "Is there a way to flushall on a cluster so all keys from master and slaves are deleted from the db" on how to use redis-cli to send the command to all cluster nodes.
As dizzyf indicates, use FLUSHALL ASYNC to have the deletion performed in the background. This will create fresh hash maps for each database, while the old ones are deleted (memory reclaimed) progressively by a background thread.
In Redis 4.0 and greater, the FLUSHALL ASYNC command was introduced as a way to delete all the keys in a non-blocking manner. Would this solve your issue?
https://redis.io/commands/flushall
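For example:
$ redis-cli FLUSHALL ASYNC
The call returns OK immediately, and the old keyspace is reclaimed by a background thread.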
Thank you for the links, they were very helpful. I was able to achieve the result by making changes to my redis.conf, i.e. the equivalent of redis-server --save "" --appendonly no. After these changes, nothing is saved when I restart the Redis service.
From the documentation, this seems to be how FLUSHALL should work, but in practice it does not: when I use FLUSHALL, it only flushes the keys from the node the CLI is connected to.
Redis flushall documentation
Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.
The time-complexity for this operation is O(N), N being the number of keys in all existing databases.
For example, if I start redis-cli in cluster mode and look up a key, the CLI is redirected from node 7000 to node 7002, the server that holds the key's hash slot. If I then run FLUSHALL, it deletes the keys on that server only.
However, the keys on the other nodes remain.
Is there a way to flushall, meaning delete all keys, across all masters and slaves?
Yes. You can use the CLI's --cluster switch with the call command - it will execute the provided command on each of the cluster's master nodes (and, since FLUSHALL is a write command, it will replicate to their respective slaves).
This should do it:
$ redis-cli --cluster call <one-of-the-nodes-address>:<its-port> FLUSHALL --cluster-only-masters
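For example, against a local cluster (the address is a placeholder for any one of your nodes; --cluster call needs redis-cli 5.0+, and the --cluster-only-masters flag a newer cli still):
$ redis-cli --cluster call 127.0.0.1:7000 FLUSHALL --cluster-only-masters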
I'm trying to delete all keys on both the Redis master and slave, but when I execute FLUSHALL or FLUSHDB from redis-cli on the master, it deletes keys only on the master; vice versa, if I delete keys on the slave, it deletes them only on the slave.
What command should I use to delete all keys both on master and slave(s)?
Do I need to enable cluster support for this? Because it is currently disabled in my setup.
One more question: why is there no replication when using FLUSHALL or FLUSHDB from redis-cli?
You only need to call FLUSHALL or FLUSHDB on the master to remove all keys; when the master syncs with its slaves, the slaves will remove all keys as well. However, you must ensure that the connection between master and slave is alive.
If you call these two commands on a slave (the slave must be writable, of course), the keys on the master won't be removed.
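A quick way to verify the replication behavior (the host names here are placeholders for your own):
$ redis-cli -h master.example.local FLUSHALL
$ redis-cli -h replica.example.local DBSIZE    # should report 0 once the flush has replicated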
We have a Redis configuration with two Redis servers. We also have 3 sentinels to monitor the two instances and initiate a failover when needed.
We currently have a process where we periodically have to do a FLUSHALL on the Redis server. This is a blocking operation that takes longer than the time we have allotted for the sentinels to time out. In other words, we have our sentinel configuration with:
sentinel down-after-milliseconds OurMasterName 5000
and doing a redis-cli FLUSHALL on the server takes > 5000 milliseconds, so the sentinels initiate a failover.
We acknowledge that doing a FLUSHALL isn't great, and we also know that we could increase down-after-milliseconds, but for the purposes of this question assume that neither of these is an option.
The question is: how can we do a FLUSHALL (or an equivalent operation) WITHOUT our sentinels initiating a failover because the FLUSHALL blocks for more than 5000 milliseconds? Has anyone encountered and solved this problem?
You could just create new instances: if you are using something like AWS or Azure, then you have an API for creating a new Redis cluster. Start it, load it with data, and once it is ready just modify the DNS, again with an API call, so all of this can be handled by some part of your application. On premises, things can get more complex because it will require some automation with Ansible/Chef/Puppet.
The next best option you currently have is to delete keys in batches to reduce the amount of work done at once. You can build a list of keys, assuming you don't have one, using SCAN, then delete in whatever batch size works for you, as in the sketch below.
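A minimal sketch of that, using redis-cli's built-in --scan helper and UNLINK (Redis 4.0+; fall back to DEL on older servers, and note that key names containing spaces would need extra quoting):
$ redis-cli --scan --pattern '*' | xargs -L 500 redis-cli UNLINK
Tune the batch size (-L 500 here) so each call stays well under your 5000 ms sentinel timeout.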
Edit: as you are not interested in keeping the data, disable persistence, delete the RDB file, then just restart the instance. This way you don't have to update Sentinel like you would if you provisioned new hosts.
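A sketch of that restart-based wipe, assuming a service manager restarts the process and that the RDB path matches your "dir"/"dbfilename" settings (remove the AOF file too if appendonly is enabled):
$ rm /var/lib/redis/dump.rdb    # example path; check your redis.conf
$ redis-cli SHUTDOWN NOSAVE     # exit without writing a snapshot; the restarted instance comes up empty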
Out of curiosity, if you're just going to be flushing all the time and don't care about the data, why bother with Sentinel?
I have a Redis instance with some persistent keys and a lot of keys that have an expiration set. Then I execute the BGSAVE command to get an RDB file. As a result, the file contains all keys, persistent and non-persistent.
When I restart Redis, all the non-persistent keys have turned into persistent keys.
How can I avoid this?
I want all non-persistent keys to stay non-persistent.
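A minimal way to check whether a TTL survives the save/restart cycle (the key names are just examples):
$ redis-cli SET session:42 "payload" EX 3600    # non-persistent key with a one-hour TTL
$ redis-cli SET settings "payload"              # persistent key, no TTL
$ redis-cli BGSAVE
# restart Redis, then:
$ redis-cli TTL session:42    # a positive number means the TTL was preserved; -1 means the key became persistent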