I want to delete all keys from a Redis cluster using a fast process. I know how to delete keys with "redis-cli FLUSHALL", but this command can be slow when the data set is large. I heard that all keys can be cleared from the Redis cache by restarting the Redis service. I am testing this process on my local Mac laptop, performing the following steps:
Setting a large number of keys on my local Redis server, e.g. redis-cli SET mykey1 "Hello"
Then restarting the Redis service with "brew services restart redis", in the hope that all keys will be deleted when the service comes back up
Then listing the keys with "redis-cli KEYS '*'"
I still see the keys after step 3.
The keys are gone only when I run redis-cli FLUSHALL. How can I clear the keys by restarting the Redis service, first locally on my Mac laptop and then on the QA servers?
You see the keys after a restart because either RDB or AOF persistence is enabled. See https://redis.io/topics/persistence.
RDB is enabled by default. To disable persistence, you need to edit your redis.conf or start the server with redis-server --save "" --appendonly no.
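For example, the relevant redis.conf settings would look like this (a minimal sketch; with Homebrew the config file is typically /usr/local/etc/redis.conf on Intel Macs or /opt/homebrew/etc/redis.conf on Apple Silicon, so verify your own path):
# redis.conf: disable RDB snapshots (also comment out any existing "save" lines)
save ""
# disable the append-only file
appendonly no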
See "Is there a way to flushall on a cluster so all keys from master and slaves are deleted from the db" for how to use redis-cli to send the command to all cluster nodes.
As dizzyf indicates, use FLUSHALL ASYNC to have the deletion performed in the background. This will create fresh hash maps for each database, while the old ones are deleted (memory reclaimed) progressively by a background thread.
In Redis 4.0 and greater, the FLUSHALL ASYNC command was introduced as a way to delete all the keys in a non-blocking manner. Would this solve your issue?
https://redis.io/commands/flushall
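For example:
$ redis-cli FLUSHALL ASYNC
The server replies OK immediately, and the memory is reclaimed by a background thread, as described above.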
Thank you for the links; they were very helpful. I was able to achieve the result by starting Redis with redis-server --save "" --appendonly no (equivalently, setting save "" and appendonly no in redis.conf). After these changes, nothing is saved when I restart the Redis service.
From the documentation, this seems to be how FLUSHALL should work, but in practice it is not working that way. When I use FLUSHALL, it only flushes the keys from the node the CLI is connected to.
From the Redis FLUSHALL documentation:
Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.
The time-complexity for this operation is O(N), N being the number of keys in all existing databases.
For example: when I start redis-cli in cluster mode and request a key, the CLI is redirected from node 7000 to node 7002, the node whose hash slot holds the key. If I then run FLUSHALL, it deletes the keys only on that server.
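For illustration, a session along these lines (the ports, key name, and slot number are just examples):
$ redis-cli -c -p 7000
127.0.0.1:7000> GET mykey
-> Redirected to slot [14687] located at 127.0.0.1:7002
"Hello"
127.0.0.1:7002> FLUSHALL
OK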
However, the other keys remain.
Is there a way to FLUSHALL, meaning delete all keys, across all masters and slaves?
Yes. You can use the CLI's --cluster switch with the call command - it will execute the provided command on each of the cluster's master nodes (and, since FLUSHALL is a write command, it will be replicated to their respective slaves).
This should do it:
$ redis-cli --cluster call <one-of-the-nodes-address>:<its-port> FLUSHALL --cluster-only-masters
Recently, I started to have some trouble with one of my Redis clusters: used_memory and used_memory_rss are increasing constantly.
After some Googling, I found the following discussion:
https://github.com/antirez/redis/issues/4570
Now I am wondering: is it safe to run the SCRIPT FLUSH command on my production Redis cluster?
Yes - you can run the SCRIPT FLUSH command safely in a production cluster. The only potential side effect is blocking the server while it executes. Note, however, that you'll want to call it on each of your nodes.
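One way to reach every node is a simple loop over your node list (a sketch; the host:port values are placeholders for your actual nodes):
for node in 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002; do
    # ${node%:*} is the host part, ${node#*:} is the port part
    redis-cli -h "${node%:*}" -p "${node#*:}" SCRIPT FLUSH
done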
It seems there are still some keys left after I ran FLUSHDB from the Redis CLI.
What are these keys used for, and why does FLUSHDB not work?
When Redis runs the FLUSHDB command, it blocks any new writes to the database and flushes all keys in it. However, once FLUSHDB finishes, Redis accepts new writes again, i.e. other Redis clients can put new keys into the database.
In your case, I think there are other clients constantly writing to the database, so after you flush it, new keys are put into Redis by those clients.
If you want to stop any further writes, you have to shut down the Redis server.
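To confirm that other clients are writing, you can watch the live command stream with MONITOR (it adds significant overhead, so use it only briefly):
$ redis-cli MONITOR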
We have a redis configuration with two redis servers. We also have 3 sentinels to monitor the two instances and initiate a fail over when needed.
We currently have a process where we periodically have to do a FLUSHALL on the redis server. This is a blocking operation that takes longer than the time we have allotted for the sentinels to timeout. In other words, we have our sentinel configuration with:
sentinel down-after-milliseconds OurMasterName 5000
and doing a redis-cli FLUSHALL on the server takes > 5000 milliseconds, so the sentinels initiate a fail over.
We acknowledge that doing a FLUSHALL isn't great, and we also know that we could increase down-after-milliseconds, but for the purposes of this question assume that neither of these is an option.
The question is: how can we do a FLUSHALL (or equivalent operation) WITHOUT having our sentinels initiate a fail over due to the FLUSHALL blocking for greater than 5000 milliseconds? Has anyone encountered and solved this problem?
You could just create new instances: if you are using something like AWS or Azure, you have an API for creating a new Redis cluster. Start it, load it with data, and once it is ready just modify the DNS, again with an API call, so all of this can be handled by some part of your application. On premises, things can get more complex, because it will require some automation with Ansible/Chef/Puppet.
The next best option you currently have is to delete keys in batches, to reduce the amount of work done at once. You can build a list of keys, assuming you don't have one, using SCAN, then delete in whatever batch size works for you.
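A minimal sketch of that approach in shell (the COUNT hint, the sleep interval, and the assumption that your keys contain no whitespace are all adjustable/assumed):
cursor=0
while : ; do
    # each SCAN call returns the next cursor on the first line, then a page of keys
    reply=$(redis-cli SCAN "$cursor" COUNT 500)
    cursor=$(echo "$reply" | head -n 1)
    keys=$(echo "$reply" | tail -n +2)
    # DEL accepts multiple keys, so delete the whole page in one call
    [ -n "$keys" ] && redis-cli DEL $keys
    # a short pause keeps the server responsive so the sentinels don't time out
    sleep 0.1
    # a full SCAN iteration ends when the cursor comes back to 0
    [ "$cursor" = "0" ] && break
done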
Edit: as you are not interested in keeping the data, disable persistence, delete the RDB file, then just restart the instance. This way you don't have to update Sentinel like you would if you went the provision-new-hosts route.
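Sketched out (the data directory, file name, and service manager here are assumptions; check the dir and dbfilename settings in your redis.conf for the real location):
# in redis.conf: set  save ""  and  appendonly no  to disable persistence
sudo rm /var/lib/redis/dump.rdb   # assumed <dir>/<dbfilename> location
sudo systemctl restart redis      # or however this instance is managed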
Out of curiosity, if you're just going to be flushing all the time and don't care about the data as you'll be wiping it, why bother with sentinel?
The Redis cache offered by Cloud Foundry has a small capacity, i.e. 16MB.
I know Redis has a FLUSHALL command which deletes all the keys in the cache. How do I do the same thing in Cloud Foundry?
You can recreate and rebind the service as you wish, unless you have specific configuration that cannot be migrated. (I assume services provisioned on CF.com are created the same way.)
Sending FLUSHALL through the Redis tunnel is another option, if you have vmc and the caldecott gem installed, as well as a local Redis client. Would you mind posting the error you see when you cannot connect?
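If the tunnel does work, the flow would look roughly like this (the service name is a placeholder, and vmc tunnel prompts you to pick a client):
$ vmc tunnel my-redis-service
# choose the redis-cli client when prompted, then in the opened prompt run:
FLUSHALL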