I'm trying to delete all keys on both the Redis master and its slave, but when I execute FLUSHALL or FLUSHDB from redis-cli on the master it deletes keys only on the master; likewise, running it on the slave deletes keys only on the slave.
What command should I use to delete all keys on both the master and the slave(s)?
Do I need to enable cluster support for this? It is currently disabled in my setup.
One more question: why is FLUSHALL or FLUSHDB issued from redis-cli not replicated?
You only need to call FLUSHALL or FLUSHDB on every master to remove all keys; when a master syncs with its slaves, the slaves will remove all keys as well. However, you must ensure that the connection between master and slave is alive.
If you call these two commands on a slave (the slave must be writable, of course), the keys on the master won't be removed.
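A quick way to confirm this behaviour from the command line, assuming a running master/slave pair (the host names and ports below are placeholders for your own setup):

```shell
# FLUSHALL is a write command, so issuing it on the master
# propagates to every connected replica.
redis-cli -h master-host -p 6379 FLUSHALL

# Verify on a replica: DBSIZE should report 0 once replication catches up.
redis-cli -h slave-host -p 6380 DBSIZE
```

If DBSIZE on the replica is non-zero, check that the replication link is up before assuming the command was not replicated.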
Related
I want to delete all keys from a Redis cluster as quickly as possible. I know I can delete the keys with redis-cli FLUSHALL, but that command can be slow when the data set is large. I heard that all keys can be cleared from the Redis cache by restarting the Redis service. I am testing this process on my local Mac laptop, performing the following steps:
1. Set a large number of keys on my local Redis server, for example with redis-cli SET mykey1 "Hello"
2. Restart the Redis service with "brew services restart redis", in the hope that all keys will be deleted when the service comes back up
3. Fetch the keys with redis-cli KEYS '*'
I still see the keys after step 3.
The keys are gone only when I run redis-cli FLUSHALL. How can I clear the keys by restarting the Redis service, first locally on my Mac laptop and then on the QA servers?
You see the keys after a restart because either RDB or AOF persistence is enabled. See https://redis.io/topics/persistence.
RDB is enabled by default. To disable persistence, edit your redis.conf or start the server as redis-server --save "" --appendonly no
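A minimal sketch of the two ways to disable persistence, so that nothing survives a restart:

```shell
# Option 1: override persistence at start-up (no config edit needed).
# --save "" disables RDB snapshots, --appendonly no disables the AOF.
redis-server --save "" --appendonly no

# Option 2: the equivalent settings in redis.conf:
#   save ""
#   appendonly no
```

Note that with persistence disabled, a restart loses all data, which is exactly the behaviour the question is after, but not something you would normally want on a production server.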
See Is there a way to flushall on a cluster so all keys from master and slaves are deleted from the db for how to use redis-cli to send the command to all cluster nodes.
As dizzyf indicates, use FLUSHALL ASYNC to have the deletion performed in the background. This will create fresh hash maps for each database, while the old ones are deleted (memory reclaimed) progressively by a background thread.
In Redis 4.0 and greater, the FLUSHALL ASYNC command was introduced as a way to delete all the keys in a non-blocking manner. Would this solve your issue?
https://redis.io/commands/flushall
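For example (requires Redis 4.0+; the command returns immediately while memory is reclaimed in the background):

```shell
# Non-blocking flush: the server stays responsive while a background
# thread progressively frees the old keyspace.
redis-cli FLUSHALL ASYNC
```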
Thank you for the links; they were very helpful. I was able to achieve the result by starting Redis with redis-server --save "" --appendonly no. After these changes, nothing is persisted when I restart the Redis service.
From the documentation this seems to be how FLUSHALL should work, but in practice it does not. When I use FLUSHALL it only flushes the keys from the node the CLI is connected to.
Redis flushall documentation
Delete all the keys of all the existing databases, not just the currently selected one. This command never fails.
The time-complexity for this operation is O(N), N being the number of keys in all existing databases.
For example, when my cluster redis-cli has started and I look up a key, the CLI is redirected from node 7000 to node 7002 (the node whose hash slot holds the key). If I then run FLUSHALL, it deletes the keys on that node only.
The keys on the other nodes remain.
Is there a way to flushall, meaning delete all keys across all masters and slaves?
Yes. You can use the CLI's --cluster switch with the call command - it will execute the provided command on each of the cluster's master nodes (and, since FLUSHALL is a write command, it will be replicated to their respective slaves).
This should do it:
$ redis-cli --cluster call <one-of-the-nodes-address>:<its-port> FLUSHALL --cluster-only-masters
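To confirm the flush reached every node, the same mechanism can run a read-only check afterwards (the node address is a placeholder, as above):

```shell
# Run DBSIZE on every master in the cluster; each node should report 0
# once the flush and replication have completed.
redis-cli --cluster call <one-of-the-nodes-address>:<its-port> DBSIZE
```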
I have a redis setup with sentinels and multiple slaves, each slave as well as the master writes persistently to a snapshot file.
When I restart the system, every slave has more keys in its instance than the master has (but fewer keys than are present in its snapshot file), and I do not understand why.
1) Does a slave ever read the snapshot file at startup, or does it only sync with the master?
2) I never copy my snapshot files; does this lead to overwrite problems?
3) If I have keys with an EXPIRATION, are those removed from the snapshot file at the corresponding time?
1) Does a slave ever read the snapshot file at startup, or does it only sync with the master?
When a slave restarts, it loads the snapshot (RDB) file from disk if there's no AOF file.
2) I never copy my snapshot files; does this lead to overwrite problems?
It has nothing to do with copying.
3) If I have keys with an EXPIRATION, are those removed from the snapshot file at the corresponding time?
When Redis loads the RDB file, if a key has an expiration, Redis adds the key-value pair to the dict and sets the expiration for that key, regardless of whether the key has already expired (expired keys will be removed later).
When I restart the system, every slave has more keys in its instance than the master has
The slaves might NOT have fully synced with the master before they were shut down, while some keys had already been deleted from the master in the meantime. After re-syncing with the master, those keys will be deleted from the slaves as well.
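One way to check whether a slave has caught up is to inspect its replication state (the host and port below are placeholders):

```shell
# On the slave: master_link_status should be "up", and
# slave_repl_offset should match the master's master_repl_offset.
redis-cli -h slave-host -p 6380 INFO replication
```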
It seems there are still some keys left after I ran the Redis shell command FLUSHDB.
What are these keys used for, and why does FLUSHDB not work?
When Redis runs the FLUSHDB command, it blocks any new writes to the database and flushes all keys in it. However, once FLUSHDB finishes, the server accepts writes again, i.e. other Redis clients can put new keys into the database.
In your case, I think other clients are constantly writing to the database, so after you flush it, new keys are put into Redis by those clients.
If you want to stop any further writes, you have to shut down the Redis server.
We have an application here that uses a Redis cluster to hold some short-lived keys, on the order of 2 seconds. While we want the master to fail over to the slave if something happens to the master, short-term loss of this ephemeral data is unimportant. To save bandwidth we'd like to disable replication between the master and slave. Is there any master/slave configuration in a cluster that can make this happen?