I have tried the method in this question, but it does not work since I'm working in cluster mode, and Redis told me:
(error) CROSSSLOT Keys in request don't hash to the same slot
Answers to that question try to remove multiple keys with a single DEL. However, keys matching the given pattern might NOT be located in the same slot, and Redis Cluster DOES NOT support multi-key commands when the keys don't belong to the same slot. That's why you get the error message.
In order to fix this problem, you need to DEL these keys one by one:
redis-cli --scan --pattern "foo*" | xargs -L 1 redis-cli del
The -L option of xargs sets how many input lines are passed to each command invocation. Set it to 1 so that each redis-cli del call gets exactly one key.
In order to remove all keys matching the pattern, you also need to run the above command against every master node in your cluster.
NOTE
With this command you have to delete the keys one by one, and that might be very slow. Consider re-designing your database and using hash tags so that keys matching the pattern belong to the same slot; then you can remove them with a single DEL.
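For illustration, a minimal sketch of the hash-tag idea (the {foo} tag and key names are invented): Redis Cluster hashes only the part between { and }, so these keys land in the same slot and a multi-key DEL is allowed:

redis-cli -c MSET "{foo}:a" 1 "{foo}:b" 2
redis-cli -c DEL "{foo}:a" "{foo}:b"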
Both SCAN and KEYS are inefficient; in particular, KEYS should not be used in production. Consider building an index for these keys instead.
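A minimal sketch of the index idea (the foo:index set name is hypothetical): record each key in a SET as you create it, then delete by reading the set instead of scanning:

redis-cli -c SADD foo:index "foo:1"                       # record the key at creation time
redis-cli -c SMEMBERS foo:index | xargs -L 1 redis-cli -c del   # delete via the index, not SCAN/KEYS
redis-cli -c DEL foo:index                                # finally drop the index itself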
Building on for_stack's answer, you can speed up mass deletion quite a bit using redis-cli --pipe, and reduce the performance impact by using UNLINK instead of DEL if you're on Redis 4 or higher.
redis-cli --scan --pattern "foo*" | xargs -L 1 echo UNLINK | redis-cli --pipe
Output will look something like this:
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 107003
You do still need to run this against every master node in your cluster. If you have a large number of nodes, the process can probably be automated further by parsing the output of CLUSTER NODES, as sketched below.
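A hedged sketch of that automation, assuming the usual CLUSTER NODES reply format (node id, address as ip:port@cport, flags, ...) and a reachable seed node (seed-node is a placeholder):

# Extract each master's ip:port, then run the scan-and-UNLINK pipeline on it
redis-cli -h seed-node CLUSTER NODES |
  awk '$3 ~ /master/ {split($2, a, "@"); print a[1]}' |
  while IFS=: read -r host port; do
    redis-cli -h "$host" -p "$port" --scan --pattern "foo*" |
      xargs -L 1 echo UNLINK |
      redis-cli -h "$host" -p "$port" --pipe
  done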
redis-cli provides a -c option to follow MOVED redirects. However, keys should still be deleted one at a time, because you cannot guarantee that any two keys will hash to the same slot.
redis-cli -h myredis.internal --scan --pattern 'mycachekey::*' | \
xargs -L 1 -d'\n' redis-cli -h myredis.internal -c del
The first part provides the list of keys; --scan prevents Redis from blocking. xargs -L 1 runs the command once per entry. -d'\n' makes xargs split on newlines only and disables quote processing, so a key like "SimpleKey[hello world]" is passed as a single argument instead of being split on the space into two keys.
Related
I want to delete fields in a Redis hash by pattern (more specifically, by prefix). I tried the following and it doesn't work.
hdel <some_key> <prefix>*
How can I achieve this?
By default, Redis does not provide a way to bulk remove keys that match a specific pattern. However, we can leverage the power of the command line to perform this action.
We will use xargs to build and run commands against Redis. An example is shown below:
redis-cli --scan --pattern "prefix*" | xargs redis-cli del
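Note that the command above deletes whole keys. For the question as asked, deleting fields inside one hash by prefix, HSCAN can play the role of SCAN. A hedged Bash sketch (some_key and prefix* are placeholders; it assumes field names without whitespace):

#!/bin/bash
KEY="some_key"
CURSOR=0
while :; do
  REPLY=$(redis-cli HSCAN "$KEY" "$CURSOR" MATCH "prefix*" COUNT 1000)
  CURSOR=$(echo "$REPLY" | head -n 1)    # first line of the reply is the next cursor
  echo "$REPLY" | tail -n +2 | awk 'NR % 2 == 1' |
    xargs -r redis-cli HDEL "$KEY"       # odd lines are field names, even lines are values
  [ "$CURSOR" = "0" ] && break           # cursor 0 signals the end of the scan
done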
I have hundreds of thousands of keys that start with html: that I need to clean up quickly. Instead of writing a Python script to do this, I was hoping for a simple command via redis-cli. I was surprised not to find one. I am assuming I can do something with xargs from the Linux terminal, but I also need to pass the auth password to Redis.
After some munging and experimentation, I found this combination, which removed all the keys whose names start with the given prefix. It ran very fast, which is no surprise since Redis is in-memory.
redis-cli -a <mypassword> --raw keys "html:*" | xargs redis-cli -a <mypassword> del
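Since KEYS blocks the server, a gentler variant (my adjustment, not part of the original answer) swaps in --scan, plus UNLINK if you are on Redis 4 or newer:

redis-cli -a <mypassword> --scan --pattern "html:*" | xargs -L 100 redis-cli -a <mypassword> unlink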
I want to check the total number of keys in a Redis Cluster.
Is there any direct command available to get this, or do I have to check with the INFO command on each instance/node?
There is no direct way.
You can do the following with the cli though:
redis-cli --cluster call one-cluster-node-ip-address:the-port DBSIZE
And then sum the results.
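If you want the sum in one shot, something like this may work; it is a sketch that assumes each result line ends with the node's DBSIZE number, so check your redis-cli's actual output format first:

redis-cli --cluster call one-cluster-node-ip-address:the-port DBSIZE |
  awk '$NF ~ /^[0-9]+$/ {sum += $NF} END {print sum}'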
Alternatively, there's RedisGears with which you can do the following to get the same result:
redis> RG.PYEXECUTE "GB().count().run()"
I am exploring the Redis benchmark tool and I looked at this web page: How fast is Redis?
I picked the command below as an example:
$ ./redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -q
I see that we can specify the type of operations, such as get, set, lpush etc. However, how do we know what datatypes are being used for values in these operations? Also, is there a way to specify your own data that can be consumed by the benchmark command?
how do we know what datatypes are being used for values in these operations?
By default, xxx (a 3-character string) is used as the value.
is there a way to specify your own data that can be consumed by the benchmark command?
You can use the command line argument -d to specify the length of the value, e.g. redis-benchmark -d 10 uses xxxxxxxxxx, i.e. 10 characters, as the value.
You can also explicitly specify the command you want to test, e.g. redis-benchmark -n 10000 SET your-key your-value runs the SET command 10000 times.
Run redis-benchmark --help to see other options.
Is there a command to move Redis keys from one database to another, or is it possible only with Lua scripting?
This type of question has been asked previously (redis move all keys), but the answers are not appropriate or convincing for a beginner like me.
You can use MOVE to move a key to another Redis database.
The text below is from redis.io:
MOVE key db
Move key from the currently selected database (see SELECT) to the specified destination database. When key already exists in the destination database, or it does not exist in the source database, it does nothing. It is possible to use MOVE as a locking primitive because of this.
Return value
Integer reply, specifically:
1 if key was moved.
0 if key was not moved.
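A minimal session as an example (the key name is illustrative):

redis> SET greeting "hello"
OK
redis> MOVE greeting 1
(integer) 1
redis> EXISTS greeting
(integer) 0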
I think this will do the job:
redis-cli keys '*' | xargs -I % redis-cli move % 1 > /dev/null
(1 is the new database number, and the redirection to /dev/null avoids printing millions of '1' lines, since the keys are moved one by one and each MOVE returns 1)
Beware that redis might run out of connections and then will display tons of such errors:
Could not connect to Redis at 127.0.0.1:6379: Cannot assign requested address
So it could be better (and much faster) to just dump the database and then import it into the new one.
If you have a big database with millions of keys, you can use the SCAN command to select all the keys without blocking, unlike the dangerous KEYS command that even the Redis authors do not recommend.
SCAN gives you the keys in "pages", one by one; the idea is to start at page 0 (formally, CURSOR 0) and continue with the next page/cursor until you hit the end (the stop signal is getting CURSOR 0 again).
You can use any popular language for this, like Ruby or Scala. Here is a draft using Bash scripting:
#!/bin/bash -e
REDIS_HOST=localhost
PAGE_SIZE=10000
KEYS_TO_QUERY="*"
SOURCE_DB=0
TARGET_DB=1
TOTAL=0
# CURSOR starts unset; SCAN begins at cursor 0 and is finished when the
# server returns cursor 0 again.
while [[ "$CURSOR" != "0" ]]; do
    CURSOR=${CURSOR:-0}
    >&2 echo "$TOTAL:$CURSOR"
    KEYS=$(redis-cli -h "$REDIS_HOST" -n "$SOURCE_DB" scan "$CURSOR" match "$KEYS_TO_QUERY" count "$PAGE_SIZE")
    unset CURSOR
    # The first line of the SCAN reply is the next cursor; the remaining
    # lines are the matched keys, which are MOVEd one by one.
    for KEY in $KEYS; do
        if [[ -z $CURSOR ]]; then
            CURSOR=$KEY
        else
            TOTAL=$(($TOTAL + 1))
            redis-cli -h "$REDIS_HOST" -n "$SOURCE_DB" move "$KEY" "$TARGET_DB"
        fi
    done
done
IMPORTANT: As usual, please do not copy and paste scripts without understanding what they do, so here are some details:
The while loop selects the keys page by page with the SCAN command and then runs the MOVE command on every key.
The SCAN command returns the next cursor in the first line of its reply; the rest of the lines are the found keys. The while loop starts with the variable CURSOR undefined, and it is set again during each iteration (this is the trick that makes the loop stop at the next CURSOR 0, which signals the end of the scan).
PAGE_SIZE controls how many keys each SCAN query asks for. Lower values put very little load on the server but are slow; bigger values make the server "sweat" but finish faster. The network is the bottleneck here, so try to find a sweet spot around 10000 or even 50000. (Ironically, values of 1 or 2 may also stress the server, because of the network overhead wrapping each query.)
KEYS_TO_QUERY: a pattern for the keys you want to query; for example, "*balance*" will select keys that include balance in the key name (don't forget the quotes, to avoid syntax errors). Alternatively, you can filter on the script side: query all keys with "*" and add a Bash if conditional. This is slower, but it helps if you cannot find a pattern that selects your keys.
REDIS_HOST: localhost by default; change it to any server you like (if you are using a custom port other than the default 6379, add a -p option to the redis-cli calls, e.g. -p 4739).
SOURCE_DB: the database ID you want to move the keys from (0 by default)
TARGET_DB: the database ID you want to move the keys to (1 by default)
You can use this script to execute other commands or checks with the keys; just replace the MOVE command call with anything you may need.
NOTE: To move keys from one Redis server to another Redis server (not just between internal databases), you can use redis-utils-cli from NPM: https://www.npmjs.com/package/redis-utils-cli