I am saving 1,000 records in Redis, but when I check DBSIZE in redis-cli it reports exactly double that number. Does anyone have any idea why the count is doubled? Thanks!
When using multiple databases on a single Redis instance whose memory is full, inserting new data makes Redis sample a number of keys and apply an algorithm to them to determine which ones should be evicted.
But if I'm using db0 and db1 and I try to insert a new record into db1, will Redis sample keys from the same database, or does it sample them globally?
When it performs eviction, Redis chooses eviction candidates from all databases.
In your case, it might evict keys from either db0 or db1.
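To see the global behaviour in practice, here is a minimal Jedis sketch against a hypothetical local instance (the host, memory cap, and key counts are assumptions for illustration, not the exact eviction internals):

import redis.clients.jedis.Jedis;

public class EvictionDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Cap memory and enable an LRU policy that considers ALL keys.
            jedis.configSet("maxmemory", "2mb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");

            String padding = new String(new char[100]).replace('\0', 'x');
            jedis.select(0);
            for (int i = 0; i < 10000; i++) jedis.set("db0:" + i, padding);
            jedis.select(1);
            for (int i = 0; i < 10000; i++) jedis.set("db1:" + i, padding);

            // Both databases typically end up short of 10000 keys,
            // showing that eviction candidates are picked globally.
            jedis.select(0);
            System.out.println("db0 keys left: " + jedis.dbSize());
            jedis.select(1);
            System.out.println("db1 keys left: " + jedis.dbSize());
        }
    }
}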
I want to check the total number of keys in a Redis Cluster.
Is there any direct command to get this, or do I have to run the INFO command on each instance/node?
There is no direct way.
You can do the following with the CLI, though:
redis-cli --cluster call one-cluster-node-ip-address:the-port DBSIZE
And then sum the results.
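If you prefer doing the summing in code, here is a hedged Jedis sketch (assuming Jedis 3.x, where getClusterNodes() returns a map of JedisPool; the seed host and port are placeholders):

import java.util.Map;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPool;

public class ClusterDbSize {
    public static void main(String[] args) {
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("127.0.0.1", 7000))) {
            long total = 0;
            // getClusterNodes() maps "host:port" to a connection pool per node.
            for (Map.Entry<String, JedisPool> entry : cluster.getClusterNodes().entrySet()) {
                try (Jedis node = entry.getValue().getResource()) {
                    // Count masters only; replicas hold copies of the same keys.
                    if (node.info("replication").contains("role:master")) {
                        total += node.dbSize();
                    }
                }
            }
            System.out.println("Total keys in cluster: " + total);
        }
    }
}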
Alternatively, there's RedisGears with which you can do the following to get the same result:
redis> RG.PYEXECUTE "GB().count().run()"
I have tried the method in this question, but it does not work since I'm working in cluster mode, and Redis told me:
(error) CROSSSLOT Keys in request don't hash to the same slot
Answers to that question try to remove multiple keys in a single DEL. However, keys matching the given pattern might NOT be located in the same slot, and Redis Cluster does NOT support multi-key commands when the keys don't all belong to the same slot. That's why you get the error message.
In order to fix this problem, you need to DEL these keys one-by-one:
redis-cli --scan --pattern "foo*" |xargs -L 1 redis-cli del
The -L option tells xargs how many input lines (i.e. keys) to pass to each del invocation. You need to set it to 1 so that each key is deleted individually.
In order to remove all keys matching the pattern, you also need to run the above command against every master node in your cluster.
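The same one-key-at-a-time approach translates directly to code. A hedged Jedis sketch (the node address and SCAN batch size are assumptions), to be run once per master:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class DeleteByPattern {
    // Deletes keys matching a pattern on ONE node; run it against every master.
    static void deleteMatching(Jedis node, String pattern) {
        ScanParams params = new ScanParams().match(pattern).count(500);
        String cursor = ScanParams.SCAN_POINTER_START;
        do {
            ScanResult<String> batch = node.scan(cursor, params);
            for (String key : batch.getResult()) {
                node.del(key); // one key per DEL, so no CROSSSLOT error
            }
            cursor = batch.getCursor();
        } while (!"0".equals(cursor));
    }

    public static void main(String[] args) {
        try (Jedis node = new Jedis("127.0.0.1", 7000)) {
            deleteMatching(node, "foo*");
        }
    }
}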
NOTE
With this command, you have to delete the keys one-by-one, which can be very slow. Consider re-designing your database with hash tags so that keys matching the pattern belong to the same slot; then you can remove them all with a single DEL.
Both SCAN and KEYS are inefficient, and KEYS in particular should not be used in production. Consider building an index for these keys instead.
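To illustrate the hash-tag idea (the key names are made up, and jedisCluster stands for an existing JedisCluster instance): only the part between { and } is hashed, so keys sharing a tag land in the same slot and a multi-key DEL becomes legal:

// All three keys hash on "foo" and therefore share one slot.
jedisCluster.set("{foo}:1", "a");
jedisCluster.set("{foo}:2", "b");
jedisCluster.set("{foo}:3", "c");
jedisCluster.del("{foo}:1", "{foo}:2", "{foo}:3"); // single DEL, no CROSSSLOT error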
Building on for_stack's answer, you can speed up mass deletion quite a bit using redis-cli --pipe, and reduce the performance impact by using UNLINK instead of DEL if you're on Redis 4 or higher.
redis-cli --scan --pattern "foo*" | xargs -L 1 echo UNLINK | redis-cli --pipe
Output will look something like this:
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 107003
You do still need to run this against every master node in your cluster. If you have a large number of nodes, it's probably possible to automate the process further by parsing the output of CLUSTER NODES.
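As a starting point for that automation, here is a hedged Jedis sketch that parses CLUSTER NODES and prints each master's address (the seed host is a placeholder; the flags-field position follows the documented CLUSTER NODES format):

import redis.clients.jedis.Jedis;

public class ListMasters {
    public static void main(String[] args) {
        // Ask any one node for the topology; every node knows the full cluster.
        try (Jedis seed = new Jedis("127.0.0.1", 7000)) {
            for (String line : seed.clusterNodes().split("\n")) {
                String[] fields = line.split(" ");
                // The third field carries flags such as "master" or "myself,master".
                if (fields.length > 2 && fields[2].contains("master")
                        && !fields[2].contains("fail")) {
                    // Strip the "@cluster-bus-port" suffix, keep "host:port".
                    System.out.println(fields[1].split("@")[0]);
                }
            }
        }
    }
}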
redis-cli provides a -c option to follow MOVED redirects. However, the keys should still be deleted one at a time because you cannot guarantee that two keys will be on the same node.
redis-cli -h myredis.internal --scan --pattern 'mycachekey::*' | \
xargs -L 1 -d'\n' redis-cli -h myredis.internal -c del
The first part produces the list of keys; --scan iterates incrementally, so Redis is not blocked the way it would be by KEYS. xargs -L 1 runs the command for one entry at a time, and -d'\n' disables quote processing so that quoted strings like "SimpleKey[hello world]" are passed through intact; otherwise the spaces would make xargs treat them as two keys.
I am using a Lua script to do 2 operations belonging to the same key, running Redis in cluster mode, and using the Java Jedis library to connect to the Redis cluster.
The syntax for loading the Lua script is as below:
jedisCluster.loadScript(<ScriptString>, <Key>);
It returns a SHA value, which I can then use with the evalsha function on the Jedis cluster as below:
jedisCluster.evalsha(<ShaValue>, <Key Count>, <key>)
I am handling the NOSCRIPT error when executing the above method, and I will load the script again when it occurs.
Question: if I load the same script with different key values, will the SHA value be different? If the two keys land on different cluster nodes, is the SHA value different?
I am trying to save this SHA value as a string and use it for all keys.
I know the SHA of a given string will be the same, but I am not sure whether Redis adds any extra information to the script before generating the SHA.
The SHA1 sum of the script will always be the same for the same script (you can also compute it externally, e.g. using the sha1sum tool). This remains true in single-instance and cluster modes, regardless of the number of keys and arguments that the script gets as input.
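You can verify this yourself by computing the SHA1 locally and comparing it with what SCRIPT LOAD returns (a minimal sketch; the script text is arbitrary):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ScriptSha {
    public static void main(String[] args) throws Exception {
        String script = "return redis.call('GET', KEYS[1])";
        // SHA1 over the raw script text, exactly what Redis computes.
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        StringBuilder sha = new StringBuilder();
        for (byte b : md.digest(script.getBytes(StandardCharsets.UTF_8))) {
            sha.append(String.format("%02x", b));
        }
        System.out.println("local sha1: " + sha);
        // jedisCluster.scriptLoad(script, anyKey) returns this same digest,
        // no matter which key (and therefore which node) routes the call.
    }
}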
I see in the doc at http://redis.io/commands/dump that the result of the DUMP command is described as follows:
Values are encoded in the same format used by RDB.
So, is it possible to recover data from an RDB file with the RESTORE command?
No - the RESTORE command is indeed DUMP's complement, but it only works on a single key. An RDB file, on the other hand, potentially contains many keys and is loaded only when the Redis server starts.
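What RESTORE can do is round-trip a single key's DUMP payload, for example with Jedis (the key names and host are placeholders):

import redis.clients.jedis.Jedis;

public class DumpRestoreDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.set("src", "hello");
            byte[] payload = jedis.dump("src"); // RDB-encoded value of ONE key
            jedis.del("dst");                   // RESTORE fails if the target exists
            jedis.restore("dst", 0, payload);   // ttl 0 = no expiry
            System.out.println(jedis.get("dst")); // prints "hello"
        }
    }
}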