Redis: Is there a way to get a difference of Keyspace?

info keyspace
It's currently incremental and purged at the end of the month, but I want per-day values between specified times for some KPI analysis.

Set up a cron job of:
redis-cli -h host -p port info keyspace | grep db0 | sed 's/.*keys=\([0-9]*\).*/\1/' | xargs redis-cli -h host -p port set metric:keys:$(date "+%y-%m-%d-%H")
This will build up a set of keys in Redis, each holding the metric for a specific hour.
~$ redis-cli -h host -p port get metric:keys:18-06-15-12
"25"
This one-liner fetches the keyspace info, filters the line for db0 (change this to whichever database you're interested in), extracts the key count, and writes it back to Redis as a metric. You could also store the samples in a hash so the metric keys themselves don't affect your count, but on an instance with 1M+ keys a couple of extra keys won't matter. Or you can store them in another db if you want.
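For completeness, the crontab entry for hourly sampling might look like the sketch below (host and port are placeholders as above; note that % must be escaped as \% inside a crontab, and redis-cli must be on cron's PATH):
# m h dom mon dow command -- sample the db0 key count at the top of every hour
0 * * * * redis-cli -h host -p port info keyspace | grep db0 | sed 's/.*keys=\([0-9]*\).*/\1/' | xargs redis-cli -h host -p port set metric:keys:$(date "+\%y-\%m-\%d-\%H")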

Related

redis-cli --pipe yields MOVED errors when bulk uploading to Elasticache with cluster-mode enabled

I am trying to use redis-cli --pipe to bulk upload some commands to my AWS Elasticache for redis cluster. The commands come from parsing a file via a custom awk command, which helps generate some HSET commands. The awk command is in a custom shell script. When my Elasticache for redis server had cluster-mode disabled, doing something like the following worked like a charm:
sh script_containing_awk.sh $FILE_TO_PARSE | redis-cli -h <Primary_endpoint> -p <port> --tls --cacert <path/to/cert> --pipe
Due to an internal project requirement, the Elasticache for Redis server has been re-created with cluster-mode enabled, and hence I am adding the -c flag to the above command to specify as such.
I see the following results when trying to work with my Elasticache for Redis server with cluster-mode enabled:
I can connect to the cluster via the configuration endpoint no problem!
Single-command uploads work (e.g. redis-cli -h <config_endpoint> -p <port> -c --tls --cacert <path/to/certs> SET key value)
It would be extremely convenient to just pipe output from my script to the cli:
sh script_containing_awk.sh $FILE_TO_PARSE | redis-cli -h <config_endpoint> -p <port> -c --tls --cacert <path/to/cert> --pipe
but adding the --pipe flag results in "MOVED" errors.
I have tried modifying the script to use hash tags (e.g. HSET {user1}:hash field1 val1 field2 val2 ...) to try to force keys into the same cluster slot, but I still get the "MOVED" errors; besides, I am attempting to bulk upload millions of keys, so they wouldn't all fit in the same slot anyway.
Does anyone have experience getting --pipe to work with cluster-mode enabled Redis/Elasticache?
Thanks!
I am sure you understand that the core difference between Cluster Mode Disabled and Cluster Mode Enabled is that your total key slots are split across shards.
Just to put it in context:
CMD - Let's say we have a 4-node cluster with 1 primary and 3 replicas.
If we have 100 key slots:
All 100 key slots are present on every node; 3 of the nodes serve read-only commands, and 1 node serves all commands.
CME - Let's say we have 4 nodes split into 2 shards, each with 1 primary and 1 replica.
We can look at these as logical sub-clusters, i.e. they hold different sets of key slots, ideally a 50-50 split.
Now, the MOVED message is not necessarily an error.
When you connect to the configuration endpoint, by default you are connected to one of the primary nodes (chosen at random at first).
When you issue a command, the client sends it to that node, which decides whether it owns the hash slot needed to serve the command.
As explained here, if the node does not own the hash slot your client needs, it redirects you with a MOVED message.
So, I would assume MOVED messages are somewhat expected with CME clusters.
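If you do need --pipe with CME, one workaround is to pre-route each command to the primary that owns its key, so no redirect is ever needed. A rough, untested sketch (endpoint, port, and cert path are placeholders, and it assumes one command per line with the key as the second whitespace-separated field):
#!/bin/bash
# Bucket mass-insert commands by hash slot, then pipe each bucket to the
# node that owns it. One CLUSTER KEYSLOT round trip per line: slow for
# millions of keys (computing CRC16 locally would be faster), but it
# illustrates the idea.
EP=config_endpoint; PORT=6379; CACERT=path/to/cert
while read -r cmd key rest; do
  slot=$(redis-cli -h "$EP" -p "$PORT" --tls --cacert "$CACERT" cluster keyslot "$key")
  printf '%s %s %s\n' "$cmd" "$key" "$rest" >> "bucket.$slot"
done < commands.txt
# Map each slot to its owning primary via CLUSTER SLOTS (or CLUSTER SHARDS),
# then for each bucket:
#   redis-cli -h <slot_owner> -p "$PORT" --tls --cacert "$CACERT" --pipe < bucket.<slot>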

Keys in Redis cluster cannot be deleted (and have empty value)

We have a Redis cluster (3 master, 3 slave) and are seeing a large number of keys on one of the master nodes (and related slaves) which appear to be empty and cannot be deleted.
If I connect to the master node which has a large number of entries (as determined by DBSIZE), I can SCAN for and see the keys:
--> redis-cli -h 192.168.100.81 -p 6381
192.168.100.81:6381> scan 0 match mykey-* count 10
1) "2359296"
2) 1) "mykey-1be333a7"
2) "mykey-e85a9d31"
3) "mykey-d9162eff"
4) "mykey-41d12fd8"
5) "mykey-a6e755d3"
6) "mykey-c2aa1eaa"
7) "mykey-c0597cac"
8) "mykey-10e69376"
9) "mykey-7263aef0"
10) "mykey-7fa9de50"
However, if I try to GET the value of a key, it shows that it has moved:
192.168.100.81:6381> get mykey-1be333a7
(error) MOVED 8301 192.168.3.107:6380
If I connect to the node to which the key has moved, I am not able to GET or DEL the value:
--> redis-cli -h 192.168.3.107 -p 6380
192.168.3.107:6380> get mykey-1be333a7
(nil)
192.168.3.107:6380> del mykey-1be333a7
(integer) 0
I am also unable to GET or DEL the value using the cluster (-c) flag for redis-cli:
--> redis-cli -h 192.168.100.81 -p 6381 -c get mykey-1be333a7
(nil)
--> redis-cli -h 192.168.100.81 -p 6381 -c del mykey-1be333a7
(integer) 0
--> redis-cli -h 192.168.3.107 -p 6380 -c get mykey-1be333a7
(nil)
--> redis-cli -h 192.168.3.107 -p 6380 -c del mykey-1be333a7
(integer) 0
What can I do to remove these types of keys?
Interesting question!
How to reproduce the problem
The following is a scenario that will reproduce your problem:
You create a standalone Redis instance and set some data in it. Then, one day, you configure this standalone Redis as a member of a Redis Cluster without flushing the old data or migrating it to the correct node of the cluster. Let's call this data dirty data.
In this scenario, you can SCAN all keys on this instance, including the dirty data. However, you cannot read or write the dirty data: since Redis is in cluster mode, it redirects your request to the node that owns the slot, and that node doesn't have the data.
How to solve the problem
To remove this dirty data, reconfigure your Redis instance into standalone mode (i.e. cluster-enabled no), delete the dirty data while it is standalone, and then have it join the cluster again.
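A rough sketch of that procedure (the config path and addresses here are assumptions based on the question's setup; make sure the pattern only matches the dirty keys before deleting):
# 1. Take the affected node out of cluster mode and restart it.
sed -i 's/^cluster-enabled yes/cluster-enabled no/' /etc/redis/redis.conf
# (restart Redis via your service manager)
# 2. Delete the dirty keys while the node is standalone (no MOVED redirects now).
redis-cli -h 192.168.100.81 -p 6381 --scan --pattern 'mykey-*' | xargs redis-cli -h 192.168.100.81 -p 6381 del
# 3. Re-enable cluster mode and restart; with its nodes.conf intact the node
#    should rejoin on its own, otherwise use CLUSTER MEET <peer_ip> <peer_port>.
sed -i 's/^cluster-enabled no/cluster-enabled yes/' /etc/redis/redis.conf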

Why does Redis scan return keys which belong to another node

When I scan Redis using the command below:
redis-cli -h <redis_master_ip> -p 6379 --scan --pattern '*'
it returns keys which belong to this node, but it also returns many keys which belong to another Redis node. Therefore, if I run the command below:
redis-cli -h <redis_master_ip> -p 6379 object freq <some_keys_from_scan>
I get an error like Error: MOVED 90 <another_redis_master_ip>:6379
For the same reason, I get the same error when running:
redis-cli -h <redis_master_ip> -p 6379 --hotkeys
Note that both <redis_master_ip> and <another_redis_master_ip> are part of the Redis cluster.
The documentation at https://redis.io/commands/scan defines SCAN as: "iterates the set of keys in the currently selected Redis database". My understanding is that it should only scan the keys that belong to the current node. My Redis cluster is 6.0.10.
Does anybody know why executing SCAN returns the keys of another node? I am only interested in getting the keys of this node.
I see another link mentioning the same issue, but with no solution yet: https://github.com/redis/redis/issues/4810
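In the meantime, one way to narrow the output to keys the node actually owns is to filter the SCAN results against the node's own slot ranges. A slow, purely illustrative sketch (one CLUSTER KEYSLOT round trip per key; host and port are placeholders, and migrating/importing slot markers are ignored):
HOST=redis_master_ip; PORT=6379
# Slot ranges owned by this node, from the "myself" line of CLUSTER NODES
# (fields 9+ are the slot ranges, e.g. "0-5460", or a single slot like "42").
ranges=$(redis-cli -h "$HOST" -p "$PORT" cluster nodes | awk '$3 ~ /myself/ {for (i = 9; i <= NF; i++) print $i}')
redis-cli -h "$HOST" -p "$PORT" --scan --pattern '*' | while read -r key; do
  slot=$(redis-cli -h "$HOST" -p "$PORT" cluster keyslot "$key")
  for r in $ranges; do
    lo=${r%-*}; hi=${r#*-}          # a single slot yields lo == hi
    if [ "$slot" -ge "$lo" ] && [ "$slot" -le "$hi" ]; then
      echo "$key"                   # key is served by this node
      break
    fi
  done
done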

How to delete keys matching a certain pattern in redis

How do I delete keys matching a certain pattern in Redis using redis-cli? I would like to delete all the foo keys from the following list.
KEYS *
foo:1
foo:2
bar:1
foo:3
bar:2
foo:4
As mentioned in the comments on the question, there are many other answers to this here already. Definitely read the one linked above if you are thinking about doing this on a production server.
The one I found most useful for occasional command-line cleanup was:
redis-cli KEYS "*" | xargs redis-cli DEL
from "How to atomically delete keys matching a pattern using Redis".
I wanted to delete thousands of keys by pattern, and after some searching I found these points:
if you have more than one db in Redis, you should specify the database using -n [number]
if you have only a few keys, del is fine, but if there are thousands or millions of keys it's better to use unlink, because unlink is non-blocking while del is blocking; for more information visit this page: unlink vs del
the keys command is also blocking, like del, which is why --scan is used below
so I used this code to delete keys by pattern:
redis-cli -n 2 --scan --pattern '[your pattern]' | xargs redis-cli -n 2 unlink
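If the pattern matches a huge number of keys, a single xargs invocation can hit the shell's argument-length limit; batching the unlinks avoids that (the batch size of 500 below is an arbitrary choice):
redis-cli -n 2 --scan --pattern '[your pattern]' | xargs -L 500 redis-cli -n 2 unlink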
I just published a command-line utility to npm and GitHub that allows you to delete keys that match a given pattern (even *) from a Redis database.
You can find the utility here:
https://www.npmjs.com/package/redis-utils-cli
If someone wants to do the same operation in AWS Elasticache Redis, you can connect via SSH to an EC2 server that has access to the AWS Redis server, and then use the command below.
redis-cli -h <HOST> -p <PORT> --scan --pattern "patter*n" | xargs redis-cli -h <HOST> -p <PORT> unlink
Replace <HOST> and <PORT> with your AWS Redis server's host and port.
If your Redis setup requires password authentication, then use:
redis-cli -h <HOST> -p <PORT> -a <PASSWORD> --scan --pattern "patter*n" | xargs redis-cli -h <HOST> -p <PORT> -a <PASSWORD> unlink
Replace <HOST>, <PORT>, and <PASSWORD> with your AWS Redis server's host, port, and password.
You can also use the above commands for localhost.

redis bulk import using --pipe

I'm trying to import one million lines of redis commands, using the --pipe feature.
redis_version:2.8.1
cat file.txt | redis-cli --pipe
This results in the following error:
Error reading from the server: Connection reset by peer
Does anyone know what I'm doing wrong?
file.txt contains, for example,
lpush name joe
lpush name bob
edit: I now see there's probably a special format(?) for using pipe mode - http://redis.io/topics/protocol
The first point is that the parameters have to be double-quoted. The documentation is somewhat misleading on this point.
So a working syntax is :
lpush "name" "joe"
lpush "name" "bob"
The second point is that each line has to end with \r\n, not just \n. To fix that, convert your file with the unix2dos command,
like: unix2dos file.txt
Then you can import your file using cat file.txt | src/redis-cli --pipe
This worked for me.
To use pipe mode (a.k.a. bulk loading, or mass insertion) you must indeed provide your commands directly in the Redis protocol format.
The corresponding Redis protocol for LPUSH name joe is:
*3
$5
LPUSH
$4
name
$3
joe
Or as a quoted string: "*3\r\n$5\r\nLPUSH\r\n$4\r\nname\r\n$3\r\njoe\r\n".
This is what your input file must contain.
Redis documentation includes a Ruby sample to help you generate the protocol: see gen_redis_proto.
A Python sample is available e.g. in the redis-tools package.
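If you'd rather stay in the shell, here is a minimal sketch of an equivalent generator (it assumes ASCII arguments, since ${#arg} counts characters, not bytes):
# Emit one command in Redis protocol (RESP) form, CRLF-terminated as --pipe expects.
gen_redis_proto() {
  printf '*%d\r\n' "$#"                       # number of arguments
  for arg in "$@"; do
    printf '$%d\r\n%s\r\n' "${#arg}" "$arg"   # argument length, then the argument
  done
}
# Usage:
#   gen_redis_proto LPUSH name joe  > payload.txt
#   gen_redis_proto LPUSH name bob >> payload.txt
#   cat payload.txt | redis-cli --pipe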
There are existing tools that convert client commands directly to Redis wire-protocol messages. Example:
redis-mass my-client-script.txt | redis-cli --pipe
https://golanglibs.com/dig_in/redis-mass
https://github.com/almeida/redis-mass
There are two possible causes.
The first thing to check is whether you are exceeding the maxclients limit.
You can check using the info clients and config get maxclients commands.
On my desktop the result is below.
127.0.0.1:6379> info clients
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "2"
Then I tried to use the pipe command; below is the result.
[localhost redis-2.8.1]$ cat test.txt | ./src/redis-cli --pipe
All data transferred. Waiting for the last reply...
Error reading from the server: Connection reset by peer
If your result is the same, you have to change the maxclients setting in redis.conf.
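For example, raising the limit in redis.conf (the value is illustrative; the effective limit is also capped by the process's open-file limit, which is the second point below):
# redis.conf
maxclients 10000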
The second thing to check is the ulimit setting.
Changing the ulimit requires root privileges; check the link below.
How do I change the number of open files limit in Linux?
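A quick sketch of raising the open-files limit (the value is illustrative; the exact persistent mechanism depends on your distro and init system):
# Raise the limit for the current shell before starting Redis:
ulimit -n 65535
# For a persistent change, add a line such as
#   redis soft nofile 65535
# to /etc/security/limits.conf (or set LimitNOFILE= in the systemd unit).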
This error can happen because the timeout configured in Redis is the default, 0. You need to configure this timeout value via redis-cli using the commands below.
To connect to the Redis server:
redis-cli -h <host> -p <port> -a <password>
To view the configured timeout value:
The command config get timeout shows what timeout value is configured on the Redis server.
To set a new value for the Redis timeout:
The command config set timeout 120 sets the timeout to 2 minutes. You need to set the Redis timeout to be at least as long as your execution needs.
I hope this answer helps you. Cyu!!!
You can use the following command to import your file's data into Redis:
cat file.txt | xargs -L1 redis-cli
Note that this starts a new redis-cli process (and a new connection) per line, so it is much slower than --pipe, but it accepts plain commands without any protocol formatting.