How to explore the redis queue using redis-cli - redis

I am new to Redis and I want to know if there is any way to explore the Redis queues using redis-cli.
I recently picked up Redis and was surprised to find many old entries cluttering the queue. The queue has around 95000 keys (per DBSIZE), and using KEYS * in the terminal I could only view entries 85000 to 95000, which are from almost 3 years ago (I could tell because some of the keys look like '22-06-2019_status_440792_68587277').
I want to know if I can view all the keys at once (the terminal only displayed the last 10000 keys) and whether there is a way to delete all the old keys in one go.
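For reference, the usual way to do this from redis-cli is SCAN rather than KEYS, since SCAN iterates incrementally and does not block the server. A minimal sketch, assuming old keys can be matched by a date pattern like the one in the example key above:

```shell
# Collect keys whose names contain a 2019 date. The pattern is an
# assumption based on the example key above; adjust it to your naming.
# redis-cli --scan uses SCAN under the hood, so it won't block the server.
redis-cli --scan --pattern '*-2019_status_*' > old_keys.txt

# Review old_keys.txt first, then delete in batches of 100 keys per call:
xargs -n 100 redis-cli DEL < old_keys.txt
```

Redirecting the key list to a file first gives you a chance to inspect what would be deleted before running DEL.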

Related

Redis cli - KEYS * not showing all keys

I am connecting to an AWS Redis cluster using the following command:
redis-cli -c -h host.amazonaws.com -p 6379
I pushed two keys, "X1" and "X2", into the Redis cache from a Spring Boot application (the API methods are not annotated with @Cacheable), and now when I run KEYS * from the cli terminal it lists either "X1" or "X2" but not both. GET works fine for both keys, though.
INFO keyspace returns the following:
# Keyspace
db0:keys=11,expires=1,avg_ttl=1975400
What am I missing here?
You probably have cluster mode enabled. In cluster mode, the data you store is partitioned by key. One of the advantages of this is that you can now have a larger dataset than would reasonably fit on one machine (hundreds of terabytes, if you want) since every shard has some fraction of the entire data set.
A downside is that multi-key commands no longer work as you would expect when the keys end up in different hash slots. The KEYS command is such a multi-key command.
To make a long story short:
KEYS is apparently giving you only the keys on the cluster node you're hitting. It would perhaps have been nicer to give you an error, instead, but it doesn't.
GET is unaffected: redis-cli, with the -c flag, knows how to find the right cluster node (perhaps after hitting the wrong one and being told the key has MOVED).
If you ask every individual primary node in your cluster for KEYS *, and add up all the results, you should get all the keys. This question has some examples of using the redis-cli to do this.
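A rough sketch of that per-node approach, assuming the cluster endpoint from the question and that the CLUSTER NODES command is available to you:

```shell
# List the primaries from CLUSTER NODES, then run a non-blocking SCAN on
# each one. The host/port are the ones from the question; adjust as needed.
# CLUSTER NODES lines look like: <id> <ip:port@cport> <flags> ...
redis-cli -h host.amazonaws.com -p 6379 CLUSTER NODES \
  | awk '/master/ {split($2, a, "[:@]"); print a[1], a[2]}' \
  | while read -r host port; do
      redis-cli -h "$host" -p "$port" --scan   # SCAN per node, safer than KEYS
    done
```

Concatenating the output of all the loops gives you the full key set across the cluster.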

Redis -How to Get Last 1hour Data from redis

How can I get all the keys/data stored in Redis in the last one hour? I searched but couldn't find a way. Is there any way to do this?
Redis does not have a direct way to do this.
Depending on your use case, in increasing order of complexity -
You can manually add newly created keys to a set. The name of the set can include the timestamp. You can then query this set to find keys that have been modified.
You can use redis keyspace notification to get notified of keys when they are changed. However, be aware that pub/sub notifications are "fire and forget" - so if your connection drops - you will lose some of the keys that were updated.
You can look at the AOF file and identify keys that have been created / modified. If you are using a cloud provider for redis - they may not provide access to the AOF file. Also, the AOF file doesn't have the timestamp, but the commands are in the order they were processed by redis.
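The first option above could look something like this (the key and set names here are made up for illustration):

```shell
# Whenever the application writes a key, also record it in a set bucketed
# by hour, so "last hour" becomes one or two SMEMBERS calls.
KEY="user:42"
BUCKET="modified:$(date -u +%Y-%m-%dT%H)"   # e.g. modified:2024-06-01T15

redis-cli SET "$KEY" some-value
redis-cli SADD "$BUCKET" "$KEY"
redis-cli EXPIRE "$BUCKET" 7200   # drop each bucket after two hours

# Later: everything touched during the current hour
redis-cli SMEMBERS "$BUCKET"
```

In practice the SET/SADD pair would run in your application code (ideally in a MULTI/EXEC transaction or pipeline) rather than via two separate redis-cli calls.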

Redis hmset, old data overwrite new?

I run two redis commands:
A: hmset k1,v1,k2,v2,k3,v3... (hundreds of fields) at 11:03:05,450
B: hmset k1,v1.1 at 11:03:05,727
But the final data I get for k1 is v1.
I can think of several possible reasons:
Clocks on different machines are not accurate, so command B in fact happened before A. But I have other logic to prevent B running before A, and I'm 99 percent sure about that, so I don't want to chase this unless there is no other possible reason.
I'm not sure if A is an atomic command, but I think so, since Redis is single-threaded. So is it possible that A started before B but finished after B?
Maybe it's related to slave sync, but I can't figure out how?
Are there other possible reasons? And any suggestions on how to check what actually happened?
I'm using redis cluster with several masters and slaves, and jedis 2.9.0.

Is it possible to get list of keys changed in redis server?

I'm getting over 10000 updates in 60 seconds in my Redis server and this triggers the background save which consumes resources.
I want to track the changed keys so that I can debug my app (which method causing this much change).
Is there a way to get updated keys?
While MONITOR is perfectly valid, it includes everything that gets sent to Redis. That means filtering out read requests, pings, and so on.
Instead, I recommend that you check the keyspace notifications documentation and configure your database with the AK flags. By subscribing to the __keyspace@*__:* pattern you'll be notified about every change to keys.
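As a sketch (database 0 shown; the channel name includes the database index):

```shell
# Enable keyspace notifications at runtime: K = publish on the keyspace
# channel, A = all event classes. Persist this in redis.conf so it
# survives a restart.
redis-cli CONFIG SET notify-keyspace-events AK

# In another terminal, watch every key change in database 0:
redis-cli PSUBSCRIBE '__keyspace@0__:*'
```

Each message names the key in the channel (e.g. __keyspace@0__:mykey) and carries the event (set, del, expired, ...) as the payload, which is usually enough to spot which part of the app is writing so heavily.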
As far as I learned, it's only possible by using the MONITOR command and figuring it out from the output.

Dump the whole redis database instance using hiredis

I have a buffer that needs to read all values (hashes, fields and values) from Redis after a reboot. Is there a fast way to do that? I have approximately 100,000 hashes with 4 fields each.
Thanks!
EDIT:
The slow way: the current implementation gets all the hashes using
KEYS *
and then
HGETALL xxx
for each key to get all the fields' values.
There are two ways to approach this problem.
The first one is to try to optimize the KEYS/HGETALL combination you have described. Because you do not have millions of keys (100K is not high by Redis standards), the KEYS command will not block the instance for long, and the output buffer required to return 100K items is probably acceptable.
Once the list of keys has been received by your program, the next challenge is to run many HGETALL commands as fast as possible. The key is to pipeline them (for instance in synchronous batches of 1000 items), which is quite easy to implement with hiredis (just use redisAppendCommand / redisGetReply). The 100K items will be retrieved in only 100 round trips. Because most Redis instances can sustain 100K op/s or more, it should not take more than a few seconds.
A more efficient solution would be to use the asynchronous interface of hiredis to maximize throughput, but it is more complex to implement, and I'm not sure it is worth it for 100K items.
The second approach is to use a BGSAVE command to take a snapshot of the Redis content, retrieve the generated dump file, and then parse the file to extract the data. You can have a look at the excellent redis-rdb-tools package for a Python implementation. The main benefit of this approach is that there is no impact on the Redis instance (no KEYS command blocking the event loop), while still retrieving consistent data.