Are Redis Pub/Sub channels instance-level or database-level?

Instead of storing data in Redis, we use Redis channels for pub/sub. Is this feature at the Redis instance level or per database?
http://redis.io/topics/pubsub

That is easy enough to test:
Terminal 1: Connect to db 6 and subscribe to foo
> redis-cli -n 6
127.0.0.1:6379[6]> subscribe foo
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "foo"
3) (integer) 1
Terminal 2: Connect to db 1 and publish
> redis-cli -n 1
127.0.0.1:6379[1]> publish foo 2
(integer) 1
127.0.0.1:6379[1]>
Terminal 1: Observe the subscriber receiving the message
1) "message"
2) "foo"
3) "2"


What is a ManagementCommand in Redis?

The SLOWLOG command on Azure Redis returns the following item in its response. What does this command do? It doesn't seem to be a command triggered from the client.
1) (integer) 260
2) (integer) 1660587982
3) (integer) 15508
4) 1) "ManagementCommand"
2) "list"
5) "[::]:31729"
6) ""
ManagementCommand entries are commands triggered by data-plane components owned by Azure Cache for Redis for health-monitoring purposes. There are some Redis commands that return information about the cache itself, and SLOWLOG is an example of this. (For reference, the fields of a slowlog entry are: the entry id, the unix timestamp when the command ran, its execution time in microseconds, the command with its arguments, the client address, and the client name.)

redis-cli - command to list running queues?

I want to see the list of queues on the Redis server using redis-cli. I am currently using this command just to monitor a queue:
redis-cli MONITOR | grep queuename
Please tell me if there is any CLI command that meets my requirement.
I don't have enough reputation to ask in a comment how you have implemented your queue, so I'll provide a few thoughts below, assuming your queue is implemented as a FIFO queue that uses RPUSH and LPOP to add and remove items.
> RPUSH queue-1 "task-a"
(integer) 1
> LPOP queue-1
"task-a"
If you use a standard naming convention for the lists that represent queues, you can get them by name from the KEYS command with something like KEYS queue-*. A couple of notes on this approach. First, it has performance concerns: if your production instance holds a large number of keys, its best use is ad-hoc troubleshooting when the rest of your team is aware there may be some performance hit to your Redis instance. Second, it will only show keys whose lists contain elements; if you have drained a queue, it will not appear in the returned values. (A gentler SCAN-based variant is sketched right after this paragraph.)
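As an illustration (not part of the original answer), here is a minimal redis-py sketch that uses SCAN instead of KEYS, iterating the keyspace in small chunks so a busy production instance is not blocked; the queue-* naming convention is assumed:
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# scan_iter walks the keyspace incrementally (SCAN under the hood),
# so it avoids the blocking behaviour of KEYS on large instances.
for key in r.scan_iter(match="queue-*"):
    # LLEN reports how many elements the list currently holds.
    print(key.decode(), r.llen(key))
The same caveat applies: Redis deletes a list once it becomes empty, so a drained queue still will not show up here.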
An alternative is to use a sorted set to hold the keys of the lists used as queues, with the score tracking each queue's size. Whenever you add or remove a message, you also run ZADD ... INCR (or ZINCRBY) to adjust the score by the number of elements added or removed. This lets you get the lists used as queues, ordered by decreasing queue size, with ZREVRANGE at any point (a redis-py sketch follows the transcript below).
> RPUSH queue-1 "task-a"
(integer) 1
> ZADD queues INCR 1 queue-1
"1"
> RPUSH queue-1 "task-b"
(integer) 2
> ZADD queues INCR 1 queue-1
"2"
> RPUSH queue-2 "message-a"
(integer) 1
> ZADD queues INCR 1 queue-2
"1"
> RPUSH queue-2 "message-b"
(integer) 2
> ZADD queues INCR 1 queue-2
"2"
> LPOP queue-2
"message-a"
> ZADD queues INCR -1 queue-2
"1"
> ZREVRANGE queues 0 -1 WITHSCORES
1) "queue-1"
2) "2"
3) "queue-2"
4) "1"

Why does Redis Pub/Sub work independently of the database?

I am a newbie to Redis and trying to understand the concept of Redis Pub/Sub.
Step 1:
root@01a623a828db:/data# redis-cli -n 1
127.0.0.1:6379[1]> subscribe foo
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "foo"
3) (integer) 1
In the 1st step, I subscribed to channel foo while connected to database 1.
Step 2:
root@01a623a828db:/data# redis-cli -n 4
127.0.0.1:6379[4]> publish foo 2
(integer) 1
In the 2nd step, I published a message on database 4.
Step 3:
root@01a623a828db:/data# redis-cli -n 1
127.0.0.1:6379[1]> subscribe foo
Reading messages... (press Ctrl-C to quit)
..........................................
1) "message"
2) "foo"
3) "2"
In the 3rd step, on database 1, I got the message that was published on database 4 in the 2nd step.
I tried to find out the reason behind this, but I found the same answer everywhere: "Pub/Sub has no relation to the key space. It was made to not interfere with it on any level, including database numbers. Publishing on db 10, will be heard by a subscriber on db 1. If you need scoping of some kind, prefix the channels with the name of the environment (test, staging, production)" - this is as per the official documentation of Redis Pub/Sub.
Questions:
1. Why is the Redis Pub/Sub architecture independent of the database?
2. How do I implement "If you need scoping of some kind, prefix the channels with the name of the environment (test, staging, production)"?
3. "Publishing on db 10, will be heard by a subscriber on db 1." - how is this in line with the statement "It was made to not interfere with it on any level, including database numbers."?
It's a matter of design choice, really.
If you need scoping, you can always prefix the channel name. E.g., the channel productupdate would be watched via test:productupdate on the test environment and via staging:productupdate on staging (see the sketch below).
It is in line with the statement: Pub/Sub does not interfere with the key space, so the database number simply doesn't matter here.
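A minimal redis-py sketch of the prefixing idea (the APP_ENV variable and the channel name are illustrative assumptions, not from the answer):
import os
import redis

ENV = os.environ.get("APP_ENV", "test")   # test / staging / production
CHANNEL = f"{ENV}:productupdate"          # scope the channel by environment

r = redis.Redis(host="localhost", port=6379)

# Subscriber side: listen only to this environment's channel.
p = r.pubsub()
p.subscribe(CHANNEL)
print(p.get_message(timeout=1))   # subscribe confirmation

# Publisher side: publish to the same prefixed channel.
r.publish(CHANNEL, "price changed for product 42")
print(p.get_message(timeout=1))   # the update, seen only by this environment
Subscribers in other environments listen on their own prefix (e.g. staging:productupdate), so the single instance-wide Pub/Sub bus is partitioned purely by naming convention.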

How to scan for keys whose values got updated since the last SCAN

I'd like to periodically scan through a Redis instance for keys that changed since the last scan. In between scans I don't want to process the keys.
E.g., one key could get a thousand updates between scans; I care about the most recent value only when doing the next periodic scan.
There is no built-in way in Redis to achieve that (yet).
You could, for example, recode your app to add some sort of update tracking. For example, wherever you call SET foo bar, also call ZADD updated <timestamp> foo. Then you can use the 'updated' sorted set to retrieve updated keys (a sketch of this follows below).
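As an illustration (not from the original answer), a minimal redis-py sketch of that pattern: the write and the timestamp go through one pipeline, and the next periodic scan asks the sorted set for members whose score is newer than the previous scan time. The key and function names are assumptions:
import time
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def tracked_set(key, value):
    # Write the value and record the update time in one transaction.
    pipe = r.pipeline()
    pipe.set(key, value)
    pipe.zadd("updated", {key: time.time()})
    pipe.execute()

def keys_updated_since(last_scan_ts):
    # Members of 'updated' whose score (timestamp) is newer than the last scan.
    return r.zrangebyscore("updated", f"({last_scan_ts}", "+inf")

last_scan = time.time()
tracked_set("foo", "bar")
print(keys_updated_since(last_scan))   # [b'foo']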
Alternatively, you can try using RedisGears to automate the tracking part (for starters). Assuming that you have RedisGears running (i.e. docker run -it -p 6379:6379 redislabs/redisgears), you can do something like the following:
$ cat gear.py
def addToUpdatedZset(x):
    import time
    now = time.time()
    execute('ZADD', 'updated', now, x['key'])
    return x

GB().filter(lambda x: x['key'] != 'updated').foreach(addToUpdatedZset).register('*')
$ redis-cli RG.PYEXECUTE "$(cat gear.py)"
OK
$ redis-cli
127.0.0.1:6379> KEYS *
(empty list or set)
127.0.0.1:6379> SET foo bar
OK
127.0.0.1:6379> KEYS *
1) "updated"
2) "foo"
127.0.0.1:6379> ZRANGE updated 0 -1 WITHSCORES
1) "foo"
2) "1559339877.1392548"
127.0.0.1:6379> SET baz qux
OK
127.0.0.1:6379> KEYS *
1) "updated"
2) "baz"
3) "foo"
127.0.0.1:6379> ZRANGE updated 0 -1 WITHSCORES
1) "foo"
2) "1559339877.1392548"
3) "baz"
4) "1559339911.5493586"

How to remove a specific job from Redis job queue

I'm new to Redis so this will be a rudimentary question.
I'm considering creating a Redis job queue using a list. The jobs themselves will be JSON-encoded objects.
I realize that I can use LPOP and RPUSH for managing the queue. I can even use RPOPLPUSH when using multiple lists (e.g. "Queued", "Processing" and "Completed").
Let's say I have a worker that processes images by steadily going through the "Queued" list. Then let's say the client has deleted an image from the front-end before that particular job has even begun processing. How do I delete this job from the "Queued" list so the worker doesn't waste time processing it?
In other words, how can I index individual jobs in a job queue?
Use a sorted set with the timestamp as the score; then you can remove a specific job (a redis-py sketch follows the transcript below):
$ redis-cli zadd myjobs `date +"%s.%N"` job1
$ redis-cli zadd myjobs `date +"%s.%N"` job2
$ redis-cli zadd myjobs `date +"%s.%N"` job3
$ redis-cli
127.0.0.1:6379> ZRANGEBYSCORE myjobs -inf +inf WITHSCORES
1) "job1"
2) "1638908693.1293526"
3) "job2"
4) "1638908696.5061705"
5) "job3"
6) "1638908699.2742543"
127.0.0.1:6379> ZREMRANGEBYLEX myjobs [job2 [job2
(integer) 1
127.0.0.1:6379> ZRANGEBYSCORE myjobs -inf +inf WITHSCORES
1) "job1"
2) "1638908693.1293526"
3) "job3"
4) "1638908699.2742543"