Redis notifications

I'm using Redis as a distributed cache. I have different applications, each of which listens only to particular keys. For example:
App1 listens to App1.*
App2 listens to App2.* and so on.
My applications use the following patterns to receive notifications:
App1: "key*:APP1."
App2: "key*:APP2."
I need to send notifications only about set, del, expired and evicted events, which is why I tried notify-keyspace-events "AK". It works fine, but with the "AK" configuration Redis also sends extra notifications like "expire" which I don't need.
So, following the documentation at http://redis.io/topics/notifications, I tried a custom setting,
notify-keyspace-events "Ksxe", to get only set, expired and evicted notifications. But in this case I receive only expired notifications.
Questions:
1. Why don't I receive set and evicted events? Why do I receive only expired events?
2. Is there any way to make Redis send only del notifications?
I have also tried "Ks", but Redis doesn't send notifications about set events.
The documentation describes the "A" flag as an alias for "g$lshzxe", so that the "AKE" string means all the events.
"Kg$lshzxe" doesn't work correctly either.

I think you are misunderstanding the "s" flag. It has nothing to do with the SET command: it tells Redis to send notifications only for commands that alter keys of the Redis set type, such as SADD, or for a key of the set type expiring.
Thus, with "Ksxe" you instruct Redis to send you a notification any time a key containing a Redis set is evicted or expired. The "Ks" option instructs Redis to send notifications only when keys of the set type are altered, not when a string is created using the SET command. To translate that into English, you told Redis "tell me when a key of type set is expired or evicted".
If you want to know when a key of the string type is created or altered using the SET command, has an expiration added to it, is expired (and thus deleted), or is evicted, you should instead use "K$xeg". The "g" is important because it catches use of the EXPIRE command itself, and the "$" selects the string type.
Also note that the "g" flag will produce "expire" events when a TTL is set, while an actual expiration event is labelled "expired", so you can tell the difference. If you don't care about the creation of a TTL on a key, you can leave off the "g".
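As a rough illustration of that answer, here is a minimal sketch using the Python redis-py client; the "APP1." key prefix mirrors the question, but the filtering logic and everything else is an assumption for the example. It enables "K$xeg" and then keeps only the four events the question cares about:

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
# $ = string commands (set), g = generic commands (del, expire, ...), x = expired, e = evicted
r.config_set('notify-keyspace-events', 'K$xeg')

p = r.pubsub()
# keyspace channels carry the key name; the message payload is the event name
p.psubscribe('__keyspace@0__:APP1.*')

wanted = {b'set', b'del', b'expired', b'evicted'}
for msg in p.listen():
    if msg['type'] != 'pmessage':
        continue
    key = msg['channel'].decode().split(':', 1)[1]
    event = msg['data']
    if event in wanted:                 # drop "expire", "persist", etc. coming from the "g" flag
        print(key, event.decode())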


Can you subscribe to a flushall notification in REDIS [duplicate]

The keyspace notifications have been essential for a recent web API I've been developing.
We have Redis set up in Azure. The API mostly works; we use notifications to figure out when data in the local memory cache needs to be updated.
Right now, we want to clear the local memory cache when the Redis database is flushed, but we cannot get a flushdb event through keyspace notifications.
Keyspace events are enabled with "AKE", and the "AKE" string means all the events.
PS: we do get notifications for the set event, e.g. on the channel '__keyevent@2__:set'.
The subscription code looks like this:
subscriber.Subscribe(
    "*",
    (channel, value) =>
    {
        // some code here
    });
Just as the other answer mentioned, there's no such notification.
After all, a keyspace notification is a notification about an event on a single key. Each notification is associated with a key: for keyspace events, the key name is part of the channel name; for keyevent events, the key name is the message.
PUBLISH __keyspace@0__:key_name command
PUBLISH __keyevent@0__:command key_name
Each command that triggers a notification has a key as an argument, e.g. DEL key or SET key val. The FLUSHDB command, however, takes no key argument: it doesn't affect a single key, it removes all keys in the database. So there's no such notification for it. Otherwise, what would you expect on the channel? All keys that have been removed? That's not a good idea.
However, you can simulate an event for flushdb:
set a special key, e.g. flushdb-event: set flushdb-event 0
subscribe to the corresponding channel: subscribe __keyspace@0__:flushdb-event
set the special key before you call flushdb: set flushdb-event 1
In this way, you get a simulated flushdb notification.
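A sketch of that simulation with the Python redis-py client; the sentinel key flushdb-event comes from the steps above, while the configuration value and helper name are assumptions for the example:

import redis

r = redis.Redis(db=0)
r.config_set('notify-keyspace-events', 'K$')   # keyspace channel, string commands (set)

# the cache-clearing side subscribes to the sentinel key only
p = r.pubsub()
p.subscribe('__keyspace@0__:flushdb-event')

# the flushing side always touches the sentinel key just before flushing
def flush_with_notification(conn):
    conn.set('flushdb-event', 1)   # publishes a "set" event on the channel above
    conn.flushdb()

for msg in p.listen():
    if msg['type'] == 'message' and msg['data'] == b'set':
        print('database is being flushed - clear the local cache here')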
According to the Redis documentation, there is no notification for flushdb.
I think you have a couple of options.
You could call the INFO command periodically and check for a change in the number of flushdb or flushall calls. Here is the output from INFO that I am referring to...
# Commandstats
cmdstat_flushall:calls=2,usec=112,usec_per_call=56.00
cmdstat_flushdb:calls=1,usec=110,usec_per_call=110.00
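A rough polling sketch for the INFO approach, assuming the Python redis-py client; the five-second interval is an arbitrary choice:

import time
import redis

r = redis.Redis()
last_flush_calls = None
while True:
    stats = r.info('commandstats')
    calls = (stats.get('cmdstat_flushdb', {}).get('calls', 0)
             + stats.get('cmdstat_flushall', {}).get('calls', 0))
    if last_flush_calls is not None and calls > last_flush_calls:
        print('a flush happened since the last check - clear the local cache')
    last_flush_calls = calls
    time.sleep(5)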
You could run the MONITOR command and parse the output. Note that this is not really a good option - it has heavy performance impact on the server side and would require a lot of processing on the client side.

redis pub sub only for a certain set of keys?

I have a use case in which I want to enable notifications only for a certain set of keys, so that when those keys expire I can get a notification from Redis.
I have followed this answer to implement this.
I have set the parameter notify-keyspace-events to "Ex".
To accomplish this I am adding the keys that I want notifications for in DB-0 and the other keys in DB-1. But I am receiving notifications for both DBs. Is there any way to get notifications from only a particular DB?
According to the Redis documentation:
"Redis can notify Pub/Sub clients about events happening in the key space.
This feature is documented at http://redis.io/topics/notifications
For instance if keyspace events notification is enabled, and a client
performs a DEL operation on key "foo" stored in the Database 0, two
messages will be published via Pub/Sub:
PUBLISH __keyspace@0__:foo del
PUBLISH __keyevent@0__:del foo
"
But I am receiving notifications from both DB-0 and DB-1.
PS: I know I can filter keys in my application, but I store too many expiring keys in Redis, and sending notifications for all of them would increase the load on my Redis server.
I think you subscribed to a pattern that matches the notification channels of all DBs, e.g. PSUBSCRIBE __key*__:*.
In fact, you can specify the DB index in the subscribed pattern: PSUBSCRIBE __keyspace@0__:* and PSUBSCRIBE __keyevent@0__:*. This way, you'll only receive notifications for DB 0.
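For example, with the question's "Ex" configuration and the Python redis-py client, a subscriber that only sees expirations from DB 0 could look roughly like this (the processing is left as a placeholder):

import redis

r = redis.Redis()
p = r.pubsub()
# only expirations from DB 0; keys expiring in DB 1 are published on __keyevent@1__:expired
p.subscribe('__keyevent@0__:expired')

for msg in p.listen():
    if msg['type'] == 'message':
        expired_key = msg['data'].decode()
        print('expired in DB 0:', expired_key)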

Is there a keyspace event in redis for a key entering the database that isn't already present?

I have a program that uses a Redis key with an expire time set. I want to detect when there is a new entry in the data set. I can tell when there is a removal by listening for the "expired" event, but the "set" and "expire" events are fired every time the key is set, even if it's already in the database.
Is there a keyspace event for a NEW key appearing?
There's no keyspace-notification configuration that can tell you whether a key was overwritten vs. newly added.
If you are primarily using the SET command, you may be able to take advantage of the NX option and publish a custom event based on the result. Obviously this isn't an ideal approach, but it's something.
https://redis.io/commands/set
Example of a custom event:
PUBLISH __keyevent@0__:new_data_entry new_key
Details on that here: https://redis.io/topics/notifications#type-of-events
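A minimal sketch of that idea with the Python redis-py client; the helper name, the new_data_entry event name and the TTL are all made up for the example:

import redis

r = redis.Redis(db=0)

def set_and_announce_new(key, value, ttl=60):
    # NX makes SET succeed only if the key does not exist yet
    created = r.set(key, value, nx=True, ex=ttl)
    if created:
        # publish the custom "new key" event suggested above
        r.publish('__keyevent@0__:new_data_entry', key)
    else:
        # key already existed: overwrite it without the custom event
        r.set(key, value, ex=ttl)
    return bool(created)

set_and_announce_new('new_key', 'some value')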
Hope that helps.

Is it possible to hook Redis before a key expires

I have set an expiration value on a key in Redis, and I want the opportunity to run a piece of code before the key is deleted by Redis. Is that possible, and if so, how?
Thanks
My solution was to create a new key with the same name as the one I wanted to hook, plus a prefix indicating that it's a key used for timeout handling ("TO_"), something like:
set key1 data1
set TO_key1 ""
expire TO_key1 20
In the example above, as soon as "TO_key1" expires, it will notify my program and I'll get the opportunity to run my code before I manually delete "key1".
I found this link very useful for creating the listener for redis: Redis Key expire notification with Jedis
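A sketch of that shadow-key pattern with the Python redis-py client, assuming notify-keyspace-events includes at least "Ex"; the 20-second TTL and the hook body are placeholders:

import redis

r = redis.Redis(db=0)

# the real data plus a contentless shadow key that carries the TTL
r.set('key1', 'data1')
r.set('TO_key1', '')
r.expire('TO_key1', 20)

p = r.pubsub()
p.subscribe('__keyevent@0__:expired')
for msg in p.listen():
    if msg['type'] != 'message':
        continue
    expired = msg['data'].decode()
    if expired.startswith('TO_'):
        real_key = expired[len('TO_'):]
        value = r.get(real_key)
        # ... run the "before deletion" hook on value here ...
        r.delete(real_key)          # now remove the real key manually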
This isn't possible with standard open-source Redis... yet. There is a way, however, to do something similar without too much hassle. If you stop using Redis' expiry (at least for the keys you're interested in "hooking") and manage expiry "manually" in your code, you can do anything you want before/during/after the expiry event.
Since Redis offers key-level expiry out of the box, people are usually content with it. In some cases, e.g. expiring members in a Set, the manual approach is the only way to go, but it is still valid for regular keys when you need finer control.
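One common way to do that manual expiry, sketched here in Python with redis-py: keep the deadlines in a sorted set (called expiry-index below, a made-up name) and let a small loop run your hook before deleting each due key.

import time
import redis

r = redis.Redis(decode_responses=True)

def set_with_hooked_expiry(key, value, ttl):
    r.set(key, value)                                  # no Redis-side TTL on the key itself
    r.zadd('expiry-index', {key: time.time() + ttl})   # remember when it is due

def expiry_loop():
    while True:
        due = r.zrangebyscore('expiry-index', 0, time.time())
        for key in due:
            value = r.get(key)
            # ... run the pre-expiry hook on (key, value) here ...
            r.delete(key)
            r.zrem('expiry-index', key)
        time.sleep(1)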

Redis as a message broker

Question
I want to pass data between applications in a publish-subscribe manner. Data may be produced at a much higher rate than it is consumed, and messages may get lost, which is not a problem. Imagine a fast sensor and a slow sensor-data processor. For that, I use Redis pub/sub and wrote a class which acts as a subscriber, receives every message and puts it into a buffer. The buffer is overwritten when a new message comes in, or nullified when the message is requested by the "real" function. So when I ask this class, I either get a response immediately (a hint that my function is slower than the data comes in) or I have to wait (a hint that my function is faster than the data).
This works pretty well when data comes in fast. But for data which comes in relatively seldom, let's say every five seconds, it does not: imagine my consumer is launched slightly after the producer; the first message is lost and my consumer has to wait nearly five seconds until it can start working.
I think I have to solve this with Redis tools. Instead of pub/sub, I could simply use the GET/SET methods, thus putting the cache functionality into Redis directly. But then my consumer would have to poll the database instead of the event magic I have at the moment. Keys could look like "key:timestamp", and my consumer would now have to GET key:* and compare the timestamps permanently, which I think would cause a lot of load. There is no natural opportunity to sleep, since although I don't care about dropped messages (there is nothing I can do about them), I do care about delay.
Does someone use Redis for a similar thing and could give me a hint about clever use of Redis tools and data structures?
edit
Ideally, my program flow would look like this:
start the program
retrieve key from Redis
tell Redis, "hey, notify me on changes of key".
launch something asynchronously, with a callback for new messages.
While writing this, an idea came up: the publisher not only publishes the message on topic key, but also does a SET key message. This way, an application could initially GET the value and then SUBSCRIBE.
Good idea or not really?
What I did after I got the answer below (the accepted one)
Keyspace notifications are really what I need here. Redis acts as the primary source of information, and my client subscribes to keyspace notifications, which notify subscribers about events affecting specific keys. In the asynchronous part of my client, I subscribe to notifications about my key of interest. Those notifications set a key_has_updates flag. When I need the value, I get it from Redis and unset the flag. With an unset flag, I know that there is no new value for that key on the server. Without keyspace notifications, this would have been the part where I needed to poll the server. The advantage is that I can use all sorts of data structures, not only the pub/sub mechanism, and a slow joiner which misses the first event is still able to get the initial value, which with pub/sub would have been lost.
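A condensed sketch of that flag pattern in Python with redis-py; the key name sensor-data, the notification configuration and the one-second poll of the subscription are assumptions for the example:

import redis

r = redis.Redis(decode_responses=True)
r.config_set('notify-keyspace-events', 'K$')      # keyspace channel, string commands (set)

p = r.pubsub()
p.subscribe('__keyspace@0__:sensor-data')

value = r.get('sensor-data')     # a slow joiner still gets the current value
key_has_updates = False

while True:
    # drain pending notifications without blocking the consumer for long
    msg = p.get_message(timeout=1.0)
    if msg and msg['type'] == 'message':
        key_has_updates = True
    if key_has_updates:
        value = r.get('sensor-data')
        key_has_updates = False
    # ... do the slow processing with `value` here ...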
One idea is to push the data to a list (LPUSH) and trim it (LTRIM) so it doesn't grow forever if there are no consumers. On the other end, the consumer grabs items from that list and processes them. You can also use keyspace notifications to be alerted each time an item is added to that queue.
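A quick sketch of that capped-list idea in Python with redis-py; the list name and the 1000-item cap are arbitrary choices for the example:

import redis

r = redis.Redis(decode_responses=True)

# producer: newest items at the head, keep at most 1000 of them
def produce(payload):
    r.lpush('sensor-queue', payload)
    r.ltrim('sensor-queue', 0, 999)

# consumer: take items from the tail, oldest first
def consume():
    while True:
        item = r.brpop('sensor-queue', timeout=5)
        if item:
            _, payload = item
            print('processing', payload)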
I pass data between applications using two native Redis commands:
rpush and blpop.
"blpop blocks the connection when there are no elements to pop from any of the given lists".
Data is passed in JSON format between the applications, using a list as a queue.
The application that wants to send data (acting as publisher) does an rpush on a list.
The application that wants to receive data (acting as subscriber) does a blpop on the same list.
The code would be (in Perl):
Sender (we assume a hash reference $hash_ref is passed):
use Redis;
use JSON;

# Encode the hash in JSON format
my $json_text = encode_json($hash_ref);
# Connect to Redis and push onto the shared list
my $r = Redis->new(server => "127.0.0.1:6379");
$r->rpush("shared_queue", $json_text);
$r->quit;
Receiver (in an infinite loop):
use Redis;
use JSON;

# Connect once, then block on the queue
my $r = Redis->new(server => "127.0.0.1:6379");
while (1) {
    # blpop blocks until an element is available; it returns (list_name, value)
    my @elem = $r->blpop("shared_queue", 0);
    # Decode the JSON payload back into a hash reference
    my $hash_ref = decode_json($elem[1]);
    # ... do some stuff with $hash_ref ...
}
I find this approach very useful for several reasons:
The elements are stored in a list, so temporarily disabling the receiver causes no information loss; when the receiver restarts, it can process all the items left in the list.
A high send rate can be handled with multiple instances of the receiver.
Multiple senders can push data onto a single list; in this case a data collector can easily be implemented.
A receiver process that acts as a daemon can be monitored with standard tools (e.g. pm2).
From Redis 5 there is a new data type called Streams, which is an append-only data structure. Redis Streams can be used as a reliable message queue, with both point-to-point and multicast communication using the consumer-group concept (Redis_Streams_MQ).
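A minimal Streams sketch with the Python redis-py client; the stream, group and consumer names are invented for the example:

import redis

r = redis.Redis(decode_responses=True)

# producer: append an entry, capping the stream at roughly 1000 entries
r.xadd('sensor-stream', {'payload': '42'}, maxlen=1000, approximate=True)

# consumer side: create the group once, then read new entries as part of it
try:
    r.xgroup_create('sensor-stream', 'processors', id='0', mkstream=True)
except redis.exceptions.ResponseError:
    pass   # group already exists

entries = r.xreadgroup('processors', 'worker-1', {'sensor-stream': '>'},
                       count=10, block=5000)
for stream_name, messages in entries:
    for msg_id, fields in messages:
        print(stream_name, msg_id, fields['payload'])
        r.xack('sensor-stream', 'processors', msg_id)   # mark as processed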