127.0.0.1:6379> keys *
1) "trending_showrooms"
2) "trending_hashtags"
3) "trending_mints"
127.0.0.1:6379> sort trending_mints by *->id DESC LIMIT 0 12
1) "mint_14216"
2) "mint_14159"
3) "mint_14158"
4) "mint_14153"
5) "mint_14151"
6) "mint_14146"
The keys are expired, but they are still inside the set. I need Redis to remove the expired keys from the set automatically.
You can't set a TTL on individual members within a Set.
This blog post dives a bit deeper on the issue and provides a few workarounds.
https://quickleft.com/blog/how-to-create-and-expire-list-items-in-redis/
Hope that helps.
Please read this page entirely: https://redis.io/topics/notifications
Summing up: you must have a sentinel program listening to Pub/Sub messages, and you must edit redis.conf to enable keyevent expired notifications:
in redis.conf:
notify-keyspace-events Ex
In order to enable the feature a non-empty string is used, composed of
multiple characters, where every character has a special meaning
according to the following table
E Keyevent events, published with __keyevent@<db>__ prefix.
x Expired events (events generated every time a key expires)
Then the sentinel program must listen to the channel __keyevent@0__:expired, if your database is 0. Change the database number if using any other than zero.
When you subscribe to that channel and receive the name of the key that expired, you simply issue SREM trending_mints <key> to remove it from the set.
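A minimal subscriber sketch in Python (assuming the redis-py client interface; the channel name follows the setup above, and the connection code is shown only as a hypothetical usage comment):

```python
def expired_channel(db=0):
    """Channel on which Redis publishes expired-key events
    (requires notify-keyspace-events "Ex" in redis.conf)."""
    return f"__keyevent@{db}__:expired"

def handle_expired(r, message, set_key="trending_mints"):
    """Pub/sub callback: the message payload is the name of the key that
    just expired; remove that member from the tracking set."""
    if message.get("type") == "message":
        data = message["data"]
        if isinstance(data, bytes):
            data = data.decode()
        r.srem(set_key, data)

# Hypothetical usage with redis-py:
#   import redis
#   r = redis.Redis()
#   p = r.pubsub()
#   p.subscribe(**{expired_channel(0): lambda m: handle_expired(r, m)})
#   p.run_in_thread(sleep_time=0.1)
```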
IMPORTANT
The expired events are generated when a key is accessed and is found
to be expired by one of the above systems, as a result there are no
guarantees that the Redis server will be able to generate the expired
event at the time the key time to live reaches the value of zero.
If no command targets the key constantly, and there are many keys with
a TTL associated, there can be a significant delay between the time
the key time to live drops to zero, and the time the expired event is
generated.
Basically expired events are generated when the Redis server deletes
the key and not when the time to live theoretically reaches the value
of zero.
So keys will be deleted due to expiration, but the notification is not guaranteed to occur at the moment the TTL reaches zero.
ALSO: if your sentinel program misses the Pub/Sub message, that's it; you won't be notified another time! (This is also covered in the link above.)
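Because a missed notification is gone for good, it is worth pairing the subscriber with a periodic sweep that reconciles the set against the keys that actually exist (a sketch assuming the redis-py client interface; `trending_mints` as above):

```python
def sweep_expired(r, set_key="trending_mints"):
    """Remove members whose backing key no longer exists. EXISTS also
    counts as an access, which nudges Redis's lazy expiration along."""
    removed = 0
    for member in list(r.smembers(set_key)):
        if not r.exists(member):
            r.srem(set_key, member)
            removed += 1
    return removed
```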
Related
I have two applications. The first one writes keys such as
SET MyKey_1
SET MyKey_2
and then sends a notification to the second application over the network.
The other application waits for the notification and then counts how many keys with a specific prefix are in the DB:
KEYS MyKey_*
if key count is different from expected it raises an error:
waitNotification(firstAppSocket)
if redisCount("KEYS MyKey_*") != 2 {
    panic("wrong key count")
}
Sometimes I run into a race condition where the first app sets a key and receives OK from Redis, notifies the second app, but the count returns 1. This happens roughly 1 time in 10. If I retry the count after a very short delay (we're talking microseconds), it becomes correct.
Is there a race condition in Redis for such an operation? Is there a key population delay?
I am playing with Redis Streams and it is good so far.
I am trying to understand if there is any way for me to expire old events based on time or some other criterion.
I know that we can remove events by event ID, but I do not want to remember/store event IDs, which is difficult. Instead I am looking for a way to remove old events, keeping only the last 10K or something like that.
This is possible as of Redis 6.2.
If you use the default event IDs (by passing * as the ID to XADD) they will begin with the UNIX timestamp in milliseconds of when the event was inserted, followed by a dash and a sequence number.
Then you can use XTRIM $stream_name MINID $timestamp to remove all events with an ID lower than '$timestamp', which is equivalent to all events older than the timestamp.
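For example, the MINID threshold for "everything older than a given age" can be built from a millisecond timestamp (a sketch; the stream name and the `xtrim` call in the comment are hypothetical redis-py usage):

```python
import time

def minid_for_age(max_age_seconds, now=None):
    """Auto-generated entry IDs start with a millisecond UNIX timestamp,
    so trimming at this MINID drops every entry older than the cutoff."""
    now = time.time() if now is None else now
    return f"{int((now - max_age_seconds) * 1000)}-0"

# Hypothetical usage: r.xtrim("mystream", minid=minid_for_age(3600))
```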
So far, there's no way to expire events by time. Instead, the only expire strategy is to expire events by keeping the latest N events. You can use the XTRIM command to evict old events.
Should I do that every time? Can the stream be configured to retain the last N events?
If you want to always keep the latest N events, you can call the XADD command with the MAXLEN option to get a capped stream. With the ~ modifier you get better performance at the cost of approximate trimming. Check the doc for details.
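A capped insert can be sketched like this (assuming the redis-py client, where XADD's MAXLEN and ~ map to the `maxlen` and `approximate` arguments; the stream name is illustrative):

```python
def add_capped(r, stream, fields, cap=10_000):
    """XADD <stream> MAXLEN ~ <cap> * field value ...: appends the entry
    and trims the stream to roughly the newest `cap` entries."""
    return r.xadd(stream, fields, maxlen=cap, approximate=True)
```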
UPDATE
Since Redis 6.2, XTRIM supports a new trimming strategy: MINID. With this strategy, Redis will evict entries whose ids are lower than the given threshold.
So if you use a timestamp as the entry ID (the default auto-generated IDs use the Unix timestamp in milliseconds as their first part), you can use this strategy to expire events based on time, i.e. remove events older than a given timestamp.
I have a use case where I'm streaming and processing live data into an Elasticache Redis cluster. In essence, I want to kick off an event when all events of a certain type have completed (i.e. the size of a value is no longer growing over the course of 60 seconds).
For example:
foo [event1]
foo [event1, event2]
foo [event1, event2]
foo [event1, event2] -> triggers some event if this key/value is constant for 60 seconds.
Is this at all possible?
I would suggest that, as part of all "changing" commands, you also set a key with a 60-second TTL. You can then subscribe to the expiration of that key using Redis keyspace notifications.
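That pattern can be sketched as follows (assuming the redis-py client interface; the `:active` suffix and the 60-second window are illustrative choices):

```python
STABLE_AFTER = 60  # seconds without changes before the value counts as settled

def touch(r, key):
    """Call alongside every command that changes `key`: refreshes a shadow
    key whose expiry marks STABLE_AFTER seconds of inactivity."""
    r.set(f"{key}:active", 1, ex=STABLE_AFTER)

def settled_key(expired_key_name):
    """Pub/sub handler for __keyevent@<db>__:expired payloads: returns the
    key that went quiet, or None for unrelated expirations."""
    if expired_key_name.endswith(":active"):
        return expired_key_name[: -len(":active")]
    return None
```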
In Redis, is it possible to retrieve the original timeout that was set on a key? I know there is a way to retrieve the pending time-to-live of any key, but I want the original value that was set when the key was created.
No, Redis doesn't store the original TTL for keys. It would be interesting to understand the use case that requires this.
You could, however, use a Sorted Set to keep track of the initial TTLs. The idea is that after each call to EXPIRE, call ZADD on that set with the member being the key's name. The score should be a decimal, where the part before the decimal point is the expiration timestamp and the fractional part is the TTL (padded with 0s according to your max TTL).
To retrieve the initial TTL, call ZSCORE on the set with the key's name and extract the part after the decimal point.
Note that by taking this approach you'll have to do some housekeeping, namely removing expired members from the set. To do that, periodically call ZREMRANGEBYSCORE on the set from -inf to the current timestamp.
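The score encoding above can be sketched as follows (assuming a maximum TTL of 999999 seconds, so the fractional part is padded to six digits; the sorted-set name in the usage comment is illustrative):

```python
MAX_TTL_DIGITS = 6  # supports original TTLs up to 999999 seconds

def encode_score(expire_at, ttl):
    """Integer part: expiration timestamp; fraction: the original TTL."""
    return expire_at + ttl / 10**MAX_TTL_DIGITS

def decode(score):
    """Return (expiration timestamp, original TTL)."""
    expire_at = int(score)
    return expire_at, round((score - expire_at) * 10**MAX_TTL_DIGITS)

# Hypothetical usage with redis-py, right after EXPIRE mykey 3600:
#   r.zadd("ttl_index", {"mykey": encode_score(int(time.time()) + 3600, 3600)})
```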
I'm starting to use Redis, and I've run into the following problem.
I have a bunch of objects, let's say Messages in my system. Each time a new User connects, I do the following:
INCR some global variable, let's say g_message_id, and save INCR's return value (the current value of g_message_id).
LPUSH the new message (including the id and the actual message) into a list.
Other clients use the value of g_message_id to check if there are any new messages to get.
Problem is, one client could INCR the g_message_id, but not have time to LPUSH the message before another client tries to read it, assuming that there is a new message.
In other words, I'm looking for a way to do the equivalent of adding rows in SQL, and having an auto-incremented index to work with.
Notes:
I can't use the list indexes, since I often have to delete parts of the list, making it invalid.
My situation in reality is a bit more complex, this is a simpler version.
Current solution:
The best solution I've come up with and what I plan to do is use WATCH and Transactions to try and perform an "autoincrement" myself.
But this is such a common use-case in Redis that I'm surprised there is no existing answer for it, so I'm worried I'm doing something wrong.
If I'm reading correctly, you are using g_message_id both as an id sequence and as a flag to indicate new message(s) are available. One option is to split this into two variables: one to assign message identifiers and the other as a flag to signal to clients that a new message is available.
Clients can then compare the current / prior value of g_new_message_flag to know when new messages are available:
> INCR g_message_id
(integer) 123
# construct the message with id=123 in code
> MULTI
OK
> INCR g_new_message_flag
QUEUED
> LPUSH g_msg_queue "{\"id\": 123, \"msg\": \"hey\"}"
QUEUED
> EXEC
Possible alternative, if your clients can support it: you might want to look into the
Redis publish/subscribe commands, e.g. clients could publish notifications of new messages and subscribe to one or more message channels to receive them. You could keep g_msg_queue to maintain a backlog of N messages for new clients, if necessary.
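The transcript above can be sketched in Python like so (assuming the redis-py client, whose `pipeline()` wraps MULTI/EXEC; key names are taken from the example):

```python
import json

def publish_message(r, text):
    """Allocate an id first, then atomically bump the new-message flag
    and push the message in one MULTI/EXEC transaction."""
    msg_id = r.incr("g_message_id")
    pipe = r.pipeline()  # commands below are queued and run atomically
    pipe.incr("g_new_message_flag")
    pipe.lpush("g_msg_queue", json.dumps({"id": msg_id, "msg": text}))
    pipe.execute()
    return msg_id
```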
Update based on comment: If you want each client to detect there are available messages, pop all that are available, and zero out the list, one option is to use a transaction to read the list:
# assuming the message queue contains "123", "456", "789"..
# a client detects there are new messages, then runs this:
> WATCH g_msg_queue
OK
> MULTI
OK
> LRANGE g_msg_queue 0 100000
QUEUED
> DEL g_msg_queue
QUEUED
> EXEC
1) 1) "789"
2) "456"
3) "123"
2) (integer) 1
Update 2: Given the new information, here's what I would do:
Have your writer clients use RPUSH to append new messages to the list. This lets the reader clients start at 0 and iterate forward over the list to get new messages.
Readers need to only remember the index of the last message they fetched from the list.
Readers watch g_new_message_flag to know when to fetch from the list.
Each reader client will then use "LRANGE list index limit" to fetch the new messages. Suppose a reader client has seen a total of 5 messages, it would run "LRANGE g_msg_queue 5 15" to get the next 10 messages. Suppose 3 are returned, so it remembers the index 8. You can make the limit as large as you want, and can walk through the list in small batches.
The reaper client should set a WATCH on the list and delete it inside a transaction, aborting if any client is concurrently reading from it.
When a reader client tries LRANGE and gets 0 messages it can assume the list has been truncated and reset its index to 0.
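The reader side of this scheme can be sketched as follows (a hypothetical helper; `r.lrange` follows the redis-py client interface):

```python
def fetch_new(r, index, batch=10, key="g_msg_queue"):
    """LRANGE from the last-seen index; if nothing comes back and we were
    past 0, assume the reaper truncated the list and start over."""
    msgs = r.lrange(key, index, index + batch - 1)
    if not msgs and index > 0:
        return 0, []
    return index + len(msgs), msgs
```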
Do you really need unique sequential IDs? You can use UUIDs for uniqueness and timestamps to check for new messages. If you keep the clocks on all your servers properly synchronized then timestamps with a one second resolution should work just fine.
If you really do need unique sequential IDs then you'll probably have to set up a Flickr style ticket server to properly manage the central list of IDs. This would, essentially, move your g_message_id into a database with proper transaction handling.
You can simulate auto-incrementing a unique key for new rows: use DBSIZE to get the current number of keys, then in your code increment that number by 1 and use it as the key for the new row. It's simple, but note that DBSIZE followed by a client-side increment is not atomic, and the scheme breaks if keys are ever deleted.