How do I get records in a Redis stream by position instead of ID?

With XREAD, I can get a COUNT of entries starting from a specific ID (or from the first or last entry). With XRANGE, I can get a range from ID to ID (or from the first with - or to the last with +). The same works in reverse with XREVRANGE.
How do I get records by position? E.g. "give me the last 10 records in order"? XREVRANGE can do it with XREVRANGE stream + - COUNT 10, although the results will be in reverse order, and XRANGE will give me the first 10 with XRANGE stream - + COUNT 10, but how do I:
get the last X in order?
get an arbitrary X in the middle, e.g. the 10 from position 100-109 (inclusive), when the stream has 5000 records in it?

Redis Streams currently (v6.0) do not provide an API for accessing their records via position/offset. The Stream's main data structure (a radix tree) can't provide this type of functionality efficiently.
This is doable with an additional index, for example a Sorted Set of the ids. A similar discussion is at https://groups.google.com/g/redis-db/c/bT00XGMq4DM/m/lWDzFa3zBQAJ
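A minimal sketch of such an index, with illustrative key names mystream and mystream:ids: every writer mirrors the ID returned by XADD into a Sorted Set with score 0, so rank-based ZRANGE gives positional access. Note that with identical scores the set orders members lexicographically, which matches insertion order as long as no single millisecond holds more than 10 entries (otherwise you'd need to zero-pad the sequence part):
XADD mystream * sensor 42              # returns the new entry's ID, e.g. 1526919030474-0
ZADD mystream:ids 0 1526919030474-0    # mirror every returned ID into the index
ZRANGE mystream:ids -10 -1             # IDs of the last 10 entries, in order
ZRANGE mystream:ids 100 109            # IDs at positions 100-109 (0-based)
XRANGE mystream <first-id> <last-id>   # then fetch the actual entries between those IDs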
The real question is, as #sonus21 asked: what's the use case?

It is an existing API that allows users to retrieve records from index x to index y. It has a memory backend (just an array of records) and a file-based one; I want to add Redis as a backend as well.
We can use a Redis List and fetch items at any arbitrary offset using LRANGE.
Attaching consumers to a Redis LIST is easy, but a LIST does not come with an ack mechanism like a Stream does. You can build an ack mechanism in your code, but that can be tricky; there are many ways to do it in Redis, see one of them in Rqueue.
An ack mechanism is not always required. If you just want to consume from the LIST, you can use the BLPOP/LPOP commands as a very simple consumer, but these commands remove entries from the LIST, so your offsets become dynamic: the list will not always start from position 0. Instead of BLPOP/LPOP, use LRANGE and track the position in a simple key like my-consumer-offset; with that offset your consumer can always fetch the next elements from the list, as in the sketch below.
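A rough sketch of that offset-tracked consumer; the key names mylist and my-consumer-offset are placeholders:
RPUSH mylist "job-1" "job-2" "job-3"   # producer only appends; nothing is ever popped
GET my-consumer-offset                 # consumer reads its position; nil means start at 0
LRANGE mylist 0 9                      # fetch the next batch starting at that offset (here 0)
INCRBY my-consumer-offset 10           # advance by the number of elements actually received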
You can use a LIST and a STREAM together to get both random-offset access and the streaming features. The most important part is to use a consumer group so that entries are not missed. During the push operation, add the element to the STREAM as well as to the LIST. Once elements are inserted, your consumers work in a consumer group without any issue, and for random-offset access you read elements from the LIST. The stream will keep growing since it is append-only, so you would need to trim it periodically; see my other answer for deleting older entries.
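A sketch of that dual write, wrapped in a transaction so the STREAM and the LIST stay in step (the key names events:stream and events:list are illustrative):
MULTI
XADD events:stream * payload "hello"   # consumer groups read from the stream
RPUSH events:list "hello"              # LRANGE on the list serves random-offset reads
EXEC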

Related

Get a list of all stream names and the length of each in Redis

We are creating Redis streams dynamically.
We want to keep track of a summary of the streams, i.e.:
stream name
count/length
Is there a command that can do this?
SCAN was suggested somewhere, but it was not returning the expected result.
The last resort is to maintain a separate list of stream names in Redis.
The above are the basic requirements.
It would also be nice to have info about the last element. I think that can be obtained with:
XREVRANGE temp + - COUNT 1
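A sketch of how the summary could be assembled with existing commands, assuming Redis 6.0+ for SCAN's TYPE filter (on older servers, SCAN all keys and check each with the TYPE command). Note SCAN is cursor-based, so a single call returns only one batch, which may be why it didn't return the expected result:
SCAN 0 TYPE stream COUNT 100      # repeat with the returned cursor until it comes back as 0
XLEN mystream                     # length of each stream found
XREVRANGE mystream + - COUNT 1    # its last element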

Redis: How do I count the elements in a stream in a certain range?

Business Objective
I'm creating a dashboard that will depend on some time-series data, and I'll use Redis to implement it. I'm new to Redis and I'm trying to use Redis Streams to count the elements in a stream.
XADD conversation:9:chat_messages * id 2583 user_type Bot
XADD conversation:9:chat_messages * id 732016 user_type User
XADD conversation:9:chat_messages * id 732017 user_type Staff
XRANGE conversation:9:chat_messages - +
I'm aware that I can get the total count of the elements using the XLEN command like this:
XLEN conversation:9:chat_messages
but I also want to count the elements within a period, for example (this is not a real command):
XLEN conversation:9:chat_messages 1579551316273 1579551321872
I know I can use a Lua script to count those elements, but I want a REALLY fast way to achieve this, and I assume a native Redis command would be the fastest.
Is there any way to achieve this with a straightforward Redis command? Or do I have to write a Lua script to do it?
Additional information
I'm limited by AWS ElastiCache to Redis 5.0.6 only; I cannot install other modules such as the RedisTimeSeries module. I'd like to use that module, but it's not possible at the moment.
While the Redis Stream data structure doesn't support this, you can use a Sorted Set alongside it for keeping track of message ranges.
Basically, for each message ID you get from XADD - e.g. "1579551316273-0" - you need to do a ZADD conversation:9:ids 0 1579551316273-0. Then, you can use ZLEXCOUNT to get the "length" of a range.
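A sketch with the IDs from the example above; the index key conversation:9:ids is an assumption. With all scores equal to 0, the set orders its members lexicographically, which matches numeric order while the millisecond timestamps have the same digit count:
ZADD conversation:9:ids 0 1579551316273-0
ZADD conversation:9:ids 0 1579551321872-0
ZLEXCOUNT conversation:9:ids [1579551316273 (1579551321873   # counts IDs with ms in [1579551316273, 1579551321872]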
Sorry, there is no way to achieve this with a plain Redis command.
Your best option with Redis Streams is a Lua script. You will get O(N), with N being the number of elements counted, instead of the O(log N) a dedicated command could offer.
-- count the stream entries whose IDs fall between ARGV[1] and ARGV[2] (inclusive)
local T = redis.call('XRANGE', KEYS[1], ARGV[1], ARGV[2])
local count = 0
for _ in pairs(T) do count = count + 1 end
return count
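It could be invoked with the stream key and the bounds from the question, e.g.:
EVAL <script> 1 conversation:9:chat_messages 1579551316273 1579551321872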
Note that while the difference between O(N) and O(log N) is significant for a large N, for a chat application tracked per conversation it won't make much of a difference, even if chats have hundreds or thousands of entries, once you account for total command time including the round-trip time, which dominates. The Lua script above also removes the network payload and client-side processing time.
You can switch to sorted sets if you really want O(log N) and you don't need consumer groups and other stream features. See How to store in Redis sorted set with server-side timestamp as score? if you want to use Redis server timestamp atomically.
Then you can use ZCOUNT which is O(log(N)).
If you do need Stream features, then you would need to keep the sorted set as a secondary index.
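A sketch of that secondary index; the key name conversation:9:by_time is an assumption. Each member is scored by the millisecond part of its stream ID, so ZCOUNT answers the range query directly:
ZADD conversation:9:by_time 1579551316273 1579551316273-0   # score = ms part of the entry's ID
ZCOUNT conversation:9:by_time 1579551316273 1579551321872   # O(log N) count over the period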

Redis: clean up members of ZSET

I'm currently studying Redis, and have the following case:
What I have is a sorted set per Google place ID, in which all posts are sorted from most recent to oldest.
The first page that is requested fetches posts < current timestamp.
When a cursor is sent to the backend, this cursor is a simple timestamp that indicates from where to fetch the next posts from the ZSET.
The query to retrieve posts by place id would be:
ZREVRANGEBYSCORE <gplaceId> <cur_timestamp> -INF WITHSCORES LIMIT <offset:timestamp as from where to fetch> <count:number of posts>
My question is what is the recommended way to clean up members of the ZSET.
Since I want to use Redis as a cache, I would prefer to limit the number of posts per place, for example to 50. When a place gets a new post while 50 posts are already in the set, I want to drop the oldest post from the set.
Of course, I realise that I could check on every insert and handle it there, but I wonder if Redis has a recommended way of performing this cleanup. Alternatively, I could create a scheduler for this, but I would prefer not to.
Unfortunately, Redis sorted sets do not come with an out-of-the-box feature for this. If sorted sets allowed a max-size attribute with a configurable eviction strategy, you could have avoided some extra work.
See this related question:
How to specify Redis Sorted Set a fixed size?
In absence of such a feature, the two approaches you mentioned are right.
You can replace plain inserts with a transaction: insert, check the size, delete if > 50 (see the sketch after this list)
A thread that checks the size of the set and trims it periodically
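A sketch of the transactional variant; the key and member names are placeholders. ZREMRANGEBYRANK with a stop index of -51 removes everything except the 50 highest-scored (i.e. newest) members, and is a no-op while the set holds 50 or fewer, so no size check is needed:
MULTI
ZADD posts:<gplaceId> <timestamp> <postId>   # insert the new post
ZREMRANGEBYRANK posts:<gplaceId> 0 -51       # trim to the newest 50
EXEC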

Which Redis data type fits best for the following example

I have the following scenario:
Fetch array of numbers (from REDIS) conditionally
For each number do some async stuff (fetch something from DB based on number)
For each thing in result set from DB do another async stuff
Periodically repeat 1, 2 and 3, because new numbers will constantly be added to the Redis structure. The numbers represent Unix timestamps in milliseconds, so out of the box they will always be sorted by time of addition.
"Conditionally" means: fetch those Unix timestamps from Redis that are less than or equal to the current Unix timestamp in milliseconds (Date.now()).
The question is which Redis data type fits this use case best, keeping in mind that this code will be scaled up to N instances, so N instances will share access to a single Redis instance. To share the load equally, each instance will read, for example, the first (oldest) 5 numbers from Redis. The numbers are unique (adding the same number again should fail silently), so a Redis SET seems like a good choice, but reading the first M elements of a SET seems impossible.
To prevent two different instances of the code from reading the same numbers, the Redis read operation should be atomic: it should read the numbers and delete them. If any async operation fails on a specific number (steps 2 and 3), the number should be added back to Redis to be handled again. It should be re-added to the head, not the tail, so it is handled again as soon as possible. As far as I know, SADD would push it to the tail.
SMEMBERS key would read everything; it looks like a hammer to me. I would need application logic to take the first five, check which are less than or equal to Date.now(), delete those, and somehow wrap everything in a single transaction. Besides that, the set's cardinality can be huge.
SSCAN sounds interesting, but I don't have a clue how it works in a "scaled" environment like the one described above. Besides that, per the Redis docs: "The SCAN family of commands only offer limited guarantees about the returned elements since the collection that we incrementally iterate can change during the iteration process." As described above, the collection will change frequently.
A more appropriate data structure would be the Sorted Set: members have a float score that is very suitable for storing a timestamp, and you can perform range searches (i.e. anything less than or equal to a given value).
The relevant starting points are the ZADD, ZRANGEBYSCORE and ZREMRANGEBYSCORE commands.
To ensure atomicity when reading and removing members, you can choose between the following options: Redis transactions, a Redis Lua script, and, in the next version (v4), a Redis module.
Transactions
Using transactions simply means running the following on your instances:
MULTI
ZRANGEBYSCORE <keyname> -inf <now-timestamp>
ZREMRANGEBYSCORE <keyname> -inf <now-timestamp>
EXEC
Where <keyname> is your key's name and <now-timestamp> is the current time.
Lua script
A Lua script can be cached and runs embedded in the server, so in some cases it is a preferable approach. It is definitely the best approach for short snippets of atomic logic if you need flow control (remember that a MULTI transaction returns the values only after execution). Such a script would look as follows:
-- fetch everything scored up to the given timestamp, then remove it, atomically
local r = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1])
redis.call('ZREMRANGEBYSCORE', KEYS[1], '-inf', ARGV[1])
return r
To run this, first cache it using SCRIPT LOAD and then call it with EVALSHA like so:
EVALSHA <script-sha> 1 <key-name> <now-timestamp>
Where <script-sha> is the sha1 of the script returned by SCRIPT LOAD.
Redis modules
In the near future, once v4 is GA, you'll be able to write and use modules. Once that becomes a reality, you'll be able to use this module we've made, which provides the ZPOP command and could be extended to cover this use case as well.
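Since then, Redis 5.0 has shipped ZPOPMIN/BZPOPMIN natively, so no module is needed for the pop itself. A plain pop ignores the "score <= now" condition, though, so the transaction or Lua approaches above remain the closest fit for this exact use case:
ZPOPMIN <keyname> 5     # atomically pops the five lowest-scored members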

Autoincrement in Redis

I'm starting to use Redis, and I've run into the following problem.
I have a bunch of objects, let's say Messages in my system. Each time a new User connects, I do the following:
INCR some global variable, let's say g_message_id, and save INCR's return value (the current value of g_message_id).
LPUSH the new message (including the id and the actual message) into a list.
Other clients use the value of g_message_id to check if there are any new messages to get.
The problem is that one client could INCR g_message_id but not have time to LPUSH the message before another client reads the new value of g_message_id and tries to fetch a message that isn't there yet.
In other words, I'm looking for a way to do the equivalent of adding rows in SQL, and having an auto-incremented index to work with.
Notes:
I can't use the list indexes, since I often have to delete parts of the list, which invalidates them.
My situation in reality is a bit more complex, this is a simpler version.
Current solution:
The best solution I've come up with and what I plan to do is use WATCH and Transactions to try and perform an "autoincrement" myself.
But this is such a common use case in Redis that I'm surprised there is no existing answer for it, so I'm worried I'm doing something wrong.
If I'm reading correctly, you are using g_message_id both as an id sequence and as a flag to indicate new message(s) are available. One option is to split this into two variables: one to assign message identifiers and the other as a flag to signal to clients that a new message is available.
Clients can then compare the current / prior value of g_new_message_flag to know when new messages are available:
> INCR g_message_id
(integer) 123
# construct the message with id=123 in code
> MULTI
OK
> INCR g_new_message_flag
QUEUED
> LPUSH g_msg_queue "{\"id\": 123, \"msg\": \"hey\"}"
QUEUED
> EXEC
A possible alternative, if your clients can support it: you might want to look into the Redis publish/subscribe commands, e.g. clients could publish notifications of new messages and subscribe to one or more message channels to receive notifications. You could keep the g_msg_queue to maintain a backlog of N messages for new clients, if necessary.
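A minimal sketch of that pattern; the channel name new-messages is illustrative:
SUBSCRIBE new-messages        # each reader listens for notifications
PUBLISH new-messages 123      # the writer announces the new message id after its LPUSH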
Update based on comment: If you want each client to detect there are available messages, pop all that are available, and zero out the list, one option is to use a transaction to read the list:
# assuming the message queue contains "123", "456", "789"..
# a client detects there are new messages, then runs this:
> WATCH g_msg_queue
OK
> MULTI
OK
> LRANGE g_msg_queue 0 100000
QUEUED
> DEL g_msg_queue
QUEUED
> EXEC
1) 1) "789"
2) "456"
3) "123"
2) (integer) 1
Update 2: Given the new information, here's what I would do:
Have your writer clients use RPUSH to append new messages to the list. This lets the reader clients start at 0 and iterate forward over the list to get new messages.
Readers need to only remember the index of the last message they fetched from the list.
Readers watch g_new_message_flag to know when to fetch from the list.
Each reader client will then use "LRANGE list start stop" to fetch the new messages (both indexes are inclusive). Suppose a reader client has seen a total of 5 messages; it would run "LRANGE g_msg_queue 5 14" to get the next 10. Suppose 3 are returned: it then remembers the index 8. You can make the batch as large as you want, and walk through the list in small batches.
The reaper client should set a WATCH on the list and delete it inside a transaction; the EXEC aborts if any other client modifies the list concurrently (for example, a writer pushing a new message), so nothing is lost.
When a reader client tries LRANGE and gets 0 messages it can assume the list has been truncated and reset its index to 0.
Do you really need unique sequential IDs? You can use UUIDs for uniqueness and timestamps to check for new messages. If you keep the clocks on all your servers properly synchronized, then timestamps with one-second resolution should work just fine.
If you really do need unique sequential IDs then you'll probably have to set up a Flickr style ticket server to properly manage the central list of IDs. This would, essentially, move your g_message_id into a database with proper transaction handling.
You can simulate auto-incrementing a unique key for new rows: use DBSIZE to get the current number of keys, increment that number by 1 in your code, and use the result as the key for the new row. It's simple, but note that it is not atomic (two clients can read the same DBSIZE concurrently) and DBSIZE counts every key in the database, not just your rows.