Redis stream is returning an empty array

I created a new Redis stream using the following command.
XGROUP CREATE A mygroup $ MKSTREAM
I added the following data:
xadd A * X 1
xadd A * X 2
xadd A * X 3
xadd A * X 4
I am reading the data using the following command.
XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS A 0
It's returning an empty array:
1) 1) "A"
2) (empty array)
I am using Redis version 6.2.1. Kindly help me to debug the error.

When you use the XREADGROUP command to read messages, you should specify > as the ID instead of 0.
Reference from the doc:
The special > ID, which means that the consumer want to receive only messages that were never delivered to any other consumer. It just means, give me new messages.
Any other ID, that is, 0 or any other valid ID or incomplete ID (just the millisecond time part), will have the effect of returning entries that are pending for the consumer sending the command with IDs greater than the one provided. So basically if the ID is not >, then the command will just let the client access its pending entries: messages delivered to it, but not yet acknowledged. Note that in this case, both BLOCK and NOACK are ignored.
If the ID is not >, you can only read pending messages; however, in your case there are no pending messages, since you have not consumed anything yet.
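For example, this reads the first never-delivered entry (the entry ID shown in the reply is illustrative, since XADD generates IDs from the server clock):
XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS A >
1) 1) "A"
   2) 1) 1) "1617000000000-0"
         2) 1) "X"
            2) "1"
After that, the entry is in Alice's pending entries list, and reading with an ID of 0 will return it until you XACK it.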

How to get pending items with minIdleTime greater than some value?

Using Redis streams, we can have pending items that some consumers haven't finished processing.
I can find such items using the XPENDING command.
Say we have two pending items:
1) 1) "1-0"
2) "local-dev"
3) (integer) 9599
4) (integer) 1
2) 1) "2-0"
2) "local-dev"
3) (integer) 9599
4) (integer) 1
The problem is that XPENDING only lets us filter by ID. I have a couple of service nodes (A, B) which perform a zombie check: XPENDING mystream test_group - 5 1
Each of them receives the "1-0" item and calls XCLAIM, but only one of them (say A) becomes the owner and starts processing the item. B then runs XPENDING again to get new items and receives "1-0" again, because it hasn't been processed yet (A is still working on it), so it looks like my whole queue is blocked.
Is there any way to avoid this and process pending items concurrently?
You want to see the documentation, in particular Recovering from permanent failures.
The way this is normally used is:
You allow the same consumer to consume its messages from the PEL after it recovers.
You only XCLAIM from another consumer after a reasonably long time has elapsed, which suggests the original consumer has permanently failed.
You use the delivery count to detect poison pills or dead letters. If a message has been retried many times, maybe it's better to report it to an admin for analysis.
So normally all you need for the permanent-failure recovery logic is to look at the idle time of the oldest entries in other consumers' PELs, and you claim and consume them one by one.
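As of Redis 6.2, XPENDING also accepts an IDLE filter, and giving XCLAIM a min-idle-time makes the concurrent check safe: a successful claim resets the entry's idle time, so only one of the competing nodes actually takes ownership. A sketch, with an illustrative 60000 ms threshold:
XPENDING mystream test_group IDLE 60000 - + 10
XCLAIM mystream test_group B 60000 1-0
If node A claimed "1-0" within the last 60 seconds, B's XCLAIM returns nothing and B simply skips the entry.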

Use XREADGROUP to get the id specified

I'm wondering if there is a command or parameter with which I can get the message for a specified RecordId, e.g.
XREADGROUP GROUP mygroup myconsumer COUNT 1 STREAMS mystream 12345-0
I want the message with the ID 12345-0, but it seems I get the first message after 12345-0.
I cannot use XRANGE since it doesn't update the deliveryCount and lastDeliveryTime and it doesn't seem to understand the concept of consumer groups.
I'm also aware of
XREADGROUP GROUP mygroup myconsumer STREAMS mystream 0
which gives me all pending messages, but this would update the deliveryCount for all messages and I don't want that.
Redis itself doesn't provide the function you ask for, so instead you might have to use something like
XREADGROUP GROUP mygroup myconsumer COUNT 1 STREAMS mystream 12344-99999
instead of "12345-0".
The entry ID returned by Redis Streams is in the format millisecondsTime-sequenceNumber.
Since it's unlikely that you insert 99999 items within one millisecond, you can be sure that you get the correct item.
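For example, assuming 12345-0 is already in myconsumer's pending list and carries a single field named field (names are illustrative), the reply would look like:
XREADGROUP GROUP mygroup myconsumer COUNT 1 STREAMS mystream 12344-99999
1) 1) "mystream"
   2) 1) 1) "12345-0"
         2) 1) "field"
            2) "value"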

How to prevent Redis stream memory from increasing infinitely?

I just realized that XACK does not automatically delete a message, even when only one consumer group exists.
I thought that once all consumer groups had acked the same message, the message would be deleted by the Redis server, but it seems that this is not the case.
So the Redis stream's memory grows without bound, because no messages are ever deleted.
Maybe the only way to prevent this is to manually XDEL messages? But how can I know that all consumer groups have acked a message?
Need some help, thanks!
Redis streams are primarily an append-only data structure. It's possible to remove an entry using the XDEL command; however, that doesn't necessarily free up the memory used by the entry:
> XDEL mystream 1538561700640-0
(integer) 1
You could also cap the stream with an arbitrary threshold using the MAXLEN option to XADD or use the XTRIM command explicitly:
> XADD mystream MAXLEN 1000 * value 1
1526654998691-0
...
> XLEN mystream
(integer) 1000
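XTRIM takes the same MAXLEN option and replies with the number of entries removed; adding the ~ modifier (in both XADD and XTRIM) tells Redis it may trim lazily, only evicting whole macro nodes, which is much cheaper. The count below is illustrative:
> XTRIM mystream MAXLEN 1000
(integer) 4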
But how can I know all consumer groups have acked the message?
You can inspect the list of pending messages for each consumer group using the XPENDING command:
> XPENDING mystream mygroup
1) (integer) 1
2) 1526984818136-0
3) 1526984818136-0
4) 1) 1) "consumer-1"
2) "1"

What is the race condition for Redis INCR Rate Limiter 2?

I have read the INCR documentation here but I could not understand why the Rate limiter 2 has a race condition.
In addition, what does the documentation mean by "the key will be leaked until we'll see the same IP address again"?
Can anyone help explain? Thank you very much!
You are talking about the following code, which has two problems in a multi-threaded environment.
1.  FUNCTION LIMIT_API_CALL(ip):
2.  current = GET(ip)
3.  IF current != NULL AND current > 10 THEN
4.      ERROR "too many requests per second"
5.  ELSE
6.      value = INCR(ip)
7.      IF value == 1 THEN
8.          EXPIRE(ip,1)
9.      END
10.     PERFORM_API_CALL()
11. END
the key will be leaked until we'll see the same IP address again
If the client dies, e.g. the client process is killed or the machine goes down, before executing LINE 8, then the key ip won't have an expiration set. If we never see this ip again, the key will persist in the Redis database forever, i.e. it is leaked.
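You can spot such a leaked key with TTL; a reply of -1 means the key exists but has no expiration set (the key name here is illustrative):
> TTL 1.2.3.4
(integer) -1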
Rate limiter 2 has a race condition
Suppose the key ip doesn't exist in the database, and more than 10 clients, say 20, execute LINE 2 simultaneously. All of them will get a NULL current, so they will all go into the ELSE clause. Eventually all these clients will execute LINE 10, and the API will be called more than 10 times.
This solution fails because there's a time window between the check (LINE 2 and LINE 3) and the INCR (LINE 6).
A Correct Solution
value = INCR(ip)
IF value == 1 THEN
    EXPIRE(ip, 1)
END
IF value <= 10 THEN
    return true
ELSE
    return false
END
Wrap the above code into a Lua script to ensure it runs atomically. If this script returns true, perform the API call. Otherwise, do nothing.
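A minimal sketch of such a script (the single key passed in is the client IP; the limit of 10 calls per second comes from the pseudocode above):
-- runs atomically inside Redis: count the call and set a 1-second TTL on first use
local value = redis.call('INCR', KEYS[1])
if value == 1 then
    redis.call('EXPIRE', KEYS[1], 1)
end
-- allow at most 10 calls per key per second
if value <= 10 then
    return 1
else
    return 0
end
Run it with EVAL (or load it once with SCRIPT LOAD and call EVALSHA), passing 1 as the number of keys and the IP as the key; a reply of 1 means the call is allowed.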

How to get priority of current job?

In beanstalkd
telnet localhost 11300
use foo
USING foo
put 0 100 120 5
hello
INSERTED 1
How can I know the priority of this job when I reserve it? And can I release it with the new priority equal to the current priority + 100?
Beanstalkd doesn't return the priority with the data, but you could easily add it as metadata in your own message body. For example, with JSON as a message wrapper:
{"priority":100,"timestamp":1302642381,"job":"download http://example.com/"}
The next message that will be reserved will be the next available entry from the selected tubes, according to priority and time, subject to any delay that you requested when you originally sent the message to the queue.
Addition: You can get the priority of a beanstalkd job (as well as a number of other pieces of information, such as how many times it has previously been reserved), but it requires an additional call, to the stats-job command. Called with the job ID, it returns about a dozen different pieces of information. See the protocol document and your library's docs.
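A sketch of that exchange, continuing the session from the question once the job's 100-second delay has elapsed (the OK byte count and the omitted stats fields are illustrative):
watch foo
WATCHING 2
reserve
RESERVED 1 5
hello
stats-job 1
OK 140
---
id: 1
tube: foo
state: reserved
pri: 0
...
release 1 100 0
RELEASED
Here release 1 100 0 puts the job back with priority 100 (the original priority 0 plus 100) and no delay.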