How to remove Redis Stream pending entries from a consumer?

When a consumer in a group reads a stream entry with XREADGROUP, the entry goes into that consumer's pending entries list (PEL),
and as far as I know it can only be removed from the PEL when the consumer sends XACK or when it is XCLAIMed/XAUTOCLAIMed by another consumer.
My question: if the original consumer died after reading the entry without sending XACK, is there a way to retrieve it, or to change its status back to unprocessed, so that whichever consumer next reads the stream is given that entry? Or is the only way to claim it for another consumer?
XCLAIM can only be done if we know which consumer holds the entry, right? In this case I was hoping I could just change the status of the entry back to unprocessed, so a new consumer could read it.
I'm new to Redis, so any thoughts would be appreciated :) Thank you!

if the original consumer died after reading the entry without sending XACK, is there a way to retrieve it, or to change its status back to unprocessed
No, there isn't: once an entry is in the PEL, it can only be reassigned to another consumer; it can't be released as if it had never been read.
XCLAIM can only be done if we know which consumer holds the entry, right?
Indeed, but you could have a special consumer which, within a single Lua script (to guarantee atomicity):
XCLAIMs pending entries based on certain criteria (e.g. idle time) and assigns them to itself;
XREADGROUPs them from the PEL;
XADDs them back to the original stream; and
XACKs the entries read in step 2.
I would consider this sequence of commands overkill for most scenarios I can think of, and I would suggest avoiding it. But it would allow a copy of the entries that were stuck in the PEL to be re-read and re-assigned to other consumers as they XREADGROUP them. A rough sketch follows.
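As an illustration only, here is a minimal sketch of that special consumer, written as a Lua script invoked from Java via Jedis. The stream, group, and consumer names ("mystream", "mygroup", "reaper") are hypothetical, and it assumes Redis 6.2+ for XAUTOCLAIM, which both claims the idle entries and returns their fields, folding steps 1 and 2 of the list above into one call:

import java.util.List;
import redis.clients.jedis.JedisPooled;

public class PelRequeuer {
    // Sketch only: claims entries idle longer than ARGV[3] ms to a throwaway
    // consumer (ARGV[2]), re-adds a copy of each to the stream, then ACKs the
    // original so it leaves the PEL.
    private static final String SCRIPT =
        "local res = redis.call('XAUTOCLAIM', KEYS[1], ARGV[1], ARGV[2], ARGV[3], '0-0')\n" +
        "for _, entry in ipairs(res[2]) do\n" +
        "  local id, fields = entry[1], entry[2]\n" +
        "  redis.call('XADD', KEYS[1], '*', unpack(fields))\n" + // re-add a fresh copy (new ID)
        "  redis.call('XACK', KEYS[1], ARGV[1], id)\n" +         // remove the stuck entry from the PEL
        "end\n" +
        "return #res[2]";

    public static void main(String[] args) {
        JedisPooled jedis = new JedisPooled("localhost", 6379);
        // hypothetical names: stream "mystream", group "mygroup", 60 s idle cutoff
        Object moved = jedis.eval(SCRIPT,
                List.of("mystream"), List.of("mygroup", "reaper", "60000"));
        System.out.println("re-queued " + moved + " entries");
    }
}

Note that the copies get brand-new IDs, so with this approach consumers must not rely on entry IDs for deduplication.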

Related

How do I clear the PEL list for a consumer group in Redis

I would like to know how to clear the PEL for a given Redis consumer group without individually acknowledging every message.
Context
I have a consumer group on one of my Redis streams. For testing purposes, I need to clear the PEL of the consumer group without deleting the stream.
From a consumer's point of view, you can use XPENDING to list the message IDs that have not yet been acknowledged. Then, using XCLAIM, you can claim a specific message.
There is also the XAUTOCLAIM command, which combines XPENDING and XCLAIM for every message in the group.
In any case, by design a message stays in the PEL and is removed only when it is acknowledged, claimed by another consumer, or deleted from the stream.
See the command docs: XPENDING, XCLAIM, XAUTOCLAIM.
Redis doesn't implement a command for that. The only way to remove entries from a consumer's PEL is to ACK each pending message individually.
This thread explains why messages deleted from a stream are still part of a PEL.
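If it helps for testing, here is a minimal sketch of that ACK loop, again as a Lua script run from Java with Jedis. The stream and group names are hypothetical; the extended form of XPENDING returns [id, consumer, idle-time, delivery-count] per entry, so e[1] below is the entry ID:

import java.util.List;
import redis.clients.jedis.JedisPooled;

public class ClearPel {
    // Sketch only: ACKs up to 1000 pending entries per call; repeat until 0.
    private static final String SCRIPT =
        "local p = redis.call('XPENDING', KEYS[1], ARGV[1], '-', '+', 1000)\n" +
        "for _, e in ipairs(p) do\n" +
        "  redis.call('XACK', KEYS[1], ARGV[1], e[1])\n" + // e[1] is the entry ID
        "end\n" +
        "return #p";

    public static void main(String[] args) {
        JedisPooled jedis = new JedisPooled("localhost", 6379);
        long acked;
        do {
            acked = (Long) jedis.eval(SCRIPT, List.of("mystream"), List.of("mygroup"));
        } while (acked > 0);
    }
}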

RabbitMQ message processing by specific order

I would like to know the best practice for processing messages from a queue when I need to make sure that no other consumer is concurrently processing another message with the same product ID.
My problem is that if someone places an order with product IDs and another message comes in with the same products, I want to process them one by one, not in parallel.
I am thinking of using Redis, where I would save the IDs that are being processed and clear them after processing is over. But maybe there are better solutions for this kind of situation.
In RabbitMQ, messages in the same queue are ordered by publish order: they will be received in the same order they were sent to the queue.
Take a look at the documentation on message ordering: https://www.rabbitmq.com/semantics.html#ordering
If you want to consume messages exactly in publish order, you should use a "direct exchange" with one queue.
You must also have exactly one consumer, with its concurrency limit set to one.
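To make that concrete, here is a hypothetical sketch with the RabbitMQ Java client (all names invented): one direct exchange, one queue, one consumer, manual acks, and a prefetch of 1 so only a single message is ever in flight:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class SingleOrderedConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes localhost defaults
        Connection conn = factory.newConnection();           // left open so the consumer keeps running
        Channel channel = conn.createChannel();

        channel.exchangeDeclare("orders", "direct", true);
        channel.queueDeclare("orders.q", true, false, false, null);
        channel.queueBind("orders.q", "orders", "order.created");

        channel.basicQos(1); // at most one unacked message in flight
        channel.basicConsume("orders.q", false, (tag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("processing " + body);
            // process the message, then ack so the broker delivers the next one
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, tag -> { });
    }
}

With manual acks and a prefetch of 1, the broker never hands this consumer message n+1 before message n has been acknowledged, which is what enforces one-by-one processing.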

Dead lettering messages on an expired queue bound with a consistent hash exchange

I have a situation where I am processing events that are related to specific sources. Each source has a key or ID, which I can use as the hash. Events from each source have to be processed in order, but events from different sources can be parallelized, to achieve horizontal scalability. There will be hundreds of source keys.
I am planning to set the key as part of the routing key when submitting messages to RabbitMQ, and then use the consistent-hash-exchange so that events from the same source are routed to the same queue. I was then thinking of dynamically binding private queues from consumers, with a TTL (so that they are gracefully removed if a consumer is down). At the beginning I will just have 2 or 3 consumers for redundancy, but if I want to scale up due to an increased number of messages, I can just start another consumer.
My question is what happens if a consumer is down and there are messages in its queue? Ideally I would want the messages in the queue to be rerouted back to the exchange, with the consistent-hash-exchange routing them to a different queue (since the original queue would be no longer there).
The RabbitMQ documentation about dead lettering doesn't explicitly mention the scenario of TTL on consumer queues, or what happens when the queue gets deleted.
Does my approach make sense? How can I achieve the consumer fault-tolerance I am looking for while retaining the ordering by a specific routing key?
Note: I know there is even a more subtle race condition if during the process of routing dead lettered messages to the exchange new messages come that were originally routed to the expired queue, which will now be routed to a different consumer, thus ordering will be broken at that specific instance.
There is more than one question to be answered here; I'll take them in order.
My question is what happens if a consumer is down and there are messages in its queue?
Outside of the context (rest of the question) - messages stay in the queue until they are ACKed or their TTL expires.
The RabbitMQ documentation about dead lettering doesn't explicitly mention the scenario of TTL on consumer queues, or what happens when the queue gets deleted.
It does say "...The TTL for the message expires...", so basically if the message is not ACKed within the given TTL, it gets dead-lettered to the DLX. As for the queue TTL, it is basically an expiry time for the whole queue. Additionally, if the queue gets deleted, its messages are simply gone (not taking any mirroring into account, of course).
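For reference, all of these are plain arguments given at queue declaration time. A hypothetical fragment (assuming an open channel, like the other snippets in this thread; names invented):

import java.util.HashMap;
import java.util.Map;

Map<String, Object> qArgs = new HashMap<>();
qArgs.put("x-expires", 300000);             // queue TTL: delete the queue after 5 min of disuse
qArgs.put("x-message-ttl", 60000);          // message TTL: expire messages after 60 s...
qArgs.put("x-dead-letter-exchange", "dlx"); // ...and route the expired ones to this exchange
channel.queueDeclare("consumer-1.q", true, false, false, qArgs);

Note the asymmetry described above: messages dead-lettered by message TTL go to the DLX, but messages still sitting in a queue when the queue itself expires or is deleted are dropped, not dead-lettered.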
Now for the "does it make sense" part. For messages from different sources, I think it's clear: process as much as you can in parallel and that's it. There are usually no collisions there.
How can I achieve the consumer fault-tolerance I am looking for while retaining the ordering by a specific routing key?
For sequential processing, you basically need exactly one consumer per source. To keep that consumer healthy, maybe add a watchdog that starts it again if it crashes, restarts it if it hangs, and so on. It might also make sense to use the (AMQP) get method instead of consume. I can't really recommend or advise against this approach, because (for me at least) it's quite use-case specific (performance, how often a new message arrives, etc.), but I would say it makes a "more synchronous" behaviour easier to achieve.
And for sure (now referring to what you wrote in the note) you should try to avoid dead-lettering messages (use a higher TTL, etc.) if you really want to keep the original order of the sequence (said redundantly on purpose :) )

How to solve message disorder in RabbitMq? [duplicate]

I need to choose a new Queue broker for my new project.
This time I need a scalable queue that supports pub/sub, and keeping message ordering is a must.
I read Alexis' comment, where he writes:
"Indeed, we think RabbitMQ provides stronger ordering than Kafka"
I read the message ordering section in the RabbitMQ docs:
"Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages... With release 2.7.0 and later it is still possible for individual consumers to observe messages out of order if the queue has multiple subscribers. This is due to the actions of other subscribers who may requeue messages. From the perspective of the queue the messages are always held in the publication order."
If I need to handle messages in order, can I only use RabbitMQ with an exclusive queue per consumer?
Is RabbitMQ still considered a good solution for ordered message queuing?
Well, let's take a closer look at the scenario you are describing above. I think it's important to paste the documentation immediately prior to the snippet in your question to provide context:
Section 4.7 of the AMQP 0-9-1 core specification explains the conditions under which ordering is guaranteed: messages published in one channel, passing through one exchange and one queue and one outgoing channel will be received in the same order that they were sent. RabbitMQ offers stronger guarantees since release 2.7.0. Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages. Any of these scenarios caused messages to be requeued at the back of the queue for RabbitMQ releases earlier than 2.7.0. From RabbitMQ release 2.7.0, messages are always held in the queue in publication order, even in the presence of requeueing or channel closure. (emphasis added)
So, it is clear that RabbitMQ, from 2.7.0 onward, is making a rather drastic improvement over the original AMQP specification with regard to message ordering.
With multiple (parallel) consumers, order of processing cannot be guaranteed.
The third paragraph (pasted in the question) goes on to give a disclaimer, which I will paraphrase: "if you have multiple processors in the queue, there is no longer a guarantee that messages will be processed in order." All they are saying here is that RabbitMQ cannot defy the laws of mathematics.
Consider a line of customers at a bank. This particular bank prides itself on helping customers in the order they came into the bank. Customers line up in a queue, and are served by the next of 3 available tellers.
This morning, it so happened that all three tellers became available at the same time, and the next 3 customers approached. Suddenly, the first of the three tellers became violently ill, and could not finish serving the first customer in the line. By the time this happened, teller 2 had finished with customer 2 and teller 3 had already begun to serve customer 3.
Now, one of two things can happen. (1) The first customer in line can go back to the head of the line or (2) the first customer can pre-empt the third customer, causing that teller to stop working on the third customer and start working on the first. This type of pre-emption logic is not supported by RabbitMQ, nor any other message broker that I'm aware of. In either case, the first customer actually does not end up getting helped first - the second customer does, being lucky enough to get a good, fast teller off the bat. The only way to guarantee customers are helped in order is to have one teller helping customers one at a time, which will cause major customer service issues for the bank.
It is not possible to ensure that messages get handled in order in every possible case, given that you have multiple consumers. It doesn't matter if you have multiple queues, multiple exclusive consumers, different brokers, etc. - there is no way to guarantee a priori that messages are answered in order with multiple consumers. But RabbitMQ will make a best-effort.
Message ordering is preserved in Kafka, but only within partitions rather than globally. If your data need both global ordering and partitions, this does make things difficult. However, if you just need to make sure that all of the same events for the same user, etc... end up in the same partition so that they are properly ordered, you may do so. The producer is in charge of the partition that they write to, so if you are able to logically partition your data this may be preferable.
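As a brief, hypothetical illustration with the Kafka Java client (topic and key names invented), keying records by user ID sends them to the same partition, which preserves their relative order:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // same key -> same partition -> ordered relative to each other
            producer.send(new ProducerRecord<>("events", "user-42", "logged-in"));
            producer.send(new ProducerRecord<>("events", "user-42", "added-to-cart"));
        }
    }
}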
I think there are two different things being conflated in this question: consumption order and processing order.
Message queues can, to a degree, guarantee that messages are consumed in order; they cannot, however, give you any guarantees about the order of their processing.
The main difference here is that there are some aspects of message processing which cannot be determined at consumption time, for example:
As mentioned, a consumer can fail while processing. Here the message was consumed in the correct order, but the consumer failed to process it, which sends it back to the queue. At this point the consumption order is intact, but the processing order is not.
If by "processing" we mean that the message is discarded and completely finished with, then consider the case where your processing time is not uniform, in other words processing one message takes longer than another. For example, if message 3 takes longer than usual to process, messages 4 and 5 might be consumed and finish processing before message 3 does.
So even if you managed to get the message back to the front of the queue (which, by the way, violates the consumption order), you still could not guarantee they will also be processed in order.
If you want to process the messages in order:
Have only 1 consumer instance at all times, or a main consumer and several stand-by consumers.
Or don't use a message queue at all and do the processing in a synchronous, blocking manner. That might sound bad, but for many use cases and business requirements it is completely valid, and sometimes even mission critical.
There are proper ways to guarantee the order of messages within RabbitMQ subscriptions.
If you use multiple consumers, they will process messages using a shared ExecutorService. See also ConnectionFactory.setSharedExecutor(...). You could set an Executors.newSingleThreadExecutor().
If you use one Consumer with a single queue, you can bind this queue using multiple bindingKeys (they may have wildcards). The messages will be placed into the queue in the same order that they were received by the message broker.
For example, suppose you have a single publisher that publishes messages where the order is important:
try (Connection connection2 = factory.newConnection();
     Channel channel2 = connection2.createChannel()) {
    // publish messages alternating between the two routing keys
    for (int i = 0; i < messageCount; i++) {
        final String routingKey = i % 2 == 0 ? routingEven : routingOdd;
        channel2.basicPublish(exchange, routingKey, null, ("Hello" + i).getBytes(UTF_8));
    }
}
You now might want to receive messages from both topics in a queue in the same order that they were published:
// declare a queue for the consumer
final String queueName = channel.queueDeclare().getQueue();
// we bind to queue with the two different routingKeys
final String routingEven = "even";
final String routingOdd = "odd";
channel.queueBind(queueName, exchange, routingEven);
channel.queueBind(queueName, exchange, routingOdd);
channel.basicConsume(queueName, true, new DefaultConsumer(channel) { ... });
The Consumer will now receive the messages in the order that they were published, regardless of the fact that you used different topics.
There are some good 5-Minute Tutorials in the RabbitMQ documentation that might be helpful:
https://www.rabbitmq.com/tutorials/tutorial-five-java.html

Redis Pub/Sub with Reliability

I've been looking at using Redis Pub/Sub as a replacement for RabbitMQ.
From my understanding, Redis's Pub/Sub holds a persistent connection to each of the subscribers; if the connection is terminated, all future messages will be lost and dropped on the floor.
One possible solution is to use a list (and blocking wait) to store all the message and pub/sub as just a notification mechanism. I think this gets me most of the way there, but I still have some concerns about the failure cases.
What happens when a subscriber dies and comes back online? How should it process all its pending messages?
When a malformed message comes through the system, how do you handle those exceptions? A dead-letter queue?
Is there a standard practice for implementing a retry policy?
When a subscriber (consumer) dies, your list will continue to grow until the client returns. Your producer could trim the list (from either side) once it reaches a specific limit, but that is something you would need to handle at the application level. If you include a timestamp within each message, your consumer can then act on the age of a message, assuming you have application logic you want to enforce on message age.
I'm not sure how a malformed message would enter the system, as the connection to Redis is usually TCP, with its integrity assurances. But if this happens, perhaps due to a bug in message encoding at the producer layer, you could provide a general mechanism for handling errors by keeping a queue per producer that receives the consumers' exception messages.
Retry policies will depend greatly on your application's needs. If you need 100% assurance that a message has been received and processed, then you should consider using Redis transactions (MULTI/EXEC) to wrap the work done by a consumer, so you can ensure that a client doesn't remove a message unless it has completed its work. If you need explicit acknowledgement, then you could use an explicit ACK message on a queue dedicated to the producer process(es).
Without knowing more about your application needs, it's hard to know how to choose wisely. Generally, if your messages require full ACID protection, then you probably also need to use redis transactions. If your messages are only meaningful when they are timely, then transactions may not be needed. It sounds as though you can't tolerate dropped messages, so your approach of using a list is good. If you need to implement a priority queue for your messages, you can use the sorted set (the Z-commands) to store your messages, using their priority as the score value, along with a polling consumer.
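A minimal sketch of that list-plus-notification idea with the Jedis client (key and channel names invented): the list is the source of truth and Pub/Sub is only a wake-up signal, so a consumer that was offline simply drains the list when it reconnects:

import java.util.List;
import redis.clients.jedis.JedisPooled;

public class ListQueue {
    public static void main(String[] args) {
        JedisPooled jedis = new JedisPooled("localhost", 6379);

        // producer: enqueue first, then notify
        jedis.lpush("jobs", "{\"id\":1}");
        jedis.publish("jobs:notify", "new");

        // consumer: blocking pop with a timeout; anything that arrived while
        // the consumer was down is still in the list and gets drained first
        List<String> popped = jedis.brpop(5, "jobs"); // returns [key, value], or null on timeout
        if (popped != null) {
            String payload = popped.get(1);
            // ... process payload ...
            System.out.println("processing " + payload);
        }
    }
}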
If you want a pub/sub system where subscribers won't lose messages when they die, consider using Redis Streams instead of Redis Pub/sub.
Redis Streams have their own architecture, with their own pros and cons compared with Redis Pub/Sub. With Redis Streams, a subscriber can effectively issue the command:
the last message I received was X, now give me the next message;
if there is no new message, then wait for one to arrive.
Antirez has written a good introductory article on Redis Streams with more info.
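A rough sketch of that consume loop with Jedis (stream name hypothetical; exact method signatures vary across Jedis versions): the client remembers the last ID it saw and asks for everything after it, blocking when the stream is caught up:

import java.util.List;
import java.util.Map;
import redis.clients.jedis.JedisPooled;
import redis.clients.jedis.StreamEntryID;
import redis.clients.jedis.params.XReadParams;
import redis.clients.jedis.resps.StreamEntry;

public class StreamsConsumer {
    public static void main(String[] args) {
        JedisPooled jedis = new JedisPooled("localhost", 6379);
        StreamEntryID lastSeen = new StreamEntryID(); // 0-0: replay from the beginning;
                                                      // a real consumer would persist this
        while (true) {
            // "the last message I received was lastSeen, give me what follows;
            //  if there is nothing new, block until something arrives"
            List<Map.Entry<String, List<StreamEntry>>> batch = jedis.xread(
                    XReadParams.xReadParams().block(0), // block(0) waits indefinitely
                    Map.of("mystream", lastSeen));
            for (StreamEntry entry : batch.get(0).getValue()) {
                // ... process entry.getFields() ...
                lastSeen = entry.getID(); // remember progress to survive restarts
            }
        }
    }
}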
What I did was use a sorted set, with the timestamp as the score and the key of the data as the member value. I use the score of the last item to retrieve the next few, then fetch the data by key. Once the work is done, I wrap both the ZREM and the DEL in a MULTI/EXEC transaction.
Essentially what Edward said, but with the twist of storing the keys in the sorted set, as my messages can be pretty big.
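A rough sketch of that pattern with Jedis (key names invented): the sorted set is only an index of keys ordered by timestamp, and the ZREM/DEL pair runs inside MULTI/EXEC so a message is removed either completely or not at all:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class SortedSetQueue {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // producer side: store the (possibly large) payload under its own
            // key and index that key by timestamp in a sorted set
            long now = System.currentTimeMillis();
            jedis.set("msg:1001", "big payload here");
            jedis.zadd("msg-index", now, "msg:1001");

            // consumer side: take the next few keys after the last score processed
            double lastScore = 0; // a real consumer would persist this between runs
            for (String key : jedis.zrangeByScore("msg-index", "(" + lastScore, "+inf", 0, 10)) {
                String payload = jedis.get(key);
                // ... process payload ...
                Transaction tx = jedis.multi(); // remove index entry and data together
                tx.zrem("msg-index", key);
                tx.del(key);
                tx.exec();
            }
        }
    }
}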
Hope this helps!