Here's the scenario:
Consumer (C) is listening for messages on Queue (Q) and Publisher (P) publishes messages to Q. While C is waiting for messages to be put on Q, Q gets deleted, then P publishes a message, thus Q is recreated with a new message. The issue is that C now doesn't get this message, even though the Q it was listening on has been recreated.
Is there a way to get the Consumer to "reconnect" with the "new" Queue after it's been deleted and recreated? I noticed too that when Q gets deleted, C still listens as if nothing's happened.
Yes, your consumer "C" can catch connection-related failures. You can then configure "C" to attempt to reconnect at a regular interval, every X seconds, after such an event has been caught. Once reconnected, "C" will resume consuming messages from "Q".
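For example, with the Java client something along these lines should work - a minimal sketch, where the queue name, durability flags, broker host and retry interval are assumptions, not taken from your setup:

import com.rabbitmq.client.*;
import java.util.concurrent.CountDownLatch;

public class ReconnectingConsumer {
    private static final String QUEUE = "Q";       // assumed queue name
    private static final int RETRY_SECONDS = 5;    // the "every X seconds" retry interval

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");               // assumed broker host

        while (true) {
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                // Re-declare Q so consuming works even after it was deleted and re-created.
                channel.queueDeclare(QUEUE, true, false, false, null);

                CountDownLatch interrupted = new CountDownLatch(1);
                channel.basicConsume(QUEUE, true,
                        (tag, delivery) -> System.out.println("Received: " + new String(delivery.getBody())),
                        tag -> interrupted.countDown());   // broker cancelled the consumer, e.g. Q was deleted

                interrupted.await();                       // block until the subscription is cancelled
            } catch (Exception e) {
                System.err.println("Consumer lost connection: " + e.getMessage());
            }
            Thread.sleep(RETRY_SECONDS * 1000L);           // wait, then reconnect and re-subscribe
        }
    }
}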
Related
I am using the Java client from https://www.rabbitmq.com/tutorials/tutorial-six-java.html. My setup is RPC. My server creates a queue, and the client declares the same queue and sends a message. After receiving the message, the server performs some operation and sends the result back to the client.
Now, if the server has created the queue and is connected to it, and the queue gets deleted for some reason, the server does not throw any exception. And when the client re-creates the same queue and puts messages into it, the server does not get those messages either, as it is no longer connected.
How does the server know that the queue got deleted?
Thanks so much
It sounds like the following situation is happening:
Queue A is created.
Consumer 1 subscribes to Queue A
Queue A is deleted while Consumer 1 is still active
Queue A is re-created (call it A')
Now you're wondering why Consumer 1 is not getting any messages. You have to re-subscribe your consumer to the new queue. I don't usually delete queues, because there is rarely a need to do so under any reasonable scenario (instead, use the queue.expires property to handle auto-deletion of queues).
According to the AMQP 0-9-1 Specification,
When a queue is deleted any pending messages are sent to a dead-letter
queue if this is defined in the server configuration, and all
consumers on the queue are cancelled.
So, based on the description of the behavior, this is a bug with the consumer. It should throw an exception or otherwise exit the consuming loop in this case. In any case, you'll have to re-subscribe to A' before you'll get any more messages.
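For illustration, here is a minimal sketch with the Java client - the queue name "A" and the durable declaration are assumptions. The broker delivers its basic.cancel to the client as handleCancel(), which is a natural hook for re-declaring the queue and re-subscribing:

import com.rabbitmq.client.*;
import java.io.IOException;

public class ResubscribingConsumer extends DefaultConsumer {
    private final String queue;

    public ResubscribingConsumer(Channel channel, String queue) {
        super(channel);
        this.queue = queue;
    }

    @Override
    public void handleDelivery(String tag, Envelope env, AMQP.BasicProperties props, byte[] body)
            throws IOException {
        System.out.println("Got: " + new String(body));
        getChannel().basicAck(env.getDeliveryTag(), false);
    }

    @Override
    public void handleCancel(String tag) throws IOException {
        // Fired when the broker cancels this consumer, e.g. because queue A was deleted.
        getChannel().queueDeclare(queue, true, false, false, null);   // re-create it (A')
        getChannel().basicConsume(queue, false, new ResubscribingConsumer(getChannel(), queue));
    }
}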
Assume several producers publish to the same exchange E (fanout). Each producer has its own channel. Queue Q is bound to exchange E. Producer P1 publishes message M1 to E and receives acknowledgement A1 from E. Only after acknowledgement A1 does the second producer P2 publish the second message M2. Does RabbitMQ guarantee the order of messages in Q: M1 first, M2 second? That is, will a consumer subscribed to Q always receive M1 and after that M2?
RabbitMQ guarantees the order of messages in a queue: First In, First Out. The first message to go into the queue will be the first message to come out of the queue, and messages remain in order (assuming you are just consuming and acking them; if you start nacking/rejecting messages, re-publishing them, etc., things change).
That is the only guarantee that it will make on the order of messages: FIFO Queues.
If you need to guarantee the order that messages are delivered to a queue, you have to build that process yourself.
FWIW, it's very difficult to build this guarantee. The only truly guaranteed way to ensure the order of messages is not to send the next one until after the first one has been processed.
Even if you wait for the publisher acknowledgement before sending the next one, it is possible for the next one to end up in the queue before the first one (though it is highly unlikely).
You may want to look into Message Sequence and Resequencer if you need to guarantee the client gets messages in a certain order.
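For example, with the Java client the "publish, then wait for the broker's confirm before publishing the next message" approach looks roughly like this - a sketch that assumes the fanout exchange E already exists and the broker runs with default connection settings:

import com.rabbitmq.client.*;

public class ConfirmedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();   // assumed defaults (localhost)
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.confirmSelect();                           // enable publisher confirms on this channel

            String[] messages = {"M1", "M2"};
            for (String m : messages) {
                channel.basicPublish("E", "", null, m.getBytes());
                // Block until the broker confirms this publish before sending the next one.
                channel.waitForConfirmsOrDie(5_000);
            }
        }
    }
}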
I have a scenario in my RabbitMQ setup that I'm curious about how to solve. The diagram below illustrates it (exchanges and most queues removed for succinctness):
Scenario
Producer creates message A(1), it is received by the top consumer, which begins processing the message.
Producer creates message A(2), it is received by the bottom consumer (assuming both consumers are on a round-robin exchange).
The bottom consumer publishes message B(2), which is put into Message B consumer's queue
The poor slow top consumer finally finishes and emits its message B(1).
Problem
If we assume that B consumer cannot be made idempotent, how do we ensure the result of both B messages are applied in the correct order?
I had thought of using a timestamp that is applied to the initial publish of message A, and having the consumer maintain a timestamp of last change, rejecting any timestamps before that time, but that only works if each message causes the exact same kind of change and requires a lot of tracking.
Other ideas for how to approach this would be appreciated. Thanks!
I am not sure what is specific to RabbitMQ here, but the idea with timestamps sounds like a good start if you have a single producer.
The producer attaches a timestamp to each message A, and each message B takes the same timestamp as its respective message A.
With your approach some messages would not be processed, e.g., message B(1). If all messages should be processed by consumer B, but in a deterministic order, then you can do a deterministic merge:
Consumer B is equipped with two queues, one queue for each consumer A. Consumer B always checks the top of both queues:
if both queues are non-empty, consumer B pops the message with the lowest timestamp.
if at least one queue is empty, the consumer B waits.
With this approach the order in which consumer B processes messages is given by the timestamps of the producer and no message is discarded; a sketch of the merge follows the assumptions below. Assumptions are:
queues are FIFO
no process crashes
always the case that eventually each consumer A processes a message
consumer B can check the top of the queues in a non-blocking fashion
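A rough sketch of that merge in Java, assuming consumer B buffers each of its two RabbitMQ queues into an in-memory deque and every message carries the producer's timestamp - the message type, field names and helper methods are placeholders, and the RabbitMQ delivery callbacks that fill the buffers are omitted:

import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;

// Hypothetical message type; only the producer-assigned timestamp matters for the merge.
record TimestampedMessage(long timestamp, String payload) {}

public class DeterministicMerge {
    private final Deque<TimestampedMessage> fromA1 = new ConcurrentLinkedDeque<>();
    private final Deque<TimestampedMessage> fromA2 = new ConcurrentLinkedDeque<>();

    // Called by the two RabbitMQ delivery callbacks (one per queue), not shown here.
    void onMessageFromA1(TimestampedMessage m) { fromA1.addLast(m); }
    void onMessageFromA2(TimestampedMessage m) { fromA2.addLast(m); }

    // Consumer B's loop calls this; a null result means "wait and try again later".
    TimestampedMessage nextToProcess() {
        TimestampedMessage head1 = fromA1.peekFirst();
        TimestampedMessage head2 = fromA2.peekFirst();
        if (head1 == null || head2 == null) {
            return null; // at least one buffer is empty: wait (each consumer A eventually delivers)
        }
        // Both buffers are non-empty: pop the message with the lower producer timestamp.
        return head1.timestamp() <= head2.timestamp() ? fromA1.pollFirst() : fromA2.pollFirst();
    }
}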
Here's what we have here (a declaration sketch follows the list):
Topic Exchange DLE, which is intended to be a Dead-Letter Exchange
Topic Exchange E, which is the "main" Exchange
Several Queues (EQ1, ..., EQn) bound to E (and initialized with x-dead-letter-exchange = DLE), each with own Routing Key. These queues are the ones being consumed from.
For each EQn, there's a DLEQn (initialized with x-dead-letter-exchange = E and x-message-ttl = 5000), bound to DLE with the same routing key as EQn. These queues are not being consumed from
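For concreteness, a minimal sketch of how one EQn/DLEQn pair is declared with the Java client - durability flags and the example routing key are placeholders, the rest follows the list above:

import com.rabbitmq.client.*;
import java.util.HashMap;
import java.util.Map;

public class DlxSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();          // assumed defaults
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            ch.exchangeDeclare("E", BuiltinExchangeType.TOPIC, true);
            ch.exchangeDeclare("DLE", BuiltinExchangeType.TOPIC, true);

            // EQ1: consumed from; nacked (requeue=false) messages go to DLE.
            Map<String, Object> eq1Args = new HashMap<>();
            eq1Args.put("x-dead-letter-exchange", "DLE");
            ch.queueDeclare("EQ1", true, false, false, eq1Args);
            ch.queueBind("EQ1", "E", "key.1");                        // "key.1" is a placeholder routing key

            // DLEQ1: not consumed from; after 5s the message is dead-lettered back to E.
            Map<String, Object> dleq1Args = new HashMap<>();
            dleq1Args.put("x-dead-letter-exchange", "E");
            dleq1Args.put("x-message-ttl", 5000);
            ch.queueDeclare("DLEQ1", true, false, false, dleq1Args);
            ch.queueBind("DLEQ1", "DLE", "key.1");                    // same routing key as EQ1
        }
    }
}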
What I want is the following: if a consumer cannot process a message from EQn, it Nacks the message with requeue: false and it gets to the DLEQn - that is, to an appropriate queue on the Dead-Letter Exchange. Now, I want this message to sit on the DLEQn for some time and then get routed back to the original queue EQn to be processed again.
Try as I might, I could not get the "redelivery to the original queue" working. I see that messages sit in the DLEQn with all the right headers and Routing Key intact, but after TTL expires they just vanish into thin air.
What am I doing wrong here?
Yes, you can do this. We are currently doing this in production and it works great. The code is too long to include here but I will show you the diagram I created that represents the process. The basic idea is that the First DLX has a TTL, once that TTL expires the message goes into a 2nd queue to be re-sent back into the original.
RabbitMQ detects message flow cycling (E -> DLE -> E -> DLE ...) and silently drops messages:
From DLX manual (Routing Dead-Lettered Messages section):
It is possible to form a cycle of dead-letter queues. For instance, this can happen when a queue dead-letters messages to the default exchange without specifying a dead-letter routing key. Messages in such cycles (i.e. messages that reach the same queue twice) will be dropped if the entire cycle is due to message expiry.
That post is pretty old, but it took me days to find a solution for a similar problem, so I thought I should share my solution here.
We're receiving messages in TargetQueue (no TTL!, bound to TargetExchange) which may be nacked by the consumer. TargetQueue has a DLX defined (RetryExchange), which in turn has a corresponding queue bound to it (RetryQueue, with a TTL of 60 seconds and TargetExchange defined as its DLX).
So if the consumer nacks a message from TargetQueue, it gets queued up in the RetryQueue, and because of the TTL the message gets dead-lettered again and requeued in the original TargetQueue. The key was that TargetQueue must not have a TTL defined, otherwise a message like this appears in the RabbitMQ log:
Dead-letter queues cycle detected: [<<"TargetQueue">>,<<"RetryQueue">>,<<"TargetQueue">>]
So in the end the solution is pretty straight forward (and only needs one consumer). I got the final inspiration from https://medium.com/#igkuz/ruby-retry-scheduled-tasks-with-dead-letter-exchange-in-rabbitmq-9e38aa39089b
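For reference, a minimal sketch of that TargetQueue/RetryQueue topology with the Java client - the names come from the description above, while the exchange types and durability flags are assumptions:

import com.rabbitmq.client.*;
import java.util.HashMap;
import java.util.Map;

public class RetryTopology {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();           // assumed defaults
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            ch.exchangeDeclare("TargetExchange", BuiltinExchangeType.FANOUT, true);
            ch.exchangeDeclare("RetryExchange", BuiltinExchangeType.FANOUT, true);

            // TargetQueue: no TTL; nacked messages are dead-lettered to RetryExchange.
            Map<String, Object> targetArgs = new HashMap<>();
            targetArgs.put("x-dead-letter-exchange", "RetryExchange");
            ch.queueDeclare("TargetQueue", true, false, false, targetArgs);
            ch.queueBind("TargetQueue", "TargetExchange", "");

            // RetryQueue: 60s TTL; expired messages are dead-lettered back to TargetExchange.
            Map<String, Object> retryArgs = new HashMap<>();
            retryArgs.put("x-message-ttl", 60_000);
            retryArgs.put("x-dead-letter-exchange", "TargetExchange");
            ch.queueDeclare("RetryQueue", true, false, false, retryArgs);
            ch.queueBind("RetryQueue", "RetryExchange", "");
        }
    }
}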
I want to read the payload, or messageId of unacknowledged messages in a RabbitMQ queue. Is this possible?
The reason I want to do so is that I am trying to use RabbitMQ's dead-letter feature to build a cycle for auto-generating messages periodically.
Briefly, create two queues - a work queue and a delay queue.
Set the TTL of the message in the delay queue to the desired period. Different messages can have different TTLs for different job purposes;
Put a message into the delay queue. When the message expires, it gets republished into the work queue. The message can sit in the work queue as long as needed until a consumer is available to consume it.
One consumer picks up the message and processes it. If processing succeeds, the consumer acknowledges the work queue and then writes the message back to the delay queue; if processing fails (e.g., the thread crashes), there is no acknowledgement, and the message re-appears in the work queue automatically, so another consumer can take up the job. When the message sent back to the delay queue expires again, it gets republished, then re-consumed by a consumer... a cycle is constructed and the workload distributed (a declaration sketch follows below).
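A minimal sketch of this setup with the Java client - DELAY_EXCHANGE matches the snippet further down, while the work exchange, queue names, durability flags and the 60-second TTL are placeholders:

import com.rabbitmq.client.*;
import java.util.HashMap;
import java.util.Map;

public class DelayCycleSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();      // assumed defaults
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            // Work exchange/queue: consumers read from here and, after acking, republish to the delay queue.
            ch.exchangeDeclare("WORK_EXCHANGE", BuiltinExchangeType.FANOUT, true);
            ch.queueDeclare("work.queue", true, false, false, null);
            ch.queueBind("work.queue", "WORK_EXCHANGE", "");

            // Delay exchange/queue: expired messages are dead-lettered to the work exchange.
            ch.exchangeDeclare("DELAY_EXCHANGE", BuiltinExchangeType.FANOUT, true);
            Map<String, Object> delayArgs = new HashMap<>();
            delayArgs.put("x-dead-letter-exchange", "WORK_EXCHANGE");
            ch.queueDeclare("delay.queue", true, false, false, delayArgs);
            ch.queueBind("delay.queue", "DELAY_EXCHANGE", "");

            // Per-message TTL sets the period; different messages can carry different TTLs.
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .expiration("60000")                          // 60-second period, as an example
                    .build();
            ch.basicPublish("DELAY_EXCHANGE", "", props, "job-1".getBytes());
        }
    }
}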
I want to make sure there are no missing or duplicate messages in the cycle, since I do not want to miss a job or do the same job twice at the same time. However, there is a tiny chance that duplicate messages can happen. The code below shows the consumer first writing the message back to the delay queue and then acknowledging the work queue. If the thread crashes right between these two lines, the message would already be in the delay queue, and Rabbit would republish the original message into the work queue again. We end up with duplicate messages in the cycle.
channel.basicPublish(DELAY_EXCHANGE, "", null, message.getBytes()); // 1. write the message back to the delay queue
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);   // 2. ack the delivery taken from the work queue
To guard against this, I want to add watchdog logic after the above two lines:
Check the total number of messages in the cycle (total messages in both queues) to see whether it equals my expected number (I expect the number to be less than 10);
If the number does not match, I want to figure out which message is missing or which is duplicated, then deal with it. I do not care about the sequence of those messages, or that the frequency has been disturbed, since this is a very rare edge case. I can easily retrieve the messages which are ready and requeue them. But the problem is how to deal with the unacknowledged messages?
Thank you very much in advance!
Roy
It's not possible to read unacknowledged messages from any context other than the one in which the original messages were consumed and are being held as unacked.
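That said, the counting part of the watchdog is possible for ready messages: a passive queue declare reports how many ready (not unacknowledged) messages a queue currently holds. A minimal sketch, reusing the placeholder queue names from above:

import com.rabbitmq.client.*;

public class CycleCount {
    // Returns the number of *ready* messages across both queues; unacknowledged
    // messages are not included and cannot be inspected from another context.
    static long readyMessageCount(Channel ch) throws Exception {
        AMQP.Queue.DeclareOk work = ch.queueDeclarePassive("work.queue");    // placeholder names
        AMQP.Queue.DeclareOk delay = ch.queueDeclarePassive("delay.queue");
        return (long) work.getMessageCount() + delay.getMessageCount();
    }
}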