If I use publisher confirms, I can be (reasonably) sure that a message sent to an exchange on the RabbitMQ server, and acknowledged by the server, is not lost even if the RabbitMQ server crashes (a power outage, for example).
However, what happens when a message arrives at a dead-letter exchange after a manual rejection in the consumer (channel.basicReject; I use Spring AMQP)?
Can I still be sure that, if the original message is dequeued from the queue the consumer is listening to and the RabbitMQ server subsequently crashes, I will eventually find the message after the server restarts in the queues bound to the dead-letter exchange (assuming it would normally have been routed there)?
If the answer is negative, is there a way to ensure that this is the case?
As @GaryRussell suggested, I posted a similar question on the rabbitmq-users Google group.
Here is the answer I got from Daniil Fedotov:
"Hi,
There are no delivery guarantees in place. Dead-lettering does not check whether the message was enqueued or saved to disk.
Dead-lettering does not use publisher confirms or any other confirm mechanisms.
It's not that easy to implement reliable dead-lettering from one queue to another and there are plans to address this issue eventually, but it may take a while.
If you want to safely reject messages from the consumer without a risk of losing them - you can publish them from the consumer application manually to the dead-letter queue, wait for the confirmation and then reject."
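A minimal sketch of that suggestion, using the plain RabbitMQ Java client (the queue name and timeout are illustrative; the same pattern can be built on Spring AMQP's RabbitTemplate confirm support):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.MessageProperties;

    public class SafeRejectExample {

        // Illustrative name; replace with your actual dead-letter queue.
        private static final String DEAD_LETTER_QUEUE = "my.dead.letter.queue";

        // Republishes the failed message to the dead-letter queue via the
        // default exchange, waits for the broker to confirm it, and only then
        // rejects the original delivery. If the confirm times out, the original
        // message stays unacked and will be redelivered.
        static void rejectSafely(Channel channel, long deliveryTag, byte[] body) throws Exception {
            // The channel must be put in confirm mode once, e.g. at startup:
            // channel.confirmSelect();

            channel.basicPublish("", DEAD_LETTER_QUEUE,
                    MessageProperties.PERSISTENT_BASIC, body);

            // Block until the broker confirms the publish (5 s timeout here).
            channel.waitForConfirmsOrDie(5_000);

            // Now it is safe to drop the original message without requeueing it.
            channel.basicReject(deliveryTag, false);
        }
    }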
Related
I'm not sure how to resiliently handle RabbitMQ messages in the event of an intermittent outage.
I subscribe in a Windows service, read the message, then store it in my database. If I can't process the record because of the data, I publish it to a dead-letter queue for a human to address and reprocess.
I am not sure what to do if I have some intermittent technical issue that will fix itself (database reboot, network outage, drive space, etc.). I don't want hundreds of messages showing up in the dead-letter queue that just needed to wait out a glitch but now would be waiting on a human.
Currently, I re-queue the event and retry it once, but it retries so quickly that the issue is usually not resolved. I thought of retrying forever, but I don't want a real issue to get stuck in an infinite loop.
This is a broad topic, but on the server side you can persist your messages and make your queues durable. This means that if the server is restarted they won't be lost; see How to persist messages during RabbitMQ broker restart? for more details.
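A minimal sketch of that setup with the plain Java client (the host and queue name are illustrative):

    import java.nio.charset.StandardCharsets;

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;

    public class DurablePublishExample {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed local broker

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                // durable = true: the queue definition survives a broker restart.
                channel.queueDeclare("work.queue", true, false, false, null);

                // PERSISTENT_TEXT_PLAIN sets delivery mode 2, so the message
                // itself is written to disk and survives a restart as well.
                channel.basicPublish("", "work.queue",
                        MessageProperties.PERSISTENT_TEXT_PLAIN,
                        "hello".getBytes(StandardCharsets.UTF_8));
            }
        }
    }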
For the consumer (client) side it will depend on how you configure your client. From the docs:
In the event of network failure (or a node crashing), messages can be duplicated, and consumers must be prepared to handle them. If possible, the simplest way to handle this is to ensure that your consumers handle messages in an idempotent way rather than explicitly deal with deduplication.
If a message is delivered to a consumer and then requeued (because it was not acknowledged before the consumer connection dropped, for example) then RabbitMQ will set the redelivered flag on it when it is delivered again (whether to the same consumer or a different one). This is a hint that a consumer may have seen this message before (although that's not guaranteed, the message may have made it out of the broker but not into a consumer before the connection dropped). Conversely if the redelivered flag is not set then it is guaranteed that the message has not been seen before. Therefore if a consumer finds it more expensive to deduplicate messages or process them in an idempotent manner, it can do this only for messages with the redelivered flag set.
Check more here: https://www.rabbitmq.com/reliability.html#consumer
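A hedged sketch of the approach described in that quote, using the plain Java client (the queue name, the message-id based deduplication, and the helper methods are assumptions, not anything mandated by RabbitMQ):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.DeliverCallback;

    public class IdempotentConsumerExample {

        // Only runs the deduplication check when the broker marks a delivery
        // as redelivered, as the reliability guide quoted above suggests.
        static void consume(Channel channel) throws Exception {
            DeliverCallback callback = (consumerTag, delivery) -> {
                boolean redelivered = delivery.getEnvelope().isRedeliver();
                String messageId = delivery.getProperties().getMessageId();

                // Hypothetical helper: look the message id up in your own store.
                if (redelivered && alreadyProcessed(messageId)) {
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                    return; // duplicate, skip the work
                }

                process(delivery.getBody());   // hypothetical business logic
                markProcessed(messageId);      // hypothetical bookkeeping
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };

            channel.basicConsume("work.queue", false, callback, consumerTag -> { });
        }

        // Stubs standing in for application code.
        static boolean alreadyProcessed(String id) { return false; }
        static void process(byte[] body) { }
        static void markProcessed(String id) { }
    }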
I am using RabbitMQ as an MQ broker. Is it possible to get a notification that a certain message has been acknowledged by all queues? That is, if it was sent to 5 queues, we get a notification after the acknowledgment by the last (5th) consumer.
I know you can introduce reply-to queues, but that's not what I am looking for. I don't want to force the consumer to send an acknowledgment message to some queue after acknowledgment.
Is it also possible to continue this follow-up after a broker and/or publisher restart?
No, it is not possible as you state it.
You cannot, from the publisher side, know whether a message has been ACK'd at the consumer side, and in most patterns it's not really something you'd want anyway.
You can, however, use Publisher Confirms. These would inform the publisher that the message has been routed to all the bound queues.
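For illustration, a minimal confirm-based publish with the plain Java client (the exchange name and timeout are illustrative); note that the confirm tells you the message reached the bound queues, not that any consumer processed it:

    import java.nio.charset.StandardCharsets;

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.MessageProperties;

    public class PublisherConfirmExample {

        static void publishConfirmed(Channel channel, String payload) throws Exception {
            channel.confirmSelect(); // enable confirms on this channel (once per channel)

            channel.basicPublish("notifications.fanout", "",
                    MessageProperties.PERSISTENT_BASIC,
                    payload.getBytes(StandardCharsets.UTF_8));

            // Throws if the broker nacks the message or the 5 s timeout elapses.
            channel.waitForConfirmsOrDie(5_000);
        }
    }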
There are several mechanisms for data safety on both the publisher and consumer side. You would normally trust that the broker does not miss messages in between, the same way you trust that a database will hold the records over time.
If nevertheless your workflow requires that your publisher side is informed about the completion of a complex distributed task, and you really can't get away with fire and forget, then you will need to implement that response yourself, normally by means of an additional message.
I have the following problem.
My program sends messages directly to the queue (without an exchange). I need to monitor incoming messages and send them to another queue without removing them from the source queue.
I don't have access to the program's code, so I'm not able to publish the messages to an exchange first.
Is it possible to solve this problem using the management web interface of RabbitMQ?
I tried the shovel plugin, but it removes all messages from the source queue after they are acknowledged.
First, to clear up a few things:
"My program sends messages directly to the queue (without an exchange)"
This is not true; at the very least (and most likely in this case) the nameless default exchange is used, as the sketch at the end of this answer shows.
"removes all messages from the source queue after they are acknowledged"
This is by design and therefore perfectly fine.
You should never keep messages in the queue; a queue is meant to be consumed. As Derick Bailey says here:
RabbitMQ is not a database. RabbitMQ is a message broker and queueing system.
On the same link you will find your answer. I cannot give a concrete one, since you didn't describe your motivation, but whatever it is, keeping messages in the queue is never a good idea.
Maybe you want to log/store the message first and then process it, with the processing triggering some third action, or whatever the case may be.
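To illustrate the first point above: there is no true "publish to a queue" in AMQP. A minimal sketch with the plain Java client (the queue name is illustrative):

    import com.rabbitmq.client.Channel;

    public class DefaultExchangeExample {

        // When a program appears to send directly to a queue, the client is
        // really publishing to the default (nameless) exchange, and the broker
        // routes the message to the queue whose name matches the routing key.
        static void publishToQueue(Channel channel, byte[] body) throws Exception {
            // "" is the default exchange; the routing key "source.queue"
            // doubles as the target queue's name.
            channel.basicPublish("", "source.queue", null, body);
        }
    }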
I'm learning RabbitMQ/AMQP from the RabbitMQ documentation. Something is not clear to me, so I wanted to ask those who have hands-on experience.
I want to have multiple consumers listening to the same queue in order to balance the work load. What I need is pretty much close to the "Work Queues" example in the RabbitMQ tutorial.
I want the consumer to acknowledge the message explicitly after it finishes handling it, so that the message is preserved and delegated to another consumer if the first one crashes. Handling a message may take a while.
My question is whether AMQP postpones delivery of the next message until the previous message is acked. If so, how do I achieve load balancing between multiple workers while guaranteeing no messages get lost?
No, the other consumers don't get blocked. Other messages will get delivered even if they have unacknowledged but delivered predecessors. If a channel closes while holding unacknowledged messages, those messages get returned to the queue.
See RabbitMQ Broker Semantics
Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages.
EDIT In response to your comment:
Time to dive a little deeper into the AMQP specification then perhaps:
3.1.4 Message Queues
A message queue is a named FIFO buffer that holds messages on behalf of a set of consumer applications. Applications can freely create, share, use, and destroy message queues, within the limits of their authority. Note that in the presence of multiple readers from a queue, or client transactions, or use of priority fields, or use of message selectors, or implementation-specific delivery optimisations the queue MAY NOT exhibit true FIFO characteristics. The only way to guarantee FIFO is to have just one consumer connected to a queue. The queue may be described as “weak-FIFO” in these cases. [...]
3.1.8 Acknowledgements
An acknowledgement is a formal signal from the client application to a message queue that it has successfully processed a message. [...]
So acknowledgement confirms processing, not receipt. The broker will hold on to a message until it has been acknowledged, so that it can redeliver it if necessary. But it is free to deliver more messages to consumers even if the preceding messages have not yet been acknowledged. The consumers will not be blocked.
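A minimal sketch of such a worker with the plain Java client (queue name and prefetch count are illustrative): basicQos limits how many unacked messages each consumer holds, which is how the load gets spread across workers.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.DeliverCallback;

    public class WorkerExample {

        static void startWorker(Channel channel) throws Exception {
            channel.basicQos(1); // at most one unacked message per worker

            DeliverCallback callback = (consumerTag, delivery) -> {
                try {
                    doWork(delivery.getBody()); // hypothetical, possibly slow, handler
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                } catch (Exception e) {
                    // Requeue so another worker can pick the message up.
                    channel.basicNack(delivery.getEnvelope().getDeliveryTag(), false, true);
                }
            };

            // autoAck = false: the message stays unacked until we confirm it.
            channel.basicConsume("work.queue", false, callback, consumerTag -> { });
        }

        static void doWork(byte[] body) { }
    }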
Does anyone know if the pop operation on a RabbitMQ queue is atomic?
I have several processes reading from the same queue (the queue is marked as durable, running on version 2.0.0) and I am seeing some quite odd behaviour.
If your multiple processes are consuming messages from the same queue then they should never consume the same message.
Here are the caveats, though:
If a message has been delivered by the broker to one of your consumers and that consumer rejects the message (or terminates before getting a chance to acknowledge it), then the broker will put it back on the same queue and it will be delivered to one of your remaining active consumers.
If your consumers are pulling from distinct queues -- each with a matching binding -- then the broker will put copies of the message on each queue and each consumer will get a copy of the same message.
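A hedged sketch of the second caveat with the plain Java client (exchange and queue names are illustrative): when several queues each have a matching binding to the same exchange, every queue gets its own copy, whereas consumers sharing one queue compete and never see the same delivery.

    import com.rabbitmq.client.Channel;

    public class QueueCopyExample {

        static void declareTopology(Channel channel) throws Exception {
            channel.exchangeDeclare("events", "fanout", true); // durable fanout exchange

            channel.queueDeclare("billing.queue", true, false, false, null);
            channel.queueDeclare("audit.queue", true, false, false, null);

            // Both bindings match every message published to "events",
            // so both queues receive a copy.
            channel.queueBind("billing.queue", "events", "");
            channel.queueBind("audit.queue", "events", "");
        }
    }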