RabbitMQ dead letter queue housekeeping

I have a RabbitMQ instance that has an exchange, a regular queue and a dead letter queue. Rejected messages are moved from the regular queue to the dead letter queue.
These rejected messages are not important to me because any missed data is supplied again the next day.
Currently I regularly purge the messages in the dead letter queue, but I want to automate it.
How do I do that?
All the tutorials that I've found so far explain how to expire messages using policies or tags, by which they are moved from the regular queue to the dead letter queue. But none of these tutorials talk about the situation where you want to expire messages that are already in the dead letter queue.
I just want to get rid of those messages, not save them to reprocess later.
How do I do that?

You should set a message TTL for your dead-letter queue -
https://www.rabbitmq.com/ttl.html
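For example, assuming the dead letter queue is called "my-dlq" (a made-up name) and you use the plain Java client, a minimal sketch of declaring it with a per-queue message TTL of 24 hours could look like this; note that if the queue already exists with different arguments, re-declaring it will fail, in which case applying a message-ttl policy as described on the linked page is the safer route:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.util.HashMap;
    import java.util.Map;

    public class DeclareDlqWithTtl {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                // Messages sitting in the dead letter queue are dropped after 24 hours,
                // because this queue has no dead letter exchange of its own.
                Map<String, Object> queueArgs = new HashMap<>();
                queueArgs.put("x-message-ttl", 24 * 60 * 60 * 1000); // TTL in milliseconds

                // "my-dlq" is a made-up name; durable, non-exclusive, not auto-deleted.
                channel.queueDeclare("my-dlq", true, false, false, queueArgs);
            }
        }
    }

Messages that sit in the dead letter queue longer than the TTL are discarded by the broker, which automates the purge.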
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Related

How to shift messages from one queue to another queue?

I have a RabbitMQ queue.
How can I move all messages from this queue to another queue after a specific time (5 minutes)?
There are a couple of ways to do this. I recommend setting a time to live on your messages and configuring the queue with a dead letter exchange and dead letter routing key. Documentation for dead lettering can be found here: https://www.rabbitmq.com/dlx.html
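A minimal sketch of that setup with the Java client might look like the following; the exchange and queue names ("delayed.dlx", "target-queue", "source-queue") and the routing key "target" are all made up, and the 5-minute window comes straight from the question:

    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.util.HashMap;
    import java.util.Map;

    public class ShiftAfterFiveMinutes {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                // Target exchange and queue that expired messages should end up in.
                channel.exchangeDeclare("delayed.dlx", BuiltinExchangeType.DIRECT, true);
                channel.queueDeclare("target-queue", true, false, false, null);
                channel.queueBind("target-queue", "delayed.dlx", "target");

                // Source queue: messages not consumed within 5 minutes are dead-lettered
                // to "delayed.dlx" with routing key "target", i.e. into "target-queue".
                Map<String, Object> sourceArgs = new HashMap<>();
                sourceArgs.put("x-message-ttl", 5 * 60 * 1000);
                sourceArgs.put("x-dead-letter-exchange", "delayed.dlx");
                sourceArgs.put("x-dead-letter-routing-key", "target");
                channel.queueDeclare("source-queue", true, false, false, sourceArgs);
            }
        }
    }

Any message that is not consumed from "source-queue" within 5 minutes is dead-lettered to "delayed.dlx" with routing key "target" and therefore lands in "target-queue".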

RabbitMQ - How to Dead-letter / Process Messages in Expired Queues?

I have a queue that has x-expires set. The issue I am having is that I need to do further processing on the messages in the queue IF the queue expires. My initial idea was to set x-dead-letter-exchange on the queue, but when the queue expires, the messages just vanish without making it to the dead-letter exchange.
How can I dead-letter, or otherwise process, messages that are in a queue that expires?
As suggested in the comments, you cannot do this by relying only on the x-expires feature. But a solution that worked in a similar case I had was to:
Use x-message-ttl to make sure messages die if not consumed in a timely manner,
Assign a dead letter exchange to the queue where all those messages will be routed,
Use x-expires to set the queue expiration to a value higher than the TTL of the messages,
(and this is the tricky part) Assuming you have control over your consumers, before the last consumer goes offline, delete the binding to your "dying" queue, potentially through a REST API call - this will prevent new messages from being routed to the queue.
This way, messages published before the last consumer went offline have already been processed, the remaining messages will be dead-lettered before the queue expires, and no new messages can enter the queue (a sketch of this setup follows below).
You need to add a new dead letter queue that is bound to your dead letter exchange with the binding routing key set as the original queue name. In this way all expired messages sent to the dead letter exchange are routed to the dead letter queue.
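Putting the two answers together, a rough sketch with the Java client could look like this. All exchange and queue names are made up, and it assumes the work queue is bound to its inbound exchange with the queue name as the binding key, so dead-lettered messages keep a routing key that matches the dead letter queue's binding:

    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.util.HashMap;
    import java.util.Map;

    public class ExpiringQueueWithDlx {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                // Dead letter exchange and dead letter queue. The dead letter queue is
                // bound with the original queue's name as the routing key, because
                // dead-lettered messages keep their original routing key by default.
                channel.exchangeDeclare("my.dlx", BuiltinExchangeType.DIRECT, true);
                channel.queueDeclare("my-dead-letter-queue", true, false, false, null);
                channel.queueBind("my-dead-letter-queue", "my.dlx", "my-work-queue");

                // Inbound exchange and the "dying" work queue: the per-message TTL is
                // lower than the queue's x-expires, so messages are dead-lettered
                // before the queue itself goes away.
                channel.exchangeDeclare("my.inbound", BuiltinExchangeType.DIRECT, true);
                Map<String, Object> queueArgs = new HashMap<>();
                queueArgs.put("x-message-ttl", 60_000);   // messages die after 1 minute
                queueArgs.put("x-expires", 300_000);      // idle queue expires after 5 minutes
                queueArgs.put("x-dead-letter-exchange", "my.dlx");
                channel.queueDeclare("my-work-queue", true, false, false, queueArgs);
                channel.queueBind("my-work-queue", "my.inbound", "my-work-queue");

                // Before the last consumer goes offline, drop the binding so no new
                // messages can reach the dying queue (the HTTP API works here too).
                channel.queueUnbind("my-work-queue", "my.inbound", "my-work-queue");
            }
        }
    }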

RabbitMQ message arrival and response timestamp

Is there a way to get, from a consumer, the timestamp when a message was placed on the queue? Not when it was published, but when it actually made it to the queue.
First, a correction: consumers do not "place" messages on queues; publishers publish messages to exchanges, which then route messages to queues.
You can use the RabbitMQ message timestamp community plugin to add a timestamp when a message is published to RabbitMQ.
Please note that RabbitMQ does not guarantee that messages are actually routed to any queues. It's up to you to bind queues correctly to exchanges to ensure that your messages end up where you expect them.
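As a sketch of the consumer side, assuming the community timestamp plugin is enabled and fills in the standard AMQP timestamp property (check the plugin's README for the exact fields it sets; "my-queue" is a made-up queue name):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    import java.util.Date;

    public class TimestampConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host

            // The connection is left open so the consumer keeps receiving deliveries.
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            DeliverCallback deliverCallback = (consumerTag, delivery) -> {
                // With the timestamp plugin enabled, the broker stamps the message
                // as it enters RabbitMQ; the consumer just reads the property.
                Date stampedAt = delivery.getProperties().getTimestamp();
                System.out.println("Message stamped at: " + stampedAt);
            };

            // "my-queue" is a made-up queue name; auto-ack for brevity.
            channel.basicConsume("my-queue", true, deliverCallback, consumerTag -> {});
        }
    }

If the property comes back null, the plugin is most likely not enabled on the broker (and the publisher did not set a timestamp itself).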

RabbitMQ dead letter handling guarantees

If I use publisher confirms, I can be (reasonably) sure that a message sent to an exchange on the RabbitMQ server, and acknowledged (ACKed) by the broker, is not lost even if the RabbitMQ server crashes (a power outage, for example).
However what happens when a message arrives at a dead letter exchange after a manual rejection in the consumer? (channel.basicReject, I use Spring AMQP.)
Can I still be sure that, if the original message is dequeued from the queue the consumer is listening on and the RabbitMQ server subsequently crashes, I will eventually find the message (after the RabbitMQ server restarts) in the queues bound to the dead letter exchange, assuming the message would normally have been routed there?
If the answer is negative, is there a way to ensure that this is the case?
As @GaryRussell suggested, I posted a similar question on the rabbitmq-users Google group.
Here is the answer I got from Daniil Fedotov:
"Hi,
There are no delivery guarantees in place. Dead-lettering does not check whether the message was enqueued or saved to disk.
Dead-lettering does not use publisher confirms or any other confirm mechanisms.
It's not that easy to implement reliable dead-lettering from one queue to another and there are plans to address this issue eventually, but it may take a while.
If you want to safely reject messages from the consumer without a risk of losing them - you can publish them from the consumer application manually to the dead-letter queue, wait for the confirmation and then reject."
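A rough sketch of that workaround with the plain Java client (the same idea carries over to Spring AMQP) might look like this; "my-work-queue" and "my-dlq" are made-up names, and the re-publish goes through the default exchange with the queue name as routing key:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;
    import com.rabbitmq.client.MessageProperties;

    public class SafeRejectConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host

            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            channel.confirmSelect(); // enable publisher confirms on this channel

            DeliverCallback deliverCallback = (consumerTag, delivery) -> {
                try {
                    // ... attempt to process the message ...
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                } catch (Exception processingFailure) {
                    try {
                        // Re-publish the message to the dead letter queue ourselves
                        // (default exchange, queue name as routing key) and wait for
                        // the broker to confirm it before rejecting the original.
                        channel.basicPublish("", "my-dlq",
                                MessageProperties.PERSISTENT_BASIC, delivery.getBody());
                        channel.waitForConfirmsOrDie(5_000);
                        channel.basicReject(delivery.getEnvelope().getDeliveryTag(), false);
                    } catch (Exception republishFailure) {
                        // No confirm: requeue the original so it is not lost.
                        channel.basicReject(delivery.getEnvelope().getDeliveryTag(), true);
                    }
                }
            };

            channel.basicConsume("my-work-queue", false, deliverCallback, consumerTag -> {});
        }
    }

The original delivery is only rejected after the broker has confirmed the re-published copy, so a crash in between leaves the message unacked in the work queue rather than losing it.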

Does RabbitMQ support pushing the same data to multiple consumers?

I have a RabbitMQ cluster used as a work queue. There are 5 kinds of consumers that all want to consume exactly the same data.
What I know for now is to use a fanout exchange to "copy" the data to 5 DIFFERENT queues, so each of the 5 consumers can consume from a different queue. This feels like a waste of resources, because the same data sits in five queues.
My question is: does RabbitMQ support pushing the same data to multiple consumers from a single queue? For example, a message would need to be acked a specified number of times before it is deleted.
I got the following answer from the RabbitMQ email group. In short, the answer is no... and what I did above is the correct way.
http://rabbitmq.1065348.n5.nabble.com/Does-rabbitmq-support-to-push-the-same-data-to-multi-consumers-td36169.html#a36170
... fanout exchange to "copy" the data to 5 DIFFERENT queues, so each of the 5 consumers can consume from a different queue. This feels like a waste of resources, because the same data sits in five queues.
You can consume with 5 consumers from one queue if you do not want to duplicate messages.
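For completeness, a minimal sketch of that alternative with the Java client: five consumers attached to a single, shared queue ("shared-queue" is a made-up name), where each message is delivered to exactly one of them:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    import java.nio.charset.StandardCharsets;

    public class CompetingConsumers {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host
            Connection connection = factory.newConnection(); // left open so the consumers keep running

            // Five consumers sharing ONE queue: each message is delivered to exactly
            // one of them (work-queue pattern), so nothing is duplicated.
            for (int i = 0; i < 5; i++) {
                Channel channel = connection.createChannel();
                channel.basicQos(1); // fair dispatch: at most one unacked message per consumer
                String name = "worker-" + i;
                DeliverCallback callback = (consumerTag, delivery) -> {
                    System.out.println(name + " got: "
                            + new String(delivery.getBody(), StandardCharsets.UTF_8));
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                };
                channel.basicConsume("shared-queue", false, callback, consumerTag -> {});
            }
        }
    }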
does RabbitMQ support pushing the same data to multiple consumers
In AMQP protocol terms, you publish a message to an exchange and the broker (RabbitMQ) decides what to do with it: it works out which queue (or queues) the message is intended for and puts the message at the end of each of those queues (queues in RabbitMQ are classic FIFO queues, which somewhat deviates from the AMQP specification). Only after that may the message be delivered to a consumer (or die due to a queue length limit, or a per-queue or per-message TTL, if any).
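As an illustration of that routing step, a small sketch of the fanout setup from the question (the exchange name "broadcast" and the queue names are made up): the broker routes a copy of each published message into every queue bound to the fanout exchange, and each copy then waits in its own FIFO queue for its consumer:

    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.nio.charset.StandardCharsets;

    public class FanoutCopies {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                // One fanout exchange, five queues: the broker puts a copy of every
                // published message into each bound queue; the routing key is ignored.
                channel.exchangeDeclare("broadcast", BuiltinExchangeType.FANOUT, true);
                for (int i = 1; i <= 5; i++) {
                    String queue = "consumer-type-" + i;
                    channel.queueDeclare(queue, true, false, false, null);
                    channel.queueBind(queue, "broadcast", "");
                }

                // A single publish ends up, independently, at the end of all five queues.
                channel.basicPublish("broadcast", "", null,
                        "same data for everyone".getBytes(StandardCharsets.UTF_8));
            }
        }
    }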
a message needs to be acked a specified number of times to be deleted
There is no way to change a message's body or attributes after it has been published (actually, the Dead Letter Exchanges extension and a few others may change the routing key, for example, and add, remove or change some headers, but those are very specific cases). So if you want to track the number of acks, you have to re-publish the consumed message with a changed body or header (depending on where you plan to store the ack counter, but a header fits this pretty nicely).
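A rough sketch of that re-publish approach with the Java client is below; the header name "x-ack-count", the queue name "counted-queue" and the threshold of 5 are all made up, and it only shows the counter mechanics (it does not by itself guarantee that five different consumers each see the message):

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    import java.util.HashMap;
    import java.util.Map;

    public class AckCounterRepublish {
        private static final int REQUIRED_ACKS = 5; // made-up threshold

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host
            Connection connection = factory.newConnection(); // left open so the consumer keeps running
            Channel channel = connection.createChannel();

            DeliverCallback callback = (consumerTag, delivery) -> {
                // Read the counter from a custom header ("x-ack-count" is a made-up name).
                Map<String, Object> headers = delivery.getProperties().getHeaders();
                long acks = 1;
                if (headers != null && headers.get("x-ack-count") instanceof Number) {
                    acks = ((Number) headers.get("x-ack-count")).longValue() + 1;
                }

                if (acks < REQUIRED_ACKS) {
                    // Not "deleted" yet: re-publish the same body with the incremented
                    // counter through the default exchange (routing key = queue name).
                    Map<String, Object> newHeaders = new HashMap<>();
                    if (headers != null) {
                        newHeaders.putAll(headers);
                    }
                    newHeaders.put("x-ack-count", acks);
                    AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                            .headers(newHeaders)
                            .build();
                    channel.basicPublish("", "counted-queue", props, delivery.getBody());
                }
                // Ack the delivery we just handled either way.
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };

            channel.basicConsume("counted-queue", false, callback, consumerTag -> {});
        }
    }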
Also note that there is a redelivered message attribute which denotes whether a message was already delivered once and then redelivered. This flag does not count the number of redeliveries, so its usefulness is quite limited.