RabbitMQ: multiple consumers consuming from a single queue with routing keys

I have a scenario with a fanout exchange, a large number of queues (>1K), and a consumer for each of these queues. I want a dead-letter exchange for all of these queues, and each queue will have a different dead-letter routing key. Now I want to bind a single queue to the dead-letter exchange and have all my consumers consume from that queue whenever a message in it matches their dead-letter routing key. Is that possible?
I can't afford to create a dead-letter queue for each of these consumers, as that would blow up the total number of queues. I just want each consumer to perform some action whenever a message is dead-lettered from the queue it's consuming from.
I looked into the documentation, and it seems that routing based on keys only happens at the queue level; if there are multiple consumers on a queue, they consume in a round-robin fashion. Is there a way for a consumer to selectively read messages from a queue, or for all consumers to consume every message from a single queue?
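For reference, the per-queue dead-letter configuration described above is declared roughly like this; a minimal pika sketch where the exchange and queue names (fanout.in, dlx, queue.42) are placeholders:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='fanout.in', exchange_type='fanout')
channel.exchange_declare(exchange='dlx', exchange_type='direct')

# Each of the >1K queues points at the same dead-letter exchange
# but carries its own dead-letter routing key
channel.queue_declare(
    queue='queue.42',
    arguments={
        'x-dead-letter-exchange': 'dlx',
        'x-dead-letter-routing-key': 'dead.queue.42',
    },
)
channel.queue_bind(exchange='fanout.in', queue='queue.42')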

Related

RabbitMQ: fanout exchange with no message loss?

RabbitMQ: version 3.11.2
I wish to configure a fanout exchange for which there will be two consumers and a single producer. Each of the two consumers can go offline for several minutes at a time; in the worst case scenario, one of the consumers could go offline for hours.
QUESTION: how should the fanout exchange and/or consumer queues be configured such that no message is ever lost, even if and when one of the consumers goes offline for several minutes or hours?
It is enough to bind queues to your exchange.
See here for more details: https://www.rabbitmq.com/tutorials/tutorial-three-python.html
result = channel.queue_declare(queue='')
At this point result.method.queue contains a random queue name. For example, it may look like amq.gen-JzTY20BRgKO-HjmUJj0wLg.
Secondly, once the consumer connection is closed, the queue should be deleted. There's an exclusive flag for that:
result = channel.queue_declare(queue='', exclusive=True)
channel.queue_bind(exchange='logs',
                   queue=result.method.queue)
Note:
If you need to handle offline consumers, you need durable queues and persistent messages instead of exclusive and auto-delete queues.
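A minimal sketch of that durable setup with pika, reusing the logs exchange from the tutorial snippet above; the queue name consumer-a is a placeholder:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Durable fanout exchange survives broker restarts
channel.exchange_declare(exchange='logs', exchange_type='fanout', durable=True)

# One named, durable queue per consumer; it keeps accumulating messages
# while that consumer is offline
channel.queue_declare(queue='consumer-a', durable=True)
channel.queue_bind(exchange='logs', queue='consumer-a')

# Publisher side: mark messages as persistent so they are written to disk
channel.basic_publish(
    exchange='logs',
    routing_key='',
    body=b'some event',
    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persistent
)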

RabbitMQ - Copy other queue's messages at queue creation

I have multiple producers that publish to their own specific (durable and limited) queues using the amq.direct exchange and a particular routing key:
Queues:
producer.06
producer.07
...
Routing keys:
"producer.06" -> producer.06
"producer.07" -> producer.07
...
I also have multiple consumers. When they connect, they create their own (exclusive) queue and bind it with the routing keys of the producers they are interested in, so that multiple consumers can receive the same messages.
Queues:
consumer.a
consumer.b
...
Routing keys:
"producer.06" -> consumer.a
"producer.06" -> consumer.b
"producer.07" -> consumer.b
...
I would like to populate the consumer's queue with a snapshot of the messages in the relevant producers' queues, prior to binding the routing keys. Losing a few messages in the interval between the message copy and the routing key binding is acceptable, and a better alternative than out-of-order messages for my application. The consumer should not remove messages from the producers' queues (as they would still be needed by other consumers).
Is there a way to achieve this (copying a snapshot of one queue into another), or does anyone have a suggestion on how to achieve it?
I am running RabbitMQ 3.8.4 on Erlang 23.0.2, and using Rabbit .Net client 6.0.0.0 for the consumers.
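For reference, the consumer-side setup described in the question looks roughly like this as a pika sketch (the question uses the .NET client, but the calls map one-to-one); queue and routing key names are taken from the example above:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Exclusive queue created by consumer.a when it connects
channel.queue_declare(queue='consumer.a', exclusive=True)

# Bind the routing keys of the producers this consumer is interested in
for key in ('producer.06',):
    channel.queue_bind(exchange='amq.direct', queue='consumer.a', routing_key=key)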

RabbitMQ - How to Dead-letter / Process Messages in Expired Queues?

I have a queue that has x-expires set. The issue I am having is that I need to do further processing on the messages that are in the queue IF the queue expires. My initial idea was to set x-dead-letter-exchange on the queue, but when the queue expires, the messages just vanish without making it to the dead-letter exchange.
How can I dead-letter, or otherwise process, messages that are in a queue that expires?
As suggested in the comments, you cannot do this by relying on the x-expires feature alone. But a solution that worked in a similar case I had was to:
Use x-message-ttl to make sure messages die if not consumed in a timely manner,
Assign a dead letter exchange to the queue where all those messages will be routed,
Use x-expires to set the queue expiration to a value higher than the TTL of the messages,
(and this is the tricky part) Assuming you have control over your consumers, before the last consumer goes offline, delete the binding to your "dying" queue, potentially through a REST API call - this will prevent new messages from being routed to the queue.
This way, the messages that were published before the last consumer went offline have already been processed, any remaining messages will be dead-lettered before the queue expires, and no new messages can come into the queue.
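A sketch of a queue declared along those lines, assuming pika; the names (dlx, work.queue) and the TTL/expiry values are illustrative only:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='dlx', exchange_type='direct', durable=True)

# Messages die after 60s if unconsumed and are routed to the DLX;
# the queue itself only expires after 120s of being unused,
# so the message TTL is lower than the queue expiry.
channel.queue_declare(
    queue='work.queue',
    arguments={
        'x-message-ttl': 60000,           # per-message TTL (ms)
        'x-dead-letter-exchange': 'dlx',  # where expired messages are routed
        'x-expires': 120000,              # queue expiry (ms), > message TTL
    },
)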
You need to add a new dead-letter queue that is bound to your dead-letter exchange with the binding routing key set to the original queue name. That way, all expired messages sent to the dead-letter exchange are routed to the dead-letter queue.
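A minimal pika sketch of that binding, reusing the placeholder names from the sketch above (work.queue, dlx):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Dead-letter queue bound with the original queue's name as the
# binding key, so expired messages from work.queue end up here
channel.queue_declare(queue='work.queue.dlq', durable=True)
channel.queue_bind(exchange='dlx', queue='work.queue.dlq', routing_key='work.queue')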

RabbitMQ: Publishing message when consumer is down and later consumer can't consume message without named queue

I have a producer and a consumer, and multiple instances of the consumer are running. When the producer publishes a message, my intention is for every instance to consume it, so I am using a direct exchange. The producer publishes a message to the direct exchange with a routing key, and each consumer listens on that routing key with its own exclusive queue. This works fine when the consumers are up when the producer publishes. But when the consumers are down and the producer publishes a message, the consumers do not receive it once they come back up.
I googled the issue. One suggestion was to use a named queue, but with a named queue the messages are distributed among the instances in round-robin fashion, which does not meet my requirement that every instance consume the same message.
Is there any other solution?
I'd appreciate your help.
There are two solutions to your issue.
Using named queues is one of them.
Set your exchange to fanout mode and bind your named queues to it. That way, when a publisher sends a message to your exchange, it is dispatched to all the queues bound to it.
You can then have one or more consumers for each queue (allowing you to scale), with one named queue per consumer instance. When a consumer disconnects, its queue still receives messages, and when it comes back it can consume them.
You should be able to do what you want that way.
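A minimal consumer-side sketch of that setup with pika; the exchange name events and queue name events.instance-1 are placeholders:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='events', exchange_type='fanout', durable=True)

# One named, durable queue per consumer instance; it buffers messages
# while this instance is offline
queue_name = 'events.instance-1'
channel.queue_declare(queue=queue_name, durable=True)
channel.queue_bind(exchange='events', queue=queue_name)

def handle(ch, method, properties, body):
    print('received', body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue=queue_name, on_message_callback=handle)
channel.start_consuming()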
The other way is more for your personal knowledge, since you said you want to use RabbitMQ: in that particular case you could use Kafka, where your consumer could, after reconnecting, resume from the offset it was at when it disconnected.
Please update me if it doesn't work :)

Consume messages from another queue when routing keys are used in RabbitMQ

I've defined one topic exchange (alarms) and multiple queues, each with its own routing key:
allAlarms, with routing key alarms.#: I want this to be used for receiving all alarms in a monitoring application
alarms_[deviceID], with routing key alarms.[deviceID], where the number of devices can vary at any given time
When sending an alarm from the device, I publish it using the routing key alarms.[deviceID]. The monitoring app, however, only consumes from the allAlarms queue. This leads to the following problem:
The messages in the allAlarms queue have been consumed, while the messages in the remaining queues sit unconsumed in the Ready state. Is there a better way of handling messages across multiple consumers? Ideally, I'd like to be able to also send commands back to the devices using the same queues where the devices publish their alarms.
It looks like you have consumers bound to the allAlarms queue but not to any of the alarms_[deviceID] queues.
In AMQP, a consumer is attached to a single queue by name (and each queue can have multiple consumers attached to it). Messages are delivered to the consumers of a queue in round-robin fashion, so for a given message in a queue there is exactly one consumer that receives it. In other words, a single consumer cannot listen to multiple queues.
Since you're using a topic exchange, you're correctly routing a single message to multiple queues via the routing key and queue bindings. This means that you can have a consumer for each queue and when a message is delivered to the exchange, each queue will get a copy of the message and each queue will deliver the message to exactly one consumer on each queue.
Thus, if the allAlarms queue is being drained, it's because it has a consumer attached to it. If any of the alarms_[deviceID] queues are not being drained, they must not have consumers attached. You have to start up a consumer for each alarms_[deviceID] queue by name; that will also allow you to have different consumer logic for different queues.
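A rough pika sketch of starting a consumer per device queue; the device IDs are hypothetical and the exchange/queue names follow the question:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

def handle_device_alarm(ch, method, properties, body):
    # device-specific handling would go here
    ch.basic_ack(delivery_tag=method.delivery_tag)

# One consumer per device queue, each addressed by name
for device_id in ('0001', '0002'):  # hypothetical device IDs
    queue = 'alarms_{}'.format(device_id)
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(exchange='alarms', queue=queue,
                       routing_key='alarms.{}'.format(device_id))
    channel.basic_consume(queue=queue, on_message_callback=handle_device_alarm)

channel.start_consuming()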
One last thing:
Ideally, I'd like to be able to also send commands back to the devices using the same queues where the devices publish their alarms.
You don't want to do this using the same queue because there's nothing that will stop the non-device consumers on the queue from picking up those messages.
I believe you're describing RPC over RabbitMQ. For that you will want to publish the messages to the alarms queues with a reply-to header which is the name of a temporary queue. This temp queue is a single-use queue that the consumer will publish to when it's done to communicate back to the device. The device will publish to the alarms exchange and then immediately start listening to the temp queue for a response from the consumer.
For more info on RPC over RabbitMQ check out this tutorial.
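A rough sketch of the device side of that reply-to pattern in pika; the alarms exchange and alarms.[deviceID] routing key come from the question, while the device ID 0001 is hypothetical:

import pika
import uuid

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Temporary, exclusive reply queue for this device
result = channel.queue_declare(queue='', exclusive=True)
reply_queue = result.method.queue
corr_id = str(uuid.uuid4())

# The device publishes the alarm and tells the consumer where to answer
channel.basic_publish(
    exchange='alarms',
    routing_key='alarms.0001',  # hypothetical device ID
    body=b'alarm payload',
    properties=pika.BasicProperties(reply_to=reply_queue,
                                    correlation_id=corr_id),
)

# The device then listens on the temp queue for the response/command
def on_response(ch, method, properties, body):
    if properties.correlation_id == corr_id:
        print('command from monitoring app:', body)

channel.basic_consume(queue=reply_queue, on_message_callback=on_response, auto_ack=True)
channel.start_consuming()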
I don't think you need any of the queues for the devices - the alarms_[deviceID] queues.
You don't have any consumer code set up on these queues, and the messages are backed up and waiting for you to consume them.
You also haven't mentioned a need to consume messages from these queues. Instead, you are only consuming messages from the allAlarms queue.
Therefore, I would drop all of the alarms_[deviceID] queues and only keep the allAlarms queue.
Just publish the alarms through your exchange, route them all to the allAlarms queue, and be done with it. No need for any other routing or queues.
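Under that approach the topology boils down to a single queue on the topic exchange; a small pika sketch with names taken from the question (durability is an assumption):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='alarms', exchange_type='topic', durable=True)

# Single queue that catches every alarm via the wildcard binding
channel.queue_declare(queue='allAlarms', durable=True)
channel.queue_bind(exchange='alarms', queue='allAlarms', routing_key='alarms.#')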