RabbitMQ: Publishing a message when the consumer is down; later the consumer can't consume the message without a named queue

I have a producer and a consumer, with multiple instances of the consumer running. When the producer publishes a message, my intention is for all the consumer instances to consume it. So I am using a direct exchange: the producer publishes a message to the direct exchange with a routing key, and each consumer listens for that routing key on an exclusive queue. This works fine when the consumers are up and the producer publishes a message. But when the consumers are down while the producer publishes a message, they do not consume it after they come back up.
I googled the issue. One suggestion was to use a named queue, but with a named queue the messages are consumed in round-robin fashion, which does not meet my expectation that the same message be consumed by all the consumers.
Is there any other solution?
I'd appreciate your help.

There are two solutions to your issue.
Using named queue is one of them.
Set your exchange to fanout mode and bind your named queues to it. That way, when a publisher sends a message to the exchange, it is dispatched to every queue bound to it.
You can then have one or more consumers per queue (allowing you to scale); you will need to define one named queue per consumer. When a consumer disconnects, its queue still receives messages, and when it comes back it can consume them.
You should be able to do what you want that way.
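Here is a minimal sketch of that setup with the Python pika client (the exchange name "events" and the queue names are illustrative, not from the question):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Fanout exchange: every bound queue gets a copy of each message.
    channel.exchange_declare(exchange="events", exchange_type="fanout", durable=True)

    # Each consumer instance declares its own durable, named queue and binds it.
    # The queue keeps accumulating messages while that consumer is offline.
    queue_name = "events.consumer-1"   # e.g. derived from the instance id
    channel.queue_declare(queue=queue_name, durable=True)
    channel.queue_bind(exchange="events", queue=queue_name)

    def handle(ch, method, properties, body):
        print("received:", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=queue_name, on_message_callback=handle)
    channel.start_consuming()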
The other way is more for your personal knowledge, since you said you want to use RabbitMQ: in that particular case you could use Kafka, and your consumer could then, after reconnecting, resume at the offset it had reached when it disconnected.
Please update me if it doesn't work :)

Related

RabbitMQ - can a message be persisted until all subscribed consumers have received it?

I'm having a little trouble figuring out whether RabbitMQ can publish a message to a single queue with multiple subscribers, such that the message is not deleted until all subscribers to that queue have received it.
The closest I can find is https://www.rabbitmq.com/tutorials/amqp-concepts.html, where it states:
AMQP 0-9-1 has a built-in feature called message acknowledgements (sometimes referred to as acks) that consumers use to confirm message delivery and/or processing. If an application crashes (the AMQP broker notices this when the connection is closed), if an acknowledgement for a message was expected but not received by the AMQP broker, the message is re-queued (and possibly immediately delivered to another consumer, if any exists).
Does this mean if the queue has more than one subscriber, it will wait until the message is consumed by all subscribers?
You should use multiple queues bound to the same exchange, using the same binding. Then, when a message matches the binding, it will be delivered to all queues, which presumably each have a consumer.
If you have multiple consumers on a single queue, RabbitMQ will round-robin deliveries among those consumers (which is not what you want).
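A small sketch of that layout with the Python pika client (the exchange, queue names and binding key are illustrative, not from the question):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    channel.exchange_declare(exchange="tasks", exchange_type="direct", durable=True)

    # One queue per consumer, all bound with the same routing key: every queue
    # (and therefore every consumer) receives its own copy of each message.
    for name in ("tasks.consumer-a", "tasks.consumer-b"):
        channel.queue_declare(queue=name, durable=True)
        channel.queue_bind(exchange="tasks", queue=name, routing_key="events.new")

    # The publisher is unchanged; it only targets the exchange + routing key.
    channel.basic_publish(exchange="tasks", routing_key="events.new", body=b"hello")
    connection.close()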
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

When will a message be erased from the queue?

Let's suppose we have one producer, one queue and some consumers which are subscribed to the queue.
Producer -> Queue -> Consumers
The queue contains messages about life events, and all consumers should receive these messages.
When will a message be erased from the queue?
When all consumers have received it?
Or when one of the consumers confirms the message with ack (true)?
And how do I manage priority, i.e. which of the consumers must get the message first/last (not to be confused with message priority)?
For instance, I have 10 consumers and I want the fifth consumer to get the message first and the remaining consumers later, after a specified delay.
Be careful: when there are many consumers on one queue, only one of them will receive a given message, provided it is consumed and acked properly. You need to bind as many queues to the exchange as you have consumers if every consumer is to receive the message.
For your priority question, there is no built-in mechanism to have consumers receive the same message with a notion of priority: consumer priority exists (see https://www.rabbitmq.com/consumer-priority.html), but it is made to have one consumer receive a given message before the others on a given queue, so the other consumers won't receive that message. If you need to orchestrate the delivery of your messages, you have to think of a more complex system (maybe a saga or a resequencer?).
Note that you can delay messages using this pattern. Again, this requires having multiple queues.
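A sketch of one common way to build such a delay with the Python pika client, assuming the pattern referred to is the usual per-queue TTL plus dead-letter-exchange approach (all names below are illustrative):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Final destination for the "late" consumers.
    channel.exchange_declare(exchange="late", exchange_type="fanout", durable=True)
    channel.queue_declare(queue="late.consumer", durable=True)
    channel.queue_bind(exchange="late", queue="late.consumer")

    # Holding queue: messages sit here for 5000 ms, then are dead-lettered
    # to the "late" exchange, where the delayed consumers pick them up.
    channel.queue_declare(
        queue="hold.5s",
        durable=True,
        arguments={
            "x-message-ttl": 5000,
            "x-dead-letter-exchange": "late",
        },
    )

    # Publish to the default exchange with the holding queue's name as routing key.
    channel.basic_publish(exchange="", routing_key="hold.5s", body=b"delayed event")
    connection.close()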
Finally, there are many scenarios in which a queue is deleted. Take a look at the documentation; they are well explained.

RabbitMQ won't deliver messages after the service goes up again

In my system I use spring-cloud-stream and RabbitMQ for sending and receiving events. I have RabbitMQ running, service A up and service B down. Service A sends an event to service B. Then I bring service B up and expect RabbitMQ to deliver the event, but nothing happens. Is this the correct behaviour? I'm new to RabbitMQ, but I thought it should guarantee that all events eventually find their receivers. My application is simple, based on an example on GitHub, with no extra configuration. What am I missing?
If your consumers don't have a group, the queue is an anonymous, auto-delete queue. You need a group for persistence. See consumer groups.
Producers don't bind queues to the exchange, consumers do.
If the producer starts publishing before a new consumer group's queue exists, those messages will also be lost.
With the RabbitMQ binder, if you know the consumer groups ahead of time, you can set the ...producer.requiredGroups property and the queue(s) will be bound.
See the documentation.
requiredGroups
A comma-separated list of groups to which the producer must ensure message delivery even if they start after it has been created (e.g., by pre-creating durable queues in RabbitMQ).
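For illustration, the corresponding application.properties entries might look like this (the binding names "input"/"output" and the group name are assumptions based on the sample apps, not taken from the question):

    # Service B (consumer): a named group makes the binder create a durable,
    # shared queue that keeps messages while the service is down.
    spring.cloud.stream.bindings.input.group=serviceB

    # Service A (producer): pre-create and bind that group's queue even if
    # service B has never started yet.
    spring.cloud.stream.bindings.output.producer.required-groups=serviceB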

RabbitMQ: dropping messages when no consumers are connected

I'm trying to setup RabbitMQ in a model where there is only one producer and one consumer, and where messages sent by the producer are delivered to the consumer only if the consumer is connected, but dropped if the consumer is not present.
Basically I want the queue to drop all the messages it receives when no consumer is connected to it.
An additional constraint is that the queue must be declared on the RabbitMQ server side, and must not be explicitly created by the consumer or the producer.
Is that possible?
I've looked at a few things, but I can't seem to make it work:
durable vs non-durable does not work, because it is only useful when the broker restarts. I need the same effect, but tied to the consumer's connection.
setting auto_delete to true on the queue means that my client can never connect to this queue again.
x-message-ttl and max-length make it possible to lose messages even when there is a consumer connected.
I've looked at topic exchanges, but as far as I can tell, these only affect the routing of messages between the exchange and the queue based on the message content, and can't take into account whether or not a queue has connected consumers.
The effect that I'm looking for would be something like auto_delete on disconnect, and auto_create on connect. Is there a mechanism in rabbitmq that lets me do that?
After a bit more research, I discovered that one of the assumptions in my question regarding x-message-ttl was wrong. I overlooked a single sentence from the RabbitMQ documentation:
Setting the TTL to 0 causes messages to be expired upon reaching a queue unless they can be delivered to a consumer immediately
https://www.rabbitmq.com/ttl.html
It turns out that the simplest solution is to set x-message-ttl to 0 on my queue.
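For illustration, declaring such a queue with the Python pika client (the queue name is made up; in the asker's setup the declaration would live on the server side, e.g. in a policy or a definitions file):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # A per-queue TTL of 0 drops any message that cannot be delivered to a
    # consumer immediately, which is exactly the "drop when nobody is
    # listening" behaviour described above.
    channel.queue_declare(
        queue="drop.when.idle",
        durable=True,
        arguments={"x-message-ttl": 0},
    )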
You cannot do this directly, but there is a mechanism that is not difficult to implement.
You have to enable the Event Exchange Plugin. This provides an exchange to which your server-side application can connect to receive RabbitMQ's internal events. You would be interested in the consumer.created and consumer.deleted events.
When these events are received you can trigger an action (create or delete the queue you need). More information here: https://www.rabbitmq.com/event-exchange.html
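A rough sketch of a listener on that exchange with the Python pika client (the application queue name is illustrative; the plugin must be enabled first):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # amq.rabbitmq.event is the topic exchange the plugin publishes to.
    channel.queue_declare(queue="consumer-events", durable=True)
    channel.queue_bind(exchange="amq.rabbitmq.event",
                       queue="consumer-events",
                       routing_key="consumer.#")   # consumer.created / consumer.deleted

    def on_event(ch, method, properties, body):
        # method.routing_key tells you which event fired; the event details are
        # in the message headers. React here by creating or deleting the
        # application queue.
        print(method.routing_key, properties.headers)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="consumer-events", on_message_callback=on_event)
    channel.start_consuming()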
Hope this helps.
If your consumer is allowed to dynamically bind/unbind a queue on the broker when it starts and stops, it should be possible that way (e.g. the queue is set up in advance and the consumer binds it during startup to the exchange it wants to receive messages from).
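A sketch of that bind-on-start / unbind-on-stop idea with the Python pika client, assuming the queue and exchange were pre-declared on the broker (their names here are made up):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Bind the pre-declared queue on startup so it starts receiving messages.
    channel.queue_bind(exchange="events", queue="preset.q")
    try:
        for method, properties, body in channel.consume(queue="preset.q"):
            print("received:", body)
            channel.basic_ack(delivery_tag=method.delivery_tag)
    finally:
        # Unbind on shutdown so messages published while we are away are dropped
        # at the exchange instead of piling up in the queue.
        channel.queue_unbind(exchange="events", queue="preset.q")
        connection.close()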

Consume messages from another queue when routing keys are used in RabbitMQ

I've defined one topic exchange (alarms) and multiple queues, each with its own routing key:
allAlarms, with routing key alarms.#: I want this to be used for receiving all alarms in a monitoring application
alarms_[deviceID], with routing key alarms.[deviceID], where the number of devices can vary at any given time
When sending an alarm from the device, I publish it using the routing key alarms.[deviceID]. The monitoring app, however, only consumes from the allAlarms queue. This leads to the following problem:
The messages in the allAlarms queue have been consumed, while the messages in the remaining queues are ready. Is there a better way of handling messages from multiple consumers? Ideally, I'd like to be able to also send commands back to the devices using the same queues where the devices publish their alarms.
It looks like you have consumers bound to the allAlarms queue but not to any of the alarms_[deviceID] queues.
In AMQP, a single consumer is bound to a single queue by name (and each queue can have multiple consumers bound to it). Messages are delivered to the consumers of a queue in round-robin fashion, such that for a given message in a queue there is exactly one consumer that will receive it. That is, a single consumer cannot listen to multiple queues, though an application can register a separate consumer on each queue.
Since you're using a topic exchange, you're correctly routing a single message to multiple queues via the routing key and queue bindings. This means that you can have a consumer for each queue and when a message is delivered to the exchange, each queue will get a copy of the message and each queue will deliver the message to exactly one consumer on each queue.
Thus, if allAlarms is consuming messages, it's because it has a consumer attached to the queue. If any of the alarms_[deviceID] queues are not being consumed, they must not have consumers bound to them. You have to start up a consumer for each alarms_[deviceID] queue by name. That also allows you to have different consumer logic for different queues.
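For illustration, the bindings and the missing per-device consumer might look like this with the Python pika client (the exchange and queue names follow the question; the device id is made up):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    channel.exchange_declare(exchange="alarms", exchange_type="topic", durable=True)

    # Monitoring queue: receives every alarm.
    channel.queue_declare(queue="allAlarms", durable=True)
    channel.queue_bind(exchange="alarms", queue="allAlarms", routing_key="alarms.#")

    # Per-device queue: needs its own consumer, otherwise its messages pile up.
    device_id = "device42"
    channel.queue_declare(queue=f"alarms_{device_id}", durable=True)
    channel.queue_bind(exchange="alarms", queue=f"alarms_{device_id}",
                       routing_key=f"alarms.{device_id}")

    def handle(ch, method, properties, body):
        ch.basic_ack(delivery_tag=method.delivery_tag)

    # Without this consumer the alarms_device42 queue keeps its messages "ready".
    channel.basic_consume(queue=f"alarms_{device_id}", on_message_callback=handle)
    channel.start_consuming()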
One last thing:
Ideally, I'd like to be able to also send commands back to the devices using the same queues where the devices publish their alarms.
You don't want to do this using the same queue because there's nothing that will stop the non-device consumers on the queue from picking up those messages.
I believe you're describing RPC over RabbitMQ. For that you will want to publish the messages to the alarms exchange with the reply_to property set to the name of a temporary queue. This temp queue is a single-use queue that the consumer publishes to when it's done, to communicate back to the device. The device publishes to the alarms exchange and then immediately starts listening on the temp queue for a response from the consumer.
For more info on RPC over RabbitMQ check out this tutorial.
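A condensed sketch of the device side of that RPC flow with the Python pika client (the device id and payload are illustrative):

    import pika
    import uuid

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Exclusive, server-named callback queue for this one request/response pair.
    callback_queue = channel.queue_declare(queue="", exclusive=True).method.queue
    corr_id = str(uuid.uuid4())

    channel.basic_publish(
        exchange="alarms",
        routing_key="alarms.device42",
        properties=pika.BasicProperties(reply_to=callback_queue,
                                        correlation_id=corr_id),
        body=b"alarm payload",
    )

    # The consumer handling the alarm publishes its command/response to
    # properties.reply_to; the device reads it from the callback queue.
    for method, properties, body in channel.consume(queue=callback_queue):
        if properties.correlation_id == corr_id:
            print("command from server:", body)
            channel.basic_ack(delivery_tag=method.delivery_tag)
            break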
I don't think you need any of the per-device queues - the alarms_[deviceID] queues.
You don't have any consumer code set up on these queues, so the messages back up and wait for you to consume them.
You also haven't mentioned a need to consume messages from these queues; instead, you are only consuming messages from the allAlarms queue.
Therefore, I would drop all of the alarms_[deviceID] queues and keep only the allAlarms queue.
Just publish the alarms through your exchange, route them all to the allAlarms queue, and be done with it. No need for any other routing or queues.