I'm a bit confused about RabbitMQ best practices regarding the use of queues and exchanges. Let's say I would like to deliver a GenerateInvoice message with some data for an invoice, and have multiple consumers process the invoice data and generate a PDF. Each GenerateInvoice should be processed by only one consumer.
One approach is to declare a queue, publish the GenerateInvoice messages to this queue, and let all consumers consume from it. That would distribute the messages across the different consumers.
It's unclear to me whether the above is okay, or whether best practice is to deliver the messages to an exchange instead of publishing them directly to a queue. Using an exchange, I have to ensure that a queue is declared and bound after the producer has created the exchange but before it starts to publish messages; otherwise no queue would receive the messages and they would be lost.
Declaring a queue, publishing GenerateInvoice messages to the queue and having multiple consumers for the queue would work in this scenario.
Messages published to the queue will not be lost; they stay on RabbitMQ even if there are no consumers. The only thing to make sure of is that the queue is declared before messages are published.
Java Example:
channel.queueDeclare(QUEUE_NAME, false, false, false, null); // declare the queue (non-durable, non-exclusive, not auto-deleted)
Then, publishing can be done as:
channel.basicPublish("", QUEUE_NAME, null, message.getBytes()); // publish via the default exchange, routed by the queue name
and consuming can be done as:
channel.basicConsume(QUEUE_NAME, true, deliverCallback, consumerTag -> { }); // auto-acking consumer
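If you go the exchange route instead, a queue just has to be declared and bound before publishing starts. A minimal Java sketch, assuming a hypothetical direct exchange named "invoices" and the same QUEUE_NAME:
channel.exchangeDeclare("invoices", "direct"); // non-durable direct exchange
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
channel.queueBind(QUEUE_NAME, "invoices", "generate-invoice"); // bind with a routing key
channel.basicPublish("invoices", "generate-invoice", null, message.getBytes());
Either way the queue distributes its messages round-robin, so each GenerateInvoice is still handled by exactly one consumer.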
Let's suppose we have one producer, one queue and some consumers which are subscribed to the queue.
Producer -> Queue -> Consumers
The queue contains messages about life events. All consumers should receive these messages.
When will the queue be erased?
When all consumers get the message?
Or when one of the consumers confirms the message with the ack flag (true)?
And how do I manage priority, i.e. which of the consumers must get the message first/last (not to be confused with message priority)?
For instance, I have 10 consumers and I want the fifth consumer to get the message first, and the remaining consumers later, after a specified time.
Be careful: when there are many consumers on one queue, only one of them will receive a given message, provided that it is consumed and acked properly. You need to bind as many queues as consumers to an exchange to have all consumers receive the message.
For your priority question, there is no built-in mechanism for having consumers receive the same message with a notion of priority: consumer priority exists (see https://www.rabbitmq.com/consumer-priority.html), but it is meant to have one consumer receive a given message before the others on a given queue, so the other consumers won't receive that message. If you need to orchestrate the delivery of your messages, you have to think of a more complex system (maybe a saga or a resequencer?).
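For reference, consumer priority on a single queue is set per consumer via the x-priority argument. A rough Java sketch, assuming a channel and a consumer are already set up, and using a made-up queue name:
Map<String, Object> args = new HashMap<>();
args.put("x-priority", 10); // this consumer is preferred as long as it can take messages
channel.basicConsume("lifeEvents", false, "priority-consumer", false, false, args, consumer);
Lower-priority consumers on the same queue only receive messages when the higher-priority one is blocked or disconnected.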
Note that you can delay messages using this pattern. Again, this requires having multiple queues.
Finally, there are many scenarios when a queue is deleted. Take a look at the documentation, these are well explained.
I have a producer and a consumer. Multiple instances of the consumer are running. When the producer publishes a message, my intention is for the message to be consumed by all of the instances, so I am using a direct exchange. The producer publishes a message to the direct exchange with a routing key, and the consumers listen for that routing key, each with its own exclusive queue. This works fine when the consumers are up and the producer publishes a message. But when the consumers are down and the producer publishes a message, the consumers do not receive that message when they come back up.
I googled the issue. One suggestion was to use a named queue. But if I use a single named queue, messages will be consumed in round-robin fashion, which does not meet my expectation that the same message be consumed by all the consumers.
Is there any other solution?
I'd appreciate your help.
There are two solutions to your issue.
Using named queues is one of them.
Set your exchange to fanout mode and bind your named queues to it. That way, when a publisher sends a message to your exchange, it will be dispatched to all the queues bound to it.
You can then have one or more consumers for each queue (allowing you to scale). You'll have to define one named queue per consumer. When a consumer disconnects, its queue still receives messages, and when it comes back it can consume them.
You should be able to do what you want that way.
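A rough Java sketch of that setup, with made-up exchange and queue names:
// each consumer declares its own named, durable queue and binds it to the fanout exchange
channel.exchangeDeclare("events", "fanout", true);
channel.queueDeclare("events.consumer-1", true, false, false, null);
channel.queueBind("events.consumer-1", "events", ""); // fanout ignores the routing key
// the publisher only needs to publish to the exchange
channel.basicPublish("events", "", null, "some event".getBytes());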
The other way is more for your personal knowledge, since you said you want to use RabbitMQ: in that particular case you could use Kafka, and your consumer could then, after reconnecting, resume at the offset it was at when it disconnected.
Please update me if it doesn't work :)
I am trying to understand the logic for message deletion in RabbitMQ.
My goal is to make messages persist even if there is not a client connected to read them, so that when clients reconnect the messages are waiting for them. I can use durable, lazy queues so that messages are persisted to disk, and I can use HA replication to ensure that multiple nodes get a copy of all queued messages.
I want to have messages go to two or more queues, using topic or header routing, and have one or more clients reading each queue.
I have two queues, A and B, fed by a header exchange. Queue A gets all messages. Queue B gets only messages with the "archive" header. Queue A has 3 consumers reading. Queue B has 1 consumer. If the consumer of B dies, but the consumers of A continue acknowledging messages, will RabbitMQ delete the messages or continue to store them? Queue B will not have anyone consuming it until B is restarted, and I want the messages to remain available for later consumption.
I have read a bunch of documentation so far, but still have not found a clear answer to this.
RabbitMQ deletes the messages from a queue upon acknowledgement.
Let's say you have a message sender:
var factory = new ConnectionFactory() { HostName = "localhost", Port = 5672, UserName = "guest", Password = "guest" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
channel.QueueDeclare(queue: "hello",
durable: true,
exclusive: false,
autoDelete: false,
arguments: null);
string message = "Hello World!";
var body = Encoding.UTF8.GetBytes(message);
channel.BasicPublish(exchange: "",
routingKey: "hello",
basicProperties: null,
body: body);
Console.WriteLine(" [x] Sent {0}", message);
}
This will create a durable queue "hello" and send the message "Hello World!" to it, so the queue now holds one ready message.
Now let's set up two consumers, one that acknowledges the message was received and one that doesn't.
channel.BasicConsume(queue: "hello",
autoAck: false,
consumer: consumer);
and
channel.BasicConsume(queue: "hello",
autoAck: true,
consumer: consumer);
If you only run the first consumer, the message will never be deleted from the queue, because with autoAck disabled the messages only disappear from the queue once the client manually acknowledges them: https://www.rabbitmq.com/confirms.html
The second consumer however will tell the queue that it can safely delete all the messages it received, automatically/immediately.
If you don't want these messages to be deleted automatically, you must disable autoAck and acknowledge them manually, as described in the acknowledgements section of the documentation linked above:
channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
The simple answer is that messages consumed from one queue have no bearing on messages in another. Once you publish a message, the broker distributes copies to as many queues as appropriate - but they are true copies of the message and are absolutely unrelated from that point forward so far as the broker is concerned.
Messages enqueued into a durable queue remain until they are pulled by a consumer on the queue, and optionally acknowledged.
Note that there are specific queue-level and message-level TTL settings that could affect this. For example, if the queue has a TTL, and the consumer does not reconnect before it expires, the queue will evaporate along with all its messages. Similarly, if a message has been enqueued with a specific TTL (which can also be set as a default for all messages on a particular queue), then once that TTL passes, the message will not be delivered to the consumer.
Secondary note: in the case where a message expires in the queue due to a per-message TTL, it will actually remain in the queue until it is next up to be delivered, at which point it is discarded instead of delivered.
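For example, a per-message TTL is set on the message properties at publish time; a Java sketch with a placeholder queue name and body:
// this message is dropped if it sits unconsumed in the queue for more than 60 seconds
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .expiration("60000") // TTL in milliseconds, passed as a string
        .build();
channel.basicPublish("", "archiveQueue", props, body);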
There are different situations in which RabbitMQ deletes messages.
Some of them are:
After an ack from the consumer.
The time-to-live (TTL) for that queue has been reached.
The time-to-live (TTL) for messages on that queue has been reached.
The last two points mean that RabbitMQ allows you to set a TTL (time-to-live) for both messages and queues.
Message TTL can be set for a given queue by setting the x-message-ttl argument on queue.declare, or by setting the message-ttl policy.
Expiry time can be set for a given queue by setting the x-expires argument to queue.declare, or by setting the expires policy.
A message that has been in the queue for longer than the configured TTL is said to be dead.
An important point to note here is that a single message routed to different queues can die at different times, or sometimes never, in each queue where it resides.
The death of a message in one queue has no impact on the life of the same message in another queue.
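For example, both arguments can be passed when declaring the queue; a Java sketch with made-up names and values:
Map<String, Object> args = new HashMap<>();
args.put("x-message-ttl", 60000);   // messages die after 60 seconds in this queue
args.put("x-expires", 1800000);     // the queue itself is deleted after 30 minutes of disuse
channel.queueDeclare("archive", true, false, false, args);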
I've defined one topic exchange (alarms) and multiple queues, each with its own routing key:
allAlarms, with routing key alarms.#: I want this to be used for receiving all alarms in a monitoring application
alarms_[deviceID], with routing key alarms.[deviceID], where the number of devices can vary at any given time
When sending an alarm from the device, I publish it using the routing key alarms.[deviceID]. The monitoring app, however, only consumes from the allAlarms queue. This leads to the following problem:
The messages in the allAlarms queue have been consumed, while the messages in the remaining queues sit there in the Ready state. Is there a better way of handling messages with multiple consumers? Ideally, I'd like to be able to also send commands back to the devices using the same queues where the devices publish their alarms.
It looks like you have consumers bound to the allAlarms queue but not to any of the alarms_[deviceID] queues.
In AMQP, a single consumer is attached to a single queue by name (and each queue can have multiple consumers attached to it). Messages are delivered to the consumers of a queue in round-robin fashion, such that for a given message in a queue there is exactly one consumer that will receive it. That is, one consumer cannot listen to multiple queues.
Since you're using a topic exchange, you're correctly routing a single message to multiple queues via the routing key and queue bindings. This means that you can have a consumer for each queue and when a message is delivered to the exchange, each queue will get a copy of the message and each queue will deliver the message to exactly one consumer on each queue.
Thus, if allAlarms is consuming messages, it's because it has a consumer attached to the queue. If any of the alarms_[deviceID] are not consuming messages then they must not have consumers bound to those individual queues. You have to start up consumers for each alarms_[deviceID] by name. That will allow you to also have different consumer logic for different queues.
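For illustration, starting a consumer on one of the device queues might look roughly like this in Java (deviceId and the handling logic are placeholders):
// one consumer per device queue, in addition to the allAlarms consumer
String deviceQueue = "alarms_" + deviceId;
channel.basicConsume(deviceQueue, false, (consumerTag, delivery) -> {
    // device-specific handling here
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}, consumerTag -> { });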
One last thing:
Ideally, I'd like to be able to also send commands back to the devices using the same queues where the devices publish their alarms.
You don't want to do this using the same queue because there's nothing that will stop the non-device consumers on the queue from picking up those messages.
I believe you're describing RPC over RabbitMQ. For that you will want to publish the messages to the alarms queues with a reply-to header which is the name of a temporary queue. This temp queue is a single-use queue that the consumer will publish to when it's done to communicate back to the device. The device will publish to the alarms exchange and then immediately start listening to the temp queue for a response from the consumer.
For more info on RPC over RabbitMQ, check out the official RabbitMQ RPC tutorial.
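A rough Java sketch of the device side of that pattern, reusing the alarms exchange and routing key from the question (deviceId, alarmPayload and deliverCallback are placeholders):
// declare a temporary, exclusive reply queue and tell the consumer where to reply
String replyQueue = channel.queueDeclare().getQueue();
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .replyTo(replyQueue)
        .build();
channel.basicPublish("alarms", "alarms." + deviceId, props, alarmPayload);
// then wait for the command/response on the reply queue
channel.basicConsume(replyQueue, true, deliverCallback, consumerTag -> { });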
I don't think you need any of the queues for the devices, i.e. the alarms_[deviceID] queues.
You don't have any consumer code set up on these queues, and the messages are backed up and waiting for you to consume them.
You also haven't mentioned a need to consume messages from these queues. Instead, you are only consuming messages from the allAlarms queue.
Therefore, I would drop all of the alarms_[deviceID] queues and only keep the allAlarms queue.
Just publish the alarms through your exchange, route them all to the allAlarms queue, and be done with it. No need for any other routing or queues.
I have a producer and broker on the same machine. The producer sends messages like so:
channel = connection.createChannel();
//Create a durable queue (if not already present)
channel.queueDeclare(merchantId, true, false, false, null);
//Publish message onto the queue
channel.basicPublish("", consumerId, true, false,
MessageProperties.MINIMAL_PERSISTENT_BASIC, "myMessage");
The consumer sits on another machine and listens to messages. It uses explicit acknowledgement like so:
while (true) {
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
//Handle message here
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}
From what I understand, the ack is meant for the broker to dequeue the message.
But how can my producer come to know about the ack that the consumer sent?
Producers and consumers normally don't interact. This is by AMQP protocol design. For example, consuming a specific message may be done a long time after it was published, and there is no sense in leaving the producer up and running for a long time. Another example is when a publisher sends one message to a broker, and due to routing logic that message gets duplicated to more than one queue, leading to ambiguity (because multiple consumers can acknowledge the same message). AMQP protocol is asynchronous (mostly), and letting the publisher know about its message being consumed just doesn't fit the AMQP async model.
There are exceptions to that, notably RPC calls. Then the producer becomes a producer-consumer: it sends a message and then immediately waits for a reply (there is a good RabbitMQ manual, Direct Reply-to, related to RPC with RabbitMQ).
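For reference, with Direct Reply-to the producer temporarily becomes a consumer on the special amq.rabbitmq.reply-to pseudo-queue; a rough Java sketch (rpc_queue, request and deliverCallback are placeholders):
// you must be consuming from the pseudo-queue in auto-ack mode before publishing the request
channel.basicConsume("amq.rabbitmq.reply-to", true, deliverCallback, consumerTag -> { });
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .replyTo("amq.rabbitmq.reply-to")
        .build();
channel.basicPublish("", "rpc_queue", props, request.getBytes());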
In general, you can ensure that a message has been delivered to the broker with Confirms (aka Publisher Acknowledgements), together with Dead Letter Exchanges and Alternate Exchanges. Those cover most cases in which a message can be lost from its normal flow.
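A minimal Java sketch of publisher confirms (queueName and body are placeholders):
channel.confirmSelect(); // put the channel into publisher-confirm mode
channel.basicPublish("", queueName, MessageProperties.PERSISTENT_BASIC, body);
channel.waitForConfirmsOrDie(5000); // blocks until the broker confirms, throws on nack or timeout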