Priority Messaging in RabbitMQ

I am working with RabbitMQ, where I publish a message to a retry queue once it has failed during processing. I want the high-priority messages to take precedence and move to the front of the queue. For me, the priority of a message is determined by the number of times it has been retried and pushed to the retry queue. Recently I came across the concept of message priority in RabbitMQ, but I have not gained much confidence working with it, as the documentation is very brief.
I am working with multiple consumers for this queue. I want to understand how priority messaging works in RabbitMQ with multiple consumers.

Related

How is priority in RabbitMQ implemented

Under the hood, how is a FIFO queue turned into a priority queue in a distributed fashion? Is the underlying data structure actually swapped out, or is it a "hacked" fix?
The underlying data structure is actually multiple queues, one per priority level. Each queue is an Erlang VM process, which is why having more than 10 or so priorities isn't recommended: performance suffers. If your load is light enough, this may be acceptable.
When a message is published with a priority header, the message with the higher priority value is placed at the head of the queue. This reordering happens while messages are waiting in the queue to be consumed. To give RabbitMQ the opportunity to actually prioritise messages, set the consumer's basic.qos as low as possible. If a consumer connects to an empty queue whose basic.qos is not set, and messages are subsequently published, those messages may not spend any time waiting in the queue at all; in that case, the priority queue never gets a chance to prioritise them.
Reference: https://www.rabbitmq.com/priority.html
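The interaction between prefetch and prioritisation can be illustrated with a small Python model (no real broker involved; the class and message names are made up for the sketch). The broker can only reorder messages that are still sitting in the queue, so a consumer that prefetches everything sees publication order, not priority order:

```python
import heapq

class PriorityQueueSim:
    """Toy model of a RabbitMQ priority queue: higher priority delivered first,
    FIFO within a priority level."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves FIFO order per priority

    def publish(self, body, priority=0):
        # heapq is a min-heap, so negate the priority to pop the highest first
        heapq.heappush(self._heap, (-priority, self._seq, body))
        self._seq += 1

    def deliver(self, prefetch):
        """Deliver up to `prefetch` messages to a consumer."""
        batch = []
        while self._heap and len(batch) < prefetch:
            _, _, body = heapq.heappop(self._heap)
            batch.append(body)
        return batch

# Greedy consumer: prefetches everything before the urgent message arrives,
# so "urgent" cannot overtake anything.
q = PriorityQueueSim()
q.publish("retry-1", priority=1)
q.publish("retry-2", priority=1)
first_batch = q.deliver(prefetch=100)
assert first_batch == ["retry-1", "retry-2"]
q.publish("urgent", priority=5)

# With prefetch=1, the urgent message overtakes the remaining retry.
q2 = PriorityQueueSim()
q2.publish("retry-1", priority=1)
q2.publish("retry-2", priority=1)
assert q2.deliver(prefetch=1) == ["retry-1"]
q2.publish("urgent", priority=5)
assert q2.deliver(prefetch=1) == ["urgent"]
```

In real pika code the queue would be declared with `arguments={"x-max-priority": 10}` and each message published with `pika.BasicProperties(priority=n)`; the simulation only illustrates the ordering behaviour, not the AMQP API.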

Distribute messages from RabbitMQ to consumers running on Heroku dynos as a 'round robin'

I have a RabbitMQ setup in which jobs are sent to an exchange, which passes them to a queue. A consumer carries out the jobs from the queue correctly in turn. However, these jobs are long processes (several minutes at least). For scalability, I need to be able to have multiple consumers picking a job from the top of the queue and executing it.
The consumer is running on a Heroku dyno called 'queue'. When I scale the dyno, it appears to create additional consumers for each dyno (I can see these on the RabbitMQ dashboard). However, the number of tasks in the queue is unchanged - the extra consumers appear to be doing nothing. Please see the picture below to understand my setup.
Am I missing something here?
Why are the consumers showing as 'idle'? I know from my logs that at least one consumer is actively working through a task.
How can my consumer utilisation be 0% when at least one consumer is definitely working hard?
How can I make the other three consumers actually pull some jobs from the queue?
Thanks
EDIT: I've discovered that the round robin dispatching is actually working, but only if the additional consumers are already running when the messages are sent to the queue. This seems like counterintuitive behaviour to me. If I saw a large queue and wanted to add more consumers, the added consumers would do nothing until more items are added to the queue.
To pick out the key point from the other answer, the likely culprit here is pre-fetching, as described under "Consumer Acknowledgements and Publisher Confirms".
Rather than delivering one message at a time and waiting for it to be acknowledged, the server will send batches to the consumer. If the consumer acknowledges some but then crashes, the remaining messages will be sent to a different consumer; but if the consumer is still running, the unacknowledged messages won't be sent to any new consumer.
This explains the behaviour you're seeing:
You create the queue, and deliver some messages to it, with no consumer running.
You run a single consumer, and it pre-fetches all the messages on the queue.
You run a second consumer; although the queue isn't empty, all the messages are marked as sent to the first consumer, awaiting acknowledgement; so the second consumer sits idle.
A new message arrives in the queue; it is distributed in round-robin fashion to the second consumer.
The solution is to specify the basic.qos option in the consumer. If you set this to 1, RabbitMQ won't send a message to a consumer until it has acknowledged the previous message; multiple consumers with that setting will receive messages in strictly round-robin fashion.
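A rough Python model of that dispatch behaviour (the broker and the dyno names are simulated, not real AMQP): with unlimited prefetch, the first consumer grabs every queued message and a later consumer sits idle, while basic.qos=1 forces strict round-robin among consumers.

```python
from collections import deque

class QueueSim:
    """Toy broker: each ready message goes to the first connected consumer
    whose unacked count is below its prefetch limit (0 = unlimited)."""
    def __init__(self):
        self.ready = deque()
        self.consumers = []   # (name, prefetch) in connection order
        self.unacked = {}

    def publish(self, msg):
        self.ready.append(msg)
        self._dispatch()

    def connect(self, name, prefetch=0):
        self.consumers.append((name, prefetch))
        self.unacked[name] = []
        self._dispatch()

    def ack_one(self, name):
        self.unacked[name].pop(0)
        self._dispatch()

    def _dispatch(self):
        progress = True
        while self.ready and progress:
            progress = False
            for name, prefetch in self.consumers:
                if not self.ready:
                    break
                if prefetch == 0 or len(self.unacked[name]) < prefetch:
                    self.unacked[name].append(self.ready.popleft())
                    progress = True

# Scenario from the question: jobs queued first, consumers scaled up later.
q = QueueSim()
for i in range(4):
    q.publish(f"job-{i}")
q.connect("dyno-1")                 # unlimited prefetch: grabs all four jobs
q.connect("dyno-2")                 # joins later, sits idle
assert q.unacked["dyno-1"] == ["job-0", "job-1", "job-2", "job-3"]
assert q.unacked["dyno-2"] == []

# With basic.qos=1 on every consumer, dispatch is strictly round-robin.
q2 = QueueSim()
q2.connect("dyno-1", prefetch=1)
q2.connect("dyno-2", prefetch=1)
for i in range(4):
    q2.publish(f"job-{i}")
assert q2.unacked["dyno-1"] == ["job-0"]
assert q2.unacked["dyno-2"] == ["job-1"]
q2.ack_one("dyno-1")                # next job is delivered only after the ack
assert q2.unacked["dyno-1"] == ["job-2"]
```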
I am not familiar with Heroku, so I don't know how a Heroku worker builds a RabbitMQ consumer; I have only had a quick look over the Heroku documentation.
Why are the consumers showing as 'idle'?
I think you mean the queue is 'idle'? The queue's state reflects the queue's traffic: 'idle' just means there is no ongoing work for the queue's process. It becomes 'running' when a message is published to the queue.
How can my consumer utilisation be 0% when at least one consumer is definitely working hard.
As with the queue state, the official explanation is that consumer utilisation would be higher if:
There were more consumers
The consumers were faster
The consumers had a higher prefetch count
In your situation, prefetch_count = 0 means there is no limit on prefetch, so it is effectively too large. And Messages.total = Messages.unacked = 78 means your consumers are too slow: too many messages have been delivered to the consumer but not yet acknowledged.
So if your message rate is not high enough, the state and consumer utilisation fields of the queue are of little use.
If I saw a large queue and wanted to add more consumers, the added consumers would do nothing until more items are added to the queue.
Because these unacked messages have already been prefetched by existing consumers, they will not be consumed by new consumers unless you requeue them.

RabbitMQ delivery throttle

So I'm testing RabbitMQ in one node. Plain and simple,
One producer sends messages to the queue,
Multiple consumers take tasks from that queue.
Currently the consumers execute thousands of messages per second; they are too fast, so I need them to slow down. Managing throttling on the consumer side is not possible due to the unreliable nature of the network.
Collectively consumers must not take more than 10 messages per second altogether from that queue.
Is there a way to configure RabbitMQ so as the queue dispatches a maximum of 10 messages per second?
If I remember correctly, once RabbitMQ has delivered a message to the queue, it is up to the consumers to consume it. There are various consumer libraries in different languages; you haven't mentioned anything specific, so I'm giving a generic answer.
In my understanding, you shouldn't try to impose restrictions on RabbitMQ itself. Instead, consider implementing a connection pool of message consumers that will handle no more than X messages simultaneously on the client side. Alternatively, you can provide some kind of semaphore in the handler itself, but not on the RabbitMQ server.
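One client-side approach (a sketch, not the only option): a shared token bucket that every consumer checks before processing a message, refilled at the target rate. The 10-per-second rate and the injectable clock are assumptions for illustration.

```python
import threading
import time

class TokenBucket:
    """Allow at most `rate` operations per second across all callers."""
    def __init__(self, rate, clock=time.monotonic):
        self.rate = rate
        self.capacity = rate
        self.tokens = rate
        self.clock = clock
        self.last = clock()
        self.lock = threading.Lock()

    def try_acquire(self):
        with self.lock:
            now = self.clock()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

# Deterministic demo using a fake clock instead of real time.
t = [0.0]
bucket = TokenBucket(rate=10, clock=lambda: t[0])
burst = sum(bucket.try_acquire() for _ in range(50))
assert burst == 10            # only 10 messages allowed in the same instant
t[0] += 0.5                   # half a second later, 5 tokens have refilled
assert sum(bucket.try_acquire() for _ in range(50)) == 5
```

Each consumer would call try_acquire() before processing and wait or requeue when it returns False. This only works as a collective limit while all consumers share one bucket in one process; consumers on separate hosts would need an external coordination store instead.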

Rabbitmq : Prioritize consuming messages from multiple queues

If I have two queues from which I want to consume messages, and I use a single SimpleMessageQueueListenerContainer for it, in which order would the listeners be invoked/messages consumed when both queues have messages?
I will try to be more specific of the problem I am working on:
I have a consumer application which needs to consume messages from 2 queues – say regular-jobs-queue and infrequent-jobs-queue. If there are any messages in ‘infrequent-jobs-queue’ I want to consume those before consuming messages from ‘regular-jobs-queue’. I might not be able to combine these and put all messages into a single rabbitmq level priority queue and assign higher priority to infrequent-job message because of some upcoming use-cases like purging regular-jobs without affecting infrequent-jobs and others.
I am aware that RabbitMQ has support for consumer priority but I am not very sure if it will be applicable here. I want all instances of my consumer application to first consume messages of infrequent-jobs-queue if any and not prioritize amongst these consumers.
Or should I like have 2 containers, with dedicated consumer thread(s) per queue and have an internal priority-queue data structure into which I can put messages as and when consumed from rabbitmq queue.
Any help would be really appreciated. Thanks.
~Rashida
You can't do what you want; messages will be delivered with equal priority.
Moving them to an internal in-memory queue will risk message loss.
You might want to consider using one of the RabbitTemplate.receive() or receiveAndConvert() methods instead of a message-driven container.
That way you have complete control.
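In plain AMQP terms, that receive() approach amounts to polling both queues, always checking the infrequent queue first. A minimal Python sketch (simulated queues; in real code each lookup would be a `channel.basic_get(queue=name)` call, and the queue names come from the question):

```python
from collections import deque

def poll_next(queues, priority_order):
    """Return (queue_name, message) for the next message, always draining
    higher-priority queues before lower-priority ones; None when all empty."""
    for name in priority_order:
        if queues[name]:
            return name, queues[name].popleft()
    return None

queues = {
    "infrequent-jobs-queue": deque(["audit-1"]),
    "regular-jobs-queue": deque(["job-1", "job-2"]),
}
order = ["infrequent-jobs-queue", "regular-jobs-queue"]

assert poll_next(queues, order) == ("infrequent-jobs-queue", "audit-1")
assert poll_next(queues, order) == ("regular-jobs-queue", "job-1")
```

Because each message is fetched only when the consumer is ready for it, nothing is buffered in memory, so the message-loss risk mentioned above does not apply.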

How to listen to multiple queues in order of priority in a Mule application

I have an AMQP connector set up that listens on a single queue for JSON messages, and it is working fine. The business has handed down a use case where my application now needs to listen on multiple queues in order of priority. For example, with three queues:
HighQ
NormalQ
LowQ
I want the mule connector to first read from HighQ until empty, then NormalQ until empty and LowQ until empty. Restarting from HighQ after every message.
I feel like this should be standard but my google foo is failing me.
Any pointers in the right direction?
In the use case you specified, I think it would be better to go with a single queue, but post messages with 3 priority levels.
This way the messages are always read in order of their priority, with the highest-priority messages always read first.
So you can make the message producers post messages onto the queue with 3 priority levels (say 9 for high, 4 for normal, 0 for low).
Your inbound JMS endpoint will read all the messages with priority 9 first, then all the messages with priority 4, and then the messages with priority 0.
Sample JMS Outbound posting messages with priority.
<jms:outbound-endpoint queue="StudioOUT" connector-ref="MyAppJMS" doc:name="JMS">
    <set-property propertyName="Priority" value="9"/>
</jms:outbound-endpoint>
I hope this should address your scenario.
More on priority of JMS.
http://www.christianposta.com/blog/?p=289
Dealing with message priority is really something that your broker should handle. Dealing with this yourself can be tricky and cumbersome.
Processing queues sequentially in order to simulate priority seems like a bad idea. Lets say you've processed all messages from the high priority queue and start processing the normal priority queue.
While processing the normal priority queue new messages are coming in on the high priority queue. These high priority messages will be sitting there until both the normal and low priority queues are entirely processed.
You could probably improve your mechanism to handle situations like this a bit better, but it will be hard to make it bullet proof. You really don't want to deal with stuff like this yourself.
The JMS API has the concept of 'message priority' built in, but that's of little use to you if you aren't using a JMS broker.
If you're using rabbit mq then you should have a look at this stackoverflow post: rabbitmq-and-message-priority.
As Rabbit MQ queues are basically FIFO queues there's no easy way to use "real" prioritized message (such as in JMS).
There is however a plugin that claims to provide the functionality that you are looking for: rabbitmq-priority-queue.
According to the documentation the next version of RabbitMQ (3.5.0) will support prioritized queues out of the box.
If using the plugin isn't an option and if the priority of the messages is really important then I would not use the pattern you described using multiple queues. The pattern also doesn't scale very well if more priority levels are needed. I would opt to receive all the messages on a single channel (given that each message has a property that represents the priority level) and forward them to a (non amqp) new channel that handles the resequencing for you. An open-source product that could help you with this is Apache ActiveMQ but there are also other options available.
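If you do end up resequencing yourself, the core of that forwarding channel is a buffer ordered by the priority property. A sketch of the idea (in Python rather than a Mule flow; the message bodies and priority values are illustrative):

```python
import heapq

class Resequencer:
    """Buffer messages from a single inbound channel and release them
    highest-priority first (FIFO within each priority level)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # preserves arrival order among equal priorities

    def accept(self, message, priority):
        # Negate the priority: heapq pops the smallest tuple first.
        heapq.heappush(self._heap, (-priority, self._seq, message))
        self._seq += 1

    def release(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

r = Resequencer()
r.accept("low", 0)
r.accept("high", 9)
r.accept("normal", 4)
released = [r.release() for _ in range(3)]
assert released == ["high", "normal", "low"]
```

Note the caveat raised in an earlier answer: anything buffered in memory is lost if the process dies, so messages should only be acknowledged to the broker after they are released and processed, not when they enter the buffer.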