I have ActiveMQ 5.10.0, and I configured the receiver and the broker with the same connection URL:
tcp://localhost:61666?jms.prefetchPolicy.queuePrefetch=0&jms.redeliveryPolicy.maximumRedeliveries=5&jms.redeliveryPolicy.initialRedeliveryDelay=5000&jms.redeliveryPolicy.useExponentialBackOff=true&jms.redeliveryPolicy.backOffMultiplier=2.0&jms.nonBlockingRedelivery=true&jms.redeliveryPolicy.maximumRedeliveryDelay=180
When there is only one message on the queue that throws an exception, it is redelivered as expected after 5 s, 10 s, 20 s, 40 s and 80 s; then it is placed on the dead letter queue.
When multiple messages are placed on the queue, the delays are doubled, not per message but per queue. I expect the maximum delay to be 180 seconds, but message 1 is retried 170 s after message 2, the next message is retried after 340 s, and the next one after 680 s...
Did I find a bug in ActiveMQ or is my configuration wrong?
To keep the message order, the message is retried on the same consumer and blocks that consumer while the redelivery delays elapse. So if you put two "bad" messages on the queue, the first will be retried for 180 seconds, then the second one will be retried for 180 seconds, and so forth. This is the expected behaviour to preserve message order.
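For reference, a minimal sketch of the same options applied programmatically on the ActiveMQ connection factory (class and setter names from the ActiveMQ 5.x client; note that the redelivery delay values are in milliseconds):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class ReceiverConnection {
    static ActiveMQConnectionFactory configuredFactory() {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61666");

        // jms.prefetchPolicy.queuePrefetch=0
        factory.getPrefetchPolicy().setQueuePrefetch(0);

        // jms.redeliveryPolicy.* options; all delay values are milliseconds
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(5);
        policy.setInitialRedeliveryDelay(5000);
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);
        policy.setMaximumRedeliveryDelay(180);

        // jms.nonBlockingRedelivery=true: redeliveries are scheduled on a
        // separate thread instead of blocking the consumer session
        factory.setNonBlockingRedelivery(true);
        return factory;
    }
}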
Related
I have 2 RabbitMQ queues:
incoming_message => where I push all messages that I want to process later
incoming_message_dlx => where I push the messages whose processing failed
As you can guess from its name, the incoming_message_dlx queue uses the Dead Letter Exchange feature, which means that when a message expires, it is requeued to my incoming_message queue.
What I am trying to achieve is to increase the expiration of a message each time its processing fails and it is pushed to the DLX queue.
The problem is that even if a message has expired, it will not be requeued to my incoming_message queue until it reaches the head of the queue. So if there is a message with an expiration time of 7 days in the DLX queue and we enqueue a new message with an expiration time of 5 seconds, the new message will only be requeued to incoming_message after 7 days + 5 seconds...
I've found in the documentation that I can use my DLX queue as a priority queue and set a priority on my messages according to the expiration time, but it doesn't work as expected; the priority seems to be ignored.
However, when I use the RabbitMQ admin (management plugin) and fetch the first message of the queue, it's always the one with the highest priority, but the "internal consumer" of the DLX queue seems to ignore this priority.
Do you know what could be the problem?
Thanks a lot in advance.
PS: I'm using RabbitMQ server version 3.6.10.
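For context, a minimal sketch (RabbitMQ Java client) of the kind of wiring described above. The queue names come from the question; the use of the default exchange as the dead-letter target and the growing per-message expiration are assumptions about how such a setup is typically done:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.HashMap;
import java.util.Map;

public class DlxWiring {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            // Queue the consumers actually read from.
            ch.queueDeclare("incoming_message", true, false, false, null);

            // Retry queue: expired messages are dead-lettered back to incoming_message
            // through the default exchange, using the queue name as routing key.
            Map<String, Object> dlxArgs = new HashMap<>();
            dlxArgs.put("x-dead-letter-exchange", "");
            dlxArgs.put("x-dead-letter-routing-key", "incoming_message");
            ch.queueDeclare("incoming_message_dlx", true, false, false, dlxArgs);

            // On a processing failure, republish to the retry queue with a per-message
            // expiration (milliseconds, as a string) that grows with each attempt.
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .expiration("5000")
                    .build();
            ch.basicPublish("", "incoming_message_dlx", props, "payload".getBytes());
        }
    }
}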
As a FIFO queue structure, RabbitMQ only expires messages from the head of the queue.
There are 3 kinds of TTL:
Per-Queue Message TTL: x-message-ttl
Per-Message TTL: expiration
Queue TTL: x-expires
When you want a message to be delivered exactly when its TTL elapses, try a multi-level TTL queue setup.
You can predefine as many DLX queues as you need.
E.g. if you want error messages to be retried after 5s, 15s or 60s, you can define 3 DLX queues with different x-message-ttl values and bind these 3 incoming_message_dlx queues through the DLX router back to incoming_message (a sketch follows below).
But if you have a message with ttl=30s and you have only predefined 3 queues with TTLs of 5s, 15s and 60s, where should it be delivered? Try a priority queue.
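A sketch of how those three fixed-delay retry queues could be declared with the Java client. The per-queue TTLs (5s, 15s, 60s) come from the answer; the queue names and the use of the default exchange for dead-lettering are assumptions:

import com.rabbitmq.client.Channel;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class RetryQueues {
    static Map<String, Object> retryArgs(int ttlMillis) {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", ttlMillis);                       // per-queue message TTL
        args.put("x-dead-letter-exchange", "");                     // default exchange (assumption)
        args.put("x-dead-letter-routing-key", "incoming_message");  // requeue target after expiry
        return args;
    }

    static void declare(Channel ch) throws IOException {
        ch.queueDeclare("incoming_message", true, false, false, null);
        ch.queueDeclare("incoming_message_dlx.5s",  true, false, false, retryArgs(5_000));
        ch.queueDeclare("incoming_message_dlx.15s", true, false, false, retryArgs(15_000));
        ch.queueDeclare("incoming_message_dlx.60s", true, false, false, retryArgs(60_000));
    }
}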
Official doc:
Messages which should expire will still only expire from the head of the queue. This means that unlike with normal queues, even per-queue TTL can lead to expired lower-priority messages getting stuck behind non-expired higher priority ones. These messages will never be delivered, but they will appear in queue statistics.
To avoid "expired lower-priority messages getting stuck behind non-expired higher priority ones", a queue state like [60s(p=1), 30s(p=0)] must never occur, and with the scheme below it will not happen.
We defined 3 queue TTLs (5s, 15s, 60s). To prevent a message with a lower TTL from getting stuck, we push it into the queue whose TTL is the next one above the message's TTL and give it a higher priority.
So a message with ttl=30s, which falls between the predefined 15s and 60s queues, is delivered to the queue whose TTL is 60s.
Set that 60s queue's x-max-priority to 1 (the default priority is 0) and deliver the ttl=30s message with priority=1.
The messages in that queue then look like [30, 60, 60, 60, 60]: the 30s message sits at the head, so it will not be blocked by the ttl=60s messages.
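A sketch of that priority trick, under the same assumptions as above: the 60s retry queue is declared with x-max-priority=1, and a message that should only wait 30s is published there with priority 1 and a 30s per-message expiration, so it sits ahead of the plain 60s messages and can expire on time:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class OddTtlRetry {
    static void publish30sRetry(Channel ch, byte[] body) throws IOException {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 60_000);                          // queue-level TTL: 60s
        args.put("x-max-priority", 1);                              // allow priorities 0..1
        args.put("x-dead-letter-exchange", "");
        args.put("x-dead-letter-routing-key", "incoming_message");
        ch.queueDeclare("incoming_message_dlx.60s", true, false, false, args);

        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .priority(1)            // jumps ahead of the default-priority (0) 60s messages
                .expiration("30000")    // per-message TTL of 30s, lower than the queue's 60s
                .build();
        ch.basicPublish("", "incoming_message_dlx.60s", props, body);
    }
}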
I have a simple service that subscribes to messages from RabbitMQ and writes them down to a datastore. Sometimes this datastore is unavailable for short periods of time (sometimes seconds, sometimes minutes). If this happens, we do a basic.reject on the failed message with requeue set to true. While this works, the message seems to get redelivered immediately. I'd like RabbitMQ to gracefully back off the redelivery: for example, first try to redeliver "immediately", then after 2, 3, 5, 8, 13 seconds, etc. Is this possible, and if so, how?
In addition to what Louis F. posted as a comment, check out the Delayed Message Exchange plugin: https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/
You could set up a dead-letter exchange using the delayed message exchange type and accomplish this very easily, without having to do a bunch of configuration and juggle TTLs like that.
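A minimal sketch of the delayed-exchange approach with the plugin enabled on the broker. The exchange and queue names are assumptions, the backoff schedule is taken from the delays in the question, and this variant republishes with a growing x-delay header rather than relying only on dead-lettering:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class DelayedRetry {
    // Delays from the question: immediately, then 2, 3, 5, 8, 13 seconds.
    static final long[] DELAYS_MS = {0, 2_000, 3_000, 5_000, 8_000, 13_000};

    static void declare(Channel ch) throws IOException {
        Map<String, Object> exArgs = new HashMap<>();
        exArgs.put("x-delayed-type", "direct");                     // plugin-specific argument
        ch.exchangeDeclare("retry.delayed", "x-delayed-message", true, false, exArgs);
        ch.queueDeclare("work", true, false, false, null);
        ch.queueBind("work", "retry.delayed", "work");
    }

    static void republishWithBackoff(Channel ch, byte[] body, int attempt) throws IOException {
        Map<String, Object> headers = new HashMap<>();
        long delay = DELAYS_MS[Math.min(attempt, DELAYS_MS.length - 1)];
        headers.put("x-delay", delay);                              // plugin reads this header (ms)
        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder().headers(headers).build();
        ch.basicPublish("retry.delayed", "work", props, body);
    }
}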
I would like to have this constraint on a queue in RabbitMQ:
Next message in the queue can't be dequeued before previous message (the one being processed) is acked.
Through this I will achieve ordered processing of events and parallel processing across multiple queues. How do I/can I configure RabbitMQ for this?
Edit (clarification): There will be many consumers all trying to get work from all the queues and, since they can't get work from a queue that has an unacked event being processed, ordered processing is maintained.
Next message in the queue can't be dequeued before previous message (the one being processed) is acked.
You can do this through the consumer prefetch limit, for a single consumer.
Through this I will achieve ordered processing of events and parallel processing across multiple queues.
Unfortunately, this won't have the effect that you want.
You can set a prefetch limit of 1 for an individual consumer. That consumer will wait for a message to be acknowledged before getting the next one.
However, this applies to the individual consumer, not the queue.
If you have 2 consumers, each of them will process a message in parallel. If you have 10 consumers, 10 messages will be processed in parallel.
The only way to process every message in order is to have a single consumer with a prefetch of 1.
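A minimal sketch of that single-consumer setup with the Java client (queue name and handler are placeholders): prefetch is set to 1 and acks are manual, so the broker holds back the next message until the current one is acked:

import com.rabbitmq.client.Channel;

import java.io.IOException;

public class OrderedConsumer {
    static void consume(Channel ch) throws IOException {
        ch.basicQos(1);                       // at most one unacknowledged message at a time
        ch.basicConsume("events", false,      // autoAck=false: we ack manually
                (consumerTag, delivery) -> {
                    long tag = delivery.getEnvelope().getDeliveryTag();
                    try {
                        process(delivery.getBody());      // hypothetical handler
                        ch.basicAck(tag, false);
                    } catch (Exception e) {
                        ch.basicReject(tag, true);        // requeue on failure
                    }
                },
                consumerTag -> { /* consumer cancelled */ });
    }

    static void process(byte[] body) { /* ... */ }
}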
Can I retry a message N times and then send it to a dead queue without making ack and republishing the message from the consumer?
The only way I can think of is to use multiple queues with a DLX setup, where each queue feeds the next retry queue, like this:
test ---> test.retries.1 ---> ... ---> test.retries.N ---> test.dead
Is this OK? I am not sure what I mean by "OK"; I've started playing with RabbitMQ recently. Let's say: is this a common setup? Are there any disadvantages?
Is there another way? Maybe a plugin that adds a counter to basic.reject and does the same thing?
Side note: I want to know this because I distrust the idea of having a consumer that will acknowledge a message (even though it cannot process it) and then publish it again. In the end you will end up with multiple liars that will publish a message and, from time to time, fetch it immediately before everyone else "just to be sure"... and you'll make them remember (and they won't). [This also happens in the scenario with the multiple retry queues, but at least the broker will control where the message is going, not the consumer.]
basic.reject with requeue + TTL
You have one queue, you requeue the message multiple times on failure, and when the TTL expires you can set up a DLX to receive it.
basic.reject with multiple queues
On failure you always do basic.reject without requeue and use the dlx to send the message to the next retry queue:
test ---> test.retries.1 ---> ... ---> test.retries.N ---> test.z_dead
At the moment I am using this approach with only 1 retry queue, and I have a special queue that receives certain messages from the DLX and sends me an email. (In my case a message is acknowledged within a few hours.)
basic.reject with counting of the number of retries
When you do basic.reject without requeue and use a DLX, you can check the x-death header added by the DLX to determine the number of retries.
Here is how it is done in sneakers, a Ruby gem:
---> test (queue)
|
| test.retry (exchange)
|
---> test.retry (queue - wait for some time with ttl)
|
| test.retry.requeue (exchange)
|
---> test (queue)
In the end you count how many times the message has passed through the test queue, and when it exceeds your retry count you acknowledge the message (maybe after publishing it somewhere, so you can be notified of the error).
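A sketch of what that counting could look like in a Java consumer. MAX_RETRIES, the process handler and the test.z_dead name follow the example above and are assumptions; the shape of the x-death header (a list of tables, each with a "count" field) is standard:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Delivery;

import java.io.IOException;
import java.util.List;
import java.util.Map;

public class RetryCountingConsumer {
    static final long MAX_RETRIES = 5;   // hypothetical limit

    @SuppressWarnings("unchecked")
    static long retryCount(AMQP.BasicProperties props) {
        Map<String, Object> headers = props.getHeaders();
        if (headers == null || headers.get("x-death") == null) return 0;
        List<Map<String, Object>> xDeath = (List<Map<String, Object>>) headers.get("x-death");
        // The first entry is the most recent dead-lettering; "count" says how often it happened.
        return xDeath.isEmpty() ? 0 : ((Number) xDeath.get(0).get("count")).longValue();
    }

    static void handle(Channel ch, Delivery d) throws IOException {
        long tag = d.getEnvelope().getDeliveryTag();
        try {
            process(d.getBody());                              // hypothetical handler
            ch.basicAck(tag, false);
        } catch (Exception e) {
            if (retryCount(d.getProperties()) >= MAX_RETRIES) {
                // Too many attempts: park the message and acknowledge the original.
                ch.basicPublish("", "test.z_dead", d.getProperties(), d.getBody());
                ch.basicAck(tag, false);
            } else {
                // Reject without requeue so the DLX routes it into the retry cycle again.
                ch.basicReject(tag, false);
            }
        }
    }

    static void process(byte[] body) { /* ... */ }
}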
ActiveMQ: 5.10.2 inside ServiceMix's Karaf OSGi
KahaDB persistence.
Default broker settings.
Default settings in connections (tcp://x.x.x.x:61616).
16 queues predefined in activemq.xml.
Two client connections to ActiveMQ. One for producer sessions, one for consumer sessions.
Producers send messages to all queues.
16 consumer sessions consumes messages.
Everything works fine, but:
If I reduce the number of consumers to 1 (or 2 or 3, I don't know where the threshold is), so that messages from one queue are being consumed while messages from the other queues keep being stored,
then after some time passes, I see this picture:
That one consumer stops receiving messages. It thinks there are no more messages.
From activemqweb I can see that the message count on the queue being consumed is > 0.
From activemqweb I cannot see any messages in the Message Browser for that queue.
I can see messages from the other queues in the Message Browser.
If I start some other consumer (or restart ActiveMQ) to consume messages from a different queue, I see:
I start to see messages in the first queue's Message Browser (those that were sent before but weren't visible after the "freeze").
The first queue continues to be consumed.
The second queue begins to be consumed.
The "freeze" can occur again after some time, and starting to consume another queue helps again.
If I start all consumers, I see no "message freeze".
If I just stop and start the consumer on the "frozen" queue, nothing happens. It has to be done on an "unfrozen" queue to "unfreeze" the "frozen" queue.
It also happens if there is no active producer, only consumers.
What can it be?
Thank you.
Oops. I've found what it was.
It was simply the available memory being exceeded.
I didn't set -Xms and -Xmx, so it ran with only 512 MB of max heap.
And when the size of stored but not yet consumed messages got close to that limit, I got this behavior.