I'm currently writing a dead letter queue to handle exceptions occurring within a route. In the dead letter queue, I plan for each message to be delayed by one day (or some other long period). The code I have written currently delays the message properly, except that it doesn't delay asynchronously: messages end up getting backed up while waiting for the previously delayed messages to wait out their delay period.
<route>
    <from uri="activemq:queue:foo"/>
    <delay asyncDelayed="true">
        <constant>60000</constant>
    </delay>
    <to uri="activemq:aDelayedQueue"/>
</route>
I read that asyncDelayed="true" should schedule a task to be executed in the future to process the latter part of the route, but when I run the above code the messages back up in the foo queue while trickling into aDelayedQueue one at a time.
Why would it do this, and is there something that could fix this issue?
Thanks!
EDIT: I found a workaround, but I'm curious to see what went wrong originally.
Rewording the question again. Here's what my queue pipeline looks like:
QueueA -> QueueB -> QueueC
QueueB pulls the messages that are in QueueA. The goal is for each message to sit in QueueB for X amount of time before being sent to QueueC. The code snippet above was placed in QueueB. The issue I was facing was that if five messages arrived simultaneously in QueueA, QueueB would pull only one of those messages, wait the 60 seconds, then send that message off to QueueC. My intended behavior was for all five messages to be placed onto QueueB, sit there for 60 seconds, and then be sent to QueueC all at once. The original issue was that messages stacked up in QueueA because QueueB was waiting on the delay.
The JMS client route handles messages one by one, not in parallel. Only when one message leaves the route can the next message enter it. Therefore, if one message is delayed, no other message is read from the JMS queue.
Besides your workaround, you could have parallelized your route:
<route>
    <from uri="activemq:queue:foo"/>
    <to uri="seda:delayer"/>
</route>

<route>
    <from uri="seda:delayer?concurrentConsumers=1000"/>
    <delay asyncDelayed="true">
        <constant>60000</constant>
    </delay>
    <to uri="activemq:aDelayedQueue"/>
</route>
However, your workaround using the AMQ_SCHEDULED_DELAY header is more robust, since it keeps working even if your client route shuts down; see Persisting failed messages in Camel's SEDA queue.
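For reference, a minimal sketch of the AMQ_SCHEDULED_DELAY workaround in Camel's Java DSL. This assumes the ActiveMQ broker is started with schedulerSupport="true"; the queue names are taken from the question:

import org.apache.camel.builder.RouteBuilder;

public class DelayedQueueRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:foo")
            // Ask the broker to hold each message for 60 seconds before it
            // becomes visible on the target queue. The delay happens
            // broker-side, so the route never blocks, and the delay
            // survives a client restart.
            .setHeader("AMQ_SCHEDULED_DELAY", constant(60000L))
            .to("activemq:aDelayedQueue");
    }
}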
I had the exact same problem, except that the routes were vm routes as opposed to activemq. So, the following route definition would block synchronously:
from("vm:a").
throttle(1).asyncDelayed().
to("vm:b");
However, when I added the maximumRequestsPerPeriod value, it worked as expected:
from("vm:a").
throttle(1).asyncDelayed().maximumRequestsPerPeriod(100L).
to("vm:b");
If you don't provide a maximumRequestsPerPeriod value, it seems not to queue requests and thus blocks the caller.
I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it works slower than the rate of the incoming messages, so I'm trying to control the traffic on the consumer.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the old messages when it is filled up, so the newest commands are at the front of the queue.
Also, by setting basicQos to 1, I ensure that the consumer doesn't grab all the messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid, since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
I added a messageId and tested, and found out that many messages are still coming and going long after the publisher has been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure it isn't the motor's queue that's acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is called every time, so it's definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that with basicQos set to 1, the callback did execute for each message one after another, but because each message was acknowledged at the start of the callback, the prefetch window was freed immediately and the client kept grabbing messages from the queue in the background.
So even after the publisher was shut down, the consumer still held messages it had already taken from the queue, and those were the messages I saw being executed.
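For illustration, a minimal sketch of that fix with the RabbitMQ Java client; the queue declaration mirrors the setup in the question, and handleCommand is a hypothetical stand-in for the slow motor call:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class MotorConsumer {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();

        // Mirror the setup from the question: a queue capped at 5 messages,
        // where the oldest message is dropped when the cap is reached.
        Map<String, Object> queueArgs = new HashMap<>();
        queueArgs.put("x-max-length", 5);
        channel.queueDeclare("motor-commands", true, false, false, queueArgs);

        // Prefetch 1: the broker will not send another message until the
        // previous one has been acknowledged.
        channel.basicQos(1);

        boolean autoAck = false; // manual acks; basicQos is ignored with auto-ack
        channel.basicConsume("motor-commands", autoAck, (consumerTag, delivery) -> {
            handleCommand(delivery.getBody()); // hypothetical slow motor call
            // Ack only AFTER the work is done. Acking (or auto-acking) earlier
            // frees the prefetch window, so the client keeps pulling messages
            // in the background, which is the behavior observed in the question.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { });
    }

    private static void handleCommand(byte[] body) { /* move the motor */ }
}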
I am doing a POC with RabbitMQ and have a question about how to listen to queues conditionally.
We are consuming messages from a queue; once consumed, each message kicks off an upload process whose duration depends on the file size. Because the files can be large, the external service we invoke sometimes runs out of memory if multiple messages are consumed while the uploads for previous messages are still running.
That said, we would like to consume the next message from the queue only once the current message has been processed completely. I am new to JMS and wondering how to do it.
My current thought is that the code flow could manually pull the next message from the queue when it finishes processing the previous one, since the flow knows it has completed. But if the listener is only invoked manually from the code flow, how will it pull the very first message?
The JMS spec says that message consumers work sequentially:

    "The session used to create the message consumer serializes the execution of all message listeners registered with the session."
If you create a MessageListener and use it with your consumer, the JMS spec guarantees that the listener's onMessage is called sequentially, i.e. once per message, only after the listener has finished processing the previous one. So in effect each message waits until the previous one has completed.
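As a minimal sketch with plain JMS (the broker URL, queue name, and upload helper are hypothetical; an ActiveMQ connection factory is assumed):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SequentialConsumer {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // The session serializes listener execution: onMessage runs for one
        // message at a time, and the next message is not delivered until the
        // previous call has returned.
        session.createConsumer(session.createQueue("upload.requests"))
               .setMessageListener(message -> upload(message));

        connection.start(); // delivery begins; the very first message arrives here
    }

    private static void upload(Message message) {
        // hypothetical long-running upload for this message
    }
}

Note that connection.start() answers the "very first message" concern: once delivery starts, the provider pushes messages to the listener one at a time, so there is no need to pull manually.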
This seems like a pretty basic question, but I seem to be losing messages when the consumer falls over before acknowledging them. I have set up the broker with an exchange audit:exchange and a queue audit:queue bound to it. Both are durable, and as expected, if I send messages when no consumer is active, they sit on the queue and get processed by the consumer when it starts up. However, if I put a breakpoint in the consumer and kill the process halfway through, the message is not requeued; it just seems to get lost. The consumer is set up using the annotation:
@RabbitListener(queues = "audit:queue")
public void process(Message message) {
    routeMessage(message); // stop here and kill process - message removed from q
}
I can't reproduce your issue.
With the breakpoint triggered, I see the message still in the queue (unacked=1) on the rabbit console.
When the process is killed; the message goes back to ready.
Have you configured the listener container factory to use AcknowledgeMode.NONE?
That would exhibit the behavior you describe.
The default is AUTO, which means the message is only acknowledged when the listener returns successfully.
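For reference, a sketch of where that setting lives in a Spring AMQP configuration; the factory bean shown is a standard way to configure it, with the default AUTO made explicit:

import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {
    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory =
                new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // AUTO (the default) acks only after the listener returns successfully,
        // so a killed process leaves the message unacked and it is requeued.
        // NONE would ack up front and exhibit the lost-message behavior.
        factory.setAcknowledgeMode(AcknowledgeMode.AUTO);
        return factory;
    }
}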
If you still think there's an issue, please supply the complete test case.
Sorry, this was my bad (I just wasted a few hours... sigh). I was killing the app from within my IDE, which probably detaches and then kills the process, allowing it to proceed just enough that it actually does send the ack. When I killed the process from a terminal, it worked exactly as expected. Particular apologies to you, Gary, for wasting your time as well.
I am using the alerts feature of the IronMQ service provided by Iron.io to start workers.
I have things set up so that a message is pushed onto the push queue. The push queue sends an alert that starts a worker. The worker pulls the message off the push queue, reserving it. Sometimes, for whatever reason, the job fails, the reservation on the message expires, and the message becomes available again. However, from what I can tell, no alert is sent when the reservation on a message expires. So the message sits in the queue until another message is added to the queue, firing an alert and starting a worker; but then the new message is not processed.
Are alerts created for messages whose reservation expires in IronMQ? Is there any documentation I missed describing what can happen?
I am working on having workers pull off multiple messages, but I am running into issues unrelated to Iron.io when processing multiple messages in the same worker.
Also, is there a way to pull from the top of the queue, to avoid pulling messages that may be causing errors? Or should I just modify my workers to delete messages that cause errors?
Currently there are no alerts for when a message times out and goes back on the queue, but that does seem like it would be a good idea. I assume this is a pretty inactive queue? I made a feature request for this here: https://trello.com/c/XcHi0NdN/35-fire-alert-when-a-message-times-out-goes-back-on-queue
And regarding messages that are causing issues, your best bet would be to add them to a different queue (an error queue) and delete them from the original queue. Then you can go through the error queue to figure out why certain messages are causing you problems. This is known as a "dead letter queue", by the way, and we have a feature request for it here; please give it a vote! https://trello.com/c/bGnJcNa9/26-dead-letter-queue
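A hedged sketch of that error-queue pattern, assuming the iron_mq Java client; the queue names, credentials, and process helper are all hypothetical:

import io.iron.ironmq.Client;
import io.iron.ironmq.Message;
import io.iron.ironmq.Queue;

public class Worker {
    public static void main(String[] args) throws Exception {
        // Project ID and token are placeholders.
        Client client = new Client("PROJECT_ID", "TOKEN");
        Queue jobs = client.queue("jobs");           // hypothetical source queue
        Queue errors = client.queue("jobs-errors");  // hypothetical error queue

        Message msg = jobs.get(); // reserves the message
        try {
            process(msg.getBody()); // hypothetical job logic
        } catch (Exception e) {
            // Park the failing message on the error queue for later
            // inspection, so it stops being retried on the source queue.
            errors.push(msg.getBody());
        }
        // Delete from the source queue either way: the message was either
        // processed or moved to the error queue.
        jobs.deleteMessage(msg);
    }

    private static void process(String body) { /* ... */ }
}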
In the RabbitMQ console pane, I accumulated 8000 messages in one day, but I am confused that the queue's status is idle while the ready and total counters equal 1. What status should the queue show when the job is complete, just idle? And in what format is x-expires registered? It seems to me that I have something set up wrong =(
While it's difficult to fully understand what you are asking, it seems that you simply don't have anything pulling messages off of the queue in question.
In general, RabbitMQ will hold on to a message in a queue until a listener pulls it off and successfully acks it, indicating that the message was processed. You can configure queues to behave differently by setting a time-to-live (TTL) on messages or by giving queues different lifetimes (e.g. auto-deleted when there are no more listeners), but the default is to play it safe.
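To the x-expires question above: both it and x-message-ttl are plain integer queue arguments expressed in milliseconds. A minimal sketch with the RabbitMQ Java client (the queue name and values are hypothetical):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class QueueTtlDemo {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();

        Map<String, Object> queueArgs = new HashMap<>();
        queueArgs.put("x-message-ttl", 60_000);  // drop messages after 60 seconds
        queueArgs.put("x-expires", 1_800_000);   // delete the queue after 30 min unused
        channel.queueDeclare("demo-queue", true, false, false, queueArgs);
    }
}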