RabbitMQ idle queue not handling first message

We have a RabbitMQ queue that sometimes goes idle.
Whenever two or more messages are exchanged on this queue, the first one goes missing while the others are handled fine.
It seems like the first one "wakes" the queue up but is not handled correctly.
Is this normal RabbitMQ behavior?

Related

RabbitMQ more messages than expected on fixed size queue

I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it processes commands more slowly than the messages come in, so I'm trying to control the traffic on the consumer side.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the old messages when it is filled up, so the newest commands are at the front of the queue.
Also, by setting basicQos to 1 I ensure that the consumer doesn't grab all the messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid, since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
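For reference, a minimal sketch of that setup with the RabbitMQ Java client (the host and the queue name "motor-commands" are placeholders, not taken from the original post):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.util.HashMap;
    import java.util.Map;

    public class MotorQueueSetup {
        public static void main(String[] argv) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                     // placeholder host
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            // Cap the queue at 5 messages; the default "drop-head" overflow
            // behaviour discards the oldest message when a sixth arrives,
            // so the queue always holds the newest commands.
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-max-length", 5);
            channel.queueDeclare("motor-commands", true, false, false, queueArgs);

            // Prefetch of 1: the broker sends at most one unacknowledged
            // message to this consumer at a time.
            channel.basicQos(1);
        }
    }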
I added a messageId and tested, and found that many messages were still coming and going long after the publisher had been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure this isn't the robot queue acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is being called every time, so it's definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that with basicQos set to 1 the callback was executed for each message one after another, but because each message was acknowledged as soon as it arrived, the consumer kept grabbing more messages from the queue in the background.
So even after the publisher was shut down, the consumer still held messages it had already taken from the queue, and those were the ones I saw being processed.
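A minimal sketch of that fix with the RabbitMQ Java client (the queue name and the moveMotor call are placeholders for the real motor integration):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class MotorConsumer {
        public static void main(String[] argv) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                         // placeholder host
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            channel.basicQos(1);                                  // one unacked message at a time

            DeliverCallback deliverCallback = (consumerTag, delivery) -> {
                long tag = delivery.getEnvelope().getDeliveryTag();
                try {
                    moveMotor(delivery.getBody());                // hypothetical slow motor call
                    channel.basicAck(tag, false);                 // ack at the END of the callback
                } catch (Exception e) {
                    channel.basicNack(tag, false, false);         // drop rather than requeue a stale command
                }
            };

            // autoAck must be false; with automatic acks the broker considers a message
            // handled as soon as it is delivered, and prefetch no longer throttles anything.
            channel.basicConsume("motor-commands", false, deliverCallback, consumerTag -> { });
        }

        private static void moveMotor(byte[] command) {
            // placeholder for the real motor call
        }
    }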

Message is not routing to dead letter queue when consumer is down

I have a service A which publishes messages to a queue (Q-A).
I have a dead letter queue (DLQ) bound to a dead letter exchange (DLX) with a dead letter routing key (DLRK).
Queue Q-A is bound to an exchange (E-A) with a routing key (RA).
I've also set x-dead-letter-exchange (DLX) and x-dead-letter-routing-key (DLRK) on Q-A, with a per-message TTL of 60 seconds on this queue.
The DLQ is likewise configured with x-dead-letter-exchange (E-A) and x-dead-letter-routing-key (DLRK), with a per-message TTL of 60 seconds.
With the above configuration I'm trying to route messages from Q-A to the DLQ after the TTL expires, and vice versa.
On the consumer side, which is another service, I throw AmqpRejectAndDontRequeueException with defaultRequeueRejected set to false.
The above configuration works fine when the consumer is up and throws the exception.
But when I limit the queue size to 1, publish 3 messages to Q-A, and shut down the consumer, I see all three messages placed in both Q-A and the DLQ, and eventually all of them are dropped.
But if I don't set the queue limit to 1, or if the consumer is running, everything works fine.
I've also set x-overflow to reject-publish; when there is an overflow I get a nack at the publisher, and then I have a scheduler which publishes the message to Q-A again.
Note: both exchanges are direct exchanges, and I'm using routing keys to bind them to their respective queues.
Kindly let me know if I'm missing something here, and whether I need to share my config.
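For illustration, a minimal sketch of the reject-publish / publisher-confirm handling described above, assuming the RabbitMQ Java client (the exchange and routing key follow the E-A/RA naming from the question; the host and message body are placeholders):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    public class ConfirmingPublisher {
        public static void main(String[] argv) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                              // placeholder host
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                channel.confirmSelect();                               // enable publisher confirms

                // With x-overflow=reject-publish on a full queue, the broker nacks the
                // publish instead of dropping the oldest message; the nack callback is
                // where a retry (e.g. the scheduler mentioned above) would be triggered.
                channel.addConfirmListener(
                    (deliveryTag, multiple) -> { /* confirmed */ },
                    (deliveryTag, multiple) -> System.out.println("nacked: " + deliveryTag));

                channel.basicPublish("E-A", "RA", null,
                    "command".getBytes(StandardCharsets.UTF_8));

                Thread.sleep(1000);                                    // crude wait for the confirm (demo only)
            }
        }
    }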
After digging through, I think I finally found the answer in the link Dead-lettering dead-lettered messages in RabbitMQ (answer by pinepain):
It is possible to form a cycle of dead-letter queues. For instance, this can happen when a queue dead-letters messages to the default exchange without specifying a dead-letter routing key. Messages in such cycles (i.e. messages that reach the same queue twice) will be dropped if the entire cycle is due to message expiry.
So I think that to solve the problem I need to create another consumer that consumes from the dead letter queue and publishes messages back to the original queue, rather than relying on a TTL directly on the dead letter queue. Please correct me if my understanding is wrong.
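For illustration, a minimal sketch of that approach with the RabbitMQ Java client, reusing the DLQ, E-A, and RA names from the question (the host is a placeholder):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class DlqRepublisher {
        public static void main(String[] argv) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                            // placeholder host
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            DeliverCallback deliverCallback = (consumerTag, delivery) -> {
                // Re-publish to the original exchange and routing key instead of letting
                // a TTL dead-letter the message back, which would form a cycle that the
                // broker silently drops.
                channel.basicPublish("E-A", "RA",
                    delivery.getProperties(), delivery.getBody());
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };

            channel.basicConsume("DLQ", false, deliverCallback, consumerTag -> { });
        }
    }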
I may have arrived at this too late, but I think I can help you with this.
Story:
You want a retry queue to send dead messages to, so that after a certain amount of time they are retrieved and re-queued in the main queue.
Solution:
Declare your main queue and bind it to an exchange. We call them main_queue and main_exchange and add this feature to the main_queue: x-dead-letter-exchange: retry_exchange
Create your retry queue and bind it to another exchange. We call these retry_queue and retry_exchange and add these features to the retry queue: x-dead-letter-exchange: main_exchange and x-message-ttl: 10000
With this combination, dead messages from main_queue will be sent to retry_queue, and after 10 seconds they will be sent back to main_queue, where they will remain until a consumer dead-letters them again.
Note: this method works only if you publish your messages to the exchange, not directly to the queue.
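A minimal sketch of that topology with the RabbitMQ Java client (the host and the routing key "work" are placeholders; since no x-dead-letter-routing-key is set, dead-lettering keeps the original routing key, which is why both queues are bound with the same key):

    import com.rabbitmq.client.BuiltinExchangeType;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.util.HashMap;
    import java.util.Map;

    public class RetryTopology {
        public static void main(String[] argv) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                                   // placeholder host
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                channel.exchangeDeclare("main_exchange", BuiltinExchangeType.DIRECT, true);
                channel.exchangeDeclare("retry_exchange", BuiltinExchangeType.DIRECT, true);

                // Dead messages from main_queue are routed to retry_exchange...
                Map<String, Object> mainArgs = new HashMap<>();
                mainArgs.put("x-dead-letter-exchange", "retry_exchange");
                channel.queueDeclare("main_queue", true, false, false, mainArgs);
                channel.queueBind("main_queue", "main_exchange", "work");

                // ...sit in retry_queue for 10 seconds, then dead-letter back to main_exchange.
                Map<String, Object> retryArgs = new HashMap<>();
                retryArgs.put("x-dead-letter-exchange", "main_exchange");
                retryArgs.put("x-message-ttl", 10000);
                channel.queueDeclare("retry_queue", true, false, false, retryArgs);
                channel.queueBind("retry_queue", "retry_exchange", "work");
            }
        }
    }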

RabbitMQ Round Robin With Acknowledge

Let's say I have a queue with a bunch of messages in it. I have 2 consumers connected to that queue, both set with prefetch = 1. The work that these consumers do takes some time, and I don't want to acknowledge a message until the work is done (in case the consumer crashes or something - I want the message to automatically re-enter the queue in exceptional cases).
But I also want these consumers to work in parallel, and that doesn't appear to be happening. In other words, as long as there are 2+ messages in the queue, I'd expect both consumers to be busy.
What appears to be happening instead is that consumer 1 receives a message, but consumer 2 will wait until consumer 1 has acknowledged the message. Then consumer 2 receives a message and consumer 1 waits, etc.
Is there an option I'm missing? Or should this be working and I just have a bug in my code somewhere? Or is this not possible?
You should be able to pull messages off the queue while previous messages are still being processed by other consumers. The RabbitMQ tutorial specifically points to parallelism as a strength of round-robin dispatching (http://www.rabbitmq.com/tutorials/tutorial-two-python.html). Are your two consumers running as threads in the same process? I wonder if you've just made a mistake in the implementation.
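For example, a minimal sketch of two consumers on separate channels with prefetch = 1, acknowledging only after the work is done, using the RabbitMQ Java client (the host, the queue name "task_queue", and the sleep standing in for real work are placeholders):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class ParallelConsumers {
        public static void main(String[] argv) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                        // placeholder host
            Connection connection = factory.newConnection();

            // Each consumer gets its own channel; basicQos applies per channel, so two
            // consumers with prefetch = 1 can each hold one unacknowledged message at a time.
            startConsumer(connection, "consumer-1");
            startConsumer(connection, "consumer-2");
        }

        private static void startConsumer(Connection connection, String name) throws Exception {
            Channel channel = connection.createChannel();
            channel.basicQos(1);

            DeliverCallback deliverCallback = (consumerTag, delivery) -> {
                try {
                    Thread.sleep(2000);                          // stand-in for the slow real work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println(name + " finished a message");
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };

            channel.basicConsume("task_queue", false, deliverCallback, consumerTag -> { });
        }
    }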

RabbitMQ pop operation atomicity

Does anyone know if the pop operation on a RabbitMQ queue is atomic?
I have several processes reading from the same queue (the queue is marked as durable, running on version 2.0.0) and I am seeing some quite odd behaviour.
If your multiple processes are consuming messages from the same queue then they should never consume the same message.
Here are the caveats, though:
If a message has been delivered by the broker to one of your consumers and it rejects the message (or terminates before getting a chance to acknowledge it), then the broker will put it back on the same queue and it will be delivered to one of your remaining active consumers.
If your consumers are pulling from distinct queues -- each with a matching binding -- then the broker will put copies of the message on each queue and each consumer will get a copy of the same message.
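For illustration, a minimal sketch of the reject-and-requeue case from the first caveat, using the RabbitMQ Java client (the host, the queue name "shared_queue", and the process call are placeholders):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class RejectingConsumer {
        public static void main(String[] argv) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                              // placeholder host
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            DeliverCallback deliverCallback = (consumerTag, delivery) -> {
                long tag = delivery.getEnvelope().getDeliveryTag();
                try {
                    process(delivery.getBody());                       // hypothetical handler
                    channel.basicAck(tag, false);
                } catch (Exception e) {
                    // requeue=true puts the message back on the SAME queue, where it will be
                    // redelivered to one of the remaining consumers of that queue.
                    channel.basicReject(tag, true);
                }
            };

            channel.basicConsume("shared_queue", false, deliverCallback, consumerTag -> { });
        }

        private static void process(byte[] body) {
            // placeholder for the real processing
        }
    }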

ActiveMQ "freeze" message on queue consuming

ActiveMQ: 5.10.2 inside ServiceMix's Karaf OSGi
KahaDB persistence.
Default broker settings.
Default settings in connections(tcp://x.x.x.x:61616)
16 queues predefined in activemq.xml.
Two client connections to ActiveMQ. One for producer sessions, one for consumer sessions.
Producers send messages to all queues.
16 consumer sessions consumes messages.
All going ok, but:
If I reduce the number of consumers to one (or two or three, I don't know where the threshold is), so that messages from one queue are being consumed while messages from the other queues keep accumulating, then after some time passes I see this picture:
That one consumer stops receiving messages. It thinks there are no more messages.
From activemqweb I can see that the message count on that consuming queue is > 0.
From activemqweb I cannot see any messages in the Message Browser for that consuming queue.
I can see messages from other queues in the Message Browser.
If I start another consumer (or restart ActiveMQ) to consume messages from a different queue, I see:
I start to see messages in the first queue's Message Browser (those that were sent before but weren't visible after the "freeze").
The first queue continues being consumed.
The second queue begins being consumed.
The "freeze" can occur again after some time, and starting to consume another queue helps again.
If I start all consumers I see no "message freeze".
If I just stop and restart the consumer on the "frozen" queue, nothing happens. It has to be done on an "unfrozen" queue to "unfreeze" the "frozen" one.
It also happens when there is no active producer, only a consumer.
What can it be?
Thank you.
Oops. I've found what it was.
It was just the available memory being exceeded.
I didn't set -Xms and -Xmx, so it ran with only 512 MB of max heap.
And when the size of the stored, unconsumed messages got close to that limit, I got this behavior.