RabbitMQ consumer not processing messages

I have made a RabbitMQ consumer as a C#.NET console application. It is programmed to listen to a queue perpetually, and whenever it finds a message in the queue it processes it, averaging about 35 messages per second. The consumers are scheduled in the task scheduler to run at system startup. They run fine for 3-4 days, but then they keep running without processing any messages, even though the queue has messages in it. When a consumer is stopped and started again, it resumes processing normally, but by the time I restart it manually, millions of messages have queued up. Can someone explain this abnormal behavior? I have other queues that have been running for months without this problem.
Thanks in advance.

I suggest looking at the consumer code; it may still be running but stuck on a RabbitMQ exception. It does sound odd that it runs fine for 3-4 days first.
I had a similar problem of a consumer not consuming messages from the queue. I was using RabbitMQ.Client.QueueingBasicConsumer to dequeue messages, and when the queue was closed abruptly the consumer kept running but was stuck on a System.IO.EndOfStreamException. Switching to RabbitMQ.Client.Events.EventingBasicConsumer solved the issue for me.
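In case it helps, here is a minimal sketch of the same stay-alive pattern in Python with pika (the question is C#, but the idea is identical): an event-driven consumer that reconnects when the connection dies instead of sitting on a dead socket. The queue name and process() are placeholders; in the .NET client the equivalent built-in option is ConnectionFactory.AutomaticRecoveryEnabled.

```python
import time
import pika

QUEUE = "work"  # placeholder queue name

def on_message(channel, method, properties, body):
    process(body)  # hypothetical processing function
    channel.basic_ack(delivery_tag=method.delivery_tag)

while True:
    try:
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host="localhost", heartbeat=30))
        channel = connection.channel()
        channel.queue_declare(queue=QUEUE, durable=True)
        channel.basic_qos(prefetch_count=1)
        channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
        channel.start_consuming()
    except pika.exceptions.AMQPConnectionError:
        # The connection died; reconnect instead of silently hanging,
        # which is what the stuck consumer described above appears to do.
        time.sleep(5)
```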

Related

RabbitMQ + kombu - A long callback blocks the heartbeat, leading to the connection being aborted

We have been trying to use RabbitMQ to transfer data from Project A to Project B.
We created a producer that takes the data from Project A and puts it in a queue, and that was relatively easy. Then we created a k8s pod for Project B, which listens to the appropriate queue with kombu's ConsumerMixin.
Overall, the integration was reasonable and straightforward. But when we started to process long messages, we noticed that they were coming back into the queue repeatedly.
After some research, we found that whenever processing a message takes more than 20 seconds, the message shows up in the queue again, even though the processing was successful.
The source of this issue is the RabbitMQ heartbeat. We set the heartbeat to 10 seconds, and RabbitMQ checks the connection twice before killing it. Because the callback takes more than 20 seconds to run, and the .ack() (acknowledge) of the message happens at the end of the callback (to ensure it was successful), the heartbeat is blocked while the message is being processed (as described here: https://github.com/celery/kombu/issues/621#issuecomment-251836611).
We tried to find a workaround with threading, processing the message on a different thread to avoid blocking the heartbeat, but it didn't work. It also felt like we were hacking around the problem rather than solving it.
So my question is: is there a proper workaround for this situation, or what alternatives do we have? RabbitMQ seemed like the right choice, since we already use it with Celery in standalone projects and it is widely recommended.
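For reference, one pattern that addresses this is to run the long processing off the connection's I/O thread and hand the ack back to that thread, so heartbeats keep flowing during the work. A minimal sketch, using pika rather than kombu (pika's BlockingConnection.add_callback_threadsafe exists for exactly this); the queue name and process() are placeholders, and the queue is assumed to already exist:

```python
import functools
import threading
import pika

def do_work(connection, channel, delivery_tag, body):
    process(body)  # hypothetical long-running work; > 20 s is fine here
    # basic_ack must be issued from the connection's I/O thread:
    connection.add_callback_threadsafe(
        functools.partial(channel.basic_ack, delivery_tag))

def on_message(channel, method, properties, body, connection):
    # Hand the message to a worker thread and return immediately,
    # so start_consuming() can keep answering heartbeats.
    threading.Thread(
        target=do_work,
        args=(connection, channel, method.delivery_tag, body)).start()

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", heartbeat=10))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)
channel.basic_consume(
    queue="project_b",  # placeholder queue name
    on_message_callback=functools.partial(on_message, connection=connection))
channel.start_consuming()
```

Raising the heartbeat interval only moves the 20-second threshold; keeping the I/O thread free of the long work removes it.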

RabbitMQ more messages than expected on a fixed-size queue

I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it works slower than the rate of the incoming messages, so I'm trying to control the traffic on the consumer.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the old messages when it is filled up, so the newest commands are at the front of the queue.
Also, by setting basicQos to 1, I ensure that the consumer doesn't grab all the messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid, since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
I added a messageId and tested, and found that many messages were still coming and going long after the publisher had been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure it wasn't the motor's queue acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is being called every time, so the consumer is definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that with automatic acknowledgment the basicQos prefetch limit doesn't apply (prefetch only counts unacknowledged messages), so while the callback ran for each message one after another, the client kept grabbing messages from the queue in the background and buffering them locally.
So even when the publisher was shut down, the consumer still held messages it had already taken from the queue, and those were the ones I saw being executed.
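For reference, a minimal sketch of this setup with the ack at the end of the callback; the queue name and move_motor() are placeholders:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Keep at most 5 messages; by default RabbitMQ drops the oldest on overflow.
channel.queue_declare(queue="motor_commands",  # placeholder queue name
                      arguments={"x-max-length": 5})

# At most one unacknowledged message in flight at a time.
channel.basic_qos(prefetch_count=1)

def on_command(ch, method, properties, body):
    move_motor(body)  # hypothetical call that drives the motor
    # Ack at the END of the callback: with prefetch_count=1 the broker
    # will not push the next message until this one is acknowledged.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="motor_commands", on_message_callback=on_command,
                      auto_ack=False)
channel.start_consuming()
```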

Setting autoAck=true in RabbitMQ and Celery

I am using Celery with RabbitMQ, but after pushing several tasks into the queue my server's memory utilization climbs above 40%, after which RabbitMQ will not accept any more tasks. I want to delete the messages that have already been executed, but because of RabbitMQ's durable behavior those messages are not deleted automatically. I want to set a configuration like autoAck=True, so that once a message is consumed by Celery it is deleted from the RabbitMQ queue and frees up my server's memory. Please explain how to do that.
OK, while I don't fully understand why you have this problem, it is clear what is going on:
A publisher puts a message (a task) in the queue
Your worker process pulls the message and processes it
The message is never actually removed from the queue
This behavior happens when a consumer fails to acknowledge the processing of a message. To confirm, look at the RabbitMQ management plug-in: you'll see a whole bunch of unacknowledged messages. These are unavailable for consumption, but continue to be held on the server, taking up disk space and memory.
Further, if you do a Basic.Recover, all of these messages will then get dumped back into the queue to be processed again.
This problem is due to incorrect configuration of your consumer. There are two ways to address this:
You can configure the consumer to auto-ack (i.e. acknowledge the message automatically upon receipt). This is done when you declare the consumer (using Basic.Consume). Edit: It looks like this may be the default behavior of Celery.
You can configure your worker process to submit an acknowledgement (using Basic.Ack). Edit: this is done via the acks_late property in Celery.
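A minimal sketch of the late-ack option in Celery (the broker URL is a placeholder):

```python
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")  # placeholder broker URL

# Default behavior: the message is acknowledged just before the task runs,
# so it leaves the RabbitMQ queue as soon as a worker picks it up.

# acks_late=True: the message is acknowledged only after the task finishes,
# so a worker crash mid-task requeues the message instead of losing it.
@app.task(acks_late=True)
def process(data):
    ...  # task body
```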

When a new consumer comes online, can it read the last x messages?

I'm confused about how RabbitMQ works when a new consumer comes online.
I understand that when x consumers are currently connected and a producer sends a message, those consumers will receive it.
But say consumerX was down and now comes online, or it is a brand-new consumer. Is it possible for it to replay the messages from the past 24 hours?
This is normal behavior for RabbitMQ.
Please read:
https://www.rabbitmq.com/tutorials/tutorial-two-python.html
Is it possible for it to replay messages in the past 24 hours?
It depends on how you set things up.
If you have queues that don't auto-delete, they'll just keep collecting messages and waiting around for a consumer to connect.
I've had instances with thousands of messages stuck in a queue because my consumer was crashing. As soon as I fixed my code, the messages started being consumed again.
But, if you're letting your queues get deleted when your consumers die, then you're in a bit of trouble.
There is a plugin that re-delivers the last N messages from an exchange, but it doesn't work in a time-based manner, just the last N messages: https://github.com/rabbitmq/rabbitmq-recent-history-exchange
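For the normal accumulate-while-offline behavior, all you need is a queue that isn't auto-deleted, plus persistent messages if you also want the backlog to survive broker restarts. A minimal sketch in Python with pika; the names are placeholders:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# durable=True survives broker restarts; auto_delete=False (the default)
# keeps the queue, and its backlog, alive while no consumer is connected.
channel.queue_declare(queue="events", durable=True, auto_delete=False)

channel.basic_publish(
    exchange="",
    routing_key="events",
    body=b"hello",
    # delivery_mode=2 marks the message itself as persistent.
    properties=pika.BasicProperties(delivery_mode=2))
```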

RabbitMQ consumer on demand?

I want a consumer to perform some actions every time a message is received. Must the consumer be running 24/7, "listening" to the queue, or can it be run only when an appropriate message is received?
I am not sure your question makes sense. A message can only be received from a queue by a consumer of that queue. To know whether a message is in the queue, one must look at the queue, and the only way to do that is to be a consumer.
If you really wanted to, you could have a script that runs the command-line interface for the management plugin. It could poll the queue and, when the queue has messages waiting, start a program that consumes from the queue.
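A rough sketch of that polling idea, using the management plugin's HTTP API instead of the CLI (this assumes the plugin is enabled on localhost:15672 with the default guest credentials; the queue name and the consumer command are placeholders):

```python
import subprocess
import time

import requests

# %2F is the URL-encoded default vhost "/"; "tasks" is a placeholder queue.
URL = "http://localhost:15672/api/queues/%2F/tasks"

while True:
    depth = requests.get(URL, auth=("guest", "guest")).json()["messages"]
    if depth > 0:
        # Launch the consumer and let it drain the queue (placeholder command).
        subprocess.run(["python", "consumer.py"])
    time.sleep(60)
```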
None of this makes much sense, though. A consumer that is just sitting and waiting on a queue, doing nothing else, consumes hardly any resources, so I don't see what the problem would be with running it 24/7.
Of course the consumer doesn't have to run 24/7; that's part of the point of a message queue. It is asynchronous: the consumer does not have to be running when the producer writes to the queue. You could therefore have a scheduled task that runs your consumer periodically to check for and process messages from the queue. But I do not think that is what you want.
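If you did want that scheduled-task approach, a drain-and-exit consumer is only a few lines with basic_get; the queue name and handle() are placeholders:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Pull messages one at a time until the queue is empty, then exit.
while True:
    method, properties, body = channel.basic_get(queue="tasks")  # placeholder
    if method is None:
        break  # queue is empty
    handle(body)  # hypothetical processing function
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection.close()
```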
The whole point of listening is: do nothing until a message comes, process the message, then do nothing until the next message. That is exactly what the first sentence of your question describes, so why the objection to listening?