RabbitMQ can't keep up after a while

I have a RabbitMQ queue where each message published to it is approximately 1 MB in size. Every second, 4 or 5 messages are published to this queue.
The consumer consumes messages one by one (Fetch = 1, i.e. a prefetch of 1). When I stop the consumer service, about 30,000 queued messages become ready to consume. When I start the consumer again, its consume rate is 30/s or more. That is just fine for now.
However, during the day the publisher never stops publishing and the consumer can keep up with the queue. At night the publisher doesn't send data any more (this is not an error; it is how it is supposed to work). At first light the next day, the publisher starts publishing 7 messages per second. This time the queue starts to grow continuously.
My first thought was that the consumer can't handle the load, but it can consume 30+ messages every second.
I know that consume speed depends on the consumer.
BUT.
I think RabbitMQ has some mechanism that decreases the consumer speed after a while. Maybe it is a lock mechanism, maybe internal logs. I couldn't find any solution. Please help.
This picture shows the limit of the consumer speed.

Thanks to Lutz Horn
How is the consumer implemented? Does it use a push or a pull approach? It only consumes 3.8 messages per second. Edit: the screenshot shows a consumer utilisation of 0%. This means that RabbitMQ always has to wait for the consumer to be able to handle the next message; RabbitMQ can never just push a published message to the consumer. See https://www.rabbitmq.com/blog/2014/04/14/finding-bottlenecks-with-rabbitmq-3-3/ – Lutz Horn
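For reference, "pull" versus "push" here means polling with basic.get versus registering a consumer with basic.consume. A polling loop costs one network round trip per message and tends to show very low consumer utilisation, and so does a push consumer limited to a prefetch of 1, because the broker must wait for each ack before sending more. The sketch below contrasts the two approaches using the RabbitMQ Java client; the queue name, host and prefetch value are assumptions for illustration, not details taken from the question.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;
    import com.rabbitmq.client.GetResponse;

    public class PushVsPull {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                 // assumed broker address
            try (Connection conn = factory.newConnection()) {
                Channel channel = conn.createChannel();

                // Illustrative durable queue; the real queue name is not known here.
                channel.queueDeclare("data.queue", true, false, false, null);

                // Pull: one round trip per message; returns null when the queue is empty.
                GetResponse response = channel.basicGet("data.queue", false);
                if (response != null) {
                    channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
                }

                // Push: the broker delivers messages to the callback as they arrive,
                // up to the prefetch limit. A larger prefetch lets the consumer keep
                // working while acks are in flight, which raises consumer utilisation.
                channel.basicQos(50);                     // assumed prefetch value
                DeliverCallback onMessage = (tag, delivery) ->
                        channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                channel.basicConsume("data.queue", false, onMessage, tag -> { });

                Thread.sleep(60_000);                     // keep the consumer alive briefly
            }
        }
    }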

Related

High messages_ready in Rabbitmq when using CELERY Consumers

I am using Celery consumers and RabbitMQ for publishing.
I am using up to 20 worker threads, but as time goes by the messages_ready count becomes very large.
Would this be a problem?
There are hardly any messages in the unacked count.
I have set a message TTL / expires for the queue's messages, which doesn't work, because I believe it doesn't apply to messages in the ready count.
How do I make sure there are fewer messages in the "Ready" count?
Having a high ready count means that the consumers are slow.
I also see the problem where consumers stop consuming after a point and the ready count doesn't go down at all.
Any help would be great. Thanks.

Distribute messages from RabbitMQ to consumers running on Heroku dynos as a 'round robin'

I have a RabbitMQ setup in which jobs are sent to an exchange, which passes them to a queue. A consumer carries out the jobs from the queue correctly in turn. However, these jobs are long processes (several minutes at least). For scalability, I need to be able to have multiple consumers picking a job from the top of the queue and executing it.
The consumer is running on a Heroku dyno called 'queue'. When I scale the dyno, it appears to create additional consumers for each dyno (I can see these on the RabbitMQ dashboard). However, the number of tasks in the queue is unchanged - the extra consumers appear to be doing nothing. Please see the picture below to understand my setup.
Am I missing something here?
Why are the consumers showing as 'idle'? I know from my logs that at least one consumer is actively working through a task.
How can my consumer utilisation be 0% when at least one consumer is definitely working hard?
How can I make the other three consumers actually pull some jobs from the queue?
Thanks
EDIT: I've discovered that the round robin dispatching is actually working, but only if the additional consumers are already running when the messages are sent to the queue. This seems like counterintuitive behaviour to me. If I saw a large queue and wanted to add more consumers, the added consumers would do nothing until more items are added to the queue.
To pick out the key point from the other answer, the likely culprit here is pre-fetching, as described under "Consumer Acknowledgements and Publisher Confirms".
Rather than delivering one message at a time and waiting for it to be acknowledged, the server will send batches to the consumer. If the consumer acknowledges some but then crashes, the remaining messages will be sent to a different consumer; but if the consumer is still running, the unacknowledged messages won't be sent to any new consumer.
This explains the behaviour you're seeing:
You create the queue, and deliver some messages to it, with no consumer running.
You run a single consumer, and it pre-fetches all the messages on the queue.
You run a second consumer; although the queue isn't empty, all the messages are marked as sent to the first consumer, awaiting acknowledgement; so the second consumer sits idle.
A new message arrives in the queue; it is distributed in round-robin fashion to the second consumer.
The solution is to specify the basic.qos option in the consumer. If you set this to 1, RabbitMQ won't send a message to a consumer until it has acknowledged the previous message; multiple consumers with that setting will receive messages in strictly round-robin fashion.
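As a rough, hedged sketch of that fix (queue name, host and the job handler are assumptions, not details from the question), a worker using the RabbitMQ Java client with basic.qos set to 1 and manual acknowledgements looks something like this; running several copies of it makes the broker hand out one message at a time to each, in round-robin order:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class FairWorker {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                  // assumed broker address
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            // Durable, non-exclusive, non-auto-delete queue (illustrative name).
            channel.queueDeclare("jobs", true, false, false, null);

            // basic.qos = 1: the broker will not send this consumer a new message
            // until it has acknowledged the previous one.
            channel.basicQos(1);

            DeliverCallback onMessage = (consumerTag, delivery) -> {
                runJob(delivery.getBody());                // placeholder for the long-running job
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            // autoAck = false: acknowledge manually only after the job finishes.
            channel.basicConsume("jobs", false, onMessage, consumerTag -> { });
        }

        private static void runJob(byte[] body) {
            // hypothetical job execution
        }
    }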
I am not familiar with Heroku, so I don't know how a Heroku worker builds a RabbitMQ consumer; I have only had a quick look at the Heroku documentation.
Why are the consumers showing as 'idle'?
I think you mean the queue is 'idle'? The queue's state reflects the queue's traffic; 'idle' just means there is no ongoing work for the queue's job thread, and it will become 'running' when a message is published to the queue.
How can my consumer utilisation be 0% when at least one consumer is definitely working hard?
As with the queue state, the official explanation is that consumer utilisation would be higher if:
There were more consumers
The consumers were faster
The consumers had a higher prefetch count
In your situation, prefetch_count = 0 means there is no limit on prefetch, so it is effectively too large. And Messages.total = Messages.unacked = 78 means your consumer is too slow; too many messages have been delivered to the consumer without being acknowledged yet.
So if your message rate is not high enough, the queue's state and consumer utilisation fields are not very meaningful.
If I saw a large queue and wanted to add more consumers, the added consumers would do nothing until more items are added to the queue.
Because these unacked messages have already been prefetched by the existing consumers, they will not be consumed by new consumers unless you requeue them.
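If you really need existing unacked messages to become available to newly added consumers, one option (a sketch only, assuming the RabbitMQ Java client) is to ask the broker to requeue them with basic.recover on the channel that holds them; closing that channel has the same effect, because a channel's unacked deliveries are returned to the queue when it closes.

    import com.rabbitmq.client.Channel;
    import java.io.IOException;

    final class RequeueHelper {
        // Requeue every message delivered on this channel that has not been acked yet,
        // so the broker may redeliver them to any consumer, including new ones.
        static void requeueUnacked(Channel channel) throws IOException {
            channel.basicRecover(true);
        }
    }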

Pushing messages to new subscribers

I am creating a bulk video processing system using Spring Boot. The user provides all the video-related information through an xlsx sheet, and we process the videos in the backend. I am using RabbitMQ for queuing up the requests.
Let's say a user has uploaded a sheet with 100 rows; then there will be 100 messages in the RabbitMQ queue. In the backend we auto-scale the subscribers (servers), so we start with one subscriber only and, based on the load (the number of messages in the queue), scale up to 15 subscribers.
But our producer is very fast, and it allocates all the messages to our first subscriber (before the other subscribers come up), so none of our new subscribers get any messages from the queue.
If all the subscribers are available before the producer starts pushing messages, then the messages are allocated across all servers.
Please suggest how our new subscribers can pull the messages from the queue that were produced earlier.
You are probably being affected by the listener container prefetchCount property - it defaults to 250 with recent versions (since 2.0).
So the first consumer will get up to 250 messages when it starts.
It sounds like you should reduce it to a small number, even all the way down to 1, so that only one message is outstanding at each consumer.
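A minimal sketch of how that could look in a Spring Boot application (the bean follows standard Spring AMQP conventions; the rest of the queue setup is assumed): either configure the listener container factory's prefetch in Java, or set the spring.rabbitmq.listener.simple.prefetch application property to 1.

    import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class RabbitPrefetchConfig {

        @Bean
        public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
                ConnectionFactory connectionFactory) {
            SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
            factory.setConnectionFactory(connectionFactory);
            // Only one unacknowledged message per consumer, so subscribers that
            // scale up later can still pull the remaining queued messages.
            factory.setPrefetchCount(1);
            return factory;
        }
    }

With a prefetch of 1, messages still sitting ready in the queue when a new subscriber starts should be dispatched to it as soon as it registers a consumer.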

When a new consumer comes online, can it read the last x messages?

I'm confused about how RabbitMQ works when a new consumer comes online.
I understand that when there are x consumers currently connected and a producer sends a message, those consumers will receive it.
But say consumerX was down and now comes online, or it is a brand-new consumer. Is it possible for it to replay messages from the past 24 hours?
This is normal behavior for RabbitMQ.
Please read:
https://www.rabbitmq.com/tutorials/tutorial-two-python.html
Is it possible for it to replay messages from the past 24 hours?
It depends on how you set things up.
If you have queues that don't auto-delete, they'll just keep collecting messages and waiting around for a consumer to connect.
I've had instances w/ thousands of messages stuck in a queue because my consumer was crashing. As soon as I fixed my code, the messages started consuming again.
But, if you're letting your queues get deleted when your consumers die, then you're in a bit of trouble.
There is a plugin to read the last ## of messages from an exchange, but it doesn't work in a time-based manner... just the last ## of messages: https://github.com/rabbitmq/rabbitmq-recent-history-exchange
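As a sketch of the first option (queue name, host and payload are illustrative): declare the queue as durable, non-exclusive and non-auto-delete, and publish persistent messages, so anything sent while a consumer is offline simply waits in the queue until the consumer reconnects.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;
    import java.nio.charset.StandardCharsets;

    public class DurableSetup {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");   // assumed broker address
            try (Connection conn = factory.newConnection()) {
                Channel channel = conn.createChannel();

                // durable = true, exclusive = false, autoDelete = false:
                // the queue keeps collecting messages while no consumer is connected.
                channel.queueDeclare("events", true, false, false, null);

                // Persistent delivery mode, so the messages also survive a broker restart.
                channel.basicPublish("", "events",
                        MessageProperties.PERSISTENT_TEXT_PLAIN,
                        "hello".getBytes(StandardCharsets.UTF_8));
            }
        }
    }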

RabbitMq Consumer not processing messages

I have made a consumer for RabbitMQ as a console application written in C#.NET. It is programmed to listen to a queue perpetually, and whenever it finds a message in the queue it processes it. The consumer processes on average 35 messages per second. The consumers are scheduled to run at system startup in the task scheduler. They run fine for 3-4 days, but then they keep running without processing any messages, even though the queue has messages in it. When the consumer is stopped and started again, it resumes processing messages properly. But by the time you restart it manually, millions of messages have queued up. Can someone please help me explain this abnormal behavior? I have other queues too which have been running for months on end without stopping.
Thanks in advance to the experts.
I suggest you look at the consumer code; it might still be running but stuck on RabbitMQ exceptions. It sounds odd that it runs fine for 3-4 days and then stops.
I had a similar problem of a consumer not consuming messages from the queue because I was using "RabbitMQ.Client.QueueingBasicConsumer" to dequeue messages; when the queue was closed abruptly, the consumer was still running but stuck on a System.IO.EndOfStreamException. I now use "RabbitMQ.Client.Events.EventingBasicConsumer", which has helped me solve the issue.