ActiveMQ Consumer takes a long time to receive messages on startup - indexing

I've been experiencing an issue on the consumer side when restarting. After dumping the heap and sifting through threads, I've determined that the issue is due to the compression of the KahaDB local repository index file. As this file gets larger, the time it takes for the consumer to start receiving messages again increases. I've deleted my local repository directory, restarted, and verified that the consumer gets messages almost instantly.
Has anyone experienced this issue when working with ActiveMQ and KahaDB? On occasion, if the directory isn't wiped out, it can take up to 1.5 hours for my consumer to start getting messages from the broker again.
I've also verified that the messages are being published in a timely manner; they're just not being consumed, because the index-compression thread is blocking the "add" threads.
Any insight would be greatly appreciated!
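For reference, here is a minimal embedded-broker sketch showing the KahaDB index settings that are usually tuned when index maintenance on startup is slow. This is not a verified fix for the blocking described above; the directory path and the values are illustrative assumptions.

```java
import java.io.File;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class KahaDbIndexTuning {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();

        KahaDBPersistenceAdapter kaha = new KahaDBPersistenceAdapter();
        kaha.setDirectory(new File("activemq-data/kahadb")); // illustrative path
        // Keep more index pages (db.data) cached in memory; default is 10000.
        kaha.setIndexCacheSize(100000);
        // Batch more index writes together before they are synced to disk.
        kaha.setIndexWriteBatchSize(10000);
        // Checkpoint and clean up more frequently so the journal and index
        // never grow large enough to make startup maintenance expensive.
        kaha.setCheckpointInterval(5000);
        kaha.setCleanupInterval(30000);

        broker.setPersistenceAdapter(kaha);
        broker.start();
        broker.waitUntilStopped();
    }
}
```

The same attributes (indexCacheSize, indexWriteBatchSize, checkpointInterval, cleanupInterval) can be set on the kahaDB element in activemq.xml if you configure the broker declaratively.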

Related

RabbitMQ msg_store_transient of a queue is consuming all disk space

erlang version = 1:24.0.2-1
rabbitmq-server version = 3.8.16-1
I recently installed the latest RabbitMQ on Ubuntu 20.
I verified that everything was working and that the consumer was consuming notifications from the message queue as required.
After approximately a day, RabbitMQ crashed because there was no disk space left.
On analysis, I found that around 10 GB was being consumed by msg_store_transient; restarting RabbitMQ solved the issue.
But after a day it happened again.
Can someone help me further?
Most likely you are consuming messages without sending back a basic_ack (see the ack call in the sketch after this checklist).
What to do:
check the count of unacked messages
check if you are using too many non-persistent messages
check if you are using too many non-persistent queues
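The original pointer is to ch.basic_ack (pika-style naming); here is the equivalent in the RabbitMQ Java client, a minimal sketch of consuming with manual acks (autoAck=false). The queue name "notifications" is a placeholder.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class AckingConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        // "notifications" is a placeholder queue name.
        ch.queueDeclare("notifications", true, false, false, null);

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            try {
                System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
                // Without this ack (autoAck=false below), the delivery stays
                // unacked and RabbitMQ keeps it around, eating disk and memory.
                ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            } catch (Exception e) {
                // Reject and requeue on processing failure.
                ch.basicNack(delivery.getEnvelope().getDeliveryTag(), false, true);
            }
        };
        ch.basicConsume("notifications", false, onDeliver, consumerTag -> { });
    }
}
```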
Issue is fixed:
We had a high number of Ready messages, because of which the .rdq files were taking huge amounts of space.
There was a bug in the code: it was listening to only one queue, not all of them.

RabbitMQ is not deleting all messages

I'm attempting to purge all messages from a RabbitMQ queue. I have tried purging the queue as well as deleting it outright, but each time I re-create the same queue via the Web UI or code, it still insists it has messages to process, even though I am unable to retrieve a single message from it. Has anyone come across this issue before? It is quite bizarre.
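One possibility worth ruling out: a purge only removes messages in the Ready state, so deliveries that were handed to a consumer but never acked survive the purge and keep the message count non-zero until their channel closes. A small Java-client sketch to check the counts ("stuck-queue" is a placeholder name):

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PurgeCheck {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection()) {
            Channel ch = conn.createChannel();

            // Purge removes only Ready messages, not unacked deliveries.
            AMQP.Queue.PurgeOk purged = ch.queuePurge("stuck-queue");
            System.out.println("Purged ready messages: " + purged.getMessageCount());

            // Passive declare re-reads the queue's current message count
            // without changing the queue.
            AMQP.Queue.DeclareOk ok = ch.queueDeclarePassive("stuck-queue");
            System.out.println("Still counted: " + ok.getMessageCount());
        }
    }
}
```

If the count drops only after closing the consumer connections, the leftover messages were unacked deliveries rather than a broker bug.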

Memory consumption of queues in ActiveMQ

Our services create queues on an ActiveMQ server. These queues exist for several hours and then become inactive because the clients are turned off; ActiveMQ removes them after an inactivity timeout of one hour. We took a heap dump and saw that the memory consumed by the queues never shrinks but grows permanently. We also set a timeout of five minutes on every message we send. We verified this on different versions of ActiveMQ, including the latest, but the issue remains. Can anybody tell us the exact cause?
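Not an answer to the root cause, but here is a hedged sketch of the two settings the question describes (one-hour destination GC and five-minute message TTL), so it's clear what is being configured; the broker URL and queue name are illustrative assumptions.

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class InactiveQueueGc {
    public static void main(String[] args) throws Exception {
        // Broker side: periodically sweep for inactive destinations and GC them.
        BrokerService broker = new BrokerService();
        broker.setSchedulePeriodForDestinationPurge(60_000); // sweep every minute

        PolicyEntry policy = new PolicyEntry();
        policy.setGcInactiveDestinations(true);
        // "Timout" is ActiveMQ's own spelling of this property.
        policy.setInactiveTimoutBeforeGC(60 * 60 * 1000); // 1 hour, as in the question
        PolicyMap policies = new PolicyMap();
        policies.setDefaultEntry(policy);
        broker.setDestinationPolicy(policies);
        broker.start();

        // Producer side: expire every message after five minutes.
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost");
        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("service.queue"));
        producer.setTimeToLive(5 * 60 * 1000); // 5-minute TTL
        producer.send(session.createTextMessage("hello"));

        conn.close();
        broker.stop();
    }
}
```

One thing worth checking in the heap dump: expired persistent messages are routed to ActiveMQ.DLQ by default, so the TTL may just be moving memory into the dead-letter queue rather than freeing it.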

How to resolve "connection.blocked: true" in capabilities on the RabbitMQ UI

"rabbitmqctl list_connections" shows as running but on the UI in the connections tab, under client properties, i see "connection.blocked: true".
I can see that messages are in queued in RabbitMq and the connection is in idle state.
I am running Airflow with Celery. My jobs are not executing at all.
Is this the reason why jobs are not executing?
How to resolve the issue so that my jobs start running
I'm experiencing the same kind of issue just using Celery.
It seems that when you have a lot of fairly chunky messages in the queue and your node's memory usage climbs, the RabbitMQ memory watermark gets crossed, and this blocks consumer connections, so no worker can access that node (and its queues).
At the same time, publishers are happily sending stuff via the exchange, so you end up in a lose-lose situation.
The only solution we found is to avoid hitting that memory watermark: scale up the number of consumers, and keep messages/tasks lean so that the payload is kilobytes, not megabytes.
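If you can't shrink the queues immediately, the watermark can be raised temporarily (e.g. `rabbitmqctl set_vm_memory_high_watermark 0.6`), and clients can at least detect the blocked state. A minimal Java-client sketch using the broker's blocked-connection notifications; reacting by pausing publishing is an assumption about your application, not required behavior:

```java
import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BlockedAware {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();

        // The broker sends connection.blocked / connection.unblocked when a
        // resource alarm (memory watermark, disk limit) starts or clears.
        conn.addBlockedListener(new BlockedListener() {
            @Override
            public void handleBlocked(String reason) {
                // e.g. reason = "low on memory"; stop publishing until cleared.
                System.err.println("Connection blocked: " + reason);
            }

            @Override
            public void handleUnblocked() {
                System.out.println("Connection unblocked, safe to publish again");
            }
        });
    }
}
```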

MSMQ error "Insufficient Resources" - transactional dead-letter queue is filling up

I was running a system that uses multiple MSMQ queues on the same machine. It ran fine for about a day, then I started getting an "Insufficient resources" error when trying to post a message to one of the queues. I investigated via this blog post:
http://blogs.msdn.com/b/johnbreakwell/archive/2006/09/18/761035.aspx
I don't see anything in there about investigating the dead-letter queue.
I looked at the queues and realized the only one that had any messages left in it was the transactional dead-letter queue. I purged it, and now the app(s) run again and can post messages to private queues.
My main question: can someone explain the transactional dead-letter queue and how I can manage it?
thanks.
There will be nothing in the blog about the Dead Letter Queue, as it is just a queue like any other.
You have messages in the DLQ because you have enabled Negative Source Journaling in your application. An error condition meant the original messages died and ended up in the DLQ, as your application requested. Ideally, if you are using the DLQ, you have a separate thread looking for messages in it.
You should have monitoring enabled on the total number of messages in the server so that you get an early alert when messages start piling up somewhere unexpectedly.
Cheers
John Breakwell
Ran into this issue today with our MSMQ/NServiceBus setup. From what I understand, manual queue purges will move messages to the Transaction Dead Messages queue. Clearing this queue out resolved the problem for us.