G'day everyone.
I've run into a very strange problem with RabbitMQ.
I have a queue with just one message, and I have tried many ways to get rid of it. I deleted the queue, but when I recreated it, the one message was still there. I tried purging the queue and acking the message with requeue=false, and all of these options report that the queue is empty. I also tried restarting RabbitMQ, and the one message is still there. It's like a phantom message.
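For context, these are the kinds of purge attempts I mean; a minimal sketch using pika, with the host and queue name as placeholders:

```python
import pika

# Connect to the broker; host and queue name are placeholders.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Attempt 1: purge everything currently in the queue.
channel.queue_purge(queue="my-queue")

# Attempt 2: pull the message and reject it with requeue=False.
method, properties, body = channel.basic_get(queue="my-queue", auto_ack=False)
if method is not None:
    channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

# Attempt 3: delete the queue outright (I re-declared it afterwards).
channel.queue_delete(queue="my-queue")

connection.close()
```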
RabbitMQ is running in a pod on Kubernetes.
Does anyone have a clue how to deal with this?
Thanks!
UPDATE:
Problem solved. I just restarted the pod where RabbitMQ is running, and the phantom message disappeared.
erlang version = 1:24.0.2-1
rabbitmq-server version = 3.8.16-1
I recently installed the latest RabbitMQ on Ubuntu 20.
I verified that everything was working fine and the consumer was consuming notifications from the message queue as required.
After approximately a day, RabbitMQ crashed because there was no disk space left.
On analysis I found that around 10 GB had been consumed by msg_store_transient; restarting RabbitMQ solved the issue.
But after another day, it happened again.
Can someone help me further?
Most likely you are consuming messages without sending back a basic_ack; see, for example, the ch.basic_ack call in the sketch after this list.
What to do:
check the number of unacked messages
check if you are using too many non-persistent messages
check if you are using too many non-durable queues
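A minimal sketch (using pika, with placeholder host and queue names) of a consumer that explicitly acks each message after processing it:

```python
import pika

def process(body):
    # Placeholder for the real notification handling.
    print("processing", body)

def on_message(ch, method, properties, body):
    process(body)
    # Ack only after successful processing; otherwise messages stay
    # unacked and keep accumulating on the broker.
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue so it survives broker restarts.
channel.queue_declare(queue="notifications", durable=True)

# Manual acknowledgements: auto_ack=False.
channel.basic_consume(queue="notifications", on_message_callback=on_message, auto_ack=False)
channel.start_consuming()
```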
Issue is Fixed:
We had a high number of Ready messages, which is why the .rdq files were taking up huge amounts of space.
There was a bug in our code: it was listening to only one queue, not all of them.
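For completeness, a small pika sketch of what the fix amounted to: one consumer subscribed to every queue instead of just one (queue names are placeholders):

```python
import pika

QUEUES = ["orders", "payments", "shipping"]  # placeholder names

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

def on_message(ch, method, properties, body):
    print("from", method.routing_key, ":", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Register one callback per queue so none of them piles up in Ready.
for name in QUEUES:
    channel.queue_declare(queue=name, durable=True)
    channel.basic_consume(queue=name, on_message_callback=on_message, auto_ack=False)

channel.start_consuming()
```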
After the RabbitMQ server or cluster is restarted, every queue recovers all of its messages, even messages that had already been acked before the restart, and processes all of them again.
Queue details
From my understanding, with persistent set to false in the message properties, the message should not survive a broker restart. I have also set durable to false for the queue.
Did I miss any other settings?
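For reference, this is roughly what the setup looks like, as a minimal pika sketch (queue name is a placeholder): a non-durable queue with a non-persistent message, neither of which should survive a broker restart.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# durable=False: the queue itself is dropped on broker restart.
channel.queue_declare(queue="demo", durable=False)

# delivery_mode=1 marks the message as non-persistent (transient).
channel.basic_publish(
    exchange="",
    routing_key="demo",
    body=b"hello",
    properties=pika.BasicProperties(delivery_mode=1),
)
connection.close()
```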
Making a message persistent is fine, as you do not want to lose messages if RabbitMQ restarts. Likewise, it is fine to make the queue durable so that you do not lose the queue on a RabbitMQ restart. I suggest you check the message consumer code: it looks like it is not committing the transaction (or acknowledging) on its side, which leaves the messages available on the queue. What you can do is stop the consumer after it has consumed the messages and check in RabbitMQ whether the messages are still on the queue. If the messages are still on the queue after stopping the consumer, then there must be an issue in the consumer code.
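One quick way to check this from code is a passive queue declare, which inspects the queue without creating it; a minimal pika sketch with a placeholder queue name:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# passive=True only inspects the existing queue; it never creates it.
result = channel.queue_declare(queue="demo", passive=True)

# message_count reports messages in the Ready state.
print("messages still in queue:", result.method.message_count)

connection.close()
```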
We have observed the following behavior in RabbitMQ and are trying to understand whether it is correct and how to resolve it.
Scenario:
A (persistent) message is delivered into a durable queue
The (single) consumer (Spring-AMQP) takes the message and starts processing => the message goes from Ready to Unacked
Now the broker is shut down => the client correctly reports "Channel shutdown"
The consumer finishes processing but cannot acknowledge the message because the broker is still down
Broker is started again => Client reconnects
As a result, one message remains unacked forever (or until the client is restarted).
Side note: in the RabbitMQ admin UI, I can see that two channels now exist: the "dead" one that was created before the broker restart, which holds the unacked message, and a new one that is healthy.
Is this behavior expected? It seems "correct" to me in the sense that RabbitMQ cannot know, after the broker restart, whether the message processing was completed. But then what solution exists to get that unacked message back into the queue and heal the system without restarting the consumer process?
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Is this behavior expected? It seems "correct" to me in the sense that RabbitMQ cannot know, after the broker restart, whether the message processing was completed.
Yes, you are observing expected behavior. RabbitMQ will re-enqueue the message once it determines that the consumer is really dead. Since your consumer re-connects with what must be the same consumer tag as before, it is up to that process to ack or nack the message.
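To illustrate the mechanism with a minimal pika sketch (host and queue name are placeholders): a delivery stays Unacked as long as the channel it was delivered on is open, and closing that channel makes the broker move it back to Ready, which is the non-restart way to free a stuck delivery.

```python
import pika

params = pika.ConnectionParameters(host="localhost")

# Publisher: put one persistent message on a durable queue.
pub_conn = pika.BlockingConnection(params)
pub_ch = pub_conn.channel()
pub_ch.confirm_delivery()  # wait for broker confirms so the publish is settled
pub_ch.queue_declare(queue="demo", durable=True)
pub_ch.basic_publish(exchange="", routing_key="demo", body=b"work",
                     properties=pika.BasicProperties(delivery_mode=2))

# Consumer: fetch the message but do not ack it -> it shows up as Unacked.
cons_conn = pika.BlockingConnection(params)
cons_ch = cons_conn.channel()
method, properties, body = cons_ch.basic_get(queue="demo", auto_ack=False)
print("got:", body, "- now Unacked on the broker")

# Closing the channel that owns the unacked delivery makes the broker
# move the message back to Ready, without restarting the whole process.
cons_ch.close()

state = pub_ch.queue_declare(queue="demo", passive=True)
print("Ready messages after closing the channel:", state.method.message_count)

pub_conn.close()
cons_conn.close()
```

In the Spring-AMQP case, the equivalent would be closing or resetting the connection that owns the dead channel, rather than restarting the whole consumer process.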
My application consumes JMS messages in a GlassFish 3.1.2 server with OpenMQ as the JMS provider.
The strange behavior happens when a consumer fails to process a message. In this situation GlassFish correctly moves the message to the dead message queue (after two attempts), and that is fine.
When I restart the server, the message stored in the DMQ is sent again to the original destination (which is OK, although I did not expect this behavior). Now, even if the consumer succeeds, the message remains in the destination.
That is incorrect because, after another restart of the server, the message is consumed again. Strangely, this time the message is permanently removed from the queue.
The questions are:
Why does the message remain in the queue?
And why does GF automatically move the message from the DMQ back to the original destination after a restart?
I was running a system that uses multiple MSMQ queues on the same machine. It ran fine for about a day, and then I got an error about insufficient resources when trying to post a message to one of the queues. I investigated via this blog post:
http://blogs.msdn.com/b/johnbreakwell/archive/2006/09/18/761035.aspx
I don't see anything in there about investigating the dead-letter queue.
I looked at the queues and realized the only queue that had any messages left in it was the transactional dead-letter queue. I purged it, and now the app(s) run again and can post messages to private queues.
I guess my main question is: can you explain the transactional dead-letter queue to me and how I can manage it?
thanks.
There will be nothing in the blog about the Dead Letter Queue as it is just a queue, like any other.
You have messages in the DLQ because you have enabled Negative Source Journaling in your application. An error condition has meant the original messages have died and ended up in the DLQ, as requested by your application. Ideally, if you are using the DLQ, you have a separate thread looking for messages in it.
You should have monitoring enabled on the total number of messages in the server so that you get an early alert when messages start piling up somewhere unexpectedly.
Cheers
John Breakwell
Ran into this issue today with our MSMQ/NServiceBus setup. From what I understand, manual queue purges will move messages to the transactional dead-letter queue. Clearing this queue out resolved the problem for us.