RabbitMQ is not deleting all messages

I am attempting to purge all messages from a RabbitMQ queue. I have tried purging the queue as well as deleting the queue itself, but each time I re-create the same queue via the Web UI or in code, it still insists that it has messages to process, even though I am unable to retrieve a single message from it. Has anyone come across this issue before? It is quite bizarre.
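One detail that often explains this: a purge removes only messages that are ready for delivery. Messages that have been delivered to a consumer but not yet acknowledged are not purged; they are re-queued when that consumer's channel closes, so a queue can appear to still hold messages immediately after a purge. If the management UI still shows a non-zero count, comparing its "Ready" and "Unacked" columns should show where the leftovers are sitting. A minimal sketch of purging from code with the RabbitMQ Java client (the host and queue name here are assumptions):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class PurgeQueue {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host

            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                // queuePurge removes only the *ready* messages and reports how many.
                // Unacknowledged messages stay with their consumers and are
                // re-queued when those channels close.
                long purged = channel.queuePurge("my-queue").getMessageCount();
                System.out.println("Purged " + purged + " messages");
            }
        }
    }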

Related

Multiple consumers are created after site recycle. RabbitMQ/MassTransit

I use MassTransit in my .NET web application to connect to RabbitMQ.
Sometimes after a site recycle, I see lots of consumers on a single queue. I did not set up competing consumers, and in a normal situation I should have only one consumer per queue.
When this problem happens, my messages get processed very slowly (I assume the time depends on my retry policy) and I have to shut down the site and start it again.
I use MassTransit 5.2.3 / RabbitMQ 3.7.6.
Could anyone give any clue as to what the problem could be?
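No accepted answer was posted for this one, but the duplicate-consumer symptom can at least be confirmed on the broker side, either with rabbitmqctl list_consumers or in the management UI. As a minimal cross-language sketch using the RabbitMQ Java client (the host and queue name are assumptions), a passive declare returns the live consumer and message counts for a queue:

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class ConsumerCount {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumed broker host

            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                // Passive declare does not create the queue; it fails if the
                // queue is missing and otherwise reports current counts.
                AMQP.Queue.DeclareOk ok = channel.queueDeclarePassive("my-queue");
                System.out.println("consumers=" + ok.getConsumerCount()
                        + ", messages=" + ok.getMessageCount());
            }
        }
    }

If the consumer count climbs after each recycle, the old consumers' connections are likely not being closed when the site shuts down.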

ActiveMQ Consumer takes a long time to receive messages on startup

I've been experiencing an issue on the consumer side when restarting. After dumping the heap and sifting through threads, I've determined that the issue is due to the compression of the KahaDB local repository index file. As this file gets larger, the time it takes for the consumer to start getting messages again increases. I've deleted my local repository directory, restarted, and verified that the consumer gets messages almost instantly.
Has anyone experienced this issue when working with ActiveMQ and KahaDB? On occasion, if the directory isn't wiped out, it can take up to 1.5 hours for my consumer to start getting messages from the broker again.
I've also verified that the messages are being published in a timely manner; they're just not being consumed because the index compression thread is blocking the "add" threads.
Any insight would be greatly appreciated!
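Not an answer to the blocking compaction itself, but these are the KahaDB settings usually tuned for index performance. A sketch of an embedded broker configured programmatically, where the data directory, batch sizes, and interval are illustrative values rather than recommendations:

    import java.io.File;

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

    public class TunedBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();

            KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
            kahaDB.setDirectory(new File("activemq-data")); // assumed data directory
            // Batch more index writes together before forcing them to disk.
            kahaDB.setIndexWriteBatchSize(10000);
            // Keep more of the index in memory to reduce page churn.
            kahaDB.setIndexCacheSize(100000);
            // How often the cleanup task runs, in milliseconds.
            kahaDB.setCleanupInterval(300000);

            broker.setPersistenceAdapter(kahaDB);
            broker.addConnector("tcp://localhost:61616");
            broker.start();
            broker.waitUntilStopped();
        }
    }

The same properties are also available as attributes on the kahaDB element in activemq.xml, which is the more common place to set them.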

How do you replay missed messages when using STOMP to connect to RabbitMQ?

I've got an iOS application which uses a STOMP Client to talk to RabbitMQ. The application loads a lot of state during startup, and then keeps that state in sync by receiving updates published on STOMP. Of course, if it loses its connection, it can no longer be sure it's in sync, and therefore has to re-load that large initial blob. Any kind of network interruption triggers this behavior and makes my customers sad.
There are a lot of big-picture ways to fix this (and I'm working on them) but in the meantime, I'm trying to use persistent queues to solve this problem. The idea is that the server will create a queue, bind it to the appropriate topics, and then start building the large startup bundle. When finished, it will hand everything off to the client. The client will set itself up with the startup bundle, open a subscription to the queue, and then process any updates which happened while the server was getting things ready. Similarly, if the client should become disconnected, it can simply reconnect and resume reading the messages it finds in the queue.
My problem is that while the client successfully receives messages sent after it connects, if there were any messages in the queue before it connected, they are not read. Likewise, if the client becomes disconnected, when it reconnects, it won't see any messages which arrived while it was away.
Can anyone suggest how I might get the client to be able to read those missing messages?
It turns out that the STOMP adapter was consuming the messages but failing to deliver them, so when the client reconnected, it had no messages waiting for it.
To fix the problem, I changed the "ack" setting on the SUBSCRIBE request to "client", meaning that STOMP shouldn't consider a message delivered until the client sends back an ACK frame. After changing my client accordingly, messages now get delivered even after the client has been away.
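For reference, the fix is a single header on the SUBSCRIBE frame. A sketch of the frames involved, in STOMP 1.0 style (the destination and message-id are made up, and ^@ stands for the NUL byte that terminates each frame):

    SUBSCRIBE
    destination:/queue/client-updates
    ack:client

    ^@

    MESSAGE
    destination:/queue/client-updates
    message-id:12345

    {update payload}^@

    ACK
    message-id:12345

    ^@

With the default "auto" ack mode, the server considers a message delivered as soon as it is written to the connection, so anything in flight when the connection drops is lost; "client" mode keeps the message unacknowledged, and therefore re-deliverable, until the ACK frame arrives.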

Messages deleted from queue

I have used a BlockingQueue implementation to process events from a queue with my services. However, if the server goes down, all the events in that queue are lost, and so I miss events to process. (I am looking for some internal DB where the server can store the events/messages from the queue, so that if the server goes down and comes back up, it can load all the events/messages to process again without manual intervention.)
Any help on this? I am not sure if I should use Apache ActiveMQ. I am using Apache ServiceMix.
Thanks in advance.
I cannot answer how to do this with BlockingQueue.
But ActiveMQ has two features that you will benefit from:
Persistent Queues, and possibly you might also want to look at Durable Subscriptions.
It has a built-in database (KahaDB by default) that does just this under the hood and allows messages to be persisted in the queue even if the broker or consumer has to restart.
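To make that concrete, here is a minimal sketch of sending a persistent message over JMS to ActiveMQ; the broker URL and queue name are assumptions:

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PersistentSend {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
            Connection connection = factory.createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("events"); // hypothetical queue name
                MessageProducer producer = session.createProducer(queue);
                // PERSISTENT messages are written to the broker's store before
                // the send completes, so they survive a broker restart.
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage("event payload"));
            } finally {
                connection.close();
            }
        }
    }

A consumer that reconnects after a broker restart will still find the message on the queue, which is exactly the behavior asked for; with queues (as opposed to topics) no durable subscription is needed for this.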

MSMQ error "Insufficient Resources": transactional dead-letter queue is filling up

I was running a system that uses multiple MSMQ queues on the same machine. It ran fine for about a day, then I got the error about Insufficient Resources when trying to post a message to one of the queues. I investigated via this blog post:
http://blogs.msdn.com/b/johnbreakwell/archive/2006/09/18/761035.aspx
I don't see anything in there about investigating the dead-letter queue.
I looked at the queues and realized the only queue that had any messages left in it was the transactional dead-letter queue. I purged it, and now the app(s) run again and can post messages to private queues.
I guess my main question is: can you explain the transactional dead-letter queue to me, and how I can manage it?
Thanks.
There will be nothing in the blog about the Dead Letter Queue as it is just a queue, like any other.
You have messages in the DLQ because you have enabled Negative Source Journaling in your application. An error condition has meant the original messages have died and ended up in the DLQ, as requested by your application. Ideally, if you are using the DLQ, you have a separate thread looking for messages in it.
You should have monitoring enabled on the total number of messages on the server so that you get an early alert when messages start piling up somewhere unexpectedly.
Cheers
John Breakwell
Ran into this issue today with our MSMQ/NServiceBus setup. From what I understand, manual queue purges will move messages to the transactional dead-letter queue. Clearing this queue out resolved the problem for us.