In the RabbitMQ web interface I have the shovel plugin.
However, when I use it to shovel messages, the graph that shows message rates goes up and down as if something is happening, but the queued messages count stays the same.
When I look at the target queue in the web interface, I can also see that the "message rate" changes, but nothing shows up in the queued messages.
I don't even know where to look for errors, so any help is appreciated.
Added:
I don't know what to add about my setup; as you can probably tell, I am not that familiar with RabbitMQ.
I have a queue for dead messages, i.e. messages that were the result of some kind of error. When my queue is named myqueue, this queue is named myqueue.dead. After fixing a bug I shovel everything from the dead queue back to the original queue so it gets processed again.
The web interface has a button for "Move messages". It normally works, but now it doesn't.
Now I don't see messages getting moved, but if I quickly click the dead queue again, then even though the graph for queued messages is unchanged, the "Message rates" chart briefly spikes with "Consumer ack", i.e. a red graph.
The same is visible on the target queue. It spikes with a lot of "Deliver manual", i.e. a blue graph. But it just goes up and down very fast and nothing happens.
Related
I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it processes commands more slowly than the rate of the incoming messages, so I'm trying to control the traffic at the consumer.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the old messages when it is filled up, so the newest commands are at the front of the queue.
Also, by setting basicQos to 1, I ensure that the consumer doesn't grab all messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid, since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
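A rough sketch of that setup with the RabbitMQ Java client, given an already-open channel (the queue name motor.commands is an assumption, not my actual queue name):

    // Bounded queue: with x-max-length set, RabbitMQ drops the oldest messages
    // once the limit of 5 is reached (the default overflow behaviour is drop-head).
    Map<String, Object> queueArgs = new HashMap<>();
    queueArgs.put("x-max-length", 5);
    channel.queueDeclare("motor.commands", true, false, false, queueArgs);

    // Prefetch of 1: the broker delivers at most one unacknowledged message at a time.
    channel.basicQos(1);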
I added a messageId and tested, and found out that many messages are still coming and going long after the publisher has been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure it isn't the robot's queue that's acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is being called every time, so it's definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that with basicQos set to 1 it did execute the callback for each message one after another, but in the background it kept fetching messages from the queue.
So even after the publisher was shut down, the consumer still held messages it had already taken from the queue, and those were the messages I saw being processed.
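For reference, a minimal sketch of a consumer that acknowledges at the end of the callback, using the RabbitMQ Java client (the queue name motor.commands and the driveMotor method are assumptions, not the original code):

    import com.rabbitmq.client.*;

    import java.io.IOException;

    public class MotorConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();   // defaults to localhost:5672
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            channel.basicQos(1);  // at most one unacknowledged message in flight

            channel.basicConsume("motor.commands", false /* manual ack */, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties, byte[] body) throws IOException {
                    driveMotor(body);                                     // do the slow work first
                    channel.basicAck(envelope.getDeliveryTag(), false);   // ack last, so the broker only then delivers the next message
                }
            });
        }

        private static void driveMotor(byte[] command) {
            // stand-in for the real motor call
        }
    }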
I have a project that consumes messages from ActiveMQ. It runs fine, but sometimes it gets pending messages stuck in the queue. It says 1000 enqueued, 0 dequeued, 1000 dispatched. It also says 1000 pending messages.
What could be the possible cause of "Pending Messages"?
"Pending Messages" are messages on the queue which have not been acknowledged by a client. This is sometimes referred to as the "Message Count" of the queue or the "Depth" of the queue.
The most likely cause of an unchanging "pending message" count is that the consumer has failed somehow. It could be stuck in some kind of blocking network operation, or it could be offline completely.
Take a look at the consumer count on the queue. If it's > 0, then consumers are still connected, and at that point you should inspect the individual consumers. Assuming the clients are Java-based, thread dumps (e.g. taken with jstack) are good to gather in this kind of situation, as they give you a clear picture of what each client is doing. If the consumer count is 0, then you'll need to reattach your consumers.
I have several messages on an error queue named TestQueue_errors.
One of the messages on the error queue is important and should be moved back to the service queue TestQueue so it can be processed again. The other messages on the error queue are broken and should stay there.
I have tried to do that with the shovel plugin, but it seems it can only move all messages from one queue to another. Is there a way to move a single message from one queue to another?
As far as I know, the RabbitMQ management UI does not allow you to do that. The only thing you can do is publish the message again.
There may be some tools that make it possible, but it is not standard behaviour.
Here are the actions you can perform on a queue (from the RabbitMQ management page):
Move all messages from one queue to another
Get all messages without the requeue option (they will no longer be in the queue)
Get the first N messages without the requeue option and then move the rest to another queue
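If a little code is acceptable, one way to move just one message by hand with the RabbitMQ Java client is to fetch from the error queue without acking, republish only the message you want, and let the rest fall back onto the error queue when the channel closes. This is a sketch, not a standard feature; isTheImportantOne is a placeholder for whatever check identifies your message:

    // Fetch messages from TestQueue_errors without acknowledging them. Republish and ack
    // only the one we want to move; everything left unacked is requeued on TestQueue_errors
    // as soon as the channel is closed.
    Channel channel = connection.createChannel();
    boolean moved = false;
    GetResponse response;
    while (!moved && (response = channel.basicGet("TestQueue_errors", false)) != null) {
        if (isTheImportantOne(response.getProps(), response.getBody())) {
            // The default exchange ("") routes to the queue whose name matches the routing key.
            channel.basicPublish("", "TestQueue", response.getProps(), response.getBody());
            channel.basicAck(response.getEnvelope().getDeliveryTag(), false);  // remove it from the error queue
            moved = true;
        }
        // Broken messages are simply left unacknowledged.
    }
    channel.close();  // closing the channel requeues every message we did not ack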
I have this issue: I want to check that my RabbitMQ setup is working correctly.
I am not going to send the message myself, so I'm not 100% sure it is being sent correctly. But the problem is this.
After everything is configured,
I look at the RabbitMQ web manager.
When I supposedly send a message, I see activity on the "Message rates" chart but nothing in "Queued messages".
I frankly don't know what's going on. Is it so fast that it doesn't need to queue the messages? Or is something misconfigured?
Any idea what the difference is?
Thanks.
If RabbitMQ receives a non-routable message, it drops it. So while the message was received, it was never queued.
You can configure an alternate exchange to catch such messages.
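A sketch of what that looks like with the Java client; all of the exchange and queue names here are illustrative:

    // Messages published to "app.main" that match no binding are re-routed to
    // "app.unrouted" instead of being silently dropped.
    channel.exchangeDeclare("app.unrouted", "fanout", true);
    channel.queueDeclare("unrouted.messages", true, false, false, null);
    channel.queueBind("unrouted.messages", "app.unrouted", "");

    Map<String, Object> args = new HashMap<>();
    args.put("alternate-exchange", "app.unrouted");
    channel.exchangeDeclare("app.main", "direct", true, false, args);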
In my case,
Situation 1:
When the exchange in rabbitTemplate.convertAndSend was not set properly, the message was not routed to the correct queue, and "Queued messages" stayed empty the whole time.
However, "Message rates" was not zero; it did show that messages were being sent.
This corresponds to what the other answer says:
If RabbitMQ receives a non-routable message, it drops it.
Situation 2:
When the exchange in rabbitTemplate.convertAndSend was set properly, the message was sent to the correct queue, and "Queued messages" showed the messages queuing up.
Everything seems fine.
Situation 3:
(continuing from Situation 2)
Now I turn on the receiver service, which has the @RabbitListener.
"Queued messages" immediately drops to 0 and never goes up again.
But messages are still being delivered just fine.
Situation 4:
(continuing from Situation 2)
Now I change the receiver service to use rabbitTemplate.receiveAndConvert,
with which I manually receive a message from the queue every 2 s in a loop
(a message is also sent from the sender service every 2 s in a loop, the same as in the situations above).
Now "Queued messages" stays constant, a straight line
(at however many messages were queued up before the receiver service came up; in my case 1, so it stays at 1).
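For reference, the Situation 4 receiver boils down to something like this with Spring AMQP (the queue name demo.queue and the Spring Boot runner are assumptions):

    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.boot.CommandLineRunner;
    import org.springframework.stereotype.Component;

    @Component
    public class PollingReceiver implements CommandLineRunner {
        private final RabbitTemplate rabbitTemplate;

        public PollingReceiver(RabbitTemplate rabbitTemplate) {
            this.rabbitTemplate = rabbitTemplate;
        }

        @Override
        public void run(String... args) throws Exception {
            while (true) {
                // Pull a single message; returns null immediately if the queue is empty.
                Object message = rabbitTemplate.receiveAndConvert("demo.queue");
                if (message != null) {
                    System.out.println("received: " + message);
                }
                Thread.sleep(2000);   // poll every 2 seconds, matching the sender's rate
            }
        }
    }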
Conclusion:
I suspect that when messages are consumed fast enough, "Queued messages" will simply show 0.
This corresponds to what the OP is asking:
Is it so fast that it doesn't need to queue the messages?
(Or I could have messed up some setting in RabbitMQ and reached the wrong conclusion. I don't think so, but I am not that familiar with RabbitMQ.)
We've been using Rabbit successfully for about a year. Recently we upgraded to v2.6.1 because we want to use clusters with replicated message queues.
My testing has hit a puzzling behavior that smells like a Rabbit bug to me. The test that uncovers this works with a two-node cluster. Both nodes are running v2.6.1. Both nodes have disk. Both nodes are running on Mac OS, though I doubt that is pertinent.
I'm also running Alice on the node that runs the test. The test uses it to programmatically do a stop_app on one of the nodes, because the test is trying to validate that if the cluster master fails and a slave is elevated to take its place, we don't lose messages.
So, the test has a small thread pool, which is given tasks that periodically 1) publish messages, and 2) toggle the state of the Rabbit master node (stopped if running; started if stopped). Other threads are consuming messages from queues.
I'm using publisher confirms, and I'm also acknowledging the messages in the consumers (using autoAck=false for channel.basicConsume()).
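(For context, that combination of publisher confirms and manual consumer acks corresponds roughly to the following sketch; the queue name replicated.queue is an assumption, not my real setup:)

    // Publish side: confirms tell the producer the broker has taken responsibility for the message.
    byte[] payload = "hello".getBytes(java.nio.charset.StandardCharsets.UTF_8);
    channel.confirmSelect();
    channel.basicPublish("", "replicated.queue", MessageProperties.PERSISTENT_BASIC, payload);
    channel.waitForConfirmsOrDie(5000);   // throws if the broker does not confirm within 5 s

    // Consume side: autoAck=false, so the message is only removed once we explicitly ack it.
    channel.basicConsume("replicated.queue", false, new DefaultConsumer(channel) {
        @Override
        public void handleDelivery(String consumerTag, Envelope envelope,
                                   AMQP.BasicProperties properties, byte[] body) throws IOException {
            // ... process ...
            channel.basicAck(envelope.getDeliveryTag(), false);
        }
    });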
When the master node is stopped, I see both the producers and consumers catching ShutdownSignalException. They handle this by attempting to reconnect to the cluster. This works fine. When reconnected, they continue with their business.
Sometimes, what I see is that a consumer has successfully fetched a message from the broker, and is calling channel.basicAck() when it gets that ShutdownSignalException.
Later, when the consumer has reconnected, it again pulls down the same message. (The message bodies are tagged with a UUID, so I know it is the same one.) This time, when the consumer attempts to basicAck() the message, it again gets ShutdownSignalException, but this one has the following text in it: "reply-text=PRECONDITION_FAILED - unknown delivery tag 7".
In fact, that is the same delivery tag that was offered to the consumer by the broker before the master went down and the consumer reconnected.
Googling suggests that this event means that the consumer is attempting to ack the same message more than once.
But, how can this be so? If the first ack succeeded, then the message should have been removed from the broker's queues, and the consumer shouldn't see the same message again.
Yet, if the first ack did not succeed, then the consumer shouldn't be dinged for attempting to re-ack the message.
Anyone seen this before? It smells like a bug in Rabbit's replicated queues to me, but I'm still new to Rabbit, so I'm willing to believe there's a subtlety here in consuming from a clustered broker that I haven't yet grokked!
Thanks, --Steve
I'm not sure if my case matches yours, but I have seen a similar "unknown delivery tag" error when acking after a reconnect, and then the same message arrived again. Initially it looked like a bug to me, but in fact this is expected behavior. A consumer with QoS > 1 may have some messages in its local buffer, and the delivery tags for all of them become invalid after a reconnect, because delivery tags are scoped to the channel they were issued on. On the other hand, attempting to ack even the current message after a reconnect doesn't make sense anyway, because that message was already requeued automatically when the connection was lost, and that is why I received it again.
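A minimal sketch of a consumer that behaves sensibly across reconnects, using the Java client's automatic connection recovery (a feature of newer clients) and the message's UUID for de-duplication; the queue name work and the UUID stored in the message-id property are assumptions:

    import com.rabbitmq.client.*;

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class ReconnectSafeConsumer {
        private static final Set<String> processed = ConcurrentHashMap.newKeySet();

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setAutomaticRecoveryEnabled(true);   // re-open connection and channel after a node failure
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            channel.basicQos(1);

            channel.basicConsume("work", false, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties, byte[] body) throws IOException {
                    // A redelivered copy arrives with a *new* delivery tag; never reuse a tag
                    // remembered from before the reconnect.
                    String id = properties.getMessageId();
                    if (id == null || processed.add(id)) {          // skip duplicates we already handled
                        process(new String(body, StandardCharsets.UTF_8));
                    }
                    channel.basicAck(envelope.getDeliveryTag(), false);  // ack with the tag of *this* delivery
                }
            });
        }

        private static void process(String payload) {
            System.out.println("processing " + payload);
        }
    }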