My application consumes JMS messages in a GlassFish 3.1.2 server, with OpenMQ as the JMS provider.
The strange behavior happens when a consumer fails to process a message. In this situation GlassFish correctly moves the message to the Dead Message Queue after 2 attempts, and this is fine.
When I restart the server, the message stored in the DMQ is sent again to the original destination (that's OK, although I didn't expect this behavior). But now, even if the consumer succeeds, the message remains in the destination.
That's incorrect because, after another restart of the server, the message is consumed again. Strangely, this time the message is permanently removed from the queue.
The questions are:
Why does the message remain in the queue?
And why does GlassFish automatically try to move the message from the DMQ back to the original destination after a restart?
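For context, the consumer is roughly shaped like the following hypothetical MDB (the queue name and class are placeholders, not my real code); throwing from onMessage rolls back the delivery, which is what triggers the redelivery attempts and the move to the DMQ:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical consumer; the physical queue name is a placeholder.
@MessageDriven(mappedName = "jms/IncomingQueue", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class FailingConsumer implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // Throwing a RuntimeException rolls back the container-managed transaction,
        // so GlassFish redelivers the message and, after the configured number of
        // attempts (2 in my setup), moves it to the dead message queue (mq.sys.dmq).
        throw new RuntimeException("simulated processing failure");
    }
}
```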
I have a status topic, published using QoS 1 and retained = true over ActiveMQ (Docker image rmohr/activemq:5.15.9). I want my dashboard to be able to late-subscribe to the topic and always receive the last published message.
The retained functionality seems to work well, but the message seems to be wiped out when the ActiveMQ broker restarts.
If I stop publishing to the topic, restart the broker, and try to late-subscribe, I do not receive the last message (the one that was retained before the broker restart).
I use the default container configuration (KahaDB, with filesystem directories mounted for data/ and conf/). I thought that the retained message would be in KahaDB, but it is empty. The ActiveMQ UI also shows an empty queue for the topic after the broker restart.
Is this expected behavior? Can I achieve retained-message persistence across a broker restart with ActiveMQ? How should I proceed?
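For reference, the publish is conceptually like the following sketch with the Eclipse Paho Java client (the client library, broker URL, topic, and payload here are illustrative assumptions, not my exact setup):

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class RetainedStatusPublisher {
    public static void main(String[] args) throws Exception {
        // Broker URL, client id, topic, and payload are placeholders.
        MqttClient client = new MqttClient("tcp://localhost:1883", "status-publisher");
        client.connect();
        MqttMessage message = new MqttMessage("online".getBytes("UTF-8"));
        message.setQos(1);          // QoS 1
        message.setRetained(true);  // broker should keep this as the retained message
        client.publish("devices/status", message);
        client.disconnect();
    }
}
```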
A retained message should not be lost under any circumstances, unless the client publishes an empty retained message to clear it.
You can switch to EMQ X to avoid this problem; it can store the data on disk or in your favorite database.
We have observed the following behavior in RabbitMQ and are trying to understand whether it is correct and how to resolve it.
Scenario:
A (persistent) message is delivered into a durable queue
The (single) Consumer (Spring-AMQP) takes the message and starts processing => Message goes from READY to UNACK
Now the broker is shut down => Client correctly reports "Channel shutdown"
The consumer finishes the processing but cannot acknowledge the message, as the broker is still down
Broker is started again => Client reconnects
As a result, one message remains unack'ed forever (or until the client is restarted).
Side note: in the RabbitMQ admin UI, I can see that two channels now exist: the "dead" one that was created before the broker restart, which contains the unacked message, and a new one that is healthy.
Is this behavior expected? It seems "correct" to me in the sense that RabbitMQ cannot know, after the broker restart, whether the message processing was completed or not. But what solution is there to get that unacked message back into the queue and heal the system without restarting the consumer process?
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Is this behavior expected? It seems "correct" to me in the sense that RabbitMQ cannot know, after the broker restart, whether the message processing was completed or not.
Yes, you are observing expected behavior. RabbitMQ will re-enqueue the message once it determines that the consumer is really dead. Since your consumer re-connects with what must be the same consumer tag as before, it is up to that process to ack or nack the message.
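One way to heal the system without restarting the whole consumer process is to restart just the Spring AMQP listener container once the broker is reachable again. A minimal sketch (the container itself is not shown in the question, so this is an assumption about the setup):

```java
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class ConsumerRecovery {
    /**
     * Sketch: stopping the container closes the stale channel that still holds the
     * unacked delivery, so the broker re-queues the message; starting it again
     * registers a fresh consumer on a new channel.
     */
    public static void restart(SimpleMessageListenerContainer container) {
        container.stop();
        container.start();
    }
}
```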
I'm using Celery 3.0.18 with RabbitMQ 3.0.2. I have a task sent to another application using celery.send_task. I can see the send_task call in my logs, I can see the packets leaving the worker instance, and I can see the packets reaching the RabbitMQ instance when I run tcpflow -ce -i any port 5672; however, only the first message gets to the queue. They all have the same routing key. I tried recreating the exchange and bindings, and even a new RabbitMQ instance, and nothing seems to work. This used to work fine for months, until we had to rebuild RabbitMQ from scratch after a crash in our AWS infrastructure. Strangely, I have the exact same setup working in another application, using the same broker and the same exchange, bindings, and queue, and it works perfectly there. It also works when I send the messages to the same exchange using the same call from a management script run from the shell on the same instance, but it doesn't work when the message is sent from the Celery task in the worker process.
Any ideas on what the problem might be?
Eventually, I figured out what was wrong, but it's not clear whether this is the expected behavior, a Celery bug, or a RabbitMQ bug.
What happens is that, besides our application tasks, I have a custom logging handler that sends logs to a central location via RabbitMQ, also using celery.send_task. This logging handler sends messages to an exchange named application.logger, with routing keys like application.logger.info, application.logger.warning, etc., and there are bindings to route some logging levels to specific queues. This exchange, its bindings, and the queues were created directly in RabbitMQ and were not defined in Celery routes.
When the worker tried to send a message to this exchange and it didn't exist, Celery logged a 404 NOT_FOUND error. After that, tasks sent to other exchanges over the same connection were no longer delivered. They were sent by the worker instance, we could see the packets arriving, and the RabbitMQ management screen for that connection even showed data arriving from the client in kB/s, but no messages were delivered.
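The practical fix that follows from this is to make sure the exchange exists before publishing to it. Conceptually it is just an idempotent exchange declaration up front; here is a sketch with the plain RabbitMQ Java client rather than Celery (host and payload are placeholders):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DeclareBeforePublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");   // placeholder broker host
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        // Declaring the exchange is idempotent; it prevents the 404 NOT_FOUND
        // channel error that otherwise kills later publishes on the same channel.
        channel.exchangeDeclare("application.logger", "topic", true);
        channel.basicPublish("application.logger", "application.logger.info",
                null, "log line".getBytes("UTF-8"));
        conn.close();
    }
}
```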
I have a problem using direct addressing with MCollective via ActiveMQ 5.8. (http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html)
The problem arises when one of the nodes subscribed to the nodes queue via MCollective crashes and doesn't unsubscribe. When the host boots and subscribes again, there are now two subscribers with the same identity, because ActiveMQ doesn't recognize that the pre-crash one is no longer listening. This is a problem with direct addressing because the message goes into a queue, ActiveMQ delivers it to only one subscriber, and it always seems to pick the one that's no longer listening, so the message is never delivered to the actual node. I can observe this happening if I have ActiveMQ log the message frames.
This may be related to the ActiveMQ concept of a "durable subscriber" (where a subscriber of the same identity unsubscribes any existing one) but I don't have any idea how that is configured from MCollective.
What I want is for either the new subscriber to bump the old one, or for the dead subscriber to be removed when a message is sent to it over a dead connection (with Wireshark I can see that the packets aren't ACKed; instead an ICMP "Destination unreachable" packet comes back).
Apparently, according to http://projects.puppetlabs.com/issues/23365, the solution is to use MCollective 2.3 (I was using 2.2) and Stomp 1.1 keepalives.
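For the record, enabling the Stomp 1.1 keepalives came down to a connector setting; roughly the following (the option name is my understanding of the MCollective 2.3 activemq connector documentation, and the interval value is just an example, so treat this as an assumption to verify against the docs):

```
# client.cfg / server.cfg (sketch, not my exact configuration)
connector = activemq
plugin.activemq.heartbeat_interval = 30
```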
I'm having some trouble understanding RabbitMQ confirms. I saw the following explanation from RabbitMQ:
Notes
The broker loses persistent messages if it crashes before said messages are written to disk. Under certain conditions, this causes the broker to behave in surprising ways. For instance, consider this scenario:
a client publishes a persistent message to a durable queue
a client consumes the message from the queue (noting that the message is persistent and the queue durable), but doesn't yet ack it,
the broker dies and is restarted, and
the client reconnects and starts consuming messages.
At this point, the client could reasonably assume that the message will be delivered again. This is not the case: the restart has caused the broker to lose the message. In order to guarantee persistence, a client should use confirms. If the publisher's channel had been in confirm mode, the publisher would not have received an ack for the lost message (since the consumer hadn't ack'd it and it hadn't been written to disk).
Then I used http://hg.rabbitmq.com/rabbitmq-java-client/file/default/test/src/com/rabbitmq/examples/ConfirmDontLoseMessages.java to do some basic tests and verify confirms, but I got some weird results:
The waitForConfirmsOrDie method doesn't block the producer, which is different from my expectation; I supposed waitForConfirmsOrDie would block the producer until all the messages have been ack'd or one of them is nack'd.
I removed channel.confirmSelect() and channel.waitForConfirmsOrDie() from the publisher and changed the consumer from auto ack to manual ack. I published all messages to the queue and consumed them one by one, then stopped the RabbitMQ server during the consuming process. What I expected was that the remaining messages would be lost after the RabbitMQ server restarted, because the channel is not in confirm mode, but I still see all the other messages in the queue after the server restart.
Since I am new to RabbitMQ, can anyone tell me where my understanding of confirms is wrong?
My understanding is that a publisher confirm ("channel confirmation") is the broker confirming that it successfully received the message from the producer, regardless of whether a consumer has acknowledged that message or not. Depending on the queue type and message delivery mode (see http://www.rabbitmq.com/confirms.html for details),
the messages are confirmed when:
it decides a message will not be routed to queues
(if the mandatory flag is set then the basic.return is sent first) or
a transient message has reached all its queues (and mirrors) or
a persistent message has reached all its queues (and mirrors) and been persisted to disk (and fsynced) or
a persistent message has been consumed (and if necessary acknowledged) from all its queues
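To make that concrete, here is a minimal publisher-confirms sketch with the RabbitMQ Java client (the queue name, host, and message count are placeholders):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class ConfirmingPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // placeholder broker host
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.confirmSelect();                             // put the channel in confirm mode
        channel.queueDeclare("confirm.test", true, false, false, null); // durable queue
        for (int i = 0; i < 10; i++) {
            channel.basicPublish("", "confirm.test",
                    MessageProperties.PERSISTENT_TEXT_PLAIN, // persistent delivery mode
                    ("msg-" + i).getBytes("UTF-8"));
        }
        // Blocks until the broker has confirmed every message published on this channel
        // (for persistent messages to a durable queue, that includes writing to disk),
        // or throws if any of them is nack'd or the timeout expires.
        channel.waitForConfirmsOrDie(5000);
        conn.close();
    }
}
```

Note that this only waits for the broker's acks; it never waits for a consumer to ack anything.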
Old question but oh well..
I removed channel.confirmSelect() and channel.waitForConfirmsOrDie() from the publisher and changed the consumer from auto ack to manual ack. I published all messages to the queue and consumed them one by one, then stopped the RabbitMQ server during the consuming process. What I expected was that the remaining messages would be lost after the RabbitMQ server restarted, because the channel is not in confirm mode, but I still see all the other messages in the queue after the server restart.
This is actually how it should work, IF persistence is enabled. If the server crashes or something else goes wrong, the messages cannot be acknowledged and thus won't be removed from the queue.
Messages are only removed from the queue once they have been acknowledged as handled; they are only lost if the broker had not yet written them to disk before it crashed.
Confirms and acknowledgements can be switched off if desired, so the producer won't wait for acks. I cannot find the exact option for it right now, but it does exist.
More on the acks and confirms: https://www.rabbitmq.com/reliability.html
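And for the second experiment, a minimal manual-ack consumer sketch (again with the Java client; queue name and host are placeholders): until basicAck is sent, a persistent message in a durable queue survives a broker restart and is simply re-queued and redelivered.

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // placeholder broker host
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.queueDeclare("confirm.test", true, false, false, null); // durable queue
        channel.basicConsume("confirm.test", false,          // autoAck = false
                new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body)
                    throws IOException {
                System.out.println("processing " + new String(body, "UTF-8"));
                // Until this ack reaches the broker, the message stays on the broker
                // as unacked; if the broker restarts first, the message is re-queued.
                getChannel().basicAck(envelope.getDeliveryTag(), false);
            }
        });
    }
}
```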