RabbitMQ quorum queue: ensure retry if data is lost

I read that quorum queues do not support TTL for either messages or queues.
The producer in my system records each message in a database with the state "READY_TO_SUBMIT" and then sends it to a cluster of quorum queues. If the RabbitMQ queue crashes, or the message is not delivered to the consumer for any other reason, how will my producer know that it should retry the message?
With a mirrored queue I assume I could set a TTL, and once the TTL expires my producer could retry if the consumer has not updated that status from "READY_TO_SUBMIT" to "SUBMITTED".

Your producers absolutely must use publisher confirms correctly: https://www.rabbitmq.com/confirms.html
Please see the detailed tutorial here: https://www.rabbitmq.com/tutorials/tutorial-seven-java.html
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
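For illustration, here is a minimal sketch of a confirming producer with the RabbitMQ Java client; the queue name, the timeout, and the database hand-off are assumptions rather than details from the question. If the broker never confirms the publish, the record simply stays in READY_TO_SUBMIT and a periodic job can republish it.

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class ConfirmingProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker address
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Put the channel in confirm mode: the broker will ack each publish
            // once it has taken responsibility for the message.
            channel.confirmSelect();

            byte[] body = "order-42 READY_TO_SUBMIT".getBytes(StandardCharsets.UTF_8);
            channel.basicPublish("", "orders", // hypothetical quorum queue name
                    MessageProperties.PERSISTENT_TEXT_PLAIN, body);

            try {
                // Blocks until the broker confirms all outstanding publishes;
                // throws on a nack or when the 5 s timeout elapses.
                channel.waitForConfirmsOrDie(5_000);
                // Safe to rely on the consumer to flip READY_TO_SUBMIT -> SUBMITTED later.
            } catch (Exception e) {
                // The broker did not take responsibility for the message:
                // leave the row in READY_TO_SUBMIT so a retry job republishes it.
            }
        }
    }
}
```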

Related

Redis Pub/Sub not keeping messages

I'm using Redis Pub/Sub to exchange messages between two projects. I have a few channels subscribing to the same queue. When both the publisher and the subscriber are running, everything goes well. When only the publisher is working (and a lot of messages are published), I would expect that when the subscriber starts, it would read all the messages that were enqueued previously. But what happens is that Redis does not keep the messages if there is no subscriber. Is there any configuration I could use to keep the messages until a subscriber dequeues them?
Redis currently doesn't behave like an MQTT broker with its "retain" flag.
If the subscription occurs after the message has been published, the message is missed by the subscriber and lost forever.
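To make that concrete, a small sketch assuming the Jedis Java client (host, channel name and payloads are made up): a message published before any subscriber exists is simply dropped, and a subscriber started later only sees what is published after it subscribes.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class PubSubIsFireAndForget {
    public static void main(String[] args) {
        // Published before anyone subscribes: Redis delivers it to zero clients and drops it.
        try (Jedis publisher = new Jedis("localhost", 6379)) {
            publisher.publish("events", "sent-before-subscribe"); // lost forever
        }

        // A subscriber started afterwards only sees messages published from now on.
        Thread subscriber = new Thread(() -> {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // subscribe() blocks this thread and delivers live messages only.
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println(channel + ": " + message); // never prints the first message
                    }
                }, "events");
            }
        });
        subscriber.start();
    }
}
```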

ActiveMQ VirtualTopic - messages stay enqueued even if dispatched to all defined/linked queues

Using ActiveMQ 5.15.4 and following the doc from http://activemq.apache.org/virtual-destinations.html, when sending to a VirtualTopic the messages get sent to all connected queues, but they never get dequeued from the virtual topic where they were sent.
Do we need to manually clean the virtual topic?
What is the reason for keeping the messages in the topic? Is it so they can be re-sent later on? But when a new queue gets linked to the virtual topic, the existing enqueued messages are not sent to it.
I have not tested this, but do the messages in the connected queues respect the persistence flag of the message sent to the virtual topic?
If there is no consumer on the Virtual Topic itself, then the only messages retained are the ones placed on the subscription queues for the Virtual Topic consumers. For example, if you send to VirtualTopic.FOO and there are no subscriptions on that topic or on the named Virtual Topic consumer queues such as Consumer.A.VirtualTopic.FOO, then the message is completely discarded. If there was a consumer on the consumer queue at some point, then messages sent to the topic are forwarded to the queue, but the topic itself retains nothing.
If there are consumers on the Virtual Topic itself, they would get messages sent to them or held for them up to the configured pending message limit.
The consumer queues will respect the persistent value specified by the MessageProducer that sent the messages.
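A rough JMS sketch of that behaviour against ActiveMQ (broker URL and destination names are just examples): the consumer queue only receives copies once it exists, the topic itself retains nothing, and the copies inherit the producer's delivery mode.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Creating a consumer on the derived queue materialises it; from now on
            // every message sent to VirtualTopic.FOO is copied into this queue.
            MessageConsumer consumerA = session.createConsumer(
                    session.createQueue("Consumer.A.VirtualTopic.FOO"));

            // The producer publishes to the topic; the topic itself keeps nothing.
            MessageProducer producer = session.createProducer(
                    session.createTopic("VirtualTopic.FOO"));
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // the queue copies inherit this

            producer.send(session.createTextMessage("hello"));

            TextMessage received = (TextMessage) consumerA.receive(5_000);
            System.out.println(received != null ? received.getText() : "nothing received");
        } finally {
            connection.close();
        }
    }
}
```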

RabbitMQ requeues all the messages after restart

After the RabbitMQ server or cluster is restarted, every queue recovers all of its messages, even messages that had already been acked (from the point the RabbitMQ server was started), and all messages are processed again.
(screenshot: queue details)
From my understanding, with persistent set to false in the message properties, the message will not survive a broker restart. I have also set durable to false for the queue.
Did I miss any other settings?
Making a message persistent is fine, as you do not want to lose messages if RabbitMQ restarts. Likewise, it is fine to make the queue durable so that you do not lose the queue itself on a RabbitMQ restart. I suggest you check the consumer code: it looks like it is not acknowledging (or committing the transaction for) the messages on its side, which leaves them available on the queue. After consuming the messages, stop the consumer and check in RabbitMQ whether the messages are still on the queue. If the messages are still on the queue after the consumer is stopped, there must be an issue in the consumer code.
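To check the consumer side, a minimal manually-acknowledging consumer with the RabbitMQ Java client could look like the sketch below (queue name and processing step are placeholders). If basicAck is never reached, the message stays unacknowledged on the broker and comes back after a restart.

```java
import java.io.IOException;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        boolean autoAck = false; // acknowledge explicitly, only after processing succeeds
        channel.basicConsume("work-queue", autoAck, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                try {
                    // process(body) ... hypothetical business logic
                    channel.basicAck(envelope.getDeliveryTag(), false); // without this, the message is requeued
                } catch (Exception e) {
                    channel.basicNack(envelope.getDeliveryTag(), false, true); // requeue on failure
                }
            }
        });
    }
}
```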

Logstash with RabbitMQ cluster

I have a 3-node RabbitMQ cluster behind an HAProxy load balancer. When I shut down a node, RabbitMQ successfully switches the queue to the other nodes. However, I notice that Logstash stops pulling messages from the queue unless I restart it. Is this a problem with the way RabbitMQ operates, i.e. does it deactivate all active consumers? I am not sure if Logstash has any retry capability. Has anyone run into this issue?
Quoting the RabbitMQ documentation, first the page on clustering:
What is Replicated? All data/state required for the operation of a RabbitMQ broker is replicated across all nodes. An exception to this are message queues, which by default reside on one node, though they are visible and reachable from all nodes.
and then the page on high availability:
Clients that are consuming from a mirrored queue may wish to know that the queue from which they have been consuming has failed over. When a mirrored queue fails over, knowledge of which messages have been sent to which consumer is lost, and therefore all unacknowledged messages are redelivered with the redelivered flag set. Consumers may wish to know this is going to happen.
If so, they can consume with the argument x-cancel-on-ha-failover set to true. Their consuming will then be cancelled on failover and a consumer cancellation notification sent. It is then the consumer's responsibility to reissue basic.consume to start consuming again.
So, what does all this mean:
You have to mirror queues
The consumers should use manual ACK
The consumers should reconnect on their own
So the answer to your question is: no, it's not a problem with RabbitMQ, that's simply how it works. It's up to the clients to reconnect.
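Sketching those three points with the RabbitMQ Java client (queue name and broker address are examples; the reconnection handling is deliberately simplified):

```java
import java.io.IOException;
import java.util.Collections;
import java.util.Map;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Consumer;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class FailoverAwareConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // in practice: the HAProxy front-end address
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Ask the broker to cancel us (instead of silently re-targeting) when the
        // mirrored queue fails over, so we receive a consumer-cancel notification.
        Map<String, Object> consumeArgs =
                Collections.singletonMap("x-cancel-on-ha-failover", true);

        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                // process(body) ... then acknowledge manually
                channel.basicAck(envelope.getDeliveryTag(), false);
            }

            @Override
            public void handleCancel(String consumerTag) throws IOException {
                // The queue failed over: it is our job to start consuming again.
                channel.basicConsume("logstash-queue", false, consumeArgs, this);
            }
        };

        channel.basicConsume("logstash-queue", false, consumeArgs, consumer);
    }
}
```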

Behavior of channels in "confirm" mode with RabbitMQ

I'm having some trouble understanding confirm mode in RabbitMQ. I see the following explanation in the RabbitMQ documentation:
Notes
The broker loses persistent messages if it crashes before said messages are written to disk. Under certain conditions, this causes the broker to behave in surprising ways. For instance, consider this scenario:
a client publishes a persistent message to a durable queue
a client consumes the message from the queue (noting that the message is persistent and the queue durable), but doesn't yet ack it,
the broker dies and is restarted, and
the client reconnects and starts consuming messages.
At this point, the client could reasonably assume that the message will be delivered again. This is not the case: the restart has caused the broker to lose the message. In order to guarantee persistence, a client should use confirms. If the publisher's channel had been in confirm mode, the publisher would not have received an ack for the lost message (since the consumer hadn't ack'd it and it hadn't been written to disk).
Then I used http://hg.rabbitmq.com/rabbitmq-java-client/file/default/test/src/com/rabbitmq/examples/ConfirmDontLoseMessages.java to do some basic tests and verify the confirms, but got some weird results:
The waitForConfirmsOrDie method doesn't block the producer, which is different from my expectation; I supposed waitForConfirmsOrDie would block the producer until all the messages have been ack'd or one of them is nack'd.
I removed channel.confirmSelect() and channel.waitForConfirmsOrDie() from the publisher and changed the consumer from auto ack to manual ack. I published all messages to the queue and consumed them one by one, then stopped the RabbitMQ server during the consuming process. What I expected was that the remaining messages would be lost after the RabbitMQ server restarted, because the channel is not in confirm mode, but I still see all the remaining messages in the queue after the server restart.
Since I am new to RabbitMQ, can anyone tell me where my understanding of confirms goes wrong?
My understanding is that "channel confirmation" means the broker confirms that it successfully got the message from the producer, regardless of whether the consumer acks that message or not. When exactly depends on the queue type and the message delivery mode (see http://www.rabbitmq.com/confirms.html for details); the messages are confirmed when:
it decides a message will not be routed to queues
(if the mandatory flag is set then the basic.return is sent first) or
a transient message has reached all its queues (and mirrors) or
a persistent message has reached all its queues (and mirrors) and been persisted to disk (and fsynced) or
a persistent message has been consumed (and if necessary acknowledged) from all its queues
Old question but oh well..
I published all messages to the queue and consumed them one by one, then stopped the RabbitMQ server during the consuming process. What I expected was that the remaining messages would be lost after the RabbitMQ server restarted, because the channel is not in confirm mode, but I still see all the remaining messages in the queue after the server restart.
This is actually how it should work, IF persistence is enabled. If the server crashes or something else goes wrong, the messages cannot be confirmed and thus won't be removed from the queue.
Messages are only removed from the queue if they are confirmed to have been handled, or if the broker had not yet written them to memory or disk before the server crashed.
Confirming and acknowledging can be turned off if wanted, and then the producer won't wait for the acks. I cannot find the exact command for it right now, but it does exist.
More on the acks and confirms: https://www.rabbitmq.com/reliability.html
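As a side note, waitForConfirmsOrDie is the blocking way to use confirms; the Java client can also deliver them asynchronously. A rough sketch, with a placeholder queue name:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.ConfirmListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class AsyncConfirmPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.confirmSelect(); // confirm mode is enabled per channel

            channel.addConfirmListener(new ConfirmListener() {
                @Override
                public void handleAck(long deliveryTag, boolean multiple) {
                    // The broker has taken responsibility (persisted/replicated as required).
                }

                @Override
                public void handleNack(long deliveryTag, boolean multiple) {
                    // The broker could NOT take responsibility: republish or flag for retry.
                }
            });

            long seqNo = channel.getNextPublishSeqNo(); // correlates this publish with its confirm
            channel.basicPublish("", "durable-queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes(StandardCharsets.UTF_8));
            System.out.println("published message with sequence number " + seqNo);

            Thread.sleep(1000); // crude wait so the listener has a chance to fire in this demo
        }
    }
}
```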