What happens if a broker is configured not to support persistent messages and a client sends a message with delivery mode PERSISTENT?
If you turn off persistence on the broker side, then messages sent with the persistent flag set to true aren't persisted. The broker uses a MemoryPersistenceAdapter, so those messages are kept in memory and are lost once the broker is restarted. You normally only use this setting when unit testing, or in cases where you know your broker doesn't require persistence.
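For reference, here's a minimal sketch of an embedded ActiveMQ broker with persistence turned off (the connector URL is just an example):

    import org.apache.activemq.broker.BrokerService;

    public class NonPersistentBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            // With persistence off, the broker falls back to a MemoryPersistenceAdapter:
            // even messages sent with DeliveryMode.PERSISTENT live only in memory
            // and are lost when the broker restarts.
            broker.setPersistent(false);
            broker.addConnector("tcp://localhost:61616");
            broker.start();
        }
    }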
Related
I'm running a test to see how flow control behaves. I created a fast producer and slow consumers, and set my destination queue policy's high-water mark to 60 percent.
The queue did reach 60%, so messages then went to the store; now the store is full and the broker is blocking, as expected.
But now I cannot get my consumer to connect and pull from the queue. It seems that the blocking also prevents the consumer from getting in to start pulling from the queue.
Is this the correct behavior?
The consumer should not be blocked by flow control; otherwise messages could not be consumed to free up space on the broker for producers to send additional messages.
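For context, producer flow control and the high-water mark are per-destination policies. A rough sketch of the kind of setup the question describes, on an embedded broker (the queue name and limits are illustrative, not taken from the question):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;
    import org.apache.activemq.command.ActiveMQQueue;

    public class FlowControlledBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            PolicyEntry entry = new PolicyEntry();
            entry.setProducerFlowControl(true);      // block producers once limits are hit
            entry.setMemoryLimit(64 * 1024 * 1024);  // per-destination memory limit
            entry.setCursorMemoryHighWaterMark(60);  // start spooling to the store at 60%
            PolicyMap policyMap = new PolicyMap();
            policyMap.put(new ActiveMQQueue("TEST.QUEUE"), entry);
            broker.setDestinationPolicy(policyMap);
            broker.start();
        }
    }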
This issue surfaced when I was using an on-demand JMS service. The service queues or dequeues via REST services, and the consumers are created on demand. If the broker is blocking because, as in my case, it is out of resources, then you cannot create a new consumer.
I've since modified the JMS service to use a consumer pool (implementing an object pool pattern). The consumer pool is initialized when the application starts, and this resolved the blocking issue.
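A minimal sketch of such a pool (the names are hypothetical, and a BlockingQueue stands in for a full object-pool implementation). The point is that all consumers are registered at startup, so nothing new has to be created while the broker is blocking:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;

    public class ConsumerPool {
        private final BlockingQueue<MessageConsumer> pool;

        // Create all consumers up front, while the broker still accepts them.
        public ConsumerPool(Connection connection, String queueName, int size) throws Exception {
            pool = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                pool.add(session.createConsumer(session.createQueue(queueName)));
            }
        }

        public MessageConsumer borrow() throws InterruptedException {
            return pool.take();
        }

        public void release(MessageConsumer consumer) {
            pool.add(consumer);
        }
    }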
I have a 3-node RabbitMQ cluster behind an HAProxy load balancer. When I shut down a node, RabbitMQ successfully switches the queue to the other nodes. However, I notice that Logstash stops pulling messages from the queue unless I restart it. Is this a problem with the way RabbitMQ operates, i.e. does it deactivate all active consumers? I am not sure if Logstash has any retry capability. Has anyone run into this issue?
Quoting the RabbitMQ documentation, first the page on clustering:
What is Replicated? All data/state required for the operation of a RabbitMQ broker is replicated across all nodes. An exception to this are message queues, which by default reside on one node, though they are visible and reachable from all nodes.
and the page on high availability:
Clients that are consuming from a mirrored queue may wish to know that the queue from which they have been consuming has failed over. When a mirrored queue fails over, knowledge of which messages have been sent to which consumer is lost, and therefore all unacknowledged messages are redelivered with the redelivered flag set. Consumers may wish to know this is going to happen.
If so, they can consume with the argument x-cancel-on-ha-failover set to true. Their consuming will then be cancelled on failover and a consumer cancellation notification sent. It is then the consumer's responsibility to reissue basic.consume to start consuming again.
So, what does all this mean:
You have to mirror queues
The consumers should use manual ACK
The consumers should reconnect on their own
So the answer to your question is no, it's not a problem with RabbitMQ; that's simply how it works. It's up to clients to reconnect.
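In the Java client, those three points might look roughly like this (the queue name and the recovery strategy are illustrative):

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.DefaultConsumer;
    import com.rabbitmq.client.Envelope;
    import java.io.IOException;
    import java.util.Collections;

    void consume(final Channel channel, final String queue) throws IOException {
        channel.basicConsume(queue, false,  // autoAck = false: manual ACK
                Collections.<String, Object>singletonMap("x-cancel-on-ha-failover", true),
                new DefaultConsumer(channel) {
                    @Override
                    public void handleDelivery(String tag, Envelope env,
                            AMQP.BasicProperties props, byte[] body) throws IOException {
                        // ... process body ...
                        channel.basicAck(env.getDeliveryTag(), false);
                    }

                    @Override
                    public void handleCancel(String tag) throws IOException {
                        // The mirrored queue failed over and our consumer was
                        // cancelled: reissue basic.consume to keep consuming.
                        consume(channel, queue);
                    }
                });
    }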
I use the RabbitMQ shovel plugin (dynamic shovel, see below) to provide unidirectional messaging between two RabbitMQ brokers over an unreliable WAN link. I see regular connection losses in the RabbitMQ server log.
The relevant portions of the AMQP setup are identical for both brokers: one exchange (fanout, durable) and one queue (durable). The consuming application requires that messages are received in the same order they are produced at the sending side.
The observed behaviour seems to indicate that this is not the case, perhaps due to retransmissions, etc. Does the RabbitMQ shovel plugin preserve message ordering without message loss? What are the required configuration options?
To ensure message ordering, the following parameters should be configured:
"prefetch-count" : 1
"ack-mode" : "on-confirm"
prefetch-count The maximum number of unacknowledged messages copied over a shovel at any one time. Default is 1000.
ack-mode Determines how the shovel should acknowledge messages:
on-confirm (the default): messages are acknowledged to the source broker after they have been confirmed by the destination. This handles network errors and broker failures without losing messages, and is the slowest option.
on-publish: messages are acknowledged to the source broker after they have been published at the destination. This handles network errors without losing messages, but may lose messages in the event of broker failures.
no-ack: message acknowledgements are not used. This is the fastest option, but may lose messages in the event of network or broker failures.
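For example, a dynamic shovel with these settings could be declared like this (the shovel name, URIs, and queue names are placeholders; the parameter keys are the ones described above):

    rabbitmqctl set_parameter shovel wan-shovel \
      '{"src-uri": "amqp://source-broker", "src-queue": "events",
        "dest-uri": "amqp://dest-broker", "dest-queue": "events",
        "prefetch-count": 1, "ack-mode": "on-confirm"}'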
For more information related to RabbitMQ's shovel mechanism please refer to the official documentation: https://www.rabbitmq.com/shovel-dynamic.html
The following is my ActiveMQ setup:
I have two AMQ brokers which are configured with failover.
I have 40 producers but only one consumer.
Now the problem:
From time to time, one of the producers loses the connection to the master broker. The failover kicks in and the producer gets a new connection to the slave, which then receives its messages. So far so good. But the consumer does not notice the problem: it still consumes messages from the master. It does not know that the slave also holds some messages.
How can I now avoid losing those messages that were sent to the slave?
Thanks in advance.
I would recommend you configure a network of brokers. That way, your brokers will be connected as well, and it no longer matters which broker your producers and consumers connect to - the messages will get propagated across the network.
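A minimal embedded sketch of one side of such a network (the broker names and URIs are placeholders; the same thing is usually done with a networkConnector element in activemq.xml):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.network.NetworkConnector;

    public class NetworkedBroker {
        public static void main(String[] args) throws Exception {
            BrokerService brokerA = new BrokerService();
            brokerA.setBrokerName("brokerA");
            brokerA.addConnector("tcp://0.0.0.0:61616");
            // Forward messages to brokerB whenever it has interested consumers;
            // duplex lets one connection carry traffic in both directions.
            NetworkConnector nc = brokerA.addNetworkConnector("static:(tcp://brokerB:61616)");
            nc.setDuplex(true);
            brokerA.start();
        }
    }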
I'm having some trouble understanding confirms in RabbitMQ. I see the following explanation from the RabbitMQ documentation:
Notes
The broker loses persistent messages if it crashes before said messages are written to disk. Under certain conditions, this causes the broker to behave in surprising ways. For instance, consider this scenario:
a client publishes a persistent message to a durable queue,
a client consumes the message from the queue (noting that the message is persistent and the queue durable), but doesn't yet ack it,
the broker dies and is restarted, and
the client reconnects and starts consuming messages.
At this point, the client could reasonably assume that the message will be delivered again. This is not the case: the restart has caused the broker to lose the message. In order to guarantee persistence, a client should use confirms. If the publisher's channel had been in confirm mode, the publisher would not have received an ack for the lost message (since the consumer hadn't ack'd it and it hadn't been written to disk).
Then I used this example, http://hg.rabbitmq.com/rabbitmq-java-client/file/default/test/src/com/rabbitmq/examples/ConfirmDontLoseMessages.java, to do some basic tests and verify confirms, but got some weird results:
The waitForConfirmsOrDie method doesn't block the producer, which is different from my expectation; I supposed waitForConfirmsOrDie would block the producer until all the messages had been ack'd or one of them was nack'd.
I removed channel.confirmSelect() and channel.waitForConfirmsOrDie() from the publisher, and changed the consumer from auto ack to manual ack. I published all the messages to the queue and consumed them one by one, then stopped the RabbitMQ server during the consuming process. What I expected was that the remaining messages would be lost after the RabbitMQ server was restarted, because the channel was not in confirm mode, but I still see all the other messages in the queue after the server restart.
Since I am new to RabbitMQ, can anyone tell me where my understanding of confirms goes wrong?
My understanding is that a publisher confirm means the broker confirms it successfully received the message from the producer, regardless of whether a consumer acks the message or not. When exactly a message is confirmed depends on the queue type and the message delivery mode (see http://www.rabbitmq.com/confirms.html for details); the messages are confirmed when:
it decides a message will not be routed to queues (if the mandatory flag is set then the basic.return is sent first) or
a transient message has reached all its queues (and mirrors) or
a persistent message has reached all its queues (and mirrors) and been persisted to disk (and fsynced) or
a persistent message has been consumed (and if necessary acknowledged) from all its queues
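To see this in action, here is a condensed sketch in the spirit of the linked ConfirmDontLoseMessages example (the queue name is a placeholder):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;

    public class ConfirmExample {
        public static void main(String[] args) throws Exception {
            Connection conn = new ConnectionFactory().newConnection();
            Channel channel = conn.createChannel();
            channel.confirmSelect();  // put the channel in confirm mode
            channel.queueDeclare("q", true, false, false, null);  // durable queue
            channel.basicPublish("", "q",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,  // delivery mode 2 (persistent)
                    "hello".getBytes());
            // Returns as soon as the broker has confirmed everything published so
            // far (often almost immediately); a nack makes it throw an IOException.
            channel.waitForConfirmsOrDie();
            conn.close();
        }
    }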
Old question but oh well..
I published all the messages to the queue and consumed them one by one, then stopped the RabbitMQ server during the consuming process. What I expected was that the remaining messages would be lost after the RabbitMQ server was restarted, because the channel was not in confirm mode, but I still see all the other messages in the queue after the server restart.
This is actually how it should work, IF persistence is enabled. If the server crashes or something else goes wrong, the messages cannot be acknowledged and thus won't be removed from the queue.
Messages are only removed from the queue once they are acknowledged as handled; they are lost across a restart only if the broker had not yet written them to memory or disk before the crash.
Confirming and acknowledging can be switched off if you want, so that the producer won't wait for acks. I cannot find the exact command for it right now, but it does exist.
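For the consumer side, that switch is the autoAck flag on basicConsume; with it set to true, no acknowledgements are sent and the broker considers a message handled as soon as it is delivered. A fragment, assuming an open channel and the placeholder queue "q":

    // autoAck = true: the broker removes the message from the queue on delivery;
    // no channel.basicAck(...) call is needed (or possible).
    channel.basicConsume("q", true, new DefaultConsumer(channel) {
        @Override
        public void handleDelivery(String tag, Envelope env,
                AMQP.BasicProperties props, byte[] body) {
            // ... process body ...
        }
    });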
More on the acks and confirms: https://www.rabbitmq.com/reliability.html