When I write "persistent" messages to the master's queue, I can see them counted in both "pending messages" and "messages enqueued".
On the slave machine, however, they only appear under "pending messages" for that queue, not under "messages enqueued".
I was thinking the master and slave should look exactly the same at any point in time.
I am talking about LevelDB, replicated with ZooKeeper.
I am doing a POC on RabbitMQ's quorum queues, focusing especially on the failover mechanism. In my case I have two nodes (say NodeA and NodeB) and one quorum queue, which resides on NodeA. Whenever I publish a test message to the quorum queue on NodeA, I can see the same message on NodeB.
When I test failover by stopping NodeA, I am unable to publish any messages, and I can no longer see any messages in the quorum queue, so I think NodeB is not being promoted to the new leader. I assumed a new leader would be elected automatically; do I need to do anything to make the other node the leader?
Kind Regards
Quorum queues do not support two-node clusters, and two-node clusters are strongly recommended against for any RabbitMQ cluster.
From the Quorum Queues documentation guide:
A quorum queue requires a quorum of the declared nodes to be available to function.
When a RabbitMQ node hosting a quorum queue's leader fails or is stopped, another node
hosting one of that quorum queue's followers will be elected leader and resume operations.
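To illustrate, here is a minimal sketch, using the RabbitMQ Java client, of declaring a quorum queue against a cluster of three assumed nodes (the hostnames and queue name are placeholders, not from the question). With three declared members, the queue keeps a quorum (two of three) even when one node is stopped.

```java
import com.rabbitmq.client.Address;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class QuorumQueueDeclare {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        // Hypothetical hostnames of a three-node cluster; a quorum is any 2 of the 3 nodes.
        Connection conn = factory.newConnection(Arrays.asList(
                new Address("rabbit-node-1"),
                new Address("rabbit-node-2"),
                new Address("rabbit-node-3")));
        Channel channel = conn.createChannel();

        // "x-queue-type: quorum" makes this a quorum queue; quorum queues must be durable.
        Map<String, Object> queueArgs = new HashMap<>();
        queueArgs.put("x-queue-type", "quorum");
        channel.queueDeclare("test-quorum-queue", true, false, false, queueArgs);

        conn.close();
    }
}
```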
My queues are durable and my messages are persistent. I have set up a 3-node RabbitMQ cluster with HA mirroring of all queues across all servers. My master node seems to be rabbitmq3. When I shut down rabbitmq3, I get the following error:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - home node 'rabbit@rabbitmq3' of durable queue 'durable-test-queue' in vhost 'test' is down or inaccessible
I think that if I have mirrored queues in a cluster, I should not create durable queues, since they will cause problems if my RabbitMQ master node goes down suddenly.
That is the whole point of a cluster: your system should tolerate the failure of any single node, including the queue master. Your error is just a notification that the current master is down. The cluster should elect a new master, and the queue should continue to function, regardless of the durability of the queue or the persistence of the messages.
You should be able to continue to send/receive messages on those durable queues.
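As a rough sketch of what that looks like on the client side (the hostnames and queue name are assumptions), the RabbitMQ Java client can ride out the loss of a single node by listing all cluster members and enabling automatic connection and topology recovery:

```java
import com.rabbitmq.client.Address;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

import java.util.Arrays;

public class DurableQueuePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        // Reconnect automatically if the node we are attached to goes down.
        factory.setAutomaticRecoveryEnabled(true);
        factory.setTopologyRecoveryEnabled(true);

        // Hypothetical hostnames of the three cluster nodes; the client picks a
        // reachable one and fails over to another on reconnect.
        Connection conn = factory.newConnection(Arrays.asList(
                new Address("rabbitmq1"), new Address("rabbitmq2"), new Address("rabbitmq3")));
        Channel channel = conn.createChannel();

        // Durable queue, persistent message.
        channel.queueDeclare("durable-test-queue", true, false, false, null);
        channel.basicPublish("", "durable-test-queue",
                MessageProperties.PERSISTENT_TEXT_PLAIN, "hello".getBytes("UTF-8"));

        conn.close();
    }
}
```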
I have a 3-node RabbitMQ cluster behind an HAProxy load balancer. When I shut down a node, RabbitMQ successfully switches the queue to the other nodes. However, I notice that Logstash stops pulling messages from the queue unless I restart it. Is this a problem with the way RabbitMQ operates, i.e. does it deactivate all active consumers? I am not sure if Logstash has any retry capability. Has anyone run into this issue?
Quoting the RabbitMQ documentation, first from the page on clustering:
What is Replicated? All data/state required for the operation of a
RabbitMQ broker is replicated across all nodes. An exception to this
are message queues, which by default reside on one node, though they
are visible and reachable from all nodes.
and then from the page on high availability:
Clients that are consuming from a mirrored queue may wish to know that
the queue from which they have been consuming has failed over. When a
mirrored queue fails over, knowledge of which messages have been sent
to which consumer is lost, and therefore all unacknowledged messages
are redelivered with the redelivered flag set. Consumers may wish to
know this is going to happen.
If so, they can consume with the argument x-cancel-on-ha-failover set
to true. Their consuming will then be cancelled on failover and a
consumer cancellation notification sent. It is then the consumer's
responsibility to reissue basic.consume to start consuming again.
So, what does all this mean:
You have to mirror queues
The consumers should use manual ACK
The consumers should reconnect on their own
So the answer to your question is no, it's not a problem with RabbitMQ; that's simply how it works. It's up to the clients to reconnect.
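As a sketch of what that reconnection logic can look like with the RabbitMQ Java client (the queue name and processing code are placeholders), a consumer can pass x-cancel-on-ha-failover, acknowledge manually, and re-issue basic.consume from the cancellation callback:

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class FailoverAwareConsumer {

    public static void subscribe(final Channel channel, final String queue) throws IOException {
        // Ask the broker to cancel this consumer when the mirrored queue fails over.
        Map<String, Object> consumeArgs = new HashMap<>();
        consumeArgs.put("x-cancel-on-ha-failover", true);

        channel.basicConsume(queue, false /* manual ack */, consumeArgs, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                // ... process the message ...
                getChannel().basicAck(envelope.getDeliveryTag(), false);
            }

            @Override
            public void handleCancel(String consumerTag) throws IOException {
                // The queue failed over: it is the consumer's job to start consuming again.
                subscribe(channel, queue);
            }
        });
    }
}
```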
The following is my ActiveMQ setup:
I have two AMQ brokers which are configured with failover.
I have 40 producers but only one consumer.
Now the problem:
From time to time, one of the producers loses the connection to the master broker. The failover kicks in and the producer gets a new connection to the slave, which then receives its messages. So far so good. But the consumer does not experience the problem; it still consumes messages from the master. It does not know that the slave also holds some messages.
How can I solve the problem of losing the messages that are sent to the slave?
Thanks in advance
I would recommend you configure a network of brokers. That way, your brokers will be connected as well, and it no longer matters which broker your producers and consumers connect to - the messages will get propagated across the network.
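For illustration, here is a minimal embedded-broker sketch using ActiveMQ's BrokerService API (broker names, ports, and the peer hostname are assumptions; the same settings are more commonly declared as a networkConnector element in activemq.xml). Each broker opens a network connector to its peer, so messages are forwarded to whichever broker the consumer is attached to:

```java
import org.apache.activemq.broker.BrokerService;

public class NetworkedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("brokerA");                 // each broker in the network needs a unique name
        broker.addConnector("tcp://0.0.0.0:61616");      // transport connector for clients
        // Forward messages to/from the other broker in the network.
        broker.addNetworkConnector("static:(tcp://brokerB-host:61616)");
        broker.setPersistent(true);
        broker.start();
        broker.waitUntilStopped();
    }
}
```

A network connector is one-directional by default, so for a two-broker setup either declare a matching connector on the other broker or mark this one as duplex.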
I want to understand ZooKeeper's role in replicated LevelDB for the ActiveMQ broker.
About ZooKeeper election: how does ZooKeeper know which of all the clients connected to it are ActiveMQ brokers competing to become master? Is there a particular key or configuration passed by all the brokers connecting to ZooKeeper that says these (let's say 3) ActiveMQ brokers belong to the same environment and are competing to become master?
At what interval does a slave broker copy data from the master broker? Are there any corner cases where data might be lost?
Does ActiveMQ guarantee message ordering when using replicated LevelDB? I am talking about the case where re-election of the master happens while a producer is sending messages in sequence to the broker.
Thanks,
Anuj
By the zkPath in the ZooKeeper configuration and by the broker name.
Each message is synced to a quorum (nodes/2 + 1) of brokers before the transaction completes. So there is no sync interval; it's synced in real time. The cluster will not function unless you have a quorum of brokers online, so there should be no data loss.
The messages are synced to a majority of the nodes synchronously. At re-election, a node with the latest updates will be elected. Ordered messages should be no problem. However, it is generally problematic to rely critically on message order in a message queue. As a rule of thumb, message order is only preserved on the "happy path": dead letters, multiple consumers and so forth can just as easily mess up message order.
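To make the first point concrete, here is a minimal sketch, assuming the ElectingLevelDBStore from the activemq-leveldb-store module configured programmatically (the ZooKeeper addresses, paths, hostnames and names are placeholders; the same attributes are usually set on the replicatedLevelDB persistence adapter in activemq.xml). Every broker in the group points at the same zkPath and carries the same broker name, which is how ZooKeeper groups them and elects one master among them:

```java
import java.io.File;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.leveldb.replicated.ElectingLevelDBStore;

public class ReplicatedLevelDbBroker {
    public static void main(String[] args) throws Exception {
        // Same zkAddress and zkPath on every member of the group; ZooKeeper uses
        // them (together with the broker name) to identify the election group.
        ElectingLevelDBStore store = new ElectingLevelDBStore();
        store.setZkAddress("zk1:2181,zk2:2181,zk3:2181");  // assumed ZooKeeper ensemble
        store.setZkPath("/activemq/leveldb-stores");
        store.setReplicas(3);                              // quorum is replicas/2 + 1 = 2
        store.setHostname("broker1");                      // this node's address for replication
        store.setDirectory(new File("activemq-data"));

        BrokerService broker = new BrokerService();
        broker.setBrokerName("replicated-broker");         // identical on all three brokers
        broker.setPersistenceAdapter(store);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```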