I set up a lab for RabbitMQ high availability using a cluster and mirrored queues.
I am using CentOS 7 and rabbitmq-server 3.3.5 with three servers (ha1, ha2, ha3).
I have joined ha1 and ha2 to ha3, but I have not set any policy for mirrored queues. When I create a queue named "hello" on ha1 and then check ha2 and ha3 with rabbitmqctl list_queues, the hello queue exists on every node in the cluster.
My question is: why is the queue visible on every node in the cluster even though I did not set a mirroring policy?
Please advise whether I made a mistake, or whether simply joining nodes into a cluster means queues are mirrored on every node. Thanks.
In RabbitMQ, by default, a queue is stored on only one node. When you create a cluster, the queue becomes visible and reachable from every node.
But that does not mean the queue is mirrored: if the node that holds it goes down, the queue is marked as down and you can't access it.
Suppose you create a queue on one node; the queue keeps working as long as that node is up.
If that node goes down, the queue becomes unavailable until the node comes back.
You should always apply a mirroring policy, otherwise you could lose messages.
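For example, a minimal sketch of such a policy (the policy name is arbitrary and the pattern ".*" matches every queue; adjust both to your needs), applied from any node of the cluster:
rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
With "ha-mode":"all" each matched queue gets a mirror on every node, and "ha-sync-mode":"automatic" makes new mirrors copy the existing messages on their own.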
Related
We have the following setup:
Now, on the Upstream side, I see two connections to the Cluster. One to rabbitmq-1 and one to rabbitmq-2.
The one to rabbitmq-1 is piling up messages. Note the message count of 413'584.
In the downstream, on the Cluster, I see only the connection to rabbitmq-2.
If I delete the queue for rabbitmq-1, it reappears after some time.
Why are there two queues, and why is the one for rabbitmq-1 not processing any messages?
This happens when all of the following are true:
Your cluster has no name defined. In that case the name of a node is used as the cluster name.
Your cluster is behind a load balancer which selects a node randomly.
You use the load balancer URL to set up the federation upstream. In that case, when the node restarts, the connection is made from a different node, which reports a different name, so a second federation queue is created and the old one is left without a consumer.
Solution
The easiest solution is to set the cluster name on any node in the cluster with the following command.
rabbitmqctl set_cluster_name "rabbitmq-cluster"
After that, all nodes in the cluster will return the same name and no redundant exchanges or queues will be created.
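To verify the change (a quick check; on recent 3.x versions the status output includes a cluster_name entry), run this on each node and confirm they all report the same name:
rabbitmqctl cluster_status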
Can I change the node name from the RabbitMQ management console for a specific queue? I tried, but I think this was set when I started my app. Can I change it afterwards? My queue is on node RabbitMQ1 and my connection is on node RabbitMQ2, so I cannot read messages from that queue. Maybe I can change my connection node?
The node name is not just a label; it is where the queue is physically located. By default queues are not distributed/mirrored but are created on the server the application connected to, as you correctly guessed.
However, you can make your queue mirrored using policies, so you can consume messages from both servers.
https://www.rabbitmq.com/ha.html
You can change the policy for the queues by using the rabbitmqctl command or from the management console, admin -> policies.
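For instance (a sketch only; "my-queue" is a placeholder for your actual queue name and the policy name is arbitrary), a policy that mirrors just that queue to all nodes could look like:
rabbitmqctl set_policy mirror-my-queue "^my-queue$" '{"ha-mode":"all"}'
Since this example does not set "ha-sync-mode":"automatic", existing messages are not copied to the new mirror until you synchronize the queue, which is the next step.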
You need to synchronize the queue in order to clone the old messages to the mirror queue with:
rabbitmqctl sync_queue <queue_name>
Newly published messages will end up in both copies of the queue, and you can consume while connected to either server (the same message will not be consumed from both).
We have set up a RabbitMQ cluster with 3 nodes. In an effort to have some form of load balancing, we set up the policy to mirror each queue across only 2 of the nodes:
rabbitmqctl set_policy ha-2 "." '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
This works as expected when all 3 nodes are online.
When we shutdown one of the nodes (to simulate a failure) queues mastered on the failed node are still available (on the slave) but not synchronized to another node. If we manually re-apply the policy, the queues then synchronize as expected.
Should we expect that all queues be mirrored in the scenario that one node fails with this policy?
Works as expected in RabbitMQ 3.5.4
I would like to run RabbitMQ highly available queues in a cluster of two RabbitMQ instances on two separate servers. It's not clear to me from the documentation how I can detect which node is considered the master by RabbitMQ, in order to determine which node I should publish messages to and consume from.
Is that something that RabbitMQ resolves internally (so that I can publish and consume through the master even when connected to a slave node), or should the application know the master node for each queue and connect only to it?
RabbitMQ will take care of that. The idea of HA queues is that you publish and consume from either node; operations are routed to the queue's master internally, and RabbitMQ keeps the mirrors in a consistent state.
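If you still want to see where a queue's master currently lives (for monitoring rather than for routing), one option, assuming you have rabbitmqctl access, is to list the queue pids; rabbitmqctl prints them with the owning node's name embedded:
rabbitmqctl list_queues name pid slave_pids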
I have a clustered HA rabbitmq setup. I am using the "exactly" policy similar to:
rabbitmqctl set_policy ha-two "^two\." '{"ha-mode":"exactly","ha-params":10,"ha-sync-mode":"automatic"}'
I have 30 machines running, of which 10 are HA nodes with replicated queues. When my broker goes down (randomly assigned to be the first HA node), I need my Celery workers to point to a new HA node (one of the 9 left). I have a script that automates this. The problem is: I do not know how to distinguish between a regular cluster node and an HA node. When I issue the command:
rabbitmqctl cluster_status
The categories I get are "running nodes", "disc", and "ram", but there is no way to tell from that output whether a node is an HA node.
Any ideas?
In a cluster every node shares everything with the others, so you don't have to distinguish nodes in your application in order to access all entities.
In your case, when one of the HA nodes goes down (leaving 9 of them), the HA queues will be replicated to the first available node (it doesn't matter whether it is a disc or RAM node).
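If you want to check where the mirrors of a given queue actually live (rather than labelling nodes as "HA" up front), a sketch using rabbitmqctl:
rabbitmqctl list_queues name policy slave_pids synchronised_slave_pids
Queues matched by your "exactly" policy will list their mirror pids, and the node names in those pids tell you which cluster members currently hold the replicas.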