We have a RabbitMQ cluster with three nodes. Each node has two virtual hosts (Vhost-A and Vhost-B). We need a way to move messages from Vhost-A to Vhost-B. To accomplish this, we set up a shovel directing messages from Exchange-1 on Vhost-A to Exchange-2 on Vhost-B:
rabbitmqctl -p Vhost-A set_parameter shovel shovel-exchange-1-to-vhost-b \
'{"src-uri": "amqp://user@/Vhost-A", "src-exchange": "Exchange-1",
"src-exchange-key": "#", "dest-uri": "amqp://user@/Vhost-B",
"dest-exchange": "Exchange-2", "add-forward-headers": false,
"ack-mode": "on-confirm", "delete-after": "never"}'
This has the side effect of replicating messages on the destination Exchange-2: the test queue we bound to Exchange-2 on Vhost-B receives the same message three times (once from each cluster node).
How can we prevent this? Does it require a change in the shovel configuration or in the cluster configuration?
RabbitMQ version: 3.6.15
UPDATE 1:
We have two exclusive queues that cannot be deleted because they are locked. Those queues disappear once we disable the shovel plugin on all cluster nodes. As soon as we re-enable the plugin on one node, the queues are created again.
It turned out to be a configuration error. While testing the shovel plugin manually, I must have added shovel configurations to the root ('/') virtual host. For some reason, these did not show up in the management console. Using
rabbitmqctl list_parameters
I saw two additional shovel configurations on the root virtual host. After dropping those shovels, the corresponding queues were removed as well. Adding the shovel as described in the question then created exactly one queue, and since then each message is only forwarded once.
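For reference, this is roughly how stray shovels can be inspected and dropped (the shovel name below is a placeholder):
# List parameters on the root vhost, then remove the unwanted shovel by name.
rabbitmqctl -p / list_parameters
rabbitmqctl -p / clear_parameter shovel stray-shovel-name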
Thank you @Olivier for hinting at multiple shovels. That comment put me on the right track.
Related
We have the following setup: an upstream broker federated to a two-node cluster (rabbitmq-1 and rabbitmq-2).
Now, on the upstream side, I see two connections to the cluster: one to rabbitmq-1 and one to rabbitmq-2.
The one to rabbitmq-1 is piling up messages. Note the message count of 413'584.
On the downstream side, on the cluster, I see only the connection to rabbitmq-2.
If I delete the queue to rabbitmq-1, it reappears after some time.
Why are there two queues, and why is the one to rabbitmq-1 not processing any messages?
This happens in the following case:
Your cluster has no name defined. In that case the node name is used as the cluster name.
Your cluster is behind a load balancer which selects a node at random.
You use the load balancer URL to set up the federation upstream. In that case, when a node restarts, the connection is re-established via another node, which reports a different cluster name, and a second federation queue is created.
Solution
The easiest solution is to set the cluster name on any node in the cluster with the following command:
rabbitmqctl set_cluster_name "rabbitmq-cluster"
After that, all nodes in the cluster will return the same name, and no redundant exchanges or queues will be created.
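To verify, the cluster name is included in the status output of every node (a read-only check, nothing is changed by it):
rabbitmqctl cluster_status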
I have a RabbitMQ broker running on two nodes as a cluster. I have observed that if the node where a queue was created goes down, the queue is no longer available on the other node. If I try to publish a message from the other node, it fails. Even if I remove the failed node from the cluster (using the forget_cluster_node command) and try to publish a message from the other node, the behavior is the same.
I don't want to enable mirroring of the queue, for the simple reason that it would replicate the messages, which would put additional load on the network.
Is there a way in RabbitMQ to achieve this?
The behaviour you are experiencing is the default behaviour of RabbitMQ, and it is exactly what is supposed to happen. The node where you created the queue is that queue's home node; if it goes down, any connections to it, and any queues hosted on it, will not work at all. There are two options to resolve this issue.
One option is to have a separate queue for every node, so that any node that wants to receive messages from a particular node can subscribe to that queue's exchange. This is not a very good idea, since you would need to manage a lot of things yourself.
The second option is to always declare the queue before you publish. That way, if the queue is not available, a new queue takes its place; all subscribing nodes can keep listening, and any producer node can keep publishing to it. This resolves the problem of a node going down or being unavailable. From the docs:
before sending we need to make sure the recipient queue exists. If we send a message to a non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
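A minimal sketch of this declare-before-publish pattern using rabbitmqadmin (which ships with the management plugin); the queue name hello and the payload are placeholders:
# Declaring is idempotent, so it is safe to do before every publish.
rabbitmqadmin declare queue name=hello durable=false
# Publish via the default exchange, routed by queue name.
rabbitmqadmin publish exchange=amq.default routing_key=hello payload="hello world"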
RabbitMQ lets you import and export definitions. Definitions are JSON files which contain all broker objects (queues, exchanges, bindings, users, virtual hosts, permissions, and parameters). They do not include the messages in the queues.
You can periodically export the definitions from the node that owns the queue and import them into the other node of the cluster. You have to enable the management plugin for this task.
More information here: https://www.rabbitmq.com/management.html#configuration
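A sketch of such a periodic export/import with rabbitmqadmin (host names and the file name are placeholders; both nodes need the management plugin enabled):
# Export definitions from the node that owns the queues...
rabbitmqadmin -H node-with-queues export definitions.json
# ...and import them on the other node.
rabbitmqadmin -H other-node import definitions.json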
We have set up a RabbitMQ cluster with 3 nodes. In an effort to have some form of load balancing, we set the policy to only sync across 2 of the nodes:
rabbitmqctl set_policy ha-2 . '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
This works as expected when all 3 nodes are online.
When we shut down one of the nodes (to simulate a failure), queues mastered on the failed node are still available (on the slave) but are not synchronized to another node. If we manually re-apply the policy, the queues then synchronize as expected.
Should we expect that all queues be mirrored in the scenario that one node fails with this policy?
Works as expected in RabbitMQ 3.5.4
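In the meantime, re-applying the policy, as described in the question, is the workaround that triggers the sync. If the queue already has an unsynchronised mirror, you can also trigger synchronisation by hand (the queue name below is a placeholder):
rabbitmqctl sync_queue my-queue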
We had a network partition and RabbitMQ ended up in a "split brain" state.
After the cluster recovered, I have a queue that I can't delete. In the management interface the queue is just listed with a "?", and I'm unable to delete it from the management interface or from the command line.
I have tried to remove the node "sh-mq-cl1a-04" from the cluster, but the queue remains in the cluster.
I had a similar issue where I couldn't delete some queues, and the solution listed here worked for me: https://community.pivotal.io/s/article/Queue-cant-be-deleted-or-purged-in-RabbitMQ
I ssh'd onto one of the nodes in my cluster (the one where the queue is hosted is probably best), sudo'd as root, and then ran this command:
rabbitmqctl eval '{ok, Q} = rabbit_amqqueue:lookup(rabbit_misc:r(<<"VHOST">>, queue, <<"QUEUE">>)), rabbit_amqqueue:delete_crashed(Q).'
You'll need to replace VHOST with your virtual host name and QUEUE with your queue name (which I realize might be tricky to figure out in your situation).
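For example, with the default vhost / and a crashed queue named stuck-queue (a made-up name), the call becomes:
rabbitmqctl eval '{ok, Q} = rabbit_amqqueue:lookup(rabbit_misc:r(<<"/">>, queue, <<"stuck-queue">>)), rabbit_amqqueue:delete_crashed(Q).'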
Here is what I am trying to achieve with ActiveMQ:
I'd like to have 2 clusters of brokers: clusterA and clusterB. Data between these 2 clusters should be mirrored. So, when clusterA receives a message, it will be stored in storageA and also forwarded to clusterB (if there is demand) and stored in storageB. On the other hand, if clusterB receives a message, it should be forwarded to clusterA.
I'm wondering whether a config like this is valid for the setup described above:
<networkConnectors>
<networkConnector
uri="static:(failover(tcp://clusterB_broker1:port,tcp://clusterB_broker2:port,tcp://clusterB_broker3:port))"
name="bridge"
duplex="true"
conduitSubscriptions="true"
decreaseNetworkConsumerPriority="false"/>
</networkConnectors>
This is a valid configuration. It indicates (assuming that all clusterA brokers are configured this way) that brokers in clusterA will store and forward first to clusterB_broker1; if it is down, they will instead store and forward to clusterB_broker2, and then to clusterB_broker3 if clusterB_broker2 is also down. But depending on your intra-cluster broker configuration, it may not do what you want.
The broker configurations must be set up for failover themselves, or else you will lose messages when clusterB_broker1 goes down. If the clusterB brokers are not working together as described below, then when clusterB_broker1 goes down, any messages already sent to it will not be present or accessible on the other clusterB brokers; only new messages will be forwarded to them.
How to do failover within the cluster depends on your ActiveMQ version.
The latest version (5.9.0) supports 3 failover (or master/slave) cluster configurations. For quick reference, they are listed below, with a configuration sketch after the list:
Shared File System Master Slave
JDBC Master Slave
Replicated LevelDB Store
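For example, the Shared File System Master Slave variant only requires every broker in a cluster to point its persistence store at the same shared directory; the first broker to acquire the file lock becomes master (the path below is a placeholder):
<persistenceAdapter>
  <!-- All brokers in the cluster use the same shared (e.g. NFS/SAN) directory;
       the lock holder acts as master, the others wait as slaves. -->
  <kahaDB directory="/shared/activemq/kahadb"/>
</persistenceAdapter>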
Earlier versions supported a master/slave configuration with one master and one slave node, where messages were forwarded to the slave broker. This setup was not well maintained, had bugs, and has been removed from ActiveMQ.