We had a network partition and RabbitMQ ended up in a "split brain".
After the cluster recovered, I have a queue that I can't delete. In the management interface the queue is just listed with "?", and I'm unable to delete it from the management interface or from the command line.
I have tried to remove the node "sh-mq-cl1a-04" from the cluster, but the queue remains in the cluster.
I had a similar issue where I couldn't delete some queues, and the solution listed here worked for me: https://community.pivotal.io/s/article/Queue-cant-be-deleted-or-purged-in-RabbitMQ
I ssh'd onto one of the nodes in my cluster (the one where the queue is hosted is probably best), sudo'd as root, and then ran this command:
rabbitmqctl eval '{ok, Q} = rabbit_amqqueue:lookup(rabbit_misc:r(<<"VHOST">>, queue, <<"QUEUE">>)), rabbit_amqqueue:delete_crashed(Q).'
You'll need to replace VHOST with your virtual host name, and QUEUE with your queue name (which I realize might be tricky to figure out in your situation).
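If you're not sure which queue is in the bad state, listing queues with their state can help narrow it down; this is a generic diagnostic, not part of the original fix:
rabbitmqctl -p VHOST list_queues name state
Crashed or stopped queues should show a state other than running.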
We have the following setup: a federation Upstream node feeding a two-node Cluster consisting of rabbitmq-1 and rabbitmq-2.
Now, on the Upstream side, I see two connections to the Cluster: one to rabbitmq-1 and one to rabbitmq-2.
The one to rabbitmq-1 is piling up messages. Note the message count of 413'584.
In the downstream, on the Cluster, I see only the connection to rabbitmq-2.
If I delete the queue for rabbitmq-1, it reappears after some time.
Why are there two queues, and why is the one for rabbitmq-1 not processing any messages?
This happens in the following case:
Your cluster has no name defined. In that case the node name is used as the cluster name.
Your cluster is behind a load balancer which selects a node randomly.
You use the load balancer URL to set up the federation upstream. In that case, when a node restarts, the connection is re-established through a different node, which reports a different name, so a second federation link and queue appear.
Solution
The easiest solution is to set the cluster name on any node in the cluster with the following command:
rabbitmqctl set_cluster_name "rabbitmq-cluster"
After that, all nodes in the cluster will return the same name and no redundant exchanges or queues will be created.
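You can verify the change with standard tooling; every node should now report the same cluster name in its status output:
rabbitmqctl cluster_status
With a stable cluster name, the federation link keeps the same identity no matter which node the load balancer picks.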
We have a RabbitMQ cluster with three nodes configured. Each node has two virtual hosts (Vhost-A and Vhost-B). We need a way to move messages from Vhost-A to Vhost-B. To accomplish this, we set up a shovel directing messages from Exchange-1 on Vhost-A to Exchange-2 on Vhost-B.
rabbitmqctl -p Vhost-A set_parameter shovel shovel-exchange-1-to-vhost-b \
  '{"src-uri": "amqp://user@/Vhost-A", "src-exchange": "Exchange-1",
    "src-exchange-key": "#", "dest-uri": "amqp://user@/Vhost-B",
    "dest-exchange": "Exchange-2", "add-forward-headers": false,
    "ack-mode": "on-confirm", "delete-after": "never"}'
This has the side effect of duplicating messages on the destination Exchange-2: the test queue we bound to Exchange-2 on Vhost-B receives the same message three times (once from each cluster node).
How can we prevent this? Does it require a change in the shovel configuration or in the cluster configuration?
RabbitMQ version: 3.6.15
UPDATE 1:
We have two exclusive queues that cannot be deleted because they are locked. Those queues disappear once we disable the shovel plugin on all cluster nodes. As soon as we re-enable the plugin on one node, the queues are created again.
It seems to have been a configuration error. I tested the shovel plugin manually and I must have added shovel configurations to the root ('/') virtual host. For some reason, these did not show up in the management console. Using
rabbitmqctl list_parameters
I saw two additional shovel configurations on the root virtual host. After dropping those shovels, the corresponding queues were removed as well. Adding the shovel as described in the question then created a single shovel and a single queue. Since then, each message is only forwarded once.
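For reference, the cleanup amounted to listing and then clearing the stray parameters (the shovel name below is illustrative):
rabbitmqctl -p / list_parameters
rabbitmqctl -p / clear_parameter shovel old-test-shovel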
Thank you @Olivier for hinting at multiple shovels. That comment put me on the right track.
I have a RabbitMQ broker running on two nodes as a cluster. I have observed that if the node where a queue was created goes down, the queue is no longer available on the other node, and publishing a message from the other node fails. Even if I remove the failed node from the cluster (using the forget_cluster_node command) and try to publish from the remaining node, the behavior is the same.
I don't want to enable mirroring of the queue, for the simple reason that it would replicate messages and put additional load on the inter-node network.
Is there a way in RabbitMQ to achieve this?
The behaviour you are experiencing is the default behaviour of RabbitMQ, and it is exactly what is supposed to happen. The node where you created the queue is its home node; if that node goes down, any connection, queue or exchange tied to it will not work at all. There are two options to resolve this issue.
One option is to have a separate queue for every node, so that any node that wants to receive messages from a particular node can subscribe to that queue's exchange. This is not a very good idea, since you would have to manage a lot of things for it.
The second option is to always declare the queue before you publish. That way, if the queue is not available, a fresh queue takes its place, all subscribed nodes can keep listening, and any producer can keep publishing to it. This resolves the problem of a node going down or being unavailable. From the docs:
before sending we need to make sure the recipient queue exists. If we send a message to non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
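A minimal sketch of that declare-before-publish pattern using the management CLI (the queue name and payload are illustrative; in application code you would issue the equivalent queue.declare before publishing):
rabbitmqadmin declare queue name=hello durable=true
rabbitmqadmin publish exchange=amq.default routing_key=hello payload="hello world"
Declaring a queue that already exists with the same properties is a harmless no-op, so it is safe to do on every publish path.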
RabbitMQ lets you import and export definitions. Definitions are JSON files which contain all broker objects (queues, exchanges, bindings, users, virtual hosts, permissions and parameters). They do not include the messages held in queues.
You can export the definitions of the node that owns the queue and import them on the other cluster node periodically. You have to enable the management plugin for this task.
More information here: https://www.rabbitmq.com/management.html#configuration
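A rough sketch of such an export/import cycle (host names are illustrative; rabbitmqadmin ships with the management plugin):
rabbitmqadmin -H node-that-owns-the-queue export /tmp/definitions.json
rabbitmqadmin -H other-node import /tmp/definitions.json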
I set up a lab for RabbitMQ HA using a cluster and mirrored queues.
I am using CentOS 7 and rabbitmq-server 3.3.5, with three servers (ha1, ha2, ha3).
I joined ha1 and ha2 to ha3, but did not set a policy for queue mirroring. When I create a queue named "hello" on ha1 and then check ha2 and ha3 using rabbitmqctl list_queues, the hello queue exists on every node of the cluster.
My question: since I did not set a mirroring policy on the cluster, why has the queue automatically appeared on every node?
Please advise whether I have made a mistake, or whether simply joining nodes into a cluster mirrors queues across all of them. Thanks
In RabbitMQ, by default, a queue is stored on only one node. When you create a cluster, the queue becomes visible and reachable from all nodes.
But that doesn't mean the queue is mirrored: if its home node goes down, the queue is marked as down and you can't access it.
Suppose you create a queue on one node: the queue works as long as that node is up. If the node goes down, the queue is shown as down and is unusable.
You should always apply a mirroring policy; otherwise you could lose messages.
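A minimal example of such a policy, mirroring every queue across all nodes (the policy name and pattern are illustrative):
rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'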
We have a couple of crusty AWS hosts running a RabbitMQ implementation in a cluster. We need to upgrade the hardware, and therefore we developed a Chef cookbook to spawn replacement servers.
One thing that we would rather not recreate by hand is the admin users, the queues, etc.
What is the best method to get that stuff from the old hosts to the new ones? I believe it's everything that lives in the /var/lib/rabbitmq/mnesia directory.
Is it wise to copy the files from one host to another?
Is there a programmatic means to do this?
Can it be coded into our Chef cookbook?
You can definitely export and import configuration via the command line: https://www.rabbitmq.com/management-cli.html
I'm not sure about the admin users, though.
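The same export/import is also available through the management HTTP API, which is convenient to call from a Chef recipe (host names and credentials below are illustrative; exported definitions do include users, with password hashes):
curl -u admin:password http://old-host:15672/api/definitions > definitions.json
curl -u admin:password -H "Content-Type: application/json" -X POST -d @definitions.json http://new-host:15672/api/definitions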
If you create new RabbitMQ nodes on your new hardware and join them to the existing cluster, you will get all the users on the new nodes. This is easy to try:
run a docker container with a RabbitMQ image (with the management plugin) and create a user
run another container and add that node to the cluster of the first one
kill RabbitMQ on the first one, or delete the docker container, and you will see that you still have the newly created user on the 2nd (but now master) node
I mention docker since it's faster to create a cluster this way, but if you already have a cluster you could use that for testing if you prefer.
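A rough sketch of that docker experiment (container names, network and cookie value are all illustrative):
docker network create rmq-test
docker run -d --net rmq-test --hostname rmq1 --name rmq1 -e RABBITMQ_ERLANG_COOKIE=secretcookie rabbitmq:3-management
docker run -d --net rmq-test --hostname rmq2 --name rmq2 -e RABBITMQ_ERLANG_COOKIE=secretcookie rabbitmq:3-management
docker exec rmq1 rabbitmqctl add_user alice s3cret
docker exec rmq2 rabbitmqctl stop_app
docker exec rmq2 rabbitmqctl join_cluster rabbit@rmq1
docker exec rmq2 rabbitmqctl start_app
docker rm -f rmq1
docker exec rmq2 rabbitmqctl list_users
The last command should still list alice even though the first node is gone.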
For the queues and exchanges, I don't want to quote almost everything found in the RabbitMQ documentation page on high availability, but I will just say that you have to pay attention to the following:
exclusive queues, because they are gone once the client connection is gone
queue mirroring (if you have any set up; if not, it would be wise to consider it, if not outright necessary)
I would do the migration gradually, waiting for the queues to empty and then killing off the nodes on the old hardware. It may be doable in a big-bang fashion, but that seems riskier. If you have a running system, then set up queue mirroring and try to find an appropriate moment to do a manual sync - but be careful, this has a huge impact on broker performance.
Additionally, there is the shovel plugin (I have to point out that I did not use or even explore it), but that may be another way to go, since (quoting from the link):
In essence, a shovel is a simple pump. Each shovel:
connects to the source broker and the destination broker, consumes messages from the queue, re-publishes each message to the destination broker (using, by default, the original exchange name and routing_key).