I have a RabbitMQ cluster with 2 nodes, node A and node B. Node A is up and running. Every time I run the following command on node A I get:
./rabbitmqctl cluster_status
Cluster status of node rabbit@A ...
[{nodes,[{disc,[rabbit@A,rabbit@B]}]},
{running_nodes,[rabbit@A]},
{partitions,[]}]
...done.
Interestingly, node B is up and running. Every time I have it join the other node (A) to get it clustered, it states:
rabbitmqctl join_cluster rabbit@A
...done (already_member).
rabbitmqctl cluster_status
Cluster status of node rabbit#B ...
[{nodes,[{disc,[rabbit@B]}]}]
...done.
So somehow node A cannot see B. And on B, the "already_member" does not seem to be reflected in the cluster_status output...
I can check the queues on both nodes and they are different: node A has dozens of queues and node B has none, so it is clear the cluster is not established. Both nodes can ping each other and nothing gets reported in RabbitMQ's logs.
Any idea why this is not working?
For a cluster I would suggest putting a load balancer in front of it. Also make sure you have already set an HA policy for your cluster.
To set the HA policy:
$ rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
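The policy can also be scoped to a vhost and then verified; a minimal check, assuming a vhost named /myvhost (the vhost name is just an example):
$ rabbitmqctl set_policy -p /myvhost ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
$ rabbitmqctl list_policies -p /myvhost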
More here: RabbitMQ Cluster
I was able to solve this same problem. Given NodeA is the parent and NodeB is trying to join the cluster.
Stop the app on NodeB: rabbitmqctl stop_app
On NodeA, forget the cluster node: rabbitmqctl forget_cluster_node rabbit@NodeB
Reset NodeB: rabbitmqctl reset
Join NodeB to the cluster: rabbitmqctl join_cluster rabbit@NodeA
Start NodeB: rabbitmqctl start_app
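Put together, the full sequence looks like this (assuming the node names rabbit@NodeA and rabbit@NodeB resolve from both hosts):
# on NodeB: stop the app so the node can be forgotten and reset
rabbitmqctl stop_app
# on NodeA: drop the stale membership record for NodeB
rabbitmqctl forget_cluster_node rabbit@NodeB
# back on NodeB: wipe local state, rejoin, and start
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@NodeA
rabbitmqctl start_app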
What kind of health checks can be configured for rabbitmq in openshift?
I wasn't able to run rabbitmqctl as suggested in [1] or [2], as it produces errors when used with the images registry.centos.org/centos/rabbitmq or luiscoms/openshift-rabbitmq.
Example:
$ rabbitmqctl cluster_status
Error: could not recognise command
...
Only root or rabbitmq should run rabbitmqctl
[1] https://github.com/docker-library/rabbitmq/pull/174#issuecomment-452002696
[2] https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/blob/master/examples/k8s_statefulsets/rabbitmq_statefulsets.yaml
There is a clustered RabbitMQ.
There is one consumer connected to it.
root@serwer1:~# rabbitmqctl list_consumers -p vhost1
Listing consumers ...
queue1.dlq <rabbit@serwer1.3.9529.109> amq.ctag-C5lFDLY7LZjnDdi1hjbAIA true 1000 []
...done.
There is a channel connected to it:
root@serwer1:~# rabbitmqctl list_channels vhost pid state connection | grep rabbit@serwer1.3.9529.109
vhost1 <rabbit@serwer1.3.9529.109> running <rabbit@serwer1.3.9537.109>
But there is no connection for it:
root@serwer1:~# rabbitmqctl list_connections vhost pid state | grep rabbit@serwer1.3.9529.109
root@serwer1:~#
How is such a situation possible? How can it be fixed?
(RabbitMQ 3.3.5 from Debian Jessie)
I am new to RabbitMQ and trying to set up a cluster. However, I am getting the following error. The cookie is the same on both machines, in the C:\Windows and C:\Users\<user in context> directories.
rabbitmqctl join_cluster rabbit@node1
Clustering node rabbit@node2 with rabbit@node1 ...
Error: unable to connect to nodes [rabbit@node1]: nodedown

DIAGNOSTICS
===========

attempted to contact: [rabbit@node1]

rabbit@node1:
  * connected to epmd (port 4369) on node1
  * epmd reports node 'rabbit' running on port 25672
  * TCP connection succeeded but Erlang distribution failed
  * suggestion: hostname mismatch?
  * suggestion: is the cookie set correctly?
  * suggestion: is the Erlang distribution using TLS?

current node details:
- node name: 'rabbitmq-cli-552@node1'
- home dir: C:\Users\dataimports
- cookie hash: AWMNITV6TcxGSxvEF6Gndw==
Any help is much appreciated.
Looks like your rabbit@node2 node is looking for a node named rabbit@node1 when the node that exists is rabbitmq-cli-552@node1.
This happens when RabbitMQ is started on install. The best way to get around this is to stop the RabbitMQ process (sudo /etc/init.d/rabbitmq-server stop) and then start it again (sudo /etc/init.d/rabbitmq-server start).
If the first command fails to stop it, you can always kill the RabbitMQ server process and then start it again. The node coming up should have the correct name.
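Since the question mentions copies of the cookie in both C:\Windows and C:\Users\<user>, it is also worth confirming the two files are byte-identical (paths taken from the question; even a trailing newline difference breaks clustering):
:: Windows file compare; both files must match exactly
fc C:\Windows\.erlang.cookie C:\Users\dataimports\.erlang.cookie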
I have a master-slave configuration of RabbitMQ, running as two Docker containers with dynamic internal IPs (they change on every restart).
Clustering works fine on a clean run, but if one of the servers gets restarted it cannot reconnect to the cluster:
rabbitmqctl join_cluster --ram rabbit@master
Clustering node 'rabbit@slave' with 'rabbit@master' ...
Error: {ok,already_member}
And the following:
rabbitmqctl cluster_status
Cluster status of node 'rabbit@slave' ...
[{nodes,[{disc,['rabbit@slave']}]}]
says that the node is not in a cluster.
The only way I found is to remove this node, and only then try to rejoin the cluster, like:
rabbitmqctl -n rabbit@master forget_cluster_node rabbit@slave
rabbitmqctl join_cluster --ram rabbit@master
That works, but it doesn't look right to me. I believe there should be a better way to rejoin a cluster than forgetting the node and joining again. I also see there is an update_cluster_nodes command, but that seems to be something different; I'm not sure if it could help.
What is the correct way to rejoin the cluster on container restart?
I realize that this has been open for a year, but I thought I would answer just in case it might help someone.
I believe that this issue has been resolved in a recent RabbitMQ release.
I implemented a Dockerized RabbitMQ cluster using the RabbitMQ management 3.6.5 image, and my nodes are able to rejoin the cluster automatically on container or Docker host restart.
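As a rough sketch of that kind of setup (container names, hostnames, and the cookie value here are illustrative, not from the original post): pinning the container hostname keeps the Erlang node name stable even when the container's internal IP changes, and a shared cookie lets the nodes talk.
# pin hostnames so node names survive restarts; share one Erlang cookie
docker run -d --name rabbit-master --hostname rabbit-master \
  -e RABBITMQ_ERLANG_COOKIE='secret cookie' rabbitmq:3.6.5-management
docker run -d --name rabbit-slave --hostname rabbit-slave \
  --link rabbit-master -e RABBITMQ_ERLANG_COOKIE='secret cookie' \
  rabbitmq:3.6.5-management
# then, inside the rabbit-slave container:
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@rabbit-master
rabbitmqctl start_app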
I am having issues getting RabbitMQ to cluster.
I boot up two nodes on EC2.
On the first node booted, I do this:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
I boot another node.
sudo service rabbitmq-server stop
# Copy the cookie from the first server booted
sudo su - -c 'echo -n "cookie" > /var/lib/rabbitmq/.erlang.cookie'
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster rabbit@server1
1) server1 is running.
2) What ports need to be open? I have 22, 4369, and 5672.
sudo rabbitmqctl cluster rabbit@aws-rabbit-server-east-development-20121102162143
Clustering node 'rabbit@aws-rabbit-server-east-development-20121103033005' with ['rabbit@aws-rabbit-server-east-development-20121102162143'] ...
Error: {no_running_cluster_nodes,['rabbit@aws-rabbit-server-east-development-20121102162143'],
['rabbit@aws-rabbit-server-east-development-20121102162143']}
What could possibly be missing from their docs, or what am I missing?
I had a similar problem on EC2 with two Windows machines. I eventually got it working, but I'm not sure I did it in the correct way, so there may be a better solution.
The issue I found was that the two nodes could not see each other when trying to cluster. Each time you start a Rabbit node, it seems to be assigned a port dynamically.
This obviously makes it very difficult to know which port to open up in the security group. To solve this, I restricted the range of ports Rabbit chooses from when assigning the port, down to a range of one port on each node, so I always know which port will be assigned.
The easiest way I found to do this was by editing the sbin\rabbitmq-service.bat file.
Find the line -kernel inet_default_connect_options "[{nodelay,true}]" ^
and add the following two lines underneath it:
-kernel inet_dist_listen_min ##### ^
-kernel inet_dist_listen_max ##### ^
replacing ##### with your chosen port number.
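For example, with 25672 as the chosen port (an arbitrary pick for illustration), that section of the file would read:
-kernel inet_default_connect_options "[{nodelay,true}]" ^
-kernel inet_dist_listen_min 25672 ^
-kernel inet_dist_listen_max 25672 ^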
So you should now open up the following ports:
5672 - RabbitMQ’s listening port
4369 - Erlang Port Mapper Daemon
##### - the chosen port number for the Erlang nodes to communicate via
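To confirm the node is actually listening on the pinned port (25672 from the example above), a quick check from a Windows command prompt:
netstat -an | findstr 25672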
Because Erlang does not recognise FQDNs, you may need to modify the hosts file on all the servers to make sure they are all able to resolve each Erlang node name to an IP address, e.g.
123.123.123.111 NODE1
123.123.123.222 NODE2
Once this is done, you should be able to see each node from the other. You can check this by calling the following from the command line (replacing rabbit@NODE2 with whichever node you want to see):
rabbitmqctl status -n rabbit#NODE2
Hope this gives you some help. I'm no expert, but I found this got things working for me!