RabbitMQ queue being removed when node stopped - rabbitmq

I have created two RabbitMQ nodes (say A and B) and I have clustered them. I then did the following in the management UI:
(note that node A is initially the master)
On node A I created a queue (durable=true, auto-delete=false) and can see it shared on node B
Stopped node A, I can still see it on B (great)
Started node A again
Stopped node B, the queue has been removed from node A
This seems strange, as node B was not even involved in the creation of the queue.
I then tried the same from node B:
On node B I created a queue (durable=true, auto-delete=false) and can see it shared on node A
Stopped node A, I can still see it on B (great)
Started node A again
Stopped node B, the queue has been removed from node A
The situation I am looking for is that, no matter which node is stopped, the queue is still available on the other node.
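For reference, the stop/start steps above can also be driven from the command line; this is only a minimal sketch, assuming the nodes are reachable as rabbit@nodeA and rabbit@nodeB (placeholder names), with the queue itself still created via the management UI:
rabbitmqctl -n rabbit@nodeA stop_app      # stop the broker application on node A
rabbitmqctl -n rabbit@nodeB list_queues   # queue should still be listed on node B
rabbitmqctl -n rabbit@nodeA start_app     # bring node A back
rabbitmqctl -n rabbit@nodeB stop_app      # stop node B
rabbitmqctl -n rabbit@nodeA list_queues   # this is where the queue disappears in my case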

I just noticed that the policies I set up have been removed from each node... no idea why. Just in case somebody else is having the same issue, you can create policies using (e.g.):
rabbitmqctl set_policy ha-all "^com\.mydomain\." '{"ha-mode":"all","ha-sync-mode":"automatic"}'
It's immediately noticeable in the RabbitMQ Web UI as you can see the policy on the queue definition (in this case "ha-all").
See https://www.rabbitmq.com/ha.html for creating policies, and
see the Policy Management section of http://www.rabbitmq.com/man/rabbitmqctl.1.man.html for administering them.
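To double-check from the command line that the policy exists and has been applied to your queues (these are standard rabbitmqctl commands; run them on any node in the cluster):
rabbitmqctl list_policies                 # shows the ha-all policy with its pattern and definition
rabbitmqctl list_queues name policy       # each matching queue should show "ha-all" in the policy column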

Related

Apache Ignite : NODE_LEFT event

I wanted to understand how a NODE_LEFT event is triggered for an Apache Ignite grid.
Do nodes keep pinging each other constantly to find out if other nodes are present, or do they ping each other only when required?
If a ping from a client node is not successful, can that also trigger a NODE_LEFT event, or can it only be triggered by a server node?
Once a node has left, which node triggers the topology update event, i.e. PME (partition map exchange)? Can it be triggered by a client node, or can only server nodes trigger it?
Yes, nodes ping each other to verify the connection. Here is a more detailed explanation of how a node failure happens. You might also check this video.
The final decision to fail a node (remove it from the cluster) is made on the coordinator node, which issues a special event that has to be acked by the other nodes (NODE_FAILED).
A node might also leave a cluster explicitly by sending a TcpDiscoveryNodeLeftMessage (i.e. triggering a NODE_LEFT event), for example when you stop it gracefully.
Only the coordinator node can change the topology version, meaning that a PME always starts on the coordinator and spreads to the other nodes afterwards.

Is it possible to connect to other RabbitMQ nodes when one node is down?

The environment I have consists of two separate servers, each running the RabbitMQ service. They are correctly clustered and the queues are mirrored correctly.
Node A is master
Node B is slave
My question is more specifically about when Node A goes down but Service A is still up. Node B and Service B are also still up. At this point, Node B is promoted to master. When an application connects to Node B, it connects okay, of course.
rabbitmqctl cluster_status on Node B shows cluster is up with two nodes and Node B is running. rabbitmqctl cluster_status on Node A shows node is down. This is expected behavior.
Is it possible for an application to connect to Node A and be able to publish/pop queue items as normal?

HA RabbitMQ without setting a mirror policy

I set up a lab for RabbitMQ HA using a cluster and mirrored queues.
I am using CentOS 7 and rabbitmq-server 3.3.5 with three servers (ha1, ha2, ha3).
I have just joined ha1 and ha2 to ha3, but did not set a policy for queue mirroring. When I created a test queue named "hello" on the ha1 server and then checked ha2 and ha3 using rabbitmqctl list_queues, the hello queue existed on all nodes in the cluster.
My question is: why, even though I did not set a mirroring policy on the cluster, does the queue appear to be automatically mirrored on every node in the cluster?
Please advise whether I have made a mistake, or whether simply joining nodes into a cluster means queues are mirrored on all nodes of the cluster. Thanks
In RabbitMQ, by default, a queue is stored on only one node. When you create a cluster, the queue becomes reachable from all nodes.
But that doesn't mean the queue is mirrored: if the node holding it goes down, the queue is marked as down and you can't access it.
Suppose you create a queue on one node: the queue will work as long as that node is up. If that node goes down, the queue becomes unavailable.
You should always apply the mirror policy, otherwise you could lose messages.
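As a sketch of what that looks like for the "hello" queue from the question (ha-hello is just an example policy name, and the pattern here mirrors only queues named exactly "hello"):
rabbitmqctl set_policy ha-hello "^hello$" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
rabbitmqctl list_queues name policy slave_pids   # slave_pids is populated once the queue is mirrored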

RabbitMQ policy synchronisation of queues across a cluster

We have set up a RabbitMQ cluster with 3 nodes. In an effort to have some form of load balancing, we set up the policy to sync across only 2 of the nodes:
rabbitmqctl set_policy ha-2 . '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
This works as expected when all 3 nodes are online.
When we shut down one of the nodes (to simulate a failure), queues mastered on the failed node are still available (on the slave) but are not synchronized to another node. If we manually re-apply the policy, the queues then synchronize as expected.
Should we expect all queues to remain mirrored in the scenario where one node fails with this policy?
Works as expected in RabbitMQ 3.5.4
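On the affected earlier versions, a possible check-and-workaround sketch using standard rabbitmqctl commands (the second line simply re-applies the policy shown in the question, which is what forces the missing mirror to be created):
rabbitmqctl list_queues name policy slave_pids synchronised_slave_pids   # spot queues that have lost a mirror
rabbitmqctl set_policy ha-2 . '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'   # re-apply to trigger re-mirroring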

Couchbase 2.5, 2 nodes with 1 replica: when 1 node fails, the service is no longer available

We are testing Couchbase with a two-node cluster with one replica.
When we stop the service on one node, the other one does not respond until we restart the service or manually fail over the stopped node.
Is there a way to maintain the service from the good node when one node is temporarily unavailable?
If a node goes down, then in order to activate the replicas on the other node you will need to manually fail it over. If you want this to happen automatically, you can enable auto-failover, but in order to use that feature I'm pretty sure you must have at least a three-node cluster. When you want to add the failed node back, you can just re-add it to the cluster and rebalance.
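For reference, a rough sketch of the corresponding couchbase-cli calls; host names and credentials here are placeholders, and flag names may differ slightly between Couchbase versions, so check the documentation for your release:
couchbase-cli failover -c goodnode:8091 -u Administrator -p password --server-failover=failednode:8091    # manual failover of the stopped node
couchbase-cli setting-autofailover -c goodnode:8091 -u Administrator -p password --enable-auto-failover=1 --auto-failover-timeout=30    # enable auto-failover (minimum timeout is 30s)
couchbase-cli server-readd -c goodnode:8091 -u Administrator -p password --server-add=failednode:8091    # re-add the recovered node
couchbase-cli rebalance -c goodnode:8091 -u Administrator -p password    # rebalance the cluster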