I am trying to build a system in which multiple servers exchange messages.
I have a server called Master and another server called Slave.
Master sends messages to Slave, and Slave sends messages to Master, asynchronously.
I run a RabbitMQ server on both machines and use the federation plugin on both of them to pass messages across.
So publishers and consumers on each machine communicate only with their local RabbitMQ server, and all message exchange between the servers is done only through RabbitMQ.
It works fine when both servers are online.
My requirement is that when there is no network connection between the servers, messages should be accumulated until the connection comes back.
And that doesn't work with the federation plugin: if the federation link is not active, messages are not stored on the local RabbitMQ.
What should I do to get a model where messages wait for the connection and are then delivered to the other RabbitMQ server?
Do I need to provide more info on my current model?
Here is a simpler description:
RabbitMQ1 has an exchange MASTER. RabbitMQ2 has a federation upstream linking to RabbitMQ1, with permissions granted on the exchange MASTER.
A publisher writes to exchange MASTER on RabbitMQ1 with routing key 'myqueue'.
A consumer listens on RabbitMQ2 on exchange MASTER and queue 'myqueue'.
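For reference, the setup can be reproduced roughly like this (a sketch only; hostnames and credentials are placeholders):

# on both machines: enable the federation plugin
rabbitmq-plugins enable rabbitmq_federation rabbitmq_federation_management

# on RabbitMQ2 (the downstream): define an upstream pointing at RabbitMQ1
rabbitmqctl set_parameter federation-upstream master-upstream '{"uri":"amqp://user:password@rabbitmq1"}'

# federate the MASTER exchange from that upstream
rabbitmqctl set_policy federate-master "^MASTER$" '{"federation-upstream":"master-upstream"}' --apply-to exchanges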
If there is a connection, everything works fine; if there is no connection, messages posted to RabbitMQ1 are not delivered to RabbitMQ2 even after the connection comes back.
How to solve this?
I found the solution for this. Federation is not the right plugin for this scenario.
I used the shovel plugin instead. It does exactly what I need.
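For example, a dynamic shovel on RabbitMQ1 that drains the local queue and republishes to RabbitMQ2 (names taken from the description above; the destination URI is a placeholder):

# on RabbitMQ1: enable the shovel plugin
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management

# consume the local queue 'myqueue' and publish to exchange MASTER on RabbitMQ2
rabbitmqctl set_parameter shovel master-to-slave '{"src-uri":"amqp://","src-queue":"myqueue","dest-uri":"amqp://user:password@rabbitmq2","dest-exchange":"MASTER"}'

While the link is down, messages simply accumulate in the local queue on RabbitMQ1; the shovel reconnects and forwards them once the network is back.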
There are three nodes in a RabbitMQ cluster, set up as follows.
Within the cluster there are two queues, q1 and q2.
The master replicas of q1 and q2 live on different nodes, and each queue is mirrored to the other nodes.
There is a load balancer in front of the three nodes.
AMQP (node port 5672) and the Management HTTP API (node port 15672) are exposed through the load balancer.
When the application establishes a connection through the load balancer, it reaches a random RabbitMQ node behind it, and this is invisible to the application.
Questions:
Is it OK for the application to consume both queues on a single AMQP channel over a single connection, no matter which RabbitMQ node it reaches?
Is it OK for the application to call the Management HTTP API no matter which RabbitMQ node its request hits?
When RabbitMQ is set up as a cluster and your queues are mirrored across the nodes, it doesn't matter which node you are connected to, because operations on a queue are automatically routed to the node holding that queue's master; RabbitMQ handles this internally. So if a request to publish to or consume from queue q1 comes in, it is routed to the node where q1's master lives.
Answers to your questions:
It is not advisable to consume more than one queue over a single AMQP connection: an exception raised by one consuming process may cause the connection to close, which will interrupt the other one.
It is OK for the application to call the Management HTTP API no matter which RabbitMQ node its request hits. Once the management plugin is enabled across the RabbitMQ cluster, all nodes accept Management HTTP API requests.
Reference: https://www.rabbitmq.com/clustering.html
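To illustrate the second point, a call like this works regardless of which node the load balancer picks (hostname and credentials are placeholders):

curl -u guest:guest http://load-balancer:15672/api/queues

Any node serves the cluster-wide view, so the response lists q1 and q2 wherever their masters happen to live.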
I have a cluster of 2 RabbitMQ nodes (each running RabbitMQ 3.6.10 with the MQTT plugin enabled) and an AWS Classic Load Balancer in front of them. The server and clients exchange MQTT messages.
Clients (apps running on mobile devices, using the Eclipse Paho client library) connect to the load balancer, which distributes connections in round-robin fashion.
When I bring down one node, say Node1, all clients that were connected to Node1 get a callback indicating the connection to the broker was lost.
These clients try to reconnect, but the connection attempt fails, indicating the broker is not reachable.
From the AWS console I can see that the ELB detects that Node1 is down and marks it as "OutOfService".
Connection requests from new clients are routed to the "InService" node, Node2; however, connection requests from existing clients that were previously connected to Node1 always fail!
The ELB is configured with an idle timeout of 180 seconds. Enabling or disabling connection draining in the ELB made no difference.
Is there any specific configuration to make the ELB forget that the existing clients were connected to Node1 and allow them to connect to Node2?
I tried adding the following HA policy:
rabbitmqctl set_policy ha-mqtt "^mqtt" '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
With this policy in place, all queues created for MQTT clients were mirrored. Now when Node1 is down, connection attempts from existing client IDs also get routed to the other active node!
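To check which queues the policy matched, something like this can be run on either node (MQTT subscription queues are named with an 'mqtt-subscription-' prefix, which is why the '^mqtt' pattern catches them):

rabbitmqctl list_queues name policy

Each MQTT client's queue should show ha-mqtt in the policy column.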
This makes me wonder: what is the relationship between MQTT client IDs and their connections to broker nodes? I thought mirroring queues was necessary only to retain and access messages that had not yet been acknowledged when the queue's master node went down. But without mirroring, the clients were not even able to establish a connection!
Is it possible to use federations or shovels to mirror the creation of exchanges and queues on one server to another?
All the examples I've seen of shovels and federations use exchanges and queues that already exist on the servers. What I want to do is create an exchange on server A and have a federation or shovel re-create it on server B, then start sending messages to it.
If this cannot be done with a federation or shovel, is there any way of achieving it without clustering? The connection between the two servers is not reliable, so clustering isn't possible.
I'm running RabbitMQ on Windows.
You can use the federation plugin.
It supports both exchange federation and queue federation. To mirror queues and exchanges you can configure a policy (using the management console or the command line), for example with these parameters:
Name: my_policy
Pattern: ^mirr\.  (match exchanges and queues whose names start with the prefix "mirr.")
Definition: federation-upstream-set: all
You can apply the policy to exchanges and queues alike, and the policy pattern supports regular expressions.
This way, every exchange or queue, new or old, whose name starts with the prefix "mirr." will be mirrored to the other broker.
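Concretely, on the downstream broker (server B here) this could look like the following; since the question mentions Windows, use rabbitmqctl.bat there (quoting rules differ), and treat the URI as a placeholder:

rabbitmqctl set_parameter federation-upstream my_upstream '{"uri":"amqp://user:password@serverA"}'
rabbitmqctl set_policy my_policy "^mirr\." '{"federation-upstream-set":"all"}' --apply-to all

Every upstream defined this way automatically belongs to the implicit upstream set 'all', which is what the policy definition refers to.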
I think this could solve your problem.
Unfortunately it is not possible to do it this way, because the connection is a point-to-point connection: you have to link an exchange with a specific remote exchange, and in your topology that link cannot be created automatically.
I had this problem in the past as well, and I resolved it on the business-logic side. Whenever a new exchange/queue was needed "on the fly", my data input gateway recognized this and created the new exchange and queues, along with the link, on both the local and the remote broker before the message was sent to RabbitMQ.
We are using Apache ActiveMQ 5.5.
We have a broker (let us call it Main Broker) running at tcp://0.0.0.0:61616. This broker does store-and-forward of messages to a remote broker. To do that, we have a network connector from this broker to two remote brokers. We want one of the remote brokers to serve as the primary and the other as failover. This is the network connector URI we are using:
static:(failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false)
We are using Spring's DefaultMessageListenerContainer to listen for the messages, with this broker URL:
failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false
In the normal scenario, when all the brokers are up and a message is sent to the Main Broker, it is forwarded to server1 and consumed by the listener.
If we stop the broker on server1, failover happens successfully: the messages are forwarded to server2 and consumed by the listener. The problem is that when we bring server1 back up, the Main Broker keeps forwarding messages to server2. Our requirement is that once server1 is up and running again, the Main Broker should resume forwarding messages to server1, and the listener should connect back to server1 and consume them there. We cannot change randomize to true, because we want only one of server1 or server2 to be active at a time.
Please let me know whether this is possible, and how.
You need to set the option priorityBackup to true. Your URI will become:
static:(failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false&priorityBackup=true)
This will make server1 (the first in the list of servers) the priority broker. When server1 goes down, the connection fails over to server2, but it constantly tries to reconnect to server1; hence, when server1 comes back up, it switches back. Note that this option is only available from version 5.6 onwards.
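In the Main Broker's activemq.xml, the network connector would then look roughly like this (a sketch; note that & must be escaped as &amp; inside the XML attribute):

<networkConnectors>
  <networkConnector name="to-remote" uri="static:(failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false&amp;priorityBackup=true)"/>
</networkConnectors>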
The complete details are here:
http://activemq.apache.org/failover-transport-reference.html
There is also an interesting blog here:
http://bsnyderblog.blogspot.com/2010/10/new-features-in-activemq-54-automatic.html
I have a little problem here with my sample JMS layout.
I have two brokers (A and B) on two machines, which are linked via a network connector. The idea is that a producer can send to either broker and a consumer can listen on either broker, with the topic to send to/receive from available globally.
The topic has two durable subscriber clients (one on each machine) that both process all the messages in the topic. I want it to be a durable subscription so that the processes won't lose any workload if one of them has to be restarted. Both subscriber clients are configured with a failover broker URL, so that they first try to connect to their local broker and, if it is not available, to the other one. Failover of the clients seems to work, but I found a problem in the following situation:
Brokers A and B each have a subscriber client connected, and the producer is sending to A. Broker B gets restarted. B's client registers the connection loss and switches over to A. B comes up again, and because it had registered itself as a durable subscriber on A, it receives the message feed. B now has no active durable subscriber of its own (A now has three, including B), so messages pile up until B reaches its connection limits.
Is my configuration wrong? Is what I've intended possible?
Cheers,
Kai
Are you running a master-slave configuration?
Why do you want both brokers to have connected clients at the same time?
If you use a failover connection string (identifying both brokers in it), your consumers/producers will use ActiveMQ's failover implementation and will connect/reconnect to the active node when needed. I don't think having two active instances with active clients is a good idea, unless you are trying to duplicate your processes (in which case there will be no synchronization).
To make both nodes (master and slave) always have the same durable data, you need to persist your messages to a single place accessible to both nodes. That can be a JDBC adapter connected to a single database instance (possibly itself clustered), or a NAS with a shared network folder for KahaDB.
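For the shared-KahaDB variant, the persistence adapter in activemq.xml on both brokers could point at the shared folder, roughly like this (the path is a placeholder):

<persistenceAdapter>
  <kahaDB directory="/mnt/shared/activemq/kahadb"/>
</persistenceAdapter>

Whichever broker grabs the lock on that directory becomes the master; the other waits as the slave.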