How to replicate two RabbitMQ servers - rabbitmq

I have two servers, and RabbitMQ is installed on both of them. How can I replicate these RabbitMQ servers so that when one of them goes down, the other one takes over?
- Server Name: Rabbit1, Node Name: rabbit@rabbit1
- Server Name: Rabbit2, Node Name: rabbit@rabbit2
Right now I use the Rabbit2 server. At the same time, publishers send messages to the Rabbit1 server, but if the Rabbit1 server shuts down, publishers should send messages to the Rabbit2 server and consumers should continue reading from the Rabbit2 server. What should I do to achieve this? Options:
- Two different servers behind a load balancer
- Two different servers in a cluster, where the Rabbit2 server must join the Rabbit1 server's cluster.

You want clustering and/or high availability queues. You can find the basic guides in the RabbitMQ documentation, here:
https://www.rabbitmq.com/clustering.html
and here:
https://www.rabbitmq.com/ha.html
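As a rough sketch (assuming the node names from the question, rabbit@rabbit1 and rabbit@rabbit2, that both machines share the same Erlang cookie, and that classic mirrored queues are acceptable), the clustering option could be set up like this:

# On the Rabbit2 server: stop the app, join Rabbit1's cluster, restart the app
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1
rabbitmqctl start_app

# On either node: mirror every queue across all cluster nodes
rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'

Clustering by itself does not redirect clients: publishers and consumers still have to fail over to the surviving node's address, for example via a load balancer in front of both nodes or a client-side list of hosts.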

Related

Azure VM SQL Availability Group Listener Connection Problem

We are building a two-node SQL Availability Group with SQL Server 2016 SP3.
Steps taken:
1.) Build two VMs in Azure in the same region, but different Zones
2.) Install Windows Failover Cluster on both nodes
3.) Install SQL Server 2016 SP3 on each node
4.) Create a failover cluster with each node and a cloud witness
5.) Enable the failover cluster on the SQL engine service
6.) Create an availability group and add both nodes and a database
7.) Add a listener to the availability group
At this point, we can connect to the listener name if we try from the primary node with SSMS. The DNS entry has been created and assigned the IP address given to the listener.
If I go to node2 and try to connect to the listener name, I get a connection timeout. If I run nslookup, the correct IP is returned.
When I fail over from node1 to node2, the connection to the listener stops working on node1 and starts working on node2.
We have moved node 2 to a separate subnet and still see the same behavior.
I know there are some intricacies with Azure VMs and failover clustering communications, but we have tried the suggestions we have found on this topic.
The only thing we have been hesitant to set up is the standard load balancer.
Does anyone have a direction we can look at next?

Remote workers on multiple servers with python-rq (redis)

I am using redis and python-rq to manage a data processing task. I wish to distribute the data processing across multiple servers (each server would manage several rq workers) but I would like to keep a unique queue on a master server.
Is there a way to achieve this using python-rq ?
Thank you.
It turned out to be easy enough. There are two steps:
1) Configure Redis on the master machine so that it is open to external connections from the remote "agent" server. This is done by editing the bind information as explained in this post. Make sure to set a password if you set the bind value to 0.0.0.0, as this opens the Redis connection to anyone (see the redis.conf sketch after the commands below).
2) Start the worker on the remote "agent" server using the --url parameter:
rq worker --url redis://:[your_master_redis_password]@[your_master_server_IP_address]
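For step 1, a minimal redis.conf sketch (the password is a placeholder):
# Listen on all interfaces so the remote "agent" server can reach Redis
bind 0.0.0.0
# Require a password; binding to 0.0.0.0 exposes Redis to anyone who can reach the port
requirepass your_master_redis_password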
On the master server, you can check that the connection was properly made by typing:
rq info --url redis://:[your_master_redis_password]@localhost
If you enabled the localhost binding, this should display all the workers available to Redis from your "master" including the new worker you created on your remote server.

Multiple redis master replication

Is it possible to have a multi-master redis setup running behind a haproxy / nutcracker?
I want to achieve a setup where, whichever node the proxy routes a request to, that node can handle both reads and writes.
Any help will be much appreciated.
It's not possible with Redis master/slave replication. Redis supports replication from one master to N slaves, and each slave can follow only one master. However, with HAProxy you can configure a TCP proxy in front of any number of Redis nodes to achieve parallel command processing.
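As a rough sketch only (the addresses are placeholders), a TCP listener in haproxy.cfg that spreads client connections across two independent Redis nodes might look like this; note that HAProxy only distributes connections, it does not replicate data between the nodes:
listen redis_tcp
    bind *:6379
    mode tcp
    balance roundrobin
    # each backend is an independent Redis node
    server redis1 10.0.0.11:6379 check
    server redis2 10.0.0.12:6379 check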

RabbitMQ multi server configuration model

I am trying to build a system where multiple servers exchange messages.
I have a server called Master and another server called Slave.
Master sends messages to Slave and Slave sends messages to Master asynchronously.
I have a RabbitMQ server on both machines and use the federation plugin on both of them to receive messages.
So publishers and consumers on each server communicate only with the local RabbitMQ server, and all message exchange between the servers is done only through RabbitMQ.
It works fine when both servers are online.
My requirement is that when there is no network connection between the servers, messages should be accumulated until the connection is back.
This doesn't work with the federation plugin: if the federation link is not active, messages are not stored on the local RabbitMQ.
What should I do to have a model where messages can wait for the connection before being delivered to the other RabbitMQ server?
Do I need to provide more info on my current model?
Here is a simpler description:
RabbitMQ1 has an exchange MASTER. RabbitMQ2 creates a federation link to RabbitMQ1 with permissions on the exchange MASTER.
A publisher writes to exchange MASTER on RabbitMQ1 with routing key 'myqueue'.
A consumer listens on RabbitMQ2 on exchange MASTER and queue 'myqueue'.
If there is a connection, everything works fine.
If there is no connection, messages posted to RabbitMQ1 are not delivered to RabbitMQ2 once the connection is back.
How to solve this?
I found the solution for this. Federation is not the right plugin for this use case.
I used the shovel plugin instead. It does exactly what I need.
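For reference, a dynamic shovel could be declared roughly like this (the URIs, credentials, and queue name are illustrative assumptions); it keeps messages in the local queue on RabbitMQ1 and moves them to RabbitMQ2 whenever the link is available:

# Enable the shovel plugin on the broker that will run the shovel
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management

# On RabbitMQ1: drain the local queue 'myqueue' (bound to exchange MASTER)
# and republish its messages to 'myqueue' on RabbitMQ2; messages accumulate
# locally while RabbitMQ2 is unreachable
rabbitmqctl set_parameter shovel master-to-rabbitmq2 \
  '{"src-uri": "amqp://localhost", "src-queue": "myqueue",
    "dest-uri": "amqp://user:password@rabbitmq2-host", "dest-queue": "myqueue"}'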

how to use master/slave configuration in activemq using apache zookeeper?

I'm trying to configure a master/slave setup using Apache ZooKeeper. I have only 2 application servers, on which I'm running ActiveMQ. As per the tutorial at http://activemq.apache.org/replicated-leveldb-store.html, we should have at least 3 ZooKeeper servers running. Since I have only 2 machines, can I run 2 ZooKeeper servers on one machine and the remaining one on the other? Also, can I run just 2 ZooKeeper servers and 2 ActiveMQ servers on my 2 machines?
I will answer the ZooKeeper parts of the question.
You can run two ZooKeeper nodes on a single server by specifying different port numbers. You can find more details at http://zookeeper.apache.org/doc/r3.2.2/zookeeperStarted.html under the Running Replicated ZooKeeper header.
Remember to use this for testing purposes only, as running two zookeeper nodes on the same server does not help in failure scenarios.
You can have just 2 zookeeper nodes in an ensemble. This is not recommended as it is less fault tolerant. In this case, failure of one zookeeper node makes the zookeeper cluster unavailable since more than half of the nodes in the ensemble should be alive to service requests.
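For illustration, a zoo.cfg along the lines of the Getting Started guide above (host names, ports, and paths are placeholders) could describe two instances on machine A and one on machine B; each instance on machine A needs its own dataDir, clientPort, and peer/election ports, plus a matching myid file in its dataDir:

# shared timing settings
tickTime=2000
initLimit=10
syncLimit=5
# unique per instance on the same machine
# (the second instance on machine A would use its own dataDir and clientPort 2182)
dataDir=/var/lib/zookeeper/node1
clientPort=2181
# the server list is identical in every instance's config:
# two instances on machine A with different peer/election ports, one on machine B
server.1=machineA:2888:3888
server.2=machineA:2889:3889
server.3=machineB:2888:3888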
If you just want a proof of concept of ActiveMQ, one ZooKeeper server is enough:
zkAddress="192.168.1.xxx:2181"
You need at least 3 AMQ servers to validate your HA configuration. Yes, you can create 2 AMQ instances on the same node: http://activemq.apache.org/unix-shell-script.html
bin/activemq create /path/to/brokers/mybroker
Note: don't forget to change the port numbers in the activemq.xml and jetty.xml files.
Note: when stopping one broker, I noticed that all of them stop.
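For reference, each broker's activemq.xml points at the ZooKeeper address through the replicated LevelDB persistence adapter, roughly like this (values are illustrative, following the replicated LevelDB store page linked above):

<persistenceAdapter>
  <!-- replicas = total number of brokers in the group; zkAddress lists the
       ZooKeeper server(s); zkPath must be the same on every broker -->
  <replicatedLevelDB
      directory="activemq-data"
      replicas="3"
      bind="tcp://0.0.0.0:61619"
      zkAddress="192.168.1.xxx:2181"
      zkPath="/activemq/leveldb-stores"
      hostname="broker1"/>
</persistenceAdapter>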