ActiveMQ: How to force failover on the broker side?

I have two brokers, and in a specific scenario I want to force the client to connect to a specific broker.
How can I achieve this without dropping the other broker while using the failover mechanism?

You can use the priorityBackup feature of the failover URI to indicate a preference for a specific broker. The client will try to stay connected to that broker; if it goes down, the client fails over to any other broker you have configured and keeps trying to reconnect to the priority backup in the background.
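As a minimal sketch (the hostnames broker1 and broker2 are placeholders, not from the question), a JMS client that prefers one broker but can fall back to the other could be configured like this:

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PriorityBackupClient {
    public static void main(String[] args) throws Exception {
        // priorityBackup=true keeps trying the first (priority) broker in the
        // background and switches back to it as soon as it is reachable again.
        String url = "failover:(tcp://broker1:61616,tcp://broker2:61616)"
                   + "?randomize=false&priorityBackup=true";
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
    }
}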

I have the same requirement, frequently. For clarity: I use failover with two brokers, A and B. A is currently the primary, and I have an issue requiring a restart. I want all sending clients to connect to B while I leave the consumers to empty the queues on A; when the queues are empty, I restart A.
The only way I've found to do this is to close the ActiveMQ port on A: my sending clients then connect to B, and my consumer on A (running on the same machine, fortunately) can empty the queues. As well as closing the port, it seemed I also had to execute (with 61616 as the broker port)
iptables -I INPUT -p tcp --dport 61616 -j REJECT
YMMV

Related

ActiveMQ replicated LevelDB with ZooKeeper, client must know all brokers?

The client must know all brokers when using the failover transport, right? Like this:
failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
Is there an optimization so that the client does not have to know about the existence of each broker?
Put a TCP load balancer in front of the brokers and only forward requests to the master broker. The LB can check which broker is online by reading the "Slave" attribute of the broker via Jolokia/JMX.
A standalone approach would be to provide a URL to a comma-separated list of broker URLs to try in case of failure. This can be done using the updateURIsURL option in the failover URI, as sketched below.
There are also ways to auto-discover brokers using multicast or by querying an LDAP directory, but those require certain infrastructure to be in place; see the ActiveMQ documentation for details.
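A rough sketch of the updateURIsURL approach, assuming (hypothetically) that the broker list lives in a plain-text file at /etc/activemq/brokers.txt:

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class UpdateUrisClient {
    public static void main(String[] args) throws Exception {
        // /etc/activemq/brokers.txt holds a comma-separated list such as:
        // tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616
        // The failover transport consults that list when it needs to reconnect,
        // so the client only has to know where the list lives.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "failover:(tcp://broker1:61616)"
          + "?updateURIsURL=file:///etc/activemq/brokers.txt");
        Connection connection = factory.createConnection();
        connection.start();
    }
}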

Simulating loss of broker publisher connectivity in ActiveMQ

I wish to run an experiment in which the publisher loses its connection with the broker, enqueues messages in its own local queue, and then, when it regains connectivity, sends all its queued messages to the broker. How can I do this? If I call close on the connection, I can no longer send (it raises an exception). A trick I can think of is to use a network of two brokers and simulate the above by breaking the connection between the two brokers. Is there an API call that I can use to do this?
This is very much like Facebook Messenger or WhatsApp acting as a publisher, enqueuing our to-send messages while we are offline and sending them once we are connected.
There are plenty of solutions you could use to break the connection in order to test this; here is a non-comprehensive list:
Make a script that can set/unset a firewall rule on your environment, blocking the connection port
If you are working with VMs, you can suspend/resume the one running ActiveMQ; you can even automate it with tools like Vagrant (vagrant suspend, then vagrant up)
Tweak the connection manually by accessing the ActiveMQ JMX
Develop an ActiveMQ plugin able to drop connections on demand (or maybe one already exists?)
Now, in order to get the behavior you want, there are two options:
1) Make sure your connection uses failover so it can be re-established, and store your messages on disk before sending them with your producer.
2) Produce to a local broker embedded in your app, and connect that broker to the remote broker (see the sketch below).
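A minimal sketch of that second option, assuming ActiveMQ 5.x is on the classpath; the host "remotehost" and both port numbers are placeholders, not taken from the question:

import org.apache.activemq.broker.BrokerService;

public class EmbeddedStoreAndForward {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("local");
        broker.setPersistent(true);                   // spool messages to disk while offline
        broker.addConnector("tcp://localhost:61617"); // the producer connects here
        // The failover: prefix makes the network bridge reconnect automatically
        // and drain the backlog once the remote broker is reachable again.
        broker.addNetworkConnector("static:(failover:(tcp://remotehost:61616))");
        broker.start();
    }
}

The producer then connects to tcp://localhost:61617 and never sees the remote outage; the embedded broker's network bridge forwards the queued messages when connectivity returns.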

RabbitMQ shovel losing messages (trying to emulate network problems)

I am building a cluster of RabbitMQ servers. I use the shovel plugin to deliver messages from one RabbitMQ to another (on different machines).
It works fine, but I want to test how it behaves when there is no network connection between the servers.
On each server I have a local queue. I push messages to it, and the shovel then delivers each message to the remote RabbitMQ queue.
To emulate network problems I added a firewall rule
iptables -A OUTPUT -d xx.xx.xx.xx -j DROP
to block the connection to the remote server xx.xx.xx.xx.
Then I push a message to the local queue; it disappears from the queue but never arrives at the remote server!
How can that be? Does the shovel check whether the remote queue is available before removing a message from the local queue?
How do I make this work correctly? I want the shovel not to remove a message from the queue until it has ensured the message is delivered to the remote queue.
I have found the solution to my problem.
I changed the settings of the shovel.
There was the option
{ack_mode, on_publish}
I changed it to
{ack_mode, on_confirm}
and it started to work correctly. With on_confirm, the shovel uses publisher confirms and only acknowledges (i.e. removes) a message on the source queue after the destination broker has confirmed receiving it, so messages are no longer lost while the link is down.

Switching back to the primary remote broker after successful failover

We are using Apache ActiveMQ 5.5.
We have a broker (let us call it the main broker) running at tcp://0.0.0.0:61616. This broker does store-and-forward of messages to a remote broker. To do that, we have a network connector from this broker to two remote brokers. We want one of the remote brokers to serve as the primary and the other as the failover. This is the network connector URI that we are using:
static:(failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false)
We are using Spring's DefaultMessageListenerContainer to listen for the messages, with the broker URL
failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false
In the normal scenario, when all the brokers are up and running and a message is sent to the main broker, it is forwarded to server1 and consumed by the listener.
If we stop the broker on server1, failover happens successfully: the messages are forwarded to server2 and successfully consumed by the listener. The problem is that when we bring server1 back up, the messages continue to be forwarded by the main broker to server2. Our requirement is that once server1 is up and running again, the main broker should resume forwarding messages to server1, and the listener should connect back to server1 and consume them. We cannot change randomize to true, because we want only one of server1 or server2 to be active at a time.
Please let me know whether this is possible and how.
You need to set the option "priorityBackup" to true. Your URI will become:
static:(failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false&priorityBackup=true)
This makes server1 (the first in the list of servers) the priority broker. When server1 goes down, the transport will fail over to server2 but will constantly try to reconnect to server1 in the background. Hence, when server1 comes back up, the connection will switch back to it. Note that this option is only available from ActiveMQ 5.6 onward.
The complete details are here:
http://activemq.apache.org/failover-transport-reference.html
There is also an interesting blog here:
http://bsnyderblog.blogspot.com/2010/10/new-features-in-activemq-54-automatic.html
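On the consumer side, a minimal sketch of the Spring wiring with the updated URL might look like the following (the destination name is a placeholder):

import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerWiring {
    public static DefaultMessageListenerContainer build(MessageListener listener) {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
            "failover:(tcp://server1:61617,tcp://server2:61617)"
          + "?randomize=false&priorityBackup=true");
        DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
        dmlc.setConnectionFactory(cf);
        dmlc.setDestinationName("MY.QUEUE");   // placeholder destination
        dmlc.setMessageListener(listener);
        dmlc.afterPropertiesSet();
        dmlc.start();
        return dmlc;
    }
}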

ActiveMQ network of brokers with durable subscription topics

I have a little problem here with my sample JMS layout.
I have two brokers (A, B) on two machines, which are linked via a network connector. The idea is that the producer can send to either broker and the consumer can listen on either broker, and the topic to send to/receive from is available globally.
The topic has two durable subscriber clients (one on each machine), both of which will process all the messages in the topic. I want it to be a durable subscription so that the processes won't lose any workload if one has to be restarted. Both subscriber clients are configured with a failover broker URL, so that they first try to connect to their localhost broker and, if it is not available, to the other one. Failover of the clients seems to work, but I found a problem in the following situation:
Brokers 'A' and 'B' each have a subscriber client connected.
The producer is sending to 'A'.
Broker 'B' gets restarted.
The client of 'B' registers the connection loss and switches to 'A'.
'B' comes up again, and because it had registered itself as a durable subscriber to 'A', it gets the message feed. It now has no active durable subscriber of its own ('A' now has three, including 'B'), so messages pile up on 'B' until it reaches its connection limits.
Is my configuration wrong? Is what I've intended possible?
Cheers,
Kai
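For reference, each subscriber client in a setup like the one described would look roughly as follows (a sketch only; the client ID, subscription name, topic, and hostname are placeholders):

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberClient {
    public static void main(String[] args) throws Exception {
        // Prefer the local broker; fall back to the peer machine.
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
            "failover:(tcp://localhost:61616,tcp://otherhost:61616)?randomize=false");
        Connection connection = cf.createConnection();
        connection.setClientID("client-A");   // a stable client ID is required for durable subscriptions
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("GLOBAL.TOPIC");
        MessageConsumer subscriber = session.createDurableSubscriber(topic, "sub-A");
        subscriber.setMessageListener(message -> System.out.println("processing " + message));
    }
}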
Are you running a master-slave configuration?
Why do you want both brokers to have connected clients at the same time?
If you use a failover connection string (identifying both brokers in it), your consumers/producers will use ActiveMQ's failover implementation and will connect/reconnect to the active node when needed. I don't think having two active instances with active clients is a good idea, unless you are trying to duplicate your processes (in which case there will be no synchronization).
To make both nodes (master and slave) always have the same durable data, you need to persist your messages to the same place, accessible to both nodes. It can be a JDBC adapter connected to a single instance of a database (possibly behind a cluster), or it can be a NAS with a shared network folder for KahaDB, as sketched below.
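A minimal sketch of the shared-storage variant, assuming both nodes can mount the same network folder (the path is a placeholder):

import java.io.File;
import org.apache.activemq.broker.BrokerService;

public class SharedStorageNode {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("sharedNode");
        // Both nodes point at the same shared directory; whichever instance
        // grabs the KahaDB lock first becomes the master, while the other
        // blocks in start() as the slave until the lock is released.
        broker.setDataDirectoryFile(new File("/mnt/shared/kahadb"));
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}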