ActiveMQ failover: will the primary node recover its consumers?

I am new to ActiveMQ. I have configured two ActiveMQ servers and I am using them with the failover transport. They work fine: if one ActiveMQ instance goes down, the other picks up the queues. My problem is that when the main server comes back up, the consumers do not move back to it. Is there any configuration or protocol that makes consumers return to the main server once it is up again?
Currently my configuration is:
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true"/>
</transportConnectors>
<networkConnectors>
  <networkConnector uri="static:(tcp://192.168.0.122:61616)"
      networkTTL="3"
      prefetchSize="1"
      decreaseNetworkConsumerPriority="true" />
</networkConnectors>
and my connection URI is:
failover:(tcp://${ipaddress.master}:61616,tcp://${ipaddress.backup}:61616)?randomize=false
I also want to send an email whenever a failover occurs, so that I know when ActiveMQ is down.

What you have configured there is not a true HA deployment, but a network of brokers. If you have two brokers configured in a network, each has its own message store, which at any time contains a partial set of messages (see how networks of brokers work).
The behaviour you would likely expect to see is that if one broker falls over, the other takes its place, resuming from where the failed one left off (with all of the undelivered messages the failed broker held). For that you need a (preferably shared-storage) master-slave configuration.
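As a minimal sketch of the shared-file-system variant (the broker name and mount path here are assumptions), both brokers use an identical persistence adapter pointing at the same shared directory. The broker that grabs the file lock becomes master; the other blocks as slave and takes over the complete store if the master dies:
<!-- Identical on both brokers; /mnt/shared is an assumed shared mount (e.g. SAN or NFSv4). -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <persistenceAdapter>
    <kahaDB directory="/mnt/shared/activemq/kahadb"/>
  </persistenceAdapter>
  ...
</broker>
Clients keep the same failover:(...) URI as before; whichever broker holds the lock is the one accepting connections.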

I have done that, and I am posting the solution in case anyone else has the same problem.
This feature is available in ActiveMQ 5.6. Adding priorityBackup=true to the connection URL is the key: it tells the consumer to come back to the primary node whenever it is available.
My new connection URI is:
failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true
See http://activemq.apache.org/failover-transport-reference.html for more details.

Related

ActiveMQ Messages appear to be dequeued from one broker but not arrive at other in Network of Brokers

I have a network of brokers: messages are produced from a Java app onto a small local broker in the field, which has a static network destination for the queue pointing at a central broker (in AWS); that central broker has a consumer sitting across all the queues from the remote brokers.
If there is any interruption in the connection, the remote broker reconnects successfully. The issue is that in some cases, after the remote broker reconnects, the queue on the remote broker appears to rack up its dequeue count, yet the central broker to which it is meant to be forwarding shows no corresponding increase in its enqueue count for that queue.
The messages enqueued at the remote end have Persistent = YES, priority = 4. If I manually add a message on the broker with Persistent = YES, the same behaviour occurs. If I set Persistent = NO, the message successfully reaches the other end. If I restart the broker, the persistent messages flow again (although the ones it thought it had successfully sent are lost).
What situation could cause this loss of messages, and is there a configuration option that can be tweaked to fix this?
The configuration for the remote brokers is a standard AMQ install, with the following network connector defined:
<networkConnectors>
  <networkConnector
      uri="${X.remoteMsgURI}"
      staticBridge="true"
      dynamicOnly="false"
      alwaysSyncSend="true"
      userName="${X.remoteUsername}"
      password="${X.remotePassword}">
    <staticallyIncludedDestinations>
      <queue physicalName="cloud.${X.environment}.queue.${X.queuebase}"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
The connection string for the static remote is:
remoteMsgURI=static:(failover:(ssl://X-1.mq.eu-west-2.amazonaws.com:61617?wireFormat.maxInactivityDuration=30000,ssl://X-2.amazonaws.com:61617?wireFormat.maxInactivityDuration=30000))

Why does this ActiveMQ broker configuration fail after many brokers are added?

Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
Some terminology:
We have headless Java applications (services) that use a broker to communicate with the web server. They use Java Plugin Framework (JPF). Each service connects to its own personal broker and subscribes to a single topic that is unique to that service (a "service-specific" topic).
This broker has a network connector that automatically connects to all local web instance brokers (typically only 1 of them per LAN).
This broker has a network connector that automatically connects to all local root broker instances (typically only 1 of them per LAN).
Only the web servers ever publish messages to this topic.
Services don't send messages intended for one another.
The web server acts as a front end for the services. It uses Spring Boot. Each web server connects to its own personal broker and subscribes to a single global topic that is shared among all web servers; if a message is sent to that topic, all web servers receive it.
Only the services ever publish messages to this topic.
Web servers don't send messages intended for one another.
This broker has a network connector that automatically connects to all local root broker instances (typically only 1 of them per LAN).
The root broker is a glorified Java application that launches an ActiveMQ broker. It uses Spring Boot. The root brokers don't publish or subscribe; they merely act as a bridge between LANs.
Each root broker connects to its parent broker; this is done in Java code rather than XML.
Settings: duplex=true, checkDuplicateMessagesOnDuplex=true, suppressDuplicateTopicSubscriptions=false, networkTTL=10
I followed this guide in configuring all of the brokers: http://activemq.apache.org/networks-of-brokers.html
[Diagram of the intended architecture omitted.]
What we're observing is that after a certain number of services come online, messages stop flowing between the service and web application instances, even if they're on the same LAN and directly connected to one another. We can see the producers creating the messages (in the web application logs), but the consumers never receive the network data (verified using Wireshark). I can't tell whether or not the brokers are sending the messages to the wrong location. The correct topics show up in our JMX MBeans when we view a running instance using jconsole.
There are no errors/warnings from any of the JVMs.
One observation we've made is that adding a new web server and service in a different discovery group appears to work with no problem. They have no communication issues whatsoever, so we believe this is a broker configuration issue.
service-broker.xml:
<!-- Connects to root broker and web broker -->
<networkConnectors>
  <networkConnector name="service-${broker.id}-broker-nc" duplex="true" networkTTL="10"
      checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
      uri="multicast://225.5.5.5:6555?group=GROUP_BROKER&amp;maxReconnectAttempts=1&amp;joinNetworkInterface=${broker.netInterface}" />
  <networkConnector name="service-${broker.id}-web-nc" duplex="true" networkTTL="10"
      checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
      uri="multicast://225.5.5.5:6555?group=GROUP_WEB&amp;maxReconnectAttempts=1&amp;joinNetworkInterface=${broker.netInterface}" />
</networkConnectors>
<!-- Don't advertise the broker (only connection should be from localhost) -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600" />
</transportConnectors>
web-broker.xml:
<!-- Connect to root broker -->
<networkConnectors>
  <networkConnector name="web-${broker.id}-broker-nc" duplex="true" networkTTL="10"
      checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
      uri="multicast://225.5.5.5:6555?group=GROUP_BROKER&amp;maxReconnectAttempts=1" />
</networkConnectors>
<!-- Advertise web broker (service will connect to this) -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
      discoveryUri="multicast://225.5.5.5:6555?group=GROUP_WEB" />
</transportConnectors>
root-broker.xml:
<!-- Advertise root broker (service and web will connect to this) -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
      discoveryUri="multicast://225.5.5.5:6555?group=GROUP_BROKER" />
</transportConnectors>
Do the configurations shown above support the architecture shown above?
Do you see any other problems/pitfalls?

ActiveMQ forwarding bridge with failover

Here is what I try to achieve with ActiveMQ:
I'd like to have 2 clusters of brokers: clusterA and clusterB. Data between these 2 clusters should be mirrored. So, when clusterA receives a message, it is stored in storageA and the message is also forwarded to clusterB (if there is demand) and stored in storageB. Conversely, if clusterB receives a message, it is forwarded to clusterA.
I'm wondering whether a config like this is valid for what is described above:
<networkConnectors>
  <networkConnector
      uri="static:(failover:(tcp://clusterB_broker1:port,tcp://clusterB_broker2:port,tcp://clusterB_broker3:port))"
      name="bridge"
      duplex="true"
      conduitSubscriptions="true"
      decreaseNetworkConsumerPriority="false"/>
</networkConnectors>
This is a valid configuration. It indicates (assuming that all ClusterA brokers are configured this way) that brokers in ClusterA will store and forward first to clusterB_broker1, and if it is down will instead store and forward to clusterB_broker2, and then to clusterB_broker3 if clusterB_broker2 is down. But depending on your intra-cluster broker configuration, it is not going to do what you want it to.
The broker configurations must themselves be set up for failover, or else you will lose messages when clusterB_broker1 goes down. If the clusterB brokers are not working together as described below, then when clusterB_broker1 goes down, any messages already sent to it will not be present or accessible on the other clusterB brokers; only new messages will be forwarded to them.
How to do failover within the cluster depends on your ActiveMQ version.
The latest version (5.9.0) supports 3 failover (or master/slave) cluster configurations. For quick reference, they are:
Shared File System Master Slave
JDBC Master Slave
Replicated LevelDB Store
Earlier versions supported a master/slave configuration that had one master and one slave node where messages were forwarded to the slave broker. This setup was not well maintained, had bugs, and has been removed from ActiveMQ.
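For illustration, a sketch of the JDBC master/slave option (the bean id, driver, and connection details below are assumptions): every broker in clusterB points at the same shared database, and whichever broker acquires the database lock runs as master while the others wait, so the bridge from clusterA always reaches a broker backed by the complete message store.
<!-- Identical on every clusterB broker; only brokerName differs. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="clusterB_broker1">
  <persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#shared-ds"/>
  </persistenceAdapter>
</broker>

<!-- Hypothetical shared database; all clusterB brokers must use the same one. -->
<bean id="shared-ds" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
  <property name="url" value="jdbc:mysql://dbhost/activemq?relaxAutoCommit=true"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
</bean>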

Configuring a duplex connector for linking with Apollo Broker

I have an Apollo broker configured as a STOMP server. Now I want to configure an ActiveMQ broker that links to the Apollo broker and enables message propagation in both directions.
That is, I want the Apollo broker and the ActiveMQ broker to work both as consumers and producers.
Will this networkConnector configuration at the ActiveMQ broker meet my requirement?
<networkConnectors>
  <networkConnector name="linkToApolloBroker"
      uri="static:(stomp://apollo_broker_ip:61000)"
      networkTTL="3"
      duplex="true" />
</networkConnectors>
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/dynamic-broker1/kahadb"/>
</persistenceAdapter>
...
<transportConnectors>
  <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
</transportConnectors>
Actually, I need Apollo to provide services for the web while passing messages to and from the ActiveMQ broker. If I have 2 brokers talking to each other, their local clients can have direct access to the locally persisted queues and, to an extent, remain immune to network fluctuations.
There is no Network of Brokers interoperability between ActiveMQ and Apollo, so the configuration you have won't work. You might, however, be able to configure a bridge between the two using the JMS Bridge feature of ActiveMQ, since Apollo does support openwire.
Have a look at the JMS to JMS bridge documentation.
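A minimal sketch of what such a bridge could look like on the ActiveMQ side (the queue name, bean id, and Apollo openwire port are assumptions, not taken from your setup):
<!-- Hypothetical JMS-to-JMS queue bridge from this broker to Apollo. -->
<jmsBridgeConnectors>
  <jmsQueueConnector outboundQueueConnectionFactory="#apolloFactory">
    <outboundQueueBridges>
      <!-- Forwards the local queue to the same-named queue on Apollo. -->
      <outboundQueueBridge outboundQueueName="web.requests"/>
    </outboundQueueBridges>
  </jmsQueueConnector>
</jmsBridgeConnectors>

<!-- JMS connection factory for Apollo; assumes Apollo accepts openwire on this port. -->
<bean id="apolloFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://apollo_broker_ip:61616"/>
</bean>
Because the bridge acts as a plain JMS client on each side, local producers and consumers keep using their local, persisted queues, which gives the immunity to network fluctuations you describe.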
Apache Camel is also a potential solution to your problem. You can probably create a Camel route that does what you want.
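As a sketch of that idea (component ids, ports, and queue names are assumptions), two JMS components in a Spring XML Camel context can shuttle messages between the brokers; a second route in the opposite direction gives the duplex behaviour:
<!-- Hypothetical Camel bridge: one component per broker, one route per direction. -->
<bean id="amqLocal" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<bean id="amqApollo" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <property name="brokerURL" value="tcp://apollo_broker_ip:61616"/>
</bean>

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="amqLocal:queue:to.apollo"/>
    <to uri="amqApollo:queue:to.apollo"/>
  </route>
  <route>
    <from uri="amqApollo:queue:to.activemq"/>
    <to uri="amqLocal:queue:to.activemq"/>
  </route>
</camelContext>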

Active MQ - Network of Brokers

I have configured network of brokers with the topology as below.
Producer(P1) connected to Broker(B1) and Producer(P2) connected to Broker(B2)
Broker(B1) and Broker(B2) are connected as a network of brokers and are load balancing
Consumer(C1) connected to Broker(B1) and Consumer(C2) connected to Broker(B2)
Clients are configured to use the failover as:
Consumer-1 = failover:(tcp://localhost:61616,tcp://localhost:61615)?randomize=false
Consumer-2 = failover:(tcp://localhost:61615,tcp://localhost:61616)?randomize=false
Once Channel-2 goes down, P2 and C2 shift to Channel-1, which is the desired failover behaviour.
I want to understand the behaviour when Channel-2 comes back.
I have noticed that it is only Channel-1 that continues to serve all the connections, even after Channel-2 has recovered, so load balancing between the channels is lost.
I want to know whether it is possible that, once Channel-2 is back, load balancing starts again automatically between the channels, with Producer-2 and Consumer-2 shifting back to Channel-2, thus giving full load balancing and full failover.
I came across the article 'Combining Fault Tolerance with Load Balancing' at http://fusesource.com/docs/broker/5.4/clustering/index.html. Is this the recommended way to combine fault tolerance and load balancing?
On both of your brokers, you need to set up your transportConnector to enable updateClusterClients and rebalanceClusterClients.
<transportConnectors>
  <transportConnector name="tcp-connector" uri="tcp://192.168.0.23:61616" updateClusterClients="true" rebalanceClusterClients="true" />
</transportConnectors>
Specifically, you want rebalanceClusterClients. The docs at http://activemq.apache.org/failover-transport-reference.html state:
if true, connected clients will be asked to rebalance across a cluster
of brokers when a new broker joins the network of brokers
You must be using ActiveMQ 5.4 or greater to have these options available.
As an answer to your follow-up question:
"Is there a way of logging the broker URI as discussed in the article?"
In order to show which client is connected to which broker, modify the client's Log4j configuration as follows:
<log4j:configuration debug="true"
xmlns:log4j="http://jakarta.apache.org/log4j/">
...
<logger name="org.apache.activemq.transport.failover.FailoverTransport">
<level value="debug"/>
</logger>
...
</log4j:configuration>