ActiveMQ doesn't propagate topics on failover reconnection

I'm attempting to use a network of brokers that bridges two LANs over a duplex WAN connector:
There are actually many subscribers in our setup, each connecting to a different "Broker A", if that makes sense. All of the Broker A instances have their own connections to a single "Broker B".
Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
When I initially bring everything online, regardless of the order in which I bring things online, communication between the publisher and subscriber works flawlessly. I've had the system up-and-running for weeks at a time without issue.
What I've observed is that if broker C is restarted, no new topics that show up in broker B ever appear in broker C. New topics are still appearing in broker B as they are created by the subscriber(s). Neither existing nor new topics ever propagate across the WAN to broker C. I've verified this using jconsole.
If I restart broker B, the problem goes away immediately. The topics contained in broker B (according to jconsole) are the same as they were prior to restart, but now they've magically appeared in C.
Brokers B and C have the same configuration (shown below). The only difference is that B creates a duplex network connector to C using the following code:
final NetworkConnector wanNC = new DiscoveryNetworkConnector(
new URI(String.format("static:(failover:(tcp://%s:%d))", parentNode, port)));
wanNC.setCheckDuplicateMessagesOnDuplex(true);
wanNC.setDecreaseNetworkConsumerPriority(true);
wanNC.setDuplex(true);
wanNC.setName(NetworkUtils.getHostName());
wanNC.setNetworkTTL(10);
wanNC.setSuppressDuplicateTopicSubscriptions(false);
broker.addNetworkConnector(wanNC);
broker.xml:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" />
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="${broker.id}" start="false"
offlineDurableSubscriberTimeout="5000" offlineDurableSubscriberTaskSchedule="5000"
persistent="false" useJmx="true" schedulePeriodForDestinationPurge="86400000">
[...]
<networkConnectors>
<networkConnector name="z-broker-${broker.id}-x-nc"
decreaseNetworkConsumerPriority="true"
networkTTL="10"
uri="multicast://225.5.5.5:6555?group=TO_X">
<excludedDestinations>
<topic physicalName="X.A" />
</excludedDestinations>
</networkConnector>
<networkConnector name="z-broker-${broker.id}-y-nc"
decreaseNetworkConsumerPriority="true"
networkTTL="10"
uri="multicast://225.5.5.5:6555?group=TO_Y">
<excludedDestinations>
<topic physicalName="X.B.>" />
</excludedDestinations>
</networkConnector>
</networkConnectors>
<transportConnectors>
<transportConnector name="openwire"
uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
discoveryUri="multicast://225.5.5.5:6555?group=TO_Z" />
</transportConnectors>
</broker>
</beans>
Why don't topics from broker B (existing or new) ever show up in broker C?
Why does restarting broker B solve the issue immediately?

Apparently the trick was changing the network connector URI from
static:(failover:(tcp://<ip>:<port>))
to
static:(tcp://<ip>:<port>)
I didn't need failover transport for any reason since the connection is intended as a network bridge and there's a single remote address.
For whatever reason, using failover prevented topics from propagating on reconnect.
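In declarative broker.xml terms, the equivalent change would look like the sketch below (the host name is a placeholder; the other attributes mirror the Java code above):

<networkConnectors>
    <!-- Before: failover-wrapped URI that prevented topic propagation on reconnect -->
    <!-- uri="static:(failover:(tcp://broker-c.example.com:61616))" -->

    <!-- After: a plain static URI; the static transport itself retries the single remote -->
    <networkConnector name="wan-nc"
        uri="static:(tcp://broker-c.example.com:61616)"
        duplex="true"
        networkTTL="10"
        decreaseNetworkConsumerPriority="true"
        checkDuplicateMessagesOnDuplex="true" />
</networkConnectors>

Since the bridge targets exactly one remote address, the failover wrapper adds nothing except its own reconnect semantics, which is what appeared to interfere here.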

Related

ActiveMQ classic to ActiveMQ Artemis failover does not work

I'm trying to migrate from ActiveMQ "Classic" to ActiveMQ Artemis.
We have a cluster of 2 active nodes that we try to migrate without impacting the consumers and producers. To do so, we stop the first node, migrate it, start it and do the same on the 2nd when the first is back up.
We are observing that the consumers/producers are not able to reconnect:
o.a.a.t.f.FailoverTransport | | Failed to connect to [tcp://172.17.233.92:63616?soTimeout=30000&soWriteTimeout=30000&keepAlive=true, tcp://172.17.233.93:63616?soTimeout=30000&soWriteTimeout=30000&keepAlive=true] after: 30 attempt(s) continuing to retry.
Consumers/producers are able to connect after we have restarted them.
Is this normal behavior?
Here is the ActiveMQ Artemis broker configuration:
<connectors>
<connector name="netty-connector">tcp://172.17.233.92:63616</connector>
<connector name="server_0">tcp://172.17.233.93:63616</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">tcp://172.17.233.92:63616?protocols=OPENWIRE</acceptor>
<acceptor name="invm">vm://0</acceptor>
</acceptors>
<cluster-connections>
<cluster-connection name="cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>server_0</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
And here is the ActiveMQ Classic configuration:
<!-- Transport protocol -->
<transportConnectors>
<transportConnector name="openwire"
uri="nio://172.17.233.92:63616?transport.soTimeout=15000&amp;transport.threadName&amp;keepAlive=true&amp;transport.soWriteTimeout=15000&amp;wireFormat.maxInactivityDuration=0"
enableStatusMonitor="true" rebalanceClusterClients="true" updateClusterClients="true" updateClusterClientsOnRemove="true" />
</transportConnectors>
<!-- Network of brokers setup -->
<networkConnectors>
<!-- we need conduit subscriptions for topics, but not for queues -->
<networkConnector name="NC_topic" duplex="false" conduitSubscriptions="true" networkTTL="1" uri="static:(tcp://172.17.233.92:63616,tcp://172.17.233.93:63616)" decreaseNetworkConsumerPriority="true" suppressDuplicateTopicSubscriptions="true" dynamicOnly="true">
<excludedDestinations>
<queue physicalName=">" />
</excludedDestinations>
</networkConnector>
<!-- we need conduit subscriptions for topics, but not for queues -->
<networkConnector name="NC_queue" duplex="false" conduitSubscriptions="false" networkTTL="1" uri="static:(tcp://172.17.233.92:63616,tcp://172.17.233.93:63616)" decreaseNetworkConsumerPriority="true" suppressDuplicateQueueSubscriptions="true" dynamicOnly="true">
<excludedDestinations>
<topic physicalName=">" />
</excludedDestinations>
</networkConnector>
</networkConnectors>
This issue is likely due to updateClusterClientsOnRemove: when true, the broker updates clients whenever a broker is removed from the cluster (see the broker-side options for failover).
When the first node is stopped, the clients remove it from their URI list and never add it back, because the second node (still ActiveMQ Classic) cannot connect to the first node (now ActiveMQ Artemis).
In the end, we decided to stop both nodes first, then upgrade and restart them. That implies an interruption from the consumer/producer point of view, but all subscriptions are re-established properly after the restart.
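If a rolling migration had to be attempted again, one option (untested in this scenario, and offered only as a sketch) would be to disable the cluster-client updates on the Classic broker before starting the migration, so that clients keep their full static URI list even while one node is down:

<transportConnectors>
    <!-- Keep the clients' failover URI list static during the migration window -->
    <transportConnector name="openwire"
        uri="nio://172.17.233.92:63616?transport.soTimeout=15000&amp;keepAlive=true"
        rebalanceClusterClients="false"
        updateClusterClients="false"
        updateClusterClientsOnRemove="false" />
</transportConnectors>

Whether this is acceptable depends on how much you rely on automatic client rebalancing in normal operation.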

In ActiveMQ, is there a way to send a copy of message from one queue to another queue in a remote broker?

Problem statement: There are two queues in two different brokers, each with one consumer. The producer drops messages on the first queue. We want to send a copy of each message to the second queue. For visualization:
                   Producer
                         |
Broker1 --> Queue1 --> Consumer1
                       | (copy)
Broker2 --> Queue2 --> Consumer2 (consumes same message as Consumer1 but is independent of Consumer1)
The requirements are:
Only one queue is created in each broker. (I have achieved the above with four queues, but I'm looking for a more optimized solution.)
Prefer that no topics be used.
To be done only through ActiveMQ-provided configuration.
What I have done so far:
I managed to do the above with four queues.
In Broker1, Queue1 forwards a copy of each message to a virtual destination queue, and a network connector sends the messages from that virtual destination to Broker2.
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<compositeQueue name="Queue1" forwardOnly="false">
<forwardTo>
<queue physicalName="IntermediateQueue"/>
</forwardTo>
</compositeQueue>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
<networkConnectors>
<networkConnector
name="Q:broker1->broker2"
uri="static:(tcp://localhost:31616)"
duplex="false"
staticBridge="true">
<staticallyIncludedDestinations>
<queue physicalName="IntermediateQueue"/>
</staticallyIncludedDestinations>
</networkConnector>
</networkConnectors>
In Broker2, all messages received on the intermediate queue are forwarded to the actual destination queue.
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<compositeQueue name="IntermediateQueue">
<forwardTo>
<queue physicalName="FinalDestinationQueue" />
</forwardTo>
</compositeQueue>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
I'd appreciate any help, as going through the ActiveMQ documentation and forums didn't yield a more optimized answer to this problem.
You are essentially re-creating pub+sub and then adding in a transmission-queue pattern for multi-broker integration. There are valid use cases to do this and your approach is valid and within the intended design of Composite Destinations and Network Connectors. The trade-off in this approach is the heavy administration and configuration management that is required.
I understand you prefer not to use topics. However, you may consider looking at Virtual Topics, which solve this problem in an elegant way and allow you to add new consumers dynamically, without having to modify the broker configuration.
Producer send to Topic:
topic://VT.ORDER.EVENT
Consumer(s) read from special named Queues
clientA: queue://VQ.CLIENTA.VT.ORDER.EVENT
clientB: queue://VQ.CLIENTB.VT.ORDER.EVENT
ref: Virtual Topics
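A minimal interceptor configuration matching the naming scheme above might look like this (the VT./VQ. prefixes are the illustrative names from the example, not ActiveMQ defaults, which are VirtualTopic.> and Consumer.*.):

<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <!-- Messages sent to any topic matching VT.> are fanned out into
                 per-consumer queues named VQ.<client>.<topic> -->
            <virtualTopic name="VT.>" prefix="VQ.*." selectorAware="false"/>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>

Each consumer then simply subscribes to its own queue (e.g. queue://VQ.CLIENTA.VT.ORDER.EVENT) and gets an independent copy of every message, with normal queue semantics such as buffering while the consumer is offline.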

ActiveMQ messages getting stuck in networked broker setup

I have two ActiveMQ brokers in a network setup. The clients are configured with randomize=true and are able to connect fine. However, the messages do not get forwarded from one broker to the other and remain in the queue. For example, I have a particular queue which has multiple producers and one consumer. If I look at the queue on the broker to which the one consumer is connected to, all messages are dequeued immediately. However, on the other broker messages get queued and do not get drained.
Listed below are my networkConnectors and transportConnectors setup for the two brokers. I have tried adding duplex="true" as well as changing the networkTTL to 1 and those didn't seem to make any difference.
BrokerA:
<networkConnectors>
<networkConnector name="LocalBrokerToB"
networkTTL="2"
uri="static:(tcp://hostnameB:61617)"/>
</networkConnectors>
<transportConnectors>
<transportConnector name="nioConnectorFront" uri="nio://hostnameA:61616?maximumConnections=1024"/>
<transportConnector name="nioConnectorBack" uri="tcp://hostnameA:61617?maximumConnections=1024"/>
</transportConnectors>
BrokerB:
<networkConnectors>
<networkConnector name="LocalBrokerToA"
networkTTL="2"
uri="static:(tcp://hostnameA:61617)"/>
</networkConnectors>
<transportConnectors>
<transportConnector name="nioConnectorFront" uri="nio://hostnameB:61616?maximumConnections=1024"/>
<transportConnector name="nioConnectorBack" uri="tcp://hostnameB:61617?maximumConnections=1024"/>
</transportConnectors>
</transportConnectors>
Any ideas on what could be the problem? An example configuration that someone has working would be a great help.
You should connect the networkConnector to the transport connector of the other broker. That is port 61616 in your example, not 61617.
You should verify in the broker logs or via Web Console / JMX that the network connection actually gets established.
Adding duplex="true" lets one of the brokers initiate the connection, which is great in case of firewalls, etc. In your case, that should not matter.
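Applied to BrokerA's configuration above, the corrected connector would be (hostname taken from the question; the point is only the port change):

<networkConnectors>
    <!-- Target the port a transportConnector on BrokerB actually listens on -->
    <networkConnector name="LocalBrokerToB"
        networkTTL="2"
        uri="static:(tcp://hostnameB:61616)"/>
</networkConnectors>

After changing it, check the broker log for a line confirming the network bridge was established before re-testing message flow.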

Maintain order of messages while forwarding messages between two ActiveMQ brokers

I have an ActiveMQ setup where a source broker living in one data center forwards all messages arriving on certain topics to a destination broker in another data center. The consumer application consumes messages only from the destination broker. (This setup is mainly to ensure fast and efficient forwarding of messages between the two data centers.)
The configuration for forwarding looks something like this:
<networkConnectors>
<networkConnector name="Q:DontForwardQueueMessages"
uri="static:(tcp://destination-broker.example.com:61616)"
duplex="false" decreaseNetworkConsumerPriority="true" networkTTL="2"
dynamicOnly="true">
<excludedDestinations>
<queue physicalName=">" />
</excludedDestinations>
</networkConnector>
<networkConnector name="T:ForwardSampleMessages"
uri="static:(tcp://destination-broker.example.com:61616)"
duplex="false" decreaseNetworkConsumerPriority="true" networkTTL="2"
dynamicOnly="true">
<excludedDestinations>
<topic physicalName=">" />
</excludedDestinations>
<staticallyIncludedDestinations>
<topic physicalName="SampleTopic1" />
<topic physicalName="SampleTopic2" />
<topic physicalName="SampleTopic3" />
<topic physicalName="SampleTopic4" />
</staticallyIncludedDestinations>
</networkConnector>
</networkConnectors>
Our application needs message order to be maintained. However, we are losing messages when the destination broker goes down. Messages arriving at the source broker pile up in the topic, but do not get forwarded when the connection with the destination broker is re-established. However, messages arriving after re-connection are forwarded as usual.
I'm looking for a way I can configure the setup so that:
All messages waiting at the source are sent as soon as the destination is re-connected, maintaining the correct order,
Messages arriving after re-connection wait for older messages to be forwarded before they are forwarded.
It looks like it was a poor design choice to have messages forwarded from a Topic. As per the ActiveMQ documentation:
Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message.
The destination broker acts like a subscriber to the source topic from which messages are being forwarded. So, when messages arrive in the source topic in the absence of a subscriber (destination disconnected), they are not available to anyone.
As a solution, I changed the design:
Remove the Virtual Destination configuration in the destination broker
Add the same Virtual Destination configuration in the source broker (so now messages are distributed into their respective queues right here)
Add networkConnector rules to the source broker to forward messages in these queues to corresponding queues on the destination broker.
Now since messages at the source are in a queue, they will be consumed in the order in which they were received, and no messages are lost, even if the brokers are disconnected from each other.
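A sketch of the redesigned source-broker configuration (topic and queue names are illustrative, and only one topic is shown; repeat the pattern for the others):

<!-- On the source broker: fan each topic out into a local queue... -->
<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <compositeTopic name="SampleTopic1" forwardOnly="false">
                <forwardTo>
                    <queue physicalName="Forward.SampleTopic1"/>
                </forwardTo>
            </compositeTopic>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>

<!-- ...then bridge the queues, which buffer messages while the destination is down -->
<networkConnectors>
    <networkConnector name="Q:ForwardBufferedMessages"
        uri="static:(tcp://destination-broker.example.com:61616)"
        staticBridge="true">
        <staticallyIncludedDestinations>
            <queue physicalName="Forward.SampleTopic1"/>
        </staticallyIncludedDestinations>
    </networkConnector>
</networkConnectors>

Because a queue retains messages with no active consumer, the bridge drains it in FIFO order once the destination reconnects, which is exactly the ordering guarantee the topic-based setup could not provide.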

Unable to start AMQ broker with persistent store on an NFSv3 share

I've been struggling to start an AMQ broker node with a persistent store on an NFSv3 share.
I keep getting the error below, complaining of unavailable locks.
I've made sure that all Java processes are killed and the lock file on the shared folder is deleted before starting the AMQ master broker.
When I start AMQ, it creates a lock file on the shared folder and then complains of unavailable locks.
Loading message broker from: xbean:activemq.xml
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#73cf56e9: startup date [Mon Dec 23 05:28:23 UTC 2013]; root of context hierarchy
INFO | PListStore:[/home/pnarayan/apache-activemq-5.9.0/activemq-data/notificationsBroker/tmp_storage] started
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/home/y/share/nfs/amqnfs]
INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
INFO | Database /home/y/share/nfs/amqnfs/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: No locks available
Below is the activemq xml configuration file I'm using:
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<broker
xmlns="http://activemq.apache.org/schema/core"
xmlns:spring="http://www.springframework.org/schema/beans"
brokerName="notificationsBroker"
useJmx="true"
start="true"
persistent="true"
useShutdownHook="false"
deleteAllMessagesOnStartup="false">
<persistenceAdapter>
<kahaDB directory="/home/y/share/nfs/amqnfs" />
</persistenceAdapter>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
</broker>
</beans>
Is this because I'm using NFSv3 rather than NFSv4, as recommended by AMQ?
I believe the issue with NFSv3 is that it cannot clean up the lock if the broker process dies abruptly. However, that shouldn't prevent the broker from starting. If my understanding is right, why am I observing the above error?
You are absolutely right: NFSv3 does not clean up its locks properly. When using KahaDB, the broker creates a file at $ACTIVEMQ_DATA/lock. If that file exists, chances are that something has a hold on it (or at least NFSv3 thinks it does) and the broker will be blocked. Check whether the file is there, and if so, use the lsof command to determine the process ID of its holder.
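That check can be scripted along these lines (the lock path comes from the kahaDB directory in the question; lsof must be installed, and a stale NFSv3 lock may be held server-side where no local process shows up):

```shell
# Hypothetical check: LOCK is the KahaDB lock file path from the config above.
LOCK=${LOCK:-/home/y/share/nfs/amqnfs/lock}

if [ -e "$LOCK" ]; then
    echo "lock file present; local holder (if any):"
    # lsof lists any local process that still has the file open
    lsof "$LOCK" || echo "no local process holds $LOCK (a stale NFS lock may remain on the server)"
else
    echo "no lock file at $LOCK"
fi
```

If no local process holds the file yet the broker still reports "No locks available", the stale lock lives on the NFS server, which is the NFSv3 cleanup problem described above.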