ActiveMQ messages getting stuck in networked broker setup

I have two ActiveMQ brokers in a network setup. The clients are configured with randomize=true and are able to connect fine. However, messages do not get forwarded from one broker to the other and remain in the queue. For example, I have a particular queue with multiple producers and one consumer. If I look at the queue on the broker to which the single consumer is connected, all messages are dequeued immediately. However, on the other broker messages get queued up and are never drained.
Listed below is the networkConnector and transportConnector setup for the two brokers. I have tried adding duplex="true" as well as changing the networkTTL to 1, and neither seemed to make any difference.
BrokerA:
<networkConnectors>
    <networkConnector name="LocalBrokerToB"
                      networkTTL="2"
                      uri="static:(tcp://hostnameB:61617)"/>
</networkConnectors>
<transportConnectors>
    <transportConnector name="nioConnectorFront" uri="nio://hostnameA:61616?maximumConnections=1024"/>
    <transportConnector name="nioConnectorBack" uri="tcp://hostnameA:61617?maximumConnections=1024"/>
</transportConnectors>
BrokerB:
<networkConnectors>
    <networkConnector name="LocalBrokerToA"
                      networkTTL="2"
                      uri="static:(tcp://hostnameA:61617)"/>
</networkConnectors>
<transportConnectors>
    <transportConnector name="nioConnectorFront" uri="nio://hostnameB:61616?maximumConnections=1024"/>
    <transportConnector name="nioConnectorBack" uri="tcp://hostnameB:61617?maximumConnections=1024"/>
</transportConnectors>
Any ideas on what could be the problem? An example configuration that someone has working would be a great help.

You should connect the networkConnector to the transport connector of the other broker; that is port 61616 in your example, not 61617.
You should also verify in the broker logs or via the Web Console / JMX that the network connection actually gets established.
Adding duplex="true" lets one of the brokers initiate the connection, which is useful when firewalls are involved. In your case it should not matter.
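For reference, a minimal programmatic sketch of BrokerA with the bridge aimed at port 61616 on BrokerB, as suggested above (the embedded-broker style, persistence setting and class name are illustrative assumptions; the equivalent change in activemq.xml is just the URI in the networkConnector):
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class BrokerAExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("BrokerA");
        broker.setPersistent(false); // keeps the sketch self-contained

        // Transport connector that clients connect to.
        broker.addConnector("nio://hostnameA:61616");

        // Network bridge pointed at BrokerB's transport connector on 61616.
        NetworkConnector toB = broker.addNetworkConnector("static:(tcp://hostnameB:61616)");
        toB.setName("LocalBrokerToB");
        toB.setNetworkTTL(2);

        broker.start();
        broker.waitUntilStopped();
    }
}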

Related

In ActiveMQ, is there a way to send a copy of a message from one queue to another queue in a remote broker?

Problem statement: There are two queues in two different brokers. Each queue has one consumer attached to it. The producer is dropping messages on the first queue. We want to send a copy of each message to the second queue. For visualization:
            Producer
               |
Broker1 --> Queue1 --> Consumer1
               | (copy)
Broker2 --> Queue2 --> Consumer2 (consumes the same message as Consumer1 but is independent of Consumer1)
The ask is:
Only 1 queue is created in each broker. I have achieved the above with 4 queues, but I am looking for a more optimized solution.
Prefer no topics to be used.
To be done only through ActiveMQ-provided configuration.
What I have done till now:
I managed to do the above with 4 queues.
In Broker1, Queue1 forwards a copy of each message to a virtual destination queue, and the messages in that virtual destination are then sent to Broker2 through a network connector.
<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <compositeQueue name="Queue1" forwardOnly="false">
                <forwardTo>
                    <queue physicalName="IntermediateQueue"/>
                </forwardTo>
            </compositeQueue>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>
<networkConnectors>
    <networkConnector
            name="Q:broker1->broker2"
            uri="static:(tcp://localhost:31616)"
            duplex="false"
            staticBridge="true">
        <staticallyIncludedDestinations>
            <queue physicalName="IntermediateQueue"/>
        </staticallyIncludedDestinations>
    </networkConnector>
</networkConnectors>
In Broker2, all messages received in the intermediate queue are forwarded to the actual destination queue.
<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <compositeQueue name="IntermediateQueue">
                <forwardTo>
                    <queue physicalName="FinalDestinationQueue" />
                </forwardTo>
            </compositeQueue>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>
I'd appreciate any help, as going through the ActiveMQ documentation and forums didn't yield an optimized answer to this problem.
You are essentially re-creating pub/sub and then adding a transmission-queue pattern for multi-broker integration. There are valid use cases for doing this, and your approach is within the intended design of Composite Destinations and Network Connectors. The trade-off is the heavy administration and configuration management that is required.
I understand you prefer not to use topics. However, you may consider looking at Virtual Topics, which solve this problem in an elegant way and allow you to add new consumers dynamically, without having to modify the broker configuration.
The producer sends to a topic:
topic://VT.ORDER.EVENT
Consumers read from specially named queues:
clientA: queue://VQ.CLIENTA.VT.ORDER.EVENT
clientB: queue://VQ.CLIENTB.VT.ORDER.EVENT
ref: Virtual Topics
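A minimal JMS sketch of that flow (broker URL, client IDs and message body are assumptions; also note that the VT./VQ. prefixes shown above require the broker's virtual-topic interceptor to be configured with those names, the out-of-the-box convention being VirtualTopic.> topics with Consumer.*.VirtualTopic.> queues):
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Consumers first, so their backing queues exist before anything is published.
        MessageConsumer clientA = session.createConsumer(session.createQueue("VQ.CLIENTA.VT.ORDER.EVENT"));
        MessageConsumer clientB = session.createConsumer(session.createQueue("VQ.CLIENTB.VT.ORDER.EVENT"));

        // One publish to the virtual topic; the broker copies it into every consumer queue.
        MessageProducer producer = session.createProducer(session.createTopic("VT.ORDER.EVENT"));
        producer.send(session.createTextMessage("order-created"));

        // Each client receives its own, independent copy.
        System.out.println(((TextMessage) clientA.receive(2000)).getText());
        System.out.println(((TextMessage) clientB.receive(2000)).getText());

        connection.close();
    }
}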

ActiveMQ doesn't propagate topics on failover reconnection

I'm attempting to use a network of brokers that bridges two LANs over a duplex WAN connector. There are actually many subscribers in our setup, each connecting to a different "Broker A", if that makes sense. All of the Broker A instances have their own connections to a single "Broker B", which in turn bridges over the WAN to "Broker C".
Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
When I initially bring everything online, regardless of the order in which I bring things online, communication between the publisher and subscriber works flawlessly. I've had the system up-and-running for weeks at a time without issue.
What I've observed is that if broker C is restarted, no new topics that show up in broker B ever appear in broker C. New topics are still appearing in broker B as they are created by the subscriber(s). Neither existing nor new topics ever propagate across the WAN to broker C. I've verified this using jconsole.
If I restart broker B, the problem goes away immediately. The topics contained in broker B (according to jconsole) are the same as they were prior to restart, but now they've magically appeared in C.
Brokers B and C have the same configuration (shown below). The only difference is that B creates a duplex network connector to C using the following code:
final NetworkConnector wanNC = new DiscoveryNetworkConnector(
        new URI(String.format("static:(failover:(tcp://%s:%d))", parentNode, port)));
wanNC.setCheckDuplicateMessagesOnDuplex(true);
wanNC.setDecreaseNetworkConsumerPriority(true);
wanNC.setDuplex(true);
wanNC.setName(NetworkUtils.getHostName());
wanNC.setNetworkTTL(10);
wanNC.setSuppressDuplicateTopicSubscriptions(false);
broker.addNetworkConnector(wanNC);
broker.xml:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
       http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" />

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="${broker.id}" start="false"
            offlineDurableSubscriberTimeout="5000" offlineDurableSubscriberTaskSchedule="5000"
            persistent="false" useJmx="true" schedulePeriodForDestinationPurge="86400000">

        [...]

        <networkConnectors>
            <networkConnector name="z-broker-${broker.id}-x-nc"
                              decreaseNetworkConsumerPriority="true"
                              networkTTL="10"
                              uri="multicast://225.5.5.5:6555?group=TO_X">
                <excludedDestinations>
                    <topic physicalName="X.A" />
                </excludedDestinations>
            </networkConnector>
            <networkConnector name="z-broker-${broker.id}-y-nc"
                              decreaseNetworkConsumerPriority="true"
                              networkTTL="10"
                              uri="multicast://225.5.5.5:6555?group=TO_Y">
                <excludedDestinations>
                    <topic physicalName="X.B.>" />
                </excludedDestinations>
            </networkConnector>
        </networkConnectors>

        <transportConnectors>
            <transportConnector name="openwire"
                                uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
                                discoveryUri="multicast://225.5.5.5:6555?group=TO_Z" />
        </transportConnectors>
    </broker>
</beans>
Why don't topics from broker B (existing or new) ever show up in broker C?
Why does restarting broker B solve the issue immediately?
Apparently the trick was changing the network connector URI from
static:(failover:(tcp://<ip>:<port>))
to
static:(tcp://<ip>:<port>)
I didn't need failover transport for any reason since the connection is intended as a network bridge and there's a single remote address.
For whatever reason, using failover prevented topics from propagating on reconnect.
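In terms of the connector code above, that amounts to dropping the failover: wrapper when the URI is built:
// Same variables as in the snippet above; only the URI changes.
// The static transport retries the connection itself, so failover: adds nothing
// for a single-address bridge and (here) kept topics from propagating on reconnect.
final NetworkConnector wanNC = new DiscoveryNetworkConnector(
        new URI(String.format("static:(tcp://%s:%d)", parentNode, port)));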

ActiveMQ configuration

I'm new to ActiveMQ and I'd like to know how and where to add this line of code to enable MQTT on my broker. I'm running the broker on a Mac.
Kindly help me with this configuration.
By default, the MQTT protocol is already enabled when you download ActiveMQ from Apache and start it. The broker comes pre-configured in conf/activemq.xml, so you can run it directly. OpenWire, AMQP, STOMP and WS transports are enabled out of the box as well.
<transportConnectors>
    <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
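To verify the MQTT connector is listening on its default port 1883, a quick publish from any MQTT client is enough; a sketch using the Eclipse Paho Java client (an assumed dependency, not something ActiveMQ ships with) could look like this:
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttSmokeTest {
    public static void main(String[] args) throws MqttException {
        // The default MQTT transport connector from conf/activemq.xml listens on 1883.
        MqttClient client = new MqttClient("tcp://localhost:1883", "smoke-test");
        client.connect();

        MqttMessage message = new MqttMessage("hello".getBytes());
        message.setQos(1);
        client.publish("test/topic", message);

        client.disconnect();
    }
}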
If you are using ActiveMQ Artemis instead: after you have created a new broker with the ARTEMIS_PATH/bin/artemis tool (artemis.cmd on Windows), named for example TestBroker, the broker's work directory will be ARTEMIS_PATH/bin/TestBroker.
The file that configures connectors is broker.xml, located at ARTEMIS_PATH/bin/TestBroker/etc/broker.xml.

Maintain order of messages while forwarding messages between two ActiveMQ brokers

I have an ActiveMQ setup where a source broker living in one data center forwards all messages arriving on certain topics to a destination broker in another data center. The consumer application consumes messages only from the destination broker. (This setup is mainly to ensure fast and efficient forwarding of messages between the two data centers.)
The configuration for forwarding looks something like this:
<networkConnectors>
    <networkConnector name="Q:DontForwardQueueMessages"
                      uri="static:(tcp://destination-broker.example.com:61616)"
                      duplex="false" decreaseNetworkConsumerPriority="true" networkTTL="2"
                      dynamicOnly="true">
        <excludedDestinations>
            <queue physicalName=">" />
        </excludedDestinations>
    </networkConnector>
    <networkConnector name="T:ForwardSampleMessages"
                      uri="static:(tcp://destination-broker.example.com:61616)"
                      duplex="false" decreaseNetworkConsumerPriority="true" networkTTL="2"
                      dynamicOnly="true">
        <excludedDestinations>
            <topic physicalName=">" />
        </excludedDestinations>
        <staticallyIncludedDestinations>
            <topic physicalName="SampleTopic1" />
            <topic physicalName="SampleTopic2" />
            <topic physicalName="SampleTopic3" />
            <topic physicalName="SampleTopic4" />
        </staticallyIncludedDestinations>
    </networkConnector>
</networkConnectors>
Our application needs message order to be maintained. However, we are losing messages when the destination broker goes down. Messages arriving at the source broker pile up in the topic, but do not get forwarded when the connection with the destination broker is re-established. However, messages arriving after re-connection are forwarded as usual.
I'm looking for a way to configure the setup so that:
All messages waiting at the source are sent as soon as the destination is re-connected, maintaining the correct order, and
Messages arriving after re-connection wait for the older messages to be forwarded first.
It looks like it was a poor design choice to have messages forwarded from a Topic. As per the ActiveMQ documentation:
Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message.
The destination broker acts like a subscriber to the source topic from which messages are being forwarded. So, when messages arrive in the source topic in the absence of a subscriber (destination disconnected), they are not available to anyone.
As a solution, I changed the design:
Remove the Virtual Destination configuration in the destination broker
Add the same Virtual Destination configuration in the source broker (so now messages are distributed into their respective queues right here)
Add networkConnector rules to the source broker to forward messages in these queues to corresponding queues on the destination broker.
Now since messages at the source are in a queue, they will be consumed in the order in which they were received, and no messages are lost, even if the brokers are disconnected from each other.
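For illustration, a sketch of the reworked source-broker bridge expressed with the broker API (queue names, host name and connector name are placeholders; the virtual-destination fan-out into these queues on the source broker is configured separately, as described above, and the same thing can be written as staticallyIncludedDestinations in activemq.xml):
import java.util.Arrays;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.command.ActiveMQDestination;
import org.apache.activemq.command.ActiveMQQueue;
import org.apache.activemq.network.NetworkConnector;

public class SourceBrokerSketch {
    public static void main(String[] args) throws Exception {
        BrokerService source = new BrokerService();
        source.setBrokerName("source-broker");

        // Queues are store-and-forward: messages wait here while the destination
        // broker is down and leave in arrival order once it is reachable again.
        NetworkConnector bridge =
                source.addNetworkConnector("static:(tcp://destination-broker.example.com:61616)");
        bridge.setName("Q:ForwardSampleQueues");
        bridge.setNetworkTTL(2);
        bridge.setDecreaseNetworkConsumerPriority(true);
        bridge.setStaticallyIncludedDestinations(Arrays.<ActiveMQDestination>asList(
                new ActiveMQQueue("SampleQueue1"),
                new ActiveMQQueue("SampleQueue2")));

        source.start();
    }
}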

ActiveMQ 5.8 network of brokers with custom JMX port

I am trying to run a network of brokers with 2 brokers on the same network but on 2 different virtual machines.
Because of some internal constraints I have to use a custom jmx port.
I am using the Tanuki wrapper to launch ActiveMQ on an Ubuntu server.
Here is the relevant part of my activemq.xml
<broker xmlns="http://activemq.apache.org/schema/core" advisorySupport="false" useJmx="true"
        brokerName="test1" dataDirectory=".../data/activemq">

    <networkConnectors>
        <networkConnector uri="multicast://1.2.3.4:101234?group=test"
                          dynamicOnly="true"
                          networkTTL="3"
                          prefetchSize="1"
                          decreaseNetworkConsumerPriority="true"
                          userName="user"
                          password="password"/>
    </networkConnectors>

    <transportConnectors>
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616" rebalanceClusterClients="true" updateClusterClients="true" />
        <transportConnector name="nio" uri="nio://0.0.0.0:61617" rebalanceClusterClients="true" updateClusterClients="true" discoveryUri="multicast://1.2.3.4:101234?group=test" />
    </transportConnectors>

    ...

    <managementContext>
        <managementContext createConnector="false"/>
    </managementContext>

    ...
</broker>
Here is the relevant part of wrapper.conf:
# Uncomment to enable jmx
wrapper.java.additional.1=-Dcom.sun.management.jmxremote
wrapper.java.additional.2=-Dcom.sun.management.jmxremote.port=4321
wrapper.java.additional.3=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.4=-Dcom.sun.management.jmxremote.ssl=false
When running ActiveMQ on both brokers, I see the process with the expected options:
activemq 30682 30680 3 13:27 ? 00:00:30 java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=4321 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djavax.net.ssl.keyStore=../../conf/broker.ks -Djavax.net.ssl.trustStore=../../conf/broker.ts -Dcom.sun.management.jmxremote -Dorg.apache.activemq.UseDedicatedTaskRunner=true -Djava.util.logging.config.file=logging.properties -Dactivemq.conf=../../conf -Dactivemq.data=../../data -Xms2048m -Xmx2048m -Djava.library.path=../../bin/linux-x86-64/ -classpath ../../bin/wrapper.jar:../../bin/activemq.jar -Dwrapper.key=y4TuwO32Hj6kN7w8 -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.pid=30680 -Dwrapper.version=3.2.3 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp org.apache.activemq.console.Main start
The port is open in the running Shorewall firewall.
The network of brokers is up, but I cannot connect to JMX using jvisualvm with server_dns:4321. It returns the error:
"cannot connect to server_dns:4321 using service:jmx:rmi:///jndi/rmi://server_dns:4321/jmxrmi"
I can see some traffic on the port via tcpdump.
Could anybody tell me what I am doing wrong, or how I should use ActiveMQ as a network of brokers with a custom JMX port?
JMX needs 2 open ports: an extra one is necessary for RMI.
I figured it out thanks to this post: Apache ActiveMQ browser can't connect to JMX console
In my case the fix was to change the configuration of my wrapper to expose the RMI port as well, and to open that port on the firewall:
# Uncomment to enable jmx
wrapper.java.additional.1=-Dcom.sun.management.jmxremote
wrapper.java.additional.2=-Dcom.sun.management.jmxremote.port=4321
wrapper.java.additional.3=-Dcom.sun.management.jmxremote.rmi.port=8765
wrapper.java.additional.4=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.5=-Dcom.sun.management.jmxremote.ssl=false
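To confirm both ports are reachable, a plain JMX connection with the standard javax.management API (same host name and registry port as above) can be used instead of jvisualvm:
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxSmokeTest {
    public static void main(String[] args) throws Exception {
        // Port 4321 is the JMX/RMI registry; the RMI server port (8765 here) is
        // negotiated through it and must also be open on the firewall.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://server_dns:4321/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            System.out.println("MBeans visible: " + mbeans.getMBeanCount());
        }
    }
}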