Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
Some terminology:
We have headless Java applications (services) that use a broker to communicate with the web server. They use Java Plugin Framework (JPF). Each service connects to its own personal broker and subscribes to a single topic that is unique to that service (a "service-specific" topic).
This broker has a network connector that automatically connects to all local web instance brokers (typically only 1 of them per LAN).
This broker has a network connector that automatically connects to all local root broker instances (typically only 1 of them per LAN).
Only the web servers ever publish messages to this topic.
Services don't send messages intended for one another.
The web server acts as a front end for the services. It uses Spring Boot. Each web server connects to its own personal broker and subscribes to a single global topic that is shared among all web servers; if a message is sent to that topic, all web servers receive it.
Only the services ever publish messages to this topic.
Web servers don't send messages intended for one another.
This broker has a network connector that automatically connects to all local root broker instances (typically only 1 of them per LAN).
The root broker is a glorified Java application that launches an ActiveMQ broker. It uses Spring Boot. The root brokers don't publish or subscribe; they merely act as a bridge between LANs.
Each root broker connects to its parent broker; this is done in Java code rather than XML.
Settings: duplex=true, checkDuplicateMessagesOnDuplex=true, suppressDuplicateTopicSubscriptions=false, networkTTL=10
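Since the root broker's parent connection is established in Java rather than XML, here is a hedged sketch of what that might look like using the `BrokerService` API, mirroring the settings listed above. The class name, host, and port are placeholders, not taken from the actual deployment:

```java
// Hypothetical sketch of the root broker's programmatic parent connection.
// "parentHost:61616" is a placeholder; the real code is not shown here.
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class RootBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("root-broker");
        broker.setPersistent(false); // topics are non-persistent in this deployment

        // Static connector up to the parent broker, mirroring the XML settings
        NetworkConnector nc = broker.addNetworkConnector("static:(tcp://parentHost:61616)");
        nc.setDuplex(true);
        nc.setNetworkTTL(10);
        nc.setCheckDuplicateMessagesOnDuplex(true);
        nc.setSuppressDuplicateTopicSubscriptions(false);

        broker.start();
    }
}
```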
I followed this guide in configuring all of the brokers: http://activemq.apache.org/networks-of-brokers.html
Here's a diagram of the intended architecture:
What we're observing is that after a certain number of services come online, messages stop flowing between the service and web application instances, even if they're on the same LAN and directly connected to one another. We can see the producers creating the messages (in the web application logs), but the consumers never receive the network data (verified using Wireshark). I can't tell whether the brokers are sending the messages to the wrong location. The correct topics show up in our JMX MBeans when we view a running instance using jconsole.
There are no errors/warnings from any of the JVMs.
One observation we've made is that adding a new web server and service in a different discovery group appears to work with no problem. They have no communication issues whatsoever, so we believe this is a broker configuration issue.
service-broker.xml:
<!-- Connects to root broker and web broker -->
<networkConnectors>
  <networkConnector name="service-${broker.id}-broker-nc" duplex="true" networkTTL="10"
      checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
      uri="multicast://225.5.5.5:6555?group=GROUP_BROKER&amp;maxReconnectAttempts=1&amp;joinNetworkInterface=${broker.netInterface}" />
  <networkConnector name="service-${broker.id}-web-nc" duplex="true" networkTTL="10"
      checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
      uri="multicast://225.5.5.5:6555?group=GROUP_WEB&amp;maxReconnectAttempts=1&amp;joinNetworkInterface=${broker.netInterface}" />
</networkConnectors>
<!-- Don't advertise the broker (only connection should be from localhost) -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600" />
</transportConnectors>
web-broker.xml:
<!-- Connect to root broker -->
<networkConnectors>
  <networkConnector name="web-${broker.id}-broker-nc" duplex="true" networkTTL="10"
      checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
      uri="multicast://225.5.5.5:6555?group=GROUP_BROKER&amp;maxReconnectAttempts=1" />
</networkConnectors>
<!-- Advertise web broker (service will connect to this) -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
      discoveryUri="multicast://225.5.5.5:6555?group=GROUP_WEB" />
</transportConnectors>
root-broker.xml:
<!-- Advertise root broker (service and web will connect to this) -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
      discoveryUri="multicast://225.5.5.5:6555?group=GROUP_BROKER" />
</transportConnectors>
Do the configurations shown above support the architecture shown above?
Do you see any other problems/pitfalls?
Related
We're using Apache James 3.0-beta4, which uses an embedded ActiveMQ 5.5.0 as a FIFO message queue, and sometimes messages get stuck. Therefore, we need to monitor it. Is there any way to monitor an ActiveMQ queue, e.g. the queue size and the most recent message ID in the queue (if possible)?
In the JAMES spring-server.xml I found that:
<amq:broker useJmx="true" persistent="true" brokerName="james" dataDirectory="filesystem=file://var/store/activemq/brokers" useShutdownHook="false" schedulerSupport="false" id="broker">
<amq:destinationPolicy>
<amq:policyMap>
<amq:policyEntries>
<!-- Support priority handling of messages -->
<!-- http://activemq.apache.org/how-can-i-support-priority-queues.html -->
<amq:policyEntry queue=">" prioritizedMessages="true"/>
</amq:policyEntries>
</amq:policyMap>
</amq:destinationPolicy>
<amq:managementContext>
<amq:managementContext createConnector="false"/>
</amq:managementContext>
<amq:persistenceAdapter>
<amq:amqPersistenceAdapter/>
</amq:persistenceAdapter>
<amq:plugins>
<amq:statisticsBrokerPlugin/>
</amq:plugins>
<amq:transportConnectors>
<amq:transportConnector uri="tcp://localhost:0" />
</amq:transportConnectors>
</amq:broker>
There is also this old part from the readme:
- Telnet Management has been removed in favor of JMX with client shell
- More metrics counters available via JMX
...
* Monitor via JMX (launch any JMX client and connect to URL=service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi)
which is confusing as to how to use it.
This is part of a bigger "monolith" project which is now being recreated as microservices but still needs to be supported ;) Everything was fine until mid-March.
It looks like remote ActiveMQ management and monitoring is not possible because no JMX connector is created: the broker has useJmx="true" but createConnector="false", so the MBeans are only reachable from the local JVM (e.g. by attaching jconsole on the same host) unless a remote JMX connector is enabled.
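If a remote JMX connector is available (the readme's URL suggests port 9999), queue metrics can be read with a plain JMX client from the JDK. This is a hedged sketch: the ObjectName layout shown is the ActiveMQ 5.5.x-era one (newer 5.x versions use `type=Broker,brokerName=...,destinationType=Queue,destinationName=...`), and the broker name "james" and queue name are assumptions:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueMonitor {

    // ActiveMQ 5.5.x-era ObjectName layout for a queue MBean
    static String queueObjectName(String brokerName, String queueName) {
        return "org.apache.activemq:BrokerName=" + brokerName
                + ",Type=Queue,Destination=" + queueName;
    }

    public static void main(String[] args) throws Exception {
        // URL from the James readme; only works if a JMX connector is enabled
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName queue = new ObjectName(queueObjectName("james", args[0]));
            // QueueSize is one of the standard queue attributes exposed via JMX
            System.out.println("QueueSize = " + mbs.getAttribute(queue, "QueueSize"));
        }
    }
}
```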
I have a network of brokers, whereby messages are produced from a Java app onto a small local broker in the field, this then has a static network destination for the queue pointing at a broker (in AWS), which has a consumer sat across all the queues for the remote brokers.
If there is any interruption in the connection, the remote broker will reconnect successfully. The issue is that in some cases, after the remote broker reconnects, the queue on the remote broker appears to rack up the dequeue count, yet the central broker to which it is meant to be forwarding doesn't show an increasing enqueue count for that queue.
The messages enqueued at the remote end are Persistent = YES, priority = 4. If I manually add a message on the broker with Persistent = Yes, the same behaviour exhibits. If I set persistent = NO, the message successfully hits the other end. If I restart the broker, the persistent messages flow again (although the ones it thought it was successfully sending are lost).
What situation could cause this loss of messages, and is there a configuration option that can be tweaked to fix this?
The configuration for the remote brokers is a standard AMQ install, with the following network connector defined:
<networkConnectors>
  <networkConnector
      uri="${X.remoteMsgURI}"
      staticBridge="true"
      dynamicOnly="false"
      alwaysSyncSend="true"
      userName="${X.remoteUsername}"
      password="${X.remotePassword}">
    <staticallyIncludedDestinations>
      <queue physicalName="cloud.${X.environment}.queue.${X.queuebase}"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
The connection string for the static remote is:
remoteMsgURI=static:(failover:(ssl://X-1.mq.eu-west-2.amazonaws.com:61617?wireFormat.maxInactivityDuration=30000,ssl://X-2.amazonaws.com:61617?wireFormat.maxInactivityDuration=30000))
I'm new to ActiveMQ.
I have a requirement to create a local ActiveMQ broker and connect it to a remote IBM MQ.
Can anyone help me with how to connect to the distributed queue manager and queues?
You can use Apache Camel to bridge between the two providers. The routes can be run from within the broker, pull from the ActiveMQ queue and push to the WMQ Queue (or the other way around). The concept is almost like the concept of a Channel in WMQ pulling from a transmit queue and pushing it to the appropriate destination on the remote queue manager.
Assuming you are using WMQ V7+ for all QMgrs and clients, it's simply a matter of learning how to set up the route and configure the connection factories. With older versions of WMQ, you may have to understand how to deal with RFH2 headers if native WMQ clients are the consumers.
The most simple route configured in spring would look like:
<route id="amq-to-wmq">
  <from uri="amq:YOUR.QUEUE" />
  <to uri="wmq:YOUR.QUEUE" />
</route>
The "wmq" and "amq" prefixes point to beans where the JMS components are configured. This is where you would set up your connection factories for each provider and how the clients behave (transacted or not, for example), so I'll hold off on giving a full example of that.
This would go in the camel.xml (or whatever you name it) and get imported from your broker's XML. ActiveMQ comes with several examples you can use to get you started using Camel JMS components. Just take a look at the default camel.xml that comes with a normal install.
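For illustration only, the "amq" and "wmq" components referenced in the route could be wired roughly as follows. This is a hypothetical sketch: host names, ports, the queue manager name, and the channel name are all placeholders, and your connection-factory tuning will differ:

```java
// Hypothetical wiring of the "amq" and "wmq" Camel JMS components.
import org.apache.camel.CamelContext;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.activemq.ActiveMQConnectionFactory;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class BridgeSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // "amq" component: the local ActiveMQ broker
        context.addComponent("amq", JmsComponent.jmsComponent(
                new ActiveMQConnectionFactory("tcp://localhost:61616")));

        // "wmq" component: a remote IBM MQ queue manager in client mode
        // (host, port, queue manager, and channel are placeholders)
        MQConnectionFactory wmqCf = new MQConnectionFactory();
        wmqCf.setHostName("wmq.example.com");
        wmqCf.setPort(1414);
        wmqCf.setQueueManager("QM1");
        wmqCf.setChannel("SYSTEM.DEF.SVRCONN");
        wmqCf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        context.addComponent("wmq", JmsComponent.jmsComponent(wmqCf));

        context.start();
    }
}
```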
I have an Apollo broker configured as a stomp server. Now I want to configure an ActiveMQ broker which links to the Apollo broker and enable message propagation in both directions.
That is, I want the Apollo broker and ActiveMQ broker to work both as consumers and producers.
Will this networkConnector configuration on the ActiveMQ broker meet my requirement?
<networkConnectors>
  <networkConnector name="linkToApolloBroker"
      uri="static:(stomp://apollo_broker_ip:61000)"
      networkTTL="3"
      duplex="true" />
</networkConnectors>
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/dynamic-broker1/kahadb"/>
</persistenceAdapter>
...
<transportConnectors>
  <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
</transportConnectors>
Actually, I need Apollo to provide services for the web while passing messages to and from the ActiveMQ broker. If I have 2 brokers talking with each other, their local clients can have direct access to the locally persisted queues and to an extent remain immune to network fluctuations.
There is no network-of-brokers interoperability between ActiveMQ and Apollo, so the configuration you have won't work. You might be able to configure a bridge between the two using the JMS bridge feature of ActiveMQ, since Apollo does support OpenWire.
Have a look at the JMS to JMS bridge documentation.
Apache Camel is also a potential solution to your problem. You can probably create a Camel route that does what you want.
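As a hedged illustration of the Camel option (assuming the camel-stomp component is available on the classpath, and with queue names and broker addresses as placeholders), a route shoveling messages from a local ActiveMQ queue to the Apollo broker over STOMP might look roughly like this:

```java
// Hypothetical Camel route bridging a local ActiveMQ queue to an Apollo
// broker over STOMP. Queue names and broker addresses are placeholders.
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class ApolloBridge {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // "activemq" component is assumed to be configured against
                // the local broker; "stomp" targets Apollo's STOMP port
                from("activemq:queue:outbound")
                    .to("stomp:queue:outbound?brokerURL=tcp://apollo_broker_ip:61000");
            }
        });
        context.start();
    }
}
```

A duplex bridge would need a second route in the opposite direction.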
I am new to ActiveMQ. I have configured two ActiveMQ servers and am using them with the failover transport. They are working fine; I mean if one ActiveMQ goes down, the other picks up the queues. My problem is that when the main server comes back up, consumers do not move back to it. Is there any configuration or protocol that can manage this, so that consumers return to the main server once it is up again?
Currently my configuration is :
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true"/>
</transportConnectors>
<networkConnectors>
  <networkConnector uri="static:(tcp://192.168.0.122:61616)"
      networkTTL="3"
      prefetchSize="1"
      decreaseNetworkConsumerPriority="true" />
</networkConnectors>
and my connection uri is :
failover:(tcp://${ipaddress.master}:61616,tcp://${ipaddress.backup}:61616)?randomize=false
I also want to send an email when failover occurs, so that I know ActiveMQ is down.
What you have configured there is not a true HA deployment, but a network of brokers. If you have two brokers configured in a network, each has its own message store, which at any time contains a partial set of messages (see how networks of brokers work).
The behaviour that you would likely expect to see is that if one broker falls over, the other takes its place resuming from where the failed one left off (with all of the undelivered messages that the failed broker held). For that you need to use a (preferably shared-storage) master-slave configuration.
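A shared-storage master/slave pair can be sketched in a few lines: both broker instances point at the same message store, and whichever grabs the store's file lock becomes master while the other blocks until the lock is released. This is a minimal sketch, assuming a shared filesystem path that is a placeholder:

```java
// Minimal shared-storage master/slave sketch: two instances of this broker
// use the same KahaDB directory; the slave blocks in start() until it can
// obtain the store lock. The shared path is a placeholder.
import org.apache.activemq.broker.BrokerService;

public class SharedStoreBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker-" + args[0]);
        // Both instances point at the same shared directory (e.g. NFSv4 or SAN)
        broker.setDataDirectory("/shared/activemq/kahadb");
        broker.start();            // slave waits here for the store lock
        broker.waitUntilStopped();
    }
}
```

Clients then use a plain failover URI over both brokers; only the current master accepts connections.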
I have done that, and I'm posting the solution in case anyone else has the same problem.
This feature is available in ActiveMQ 5.6. priorityBackup=true in the connection URL is the key: it tells the consumer to come back to the primary node when it is available.
My new connection URI is:
failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true
see here for more details.