How to monitor ActiveMQ queue in Apache James

We're using Apache James 3.0-beta4, which uses embedded ActiveMQ 5.5.0 as its FIFO message queue, and sometimes messages get stuck, so we need to monitor it. Is there any way to monitor an ActiveMQ queue, such as the queue size and, if possible, the most recent message-id in the queue?
In the James spring-server.xml I found this:
<amq:broker useJmx="true" persistent="true" brokerName="james" dataDirectory="filesystem=file://var/store/activemq/brokers" useShutdownHook="false" schedulerSupport="false" id="broker">
    <amq:destinationPolicy>
        <amq:policyMap>
            <amq:policyEntries>
                <!-- Support priority handling of messages -->
                <!-- http://activemq.apache.org/how-can-i-support-priority-queues.html -->
                <amq:policyEntry queue=">" prioritizedMessages="true"/>
            </amq:policyEntries>
        </amq:policyMap>
    </amq:destinationPolicy>
    <amq:managementContext>
        <amq:managementContext createConnector="false"/>
    </amq:managementContext>
    <amq:persistenceAdapter>
        <amq:amqPersistenceAdapter/>
    </amq:persistenceAdapter>
    <amq:plugins>
        <amq:statisticsBrokerPlugin/>
    </amq:plugins>
    <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:0" />
    </amq:transportConnectors>
</amq:broker>
There is also this old part of the README:
- Telnet Management has been removed in favor of JMX with client shell
- More metrics counters available via JMX
...
* Monitor via JMX (launch any JMX client and connect to URL=service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi)
which is confusing; it's not clear how to actually use it.
This is part of a bigger "monolith" project which is now being recreated as microservices but still needs to be supported ;) Everything was fine until mid-March.

It looks like ActiveMQ management and monitoring is not possible because the broker's own JMX connector is disabled (the nested managementContext is declared with createConnector="false").
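If JMX can be enabled, or the James-level connector on port 9999 mentioned in the README is reachable, queue depth and message ids can be read with a plain JMX client. A rough sketch in Java, assuming the pre-5.8 MBean naming used by ActiveMQ 5.5.0 and a hypothetical queue named "spool":

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JamesQueueMonitor {
    public static void main(String[] args) throws Exception {
        // JMX URL taken from the James README; adjust host/port to your deployment.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();

            // Pre-5.8 ActiveMQ object-name layout; "spool" is a placeholder queue name.
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:BrokerName=james,Type=Queue,Destination=spool");

            // Number of messages currently sitting in the queue.
            Long queueSize = (Long) mbsc.getAttribute(queue, "QueueSize");
            System.out.println("QueueSize = " + queueSize);

            // Browse the queue and print each message id (JMSMessageID).
            CompositeData[] messages = (CompositeData[]) mbsc.invoke(
                    queue, "browse", new Object[0], new String[0]);
            for (CompositeData msg : messages) {
                System.out.println(msg.get("JMSMessageID"));
            }
        }
    }
}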

Related

Apollo takes 100% CPU

I use ActiveMQ Apollo 1.7.1 on Linux and use MQTT to send messages from the server to clients.
The Apollo config looks like this:
<broker xmlns="http://activemq.apache.org/schema/activemq/apollo">
    <notes>
        The default configuration with tls/ssl enabled.
    </notes>
    <log_category console="console" security="security" connection="connection" audit="audit"/>
    <authentication domain="apollo"/>
    <!-- Give admins full access -->
    <access_rule allow="admins" action="*"/>
    <access_rule allow="*" action="connect" kind="connector"/>
    <virtual_host id="myapollo">
        <host_name>myapollo</host_name>
        <access_rule allow="users" action="connect create destroy send receive consume"/>
        <leveldb_store directory="${apollo.base}/data"/>
    </virtual_host>
    <connector id="tcp" bind="tcp://0.0.0.0:61613"/>
    <key_storage file="${apollo.base}/etc/keystore" password="password" key_password="password"/>
</broker>
Can someone tell me how to find out why the Apollo process is taking 100% of the CPU?
After this problem happens we can't establish new TCP connections.
ActiveMQ Apollo has had no releases in almost a decade and the readme.md in the source code indicates the project is dead. Therefore you're unlikely to get much help. My recommendation would be to get a couple of thread dumps from the JVM, and then restart the broker. Hopefully things will return to normal, and then you can inspect the thread dumps to investigate the underlying cause of the CPU utilization.
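As a concrete way to get those thread dumps, running jstack against the broker's PID on the box is simplest; if you only have remote JMX access to the JVM, the same information can be pulled programmatically. A rough sketch, assuming (this is not the default) that the Apollo JVM was started with a remote JMX port, here 1099 as a placeholder:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteThreadDump {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: assumes the Apollo JVM was started with
        // -Dcom.sun.management.jmxremote.port=1099 (otherwise just run `jstack <pid>`).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);

            // A busy-looping thread usually shows up as RUNNABLE with the same stack
            // frames in every dump; compare a few dumps taken a few seconds apart.
            for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                System.out.println(info); // prints state, locks and a (truncated) stack trace
            }
        }
    }
}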
Eventually you should switch to a broker under active development. Personally I'd recommend ActiveMQ Artemis which has replaced Apollo as the next-generation broker from ActiveMQ. MQTT is a standard protocol so you should be able to deploy any broker which implements MQTT and your clients should still work. For what it's worth, ActiveMQ Artemis recently implemented MQTT 5 so if you think you'll ever upgrade your clients to MQTT 5 you'll have a smooth upgrade path.

Why does this ActiveMQ broker configuration fail after many brokers are added?

Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
Some terminology:
We have headless Java applications (services) that use a broker to communicate with the web server. They use Java Plugin Framework (JPF). Each service connects to its own personal broker and subscribes to a single topic that is unique to that service (a "service-specific" topic).
This broker has a network connector that automatically connects to all local web instance brokers (typically only 1 of them per LAN).
This broker has a network connector that automatically connects to all local root broker instances (typically only 1 of them per LAN).
Only the web servers ever publish messages to this topic.
Services don't send messages intended for one another.
The web server acts as a front end for the services. It uses Spring Boot. Each web server connects to its own personal broker and subscribes to a single global topic that is shared among all web servers; if a message is sent to that topic, all web servers receive it.
Only the services ever publish messages to this topic.
Web servers don't send messages intended for one another.
This broker has a network connector that automatically connects to all local root broker instances (typically only 1 of them per LAN).
The root broker is a glorified Java application that launches an ActiveMQ broker. It uses Spring Boot. The root brokers don't publish or subscribe; they merely act as a bridge between LANs.
Each root broker connects to its parent broker; this is done in Java code, not via XML.
Settings: duplex=true, checkDuplicateMessagesOnDuplex=true, suppressDuplicateTopicSubscriptions=false, networkTTL=10
I followed this guide in configuring all of the brokers: http://activemq.apache.org/networks-of-brokers.html
Here's a diagram of the intended architecture:
What we're observing is that after a certain number of services come online, messages stop flowing between the service and web application instances, even if they're on the same LAN and directly connected to one another. We can see the producers creating the messages (in the web application logs), but the consumers never receive the network data (verified using Wireshark). I can't tell whether the brokers are sending the messages to the wrong location. The correct topics show up in our JMX MBeans when we view a running instance using jconsole.
There are no errors/warnings from any of the JVMs.
One observation we've made is that adding a new web server and service in a different discovery group appears to work with no problem. They have no communication issues whatsoever, so we believe this is a broker configuration issue.
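Side note: the same bridge MBeans that show up in jconsole can be listed with a few lines of JMX client code, which makes it easier to spot a peer whose bridge never came up. A rough sketch, with the broker host and port as placeholders and the standard ActiveMQ 5.x object-name layout assumed:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListNetworkBridges {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port: point this at any one of the brokers.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker-host:1099/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // One MBean per active network bridge; if an expected bridge is
            // missing here, the network connector never came up for that peer.
            Set<ObjectName> bridges = mbsc.queryNames(new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=*,connector=networkConnectors,*"), null);
            bridges.forEach(System.out::println);
        }
    }
}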
service-broker.xml:
<!-- Connects to root broker and web broker -->
<networkConnectors>
    <networkConnector name="service-${broker.id}-broker-nc" duplex="true" networkTTL="10"
        checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
        uri="multicast://225.5.5.5:6555?group=GROUP_BROKER&amp;maxReconnectAttempts=1&amp;joinNetworkInterface=${broker.netInterface}" />
    <networkConnector name="service-${broker.id}-web-nc" duplex="true" networkTTL="10"
        checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
        uri="multicast://225.5.5.5:6555?group=GROUP_WEB&amp;maxReconnectAttempts=1&amp;joinNetworkInterface=${broker.netInterface}" />
</networkConnectors>
<!-- Don't advertise the broker (only connection should be from localhost) -->
<transportConnectors>
    <transportConnector name="openwire"
        uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600" />
</transportConnectors>
web-broker.xml:
<!-- Connect to root broker -->
<networkConnectors>
    <networkConnector name="web-${broker.id}-broker-nc" duplex="true" networkTTL="10"
        checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
        uri="multicast://225.5.5.5:6555?group=GROUP_BROKER&amp;maxReconnectAttempts=1" />
</networkConnectors>
<!-- Advertise web broker (service will connect to this) -->
<transportConnectors>
    <transportConnector name="openwire"
        uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
        discoveryUri="multicast://225.5.5.5:6555?group=GROUP_WEB" />
</transportConnectors>
root-broker.xml:
<!-- Advertise root broker (service and web will connect to this) -->
<transportConnectors>
    <transportConnector name="openwire"
        uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
        discoveryUri="multicast://225.5.5.5:6555?group=GROUP_BROKER" />
</transportConnectors>
Do the configurations shown above support the intended architecture?
Do you see any other problems/pitfalls?

Connect Local ActiveMQ to remote IBM MQ

I'm new to ActiveMQ.
I have a requirement to create a local ActiveMQ broker and connect it to a remote IBM MQ.
Can anyone help me with how to connect to a remote (distributed) queue manager and its queues?
You can use Apache Camel to bridge between the two providers. The routes can be run from within the broker, pulling from the ActiveMQ queue and pushing to the WMQ queue (or the other way around). The concept is much like a channel in WMQ pulling from a transmit queue and pushing to the appropriate destination on the remote queue manager.
Assuming you are using WMQ v7+ for all queue managers and clients, it's simply a matter of learning how to set up the route and configure the connection factories. With older versions of WMQ you may have to understand how to deal with RFH2 headers if native WMQ clients are the consumers.
The simplest route configured in Spring would look like this:
<route id="amq-to-wmq" >
    <from uri="amq:YOUR.QUEUE" />
    <to uri="wmq:YOUR.QUEUE" />
</route>
The "wmq" and "amq" would point to beans where the JMS components are configured. This is where you would set up you connection factories to each provider and how the clients behave (transacted or not for example), so I'll hold off on giving an example on that.
This would go in the camel.xml (or whatever you name it) and get imported from your broker's XML. ActiveMQ comes with several examples you can use to get you started using Camel JMS components. Just take a look at the default camel.xml that comes with a normal install.
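For orientation only, here is a rough sketch of the same bridge using Camel's Java DSL instead of Spring XML; every hostname, port, channel, and queue-manager value below is a placeholder you would replace with your own:

import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class AmqToWmqBridge {
    public static void main(String[] args) throws Exception {
        // Local ActiveMQ broker (placeholder URL).
        ConnectionFactory amqCf = new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Remote IBM MQ queue manager (all values are placeholders).
        MQQueueConnectionFactory wmqCf = new MQQueueConnectionFactory();
        wmqCf.setHostName("remote-mq-host");
        wmqCf.setPort(1414);
        wmqCf.setQueueManager("QM1");
        wmqCf.setChannel("DEV.APP.SVRCONN");
        wmqCf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        CamelContext camel = new DefaultCamelContext();
        // The component names ("amq", "wmq") match the prefixes used in the route.
        camel.addComponent("amq", JmsComponent.jmsComponent(amqCf));
        camel.addComponent("wmq", JmsComponent.jmsComponent(wmqCf));

        camel.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("amq:queue:YOUR.QUEUE").to("wmq:queue:YOUR.QUEUE");
            }
        });

        camel.start();
        Thread.sleep(Long.MAX_VALUE); // keep the bridge running
    }
}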

Monitoring Broker Redelivery with ActiveMQ

We have configured our ActiveMQ broker with the broker redelivery plugin using this configuration:
<redeliveryPlugin fallbackToDeadLetter="true"
                  sendToDlqIfMaxRetriesExceeded="true">
    <redeliveryPolicyMap>
        <redeliveryPolicyMap>
            <redeliveryPolicyEntries>
            </redeliveryPolicyEntries>
            <!-- the fallback policy for all other destinations -->
            <defaultEntry>
                <redeliveryPolicy
                    maximumRedeliveries="15"
                    useExponentialBackOff="true"
                    initialRedeliveryDelay="5000"
                    useCollisionAvoidance="true"
                    backOffMultiplier="5"
                    maximumRedeliveryDelay="93600000" />
            </defaultEntry>
        </redeliveryPolicyMap>
    </redeliveryPolicyMap>
</redeliveryPlugin>
The plugin performs as expected, removing the failed message from the queue and retrying it at the specified intervals.
The problem we now face is that we need to monitor how many messages are currently waiting to be retried for each queue, since they will not show up as waiting in the normal queue monitoring. I could not find anything in the JMX tree for ActiveMQ related to the redeliveryPlugin.
The messages are stored in the JobSchedulerStore, which is a separate store from the normal AMQ KahaDB or JDBC stores. There's less visibility into this store; however, there should be an MBean for it. You can get some information via JMX, or you can get information by sending JMS messages with special headers set. There is an article on the JMS-style administration here.
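For the JMS-style route, a rough sketch of browsing the scheduler store follows; the broker URL is a placeholder, and the header names come from org.apache.activemq.ScheduledMessage. Note that this browses all scheduled jobs, so getting a per-queue count means inspecting each reply's original destination:

import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class BrowseScheduledRedeliveries {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL.
        Connection conn = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // The scheduler's management destination; replies arrive on a temporary queue.
        Destination management = session.createTopic(ScheduledMessage.AMQ_SCHEDULER_MANAGEMENT_DESTINATION);
        Destination replyTo = session.createTemporaryQueue();
        MessageConsumer replies = session.createConsumer(replyTo);

        Message browse = session.createMessage();
        browse.setStringProperty(ScheduledMessage.AMQ_SCHEDULER_ACTION,
                ScheduledMessage.AMQ_SCHEDULER_ACTION_BROWSE);
        browse.setJMSReplyTo(replyTo);
        session.createProducer(management).send(browse);

        // Each reply describes one scheduled job, i.e. one message waiting to be redelivered.
        Message job;
        int count = 0;
        while ((job = replies.receive(2000)) != null) {
            count++;
            System.out.println(job);
        }
        System.out.println("scheduled messages: " + count);
        conn.close();
    }
}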

activemq failover: does the primary node recover consumers?

I am new to ActiveMQ. I have configured two ActiveMQ servers and am using them with the failover transport. They are working fine: if one ActiveMQ instance goes down, the other picks up the queues. My problem is that when the main server comes back up, it does not take the queues back. Is there any configuration or protocol that can make consumers go back to the main server once it is up again?
Currently my configuration is:
<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true"/>
</transportConnectors>
<networkConnectors>
    <networkConnector uri="static:(tcp://192.168.0.122:61616)"
        networkTTL="3"
        prefetchSize="1"
        decreaseNetworkConsumerPriority="true" />
</networkConnectors>
and my connection URI is:
failover:(tcp://${ipaddress.master}:61616,tcp://${ipaddress.backup}:61616)?randomize=false
I also want to send an email when a failover occurs so that I know ActiveMQ is down.
What you have configured there is not a true HA deployment, but a network of brokers. If you have two brokers configured in a network, each has its own message store, which at any time contains a partial set of messages (see how networks of brokers work).
The behaviour that you would likely expect to see is that if one broker falls over, the other takes its place resuming from where the failed one left off (with all of the undelivered messages that the failed broker held). For that you need to use a (preferably shared-storage) master-slave configuration.
I have done this, and I am posting the solution in case anyone else has the same problem.
This feature is available in ActiveMQ 5.6. priorityBackup=true in the connection URI is the key to telling the consumer to go back to the primary node when it is available.
My new connection URI is:
failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true
See here for more details.
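Regarding the mail-on-failover part of the question: one option is to hook a TransportListener onto the connection so the client is told when its transport drops and resumes. A rough sketch, with host names as placeholders and the actual mail sending left as a stub:

import java.io.IOException;

import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.transport.TransportListener;

public class FailoverNotifier {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true");

        ActiveMQConnection connection = (ActiveMQConnection) factory.createConnection();
        connection.addTransportListener(new TransportListener() {
            public void onCommand(Object command) { }
            public void onException(IOException error) { }

            // (sic: this is the actual method name in the ActiveMQ API)
            public void transportInterupted() {
                // Called when the current broker is lost; hook your mail alert in here.
                System.out.println("connection to broker lost, failing over...");
            }

            public void transportResumed() {
                System.out.println("connection (re)established");
            }
        });
        connection.start();
        // ... create sessions/consumers as usual ...
    }
}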