I have an Apollo broker configured as a STOMP server. Now I want to configure an ActiveMQ broker that links to the Apollo broker and enables message propagation in both directions.
That is, I want the Apollo broker and the ActiveMQ broker to work both as consumers and producers.
Will this networkConnector configuration on the ActiveMQ broker meet my requirement?
<networkConnectors>
<networkConnector name="linkToApolloBroker"
uri="static:(stomp://apollo_broker_ip:61000)"
networkTTL="3"
duplex="true" />
</networkConnectors>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/dynamic-broker1/kahadb"/>
</persistenceAdapter>
...
<transportConnectors>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
</transportConnectors>
Actually, I need Apollo to provide services for the web while passing messages to and from the ActiveMQ broker. If I have two brokers talking to each other, their local clients can have direct access to the locally persisted queues and, to an extent, remain immune to network fluctuations.
There is no interoperability in the network-of-brokers configuration between ActiveMQ and Apollo, so the configuration you have won't work. However, you might be able to configure a bridge between the two using the JMS Bridge feature of ActiveMQ, since Apollo does support OpenWire.
Have a look at the JMS to JMS bridge documentation.
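For illustration, a JMS-to-JMS bridge on the ActiveMQ side might look roughly like the sketch below. The queue names, the bean id, and the Apollo address/port are assumptions (Apollo's connector must accept OpenWire on whatever port you point the factory at); this is a sketch, not a drop-in configuration.

```xml
<!-- Sketch: JMS-to-JMS bridge inside the ActiveMQ <broker> element.        -->
<!-- "apollo_broker_ip" and port 61613 are assumptions; Apollo's connector  -->
<!-- must be configured to accept OpenWire there.                           -->
<jmsBridgeConnectors>
  <jmsQueueConnector outboundQueueConnectionFactory="#apolloFactory">
    <!-- pull messages from this queue on Apollo into the local broker -->
    <inboundQueueBridges>
      <inboundQueueBridge inboundQueueName="from.apollo"/>
    </inboundQueueBridges>
    <!-- push messages from this local queue to Apollo -->
    <outboundQueueBridges>
      <outboundQueueBridge outboundQueueName="to.apollo"/>
    </outboundQueueBridges>
  </jmsQueueConnector>
</jmsBridgeConnectors>

<!-- Spring bean, declared alongside the broker, providing the outbound factory -->
<bean id="apolloFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://apollo_broker_ip:61613"/>
</bean>
```

With this shape, each direction of traffic is an explicit per-queue bridge rather than the automatic forwarding a network connector would give you.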
Apache Camel is also a potential solution to your problem. You can probably create a Camel route that does what you want.
We're using Apache James 3.0-beta4, which uses embedded ActiveMQ 5.5.0 for a FIFO message queue, and sometimes messages get stuck. Therefore, we need to monitor it. Is there any way to monitor an ActiveMQ queue, such as the queue size and the most recent message-id in the queue (if possible)?
In the JAMES spring-server.xml I found this:
<amq:broker useJmx="true" persistent="true" brokerName="james" dataDirectory="filesystem=file://var/store/activemq/brokers" useShutdownHook="false" schedulerSupport="false" id="broker">
<amq:destinationPolicy>
<amq:policyMap>
<amq:policyEntries>
<!-- Support priority handling of messages -->
<!-- http://activemq.apache.org/how-can-i-support-priority-queues.html -->
<amq:policyEntry queue=">" prioritizedMessages="true"/>
</amq:policyEntries>
</amq:policyMap>
</amq:destinationPolicy>
<amq:managementContext>
<amq:managementContext createConnector="false"/>
</amq:managementContext>
<amq:persistenceAdapter>
<amq:amqPersistenceAdapter/>
</amq:persistenceAdapter>
<amq:plugins>
<amq:statisticsBrokerPlugin/>
</amq:plugins>
<amq:transportConnectors>
<amq:transportConnector uri="tcp://localhost:0" />
</amq:transportConnectors>
</amq:broker>
Also, one old part from the readme:
- Telnet Management has been removed in favor of JMX with client shell
- More metrics counters available via JMX
...
* Monitor via JMX (launch any JMX client and connect to URL=service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi)
which is confusing as to how to use it.
This is part of a bigger "monolith" project which is now being recreated as microservices but still needs to be supported ;) All was fine until mid-March.
It looks like ActiveMQ management and monitoring is not possible because the remote JMX connector is disabled (createConnector="false"), even though useJmx="true" enables JMX itself.
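If remote monitoring is wanted, one option (assuming you can edit spring-server.xml) is to let the management context create a remote connector; the port below is chosen to match the URL quoted from the readme and is otherwise an assumption:

```xml
<!-- Enable a remote JMX connector for the embedded broker.             -->
<!-- Port 9999 matches the readme's JMX URL; adjust to your environment. -->
<amq:managementContext>
  <amq:managementContext createConnector="true" connectorPort="9999"/>
</amq:managementContext>
```

With that in place, the readme's URL (service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi) should be reachable from jconsole or any other JMX client, and queue MBeans expose attributes such as QueueSize.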
Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
Some terminology:
We have headless Java applications (services) that use a broker to communicate with the web server. They use Java Plugin Framework (JPF). Each service connects to its own personal broker and subscribes to a single topic that is unique to that service (a "service-specific" topic).
This broker has a network connector that automatically connects to all local web instance brokers (typically only 1 of them per LAN).
This broker has a network connector that automatically connects to all local root broker instances (typically only 1 of them per LAN).
Only the web servers ever publish messages to this topic.
Services don't send messages intended for one another.
The web server acts as a front end for the services. It uses Spring Boot. Each web server connects to its own personal broker and subscribes to a single global topic that is shared among all web servers; if a message is sent to that topic, all web servers receive it.
Only the services ever publish messages to this topic.
Web servers don't send messages intended for one another.
This broker has a network connector that automatically connects to all local root broker instances (typically only 1 of them per LAN).
The root broker is a glorified Java appliction that launches an ActiveMQ broker. It uses Spring Boot. The root brokers don't publish or subscribe; they merely act as a bridge between LANs.
Each root broker connects to its parent broker; this is not done via XML but in Java code.
Settings: duplex=true, checkDuplicateMessagesOnDuplex=true, suppressDuplicateTopicSubscriptions=false, networkTTL=10
I followed this guide in configuring all of the brokers: http://activemq.apache.org/networks-of-brokers.html
Here's a diagram of the intended architecture:
What we're observing is that after a certain number of services come online, messages stop flowing between the service and web application instances, even if they're on the same LAN and directly connected to one another. We can see the producers creating the messages (in the web application logs), but the consumers never receive the network data (verified using Wireshark). I can't tell whether the brokers are sending the messages to the wrong location. The correct topics show up in our JMX MBeans when we view a running instance using jconsole.
There are no errors/warnings from any of the JVMs.
One observation we've made is that adding a new web server and service in a different discovery group appears to work with no problem. They have no communication issues whatsoever, so we believe this is a broker configuration issue.
service-broker.xml:
<!-- Connects to root broker and web broker -->
<networkConnectors>
<networkConnector name="service-${broker.id}-broker-nc" duplex="true" networkTTL="10"
checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
uri="multicast://225.5.5.5:6555?group=GROUP_BROKER&amp;maxReconnectAttempts=1&amp;joinNetworkInterface=${broker.netInterface}" />
<networkConnector name="service-${broker.id}-web-nc" duplex="true" networkTTL="10"
checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
uri="multicast://225.5.5.5:6555?group=GROUP_WEB&amp;maxReconnectAttempts=1&amp;joinNetworkInterface=${broker.netInterface}" />
</networkConnectors>
<!-- Don't advertise the broker (only connection should be from localhost) -->
<transportConnectors>
<transportConnector name="openwire"
uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600" />
</transportConnectors>
web-broker.xml:
<!-- Connect to root broker -->
<networkConnectors>
<networkConnector name="web-${broker.id}-broker-nc" duplex="true" networkTTL="10"
checkDuplicateMessagesOnDuplex="true" suppressDuplicateTopicSubscriptions="false"
uri="multicast://225.5.5.5:6555?group=GROUP_BROKER&amp;maxReconnectAttempts=1" />
</networkConnectors>
<!-- Advertise web broker (service will connect to this) -->
<transportConnectors>
<transportConnector name="openwire"
uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
discoveryUri="multicast://225.5.5.5:6555?group=GROUP_WEB" />
</transportConnectors>
root-broker.xml:
<!-- Advertise root broker (service and web will connect to this) -->
<transportConnectors>
<transportConnector name="openwire"
uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&wireFormat.maxFrameSize=104857600"
discoveryUri="multicast://225.5.5.5:6555?group=GROUP_BROKER" />
</transportConnectors>
Do the configurations shown above support the architecture shown above?
Do you see any other problems/pitfalls?
I am using RabbitMQ as a Stomp broker for Spring Websocket application. The client uses SockJS library to connect to the websocket interface.
Every queue created on RabbitMQ by Spring is durable, while the topics are non-durable. Is there any way to make the queues non-durable as well?
I do not think I can configure this on the application side. I played a bit with the RabbitMQ configuration but could not set it up there either.
Example destination on RabbitMQ used for SUBSCRIBE and SEND:
services-user-_385b304f-7a8f-4cf4-a0f1-d6ceed6b8c92
It will be possible to specify properties for endpoints as of RabbitMQ 3.6.0, according to a comment in the RabbitMQ issue tracker - https://github.com/rabbitmq/rabbitmq-stomp/issues/24#issuecomment-137896165:
as of 3.6.0, it will be possible to explicitly define properties for endpoints such as /topic/ and /queue using subscription headers: durable, auto-delete, and exclusive, respectively.
As a workaround, you can try to create the queues on your own using the AMQP protocol and then refer to those queues from the STOMP protocol.
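For completeness, on RabbitMQ 3.6.0+ the headers mentioned in that issue would be sent on the STOMP SUBSCRIBE frame. A rough sketch follows; the subscription id and destination are illustrative, and whether your SockJS/Spring client lets you attach these headers is an assumption you would need to verify:

```
SUBSCRIBE
id:sub-0
destination:/queue/services-user-example
durable:false
auto-delete:true

^@
```

Here `durable:false` asks the plugin to declare the backing queue as non-durable, and `auto-delete:true` removes it once the last subscriber disconnects.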
We have configured our ActiveMQ Broker with the Broker redelivery Plugin using this configuration.
<redeliveryPlugin fallbackToDeadLetter="true"
sendToDlqIfMaxRetriesExceeded="true">
<redeliveryPolicyMap>
<redeliveryPolicyMap>
<redeliveryPolicyEntries>
</redeliveryPolicyEntries>
<!-- the fallback policy for all other destinations -->
<defaultEntry>
<redeliveryPolicy
maximumRedeliveries="15"
useExponentialBackOff="true"
initialRedeliveryDelay="5000"
useCollisionAvoidance="true"
backOffMultiplier="5"
maximumRedeliveryDelay="93600000" />
</defaultEntry>
</redeliveryPolicyMap>
</redeliveryPolicyMap>
</redeliveryPlugin>
The plugin performs as expected, removing the failed message from the queue and retrying it at the specified intervals.
The problem we now face is that we need to monitor how many messages are currently waiting to be retried for each queue, since they will not show up as waiting in the normal queue monitoring. I could not find anything in the JMX tree for ActiveMQ related to the redeliveryPlugin.
The messages are stored in the JobSchedulerStore, which is a separate store from the normal ActiveMQ KahaDB or JDBC stores. There is less visibility into this store; however, there should be an MBean for it. You can get some information via JMX, or by sending JMS messages with special headers set. There is an article on JMS-style administration here.
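Note that the broker redelivery plugin relies on the broker's scheduler, so schedulerSupport must be enabled on the broker; the messages waiting to be retried then live in the scheduler store rather than in the queue. A minimal sketch (the exact JMX object name of the scheduler MBean varies by ActiveMQ version, so treat the one in the comment as an example):

```xml
<!-- schedulerSupport must be true for the redeliveryPlugin's delayed   -->
<!-- redeliveries; the scheduler store then gets its own MBean, e.g.    -->
<!-- org.apache.activemq:type=Broker,brokerName=localhost,service=JobScheduler,name=JMS -->
<broker xmlns="http://activemq.apache.org/schema/core" schedulerSupport="true">
  ...
</broker>
```

Browsing that MBean (for example in jconsole) shows the pending scheduled jobs, which for the redelivery plugin correspond to the messages awaiting retry.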
I am new to ActiveMQ. I have configured two ActiveMQ servers and am using them with the failover transport. They are working fine; I mean, if one ActiveMQ instance goes down, the other picks up the queues. My problem is that when the main server comes back up, it does not restore the queues. Is there any configuration or protocol that can make consumers go back to the main server once it is up again?
Currently my configuration is :
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true"/>
</transportConnectors>
<networkConnectors>
<networkConnector uri="static:(tcp://192.168.0.122:61616)"
networkTTL="3"
prefetchSize="1"
decreaseNetworkConsumerPriority="true" />
</networkConnectors>
and my connection uri is :
failover:(tcp://${ipaddress.master}:61616,tcp://${ipaddress.backup}:61616)?randomize=false
Also, I want to send mail when a failover occurs, so that I know ActiveMQ is down.
What you have configured there is not a true HA deployment, but a network of brokers. If you have two brokers configured in a network, each has its own message store, which at any time contains a partial set of messages (see how networks of brokers work).
The behaviour that you would likely expect to see is that if one broker falls over, the other takes its place resuming from where the failed one left off (with all of the undelivered messages that the failed broker held). For that you need to use a (preferably shared-storage) master-slave configuration.
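A minimal shared-storage master/slave sketch: both brokers point their persistence adapter at the same directory (the path below is an assumption; it must live on storage both machines can reach, such as NFSv4 or a SAN), and the file lock decides which broker is master:

```xml
<!-- Identical on both brokers; whichever grabs the KahaDB lock first      -->
<!-- becomes master, the other blocks as slave until the lock is released. -->
<persistenceAdapter>
  <kahaDB directory="/mnt/shared/activemq/kahadb"/>
</persistenceAdapter>
```

Clients keep using the same failover:(tcp://...,tcp://...) URI; they simply reconnect to whichever broker currently holds the lock, and that broker has the full message store.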
I have done that, and I'm posting the solution in case anyone else has the same problem.
This feature is available in ActiveMQ 5.6. priorityBackup=true in the connection URL is the key that tells the consumer to come back to the primary node when it is available.
My new connection uri is :
failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true
See here for more details.