I'm currently working on a project in which an external app sends data from many sensors via the MQTT protocol.
I want to collect all of this data and send it to an external server. I want to create 2 MQTT brokers:
one local (on the machine with the app that sends data)
one in the distant server
I will create a network bridge between the two. This is a feature offered by my MQTT server, ActiveMQ (I imagine it's a common one).
This way the data-producing app will publish to the local broker and, via the bridge, the same data will be published to the remote broker. The point is to let the app keep working without problems in case of connection loss.
When I lose the network connection between the brokers, I don't get the data produced by the app during the time there was no connection. Do you know if it's possible to configure the bridge so that it works the way I want?
Or will I have to develop a little program which listens on all topics of the local broker, detects connection losses, and re-sends any lost messages to the remote broker?
I've added the configuration files from my two brokers below. The first ActiveMQ server is on the same machine as my app and the second ActiveMQ server is on another machine on the same network. Both computers ping each other perfectly.
Local broker:
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<networkConnectors>
<networkConnector uri="static:(tcp://192.168.16.100:61616)"/>
</networkConnectors>
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<import resource="jetty.xml"/>
</beans>
Remote broker:
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<import resource="jetty.xml"/>
</beans>
To simulate a disconnection between the two brokers, I simply disconnect the second computer from the network.
I use MQTTBox on both computers to subscribe to the topics I publish on. That's how I saw that data sent to a topic on the local broker while the second computer was disconnected is not published to the same topic on the remote broker when I reconnect it.
EDIT: new info
I tried my test again today and noticed a "retain" checkbox in my MQTT client, MQTTBox.
So:
With computer A, I publish a message with retain checked on topic /test, and computer B is listening on /#.
When the 2 computers are connected, it obviously works well: I see the message on computer B.
When I disconnect computer B, publish 2 messages with retain checked, then reconnect computer B, I only see the most recent of the 2 messages I published...
That's better, but I'd like to see the other message too... If anyone can help me, I'm lost...
I can also set a QoS for the messages I publish. I tried with QoS = 0 and QoS = 1: same thing.
QoS for messages works for bridge connections as well.
So if the bridge is configured for a topic with a QoS greater than 0, the local broker will queue up the messages while the connection to the remote broker is down and will send them when the connection comes back up.
This way no messages will be lost.
This is a perfectly normal deployment pattern for MQTT brokers.
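Note that ActiveMQ's OpenWire network bridge has no per-topic QoS setting as such; the closest equivalent I'm aware of is to statically include the topic and make the bridge's subscription durable, so the local store holds messages while the link is down. A minimal sketch of the local broker's networkConnector along those lines (the topic name sensors.> and the use of durableDestinations are my assumptions, not taken from your posted configs, so please verify against your ActiveMQ version):
<networkConnectors>
    <!-- Sketch only: static bridge to the remote broker for the sensor topic.
         durableDestinations is assumed to make the forwarding subscription
         durable so messages are kept while the remote broker is unreachable. -->
    <networkConnector name="to-remote" uri="static:(tcp://192.168.16.100:61616)">
        <staticallyIncludedDestinations>
            <topic physicalName="sensors.>"/>
        </staticallyIncludedDestinations>
        <durableDestinations>
            <topic physicalName="sensors.>"/>
        </durableDestinations>
    </networkConnector>
</networkConnectors>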
Related
I have two brokers A and B. If I want to forward messages from A to B, everything is simple. I just need a network connector on broker A configured like this:
<networkConnectors>
<networkConnector staticBridge="true" userName="user" password="pass" uri="static://(tcp://B:61616)">
<staticallyIncludedDestinations>
<queue physicalName="QUEUE.TO.FORWARD.MESSAGE" />
</staticallyIncludedDestinations>
</networkConnector>
</networkConnectors>
I thought that if I wanted to consume messages from broker B from some other queue (let's name it QUEUE.TO.CONSUME), I just needed to do the same thing but with duplex set to true, and then listen on QUEUE.TO.CONSUME on broker A, like this:
<networkConnectors>
<networkConnector name="from-B-to-A" staticBridge="true" duplex="true" userName="user" password="pass" uri="static://(tcp://B:61616)">
<staticallyIncludedDestinations>
<queue physicalName="QUEUE.TO.CONSUME" />
</staticallyIncludedDestinations>
</networkConnector>
<networkConnector staticBridge="true" userName="user" password="pass" uri="static://(tcp://B:61616)">
<staticallyIncludedDestinations>
<queue physicalName="QUEUE.TO.FORWARD.MESSAGE" />
</staticallyIncludedDestinations>
</networkConnector>
</networkConnectors>
But it does not work as I expected. It seems that only every second message is forwarded and the rest are just lost. Surprisingly, this creates two consumers on broker B's QUEUE.TO.CONSUME, and I assume one of them consumes messages without forwarding them to broker A. How can I create a bridge on broker A that lets me consume messages from broker B without losing messages? Creating a network connector on broker B is not an option for now.
I've also tried creating an inbound queue bridge like this:
<jmsBridgeConnectors>
<jmsQueueConnector outboundQueueConnectionFactory="#remoteBroker" localUsername="user" localPassword="password">
<inboundQueueBridges>
<inboundQueueBridge inboundQueueName="QUEUE.TO.CONSUME" localQueueName="QUEUE.TO.CONSUME" />
</inboundQueueBridges>
</jmsQueueConnector>
</jmsBridgeConnectors>
...
</broker>
<bean id="remoteBroker" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="failover://(nio:B:61616)" />
<property name="userName" value="user" />
<property name="password" value="password" />
</bean>
This configuration creates a consumer on the remote broker B, but it doesn't consume any messages; they just sit there enqueued and nothing happens. Broker A still doesn't receive any messages on its local queue.
OK, I figured it out. I just used embedded Apache Camel to define a route from the remote host, and it looks like this (camel.xml in the conf directory):
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:camel="http://camel.apache.org/schema/spring"
xsi:schemaLocation="
http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<camelContext id="context" xmlns="http://camel.apache.org/schema/spring">
<route>
<from uri="remoteBroker:queue:QUEUE.TO.CONSUME"/>
<to uri="localBroker:queue:QUEUE.TO.CONSUME"/>
</route>
</camelContext>
<bean id="remoteBroker" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://B:61616"/>
<property name="userName" value="user"/>
<property name="password" value="password"/>
</bean>
</property>
</bean>
<bean id="localBroker" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="vm://localhost"/>
</bean>
</property>
</bean>
</beans>
where localhost is broker A. And in activemq.xml:
<import resource="camel.xml"/>
Whenever I try to stop ActiveMQ installed on a RHEL server, it doesn't stop gracefully. Does anyone know why? I am not sure why it tries to connect to the JMX broker as shown below and fails. What do I need to do to fix this?
[activemq#myserver apache-activemq-5.15.11]$ bin/activemq stop
INFO: Loading '/web/servers/apache-activemq-5.15.11//bin/env'
INFO: Using java '/bin/java'
INFO: Waiting at least 30 seconds for regular process termination of pid '2963' :
Java Runtime: Oracle Corporation 1.8.0_252 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre
Heap sizes: current=62976k free=61991k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/web/servers/apache-activemq-5.15.11//conf/login.config -Dactivemq.classpath=/web/servers/apache-activemq-5.15.11//conf:/web/servers/apache-activemq-5.15.11//../lib/: -Dactivemq.home=/web/servers/apache-activemq-5.15.11/ -Dactivemq.base=/web/servers/apache-activemq-5.15.11/ -Dactivemq.conf=/web/servers/apache-activemq-5.15.11//conf -Dactivemq.data=/web/servers/apache-activemq-5.15.11//data
Extensions classpath:
[/web/servers/apache-activemq-5.15.11/lib,/web/servers/apache-activemq-5.15.11/lib/camel,/web/servers/apache-activemq-5.15.11/lib/optional,/web/servers/apache-activemq-5.15.11/lib/web,/web/servers/apache-activemq-5.15.11/lib/extra]
ACTIVEMQ_HOME: /web/servers/apache-activemq-5.15.11
ACTIVEMQ_BASE: /web/servers/apache-activemq-5.15.11
ACTIVEMQ_CONF: /web/servers/apache-activemq-5.15.11/conf
ACTIVEMQ_DATA: /web/servers/apache-activemq-5.15.11/data
Connecting to pid: 2963
INFO: failed to resolve jmxUrl for pid:2963, using default JMX url
Connecting to JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
INFO: Broker not available at: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
...............................
INFO: Regular shutdown not successful, sending SIGKILL to process
INFO: sending SIGKILL to pid '2963'
EDIT:
Adding activemq.xml configuration file
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!-- Allows accessing the server log -->
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
<!-- END SNIPPET: example -->
Adding activemq.log file from /web/servers/apache-activemq-5.15.11/data
Pasting shareable link below:
https://drive.google.com/file/d/1vQ6HkOu53mzMi-GqbGxaQP6EJNC4RT4G/view?usp=sharing
I don't know if this is the issue for you, but after some reading it sounds like ActiveMQ uses JMX for its shutdown hook, while the default config that ships with ActiveMQ does not set up the JMX connector.
For me, ActiveMQ was able to shut down once I changed:
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
to this, which specifies a JMX connectorPort:
<managementContext>
<managementContext connectorPort="1099"/>
</managementContext>
Some references:
Activemq Shutdown fails and then kills process
https://issues.apache.org/jira/browse/AMQ-6927
I have an ActiveMQ v5.7.0 broker, running in Karaf v2.3.3, that I want to enable for remote connections. I've set the broker URL to 0.0.0.0:61616, to enable it to listen to network traffic. I've opened the firewall to allow the traffic from the client machines. However, all remote connections are being refused. A quick netstat seems to tell me that the broker isn't listening outside of localhost.
jeremy#server:~$ netstat -pan | grep 61616
tcp6 0 0 127.0.0.1:61616 :::* LISTEN -
Looking at the broker via Hawtio tells me that the URL looks as it should.
Transport connectors Openwire: tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600
The firewall is definitely OK, as the connections are being refused rather than just being dropped.
The broker is responding correctly to connections from localhost.
2013-10-14 17:34:29 Connected to localhost:61613
This is the sort of error I get from remote connections:
Error connecting to xxx.xxx.xxx.xxx:61613: IO::Socket::INET: connect: Connection refused at /usr/local/share/perl/5.14.2/Net/Stomp.pm line 102.
EDIT: telnet output added
Localhost port 61613
jeremy#server:~$ telnet localhost 61613
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Remote connection port 61613
jeremy#other-server:~$ telnet xxx.xxx.xxx.xxx 61613
Trying xxx.xxx.xxx.xxx...
telnet: Unable to connect to remote host: Connection refused
Localhost connection port 61616 (this one is interesting)
jeremy#server:~$ telnet localhost 61616
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
ðActiveMQ Þ
MaxFrameSizÿÿÿ CacheSize
CacheEnabledSizePrefixDisabled MaxInactivityDurationInitalDelay'TcpNoDelayEnabledMaxInactivityDurationu0TightEncodingEnabledStackTraceEnabledPuTTYConnection closed by foreign host.
Remote connection port 61616
jeremy#other-server:~$ telnet xxx.xxx.xxx.xxx 61616
Trying xxx.xxx.xxx.xxx...
telnet: Unable to connect to remote host: Connection refused
EDIT: remote server karaf log output added
2013-10-15 19:00:46,599 | ERROR | c.event.invited] | faultJmsMessageListenerContainer | .DefaultMessageListenerContainer 909 | 69 - org.springframework.jms - 3.2.4.RELEASE | Could not refresh JMS Connection for destination 'Consumer.notifications.VirtualTopic.event.invited' - retrying in 5000 ms. Cause: Error while attempting to add new Connection to the pool; nested exception is javax.jms.JMSException: Could not connect to broker URL: tcp://xxx.xxx.xxx.xxx:61616. Reason: java.net.ConnectException: Connection refused
Here's the broker.xml.
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"
xmlns:amq="http://activemq.apache.org/schema/core">
<ext:property-placeholder />
<broker xmlns="http://activemq.apache.org/schema/core"
brokerName="jellyfish-messaging"
dataDirectory="${karaf.data}/activemq/localhost"
useShutdownHook="false"
persistent="true"
schedulerSupport="true"
startAsync="true">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true" memoryLimit="1mb">
<pendingSubscriberPolicy>
<vmCursor />
</pendingSubscriberPolicy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<persistenceAdapter>
<kahaDB directory="${karaf.data}/activemq/localhost/kahadb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="64 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!-- The transport connectors ActiveMQ will listen to -->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
</transportConnectors>
</broker>
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://0.0.0.0:61616" />
</bean>
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
<property name="maxConnections" value="8" />
<property name="maximumActive" value="500" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
</bean>
<bean id="resourceManager" class="org.apache.activemq.pool.ActiveMQResourceManager" init-method="recoverResource">
<property name="transactionManager" ref="transactionManager" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="resourceName" value="activemq.localhost" />
</bean>
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="pooledConnectionFactory" />
<property name="transacted" value="false" />
<property name="concurrentConsumers" value="10" />
</bean>
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="configuration" ref="jmsConfig" />
</bean>
<reference id="transactionManager" interface="javax.transaction.TransactionManager" />
<service ref="pooledConnectionFactory" interface="javax.jms.ConnectionFactory">
<service-properties>
<entry key="name" value="localhost"/>
</service-properties>
</service>
</blueprint>
Can anyone tell me what I'm missing?
Thanks,
J.
I've solved this. It was neither a problem with the firewall, nor with the ActiveMQ configuration.
The Karaf kar file in which the ActiveMQ broker was defined included the activemq-web-console feature. We've not been using this feature, as we're fans of Hawtio, so had never configured it.
As per this blog post, the console was coming up with default settings, including listening on port 61616. This meant that two brokers were in a race condition on start-up and the webconsole-defined one was generally winning. Since by default it isn't configured for remote access, it was locking the port for localhost connections only.
The giveaway was a directory called ${activemq.data} (literally) within the Karaf home directory, containing a second KahaDB repository. All of our broker config was set to use the data directory, and we had never specifically set the ActiveMQ environment variables, so this led us to look for where a second broker might have come from.
We might have spotted it more quickly had we run activemq:list inside a Karaf session, as it was listing two brokers.
Simple solution: delete activemq-web-console from the features XML.
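For anyone hitting the same issue, the fix amounts to dropping that one feature from the kar's feature descriptor. A rough sketch of what that looks like (the repository, feature, and bundle names here are placeholders, not our actual descriptor):
<features xmlns="http://karaf.apache.org/xmlns/features/v1.0.0" name="example-repo">
    <feature name="example-messaging" version="1.0.0">
        <feature>activemq</feature>
        <!-- removed: this was starting a second, default-configured broker on 61616
        <feature>activemq-web-console</feature>
        -->
        <bundle>mvn:com.example/example-broker-config/1.0.0</bundle>
    </feature>
</features>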
I've got my ActiveMQ config file and my Mule flows with the "jms:activemq-connector" configured like this:
<jms:activemq-connector name="Active_MQ" specification="1.1" brokerURL="failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=false" validateConnections="false" maxRedelivery="1" doc:name="Active MQ"/>
This is my activemq.xml
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:amq="http://activemq.apache.org/schema/core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}" persistent="false">
<!--
For better performances use VM cursor and small memory limit.
For more information, see:
http://activemq.apache.org/message-cursors.html
Also, if your producer is "hanging", it's probably due to producer flow control.
For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<destinations>
<queue physicalName="Example.Queue"/>
</destinations>
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb" useCache="true">
<!-- Use VM cursor for better latency
For more information, see:
http://activemq.apache.org/message-cursors.html
<pendingQueuePolicy>
<vmQueueCursor/>
</pendingQueuePolicy>
-->
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb" cleanupInterval="30000"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
If using ActiveMQ embedded - the following limits could safely be used:
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="20 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="1 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="100 mb"/>
</tempUsage>
</systemUsage>
</systemUsage>
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="64 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector uri="tcp://localhost:61617?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
It looks like the failover with those URLs "works".
Question: how am I supposed to test the failover? Any suggestions? This is the very first time I've implemented this.
Thanks.
As requested:
I actually ran 2 different ActiveMQ instances in separate command prompts and configured the conf files inside each ActiveMQ folder. That's all.
It worked by using "failover://(tcp://localhost:61616,tcp://localhost:61617)" on the Mule side.
After shutting down the 61616 broker, all the entries would go to the second broker, the 61617 one, so it works great.
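For reference, the only broker-side difference that matters for this test is that the second instance listens on 61617 and has its own broker name and data directory. A minimal sketch of the second instance's activemq.xml (the broker name is illustrative, not taken from my actual files):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker2" dataDirectory="${activemq.data}">
    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>
    <transportConnectors>
        <!-- second instance listens on 61617 so both brokers can run on the same machine -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61617?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>
</broker>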
I am trying to set up a Topic at startup of ActiveMQ. We will have durable subscribers, but they are not yet available.
The startup config documentation says to add:
<destinations>
<queue physicalName="FOO.BAR" />
<topic physicalName="SOME.TOPIC" />
</destinations>
I have added this to activemq.xml, but no luck. No Topic is created at startup of ActiveMQ. We are running 5.7.
Ideas?
EDIT:
I am trying to set up a Topic on startup of ActiveMQ. When ActiveMQ is restarted (or shut down and started), Topics are deleted because they live in memory. I want to add a Topic in the XML configuration so it is created on the fly when AMQ is started; that way our ESB can reach it directly and can start to work. The ESB will be a durable subscriber, but not yet, as we are still implementing it. The documentation says to add the rows above to the XML config, but I have had no luck with that: a Topic is not created upon start.
So should I just add them wherever?
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:amq="http://activemq.apache.org/schema/core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<!-- Like here? -->
<destinations>
<queue physicalName="FOO.BAR" />
<topic physicalName="SOME.TOPIC" />
</destinations>
<!--
For better performances use VM cursor and small memory limit.
For more information, see:
http://activemq.apache.org/message-cursors.html
Also, if your producer is "hanging", it's probably due to producer flow control.
For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
<!-- Use VM cursor for better latency
For more information, see:
http://activemq.apache.org/message-cursors.html
<pendingQueuePolicy>
<vmQueueCursor/>
</pendingQueuePolicy>
-->
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
If using ActiveMQ embedded - the following limits could safely be used:
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="20 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="1 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="100 mb"/>
</tempUsage>
</systemUsage>
</systemUsage>
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="64 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
/Ziggy
I just dropped those into a vanilla 5.7 AMQ installation (on MacOS) and I see both the queue and the topic via the web console...
You should try again with a clean install of AMQ to narrow the issue down.
My own solution works. However, in our Linux environment we had more than one instance: one under /user/share and one under /home/activemq/.
So it worked once I edited the correct file.
Thank you for your efforts.
Put your destinations inside a <destinations> tag, like
<destinations>
<queue physicalName="FOO.BAR" />
<topic physicalName="SOME.TOPIC" />
</destinations>
inside the <broker></broker> tags.