Issue with message replication in cluster setup - ActiveMQ

I have set up a cluster with two nodes. Messages produced on one node are always consumed by a single node (the node which produced the message). I want messages to be consumed on all nodes.
I have the same destination (topic) on both nodes, so any message published should be consumed by both nodes.
I am not sure if there is an issue with the way I configured it:
<broker
    xmlns="http://activemq.apache.org/schema/core"
    useJmx="true"
    brokerName="mybrokerA"
    useShutdownHook="false"
    persistent="true"
    start="false"
    schedulerSupport="true"
    enableStatistics="false"
    offlineDurableSubscriberTimeout="259200000"
    offlineDurableSubscriberTaskSchedule="3600000">
  <destinations>
    <topic name="FooTopic" physicalName="FooTopic"/>
  </destinations>
  <networkConnectors>
    <networkConnector name="dd" uri="static:(tcp://10.96.10.66:61816,tcp://10.96.10.25:61816)" duplex="true">
      <staticallyIncludedDestinations>
        <topic physicalName="FooTopic"/>
      </staticallyIncludedDestinations>
    </networkConnector>
  </networkConnectors>
  <transportConnectors>
    <transportConnector name="default" uri="tcp://10.96.10.66:61816"/>
  </transportConnectors>
</broker>
The file on node2 is the same, with the IP changed in the transport connector.
The TomEE resource file:
ServerUrl = tcp://10.96.10.66:61816
DataSource = jdbc/teamsdb
BrokerXmlConfig = xbean:file:../conf/activemq.xml
Can anyone please help me with this?
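For what it's worth, a duplex network connector creates a two-way bridge, so it normally only needs to be defined on one of the two brokers, and its static: URI should list only the remote peer rather than the broker's own address. A minimal sketch (assuming this broker is 10.96.10.66 and the peer is 10.96.10.25; adjust to your topology):
<networkConnectors>
  <!-- two-way bridge to the peer broker; the peer needs no matching connector -->
  <networkConnector name="bridge-to-peer" uri="static:(tcp://10.96.10.25:61816)" duplex="true">
    <staticallyIncludedDestinations>
      <topic physicalName="FooTopic"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>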

Related

ActiveMQ classic to ActiveMQ Artemis failover does not work

I'm trying to migrate from ActiveMQ "Classic" to ActiveMQ Artemis.
We have a cluster of 2 active nodes that we are trying to migrate without impacting the consumers and producers. To do so, we stop the first node, migrate it, start it, and do the same on the 2nd when the first is back up.
We are observing that the consumers/producers are not able to reconnect:
o.a.a.t.f.FailoverTransport | | Failed to connect to [tcp://172.17.233.92:63616?soTimeout=30000&soWriteTimeout=30000&keepAlive=true, tcp://172.17.233.93:63616?soTimeout=30000&soWriteTimeout=30000&keepAlive=true] after: 30 attempt(s) continuing to retry.
Consumers/producers are able to connect after we have restarted them.
Is this normal behavior?
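For context, the clients use the failover transport with both brokers listed, along the lines of the URL in the log above; a reconstruction (not our exact client URL) would be:
failover:(tcp://172.17.233.92:63616?soTimeout=30000,tcp://172.17.233.93:63616?soTimeout=30000)
By default the failover transport accepts updated broker lists pushed by the cluster (the updateURIsSupported option, true unless disabled), which turns out to matter here.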
Here is the ActiveMQ Artemis broker configuration:
<connectors>
  <connector name="netty-connector">tcp://172.17.233.92:63616</connector>
  <connector name="server_0">tcp://172.17.233.93:63616</connector>
</connectors>
<acceptors>
  <acceptor name="netty-acceptor">tcp://172.17.233.92:63616?protocols=OPENWIRE</acceptor>
  <acceptor name="invm">vm://0</acceptor>
</acceptors>
<cluster-connections>
  <cluster-connection name="cluster">
    <connector-ref>netty-connector</connector-ref>
    <retry-interval>500</retry-interval>
    <use-duplicate-detection>true</use-duplicate-detection>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
    <static-connectors>
      <connector-ref>server_0</connector-ref>
    </static-connectors>
  </cluster-connection>
</cluster-connections>
And here is the ActiveMQ "Classic" configuration:
<!-- Transport protocol -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="nio://172.17.233.92:63616?transport.soTimeout=15000&amp;transport.threadName&amp;keepAlive=true&amp;transport.soWriteTimeout=15000&amp;wireFormat.maxInactivityDuration=0"
      enableStatusMonitor="true" rebalanceClusterClients="true" updateClusterClients="true" updateClusterClientsOnRemove="true" />
</transportConnectors>
<!-- Network of brokers setup -->
<networkConnectors>
  <!-- we need conduit subscriptions for topics, but not for queues -->
  <networkConnector name="NC_topic" duplex="false" conduitSubscriptions="true" networkTTL="1" uri="static:(tcp://172.17.233.92:63616,tcp://172.17.233.93:63616)" decreaseNetworkConsumerPriority="true" suppressDuplicateTopicSubscriptions="true" dynamicOnly="true">
    <excludedDestinations>
      <queue physicalName=">" />
    </excludedDestinations>
  </networkConnector>
  <networkConnector name="NC_queue" duplex="false" conduitSubscriptions="false" networkTTL="1" uri="static:(tcp://172.17.233.92:63616,tcp://172.17.233.93:63616)" decreaseNetworkConsumerPriority="true" suppressDuplicateQueueSubscriptions="true" dynamicOnly="true">
    <excludedDestinations>
      <topic physicalName=">" />
    </excludedDestinations>
  </networkConnector>
</networkConnectors>
This issue is most likely due to updateClusterClientsOnRemove: if true, clients will be updated when a broker is removed from the network (see the broker-side options for failover).
When the first node is stopped, the clients remove it from their list and never add it back, because the second node, still running ActiveMQ Classic, is not able to connect to the first node, now running ActiveMQ Artemis.
In the end, we decided to stop both nodes first, then upgrade and restart. This implies an interruption from the consumer/producer point of view, but all the subscriptions are re-established properly after the restart.
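For anyone attempting the same rolling migration, one mitigation worth trying (a sketch, untested on our side) is to turn that flag off on the Classic brokers before starting, so clients keep the stopped node in their failover list and can reconnect once it comes back as Artemis:
<transportConnectors>
  <!-- updateClusterClientsOnRemove="false": clients keep removed brokers in their failover list -->
  <transportConnector name="openwire"
      uri="nio://172.17.233.92:63616"
      rebalanceClusterClients="true"
      updateClusterClients="true"
      updateClusterClientsOnRemove="false" />
</transportConnectors>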

ActiveMQ is not shutting down properly - INFO: Regular shutdown not successful, sending SIGKILL to process

Whenever I try to stop ActiveMQ installed on a RHEL server, it doesn't stop gracefully. Does anyone know why? I am not sure why it tries to connect to the JMX broker as shown below and fails. What do I need to do to fix this?
[activemq#myserver apache-activemq-5.15.11]$ bin/activemq stop
INFO: Loading '/web/servers/apache-activemq-5.15.11//bin/env'
INFO: Using java '/bin/java'
INFO: Waiting at least 30 seconds for regular process termination of pid '2963' :
Java Runtime: Oracle Corporation 1.8.0_252 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre
Heap sizes: current=62976k free=61991k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/web/servers/apache-activemq-5.15.11//conf/login.config -Dactivemq.classpath=/web/servers/apache-activemq-5.15.11//conf:/web/servers/apache-activemq-5.15.11//../lib/: -Dactivemq.home=/web/servers/apache-activemq-5.15.11/ -Dactivemq.base=/web/servers/apache-activemq-5.15.11/ -Dactivemq.conf=/web/servers/apache-activemq-5.15.11//conf -Dactivemq.data=/web/servers/apache-activemq-5.15.11//data
Extensions classpath:
[/web/servers/apache-activemq-5.15.11/lib,/web/servers/apache-activemq-5.15.11/lib/camel,/web/servers/apache-activemq-5.15.11/lib/optional,/web/servers/apache-activemq-5.15.11/lib/web,/web/servers/apache-activemq-5.15.11/lib/extra]
ACTIVEMQ_HOME: /web/servers/apache-activemq-5.15.11
ACTIVEMQ_BASE: /web/servers/apache-activemq-5.15.11
ACTIVEMQ_CONF: /web/servers/apache-activemq-5.15.11/conf
ACTIVEMQ_DATA: /web/servers/apache-activemq-5.15.11/data
Connecting to pid: 2963
INFO: failed to resolve jmxUrl for pid:2963, using default JMX url
Connecting to JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
INFO: Broker not available at: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
...............................
INFO: Regular shutdown not successful, sending SIGKILL to process
INFO: sending SIGKILL to pid '2963'
EDIT:
Adding the activemq.xml configuration file:
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

  <!-- Allows us to use system properties as variables in this configuration file -->
  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
      <value>file:${activemq.conf}/credentials.properties</value>
    </property>
  </bean>

  <!-- Allows accessing the server log -->
  <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
        lazy-init="false" scope="singleton"
        init-method="start" destroy-method="stop">
  </bean>

  <!--
    The <broker> element is used to configure the ActiveMQ broker.
  -->
  <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry topic=">">
            <!-- The constantPendingMessageLimitStrategy is used to prevent
                 slow topic consumers to block producers and affect other consumers
                 by limiting the number of messages that are retained
                 For more information, see:
                 http://activemq.apache.org/slow-consumer-handling.html
            -->
            <pendingMessageLimitStrategy>
              <constantPendingMessageLimitStrategy limit="1000"/>
            </pendingMessageLimitStrategy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

    <!--
      The managementContext is used to configure how ActiveMQ is exposed in
      JMX. By default, ActiveMQ uses the MBean server that is started by
      the JVM. For more information, see:
      http://activemq.apache.org/jmx.html
    -->
    <managementContext>
      <managementContext createConnector="false"/>
    </managementContext>

    <!--
      Configure message persistence for the broker. The default persistence
      mechanism is the KahaDB store (identified by the kahaDB tag).
      For more information, see:
      http://activemq.apache.org/persistence.html
    -->
    <persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>

    <!--
      The systemUsage controls the maximum amount of space the broker will
      use before disabling caching and/or slowing down producers. For more information, see:
      http://activemq.apache.org/producer-flow-control.html
    -->
    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage percentOfJvmHeap="70" />
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="50 gb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

    <!--
      The transport connectors expose ActiveMQ over a given protocol to
      clients and other brokers. For more information, see:
      http://activemq.apache.org/configuring-transports.html
    -->
    <transportConnectors>
      <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
      <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
      <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
      <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
      <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
      <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>

    <!-- destroy the spring context on shutdown to stop jetty -->
    <shutdownHooks>
      <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
    </shutdownHooks>

  </broker>

  <!--
    Enable web consoles, REST and Ajax APIs and demos
    The web consoles requires by default login, you can disable this in the jetty.xml file
    Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
  -->
  <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->
Adding the activemq.log file from /web/servers/apache-activemq-5.15.11/data.
Pasting a shareable link below:
https://drive.google.com/file/d/1vQ6HkOu53mzMi-GqbGxaQP6EJNC4RT4G/view?usp=sharing
I don't know if this is the issue for you, but after some reading it sounds like ActiveMQ uses JMX for its shutdown hook, while the default config that ships with ActiveMQ does not set up the JMX connector.
For me, ActiveMQ was able to shut down once I changed:
<managementContext>
  <managementContext createConnector="false"/>
</managementContext>
to specify a JMX connectorPort:
<managementContext>
  <managementContext connectorPort="1099"/>
</managementContext>
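Alternatively, the stop script can be pointed at an explicit JMX URL instead of letting it guess one (a sketch; it assumes a JMX connector is actually listening on that port):
bin/activemq stop --jmxurl service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi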
Some references:
Activemq Shutdown fails and then kills process
https://issues.apache.org/jira/browse/AMQ-6927

Good use of bridged MQTT brokers

I'm currently working on a project in which an external app sends data coming from many sensors via the MQTT protocol.
I want to collect all of this data and send it to an external server. I plan to create 2 MQTT brokers:
one local (on the machine with the app that sends data)
one on the distant server
I will create a network bridge between the two. It's a capability offered by my MQTT server app, ActiveMQ (I imagine that's a common feature).
This way the data-producing app will publish on the local broker and, via the bridge, the same data will be published on the remote broker. The point is to let the app keep working without problems in case of connection loss.
When I lose the network connection between the brokers, I don't get the data produced by the app during the time there was no connection. Do you know if it's possible to configure the bridge to make it work the way I want?
Or will I have to develop a little program which listens on all topics of the local broker, detects connection losses, and re-sends all lost messages to the remote broker?
I've added the configuration files from my two brokers. My first ActiveMQ server is on the same machine as my app and the second ActiveMQ server is on another machine on the same network. Both computers ping each other perfectly.
Local broker:
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
      <value>file:${activemq.conf}/credentials.properties</value>
    </property>
  </bean>

  <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
        lazy-init="false" scope="singleton"
        init-method="start" destroy-method="stop">
  </bean>

  <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry topic=">">
            <pendingMessageLimitStrategy>
              <constantPendingMessageLimitStrategy limit="1000"/>
            </pendingMessageLimitStrategy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

    <managementContext>
      <managementContext createConnector="false"/>
    </managementContext>

    <persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>

    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage percentOfJvmHeap="70" />
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="50 gb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

    <transportConnectors>
      <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>

    <networkConnectors>
      <networkConnector uri="static:(tcp://192.168.16.100:61616)"/>
    </networkConnectors>

    <shutdownHooks>
      <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
    </shutdownHooks>

  </broker>

  <import resource="jetty.xml"/>

</beans>
Remote broker:
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
      <value>file:${activemq.conf}/credentials.properties</value>
    </property>
  </bean>

  <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
        lazy-init="false" scope="singleton"
        init-method="start" destroy-method="stop">
  </bean>

  <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry topic=">">
            <pendingMessageLimitStrategy>
              <constantPendingMessageLimitStrategy limit="1000"/>
            </pendingMessageLimitStrategy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

    <managementContext>
      <managementContext createConnector="false"/>
    </managementContext>

    <persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>

    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage percentOfJvmHeap="70" />
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="50 gb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

    <transportConnectors>
      <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
      <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>

    <shutdownHooks>
      <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
    </shutdownHooks>

  </broker>

  <import resource="jetty.xml"/>

</beans>
To simulate disconnection between the two brokers, I simply disconnect the second computer from the network.
I use MQTTBox on both computers to subscribe to the topics I write on. That's how I saw that data sent on a topic of the local broker while the second computer was disconnected is not published on the same topic of the remote broker when I reconnect it.
EDIT: new info
I tried my test again today and noticed a "retain" checkbox in my MQTT client, MQTTBox.
So:
With computer A, I publish a message with retain checked on topic /test, while computer B is listening on /#.
When the 2 computers are connected, it obviously works well: I see the message on computer B.
When I disconnect computer B, publish 2 messages with retain checked, then reconnect computer B, I only see the most recent of the 2 messages I published...
It's better, but I'd like to see the other message too... If anyone can help me, I'm lost...
I can also set a QoS for the messages I want to publish. I tried with QoS = 0 and QoS = 1: same result.
QoS for messages works for bridge connections as well.
So if the bridge is configured for a topic with a QoS greater than 0, the local broker will queue up the messages while the connection to the remote broker is down and will send them when the connection comes back up.
This way no messages will be lost.
This is a perfectly normal deployment pattern for MQTT brokers.
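Note also that a retained message is not a queue: the broker stores only the most recent retained message per topic, which is why only one of the two messages published during the disconnection showed up after reconnecting.
With ActiveMQ specifically, the bridge is the networkConnector; below is a sketch of a local-broker connector that always forwards a given topic over the bridge, regardless of remote consumers (assuming the ActiveMQ-side topic is named test; whether messages survive a broken link still depends on them being sent persistent / QoS > 0):
<networkConnectors>
  <!-- statically include the topic so it is forwarded even with no active remote subscriber -->
  <networkConnector name="bridge-to-remote" uri="static:(tcp://192.168.16.100:61616)">
    <staticallyIncludedDestinations>
      <topic physicalName="test"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>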

AMQ load balancing cluster setup

I know there is a lot of information about AMQ clusters, but I don't know how to set this up.
I need a load-balancing cluster with two machines and two instances on each machine, like 192.168.0.1 (instance1, instance2) and 192.168.0.2 (instance3, instance4).
My OpenWire ports are 0.0.0.0:61617 - 0.0.0.0:61620.
I found some solutions on the internet like:
Instance1:
<networkConnectors>
  <networkConnector name="instance1-instance3-instance4" uri="masterslave:(tcp://192.168.0.2:61619,tcp://192.168.0.2:61620)" />
</networkConnectors>
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb1" />
</persistenceAdapter>
Instance2:
<networkConnectors>
  <networkConnector name="instance2-instance3-instance4" uri="masterslave:(tcp://192.168.0.2:61619,tcp://192.168.0.2:61620)" />
</networkConnectors>
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb1" />
</persistenceAdapter>
Instance3:
<networkConnectors>
  <networkConnector name="instance3-instance1-instance2" uri="masterslave:(tcp://192.168.0.1:61617,tcp://192.168.0.1:61618)" />
</networkConnectors>
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb2" />
</persistenceAdapter>
Instance4:
<networkConnectors>
  <networkConnector name="instance4-instance1-instance2" uri="masterslave:(tcp://192.168.0.1:61617,tcp://192.168.0.1:61618)" />
  <networkConnector name="FAILOVER"
      uri="static:(failover:(tcp://192.168.0.1:61617,tcp://192.168.0.2:61619,tcp://192.168.0.1:61618,tcp://192.168.0.2:61620))?randomize=false"
      dynamicOnly="true"
      networkTTL="4"
      duplex="true"/>
</networkConnectors>
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb2" />
</persistenceAdapter>
And it works: they connect to each other, and when one node goes down, instance4 reconnects to another one via this failover string. But I don't know whether it really works correctly or not; can you please tell me?
Maybe somebody has built an LB cluster and can share the XML files? Please. And sorry if my English is not so good; I am from another country.
The solution to scale out (load balance, from a broker perspective) an ActiveMQ deployment is called a Network of Brokers. The network consists of multiple logical nodes, which you seem to be setting up. These nodes can be of master-slave type, to cover for machine failure etc.
There are example configs inside the ActiveMQ distribution that you can look at.
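For a symmetric setup like yours, one common shape (a sketch based on the masterslave: URIs from the question, not a verified drop-in config) is a single duplex connector per master/slave pair, defined on one side only, so each pair bridges to the other and slaves are covered automatically:
<!-- on instance1 (pair A): one duplex bridge to pair B; pair B needs no matching connector -->
<networkConnectors>
  <networkConnector name="pairA-to-pairB"
      uri="masterslave:(tcp://192.168.0.2:61619,tcp://192.168.0.2:61620)"
      duplex="true"
      networkTTL="2"/>
</networkConnectors>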

Linux ActiveMQ destinations Topic startup

I am trying to set up a Topic at startup of ActiveMQ. We will have durable subscribers, but they are not yet available.
The startup config documentation says to add:
<destinations>
  <queue physicalName="FOO.BAR" />
  <topic physicalName="SOME.TOPIC" />
</destinations>
I have added this to activemq.xml, but no luck; no Topic is created at startup of ActiveMQ. We are running 5.7.
Ideas?
EDIT:
I am trying to set up a Topic on startup of ActiveMQ. When ActiveMQ is restarted (or shut down and started), Topics are deleted because they live in memory. I want to add a Topic in the XML configuration so it is created on the fly when AMQ is started. This way our ESB can reach it directly and can start to work. The ESB will be a durable subscriber, but not yet; we are still implementing that. The documentation says to add the above rows to the XML config, but I have no luck with that: a Topic is not created upon start.
So should I just add them wherever, like below?
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

  <!-- Allows us to use system properties as variables in this configuration file -->
  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
      <value>file:${activemq.conf}/credentials.properties</value>
    </property>
  </bean>

  <!--
    The <broker> element is used to configure the ActiveMQ broker.
  -->
  <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">

    <!-- Like here? -->
    <destinations>
      <queue physicalName="FOO.BAR" />
      <topic physicalName="SOME.TOPIC" />
    </destinations>

    <!--
      For better performances use VM cursor and small memory limit.
      For more information, see:
      http://activemq.apache.org/message-cursors.html
      Also, if your producer is "hanging", it's probably due to producer flow control.
      For more information, see:
      http://activemq.apache.org/producer-flow-control.html
    -->
    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry topic=">" producerFlowControl="true">
            <!-- The constantPendingMessageLimitStrategy is used to prevent
                 slow topic consumers to block producers and affect other consumers
                 by limiting the number of messages that are retained
                 For more information, see:
                 http://activemq.apache.org/slow-consumer-handling.html
            -->
            <pendingMessageLimitStrategy>
              <constantPendingMessageLimitStrategy limit="1000"/>
            </pendingMessageLimitStrategy>
          </policyEntry>
          <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
            <!-- Use VM cursor for better latency
                 For more information, see:
                 http://activemq.apache.org/message-cursors.html
            <pendingQueuePolicy>
              <vmQueueCursor/>
            </pendingQueuePolicy>
            -->
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

    <!--
      The managementContext is used to configure how ActiveMQ is exposed in
      JMX. By default, ActiveMQ uses the MBean server that is started by
      the JVM. For more information, see:
      http://activemq.apache.org/jmx.html
    -->
    <managementContext>
      <managementContext createConnector="false"/>
    </managementContext>

    <!--
      Configure message persistence for the broker. The default persistence
      mechanism is the KahaDB store (identified by the kahaDB tag).
      For more information, see:
      http://activemq.apache.org/persistence.html
    -->
    <persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>

    <!--
      The systemUsage controls the maximum amount of space the broker will
      use before slowing down producers. For more information, see:
      http://activemq.apache.org/producer-flow-control.html
      If using ActiveMQ embedded - the following limits could safely be used:
      <systemUsage>
        <systemUsage>
          <memoryUsage>
            <memoryUsage limit="20 mb"/>
          </memoryUsage>
          <storeUsage>
            <storeUsage limit="1 gb"/>
          </storeUsage>
          <tempUsage>
            <tempUsage limit="100 mb"/>
          </tempUsage>
        </systemUsage>
      </systemUsage>
    -->
    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage limit="64 mb"/>
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="50 gb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

    <!--
      The transport connectors expose ActiveMQ over a given protocol to
      clients and other brokers. For more information, see:
      http://activemq.apache.org/configuring-transports.html
    -->
    <transportConnectors>
      <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
      <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
      <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
    </transportConnectors>

    <!-- destroy the spring context on shutdown to stop jetty -->
    <shutdownHooks>
      <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
    </shutdownHooks>

  </broker>

  <!--
    Enable web consoles, REST and Ajax APIs and demos
    Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
  -->
  <import resource="jetty.xml"/>

</beans>
/Ziggy
I just dropped those into a vanilla 5.7 AMQ installation (on macOS) and I see both the queue and the topic via the web console...
You should try again with a clean install of AMQ to narrow the issue down.
My own solution works. Though in our Linux environment we had more than one instance: one under /user/share and one under /home/activemq/.
So it worked when I edited the correct file.
Thank you for your efforts.
Put your destinations inside a <destinations> tag, like:
<destinations>
  <queue physicalName="FOO.BAR" />
  <topic physicalName="SOME.TOPIC" />
</destinations>
inside the <broker></broker> tags.
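To check whether the topic actually got created after a restart, one option (a sketch; it assumes the broker's JMX is reachable with default settings) is the admin query tool that ships with 5.x:
bin/activemq-admin query -QTopic=SOME.TOPIC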