I know there is a lot of information about ActiveMQ clusters, but I don't know how to set this up.
I need a load-balancing cluster with two machines and two broker instances on each machine: 192.168.0.1 (instance1, instance2) and 192.168.0.2 (instance3, instance4).
My OpenWire ports are 0.0.0.0:61617 through 0.0.0.0:61620.
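For clarity, each instance has a single OpenWire transport connector, one port per instance (abridged; the instance-to-port mapping matches the networkConnector URIs below):
<!-- 192.168.0.1 -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61617"/> <!-- instance1 -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61618"/> <!-- instance2 -->
<!-- 192.168.0.2 -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61619"/> <!-- instance3 -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61620"/> <!-- instance4 -->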
I found some solutions on the internet, like the following:
instance1:
<networkConnectors>
<networkConnector name="instance1-instance3-instance4" uri="masterslave:(tcp://192.168.0.2:61619,tcp://192.168.0.2:61620)" />
</networkConnectors>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb1" />
</persistenceAdapter>
instance2:
<networkConnectors>
<networkConnector name="instance2-instance3-instance4" uri="masterslave:(tcp://192.168.0.2:61619,tcp://192.168.0.2:61620)" />
</networkConnectors>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb1" />
</persistenceAdapter>
instance3:
<networkConnectors>
<networkConnector name="instance3-instance1-instance2" uri="masterslave:(tcp://192.168.0.1:61617,tcp://192.168.0.1:61618)" />
</networkConnectors>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb2" />
</persistenceAdapter>
instance4:
<networkConnectors>
<networkConnector name="instance4-instance1-instance2" uri="masterslave:(tcp://192.168.0.1:61617,tcp://192.168.0.1:61618)" />
<networkConnector name="FAILOVER"
uri="static:(failover:(tcp://192.168.0.1:61617,tcp://192.168.0.2:61619,tcp://192.168.0.1:61618,tcp://192.168.0.2:61620))?randomize=false"
dynamicOnly="true"
networkTTL="4"
duplex="true"/>
</networkConnectors>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb2" />
</persistenceAdapter>
And it works: the instances connect to each other, and when one node goes down, instance4 reconnects to another one via that failover string. But I don't know whether it actually works correctly; can someone please tell me?
Has anyone built a load-balancing cluster who could share their XML files? Please. And sorry if my English is not so good; I am from another country.
The way to scale out (load balance from a broker perspective) an ActiveMQ deployment is called a Network of Brokers. The network consists of multiple logical nodes, which is what you seem to be setting up. These nodes can be master/slave pairs, to cover for machine failure and the like.
There are example configs inside the ActiveMQ distribution that you can look at.
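For example, here is a minimal sketch of instance1's side of such a network (ports taken from your config; treat it as a starting point and verify against the shipped examples):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="instance1" dataDirectory="${activemq.data}">
  <networkConnectors>
    <!-- bridge to whichever of instance3/instance4 is currently the master -->
    <networkConnector name="instance1-to-host2"
        uri="masterslave:(tcp://192.168.0.2:61619,tcp://192.168.0.2:61620)"
        duplex="true"/>
  </networkConnectors>
  <persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb1"/>
  </persistenceAdapter>
</broker>
The masterslave: scheme is shorthand for static:(failover:(...)) with randomization disabled, and with duplex="true" a single connector carries traffic in both directions, so the mirror-image connectors on instance3/instance4 would then be redundant.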
I'm trying to migrate from ActiveMQ "Classic" to ActiveMQ Artemis.
We have a cluster of 2 active nodes that we are trying to migrate without impacting the consumers and producers. To do so, we stop the first node, migrate it, start it, and do the same on the second node once the first is back up.
We are observing that the consumers/producers are not able to reconnect:
o.a.a.t.f.FailoverTransport | | Failed to connect to [tcp://172.17.233.92:63616?soTimeout=30000&soWriteTimeout=30000&keepAlive=true, tcp://172.17.233.93:63616?soTimeout=30000&soWriteTimeout=30000&keepAlive=true] after: 30 attempt(s) continuing to retry.
The consumers/producers are only able to connect after we have restarted them.
Is this normal behavior?
Here is the ActiveMQ Artemis broker configuration:
<connectors>
<connector name="netty-connector">tcp://172.17.233.92:63616</connector>
<connector name="server_0">tcp://172.17.233.93:63616</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">tcp://172.17.233.92:63616?protocols=OPENWIRE</acceptor>
<acceptor name="invm">"vm://0</acceptor>
</acceptors>
<cluster-connections>
<cluster-connection name="cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>server_0</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
And here is the ActiveMQ "Classic" configuration:
<!-- Transport protocol -->
<transportConnectors>
<transportConnector name="openwire"
uri="nio://172.17.233.92:63616?transport.soTimeout=15000&transport.threadName&keepAlive=true&transport.soWriteTimeout=15000&wireFormat.maxInactivityDuration=0"
enableStatusMonitor="true" rebalanceClusterClients="true" updateClusterClients="true" updateClusterClientsOnRemove="true" />
</transportConnectors>
<!-- Network of brokers setup -->
<networkConnectors>
<!-- we need conduit subscriptions for topics, but not for queues -->
<networkConnector name="NC_topic" duplex="false" conduitSubscriptions="true" networkTTL="1" uri="static:(tcp://172.17.233.92:63616,tcp://172.17.233.93:63616)" decreaseNetworkConsumerPriority="true" suppressDuplicateTopicSubscriptions="true" dynamicOnly="true">
<excludedDestinations>
<queue physicalName=">" />
</excludedDestinations>
</networkConnector>
<!-- we need conduit subscriptions for topics, but not for queues -->
<networkConnector name="NC_queue" duplex="false" conduitSubscriptions="false" networkTTL="1" uri="static:(tcp://172.17.233.92:63616,tcp://172.17.233.93:63616)" decreaseNetworkConsumerPriority="true" suppressDuplicateQueueSubscriptions="true" dynamicOnly="true">
<excludedDestinations>
<topic physicalName=">" />
</excludedDestinations>
</networkConnector>
</networkConnectors>
This issue is most likely due to updateClusterClientsOnRemove: if true, the broker updates clients when a node is removed from the network (see the broker-side options for failover).
When the first node is stopped, the clients remove it from their list and never add it back, because the second node (ActiveMQ Classic) isn't able to connect to the first node (now ActiveMQ Artemis) and re-advertise it.
In the end, we decided to stop both nodes first, then upgrade and restart them. This implies an interruption from the consumer/producer point of view, but all the subscriptions are re-established properly after the restart.
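For anyone who must do a rolling migration, one option we considered but did not validate is to stop the Classic brokers from rewriting the clients' failover list, so the clients keep retrying the static URIs from their own connection string (a sketch, with the URI options abbreviated):
<transportConnector name="openwire"
    uri="nio://172.17.233.92:63616?transport.soTimeout=15000&amp;keepAlive=true"
    rebalanceClusterClients="false"
    updateClusterClients="false"
    updateClusterClientsOnRemove="false" />
The client-side equivalent is adding updateURIsSupported=false to the failover: URL, which makes the failover transport ignore broker-pushed cluster updates.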
Whenever I try to stop ActiveMQ installed on a RHEL server, it doesn't stop gracefully. Does anyone know why? I am not sure why it tries to connect to the JMX broker as shown below and fails. What do I need to do to fix these issues?
[activemq#myserver apache-activemq-5.15.11]$ bin/activemq stop
INFO: Loading '/web/servers/apache-activemq-5.15.11//bin/env'
INFO: Using java '/bin/java'
INFO: Waiting at least 30 seconds for regular process termination of pid '2963' :
Java Runtime: Oracle Corporation 1.8.0_252 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre
Heap sizes: current=62976k free=61991k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/web/servers/apache-activemq-5.15.11//conf/login.config -Dactivemq.classpath=/web/servers/apache-activemq-5.15.11//conf:/web/servers/apache-activemq-5.15.11//../lib/: -Dactivemq.home=/web/servers/apache-activemq-5.15.11/ -Dactivemq.base=/web/servers/apache-activemq-5.15.11/ -Dactivemq.conf=/web/servers/apache-activemq-5.15.11//conf -Dactivemq.data=/web/servers/apache-activemq-5.15.11//data
Extensions classpath:
[/web/servers/apache-activemq-5.15.11/lib,/web/servers/apache-activemq-5.15.11/lib/camel,/web/servers/apache-activemq-5.15.11/lib/optional,/web/servers/apache-activemq-5.15.11/lib/web,/web/servers/apache-activemq-5.15.11/lib/extra]
ACTIVEMQ_HOME: /web/servers/apache-activemq-5.15.11
ACTIVEMQ_BASE: /web/servers/apache-activemq-5.15.11
ACTIVEMQ_CONF: /web/servers/apache-activemq-5.15.11/conf
ACTIVEMQ_DATA: /web/servers/apache-activemq-5.15.11/data
Connecting to pid: 2963
INFO: failed to resolve jmxUrl for pid:2963, using default JMX url
Connecting to JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
INFO: Broker not available at: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
...............................
INFO: Regular shutdown not successful, sending SIGKILL to process
INFO: sending SIGKILL to pid '2963'
EDIT:
Adding activemq.xml configuration file
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- START SNIPPET: example -->
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!-- Allows accessing the server log -->
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
<!-- END SNIPPET: example -->
Adding the activemq.log file from /web/servers/apache-activemq-5.15.11/data.
Shareable link below:
https://drive.google.com/file/d/1vQ6HkOu53mzMi-GqbGxaQP6EJNC4RT4G/view?usp=sharing
I don't know if this is the issue for you, but after some reading it sounded like ActiveMQ uses JMX for its shutdown hook, while the default config that ships with ActiveMQ does not set up the JMX connector.
For me, ActiveMQ was able to shut down once I changed:
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
to this, specifying a JMX connectorPort:
<managementContext>
<managementContext connectorPort="1099"/>
</managementContext>
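Port 1099 matches the default JMX URL that the stop script falls back to (service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi in the output above), so bin/activemq stop should then be able to find the broker without further changes.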
Some references:
Activemq Shutdown fails and then kills process
https://issues.apache.org/jira/browse/AMQ-6927
I'm attempting to use a network of brokers that bridges two LANs over a duplex WAN connector.
There are actually many subscribers in our setup, each connecting to a different "Broker A", if that makes sense. All of the Broker A instances have their own connections to a single "Broker B".
Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
When I initially bring everything online, regardless of the order in which I bring things online, communication between the publisher and subscriber works flawlessly. I've had the system up-and-running for weeks at a time without issue.
What I've observed is that if broker C is restarted, no new topics that show up in broker B ever appear in broker C. New topics are still appearing in broker B as they are created by the subscriber(s). Neither existing nor new topics ever propagate across the WAN to broker C. I've verified this using jconsole.
If I restart broker B, the problem goes away immediately. The topics contained in broker B (according to jconsole) are the same as they were prior to restart, but now they've magically appeared in C.
Brokers B and C have the same configuration (shown below). The only difference is that B creates a duplex network connector to C, using the following code:
final NetworkConnector wanNC = new DiscoveryNetworkConnector(
new URI(String.format("static:(failover:(tcp://%s:%d))", parentNode, port)));
wanNC.setCheckDuplicateMessagesOnDuplex(true);
wanNC.setDecreaseNetworkConsumerPriority(true);
wanNC.setDuplex(true);
wanNC.setName(NetworkUtils.getHostName());
wanNC.setNetworkTTL(10);
wanNC.setSuppressDuplicateTopicSubscriptions(false);
broker.addNetworkConnector(wanNC);
broker.xml:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" />
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="${broker.id}" start="false"
offlineDurableSubscriberTimeout="5000" offlineDurableSubscriberTaskSchedule="5000"
persistent="false" useJmx="true" schedulePeriodForDestinationPurge="86400000">
[...]
<networkConnectors>
<networkConnector name="z-broker-${broker.id}-x-nc"
decreaseNetworkConsumerPriority="true"
networkTTL="10"
uri="multicast://225.5.5.5:6555?group=TO_X">
<excludedDestinations>
<topic physicalName="X.A" />
</excludedDestinations>
</networkConnector>
<networkConnector name="z-broker-${broker.id}-y-nc"
decreaseNetworkConsumerPriority="true"
networkTTL="10"
uri="multicast://225.5.5.5:6555?group=TO_Y">
<excludedDestinations>
<topic physicalName="X.B.>" />
</excludedDestinations>
</networkConnector>
</networkConnectors>
<transportConnectors>
<transportConnector name="openwire"
uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&wireFormat.maxFrameSize=104857600"
discoveryUri="multicast://225.5.5.5:6555?group=TO_Z" />
</transportConnectors>
</broker>
</beans>
Why don't topics from broker B (existing or new) ever show up in broker C?
Why does restarting broker B solve the issue immediately?
Apparently the trick was changing the network connector URI from
static:(failover:(tcp://<ip>:<port>))
to
static:(tcp://<ip>:<port>)
I didn't need the failover transport for any reason, since the connection is intended as a network bridge and there is a single remote address.
For whatever reason, using failover prevented topics from propagating on reconnect.
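In XML terms (the connector name and host are illustrative; our setup actually builds this with the Java API shown in the question), the working bridge is simply:
<networkConnectors>
  <!-- the static transport re-establishes the bridge on its own, so no failover wrapper is needed -->
  <networkConnector name="wan-bridge"
      duplex="true"
      networkTTL="10"
      uri="static:(tcp://remote-host:61616)"/>
</networkConnectors>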
I have set up a cluster with two nodes. Messages produced on one node are always consumed by a single node (the node that produced the message). I want the messages to be consumed on all nodes.
I have a destination (topic) on both nodes.
So any message published should be consumed on both nodes.
I am not sure whether there is a configuration issue with the way I configured it:
<broker
xmlns="http://activemq.apache.org/schema/core"
useJmx="true"
brokerName="mybrokerA"
useShutdownHook="false"
persistent="true"
start="false"
schedulerSupport="true"
enableStatistics="false"
offlineDurableSubscriberTimeout="259200000"
offlineDurableSubscriberTaskSchedule="3600000">
<destinations>
<topic name="FooTopic" physicalName="FooTopic"/>
</destinations>
<networkConnectors>
<networkConnector name="dd" uri="static:(tcp://10.96.10.66:61816,tcp://10.96.10.25:61816)" duplex="true">
<staticallyIncludedDestinations>
<topic physicalName="FooTopic"/>
</staticallyIncludedDestinations>
</networkConnector>
</networkConnectors>
<transportConnectors>
<transportConnector name="default" uri="tcp://10.96.10.66:61816"/>
</transportConnectors>
</broker>
The file on node2 is the same, with the IP changed in the transport connector.
The TomEE file:
ServerUrl = tcp://10.96.10.66:61816
DataSource jdbc/teamsdb
BrokerXmlConfig = xbean:file:../conf/activemq.xml
Can anyone please help me with this?
I am trying to set up a topic at startup of ActiveMQ. We will have durable subscribers, but they are not yet available.
The startup configuration docs say to add:
<destinations>
<queue physicalName="FOO.BAR" />
<topic physicalName="SOME.TOPIC" />
</destinations>
I have added this to activemq.xml, but no luck: no topic is created at startup of ActiveMQ. We are running 5.7.
Ideas?
EDIT:
I am trying to set up a topic on startup of ActiveMQ. When ActiveMQ is restarted (or shut down and started), topics are deleted because they exist only in memory. I want to add a topic in the XML configuration so it is created on the fly when AMQ is started; that way our ESB can reach it directly and start working. The ESB will be a durable subscriber, but that is still being implemented. The documentation says to add the rows above to the XML config, but I have had no luck with that: no topic is created upon start.
So do I just add them wherever?
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:amq="http://activemq.apache.org/schema/core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<!-- Like here? -->
<destinations>
<queue physicalName="FOO.BAR" />
<topic physicalName="SOME.TOPIC" />
</destinations>
<!--
For better performances use VM cursor and small memory limit.
For more information, see:
http://activemq.apache.org/message-cursors.html
Also, if your producer is "hanging", it's probably due to producer flow control.
For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
<!-- Use VM cursor for better latency
For more information, see:
http://activemq.apache.org/message-cursors.html
<pendingQueuePolicy>
<vmQueueCursor/>
</pendingQueuePolicy>
-->
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
If using ActiveMQ embedded - the following limits could safely be used:
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="20 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="1 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="100 mb"/>
</tempUsage>
</systemUsage>
</systemUsage>
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="64 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
/Ziggy
I just dropped those into a vanilla 5.7 AMQ installation (on macOS) and I see both the queue and the topic via the web console...
You should try again with a clean install of AMQ to try to narrow the issue down.
My own solution works. Though in our Linux environment we had more than one instance: one under /user/share and one under /home/activemq/. So it worked once I edited the correct file.
Thank you for your efforts.
Put your destinations inside a <destinations> tag, like
<destinations>
<queue physicalName="FOO.BAR" />
<topic physicalName="SOME.TOPIC" />
</destinations>
inside the <broker></broker> tags.