I have a network of 3 brokers running, but sometimes the brokers fail in a rather unique and annoying way: they still accept connections but otherwise stop communicating with clients (even those already connected).
For new connections created in Java this means:
Connection con = factory.createConnection(); // method returns, the connection is created
con.createSession(false, Session.AUTO_ACKNOWLEDGE); // never returns
On the server side there are no logged exceptions, even when running in debug mode.
Do you have any idea what is happening here?
Is there any log message I can look for?
EDIT:
Some additional info:
http://pastebin.com/9iztG67D - XML config file
Every node is a master with a connected slave (pure master/slave).
Client URI: failover:(tcp://serverA:61616,tcp://serverB:61616,tcp://serverC:61616,tcp://serverA-Slave:61616,tcp://serverB-Slave:61616,tcp://serverC-Slave:61616)?randomize=false
Without knowing more about how your brokers are configured and what kind of queue/topic structure you are using, it is difficult to tell exactly what is happening. But based on the general behavior you describe, this sounds like flow control at some level.
See the ActiveMQ documentation on producer flow control for a more detailed explanation. Basically, if you have a destination that is filling up and hitting its memory limit because of a slow or hung consumer, the broker can give the appearance of "locking up" when new producers attempt to send messages.
Brokers can also lock up in strange ways when system memory limits are reached. Check your configured values for:
<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage limit="64 mb"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="10 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
The easiest way to identify this kind of problem is to use a JMX client such as jconsole: connect to the brokers and inspect the current memory usage in relation to the limits, both for individual destinations and for the broker overall.
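The same check can be scripted. Here is a minimal sketch using the JDK's JMX client API; the service URL, port 1099, and the broker name "serverA" are assumptions that you must adjust to your managementContext settings, and the MBean name uses the pre-5.8 naming scheme (newer brokers use type=Broker,brokerName=... instead):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerUsageCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://serverA:1099/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            ObjectName broker = new ObjectName(
                    "org.apache.activemq:BrokerName=serverA,Type=Broker");
            // Current usage as a percentage of the systemUsage limits shown above
            System.out.println("MemoryPercentUsage = "
                    + mbs.getAttribute(broker, "MemoryPercentUsage"));
            System.out.println("StorePercentUsage = "
                    + mbs.getAttribute(broker, "StorePercentUsage"));
            System.out.println("TempPercentUsage = "
                    + mbs.getAttribute(broker, "TempPercentUsage"));
        } finally {
            jmxc.close();
        }
    }
}

A destination sitting at 100% memory usage while the broker still accepts connections is the classic flow-control signature.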
Since you haven't shown what the client URI looks like, it's not easy to say what the exact problem is. The client hang that you described can occur if you are using failover on the client URI and the client is unable to connect to the broker. The call to createSession trips a condition that causes the client to begin its exchange of protocol commands with the server, and if the connection cannot be made, the call will block until the failover transport can create one.
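If that is what is happening, you can make the failover transport give up instead of blocking forever, so the problem at least surfaces as an exception. A sketch, assuming the standard failover options startupMaxReconnectAttempts/maxReconnectAttempts and the host names from the question:

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverProbe {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://serverA:61616,tcp://serverB:61616,tcp://serverC:61616)"
                + "?randomize=false&startupMaxReconnectAttempts=3&maxReconnectAttempts=3");
        Connection con = factory.createConnection(); // still returns immediately
        // With bounded reconnect attempts this call now throws a JMSException
        // when no broker responds, instead of blocking indefinitely.
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        System.out.println("session created: " + session);
        con.close();
    }
}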
We're using Apache James 3.0-beta4, which embeds ActiveMQ 5.5.0 as a FIFO message queue, and sometimes messages get stuck, so we need to monitor it. Is there any way to monitor an ActiveMQ queue, such as its message size and the most recent message-id in the queue (if possible)?
In the James spring-server.xml I found this:
<amq:broker useJmx="true" persistent="true" brokerName="james"
            dataDirectory="filesystem=file://var/store/activemq/brokers"
            useShutdownHook="false" schedulerSupport="false" id="broker">
    <amq:destinationPolicy>
        <amq:policyMap>
            <amq:policyEntries>
                <!-- Support priority handling of messages -->
                <!-- http://activemq.apache.org/how-can-i-support-priority-queues.html -->
                <amq:policyEntry queue=">" prioritizedMessages="true"/>
            </amq:policyEntries>
        </amq:policyMap>
    </amq:destinationPolicy>
    <amq:managementContext>
        <amq:managementContext createConnector="false"/>
    </amq:managementContext>
    <amq:persistenceAdapter>
        <amq:amqPersistenceAdapter/>
    </amq:persistenceAdapter>
    <amq:plugins>
        <amq:statisticsBrokerPlugin/>
    </amq:plugins>
    <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:0"/>
    </amq:transportConnectors>
</amq:broker>
Also, one old part from the README:
- Telnet Management has been removed in favor of JMX with client shell
- More metrics counters available via JMX
...
* Monitor via JMX (launch any JMX client and connect to URL=service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi)
which leaves it unclear how this is actually meant to be used.
This is part of a bigger "monolith" project, which is now being recreated as microservices but still needs to be supported ;) All was fine until mid-March.
It looks like ActiveMQ management and monitoring is not possible here because JMX is effectively disabled: the config above sets createConnector="false" on the management context, so the broker registers its MBeans but never starts a remote JMX connector. Setting createConnector="true" (or enabling remote JMX on the JVM itself, which is what the README's port-9999 URL assumes) is the prerequisite for any monitoring.
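Once JMX is reachable, queue depth can be read with a few lines of code. A sketch, assuming the README's service URL, the pre-5.8 MBean naming that ActiveMQ 5.5.0 uses, and brokerName="james" from the config above; the queue name "spool" is a placeholder:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        // URL from the James README above
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:BrokerName=james,Type=Queue,Destination=spool");
            System.out.println("QueueSize = " + mbs.getAttribute(queue, "QueueSize"));
            System.out.println("EnqueueCount = " + mbs.getAttribute(queue, "EnqueueCount"));
        } finally {
            jmxc.close();
        }
    }
}

There is no dedicated "most recent message-id" attribute on the queue MBean, but its browse() operation returns the queued messages, including their JMSMessageID headers.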
I'm new to ActiveMQ.
I have a requirement to create a local ActiveMQ broker and connect it to a remote IBM MQ.
Can anyone help me with how to connect to the distributed queue manager and queues?
You can use Apache Camel to bridge between the two providers. The routes can be run from within the broker, pulling from the ActiveMQ queue and pushing to the WMQ queue (or the other way around). The concept is much like that of a channel in WMQ, pulling from a transmit queue and pushing to the appropriate destination on the remote queue manager.
Assuming you are using WMQ V7+ for all QMgrs and clients, it is simply a matter of learning how to set up the route and configure the connection factories. With older versions of WMQ you may also have to understand how to deal with RFH2 headers if native WMQ clients are the consumers.
The simplest route, configured in Spring, would look like this:
<route id="amq-to-wmq">
    <from uri="amq:YOUR.QUEUE"/>
    <to uri="wmq:YOUR.QUEUE"/>
</route>
The "wmq" and "amq" would point to beans where the JMS components are configured. This is where you would set up you connection factories to each provider and how the clients behave (transacted or not for example), so I'll hold off on giving an example on that.
This would go in the camel.xml (or whatever you name it) and get imported from your broker's XML. ActiveMQ comes with several examples you can use to get you started using Camel JMS components. Just take a look at the default camel.xml that comes with a normal install.
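For orientation only, here is one way those two components might be wired up in plain Java rather than Spring; the broker URL, host, port, queue manager, and channel values are all placeholders, and the WMQ classes come from the IBM MQ client jars:

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;
import com.ibm.mq.jms.MQQueueConnectionFactory;

public class AmqToWmqBridge {
    public static void main(String[] args) throws Exception {
        CamelContext camel = new DefaultCamelContext();

        // "amq" component: the local ActiveMQ broker
        camel.addComponent("amq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        // "wmq" component: the remote IBM MQ queue manager
        MQQueueConnectionFactory wmqCf = new MQQueueConnectionFactory();
        wmqCf.setHostName("wmqhost");
        wmqCf.setPort(1414);
        wmqCf.setQueueManager("QM1");
        wmqCf.setChannel("SYSTEM.DEF.SVRCONN");
        wmqCf.setTransportType(1); // 1 = TCP client mode (WMQ_CM_CLIENT)
        camel.addComponent("wmq", JmsComponent.jmsComponentAutoAcknowledge(wmqCf));

        // The same route as the Spring XML above
        camel.addRoutes(new RouteBuilder() {
            public void configure() {
                from("amq:YOUR.QUEUE").to("wmq:YOUR.QUEUE");
            }
        });
        camel.start();
    }
}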
Here is what I am trying to achieve with ActiveMQ:
I'd like to have two clusters of brokers, clusterA and clusterB, with data mirrored between them. When clusterA receives a message it is stored in storageA, and the message is also forwarded to clusterB (if there is demand) and stored in storageB. Conversely, when clusterB receives a message it should be forwarded to clusterA.
I'm wondering whether a config like this is valid for what I described above:
<networkConnectors>
    <networkConnector
        uri="static:(failover:(tcp://clusterB_broker1:port,tcp://clusterB_broker2:port,tcp://clusterB_broker3:port))"
        name="bridge"
        duplex="true"
        conduitSubscriptions="true"
        decreaseNetworkConsumerPriority="false"/>
</networkConnectors>
This is a valid configuration. It means (assuming that all clusterA brokers are configured this way) that brokers in clusterA will store and forward first to clusterB_broker1; if that broker is down they will instead store and forward to clusterB_broker2, and then to clusterB_broker3 if clusterB_broker2 is also down. But depending on your intra-cluster broker configuration, it may not do what you want.
The brokers within each cluster must themselves be set up for failover, or else you will lose messages when clusterB_broker1 goes down: if the clusterB brokers are not working together as described below, any messages already sent to clusterB_broker1 will not be present or accessible on the other clusterB brokers, and only new messages will be forwarded to them.
How to do failover within the cluster depends on your ActiveMQ version.
The latest version (5.9.0) supports 3 failover (or master/slave) cluster configurations. For quick reference, they are:
Shared File System Master Slave
JDBC Master Slave
Replicated LevelDB Store
Earlier versions supported a "pure" master/slave configuration with one master and one slave node, where messages were forwarded to the slave broker. That setup was not well maintained, had bugs, and has been removed from ActiveMQ.
I am new to ActiveMQ. I have configured two ActiveMQ servers and use them with the failover transport. They work fine, in the sense that if one ActiveMQ instance goes down, the other picks up the queues. My problem is that when the main server comes back up, clients do not move back to it. Is there any configuration or protocol that makes consumers return to the main server once it is up again?
Currently my configuration is :
<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" rebalanceClusterClients="true"/>
</transportConnectors>
<networkConnectors>
    <networkConnector uri="static:(tcp://192.168.0.122:61616)"
                      networkTTL="3"
                      prefetchSize="1"
                      decreaseNetworkConsumerPriority="true"/>
</networkConnectors>
and my connection URI is:
failover:(tcp://${ipaddress.master}:61616,tcp://${ipaddress.backup}:61616)?randomize=false
I also want to send a mail whenever failover occurs, so that I know when an ActiveMQ instance is down.
What you have configured there is not a true HA deployment but a network of brokers. If you have two brokers configured in a network, each has its own message store, which at any time contains only a partial set of the messages (see how networks of brokers work).
The behaviour you likely expect is that if one broker falls over, the other takes its place, resuming from where the failed one left off (with all of the undelivered messages that the failed broker held). For that you need a (preferably shared-storage) master/slave configuration, as sketched below.
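For reference, a minimal sketch of a shared-file-system master/slave broker using the embedded-broker API; the broker name and shared directory path are placeholders, and the directory must live on storage both machines can see (NFSv4, SAN, etc.):

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class SharedStoreBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("brokerA");
        // Both brokers point at the same shared directory; whichever acquires
        // the store lock first becomes master, the other blocks as slave until
        // the master dies and releases the lock.
        KahaDBPersistenceAdapter store = new KahaDBPersistenceAdapter();
        store.setDirectory(new File("/mnt/shared/activemq/kahadb"));
        broker.setPersistenceAdapter(store);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}

With this arrangement the failover: URI on the clients behaves as you expected: the surviving broker takes over with the complete message store of the failed one.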
I have done that, and am posting the solution in case anyone else has the same problem.
This feature is available in ActiveMQ 5.6: priorityBackup=true in the connection URI is the key that tells the consumer to go back to the primary node whenever it is available.
My new connection URI is:
failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true
See the failover transport reference for more details.
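As for the mail-on-failover part of the question: the ActiveMQ client can report transport interruptions to a listener, which is a reasonable hook for alerting. A sketch; sendAlertMail is a hypothetical placeholder for whatever notification mechanism you use:

import java.io.IOException;
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.transport.TransportListener;

public class FailoverAlerting {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://master:61616,tcp://backup:61616)?randomize=false&priorityBackup=true");
        Connection con = factory.createConnection();
        ((ActiveMQConnection) con).addTransportListener(new TransportListener() {
            public void onCommand(Object command) { /* every inbound command; usually ignored */ }
            public void onException(IOException error) {
                sendAlertMail("ActiveMQ transport error: " + error);
            }
            public void transportInterupted() { // (sic - the method name is misspelled in the API)
                sendAlertMail("ActiveMQ connection lost, failover in progress");
            }
            public void transportResumed() {
                sendAlertMail("ActiveMQ connection re-established");
            }
        });
        con.start();
    }

    static void sendAlertMail(String message) {
        // hypothetical placeholder - wire up JavaMail or any alerting system here
        System.out.println(message);
    }
}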
I have configured a network of brokers with the topology below:
Producer(P1) connected to Broker(B1) and Producer(P2) connected to Broker(B2)
Broker(B1) and Broker(B2) are connected as a network of brokers and are load balancing
Consumer(C1) connected to Broker(B1) and Consumer(C2) connected to Broker(B2)
Clients are configured to use failover as follows:
Consumer-1 = failover:(tcp://localhost:61616,tcp://localhost:61615)?randomize=false
Consumer-2 = failover:(tcp://localhost:61615,tcp://localhost:61616)?randomize=false
Once Channel-2 goes down, P2 and C2 shift to Channel-1, which is the desired failover behaviour.
What I want to understand is the behaviour when Channel-2 comes back.
I have noticed that only Channel-1 continues to serve all the connections, even after Channel-2 has recovered, so load balancing between the channels is lost.
I want to know whether it is possible that, once Channel-2 is back, load balancing resumes automatically between the channels, with Producer-2 and Consumer-2 shifting back to Channel-2, giving full load balancing and full failover.
I came across the article 'Combining Fault Tolerance with Load Balancing' at http://fusesource.com/docs/broker/5.4/clustering/index.html - is this the recommended way of combining fault tolerance and load balancing?
Regards,
-Amber
On both of your brokers, you need to set up your transportConnector to enable updateClusterClients and rebalanceClusterClients:
<transportConnectors>
    <transportConnector name="tcp-connector" uri="tcp://192.168.0.23:61616" updateClusterClients="true" rebalanceClusterClients="true"/>
</transportConnectors>
Specifically, you want rebalanceClusterClients. The docs at http://activemq.apache.org/failover-transport-reference.html state that:
if true, connected clients will be asked to rebalance across a cluster
of brokers when a new broker joins the network of brokers
You must be using ActiveMQ 5.4 or greater to have these options available.
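Note that these are broker-side settings which only affect clients connecting through the failover transport, since rebalancing works by instructing the failover client to reconnect elsewhere. A typical client URI (host addresses are placeholders) would be:

failover:(tcp://192.168.0.23:61616,tcp://192.168.0.24:61616)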
As an answer to your follow-up question:
"Is there a way of logging the broker URI, as discussed in the article?"
In order to show which client is connected to which broker, modify the client's Log4j configuration as follows:
<log4j:configuration debug="true" xmlns:log4j="http://jakarta.apache.org/log4j/">
    ...
    <logger name="org.apache.activemq.transport.failover.FailoverTransport">
        <level value="debug"/>
    </logger>
    ...
</log4j:configuration>