Identify master broker using JMX in ActiveMQ or Fuse AMQ?

Is it possible to identify the "master" broker in an ActiveMQ/Fuse AMQ master/slave configuration, using JMX, or perhaps a different mechanism? We're creating a dashboard and want to show visually which server is actively handling traffic.

Figured it out :-)
ObjectName: org.apache.activemq:type=Broker,brokerName=amq
AttributeName: Slave
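For a dashboard, that attribute can be polled over JMX from Java. A minimal sketch, assuming the default JMX port 1099 and brokerName=amq (adjust both to your setup):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CheckSlave {
    public static void main(String[] args) throws Exception {
        // JMX URL and brokerName are assumptions; use the values from your broker
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName broker = new ObjectName("org.apache.activemq:type=Broker,brokerName=amq");
            Boolean slave = (Boolean) connection.getAttribute(broker, "Slave");
            System.out.println("Slave = " + slave + (slave ? " (standby)" : " (this broker is the master)"));
        }
    }
}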

I have two nodes. They are configured in "Shared File System Master Slave".
JMX shows both nodes as Slave=false.
So I checked the log file to see which broker holds the lock on the shared file system.
Database /dir1/activemq-db/lock is locked by another server. This broker is now in slave mode waiting a lock to be acquired | org.apache.activemq.store.SharedFileLocker | main

Related

ActiveMQ: Network connectors created via JMX are not persistent

One can create network connectors to exchange messages between two brokers in one of two ways:
Editing conf/activemq.xml and adding network connectors inside <networkConnectors></networkConnectors>
Using the JMX API to add them programmatically via a BrokerViewMBean
A network connector created via JMX is not persistent, i.e. it is gone after a broker restart. Is this normal? Is there a way to create NCs using JMX that survive a broker restart?
The connections created via JMX are temporary and are not written into the ActiveMQ configuration. You can view them as devops connections, useful for testing connectivity or solving a one-off messaging problem. For a permanent solution you need to edit the ActiveMQ configuration file and add the desired connections there, so that they are recreated on each start.
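For completeness, here is a sketch of the non-persistent JMX approach via a BrokerViewMBean proxy; the JMX URL, broker name, and target address are assumptions, so adjust them to your environment:

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.BrokerViewMBean;

public class AddConnector {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName brokerName = new ObjectName("org.apache.activemq:type=Broker,brokerName=amq");
            BrokerViewMBean broker = MBeanServerInvocationHandler.newProxyInstance(
                    connection, brokerName, BrokerViewMBean.class, true);
            // This network connector lives only until the broker restarts
            broker.addNetworkConnector("static:(tcp://otherbroker:61616)");
        }
    }
}

As described above, any connector added this way disappears on restart; only connectors declared in activemq.xml survive.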

ActiveMQ takes a long time to failover

I have 3 ActiveMQ brokers in a networked Shared File System (GlusterFS) / Master Slave configuration - all in VMs.
If the master fails the client should failover to the new master.
The issue I have is that the connection to the new master takes about 50 seconds.
Is that reasonable?
How to improve it?
My client connection looks like this
failover:(tcp://a1:61616?connectionTimeout=1000,tcp://a2:61616?connectionTimeout=1000,tcp://a3:61616?connectionTimeout=1000)?randomize=false&maxReconnectDelay=10000&backup=true
Also, when I disconnect the master by unplugging its network cable, it stops, throws an exception regarding KahaDB (which is on GlusterFS), and needs to be restarted.
Is there a workaround for this behavior so the master broker auto-restarts, or reconnects automatically once the network comes back?
The failover time depends on how long the underlying file system takes to release the file lock.
In your case, the NFS cluster waits 50s to detect that the first node is lost and release the lock on the KahaDB file, which can then be taken by the second node.
You can tune this delay with the NFSD_V4_GRACE and NFSD_V4_LEASE parameters in the NFS server configuration file (/etc/sysconfig/nfs on RedHat/CentOS systems).
You can also tune the KahaDB lockKeepAlivePeriod, see http://activemq.apache.org/pluggable-storage-lockers.html
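As a sketch of the last point, the locker settings can also be set programmatically on an embedded broker; the values below are illustrative, and the corresponding activemq.xml attributes are lockKeepAlivePeriod on kahaDB and lockAcquireSleepInterval on shared-file-locker (see the linked page):

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.SharedFileLocker;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class SharedStoreBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("/dir1/activemq-db")); // shared GlusterFS/NFS directory
        kahaDB.setLockKeepAlivePeriod(5000);                // master re-asserts the lock every 5s
        SharedFileLocker locker = new SharedFileLocker();
        locker.setLockAcquireSleepInterval(10000);          // slave retries the lock every 10s
        kahaDB.setLocker(locker);
        broker.setPersistenceAdapter(kahaDB);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}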

ActiveMQ activemq-admin

How do I use "activemq-admin" to view the list of queues and the number of messages in each queue?
I read through the tutorial: http://activemq.apache.org/activemq-command-line-tools-reference.html
but didn't find a working solution.
Also, the web console on the slave machine does not work; the web console always seems to run on the master machine (in the master/slave structure).
I just want to test that if I send messages to queues on the master, the slave gets updated,
so I am trying to use activemq-admin.
The way it works is that the slave is waiting to acquire a lock on the DB (KahaDB by default), so you will not be able to inspect the slave while it waits. Bring down the master and the slave will become the master broker; you should then be able to see all the queues and the messages dropped into them (assuming you are using persistence).
You can use JMX, the web console, or do it programmatically, as you can find here. The easiest solution, I think, is to use the web console, like here.
I can't understand why the web console isn't accessible. Check your ActiveMQ config XML.
You can also connect via JMX with a URL like:
service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
Go here for more information.
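If activemq-admin does not work out, a small JMX client can list the queues and their message counts directly; this sketch assumes ActiveMQ 5.8+ MBean naming, brokerName=amq, and the JMX URL shown above:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListQueues {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Match every queue MBean on the broker (brokerName is an assumption)
            ObjectName queues = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=amq,destinationType=Queue,destinationName=*");
            for (ObjectName queue : connection.queryNames(queues, null)) {
                Long size = (Long) connection.getAttribute(queue, "QueueSize");
                System.out.println(queue.getKeyProperty("destinationName") + " : " + size + " messages");
            }
        }
    }
}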

ActiveMQ forwarding bridge with failover

Here is what I try to achieve with ActiveMQ:
I'd like to have 2 clusters of brokers: clusterA and clusterB. Data between these 2 clusters should be mirrored. So, when clusterA receives a message it will be stored at storageA and also this message should be forwarded to clusterB (if there is such demand) and stored in storageB. On the other hand if clusterB receives a message it should be forwarded to clusterA.
I'm wondering whether a config like this is considered valid for the setup described above:
<networkConnectors>
  <networkConnector
    uri="static:(failover:(tcp://clusterB_broker1:port,tcp://clusterB_broker2:port,tcp://clusterB_broker3:port))"
    name="bridge"
    duplex="true"
    conduitSubscriptions="true"
    decreaseNetworkConsumerPriority="false"/>
</networkConnectors>
This is a valid configuration. It indicates (assuming that all ClusterA brokers are configured this way) that brokers in ClusterA will store and forward first to clusterB_broker1, and if it is down will instead store and forward to clusterB_broker2, and then to clusterB_broker3 if clusterB_broker2 is down. But depending on your intra-cluster broker configuration, it is not going to do what you want it to.
The broker configurations must be set up for failover themselves or else you will lose messages when clusterB_broker1 goes down. If clusterB brokers are not working together as described below, then when clusterB_broker1 goes down, any messages sent to it will not be present or accessible on the other clusterB brokers. New messages will forward to them.
How to do failover within the cluster depends on your ActiveMQ version.
The latest version (5.9.0) supports 3 failover (or master/slave) cluster configurations. For quick reference, they are:
Shared File System Master Slave
JDBC Master Slave
Replicated LevelDB Store
Earlier versions supported a master/slave configuration that had one master and one slave node where messages were forwarded to the slave broker. This setup was not well maintained, had bugs, and has been removed from ActiveMQ.

ActiveMQ failover protocol not reconnecting to master after restarting

I am using ActiveMQ version 5.4 and I have a pure master slave configuration. My slave is configured so that it starts its network transport connectors in the event of a failure. My clients are configured using the failover protocol, just like the docs say:
failover://(tcp://masterhost:61616,tcp://slavehost:61616)?randomize=false
When my master dies, the clients fail over to the slave perfectly. The problem is that after I recover (i.e. stop the slave, copy over the data, restart the master, then restart the slave), the clients keep trying to connect to the slave (which does not have any open network connectors at that point). Thus, the clients never reconnect to the master after restarting it. Is this how it's supposed to work?
I've seen this as well. If you're using the PooledConnectionFactory, set an expiry timeout on the pooled connections via setExpiryTimeout. The API documentation here suggests that this will force reconnection to the master broker:
allow connections to expire, irrespective of load or idle time. This is useful with failover to force a reconnect from the pool, to reestablish load balancing or use of the master post recovery
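A sketch of that suggestion, assuming the activemq-pool PooledConnectionFactory; the failover URL is the one from the question and the 30-second expiry is an arbitrary example value:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledFailoverFactory {
    public static PooledConnectionFactory create() {
        ActiveMQConnectionFactory amqFactory = new ActiveMQConnectionFactory(
                "failover://(tcp://masterhost:61616,tcp://slavehost:61616)?randomize=false");
        PooledConnectionFactory pooled = new PooledConnectionFactory(amqFactory);
        // Expire pooled connections after 30s so they are re-created
        // and failover gets a chance to move back to the recovered master.
        pooled.setExpiryTimeout(30000L);
        return pooled;
    }
}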