ActiveMQ master/slave cluster with ZooKeeper

I have my ActiveMQ connected to ZooKeeper (a cluster of 5 ZooKeeper nodes). In the config file "activemq.xml" I have:
<persistenceAdapter>
  <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:0"
      zkAddress="blablabla:2181"
      zkPassword="password"
      zkPath="/activemq/leveldb-stores"
      hostname="blabla"
      />
</persistenceAdapter>
Now I have activeMQ-server1 started and it successfully becomes the master; activeMQ-server2, with the same "activemq.xml" config file, successfully becomes a slave; activeMQ-server3, with the same "activemq.xml" config file, also becomes a slave but kicks out activeMQ-server2 (which starts to give connection errors).
I thought I put the wrong number for replicas, so I changed all 3 config files to replicas="4", but it still doesn't work.
What would be the correct replicas number with 3 ActiveMQ servers, or am I wrong about some other part? (I only have 1 ZooKeeper listed in the config, since the 5 ZooKeepers can connect to each other and already form a cluster.)
Thanks :)

You need to list all ZooKeeper servers in the zkAddress attribute, e.g. zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181" (taken from the ActiveMQ Replicated LevelDB Store documentation).

The replicas value is the number of ActiveMQ nodes, not the number of ZooKeeper nodes. So if you have 3 AMQ nodes, set replicas="3", not more. From http://activemq.apache.org/replicated-leveldb-store.html:
The replicas property:
The number of nodes that will exist in the cluster.
At least (replicas/2)+1 nodes must be online to avoid a service outage.
Another thing: all AMQ nodes in the cluster must use the same broker name (MyBroker below):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="MyBroker" dataDirectory="${activemq.data}">
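Putting both answers together, a hedged sketch of what the corrected configuration could look like for 3 brokers with all 5 ZooKeeper servers listed (host names are placeholders; every broker keeps the same brokerName and zkPath):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="MyBroker" dataDirectory="${activemq.data}">
  [...]
  <persistenceAdapter>
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181,zoo4.example.org:2181,zoo5.example.org:2181"
        zkPassword="password"
        zkPath="/activemq/leveldb-stores"
        hostname="broker1.example.org"/>
  </persistenceAdapter>
  [...]
</broker>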

Related

ActiveMQ doesn't propagate topics on failover reconnection

I'm attempting to use a network of brokers that bridges two LANs over a duplex WAN connector:
There are actually many subscribers in our setup, each connecting to a different "Broker A", if that makes sense. All of the Broker A instances have their own connections to a single "Broker B".
Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
When I initially bring everything online, regardless of the order in which I bring things online, communication between the publisher and subscriber works flawlessly. I've had the system up-and-running for weeks at a time without issue.
What I've observed is that if broker C is restarted, no new topics that show up in broker B ever appear in broker C. New topics are still appearing in broker B as they are created by the subscriber(s). Neither existing nor new topics ever propagate across the WAN to broker C. I've verified this using jconsole.
If I restart broker B, the problem goes away immediately. The topics contained in broker B (according to jconsole) are the same as they were prior to restart, but now they've magically appeared in C.
Brokers B and C have the same configuration (shown below). The only difference is that B creates a duplex network connector to C using the following code:
final NetworkConnector wanNC = new DiscoveryNetworkConnector(
new URI(String.format("static:(failover:(tcp://%s:%d))", parentNode, port)));
wanNC.setCheckDuplicateMessagesOnDuplex(true);
wanNC.setDecreaseNetworkConsumerPriority(true);
wanNC.setDuplex(true);
wanNC.setName(NetworkUtils.getHostName());
wanNC.setNetworkTTL(10);
wanNC.setSuppressDuplicateTopicSubscriptions(false);
broker.addNetworkConnector(wanNC);
broker.xml:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" />
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="${broker.id}" start="false"
offlineDurableSubscriberTimeout="5000" offlineDurableSubscriberTaskSchedule="5000"
persistent="false" useJmx="true" schedulePeriodForDestinationPurge="86400000">
[...]
<networkConnectors>
<networkConnector name="z-broker-${broker.id}-x-nc"
decreaseNetworkConsumerPriority="true"
networkTTL="10"
uri="multicast://225.5.5.5:6555?group=TO_X">
<excludedDestinations>
<topic physicalName="X.A" />
</excludedDestinations>
</networkConnector>
<networkConnector name="z-broker-${broker.id}-y-nc"
decreaseNetworkConsumerPriority="true"
networkTTL="10"
uri="multicast://225.5.5.5:6555?group=TO_Y">
<excludedDestinations>
<topic physicalName="X.B.>" />
</excludedDestinations>
</networkConnector>
</networkConnectors>
<transportConnectors>
<transportConnector name="openwire"
uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&wireFormat.maxFrameSize=104857600"
discoveryUri="multicast://225.5.5.5:6555?group=TO_Z" />
</transportConnectors>
</broker>
</beans>
Why don't topics from broker B (existing or new) ever show up in broker C?
Why does restarting broker B solve the issue immediately?
Apparently the trick was changing the network connector URI from
static:(failover:(tcp://<ip>:<port>))
to
static:(tcp://<ip>:<port>)
I didn't need the failover transport anyway, since the connection is intended as a network bridge and there's a single remote address.
For whatever reason, using failover prevented topics from propagating on reconnect.
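For reference, with that change the connector creation from the question would only differ in the URI string; a minimal sketch (parentNode and port are the same placeholders as above):
final NetworkConnector wanNC = new DiscoveryNetworkConnector(
        new URI(String.format("static:(tcp://%s:%d)", parentNode, port))); // no failover: wrapper
wanNC.setDuplex(true);
// ... other settings unchanged from the snippet above ...
broker.addNetworkConnector(wanNC);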

I can't connect to 8161/admin when ActiveMQ uses replicated LevelDB with ZooKeeper

If I configure activemq.xml as follows, I can't connect to 123.57.167.211:8161. Why?
<replicatedLevelDB
    directory="${activemq.data}/leveldb"
    replicas="3"
    bind="tcp://0.0.0.0:0"
    zkAddress="10.172.5.40:2181,10.170.253.39:2181,10.165.112.136:2181"
    zkPassword="password"
    zkPath="/activemq40/leveldb-stores"
    sync="local_disk"
    hostname="123.57.167.211"/>
when replicas="3", you need to start at least 2 activemqs for the cluster to run.
One of them will be the master, you can connect only to the master's 8161, as only the master is actually running.
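That follows from the quorum rule quoted in the first question above: at least (replicas/2)+1 nodes must be online, and with replicas="3" that is (3/2)+1 = 2 (integer division), so a single broker can never form a quorum on its own.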

Why does an ActiveMQ cluster fail with "server null" when the Zookeeper master node goes offline?

I have encountered an issue with ActiveMQ where the entire cluster will fail when the master Zookeeper node goes offline.
We have a 3-node ActiveMQ cluster setup in our development environment. Each node has ActiveMQ 5.12.0 and Zookeeper 3.4.6 (*note, we have done some testing with Zookeeper 3.4.7, but this has failed to resolve the issue. Time constraints have so far prevented us from testing ActiveMQ 5.13).
What we have found is that when we stop the master ZooKeeper process (via the "end process tree" command in Task Manager), the remaining two ZooKeeper nodes continue to function as normal. Sometimes the ActiveMQ cluster is able to handle this, but sometimes it does not.
When the cluster fails, we typically see this in the ActiveMQ log:
2015-12-18 09:08:45,157 | WARN | Too many cluster members are connected. Expected at most 3 members but there are 4 connected. | org.apache.activemq.leveldb.replicated.MasterElector | WrapperSimpleAppMain-EventThread
...
...
2015-12-18 09:27:09,722 | WARN | Session 0x351b43b4a560016 for server null, unexpected error, closing socket connection and attempting reconnect | org.apache.zookeeper.ClientCnxn | WrapperSimpleAppMain-SendThread(192.168.0.10:2181)
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.7.0_79]
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)[:1.7.0_79]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
We were immediately concerned by the fact that (A) ActiveMQ seems to think there are four members in the cluster when it is only configured with 3, and (B) when the exception is raised, the server appears to be null. We then increased ActiveMQ's logging level to DEBUG in order to display the list of members:
2015-12-18 09:33:04,236 | DEBUG | ZooKeeper group changed: Map(localhost -> ListBuffer((0000000156,{"id":"localhost","container":null,"address":null,"position":-1,"weight":5,"elected":null}), (0000000157,{"id":"localhost","container":null,"address":null,"position":-1,"weight":1,"elected":null}), (0000000158,{"id":"localhost","container":null,"address":"tcp://192.168.0.11:61619","position":-1,"weight":10,"elected":null}), (0000000159,{"id":"localhost","container":null,"address":null,"position":-1,"weight":10,"elected":null}))) | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[localhost] Task-14
Can anyone suggest why this may be happening and/or suggest a way to resolve this? Our configurations are shown below:
ZooKeeper:
tickTime=2000
dataDir=C:\\zookeeper-3.4.7\\data
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.0.10:2888:3888
server.2=192.168.0.11:2888:3888
server.3=192.168.0.12:2888:3888
ActiveMQ (server.1):
<persistenceAdapter>
<replicatedLevelDB
directory="activemq-data"
replicas="3"
bind="tcp://0.0.0.0:61619"
zkAddress="192.168.0.11:2181,192.168.0.10:2181,192.168.0.12:2181"
zkPath="/activemq/leveldb-stores"
hostname="192.168.0.10"
weight="5"/>
<!-- server.2 has a weight of 10, server.3 has a weight of 1 -->
</persistenceAdapter>

How to run two instances of Apache ActiveMQ in one system?

I am using apache-activemq-5.11.1, which is the stable version and runs on JDK 7 (major version 51.0); I am using JDK 7 Update 80. I got the following error when I ran it on JDK 6:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/activemq/console/Main : Unsupported major.minor version 51.0
Coming to my problem: I need two running instances of ActiveMQ on my system. I followed these steps to create the two instances:
C:\>cd \apache-activemq-5.11.1
C:\apache-activemq-5.11.1>.\bin\activemq create instance1
C:\apache-activemq-5.11.1>.\bin\activemq create instance2
I changed instance2 to a different set of port numbers, as below:
<!--EDITED: apache-activemq-5.11.1\instance2\conf\activemq.xml-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61716?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5772?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61713?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1983?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61714?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
Now I am starting instance1 and instance2 as follows:
C:\apache-activemq-5.11.1\instance1\bin>instance1 start
C:\apache-activemq-5.11.1\instance1\bin>instance2 start
The second instance I try to start gives the following KahaDB lock problem:
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#7209d9af: startup date [Thu May 07 16:16:23 IST 2015]; root of context hierarchy
INFO | PListStore:[C:\apache-activemq-5.11.1\data\localhost\tmp_storage] started
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[C:\apache-activemq-5.11.1\data\kahadb]
INFO | Database C:\apache-activemq-5.11.1\data\kahadb\lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: File 'C:\apache-activemq-5.11.1\data\kahadb\lock' could not be locked.
Please give a solution for this db lock issue.
Make a copy of your ActiveMQ installation, e.g. apache-activemq-x.xx.x to apache-activemq-x.xx.x_2.
Change the ports in apache-activemq-x.xx.x_2\conf\activemq.xml, making sure the port numbers you choose don't clash with anything else.
<!--EDIT: apache-activemq-5.11.1_2\conf\activemq.xml-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61716?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5772?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61713?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1983?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61714?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
Along with the port changes, we also have to correct the HTTP management console port in jetty.xml:
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
<!-- the default port number for the web console -->
<property name="host" value="0.0.0.0"/>
<property name="port" value="8162"/>
</bean>
In this way you can run two ActiveMQ services on one system.
Running additional ActiveMQ instances also gives you a failover option: when one instance goes down for some reason, the other instance comes up automatically, since the KahaDB lock held by the first instance is released.
For this, the ports are not changed, as we're configuring the instances for failover mode.
C:\>cd \apache-activemq-5.11.1
C:\apache-activemq-5.11.1>.\bin\activemq create instance1
C:\apache-activemq-5.11.1>.\bin\activemq create instance2
Start the instances without changing any of the configuration; when instance1 goes down for any reason, instance2 comes up.
C:\apache-activemq-5.11.1\instance1\bin>instance1 start
C:\apache-activemq-5.11.1\instance2\bin>instance2 start
I believe this is the purpose of creating multiple instances under ActiveMQ. Beyond this, some more advanced tweaks are also available for KahaDB configuration.
I followed the instructions from:
Running Multiple ActiveMQ Instances on One Machine (DZone)
It works fine on Mac (not tested on Linux).
Note: the instances must be started via instanceNumber start (the console argument/parameter is no longer valid).
I had the same KahaDB locking problem, but only on Windows, with ActiveMQ 5.13.3 and 5.14.5.
The same DZone author wrote practically the same post on his blog:
Running multiple ActiveMQ instances on one machine (Blog)
But there is an important update: you must open the instanceNumber.bat file in each instance's bin directory and add these two lines:
set ACTIVEMQ_CONF="ACTIVEMQ_HOME/instanceNumber/conf"
set ACTIVEMQ_DATA="ACTIVEMQ_HOME/instanceNumber/data"
where ACTIVEMQ_HOME represents the path of your ActiveMQ installation and instanceNumber is the instance being edited, e.g. instanceA or instanceB.
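For example, for an instance named instance1 under a Windows install in C:\apache-activemq-5.11.1, the two lines might look like this (the paths are illustrative only):
set ACTIVEMQ_CONF=C:\apache-activemq-5.11.1\instance1\conf
set ACTIVEMQ_DATA=C:\apache-activemq-5.11.1\instance1\data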
What is happening is that you have changed the port numbers correctly, but both instances you created use the same database (in this case the file-system-based KahaDB) to store their messages.
So when one instance is up and running, it holds the lock on that database, and the other ActiveMQ instance waits to acquire the lock.
Essentially this becomes a master/slave configuration.
Look at this section in activemq.xml:
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
This points to the same location for both instances.
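As an illustration, a hedged sketch of giving the second instance its own store in its activemq.xml (the directory name is hypothetical):
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb-instance2"/>
</persistenceAdapter>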
My solution is to copy the entire apache-activemq-x.xx.x folder to a different location, change the port numbers for the second instance, and run them separately.
This way you will have 2 instances of ActiveMQ running on the same machine.
Hope this helps!
Good luck!
Because of the KahaDB lock restriction, a load-balancing/fault-tolerant configuration is limited, but we can use the following kind of connection URL to make use of both ActiveMQ instances:
failover://(tcp://192.nnn.nn.nn:61616,tcp://192.nnn.nn.nn:61616)?randomize=false
randomize=true would shuffle messages between the two ActiveMQ brokers in active mode, rather than just failing over between them.
A complete reference for this can be found at the following Apache site:
http://activemq.apache.org/failover-transport-reference.html
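To illustrate how a client would typically use such a URL, a minimal sketch with the standard ActiveMQ JMS API (broker addresses are placeholders):

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // randomize=false keeps the client pinned to the first reachable broker
        // instead of load-balancing between the two.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://192.0.2.10:61616,tcp://192.0.2.11:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers/consumers on the session as usual ...
        connection.close();
    }
}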
Still, a high-availability (i.e. cluster) configuration makes things more stable for your app, although Apache could advance ActiveMQ's high availability further so that things work even more smoothly.
The current Apache ActiveMQ high-availability options are described at the following link:
http://activemq.apache.org/clustering.html
Although KahaDB has the file-lock restriction, the following alternative configurations are possible:
1) Shared File System Master/Slave - a shared file system such as a SAN (see the sketch after this list)
http://activemq.apache.org/shared-file-system-master-slave.html
2) JDBC Master/Slave - a shared database
http://activemq.apache.org/jdbc-master-slave.html
3) Replicated LevelDB Store - a ZooKeeper server
http://activemq.apache.org/replicated-leveldb-store.html
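For instance, option 1 amounts to pointing every broker's persistence adapter at the same shared directory; whichever broker obtains the file lock becomes the master and the others wait as slaves. A minimal sketch, assuming a shared mount (the path is hypothetical):
<persistenceAdapter>
<kahaDB directory="/mnt/shared-san/activemq/kahadb"/>
</persistenceAdapter>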
Over and above that, with JCA connectors on application servers such as JBoss, WebLogic, WebSphere, Geronimo, or GlassFish, ActiveMQ can be plugged in as a resource adapter. And with products like Apache Camel (Karaf) or JBoss Fuse ESB, HA and clustering of ActiveMQ can also be achieved.

Configure JMX for ActiveMQ for remote access

Can anyone give detailed steps on how to enable JMX (accessible remotely) on a newly installed 5.5.0 version?
In your activemq.xml file, you need to make sure useJmx is true on your broker element:
<broker xmlns="http://activemq.org/config/1.0" brokerName="localhost" useJmx="true">
and ensure that you have a management context:
<managementContext>
<managementContext createConnector="true" connectorPort="1099"/>
</managementContext>
From there it is just a matter of making sure you can connect over TCP to your broker on port 1099 or whatever port you specify. This doesn't work quite so straightforwardly on services like EC2 or anything that does heavy NAT'ing: http://jmsbrdy.com/monitoring-java-applications-running-on-ec2-i
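Assuming the default settings above, you should then be able to attach JConsole from another machine using ActiveMQ's standard JMX service URL (the host name is a placeholder):
jconsole service:jmx:rmi:///jndi/rmi://broker-host.example.org:1099/jmxrmi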