Using AMQ 5.9, I had a network issue that caused messages in my queue to back up. I started getting log messages like:
Usage (default:store:queue://myqueue:store) percentUsage=101% usage=6254005794, limit=6144430090, ... Stopping producer ... to prevent flooding queue ...
I have not been able to recover and start consuming messages again.
I tried going into activemq.xml to increase my max store usage:
<systemUsage>
    <systemUsage>
        <storeUsage>
            <storeUsage limit="15 gb" />
        </storeUsage>
    </systemUsage>
</systemUsage>
I also tried to turn off flow control with
<policyEntry queue=">" producerFlowControl="false"/>
But I still get the same error message.
I have the disk space. There are no settings being overridden on the command line. How can I recover and get my messages processed?
I ended up finding an ugly (yet effective) way to get around this. If you connect to ActiveMQ via JMX (e.g. with jconsole), you can navigate to the MBean org.apache.activemq.Broker and find the attribute StoreLimit. You can manually increase this value in JConsole, and ActiveMQ will resume message processing shortly thereafter.
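For anyone who would rather script that workaround than click through jconsole, here is a minimal sketch using the standard JMX remote API. The JMX URL, port, broker ObjectName, and the choice to double the limit are assumptions for illustration; adjust them to your broker (StoreLimit is the attribute named above).

import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RaiseStoreLimit {
    public static void main(String[] args) throws Exception {
        // JMX URL, port and brokerName are assumptions -- match them to your broker.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            ObjectName broker = new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
            long current = (Long) mbs.getAttribute(broker, "StoreLimit");
            // Double the store limit; producers should be unblocked shortly after.
            mbs.setAttribute(broker, new Attribute("StoreLimit", current * 2));
            System.out.println("StoreLimit raised from " + current + " to " + (current * 2));
        } finally {
            jmxc.close();
        }
    }
}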
Problem statement: There are two Queues in two different brokers. Each Queue has one Consumer to it. The producer is dropping messages on the first Queue. We would want to send a copy of message to the second Queue. For visualization
Producer
|
Broker1 --> Queue1 --> Consumer1
| (copy)
Broker2 --> Queue2 --> Consumer2 (consumes same message as Consumer1 but is independent of Consumer1)
The requirements are:
Only one queue should be created in each broker. I have achieved the above with four queues, but I am looking for a more optimized solution.
Preferably, no topics should be used.
Everything should be done only through ActiveMQ-provided configuration.
What I have done so far:
I managed to do the above with four queues.
In Broker1, Queue1 forwards a copy of each message to a virtual destination queue, and a network connector sends the messages in that virtual destination to Broker2.
<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <compositeQueue name="Queue1" forwardOnly="false">
                <forwardTo>
                    <queue physicalName="IntermediateQueue"/>
                </forwardTo>
            </compositeQueue>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>
<networkConnectors>
    <networkConnector
            name="Q:broker1->broker2"
            uri="static:(tcp://localhost:31616)"
            duplex="false"
            staticBridge="true">
        <staticallyIncludedDestinations>
            <queue physicalName="IntermediateQueue"/>
        </staticallyIncludedDestinations>
    </networkConnector>
</networkConnectors>
In Broker2, all messages received on the intermediate queue are forwarded to the actual destination queue:
<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <compositeQueue name="IntermediateQueue">
                <forwardTo>
                    <queue physicalName="FinalDestinationQueue" />
                </forwardTo>
            </compositeQueue>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>
I'd appreciate any help, as going through the ActiveMQ documentation and forums didn't yield an optimized answer to this problem.
You are essentially re-creating pub/sub and then adding a transmission-queue pattern for multi-broker integration. There are valid use cases for doing this, and your approach is valid and within the intended design of composite destinations and network connectors. The trade-off in this approach is the heavy administration and configuration management that is required.
I understand you prefer not to use topics. However, you may consider looking at Virtual Topics, which solve this problem in an elegant way and allow you to add new consumers dynamically without having to modify the broker configuration.
The producer sends to a topic:
topic://VT.ORDER.EVENT
Consumers read from specially named queues:
clientA: queue://VQ.CLIENTA.VT.ORDER.EVENT
clientB: queue://VQ.CLIENTB.VT.ORDER.EVENT
ref: Virtual Topics
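To make the fan-out concrete, here is a minimal JMS sketch. It assumes a broker on tcp://localhost:61616 whose virtualDestinationInterceptor maps VT.> topics to VQ.*. consumer queues as in the naming above (out of the box the prefixes are VirtualTopic. and Consumer. instead); the network-connector part toward Broker2 stays as in your configuration.

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicSketch {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Each client consumes from its own queue; the broker fans a copy of
        // every topic message out to every matching consumer queue.
        MessageConsumer clientA = session.createConsumer(session.createQueue("VQ.CLIENTA.VT.ORDER.EVENT"));
        MessageConsumer clientB = session.createConsumer(session.createQueue("VQ.CLIENTB.VT.ORDER.EVENT"));

        // The producer publishes once, to the virtual topic.
        MessageProducer producer = session.createProducer(session.createTopic("VT.ORDER.EVENT"));
        producer.send(session.createTextMessage("order created"));

        System.out.println("clientA got: " + ((TextMessage) clientA.receive(5000)).getText());
        System.out.println("clientB got: " + ((TextMessage) clientB.receive(5000)).getText());
        connection.close();
    }
}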
I have two ActiveMQ brokers in a network setup. The clients are configured with randomize=true and are able to connect fine. However, the messages do not get forwarded from one broker to the other and remain in the queue. For example, I have a particular queue which has multiple producers and one consumer. If I look at the queue on the broker to which the consumer is connected, all messages are dequeued immediately. However, on the other broker messages get queued and do not get drained.
Listed below is my networkConnectors and transportConnectors setup for the two brokers. I have tried adding duplex="true" as well as changing the networkTTL to 1, and neither seemed to make any difference.
BrokerA:
<networkConnectors>
    <networkConnector name="LocalBrokerToB"
                      networkTTL="2"
                      uri="static:(tcp://hostnameB:61617)"/>
</networkConnectors>
<transportConnectors>
    <transportConnector name="nioConnectorFront" uri="nio://hostnameA:61616?maximumConnections=1024"/>
    <transportConnector name="nioConnectorBack" uri="tcp://hostnameA:61617?maximumConnections=1024"/>
</transportConnectors>
BrokerB:
<networkConnectors>
    <networkConnector name="LocalBrokerToA"
                      networkTTL="2"
                      uri="static:(tcp://hostnameA:61617)"/>
</networkConnectors>
<transportConnectors>
    <transportConnector name="nioConnectorFront" uri="nio://hostnameB:61616?maximumConnections=1024"/>
    <transportConnector name="nioConnectorBack" uri="tcp://hostnameB:61617?maximumConnections=1024"/>
</transportConnectors>
Any ideas on what could be the problem? An example configuration that someone has working would be a great help.
You should connect the networkConnector to the transport connector of the other broker. That is port 61616 in your example, not 61617.
You should verify in the broker logs or via Web Console / JMX that the network connection actually gets established.
Adding duplex="true" lets one of the brokers initiate the connection, which is great in the case of firewalls etc. In your case, that should not matter.
I am using apache-activemq-5.11.1, which is the stable version and runs on JDK 7 (major version 51.0); I am using JDK 7 Update 80. I got the following error when I ran it on JDK 6:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/activemq/console/Main : Unsupported major.minor version 51.0
Coming to my problem: I need to have two running instances of ActiveMQ on my system. I followed these steps to create two instances:
C:\>cd \apache-activemq-5.11.1
C:\apache-activemq-5.11.1>.\bin\activemq create instance1
C:\apache-activemq-5.11.1>.\bin\activemq create instance2
I changed instance2 to a different set of port numbers, as below:
<!-- EDITED: apache-activemq-5.11.1\instance2\conf\activemq.xml -->
<transportConnectors>
    <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61716?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5772?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61713?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1983?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="ws" uri="ws://0.0.0.0:61714?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
Now I start instance1 and instance2 as follows:
C:\apache-activemq-5.11.1\instance1\bin>instance1 start
C:\apache-activemq-5.11.1\instance2\bin>instance2 start
The second instance I try to start gives the following KahaDB lock problem:
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#7209d9af: startup date [Thu May 07 16:16:23 IST 2015]; root of context hierarchy
INFO | PListStore:[C:\apache-activemq-5.11.1\data\localhost\tmp_storage] started
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[C:\apache-activemq-5.11.1\data\kahadb]
INFO | Database C:\apache-activemq-5.11.1\data\kahadb\lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: File 'C:\apache-activemq-5.11.1\data\kahadb\lock' could not be locked.
Please give a solution for this db lock issue.
Make a replica of your ActiveMQ installation, e.g. copy apache-activemq-x.xx.x to apache-activemq-x.xx.x_2.
Change the ports in apache-activemq-x.xx.x_2\conf\activemq.xml. Make sure the port numbers you change to do not clash with ports already in use.
<!-- EDIT: apache-activemq-5.11.1_2\conf\activemq.xml -->
<transportConnectors>
    <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61716?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5772?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="stomp" uri="stomp://0.0.0.0:61713?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1983?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    <transportConnector name="ws" uri="ws://0.0.0.0:61714?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
Along with the port changes, we have to change the HTTP management console port in jetty.xml as well:
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <!-- the default port number for the web console -->
    <property name="host" value="0.0.0.0"/>
    <property name="port" value="8162"/>
</bean>
In this way you can run two ActiveMQ services on one system.
Running multiple ActiveMQ instances also gives you a failover option.
When one instance goes down for some reason, the other instance comes up automatically, since the KahaDB lock held by the first instance is released.
For this, the ports are not to be changed, as we are configuring the instances for failover mode.
C:\>cd \apache-activemq-5.11.1
C:\apache-activemq-5.11.1>.\bin\activemq create instance1
C:\apache-activemq-5.11.1>.\bin\activemq create instance2
Start the instances without changing any of the configuration, so that when instance1 goes down for any reason, instance2 comes up.
C:\apache-activemq-5.11.1\instance1\bin>instance1 start
C:\apache-activemq-5.11.1\instance2\bin>instance2 start
I believe this is the purpose of creating multiple instances under ActiveMQ. Beyond this, some further configuration tweaks are also available for KahaDB.
I followed the instructions from:
Running Multiple ActiveMQ Instances on One Machine (Dzone)
It works fine on Mac (not tested on Linux).
Note: the instances must be started with instanceNumber start (the console argument/parameter is not valid anymore).
I had the same KahaDB locking problem, but only on Windows, with ActiveMQ versions 5.13.3 and 5.14.5.
The same author from DZone wrote practically the same post on his blog:
Running multiple ActiveMQ instances on one machine (Blog)
But there is an important update.
You must open each instanceNumber.bat file in each instance's bin directory and add these two lines:
set ACTIVEMQ_CONF="ACTIVEMQ_HOME/instanceNumber/conf"
set ACTIVEMQ_DATA="ACTIVEMQ_HOME/instanceNumber/data"
Where ACTIVEMQ_HOME represents the path of your ActiveMQ installation and instanceNumber is the instance being edited, such as instanceA or instanceB.
What is happening is that you have changed the port numbers correctly, but both instances you created use the same database (in this case the file-system-based KahaDB) to store their messages.
So when one instance is up and running, it holds the lock on that database, and the other ActiveMQ instance waits to gain the lock.
Essentially this becomes a master/slave configuration.
Look at this line in activemq.xml:
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
This will point to the same location for both instances.
My solution is to copy the entire apache-activemq-x.xx.x folder to a different location, change the port numbers for the second instance, and run them separately.
This way you will have two instances of ActiveMQ running on the same machine.
hope this helps!
Good luck!
Although the KahaDB file lock restricts load-balancing/fault-tolerant configuration, we can use the following kind of connection URL to spread load across ActiveMQ brokers:
failover://(tcp://192.nnn.nn.nn:61616,tcp://192.nnn.nn.nn:61616)?randomize=false
randomize=true makes messages shuffle between the two ActiveMQ brokers in active mode, rather than simply failing over.
The complete reference for this can be found at the following Apache site link:
http://activemq.apache.org/failover-transport-reference.html
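As a sketch of the client side (the hostnames below are placeholders copied from the URL above), the failover URL is simply passed to the connection factory; whether the client pins to the first broker or spreads across both is controlled by randomize:

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClientSketch {
    public static void main(String[] args) throws Exception {
        // randomize=false pins the client to the first broker and only moves to the
        // second on failure; randomize=true picks a broker at random, spreading load.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover://(tcp://192.nnn.nn.nn:61616,tcp://192.nnn.nn.nn:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
    }
}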
Still, a high-availability (i.e. cluster) configuration makes things more stable for your app, although Apache could further advance ActiveMQ high availability so that things work even more smoothly.
The present Apache ActiveMQ high-availability options are described at the following link:
http://activemq.apache.org/clustering.html
Although KahaDB has the file-lock restriction, the following tweaks/alternative configurations can be used:
1) Shared File System Master Slave - a shared file system such as a SAN
http://activemq.apache.org/shared-file-system-master-slave.html
2) JDBC Master Slave - a shared database
http://activemq.apache.org/jdbc-master-slave.html
3) Replicated LevelDB Store - replication coordinated by a ZooKeeper server
http://activemq.apache.org/replicated-leveldb-store.html
Over and above this, with JCA connectors ActiveMQ can be plugged into application servers such as JBoss, WebLogic, WebSphere, Geronimo, and GlassFish as a resource adapter. And with Apache Camel (Karaf) and JBoss Fuse ESB-type products, HA and clustering of ActiveMQ can also be achieved.
I've been struggling to start an AMQ broker node with its persistent store on an NFSv3 share.
I keep getting the error below, complaining about unavailable locks.
I've made sure that all Java processes are killed and the lock file on the shared folder is deleted before starting the AMQ master broker.
When I start AMQ, it seems to create a lock file on the shared folder, and after that it complains about unavailable locks.
Loading message broker from: xbean:activemq.xml
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#73cf56e9: startup date [Mon Dec 23 05:28:23 UTC 2013]; root of context hierarchy
INFO | PListStore:[/home/pnarayan/apache-activemq-5.9.0/activemq-data/notificationsBroker/tmp_storage] started
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/home/y/share/nfs/amqnfs]
INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
INFO | Database /home/y/share/nfs/amqnfs/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: No locks available
Below is the activemq xml configuration file I'm using:
<beans
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                        http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <broker
        xmlns="http://activemq.apache.org/schema/core"
        xmlns:spring="http://www.springframework.org/schema/beans"
        brokerName="notificationsBroker"
        useJmx="true"
        start="true"
        persistent="true"
        useShutdownHook="false"
        deleteAllMessagesOnStartup="false">

        <persistenceAdapter>
            <kahaDB directory="/home/y/share/nfs/amqnfs" />
        </persistenceAdapter>

        <transportConnectors>
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        </transportConnectors>

    </broker>
</beans>
Is this because I'm using NFSv3 rather than NFSv4, as recommended by AMQ?
I believe the issue with NFSv3 is that it cannot clean up the lock if the broker process dies abruptly. However, it shouldn't have any issue starting the broker. If my understanding is right, why am I observing the above error?
You are absolutely right in what you say - NFS3 does not clean up its locks properly. When using KahaDB, the broker creates a file in $ACTIVEMQ_DATA/lock. If that file exists, chances are that something has a hold on it (or at least NFS3 thinks that it does) and the broker will be blocked. Check to see whether the file is there, and if so, use the lsof command to determine the process id of its holder.
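If you want to test the NFS mount itself rather than the broker, a small probe along these lines can help: KahaDB takes an exclusive java.nio file lock on the lock file in its data directory, so attempting the same kind of lock from a standalone JVM shows whether the mount's locking works at all. The path below is the one from the log output above and is only an example.

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class NfsLockProbe {
    public static void main(String[] args) throws Exception {
        // Same style of lock KahaDB takes on <dataDirectory>/lock.
        File lockFile = new File("/home/y/share/nfs/amqnfs/lock");
        try (RandomAccessFile raf = new RandomAccessFile(lockFile, "rw")) {
            // On NFSv3 with a broken or absent lock daemon this typically throws
            // java.io.IOException: No locks available - the same error the broker logs.
            FileLock lock = raf.getChannel().tryLock();
            if (lock == null) {
                System.out.println("Lock is currently held by another process");
            } else {
                System.out.println("Lock acquired; NFS locking works. Releasing.");
                lock.release();
            }
        }
    }
}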
We are running ActiveMQ 5.6 on Tomcat 6.0.35 as an embedded broker, with the message delivery option set to PERSISTENT. We are getting an OutOfMemory problem on one of the consumer sides. That consumer is slow, as it is doing a time-consuming job. We used to get the OOM after running for 8-10 hrs. There are ~10000 messages to be processed, but it gives an OOM after processing 3000 messages, and the remaining 7000 messages stay in a pending state. The message size is very small, ~1 KB, in XML format. Meanwhile, we have other consumers on a different queue that are very fast; there, too, ~10000 messages are published and the message size is quite high, ~100 KB, but we are not getting an OOM on that queue, even though it is set up on the same broker.
Here is the stack trace of the error and our activemq.xml file.
INFO [11/08/12 05:39:31] ActiveMQ Session Task-4 - Start Uploading Nam2011_08_prototype/gdfas/mnada/usa/uf3.7z.001 to Amazon S3 bucket - aws-s3-infotech
Exception in thread "InactivityMonitor WriteCheck" java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:640)
    at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
    at org.apache.activemq.transport.AbstractInactivityMonitor.writeCheck(AbstractInactivityMonitor.java:142)
    at org.apache.activemq.transport.AbstractInactivityMonitor$2.run(AbstractInactivityMonitor.java:111)
    at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
    at java.util.TimerThread.mainLoop(Timer.java:512)
    at java.util.TimerThread.run(Timer.java:462)
Here is a snapshot from activemq.xml:
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>

<transportConnectors>
    <!-- <transportConnector name="openwire" uri="tcp://localhost:61616"/> -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
    <transportConnector name="stomp" uri="stomp://localhost:61613"/>
</transportConnectors>

<networkConnectors>
    <!-- by default just auto discover the other brokers -->
    <networkConnector name="defaultNetwork" uri="multicast://default"/>
    <!--
    <networkConnector name="host1 and host2" uri="static://(tcp://host1:61616,tcp://host2:61616)" failover="true"/>
    -->
</networkConnectors>

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage limit="512 mb"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="50 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>

<!-- lets define the dispatch policy -->
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry queue="SyncServer.>" memoryLimit="512mb" optimizedDispatch="true" queuePrefetch="10">
                <pendingQueuePolicy>
                    <fileQueueCursor/>
                </pendingQueuePolicy>
            </policyEntry>
        </policyEntries>
    </policyMap>
</destinationPolicy>
This has nothing to do with ActiveMQ. The error
java.lang.OutOfMemoryError: unable to create new native thread
means that the OS does not have enough free memory to allocate for the thread. The way I think of it is that for every thread Java creates, the OS needs to be able to create a 'native' thread, and that takes memory. You need to free up memory on the machine, add memory, or, most of the time and somewhat unintuitively, actually decrease your heap allocation to leave more for the OS.
A general rule of thumb is that you need to leave at least as much memory free for the OS as you allocate to the JVM. So, for example, if you have a 2 GB heap, you need to have at least 2 GB free beyond that (taking into account that the OS is going to use some memory too).
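To make that arithmetic concrete (illustrative numbers only, not taken from the question): on a machine with 4 GB of RAM, a 2 GB heap leaves roughly 2 GB for the OS, other processes, and the native side of the JVM. Each Java thread also needs its own native stack, typically around 512 KB to 1 MB by default (controlled by -Xss), so a few hundred broker, connection, and consumer threads can consume several hundred MB of that remainder. That is why lowering -Xmx, or reducing -Xss, frees room for new native threads.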
If you update your question with your JVM settings, OS, 64/32-bit, and hardware, I can help you tune it.