Servicemix ActiveMQ performance issue - activemq

I am using Apache ServiceMix and Apache ActiveMQ in my product, and I have found a performance issue with the HttpConnector.
If I increase the number of concurrent requests, the exchange queue gets stuck on one of the components. Once all the requests start processing, most of the exchanges get stuck at the final component or at the dispatcher* component. The container's heap usage grows very high, and after some time it crashes and restarts automatically.
I have also configured a transaction manager, and the flow name is set to jms.
I need a solution urgently.

You might try using KahaDB instead of the amqPersistenceAdapter. We saw a huge throughput increase just by switching to it.
Here is the config we used (the right values are highly dependent on your application, but make sure enableJournalDiskSyncs is set to false):
<persistenceAdapter>
    <kahaDB directory="../data/kaha"
            enableJournalDiskSyncs="false"
            indexWriteBatchSize="10000"
            indexCacheSize="1000" />
</persistenceAdapter>

If I increase the number of concurrent requests, the exchange queue gets stuck on one of the components.
ActiveMQ has a mechanism to stop producers from writing messages, called "producer flow control". It seems that your producer is faster than your consumer (or the consumer's speed is not stable), so first check the memoryLimit configured for your broker (it can also be defined per destination). Try increasing it:
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic="FOO.>" producerFlowControl="false" memoryLimit="1mb">
<dispatchPolicy>
<strictOrderDispatchPolicy/>
</dispatchPolicy>
<subscriptionRecoveryPolicy>
<lastImageSubscriptionRecoveryPolicy/>
</subscriptionRecoveryPolicy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
Also, you can disable the blocking of producers with the producerFlowControl="false" option. In that case, when the whole memory buffer is used up, ActiveMQ slows down and messages are spooled to disk. For more info, see Producer Flow Control and Message Cursors.
The container's heap usage grows very high, and after some time it crashes and restarts automatically.
But anyway, this is just tuning of your app, not a full solution, because there will always be a case where some resource runs out :)
You should limit the incoming requests or load-balance them, e.g. using Pound.
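For reference, the broker-wide limits that flow control checks against live in the systemUsage element of activemq.xml. A minimal sketch (the limit values here are illustrative, not recommendations):

```xml
<systemUsage>
    <systemUsage>
        <!-- memory available for holding in-flight messages -->
        <memoryUsage>
            <memoryUsage limit="512 mb"/>
        </memoryUsage>
        <!-- disk space for the persistent message store -->
        <storeUsage>
            <storeUsage limit="10 gb"/>
        </storeUsage>
        <!-- disk space for spooling non-persistent messages -->
        <tempUsage>
            <tempUsage limit="5 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
```

When any of these limits is hit, producer flow control (or disk spooling, if flow control is off) kicks in.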

Related

ActiveMQ - Limiting number of pending messages in queue without affecting the producer

ActiveMQ 5.15.4
Context: the producer I'm working with publishes to many different queues.
Things I've tried:
1.
<policyEntry queue=">" producerFlowControl="true" memoryLimit="10 mb">
</policyEntry>
This correctly limits the queue size, but it throttles the producer, as described here. This leads to other, non-problematic queues being affected.
2.
<policyEntry queue=">" producerFlowControl="false" memoryLimit="10 mb">
</policyEntry>
This doesn't seem to limit the queue size.
3.
I've tried to use messageEvictionStrategy and pendingMessageLimitStrategy but they don't seem to work for queues, only topics.
Am I missing some other possible strategy?
You need to use a Time To Live value on the messages to control how long they stay in the Queue, otherwise the broker will either block the producer using flow control or page the messages to disk if you disable it. Queues are not generally meant for fixed size messaging as the assumption on the Queue is that the contents are important and should not be discarded unless the sender allows for it via a TTL.
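If you cannot change the producers themselves, one broker-side way to enforce a TTL is the timeStampingBrokerPlugin. A sketch, assuming an ActiveMQ broker configuration (the 60-second value is illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
    <plugins>
        <!-- cap every message's expiration at 60s, and apply the TTL even
             to messages sent without one (zeroExpirationOverride) -->
        <timeStampingBrokerPlugin ttlCeiling="60000"
                                  zeroExpirationOverride="60000"/>
    </plugins>
</broker>
```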

Strange behaviour on Queues with ActiveMQ

We have been using AMQ in production for quite some time already, and we are noticing strange behavior on one of our queues.
The situation is as follows:
we do clickstream traffic, so once we have identified a user, all of his events are "grouped" by the JMSXGroupID property (a UUID; in our case we can have millions of these per hour), so that events for the same user are consumed in order even when they arrive in bursts
we use KahaDB with roughly the following config:
<mKahaDB directory="${activemq.data}/mkahadb">
<filteredPersistenceAdapters>
<filteredKahaDB perDestination="true">
<persistenceAdapter>
<kahaDB checkForCorruptJournalFiles="true" journalDiskSyncStrategy="PERIODIC" journalDiskSyncInterval="5000" preallocationStrategy="zeros" concurrentStoreAndDispatchQueues="false" />
</persistenceAdapter>
</filteredKahaDB>
</filteredPersistenceAdapters>
the broker is in a rather beefy EC2 instance, but it doesn't seem to hit any limits, neither file limits, nor IOPS, nor CPU limits
the destination policy for this destination is very similar to that of many other destinations that use the same JMSXGroupID grouping:
<policyEntry queue="suchDestination" producerFlowControl="false" memoryLimit="256mb" maxPageSize="5000" maxBrowsePageSize="2000">
    <messageGroupMapFactory>
        <simpleMessageGroupMapFactory/>
    </messageGroupMapFactory>
    <deadLetterStrategy>
        <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true" />
    </deadLetterStrategy>
</policyEntry>
consumers consume messages fairly slowly compared to other destinations (about 50-100ms per message, versus about 10-30ms per message for consumers of other destinations)
however, we end up in a situation where the consumers are not consuming at the speed we expect and seem to be waiting for something, while there is a huge backlog of messages on the remote broker for that destination. The consumers appear to be neither CPU-, IO-, nor network-bound.
a symptom is that if we split that queue into two queues and attach the same number of consumers on the same number of nodes, things somehow get better. Also, under a heavy workload, if we just rename the queue to suchQueue2 on the producers and assign some consumers to it, those consumers are much faster (for a while) than the consumers on the "old" suchQueue.
the queue doesn't have "non-grouped messages", all messages on it have the JMSXGroupID property and are of the same type.
increasing the number of consumers or lowering it for that queue seems to have little effect
rebooting the consumer apps seems to have little effect once the queue becomes "slow to consume"
Has anybody experienced this?
In short: the broker waits a considerable time for consumers that appear to be free and not busy.

ActiveMQ and maxPageSize

I would like to set the maxPageSize to a larger number from its default 200.
This is how I set in the activemq.xml file:
<destinationPolicy>
<policyMap>
<policyEntries>
<!-- ... -->
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb" maxPageSize="SOME_LARGE_NUMBER">
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
This change helps me get the number of messages in a queue using QueueBrowser.getEnumeration(), since by default it returned only 200 even when the queue held more than 200 messages.
Please see: http://docs.oracle.com/javaee/1.4/api/javax/jms/QueueBrowser.html for QueueBrowser.getEnumeration().
What is the side effect of changing the maxPageSize from 200 to say 1000?
Does it affect the broker's performance anyway?
I am not seeing any documentation for this property other than "maximum number of persistent messages to page from store at a time" on this page:
http://activemq.apache.org/per-destination-policies.html
Thanks for your time!
Max page size simply indicates the number of messages that will be loaded into memory, so the impact is that it will consume more memory.
Reading between the lines though, the reason that you are doing this is an anti-pattern. Queue browsing as part of an application is really a misuse of messaging - a message queue works best when treated as a queue. First in, first out. Not as an array that you scan to see whether a message has arrived.
You are much better off consuming each of the messages, and either:
sorting them onto a bunch of other queues depending on their payload and then processing that second level of queues differently, or
storing the payloads into a database and selecting based on the content.

ActiveMQ: Reject connections from producers when persistent store fills

I would like to configure my ActiveMQ producers to failover (I'm using the Stomp protocol) when a broker reaches a configured limit. I want to allow consumers to continue consumption from the overloaded broker, unabated.
Reading ActiveMQ docs, it looks like I can configure ActiveMQ to do one of a few things when a broker reaches its limits (memory or disk):
Slow down messages using producerFlowControl="true" (by blocking the send)
Throw exceptions when using sendFailIfNoSpace="true"
Neither of the above, in which case... I'm not sure what happens? Does it revert to TCP flow control?
It doesn't look like any of these things are designed to trigger a producer failover. A producer will failover when it fails to connect but not, as far as I can tell, when it fails to send (due to producer flow control, for example).
So, is it possible for me to configure a broker to refuse connections when it reaches its limits? Or is my best bet to detect the slowdown on the producer side, and to manually reconfigure my producers to use a different broker at that time?
Thanks!
Your best bet is to use sendFailIfNoSpace, or better sendFailIfNoSpaceAfterTimeout. This will throw an exception up to your client, which can then attempt to resend the message to another broker at the application level (though you can encapsulate this logic over the top of your Stomp library, and use this facade from your code). Though if your ActiveMQ setup is correctly wired, your load both in terms of production and consumption should be more or less evenly distributed across your brokers, so this feature may not buy you a great deal.
You would probably get a better result if you concentrated on fast consumption of the messages, and increased the storage limits to smooth out peaks in load.

How can I configure ActiveMQ to drop a consumer if it just stops accepting data?

Today, I saw many errors on my ActiveMQ 5.3.2 console:
INFO | Usage Manager Memory Limit reached. Stopping producer (ID:...) to prevent flooding topic://mytopic. See http://activemq.apache.org/producer-flow-control.html for more info (blocking for: 3422ms)
I did a little bit of poking around, and determined that the subscriber had gone out to lunch:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 130320 10.208.87.178:61613 66.31.31.216:37951 ESTABLISHED
In this situation, I don't want the producer to block; I would prefer to drop the client completely. http://activemq.apache.org/slow-consumer-handling.html explains how to limit the number of messages queued, which is a good start, but isn't really what I want. http://activemq.apache.org/slow-consumers.html alludes to being able to drop a slow consumer, but doesn't explain how one might do this.
So, this is my question: is it possible to set up ActiveMQ to drop slow consumers completely, and how do I do so?
I would turn off producerFlowControl for the topics you want or all of them like so:
<policyEntry topic=">" producerFlowControl="false">
This has the ability to cause you to run out of memory or disk space now though because the message queue could keep growing. So make sure you set up an eviction strategy and pending message limit strategy like so:
<messageEvictionStrategy>
<oldestMessageEvictionStrategy/>
</messageEvictionStrategy>
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="10"/>
</pendingMessageLimitStrategy>
This will start throwing away messages after a limit of 10 has been reached. The oldest messages will be thrown out first.
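Putting the two snippets together, a complete policyEntry combining both strategies might look like this (the limit values are illustrative):

```xml
<policyEntry topic=">" producerFlowControl="false">
    <!-- when the pending limit is hit, discard the oldest message first -->
    <messageEvictionStrategy>
        <oldestMessageEvictionStrategy/>
    </messageEvictionStrategy>
    <!-- keep at most 10 pending messages per slow consumer -->
    <pendingMessageLimitStrategy>
        <constantPendingMessageLimitStrategy limit="10"/>
    </pendingMessageLimitStrategy>
</policyEntry>
```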
AbortSlowConsumerStrategy, which was improved in ActiveMQ 5.9.0, should help in this respect. I haven't had the opportunity to test it thoroughly across various configurations, but I think this is what you are looking for, especially ignoreIdleConsumers=true.
http://java.dzone.com/articles/coming-activemq-59-new-way
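A sketch of how the strategy can be wired into a policyEntry (attribute names per the ActiveMQ 5.9 docs; the 30-second threshold is illustrative):

```xml
<policyEntry topic=">">
    <slowConsumerStrategy>
        <!-- abort consumers that have not ACKed for 30s;
             ignoreIdleConsumers spares consumers with no pending messages -->
        <abortSlowAckConsumerStrategy maxTimeSinceLastAck="30000"
                                      ignoreIdleConsumers="true"/>
    </slowConsumerStrategy>
</policyEntry>
```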
You can configure the sendFailIfNoSpaceAfterTimeout property to throw an error to the producer instead of blocking:
<systemUsage>
<systemUsage sendFailIfNoSpaceAfterTimeout="3000">
<memoryUsage>
<memoryUsage limit="256 mb"/>
</memoryUsage>
</systemUsage>
</systemUsage>
I cannot give you an answer on how to drop the client (in fact, I am not so sure it is possible at all), but what you can do is add a policy to not block the producer:
<policyEntry topic=">" memoryLimit="10mb" producerFlowControl="false"/>
This way, your producer will not suffer from slow consumers.