ActiveMQ destinationPolicy and systemUsage Configuration

Looking for some help around systemUsage and destinationPolicy configuration as I'm having some difficulty fully understanding the relationship between systemUsage, destinationPolicy, and flow control.
All our messages are persistent! producerFlowControl is on.
So we give ActiveMQ say a maximum of 512MB heap space.
Our systemUsage is set as below:
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="200 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="10 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1000 mb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
Our destination policy is set as below:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">" producerFlowControl="true" memoryLimit="1mb">
        <pendingSubscriberPolicy>
        </pendingSubscriberPolicy>
      </policyEntry>
      <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
Can anyone verify if the following is correct:
This means that for each individual queue/topic the memory limit is 1MB. What exactly happens when this 1MB is hit? Does the queue block producers, or does it page to disk?
The total allowed memory for all queues and topics is 200MB, meaning we could have 200 channels operating at their full capacity of 1MB. We currently have 16 queues and topics in total, so obviously that is never reached.
Are we better off removing the individual memory limit from the policy entries and sharing the memory between the various channels?
If we do this, at what point would they block?
Any help very much appreciated! Can paypal you some beer money!

You're touching on a number of points here, which I'll answer out of order.
memoryUsage corresponds to the amount of memory that's assigned to the in-memory store. storeUsage corresponds to how much space should be given to the KahaDB store. You either use one or the other, depending on how you want your broker to persist messages or not. tempUsage is a special case for file cursors (http://activemq.apache.org/message-cursors.html) - a mechanism to overflow the memory from the in-memory store to disk if the memory limit is exceeded (you have to configure this behaviour at the destination level if you want it).
policyEntry#memoryLimit is a sub-limit for individual destinations.
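For example, a hedged sketch of wiring both of these up at the destination level (the ">" wildcards, the 5mb limits, and the choice of file-based cursors are placeholders; see the message-cursors page above for the cursor types):
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- queues: cap the per-destination memory and spool further pending messages
           to the temp store (bounded by tempUsage) via a file-based cursor -->
      <policyEntry queue=">" memoryLimit="5mb">
        <pendingQueuePolicy>
          <fileQueueCursor/>
        </pendingQueuePolicy>
      </policyEntry>
      <!-- topics: the same idea for (non-durable) subscribers -->
      <policyEntry topic=">" memoryLimit="5mb">
        <pendingSubscriberPolicy>
          <fileCursor/>
        </pendingSubscriberPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>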
What happens when memory limits are exceeded depends on whether producer flow control (PFC) is turned on. It's on by default for queues, off for topics and asynchronous sends to queues; all this can be configured in the policyEntry (http://activemq.apache.org/per-destination-policies.html).
If you hit a "memory limit" when PFC is on, your clients will block until someone frees up space by consuming messages from the store. If it's off, the send will throw an exception (better the client fall over than the broker). "Memory limit" means either the one defined by memoryUsage across all queues, or the queue-specific limit (it's possible to hit the former before the latter).
Whether or not you want a destination-specific limit depends on your use case. I'd suggest ignoring it unless you're trying to achieve a specific outcome.

Related

ActiveMQ getting blocked once memory is 100% used

I have 2 queues set up in an ActiveMQ broker. I wanted to test that producer flow control is working and that producers for both queues get stalled when there is no memory. As soon as the event occurs, messages start getting enqueued in both queues, but everything stops once memory usage reaches 100% (sometimes it even goes above 100%; I don't know how that is possible for persistent messages).
Memory usage never drops back below 100%, so the producers never start putting messages into the two queues again.
<policyEntry queue=">" memoryLimit="1000MB" producerFlowControl="true"/>
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage percentOfJvmHeap="70" />
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="50MB"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="50MB"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
Is there something wrong with the configuration? I want to slow down the producer when memory is full (basically stop enqueueing) and start again when memory is freed. Why is the memory never freed? Is there a way to slow producers down even before memory is full, if there are already enough items on the queue to dequeue?

ActiveMQ and maxPageSize

I would like to set maxPageSize to a larger number than its default of 200.
This is how I set it in the activemq.xml file:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      ---
      <policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb" maxPageSize="SOME_LARGE_NUMBER">
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
This change helps me get the number of messages in a queue using QueueBrowser.getEnumeration(), which otherwise returned only 200 entries even when the number of messages in the queue was greater than 200.
Please see: http://docs.oracle.com/javaee/1.4/api/javax/jms/QueueBrowser.html for QueueBrowser.getEnumeration().
What is the side effect of changing the maxPageSize from 200 to say 1000?
Does it affect the broker's performance in any way?
I am not seeing any documentation for this property other than "maximum number of persistent messages to page from store at a time" on this page:
http://activemq.apache.org/per-destination-policies.html
Thanks for your time!
Max page size simply indicates the number of messages that will be loaded into memory at a time, so the impact is that it will consume more memory.
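As a hedged illustration (the numbers are made up), the messages that get paged in still have to fit within the destination's memoryLimit, so a larger maxPageSize is usually paired with a proportionally larger limit:
<!-- e.g. 1000 paged-in messages of roughly 10 KB each need on the order of 10 MB of headroom -->
<policyEntry queue=">" producerFlowControl="true" maxPageSize="1000" memoryLimit="16mb"/>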
Reading between the lines though, the reason that you are doing this is an anti-pattern. Queue browsing as part of an application is really a misuse of messaging - a message queue works best when treated as a queue. First in, first out. Not as an array that you scan to see whether a message has arrived.
You are much better off consuming each of the messages, and either:
sorting them onto a bunch of other queues depending on their payload and then processing that second level of queues differently (see the sketch below), or
storing the payloads into a database and selecting based on the content.
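If the routing criterion is available as a message header or property (selectors can't look inside the payload), the first option can even be done inside the broker with a composite destination. This is only a hedged sketch; the ORDERS.* queue names and the orderType property are invented for illustration, and the block goes inside the <broker> element of activemq.xml:
<destinationInterceptors>
  <virtualDestinationInterceptor>
    <virtualDestinations>
      <compositeQueue name="ORDERS.INCOMING">
        <forwardTo>
          <!-- route on a JMS message property instead of browsing the queue -->
          <filteredDestination selector="orderType = 'priority'" queue="ORDERS.PRIORITY"/>
          <filteredDestination selector="orderType = 'standard'" queue="ORDERS.STANDARD"/>
        </forwardTo>
      </compositeQueue>
    </virtualDestinations>
  </virtualDestinationInterceptor>
</destinationInterceptors>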

ActiveMQ, unconsumed messages are always in broker's memory?

I am using ActiveMQ and have a question about how and when persisted messages really get swapped to disk while they wait for a consumer.
I am using virtual topics and create queue consumers to receive messages from them. All messages are persistent (I've verified that records for all unconsumed messages are present in the persistence store).
I have multiple consumers which regularly come on and off line. Having connected jconsole to ActiveMQ, I noticed that ALL unconsumed messages seem to be held in the broker's memory. They do not get swapped to disk, at least I was not able to verify that.
Setting memoryUsage or turning producerFlowControl on does not have any effect. The broker either blocks when the memoryUsage limit is hit with flow control on, or MemoryPercentUsage keeps increasing with flow control off.
When exactly does ActiveMQ free memory by storing messages to disk/persistent storage? Or does it at all? How can I verify that it is disk space, not RAM, that will limit the broker in the long run when there are millions of unconsumed (pending) messages in queues?
Please recheck with memoryUsage on; you can specify different limits at which messages should be persisted. In the example below, once 64 mb of memory is full, the broker starts writing to disk, up to 100 gb.
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="64 mb" />
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="100 gb" />
    </storeUsage>
  </systemUsage>
</systemUsage>
Try using the storeCursor and setting the memory limit in the destinationPolicy; this will enforce a per-queue limit on the amount of memory/messages it will pull in from disk:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="100mb">
        <pendingQueuePolicy>
          <storeCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>

How can I configure ActiveMQ to drop a consumer if it just stops accepting data?

Today, I saw many errors on my ActiveMQ 5.3.2 console:
INFO | Usage Manager Memory Limit reached. Stopping producer (ID:...) to prevent flooding topic://mytopic. See http://activemq.apache.org/producer-flow-control.html for more info (blocking for: 3422ms)
I did a little bit of poking around, and determined that the subscriber had gone out to lunch:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 130320 10.208.87.178:61613 66.31.31.216:37951 ESTABLISHED
In this situation, I don't want the producer to block; I would prefer to drop the client completely. http://activemq.apache.org/slow-consumer-handling.html explains how to limit the number of messages queued, which is a good start, but isn't really what I want. http://activemq.apache.org/slow-consumers.html alludes to being able to drop a slow consumer, but doesn't explain how one might do this.
So, this is my question: is it possible to set up ActiveMQ to drop slow consumers completely, and if so, how do I do it?
I would turn off producerFlowControl for the topics you want, or for all of them, like so:
<policyEntry topic=">" producerFlowControl="false">
This now has the potential to make you run out of memory or disk space, though, because the backlog of pending messages could keep growing. So make sure you set up an eviction strategy and a pending message limit strategy, like so:
<messageEvictionStrategy>
  <oldestMessageEvictionStrategy/>
</messageEvictionStrategy>
<pendingMessageLimitStrategy>
  <constantPendingMessageLimitStrategy limit="10"/>
</pendingMessageLimitStrategy>
This will start throwing away messages after a limit of 10 has been reached. The oldest messages will be thrown out first.
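For clarity, a hedged sketch of how those pieces sit together inside the topic's policyEntry (the wildcard and the limit of 10 are just the values from above):
<policyEntry topic=">" producerFlowControl="false">
  <!-- keep at most 10 pending messages per topic subscriber, discarding the oldest first -->
  <messageEvictionStrategy>
    <oldestMessageEvictionStrategy/>
  </messageEvictionStrategy>
  <pendingMessageLimitStrategy>
    <constantPendingMessageLimitStrategy limit="10"/>
  </pendingMessageLimitStrategy>
</policyEntry>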
AbortSlowConsumerStrategy, which has been improved in ActiveMQ 5.9.0, should help in this respect. I haven't had the opportunity to test it thoroughly across various configurations, but I think this is what you are looking for, especially ignoreIdleConsumers=true.
http://java.dzone.com/articles/coming-activemq-59-new-way
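A hedged sketch of what that might look like in a policyEntry (element and attribute names as described in the linked article; the 30-second threshold is a placeholder, and you should verify the exact names against your 5.9.x broker):
<policyEntry topic=">">
  <slowConsumerStrategy>
    <!-- abort consumers that have not acknowledged a dispatched message within 30s;
         with ignoreIdleConsumers="true", consumers with nothing dispatched are left alone -->
    <abortSlowAckConsumerStrategy maxTimeSinceLastAck="30000" ignoreIdleConsumers="true"/>
  </slowConsumerStrategy>
</policyEntry>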
You can configure the sendFailIfNoSpaceAfterTimeout property to throw an error back to the producer instead of blocking:
<systemUsage>
  <systemUsage sendFailIfNoSpaceAfterTimeout="3000">
    <memoryUsage>
      <memoryUsage limit="256 mb"/>
    </memoryUsage>
  </systemUsage>
</systemUsage>
I cannot give you an answer on how to drop the client (in fact, I am not sure it is possible at all), but what you can do is add a policy so that the producer is not blocked:
" memoryLimit="10mb" producerFlowControl="false"/>
This way, your producer will not suffer from slow consumers.

ServiceMix ActiveMQ performance issue

I am using Apache ServiceMix and Apache ActiveMQ in my product. In the case of the HTTP connector I found a performance issue.
If I increase the total number of concurrent requests, then as the number grows the exchange queue gets stuck on one of the components. After all the requests start processing, most of the exchanges get stuck at the end component or at the dispatcher* component. The container's heap usage grows very high, and after some time it crashes and restarts automatically.
I have also used a transaction manager. The flow name is set to jms.
I need a solution urgently.
You might try using KahaDB instead of the amqPersistenceAdapter. We saw a huge throughput increase just by switching to it.
Here is the config we used (the right values are highly dependent on your application, but make sure enableJournalDiskSyncs is set to false):
<persistenceAdapter>
  <kahaDB directory="../data/kaha"
          enableJournalDiskSyncs="false"
          indexWriteBatchSize="10000"
          indexCacheSize="1000" />
</persistenceAdapter>
If I increase the total number of concurrent requests, then as the number grows the exchange queue gets stuck on one of the components.
ActiveMQ has a mechanism to stop producers from writing messages; it is called "flow control". It seems that your producer is faster than your consumer (or the consumer's speed is not stable), so first check the memoryLimit config for your AMQ (it can also be defined for a specific queue), and try increasing it.
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic="FOO.>" producerFlowControl="false" memoryLimit="1mb">
        <dispatchPolicy>
          <strictOrderDispatchPolicy/>
        </dispatchPolicy>
        <subscriptionRecoveryPolicy>
          <lastImageSubscriptionRecoveryPolicy/>
        </subscriptionRecoveryPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
Also, you can disable that blocking of incoming messages with the producerFlowControl="false" option. In that case, when the whole memory buffer is used, AMQ will slow down and messages will be spooled to disk. More info: Producer Flow Control and Message Cursors.
The container's heap usage grows very high, and after some time it crashes and restarts automatically.
But anyway, that is just a way of tuning your app; it is not a real solution, because you will always hit a case where some resource runs out :)
You should limit the incoming requests or load-balance them, e.g. using Pound.