ActiveMQ memory limit exceeded - activemq

I am trying to configure ActiveMQ for the following behavior: when the broker exceeds its memory limit, it should store messages in persistent storage. I use the following configuration:
BrokerService broker = new BrokerService();
broker.setBrokerName("activemq");
KahaDBPersistenceAdapter persistence = new KahaDBPersistenceAdapter();
persistence.setDirectory(new File(config.getProperty("amq.persistenceDir", "amq")));
broker.setPersistenceAdapter(persistence);
broker.setVmConnectorURI(new URI("vm://activemq"));
broker.getSystemUsage().getMemoryUsage().setLimit(64 * 1024 * 1024L);
broker.getSystemUsage().getStoreUsage().setLimit(1024 * 1024 * 1024 * 100L);
broker.getSystemUsage().getTempUsage().setLimit(1024 * 1024 * 1024 * 100L);
PolicyEntry policyEntry = new PolicyEntry();
policyEntry.setCursorMemoryHighWaterMark(50);
policyEntry.setExpireMessagesPeriod(0L);
policyEntry.setPendingDurableSubscriberPolicy(new StorePendingDurableSubscriberMessageStoragePolicy());
policyEntry.setMemoryLimit(64 * 1024 * 1024L);
policyEntry.setProducerFlowControl(false);
broker.setDestinationPolicy(new PolicyMap());
broker.getDestinationPolicy().setDefaultEntry(policyEntry);
broker.setUseJmx(true);
broker.setPersistent(true);
broker.start();
However, this does not work. ActiveMQ still consumes as much memory as it needs to hold the full queue. I also tried removing the PolicyEntry, which caused the broker to stop producers once the memory limit was reached. I could find nothing in the documentation about what I am doing wrong.

We use a storeCursor and set the memory limit as follows; this limits the amount of memory for all queues to 100MB:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="100mb">
        <pendingQueuePolicy>
          <storeCursor/>
        </pendingQueuePolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
Make sure you set the destinations that your policy should apply to. In my XML example this is done using queue=">", but your example is using a new PolicyMap(); try calling policyEntry.setQueue(">") instead to apply the entry to all queues, or add specific destinations to your PolicyMap.
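For reference, here is a minimal, untested sketch of the same advice in Java, assuming the BrokerService from your snippet; StorePendingQueueMessageStoragePolicy corresponds to the <storeCursor/> element for queues:
import java.util.Arrays;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.broker.region.policy.StorePendingQueueMessageStoragePolicy;

PolicyEntry policyEntry = new PolicyEntry();
policyEntry.setQueue(">");                                   // apply this entry to all queues
policyEntry.setProducerFlowControl(false);
policyEntry.setMemoryLimit(100 * 1024 * 1024L);              // 100MB per destination
policyEntry.setPendingQueuePolicy(new StorePendingQueueMessageStoragePolicy()); // store cursor

PolicyMap policyMap = new PolicyMap();
policyMap.setPolicyEntries(Arrays.asList(policyEntry));      // destination is taken from setQueue(">")
broker.setDestinationPolicy(policyMap);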
See this test for a full example:
https://github.com/apache/activemq/blob/master/activemq-unit-tests/src/test/java/org/apache/activemq/PerDestinationStoreLimitTest.java

Related

Spring JMS message-driven-channel-adapter: the number of consumers doesn't reduce to the standard level

I have a message-driven-channel-adapter and I defined max-concurrent-consumers as 100 and concurrent-consumers as 2. When I ran a load test, I saw that the number of concurrent consumers increased, but after the load test it did not go back down to the standard level. I'm checking it with the RabbitMQ management portal.
When the project is restarted (no load test), the GET (Empty) rate is 650/s, but after the load test it stays at about 2500/s and does not return to 650/s. I think the concurrent-consumers value is being increased but is not being reduced back to the original value.
How can I make it go back down to the normal level again?
Here is my message-driven-channel-adapter definition:
<int-jms:message-driven-channel-adapter id="inboundAdapter"
auto-startup="true"
connection-factory="jmsConnectionFactory"
destination="inboundQueue"
channel="requestChannel"
error-channel="errorHandlerChannel"
receive-timeout="-1"
concurrent-consumers="2"
max-concurrent-consumers="100" />
With receive-timeout="-1", the container has no control over an idle consumer (it is blocked in the JMS client).
You also need to set max-messages-per-task for the container to consider stopping a consumer.
<int-jms:message-driven-channel-adapter id="inboundAdapter"
auto-startup="true"
connection-factory="jmsConnectionFactory"
destination-name="inboundQueue"
channel="requestChannel"
error-channel="errorHandlerChannel"
receive-timeout="5000"
concurrent-consumers="2"
max-messages-per-task="10"
max-concurrent-consumers="100" />
The time elapsed before an idle consumer is stopped is receiveTimeout * maxMessagesPerTask; with the settings above, that is 5000 ms * 10 = 50 seconds.
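If you configure the container in Java rather than XML, a rough sketch of the equivalent settings on Spring's DefaultMessageListenerContainer (which backs the message-driven adapter) would look like this; jmsConnectionFactory and the queue name are assumed from the example above:
import org.springframework.jms.listener.DefaultMessageListenerContainer;

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(jmsConnectionFactory);   // same factory as in the XML
container.setDestinationName("inboundQueue");
container.setConcurrentConsumers(2);
container.setMaxConcurrentConsumers(100);
container.setReceiveTimeout(5000);     // finite poll timeout so the container regains control
container.setMaxMessagesPerTask(10);   // allows idle consumers to be stopped between tasks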

Ignite SQL query is taking time

We are currently using GridGain Community Edition 8.8.10. We have set up the Ignite cluster in Kubernetes using the Ignite operator. The cluster consists of 2 nodes with native persistence enabled, and we are using a thick client to connect to it. The clients are also deployed in the same Kubernetes cluster. The memory configuration of the cluster is as follows:
-DIGNITE_WAL_MMAP=false -DIGNITE_QUIET=false -Xms6g -Xmx6g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="Knowledge_Region"/>
<!-- Memory region of 20 MB initial size. -->
<property name="initialSize" value="#{20 * 1024 * 1024}"/>
<!-- Maximum size is 9 GB -->
<property name="maxSize" value="#{9L * 1024 * 1024 * 1024}"/>
<!-- Enabling eviction for this memory region. -->
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
<property name="persistenceEnabled" value="true"/>
<!-- Enabling SEGMENTED_LRU page replacement for this region. -->
<property name="pageReplacementMode" value="SEGMENTED_LRU"/>
</bean>
We are using an Ignite SQL string function to query the cache. The cache structure is as follows:
@QuerySqlField(index = true, inlineSize = 100)
private String value;

@QuerySqlField(name = "label", index = true, inlineSize = 100)
private String label;

@QuerySqlField(name = "type", index = true, inlineSize = 100)
@AffinityKeyMapped
private String type;

private String typeLabel;
private List<String> synonyms;
The SQL query we are using to get the data is as follows:
select _key, _val from TESTCACHEVALUE USE INDEX(TESTCACHEVALUE_label_IDX) WHERE REGEXP_LIKE(label, 'unit.*s.*','i') LIMIT 8
The query plan that gets generated:
[05:04:56,613][WARNING][long-qry-#36][LongRunningQueryManager] Query execution is too long [duration=1124ms, type=MAP, distributedJoin=false, enforceJoinOrder=false, lazy=false, schema=staging_infrastructuretesting_business_object, sql='SELECT
"__Z0"."_KEY" AS "__C0_0",
"__Z0"."_VAL" AS "__C0_1"
FROM "staging_infrastructuretesting_business_object"."TESTCACHEVALUE" AS "__Z0" USE INDEX ("TESTCACHEVALUE_LABEL_IDX")
WHERE REGEXP_LIKE("__Z0"."LABEL", 'uni.*', 'i') FETCH FIRST 8 ROWS ONLY', plan=SELECT
__Z0._KEY AS __C0_0,
__Z0._VAL AS __C0_1
FROM staging_infrastructuretesting_business_object.TESTCACHEVALUE __Z0 USE INDEX (TESTCACHEVALUE_LABEL_IDX)
/* staging_infrastructuretesting_business_object.TESTCACHEVALUE.__SCAN_ */
/* scanCount: 289643 */
/* lookupCount: 1 */
WHERE REGEXP_LIKE(__Z0.LABEL, 'uni.*', 'i')
FETCH FIRST 8 ROWS ONLY
As I can see, the query is doing a full scan and not using the index specified in the query.
The cache contains 5 million objects.
The memory statistics of the cluster are as follows:
^-- Node [id=d87d1212, uptime=00:30:00.229]
^-- Cluster [hosts=6, CPUs=20, servers=2, clients=4, topVer=12, minorTopVer=25]
^-- Network [addrs=[10.57.5.10, 127.0.0.1], discoPort=47500, commPort=47100]
^-- CPU [CPUs=1, curLoad=16%, avgLoad=38.3%, GC=0%]
^-- Heap [used=4265MB, free=30.58%, comm=6144MB]
^-- Off-heap memory [used=4872MB, free=58.58%, allocated=11564MB]
^-- Page memory [pages=620072]
^-- sysMemPlc region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.96%, allocRam=100MB, allocTotal=0MB]
^-- metastoreMemPlc region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.87%, allocRam=0MB, allocTotal=0MB]
^-- TxLog region [type=internal, persistence=true, lazyAlloc=false,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%, allocRam=100MB, allocTotal=0MB]
^-- volatileDsMemPlc region [type=internal, persistence=false, lazyAlloc=true,
... initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%, allocRam=0MB]
^-- Default_Region region [type=default, persistence=true, lazyAlloc=true,
... initCfg=20MB, maxCfg=9216MB, usedRam=4781MB, freeRam=48.12%, allocRam=9216MB, allocTotal=4753MB]
^-- Ignite persistence [used=4844MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]
^-- Striped thread pool [active=0, idle=8, qSize=0]
From the memory snapshot it seems we have enough memory in the cluster.
What I have tried so far:
Index hint in the Query
Applied limit to the Query
Partitioned Cache with Query parallelism 3
SkipReducer on update True
OnheapCacheEnabled set to True
Not sure why the query is taking so long. Please let me know if I have missed anything.
One observation: according to the query execution plan the time taken is around 2 seconds, but on the client side the response arrives in about 5 seconds.
Thanks in advance.
It seems you are missing the fact that the Apache Ignite SQL engine leverages a B+Tree data structure internally. A B+Tree relies on some "order" of the stored objects (there has to be a way to compare them). The only kind of textual search that can be handled with this structure is a prefix search, because it establishes a branching condition for the search algorithm. Here is an example:
select _key, _val from TESTCACHEVALUE WHERE label LIKE 'unit%'
In that case you would see the TESTCACHEVALUE_label_IDX index being used even without a hint.
For your scenario, REGEXP_LIKE is just an iteration that applies Matcher.find() to the labels one by one.
Try the Ignite Text Query machinery. It's based on Apache Lucene and looks more suitable for the case.
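A minimal sketch of what a text query could look like, assuming the label field is annotated with @QueryTextField on the value class; the ignite instance, the value class name TestCacheValue, and the cache name are illustrative rather than taken from your code:
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.TextQuery;

IgniteCache<String, TestCacheValue> cache = ignite.cache("TESTCACHEVALUE");

// Lucene-based full-text search over fields annotated with @QueryTextField
// ("unit*" is a Lucene wildcard term, not a regular expression).
TextQuery<String, TestCacheValue> qry = new TextQuery<>(TestCacheValue.class, "unit*");

try (QueryCursor<Cache.Entry<String, TestCacheValue>> cursor = cache.query(qry)) {
    for (Cache.Entry<String, TestCacheValue> entry : cursor)
        System.out.println(entry.getKey() + " -> " + entry.getValue());
}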
No, you don't want to use an index hint for this problem. Forcing the index will only delay your query further; it will still proceed top to bottom, doing a full scan.
The query below should work:
select _key, _val from TESTCACHEVALUE WHERE label LIKE 'unit.%s.%'

SAP pool capacity and peak limit - Mule ESB

Hi, I have an SAP connector and SAP is sending 10,000 IDocs in parallel. What would be good numbers for the pool capacity and peak limit? Currently I have set them to 2000 and 1000 respectively. Any suggestions for better performance?
<sap:connector name="SAP" jcoAsHost="${saphost}" jcoUser="${sapuser}" jcoPasswd="${sappassword}" jcoSysnr="${sapsystemnumber}" jcoClient="${sapclient}" jcoLang="${saploginlanguage}" validateConnections="true" doc:name="SAP" jcoPeakLimit="1000" jcoPoolCapacity="2000"/>
When dealing with large volumes of IDocs, the first thing I recommend to improve performance is to configure the SAP system to send IDocs in batches instead of individually. That is, the IDocs are sent in groups of the size you define as the batch size in the Partner Profile section of SAPGUI. Among other settings, you have to set "Pack. Size" and select "Collect IDocs" as the Output Mode.
If you are not familiar with SAPGUI, request your SAP Admin to configure it for you.
Additionally, to get the most out of the SAP connector's connection pooling, I suggest you use SAP Client Extended Properties to take full advantage of the additional JCo connection parameters. These extended properties are defined in a Spring bean and referenced via jcoClientExtendedProperties at connector or endpoint level. Take a look at the following example:
<spring:beans>
  <spring:bean name="sapClientProperties" class="java.util.HashMap">
    <spring:constructor-arg>
      <spring:map>
        <!-- Maximum number of active connections that can be created for a destination simultaneously -->
        <spring:entry key="jco.destination.peak_limit" value="15"/>
        <!-- Maximum number of idle connections kept open by the destination. A value of 0 means no connection pooling, i.e. connections will be closed after each request. -->
        <spring:entry key="jco.destination.pool_capacity" value="10"/>
        <!-- Time in ms after which the connections held by the internal pool can be closed -->
        <spring:entry key="jco.destination.expiration_time" value="300000"/>
        <!-- Interval in ms at which the timeout checker thread checks the connections in the pool for expiration -->
        <spring:entry key="jco.destination.expiration_check_period" value="60000"/>
        <!-- Max time in ms to wait for a connection if the max allowed number of connections is allocated by the application -->
        <spring:entry key="jco.destination.max_get_client_time" value="30000"/>
      </spring:map>
    </spring:constructor-arg>
  </spring:bean>
</spring:beans>
<sap:connector name="SAP"
jcoAsHost="${sap.jcoAsHost}"
jcoUser="${sap.jcoUser}"
jcoPasswd="${sap.jcoPasswd}"
jcoSysnr="${sap.jcoSysnr}"
jcoClient="${sap.jcoClient}"
...
jcoClientExtendedProperties-ref="sapClientProperties" />
Important: to enable the pool, it is mandatory that you set the jcoExpirationTime in addition to the Peak Limit and Pool Capacity.
For further details, please refer to SAP Connector Advanced Features documentation.

Hawtio ActiveMQ queue Browse shows maximum 500 messages

I'm trying to see all the messages in my queue in ActiveMQ (5.11.1). I am using Hawtio (1.4.51) for this purpose. My queue in ActiveMQ contains 790 messages.
My steps so far:
By default Hawtio shows up to 400 messages in an ActiveMQ queue, so I went to my broker.xml settings and added:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue="incoming.status" maxBrowsePageSize="401"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
This gave me 401 messages.
So I tried changing maxBrowsePageSize="401" to "-1". To my surprise I got only 200 messages...
My next try was to set maxBrowsePageSize="1000", and again disappointment: I could see only 500 messages...
Next I went to my Java code and inserted:
PrintWriter writer = new PrintWriter("c:\\Messages.log", "UTF-8");
writer.write(jmsQueueEndpoint.browseAllMessagesAsXml(true));
writer.close();
The results were: for maxBrowsePageSize="401" I got 401/790 messages, for "2" I got 2/790, and for both "1000" and "-1" I got 790/790.
So my conclusion was that there is some setting in Hawtio that limits my results to 500.
I need to see ALL my messages in Hawtio.
After more investigation, and with the help of this post: HawtIO + Camel plugin - Multiple context not showing up - Limits to max3
I was able to find the setting that allows the ActiveMQ plugin in Hawtio to show more than 500 entries. The setting is located here:
On the right side of the Hawtio application there is your user picture with a small arrow. Click on it and select "Preferences".
In "Preferences" select "Jolokia".
In "Jolokia" set "Max collection size" to the maximum you want, press "apply", and restart the browser.
The only problem left is the unlimited option: when I set "-1" in the broker configuration, Hawtio limits me to 200 entries...

The receiveBufferSize not being honored. UDP packet truncated

netty 4.0.24
I am passing XML over UDP. When receiving the UDP packet, the packet is always of length 2048, truncating the message. Even though I have attempted to set the receive buffer size to something larger (4096, 8192, 65536), it is not being honored.
I have verified the UDP sender using another UDP ingest mechanism: a standalone Java app using java.net.DatagramSocket. The XML is around 45k.
I was able to trace the stack to DatagramSocketImpl.createChannel (line 281). Stepping into DatagramChannelConfig, it has a receiveBufferSize of whatever I set (great), but a rcvBufAllocator of 2048.
Does the rcvBufAllocator override the receiveBufferSize (SO_RCVBUF)? Is the message coming in multiple buffers?
Any feedback or alternative solutions would be greatly appreciated.
I should also mention that I am using an ESB called vert.x, which uses netty heavily. Since I was able to trace the problem down to netty, I was hopeful that I could find help here.
The maximum size of incoming datagrams copied out of the socket is actually not a socket option, but rather a parameter of the socket read() function that your client passes in each time it wants to read a datagram. One advantage of this interface is that programs accepting datagrams of unknown/varying lengths can adaptively change the size of the memory allocated for incoming datagram copies such that they do not over-allocate memory while still getting the whole datagram. (In netty this allocation/prediction is done by implementors of io.netty.channel.RecvByteBufAllocator.)
In contrast, SO_RCVBUF is the size of a buffer that holds all of the datagrams your client hasn't read yet.
Here's an example of how to configure a UDP service with a fixed max incoming datagram size with netty 4.x using a Bootstrap:
import java.net.InetSocketAddress;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;

int maxDatagramSize = 4092;
String bindAddr = "0.0.0.0";
int port = 1234;
SimpleChannelInboundHandler<DatagramPacket> handler = . . .;

InetSocketAddress address = new InetSocketAddress(bindAddr, port);
NioEventLoopGroup group = new NioEventLoopGroup();
Bootstrap b = new Bootstrap()
    .group(group)
    .channel(NioDatagramChannel.class)
    .handler(handler);
// Every read allocates a buffer of exactly maxDatagramSize bytes,
// so datagrams up to that size are received without truncation.
b.option(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(maxDatagramSize));
b.bind(address).sync().channel().closeFuture().await();
You could also configure the allocator with ChannelConfig.setRecvByteBufAllocator.
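For example, a small sketch reusing the Bootstrap b and maxDatagramSize from the example above, setting the allocator on the bound channel's config instead of via a Bootstrap option:
import io.netty.channel.Channel;

Channel channel = b.bind(address).sync().channel();
// Same effect as the RCVBUF_ALLOCATOR option, applied to an already-created channel.
channel.config().setRecvByteBufAllocator(new FixedRecvByteBufAllocator(maxDatagramSize));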