Spring JMS message-driven-channel-adapter: the number of consumers doesn't reduce to the standard level - RabbitMQ

I have a message-driven-channel-adapter with max-concurrent-consumers set to 100 and concurrent-consumers set to 2. When I ran a load test, I saw the number of concurrent consumers increase, but after the load test it did not come back down to the standard level. I am checking it in the RabbitMQ management portal.
When the project is restarted (with no load test), the GET (Empty) rate is 650/s, but after a load test it stays at about 2500/s and does not return to 650/s. It looks like the number of consumers is scaled up from the concurrent-consumers value but never scaled back down to the original value.
How can I make it scale back down to the normal level again?
Here is my message-driven-channel-adapter definition:
<int-jms:message-driven-channel-adapter id="inboundAdapter"
        auto-startup="true"
        connection-factory="jmsConnectionFactory"
        destination="inboundQueue"
        channel="requestChannel"
        error-channel="errorHandlerChannel"
        receive-timeout="-1"
        concurrent-consumers="2"
        max-concurrent-consumers="100" />

With receiveTimeout=-1, the container has no control over an idle consumer (the consumer thread is blocked in the JMS client, waiting for a message).
You also need to set max-messages-per-task for the container to consider stopping a surplus consumer:
<int-jms:message-driven-channel-adapter id="inboundAdapter"
        auto-startup="true"
        connection-factory="jmsConnectionFactory"
        destination-name="inboundQueue"
        channel="requestChannel"
        error-channel="errorHandlerChannel"
        receive-timeout="5000"
        concurrent-consumers="2"
        max-messages-per-task="10"
        max-concurrent-consumers="100" />
The time elapsed for an idle consumer is receiveTimeout * maxMessagesPerTask.
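With the settings above, that works out to 5000 ms * 10 = 50 seconds of idle time before a surplus consumer becomes eligible to be stopped. If you wire the container in Java instead of XML, the equivalent knobs live on Spring's DefaultMessageListenerContainer (which backs the XML adapter by default); a hedged sketch, reusing the queue name from the question, with hypothetical class and method names for the wiring:
import javax.jms.ConnectionFactory;

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class InboundContainerConfig {

    public DefaultMessageListenerContainer inboundContainer(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("inboundQueue");
        container.setConcurrentConsumers(2);       // the standard level to shrink back to
        container.setMaxConcurrentConsumers(100);  // ceiling under load
        container.setReceiveTimeout(5000);         // ms; receive() returns so the container regains control
        container.setMaxMessagesPerTask(10);       // idle window = 5000 ms * 10 = 50 s
        return container;
    }
}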

Related

change jta transaction timeout from default to custom

I am using Atomikos for JTA transactions.
I have the following setting for JTA:
UserTransactionImp userTransactionImp = new UserTransactionImp();
userTransactionImp.setTransactionTimeout(900);
but when my code performs a JTA transaction that takes more than 5 minutes (the default value), it throws an exception:
Caused by: com.atomikos.icatch.RollbackException: Prepare: NO vote
at com.atomikos.icatch.imp.ActiveStateHandler.prepare(ActiveStateHandler.java:231)
at com.atomikos.icatch.imp.CoordinatorImp.prepare(CoordinatorImp.java:681)
at com.atomikos.icatch.imp.CoordinatorImp.terminate(CoordinatorImp.java:970)
at com.atomikos.icatch.imp.CompositeTerminatorImp.commit(CompositeTerminatorImp.java:82)
at com.atomikos.icatch.imp.CompositeTransactionImp.commit(CompositeTransactionImp.java:336)
at com.atomikos.icatch.jta.TransactionImp.commit(TransactionImp.java:190)
... 25 common frames omitted
It looks like it's taking the default JTA transaction timeout, even though I am setting the timeout explicitly to 15 minutes (900 seconds).
I tried the following properties in the application.properties file, but it still uses the default timeout value (300 seconds):
spring.jta.atomikos.properties.max-timeout=600000
spring.jta.atomikos.properties.default-jta-timeout=10000
I have also tried the property below, but no luck:
spring.transaction.default-timeout=900
Can anyone suggest if I need any other setting? I am using the WildFly plugin, Spring Boot, and the Atomikos API for JTA transactions.
From the Atomikos documentation:
com.atomikos.icatch.max_timeout
Specifies the maximum timeout (in milliseconds) that can be allowed for transactions. Defaults to 300000. This means that calls to UserTransaction.setTransactionTimeout() with a value higher than configured here will be max'ed to this value. For 4.x or higher, a value of 0 means no maximum (i.e., unlimited timeouts are allowed).
Indeed, if you take a look at the Atomikos library source code (for both versions 4.0.0M4 and 3.7.0), in the createCC method of class com.atomikos.icatch.imp.TransactionServiceImp (around line 387) you will see:
if ( timeout > maxTimeout_ ) {
    timeout = maxTimeout_;
    //FIXED 20188
    LOGGER.logWarning ( "Attempt to create a transaction with a timeout that exceeds maximum - truncating to: " + maxTimeout_ );
}
So any attempt to specify a longer transaction timeout gets capped to maxTimeout_, which defaults to 300000 ms, set during initialization if none is specified. Note that the spring.jta.atomikos.properties.max-timeout=600000 you tried raises the cap only to 600 seconds, still below the 900 seconds you request, so the truncation persists; for a 15-minute timeout the maximum must be at least 900000 ms.
You can set com.atomikos.icatch.max_timeout as a JVM argument with:
-Dcom.atomikos.icatch.max_timeout=900000
or you could use the "Advanced Case" recipe from the "Configuration for Spring" section of the Atomikos documentation.
I've resolved a similar problem where configuration in application.yml (or application.properties) of Spring Boot did not get picked up.
There was even a log message about this that I later found mentioned in the official docs.
However, I added a transactions.properties file (next to the application.yml) where I set my desired properties.
# Atomikos properties
# Service must be defined!
com.atomikos.icatch.service = com.atomikos.icatch.standalone.UserTransactionServiceFactory
# Override default properties.
com.atomikos.icatch.log_base_dir = ./atomikos
Some properties can be set within transactions.properties and others within the jta.properties file.
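Applied to this question, a hedged sketch of such a transactions.properties (the max_timeout value here is an assumption, sized so the 900-second timeout from the question is no longer truncated):
# Service must be defined!
com.atomikos.icatch.service = com.atomikos.icatch.standalone.UserTransactionServiceFactory
# Raise the cap so UserTransaction.setTransactionTimeout(900) is honored (value in ms).
com.atomikos.icatch.max_timeout = 900000
# Optionally raise the default used when no timeout is set explicitly (value in ms).
com.atomikos.icatch.default_jta_timeout = 900000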

Apache Ignite - Distributed Queue and Executors

I am planning to use Apache Ignite Distributed Queue.
I am using Ignite with a Spring Boot application. On bootup, I add 20 names to a queue, but since there are 3 servers in the cluster, the same 20 names get added 3 times. I want to add them to the queue only once.
Ignite ignite = Ignition.ignite();
IgniteQueue<String> queue = ignite.queue(
    "queueName",  // Queue name.
    0,            // Queue capacity. 0 for unbounded queue.
    null          // Collection configuration.
);
Distributed executors will be able to poll from the queue and run the task. Each executor is expected to poll a name, run the task, and then add the same name back to the queue, achieving round-robin processing.
Only one executor should be running the same task at any point in time, even though there are multiple servers in the cluster.
Any suggestions?
You can deploy an Ignite cluster singleton service (https://apacheignite.readme.io/docs/cluster-singletons) which fills the queue exactly once; a sketch follows below. Alternatively, you can add the data only from the coordinator node (the oldest node in the cluster), guarded by ignite.cluster().forOldest().node().isLocal().
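A hedged sketch of such a singleton service (the class name, queue name, and seed values are hypothetical; a CollectionConfiguration is passed so the queue is created if it does not exist yet):
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.configuration.CollectionConfiguration;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class QueueSeederService implements Service {

    @IgniteInstanceResource
    private transient Ignite ignite; // injected on whichever node hosts the singleton

    @Override public void init(ServiceContext ctx) {
        // Runs on exactly one node because the service is deployed as a cluster singleton.
        // Note: if the hosting node dies, Ignite redeploys the service and init() runs
        // again; guard with e.g. an IgniteAtomicLong if duplicate seeding matters.
        IgniteQueue<String> queue = ignite.queue("queueName", 0, new CollectionConfiguration());
        for (int i = 1; i <= 20; i++)
            queue.add("name-" + i);
    }

    @Override public void execute(ServiceContext ctx) { /* nothing to do */ }

    @Override public void cancel(ServiceContext ctx) { /* nothing to do */ }
}
Deployed once from any node, for example on application startup:
ignite.services().deployClusterSingleton("queueSeeder", new QueueSeederService());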
I fixed the boot-time duplicate cache loading issue this way (using compareAndSet, so the check and the update are one atomic step; with a plain get() followed by addAndGet(), two nodes could both see 0 and load twice):
final IgniteAtomicLong cacheLoadCnt = ignite.atomicLong(cacheName + "Cnt", 0, true);
// Atomically flip 0 -> 1; only the single node that wins performs the load.
if (cacheLoadCnt.compareAndSet(0, 1)) {
    loadCache();
}

SAP Pool capacity and peak limit - Mule ESB

Hi, I have an SAP connector and SAP is sending 10,000 IDocs in parallel. What would be good numbers for the pool capacity and peak limit? I have currently set 2000 and 1000 respectively. Any suggestions for better performance?
<sap:connector name="SAP"
    jcoAsHost="${saphost}"
    jcoUser="${sapuser}"
    jcoPasswd="${sappassword}"
    jcoSysnr="${sapsystemnumber}"
    jcoClient="${sapclient}"
    jcoLang="${saploginlanguage}"
    validateConnections="true"
    doc:name="SAP"
    jcoPeakLimit="1000"
    jcoPoolCapacity="2000"/>
When dealing with large volumes of IDocs, the first thing I recommend to improve performance is to configure the SAP system to send IDocs in batches instead of individually. That is, the IDocs are sent in groups of whatever batch size you define in the Partner Profile section of SAPGUI. Among other settings, you have to set "Pack. Size" and select "Collect IDocs" as the Output Mode.
If you are not familiar with SAPGUI, request your SAP Admin to configure it for you.
Additionally, to get the most out of the SAP connector's connection pooling, I suggest you use SAP Client Extended Properties to take full advantage of the additional JCo connection parameters. These extended properties are defined in a Spring bean and referenced via jcoClientExtendedProperties at the connector or endpoint level. Take a look at the following example:
<spring:beans>
<spring:bean name="sapClientProperties" class="java.util.HashMap">
<spring:constructor-arg>
<spring:map>
<!-- Maximum number of active connections that can be created for a destination simultaneously -->
<spring:entry key="jco.destination.peak_limit" value="15"/>
<!-- Maximum number of idle connections kept open by the destination. A value of 0 has the effect that there is no connection pooling, i.e. connections will be closed after each request. -->
<spring:entry key="jco.destination.pool_capacity" value="10"/>
<!-- Time in ms after that the connections hold by the internal pool can be closed -->
<spring:entry key="jco.destination.expiration_time" value="300000"/>
<!-- Interval in ms with which the timeout checker thread checks the connections in the pool for expiration -->
<spring:entry key="jco.destination.expiration_check_period" value="60000"/>
<!-- Max time in ms to wait for a connection, if the max allowed number of connections is allocated by the application -->
<spring:entry key="jco.destination.max_get_client_time" value="30000"/>
</spring:map>
</spring:constructor-arg>
</spring:bean>
</spring:beans>
<sap:connector name="SAP"
jcoAsHost="${sap.jcoAsHost}"
jcoUser="${sap.jcoUser}"
jcoPasswd="${sap.jcoPasswd}"
jcoSysnr="${sap.jcoSysnr}"
jcoClient="${sap.jcoClient}"
...
jcoClientExtendedProperties-ref="sapClientProperties" />
Important: to enable the pool, it is mandatory that you set jcoExpirationTime in addition to the peak limit and pool capacity.
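For example, on the connector from the question (the attribute values simply mirror the question's settings and the expiration time from the bean above; treat them as a starting point, not tuned recommendations):
<sap:connector name="SAP"
    ...
    jcoPeakLimit="1000"
    jcoPoolCapacity="2000"
    jcoExpirationTime="300000"/>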
For further details, please refer to SAP Connector Advanced Features documentation.

ServerXmlHttpRequest hanging sometimes when doing a POST

I have a job that periodically does some work involving ServerXmlHttpRequest to perform an HTTP POST. The job runs every 60 seconds.
Normally it runs without issue, but there's about a 1 in 50,000 chance (every two or three months) that it will hang:
IXMLHttpRequest http = new ServerXmlHttpRequest();
http.open("POST", deleteUrl, false, "", "");
http.send(stuffToDelete); <---hang
When it hangs, not even the Task Scheduler (with the option enabled to kill the job if it takes longer than 3 minutes to run) can end the task. I have to connect to the remote customer's network, get on the server, and use Task Manager to kill the process.
And then it's good for another month or three.
Eventually I started using Task Manager to create a process dump, so I could analyze where the hang is. After five crash dumps (over the last 11 months or so) I get a consistent picture:
ntdll.dll!_NtWaitForMultipleObjects@20()
KERNELBASE.dll!_WaitForMultipleObjectsEx@20()
user32.dll!MsgWaitForMultipleObjectsEx()
user32.dll!_MsgWaitForMultipleObjects@20()
urlmon.dll!CTransaction::CompleteOperation(int fNested) Line 2496
urlmon.dll!CTransaction::StartEx(IUri * pIUri, IInternetProtocolSink * pOInetProtSink, IInternetBindInfo * pOInetBindInfo, unsigned long grfOptions, unsigned long dwReserved) Line 4453 C++
urlmon.dll!CTransaction::Start(const wchar_t * pwzURL, IInternetProtocolSink * pOInetProtSink, IInternetBindInfo * pOInetBindInfo, unsigned long grfOptions, unsigned long dwReserved) Line 4515 C++
msxml3.dll!URLMONRequest::send()
msxml3.dll!XMLHttp::send()
Contoso.exe!FrobImporter.TFrobImporter.DeleteFrobs Line 971
Contoso.exe!FrobImporter.TFrobImporter.ImportCore Line 1583
Contoso.exe!FrobImporter.TFrobImporter.RunImport Line 1070
Contoso.exe!CommandLineProcessor.TCommandLineProcessor.HandleFrobImport Line 433
Contoso.exe!CommandLineProcessor.TCommandLineProcessor.CoreExecute Line 71
Contoso.exe!CommandLineProcessor.TCommandLineProcessor.Execute Line 84
Contoso.exe!Contoso.Contoso Line 167
kernel32.dll!@BaseThreadInitThunk@12()
ntdll.dll!__RtlUserThreadStart()
ntdll.dll!__RtlUserThreadStart@8()
So I do a ServerXmlHttpRequest.send, and it never returns. It will sit there for days (causing the system to miss financial transactions), until come Sunday night I get a call that it's broken.
It is of no help unless someone knows how to debug such dumps, but the registers in the stalled thread at the time of the dump are:
EAX 00000030
EBX 00000000
ECX 00000000
EDX 00000000
ESI 002CAC08
EDI 00000001
EIP 732A08A7
ESP 0018F684
EBP 0018F6C8
EFL 00000000
Windows Server 2012 R2
Microsoft IIS/8.5
Default timeouts of ServerXmlHttpRequest
You can use serverXmlHttpRequest.setTimeouts(...) to configure the four classes of timeouts:
resolveTimeout: The value is applied to mapping host names (such as "www.microsoft.com") to IP addresses; the default value is infinite, meaning no timeout.
connectTimeout: A long integer. The value is applied to establishing a communication socket with the target server, with a default timeout value of 60 seconds.
sendTimeout: The value applies to sending an individual packet of request data (if any) on the communication socket to the target server. A large request sent to a server will normally be broken up into multiple packets; the send timeout applies to sending each packet individually. The default value is 30 seconds.
receiveTimeout: The value applies to receiving a packet of response data from the target server. Large responses will be broken up into multiple packets; the receive timeout applies to fetching each packet of data off the socket. The default value is 30 seconds.
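As a hedged sketch in the same pseudocode style as the snippet above (the values are illustrative, not recommendations; setTimeouts lives on the IServerXMLHTTPRequest interface), setting the timeouts explicitly before the request would look like:
IServerXMLHTTPRequest http = new ServerXmlHttpRequest();
http.setTimeouts(
    30000,   // resolveTimeout (ms): host name lookup; default is infinite
    60000,   // connectTimeout (ms): establishing the socket; default 60 s
    30000,   // sendTimeout (ms): per request packet; default 30 s
    30000);  // receiveTimeout (ms): per response packet; default 30 s
http.open("POST", deleteUrl, false, "", "");
http.send(stuffToDelete);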
KB305053 (a server that decides to keep the connection open will cause ServerXmlHttpRequest to wait for the connection to close) seems like it could plausibly be the issue. But the 30-second default receive timeout would have taken care of that.
Possible workaround - Add myself to a Job
The Windows Task Scheduler is unable to terminate the task, even though the option to do so is enabled.
I will look into using the Windows Job API to add my own process to a job, and use SetInformationJobObject to set a time limit on my process:
CreateJobObject
AssignProcessToJobObject
SetInformationJobObject
to limit my process to three minutes of execution time:
PerProcessUserTimeLimit
If LimitFlags specifies JOB_OBJECT_LIMIT_PROCESS_TIME, this member is the per-process user-mode execution time limit, in 100-nanosecond ticks. Otherwise, this member is ignored.
The system periodically checks to determine whether each process associated with the job has accumulated more user-mode time than the set limit. If it has, the process is terminated.
If the job is nested, the effective limit is the most restrictive limit in the job chain.
Although since Task Scheduler itself uses Job objects to limit a task's time, I'm not hopeful that a Job object can limit this job either.
Edit: Job objects cannot limit a process by elapsed time - only by user-mode time. And with a process sitting idle, waiting on an object, it will not accumulate any user time - certainly not three minutes' worth.
Bonus Reading
How can a ServerXMLHTTP GET request hang? (GET, not POST)
KB305053: ServerXMLHTTP Stops Responding When You Send a POST Request (which says the timeout should expire; where mine does not)
MS Forums: oHttp.Send - Hangs (HEAD, not POST)
MS Forums: ASP to test SOAP WebService using MSXML2.ServerXMLHTTP Send hangs
CC to MS Support Forums
Consider switching to a newer, supported API.
msxml6.dll using MSXML2.ServerXMLHTTP.6.0
winhttpcom.dll using WinHttp.WinHttpRequest.5.1.
The msxml3.dll library is no longer supported and is only kept around for compatibility reasons. Plus, there were a number of security and stability improvements included with msxml4.dll (and newer) that you are missing out on.

Maximum number of messages sent to a Queue in OpenMQ?

I am currently using GlassFish v2.1 and I have set up a queue to send and receive messages from, with session beans and MDBs respectively. However, I have noticed that I can send only a maximum of 1000 messages to the queue. Is there any reason why I cannot send more than 1000 messages? I do have a "developer" profile set up for the GlassFish domain. Could that be the reason? Or is there some resource configuration setting that I need to modify?
I have set up the sun-resources.xml configuration properties as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Resource Definitions //EN" "http://www.sun.com/software/appserver/dtds/sun-resources_1_3.dtd">
<resources>
<admin-object-resource
enabled="true"
jndi-name="jms/UpdateQueue"
object-type="user"
res-adapter="jmsra"
res-type="javax.jms.Queue">
<description/>
<property name="Name" value="UpdatePhysicalQueue"/>
</admin-object-resource>
<connector-resource
enabled="true" jndi-name="jms/UpdateQueueFactory"
object-type="user"
pool-name="jms/UpdateQueueFactoryPool">
<description/>
</connector-resource>
<connector-connection-pool
associate-with-thread="false"
connection-creation-retry-attempts="0"
connection-creation-retry-interval-in-seconds="10"
connection-definition-name="javax.jms.QueueConnectionFactory"
connection-leak-reclaim="false"
connection-leak-timeout-in-seconds="0"
fail-all-connections="false"
idle-timeout-in-seconds="300"
is-connection-validation-required="false"
lazy-connection-association="false"
lazy-connection-enlistment="false"
match-connections="true"
max-connection-usage-count="0"
max-pool-size="32"
max-wait-time-in-millis="60000"
name="jms/UpdateFactoryPool"
pool-resize-quantity="2"
resource-adapter-name="jmsra"
steady-pool-size="8"
validate-atmost-once-period-in-seconds="0"/>
</resources>
Hmm .. further investigation revealed the following in the imq logs:
[17/Nov/2009:10:27:57 CST] ERROR sendMessage: Sending message failed. Connection ID: 427038234214377984:
com.sun.messaging.jmq.jmsserver.util.BrokerException: transaction failed: [B4303]: The maximum number of messages [1,000] that the producer can process in a single transaction (TID=427038234364096768) has been exceeded. Please either limit the # of messages per transaction or increase the imq.transaction.producer.maxNumMsgs property.
So what would I do if I needed to send more than 5000 messages at a time?
What I am trying to do is read all the records in a table and update a particular field of each record, based on the corresponding value of that record in a legacy table to which I have read-only access. This table has more than 10k records. As of now, I go through each record sequentially in a for loop, get the corresponding record from the legacy table, compare the field values, update the record if necessary, and add corresponding new records to other tables.
However, I was hoping to improve performance by processing the records asynchronously. To do that, I was thinking of sending each record's info as a separate message, hence the need for so many messages.
To configure OpenMQ and set arbitrary broker properties, have a look at this blog post.
But actually, I wouldn't advise increasing the imq.transaction.producer.maxNumMsgs property, at least not above the value recommended in the documentation:
The maximum number of messages that a producer can process in a single transaction. It is recommended that the value be less than 5000 to prevent the exhausting of resources.
If you need to send more messages, consider doing it in several transactions.
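For example, here is a hedged sketch in plain JMS (the ChunkedSender class and all names are hypothetical) that commits a transacted session every 1000 messages, staying within the broker's default imq.transaction.producer.maxNumMsgs limit:
import java.util.List;

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class ChunkedSender {

    private static final int CHUNK = 1000; // stay at or below the broker's per-transaction limit

    public static void send(Connection connection, Queue queue, List<String> records)
            throws JMSException {
        // Transacted session: messages are only delivered when commit() is called.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageProducer producer = session.createProducer(queue);
            int inTx = 0;
            for (String record : records) {
                producer.send(session.createTextMessage(record));
                if (++inTx == CHUNK) {
                    session.commit(); // close out this transaction and start the next
                    inTx = 0;
                }
            }
            if (inTx > 0)
                session.commit(); // commit the final partial chunk
        } finally {
            session.close();
        }
    }
}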