Camel SFTP consumer becomes inactive

I am using Apache Camel version 2.10.0.
I am using disconnect=true in my SFTP consumer, and this works fine. However, sometimes the consumer becomes completely idle; redeploying it makes it start polling again. I have set the polling interval to 10 seconds. Here is my complete route configuration.
Route/endpoint:
<ftp:binding.sftp xmlns:ftp="urn:switchyard-component-camel-ftp:config:1.1" name="i014-SFTP-Central-abcbank-Local">
    <ftp:contextMapper class="com.hm.goep.esb.composer.SI_InContextMapper"/>
    <ftp:additionalUriParameters>
        <ftp:parameter name="recursive" value="${goep.hm.abcbank.ftp_recursive}"/>
        <ftp:parameter name="antInclude" value="*.csv"/>
        <ftp:parameter name="consumer.bridgeErrorHandler" value="true"/>
        <ftp:parameter name="localWorkDirectory" value="${goep.hm.esb-sap.in.filedirectory}/i014/"/>
        <ftp:parameter name="passiveMode" value="${goep.hm.abcbank.ftp_passive_mode}"/>
        <ftp:parameter name="binary" value="true"/>
    </ftp:additionalUriParameters>
    <ftp:directory>${goep.hm.i014.abcbank.ftpdirectory}</ftp:directory>
    <ftp:autoCreate>false</ftp:autoCreate>
    <ftp:host>${goep.hm.abcbank.ftp_server}</ftp:host>
    <ftp:port>${goep.hm.abcbank.ftp_port}</ftp:port>
    <ftp:username>${goep.hm.abcbank.ftp_user}</ftp:username>
    <ftp:password>${goep.hm.abcbank.ftp_password}</ftp:password>
    <ftp:binary>true</ftp:binary>
    <ftp:disconnect>true</ftp:disconnect>
    <ftp:stepwise>false</ftp:stepwise>
    <ftp:throwExceptionOnConnectFailed>true</ftp:throwExceptionOnConnectFailed>
    <ftp:consume>
        <ftp:delete>true</ftp:delete>
        <ftp:sortBy>file:modified</ftp:sortBy>
        <ftp:readLock>changed</ftp:readLock>
        <ftp:startingDirectoryMustExist>true</ftp:startingDirectoryMustExist>
        <ftp:directoryMustExist>true</ftp:directoryMustExist>
        <ftp:delay>${goep.hm.i014.abcbank.inbound.poll_interval}</ftp:delay>
    </ftp:consume>
</ftp:binding.sftp>
Thread dump
"Connect thread 110.75.144.86 session" daemon prio=10 tid=0x00007f7450012000 nid=0x1800 runnable [0x00007f7518408000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at com.jcraft.jsch.IO.getByte(IO.java:82)
at com.jcraft.jsch.Session.read(Session.java:885)
at com.jcraft.jsch.Session.run(Session.java:1289)
at java.lang.Thread.run(Thread.java:745)
"Camel (camel-101) thread #124 - sftp://testuser:******#11.11.11.11:22/download" daemon prio=10 tid=0x00007f748c090800 nid=0x7b46 in Object.wait() [0x00007f74cd35e000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x0000000730625758> (a com.jcraft.jsch.Channel$MyPipedInputStream)
at java.io.PipedInputStream.read(PipedInputStream.java:327)
- locked <0x0000000730625758> (a com.jcraft.jsch.Channel$MyPipedInputStream)
at java.io.PipedInputStream.read(PipedInputStream.java:378)
- locked <0x0000000730625758> (a com.jcraft.jsch.Channel$MyPipedInputStream)
at com.jcraft.jsch.ChannelSftp._get(ChannelSftp.java:1041)
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:944)
at com.jcraft.jsch.ChannelSftp.get(ChannelSftp.java:922)
at org.apache.camel.component.file.remote.SftpOperations.retrieveFileToFileInLocalWorkDirectory(SftpOperations.java:675)
at org.apache.camel.component.file.remote.SftpOperations.retrieveFile(SftpOperations.java:542)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:317)
at org.apache.camel.component.file.remote.RemoteFileConsumer.processExchange(RemoteFileConsumer.java:92)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:189)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:155)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:142)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:92)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
During every poll the consumer first connects to the server and then disconnects from it. But during the last poll the consumer connected to the SFTP server, never disconnected, and then went idle.
Usual log during polling:
02-05-2016 20:48:36,662 INFO (org.apache.camel.component.file.remote.SftpOperations) [Camel (camel-99) thread #117 - sftp://xxxx:******#11.11.11.11:22/download] Connected to sftp://xxxx:******#11.11.11.11:22
02-05-2016 20:48:36,662 INFO (org.apache.camel.component.file.remote.SftpConsumer) [Camel (camel-99) thread #117 - sftp://xxxx:******#11.11.11.11:22/download] Connected and logged in to: sftp://xxxx:******#11.11.11.11:22
02-05-2016 20:48:38,539 INFO (org.apache.camel.component.file.remote.SftpOperations) [Camel (camel-99) thread #117 - sftp://xxxx:******#11.11.11.11:22/download] JSCH -> Disconnecting from 11.11.11.11 port 22
Last log entries, after which the consumer stopped polling:
02-05-2016 20:59:39,392 INFO (org.apache.camel.component.file.remote.SftpOperations) [Camel (camel-99) thread #117 - sftp://xxxx:******#11.11.11.11:22/download] Connected to sftp://xxxx:******#11.11.11.11:22
02-05-2016 20:59:39,392 INFO (org.apache.camel.component.file.remote.SftpConsumer) [Camel (camel-99) thread #117 - sftp://xxxx:******#11.11.11.11:22/download] Connected and logged in to: sftp://xxxx:******#11.11.11.11:22
Please suggest what can be done here.
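For context on the dump above: the JSch connect thread is RUNNABLE in socketRead0 with no socket timeout set, so a connection that dies silently can block the consumer forever. At the JSch level this is controlled by Session.setTimeout; the sketch below (my illustration only, not from the original post; hostname, credentials and file names are placeholders) shows the idea. Whether your Camel release exposes this as an sftp endpoint option (soTimeout exists in later Camel versions) should be checked against the documentation for 2.10.0.

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class SftpWithTimeout {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        // host, user, password and file names are placeholders for illustration
        Session session = jsch.getSession("testuser", "sftp.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no");

        // socket read timeout (SO_TIMEOUT) in milliseconds: a stalled read now
        // fails with an exception instead of hanging the polling thread forever
        session.setTimeout(30_000);
        session.connect(30_000);

        ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");
        channel.connect(30_000);
        try {
            channel.get("/download/somefile.csv", "/tmp/somefile.csv");
        } finally {
            channel.disconnect();
            session.disconnect();
        }
    }
}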

Related

JVM hangs on shutdown

My application's JVM hangs on shutdown. While in this state I took a thread dump using jstack, but it's not obvious to me from this thread dump what's causing the hang. Any help will be much appreciated.
I have removed the RUNNABLE threads from the dump.
2019-04-26 20:26:11
Full thread dump OpenJDK 64-Bit Server VM (25.191-b12 mixed mode):
"TenantsRegistry-HEARTBEAT" #17 daemon prio=5 os_prio=0 tid=0x000055e33475f800 nid=0x24 in Object.wait() [0x00007f2dd9e64000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.util.TimerThread.mainLoop(Timer.java:552)
- locked <0x000000072c2d1858> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)
"ExtendedDataCache-HEARTBEAT" #12 daemon prio=5 os_prio=0 tid=0x000055e33434a800 nid=0x1f in Object.wait() [0x00007f2dda769000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.util.TimerThread.mainLoop(Timer.java:552)
- locked <0x000000072c07c0b8> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)
"HikariPool-1 housekeeper" #11 daemon prio=5 os_prio=0 tid=0x000055e334244800 nid=0x1e waiting on condition [0x00007f2ddaf09000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000072b87c350> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
"Finalizer" #3 daemon prio=8 os_prio=0 tid=0x000055e33355d000 nid=0x16 in Object.wait() [0x00007f2de1237000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000072aaa7e58> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
- locked <0x000000072aaa7e58> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216)
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x000055e33355a000 nid=0x15 in Object.wait() [0x00007f2de1338000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
- locked <0x000000072aaa64d0> (a java.lang.ref.Reference$Lock)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)
"VM Thread" os_prio=0 tid=0x000055e333550000 nid=0x14 runnable
"VM Periodic Task Thread" os_prio=0 tid=0x000055e3335bd000 nid=0x1b waiting on condition
JNI global references: 420
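As a side note on reading such a dump (not part of the original question): the JVM only exits once every non-daemon thread has finished, so when a shutdown hangs it is the live non-daemon threads that matter; daemon threads like the ones shown above cannot keep the process alive. A small sketch that lists the non-daemon threads from inside the application:

import java.util.Map;

public class NonDaemonThreads {
    public static void main(String[] args) {
        // Only non-daemon threads keep the JVM alive after main() returns,
        // so these are the threads to inspect when a shutdown hangs.
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            if (t.isAlive() && !t.isDaemon()) {
                System.out.println(t.getName() + " state=" + t.getState());
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }
}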

Why does the Ignite client open a port?

We have started using Apache Ignite and we are using TCP communication. What we are seeing is that the clients open a port for communication just like the servers do.
My first assumption was that we don't need to open connectivity from the server to the client, and everything seemed to be working fine. However, in some cases when the topology changes we get stack traces in the logs indicating that the server is initiating communication with the client on this port and failing.
My question is: why is the server trying to communicate directly with the client? Do we need to let the servers communicate with the clients, or can we simply ignore the error messages?
Below is an example of the stack trace:
2016-07-04 16:02:32,298 ERROR [marshaller-cache-#67%PMCacheCluster%] [org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler] [NONE] - Failed to send event notification to node: ad8937b4-eb38-442a-8e06-9625c6246d7b
org.apache.ignite.IgniteCheckedException: Failed to send message (node may have left the grid or TCP connection cannot be established due to firewall issues) [node=TcpDiscoveryNode [id=ad8937b4-eb38-442a-8e06-9625c6246d7b, addrs=[xxx.xx.x.xxx], sockAddrs=[/xxx.xx.x.xxx:0, /xxx.xx.x.xxx:0], discPort=0, order=51, intOrder=29, lastExchangeTime=1467640045240, loc=false, ver=1.6.0#20160518-sha1:0b22c45b, isClient=true], topic=T4 [topic=TOPIC_CACHE, id1=ee261127-933b-36b7-b4ef-f5be9bb4bff2, id2=ad8937b4-eb38-442a-8e06-9625c6246d7b, id3=0], msg=GridContinuousMessage [type=MSG_EVT_NOTIFICATION, routineId=7107ffc5-9868-422f-8509-4739558869f7, data=null, futId=null], policy=2]
at org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1290)
at org.apache.ignite.internal.managers.communication.GridIoManager.sendOrderedMessage(GridIoManager.java:1508)
at org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1229)
at org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1200)
at org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendWithRetries(GridContinuousProcessor.java:1182)
at org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.sendNotification(GridContinuousProcessor.java:843)
at org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.addNotification(GridContinuousProcessor.java:802)
at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.onEntryUpdate(CacheContinuousQueryHandler.java:787)
at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.access$700(CacheContinuousQueryHandler.java:91)
at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler$1.onEntryUpdated(CacheContinuousQueryHandler.java:412)
at org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onEntryUpdated(CacheContinuousQueryManager.java:343)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2522)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2246)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1644)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1484)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:2940)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$600(GridDhtAtomicCache.java:129)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:260)
at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:258)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:622)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:320)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:244)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:81)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:203)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1219)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:847)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:105)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:810)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.ignite.spi.IgniteSpiException: Failed to send message to remote node: TcpDiscoveryNode [id=ad8937b4-eb38-442a-8e06-9625c6246d7b, addrs=[xxx.xx.x.xxx], sockAddrs=[/xxx.xx.x.xxx:0, /xxx.xx.x.xxx:0], discPort=0, order=51, intOrder=29, lastExchangeTime=1467640045240, loc=false, ver=1.6.0#20160518-sha1:0b22c45b, isClient=true]
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:1993)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:1933)
at org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1285)
... 30 common frames omitted
Caused by: org.apache.ignite.IgniteCheckedException: Failed to connect to node (is node still alive?). Make sure that each ComputeTask and GridCacheTransaction has a timeout set in order to prevent parties from waiting forever in case of network issues [nodeId=ad8937b4-eb38-442a-8e06-9625c6246d7b, addrs=[/xxx.xx.x.xxx:47100, /xxx.xx.x.xxx:47100]]
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2496)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2137)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2031)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:1967)
... 32 common frames omitted
Suppressed: org.apache.ignite.IgniteCheckedException: Failed to connect to address: /xxx.xx.x.xxx:47100
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2501)
... 35 common frames omitted
Caused by: java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:111)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2360)
... 35 common frames omitted
Suppressed: org.apache.ignite.IgniteCheckedException: Failed to connect to address: /xxx.xx.x.xxx:47100
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2501)
... 35 common frames omitted
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:111)
at org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:2360)
... 35 common frames omitted
2016-07-04 16:02:34,923 ERROR [marshaller-cache-#67%PMCacheCluster%] [org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler] [NONE] - Failed to send event notification to node: 95d9812d-4a16-4589-93a8-0bf2aa6b8413
Client nodes differ from server nodes mostly in that they don't hold cache data and don't execute computations.
Other than that, client nodes are first-class cluster citizens and participate in communication the same way servers do. So yes, they need to accept connections.
See https://apacheignite.readme.io/docs/clients-vs-servers
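If you do open firewall rules towards the clients, the communication port a client listens on can be pinned rather than left to chance. A minimal sketch (my illustration, not part of the original answer; 47100 is the default communication port seen in the stack trace above):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class ClientNode {
    public static void main(String[] args) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setLocalPort(47100);   // fixed port for incoming communication
        commSpi.setLocalPortRange(0);  // 0 = bind only to this exact port

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);       // start this node as a client
        cfg.setCommunicationSpi(commSpi);

        Ignition.start(cfg);
    }
}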

Load tests on Mule 3.5

I am running load tests with JMeter. Mule responds well with 12 threads, but with 14 it stops responding and I have to restart it. Can someone help me?
There are no errors; the log just stops.
The configuration of the components:
<jms:activemq-connector name="ActiveMQ" brokerURL="${org.xyz.esb.jms.url}"
validateConnections="true" createMultipleTransactedReceivers="true"
numberOfConcurrentTransactedReceivers="30" doc:name="AMQ BE">
<receiver-threading-profile maxThreadsActive="30" maxThreadsIdle="5" doThreading="false" threadTTL="20000" poolExhaustedAction="DISCARD_OLDEST" />
<reconnect frequency="5000"/>
</jms:activemq-connector>
<jms:activemq-connector name="ActiveMQLog" brokerURL="${org.xyz.esb.jms.url}" validateConnections="false" disableTemporaryReplyToDestinations="true" doc:name="AMQ Log"/>
<db:mysql-config name="MySQL_Cache" doc:name="MySQL Configuration" dataSource-ref="dataSMule"/>
<db:mysql-config name="MySQL_Cache2" doc:name="MySQL Configuration" dataSource-ref="dataSMule2"/>
<vm:connector name="vm_cnn_op" validateConnections="false"
createMultipleTransactedReceivers="true"
numberOfConcurrentTransactedReceivers="30">
<vm:queue-profile maxOutstandingMessages="500" />
</vm:connector>
<configuration doc:name="conf_ssb">
<default-threading-profile maxThreadsActive="30" maxThreadsIdle="5" doThreading="false" threadTTL="20000" poolExhaustedAction="DISCARD_OLDEST" />
</configuration>
<flow name="agent-xdf-main" processingStrategy="synchronous">
...
</flow>
--- this is the output of jstack
2015-09-18 23:03:10
Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.71-b01 mixed mode):
"Attach Listener" daemon prio=5 tid=0x00007ff231c14800 nid=0xdd07 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"ActiveMQ InactivityMonitor Worker" daemon prio=5 tid=0x00007ff231afa000 nid=0xab07 waiting on condition [0x0000000155c24000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000014c287758> (a java.util.concurrent.SynchronousQueue$TransferStack)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"[esb-xyz-r4].HTTP_HTTPS_4.receiver.15" prio=5 tid=0x00007ff2320ce000 nid=0xdb03 in Object.wait() [0x0000000159fb9000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000001469fa630> (a org.apache.commons.pool.impl.GenericObjectPool$Latch)
at java.lang.Object.wait(Object.java:503)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1118)
- locked <0x00000001469fa630> (a org.apache.commons.pool.impl.GenericObjectPool$Latch)
at org.apache.commons.dbcp.AbandonedObjectPool.borrowObject(AbandonedObjectPool.java:79)
at org.mule.work.WorkerContext.run(WorkerContext.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"[esb-xyz-r4].http.request.dispatch.8085.14" prio=5 tid=0x00007ff2320cd800 nid=0xd903 waiting on condition [0x0000000159ec0000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000001468c24a8> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:236)
at org.mule.transport.http.HttpMessageProcessTemplate.awaitTermination(HttpMessageProcessTemplate.java:492)
at org.mule.transport.http.HttpMessageReceiver.processRequest(HttpMessageReceiver.java:60)
at org.mule.transport.http.HttpRequestDispatcherWork.run(HttpRequestDispatcherWork.java:73)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"Statistics Thread" daemon prio=5 tid=0x00007ff22b810800 nid=0x2b0b waiting on condition [0x0000000155a1e000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000010f1d9a00> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"__DEFAULT__" daemon prio=5 tid=0x00007ff22afc3800 nid=0x2c0b in Object.wait() [0x0000000154de6000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000010f1cea50> (a java.util.TaskQueue)
at java.util.TimerThread.mainLoop(Timer.java:552)
- locked <0x000000010f1cea50> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)
"Abandoned connection cleanup thread" daemon prio=5 tid=0x00007ff22b772000 nid=0x2f07 in Object.wait() [0x0000000151c47000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x000000010eb6f1a8> (a java.lang.ref.ReferenceQueue$Lock)
at com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:41)
"[esb-xyz-r4].Mule.01" prio=5 tid=0x00007ff22b759800 nid=0x6203 waiting on condition [0x0000000156559000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000010ec222b0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:519)
at java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:682)
at org.mule.context.notification.ServerNotificationManager.run(ServerNotificationManager.java:267)
at org.mule.work.WorkerContext.run(WorkerContext.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"[esb-xyz-r4].log4j.config.monitor" daemon prio=5 tid=0x00007ff22b40b800 nid=0x6003 waiting on condition [0x0000000156456000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.mule.module.launcher.log4j.ArtifactAwareRepositorySelector$ConfigWatchDog.run(ArtifactAwareRepositorySelector.java:411)
"[default].processing.time.monitor" daemon prio=5 tid=0x00007ff22b6d7800 nid=0x5e03 in Object.wait() [0x0000000154bb5000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000010ed30460> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x000000010ed30460> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at org.mule.management.stats.DefaultProcessingTimeWatcher$ProcessingTimeChecker.run(DefaultProcessingTimeWatcher.java:76)
at java.lang.Thread.run(Thread.java:745)
"[default].Mule.01" prio=5 tid=0x00007ff22adc2000 nid=0x5c03 waiting on condition [0x00000001551c4000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000010e1efb40> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:519)
at java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:682)
at org.mule.context.notification.ServerNotificationManager.run(ServerNotificationManager.java:267)
at org.mule.work.WorkerContext.run(WorkerContext.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"[default].log4j.config.monitor" daemon prio=5 tid=0x00007ff22ad5f000 nid=0x5a03 waiting on condition [0x00000001550c1000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.mule.module.launcher.log4j.ArtifactAwareRepositorySelector$ConfigWatchDog.run(ArtifactAwareRepositorySelector.java:411)
"[default].log4j.config.monitor" daemon prio=5 tid=0x00007ff22ad03800 nid=0x5807 waiting on condition [0x0000000154fbe000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.mule.module.launcher.log4j.ArtifactAwareRepositorySelector$ConfigWatchDog.run(ArtifactAwareRepositorySelector.java:411)
"Mule.log.slf4j.ref.handler" prio=5 tid=0x00007ff22b1e4800 nid=0x5103 in Object.wait() [0x0000000154944000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000010e1ea228> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x000000010e1ea228> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at org.mule.module.logging.LoggerReferenceHandler$1.run(LoggerReferenceHandler.java:49)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"Mule.log.clogging.ref.handler" prio=5 tid=0x00007ff22aced800 nid=0x4f03 in Object.wait() [0x0000000154841000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000010e1e8898> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x000000010e1e8898> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at org.mule.module.logging.LoggerReferenceHandler$1.run(LoggerReferenceHandler.java:49)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
"Mule.system.log4j.config.monitor" daemon prio=5 tid=0x00007ff22acc2800 nid=0x4d03 waiting on condition [0x000000015473e000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.mule.module.launcher.log4j.ArtifactAwareRepositorySelector$ConfigWatchDog.run(ArtifactAwareRepositorySelector.java:411)
"DestroyJavaVM" prio=5 tid=0x00007ff22b0e0000 nid=0x1303 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Wrapper-Connection" daemon prio=5 tid=0x00007ff22b0d7800 nid=0x4903 runnable [0x0000000153cba000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.tanukisoftware.wrapper.WrapperManager.handleSocket(WrapperManager.java:3737)
at org.tanukisoftware.wrapper.WrapperManager.run(WrapperManager.java:4084)
at java.lang.Thread.run(Thread.java:745)
"Wrapper-Control-Event-Monitor" daemon prio=5 tid=0x00007ff22b072800 nid=0x4503 waiting on condition [0x0000000153ab4000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.tanukisoftware.wrapper.WrapperManager$3.run(WrapperManager.java:731)
"Service Thread" daemon prio=5 tid=0x00007ff22a8a8000 nid=0x4103 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread1" daemon prio=5 tid=0x00007ff22a87d000 nid=0x3f03 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread0" daemon prio=5 tid=0x00007ff22b001000 nid=0x3d03 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Signal Dispatcher" daemon prio=5 tid=0x00007ff22a8a2800 nid=0x300f runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Finalizer" daemon prio=5 tid=0x00007ff22b010800 nid=0x2903 in Object.wait() [0x0000000151b03000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000010e120260> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x000000010e120260> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
"Reference Handler" daemon prio=5 tid=0x00007ff22a85e800 nid=0x2703 in Object.wait() [0x0000000151a00000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000010e11fd18> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:503)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
- locked <0x000000010e11fd18> (a java.lang.ref.Reference$Lock)
"VM Thread" prio=5 tid=0x00007ff22a85c000 nid=0x2503 runnable
"GC task thread#0 (ParallelGC)" prio=5 tid=0x00007ff22a818800 nid=0x2103 runnable
"GC task thread#1 (ParallelGC)" prio=5 tid=0x00007ff22b00a000 nid=0x2303 runnable
"VM Periodic Task Thread" prio=5 tid=0x00007ff22a8b1000 nid=0x4303 waiting on condition
JNI global references: 471
You're not providing much info, so here is a general answer: make sure Mule's thread pools are properly configured.
Assuming you are serving HTTP calls, you can use the jetty-inbound transport instead of http-inbound, which handles more load.
No matter which transport you use, you should tune the threading profile and run JMeter from a different host.
You can also check the performance-tuning-guide for more info.
In general, these kinds of issues need to be figured out with a narrow-down approach. Since it is not clear what each thread is doing during the test, here are a few pointers you can try:
1. Check whether JMeter has enough resources (use the Resource Monitor utility to watch CPU and memory).
2. Check how the server reacts to the requests.
3. Try hitting the application URL from the same machine where JMeter is running. This will show whether the problem is on the JMeter side.
4. If point 3 fails, try hitting the application URL from the server or some other machine. If that works, the problem is with JMeter and its machine's resources; otherwise it is the server.
5. Check the memory allocated to JMeter; the default allocation is quite small.
These are some tips; please try them if possible.
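One more detail worth noting from the jstack output above (my observation, not part of either answer): the HTTP receiver thread is blocked in GenericObjectPool.borrowObject, i.e. waiting for a database connection from a Commons DBCP pool. A hedged sketch (DBCP 1.x API, assuming that is the pool behind the dataSMule datasources; connection details are placeholders) of sizing the pool and bounding the wait so exhaustion fails fast instead of freezing request threads:

import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource dataSource() {
        BasicDataSource ds = new BasicDataSource();
        // connection details below are placeholders for illustration only
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost:3306/mule");
        ds.setUsername("mule");
        ds.setPassword("secret");

        ds.setMaxActive(30);   // at least as many connections as concurrently active threads
        ds.setMaxWait(10_000); // fail after 10 s instead of blocking the receiver thread forever
        return ds;
    }
}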

Some JVMs on the box are showing higher CPU utilization

We have a WebLogic cluster across 6 boxes. Each box has 3 JVMs.
2 JVMs on box 4 are showing very high CPU utilization (in the range of 80-90%), compared to the other JVMs, which are below 10%.
We checked the load balancing: requests are distributed uniformly to all the JVMs. All the JVMs are collecting garbage correctly; there is no issue with garbage collection. Each JVM has the same GC and memory configuration.
Is there any way to figure out which threads are using the CPU? We cannot restart the JVMs or modify any of their settings, as these are production JVMs.
Try this shell script on your box 4:
show-busy-java-threads.sh
It will show busy Java threads like this:
The stack of busy(0.2%) thread(3901/0xf3d) of java process(3626) of user(zuojing):
"ApplicationImpl pooled thread 16" daemon prio=10 tid=0x00007fbd54076000 nid=0xf3d waiting on condition [0x00007fbcd9636000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at com.intellij.util.TimeoutUtil.sleep(TimeoutUtil.java:58)
at com.intellij.util.io.BaseOutputReader.doRun(BaseOutputReader.java:116)
at com.intellij.util.io.BaseOutputReader$1.run(BaseOutputReader.java:57)
at com.intellij.openapi.application.impl.ApplicationImpl$8.run(ApplicationImpl.java:454)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
at com.intellij.openapi.application.impl.ApplicationImpl$1$1.run(ApplicationImpl.java:152)
The stack of busy(0.2%) thread(3897/0xf39) of java process(3626) of user(zuojing):
"ApplicationImpl pooled thread 15" daemon prio=10 tid=0x00007fbd1c39b800 nid=0xf39 waiting on condition [0x00007fbcd9838000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at com.intellij.util.TimeoutUtil.sleep(TimeoutUtil.java:58)
at com.intellij.util.io.BaseOutputReader.doRun(BaseOutputReader.java:116)
at com.intellij.util.io.BaseOutputReader$1.run(BaseOutputReader.java:57)
at com.intellij.openapi.application.impl.ApplicationImpl$8.run(ApplicationImpl.java:454)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
....
Then trace those stacks and find out what the busy threads are doing in your source code.
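If you cannot run the script on the box, an alternative (my sketch, not from the answer above) is to sample per-thread CPU time from inside the JVM with the standard ThreadMXBean API and match the hottest thread names against a jstack dump of the same process:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BusyThreads {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU time is not supported on this JVM");
            return;
        }
        // Print the CPU time consumed so far by each live thread; the thread
        // names can then be matched against a jstack dump of the same process.
        for (long id : bean.getAllThreadIds()) {
            ThreadInfo info = bean.getThreadInfo(id);
            long cpuNanos = bean.getThreadCpuTime(id);
            if (info != null && cpuNanos > 0) {
                System.out.printf("%-40s %10.1f ms%n",
                        info.getThreadName(), cpuNanos / 1_000_000.0);
            }
        }
    }
}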

How to understand jstack output

Does this mean that I have a deadlock?
Does anyone have any ideas on this?
"Finalizer" daemon prio=10 tid=0x00007f8a30bef800 nid=0x6318 in Object.wait() [0x00007f8a0dfa9000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
- locked <0x0000000741501260> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
"Reference Handler" daemon prio=10 tid=0x00007f8a30bed800 nid=0x6317 in Object.wait() [0x00007f8a0e0aa000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
- locked <0x00000007415012f8> (a java.lang.ref.Reference$Lock)
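For reference (not part of the original question): the Finalizer and Reference Handler threads shown above are normal JVM housekeeping threads, and a WAITING state on its own is not a deadlock; jstack prints an explicit "Found one Java-level deadlock" section when it detects one. A hedged sketch using the standard ThreadMXBean API to check programmatically whether any threads are deadlocked:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // findDeadlockedThreads() returns the ids of threads blocked in a cycle
        // on object monitors or ownable synchronizers, or null if there is none.
        long[] ids = bean.findDeadlockedThreads();
        if (ids == null) {
            System.out.println("No deadlocked threads detected.");
            return;
        }
        for (ThreadInfo info : bean.getThreadInfo(ids, Integer.MAX_VALUE)) {
            System.out.println(info);
        }
    }
}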