RSocket: Local and remote state disagreement - serversocket

A fairly stable system handles ~5 thousand WS requests per day, and about 1% of the connections (~50) close with an error and the exception below: during the resume process, there is a disagreement while removing frames from the store.
Has anyone faced this issue? Any idea what's going on?
java 11
kotlin 1.4.10
org.springframework.boot:spring-boot-starter-rsocket:2.4.0
-> io.rsocket:rsocket-core:1.1.0
-> io.rsocket:rsocket-transport-netty:1.1.0
15:34:05.998 [reactor-http-epoll-3] ERROR reactor.core.publisher.Operators - Operator called default onErrorDropped
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.lang.IllegalStateException: Local and remote state disagreement: need to remove additional 194 bytes, but cache is empty
Caused by: java.lang.IllegalStateException: Local and remote state disagreement: need to remove additional 194 bytes, but cache is empty
at io.rsocket.resume.InMemoryResumableFramesStore.releaseFrames(InMemoryResumableFramesStore.java:120)
at io.rsocket.resume.ServerRSocketSession.doResume(ServerRSocketSession.java:153)
at io.rsocket.resume.ServerRSocketSession.lambda$resumeWith$2(ServerRSocketSession.java:109)
at io.rsocket.resume.ServerRSocketSession.resumeWith(ServerRSocketSession.java:117)
at io.rsocket.core.ServerSetup$ResumableServerSetup.acceptRSocketResume(ServerSetup.java:137)
at io.rsocket.core.RSocketServer.acceptResume(RSocketServer.java:356)
at io.rsocket.core.RSocketServer.accept(RSocketServer.java:368)
at io.rsocket.core.RSocketServer.lambda$acceptor$0(RSocketServer.java:350)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:125)
at com.company.mdc.config.MdcContextLifter.onNext(MdcConfig.kt:40)
at reactor.core.publisher.FluxFirstWithSignal$FirstEmittingSubscriber.onNext(FluxFirstWithSignal.java:329)
at com.company.mdc.config.MdcContextLifter.onNext(MdcConfig.kt:40)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:160)
at io.rsocket.core.SetupHandlingDuplexConnection.onNext(SetupHandlingDuplexConnection.java:114)
at io.rsocket.core.SetupHandlingDuplexConnection.onNext(SetupHandlingDuplexConnection.java:19)
at com.company.mdc.config.MdcContextLifter.onNext(MdcConfig.kt:40)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:265)
at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:371)
at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:381)
at reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:544)
at reactor.netty.http.server.WebsocketServerOperations.onInboundNext(WebsocketServerOperations.java:161)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:94)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)

That was a bug in the rsocket library; the issue has already been fixed:
https://github.com/rsocket/rsocket-java/issues/973
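The fix shipped in a later io.rsocket patch release. If you cannot move to a newer Spring Boot yet, one option is to override the managed RSocket version; a sketch for a Gradle build, assuming Spring Boot's dependency management is in use (the version shown is illustrative, so check the linked issue for the first release containing the fix):

```groovy
// build.gradle: override Spring Boot's managed io.rsocket version.
// 1.1.1 is illustrative; verify the actual fixed version against the linked issue.
ext['rsocket.version'] = '1.1.1'
```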

Related

java.io.IOException: Error closing multipart upload

I am working on PySpark code which processes terabytes of data and writes to S3.
After processing the data I get the error below:
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1405)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.io.IOException: Error closing multipart upload
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.doMultiPartUpload(MultipartUploadOutputStream.java:441)
at com.amazon.ws.emr.hadoop.fs.s3n.MultipartUploadOutputStream.close(MultipartUploadOutputStream.java:421)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:74)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:108)
at org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
at org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
I tried setting the configurations below, but I still get the same error.
self._spark_session.conf.set("spark.hadoop.fs.s3a.multipart.threshold", 2097152000)
self._spark_session.conf.set("spark.hadoop.fs.s3a.multipart.size", 104857600)
self._spark_session.conf.set("spark.hadoop.fs.s3a.connection.maximum", 500)
self._spark_session.conf.set("spark.hadoop.fs.s3a.connection.timeout", 600000)
self._spark_session.conf.set("spark.hadoop.fs.s3.maxRetries", 50)
Can someone please help me resolve this issue?

WSO2 ESB not accepting large json data

I am using WSO2 ESB in my Java application for integration.
When I send very large JSON data, I receive the error below in the ESB:
ERROR - NativeWorkerPool Uncaught exception
java.lang.ClassFormatError: Invalid method Code length 82129 in class file org/mozilla/javascript/gen/c330
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at org.mozilla.javascript.DefiningClassLoader.defineClass(DefiningClassLoader.java:62)
at org.mozilla.javascript.optimizer.Codegen.defineClass(Codegen.java:126)
at org.mozilla.javascript.optimizer.Codegen.createScriptObject(Codegen.java:81)
at org.mozilla.javascript.Context.compileImpl(Context.java:2361)
at org.mozilla.javascript.Context.compileReader(Context.java:1310)
at org.mozilla.javascript.Context.compileReader(Context.java:1282)
at org.mozilla.javascript.Context.evaluateReader(Context.java:1224)
at com.sun.phobos.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:172)
at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:249)
at org.apache.synapse.mediators.bsf.ScriptMediator.processJSONPayload(ScriptMediator.java:322)
at org.apache.synapse.mediators.bsf.ScriptMediator.mediateForInlineScript(ScriptMediator.java:294)
at org.apache.synapse.mediators.bsf.ScriptMediator.invokeScript(ScriptMediator.java:239)
at org.apache.synapse.mediators.bsf.ScriptMediator.mediate(ScriptMediator.java:207)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
at org.apache.synapse.mediators.filters.FilterMediator.mediate(FilterMediator.java:160)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:214)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
at org.apache.synapse.config.xml.AnonymousListMediator.mediate(AnonymousListMediator.java:30)
at org.apache.synapse.mediators.filters.FilterMediator.mediate(FilterMediator.java:197)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:149)
at org.apache.synapse.rest.Resource.process(Resource.java:297)
at org.apache.synapse.rest.API.process(API.java:378)
at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:97)
at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:65)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:266)
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:317)
at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:363)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:142)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I am not sure what causes this error. Please help me resolve it.
When processing large JSON payloads, the Script mediator compiles the payload into a Java class, and the JVM limits the bytecode of a single method to 65535 bytes (hence the "Code length 82129" in the ClassFormatError above). Try reducing the size of the JSON.
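One way to act on this is to reject or reroute oversized payloads before they reach the Script mediator. This is a hypothetical Java guard, not a WSO2 API; the 64 KB threshold is a heuristic, since JSON size does not map one-to-one to generated bytecode size:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical pre-check for payloads headed to an inline Script mediator.
// Rationale: the mediator compiles the script (with the embedded payload)
// into a Java class, and a single method's bytecode is capped at 65535 bytes.
public class PayloadGuard {
    // Heuristic threshold, not an exact mapping to bytecode size.
    static final int MAX_SCRIPT_PAYLOAD_BYTES = 64 * 1024;

    static boolean fitsScriptMediator(String json) {
        // Measure the encoded size, since that is what ends up in the script.
        return json.getBytes(StandardCharsets.UTF_8).length < MAX_SCRIPT_PAYLOAD_BYTES;
    }
}
```

Payloads that fail the check could be split, streamed, or handled by a mediator that does not compile the payload into a class.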

Randomly getting java.lang.ClassCastException in snappy job

A SnappyData job written in Scala aborts with the exception:
java.lang.ClassCastException: com.....$Class1 cannot be cast to com.....$Class1
Class1 is a custom class stored in the RDD. The interesting thing is that the error is thrown while casting to the very same class. So far, no pattern has been found.
In the job, we fetch data from HBase, enrich it with analytical metadata using DataFrames, and push it to a table in SnappyData. We are using SnappyData 1.2.0.1.
Not sure why this is happening.
Below is the stack trace:
Job aborted due to stage failure: Task 76 in stage 42.0 failed 4 times, most recent failure: Lost task 76.3 in stage 42.0 (TID 3550, HostName, executor XX.XX.x.xxx(10360):7872): java.lang.ClassCastException: cannot be cast to
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:86)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$2.hasNext(WholeStageCodegenExec.scala:571)
at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$1.hasNext(WholeStageCodegenExec.scala:514)
at org.apache.spark.sql.execution.columnar.InMemoryRelation$$anonfun$1$$anon$1.hasNext(InMemoryRelation.scala:132)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:233)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1006)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:997)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:936)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:997)
at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:700)
at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:41)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.sql.execution.WholeStageCodegenRDD.computeInternal(WholeStageCodegenExec.scala:557)
at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$1.<init>(WholeStageCodegenExec.scala:504)
at org.apache.spark.sql.execution.WholeStageCodegenRDD.compute(WholeStageCodegenExec.scala:503)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:41)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:103)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:58)
at org.apache.spark.scheduler.Task.run(Task.scala:126)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:326)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.spark.executor.SnappyExecutor$$anon$2$$anon$3.run(SnappyExecutor.scala:57)
at java.lang.Thread.run(Thread.java:748)
Classes are not unique by name; they're unique by name + classloader.
A ClassCastException of the kind you're seeing happens when you pass data between parts of the app where one or both parts are loaded by different classloaders.
You might need to clean up your classpath, you might need to resolve the classes from the same classloader, or you might have to serialize the data across the boundary (especially if you have features that rely on reloading code at runtime).
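The name + classloader identity rule can be demonstrated directly. A minimal, self-contained Java sketch, where Payload is a hypothetical stand-in for your Class1:

```java
import java.io.IOException;
import java.io.InputStream;

// A child-first loader that re-defines Payload from its .class bytes,
// so each loader instance produces a distinct Class object for the same name.
class IsolatingLoader extends ClassLoader {
    IsolatingLoader(ClassLoader parent) { super(parent); }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        if (!name.equals("Payload")) {
            return super.loadClass(name, resolve); // delegate everything else
        }
        try (InputStream in = getParent().getResourceAsStream("Payload.class")) {
            byte[] bytes = in.readAllBytes();
            return defineClass(name, bytes, 0, bytes.length); // defined by THIS loader
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}

class Payload {}

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        ClassLoader app = ClassLoaderDemo.class.getClassLoader();
        Class<?> a = new IsolatingLoader(app).loadClass("Payload");
        Class<?> b = new IsolatingLoader(app).loadClass("Payload");
        System.out.println(a.getName().equals(b.getName())); // true: same name
        System.out.println(a == b);                          // false: distinct Class objects
        System.out.println(a.isAssignableFrom(b));           // false: casting between them throws ClassCastException
    }
}
```

This is exactly what a "Class1 cannot be cast to Class1" message looks like from the outside: the two Class1 classes have the same name but come from different loaders.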

Jetty server closes the stream, generating a 500 error

When constructing an HTTP response using jetty-9.4.6 I get the following exception. In my particular case I'm constructing the message from Camel, but that doesn't affect the behavior.
org.apache.cxf.interceptor.Fault: Could not write to XMLStreamWriter.
at org.apache.cxf.interceptor.StaxOutEndingInterceptor.handleMessage(StaxOutEndingInterceptor.java:75)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
at org.apache.cxf.interceptor.AbstractFaultChainInitiatorObserver.onMessage(AbstractFaultChainInitiatorObserver.java:112)
at org.apache.cxf.phase.PhaseInterceptorChain.wrapExceptionAsFault(PhaseInterceptorChain.java:366)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:324)
at org.apache.cxf.interceptor.OutgoingChainInterceptor.handleMessage(OutgoingChainInterceptor.java:83)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
at org.apache.cxf.phase.PhaseInterceptorChain.resume(PhaseInterceptorChain.java:278)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:78)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.doService(JettyHTTPDestination.java:247)
at org.apache.cxf.transport.http_jetty.JettyHTTPHandler.handle(JettyHTTPHandler.java:79)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:170)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:193)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handleAsync(Server.java:609)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:334)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:110)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:128)
at org.eclipse.jetty.util.thread.Invocable$InvocableExecutor.invoke(Invocable.java:222)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:294)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:199)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:673)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:591)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ctc.wstx.exc.WstxIOException: Closed
at com.ctc.wstx.sw.BaseStreamWriter._finishDocument(BaseStreamWriter.java:1421)
at com.ctc.wstx.sw.BaseStreamWriter.writeEndDocument(BaseStreamWriter.java:532)
at org.apache.cxf.interceptor.StaxOutEndingInterceptor.handleMessage(StaxOutEndingInterceptor.java:56)
... 32 more
Caused by: org.eclipse.jetty.io.EofException: Closed
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:476)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination$JettyOutputStream.write(JettyHTTPDestination.java:322)
at org.apache.cxf.io.AbstractWrappedOutputStream.write(AbstractWrappedOutputStream.java:51)
at com.ctc.wstx.io.UTF8Writer.flush(UTF8Writer.java:100)
at com.ctc.wstx.sw.BufferingXmlWriter.flush(BufferingXmlWriter.java:241)
at com.ctc.wstx.sw.BufferingXmlWriter.close(BufferingXmlWriter.java:214)
at com.ctc.wstx.sw.BaseStreamWriter._finishDocument(BaseStreamWriter.java:1419)
... 34 more
The exception is thrown if the headers exceed a certain size; see "Maximum on http header values?".
The exception itself, unfortunately, doesn't state the reason for closing the stream.
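If oversized headers are indeed the trigger, the limit can be raised on the Jetty side. This is a sketch for an embedded Jetty 9.4 server and assumes you control the server setup; with the CXF Jetty transport used in the trace, the same HttpConfiguration values would need to be applied through the transport's engine configuration instead:

```java
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class HeaderSizeConfig {
    public static Server serverWithLargerHeaders(int port) {
        HttpConfiguration http = new HttpConfiguration();
        http.setRequestHeaderSize(16 * 1024);   // Jetty's default is 8 KB
        http.setResponseHeaderSize(16 * 1024);  // raise the response-side limit as well
        Server server = new Server();
        ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(http));
        connector.setPort(port);
        server.addConnector(connector);
        return server;
    }
}
```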

stack trace for deadlock with cfhtmltopdf

We recently switched to CF11 and converted some of our file downloads to use cfhtmltopdf.
When there are multiple requests for PDFs, the requests start to hang, as monitored in FusionReactor. I can replicate the issue by opening just three or four tabs and requesting a PDF in all of them simultaneously. The stack trace shows that the last CFML line processed was cfhtmltopdf.
It then appears to acquire locks on something, and two or three of the requests hang in this state. Sometimes they eventually resolve (after 10-plus minutes); other times they hang for hours. Can anyone shed some light on this thread dump or the issue? Could it be a CF Standard Edition limitation on PDF creation?
sun.misc.Unsafe.park(null:???)[Native Method]
- waiting on <0x46295456> (a java.util.concurrent.Semaphore$NonfairSync)
java.util.concurrent.locks.LockSupport.park(null:???)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(null:???)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(null:???)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(null:???)
java.util.concurrent.Semaphore.acquire(null:???)
coldfusion.featurerouter.edition.StandardServerEdition.acquireEnterpriseFeatureLock(StandardServerEdition.java:195)
- locked <0x5f893c6d> (a coldfusion.featurerouter.EFRContext)
coldfusion.featurerouter.edition.StandardServerEdition.allowFeature(StandardServerEdition.java:88)
coldfusion.featurerouter.FeatureRouter.allowFeature(FeatureRouter.java:113)
coldfusion.tagext.lang.HtmlToPdfTag.doStartTag(HtmlToPdfTag.java:807)
cfFileHandlerFunctions2ecfc1831761174$funcCREATEPDF.runFunction(D:\timeclock\lib\functions\FileHandlerFunctions.cfc:308)
coldfusion.runtime.UDFMethod.invoke(UDFMethod.java:487)
coldfusion.filter.SilentFilter.invoke(SilentFilter.java:47)
coldfusion.runtime.UDFMethod$ReturnTypeFilter.invoke(UDFMethod.java:420)
coldfusion.runtime.UDFMethod$ArgumentCollectionFilter.invoke(UDFMethod.java:383)
coldfusion.filter.FunctionAccessFilter.invoke(FunctionAccessFilter.java:95)
coldfusion.runtime.UDFMethod.runFilterChain(UDFMethod.java:334)
coldfusion.runtime.UDFMethod.invoke(UDFMethod.java:533)
coldfusion.runtime.TemplateProxy.invoke(TemplateProxy.java:648)
coldfusion.runtime.TemplateProxy.invoke(TemplateProxy.java:457)
coldfusion.runtime.CfJspPage._invoke(CfJspPage.java:2424)
cfTimecardController2ecfc2026185905$funcDOWNLOADHOURLYPDFTIMECARD.runFunction(D:\timeclock\system\controller\TimecardController.cfc:188)
coldfusion.runtime.UDFMethod.invoke(UDFMethod.java:487)
coldfusion.filter.SilentFilter.invoke(SilentFilter.java:47)
coldfusion.runtime.UDFMethod$ReturnTypeFilter.invoke(UDFMethod.java:420)
coldfusion.runtime.UDFMethod$ArgumentCollectionFilter.invoke(UDFMethod.java:383)
coldfusion.filter.FunctionAccessFilter.invoke(FunctionAccessFilter.java:95)
coldfusion.runtime.UDFMethod.runFilterChain(UDFMethod.java:334)
coldfusion.runtime.UDFMethod.invoke(UDFMethod.java:533)
coldfusion.runtime.TemplateProxy.invoke(TemplateProxy.java:648)
coldfusion.runtime.TemplateProxy.invoke(TemplateProxy.java:457)
coldfusion.runtime.CfJspPage._invoke(CfJspPage.java:2424)
coldfusion.tagext.lang.InvokeTag.doEndTag(InvokeTag.java:399)
coldfusion.runtime.CfJspPage._emptyTcfTag(CfJspPage.java:2987)
cfApplication2ecfc1521604824$funcONCFCREQUEST.runFunction(D:\timeclock\system\Application.cfc:403)
coldfusion.runtime.UDFMethod.invoke(UDFMethod.java:487)
coldfusion.runtime.UDFMethod$ReturnTypeFilter.invoke(UDFMethod.java:420)
coldfusion.runtime.UDFMethod$ArgumentCollectionFilter.invoke(UDFMethod.java:383)
coldfusion.filter.FunctionAccessFilter.invoke(FunctionAccessFilter.java:95)
coldfusion.runtime.UDFMethod.runFilterChain(UDFMethod.java:334)
coldfusion.runtime.UDFMethod.invoke(UDFMethod.java:231)
coldfusion.runtime.TemplateProxy.invoke(TemplateProxy.java:643)
coldfusion.runtime.TemplateProxy.invoke(TemplateProxy.java:432)
coldfusion.runtime.TemplateProxy.invoke(TemplateProxy.java:402)
coldfusion.runtime.AppEventInvoker.invokeWithReturn(AppEventInvoker.java:147)
coldfusion.runtime.AppEventInvoker.OnCfcRequest(AppEventInvoker.java:323)
coldfusion.filter.ApplicationFilter.callOnCFCMethod(ApplicationFilter.java:598)
coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:473)
coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:42)
coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
coldfusion.filter.PathFilter.invoke(PathFilter.java:142)
coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:94)
coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:58)
coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
coldfusion.xml.rpc.CFCServlet.invoke(CFCServlet.java:156)
coldfusion.xml.rpc.CFCServlet.doPost(CFCServlet.java:348)
javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:42)
coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
sun.reflect.GeneratedMethodAccessor100.invoke(null:???)
sun.reflect.DelegatingMethodAccessorImpl.invoke(null:???)
java.lang.reflect.Method.invoke(null:???)
com.intergral.fusionreactor.j2ee.filterchain.WrappedFilterChain.doFilter(WrappedFilterChain.java:97)
com.intergral.fusionreactor.j2ee.filter.FusionReactorRequestHandler.doNext(FusionReactorRequestHandler.java:472)
com.intergral.fusionreactor.j2ee.filter.FusionReactorRequestHandler.doHttpServletRequest(FusionReactorRequestHandler.java:312)
com.intergral.fusionreactor.j2ee.filter.FusionReactorRequestHandler.doFusionRequest(FusionReactorRequestHandler.java:192)
com.intergral.fusionreactor.j2ee.filter.FusionReactorRequestHandler.handle(FusionReactorRequestHandler.java:507)
com.intergral.fusionreactor.j2ee.filter.FusionReactorCoreFilter.doFilter(FusionReactorCoreFilter.java:36)
sun.reflect.GeneratedMethodAccessor99.invoke(null:???)
sun.reflect.DelegatingMethodAccessorImpl.invoke(null:???)
java.lang.reflect.Method.invoke(null:???)
com.intergral.fusionreactor.j2ee.filterchain.WrappedFilterChain.doFilter(WrappedFilterChain.java:79)
sun.reflect.GeneratedMethodAccessor98.invoke(null:???)
sun.reflect.DelegatingMethodAccessorImpl.invoke(null:???)
java.lang.reflect.Method.invoke(null:???)
com.intergral.fusionreactor.agent.filter.FusionReactorStaticFilter.doFilter(FusionReactorStaticFilter.java:53)
com.intergral.fusionreactor.agent.pointcuts.NewFilterChainPointCut$1.invoke(NewFilterChainPointCut.java:41)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:???)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:422)
org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:199)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
- locked <0x6791a199> (a org.apache.tomcat.util.net.SocketWrapper)
java.util.concurrent.ThreadPoolExecutor.runWorker(null:???)
java.util.concurrent.ThreadPoolExecutor$Worker.run(null:???)
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
java.lang.Thread.run(null:???)
Locked ownable synchronizers:
- java.util.concurrent.ThreadPoolExecutor$Worker#2bc8ea50
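The trace itself points at the cause: the thread is parked in Semaphore.acquire inside StandardServerEdition.acquireEnterpriseFeatureLock, which suggests Standard Edition gates this feature behind a semaphore with a limited number of permits, so concurrent cfhtmltopdf requests queue up behind it. A minimal Java sketch of that effect; the permit count here is an assumption for illustration, since the real value inside ColdFusion is not documented:

```java
import java.util.concurrent.Semaphore;

// Sketch of the bottleneck implied by the trace: a small fixed pool of
// permits guarding an "enterprise" feature. Every concurrent request past
// the permit count blocks in acquire(), which is the parked state the
// thread dump shows.
public class FeatureLockSketch {
    // Assumed permit count for illustration only.
    static final Semaphore pdfSlots = new Semaphore(1, true);

    static boolean tryRenderPdf() {
        if (!pdfSlots.tryAcquire()) {
            return false; // a real blocking acquire() would just hang here
        }
        try {
            // ... render the PDF while holding the slot ...
            return true;
        } finally {
            pdfSlots.release();
        }
    }

    public static void main(String[] args) throws Exception {
        pdfSlots.acquire();                      // first request holds the only permit
        System.out.println(tryRenderPdf());      // false: a concurrent request is shut out
        pdfSlots.release();
        System.out.println(tryRenderPdf());      // true: the slot is free again
    }
}
```

With a single permit, opening three or four tabs at once produces exactly the observed behavior: one request proceeds and the rest park on the semaphore until a permit is released.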