BLE: bluetoothd crash [Software caused connection abort (103)] - bluez

In BlueZ 5.50/5.49, bluetoothd crashes a few seconds after registering for a BLE notification. This issue occurs only on BlueZ 5.49 and 5.50.
#gattc register_for_notification 2 48:d0:cf:ff:d1:ad {1812,4,1} {2a4d,4}
bluetoothd[8728]: android/gatt.c:disconnected_cb() Software caused connection abort (103)
Program received signal SIGSEGV, Segmentation fault.
0x000000000045ea56 in queue_remove_if (queue=0x6bf610, function=function#entry=0x43a2e0 , user_data=user_data#entry=0x6c3840) at src/shared/queue.c:292
292 if (function(entry->data, user_data)) {
(gdb)

Related

(Apple Silicon) (M1) Inexplicable SIGBUS crash

In some native M1 code I'm working on, calling a particular function raises a SIGBUS fault that makes no sense:
Exception Type: EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000280dc7da0
Exception Codes: 0x0000000000000002, 0x0000000280dc7da0
Exception Note: EXC_CORPSE_NOTIFY
Termination Reason: Namespace SIGNAL, Code 10 Bus error: 10
Terminating Process: exc handler [12171]
VM Region Info: 0x280dc7da0 is in 0x280d50000-0x280dd0000; bytes after start: 490912 bytes before end: 33375
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
VM_ALLOCATE 280cf0000-280d50000 [ 384K] rw-/rwx SM=ZER
---> VM_ALLOCATE 280d50000-280dd0000 [ 512K] rwx/rwx SM=ZER
VM_ALLOCATE 280dd0000-280e50000 [ 512K] rw-/rwx SM=ZER
According to this dump:
The fault address is the same as the function address.
The function address (0x280dc7da0) is properly aligned.
The target region has rwx protection and is therefore executable.
What could possibly be triggering SIGBUS here?
BTW, an Intel (x64) version of this program works fine on x64 Macs and in Rosetta.
The problem here is most likely Thread JIT Write Protection, a feature that exists only on Apple Silicon and operates in addition to the conventional memory-page permissions. A thread toggles MAP_JIT memory between writable and executable with pthread_jit_write_protect_np(); if it jumps into the region while the region is still in write mode for that thread, the CPU faults with exactly this kind of SIGBUS, even though the page-level protection is rwx. Unfortunately, Apple's crash dumps seem to provide no indication that Thread JIT Write Protection could be the SIGBUS trigger.

Unexpected "Internal error" exception when using spring-cloud-gcp-pubsub 3.2.1

I'm using PubSubReactiveFactory from spring-cloud-gcp-pubsub on OpenJDK 11 (Debian Linux), and I've observed the following exception in our application:
com.google.api.gax.rpc.InternalException: io.grpc.StatusRuntimeException: INTERNAL: http2 exception
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:110)
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:41)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:86)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:67)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1132)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1270)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:572)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:542)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.grpc.StatusRuntimeException: INTERNAL: http2 exception
at io.grpc.Status.asRuntimeException(Status.java:535)
... 14 common frames omitted
Caused by: io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception$StreamException: Stream closed before write could take place
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.streamError(Http2Exception.java:172)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$FlowState.cancel(DefaultHttp2RemoteFlowController.java:481)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2RemoteFlowController$1.onStreamClosed(DefaultHttp2RemoteFlowController.java:105)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection.notifyClosed(DefaultHttp2Connection.java:357)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.removeFromActiveStreams(DefaultHttp2Connection.java:1007)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams$2.process(DefaultHttp2Connection.java:968)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.decrementPendingIterations(DefaultHttp2Connection.java:1029)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection$ActiveStreams.forEachActiveStream(DefaultHttp2Connection.java:984)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection.forEachActiveStream(DefaultHttp2Connection.java:209)
at io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler.goingAway(NettyClientHandler.java:839)
at io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler.access$200(NettyClientHandler.java:91)
at io.grpc.netty.shaded.io.grpc.netty.NettyClientHandler$2.onGoAwayReceived(NettyClientHandler.java:278)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2Connection.goAwayReceived(DefaultHttp2Connection.java:237)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.onGoAwayRead0(DefaultHttp2ConnectionDecoder.java:217)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onGoAwayRead(DefaultHttp2ConnectionDecoder.java:583)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onGoAwayRead(Http2InboundFrameLogger.java:119)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2FrameReader.readGoAwayFrame(DefaultHttp2FrameReader.java:580)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:271)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:159)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:173)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:378)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1371)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1234)
at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1283)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.grpc.netty.shaded.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 common frames omitted
The "Internal" gRPC status is propagated to application code and force us to retry pull operation polluting logs with errors/warnings along the way.
To add more context this is happening when Pub/Sub PullRequest API takes long time (I'm observing p99 20-30 seconds latency when this happens). Netty closes the connection with "Stream closed before write could take place" in DefaultHttp2RemoteFlowController.java:481 and status io.netty.handler.codec.http2.Http2Error.STREAM_CLOSED(0x5) Then this status code is translated to io.grpc.internal.Http2Error.INTERNAL and propagated up the stack.
Has anybody experience this error and come up with a way to gracefully handle it?
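One mitigation, not a definitive fix: treat this INTERNAL status as transient and resubscribe with backoff. Below is a minimal Java sketch under that assumption; ResilientPuller is a hypothetical wrapper, and the subscription name, polling period, and backoff values are placeholders.

import com.google.api.gax.rpc.InternalException;
import com.google.cloud.spring.pubsub.reactive.PubSubReactiveFactory;
import com.google.cloud.spring.pubsub.support.AcknowledgeablePubsubMessage;
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

class ResilientPuller {
    // Hypothetical wrapper: poll a subscription and resubscribe with
    // exponential backoff whenever the transient INTERNAL error described
    // above surfaces; any other failure still propagates to the subscriber.
    Flux<AcknowledgeablePubsubMessage> poll(PubSubReactiveFactory factory) {
        return factory.poll("example-subscription", 250) // placeholder subscription and polling period (ms)
                .retryWhen(Retry.backoff(Long.MAX_VALUE, Duration.ofMillis(100))
                        .maxBackoff(Duration.ofSeconds(10))
                        .filter(t -> t instanceof InternalException));
    }
}

Since an interrupted pull only delays delivery (unacknowledged messages are redelivered), retrying this way is generally safe, and it keeps the noise out of application-level error handling.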

How to handle reactor-netty terminating a request in spring-webflux?

We have a spring-webflux application running on spring-boot-starter-webflux:jar:2.1.5.RELEASE and reactor-netty:jar:0.8.8.RELEASE. When a reactive client goes away (k8s pod killed or the client subscription is disposed of) before the request is completed, we see that the server simply stops processing the request and no more application logs related to that request appear. However, reactor-netty prints a trace log indicating that the channel is inactive and will be terminated.
Is there anything we can do to handle this termination gracefully? We would ideally like to respond to a cancellation signal from reactor.
2020-03-18T16:46:40.372Z TRACE --- [reactor-http-epoll-2] r.n.c.ChannelOperations : [id: 0x4cad974b, L:/<SERVER_IP_ADDRESS>:8080 ! R:/<SOME_OTHER_IP_ADDRESS>:55436] Disposing ChannelOperation from a channel
java.lang.Exception: ChannelOperation terminal stack
at reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:391)
at reactor.netty.channel.ChannelOperations.onInboundClose(ChannelOperations.java:360)
at reactor.netty.channel.ChannelOperationsHandler.channelInactive(ChannelOperationsHandler.java:72)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:257)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:243)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:236)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelInactive(CombinedChannelDuplexHandler.java:420)
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:393)
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:358)
at io.netty.channel.CombinedChannelDuplexHandler.channelInactive(CombinedChannelDuplexHandler.java:223)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:257)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:243)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:236)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1416)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:257)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:243)
at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:912)
at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:816)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:416)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:331)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Thread.java:834)
The doOnCancel operator allows a callback to be provided that runs when a subscription is cancelled. This differs from doOnEach, which observes only the onNext, onError, and onComplete signals and is never notified of cancellation.
https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#doOnCancel-java.lang.Runnable-
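A minimal sketch of using it in a WebFlux endpoint; ReportController and ReportService are made-up names:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

interface ReportService {
    Mono<String> generate(); // hypothetical long-running operation
}

@RestController
class ReportController {
    private static final Logger log = LoggerFactory.getLogger(ReportController.class);
    private final ReportService reportService;

    ReportController(ReportService reportService) {
        this.reportService = reportService;
    }

    @GetMapping("/report")
    Mono<String> report() {
        return reportService.generate()
                // Invoked when reactor-netty cancels the subscription, e.g.
                // because the client connection closed before the response
                // completed; release resources or log here instead of losing
                // the request silently.
                .doOnCancel(() -> log.warn("Request cancelled by client before completion"));
    }
}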

php7.0.2 Program terminated with signal 11, Segmentation fault

I am running php-7.0.2 with CodeIgniter (a PHP MVC framework). I got some segmentation faults which caused core dumps, and I found that these segmentation faults occurred randomly when the child php-fpm processes shut down and restarted. I don't know why.
Using gdb's "bt" command to display the core dump backtrace:
Core was generated by `php-fpm: pool www '.
Program terminated with signal 11, Segmentation fault.
#0 zend_string_release (ht=0x114dae0) at /home/smt/phpng/php-7.0.2/Zend/zend_string.h:269
269 /home/smt/phpng/php-7.0.2/Zend/zend_string.h: No such file or directory.
in /home/smt/phpng/php-7.0.2/Zend/zend_string.h
Missing separate debuginfos, use: debuginfo-install php7-7.0.2-20160407105024.x86_64
(gdb) bt
#0 zend_string_release (ht=0x114dae0) at /home/smt/phpng/php-7.0.2/Zend/zend_string.h:269
#1 zend_hash_destroy (ht=0x114dae0) at /home/smt/phpng/php-7.0.2/Zend/zend_hash.c:1273
#2 0x000000000080647b in module_destructor (module=0x14b6ae0)
at /home/smt/phpng/php-7.0.2/Zend/zend_API.c:2509
#3 0x000000000080075c in module_destructor_zval (zv=<value optimized out>)
at /home/smt/phpng/php-7.0.2/Zend/zend.c:615
#4 0x000000000080dcff in _zend_hash_del_el_ex (ht=0x1154780)
at /home/smt/phpng/php-7.0.2/Zend/zend_hash.c:1013
#5 _zend_hash_del_el (ht=0x1154780) at /home/smt/phpng/php-7.0.2/Zend/zend_hash.c:1037
#6 zend_hash_graceful_reverse_destroy (ht=0x1154780) at /home/smt/phpng/php-7.0.2/Zend/zend_hash.c:1489
#7 0x0000000000800096 in zend_shutdown () at /home/smt/phpng/php-7.0.2/Zend/zend.c:840
#8 0x00000000007a2a6a in php_module_shutdown () at /home/smt/phpng/php-7.0.2/main/main.c:2339
#9 0x000000000089e45d in main (argc=<value optimized out>, argv=<value optimized out>)
at /home/smt/phpng/php-7.0.2/sapi/fpm/fpm/fpm_main.c:1997
(gdb) quit
The php-fpm.log shows the following:
[20-Apr-2016 08:00:02] WARNING: [pool www] child 11751 exited on signal 11 (SIGSEGV - core dumped) after 3600.462022 seconds from start
I am very curious about this bug.
Until now, I am sure of this much: the core dumps occurred when fpm restarted. The restarts were triggered by the command 'kill -10 fpm-master-process-ids', or happened after fpm had processed 'pm.max_requests' requests.
However, the core dumps didn't occur at every restart, and the probability of a core dump was very small; I cannot find a pattern.
Fortunately, I have installed 7.0.5 in place of 7.0.2 in our production environment, and it has run for three days without core dumps.
I cannot find any relevant change in the changelogs from 7.0.2 to 7.0.5. This is a very strange bug and I want to know the reason. Who can tell me something about it?
After updating to 7.0.5, no core dump has occurred for two weeks, so this bug has been fixed in 7.0.5!
I still don't know what causes this bug.
I am a curious cat. #_#

Glassfish and ActiveMQ XATransactions

I'm using
Sun GlassFish Enterprise Server v2.1.1 ((v2.1 Patch06)(9.1_02 Patch12))
activemq-rar-5.4.2-fuse-02-00.rar
XATransaction for activemq-rar
I experience the exception below, and I suspect that it is because the ActiveMQ resource adapter (activemqra) cannot get hold of the ActiveMQ connection after an XATransaction has been started.
I would have expected activemqra to roll back the transaction in this case.
Currently the started transaction hangs in GlassFish until it times out. This is especially bad when a DB transaction is part of the XATransaction, because the DB connection is unavailable for that whole time.
What can I do to avoid these hanging transactions?
Any help or comments are welcome.
Best regards, Trym
[#|2011-07-14T15:24:50.946+0200|INFO|sunappserver2.1| javax.enterprise.system.container.ejb.mdb |_ThreadID=26;_ThreadName=p: mdb-threadpool; w: 6;|javax.ejb.EJBException javax.ejb.EJBException: Unable to complete container-managed transaction.;
nested exception is: javax.transaction.SystemException: org.omg.CORBA.INTERNAL: JTS5031:
Exception [org.omg.CORBA.INTERNAL: vmcid: 0x0 minor code: 0 completed: Maybe] on Resource [commit one phase] operation. vmcid: 0x0 minor code: 0 completed: No
javax.transaction.SystemException: org.omg.CORBA.INTERNAL: JTS5031:
Exception [org.omg.CORBA.INTERNAL: vmcid: 0x0 minor code: 0 completed: Maybe] on Resource [commit one phase] operation. vmcid: 0x0 minor code: 0 completed: No
at com.sun.jts.jta.TransactionManagerImpl.commit(TransactionManagerImpl.java:321)
at com.sun.enterprise.distributedtx.J2EETransactionManagerImpl.commit(J2EETransactionManagerImpl.java:1029)
at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.commit(J2EETransactionManagerOpt.java:398)
at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:3826)
at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:3605)
at com.sun.ejb.containers.MessageBeanContainer.afterMessageDeliveryInternal(MessageBeanContainer.java:1226)
at com.sun.ejb.containers.MessageBeanContainer.afterMessageDelivery(MessageBeanContainer.java:1197)
at com.sun.ejb.containers.MessageBeanListenerImpl.afterMessageDelivery(MessageBeanListenerImpl.java:79)
at com.sun.enterprise.connectors.inflow.MessageEndpointInvocationHandler.invoke(MessageEndpointInvocationHandler.java:139)
at $Proxy65.afterDelivery(Unknown Source)
at org.apache.activemq.ra.MessageEndpointProxy$MessageEndpointAlive.afterDelivery(MessageEndpointProxy.java:128)
at org.apache.activemq.ra.MessageEndpointProxy.afterDelivery(MessageEndpointProxy.java:69)
at org.apache.activemq.ra.ServerSessionImpl.afterDelivery(ServerSessionImpl.java:224)
at org.apache.activemq.ActiveMQSession.run(ActiveMQSession.java:897)
at org.apache.activemq.ra.ServerSessionImpl.run(ServerSessionImpl.java:169)
at com.sun.enterprise.connectors.work.OneWork.doWork(OneWork.java:77)
at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:555)
javax.ejb.EJBException: Unable to complete container-managed transaction.; nested exception is: javax.transaction.SystemException: org.omg.CORBA.INTERNAL: JTS5031: Exception [org.omg.CORBA.INTERNAL: vmcid: 0x0 minor code: 0 completed: Maybe] on Resource [commit one phase] operation. vmcid: 0x0 minor code: 0 completed: No
at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:3837)
at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:3605)
at com.sun.ejb.containers.MessageBeanContainer.afterMessageDeliveryInternal(MessageBeanContainer.java:1226)
at com.sun.ejb.containers.MessageBeanContainer.afterMessageDelivery(MessageBeanContainer.java:1197)
at com.sun.ejb.containers.MessageBeanListenerImpl.afterMessageDelivery(MessageBeanListenerImpl.java:79)
at com.sun.enterprise.connectors.inflow.MessageEndpointInvocationHandler.invoke(MessageEndpointInvocationHandler.java:139)
at $Proxy65.afterDelivery(Unknown Source)
at org.apache.activemq.ra.MessageEndpointProxy$MessageEndpointAlive.afterDelivery(MessageEndpointProxy.java:128)
at org.apache.activemq.ra.MessageEndpointProxy.afterDelivery(MessageEndpointProxy.java:69)
at org.apache.activemq.ra.ServerSessionImpl.afterDelivery(ServerSessionImpl.java:224)
at org.apache.activemq.ActiveMQSession.run(ActiveMQSession.java:897)
at org.apache.activemq.ra.ServerSessionImpl.run(ServerSessionImpl.java:169)
at com.sun.enterprise.connectors.work.OneWork.doWork(OneWork.java:77)
at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:555)
|#]
It seems to be a bug in GlassFish: it tries to close a thread that is already closed. There's a patch that corrects this issue.
Take a look at https://forums.oracle.com/forums/thread.jspa?threadID=1979095.
Just remove the contents of glassfish/domains/domain1/imq/instances/imqbroker/fs370.
This happens because of corrupted files used to persist the queue data; after cleaning this folder, the errors will be gone.