RabbitMQ consumer shutting down repeatedly - rabbitmq

I have configured a dead letter exchange with an exponential back-off policy. Since making these changes, the RabbitMQ consumer keeps shutting down repeatedly, throwing the following exception:
Received shutdown signal for consumer tag=amq.ctag--Qn9jFNOd3vxhaHvEw8Nrw
com.rabbitmq.client.ShutdownSignalException: connection error
at com.rabbitmq.client.impl.AMQConnection.startShutdown(AMQConnection.java:715)
at com.rabbitmq.client.impl.AMQConnection.shutdown(AMQConnection.java:705)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:563)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at java.io.DataInputStream.readUnsignedByte(DataInputStream.java:290)
at com.rabbitmq.client.impl.Frame.readFrom(Frame.java:95)
at com.rabbitmq.client.impl.SocketFrameHandler.readFrame(SocketFrameHandler.java:139)
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:532)
Can someone please give me some pointers regarding possible causes for this exception?
Thanks,
Shuchi

If you have changed your queue declarations in Spring, make sure any queues that existed in RabbitMQ before the change have been deleted. RabbitMQ rejects an attempt to redeclare a queue with the same name but different parameters, and closes the channel or connection when that happens.
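For reference, a minimal Java sketch of how such queue arguments are built (the names my.dlx, retry, and work.queue are made-up placeholders; the actual channel.queueDeclare call from the RabbitMQ Java client is shown only in a comment so the snippet stands alone):

```java
import java.util.HashMap;
import java.util.Map;

public class DlxQueueArgs {
    // Builds the arguments RabbitMQ stores with a queue declaration.
    // Redeclaring an existing queue with a DIFFERENT argument map makes
    // the broker reject the declare (PRECONDITION_FAILED) and close the
    // channel, which can surface client-side as a ShutdownSignalException.
    static Map<String, Object> dlxArguments(String dlx, String dlrk, int ttlMillis) {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", dlx);     // where rejected/expired messages go
        args.put("x-dead-letter-routing-key", dlrk); // routing key used on dead-lettering
        args.put("x-message-ttl", ttlMillis);        // per-queue TTL driving the back-off
        return args;
    }

    public static void main(String[] args) {
        Map<String, Object> queueArgs = dlxArguments("my.dlx", "retry", 5000);
        // With the RabbitMQ Java client you would then declare the queue as:
        // channel.queueDeclare("work.queue", true, false, false, queueArgs);
        System.out.println(queueArgs.get("x-dead-letter-exchange"));
    }
}
```

If the queue already exists with a different argument map, deleting it (or using a new queue name) before redeclaring avoids the mismatch.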

Related

Apache Ignite : How to fix this error Closing NIO session because of unhandled exception

I am getting the error "Closing NIO session because of unhandled exception" while starting Ignite (apache-ignite-fabric-2.6.0-bin).
I tried restarting the instance and increasing the on-heap and off-heap memory.
We also tried another version, apache-ignite-fabric-2.5.0-bin, and got the same error.
Any suggestion or configuration I need to apply to fix this issue?
[06:06:33,024][SEVERE][grid-nio-worker-client-listener-0-#40][ClientListenerProcessor] Closing NIO session because of unhandled exception.
class org.apache.ignite.IgniteCheckedException: Invalid handshake message
at org.apache.ignite.internal.processors.odbc.ClientListenerNioServerBuffer.read(ClientListenerNioServerBuffer.java:115)
at org.apache.ignite.internal.processors.odbc.ClientListenerBufferedParser.decode(ClientListenerBufferedParser.java:60)
at org.apache.ignite.internal.processors.odbc.ClientListenerBufferedParser.decode(ClientListenerBufferedParser.java:40)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1113)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
I don't see why there would be activity on the ODBC port when you start an Apache Ignite instance. Are you sure no applications are trying to connect to Ignite's ports and sending garbage to its sockets when you bring Ignite up?
Maybe something is mistaking Ignite for a different service.

Spring Cloud Stream : Sink loses RabbitMQ connectivity

My custom Spring Cloud Stream sink (built with the log sink stream app dependency) loses RabbitMQ connectivity during a RabbitMQ outage, tries to reconnect 5 times, and then stops its consumer. I have to restart the app manually to make it reconnect once RabbitMQ is back up. Looking at the default properties of the RabbitMQ binding here, I see an interval setting, but there is no property for infinite retry (which I assumed to be the default behaviour). Can someone please let me know what I might be missing to make it retry the connection indefinitely?
Error faced during outage triggering consumer retry :
2017-08-08T10:52:07.586-04:00 [APP/PROC/WEB/0] [OUT] Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - home node 'rabbit#229ec9f90e07c75d56a0aa84dc28f602' of durable queue 'datastream.dataingestor.datastream' in vhost '8880756f-8a21-4dc8-9b97-95e5a3248f58' is down or inaccessible, class-id=50, method-id=10)
It appears you have a RabbitMQ cluster and the queue in question is hosted on a down node.
If the queue was HA, you wouldn't have this problem.
The listener container does not (currently) handle that condition; it will retry forever if it loses the connection to RabbitMQ itself.
Please open a JIRA Issue and we'll take a look. The container should treat that error the same as a connection problem.
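As an illustration of the two retry knobs involved, a hedged properties sketch (property names assume the Spring Cloud Stream RabbitMQ binder; "input" is a placeholder binding name, so verify against your binder version's documentation):

```properties
# Illustrative sketch -- names assume the Spring Cloud Stream RabbitMQ
# binder; "input" is a placeholder binding name.
# Bounded retry of a failed *message* delivery (not the connection):
spring.cloud.stream.bindings.input.consumer.maxAttempts=3
# Interval (ms) between attempts to recover a lost broker connection;
# connection recovery itself is retried indefinitely:
spring.cloud.stream.rabbit.bindings.input.consumer.recoveryInterval=5000
```

The distinction matters here: the 404 on a down queue-master node is treated as a delivery failure rather than a connection failure, which is why the container gives up instead of retrying forever.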

Prototype project with RabbitMQ+RavenDB repeated SharedQueue closed errors from RabbitMQ

I've created a simple saga prototype project with RabbitMQ as the transport and RavenDB as the persistence mechanism. The prototype actually runs as expected, but every few seconds I get this error message:
ERROR NServiceBus.Transports.RabbitMQ.RabbitMqDequeueStrategy Failed to receive messages from [Assembly].Retries
System.AggregateException: One or more errors occurred. ---> System.IO.EndOfStreamException: SharedQueue closed
at RabbitMQ.Util.SharedQueue`1.EnsureIsOpen()
at RabbitMQ.Util.SharedQueue`1.Dequeue(Int32 millisecondsTimeout, ...
I also get an almost identical message immediately after the one above, but it says it failed to receive messages from RabbitMGPoller.Timeouts.
In addition to that there are constant INFO messages that say:
NServiceBus.Transports.RabbitMQ.RabbitMqConnectionManager Disconnected from RabbitMQ broker, reason: AMQP close-reason, initiated by Library, code=0, text="End of stream"... cause=System.IO.IOException: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host...
I have tried adding a DequeueTimeout=600 value to the transport connection, but the same errors still occur. I've also tried adding the following key in the config file, but it still didn't seem to help.
I eventually figured it out; it was just my lack of understanding of RabbitMQ and NServiceBus. I changed the RequestedHeartbeat value for the RabbitMQ connection to something larger, i.e. RequestedHeartbeat=6000. That solved my issue.
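For context, a hedged sketch of where that setting lives (exact connection-string keys depend on the NServiceBus RabbitMQ transport version in use; the endpoint name and host below are placeholders):

```xml
<!-- Illustrative sketch: keys depend on your NServiceBus RabbitMQ
     transport version; name and host are placeholders. -->
<connectionStrings>
  <add name="NServiceBus/Transport"
       connectionString="host=localhost;RequestedHeartbeat=6000" />
</connectionStrings>
```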

Handling SystemExceptions in Mule ESB

I have a global JMS ActiveMQ connector with a reconnection strategy defined.
When the ActiveMQ server goes down and the retries are exhausted, I see the error log below:
ERROR DefaultSystemExceptionStrategy:337 - Could not connect to broker URL: tcp://localhost:61616. Reason: java.net.ConnectException: Connection refused: connect
How do I handle the DefaultSystemException and perform a set of activities, such as sending the exception message to a VM endpoint?
Has anyone come across this situation? Please help.
Thanks in advance!
You have to override the default exception strategy with a custom global default exception strategy. This way the JMS connection failure exception might be caught.
More on this at the link below:
http://www.mulesoft.org/documentation/display/current/Error+Handling#ErrorHandling-CustomDefaultExceptionStrategies
Hope this helps.
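A rough sketch of what that could look like in Mule 3.x (element and attribute names should be checked against your version's schema; "globalHandler" and "systemErrorQueue" are made-up names):

```xml
<!-- Illustrative Mule 3.x sketch: verify element/attribute names against
     your version's schema; "globalHandler" and "systemErrorQueue" are
     placeholder names. -->
<configuration defaultExceptionStrategy-ref="globalHandler"/>

<catch-exception-strategy name="globalHandler">
    <!-- Forward the exception message to a VM endpoint for further handling -->
    <vm:outbound-endpoint path="systemErrorQueue"/>
</catch-exception-strategy>
```

Note that connector-level system exceptions do not always flow through messaging exception strategies, which is why the answer above hedges with "might be caught".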

How to recover from deadlocks in Message Driven Beans in GlassFish?

I am running into a deadlock situation when receiving messages on a pool of MDBs in GlassFish. If I receive multiple messages that concurrently try to update the same set of rows, the MDB throws a LockAcquisitionException. Unfortunately the GlassFish JMS provider redelivers the message immediately causing the same exception to occur again. I was hoping to configure the JMS provider to redeliver after some delay, but this does not seem to be supported. Any ideas on how I could solve this issue?
Have you looked at configuring a 'retry delay' in MQ Series?
What about catching the error, sleeping, and then re-throwing it?
Here's a link to some Oracle documentation on the configuration options:
http://download.oracle.com/docs/cd/E19798-01/821-1794/aeooq/index.html
The endpointExceptionRedeliveryAttempts option will let you bound the redeliveries. You could then implement an MBean on the Fault/RME endpoint and add in artificial delays.
But there doesn't appear to be a way to configure a retry delay in GlassFish itself at this time.
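The catch-sleep-rethrow idea above can be sketched as follows. This is a hypothetical helper, not a GlassFish API: delayMillis grows the pause exponentially with the JMSXDeliveryCount message header, and the onMessage usage is shown only in comments so the snippet stands alone:

```java
public class RedeliveryBackoff {
    // Hypothetical helper: exponential delay based on the delivery count,
    // capped at maxDelayMillis. Called from onMessage() before re-throwing
    // so that the container redelivers after a pause rather than instantly.
    static long delayMillis(int deliveryCount, long baseMillis, long maxDelayMillis) {
        long delay = baseMillis << Math.min(deliveryCount - 1, 16); // base * 2^(n-1)
        return Math.min(delay, maxDelayMillis);
    }

    public static void main(String[] args) {
        // Sketch of use inside an MDB's onMessage():
        // try { updateRows(message); }
        // catch (LockAcquisitionException e) {
        //     Thread.sleep(delayMillis(
        //         message.getIntProperty("JMSXDeliveryCount"), 500, 30_000));
        //     throw e; // container redelivers; redelivery attempts stay bounded
        // }
        System.out.println(delayMillis(1, 500, 30_000)); // first attempt: 500 ms
        System.out.println(delayMillis(4, 500, 30_000)); // fourth attempt: 4000 ms
    }
}
```

Sleeping inside onMessage ties up an MDB instance from the pool for the duration of the delay, so the cap should stay small relative to the pool size.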