I see that my custom Spring Cloud Stream sink (built on the log sink stream app dependency) loses RabbitMQ connectivity during a RabbitMQ outage, tries to reconnect 5 times, and then stops its consumer. I have to restart the app manually to make it reconnect once RabbitMQ is back up. The default properties of the RabbitMQ binding documented here give a retry interval, but there is no property for infinite retry (which I assumed to be the default behaviour). Can someone please let me know what I might be missing to make it keep trying to connect indefinitely?
Error faced during the outage that triggers the consumer retry:
2017-08-08T10:52:07.586-04:00 [APP/PROC/WEB/0] [OUT] Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - home node 'rabbit#229ec9f90e07c75d56a0aa84dc28f602' of durable queue 'datastream.dataingestor.datastream' in vhost '8880756f-8a21-4dc8-9b97-95e5a3248f58' is down or inaccessible, class-id=50, method-id=10)
It appears you have a RabbitMQ cluster and the queue in question is hosted on a down node.
If the queue was HA, you wouldn't have this problem.
The listener container does not (currently) handle that condition. It will retry forever if it loses the connection to RabbitMQ itself.
Please open a JIRA Issue and we'll take a look. The container should treat that error the same as a connection problem.
Related
I have a rabbitmq cluster with 3 nodes. One node has a durable and non-mirrored classic queue named test-queue.
I have a Spring Boot app using Spring AMQP's default connection factory (new CachingConnectionFactory()) to first ensure the queue exists and then subscribe to its messages. Everything works fine.
Then I started a rolling update of the RabbitMQ cluster, where the nodes were restarted one by one.
I observed the following in the logs during this process.
When the restart began I saw the output below:
Received shutdown signal for consumer tag=amq.ctag-pzPHM_GEd5e-J5Y_L2W7_g com.rabbitmq.client.ShutdownSignalException: connection error; protocol method: #method<connection.close>(reply-code=320, reply-text=CONNECTION_FORCED - broker forced connection closure with reason 'shutdown', class-id=0, method-id=0)
...
org.springframework.amqp.rabbit.connection.CachingConnectionFactory[m][] - Attempting to connect to: xxx:5672
...
org.springframework.amqp.rabbit.connection.CachingConnectionFactory[m][] - Created new connection: xxx#66971f6b:58/SimpleConnection#4315e774
This shows that the app received the shutdown signal and successfully reconnected. At this point it looks like the node hosting the queue was shut down, but the app could still establish a new connection because other nodes were up.
Later I saw more shutdown signals, indicating that the other nodes had started to shut down:
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer[m][] - Consumer raised exception, processing can restart if the connection factory supports it com.rabbitmq.client.ShutdownSignalException: connection error; protocol method: #method<connection.close>(reply-code=320, reply-text=CONNECTION_FORCED - broker forced
At the same time I noticed the logs below, which indicate that although connected, Spring AMQP could not find the queue. I guess this is because the node hosting the queue was down and Spring AMQP was now talking to the other nodes; it concluded the queue did not exist and started trying to re-declare it. Also note that there was a retry limit of 3:
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer[m][] - Failed to declare queue: test-queuey
Queue declaration failed; retries left=3 org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$DeclarationException: Failed to declare queue(s):[test-queue]
...
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - queue 'test-queue' in vhost '/' process is stopped by supervisor, class-id=50, method-id=10)
In the end the retries were exhausted and I noticed the following. It looks like Spring AMQP gave up and started to close everything. The end state was that no consumer was registered to the queue: the Spring app was still running but could no longer receive messages, and it did not keep retrying the way a plain disconnection is handled. The only resolution was to restart the app.
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer[m][] - Cancelling Consumer#7f74d6dd: tags=[[]], channel=Cached Rabbit Channel: AMQChannel(amqp://guest#xxx:5672/,26), conn: Proxy#65ef722a Shared Rabbit Connection: SimpleConnection#4315e774 [delegate=amqp://guest#xxx:5672/, localPort= 37208], acknowledgeMode=AUTO local queue size=0
org.springframework.amqp.rabbit.listener.BlockingQueueConsumer[m][] - Closing Rabbit Channel: Cached Rabbit Channel: AMQChannel(amqp://guest#xxx:5672/,26), conn: Proxy#65ef722a Shared Rabbit Connection: SimpleConnection#4315e774 [delegate=amqp://guest#xxx:5672/, localPort= 37208]
org.springframework.amqp.rabbit.connection.CachingConnectionFactory[m][] - Closing cached Channel: AMQChannel
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer[m][] - Stopping container from aborted consumer
org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer[m][] - Shutting down Rabbit listener container
I understand that Spring AMQP comes with reconnect-on-disconnection logic that keeps retrying indefinitely. But in a case like this, how can I make Spring wait until the cluster restart is complete before reconnecting? Or is there a way to remove the retry limit on the queue check, so that it keeps checking for the queue until the cluster restart is complete instead of giving up early? Would changing the queue to a mirrored queue or a quorum queue resolve this issue?
See
https://docs.spring.io/spring-amqp/docs/current/reference/html/#declarationRetries
The number of retry attempts when passive queue declaration fails. Passive queue declaration occurs when the consumer starts or, when consuming from multiple queues, when not all queues were available during initialization. When none of the configured queues can be passively declared (for any reason) after the retries are exhausted, the container behavior is controlled by the `missingQueuesFatal` property, described earlier.
and
https://docs.spring.io/spring-amqp/docs/current/reference/html/#failedDeclarationRetryInterval
The interval between passive queue declaration retry attempts. Passive queue declaration occurs when the consumer starts or, when consuming from multiple queues, when not all queues were available during initialization.
You can increase one or both of these from their defaults (3 and 5000 respectively).
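These are properties of the listener container, so if you build (or customize) the container yourself you can raise them there. A minimal sketch, assuming a plain SimpleMessageListenerContainer; the values are illustrative, not recommendations:

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ContainerConfig {

    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("test-queue");
        container.setDeclarationRetries(60);                  // default is 3
        container.setFailedDeclarationRetryInterval(10_000L); // default is 5000 ms
        container.setMissingQueuesFatal(false);               // don't stop the container when retries are exhausted
        return container;
    }
}

Per the documentation quoted above, setting missingQueuesFatal to false also changes what happens once the declaration retries are exhausted, so the container keeps running instead of shutting itself down.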
I have a webpage that connects to a RabbitMQ broker over JavaScript/WebSockets exposed by a Spring app deployed in Tomcat. Messages are produced once per second by an external application and rendered on the webpage. The JavaScript subscription is durable.
The issue I'm experiencing is that when the network connection is broken on the JavaScript client for a period of time (say 60 seconds), the first ~24 seconds of messages are missing. I've looked through the logs of the app deployed in Tomcat and the missing messages seem to be those up until the following log statement:
org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - DEBUG - TCP connection to broker closed in session 14
I think this is the point at which the endpoint realises the JavaScript client is disconnected and closes the connection to the broker, which results in subsequent messages queueing up.
My question is: how can I ensure that the messages sent between the time the network is severed and the time the endpoint realises the client is disconnected are not lost? Should the endpoint put those messages back on the queue somehow? Maybe there's a way to make it transactional?
Thanks in advance.
Your Tomcat application should not acknowledge messages from RabbitMQ until it confirms that your Javascript client has received them. This way, any messages that aren't ack-ed by the JS client won't be ack-ed by Tomcat, and RabbitMQ will re-deliver them.
I don't know how your JS app and Tomcat interact, but you may have to implement your own ack process there.
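If Spring AMQP is (or can be) in that path on the Tomcat side, a manual-ack consumer is one way to sketch this. Everything here is illustrative: the queue name and forwardToClient(...) are hypothetical placeholders for however the app actually pushes messages to the browser and learns that they arrived.

import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;

public class ManualAckRelay {

    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("events");                    // hypothetical queue name
        container.setAcknowledgeMode(AcknowledgeMode.MANUAL); // ack only after the browser confirms
        container.setMessageListener((ChannelAwareMessageListener) (Message message, Channel channel) -> {
            long deliveryTag = message.getMessageProperties().getDeliveryTag();
            if (forwardToClient(message)) {
                channel.basicAck(deliveryTag, false);         // client confirmed: remove from the queue
            } else {
                channel.basicNack(deliveryTag, false, true);  // not confirmed: requeue for redelivery
            }
        });
        return container;
    }

    private boolean forwardToClient(Message message) {
        // hypothetical: push to the WebSocket session and return true only on client confirmation
        return false;
    }
}

Because unacked messages stay on the queue (and nacked ones are requeued), messages delivered during the dead window are not lost; they are redelivered once a consumer acks them.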
I need to start a queue in OpenEJB in a "paused" state so that no messages are processed by the consumer until some related data is available. I can programmatically pause the queue as shown here, so if there were some initializer function called when a queue is created, I could use that approach. The queue configuration documentation does not seem to support setting a paused state. Any ideas on how to configure the queue upon creation?
If you read the thread you linked, you will see that it is not the queue that is paused; the broker can be, though.
In TomEE the broker is created from a factory through an SPI (loaded from the TomEE classloader, so tomee/lib by default), so you can write your own factory if that is an option and start the broker programmatically when you are ready.
Now, I suspect you don't want to start the connectors with the container, but starting the broker itself is not an issue. Put another way: you don't want any other machine to reach you through JMS, so nothing is received, but it is fine for JMS to be started and deployed.
In that case you can simply not configure any connector on the broker and add them when you are ready. You can find the brokers with:
new org.apache.openejb.resource.activemq.ActiveMQ5Factory().getBrokers()
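A minimal sketch of the "add connectors later" idea: given a BrokerService (for example one obtained through the factory call above; exactly how getBrokers() exposes the instances is an assumption to verify), attach and start a transport connector only once your related data is ready. The URI is a placeholder.

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.TransportConnector;

public class LateConnectorStarter {

    public void openForTraffic(BrokerService broker) throws Exception {
        // call this only once the related data is available
        TransportConnector connector = broker.addConnector("tcp://0.0.0.0:61616"); // placeholder URI
        connector.start();
    }
}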
I have a requirement to consume messages from two queues, and I am using ActiveMQ. I have to implement a retry mechanism so that, in case of an error or a network/application-server failure, the message goes back onto the same queue. I also want any poison messages to be routed to a DLQ.
Please let me know whether I can achieve this through Spring JMS, and please point me to some good examples for this task. I checked the Spring JMS documentation but it does not have much detail on this.
This is a broker function in ActiveMQ: just configure the broker with the appropriate policies.
If using a DefaultMessageListenerContainer, you must use transacted sessions; then, if the listener throws an exception the message will be rolled back onto the queue and the broker's retry/DLQ policies kick in.
See the Spring documentation about enabling transactions.
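A minimal sketch of the container side, assuming a DefaultMessageListenerContainer with transacted sessions; the queue name and listener body are placeholders. A listener exception rolls the message back onto the queue, and the broker's redelivery/DLQ policy then decides whether it is retried or sent to the DLQ.

import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class RetryContainerSketch {

    public DefaultMessageListenerContainer container(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("my.queue"); // placeholder queue name
        container.setSessionTransacted(true);     // exceptions roll the message back onto the queue
        container.setMessageListener((MessageListener) (Message message) -> {
            // process the message; throwing a RuntimeException triggers rollback,
            // broker redelivery and, after the redelivery limit, the DLQ
        });
        return container;
    }
}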
I'm a fairly new user of ActiveMQ and I'm looking for a way to get detailed debug information on the client side of a queue connection. My problem is this: I have a server that is sending a message through a queue to a client. Using the admin web page associated with the broker, I can verify the following: the queue was created, there is a consumer associated with the queue, the message has been enqueued, the message has been dispatched, the dispatched queue size is 1, and the message has not been dequeued. This setup was working yesterday but mysteriously stopped working today, even though I restarted the activemq service. The log file at /var/log/activemq.log does not contain any useful information.
At this point I'm stumped; I'm assuming that there is some sort of problem with the configuration, but it hasn't changed since yesterday. Does anybody have a suggestion about what my next step should be?
First of all, turn on debug (or even trace) logging for the broker in conf/log4j.properties:
log4j.logger.org.apache.activemq=DEBUG
Restart the broker and re-run your scenario. The logging will hopefully provide you with some useful information.
JConsole is also a useful tool for monitoring the running broker.
Does your client use any message filters?
You can also enable remote debugging and then connect with an IDE.
To start remote debugging, execute:
$ ACTIVEMQ_DEBUG=true bin/activemq
and then attach a remote debugger to port 5005.