RabbitMQ rollback transaction and re-route message

Conditions:
1) use a Hibernate transaction manager
2) the queue is marked as transactional
3) use the SimpleMessageListenerContainer that comes bundled with spring-amqp to trigger a consumer of the message.
Scenario:
A consumer generates an exception due to some unexpected error, which causes the Hibernate transaction to roll back and the message to be requeued. This is taken care of by the container.
Due to the way the SimpleMessageListenerContainer is written, I could not find a way to remove the message from the queue and have the platform transaction manager roll back. Either the channel and transaction manager both roll back, or the operation succeeds.
What I thought of doing was to mark the message as failed on an exception by populating a field on the message, so that when it comes back to another consumer I can analyse the state of the message using AOP advice and then re-route the message to another exchange.
I cannot seem to alter the body of the message, or add a header to tag the message in RabbitMQ. Each time the message comes back in, it is the original one.
How can I tag the message?
How have other people managed to solve re-routing messages on an exception whilst rolling back the transaction?

There is a queue argument named "x-dead-letter-exchange" that is used to specify an exchange to which messages will be republished if they are rejected or expire.
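With Spring AMQP you can pass that argument when you declare the queue. A minimal sketch, assuming illustrative names ("work", "work.dlx", "work.dlq", "work.failed") that are not from the question:

import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DeadLetterConfig {

    @Bean
    public Queue workQueue() {
        Map<String, Object> args = new HashMap<String, Object>();
        // Rejected (or expired) messages from this queue are republished to the DLX
        args.put("x-dead-letter-exchange", "work.dlx");
        // Optional: override the routing key used when dead-lettering
        args.put("x-dead-letter-routing-key", "work.failed");
        return new Queue("work", true, false, false, args);
    }

    @Bean
    public DirectExchange deadLetterExchange() {
        return new DirectExchange("work.dlx");
    }

    @Bean
    public Queue deadLetterQueue() {
        return new Queue("work.dlq");
    }

    @Bean
    public Binding deadLetterBinding() {
        return BindingBuilder.bind(deadLetterQueue()).to(deadLetterExchange()).with("work.failed");
    }
}

A consumer bound to "work.dlq" can then inspect and re-route the failed messages, which avoids having to mutate the original message at all.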

How can I ensure that a message will be given to a specific consumer in a queue?

We are working in a microservice architecture and we are using RabbitMQ as a message broker. We want to avoid the scenarios where the following happens:
An entity begins its creation but it takes a while for it to finish.
The system decides that the creation time has taken too long and that the entity should be deleted due to a timeout, so it sends out a message to delete the entity which is currently still being created
Delete message gets consumed and the system checks whether the entity exists and does not find it due to the entity still being in the process of being created.
Delete entity message consumer returns an error due to not finding the entity.
How can we ensure that the delete message is consumed after the create message is finished in such a way that we do not block the consumption of other messages?
Let's say your entity creation timeout is N. The worker(s) responsible for creating entities should know about this timeout, and should be able to cancel entity creation should N be reached. This isn't strictly necessary but it sounds like your entity creation may be resource intensive so cancellation should be a feature you have.
If your workers know to cancel entity creation when timeout N is reached, then perhaps you don't even need the deletion message?
If you keep the delete message, the workers processing it could do the following (a rough sketch in code follows these steps):
- First, ensure your queue has a dead-letter exchange configured.
- Consume the message and try to delete the entity.
- If deletion succeeds, great: ack the message with RabbitMQ and you're done.
- If deletion fails, nack (reject) the message with RabbitMQ and set requeue to false. This will cause the message to be routed to the dead-letter exchange.
- A worker should consume from a queue bound to this dead-letter exchange. You could have a queue dedicated to re-trying entity deletions. When a worker consumes a message from this queue, it can re-try the deletion. If it fails, you can reject it again (after a delay, of course) and, if this queue has the same dead-letter settings, the same process will happen.
- Finally, ensure that your deletion workers respect the retry count (RabbitMQ tracks it in the count field of the x-death header) and only try a certain number of times to delete an entity. If that limit is exceeded, this should raise an exception in your system.
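A sketch of the ack/reject steps with Spring AMQP, assuming the listener container uses MANUAL acknowledge mode; the EntityService type and the message body layout are made up for illustration:

import com.rabbitmq.client.Channel;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;

public class DeleteEntityListener implements ChannelAwareMessageListener {

    // Hypothetical service that performs the actual delete
    interface EntityService {
        void deleteEntity(String entityId);
    }

    private final EntityService entityService;

    public DeleteEntityListener(EntityService entityService) {
        this.entityService = entityService;
    }

    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        long deliveryTag = message.getMessageProperties().getDeliveryTag();
        String entityId = new String(message.getBody());
        try {
            entityService.deleteEntity(entityId);
            // Deletion succeeded: ack and we're done
            channel.basicAck(deliveryTag, false);
        } catch (Exception e) {
            // Deletion failed: reject WITHOUT requeue so the broker routes the
            // message to the dead-letter exchange configured on the queue
            channel.basicNack(deliveryTag, false, false);
        }
    }
}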

How does the 'Publisher returns" happen/work in Spring AMQP?

I am working on RabbitMQ integration. I have a microservice which receives messages from other services. I am currently looking into how to handle messages which encounter exceptions during processing.
The scenario could be:
ServiceA sends a message to the engine's queue.
The engine processes the message received.
During processing, the engine encounters an exception (say, a NullPointerException).
The engine returns the message to ServiceA for reprocessing.
ServiceA holds the message until the exception in the engine is resolved (resending to the engine can be manually triggered).
I bumped into the Spring AMQP documentation about Publisher Returns but I could not totally grasp the context. I would like to know how this works and whether it could be a solution to address item #4 above. Or is there another solution for this?
Thank you in advance!
For #4 on your list the solution is quite simple: don't acknowledge the message automatically, but only once processing has finished. That way:
if the client (subscriber) dies (for whatever reason) during processing of the message, that message is re-queued (and so sent back for reprocessing in your case).
If you want to explicitly re-queue the message you can send a negative acknowledgement (basic.nack or basic.reject).
In any case of re-queuing (manual or automatic) you should be careful that a single message that causes subscribers to die doesn't end up being processed forever by the subscriber(s); that is, make sure the exception that happened during processing was a transient event and not a guaranteed one. An example would be a message containing invalid XML: you process it, see it's invalid, handle the exception and re-queue it, but then another (or the same) subscriber gets it and hits the same exception, since the content of the message and the XML inside it didn't change, and so on.
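To make the ack/nack mechanics concrete, here is a minimal sketch with the plain RabbitMQ Java client; the queue name "engine.queue" and the process() step are assumptions made for illustration. Spring AMQP's listener containers do the equivalent for you when you use AUTO or MANUAL acknowledge mode:

import java.io.IOException;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class ManualAckConsumer {

    public static void main(String[] args) throws Exception {
        Connection connection = new ConnectionFactory().newConnection();
        final Channel channel = connection.createChannel();

        boolean autoAck = false; // the broker waits for an explicit ack/nack from us
        channel.basicConsume("engine.queue", autoAck, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                    AMQP.BasicProperties properties, byte[] body) throws IOException {
                try {
                    process(body); // hypothetical processing step
                    // Only acknowledge once processing has finished successfully
                    channel.basicAck(envelope.getDeliveryTag(), false);
                } catch (Exception e) {
                    // Negative acknowledgement; requeue=true puts the message back on the queue
                    channel.basicNack(envelope.getDeliveryTag(), false, true);
                }
            }
        });
    }

    private static void process(byte[] body) {
        // business logic goes here
    }
}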

RabbitMQ with Spring AMQP - messages stuck in case of AmqpException

I am throwing an AmqpException inside of my consumer.
My expectation is that the message will return back to the queue in FIFO order and will be reprocessed sometime in the future.
It seems as if Spring AMQP does not release the message back to the queue, but instead tries to reprocess the failed message over and over again.
This blocks newly arrived messages from being processed. The ones that are stuck appear in the "unacked" state forever in the RabbitMQ console.
Any thoughts?
That's the way RabbitMQ/Spring AMQP works; if a message is rejected (any exception is thrown), the message is requeued by default and put back at the head of the queue, so it is retried immediately.
... reprocessed sometime in the future.
You have to configure things appropriately to make that happen.
First, you have to tell the broker to NOT requeue the message. That is done by setting defaultRequeueRejected on the listener container to false (it's true by default). Or, you can throw an AmqpRejectAndDontRequeueException which instructs the container to reject (and not requeue) an individual message.
But that's not the end of it; just doing that will simply cause the rejected message to be discarded.
To avoid that, you have to set up a Dead Letter Exchange/Queue for the queue - rejected messages are then sent to the DLX/DLQ instead of being discarded. Using a policy rather than queue arguments is generally recommended.
Finally, you can set a message time-to-live on the DLQ so that, after that time, the message is removed from the queue. If you set up another appropriate dead-letter exchange on that queue (the DLQ), you can cause the message to be requeued back to the original queue after the time expires (sketched below).
Note that this will only work for rejected deliveries from the original queue; it will not work when expiring messages in that queue.
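As an illustration of that delayed-requeue arrangement, the retry/DLQ queue could be declared with Spring AMQP roughly like this; the queue names and the 30-second TTL are made up for the sketch:

import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.core.Queue;

public class DelayedRetryQueues {

    // Retry/DLQ queue: rejected messages from the main queue land here, sit for 30 seconds,
    // then expire and are dead-lettered back to the original queue via the default exchange.
    public Queue retryQueue() {
        Map<String, Object> args = new HashMap<String, Object>();
        args.put("x-message-ttl", 30000);              // how long a message waits before being retried
        args.put("x-dead-letter-exchange", "");        // "" = the default exchange
        args.put("x-dead-letter-routing-key", "work"); // routes back to the original queue by name
        return new Queue("work.retry", true, false, false, args);
    }
}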
See this answer and some of the links from its question for more details.
You can use the contents of the x-death header to decide whether you should give up completely after some number of attempts (catch the exception and somehow dispose of the bad message; don't throw an exception and the container will ack the message).
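Putting the two pieces together, a listener might look roughly like this; the attempt limit and helper names are made up, and the exact x-death contents vary by broker version:

import java.util.List;
import java.util.Map;

import org.springframework.amqp.AmqpRejectAndDontRequeueException;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;

public class RetryAwareListener implements MessageListener {

    private static final int MAX_ATTEMPTS = 3; // arbitrary limit for the sketch

    @Override
    public void onMessage(Message message) {
        if (attemptsSoFar(message) >= MAX_ATTEMPTS) {
            // Give up: log/park the bad message and return normally so the container acks it
            return;
        }
        try {
            process(message);
        } catch (Exception e) {
            // Reject WITHOUT requeueing so the broker dead-letters the message
            // instead of retrying it at the head of the queue forever
            throw new AmqpRejectAndDontRequeueException("processing failed", e);
        }
    }

    private long attemptsSoFar(Message message) {
        Object header = message.getMessageProperties().getHeaders().get("x-death");
        if (!(header instanceof List) || ((List<?>) header).isEmpty()) {
            return 0;
        }
        Map<?, ?> firstEntry = (Map<?, ?>) ((List<?>) header).get(0);
        Object count = firstEntry.get("count");
        return (count instanceof Number) ? ((Number) count).longValue() : ((List<?>) header).size();
    }

    private void process(Message message) {
        // business logic goes here
    }
}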
Here is a solution I used to solve this: I set up an interceptor to retry the message a configurable number of times while applying a backoff policy.
http://trippstech.blogspot.com/2016/03/rabbitmq-deadletter-queue-with.html
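The linked post describes its own interceptor; for reference, Spring AMQP ships retry support that can be wired the same way. A sketch, with arbitrary attempt/backoff numbers and a hypothetical queue name, not necessarily what the blog post uses:

import org.aopalliance.aop.Advice;
import org.springframework.amqp.rabbit.config.RetryInterceptorBuilder;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.retry.RejectAndDontRequeueRecoverer;
import org.springframework.retry.interceptor.RetryOperationsInterceptor;

public class RetryContainerSetup {

    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        // Retry in-memory a few times with exponential backoff; when retries are
        // exhausted, reject without requeue so any DLX on the queue takes over.
        RetryOperationsInterceptor retry = RetryInterceptorBuilder.stateless()
                .maxAttempts(3)
                .backOffOptions(1000, 2.0, 10000) // initial 1s, multiplier 2, cap 10s
                .recoverer(new RejectAndDontRequeueRecoverer())
                .build();

        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("work"); // hypothetical queue name
        container.setAdviceChain(new Advice[] { retry });
        return container;
    }
}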

MSMQ + WCF - Immediately Move Messages to the Dead-Letter Queue

We have a WCF service that listens for messages on a queue (MSMQ). It sends a request to our web server (REST API), which returns an HTTP status code.
If the status code falls within the 400 range, we are throwing away the message. The idea is that a 400 range error can never succeed (unauthorized, bad request, not found, etc.), so we don't want to keep retrying.
For all other errors (e.g., 500 - Internal Server Error), we have WCF configured to put the message on a "retry" queue. Messages on the retry queue get retried after a certain amount of time. The idea is that the server is temporarily down, so wait and try again.
The way WCF is set up, if we throw a FaultException in the service contract, it will automatically put the message on the retry queue.
When a message causes a 400 range error, we are just swallowing the error (we just log it). This prevents the retry mechanism from firing; however, it would be better to move the message to a dead-letter queue. This way we can react to the error by sending an email to the user and/or a system administrator.
Is there a way to immediately move these bad messages to a dead-letter queue?
First, I kept referring to the dead-letter queue. At the time when I posted this question, I was unaware that WCF/MSMQ automatically creates what's known as a poison sub-queue. Any message that can't be delivered in the configured number of times is put in the poison sub-queue.
In my situation, I knew that some messages would never succeed, so I wanted to move the message out of the queue immediately.
The solution was to create a second queue that I called "poison" (not to be confused with the poison sub-queue). My catch block would create an instance of a WCF client and forward the message to this poison queue. I could reuse the same client to post to both the original queue and the poison queue; I just had to create a separate client end-point in the configuration file for each.
I had two separate ServiceHost instances running that read the queues. The ServiceHost for the original queue did the HTTP request and forwarded messages to the poison queue when unrecoverable errors occurred. The second ServiceHost would simply send out an email to record that a message was lost.
There was also the issue of temporary errors that exceeded the maximum number of tries. WCF/MSMQ automatically creates a sub-queue called <myqueuename>;poison. You cannot directly write to a sub-queue via WCF, but you can read from it using a ServiceHost. Whenever messages end up in the poison sub-queue, I simply forward the message to the poison queue, with the exact same client I use in the original handler's catch block.
I wanted the ability to include a stack trace in the error emails. Since I was reusing the same client and service contract for all of the handlers, I couldn't just pass along the stack trace as a string (unless I added it to all of my data contracts). Instead, I had the poison handler try to execute the code one more time, which would fail again and spit out the stack trace.
This is what my message queues ended up looking like:
MyQueue
- Queue messages
- Retry
- Poison
MyQueuePoison
- Queue messages
This approach is pretty convoluted. It was strange calling a WCF client from within a WCF service handler. It also meant setting up one more queue on the server and a ton of additional configuration sections for specifying which queue a client should forward messages to.
Hopefully I have understood your question; if it is what I think you are saying, then yes there is, but you obviously need to program it to do this. You DO need a retry count set so that MSMQ can retry until it gives up. Or you can create your own custom queue for dead letters/messages.
http://msdn.microsoft.com/en-us/library/ms789035(v=vs.110).aspx
http://msdn.microsoft.com/en-us/library/ms752268(v=vs.110).aspx
take a look here also:
http://www.michaelfcollins3.me/blog/2012/09/20/wcf-msmq-bad-message-handling.html
How do I handle message failure in MSMQ bindings for WCF
I hope these links help.

Re-queue AMQP message at tail of queue

I have a project set up using Spring and RabbitMQ. Currently it is possible for my application to receive an AMQP message that cannot be processed until another asynchronous process has completed (legacy and totally detached; I have no control over it). So the result is I may have to wait some amount of time before a message can be processed. The result of this is an exception in a transformer.
When the message is NACK'd back to RabbitMQ, it is put back at the head of the queue and re-pulled immediately. If I get unprocessable messages equal to the number of concurrent listeners, my workflow locks up. It spins its wheels waiting for messages to become processable, even though there are valid, processable messages waiting behind them in the queue.
Is there a way to reject an AMQP message and have it go back to the tail of the queue instead? From my research, RabbitMQ worked this way at one time, but now I appear to get the head of the queue exclusively.
My config is rather straightforward, but for continuity here it is...
Connection factory is: org.springframework.amqp.rabbit.connection.CachingConnectionFactory
RabbitMQ 3.1.1
Spring Integration: 2.2.0
<si:channel id="channel"/>
<si-amqp:inbound-channel-adapter
queue-names="commit" channel="channel" connection-factory="amqpConnectionFactory"
acknowledge-mode="AUTO" concurrent-consumers="${listeners}"
channel-transacted="true"
transaction-manager="transactionManager"/>
<si:chain input-channel="channel" output-channel="nullChannel">
<si:transformer ref="transformer"></si:transformer>
<si:service-activator ref="activator"/>
</si:chain>
You are correct that RabbitMQ was changed some time ago. There is nothing in the API to change the behavior.
You can, of course, put an error-channel on the inbound adapter, followed by a transformer (expression="payload.failedMessage"), followed by an outbound adapter configured with an appropriate exchange/routing-key to requeue the message at the back of the queue.
You might want to add some additional logic in the error flow to check the exception type (payload.cause) and decide which action you want.
If the error flow itself throws an exception, the original message will be requeued at the head, as before; if it exits normally, the message will be acked.
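A rough sketch of that error flow in Java annotation style (in XML it would be the error-channel, transformer and outbound adapter described above); the channel name "amqpErrorChannel", the default exchange "" and the "commit" routing key are assumptions that must match your own configuration:

import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessagingException;

public class RequeueAtTailErrorHandler {

    private final AmqpTemplate amqpTemplate;

    public RequeueAtTailErrorHandler(AmqpTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    // Bound to the error channel configured on the inbound adapter
    @ServiceActivator(inputChannel = "amqpErrorChannel")
    public void handle(MessagingException failure) {
        Message<?> failedMessage = failure.getFailedMessage();
        // Re-publishing puts the message at the tail of the queue; returning normally
        // lets the container ack the original delivery instead of requeueing it at the head.
        amqpTemplate.convertAndSend("", "commit", failedMessage.getPayload());
    }
}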