I am wondering what the best practice is when a consumer is not able to handle the message it receives. What is the mechanism to tell RabbitMQ to either put the message back on the queue or move it to an error queue?
I'm using the .NET client for RabbitMQ.
Either discard it or put it on an error queue. If there is a problem with the message such that the consumer cannot handle it, do not put it back on the queue, as the consumer will just try to read it again.
It is an exception, so handle it as such: in your exception handling, raise an error message stating what happened and what you have done with the message. Best practice is to put the message on an error queue where it can be handled manually.
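As a rough sketch of that pattern: the question uses the .NET client, which exposes equivalent BasicAck/BasicNack/BasicPublish calls; this sketch uses the RabbitMQ Java client, and the queue names are made up for illustration.

```java
import com.rabbitmq.client.*;

import java.nio.charset.StandardCharsets;

public class WorkConsumer {

    // Hypothetical names -- substitute your own queues.
    private static final String WORK_QUEUE = "work";
    private static final String ERROR_QUEUE = "work.error";

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        Connection connection = factory.newConnection();
        final Channel channel = connection.createChannel();
        channel.queueDeclare(WORK_QUEUE, true, false, false, null);
        channel.queueDeclare(ERROR_QUEUE, true, false, false, null);

        channel.basicConsume(WORK_QUEUE, false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties props, byte[] body) throws java.io.IOException {
                try {
                    process(new String(body, StandardCharsets.UTF_8));
                    channel.basicAck(envelope.getDeliveryTag(), false);
                } catch (Exception e) {
                    // The message itself is bad: park a copy on the error queue for
                    // manual handling, then reject WITHOUT requeueing so this
                    // consumer never reads it again.
                    channel.basicPublish("", ERROR_QUEUE, props, body);
                    channel.basicNack(envelope.getDeliveryTag(), false, false);
                }
            }
        });
    }

    private static void process(String message) {
        // application logic; throws if the message cannot be handled
    }
}
```

The key detail is the final `false` on `basicNack` (requeue = false); with requeue = true the broker would put the message straight back at the head of the queue.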
Currently I have an IErrorHandler implementation dealing with messages going to the Rebus error queue. That handler then publishes messages to a saga that throttles output to a Slack notification channel. I think there may be an easier way to do this, though. I would like to have the saga implement IHandleMessages for messages from the Rebus error queue itself. Is that possible?

Currently we have the FleetManager process enabled, and for my custom IErrorHandler to work it has to dual-publish errors, both to the error queue and to FleetManager using the FleetManager API options. This allows my IErrorHandler to be called so I can publish a custom message to start the Slack saga, and it also feeds FleetManager the data it needs. The problem with my approach is that the Rebus error queue just grows with data I no longer care about. So I guess my question is: is there a way to handle those Rebus error queue messages? Or, perhaps even better, is there a simple way to make those error queue messages go away once I know I have them in my saga?
Note: the reason for the saga, rather than simply using a FleetManager Slack webhook, is to notify based on custom error-count thresholds rather than on every error encountered.
I think I just realized one approach I could take: still use my custom IErrorHandler, but not actually forward the poison message to the error queue, so it never ends up there at all. Instead I would just publish my custom message that is handled by the saga.
I am throwing an AmqpException inside my consumer.
My expectation is that the message will be returned to the queue in FIFO order and will be reprocessed sometime in the future.
It seems as if Spring AMQP does not release the message back to the queue, but instead tries to reprocess the failed message over and over again.
This blocks newly arrived messages from being processed. The ones that are stuck appear in the "unacked" state forever in the management console.
Any thoughts?
That's the way RabbitMQ/Spring AMQP works; if a message is rejected (any exception is thrown), the message is requeued by default and is put back at the head of the queue, so it is retried immediately.
... reprocessed sometime in the future.
You have to configure things appropriately to make that happen.
First, you have to tell the broker to NOT requeue the message. That is done by setting defaultRequeueRejected on the listener container to false (it's true by default). Or, you can throw an AmqpRejectAndDontRequeueException which instructs the container to reject (and not requeue) an individual message.
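For example (a sketch; the queue and class names are made up, and the bean wiring is omitted):

```java
import org.springframework.amqp.AmqpRejectAndDontRequeueException;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter;

public class RejectWithoutRequeueConfig {

    // Option 1: container-wide setting -- any listener exception rejects the
    // message without requeueing it (so it is discarded, or dead-lettered if a
    // DLX is configured, as described below).
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("work");                // illustrative queue name
        container.setDefaultRequeueRejected(false);     // default is true
        container.setMessageListener(new MessageListenerAdapter(new PoisonAwareHandler()));
        return container;
    }

    // Option 2: decide per message -- this exception always rejects without
    // requeue, even if defaultRequeueRejected is left at true.
    public static class PoisonAwareHandler {

        public void handleMessage(String payload) {
            if (cannotEverSucceed(payload)) {
                throw new AmqpRejectAndDontRequeueException("unprocessable message: " + payload);
            }
            // normal processing...
        }

        private boolean cannotEverSucceed(String payload) {
            return false; // illustrative placeholder
        }
    }
}
```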
But that's not the end of it; just doing that will simply cause the rejected message to be discarded.
To avoid that, you have to set up a Dead Letter Exchange/Queue for the queue - rejected messages are then sent to the DLX/DLQ instead of being discarded. Using a policy rather than queue arguments is generally recommended.
Finally, you can set a message time to live on the DLQ so that, after that time, the message is removed from the queue. If you set up another appropriate dead letter exchange on that queue (the DLQ), you can cause the message to be requeued back to the original queue after the time expires.
Note that this will only work for rejected deliveries from the original queue; it will not work when expiring messages in that queue.
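As a concrete sketch of that topology using queue arguments (the names and the 30-second TTL are illustrative; as noted above, a broker policy is generally preferred over arguments):

```java
import org.springframework.amqp.core.Queue;

import java.util.HashMap;
import java.util.Map;

public class DeadLetterTopology {

    // Original queue: rejected (not requeued) messages are dead-lettered straight
    // to "work.dlq" through the default ("") exchange instead of being discarded.
    public Queue workQueue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "");
        args.put("x-dead-letter-routing-key", "work.dlq");
        return new Queue("work", true, false, false, args);
    }

    // DLQ: messages sit here for 30 seconds, then expire and are dead-lettered
    // back to the original queue, giving a crude delayed-retry loop.
    public Queue deadLetterQueue() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 30000);
        args.put("x-dead-letter-exchange", "");
        args.put("x-dead-letter-routing-key", "work");
        return new Queue("work.dlq", true, false, false, args);
    }
}
```

With this in place a rejected message takes the path work → work.dlq → (after the TTL) → work, and each hop is recorded in the x-death header discussed below.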
See this answer and some of the links from its question for more details.
You can use the contents of the x-death header to decide whether to give up completely after some number of attempts (catch the exception and somehow dispose of the bad message; don't throw an exception, and the container will ack the message).
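For instance, a listener could give up after a few trips through the DLQ by inspecting that header (a sketch; the threshold and what "dispose of" means are up to you):

```java
import org.springframework.amqp.AmqpRejectAndDontRequeueException;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;

import java.util.List;
import java.util.Map;

public class GiveUpAfterRetriesListener implements MessageListener {

    private static final int MAX_ATTEMPTS = 3; // illustrative threshold

    @Override
    @SuppressWarnings("unchecked")
    public void onMessage(Message message) {
        List<Map<String, Object>> xDeath = (List<Map<String, Object>>)
                message.getMessageProperties().getHeaders().get("x-death");
        long deaths = 0;
        if (xDeath != null && !xDeath.isEmpty()) {
            // Recent brokers aggregate repeats into a "count" field; older ones
            // simply append one entry per death.
            Object count = xDeath.get(0).get("count");
            deaths = count instanceof Number ? ((Number) count).longValue() : xDeath.size();
        }
        try {
            process(message);
        } catch (Exception e) {
            if (deaths >= MAX_ATTEMPTS) {
                // Give up: log/park the bad message somewhere, then return
                // normally so the container acks it and it leaves the loop.
                return;
            }
            // Otherwise reject without requeue so it goes to the DLX/DLQ again.
            throw new AmqpRejectAndDontRequeueException("retry via DLQ", e);
        }
    }

    private void process(Message message) {
        // application logic; throws if the message cannot be handled yet
    }
}
```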
Here is a solution I used to solve this. I set up an interceptor to retry the message x number of times while applying a backoff policy.
http://trippstech.blogspot.com/2016/03/rabbitmq-deadletter-queue-with.html
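For reference, one way to express that kind of interceptor in Spring AMQP is Spring Retry's stateless retry interceptor on the listener container. This is only a sketch: the attempt count, backoff values and queue name are illustrative, and the details may differ from what the linked post does.

```java
import org.aopalliance.aop.Advice;
import org.springframework.amqp.rabbit.config.RetryInterceptorBuilder;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.retry.RejectAndDontRequeueRecoverer;

public class RetryConfig {

    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        // Retry the delivery in-memory up to 5 times with exponential backoff
        // (1s initial interval, x2 multiplier, 10s cap); when retries are
        // exhausted the recoverer rejects without requeue, so the message is
        // dead-lettered (or discarded if no DLX is configured).
        Advice retryAdvice = RetryInterceptorBuilder.stateless()
                .maxAttempts(5)
                .backOffOptions(1000, 2.0, 10000)
                .recoverer(new RejectAndDontRequeueRecoverer())
                .build();

        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("work");            // illustrative queue name
        container.setDefaultRequeueRejected(false);
        container.setAdviceChain(retryAdvice);
        // container.setMessageListener(...) as in the earlier sketch
        return container;
    }
}
```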
We have a WCF service that listens for messages on a queue (MSMQ). It sends a request to our web server (REST API), which returns an HTTP status code.
If the status code falls within the 400 range, we throw the message away. The idea is that a 400-range error can never succeed (unauthorized, bad request, not found, etc.), so we don't want to keep retrying.
For all other errors (e.g., 500 - Internal Server Error), we have WCF configured to put the message on a "retry" queue. Messages on the retry queue get retried after a certain amount of time. The idea is that the server is temporarily down, so wait and try again.
The way WCF is set up, if we throw a FaultException in the service contract, it will automatically put the message on the retry queue.
When a message causes a 400 range error, we are just swallowing the error (we just log it). This prevents the retry mechanism from firing; however, it would be better to move the message to a dead-letter queue. This way we can react to the error by sending an email to the user and/or a system administrator.
Is there a way to immediately move these bad messages to a dead-letter queue?
First, I kept referring to the dead-letter queue. At the time when I posted this question, I was unaware that WCF/MSMQ automatically creates what's known as a poison sub-queue. Any message that can't be delivered in the configured number of times is put in the poison sub-queue.
In my situation, I knew that some messages would never succeed, so I wanted to move the message out of the queue immediately.
The solution was to create a second queue that I called "poison" (not to be confused with the poison sub-queue). My catch block would create an instance of a WCF client and forward the message to this poison queue. I could reuse the same client to post to both the original queue and the poison queue; I just had to create a separate client end-point in the configuration file for each.
I had two separate ServiceHost instances running that read the queues. The ServiceHost for the original queue did the HTTP request and forwarded messages to the poison queue when unrecoverable errors occurred. The second ServiceHost would simply send out an email to record that a message was lost.
There was also the issue of temporary errors that exceeded the maximum number of tries. WCF/MSMQ automatically creates a sub-queue called <myqueuename>;poison. You cannot directly write to a sub-queue via WCF, but you can read from it using a ServiceHost. Whenever messages end up in the poison sub-queue, I simply forward the message to the poison queue, with the exact same client I use in the original handler's catch block.
I wanted the ability to include a stack trace in the error emails. Since I was reusing the same client and service contract for all of the handlers, I couldn't just pass along the stack trace as a string (unless I added it to all of my data contracts). Instead, I had the poison handler try to execute the code one more time, which would fail again and spit out the stack trace.
This is what my message queues ended up looking like:
MyQueue
  - Queue messages
  - Retry
  - Poison
MyQueuePoison
  - Queue messages
This approach is pretty convoluted. It was strange calling a WCF client from within a WCF service handler. It also meant setting up one more queue on the server and a ton of additional configuration sections for specifying which queue a client should forward messages to.
Hopefully I have understood your question; if it is what I think you are saying, then yes, there is, but you need to program it to do this yourself. You DO need a retry count set so that MSMQ can retry until it gives up. Or you can create your own custom queue for dead letters/messages:
http://msdn.microsoft.com/en-us/library/ms789035(v=vs.110).aspx
http://msdn.microsoft.com/en-us/library/ms752268(v=vs.110).aspx
Take a look here also:
http://www.michaelfcollins3.me/blog/2012/09/20/wcf-msmq-bad-message-handling.html
How do I handle message failure in MSMQ bindings for WCF
I hope these links help.
I am using RabbitMQ version 3.0.2 and I see close to 1,000 messages in the error queue. I want to know:
At what point are messages moved to the error queue?
Is there a way to know why a certain message was moved to the error queue?
Is there any way to move messages from the error queue back to the normal queue?
Thank you.
Messages are moved to the error queue when a) they fail to deserialize, or b) the consumer throws an exception while processing that message five times.
Not really... If you peek at the message in the queue, the payload headers might contain a note, but I don't think we did that. If you turn logging on (NLog, log4net, etc.) you should be able to see the exceptions in your log. You'll have to correlate message IDs at that point to figure out exactly why.
There is no built-in way via MassTransit, mostly because there doesn't seem to be a great, generic way to handle this; everyone wants their own process around it. Dru did create a BusDriver app (in the main MT source repo) that can be used to move messages back to the exchange in question. This default behaviour is there so you at least know things have been failing if you don't put the infrastructure in place to handle it.
To add to Travis's answer, during my development I found some other reasons for messages going onto the error queue:
The published message type has no consumer
A saga and a consumer are expecting the same concrete message type. Even if you try to differentiate using "Accepts" and ".Selected", a saga and a consumer should not both be programmed to receive the same message type.
I have a project set up using Spring and RabbitMQ. Currently it is possible for my application to receive an AMQP message that cannot be processed until another asynchronous process has completed (legacy and totally detached; I have no control over it). So the result is that I may have to wait some amount of time before processing a message. The result of this is an exception in a transformer.
When the message is NACK'd back to RabbitMQ, it is put back at the head of the queue and re-pulled immediately. If I get unprocessable messages equal to the number of concurrent listeners, my workflow locks up. It spins its wheels waiting for messages to become processable, even though there are valid, processable messages waiting behind them in the queue.
Is there a way to reject an AMQP message and have it go back to the tail of the queue instead? From my research, RabbitMQ worked this way at one time, but now I appear to get the head of the queue exclusively.
My config is rather straightforward, but for continuity here it is...
Connection factory is: org.springframework.amqp.rabbit.connection.CachingConnectionFactory
RabbitMQ 3.1.1
Spring Integration: 2.2.0
<si:channel id="channel"/>
<si-amqp:inbound-channel-adapter
    queue-names="commit" channel="channel" connection-factory="amqpConnectionFactory"
    acknowledge-mode="AUTO" concurrent-consumers="${listeners}"
    channel-transacted="true"
    transaction-manager="transactionManager"/>
<si:chain input-channel="channel" output-channel="nullChannel">
    <si:transformer ref="transformer"/>
    <si:service-activator ref="activator"/>
</si:chain>
You are correct that RabbitMQ was changed some time ago. There is nothing in the API to change the behavior.
You can, of course, put an error-channel on the inbound adapter, followed by a transformer (expression="payload.failedMessage"), followed by an outbound adapter configured with an appropriate exchange/routing-key to requeue the message at the back of the queue.
You might want to add some additional logic in the error flow to check the exception type (payload.cause) and decide which action you want.
If the error flow itself throws an exception, the original message will be requeued at the head, as before; if it exits normally, the message will be acked.
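To make the shape of that error flow concrete, here is a rough Java sketch of a POJO you could reference from a service-activator on the error channel instead of the transformer/outbound-adapter pair. The bean wiring, the queue name, and the isTransient check are all assumptions, and the package names reflect the Spring Integration 2.2 line used in the question.

```java
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.integration.Message;
import org.springframework.integration.MessagingException;

// Referenced from a <si:service-activator> subscribed to the adapter's error-channel.
public class RequeueAtTailErrorHandler {

    private final RabbitTemplate rabbitTemplate;

    public RequeueAtTailErrorHandler(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void handle(MessagingException errorPayload) {
        Message<?> failed = errorPayload.getFailedMessage();
        if (isTransient(errorPayload.getCause())) {
            // Publish to the default exchange with the queue name as the routing
            // key: the copy lands at the TAIL of "commit", and because this method
            // exits normally the original delivery is acked and leaves the head.
            rabbitTemplate.convertAndSend("", "commit", failed.getPayload());
        } else {
            // Rethrow (or log and swallow) for unrecoverable failures; rethrowing
            // requeues the original at the head, as described above.
            throw errorPayload;
        }
    }

    private boolean isTransient(Throwable cause) {
        return true; // illustrative: inspect the exception type here
    }
}
```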