I'm using RetryOperationsInterceptor. Every time a message consumption throws an exception, the message is put back in the queue. That's what I want, but I want it at the tail of the queue instead of the head. Is there a way to achieve it?
You have to republish it to the queue in the recoverer.
Add a RepublishMessageRecoverer to the interceptor.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-listeners
It will still retry immediately, so you would have to set maxAttempts to 1 and use a message header to count how many delivery attempts have been made, so you will probably need to customize the recoverer.
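For illustration, a rough sketch of wiring this up (the queue name "myQueue" and the injected RabbitTemplate are assumptions; counting delivery attempts would still require a customized recoverer, as noted above):

import org.springframework.amqp.rabbit.config.RetryInterceptorBuilder;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.retry.RepublishMessageRecoverer;
import org.springframework.context.annotation.Bean;
import org.springframework.retry.interceptor.RetryOperationsInterceptor;

@Bean
public RetryOperationsInterceptor retryInterceptor(RabbitTemplate rabbitTemplate) {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(1) // no in-place retries; go straight to the recoverer
            // republish to the default exchange with the queue name as the routing key,
            // which appends the failed message at the tail of "myQueue"
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "", "myQueue"))
            .build();
}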
Related
I have an exchange and a queue. The producer doesn't need consumption confirmation, but in some cases the consumer can't process a message at that moment because other data is missing. Because of this, I want to return those messages to the end of the queue. How can I do this? Or does it happen automatically when I reject a message?
Flow:
Message1 gets consumed and creates some record in database.
Message2 gets consumed and checks whether the record exists in the database; if it does, it updates the record. If there is no record in the database, the message should be returned to the end of the queue.
So there is a message-ordering problem. In the general case I get messages in order, because most components deliver their messages correctly. I want to handle the situation where the producer of Message1 wasn't able to publish its message to the exchange immediately because of heavy load or some other reason. In that case, Message2 will be consumed first, but there won't be enough information in the database to process it. I want this message to be returned to the queue, but I need to be sure that Message2 goes to the tail of the queue. If it goes to the head, I will get an infinite loop if I use only one queue.
A side question: is it possible to track how many times consumers have tried to process a message and returned it? If it is possible to put a message at the tail of the queue as described above, but for some reason the producer of Message1 died and Message1 never arrives, I want to dead-letter Message2 after some number of retries or some amount of time.
RabbitMQ always puts rejected messages at the head of the queue. To put them at the tail, you will have to publish them yourself (e.g. using RabbitTemplate). You can add a header with a count of retries.
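A minimal sketch of that idea, assuming the consumer has a RabbitTemplate and the raw Message available (the "x-retries" header name, the retry limit, and the queue name are arbitrary choices here):

import java.util.Map;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public void requeueAtTail(RabbitTemplate rabbitTemplate, Message failed) {
    Map<String, Object> headers = failed.getMessageProperties().getHeaders();
    int retries = (int) headers.getOrDefault("x-retries", 0);
    if (retries < 5) {                       // give up after 5 attempts
        headers.put("x-retries", retries + 1);
        // default exchange + queue name as routing key => tail of the same queue
        rabbitTemplate.send("", "myQueue", failed);
    }
    // else: drop the message, log it, or publish it to a parking-lot queue
}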
I'm trying to use RabbitMQ in a somewhat unconventional way (though at this point I can pick any other message queue implementation if needed).
I have one queue (I can have more if needed) from which clients fetch N messages asynchronously. After they do their work, I send the results from the client to the database.
I have two problems: first, I don't want two clients working on the same message; second, I want to guarantee that I won't lose messages if a client closes the browser or simply stops working.
I looked at the documentation and saw TTL, which would be perfect if I could make a message that times out move to another queue instead of being deleted, but I can't find a way to do that.
I also looked at the confirmation (acknowledgement) option, which at first glance looked like what I wanted. That mechanism works like this: when the consumer gets a message, it sends a confirmation to the queue. I thought I could delay this confirmation and send it when the work is done on the client side.
My problem was that I couldn't program the queue so that if a message doesn't get confirmed, it is returned to the queue (or to another one).
I also found out how to schedule a message, but that didn't help either, because I don't want the message to be inserted into the queue after five minutes; I want a message, once a client receives it, to be locked in the queue for 5 minutes until a confirm-to-delete arrives, and otherwise returned to the queue.
Can I create a temporary queue that enables this mechanism?
If someone can help with either of these problems, or suggest another architecture or a way to do this in another MQ, that would be great.
Resources:
confirmation:
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
a post about locks, though that question's problem was a batcher component:
Locks and batch fetch messages with RabbitMq
TTL:
https://www.rabbitmq.com/ttl.html
Schedule a message:
https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/
"My problem was that I couldn't program the queue so that if a message doesn't get confirmed, it is returned to the queue (or to another one)."
RabbitMQ does this anyhow; all you have to do is switch off the auto-ack flag, which you've already figured out.
"I thought I could delay this confirmation and send it when the work is done on the client side."
So just send the ack once you've finished processing the message.
All unacknowledged messages remain in the queue and are re-delivered to the next consumer (or to the same one when it's up again, depending on your setup).
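For example, a minimal sketch with the plain RabbitMQ Java client (the Connection and the doTheWork method are assumed): with auto-ack off, anything that is never acked is redelivered once the client's channel or connection closes.

import java.io.IOException;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

void consumeWithManualAck(Connection connection) throws IOException {
    Channel channel = connection.createChannel();
    channel.basicQos(1); // hand each consumer only one unacked message at a time
    channel.basicConsume("work-queue", false, new DefaultConsumer(channel) { // autoAck = false
        @Override
        public void handleDelivery(String consumerTag, Envelope envelope,
                AMQP.BasicProperties properties, byte[] body) throws IOException {
            doTheWork(body);                                     // the client-side processing (assumed)
            channel.basicAck(envelope.getDeliveryTag(), false);  // ack only once the work is done
        }
    });
}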
I have a service that dispatches tasks to worker processes via RabbitMQ. The messages are sent with a TTL, and the worker will not ack a message until it successfully completes the task sent in it.
The tasking process monitors workers for timeouts, and if a worker exceeds the timeout it is terminated. Since the message isn't ack'd, it is re-queued immediately and the next worker picks it up (this is useful in my scenario, as workers are unreliable and may fail, but subsequent attempts typically succeed).
However, I would also like the ability to cancel a message. Terminating and re-creating the worker process is the normal procedure (it's single threaded, so I can't send a separate 'cancel' message to the worker). However, doing so leads to the message immediately re-queueing if the TTL has not been exceeded.
The only suggested solution I've found is here; it suggests a separate data source that checks whether a message is still valid. However, that answer is both a) old and b) inconvenient.
Does RabbitMQ offer a means to cancel a message once it's been placed into the queue?
Unfortunately, RabbitMQ does not have a way to cancel a message.
Without the ability to send a "cancel" message to your consumer, you may have to do something like what that other post suggests.
Another option to consider: message processing should be idempotent. That is, processing the same message more than once should only cause the desired result to occur once (the first time it is processed).
Idempotence is often achieved through the use of a correlation ID in messaging. You can attach a correlation ID to your message, then check a database or other service to see whether that message should still be processed. If you want to "cancel" the message, you update that database/service for the specific correlation ID to say "this one has already been processed" or "has been canceled" or something like that.
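As a hedged illustration of that idea (the processedStore lookup and the doWork call are assumptions, not RabbitMQ features):

import org.springframework.amqp.core.Message;

public void onMessage(Message message) {
    // correlation id attached by the producer; getCorrelationId() returns a String
    // in recent Spring AMQP versions
    String correlationId = message.getMessageProperties().getCorrelationId();
    if (correlationId == null || processedStore.isDoneOrCancelled(correlationId)) {
        return; // already processed or cancelled elsewhere -> just let the container ack it
    }
    doWork(message);                       // the actual task (assumed)
    processedStore.markDone(correlationId); // remember it so duplicates are skipped
}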
I am throwing an AmqpException inside of my consumer.
My expectation is that the message will go back to the queue in FIFO order and be reprocessed sometime in the future.
It seems as if Spring AMQP does not release the message back to the queue, but instead tries to reprocess the failed messages over and over again.
This blocks newly arrived messages from being processed. The stuck ones appear in the "Unacked" state forever in the RabbitMQ management console.
Any thoughts?
That's the way RabbitMQ/Spring AMQP works: if a message is rejected (any exception is thrown), it is requeued by default and put back at the head of the queue, so it is retried immediately.
... reprocessed sometime in the future.
You have to configure things appropriately to make that happen.
First, you have to tell the broker to NOT requeue the message. That is done by setting defaultRequeueRejected on the listener container to false (it's true by default). Or, you can throw an AmqpRejectAndDontRequeueException which instructs the container to reject (and not requeue) an individual message.
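For example, a rough sketch with a listener container factory (the bean wiring is an assumption):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setDefaultRequeueRejected(false); // rejected messages are not requeued at the head
    return factory;
}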
But that's not the end of it; just doing that will simply cause the rejected message to be discarded.
To avoid that, you have to set up a Dead Letter Exchange/Queue for the queue - rejected messages are then sent to the DLX/DLQ instead of being discarded. Using a policy rather than queue arguments is generally recommended.
Finally, you can set a message time-to-live on the DLQ so that, after that time, the message is removed from the queue. If you set up another appropriate dead letter exchange on that queue (the DLQ), you can cause the message to be requeued back to the original queue after the time expires.
Note that this will only work for rejected deliveries from the original queue; it will not work when expiring messages in that queue.
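Purely as an illustration, the queue-arguments variant of that setup might look roughly like this with Spring AMQP declarations (the policy approach mentioned above is generally preferred; the queue names and the 30-second TTL are made up):

import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;

@Bean
public Queue workQueue() {
    return QueueBuilder.durable("work")
            .withArgument("x-dead-letter-exchange", "")             // default exchange
            .withArgument("x-dead-letter-routing-key", "work.dlq")  // rejected messages go to the DLQ
            .build();
}

@Bean
public Queue workDlq() {
    return QueueBuilder.durable("work.dlq")
            .withArgument("x-message-ttl", 30000)                   // wait 30 seconds in the DLQ
            .withArgument("x-dead-letter-exchange", "")
            .withArgument("x-dead-letter-routing-key", "work")      // then expire back to the original queue
            .build();
}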
See this answer and some of the links from its question for more details.
You can use the contents of the x-death header to decide whether you should give up completely after some number of attempts (catch the exception and somehow dispose of the bad message; don't throw an exception and the container will ack the message).
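A hedged sketch of reading that header to stop after a few round trips (the header structure shown matches current RabbitMQ versions; maxAttempts is your own threshold):

import java.util.List;
import java.util.Map;
import org.springframework.amqp.core.Message;

@SuppressWarnings("unchecked")
private boolean shouldGiveUp(Message message, long maxAttempts) {
    // x-death is a list of tables maintained by the broker; each entry has a "count"
    List<Map<String, ?>> xDeath =
            (List<Map<String, ?>>) message.getMessageProperties().getHeaders().get("x-death");
    if (xDeath == null || xDeath.isEmpty()) {
        return false;               // never dead-lettered yet
    }
    return (Long) xDeath.get(0).get("count") >= maxAttempts;
}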
Here is a solution I used to solve this: I set up an interceptor to retry the message x number of times while applying a backoff policy.
http://trippstech.blogspot.com/2016/03/rabbitmq-deadletter-queue-with.html
I have a project set up using Spring and RabbitMQ. Currently it is possible for my application to receive an AMQP message that cannot be processed until another asynchronous process has completed (it's legacy and totally detached; I have no control over it). The result is that I may have to wait some amount of time before processing a message, which manifests as an exception in a transformer.
When the message is NACK'd back to RabbitMQ, it is put back at the head of the queue and re-pulled immediately. If I get unprocessable messages equal to the number of concurrent listeners, my workflow locks up. It spins its wheels waiting for messages to become processable, even though there are valid, processable messages waiting behind them in the queue.
Is there a way to reject an AMQP message and have it go back to the tail of the queue instead? From my research, RabbitMQ worked this way at one time, but now I appear to get the head of the queue exclusively.
My config is rather straight forward, but for continuity here it is...
Connection factory is: org.springframework.amqp.rabbit.connection.CachingConnectionFactory
RabbitMQ 3.1.1
Spring Integration: 2.2.0
<si:channel id="channel"/>

<si-amqp:inbound-channel-adapter
    queue-names="commit" channel="channel" connection-factory="amqpConnectionFactory"
    acknowledge-mode="AUTO" concurrent-consumers="${listeners}"
    channel-transacted="true"
    transaction-manager="transactionManager"/>

<si:chain input-channel="channel" output-channel="nullChannel">
    <si:transformer ref="transformer"/>
    <si:service-activator ref="activator"/>
</si:chain>
You are correct that RabbitMQ was changed some time ago. There is nothing in the API to change the behavior.
You can, of course, put an error-channel on the inbound adapter, followed by a transformer (expression="payload.failedMessage"), followed by an outbound adapter configured with an appropriate exchange/routing-key to requeue the message at the back of the queue.
You might want to add some additional logic in the error flow to check the exception type (payload.cause) and decide which action you want.
If the error flow itself throws an exception, the original message will be requeued at the head, as before; if it exits normally, the message will be acked.
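This is not the XML flow described above, but an equivalent hedged sketch in Java: a service activator on the adapter's error channel (assuming error-channel="amqpErrors" has been added to the inbound adapter and a RabbitTemplate bean is available) that republishes the failed payload to the back of the queue.

import org.springframework.integration.annotation.ServiceActivator;
// on older Spring Integration versions this is org.springframework.integration.MessagingException
import org.springframework.messaging.MessagingException;

@ServiceActivator(inputChannel = "amqpErrors")
public void requeueAtTail(MessagingException error) {
    Object failedPayload = error.getFailedMessage().getPayload();
    // default exchange + queue name as routing key => tail of the "commit" queue
    rabbitTemplate.convertAndSend("", "commit", failedPayload);  // rabbitTemplate: injected bean (assumed)
    // exiting normally here means the original delivery is acked;
    // throwing would requeue it at the head, as noted above
}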