Managing a lock on a message in RabbitMQ

I'm trying to use RabbitMQ in a somewhat unconventional way (though at this point I can pick any other message queue implementation if needed).
I have one queue (I can have more if needed) from which customers fetch N messages asynchronously. After they do their work, I send the results from the client to the DB.
I have two problems: first, I don't want two customers to work on the same message; second, I want to guarantee that I won't lose messages if a customer closes the browser or simply stops working.
I looked at the documentation and saw TTL, which would be perfect if I could arrange for a message that times out to be moved to another queue instead of being deleted, but I can't find a way to do that.
I also looked at the confirmation option, which at first glance looked like what I wanted. That mechanism works like this: when the consumer gets a message, it sends a confirmation to the queue. I thought I could delay this confirmation and send it only when the work is done on the client side.
My problem is that I can't configure the queue so that any message that doesn't get confirmed is returned to the queue (or moved to another one).
I also found out how to schedule a message, but that doesn't help either, because I don't want the message to be inserted into the queue in five minutes; I want a message, once a customer receives it, to be locked in the queue for 5 minutes until a confirmation to delete it arrives, and otherwise returned to the queue.
Can I create a temporary queue that enables this mechanism?
If someone can help with one of these problems, or suggest another architecture or a way to do this in another MQ, that would be great.
Resources:
Confirmation:
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
Post about locks (though his problem was a batcher component):
Locks and batch fetch messages with RabbitMq
TTL:
https://www.rabbitmq.com/ttl.html
Schedule a message:
https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/

My problem is that I can't configure the queue so that any message that doesn't get confirmed is returned to the queue (or moved to another one).
RabbitMQ does this anyway, so all you have to do is switch off the auto-ack flag; you already figured this out.
I thought I could delay this confirmation and send it only when the work is done on the client side.
So just send the ack once you have finished processing the message.
All unacknowledged messages remain in the queue and are redelivered to the next consumer (or to the same one when it comes back up, depending on your setup).
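As a minimal sketch of that approach (assuming the RabbitMQ Java client; the queue name "tasks" and the processMessage helper are invented for illustration), the consumer below disables auto-ack and only acknowledges once the work has succeeded, so unacknowledged messages are redelivered if the consumer dies:

    import com.rabbitmq.client.*;

    public class ManualAckConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                       // assumed broker location
            Channel channel = factory.newConnection().createChannel();

            channel.queueDeclare("tasks", true, false, false, null);  // hypothetical queue
            channel.basicQos(1);                                // one unacked message per consumer

            DeliverCallback onDeliver = (consumerTag, delivery) -> {
                try {
                    processMessage(new String(delivery.getBody(), "UTF-8"));  // the client-side work
                    // ack only after the work succeeded; until then the message stays unacked
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                } catch (Exception e) {
                    // on failure, put the message back so another consumer can pick it up
                    channel.basicNack(delivery.getEnvelope().getDeliveryTag(), false, true);
                }
            };
            // autoAck = false: the broker keeps the message until it is explicitly acked
            channel.basicConsume("tasks", false, onDeliver, consumerTag -> { });
        }

        private static void processMessage(String body) {
            System.out.println("working on: " + body);          // placeholder for real work
        }
    }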

Related

RabbitMQ more messages than expected on fixed size queue

I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it works slower than the rate of the incoming messages, so I'm trying to control the traffic on the consumer.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the old messages when it is filled up, so the newest commands are at the front of the queue.
Also, by setting basicQos to 1 I ensure that the consumer doesn't grab all the messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
I added a messageId and tested, and found that many messages keep coming and going long after the publisher has been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure it isn't the robot's queue that's acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is being called every time, so it's definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that setting basicQos to 1 did make it execute the callback for each message one after another, but in the background it kept grabbing messages from the queue, because each message was acknowledged before the callback finished, so the prefetch limit never held anything back.
So even after the publisher was shut down, the consumer still held messages it had already taken from the queue, and those were the messages I saw being executed.
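A rough sketch of that setup in Java (the queue name "motor-commands" and the sendToMotor call are illustrative, not from the original post): the queue is capped at 5 messages, prefetch is 1, and the ack is sent only at the end of the callback, so the consumer never buffers commands ahead of the motor:

    import com.rabbitmq.client.*;
    import java.util.HashMap;
    import java.util.Map;

    public class MotorConsumer {
        public static void main(String[] args) throws Exception {
            Channel channel = new ConnectionFactory().newConnection().createChannel();

            // cap the queue at 5 messages; by default RabbitMQ then drops the oldest ones
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-max-length", 5);
            channel.queueDeclare("motor-commands", false, false, false, queueArgs);

            channel.basicQos(1);  // at most one unacknowledged message per consumer

            DeliverCallback onDeliver = (tag, delivery) -> {
                sendToMotor(delivery.getBody());  // slow: the motor works at its own pace
                // acknowledging only here keeps the broker from pushing the next message early
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("motor-commands", false, onDeliver, tag -> { });
        }

        private static void sendToMotor(byte[] command) {
            // placeholder for handing the command to the motor
        }
    }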

Are alerts created for messages that have a reservation expire in IronMQ?

I am using the alerts feature of the IronMQ service provided by Iron.io to start workers.
I have things set up so that a message is pushed onto the push queue. The push queue sends an alert that starts a worker. The worker pulls the message off the push queue, reserving it. Sometimes, for whatever reason, the job fails, the reservation for the message expires, and the message becomes available again. However, from what I can tell, no alert is sent when the reservation on a message expires. So the message sits in the queue until another message is added to the queue, firing an alert and starting a worker. But the new message is not processed.
Are alerts created for messages that have a reservation expire in IronMQ? Is there any documentation I missed describing what can happen?
I am working on having workers pull off multiple messages, but I am running into issues unrelated to Iron.io when processing multiple messages in the same worker.
Also, is there a way to pull from the top of the queue, to avoid pulling messages that may be causing errors? Should I just modify my workers to delete the messages that cause errors?
Currently there are no alerts for when a message times out and goes back on the queue, but that does seem like it would be a good idea. I assume this is a pretty inactive queue? I made a feature request for this here: https://trello.com/c/XcHi0NdN/35-fire-alert-when-a-message-times-out-goes-back-on-queue
And regarding messages that are causing issues, your best bet would be to add them to a different queue (an error queue) and delete them from the original queue. Then you can go through the error queue to figure out why certain messages are causing you problems. This is known as a "dead letter queue", by the way, and we have a feature request for it here, please give it a vote! https://trello.com/c/bGnJcNa9/26-dead-letter-queue

RabbitMQ - purge a queue from all of its unacked messages

I have thousands of unacked messages in my dev environment which I can't restart.
Is there a way to remove (purge) all messages even if they are unacknowledged?
Close the channel that the unacked messages reside on; that puts them back into the queue as ready, and then you can call purge.
You have to make the consumer ack them (or nack), and only after that will they be removed. Alternatively, you can shut down the consumers and purge the queue completely.
If you are looking for a way to purge all unacked messages directly, there is no such feature, neither in the AMQP protocol nor in RabbitMQ.
It looks like your consumer is the cause of the problem, so you have to adjust it (rewrite it) to release each message immediately after it has been processed or has failed.
Once there are no "ready" messages in the queue, delete it and recreate it.
YOU WILL LOSE THE QUEUE CONTENTS with this method.
You need to put the messages back into the queue before you can purge them:
close the channel
close the connection (the script doesn't work for me)
As an alternative that doesn't require waiting:
delete and recreate the queue
restart the server
You need to call basic.recover to force all unacked messages to be re-enqueued on the channel that failed. Be aware of the errata concerning this method: only the requeue mode is supported by RabbitMQ.
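For illustration, with the RabbitMQ Java client that could look like the sketch below (the surrounding connection setup is assumed):

    import com.rabbitmq.client.Channel;
    import java.io.IOException;

    public class RecoverUnacked {
        // Ask the broker to redeliver every message delivered on this channel that has
        // not been acknowledged yet. Per the errata, requeue = true is the only mode
        // RabbitMQ supports.
        static void recover(Channel channel) throws IOException {
            channel.basicRecover(true);
        }
    }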
If you want to do it from code, you can purge the queue programmatically; with the RabbitMQ Java client:

    channel.queuePurge(queueName);

This removes the ready messages from the queue while the queue itself keeps existing.
One way this can happen is if the consumer is stuck recycling the same messages due to a processing error. In this case, the RabbitMQ queue management interface may show the messages as Unacked, but really they are being read from the queue and processed (to the point of the failure) then requeued (to enable a retry) at a rapid pace -- maybe thousands of times per second.
During this loop, the messages exist briefly in the Ready state, but are immediately removed again by your application -- and the cycle begins again. As an example, this auto-requeue behavior is the default for Spring AMQP.
Since the messages are never left in the Ready state, the Management Interface's Get Message(s) button is unlikely to work. What can work, if you have queue access, is to run a separate custom consumer instance, perhaps locally, but with the specific intent of removing and not requeuing the messages in question.
By RabbitMQ's Fair Dispatch mechanism, your additional consumer will likely receive the messages in question and have the opportunity to perform your custom handling.
You might even write a custom utility to do this, with logic to filter, analyze, or deadletter the messages of interest.
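A minimal sketch of such a drain consumer in Java (the queue name "problem-queue" is a placeholder; here the messages are only logged and acked, i.e. removed rather than requeued, but you could just as well republish them to an error queue first):

    import com.rabbitmq.client.*;
    import java.nio.charset.StandardCharsets;

    public class DrainConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                       // assumed broker location
            Channel channel = factory.newConnection().createChannel();

            DeliverCallback onDeliver = (tag, delivery) -> {
                // inspect / log / dead-letter the message here instead of reprocessing it
                System.out.println("draining: " + new String(delivery.getBody(), StandardCharsets.UTF_8));
                // ack so the message is removed and not requeued
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("problem-queue", false, onDeliver, tag -> { });
        }
    }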
If you want to clear the contents of the queue, you can use the AMQP method queue.purge: http://www.rabbitmq.com/amqp-0-9-1-reference.html#queue.purge
You could do something similar using the management plugin.

Setting a long timeout for RabbitMQ ack message

I was wondering if this is possible. I want to pull a task from a queue and do some work that could take anywhere from 3 seconds to possibly many minutes before an ack is sent back to RabbitMQ to signal that the work has been completed. The work is done by a user, which is why the time it takes to process the job varies.
I don't want to ack the message immediately after I pop it off the queue, because I want the message to be requeued if no ack is received. Can anyone give me any insights into how to solve this?
Having a long timeout should be fine, and certainly as you say you want redelivery if something goes wrong, so you want to only ack after you finish.
The best way to achieve that, IMO, would be to have multiple consumers on the queue (i.e. multiple threads/processes consuming from the same queue). That should be fine as long as there's no particular ordering constraint on your queue contents (i.e. the way there might be if the queue were to contain contents representing Postgres data that involves FK constraints).
This tutorial on the RabbitMQ website provides more info (Python linked, but there should be similar tutorials for other languages): https://www.rabbitmq.com/tutorials/tutorial-two-python.html
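As a sketch of that multiple-consumer setup in Java (the queue name "user-tasks" and handleTask are invented for illustration): each worker gets its own channel with a prefetch of 1 and acks only when the user's work is finished, however long that takes:

    import com.rabbitmq.client.*;

    public class Workers {
        public static void main(String[] args) throws Exception {
            Connection conn = new ConnectionFactory().newConnection();
            for (int i = 0; i < 4; i++) {                  // several consumers on the same queue
                Channel channel = conn.createChannel();
                channel.basicQos(1);                       // one in-flight task per worker
                DeliverCallback onDeliver = (tag, delivery) -> {
                    handleTask(delivery.getBody());        // may take seconds or minutes
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                };
                channel.basicConsume("user-tasks", false, onDeliver, tag -> { });
            }
        }

        private static void handleTask(byte[] body) {
            // placeholder for the long-running, user-driven work
        }
    }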
Edit in response to comment from OP:
What's your heartbeat set to? If your worker doesn't acknowledge the heartbeat within the set period of time, the server will consider the connection to be dead.
Not sure which language you're using, but for Java you would use the setRequestedHeartbeat method to specify the heartbeat.
However you implement your workers, it's vital that the heartbeat can still be sent back to the RabbitMQ server. If something blocks the client from sending the heartbeat, the server will kill the connection after the interval expires.
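In Java the heartbeat is configured on the ConnectionFactory; a small sketch (the 60-second value is only an example):

    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class HeartbeatSetup {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");            // assumed broker location
            // requested heartbeat interval in seconds; if heartbeats stop arriving,
            // the server will consider the connection dead and close it
            factory.setRequestedHeartbeat(60);
            Connection connection = factory.newConnection();
            // ... create channels and consumers as usual
        }
    }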

RabbitMQ use of immediate and mandatory bits

I am using RabbitMQ server.
For publishing messages, I set the immediate field to true and tried sending 50,000 messages. Using rabbitmqctl list_queues, I saw that the number of messages in the queue was zero.
Then, I changed the immediate flag to false and again tried sending 50,000 messages. Using rabbitmqctl list_queues, I saw that a total of 100,000 messages were in queues (till now, no consumer was present).
After that, I started a consumer and it consumed all the 100,000 messages.
Can anybody please help me understand the immediate bit field and this behavior? I also could not understand the concept of the mandatory bit field.
The immediate and mandatory fields are part of the AMQP specification, and are also covered in the RabbitMQ FAQ to clarify how its implementers interpreted their meaning:
Mandatory
This flag tells the server how to react if a message cannot be routed to a queue. Specifically, if mandatory is set and after running the bindings the message was placed on zero queues then the message is returned to the sender (with a basic.return). If mandatory had not been set under the same circumstances the server would silently drop the message.
Or in my words, "Put this message on at least one queue. If you can't, send it back to me."
Immediate
For a message published with immediate set, if a matching queue has ready consumers then one of them will have the message routed to it. If the lucky consumer crashes before ack'ing receipt the message will be requeued and/or delivered to other consumers on that queue (if there's no crash the message is ack'ed and it's all done as per normal). If, however, a matching queue has zero ready consumers the message will not be enqueued for subsequent redelivery from that queue. Only if all of the matching queues have no ready consumers will the message be returned to the sender (via basic.return).
Or in my words, "If there is at least one consumer connected to my queue that can take delivery of a message right this moment, deliver this message to them immediately. If there are no consumers connected then there's no point in having my message consumed later and they'll never see it. They snooze, they lose."
http://www.rabbitmq.com/blog/2012/11/19/breaking-things-with-rabbitmq-3-0/
Removal of "immediate" flag
What changed? We removed support for the rarely-used "immediate" flag on AMQP's basic.publish.
Why on earth did you do that? Support for "immediate" made many parts of the codebase more complex, particularly around mirrored queues. It also stood in the way of our being able to deliver substantial performance improvements in mirrored queues.
What do I need to do? If you just want to be able to publish messages that will be dropped if they are not consumed immediately, you can publish to a queue with a TTL of 0.
If you also need your publisher to be able to determine that this has happened, you can also use the DLX feature to route such messages to another queue, from which the publisher can consume them.
Just copied the announcement here for a quick reference.
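To make that replacement concrete, here is a hedged sketch of the TTL-0-plus-DLX setup in Java (all names, "immediate-like", "returned", "dlx", are invented for the example):

    import com.rabbitmq.client.*;
    import java.util.HashMap;
    import java.util.Map;

    public class ImmediateReplacement {
        public static void main(String[] args) throws Exception {
            Channel channel = new ConnectionFactory().newConnection().createChannel();

            // where expired (i.e. not immediately consumed) messages end up
            channel.exchangeDeclare("dlx", BuiltinExchangeType.FANOUT);
            channel.queueDeclare("returned", true, false, false, null);
            channel.queueBind("returned", "dlx", "");

            // TTL of 0: a message is dead-lettered unless a consumer can take it
            // right away, approximating the removed "immediate" flag
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-message-ttl", 0);
            queueArgs.put("x-dead-letter-exchange", "dlx");
            channel.queueDeclare("immediate-like", true, false, false, queueArgs);

            // the publisher can later consume from "returned" to learn which
            // messages were not delivered immediately
        }
    }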