In the RabbitMQ management console, over one day I accumulated 8000 messages, but I'm confused that their state shows as idle while the Ready and Total counters equal 1. What state should the queue show when a job is complete — idle? And in what format is x-expires specified? It seems to me that I have something configured wrong =(
While it's difficult to fully understand what you are asking, it seems that you simply don't have anything pulling messages off of the queue in question.
In general, RabbitMQ will hold on to a message in a queue until a listener pulls it off and successfully ACKs, indicating that the message was successfully processed. You can configure queues to behave differently by setting a Time-To-Live (TTL) on messages or having different queue durabilities (eg. destroyed when there are no more listeners), but the default is to play it safe.
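For example, both a per-message TTL and a queue expiry (x-expires) can be set as queue arguments when the queue is declared; both are given in milliseconds. A minimal sketch in Python with pika (the queue name and values are illustrative, not from the question):

```python
import pika

# Illustrative sketch: local broker, default credentials, hypothetical queue name.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Both arguments are specified in milliseconds.
channel.queue_declare(
    queue="example-queue",
    durable=True,
    arguments={
        "x-message-ttl": 60000,   # each message expires after 60 s in the queue
        "x-expires": 1800000,     # the queue itself is deleted after 30 min of disuse
    },
)

connection.close()
```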
I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it works slower than the rate of the incoming messages, so I'm trying to control the traffic on the consumer.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the old messages when it is filled up, so the newest commands are at the front of the queue.
Also, by setting basicQos to 1 I ensure that the consumer doesn't grab all messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid, since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
I added a messageId and tested, and found that many messages are still coming and going long after the publisher has been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure it isn't the robot's queue that's acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is called each time, so it's definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that setting basicQos to 1 made it execute the callback for each message one after another, but in the background it kept grabbing messages from the queue.
So even when the publisher was shut down, the consumer still had messages buffered that it had already taken from the queue, and those were the ones I saw being executed.
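For reference, a rough sketch of the setup described above in Python with pika (the queue name and the motor call are placeholders): a queue capped at 5 messages, prefetch of 1, and the ack issued only at the end of the callback.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Queue capped at 5 messages; by default RabbitMQ drops the oldest messages
# from the head of the queue when the limit is exceeded.
channel.queue_declare(queue="motor-commands", arguments={"x-max-length": 5})

# basicQos(1): at most one unacknowledged message is delivered at a time.
channel.basic_qos(prefetch_count=1)

def on_command(ch, method, properties, body):
    send_to_motor(body)  # hypothetical slow call to the motor
    # Ack only once the work is done, so the next message is not delivered
    # (and buffered locally) while this one is still being processed.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="motor-commands",
                      on_message_callback=on_command,
                      auto_ack=False)
channel.start_consuming()
```

With auto-ack off and the ack moved to the end of the callback, the prefetch limit of 1 actually takes effect: the broker will not deliver the next message until the current one has been acknowledged.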
I have the following use case that I'm trying to set up in RabbitMQ:
Normally process A should handle all messages sent to queue A.
However, if process A goes down (is no longer consuming from queue A), then process B should handle the messages until process A comes back up.
At first it looks like consumer priorities might be the solution: https://www.rabbitmq.com/consumer-priority.html. However, that will send messages to process B when process A is merely blocked working on other messages. I only want them sent to process B when process A is down.
A second option might be dead-lettering: https://www.rabbitmq.com/dlx.html. If process A is not reading from queue A, the messages will eventually time out and then move to an exchange that forwards them to a queue that process B reads. However, that option requires waiting for the message to time out, which is not ideal. Also, the message could time out even while process A is still working, which is not ideal either.
Any ideas how RabbitMQ could be configured for the use case described above? Thanks
According to your answers to my questions, I would probably use a consumer priority so that process A handles the maximum number of messages, along with a high prefetch count (if possible, and you must make sure your process can handle such a high number).
Then process B would handle the messages that process A cannot handle due to high load, or all the messages when process A is not available. It is probably acceptable that under high load some messages are handled with a higher delay. Do not forget to set a low prefetch count for process B.
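A hedged sketch of that setup in Python with pika (the queue name, callback, and prefetch values are illustrative): process A consumes with a high x-priority and a high prefetch count, while process B runs the same code with a low prefetch count and a lower (or no) priority, so it only receives messages when A is saturated or disconnected.

```python
import pika

# Process A, the preferred consumer. Process B runs the same code with
# prefetch_count=1 and a lower (or no) "x-priority".
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="queue-a")  # hypothetical queue name

# High prefetch, assuming process A can really buffer this many messages.
channel.basic_qos(prefetch_count=500)

def handle(ch, method, properties, body):
    process(body)  # hypothetical processing function
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(
    queue="queue-a",
    on_message_callback=handle,
    arguments={"x-priority": 10},  # consumer priority: A wins while it can accept deliveries
)
channel.start_consuming()
```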
Hope this helps.
I have observed RabbitMQ "stuck" with unacked messages. The queue shows a consumer which no longer exists, and I assume what's happening is that RabbitMQ is continuing to deliver messages to that consumer. They show as an ever-increasing count of unacked messages. I'm doing this in PHP with php-amqplib.
I can produce the problem by killing the consumer process (control-C on command line).
I tried specifying a heartbeat of 3 seconds and tried keep-alive both true and false. With heartbeat, the consumer will eventually fail:
Exception fwrite(): send of 573 bytes failed with errno=32 Broken pipe
PhpAmqpLib\Wire\IO\StreamIO->error_handler(8, 'fwrite(): send ...',
php-amqplib/PhpAmqpLib/Wire/IO/StreamIO.php(281): fwrite(Resource id #176, '\x01\x00\x01\x00\x00\x00\x15\x00<\x00(\x00\x00\fb...', 8192)
Issue #374 might relate: https://github.com/php-amqplib/php-amqplib/issues/374
The consumer is consuming from multiple queues, but I believe that shouldn't matter.
The problem I'm trying to solve is that RabbitMQ continues to think that a consumer exists when it doesn't, with the result that RabbitMQ delivers those messages nowhere, and they go unacknowledged. I'm looking for a way to get rid of that spurious connection so that those messages can be re-delivered to a live consumer. I think that's what heartbeat is for, but I haven't gotten it to work.
The first and most important thing to do in this case is to just "print" the message content and immediately return true to the consumer, without running your real processing code. If you can "consume" the messages that way, the problem isn't in RabbitMQ but in your process: most likely it takes too long to acknowledge the message, and RabbitMQ closes your connection.
I'm not saying that this is necessarily your case; I'm just trying to help debug the problem.
In my case I changed my approach to this problem: each message carried many product IDs, and the ACK took a long time because processing had to reach the database. I split my messages up, and it worked well after doing that.
You could also change the approach, for example by creating other queues to spread these messages over. I can't be sure, but 90% of the time this is the problem.
You can read more about Detecting Dead TCP Connections with Heartbeats here
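The thread above uses php-amqplib, but the heartbeat idea is the same in any client. A minimal sketch of enabling it in Python with pika (values are illustrative): once the broker stops receiving heartbeats from the dead client it closes that connection, and the unacked messages go back to the queue for redelivery to live consumers.

```python
import pika

# Illustrative values only.
params = pika.ConnectionParameters(
    host="localhost",
    heartbeat=30,                   # heartbeat interval in seconds
    blocked_connection_timeout=300,
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
```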
I have clients that use an API. The API sends messages to RabbitMQ, and RabbitMQ delivers them to workers.
I need to reply to a client if something went wrong: the message wasn't routed to the expected queue, or it wasn't picked up for processing right away (full confirmation).
A task that only starts after 5-10 seconds makes no sense for us.
Accordingly, I have to use the mandatory and immediate flags.
I can't increase the number of workers, and I can't run workers on other servers. That is a hard requirement.
As far as I could find, the immediate flag has not been supported since RabbitMQ 3.0.x.
The RabbitMQ developers suggest using TTL=0 on the queue instead, but then I won't be able to check the status of a message.
Is there any way to change that behavior? Please share your experience of how you solved problems like this.
Thank you.
I'm not sure, but after reading your original question in Russian, it seems that using both publisher and consumer confirms may be what you want. See the last three paragraphs in this answer.
Since you want to get the processing result for a published message back from your worker, it looks like the RPC pattern is what you want. See the RabbitMQ RPC tutorial; pick the programming language section you are most comfortable with, the overall concept is the same. You may also find Direct reply-to useful.
It's not the same as the immediate flag functionality, but if all your publishers operate with an "immediate" scenario, AMQP may not be the best choice for this kind of task. Immediate means "deliver this message right now or burn in hell", and you may end up in a situation where you publish more than you can process. In such cases RPC plus a response timeout may be a good choice on the application side (e.g. a socket timeout). But that doesn't work well for non-idempotent RPC calls whose messages may still get processed, so you may want to use a per-queue or per-message TTL (or set a queue length limit). If a message is dead-lettered, you can pick it up there (in case you need it for some reason).
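To make the direct reply-to suggestion concrete, here is a hedged client-side sketch in Python with pika (the request queue name is a placeholder): the client starts consuming from the amq.rabbitmq.reply-to pseudo-queue with auto-ack before publishing the request whose reply_to points at it.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

response = None

def on_reply(ch, method, properties, body):
    global response
    response = body
    ch.stop_consuming()

# Consuming from the pseudo-queue (with auto-ack) must start before
# publishing a request that names it in reply_to.
channel.basic_consume(queue="amq.rabbitmq.reply-to",
                      on_message_callback=on_reply,
                      auto_ack=True)

channel.basic_publish(
    exchange="",
    routing_key="rpc-queue",  # hypothetical RPC request queue
    properties=pika.BasicProperties(reply_to="amq.rabbitmq.reply-to"),
    body=b"request payload",
)

channel.start_consuming()  # returns once on_reply calls stop_consuming()
print(response)
```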
TL;DR
As to "something" can go wrong, it can go so on different levels which we for simplicity define as:
before RabbitMQ, like sending application failure and network problems;
inside RabbitMQ, say, missed destination queue, message timeout, queue length limit, some hard and unexpected internal error;
after RabbitMQ, in most cases - messages processing application error or some third-party services like data persistence or caching layer outage.
Some errors like network outage or hardware error are a bit epic and are not a subject of this q/a.
The typical scenario for guaranteed message delivery is to use publisher confirms or transactions (which are slower). Once you get a confirm, it means RabbitMQ received your message and, if there was a route, placed it in a queue. If not, it is dropped, OR, if the mandatory flag is set, it is returned with the basic.return method.
For consumers it's similar: after basic.consume/basic.get, once the client has acked the message, it is considered received and is removed from the queue.
So when you use confirms on both ends, you are protected from message loss (setting aside the possibility of a bug in RabbitMQ itself).
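A minimal sketch of the publish side with confirms and the mandatory flag, in Python with pika (the exchange, routing key, and reporting helper are placeholders):

```python
import pika
from pika.exceptions import UnroutableError, NackError

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.confirm_delivery()  # enable publisher confirms on this channel

try:
    channel.basic_publish(
        exchange="",               # default exchange, illustrative
        routing_key="task-queue",  # hypothetical queue name
        body=b"do the work",
        mandatory=True,            # basic.return instead of a silent drop when unroutable
    )
    # Reaching this point means the broker confirmed the message and it was
    # routed to at least one queue.
except UnroutableError:
    report_failure_to_client("message could not be routed")  # hypothetical helper
except NackError:
    report_failure_to_client("broker rejected the message")  # hypothetical helper
```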
Bogdan, thank you for your reply.
It seems I did not express my idea clearly enough.
The scheme may look like this. Each component of the system should do only its own job :)
The idea is to make every component simpler.
How a task is submitted:
Clients send requests to the HTTP API and must get a response like this:
Positive: the message has been put into the queue.
Negative: a response with an error and the reason.
When I was talking about confirmation, I meant that I must know that a message was delivered (if there are no free workers, RabbitMQ may drop the message), and the client must be notified.
If a sent message could not be delivered to the expected queue, the client must be notified.
How a message is handled:
Messages are sent off for processing.
The processing status is written to HeartBeat.
Status:
Clients fetch the status from HeartBeat themselves and then decide what to do next.
I'm not sure that RPC would be useful for us, since RPC means clients must wait for a response from the server. Tasks may run for a long time, and it adds an extra coupling between clients and servers plus additional logic on the client side.
A limited queue size may not be useful either.
It's possible for the queue size to end up greater than the number of workers (a problem of configuration or chosen settings).
In that case the 5-10 second idea doesn't make sense.
TTL isn't useful because of this:
Setting the TTL to 0 causes messages to be expired upon reaching a queue unless they can be delivered to a consumer immediately. Thus this provides an alternative to basic.publish's immediate flag, which the RabbitMQ server does not support. Unlike that flag, no basic.returns are issued, and if a dead letter exchange is set then messages will be dead-lettered.
Direct reply-to:
The RPC server will then see a reply-to property with a generated name. It should publish to the default exchange ("") with the routing key set to this value (i.e. just as if it were sending to a reply queue as usual). The message will then be sent straight to the client consumer.
In that case I will not be able to route messages.
So, I'm sorry if I'm floundering with the terminology; I'm new to AMQP and RabbitMQ.
I was wondering if this is possible. I want to pull a task from a queue and do work that could take anywhere from 3 seconds to possibly several minutes before an ack is sent back to RabbitMQ indicating that the work has been completed. The work is done by a user, hence the time it takes to process the job varies.
I don't want to ack the message immediately after I pop it off the queue, because I want the message to be requeued if no ack is received. Can anyone give me any insights into how to solve my problem?
Having a long timeout should be fine, and as you say you want redelivery if something goes wrong, so you should only ack after you finish.
The best way to achieve that, IMO, would be to have multiple consumers on the queue (i.e. multiple threads/processes consuming from the same queue). That should be fine as long as there's no particular ordering constraint on your queue contents (i.e. the way there might be if the queue were to contain contents representing Postgres data that involves FK constraints).
This tutorial on the RabbitMQ website provides more info (Python linked, but there should be similar tutorials for other languages): https://www.rabbitmq.com/tutorials/tutorial-two-python.html
Edit in response to comment from OP:
What's your heartbeat set to? If your worker doesn't send a heartbeat within the set period of time, the server will consider the connection to be dead.
Not sure which language you're using, but for Java you would use the setRequestedHeartbeat method to specify the heartbeat.
However you implement your workers, it's vital that heartbeats can still be sent back to the RabbitMQ server. If something blocks the client from sending them, the server will kill the connection after the time interval expires.
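As a hedged illustration of keeping heartbeats flowing during long-running work (sketched in Python with pika; the queue name and the work function are placeholders), one common pattern is to run the work on a separate thread and schedule the ack back onto the connection's thread when it finishes:

```python
import functools
import threading
import pika

params = pika.ConnectionParameters(host="localhost", heartbeat=60)  # illustrative heartbeat
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.basic_qos(prefetch_count=1)

def do_long_work(body):
    pass  # hypothetical user-driven work that may take minutes

def worker(delivery_tag, body):
    do_long_work(body)
    # The ack must happen on the connection's own thread, so schedule it there.
    connection.add_callback_threadsafe(
        functools.partial(channel.basic_ack, delivery_tag=delivery_tag))

def on_message(ch, method, properties, body):
    threading.Thread(target=worker, args=(method.delivery_tag, body)).start()

channel.basic_consume(queue="tasks",  # hypothetical queue name
                      on_message_callback=on_message,
                      auto_ack=False)
channel.start_consuming()  # keeps servicing I/O (including heartbeats) while workers run
```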