We are using the Spring AMQP library to consume messages from queues in RabbitMQ. Our publisher produces a variable number of messages on a known schedule, so we are thinking about starting consumers on that schedule and stopping them once the queue is empty.
I am wondering how to gracefully close the channel and connection when the queue length reaches zero.
The upcoming 1.6 release (the release candidate came out last week; the GA is due at the end of next week) has a new feature that emits an event when the listener container goes idle.
You can stop the container when such an event is received. Do not stop the container on the thread that the event listener is called on; instead, hand the event off to a new thread. If you try to stop the container on the same thread, it will cause a delay because the container waits for that thread to be released.
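The hand-off pattern described above can be modeled language-agnostically. The real implementation would be Java against Spring AMQP 1.6+'s `ListenerContainerIdleEvent`, but the threading concern is the same; `StubContainer` below is purely illustrative (not a Spring class), with a `stop()` that waits for the listener thread the way the real container does.

```python
import threading

class StubContainer:
    """Illustrative stand-in for a listener container whose stop()
    waits for the listener thread to finish."""
    def __init__(self):
        self._idle = threading.Event()
        self._thread = threading.Thread(target=self._listen)

    def start(self):
        self._thread.start()

    def _listen(self):
        # Pretend we consumed until the queue went empty, then sit idle.
        self._idle.wait()

    def stop(self):
        # Like the real container, stop() waits for the listener thread,
        # which is why stopping from that same thread causes a delay.
        self._idle.set()
        self._thread.join()

def on_idle_event(container):
    # The recommended pattern: hand the stop off to a fresh thread
    # rather than stopping on the thread that delivered the event.
    threading.Thread(target=container.stop).start()

container = StubContainer()
container.start()
on_idle_event(container)
```

The stop thread is free to block in `join()` without holding up the event-delivery thread, which is the whole point of the hand-off.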
Related
I am experimenting with the pika API for RabbitMQ and am observing some weird behaviour. I have three threads.
Thread 1: Creates new queues and produces messages using a thread specific connection conn1. All messages get routed to all queues.
Thread 2: Instantiates a SelectConnection, conn2, and a channel, chan1
Thread 3: In a loop, calls chan1.basic_consume() for each queue created by thread 1. Each queue has a unique message handler (one consumer per queue).
When I look at the output, I find that the last message handler is handling messages from all the queues.
How do I ensure that the handlers for the different consumers don't get overwritten? Is this because I am calling basic_consume() from a separate thread, or because I am calling it after ioloop.start()?
I can paste the code here if required.
I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it works slower than the rate of the incoming messages, so I'm trying to control the traffic on the consumer.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the old messages when it is filled up, so the newest commands are at the front of the queue.
Also, by setting basicQos to 1, I ensure that the consumer doesn't grab all the messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid, since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
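The drop-oldest behavior this setup relies on can be sketched with a bounded deque: a RabbitMQ queue with a length limit (`x-max-length`) drops messages from the head, i.e. the oldest ones, by default once the limit is reached, which is exactly what `collections.deque(maxlen=...)` does.

```python
from collections import deque

queue = deque(maxlen=5)          # like a queue with x-max-length: 5
for command in range(1, 11):     # the publisher outruns the consumer
    queue.append(command)

# Only the five newest commands survive; commands 1..5 were dropped
# from the head of the queue as newer ones arrived.
print(list(queue))               # [6, 7, 8, 9, 10]
```

So the consumer, pulling one command at a time, always sees the freshest commands rather than a growing backlog.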
I added a messageId and tested, and found that many messages are still coming and going long after the publisher has been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure it isn't the motor's queue that's acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is being called every time, so it's definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that with basicQos set to 1 the callback was executed for each message one after another, but because messages were acknowledged up front, the client kept grabbing more messages from the queue in the background.
So even after the publisher was shut down, the consumer still held messages it had already taken from the queue, and those were the messages I saw being executed.
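The fix can be sketched with a stand-in broker (`FakeBroker` below is illustrative, not a real client API): with a prefetch of 1, the broker only delivers the next message once the previous one is acknowledged, so acking at the *end* of the callback is what actually throttles delivery to one in-flight message at a time.

```python
class FakeBroker:
    """Illustrative model of a broker honoring a prefetch window."""
    def __init__(self, messages, prefetch=1):
        self.pending = list(messages)
        self.prefetch = prefetch
        self.unacked = 0

    def deliver(self, callback):
        # Deliver only while the unacked count is under the prefetch limit.
        while self.pending and self.unacked < self.prefetch:
            msg = self.pending.pop(0)
            self.unacked += 1
            callback(self, msg)

    def ack(self):
        self.unacked -= 1

processed = []

def callback(channel, msg):
    processed.append(msg)   # the slow work happens here (move the motor)
    channel.ack()           # acknowledge only after the work is done

broker = FakeBroker(["cmd1", "cmd2", "cmd3"])
broker.deliver(callback)
print(processed)            # ['cmd1', 'cmd2', 'cmd3']
```

If the callback returned without acking, `deliver()` would stop after the first message, because the prefetch window of 1 stays full; acking before the work, by contrast, is what lets a real client buffer messages behind your back.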
I am doing a POC with RabbitMQ and have a question about how to listen to queues conditionally.
We consume messages from a queue; once consumed, each message triggers an upload process whose duration depends on the file size. When the files are large, the external service we invoke sometimes runs out of memory, because new messages are consumed while the uploads for previous messages are still in progress.
We would therefore like to consume the next message from the queue only once the current message has been processed completely. I am new to JMS and wondering how to do this.
My current thought is that the code flow could manually pull the next message from the queue when it finishes processing the previous one, since the flow knows when processing has completed. But if the listener is only invoked manually from the code flow, how would it pull the very first message?
The JMS spec says that message consumers work sequentially:
The session used to create the message consumer serializes the execution of all message listeners registered with the session
If you create a MessageListener and use it with your consumer, the JMS spec guarantees that the listener's onMessage will be called sequentially: one message at a time, with each call starting only after the previous message has been fully processed. So in effect each message waits until the previous one has completed.
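The serial guarantee quoted above can be modeled with a stand-in (`FakeSession` is illustrative, not a JMS API; the real code would use `javax.jms.Session` and `MessageListener`): one session dispatches to its listeners on a single thread, so each onMessage call finishes before the next begins, and no extra coordination is needed inside the listener.

```python
class FakeSession:
    """Illustrative model of a JMS session's serialized dispatch."""
    def __init__(self):
        self.listeners = []
        self.in_flight = 0
        self.max_in_flight = 0

    def register(self, listener):
        self.listeners.append(listener)

    def dispatch(self, messages):
        # Serial dispatch: one message is fully handled before the next.
        for msg in messages:
            for on_message in self.listeners:
                self.in_flight += 1
                self.max_in_flight = max(self.max_in_flight, self.in_flight)
                on_message(msg)      # e.g. the slow upload happens here
                self.in_flight -= 1

session = FakeSession()
handled = []
session.register(lambda msg: handled.append(msg))
session.dispatch(["upload-1", "upload-2", "upload-3"])
print(handled)                 # ['upload-1', 'upload-2', 'upload-3']
print(session.max_in_flight)   # 1: handler calls never overlap
```

Because at most one handler call is ever in flight, the upload for one message naturally blocks consumption of the next, which is the behavior the question asks for.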
I have Celery set up to fetch tasks from RabbitMQ, and things are working as expected, but I've noticed the following behavior (T: task, P: process):
--> Fetch first batch of messages (6 tasks) from broker
<-- messages are received. Start them
--> Send T1..T6 to be executed by P1..P6
--> Prefetch 6 new messages from broker, but do not ACK them
<-- P1..P5 finish tasks T1..T5, but T6 is still being processed (it will take ~2h)
At this point no other tasks start running, despite the fact that I have concurrency set to 6 and only one process is active. I have tried the add_consumer command in celery-flower, but nothing seems to happen. I can see in RabbitMQ that there are messages that have not been ACKed yet, and the messages in the READY state just keep stacking up, since they won't be consumed for another ~2h.
Is there a way to setup celery so that whenever a process is free, it will consume the next task, instead of waiting for the original batch to completely finish?
Is there any way I can achieve this:
Write a message to a queue
Block the producer process until there is a consumer on the other side
If there is no consumer after 10 seconds, raise an exception
If there is a consumer, unblock the producer process
When the 10sec timeout is reached and an exception is raised on the producer side, the message should be kept in the queue, so that a consumer can consume it later
I want to be able to notify a consumer in an asynchronous way.
So far I'm just sending a message. I want to know whether there is an immediate consumer, but if there is not, the message should still sit on the queue. That doesn't seem to be the behavior of AMQP's "immediate" flag.
Interesting problem; unfortunately there isn't an elegant solution.
From the RabbitMQ documentation the "immediate" flag works like this:
This flag tells the server how to react if the message cannot be routed to a queue consumer immediately. If this flag is set, the server will return an undeliverable message with a Return method. If this flag is zero, the server will queue the message, but with no guarantee that it will ever be consumed.
You could solve your problem in part using the immediate flag, I'm thinking something like this:
When the producer is ready to queue a message it fires it off with the immediate flag set
If the message is returned, start a timer and keep retrying with the immediate flag set for up to 10 seconds
If after 10 seconds of retrying it has still not been picked up, publish it with the immediate flag set to false (so that your consumer will pick it up when it comes online)
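The retry scheme above can be sketched with stand-in functions (`try_publish` and `publish_queued` are hypothetical: the first models a publish with the immediate flag, returning True when a consumer took the message and False when the broker returned it; the second models a normal publish that just queues the message). One caveat worth knowing: RabbitMQ removed support for the immediate flag in version 3.0, so on modern brokers the "was it consumed right away?" signal has to come from elsewhere, but the retry logic itself is unchanged.

```python
import time

def publish_with_timeout(try_publish, publish_queued,
                         timeout=10.0, retry_delay=0.5):
    """Retry immediate-style publishes for `timeout` seconds, then
    queue the message and raise, so a later consumer can still get it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if try_publish():        # immediate-style attempt
            return True          # a consumer picked the message up
        time.sleep(retry_delay)  # message was returned: wait and retry
    publish_queued()             # fall back: queue it for a later consumer
    raise TimeoutError("no consumer appeared within %s seconds" % timeout)
```

Publishing the fallback copy *before* raising satisfies the requirement that the message stays in the queue even when the producer gets the timeout exception.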