Temporary queues created by Celery with RabbitMQ

I am using Celery with RabbitMQ. Lately, I have noticed that a large number of temporary queues are being created.
So, I experimented and found that when a task fails (that is, a task raises an exception), a temporary queue with a random name (like c76861943b0a4f3aaa6a99a6db06952c) is created, and the queue remains.
Some properties of the temporary queue, as found in rabbitmqadmin, are as follows:
auto_delete : True
consumers : 0
durable : False
messages : 1
messages_ready : 1
One such temporary queue is created every time a task fails (that is, raises an exception). How can I avoid this? In my production environment a large number of such queues are being formed.

It sounds like you're using amqp as the result backend. From the docs, here are the pitfalls of that particular setup:
Every new task creates a new queue on the server, with thousands of tasks the broker may be overloaded with queues and this will affect performance in negative ways. If you’re using RabbitMQ then each queue will be a separate Erlang process, so if you’re planning to keep many results simultaneously you may have to increase the Erlang process limit, and the maximum number of file descriptors your OS allows.
Old results will not be cleaned automatically, so you must make sure to consume the results or else the number of queues will eventually go out of control. If you’re running RabbitMQ 2.1.1 or higher you can take advantage of the x-expires argument to queues, which will expire queues after a certain time limit after they are unused. The queue expiry can be set (in seconds) by the CELERY_AMQP_TASK_RESULT_EXPIRES setting (not enabled by default).
From what I've read in the changelog, this is no longer the default backend in versions >=2.3.0 because users were getting bit in the rear end by this behavior. I'd suggest changing the result backend if this is not the functionality you need.
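For example, a minimal sketch of a celeryconfig.py that switches to a Redis result backend (the Redis URL and the expiry value here are assumptions; any other supported backend such as a database works similarly):
# celeryconfig.py -- sketch assuming a Redis server is available locally
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'  # store task results in Redis instead of amqp
CELERY_TASK_RESULT_EXPIRES = 3600                   # clean up stored results after one hour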

Well, Philip is right about that. The following is a description of how I solved it; it is a configuration in celeryconfig.py.
I am still using CELERY_BACKEND = "amqp", as Philip had said. But in addition to that, I am now using CELERY_IGNORE_RESULT = True. This configuration ensures that the extra queues are not formed for every task.
I was already using this configuration, yet the extra queue was still formed when a task failed. Then I noticed another configuration that needed to be removed: CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True. With that set, results were not stored for all tasks, but they were stored for errors (tasks that failed), hence the one extra queue for each task that failed.
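In celeryconfig.py, the resulting configuration looked roughly like this (a sketch of the settings described above):
# celeryconfig.py -- sketch of the configuration described above
CELERY_BACKEND = "amqp"
CELERY_IGNORE_RESULT = True  # do not create a result queue per task
# CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True  # removed: this created a result queue for every failed task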

The CELERY_TASK_RESULT_EXPIRES setting dictates the time to live of the temporary queues. The default is 1 day. You can modify this value.
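For example, to expire the result queues after ten minutes instead of a day (a sketch for celeryconfig.py; the value is just an illustration):
CELERY_TASK_RESULT_EXPIRES = 600  # seconds; temporary result queues expire after 10 minutes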

The reason this is happening is that the Celery workers' remote control is enabled (it is enabled by default).
You can disable it by setting the CELERY_ENABLE_REMOTE_CONTROL setting to False.
However, note that you will lose the ability to do things like add_consumer, cancel_consumer, etc. using the celery command.
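A minimal sketch for celeryconfig.py:
CELERY_ENABLE_REMOTE_CONTROL = False  # no remote-control (pidbox) queues, but celery control/inspect stop working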

The amqp backend creates a new queue for each task. If you want to avoid this, you can use the rpc backend, which keeps results in a single queue.
In your config, set
CELERY_RESULT_BACKEND = 'rpc'
CELERY_RESULT_PERSISTENT = True
You can read more about this in the Celery docs.

Related

Checking whether RabbitMQ cluster is idle or not

I have been given a task to check whether a created RabbitMQ cluster is idle (i.e., has never been used) or not. I can think of only one check, which is the non-existence of queues and exchanges: if no queues have been created, we can easily say that the cluster has not been used. But my task is to collect all such cases by which we can check whether a created cluster is idle or has been used. So I would like help gathering more cases or situations in which a RabbitMQ cluster will not be active for some time and will be idle.
Because of RabbitMQ's behavior, a cluster that is currently not being used (but once was) looks exactly the same as one that has never been used (which is a good thing for performance).
Assuming that no client deletes the queue it is using, and that the cluster creation itself does not involve creating queues or exchanges, checking whether there are any existing queues (or any non-default exchanges) is your best bet at guessing whether any client has ever used a RabbitMQ cluster.
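For example, a minimal sketch that asks the management HTTP API whether any queues exist (this assumes the rabbitmq_management plugin is enabled and the default guest/guest credentials; rabbitmqctl list_queues gives the same information on the command line):
import requests

# list all queues known to the broker via the management HTTP API
resp = requests.get("http://localhost:15672/api/queues", auth=("guest", "guest"))
resp.raise_for_status()
queues = resp.json()
if queues:
    print("%d queue(s) exist; some client has used this cluster" % len(queues))
else:
    print("No queues declared; the cluster looks unused")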

Celery with RabbitMQ creating too many queues

When running Django/Celery/RabbitMQ on production server, some tasks are sent and consumed correctly. However, RabbitMQ starts using up all the CPU after processing is done. I believe this is related to the following report.
RabbitMQ on EC2 Consuming Tons of CPU
In that thread, it is suggested to set these config values:
CELERY_IGNORE_RESULT
CELERY_AMQP_TASK_RESULT_EXPIRES
I forked and customized the celery-haystack package to set both of those values when calling apply_async(), however it seems to have had no effect.
I think Celery is creating a large number (one per task) of uid-named queues automatically to store results. But I don't seem to be able to stop it.
Any ideas?
I just spent a day digging into this problem myself. I think the two options you mentioned can be explained like this:
CELERY_IGNORE_RESULT: if True, the results of tasks will be ignored, hence they won't return anything when you call them with delay or apply_async.
CELERY_AMQP_TASK_RESULT_EXPIRES: the expiration time for a result stored in the result backend. You can set this option to a reasonable value so RabbitMQ can delete expired results.
The many queues generated are for storing results only. So if you don't want to store any results, you can remove the CELERY_RESULT_BACKEND option from your config file.
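For example, a sketch of the relevant celeryconfig.py lines (the expiry value here is just an assumption):
CELERY_IGNORE_RESULT = True            # don't store results at all, so no per-task result queues
# CELERY_RESULT_BACKEND = "amqp"       # remove/comment this out if you don't need results
CELERY_AMQP_TASK_RESULT_EXPIRES = 300  # if you do keep the amqp backend, expire result queues after 5 minutes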
Have a nice day!

RabbitMQ change queue parameters on a production system

I'm using RabbitMQ as a message queue in a service-oriented architecture, where many separate web services publish messages bound for RabbitMQ queues. Those queues are in turn subscribed to by various consumers, which perform background work; a pretty vanilla use-case for RabbitMQ.
Now I'd like to change some of the queue parameters (specifically, I'd like to bind queues to a new dead-letter exchange with a certain routing key). My problem is that making this change in place on a production system is problematic for a couple reasons.
What's the best way for me to transition to these new queues without losing messages on a production system?
I've considered everything from versioning queue names to making a new vhost with the new settings to doing all the changes in place.
Here are some of the problems I'm facing:
Because RabbitMQ queue declarations are idempotent, the disparate web services have been declaring the queues before publishing to them (in case they don't already exist). Once you change the queue parameters (but maintain the same routing key), the queue declare fails and RabbitMQ closes the channel.
I'd like to not lose messages when changing a queue (here I'm planning on subscribing an exclusive consumer that saves the messages and then republishes to the new queue).
General coordination between disparate publishers and the consumer base (or, even better, a way to avoid needing to coordinate them).
Queue bindings can be added and removed at runtime without any impact on clients, unless the clients manually modify bindings. So if your question is only about bindings, just change them via the CLI or the web management panel and skip what is written below.
It's a common problem to make backwards-incompatible changes, especially in a heterogeneous environment, and especially when multiple applications attempt to declare the same entity in their own way (with their own specific settings). There is no easy way to change a queue declaration at the same time in multiple applications, and it depends heavily on how the whole workflow is organized, how critical your apps are, what your infrastructure looks like, and so on.
Fast and dirty way:
As long as the publishers don't deal with queue declarations and bindings (at least they should not do that), you can focus on the consumers. Wrapping the queue declaration in a try-except block may be the fast and dirty choice, as sketched below. Also, most projects can survive a small downtime, so you can block the RabbitMQ user in one shell, alter the queue as you wish (create a new one and make your consumers use it instead of the old one), and then unblock the user and let the consumers work as before (your workers are under supervisor or monit, right?). Then migrate the messages manually from the old queue to the new one.
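A minimal sketch of the try-except idea, assuming pika 1.x and hypothetical queue names and arguments:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
try:
    # re-declare the queue with the new arguments (a hypothetical dead-letter exchange here)
    channel.queue_declare(queue="tasks", durable=True,
                          arguments={"x-dead-letter-exchange": "dlx"})
except pika.exceptions.ChannelClosedByBroker:
    # PRECONDITION_FAILED: the queue already exists with different arguments;
    # the broker closed the channel, so open a fresh one and declare passively instead
    channel = connection.channel()
    channel.queue_declare(queue="tasks", passive=True)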
Fast and safe solution:
It is a bit tricky and based on a trick for migrating messages from one queue to another inside a single vhost. The whole solution works inside a single vhost but requires an extra queue for every queue you want to modify. Set up a Dead Letter Exchange on the source queue and point it to route expired messages to your new target queue. Then apply a Per-Queue Message TTL to the source queue, setting x-message-ttl=0 (its minimal value; see the "No Queueing at all" note about immediate delivery). Both actions can be done via the CLI or the management panel and can be applied to an already-declared queue (see the sketch after the note below). In this way your publishers can keep publishing messages as usual, and even the old consumers can work as expected at first, while in parallel the new consumers can consume from the new queue, which can be pre-declared with the new arguments manually or in some other way.
Note that on queues with a large number of messages and heavy message flow there is some risk of hitting flow control limits, especially if your server is already using almost all of its resources.
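As an illustration, a sketch of the TTL plus dead-letter policy applied through the management HTTP API (old_queue and new_queue are hypothetical names; the default vhost "/" and guest/guest credentials are assumptions, and rabbitmqctl set_policy can do the same from the command line):
import json
import requests

policy = {
    "pattern": "^old_queue$",                    # only match the source queue
    "apply-to": "queues",
    "definition": {
        "message-ttl": 0,                        # expire messages immediately
        "dead-letter-exchange": "",              # the default exchange routes by queue name
        "dead-letter-routing-key": "new_queue",  # so expired messages land in the target queue
    },
}
resp = requests.put(
    "http://localhost:15672/api/policies/%2F/move-to-new-queue",
    auth=("guest", "guest"),
    headers={"content-type": "application/json"},
    data=json.dumps(policy),
)
resp.raise_for_status()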
Much more complicated but safer approach (for cases when the whole message workflow logic has changed):
Make all the necessary changes to the applications and run the new codebase in parallel with the existing one, but on a different RabbitMQ vhost (or even a separate server; it depends on your applications' load and your hardware). It may actually be possible to run on the same vhost and just change the exchange and queue names, but that doesn't sound good and smells even in written form. After you set up the new apps, switch over from the old ones and run a message migration from the old queues to the new ones (or just let the old system empty the queues). This guarantees a seamless migration with minimal downtime. If your deployment is automated, the whole process will not take too much effort.
P.S.: in any of the cases above, if you can, let the old consumers empty the queues so you don't need to migrate messages manually.
Update:
You may find the Shovel plugin very useful, especially Dynamic Shovels, for moving messages between exchanges and queues, even between different vhosts and servers. It's the fastest and safest way to migrate messages between queues/exchanges.
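For example, a sketch of a dynamic shovel declared through the management HTTP API (this requires the rabbitmq_shovel and rabbitmq_shovel_management plugins; old_queue/new_queue and the credentials are assumptions, and rabbitmqctl set_parameter shovel ... works just as well):
import json
import requests

shovel = {
    "value": {
        "src-uri": "amqp://",    # source broker (local, default vhost)
        "src-queue": "old_queue",
        "dest-uri": "amqp://",   # destination broker (may be another vhost or server)
        "dest-queue": "new_queue",
    }
}
resp = requests.put(
    "http://localhost:15672/api/parameters/shovel/%2F/migrate-old-queue",
    auth=("guest", "guest"),
    headers={"content-type": "application/json"},
    data=json.dumps(shovel),
)
resp.raise_for_status()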

RabbitMQ queue length limit with flow control

If I declare a queue with x-max-length, all messages will be dropped or dead-lettered once the limit is reached.
I'm wondering whether, instead of dropping or dead-lettering messages, RabbitMQ could activate the Flow Control mechanism, like the memory/disk watermarks do. The reason is that I want to preserve the message order (FIFO behaviour on submission), and it would be much more convenient to slow down the producers.
Try to implement the queue length limit at the application level. Say, increment/decrement a Redis key and check it against a maximum value, as sketched below. It might not be as accurate as the native RabbitMQ mechanism, but it works pretty well on a separate queue/exchange without affecting other ones on the same broker.
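A rough sketch of that idea with redis-py (the key name, the limit, and the publish callable are hypothetical):
import time
import redis

r = redis.Redis()
MAX_LEN = 10000
KEY = "queue_len:orders"

def publish_with_backpressure(publish, message):
    # block (i.e. slow the producer down) while the counter says the queue is full
    while int(r.get(KEY) or 0) >= MAX_LEN:
        time.sleep(0.1)
    r.incr(KEY)
    publish(message)

def on_message_acked():
    # the consumer decrements the counter after it acks each message
    r.decr(KEY)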
P.S. Alternatively, for some tasks RabbitMQ is not the best choice and old-school relational databases (MySQL, PostgreSQL or whatever you like) work best, while RabbitMQ can still be used as an event bus.
There are two open issues related to this topic on the rabbitmq-server GitHub repo. I recommend expressing your interest there:
Block publishers when queue length limit is reached
Nack messages that cannot be deposited to all queues due to max length reached

celeryev Queue in RabbitMQ Becomes Very Large

I am using Celery on RabbitMQ. I have been sending thousands of messages to the queue, they are being processed successfully, and everything is working just fine. However, the number of messages in several RabbitMQ queues is growing quite large (hundreds of thousands of items in the queue). The queues are named celeryev.[...]. Is this appropriate behavior? What is the purpose of these queues, and shouldn't they be regularly purged? Is there a way to purge them more regularly? I think they are taking up quite a bit of disk space.
You can use the CELERY_EVENT_QUEUE_TTL Celery option (it only works with the amqp transport); it sets the message expiry time, after which a message is deleted from the queue.
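For example, in celeryconfig.py (the 10-second value here is just an assumption):
CELERY_EVENT_QUEUE_TTL = 10  # seconds; event messages older than this expire (amqp transport only)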
For anyone else who is running into problems with a celeryev queue becoming very large and threatening the disk space on your rabbitmq server, beware the accepted answer! Here's my suggestion. Just issue this command on your rabbitmq instance:
rabbitmqctl set_policy limit_celeryev_queues "^celeryev\." '{"max-length":1000000}' --apply-to queues
This will limit any queue beginning with "celeryev" to one million entries. I did some experimenting with a stuck flower instance causing a runaway celeryev queue, and setting CELERY_EVENT_QUEUE_TTL / CELERY_EVENT_QUEUE_EXPIRES did not help control the queue size.
In my testing, I started a flower process, then SIGSTOP'ed it, and watched its celeryev queue start running away. Neither of these two settings helped at all. I confirmed SIGCONT'ing the flower process would bring the queue back to 0 rapidly. I am not certain why these two knobs didn't help, but it may have something to do with how RabbitMQ implements these two settings.
First, the Per-Message TTL corresponding to CELERY_EVENT_QUEUE_TTL only establishes an expiration time on each queue entry -- AIUI it will not automatically delete the message out of the queue to save space upon expiration. Second, the Queue TTL corresponding to CELERY_EVENT_QUEUE_EXPIRES says that it "... guarantees that the queue will be deleted, if unused for at least the expiration period". However, I believe that their definition of "unused" may be too strict to kick in for e.g. an overburdened, stuck, or killed flower process.
EDIT: Unfortunately, one problem with this suggestion is that the set_policy ... apply-to queues will only impact existing queues, and flower can and will create new queues which may overflow.
Celery uses celeryev-prefixed queues (and an exchange) for monitoring; you can configure them as you want or disable them entirely (celery control disable_events).
You just have to set a config option in your Celery configuration.
If you want to prevent Celery from creating celeryev.* queues:
CELERY_SEND_EVENTS = False # Will not create celeryev.* queues
If you need these queues for monitoring purposes (Celery Flower, for instance), you can have them expire regularly:
CELERY_EVENT_QUEUE_EXPIRES = 60 # Will delete all celeryev. queues without consumers after 1 minute.
The solution came from here: https://www.cloudamqp.com/docs/celery.html
You can limit the queue size in RabbitMQ with the x-max-length queue declaration argument:
http://www.rabbitmq.com/maxlength.html
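For example, a minimal sketch of declaring a capped queue with pika (the queue name and the limit are hypothetical):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(
    queue="events",
    durable=True,
    arguments={"x-max-length": 100000},  # oldest messages are dropped (or dead-lettered) once the cap is reached
)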