Celery task with a long ETA and RabbitMQ

RabbitMQ may enforce ack timeouts for consumers: https://www.rabbitmq.com/consumers.html#acknowledgement-modes
By default, if a task has not been acked within 15 minutes, the channel is closed with a PRECONDITION_FAILED error.
I need to schedule a Celery task (using RabbitMQ as the broker) with an ETA quite far in the future (1-3 hours), and as of now (with Celery 4 and RabbitMQ 3.8), when I try that I get PreconditionFailed once the consumer ack timeout configured for my RabbitMQ instance is exceeded.
I expected that the task would be acknowledged before its ETA ...
Is there a way to configure a Celery task with an ETA so that it is acknowledged within the consumer ack timeout?
Right now I am increasing the consumer_timeout to exceed my ETA delta, but there must be a better solution ...
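For reference, this is roughly how I schedule it (a minimal sketch; the task name is just a placeholder):
from datetime import datetime, timedelta, timezone
from celery import Celery

app = Celery('proj', broker='amqp://guest@localhost//')

@app.task
def send_report():
    ...

# Scheduled ~2 hours ahead: the message reaches the worker immediately
# and sits there unacknowledged until the ETA arrives.
send_report.apply_async(eta=datetime.now(timezone.utc) + timedelta(hours=2))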

I think adjusting the consumer_timeout is your only option in Celery 5. Note that this setting only exists in RabbitMQ 3.8.15 and newer.
Another possible solution is to have the workers ack the message immediately upon receipt. Do this only if you don't need to guarantee task completion: if the worker crashes before running the task, Celery will never know it wasn't completed.
In RabbitMQ, the best options for delayed tasks are the delayed-message-exchange plugin or dead-lettering, but Celery uses neither. In Celery, messages are published to the broker, which delivers them to consumers as soon as possible; the delay is enforced in the worker, not at the broker.
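You can see that worker-side behaviour yourself: an ETA task shows up in the worker's scheduled list while the broker queue looks empty. A minimal sketch, assuming your app instance is called app:
from celery import Celery

app = Celery('proj', broker='amqp://guest@localhost//')

# ETA/countdown tasks the worker has consumed but not yet executed (and not
# yet acked) are reported here, not in the RabbitMQ queue.
print(app.control.inspect().scheduled())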

There's a way to change this consumer_timeout for a running instance by running the following command on the RabbitMQ server:
rabbitmqctl eval 'application:set_env(rabbit, consumer_timeout, 36000000).'
This will set the new timeout to 10 hours (36,000,000 ms). For it to take effect, you need to restart your workers, though; existing worker connections will continue to use the old timeout.
You can check the current configured timeout value as well:
rabbitmqctl eval 'application:get_env(rabbit, consumer_timeout).'
If you are running RabbitMQ via the Docker image, set the value by adding -e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-rabbit consumer_timeout 36000000" to your docker run command, i.e. set the RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment variable to "-rabbit consumer_timeout 36000000".
Hope this helps!

I faced this problem too. I think you would be better off using a PeriodicTask; if you only want it to run once, set one_off=True.
https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html?highlight=periodic
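A minimal sketch with django-celery-beat (assuming it is installed and migrated; the task path and name below are placeholders):
from datetime import datetime, timedelta, timezone
from django_celery_beat.models import ClockedSchedule, PeriodicTask

# Run the task once, two hours from now. Beat publishes the message only when
# the clocked time arrives, so nothing sits unacked in RabbitMQ while waiting.
clocked = ClockedSchedule.objects.create(
    clocked_time=datetime.now(timezone.utc) + timedelta(hours=2),
)
PeriodicTask.objects.create(
    name='send-report-once',
    task='proj.tasks.send_report',
    clocked=clocked,
    one_off=True,
)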

I encountered the same problem and I resolved it.
With RabbitMQ version 3.8.14 (3.8.14-management), I am able to send long ETA tasks.
I personally use Celery to send tasks with a long ETA.
In my case, I set up Celery to add a timeout (analogous to consumer_timeout), which I can configure with time_limit or soft_time_limit.
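For what it's worth, here is a minimal sketch of those two options; note they bound how long the task may run on the worker, which is separate from RabbitMQ's consumer_timeout (do_heavy_work and cleanup are placeholders):
from celery import Celery
from celery.exceptions import SoftTimeLimitExceeded

app = Celery('proj', broker='amqp://guest@localhost//')

@app.task(soft_time_limit=60, time_limit=120)  # both in seconds
def crunch():
    try:
        do_heavy_work()   # placeholder for the real work
    except SoftTimeLimitExceeded:
        cleanup()         # soft limit hit: clean up before the hard limit kills the process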

I also wanted to do something similar and tried both the RabbitMQ delayed-message-exchange plugin and a dead-letter queue. I wrote an article about each and link them below; a rough sketch of the dead-letter pattern follows the links. I hope it helps someone. In a nutshell, both approaches can be used for scheduling Celery tasks (handling long ETAs).
Using DLX: Dead Letter Exchanges (DLX)
Using the RabbitMQ Delayed Message Plugin: RabbitMQ Delayed Message Plugin
P.S.: I know Stack Overflow answers should be self-contained, but the write-ups are too long to post here directly. Sorry.
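For reference, here is a rough sketch of the dead-letter pattern with pika (queue names and the payload are made up; the delayed-message plugin achieves the same thing with an x-delayed-message exchange instead):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# The "wait" queue has no consumers; when a message's TTL expires it is
# dead-lettered to the queue the Celery worker actually consumes from.
channel.queue_declare(queue='work_queue', durable=True)
channel.queue_declare(
    queue='wait_queue',
    durable=True,
    arguments={
        'x-message-ttl': 2 * 60 * 60 * 1000,       # delay of 2 hours, in ms
        'x-dead-letter-exchange': '',              # default exchange
        'x-dead-letter-routing-key': 'work_queue', # re-route here after the TTL
    },
)
channel.basic_publish(exchange='', routing_key='wait_queue', body=b'payload')
connection.close()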

Related

Redis still has data after Celery run

I have set up a Celery task that is using RabbitMQ as the broker and Redis as the backend. After running I noticed that my Redis server was still using a lot of memory. Upon inspection I found that there were still keys for each task that was created.
Is there a way to get Celery to clean up these keys once the result has been retrieved? I know some message brokers use acks; is there an equivalent for a Redis backend in Celery?
Yes, use result_expires. Please note that celery beat should run as well, as written in the documentation:
A built-in periodic task will delete the results after this time (celery.backend_cleanup), assuming that celery beat is enabled. The task runs daily at 4am.
Unfortunately Celery doesn't have acks for its backend, so the best solution for my project was to call forget on my responses after I was done with them.
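A minimal sketch of both approaches (the backend URL, expiry value, and task name are just examples):
from celery import Celery

app = Celery('proj', broker='amqp://localhost//', backend='redis://localhost:6379/0')
# Stored results expire after an hour instead of lingering in Redis forever.
app.conf.result_expires = 3600

result = some_task.delay()    # some_task is a placeholder
value = result.get(timeout=30)
result.forget()               # drop this result's key from Redis right away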

If celery worker dies hard, does job get retried?

Is there a way for a Celery job to be retried if the server where the worker is running dies? I don't just mean the sub-process that executes the job; I mean the entire server becoming unavailable.
I tried with RabbitMQ and Redis as brokers. In both cases, if a job is currently being processed, it is entirely forgotten. When a worker restarts, it doesn't even try to reprocess the job, and looking at Rabbit or Redis, their queues are empty. The result backend is also empty.
It looks like the worker grabs the message and assumes it will put it back if the subprocess fails, but if the worker itself dies, it can't put it back.
(yes, I work in an environment where this happens more than once a year, and I don't want to lose tasks)
In theory, setting task_acks_late=True should do the trick. (doc)
With a Redis broker, the task will be redelivered after visibility_timeout, which defaults to one hour. (doc)
With RabbitMQ, the task is redelivered as soon as RabbitMQ notices that the worker has died.
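A minimal sketch of that configuration (the one-hour visibility timeout is just an example value):
from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')

# Ack only after the task finishes, so an unacked message survives a dead worker.
app.conf.task_acks_late = True

# Redis broker only: an unacked message is redelivered after this many seconds.
# RabbitMQ instead redelivers as soon as the worker's connection drops.
app.conf.broker_transport_options = {'visibility_timeout': 3600}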

re-start celery queue after re-starting broker

I started my Celery worker queue (in the background):
celery worker -Q my_queue -l info
After this, its broker (Redis) was stopped, and meanwhile the background Celery worker keeps trying to reconnect to Redis with an ever-growing retry interval.
Now my goal is to restart a non-duplicate my_queue worker after restarting Redis. I realize that the following Celery API will not return my_queue until the reconnection is made:
celery.task.control.inspect().active_queues()
Now if I start a new my_queue worker, I will end up with a duplicate once the previous background worker reconnects.
A solution might be to make the Celery worker quit on its own when it finds that its broker has stopped, but I haven't found the right way to do this. I also don't want to kill it using a previously saved PID. Any suggestions or alternatives would be appreciated.
Well, I know it contradicts my requirement, but it seems that I do need the help of a PID file:
celery worker -Q my_queue -l info --pidfile=pid.log
which will raise an exception if the PID saved in pid.log belongs to a process that is already running.
This is still not the ideal solution; any suggestion on how to make the Celery worker quit on its own when its broker is found to be stopped would still be appreciated.
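One thing that may be worth trying (I haven't verified it across Celery versions): capping the broker connection retries so the worker gives up instead of retrying forever. A rough sketch:
from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')

# Don't keep retrying after an established broker connection is lost ...
app.conf.broker_connection_retry = False
# ... and give up after a few attempts when the broker can't be reached.
app.conf.broker_connection_max_retries = 3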

Celery works without broker and backend running

I'm running Celery on my laptop, with RabbitMQ as the broker and Redis as the backend. I just used all the default settings and ran celery -A tasks worker --loglevel=info, and it all worked. The workers get jobs done and I can fetch the execution results by calling result.get(). My question is why it works even though I didn't start the RabbitMQ and Redis servers at all. I did not set up accounts on the servers either. In many tutorials, the first step is to run the broker and backend servers before starting Celery.
I'm new to these tools and do not quite understand how they work behind the scene. Any input would be greatly appreciated. Thanks in advance.
Never mind. I just realized that Redis and RabbitMQ run automatically after installation or at shell startup. They must be running for Celery to work.

Can Celery email me if the task time limit is exceeded?

So I have a celery setup using RabbitMQ as the broker and amqp as the results backend.
Sometimes I have tasks that run long because I underestimated the needed timeout, and, as intended, Celery kills the worker running the task.
The problem is that because this is a Celery problem and not a task problem, my error handling that's supposed to email me from within the task never runs, and I receive no message about the failure.
Is there a way to have Celery do some error notification on its own when it kills a task due to Celery-related errors? Something like an on_timeout() function that I could define on the task? I really don't want the calling process to do the error handling, because the timeout is already a couple of hours and the calling process only runs for about 30 seconds.
Looks like this question is from a while ago and you've probably resolved the issue, but in case not, have you checked out the CELERY_SEND_TASK_ERROR_EMAILS config setting?
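For reference, a rough sketch of that old-style configuration (addresses and hosts are placeholders); note that the built-in error e-mails were removed in later Celery releases, so this only applies to older versions:
# celeryconfig.py -- old-style (uppercase) settings, matching the name above
CELERY_SEND_TASK_ERROR_EMAILS = True

# Recipients and mail server (placeholder values)
ADMINS = [('Ops', 'ops@example.com')]
SERVER_EMAIL = 'celery@example.com'
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25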