RabbitMQ may enforce ack timeouts for consumers: https://www.rabbitmq.com/consumers.html#acknowledgement-modes
By default, if a task has not been acked within the timeout (15 minutes in the RabbitMQ 3.8 releases that introduced it), RabbitMQ closes the consumer's channel with a PRECONDITION_FAILED error.
I need to schedule a Celery task (using RabbitMQ as the broker) with an ETA quite far in the future (1–3 hours), and as of now (with Celery 4 and RabbitMQ 3.8), when I try that I get PRECONDITION_FAILED once the consumer ack timeout configured for my RabbitMQ instance elapses.
I expected that the task would be acknowledged before its ETA.
Is there a way to configure a Celery ETA task so that it is acknowledged within the consumer ack timeout?
Right now I am increasing the consumer_timeout to a value above my ETA delta, but there must be a better solution ...
I think adjusting the consumer_timeout is your only option in Celery 5. Note that this is only applicable for RabbitMQ 3.8.15 and newer.
Another possible solution is to have the workers ack the message immediately upon receipt. Do this only if you don't need to guarantee task completion. For example, if the worker crashes before doing the task, Celery will not know that it wasn't completed.
In RabbitMQ, the usual options for delayed messages are the delayed-message-exchange plugin or dead lettering, but Celery uses neither. With Celery, messages are published to the broker and delivered to consumers as soon as possible; the ETA delay is enforced inside the worker, not at the broker.
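To illustrate, here is a minimal sketch of how an ETA task is typically published (the task and module names are placeholders). The broker hands the message to a worker right away; the worker then holds it, unacknowledged, until the ETA is reached:

from datetime import datetime, timedelta, timezone
from tasks import my_task  # placeholder task module

# Both calls deliver the message to a worker immediately; the worker
# simply waits to execute it, keeping the message unacked in the meantime.
my_task.apply_async(countdown=2 * 60 * 60)                                # run in ~2 hours
my_task.apply_async(eta=datetime.now(timezone.utc) + timedelta(hours=3))  # run at an absolute time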
There's a way to change this consumer_timeout for a running instance by running the following command on the RabbitMQ server:
rabbitmqctl eval 'application:set_env(rabbit, consumer_timeout, 36000000).'
This will set the new timeout to 10 hrs (36000000ms). For this to take effect, you need to restart your workers though. Existing worker connections will continue to use the old timeout.
You can check the current configured timeout value as well:
rabbitmqctl eval 'application:get_env(rabbit, consumer_timeout).'
If you are running RabbitMQ from the Docker image, set the value by adding -e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-rabbit consumer_timeout 36000000" to your docker run command, or by setting the RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment variable to "-rabbit consumer_timeout 36000000".
Hope this helps!
I faced this problem too. I think you would be better off using a PeriodicTask; if you only want it to run once, set one_off=True.
https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html?highlight=periodic
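For reference, a minimal sketch of that approach, assuming django-celery-beat (which provides PeriodicTask, one_off and ClockedSchedule); the task name and run time are placeholders:

import json
from datetime import datetime, timedelta, timezone
from django_celery_beat.models import ClockedSchedule, PeriodicTask

# Schedule "myapp.tasks.my_task" to run once, roughly two hours from now.
clocked = ClockedSchedule.objects.create(
    clocked_time=datetime.now(timezone.utc) + timedelta(hours=2)
)
PeriodicTask.objects.create(
    name="run-my-task-once",
    task="myapp.tasks.my_task",
    clocked=clocked,
    one_off=True,             # disable the entry after it fires once
    args=json.dumps([1, 2]),  # positional args, JSON-encoded
)

Celery beat (with the database scheduler) then publishes the task at the clocked time, so nothing sits unacknowledged on the broker in the meantime.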
I encountered the same problem and I resolved it.
With RabbitMQ version 3.8.14 (3.8.14-management), I am able to send long ETA tasks.
I personally use Celery to send tasks with a long ETA.
In my case, I set up Celery to add its own timeout (similar in spirit to consumer_timeout), which can be configured with time_limit or soft_time_limit.
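A minimal sketch of those settings, assuming a Celery app object named app (the limits shown are arbitrary):

from celery import Celery
from celery.exceptions import SoftTimeLimitExceeded

app = Celery("tasks", broker="amqp://localhost")

# Global defaults: raise SoftTimeLimitExceeded after 10 minutes,
# then forcefully terminate the task after 15 minutes.
app.conf.task_soft_time_limit = 600
app.conf.task_time_limit = 900

# Per-task override of the global limits:
@app.task(soft_time_limit=60, time_limit=120)
def my_task():
    try:
        ...  # long-running work
    except SoftTimeLimitExceeded:
        ...  # clean up before the hard limit kills the task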
I also wanted to do something similar and tried both the RabbitMQ Delayed Message Plugin and a dead-letter queue. I wrote an article about each and have linked them below; I hope they are helpful to someone. In a nutshell, both approaches can be used for scheduling Celery tasks (i.e. handling long ETAs); a rough dead-letter sketch follows the list.
Using a dead-letter exchange: Dead Letter Exchanges (DLX)
Using the RabbitMQ Delayed Message Plugin
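To give a flavour of the dead-letter approach, here is a minimal sketch using pika: messages published to a "wait" queue expire after a per-queue TTL and are then dead-lettered into the queue the worker actually consumes from (queue names and the TTL are placeholders):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Destination queue that the consumer/worker reads from.
ch.queue_declare(queue="tasks", durable=True)

# "Wait" queue: messages sit here until the TTL expires, then RabbitMQ
# dead-letters them to the default exchange with routing key "tasks",
# i.e. into the destination queue above.
ch.queue_declare(
    queue="tasks.wait",
    durable=True,
    arguments={
        "x-dead-letter-exchange": "",          # default exchange
        "x-dead-letter-routing-key": "tasks",
        "x-message-ttl": 3 * 60 * 60 * 1000,   # delay: 3 hours, in ms
    },
)

# Publishing to the wait queue effectively schedules delivery for later.
ch.basic_publish(exchange="", routing_key="tasks.wait", body=b"payload")
conn.close()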
P.S. I know Stack Overflow answers should be self-contained, but the full write-ups are too long to post here, so I am linking to them instead. Sorry.
I am using Celery with Redis and it's working fine, but the problem is that I have to start the worker manually. Is there any way to start the Celery worker automatically, without doing it by hand?
There is a whole section in the Celery documentation about this: the Daemonization guide, which covers running workers as background services.
Recently, I started having some trouble with one of my Redis clusters: used_memory and used_memory_rss keep increasing constantly.
After some Googling, I found the following discussion:
https://github.com/antirez/redis/issues/4570
Now I am wondering: is it safe to run the SCRIPT FLUSH command on my production Redis cluster?
Yes, you can run the SCRIPT FLUSH command safely in a production cluster. The only potential side effect is blocking the server while it executes. Note, however, that you'll want to call it on each of your nodes.
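A minimal sketch of doing that with redis-py, assuming you can reach each node directly (the addresses are placeholders):

from redis import Redis

# Placeholder node addresses; replace with your cluster's actual nodes.
nodes = [("10.0.0.1", 6379), ("10.0.0.2", 6379), ("10.0.0.3", 6379)]

for host, port in nodes:
    r = Redis(host=host, port=port)
    r.script_flush()  # clears the Lua script cache on this node
    print(host, r.info("memory")["used_memory_human"])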
We're running a Flask application and do all our heavy processing with Celery. We use a Redis instance from Amazon as our message broker. We just had a failure, causing much pain and bleeding, so we're looking into fail-over strategies.
The first project we came across was celery-redis-sentinel: https://github.com/dealertrack/celery-redis-sentinel
Would this be something that would give us a fail-over capability?
We've been doing some tests, and it seems not to be working as anticipated.
In your case, maybe moving the Celery broker/backend to RabbitMQ would be better, as RabbitMQ is a lot more persistent with its data.
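A minimal sketch of what that switch could look like, assuming a local RabbitMQ instance and Celery's rpc result backend (the URLs are placeholders):

from celery import Celery

# Broker and result backend both served by RabbitMQ; the rpc:// backend
# returns results over AMQP instead of storing them in Redis.
app = Celery(
    "myapp",
    broker="amqp://guest:guest@localhost:5672//",
    backend="rpc://",
)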
I'm running Celery on my laptop, with RabbitMQ as the broker and Redis as the backend. I just used all the default settings and ran celery -A tasks worker --loglevel=info, and it all worked. The workers get jobs done and I can fetch the execution results by calling result.get(). My question is: why does it work even though I never started the RabbitMQ and Redis servers at all? I did not set up accounts on the servers either. In many tutorials, the first step is to run the broker and backend servers before starting Celery.
I'm new to these tools and do not quite understand how they work behind the scenes. Any input would be greatly appreciated. Thanks in advance.
Never mind. I just realized that Redis and RabbitMQ start automatically after installation (they run as background services), so they were already up. They do need to be running for Celery to work.
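For anyone else who lands here, the setup described above is essentially the standard tutorial layout; a minimal sketch, assuming RabbitMQ as the broker and Redis as the backend on their default local ports:

# tasks.py
from celery import Celery

app = Celery(
    "tasks",
    broker="amqp://guest:guest@localhost:5672//",  # RabbitMQ default
    backend="redis://localhost:6379/0",            # Redis default
)

@app.task
def add(x, y):
    return x + y

Running celery -A tasks worker --loglevel=info only works because both of those services are already listening on their default ports with their default (guest/passwordless) credentials.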