I'm having some trouble using Airflow 1.9.0 with the CeleryExecutor and Redis as the broker.
I need to run a job that takes more than 6 hours to complete, and I'm losing my Celery workers.
Looking into the Airflow code on GitHub, there is a hard-coded configuration:
https://github.com/apache/incubator-airflow/blob/d760d63e1a141a43a4a43daee9abd54cf11c894b/airflow/config_templates/default_celery.py#L31
How can I work around this problem?
This is configurable in airflow.cfg under the section celery_broker_transport_options.
See the commit that added this option: https://github.com/apache/incubator-airflow/commit/be79f87f36b6b99649e0a1f6ab92b41640b3beaa
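For example, on versions that include that commit, the override looks roughly like this in airflow.cfg (the value below is just an illustration; pick something, in seconds, that comfortably exceeds your longest task runtime):

[celery_broker_transport_options]
visibility_timeout = 43200

With a longer visibility_timeout, Redis will not re-deliver the task message while your 6+ hour job is still running.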
I installed and set up Airflow 2.0.1 with Celery, RabbitMQ and PostgreSQL 9.6 on RHEL 7, using the constraints file https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-3.7.txt.
So I am not using a Docker container; in fact I am building a cluster with 3 nodes.
I created the database and user for PostgreSQL, created a user and set permissions for RabbitMQ, and I am able to access its web UI on port 15672.
I am able to run the airflow webserver and scheduler and access the Airflow web UI with no problem.
The issue arises when I try to start an airflow worker (whether from the master node or the worker nodes). Even though airflow.cfg is set to point to RabbitMQ, I get this error:
ImportError: Missing redis library (pip install redis)
It is trying to access Redis instead of RabbitMQ, but I have no idea why.
I checked airflow.cfg line by line and there is not a single line mentioning Redis, so am I missing a configuration setting?
My airflow.cfg configuration:
sql_alchemy_conn = postgresql+psycopg2://airflow_user:airflow_pw@10.200.159.59:5432/airflow
broker_url = amqp://rabbitmq_user:rabbitmq_pw@10.200.159.59:5672/airflow_virtual_host
celery_result_backend = db+postgresql://airflow_user:airflow_pw@10.200.159.59:5432/airflow
dags_are_paused_at_creation = True
load_examples = False
Why does my Airflow worker try to reach Redis when it is configured for RabbitMQ?
I found the problem after spending many hours on such a simple, silly issue.
Airflow still tried to connect to Redis, which is the default broker in the Airflow config, despite my RabbitMQ settings in airflow.cfg, because I had written all of the options under the [core] section, whereas each option must go under its proper section of airflow.cfg.
I moved broker_url and result_backend under [celery] and the issue was resolved.
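For reference, a sketch of how the relevant parts of airflow.cfg end up looking (the point is the section placement; the hosts and credentials are the ones from the question):

[core]
sql_alchemy_conn = postgresql+psycopg2://airflow_user:airflow_pw@10.200.159.59:5432/airflow
dags_are_paused_at_creation = True
load_examples = False

[celery]
broker_url = amqp://rabbitmq_user:rabbitmq_pw@10.200.159.59:5672/airflow_virtual_host
result_backend = db+postgresql://airflow_user:airflow_pw@10.200.159.59:5432/airflow

When broker_url is missing from [celery], Airflow falls back to its default Celery settings, which point at Redis, hence the misleading ImportError about the redis library.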
I am using Celery with Redis and it works fine, but the problem is that I have to start the worker manually. Is there any way to start the Celery worker automatically instead of starting it by hand?
There is a whole section about this in the Celery documentation (Daemonization).
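If you are on a systemd-based distribution, a minimal sketch of a unit file might look like this (the user, paths and app module are placeholders; the Daemonization section of the Celery docs has the full recommended templates, including celery multi based setups):

[Unit]
Description=Celery worker
After=network.target redis.service

[Service]
Type=simple
User=celery
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/venv/bin/celery -A tasks worker --loglevel=info
Restart=on-failure

[Install]
WantedBy=multi-user.target

Install it as, say, /etc/systemd/system/celery-worker.service and enable it with systemctl enable --now celery-worker.service.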
I am trying to set up an Airflow cluster. I am planning to use Redis as the Celery backend.
I have seen people using Redis with Sentinel successfully. I wanted to know if it is possible to use Redis Cluster instead?
If not then why not?
Celery does not support using Redis Cluster as a broker. It can use a highly available Redis setup as a broker (with Sentinels), but it has no support for Redis Cluster as a broker.
Reference:
Airflow CROSSSLOT Keys in request don't hash to the same slot error using AWS ElastiCache
How to use more than 2 redis nodes in django celery
To make Redis Cluster work, we would need to swap in a different Celery backend, which is not a feasible solution:
https://github.com/hbasria/celery-redis-cluster-backend
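For completeness, at the Celery level a Sentinel-backed broker is configured roughly like this (the Sentinel hosts and master_name below are placeholders; with Airflow you would put the equivalent values into broker_url and the broker transport options in airflow.cfg):

from celery import Celery

# Placeholder Sentinel nodes; Celery asks them for the address of the current Redis master.
app = Celery(
    'worker',
    broker='sentinel://10.0.0.1:26379;sentinel://10.0.0.2:26379;sentinel://10.0.0.3:26379',
)
# master_name must match the master name configured in your Sentinel setup.
app.conf.broker_transport_options = {'master_name': 'mymaster'}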
I have set up a Celery task that is using RabbitMQ as the broker and Redis as the backend. After running I noticed that my Redis server was still using a lot of memory. Upon inspection I found that there were still keys for each task that was created.
Is there a way to get Celery to clean up these keys only after the response has been received? I know some message brokers use acks; is there an equivalent for a Redis backend in Celery?
Yes, use result_expires. Please note that celery beat should run as well, as written in the documentation:
A built-in periodic task will delete the results after this time (celery.backend_cleanup), assuming that celery beat is enabled. The task runs daily at 4am.
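A minimal sketch of that setting (the broker and backend URLs are placeholders):

from celery import Celery

app = Celery('tasks', broker='amqp://localhost//', backend='redis://localhost:6379/0')
# Keep results for one hour; the built-in celery.backend_cleanup task,
# scheduled by celery beat, then deletes the expired keys once a day.
app.conf.result_expires = 3600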
Unfortunately Celery doesn't have acks for its backend, so the best solution for my project was to call forget on my responses after I was done with them.
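A short sketch of that approach (the broker/backend URLs and the add task are purely illustrative):

from celery import Celery

app = Celery('tasks', broker='amqp://localhost//', backend='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y

result = add.delay(2, 3)
print(result.get(timeout=10))
result.forget()  # drop this task's celery-task-meta-* key from Redis as soon as the value is read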
I'm running Celery on my laptop, with RabbitMQ as the broker and Redis as the backend. I just used all the default settings and ran celery -A tasks worker --loglevel=info, and it all worked. The workers get jobs done and I can fetch the execution results by calling result.get(). My question is why this works even though I didn't start the RabbitMQ and Redis servers at all. I did not set up accounts on the servers either. In many tutorials, the first step is to run the broker and backend servers before starting Celery.
I'm new to these tools and do not quite understand how they work behind the scenes. Any input would be greatly appreciated. Thanks in advance.
Never mind. I just realized that Redis and RabbitMQ run automatically as services after installation or on system startup. They must be running for Celery to work.