Celery is running but it is not executing any task - Redis

Celery with Redis is running, but the task is never executed in FastAPI.
Commands:
redis-server
celery -A core.celery_app_work worker --loglevel=info -P eventlet
Celery Flower output (screenshot)
I want to execute my task in FastAPI using Celery.
celery_app_work.py
from celery import Celery
from celery.schedules import crontab

celery_app = Celery(__name__,
                    broker='redis://localhost:6379',
                    backend='redis://localhost:6379/0')

celery_app.conf.task_routes = {'backend.core.worker_task.*': 'example-queue'}
worker_task.py
from .celery_app_work import celery_app

data = {'task': 'celery'}

@celery_app.task(name="create_task")
def test_celery(data: str):
    print('inside celery')
    for i in range(10):
        return data
structure -
|--backend
|  |--core
|  |  |--celery_app_work.py
|  |  |--worker_task.py
|  |--main.py

According to the documentation, each task_routes entry should map to a dict with a 'queue' key:
celery_app.conf.task_routes = {'backend.core.worker_task.*': {'queue': 'example-queue'}}
Additionally, you probably need to tell Celery that the queue example-queue should exist:
from kombu import Exchange, Queue

exchange = Exchange('default', type='direct')
celery_app.conf.task_queues = (
    Queue('example-queue', exchange, routing_key='default'),
)
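Beyond routing, two things commonly cause "running but never executing" here (a sketch, assuming the worker is started from the backend directory so that -A core.celery_app_work resolves): the task decorator must actually be applied, and the worker must consume example-queue.
# core/worker_task.py (sketch)
from .celery_app_work import celery_app

@celery_app.task  # without this decorator the function is never registered as a task
def test_celery(data: str):
    print('inside celery')
    return data
and the worker should be started with the queue enabled:
celery -A core.celery_app_work worker -Q example-queue --loglevel=info -P eventlet
Note also that task_routes patterns match task names, not file paths, so a task registered with name="create_task" would not match the pattern 'backend.core.worker_task.*'.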


Run client channel backup poller Celery Beat

I am working on a project with Celery Beat and Worker in Kubernetes.
I ran the project with this config:
app = Celery('celery-worker',
             broker=RABBITMQ_URL,
             backend=REDIS_URL)

app.conf.update(
    result_expires=3600,
)
And the run command is:
celery -A app worker -B -l INFO
After running it, the Celery Beat log shows many lines like:
backup_poller.cc:138] Run client channel backup poller: UNKNOWN:pollset_work {created_time:"2022-12-10T15:25:01.080085021+03:30", children:[UNKNOWN:Bad file descriptor {created_time:"2022-12-10T15:25:01.080072267+03:30", errno:9, os_error:"Bad file descriptor", syscall:"epoll_wait"}]}
After looking for solutions, I changed the pool option in the Celery run command to --pool=gevent, but that did not work either.
How can I solve the problem?
According to the documentation, in a production environment you need to start celery beat separately from the worker.
So you need separate pods for Beat and Worker, and the --pool option only applies to the Worker pod:
celery -A app beat -l INFO
celery -A app worker -l INFO --pool=gevent -c 1
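For reference, the schedule that Beat dispatches is declared on the same app object; a minimal sketch (the task name app.tasks.my_periodic_task is a placeholder):
from celery.schedules import crontab

app.conf.beat_schedule = {
    'every-30-seconds': {
        'task': 'app.tasks.my_periodic_task',  # placeholder task name
        'schedule': 30.0,                      # seconds
    },
    'every-midnight': {
        'task': 'app.tasks.my_periodic_task',  # placeholder task name
        'schedule': crontab(hour=0, minute=0),
    },
}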

Apache Ignite - Unmatched Argument

I'm trying to create a tensorflow cluster on top of the Ignite cluster in my local multi-node environment.
I followed the tutorials and tried the following command:
./ignite-tf.sh start TESTDATA models python /usr/local/grid/cifar10_main.py
This gives me an "Unmatched argument" error, as follows:
Unmatched argument:
Usage: ignite-tf [-hV] [-c=<cfg>] [COMMAND]
Apache Ignite and TensorFlow integration command line tool that allows to
start, maintain and stop distributed deep learning utilizing Apache Ignite
infrastructure and data.
  -c, --config=<cfg>   Apache Ignite client configuration.
  -h, --help           Show this help message and exit.
  -V, --version        Print version information and exit.
Commands:
  start   Starts a new TensorFlow cluster and attaches to user script process.
  stop    Stops a running TensorFlow cluster.
  attach  Attaches to running TensorFlow cluster (user script process).
  ps      Prints identifiers of all running TensorFlow clusters.
I'm not sure which argument is unmatched. Need help getting this to work.
I downloaded this specific version and ran the same command; I don't get any error messages, and it tries to start a node:
311842 pts/12 S+ 0:00 | | \_ bash ./ignite-tf.sh start TESTDATA models python /usr/local/grid/cifar10_main.py
311902 pts/12 Sl+ 0:03 | | \_ /usr/lib/jvm/java-8-openjdk-amd64//bin/java -XX:+AggressiveOpts -Xms1g -Xmx1g -server -XX:MaxMetaspaceSize=256m -DIGNITE_QUIET=false -DIGNITE_SUCCESS_FILE=/home/user/Downloads/gridgain-community-8.7.24/work/ignite_success_20e882c5-5b64-4d0a-b7ed-9587c08a0e44 -DIGNITE_HOME=/home/user/Downloads/gridgain-community-8.7.24 -DIGNITE_PROG_NAME=./ignite-tf.sh -cp /home/user/Downloads/gridgain-community-8.7.24/libs/*:/home/user/Downloads/gridgain-community-8.7.24/libs/ignite-control-utility/*:/home/user/Downloads/gridgain-community-8.7.24/libs/ignite-indexing/*:/home/user/Downloads/gridgain-community-8.7.24/libs/ignite-opencensus/*:/home/user/Downloads/gridgain-community-8.7.24/libs/ignite-spring/*:/home/user/Downloads/gridgain-community-8.7.24/libs/licenses/*:/home/user/Downloads/gridgain-community-8.7.24/libs/optional//ignite-tensorflow/*:/home/user/Downloads/gridgain-community-8.7.24/libs/optional//ignite-slf4j/* org.apache.ignite.tensorflow.submitter.JobSubmitter start TESTDATA models python /usr/local/grid/cifar10_main.py

RuntimeWarning: You're running the worker with superuser privileges: this is absolutely not recommended

I am using Django + Celery + Redis (celery==4.4.0). Locally it works fine, but when I dockerize it, I get the above error.
I am using the following commands to run it locally as well as inside the container.
CMDs:
celery -A nrn worker -l info
docker run -d -p 6379:6379 redis
flower -A nrn --port=5555
Any help is highly appreciated.
settings.py:
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_BROKER_URL = os.environ.get('redis', 'redis://127.0.0.1:6379/')
Take a look at the documentation. It's a warning, though, not an error (see the code). Running Celery under root is a hard error only when you allow pickle serialization, which is not enabled by default (see here); in that case the worker refuses to start unless the C_FORCE_ROOT environment variable is set.
However, it is still best practice to run Celery with lower privileges. In Docker (with a Debian-based image), I chose to run Celery under nobody:nogroup. I use this Dockerfile:
FROM python:3.6
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
WORKDIR /srv/celery
COPY ./app app
COPY ./requirements.txt /tmp/requirements.txt
COPY ./celery.sh celery.sh
RUN pip install --no-cache-dir -r /tmp/requirements.txt
VOLUME ["/var/log/celery", "/var/run/celery"]
CMD ["./celery.sh"]
where celery.sh looks as follows:
#!/usr/bin/env bash
mkdir -p /var/run/celery /var/log/celery
chown -R nobody:nogroup /var/run/celery /var/log/celery

exec celery --app=app worker \
    --loglevel=INFO --logfile=/var/log/celery/worker-example.log \
    --statedb=/var/run/celery/worker-example@%h.state \
    --hostname=worker-example@%h \
    --queues=celery.example -O fair \
    --uid=nobody --gid=nogroup
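A note on the design choice: dropping privileges with --uid/--gid at exec time (instead of a USER directive in the Dockerfile) lets the script first create and chown the log and state directories as root. An alternative sketch would be to add a dedicated user at build time (e.g. RUN useradd --create-home celery followed by USER celery, hypothetical names) and prepare those directories in the image instead.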

WSL: [ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: Socket closed

I can't open the socket using Celery and WSL.
See the following info:
Output of celery -A proj report:
software -> celery:3.1.26.post2 (Cipater) kombu:3.0.37 py:3.6.7
            billiard:3.3.0.23 py-amqp:1.4.9
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:disabled
BROKER_URL: 'amqp://guest:********@localhost:5672//'
I am using pipenv instead of pip freeze. Pipfile:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
django = "*"
django-allauth = "*"
django-crispy-forms = "*"
django-debug-toolbar = "==1.10."
numpy = "==1.15.3"
colorama = "==0.4.0"
dateparser = "==0.7.0"
django-extensions = "*"
python-binance = "*"
misaka = "*"
django-celery = "*"
celery = "*"
[requires]
python_version = "3.6"
Steps to Reproduce
I am in WSL:
sudo apt-get install rabbitmq-server
sudo service rabbitmq-server restart
chmod -R 777 ./ ## otherwise I don't have permissions
Other info
tasks.py:
from celery import Celery

# app = Celery('tasks', broker='amqp://jm-user1:sample@localhost/jm-vhost')
# app = Celery('tasks', broker='amqp://guest@localhost//')
app = Celery('tasks', broker='pyamqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y
rabbitmqctl status:
[{pid,1716},
{running_applications,
[{rabbit,"RabbitMQ","3.6.10"},
{ranch,"Socket acceptor pool for TCP protocols.","1.3.0"},
{ssl,"Erlang/OTP SSL application","8.2.3"},
{public_key,"Public key infrastructure","1.5.2"},
{asn1,"The Erlang ASN1 compiler version 5.0.4","5.0.4"},
{rabbit_common,
"Modules shared by rabbitmq-server and rabbitmq-erlang-client",
"3.6.10"},
{xmerl,"XML parser","1.3.16"},
{crypto,"CRYPTO","4.2"},
{os_mon,"CPO CXC 138 46","2.4.4"},
{compiler,"ERTS CXC 138 10","7.1.4"},
{mnesia,"MNESIA CXC 138 12","4.15.3"},
{syntax_tools,"Syntax tools","2.1.4"},
{sasl,"SASL CXC 138 11","3.1.1"},
{stdlib,"ERTS CXC 138 10","3.4.3"},
{kernel,"ERTS CXC 138 10","5.4.1"}]},
{os,{unix,linux}},
{erlang_version,
"Erlang/OTP 20 [erts-9.2] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:192] [kernel-poll:true]\n"},
{memory,
[{total,55943096},
{connection_readers,0},
{connection_writers,0},
{connection_channels,0},
{connection_other,0},
{queue_procs,2744},
{queue_slave_procs,0},
{plugins,0},
{other_proc,19080304},
{mnesia,65712},
{metrics,184888},
{mgmt_db,0},
{msg_index,42728},
{other_ets,1769840},
{binary,62120},
{code,21390833},
{atom,891849},
{other_system,12634158}]},
{alarms,[]},
{listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,6791299072},
{disk_free_limit,50000000},
{disk_free,100481589248},
{file_descriptors,
[{total_limit,924},{total_used,2},{sockets_limit,829},{sockets_used,0}]},
{processes,[{limit,1048576},{used,165}]},
{run_queue,0},
{uptime,4073},
{kernel,{net_ticktime,60}}]
Output:
When I run celery -A tasks worker --loglevel=info I get the following output:
-------------- celery@Alvaro-Laptop v3.1.26.post2 (Cipater)
---- **** -----
--- * *** * -- Linux-4.4.0-17763-Microsoft-x86_64-with-Ubuntu-18.04-bionic
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x7fd7952bcf60
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. tasks.add
[2019-01-23 08:38:30,538: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: Socket closed.
Trying again in 2.00 seconds...
[2019-01-23 08:38:32,543: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: Socket closed.
Trying again in 4.00 seconds...
How can I open the socket to allow the communications?
Check if RabbitMQ is up and running. Enable the management console:
sudo rabbitmq-plugins enable rabbitmq_management
then visit http://localhost:15672 using guest/guest as credentials. Look at Overview page > Ports and Contexts.
If AMQP is bound only to IPv6 (::), that might be the issue. Open the RabbitMQ server config:
sudo vi /etc/rabbitmq/rabbitmq-env.conf
and uncomment (or add) the NODE_IP_ADDRESS line so the broker binds to IPv4 localhost:
# By default RabbitMQ will bind to all interfaces, on IPv4 and IPv6 if
# available. Set this if you only want to bind to one network interface or
# address family.
NODE_IP_ADDRESS=127.0.0.1
then restart the service:
sudo service rabbitmq-server restart
and check the connectivity to RabbitMQ again
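A quick way to re-check connectivity from the WSL side is a plain TCP probe; a minimal sketch using only the Python standard library:
import socket

# Raises ConnectionRefusedError (or times out) if nothing is listening on the AMQP port.
with socket.create_connection(('127.0.0.1', 5672), timeout=5):
    print('AMQP port reachable')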
I was able to configure everything using Redis instead of RabbitMQ:
sudo apt-get install redis-server
sudo service redis-server restart
pip install celery
chmod -R 777 ./
Place a tasks.py file in whatever folder you want to run the worker from:
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379')

@app.task
def add(x, y):
    return x + y
Then execute the following:
celery -A tasks worker --loglevel=info
The socket is now open!
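To verify end to end, you can enqueue the task from a Python shell in the same folder; a quick sketch (note that with no result backend configured you can send tasks but not fetch return values):
from tasks import add

# The worker's log should show the task being received and executed.
result = add.delay(4, 4)
print(result.id)  # result.get() would additionally require a result backend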
It sounds like you have not installed and started RabbitMQ. The easiest way I have found is with Docker, but if you are using Ubuntu under WSL you can install it with apt-get by following these instructions.
I had exactly the same problem when following the "Django, Scheduled Tasks & Queues (Part 2)" article on Medium. I also had a running RabbitMQ server, but my CELERY_BROKER_URL in settings was amqp://username:password@192.168.0.38//, which turned out to be incorrect. I reconfigured my RabbitMQ server using these instructions. I hope it helps you too.

Automate Redis Cluster Creation

I am working on a shell script to automate the setup of a Redis cluster, but I am stuck at the create-cluster command.
When my script executes
redis-cli --cluster create
it asks me to type 'yes', but I want to make it non-interactive so it proceeds without my input.
I have tried:
yes | redis-cli --cluster create
but that is not working either.
Please help. Thanks in advance.
One way is to drive the prompt from Python with subprocess:
#!/usr/bin/env python3
from subprocess import Popen, PIPE, STDOUT

cmd = 'redis-cli --cluster create 172.31.104.226:6379 172.31.103.167:6379 172.31.102.56:6379 --cluster-replicas 0'

# Run redis-cli with stdin and stdout attached so we can answer the confirmation prompt.
p = Popen(cmd.split(), stdout=PIPE, stdin=PIPE, stderr=STDOUT)

# Send 'yes' to the prompt and capture the combined output.
grep_stdout = p.communicate(input=b'yes\n')[0]
print(grep_stdout.decode())
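Alternatively, recent redis-cli versions accept a --cluster-yes flag that skips the confirmation prompt entirely (if I remember correctly; check redis-cli --cluster help on your version):
redis-cli --cluster create 172.31.104.226:6379 172.31.103.167:6379 172.31.102.56:6379 --cluster-replicas 0 --cluster-yes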