How do I get rid of a zombie Celery worker? - rabbitmq

I am running Celery with a RabbitMQ backend.
Somehow I have ended up with what appears to be a zombie Celery worker. I see the worker in Flower, and in commands like celery inspect scheduled. But it references a PID that doesn't exist. There is no worker process. It is a big problem because Celery will delegate tasks to this worker, and they never get executed.
I believe what happened is that the Docker container this runs in was shut down uncleanly. But now, even if I restart the container, this zombie worker always comes back, always with the same name: celery@0357c65d991b.
The Celery docs say that to kill a worker you must send its process the TERM signal. But I can't do that, because there is no process. It's a zombie.
RabbitMQ must have a dangling reference to this worker. The only thing I could find in the RabbitMQ management interface is a queue named celery@0357c65d991b.celery.pidbox. I deleted this queue, but it simply reappeared a few seconds later.
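For reference, that deletion can also be scripted (a sketch only; the broker URL is a placeholder, and as noted above, the queue may simply be re-declared by whatever still references the worker):

from kombu import Connection

# Delete the dangling pidbox queue. kombu ships with Celery; the queue
# name is taken from the question above.
with Connection('amqp://guest:guest@localhost//') as conn:
    channel = conn.channel()
    channel.queue_delete(queue='celery@0357c65d991b.celery.pidbox')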
Can anyone give me a pointer on where to look to get rid of this thing?

Related

Celery 4.3.0 - Send Signal To a Task Without Termination

On a Celery service on CentOS that runs a single task at a time, terminating a task is simple:
revoke(id, terminate=True, signal='SIGINT')
However, while the interrupt signal is still being handled, the running task gets revoked, and a new task from the queue starts on the node. This is troublesome: two tasks are then running on the node at the same time, and the signal handling could take up to a minute.
The question is: how can a signal be sent to a running task in Celery without actually terminating it?
Or, put differently, is there any way to send a signal to a running task?
The assumption is that the user should be able to send the signal from a remote node; in other words, the user cannot list the running processes on the worker's node.
Any other solution is welcome.
I don't understand your goal.
Are you trying to kill the worker? If so, I guess you are talking about a "warm shutdown", in which case you can send SIGTERM to the worker's process. The running task will get a chance to finish, but no new tasks will be accepted.
If you're just interested in revoking a specific task while keeping the same worker running, can you share your Celery configuration and the worker command? Are you sure you're running with concurrency 1?
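For the "signal without terminating" part of the question, one possible approach (a sketch, not from this thread: the command name signal_task is made up, and it leans on Celery 4.x worker internals) is a custom remote control command that signals the pool process running the task:

import os

from celery.worker import state as worker_state
from celery.worker.control import control_command

@control_command(
    args=[('task_id', str), ('signum', int)],
    signature='<task_id> <signum>',
)
def signal_task(state, task_id, signum):
    # Find the active request for this task and signal the pool process
    # executing it, without revoking the task.
    for request in worker_state.active_requests:
        if request.id == task_id and request.worker_pid:
            os.kill(request.worker_pid, signum)
            return {'ok': 'signalled pid %s' % request.worker_pid}
    return {'error': 'task %s not active on this worker' % task_id}

The module defining the command has to be imported when the worker starts (for example via the imports setting); the remote node can then call app.control.broadcast('signal_task', arguments={'task_id': tid, 'signum': 10}).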

If celery worker dies hard, does job get retried?

Is there a way for a Celery job to be retried if the server where the worker is running dies? I don't just mean the sub-process that executes the job; I mean the entire server becoming unavailable.
I tried with RabbitMQ and Redis as brokers. In both cases, if a job is currently being processed, it is entirely forgotten. When a worker restarts, it doesn't even try to reprocess the job, and looking at Rabbit or Redis, their queues are empty. The result backend is also empty.
It looks like the worker grabs the message and assumes it will put it back if the subprocess fails; but if the worker itself dies too, nothing puts it back.
(yes, I work in an environment where this happens more than once a year, and I don't want to lose tasks)
In theory, setting task_acks_late=True should do the trick (doc).
With a Redis broker, the task will be redelivered after the visibility_timeout, which defaults to one hour (doc).
With RabbitMQ, the task is redelivered as soon as Rabbit notices that the worker died.
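A minimal configuration sketch of the above (the app name and broker URL are placeholders):

from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

# Ack the message only after the task finishes, so an unacknowledged task
# is redelivered if the whole worker host dies. Tasks may then run more
# than once, so they should be idempotent.
app.conf.task_acks_late = True

# With Redis as the broker, redelivery only happens after the visibility
# timeout, which can be tuned:
# app.conf.broker_transport_options = {'visibility_timeout': 3600}  # seconds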

re-start celery queue after re-starting broker

I started my Celery worker, consuming my_queue, in the background:
celery worker -Q my_queue -l info
After this, its broker (Redis) was stopped, and the background Celery worker kept trying to reconnect to Redis at increasing intervals.
Now my goal is to restart a worker on my_queue after restarting Redis, without ending up with duplicates. I realize that the following Celery API will not report my_queue until the reconnection is made:
celery.task.control.inspect().active_queues()
So if I start a new worker on my_queue now, I will end up with duplicate workers once the previous background worker reconnects.
A solution might be to have the Celery worker actively quit when it finds its broker stopped, but I haven't found the right way to do this. I also don't want to kill it via a previously saved PID. Any suggestions or alternatives would be appreciated.
Well, I know it contradicts my requirement, but it seems I do need the help of a PID file:
celery worker -Q my_queue -l info --pidfile=pid.log
which will raise an exception if the PID saved in pid.log belongs to a process that is already running.
This is still not the ideal solution, and any suggestion on how to make the Celery worker actively quit when its broker is found stopped would still be appreciated.
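For the "actively quit" part, one possibility (a sketch; the exact behavior varies between Celery versions, and the broker URL is a placeholder) is to cap the broker reconnection retries so the worker gives up and exits instead of retrying forever:

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

# Retry the broker connection, but only a bounded number of times; once
# the retries are exhausted the worker raises and exits, so a restart
# script won't end up with a duplicate consumer on my_queue.
app.conf.broker_connection_retry = True
app.conf.broker_connection_max_retries = 10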

RabbitMQ creates a number of strange processes

I happened to find a number of strange processes created by RabbitMQ on my RabbitMQ server. I run the RabbitMQ server in a Docker container. I recreated the container, and hours later those processes appeared again. There are some consumers connected to it. Any idea what those processes are for? Thanks!

Celery Flower not monitoring jobs

I am running Celery and Flower, with RabbitMQ as a message broker. When I have no running workers and start a task, it sits on the queue until a worker starts. Then, when I start my workers, the task is consumed and executed as expected. However, when I try to use the Flower API to get task info, args and kwargs are null. This never happens when my workers are already running when I call a task. Why is this, and how can I fix it? Thanks.
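One thing worth checking (an assumption, not a confirmed diagnosis for this case): Flower reconstructs task details from Celery's event stream, so both sent events and worker events need to be enabled for args and kwargs to be captured:

from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')  # placeholder URL

# Emit a task-sent event from the client when a task is published, so
# Flower can record its args/kwargs even before any worker picks it up:
app.conf.task_send_sent_event = True

# Workers must also be started with events enabled, e.g.:
#   celery -A tasks worker -E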