I am trying to purge a list of tasks in a rabbitmq / celery queue.
I used the following:
root@ip-10-150-115-164:/my-api# celery amqp queue.purge celery
-> connecting to amqp://guest:**@localhost:5672//.
-> connected.
ok. 0 messages deleted.
Why are 0 messages being deleted when there are over 12,000 in the queue?
sudo rabbitmqctl list_queues
Listing queues
celeryev.fdc87318-5013-465c-92be-e3081ed3c579 0
worker5@ip-10-150-115-164.celery.pidbox 0
celeryev.218186c8-49d3-4601-8237-3edcb17ca82a 0
celery 12485
worker1@ip-10-150-115-164.celery.pidbox 0
worker3@ip-10-150-115-164.celery.pidbox 0
worker4@ip-10-150-115-164.celery.pidbox 0
worker2@ip-10-150-115-164.celery.pidbox 0
celeryev.616cf756-77fb-48fc-aa3c-a9a53e909ba1 0
celeryev.29bf149e-592c-4f82-990a-1a72c2830bb2 0
celeryev.c35d971e-80b1-492f-a111-6be576f6c825 0
Thanks
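For reference, a sketch of the alternative I would normally expect to work, assuming the Celery application module is importable (the module name celery_worker below is only a placeholder): Celery's own purge command, which drains the queues the app is configured to consume from.
celery -A celery_worker purge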
Our DevOps team has specific naming requirements for queues in RabbitMQ. I have done some searching and came across the configuration below, which I thought might work; however, it only names the one default queue, and it puts the name before .pidbox. Can all of the queues here be prefixed with a name, if possible?
from celery import Celery

# set app name
app_name = 'my_app'

# app instance assumed here; broker/backend arguments omitted
app = Celery(app_name)

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
    control_exchange=app_name,
    event_exchange=app_name + '.ev',
    task_default_queue=app_name,
    task_default_exchange=app_name,
    task_default_routing_key=app_name,
)
sample queues with the above config
bash-5.0# rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
my_app 0
celery@6989aa04c815.my_app.pidbox 0
celeryev.c1ce1b85-1bdc-4a46-b15b-e6b85105acdd 0
celeryev.8ba23a8f-9034-4c9b-8d86-56bfb368fdb6 0
bash-5.0#
desired queue names
bash-5.0# rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
my_app 0
my_app.celery@6989aa04c815.pidbox 0
my_app.celeryev.c1ce1b85-1bdc-4a46-b15b-e6b85105acdd 0
my_app.celeryev.8ba23a8f-9034-4c9b-8d86-56bfb368fdb6 0
bash-5.0#
Is this possible to achieve? I know I can disable the pidbox queue with the option CELERY_ENABLE_REMOTE_CONTROL = False, but I use Flower to monitor the queues, so I need that option left enabled.
Thanks
I'm using the RPC pattern for processing my objects with RabbitMQ.
Suppose I have an object, and I want the processing to finish and only after that send an ack to the RPC client.
The ack has a default timeout of about 3 minutes.
My process takes a long time.
How can I change this timeout for each object's ack, or what should I do to handle processes like these?
Modern versions of RabbitMQ have a delivery acknowledgement timeout:
In modern RabbitMQ versions, a timeout is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries. Such consumers can affect node's on disk data compaction and potentially drive nodes out of disk space.
If a consumer does not ack its delivery for more than the timeout value (30 minutes by default), its channel will be closed with a PRECONDITION_FAILED channel exception. The error will be logged by the node that the consumer was connected to.
The error message will be:
Channel error on connection <####> :
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
The timeout is 30 minutes (1,800,000 ms) by default (see note 1) and is configured by the consumer_timeout parameter in rabbitmq.conf.
Note 1: the timeout was 15 minutes (900,000 ms) before RabbitMQ 3.8.17.
If you run RabbitMQ in Docker, you can mount rabbitmq.conf as a volume: create the file on the host and set consumer_timeout in it.
For example, with docker compose:
version: "2.4"
services:
  rabbitmq:
    image: rabbitmq:3.9.13-management-alpine
    network_mode: host
    container_name: 'your-name'
    ports:
      - 5672:5672
      - 15672:15672   # only needed if you use the management GUI
    volumes:
      - /etc/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
And you need to create the rabbitmq.conf file on your server at /etc/rabbitmq/.
Documentation with the available parameters: https://github.com/rabbitmq/rabbitmq-server/blob/v3.8.x/deps/rabbit/docs/rabbitmq.conf.example
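For example, a minimal rabbitmq.conf that raises the acknowledgement timeout to one hour (the one-hour value is only illustrative; pick whatever comfortably exceeds your longest-running task) could look like this:
# /etc/rabbitmq/rabbitmq.conf
# consumer_timeout is in milliseconds; 3600000 ms = 1 hour (illustrative value)
consumer_timeout = 3600000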
We boot up a cluster of 250 worker nodes in AWS at night to handle some long-running distributed tasks.
The worker nodes are running celery with the following command:
celery -A celery_worker worker --concurrency=1 -l info -n background_cluster.i-1b1a0dbb --without-heartbeat --without-gossip --without-mingle -- celeryd.prefetch_multiplier=1
We are using rabbitmq as our broker, and there is only 1 rabbitmq node.
About 60% of our nodes claim to be listening, but will not pick up any tasks.
Their logs look like this:
-------------- celery@background_cluster.i-1b1a0dbb v3.1.18 (Cipater)
---- **** -----
--- * *** * -- Linux-3.2.0-25-virtual-x86_64-with-Ubuntu-14.04-trusty
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: celery_worker:0x7f10c2235cd0
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> background_cluster exchange=root(direct) key=background_cluster
[tasks]
. more.celery_worker.background_cluster
[2015-10-10 00:20:17,110: WARNING/MainProcess] celery@background_cluster.i-1b1a0dbb
[2015-10-10 00:20:17,110: WARNING/MainProcess] consuming from
[2015-10-10 00:20:17,110: WARNING/MainProcess] {'background_cluster': <unbound Queue background_cluster -> <unbound Exchange root(direct)> -> background_cluster>}
[2015-10-10 00:20:17,123: INFO/MainProcess] Connected to amqp://our_server:**@10.0.11.136:5672/our_server
[2015-10-10 00:20:17,144: WARNING/MainProcess] celery@background_cluster.i-1b1a0dbb ready.
However, rabbitmq shows that there are messages waiting in the queue.
If I login to any of the worker nodes and issue this command:
celery -A celery_worker inspect active
...then every (previously stalled) worker node immediately grabs a task and starts cranking.
Any ideas as to why?
Might it be related to these switches?
--without-heartbeat --without-gossip --without-mingle
It turns out that this was a bug in celery where using --without-gossip kept events from draining. Celery's implementation of gossip is pretty new, and it apparently implicitly takes care of draining events, but when you turn it off things get a little wonky.
The details to the issue are outlined in this github issue: https://github.com/celery/celery/issues/1847
Master currently has the fix in this PR: https://github.com/celery/celery/pull/2823
So you can solve this one of three ways:
Use gossip (remove --without-gossip)
Patch your version of celery with https://github.com/celery/celery/pull/2823.patch
Use a cron job to run a celery inspect active regularly
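As a rough illustration of options 1 and 3 (the project path and the every-minute schedule below are placeholders, and the worker command is simply the one from the question with --without-gossip removed):
# Option 1: start workers with gossip enabled, i.e. drop --without-gossip
celery -A celery_worker worker --concurrency=1 -l info -n background_cluster.i-1b1a0dbb --without-heartbeat --without-mingle -- celeryd.prefetch_multiplier=1
# Option 3: a cron entry that runs an inspect regularly so stalled workers drain their events
* * * * * cd /path/to/project && celery -A celery_worker inspect active > /dev/null 2>&1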
Hey guys, I am new to Celery and I am working on periodic task scheduling. I have configured my celeryconfig.py as follows:
from datetime import timedelta

BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_IMPORTS = ("mytasks",)
CELERYBEAT_SCHEDULE = {
    'runs-every-60-seconds': {
        'task': 'mytasks.add',
        'schedule': timedelta(seconds=60),
        'args': (16, 16),
    },
}
and mytasks.py as follows:
from celery import Celery

celery = Celery("tasks",
                broker='redis://localhost:6379/0',
                backend='redis')

@celery.task
def add(x, y):
    return x + y

@celery.task
def mul(x, y):
    return x * y
When I run
celery beat -s celerybeat-schedule
I get:
Configuration ->
. broker -> redis://localhost:6379/0
. loader -> celery.loaders.default.Loader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@INFO
. maxinterval -> now (0s)
[2012-08-28 12:27:17,825: INFO/MainProcess] Celerybeat: Starting...
[2012-08-28 12:28:00,041: INFO/MainProcess] Scheduler: Sending due task mytasks.add
[2012-08-28 12:29:00,057: INFO/MainProcess] Scheduler: Sending due task mytasks.add
[2012-08-28 12:30:00,064: INFO/MainProcess] Scheduler: Sending due task mytasks.add
[2012-08-28 12:31:00,097: INFO/MainProcess] Scheduler: Sending due task mytasks.add
Now what I don't get is this: I have passed the arguments (16, 16), so how can I get the result of this add(x, y) function?
I'm not sure I quite understand what you have asked, but from what I can tell, your issue may be one of the following:
1) Are you running celeryd (the worker daemon)? If not, did you start a celery worker in a terminal? Celery beat is a task scheduler. It is not a worker. Celerybeat only schedules the tasks (i.e. places them in a queue for a worker to eventually consume).
2) How did you plan on viewing the results? Are they being saved somewhere? Since you have set your results backend to redis, the results are at least temporarily stored in the redis results backend (see the sketch below).
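For instance, a minimal sketch of seeing a result end to end, assuming Redis is running and a worker has been started (e.g. with celery -A mytasks worker -l info), is to send the task by hand and read the value back from the Redis result backend:
# send the task to the queue and fetch its result from the backend
from mytasks import add

result = add.delay(16, 16)      # returns an AsyncResult immediately
print(result.get(timeout=10))   # blocks until the worker stores 32 in Redis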
I have an Apache load balancer doing load balancing across Tomcat servers via AJP, as follows:
Worker URL Route RouteRedir Factor Set Status Elected To From
ajp://localhost:8009/myapp s1 2 0 Ok 2292 0 22M
ajp://xx.xx.xx.64:8009/myapp s2 2 0 Ok 2291 0 23M
ajp://xx.xx.xx.228:8009/myapp s3 2 0 Ok 2292 0 23M
ajp://xx.xxx.xx.242:8009/myapp s4 1 0 Ok 2121 0 21M
Sometimes the status changes to Error. Is there a way I can monitor this constantly and get an email notification when it happens?
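A minimal sketch of one way to do this, assuming mod_status/balancer-manager is enabled and reachable at the placeholder URL below, and that localhost accepts SMTP (the addresses and polling interval are placeholders as well):
#!/usr/bin/env python3
# Hypothetical watchdog: poll the balancer-manager page and send mail when a worker reports an error.
import smtplib
import time
import urllib.request
from email.message import EmailMessage

BALANCER_URL = "http://localhost/balancer-manager"  # placeholder URL
CHECK_INTERVAL = 60                                  # seconds between polls

def send_alert(body):
    msg = EmailMessage()
    msg["Subject"] = "Load balancer worker in Error state"
    msg["From"] = "watchdog@example.com"   # placeholder addresses
    msg["To"] = "ops@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

while True:
    html = urllib.request.urlopen(BALANCER_URL).read().decode("utf-8", "replace")
    # balancer-manager marks failed workers with "Err" in the Status column; adjust for your Apache version
    if "Err" in html:
        send_alert("At least one AJP worker reports an error status:\n\n" + html)
    time.sleep(CHECK_INTERVAL)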