Our DevOps team has specific naming requirements for queues in RabbitMQ. I did some searching and came across the config below, which I thought might work; however, it only names the one default queue, and it puts the name before .pidbox. Can all the queues here be prefixed with a name?
# set app name
app_name = 'my_app'

# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
    control_exchange=app_name,
    event_exchange=app_name + '.ev',
    task_default_queue=app_name,
    task_default_exchange=app_name,
    task_default_routing_key=app_name,
)
Sample queues with the above config:
bash-5.0# rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
my_app 0
celery@6989aa04c815.my_app.pidbox 0
celeryev.c1ce1b85-1bdc-4a46-b15b-e6b85105acdd 0
celeryev.8ba23a8f-9034-4c9b-8d86-56bfb368fdb6 0
bash-5.0#
Desired queue names:
bash-5.0# rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
my_app 0
my_app.celery@6989aa04c815.pidbox 0
my_app.celeryev.c1ce1b85-1bdc-4a46-b15b-e6b85105acdd 0
my_app.celeryev.8ba23a8f-9034-4c9b-8d86-56bfb368fdb6 0
bash-5.0#
Is this possible to achieve? I know I can disable the pidbox queue with the option CELERY_ENABLE_REMOTE_CONTROL = False, but I use Flower to monitor the queues, so I need that option enabled.
Thanks
Related
I have an ActiveMQ broker, and data is arriving in a queue inside this broker.
I am trying to read the data from that broker, but I am not able to.
My Telegraf configuration is given below; I have provided the topic name.
When I create a topic myself and send custom data, I am able to read that data properly.
[[inputs.mqtt_consumer]]
  servers = ["provided"]
  qos = 0
  ## Topics that will be subscribed to.
  topics = [
    "topic_name",
  ]
  connection_timeout = "30s"
  ## If unset, a random client ID will be generated.
  client_id = "telegraf"
  ## Username and password to connect to the MQTT server.
  username = "provided"
  password = "provided"
  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

[[inputs.activemq]]
  ## ActiveMQ WebConsole URL
  url = "provided"
  ## Required ActiveMQ Endpoint
  ## deprecated in 1.11; use the url option
  # server = "192.168.50.10"
  # port = 8161
  ## Credentials for basic HTTP authentication
  username = "provided"
  password = "provided"

[[outputs.file]]
  ## Files to write to; "stdout" is a specially handled file.
  files = ["stdout", "/etc/telegraf/metrics.out"]
The data coming from the devices goes to a queue, not a topic.
As you can see, the data is present inside the queue.
So, coming to my main question: how can I read the data from the queue, rather than from a topic, using Telegraf?
MQTT supports topics by default. You either need to change your message flow to publish to topics, or configure your ActiveMQ broker to use the virtual topic subscription strategy for MQTT (where messages are stored in queues).
ref: https://activemq.apache.org/mqtt
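For reference, the virtual-topic strategy is selected on the MQTT transport connector in activemq.xml. A minimal sketch (the port and connector name are the usual defaults, not taken from your setup):

```xml
<transportConnectors>
  <!-- MQTT subscriptions are backed by VirtualTopic queues -->
  <transportConnector name="mqtt"
      uri="mqtt://0.0.0.0:1883?transport.subscriptionStrategy=mqtt-virtual-topic-subscriptions"/>
</transportConnectors>
```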
Note: Please edit your post to hide your broker URL and admin password!
I am trying to purge a list of tasks in a rabbitmq / celery queue.
I used the following:
root@ip-10-150-115-164:/my-api# celery amqp queue.purge celery
-> connecting to amqp://guest:**@localhost:5672//.
-> connected.
ok. 0 messages deleted.
Why are 0 messages being deleted when there are over 12,000 in the queue?
sudo rabbitmqctl list_queues
Listing queues
celeryev.fdc87318-5013-465c-92be-e3081ed3c579 0
worker5@ip-10-150-115-164.celery.pidbox 0
celeryev.218186c8-49d3-4601-8237-3edcb17ca82a 0
celery 12485
worker1@ip-10-150-115-164.celery.pidbox 0
worker3@ip-10-150-115-164.celery.pidbox 0
worker4@ip-10-150-115-164.celery.pidbox 0
worker2@ip-10-150-115-164.celery.pidbox 0
celeryev.616cf756-77fb-48fc-aa3c-a9a53e909ba1 0
celeryev.29bf149e-592c-4f82-990a-1a72c2830bb2 0
celeryev.c35d971e-80b1-492f-a111-6be576f6c825 0
Thanks
I'm looking to configure Celery on my FreeBSD server, and I'm seeing some issues in the log files.
My configuration:
FreeBSD server
2 Django applications: app1 and app2
Celery daemonized, with Redis
Each application has its own Celery tasks
My Celery config file:
In /etc/default/celeryd_app1 I have:
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/www/app1/venv/bin/celery"
# App instance to use
CELERY_APP="main"
# Where to chdir at start.
CELERYD_CHDIR="/usr/local/www/app1/src/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/app1/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/app1/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I have exactly the same file for celeryd_app2.
Django settings file with the Celery settings:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IGNORE_RESULT = False
CELERY_TASK_TRACK_STARTED = True
# Add a one-minute timeout to all Celery tasks.
CELERYD_TASK_SOFT_TIME_LIMIT = 60
Both settings files use the same Redis port.
My issue:
When I execute a Celery task for app1, I find logs from this task in app2's log file, with an error like this:
Received unregistered task of type 'app1.task.my_task_for_app1'
...
KeyError: 'app1.task.my_task_for_app1'
Is there an issue in my Celery config file? Do I have to set a different Redis port? If so, how can I do that?
Thank you very much
I guess the problem lies in the fact that you are using the same Redis database for both applications:
CELERY_BROKER_URL = 'redis://localhost:6379'
Take a look at the guide for using Redis as a broker. Just use a different database for each application, e.g.
CELERY_BROKER_URL = 'redis://localhost:6379/0'
and
CELERY_BROKER_URL = 'redis://localhost:6379/1'
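If the result backend should be isolated as well (an assumption on my part; only the broker URL appears above), the same database split applies to CELERY_RESULT_BACKEND, e.g. for app1:

```python
# app1 settings sketch: broker and result backend share database 0
# (the database indexes 0 and 1 are arbitrary choices, not from the question)
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
```

app2's settings would then use /1 for both URLs.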
I'm using the RPC pattern for processing my objects with RabbitMQ.
Suppose I have an object, and I want the processing to finish and only then send an ack to the RPC client.
By default, the ack has a timeout of about 3 minutes.
My processing takes a long time.
How can I change this timeout for the ack of each object, or what must I do to handle processes like these?
Modern versions of RabbitMQ have a delivery acknowledgement timeout:
In modern RabbitMQ versions, a timeout is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries. Such consumers can affect node's on disk data compaction and potentially drive nodes out of disk space.
If a consumer does not ack its delivery for more than the timeout value (30 minutes by default), its channel will be closed with a PRECONDITION_FAILED channel exception. The error will be logged by the node that the consumer was connected to.
Error message will be:
Channel error on connection <####> :
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
The timeout is 30 minutes (1,800,000 ms) by default (note 1) and is configured by the consumer_timeout parameter in rabbitmq.conf.
note 1: The timeout was 15 minutes (900,000 ms) before RabbitMQ 3.8.17.
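For example, raising the limit to one hour would look like this in rabbitmq.conf (the value is in milliseconds; one hour is just an illustrative choice):

```ini
# /etc/rabbitmq/rabbitmq.conf
consumer_timeout = 3600000
```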
If you run RabbitMQ in Docker, you can declare a volume for the rabbitmq.conf file, create that file inside the volume, and set consumer_timeout there.
For example, in docker-compose:
version: "2.4"
services:
  rabbitmq:
    image: rabbitmq:3.9.13-management-alpine
    network_mode: host
    container_name: 'your-name'
    ports:
      - 5672:5672
      - 15672:15672   # only needed if you use the management GUI for RabbitMQ
    volumes:
      - /etc/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
You then need to create the rabbitmq.conf file on your server under /etc/rabbitmq/.
Documentation with the parameters: https://github.com/rabbitmq/rabbitmq-server/blob/v3.8.x/deps/rabbit/docs/rabbitmq.conf.example
Setup: Celery 4.1, RabbitMQ 3.6.1 (as broker), Redis (as backend, not relevant here).
I have two RabbitMQ users:
admin_user with permissions of .* .* .*
remote_user with permissions of ack ack ack
admin_user can trigger tasks and is used by the Celery workers to handle tasks.
remote_user can only trigger one type of task - ack - which is enqueued in a dedicated ack queue that is later consumed by the ack worker (as admin_user).
The remote_user sends the task by the following code:
from celery import Celery
app = Celery('remote', broker='amqp://remote_user:remote_pass@<machine_ip>:5672/vhost')
app.send_task('ack', args=('a1', 'a2'), queue='ack', route_name='ack')
This worked perfectly in Celery 3.1. After upgrading to Celery 4.1, it doesn't send the task anymore. The call returns an AsyncResult, but I don't see the message in Celery Flower (or via the RabbitMQ management UI), or in the logs.
Setting remote_user's permissions to .* .* .*, as for admin_user, doesn't help.
Adding the administrator tag doesn't help either.
Changing the broker user to
'amqp://admin_user:admin_pass@<machine_ip>:5672/vhost' DOES work:
from celery import Celery
app = Celery('remote', broker='amqp://admin_user:admin_pass@<machine_ip>:5672/vhost')
app.send_task('ack', args=('a1', 'a2'), queue='ack', route_name='ack')
But I don't want to give a remote machine the admin_user permissions.
Any idea what I can do?
Solved.
The API changed, I guess, but to keep the current RabbitMQ permissions I had to use the following route:
old_celery_config.py: (celery 3.1)
CELERY_ROUTES = {
    'ack_task': {
        'queue': 'geo_ack'
    }
}
celery_config.py: (celery 4.1)
CELERY_ROUTES = {
    'ack_task': {
        'exchange': 'ack',
        'exchange_type': 'direct',
        'routing_key': 'ack'
    }
}
run_task.py:
from celery import Celery
app = Celery('remote', broker='amqp://remote_user:remote_pass@<machine_ip>:5672/vhost')
app.config_from_object('celery_config')
app.send_task('ack_task', args=('a1', 'a2'))