I've got a Celery Flower + RabbitMQ setup running on my local box, with the UI on localhost:5555 displaying an empty list of workers. Now, on a remote machine I have Celery Flower connected to the same broker, like the following:
server (local setup): celery flower --broker=amqp://guest:guest@localhost:5672//
remote: celery flower --broker=amqp://smurf@remotemachine:5672//
The connection shows as successful in the RabbitMQ connections tab; however, Flower does not list the worker in the web UI. Why would it connect to RabbitMQ successfully but not show up in Flower?
Thanks in advance.
Related
With Celery, I sometimes see that the worker is offline. I run Flower in one Docker container and the Celery worker in another, with a RabbitMQ broker.
I see the worker jump between offline <-> online quite often.
What does it mean that a worker is offline? How does Flower figure that out?
A worker is considered "offline" if it has not broadcast a heartbeat signal for some (short) period of time.
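For context, a minimal sketch of how worker events, which carry the heartbeats Flower watches, can be enabled; the module name proj and the broker URL are assumptions, not taken from the question:

# proj.py -- module name and broker URL are assumptions.
from celery import Celery

app = Celery("proj", broker="amqp://guest:guest@localhost:5672//")

# Workers announce themselves with worker-online / worker-heartbeat /
# worker-offline events. Flower consumes these events and marks a worker
# "offline" once heartbeats stop arriving for a while, so a busy or
# briefly disconnected worker flaps between online and offline.
app.conf.worker_send_task_events = True  # task events, used by Flower

# Events can also be enabled when starting the worker:
#   celery -A proj worker -E --loglevel=info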
I am using Celery and RabbitMQ for a Django project in which I have created two queues, queue_email and queue_push, served by one worker.
But RabbitMQ also has the following queues, created by default:
celery
celery.pidbox
celeryev
reply.celery.pidbox
How and why are these default queues created?
Can they be removed if they are not necessary?
I found some information on GitHub, but it is incomplete.
1. The celeryev queues contain the messages celerymon and Flower use for monitoring purposes.
2. Pidbox is the broadcast messaging system used by Celery to support remote control of workers.
Reference:
These issues may help:
Preventing Celery from creating Exchanges celery, celeryev, celeryev.pidbox, reply.celery.pidbox #3895
Hundreds of queues being created #1801
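As a concrete illustration, a hedged sketch of such a setup; the queue names come from the question, while the module name, broker URL, and settings shown are assumptions about one way to avoid the auto-created queues:

# Sketch only -- not the asker's actual config.
from celery import Celery
from kombu import Queue

app = Celery("proj", broker="amqp://guest:guest@localhost:5672//")

# The two application queues from the question.
app.conf.task_queues = (
    Queue("queue_email"),
    Queue("queue_push"),
)
app.conf.task_default_queue = "queue_email"

# The *.pidbox queues back the remote-control (broadcast) system; turning
# it off stops Celery from creating them, at the cost of losing
# celery inspect / celery control.
app.conf.worker_enable_remote_control = False

# The celeryev queues only appear when events are enabled (e.g. by Flower,
# celerymon, or celery worker -E); leaving events off avoids them.
app.conf.worker_send_task_events = False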
I've configured 3 ZooKeeper and 3 ActiveMQ instances in one cluster.
Scenario
3 ActiveMQ instances, with only 1 master; the other two are slaves.
All 3 ActiveMQ instances are running, i.e. sudo service activemq status returns running, but checking the logs, one instance (activemq1) is waiting for other cluster members, one instance (activemq2) has stopped, and one instance (activemq3) has an error. Assuming we only require two instances to elect a master, this setup should be able to run successfully.
Two ActiveMQ instances should be running.
The ZooKeeper instances are running fine.
Issue
Below are the stack traces of the respective ActiveMQ instances. Based on my understanding, the cluster needs at least two properly running ActiveMQ instances to nominate a master (with replicas="3", the required quorum is (3/2)+1 = 2 nodes). Given that all ActiveMQ instances report running when issued sudo service activemq status, I'm assuming there is an issue inside each ActiveMQ instance - refer to the stack traces below. Now, I noticed in the logs that activemq1 only fails to run properly because the other ActiveMQ instances failed internally. Notice the stack trace of activemq2: it gets stuck after it successfully connects to ZooKeeper, which should be detected by the other instances; activemq3 has an issue I still need to figure out. The problem is fixed when I restart activemq2 and activemq3. However, I can't be sure this won't happen again, hence this question.
activemq1 shows the stack trace below, which I assume is because the other 2 ActiveMQ instances are running but have errors:
Session establishment complete on server 10.5.4.111/10.5.4.111:2181, sessionid = 0x1582db00708000c, negotiated timeout = 4000
Not enough cluster members connected to elect a master.
Not enough cluster members connected to elect a master.
Not enough cluster members connected to elect a master.
activemq2 has the stack trace below, which is the one I don't understand. It has stopped after a successful connection to ZooKeeper, which should be detected by the other ActiveMQ instances belonging to the cluster - activemq1 and activemq3:
Opening socket connection to server 10.5.4.111/10.5.4.111:2181
Socket connection established to 10.5.4.111/10.5.4.111:2181, initiating session
Session establishment complete on server 10.5.4.111/10.5.4.111:2181, sessionid = 0x1582db00708000d, negotiated timeout = 4000
activemq3 has the stack trace below:
org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:568)[apache-jsp-8.0.9.M3.jar:2.3]
Configuration for ActiveMQ
The previous config used the default zkSessionTimeout of 2s. I changed it to 4s, based on what I found from googling, to give an ActiveMQ instance more time to register itself with ZooKeeper.
<persistenceAdapter>
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:61619"
zkAddress="zookeeper_addresses_here"
hostname="activemq_hostname_here"
zkSessionTimeout="4s"
/>
</persistenceAdapter>
Configuration for ZooKeeper
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/my/data/dir
clientPort=2181
server.1=activemq1_privateIP:2888:3888
server.2=activemq2_privateIP:2888:3888
server.3=activemq3_privateIP:2888:3888
autopurge.purgeInterval=24
autopurge.snapRetainCount=5
ZooKeeper version 3.4.9
ActiveMQ version 5.13.4
Setup via OpsWorks
The attribute "directory" master-slave mq is need to refer to the same folder
Suppose one Celery worker is running on a server, say x.y.a.b, and the RabbitMQ server is running on server x.y.a.c, which has two queues, P and Q. Now, I want this Celery worker to listen to queue P, which is on the other server. How do I do this?
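A minimal sketch of one way to do this; the module name proj and the credentials are assumptions, and only the host placeholders x.y.a.b / x.y.a.c come from the question:

# tasks.py on x.y.a.b -- module name and credentials are assumptions.
from celery import Celery

# A worker does not need to be co-located with RabbitMQ; it only needs
# the broker URL of the machine the queues live on.
app = Celery("proj", broker="amqp://user:password@x.y.a.c:5672//")

# Then start the worker on x.y.a.b and subscribe it to queue P only:
#   celery -A proj worker -Q P --loglevel=info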
I have two servers running Celery and one Redis database. They both listen to the same queue, as they are meant to divide the workload. Tasks are queued onto Redis, but it looks like both of my Celery servers pick up the task at the same time, hence executing it twice (once on each server). Is there a way to prevent this with the Redis/Celery setup?
Thank you,
Each of my servers was using the same name for its Celery worker. I've since added %h at the end of the worker name (-n my_worker_%h) to include the hostname. This way Celery Flower displays each worker on its own line, and no more confusion is possible.
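A minimal sketch of the fix described above; the project name proj and the Redis URL are assumptions:

# run_worker.py -- sketch only; names and URL are assumptions.
from celery import Celery

app = Celery("proj", broker="redis://redis-host:6379/0")

if __name__ == "__main__":
    # Equivalent to: celery -A proj worker -n my_worker_%h
    # %h expands to the host's name, so the workers on the two servers
    # register under distinct node names and Flower lists each on its
    # own line instead of conflating them.
    app.worker_main(["worker", "-n", "my_worker_%h", "--loglevel=info"])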