How can I programmatically, using Python code, list current queues created on a RabbitMQ broker and the number of workers connected to them? It would be the equivalent to:
rabbitmqctl list_queues name consumers
I do it this way and display all the queues and their details (messages ready, unacknowledged etc.) on a web page -
import kombu
conn = kombu.Connection(broker_url)  # example: 'amqp://guest:guest@localhost:5672/'
conn.connect()
client = conn.get_manager()
queues = client.get_queues('/')  # assuming vhost is '/'
You will need kombu installed; queues will be a dictionary with keys representing the queue names.
I think I got this when digging through the code of celery flower (The tool used for monitoring celery).
Update: As pointed out by @zaq178miami, you will also need the management plugin that provides the HTTP API. I had forgotten that I had enabled that in RabbitMQ.
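For example, the consumer counts can then be read straight off each queue entry. A rough sketch only: depending on the kombu version, queues may come back as a list of dicts or a dict keyed by queue name, but each entry carries the management API fields such as 'name' and 'consumers' (the same columns as rabbitmqctl list_queues name consumers):
# Rough sketch: print each queue name and its consumer count.
for q in (queues.values() if isinstance(queues, dict) else queues):
    print(q['name'], q['consumers'])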
This way did it for me:
def get_queue_info(queue_name):
    with celery.broker_connection() as conn:
        with conn.channel() as channel:
            return channel.queue_declare(queue_name, passive=True)
This will return a namedtuple with the name, number of messages waiting and consumers of that queue.
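With py-amqp as the underlying transport the fields are typically queue, message_count and consumer_count, so a queue could be inspected roughly like this (the queue name is just an example):
# Hypothetical usage of the helper above.
info = get_queue_info('my_queue')
print(info.queue, info.message_count, info.consumer_count)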
ksrini's answer is correct too and can be used when you require more information about a queue.
Thanks to Ask Solem who gave me the hint.
As a RabbitMQ client you can use pika. However, it doesn't have an option equivalent to list_queues. The easiest solution would be to call the rabbitmqctl command from Python using subprocess:
import subprocess
command = "/usr/local/sbin/rabbitmqctl list_queues name consumers"
process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
print process.communicate()
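If you need the result as data rather than raw text, the output can be parsed into (name, consumers) pairs. A rough sketch, assuming the default tab-separated rabbitmqctl output and the same binary path as above; non-data lines (the "Listing queues ..." header and similar) are simply skipped:
# Sketch: turn `rabbitmqctl list_queues name consumers` output into pairs.
import subprocess

out = subprocess.check_output(
    ["/usr/local/sbin/rabbitmqctl", "list_queues", "name", "consumers"])
for line in out.decode().splitlines():
    parts = line.split("\t")
    if len(parts) == 2 and parts[1].isdigit():
        print(parts[0], int(parts[1]))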
I would simply use the following. Just replace the user (default: guest), passwd (default: guest) and port with your values:
import requests
import json
def call_rabbitmq_api(host, port, user, passwd):
    url = 'https://%s:%s/api/queues' % (host, port)
    r = requests.get(url, auth=(user, passwd), verify=False)
    return r

def get_queue_name(json_list):
    res = []
    for item in json_list:
        res.append(item["name"])
    return res
if __name__ == '__main__':
    host = 'rabbitmq_host'
    port = 55672
    user = 'guest'
    passwd = 'guest'
    res = call_rabbitmq_api(host, port, user, passwd)
    print("--- dump json ---")
    print(json.dumps(res.json(), indent=4))
    print("--- get queue name ---")
    q_name = get_queue_name(res.json())
    print(q_name)
Referred from here: https://gist.github.com/hiroakis/5088513#file-example_rabbitmq_api-py-L2
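The same JSON also carries the consumer counts, so the original question (queue name plus consumers) can be answered from the same call. A small sketch along the lines of the code above; each entry returned by /api/queues includes a 'consumers' field alongside 'name':
# Sketch: mirror `rabbitmqctl list_queues name consumers` from the HTTP API response.
def get_queue_consumers(json_list):
    return [(item["name"], item.get("consumers", 0)) for item in json_list]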
I'm trying to establish communication between different processes running celery. I successfully sent tasks from one process to others using app.send_task on a celery instance. I am struggling now to broadcast tasks through a fanout rabbitmq exchange to all other instances (basically a publish-subscribe pattern for celery).
It must be related to the routing across exchanges and queues but I simply can't make it work.
This is the master emitter which broadcasts a task named signal through the default exchange of type fanout:
from celery import Celery
from kombu import Exchange, Queue
app = Celery('emitter',
             broker='pyamqp://test@localhost//',
             backend='db+sqlite:///results.db')
default_queue_name = 'default'
default_exchange_name = 'default'
default_routing_key = 'default'
default_exchange = Exchange(default_exchange_name, type='fanout')
default_queue = Queue(
    default_queue_name,
    default_exchange,
    routing_key=default_routing_key)
app.conf.task_queues = (
    default_queue,
)
app.conf.task_default_queue = default_queue_name
app.conf.task_default_exchange = default_exchange_name
app.conf.task_default_routing_key = default_routing_key
if __name__ == '__main__':
    app.send_task(name='signal', exchange='default')
To my understanding, the routing, queue and exchange setup on the other app needs to be identical. Thus, this is a very similar-looking piece of code, but it defines the task that gets called:
from celery import Celery
from kombu import Exchange, Queue
app = Celery('CLIENT_A',
             broker='pyamqp://test@localhost//',
             backend='db+sqlite:///results.db')
default_queue_name = 'default'
default_exchange_name = 'default'
default_routing_key = 'default'
default_exchange = Exchange(default_exchange_name, type='fanout')
default_queue = Queue(
    default_queue_name,
    default_exchange,
    routing_key=default_routing_key)
app.conf.task_queues = (
    default_queue,
)
app.conf.task_default_queue = default_queue_name
app.conf.task_default_exchange = default_exchange_name
app.conf.task_default_routing_key = default_routing_key
@app.task(name='signal')
def signal():
    print('client_a signal')
    return 'signal'
The second client will look exactly the same as the first except for the name and the print message:
# [...]
app = Celery('CLIENT_B', ...
# [...] identical to the part above
@app.task(name='signal')
def signal():
    print('client_b signal')
I'm starting both client workers with different node names (otherwise celery will complain):
celery -A client_a worker -n node_a
celery -A client_b worker -n node_b
If I then call the emitter (first piece of code) I see the signal being triggered alternately by client_a and client_b, but never by both, which is what I would like.
The RabbitMQ management interface looks as expected, with the default exchange defined as fanout, and the routing looks alright.
I'm not sure if I'm on the completely wrong track here but that's what I imagined should be possible with correct routing.
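From what I've read about AMQP fanout exchanges, each consumer normally binds its own, uniquely named queue to the shared exchange so that every bound queue receives a copy of the message. Perhaps each client needs something along these lines (untested sketch, the queue name is made up):
# Untested sketch for client_a: a queue unique to this client, bound to the
# shared fanout exchange, so every client's queue gets its own copy of the task.
from kombu import Exchange, Queue

fanout_exchange = Exchange('default', type='fanout')
client_a_queue = Queue('client_a.signal', fanout_exchange, routing_key='default')

app.conf.task_queues = (client_a_queue,)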
I am using Celery with Rabbitmq broker on Server A. Some tasks require interaction with another server say, Server B and I am using Rabbitmq queues for this interaction.
Queue 1 - Server A (Producer), Server B (Consumer)
Queue 2 - Server B (Producer), Server A (Consumer)
My celery setup is unexpectedly hanging and I have found the reason to be an incorrect implementation of the Server A consumer code.
channel.start_consuming() keeps polling RabbitMQ as expected; however, putting this in a celery task creates multiple pollers which don't expire. I can add expiry, but the completion time for the data being sent to Server B cannot be guaranteed. The code pasted below is one method I used to tackle the issue, but I am not convinced it is the best solution.
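(By adding expiry I mean something like the bounded consume loop below instead of a bare start_consuming(); this is an untested sketch relying on the inactivity_timeout argument available in newer pika versions:)
# Untested sketch: stop consuming once QUEUE_2 has been idle for 30 seconds,
# instead of blocking forever in start_consuming().
for method, properties, body in channel.consume(QUEUE_2, inactivity_timeout=30):
    if body is None:  # inactivity timeout hit, queue went quiet
        break
    task_b.delay(json.loads(body))
    channel.basic_ack(method.delivery_tag)
channel.cancel()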
I wish to know what I am doing wrong and what is the right way to implement this because I have failed searching for articles on the web. Any tips, insights and even links to articles would be extremely helpful.
Finally, my code -
@celery.task
def task_a(data):
    do_some_processing
    # Create only 1 RabbitMQ consumer instance to avoid celery hangups
    task_d.delay()
@celery.task
def task_b(data):
    do_some_processing
    if data is not None:
        task_c.delay()
@celery.task
def task_c():
    data = some_data
    data = json.dumps(data)
    conn_params = pika.ConnectionParameters(host=RABBITMQ_HOST)
    connection = pika.BlockingConnection(conn_params)
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE_1)
    channel.basic_publish(exchange='',
                          routing_key=QUEUE_1,
                          body=data)
    channel.close()
@celery.task
def task_d():
    def queue_helper(ch, method, properties, body):
        '''
        Callback from queue.
        '''
        data = json.loads(body)
        task_b.delay(data)

    conn_params = pika.ConnectionParameters(host=RABBITMQ_HOST)
    connection = pika.BlockingConnection(conn_params)
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE_2)
    channel.basic_consume(queue_helper,
                          queue=QUEUE_2,
                          no_ack=True)
    channel.start_consuming()
    channel.close()
I am trying to implement an IRC Bot on a local server. The bot that I am using is identical to the one found at Eric Florenzano's Blog. This is the simplified code (which should run)
import sys
import re
from twisted.internet import reactor
from twisted.words.protocols import irc
from twisted.internet import protocol
class MomBot(irc.IRCClient):
    def _get_nickname(self):
        return self.factory.nickname
    nickname = property(_get_nickname)

    def signedOn(self):
        print "attempting to sign on"
        self.join(self.factory.channel)
        print "Signed on as %s." % (self.nickname,)

    def joined(self, channel):
        print "attempting to join"
        print "Joined %s." % (channel,)

    def privmsg(self, user, channel, msg):
        if not user:
            return
        if self.nickname in msg:
            msg = re.compile(self.nickname + "[:,]* ?", re.I).sub('', msg)
            prefix = "%s: " % (user.split('!', 1)[0], )
        else:
            prefix = ''
        self.msg(self.factory.channel, prefix + "hello there")
class MomBotFactory(protocol.ClientFactory):
    protocol = MomBot

    def __init__(self, channel, nickname='YourMomDotCom', chain_length=3,
                 chattiness=1.0, max_words=10000):
        self.channel = channel
        self.nickname = nickname
        self.chain_length = chain_length
        self.chattiness = chattiness
        self.max_words = max_words

    def startedConnecting(self, connector):
        print "started connecting on {0}:{1}".format(
            str(connector.host), str(connector.port))

    def clientConnectionLost(self, connector, reason):
        print "Lost connection (%s), reconnecting." % (reason,)
        connector.connect()

    def clientConnectionFailed(self, connector, reason):
        print "Could not connect: %s" % (reason,)
if __name__ == "__main__":
chan = sys.argv[1]
reactor.connectTCP("localhost", 6667, MomBotFactory('#' + chan,
'YourMomDotCom', 2, chattiness=0.05))
reactor.run()
I added the startedConnecting method in the client factory, which it is reaching, and it prints out the proper host:port. It then disconnects, enters clientConnectionLost, and prints the error:
Lost connection ([Failure instance: Traceback (failure with no frames):
<class 'twisted.internet.error.ConnectionDone'>: Connection was closed cleanly.
]), reconnecting.
If working properly, it should log into the appropriate channel, specified as the first arg on the command line (e.g. python module2.py botwar would join channel #botwar). It should respond with "hello there" if anyone in the channel sends anything.
I have NGIRC running on the server, and it works if I connect from mIRC or any other IRC client.
I am unable to find a resolution as to why it is continually disconnecting. Any help on why would be greatly appreciated. Thank you!
One thing you may want to do is make sure you will see any error output produced by the server when your bot connects to it. My hunch is that the problem has something to do with authentication, or perhaps an unexpected difference in how ngirc handles one of the login/authentication commands used by IRCClient.
One approach that almost always applies is to capture a traffic log. Use a tool like tcpdump or wireshark.
Another approach you can try is to enable logging inside the Twisted application itself. Use twisted.protocols.policies.TrafficLoggingFactory for this:
from twisted.protocols.policies import TrafficLoggingFactory
appFactory = MomBotFactory(...)
logFactory = TrafficLoggingFactory(appFactory, "irc-")
reactor.connectTCP(..., logFactory)
This will log output to files starting with "irc-" (a different file for each connection).
You can also hook directly into your protocol implementation, at any one of several levels. For example, to display any bytes received at all:
class MomBot(irc.IRCClient):
    def dataReceived(self, bytes):
        print "Got", repr(bytes)
        # Make sure to up-call - otherwise all of the IRC logic is disabled!
        return irc.IRCClient.dataReceived(self, bytes)
With one of those approaches in place, hopefully you'll see something like:
:irc.example.net 451 * :Connection not registered
which I think means... you need to authenticate? Even if you see something else, hopefully this will help you narrow in more closely on the precise cause of the connection being closed.
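If it does turn out to be a registration/password issue, IRCClient can send a server password for you during login when its password attribute is set. A minimal sketch (the value here is obviously made up):
class MomBot(irc.IRCClient):
    # When set, IRCClient sends PASS <password> as part of registration.
    password = "your-server-password"  # hypothetical value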
Also, you can use tcpdump or wireshark to capture the traffic log between ngirc and one of the working IRC clients (eg mIRC) and then compare the two logs. Whatever different commands mIRC is sending should make it clear what changes you need to make to your bot.
I've got a bunch of celery tasks that take their results and post them to a RabbitMQ message queue. The results that get posted can become quite large (up to a few meg). Opinion is mixed as to whether putting large amounts of data in a RabbitMQ message is a good idea, but I've seen this work in other situations and as long as memory is kept under control, it seems to work.
However, for my current set of tasks, rabbit appears to be just dropping messages that seem to be too big. I've reduced it down to a fairly simple test case:
#!/usr/bin/env python
import string
import random
import pika
import os
qname='examplequeue'
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='mq.example.com'))
channel = connection.channel()
channel.queue_declare(queue=qname,durable=True)
N=100000
body = ''.join(random.choice(string.ascii_uppercase) for x in range(N))
promise = channel.basic_publish(exchange='', routing_key=qname, body=body, mandatory=0, immediate=0, properties=pika.BasicProperties(content_type="text/plain",delivery_mode=2))
print " [x] Sent 'Hello World!'"
connection.close()
I have a 3-node RabbitMQ cluster, and mq.example.com round-robins to each node. Client is using Pika 0.9.5 on Ubuntu 12.04 and the RabbitMQ cluster is running RabbitMQ 2.8.7 on Erlang R14B04.
Executing this script prints the print statement and exits without any exceptions being raised. The message never appears in RabbitMQ.
Changing N to 10000 makes it work as expected.
Why?
I suppose you have a problem with the TCP backpressure mechanism in RabbitMQ. You can read about it at http://www.rabbitmq.com/memory.html.
I see two ways to solve this problem:
Add a TCP callback and reconnect on every TCP call from rabbit.
Compress messages before sending them to rabbit; it will make them easier to push to rabbit.
import binascii
import zlib

def compress(s):
    return binascii.hexlify(zlib.compress(s))

def decompress(s):
    return zlib.decompress(binascii.unhexlify(s))
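So on the publishing side of the test script above, the body would be compressed before basic_publish. A sketch only, reusing channel, qname and body from the question:
# Sketch: publish the compressed payload instead of the raw body.
channel.basic_publish(exchange='',
                      routing_key=qname,
                      body=compress(body),
                      properties=pika.BasicProperties(content_type="text/plain",
                                                      delivery_mode=2))
# On the consuming side, decompress(received_body) restores the original payload.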
This is what I do to send and receive packets. It is somewhat more space-efficient than hexlify, because base64 needs roughly four characters for every three bytes, whereas hexlify always needs two characters per byte.
import zlib
import base64
def hexpress(send: str):
    print(f"send: {send}")
    bsend = send.encode()
    print(f"byte-encoded send: {bsend}")
    zbsend = zlib.compress(bsend)
    print(f"zipped-byte-encoded-send: {zbsend}")
    hzbsend = base64.b64encode(zbsend)
    print(f"hex-zip-byte-encoded-send: {hzbsend}")
    shzbsend = hzbsend.decode()
    print(f"string-hex-zip-byte-encoded-send: {shzbsend}")
    return shzbsend

def hextract(recv: str):
    print(f"string-hex-zip-byte-encoded-recv: {recv}")
    zbrecv = base64.b64decode(recv)
    print(f"zipped-byte-encoded-recv: {zbrecv}")
    brecv = zlib.decompress(zbrecv)
    print(f"byte-encoded-recv: {brecv}")
    recv = brecv.decode()
    print(f"recv: {recv}")
    return recv
print("sending ...\n")
send = "hello this is dog"
packet = hexpress(send)
print("\nover the wire -------->>>>>\n")
print("receiving...\n")
recv = hextract(packet)
I am using celery + rabbitmq. I can't find a convenient way to clear a queue in celery + rabbitmq. I currently do it by removing and recreating the vhost:
rabbitmqctl delete_vhost <vhostpath>
rabbitmqctl add_vhost <vhostpath>
Is there a preferred way to clear a celery queue?
I'm not quite sure how celery works, but I suspect you want to purge a RabbitMQ queue (you're currently simulating this by deleting the queues and having celery re-create them).
You could install RabbitMQ's Management Plugin. Its WebUI will allow you to purge the required queue. This should also tell you which queue you're aiming for, so you wouldn't need to delete everything.
Once you know which queue it is, you could purge it programmatically. For instance, using py-amqplib, you would do something like:
from amqplib import client_0_8 as amqp
conn = amqp.Connection(host="localhost:5672", userid="guest", password="guest", virtual_host="/", insist=False)
conn = conn.channel()
conn.queue_purge("the-target-queue")
There's probably a better way to do it, though.
If you are facing this problem because you used RabbitMQ as the result backend and as a result ended up with too many queues, then I would suggest using a different result backend (redis or mongodb).
This is one well-known flaw of celery: it will create a separate queue for each result if you use amqp as the result backend.
If you still want to stick with amqp as the result backend, the result queues will clear themselves after 24 hours. You can, however, set a smaller value using the CELERY_AMQP_TASK_RESULT_EXPIRES setting.
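For example, a sketch using the old-style upper-case setting name mentioned above; the value is in seconds:
# celeryconfig.py sketch: expire amqp result queues after one hour instead of 24.
CELERY_AMQP_TASK_RESULT_EXPIRES = 3600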
If you need to delete ALL items in ALL queues (especially when the list of queues is long):
1) Save all queue names into a file:
sudo rabbitmqctl list_queues -p /yourvhost name > queues.txt
(don't forget to remove the first and last lines from 'queues.txt')
2) Use the Python code below to do the job:
from amqplib import client_0_8 as amqp
conn = amqp.Connection(host="127.0.0.1:5672", userid="guest", password="guest", virtual_host="/yourvhost", insist=False)
conn = conn.channel()
queues = None
with open('queues.txt', 'r') as f:
queues = f.readlines()
for q in queues:
if q:
#print 'deleting %s' % q
conn.queue_purge(q.strip())
print 'purged %d items' % len(queues)