I'm trying to build a simple message queue with RabbitMQ: I push a message with create_message
and then try to get the message by its routing key.
It works great when the routing key is the same. The problem is that when the routing key is different, I keep getting the message with the wrong routing key.
For example:
import pika

def callback(ch, method, properties, body):
    print("%r:%r" % (method.routing_key, body))

def create_message(self):
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.exchange_declare(exchange='www')
    channel.queue_declare(queue='hello')
    channel.basic_publish(exchange='www',
                          routing_key="11",
                          body='Hello World1111!')
    connection.close()
    self.get_analysis_task_celery()

def get_message(self):
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.exchange_declare(exchange='www')
    timeout = 1
    connection.add_timeout(timeout, on_timeout)  # on_timeout is defined elsewhere
    channel.queue_bind(exchange="www", queue="hello", routing_key="10")
    channel.basic_consume(callback,
                          queue='hello',
                          no_ack=True,
                          consumer_tag="11")
    channel.start_consuming()
Example of my output: '11':'Hello World1111!'
What am I doing wrong?
Thanks for the help.
This is a total guess, since I can't see your RabbitMQ server...
If you open the RabbitMQ management website and look at your exchange, you will probably see that the exchange is bound to the queue for both routing keys 10 and 11.
Since both bindings point to the same queue, your message will always be delivered to that queue, and the consumer will always pick it up.
Again, I'm guessing since I can't see your server, but check it to make sure you don't have leftover / extra bindings.
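If you do find a stray binding, you can inspect bindings with rabbitmqctl list_bindings, or drop the unwanted one from code. A minimal sketch with pika, assuming the exchange/queue/routing-key names from your snippet (adjust to whichever binding you don't want):

import pika

# Remove the leftover binding so only the intended routing key stays bound
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_unbind(queue='hello', exchange='www', routing_key='11')
connection.close()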
PROBLEM SOLVED.
Just create 2 different queues, like rpc.queue and pubsub.queue. Then you can use multiple messaging patterns in one service without any problem.
I created a Rails service using the Bunny and ConnectionPool gems. This service "in my mind (because it is not yet implemented)" handles multiple RMQ patterns such as direct messaging and RPC. These patterns are initialized with different objects of the Connection class, defined inside the initializer folder.
The initializer looks like this:
# RMQ Initializer for RabbitMQ Connection
class RMQ
  include Lontara::RMQ

  # NOTE: Calling 2 servers caused errors
  def self.start(url:, queue:, rpc_exchange:, pubsub_exchange:)
    # Then start the consumer and subscriber
    Server::RPCConsumer.new(Connection.new(url:), queue:, exchange: rpc_exchange).consume
    Server::Subscriber.new(Connection.new(url:), queue:, exchange: pubsub_exchange).subscribe
  end
end
RMQ.start(
  url: ENV.fetch('RABBITMQ_URL', 'amqp://guest:guest@rmqserver:5672'),
  queue: ENV.fetch('RABBITMQ_QUEUE_VOUCHER', 'lontara-dev.voucher'),
  rpc_exchange: ENV.fetch('RABBITMQ_EXCHANGE_RPC', 'lontara-dev.rpc'),
  pubsub_exchange: ENV.fetch('RABBITMQ_EXCHANGE_PUBSUB', 'lontara-dev.pubsub')
)
and the Connection class:
module Lontara
  module RMQ
    # Class Connection initializes the connection to RabbitMQ.
    class Connection
      def initialize(url: ENV['RABBITMQ_URL'])
        @connection = Bunny.new(url)
        connection.start
        @channel = channel_pool.with(&:create_channel)

        yield self if block_given?
      end

      def close
        channel.close
        connection.close
      end

      attr_reader :connection, :channel

      private

      def channel_pool
        @channel_pool ||= ConnectionPool.new { @connection }
      end
    end
  end
end
The problem occurs whenever these 2 Server:: consumers (RPC and Subscriber) are activated. Only RPC messaging is impacted: the RPC Publisher does not get a response from the Consumer.
The steps (when RPC produces the error) are:
1. Run the Rails server.
2. Open a new terminal, and open a rails console in the same project.
3. Create a request to the Consumer using RPCPublisher.
4. The Publisher gets a response. Then send the request again... On this step it gets no response.
5. The job is pending; I press Ctrl+C to terminate it. I send the request again, and get a response...
6. Try again as in step 4, and it errors...
But if Server::Publisher is not initialized in the initializer, no error happens.
I assumed this error happened because of threading... but I didn't really get help from other articles on the internet.
My expectation is simple:
The RPC connection handles Get and related requests (because RPC can reply to a request) or other actions that require a response, and Pub/Sub (Direct) handles Create, Update and Delete, since those don't need one.
Your answer really helped me... Thank you!
I have implemented Celery with RabbitMQ as the broker. I rely on Celery v4.4.7 since I have read that v5.0+ doesn't support RabbitMQ anymore. RabbitMQ is a MUST in my case.
Everything has been containerized and deployed as pods within Kubernetes 1.19. I am able to execute long-running tasks, and everything apparently looks fine at first glance. However, I have a few concerns which require your expertise.
I have declared inbound and outbound queues, but Celery created its own and I do not see any messages within those queues (inbound or outbound):
from celery import Celery
from kombu import Exchange, Queue

inbound_queue = "_IN"
outbound_queue = "_OUT"

app = Celery()
app.conf.update(
    broker_url = 'pyamqp://%s//' % path,
    broker_heartbeat = None,
    broker_connection_timeout = int(timeout),
    result_backend = 'rpc://',
    result_persistent = True,
    task_queues = (
        Queue(algorithm_queue, Exchange(inbound_queue), routing_key='default', auto_delete=False),
        Queue(result_queue, Exchange(outbound_queue), routing_key='default', auto_delete=False),
    ),
    task_default_queue = inbound_queue,
    task_default_exchange = inbound_exchange,
    task_default_exchange_type = 'direct',
    task_default_routing_key = 'default',
)
@app.task(bind=True,
          name='osmq.tasks.add',
          queue=inbound_queue,
          reply_to=outbound_queue,
          autoretry_for=(Exception,),
          retry_kwargs={'max_retries': 5, 'countdown': 2})
def execute(self, data):
    <method_implementation>
I have implemented callbacks to get results back via REST APIs. However, it randomly may or may not return results even when the status is successful. This is probably related to message persistence. In detail: when I use the Flower API to get task info, the status is successful and the result is partially displayed (shortened JSON messages); when I call AsyncResult for the same status, the result is either None or the right one. I do not understand the mechanism between the RabbitMQ queues and Kombu, which seems to cache the resulting message. I must guarantee that I can retrieve the result every time the task has been successfully executed.
def callback(uuid):
    task = app.AsyncResult(uuid)
Specifically, it was that Celery 5.0+ no longer supports amqp:// as a result backend. However, as in your example, rpc:// is supported.
The relevant snippet is here: https://docs.celeryproject.org/en/stable/getting-started/backends-and-brokers/index.html#rabbitmq
We tend to always set ignore_result=True in our implementation, so I can't give any practical tips on how to use rpc://, other than to infer that any response is put on an application-specific queue, instead of being able to go to a specified queue (or even a different broker / RabbitMQ instance) via amqp://.
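For reference, a minimal sketch of the supported combination (the broker URL and app name here are placeholders, not from the original post):

from celery import Celery

# RabbitMQ remains supported as a broker in Celery 5.x; what was dropped is
# the amqp:// result backend. The rpc:// backend shown here still works.
app = Celery('tasks',
             broker='pyamqp://guest:guest@localhost//',
             backend='rpc://')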
import pika

params = pika.URLParameters([URL])
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='test', durable=True)
channel.basic_consume(do_things, queue='test')

try:
    channel.start_consuming()
except KeyboardInterrupt:
    channel.stop_consuming()
except:
    rollbar.report_exc_info()
finally:
    channel.close()
    connection.close()
This is the code I use to consume messages. The problem is: say I have 100 messages in the test queue. Once I start the consumer, it takes all 100 messages and processes them one by one, i.e. the queue status becomes: messages ready: 0, unacked: 100, total: 100. As a result, I can't spin up new consumers to process the 100 messages in parallel, because there are no messages left for new consumers (all have been taken by the existing consumer, although most haven't been processed yet). Is there a way to let the consumer take only 1 message at a time?
You need to specify the quality of service (QoS) you want for your channel.
In your case, prefetch_count is the parameter you need.
import pika

params = pika.URLParameters([URL])
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.basic_qos(prefetch_count=1)  # deliver at most 1 unacked message per consumer
channel.basic_consume(do_things, queue='test')
channel.start_consuming()
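One caveat worth noting: prefetch only limits unacknowledged deliveries, so your callback has to ack each message (ch.basic_ack(delivery_tag=method.delivery_tag)) before the broker sends the next one. With auto-ack (no_ack=True) every delivery counts as acknowledged immediately and the limit has no effect.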
I'm trying to achieve a reject/delay loop using RabbitMQ's operations, i.e.:
I have:
A Main Queue with a Main Exchange bound to it and a DLX pointing to a StandBy Exchange.
A StandBy Queue with a StandBy Exchange bound to it, a 60s TTL, and a DLX pointing back to the Main Exchange.
Basically I want to:
1. Consume from the Main Queue.
2. Reject the message (under certain circumstances).
3. Have it redirected to the StandBy Queue because of the rejection.
4. When the TTL expires, re-queue the message to the Main Queue.
Steps 1, 2 and 3 are OK, but the last one drops the message instead of re-queuing it.
Some theory from RabbitMQ's docs that I used to design this:
Messages from a queue can be 'dead-lettered'; that is, republished to another exchange when any of the following events occur:
The message is rejected (basic.reject or basic.nack) with requeue=false,
The TTL for the message expires; or
The queue length limit is exceeded.
...
It is possible to form a cycle of message dead-lettering. For instance, this can happen when a queue dead-letters messages to the default exchange without specifying a dead-letter routing key. Messages in such cycles (i.e. messages that reach the same queue twice) will be dropped if there was no rejections in the entire cycle.
The theory says it should be re-queued because there was a rejection in the cycle at step #2, so can you help me figure out why it drops the message instead of re-queuing it?
UPDATE:
The version I was targeting was 2.8.4, and it seems that at that time the "if there was no rejections in the entire cycle" clause wasn't part of the behaviour; you can check this yourselves in the RabbitMQ 2.8.x docs.
I'll accept @george's answer, as the original objective can be achieved by his code.
Rafael, I am not sure which client you are using, but with the Pika client in Python you could implement something like this. For simplicity I only use one exchange. Are you sure you are setting the exchange and the routing key properly?
sender.py
import sys
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='cycle', type='direct')

channel.queue_declare(queue='standby_queue',
                      arguments={
                          'x-message-ttl': 10000,
                          'x-dead-letter-exchange': 'cycle',
                          'x-dead-letter-routing-key': 'main_queue'})
channel.queue_declare(queue='main_queue',
                      arguments={
                          'x-dead-letter-exchange': 'cycle',
                          'x-dead-letter-routing-key': 'standby_queue'})

# pika uses the queue name as the routing key when none is given
channel.queue_bind(queue='main_queue', exchange='cycle')
channel.queue_bind(queue='standby_queue', exchange='cycle')

channel.basic_publish(exchange='cycle',
                      routing_key='main_queue',
                      body="message body")
connection.close()
receiver.py
import sys
import pika

def callback(ch, method, properties, body):
    print("Processing message: {}".format(body))
    # replace with the condition for rejection
    if True:
        print("Rejecting message")
        ch.basic_nack(method.delivery_tag, False, False)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_consume(callback, queue='main_queue')
channel.start_consuming()
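With these declarations the loop should behave as follows: a nacked message on main_queue is dead-lettered to the cycle exchange with routing key standby_queue, waits out the 10-second x-message-ttl there, and is then dead-lettered back with routing key main_queue, where it can be consumed (and possibly rejected) again.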
I am using Celery with a RabbitMQ broker on Server A. Some tasks require interaction with another server, say Server B, and I am using RabbitMQ queues for this interaction.
Queue 1 - Server A (Producer), Server B (Consumer)
Queue 2 - Server B (Producer), Server A (Consumer)
My Celery worker is unexpectedly hanging, and I have found the reason to be an incorrect implementation of the Server A consumer code.
channel.start_consuming() keeps polling RabbitMQ as expected; however, putting this in a Celery task creates multiple pollers which don't expire. I can add an expiry, but the completion time for the data being sent to Server B cannot be guaranteed. The code pasted below is one method I used to tackle the issue, but I am not convinced it is the best solution.
I wish to know what I am doing wrong and what the right way to implement this is, because my searches for articles on the web have failed. Any tips, insights and even links to articles would be extremely helpful.
Finally, my code -
@celery.task
def task_a(data):
    do_some_processing
    # Create only 1 RabbitMQ consumer instance to avoid Celery hangups
    task_d.delay()

@celery.task
def task_b(data):
    do_some_processing
    if data is not None:
        task_c.delay()

@celery.task
def task_c():
    data = some_data
    data = json.dumps(data)
    conn_params = pika.ConnectionParameters(host=RABBITMQ_HOST)
    connection = pika.BlockingConnection(conn_params)
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE_1)
    channel.basic_publish(exchange='',
                          routing_key=QUEUE_1,
                          body=data)
    channel.close()

@celery.task
def task_d():
    def queue_helper(ch, method, properties, body):
        '''
        Callback from queue.
        '''
        data = json.loads(body)
        task_b.delay(data)

    conn_params = pika.ConnectionParameters(host=RABBITMQ_HOST)
    connection = pika.BlockingConnection(conn_params)
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE_2)
    channel.basic_consume(queue_helper,
                          queue=QUEUE_2,
                          no_ack=True)
    channel.start_consuming()
    channel.close()