I have two machines running RabbitMQ, a master (ip1) and a slave (ip2). I mirrored them, added a policy, and synced a queue and an exchange, but when I'm sending messages and the master goes down, the slave stops responding and my application can no longer send messages.
I have a virtual host, a durable queue and exchange, and a policy to promote the slave.
The policy:
Pattern: ^test
Apply to: all
Definition:
    ha-mode: exactly
    ha-params: 5
    ha-promote-on-failure: always
    ha-promote-on-shutdown: always
    ha-sync-mode: automatic
    queue-master-locator: random
Priority: 0
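For reference, the same policy can also be created programmatically. Below is a minimal sketch that uses the RabbitMQ management HTTP API through the requests library; the management port 15672 and the policy name ha-test are assumptions about this setup, not values taken from the question.

import requests

# Policy definition mirroring the one configured in the UI above.
policy = {
    "pattern": "^test",
    "apply-to": "all",
    "priority": 0,
    "definition": {
        "ha-mode": "exactly",
        "ha-params": 5,
        "ha-promote-on-failure": "always",
        "ha-promote-on-shutdown": "always",
        "ha-sync-mode": "automatic",
        "queue-master-locator": "random",
    },
}

# PUT /api/policies/<vhost>/<policy-name> creates or updates a policy.
# Port 15672 and the policy name "ha-test" are assumptions, not from the question.
response = requests.put(
    "http://ip1:15672/api/policies/virtual_host/ha-test",
    json=policy,
    auth=("usr", "pass"),
)
response.raise_for_status()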
This is the application that I'm using to publish messages (EDIT: the queue is already declared in the RabbitMQ UI):
import pika
import time

credentials = pika.PlainCredentials('usr', 'pass')
parameters = pika.ConnectionParameters('ip1',
                                       5672,
                                       'virtual_host',
                                       credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()

channel.exchange_declare(exchange="test_exchange",
                         exchange_type="direct",
                         passive=False,
                         durable=True,
                         auto_delete=False)

# channel.queue_declare(queue='test', durable=True, exclusive=False, auto_delete=False)

input('Press ENTER to begin')

for i in range(10000):
    channel.basic_publish(exchange='',
                          routing_key='test',
                          body='Hello World! ' + str(i))
    time.sleep(0.001)
    print(" [x] Sent 'Hello World!'")
I expect that when the master goes down, the slave is promoted and keeps receiving the messages.
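Note that the publisher above only ever connects to ip1, so even if the mirror on ip2 is promoted, the client has no other node to fail over to. Below is a minimal sketch of one way to handle that on the client side, assuming a recent pika where BlockingConnection accepts a list of ConnectionParameters; the retry loop and the helper name publish_with_failover are illustrative, not part of the original code.

import time

import pika
from pika.exceptions import AMQPConnectionError

credentials = pika.PlainCredentials('usr', 'pass')

# List both nodes; BlockingConnection tries them in order until one succeeds.
all_nodes = [
    pika.ConnectionParameters('ip1', 5672, 'virtual_host', credentials),
    pika.ConnectionParameters('ip2', 5672, 'virtual_host', credentials),
]


def publish_with_failover(messages):
    """Publish every message, reconnecting to a surviving node if the broker drops."""
    pending = list(messages)
    while pending:
        try:
            connection = pika.BlockingConnection(all_nodes)
            channel = connection.channel()
            while pending:
                channel.basic_publish(exchange='',
                                      routing_key='test',
                                      body=pending[0])
                pending.pop(0)
            connection.close()
        except AMQPConnectionError:
            # The node went away mid-stream; wait briefly and try the list again.
            time.sleep(1)


publish_with_failover('Hello World! ' + str(i) for i in range(10000))

Because a message is only removed from the pending list after basic_publish returns, a message may be sent twice after a failover; this sketch trades exactly-once delivery for not losing messages.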
I'm trying to achieve a reject/delay loop using RabbitMQ's operations, i.e.:
I have:
Main Queue, with the Main Exchange bound to it and a DLX pointing to the StandBy Exchange.
StandBy Queue, with the StandBy Exchange bound to it, a 60s TTL, and a DLX pointing to the Main Exchange.
Basically I want to:
1. Consume from the Main Queue
2. Reject the message (under certain circumstances)
3. Have it redirected to the StandBy Queue because of the rejection
4. When the TTL expires, have the message re-queued to the Main Queue.
Steps 1, 2 and 3 are OK, but the last one drops the message instead of re-queueing it.
Some theory from RabbitMQ's docs that I used to design this:
Messages from a queue can be 'dead-lettered'; that is, republished to another exchange when any of the following events occur:
The message is rejected (basic.reject or basic.nack) with requeue=false,
The TTL for the message expires; or
The queue length limit is exceeded.
...
It is possible to form a cycle of message dead-lettering. For instance, this can happen when a queue dead-letters messages to the default exchange without specifying a dead-letter routing key. Messages in such cycles (i.e. messages that reach the same queue twice) will be dropped if there were no rejections in the entire cycle.
The theory says it should be re-queued because there is a rejection in the cycle (from step 2), so can you help me figure out why it drops the message instead of re-queueing it?
UPDATE:
The version I was targeting was 2.8.4, and it seems that at that time the "if there were no rejections in the entire cycle" clause wasn't part of the documented behaviour; you can check this yourselves in the RabbitMQ 2.8.x Docs.
I'll accept @george's answer, as the original objective can be achieved with his code.
Rafael, I am not sure which client you are using, but with the Pika client in Python you could implement something like this. For simplicity I only use one exchange. Are you sure you are setting the exchange and the routing key properly?
sender.py
import sys
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='cycle', type='direct')

channel.queue_declare(queue='standby_queue',
                      arguments={
                          'x-message-ttl': 10000,
                          'x-dead-letter-exchange': 'cycle',
                          'x-dead-letter-routing-key': 'main_queue'})

channel.queue_declare(queue='main_queue',
                      arguments={
                          'x-dead-letter-exchange': 'cycle',
                          'x-dead-letter-routing-key': 'standby_queue'})

channel.queue_bind(queue='main_queue', exchange='cycle')
channel.queue_bind(queue='standby_queue', exchange='cycle')

channel.basic_publish(exchange='cycle',
                      routing_key='main_queue',
                      body="message body")
connection.close()
receiver.py
import sys
import pika


def callback(ch, method, properties, body):
    print "Processing message: {}".format(body)

    # replace with condition for rejection
    if True:
        print "Rejecting message"
        ch.basic_nack(method.delivery_tag, False, False)


connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_consume(callback, queue='main_queue')
channel.start_consuming()
I have two servers, call them A and B. B runs RabbitMQ, while A connects to RabbitMQ via Kombu. If I restart RabbitMQ on B, the Kombu connection breaks and messages are no longer delivered. I then have to restart the process on A to re-establish the connection. Is there a better approach, i.e. is there a way for Kombu to reconnect automatically, even if the RabbitMQ process is restarted?
My basic code implementation is below, thanks in advance! :)
def start_consumer(routing_key, incoming_exchange_name, outgoing_exchange_name):
    global rabbitmq_producer

    incoming_exchange = kombu.Exchange(name=incoming_exchange_name, type='direct')
    incoming_queue = kombu.Queue(name=routing_key + '_' + incoming_exchange_name,
                                 exchange=incoming_exchange,
                                 routing_key=routing_key)  # , auto_delete=True

    outgoing_exchange = kombu.Exchange(name=outgoing_exchange_name, type='direct')
    rabbitmq_producer = kombu.Producer(settings.rabbitmq_connection0,
                                       exchange=outgoing_exchange,
                                       serializer='json',
                                       compression=None,
                                       auto_declare=True)

    settings.rabbitmq_connection0.connect()
    if settings.rabbitmq_connection0.connected:
        callbacks = []
        queues = []

        callbacks.append(callback)
        # if push_queue:
        #     callbacks.append(push_message_callback)
        queues.append(incoming_queue)

        print 'opening a new *incoming* rabbitmq connection to the %s exchange for the %s queue' % (incoming_exchange.name, incoming_queue.name)
        incoming_exchange(settings.rabbitmq_connection0).declare()
        incoming_queue(settings.rabbitmq_connection0).declare()

        print 'opening a new *outgoing* rabbitmq connection to the %s exchange' % outgoing_exchange.name
        outgoing_exchange(settings.rabbitmq_connection0).declare()

        with settings.rabbitmq_connection0.Consumer(queues=queues, callbacks=callbacks) as consumer:
            while True:
                settings.rabbitmq_connection0.drain_events()
On the consumer side, kombu.mixins.ConsumerMixin handles reconnecting when the connection goes away (it also does heartbeats, etc., and lets you write less code). There doesn't seem to be a ProducerMixin, unfortunately, but you could potentially dig into the code and adapt it...?
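A minimal sketch of what the consumer side could look like with ConsumerMixin; the exchange name, queue name, routing key and broker URL below are placeholders, not taken from the code above.

import kombu
from kombu.mixins import ConsumerMixin


class Worker(ConsumerMixin):
    """Consumes one queue; ConsumerMixin re-establishes the connection if the broker restarts."""

    def __init__(self, connection, queue):
        self.connection = connection  # ConsumerMixin expects this attribute
        self.queue = queue

    def get_consumers(self, Consumer, channel):
        return [Consumer(queues=[self.queue], callbacks=[self.on_message])]

    def on_message(self, body, message):
        print('received: %r' % (body,))
        message.ack()


if __name__ == '__main__':
    exchange = kombu.Exchange('incoming_exchange', type='direct')
    queue = kombu.Queue('incoming_queue', exchange=exchange, routing_key='my_key')
    with kombu.Connection('amqp://guest:guest@localhost//') as conn:
        Worker(conn, queue).run()  # run() loops and reconnects on connection errors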
I'm trying to build a simple message queue with RabbitMQ. I push a message with create_message and then try to get the message by its routing key.
It works great when the routing key is the same. The problem is that when the routing key is different, I still receive the message published with the wrong routing key.
For example:
def callback(ch, method, properties, body):
    print("%r:%r" % (method.routing_key, body))

def create_message(self):
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.exchange_declare(exchange='www')
    channel.queue_declare(queue='hello')
    channel.basic_publish(exchange='www',
                          routing_key="11",
                          body='Hello World1111!')
    connection.close()
    self.get_analysis_task_celery()

def get_message(self):
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.exchange_declare(exchange='www')
    timeout = 1
    connection.add_timeout(timeout, on_timeout)
    channel.queue_bind(exchange="www", queue="hello", routing_key="10")
    channel.basic_consume(callback,
                          queue='hello',
                          no_ack=True,
                          consumer_tag="11")
    channel.start_consuming()
Example of my output: '11':'Hello World1111!'
What am I doing wrong?
Thanks for the help.
This is a total guess, since I can't see your RabbitMQ server, but if you open the RabbitMQ management website and look at your exchange, you will probably see that it is bound to the queue for both routing key 10 and routing key 11.
Since both bindings point to the same queue, your message will always be delivered to that queue, and the consumer will always pick it up.
Again, I'm guessing since I can't see your server, but check it to make sure you don't have leftover / extra bindings.
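If the intent is for each consumer to only see messages for its own routing key, one common pattern is a separate queue per key, plus removing the leftover binding. A minimal sketch of that idea (the queue names hello_10 / hello_11 are illustrative, not from the question):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='www')

# One queue per routing key, so a consumer of 'hello_10' never sees key "11".
for key in ("10", "11"):
    queue_name = 'hello_' + key
    channel.queue_declare(queue=queue_name)
    channel.queue_bind(exchange='www', queue=queue_name, routing_key=key)

# Drop the leftover binding that routes key "11" into the shared 'hello' queue.
channel.queue_unbind(queue='hello', exchange='www', routing_key='11')

connection.close()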
I am using Celery with a RabbitMQ broker on Server A. Some tasks require interaction with another server, say Server B, and I am using RabbitMQ queues for this interaction.
Queue 1 - Server A (Producer), Server B (Consumer)
Queue 2 - Server B (Producer), Server A (Consumer)
My Celery worker is unexpectedly hanging, and I have found the reason to be an incorrect implementation of the consumer code on Server A.
channel.start_consuming() keeps polling RabbitMQ as expected; however, putting it in a Celery task creates multiple pollers which don't expire. I can add an expiry, but then the completion time for the data being sent to Server B cannot be guaranteed. The code pasted below is one method I used to tackle the issue, but I am not convinced it is the best solution.
I wish to know what I am doing wrong and what the right way to implement this is, because I have not been able to find articles on the web. Any tips, insights and even links to articles would be extremely helpful.
Finally, my code -
@celery.task
def task_a(data):
    # ... do some processing ...

    # Create only 1 Rabbitmq consumer instance to avoid celery hangups
    task_d.delay()


@celery.task
def task_b(data):
    # ... do some processing ...

    if data is not None:
        task_c.delay()


@celery.task
def task_c():
    data = some_data
    data = json.dumps(data)

    conn_params = pika.ConnectionParameters(host=RABBITMQ_HOST)
    connection = pika.BlockingConnection(conn_params)
    channel = connection.channel()

    channel.queue_declare(queue=QUEUE_1)
    channel.basic_publish(exchange='',
                          routing_key=QUEUE_1,
                          body=data)
    channel.close()


@celery.task
def task_d():
    def queue_helper(ch, method, properties, body):
        '''
        Callback from queue.
        '''
        data = json.loads(body)
        task_b.delay(data)

    conn_params = pika.ConnectionParameters(host=RABBITMQ_HOST)
    connection = pika.BlockingConnection(conn_params)
    channel = connection.channel()

    channel.queue_declare(queue=QUEUE_2)
    channel.basic_consume(queue_helper,
                          queue=QUEUE_2,
                          no_ack=True)
    channel.start_consuming()
    channel.close()
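One way to avoid a long-lived start_consuming() loop inside a Celery task is to poll for at most one message per task invocation with basic_get and then exit. The sketch below is just that idea, not necessarily the right solution; it reuses celery, task_b, RABBITMQ_HOST and QUEUE_2 from the question and assumes the same older pika API (no_ack) used above.

import json

import pika

# celery, task_b, RABBITMQ_HOST and QUEUE_2 come from the question's module.

@celery.task
def task_d():
    '''Poll QUEUE_2 at most once and hand any message to task_b, then exit.'''
    conn_params = pika.ConnectionParameters(host=RABBITMQ_HOST)
    connection = pika.BlockingConnection(conn_params)
    channel = connection.channel()

    channel.queue_declare(queue=QUEUE_2)

    # basic_get returns (None, None, None) when the queue is empty,
    # so the task never blocks inside start_consuming().
    method, properties, body = channel.basic_get(queue=QUEUE_2, no_ack=True)
    if body is not None:
        task_b.delay(json.loads(body))

    connection.close()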
I've got a Python worker client that spins up 10 workers, each of which hooks onto a RabbitMQ queue. A bit like this:
#!/usr/bin/python
import multiprocessing

import pika

worker_count = 10

def mqworker(queue, configurer):
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='mqhost'))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    channel.basic_consume(callback, queue=queue, no_ack=False)
    channel.basic_qos(prefetch_count=1)
    channel.start_consuming()

def callback(ch, method, properties, body):
    doSomeWork()
    ch.basic_ack(delivery_tag=method.delivery_tag)

if __name__ == '__main__':
    for i in range(worker_count):
        # placeholder args
        worker = multiprocessing.Process(target=mqworker, args=('work_queue', None))
        worker.start()
The issue I have is that despite setting basic_qos on the channel, the first worker to start accepts all the messages off the queue, while the others sit there idle. I can see this in the RabbitMQ interface: even when I set worker_count to 1 and dump 50 messages on the queue, all 50 go into the 'unacknowledged' bucket, whereas I'd expect 1 message to become unacknowledged and the other 49 to be ready.
Why isn't this working?
I appear to have solved this by moving where basic_qos is called.
Placing it just after channel = connection.channel() appears to alter the behaviour to what I'd expect.
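For reference, a minimal sketch of the worker with that reordering applied (basic_qos before basic_consume, so the prefetch limit is in effect before the consumer registers); the queue name work_queue is a placeholder.

import pika


def callback(ch, method, properties, body):
    # ... do the work for one message ...
    ch.basic_ack(delivery_tag=method.delivery_tag)


def mqworker(qname):
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='mqhost'))
    channel = connection.channel()
    # Prefetch limit is set before registering the consumer, so each worker
    # holds at most one unacknowledged message and the rest stay 'ready'.
    channel.basic_qos(prefetch_count=1)
    channel.queue_declare(queue=qname, durable=True)
    channel.basic_consume(callback, queue=qname, no_ack=False)
    channel.start_consuming()


if __name__ == '__main__':
    mqworker('work_queue')  # placeholder queue name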