I've successfully set up RabbitMQ 3.3.1 queues in the application, including DLX usage. The requirement is to read DLQ messages, update them and resend them to the original queue. I use QueueingConsumer, channel.basicConsume and consumer.nextDelivery to read a specified number of messages. But after the read finishes successfully, the queue disappears, even though there are more messages in it...
The DLX declaration is:
channel.exchangeDeclare(dlxName, FANOUT, true, false, true, args);
channel.queueDeclare(dlqName, true, false, true, args);
What can be wrong with the code?
Your third boolean argument to queueDeclare is true; that argument stands for auto-delete, so when you close your AMQP connection, the queue is deleted.
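A declaration along these lines (a sketch based on the code above) keeps the dead-letter queue in place after the connection is closed:
channel.exchangeDeclare(dlxName, FANOUT, true, false, true, args);
// autoDelete (the third boolean) is now false, so the queue survives
// the declaring connection being closed
channel.queueDeclare(dlqName, true, false, false, args);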
I'm trying to achieve a reject/delay loop using Rabbit's operations, i.e.:
I have:
A Main Queue, with the Main Exchange bound to it and a DLX pointing to the StandBy Exchange.
A StandBy Queue, with the StandBy Exchange bound to it, a 60s TTL and a DLX pointing to the Main Exchange.
Basically I want to:
Consume from the Main Queue.
Reject the message (under certain circumstances).
The rejection redirects it to the StandBy Queue.
When the TTL expires, re-queue the message to the Main Queue.
Steps 1, 2 and 3 work, but the last one drops the message instead of re-queueing it.
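For reference, the declarations behind that setup would look roughly like this with the Java client (a sketch reconstructed from the description above; exchange names, queue names and routing keys are placeholders):
Map<String, Object> mainArgs = new HashMap<String, Object>();
mainArgs.put("x-dead-letter-exchange", "standby.exchange");
mainArgs.put("x-dead-letter-routing-key", "standby.queue");
Map<String, Object> standbyArgs = new HashMap<String, Object>();
standbyArgs.put("x-message-ttl", 60000); // 60s TTL
standbyArgs.put("x-dead-letter-exchange", "main.exchange");
standbyArgs.put("x-dead-letter-routing-key", "main.queue");
channel.exchangeDeclare("main.exchange", "direct", true);
channel.exchangeDeclare("standby.exchange", "direct", true);
channel.queueDeclare("main.queue", true, false, false, mainArgs);
channel.queueDeclare("standby.queue", true, false, false, standbyArgs);
channel.queueBind("main.queue", "main.exchange", "main.queue");
channel.queueBind("standby.queue", "standby.exchange", "standby.queue");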
Some theory from RabbitMQ's docs that I used to design this:
Messages from a queue can be 'dead-lettered'; that is, republished to another exchange when any of the following events occur:
The message is rejected (basic.reject or basic.nack) with requeue=false,
The TTL for the message expires; or
The queue length limit is exceeded.
...
It is possible to form a cycle of message dead-lettering. For instance, this can happen when a queue dead-letters messages to the default exchange without specifying a dead-letter routing key. Messages in such cycles (i.e. messages that reach the same queue twice) will be dropped if there were no rejections in the entire cycle.
The theory says the message should be re-queued because there is a rejection in the cycle (from step 2), so can you help me figure out why it drops the message instead of re-queueing it?
UPDATE:
The version I was targeting was 2.8.4, and it seems that at that time the "if there were no rejections in the entire cycle" exception wasn't part of the behaviour; anyway, you can check this yourselves in the RabbitMQ 2.8.x Docs.
I'll accept @george's answer, as the original objective can be achieved with his code.
Rafael, I am not sure what client you are using but with the Pika client in Python you could implement something like this. For simplicity I only use one exchange. Are you sure you are setting the exchange and the routing-key properly?
sender.py
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='cycle', type='direct')

# standby queue: messages expire after 10 seconds and are dead-lettered
# back to the main queue
channel.queue_declare(queue='standby_queue',
                      arguments={
                          'x-message-ttl': 10000,
                          'x-dead-letter-exchange': 'cycle',
                          'x-dead-letter-routing-key': 'main_queue'})

# main queue: rejected messages are dead-lettered to the standby queue
channel.queue_declare(queue='main_queue',
                      arguments={
                          'x-dead-letter-exchange': 'cycle',
                          'x-dead-letter-routing-key': 'standby_queue'})

# with no explicit routing key, each queue is bound with its own name as the key
channel.queue_bind(queue='main_queue', exchange='cycle')
channel.queue_bind(queue='standby_queue', exchange='cycle')

channel.basic_publish(exchange='cycle',
                      routing_key='main_queue',
                      body="message body")

connection.close()
receiver.py
import pika

def callback(ch, method, properties, body):
    print "Processing message: {}".format(body)
    # replace with the condition for rejection
    if True:
        print "Rejecting message"
        # requeue=False, so the broker dead-letters the message
        ch.basic_nack(method.delivery_tag, False, False)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_consume(callback, queue='main_queue')
channel.start_consuming()
I have an HTTP server which receives messages and must reply 200 when a message is successfully stored in a queue and 500 if the message is not added to the queue.
I would like RabbitMQ to refuse my messages when the queue reaches a size limit.
How can I do it?
Actually, you can't configure RabbitMQ in such a way, but you can programmatically check the queue size, for example:
AMQP.Queue.DeclareOk queueOkStatus = channel.queueDeclare(queueOutputName, true, false, false, null);
if (queueOkStatus.getMessageCount() == 0) { /* your logic here */ }
But be careful: the returned count only includes messages in the 'ready' state; unacknowledged messages are not counted.
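A rough sketch of that check with the Java client (the threshold and the message body variable here are hypothetical); queueDeclarePassive only inspects an existing queue and returns its ready-message count without redeclaring it:
final int MAX_READY_MESSAGES = 10000; // hypothetical limit
AMQP.Queue.DeclareOk status = channel.queueDeclarePassive(queueOutputName);
if (status.getMessageCount() < MAX_READY_MESSAGES) {
    channel.basicPublish("", queueOutputName, null, messageBody); // messageBody: your message bytes
    // reply 200 to the HTTP caller
} else {
    // reply 500: the queue has reached the size limit
}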
If you want to be aware of this, you can check the queue's message count before inserting. This sends a request on the same channel: declaring (asserting) the queue returns messageCount, which is the number of 'ready' messages. Note: this does not include messages in the unacknowledged state.
If you do not wish to track the queue length yourself, then, as specified in the first comment on the question:
x-max-length :
How many (ready) messages a queue can contain before it starts to drop them from its head.
(Sets the "x-max-length" argument.)
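For reference, a small sketch of setting x-max-length when declaring a queue with the Java client (the queue name and the limit are arbitrary):
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-length", 1000); // keep at most 1000 ready messages
channel.queueDeclare("myQueue", true, false, false, args);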
I have declared a queue like below:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-length-bytes", 2 * 1024 * 1024); // Max length is 2G
channel.queueDeclare("queueName", true, false, false, args);
When the total size of the messages in the queue is larger than 2 GB, RabbitMQ automatically removes messages from the head of the queue.
What I expected instead is that it rejects the newly produced message and returns an exception to the producer.
How can I get that behaviour?
A possible workaround is to check the queue size before sending your message, using the HTTP API.
For example, suppose you have a queue called myqueuetest with max size = 20.
Before sending the message you can call the HTTP API like this:
http://localhost:15672/api/queues/
The result is JSON like this:
"message_bytes":10,
"message_bytes_ready":10,
"message_bytes_unacknowledged":0,
"message_bytes_ram":10,
"message_bytes_persistent":0,
..
"name":"myqueuetest",
"vhost":"test",
"durable":true,
"auto_delete":false,
"arguments":{
"x-max-length-bytes":20
},
You could then read the message_bytes field before sending your message and decide whether or not to send it.
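A rough Java sketch of that check (assuming the default guest/guest credentials, the vhost test and the queue myqueuetest from the example above; a real application would parse the JSON with a proper library rather than a regex):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QueueSizeCheck {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/queues/test/myqueuetest"))
                .header("Authorization", "Basic " + auth)
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // crude extraction of the "message_bytes" field from the JSON response
        Matcher m = Pattern.compile("\"message_bytes\":(\\d+)").matcher(response.body());
        if (m.find() && Long.parseLong(m.group(1)) < 20) {
            System.out.println("queue below x-max-length-bytes, safe to publish");
        } else {
            System.out.println("queue full (or field not found), refuse the message");
        }
    }
}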
Hope it helps
EDIT
This workaround could kill your application performance
This workaround is not safe if you have multiple threads or more than one publisher
This workaround is not exactly a best practice
Just try to see if it is ok for your application.
As explained on the official docs:
Messages will be dropped or dead-lettered from the front of the queue to make room for new messages once the limit is reached.
https://www.rabbitmq.com/maxlength.html
If you think RabbitMQ should drop messages from the end of the queue, feel free to open an issue here so we can discuss it: https://github.com/rabbitmq/rabbitmq-server/issues
Is there an easy way to implement something like "locking" to prevent race conditions in a RabbitMQ queue when using acks?
I have the following problem: I have a couple of clients consuming a queue using acks. Whenever a client gets a message, it acknowledges it and processes it. However, if the processing fails for some reason, I'd like the message to be returned to the queue.
Simply process the message and then acknowledge it.
If processing fails, requeue it with basic.reject or basic.nack (with requeue set to true).
QueueingConsumer consumer = new QueueingConsumer(channel);
boolean autoAck = false;
channel.basicConsume("hello", autoAck, consumer);
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
try {
    // do your processing
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false); // second argument is "multiple", not "requeue"
} catch (Exception e) {
    // processing failed: nack with requeue = true puts the message back on the queue
    boolean requeue = true;
    channel.basicNack(delivery.getEnvelope().getDeliveryTag(), false, requeue);
}
I want to know the behaviour of RabbitMQ with multiple publishers and consumers.
Does the RabbitMQ server give a message to only one of the consumers at a time, while the other consumers are idle?
OR
Do consumers pick any unattended message from the queue, so that more than one consumer is consuming messages from the queue at the same time?
Basically, I am designing a database queue and do not want more than one insert at a time.
A message from the queue will be delivered to one consumer only, i.e. once the message makes its way into the queue, it won't be copied (broadcast) to multiple consumers.
If you want to broadcast, you have to use multiple queues.
See this tutorial for more details:
http://www.rabbitmq.com/tutorial-two-python.html
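If the goal is to make sure only one database insert happens at a time, the simplest approach is a single consumer; with a prefetch of 1 the broker delivers the next message only after the previous one has been acknowledged. A minimal sketch with the Java client (the queue name is hypothetical):
channel.basicQos(1); // at most one unacknowledged message at a time
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("db_insert_queue", false, consumer);
while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    // insert delivery.getBody() into the database here
    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
}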
Yes, RabbitMQ supports multiple publishers and consumers.
1. Multiple Publishers
To publish a message to RabbitMQ, you create a connection factory, open a connection to the RabbitMQ server and then create a channel:
ConnectionFactory factory = new ConnectionFactory();
factory.setUsername("guest");
factory.setPassword("guest");
factory.setVirtualHost("/");
factory.setPort(5672);
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
The routing key determines which queue a message is routed to:
channel.basicPublish(EXCHANGE_NAME, "queue1", MessageProperties.PERSISTENT_TEXT_PLAIN, "msg1".getBytes());
channel.basicPublish(EXCHANGE_NAME, "queue2", MessageProperties.PERSISTENT_TEXT_PLAIN, "msg2".getBytes());
These two messages are published to separate queues according to their routing keys, queue1 and queue2.
2. Multiple Consumers
For multiple consumers, declare a queue and bind it to a particular routing key; messages published with that routing key will be delivered to the corresponding queue.
channel.exchangeDeclare(EXCHANGE_NAME, "direct", durable);
channel.queueDeclare("q1", durable, false, false, null);
channel.queueBind("q1", EXCHANGE_NAME, "queue1"); // routing key = "queue1"
QueueingConsumer q1Consumer = new QueueingConsumer(channel);
channel.basicConsume("q1", false, q1Consumer);
This is how you consume messages from the first queue. The same goes for the second queue, but with the routing key "queue2":
channel.exchangeDeclare(EXCHANGE_NAME, "direct", durable);
channel.queueDeclare("q2", durable, false, false, null);
channel.queueBind("q2", EXCHANGE_NAME, "queue2"); // routing key = "queue2"
QueueingConsumer q2Consumer = new QueueingConsumer(channel);
channel.basicConsume("q2", false, q2Consumer);