RabbitMQ default x-expires argument for all x-message-ttl queues

I produce messages with different x-message-ttl arguments. Each of these messages creates a new queue, such as
enqueue.notifications.notifications.5000.x.delay
and I want that queue deleted once I'm finished working with it.
Can I set something in the RabbitMQ config so that when I publish a message with an x-message-ttl argument, the x-expires argument is set automatically? Or is there some other way to delete all delay queues once they are finished?

Either way, you can't be sure that the delay queue you need exists.
So you have to ensure that it exists by declaring it, and during queue declaration you can set the x-expires property according to your delay.
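As a sketch, assuming a pika-style channel object (the queue-name pattern is the one from the question; the extra grace period is an arbitrary choice of mine):

```python
def declare_delay_queue(channel, delay_ms, grace_ms=5000):
    """Declare a per-delay queue that RabbitMQ deletes on its own.

    x-message-ttl makes messages expire after delay_ms, and x-expires
    makes the queue itself be deleted once it has gone unused for a bit
    longer than the delay (grace_ms is an arbitrary safety margin).
    """
    # Queue-name pattern copied from the question
    name = "enqueue.notifications.notifications.%d.x.delay" % delay_ms
    channel.queue_declare(
        queue=name,
        durable=True,
        arguments={
            "x-message-ttl": delay_ms,
            "x-expires": delay_ms + grace_ms,
        },
    )
    return name
```

Since queue_declare is idempotent as long as the arguments match, it is safe to call this before every publish.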

Related

Message is not routing to dead letter queue when consumer is down

I have a service A which publishes messages to a queue (Q-A).
I have a dead letter queue (DLQ) bound to a dead letter exchange (DLX) with a dead letter routing key (DLRK).
Q-A is bound to an exchange (E-A) with a routing key (RA).
I've also set x-dead-letter-exchange (DLX) and x-dead-letter-routing-key (DLRK) on Q-A, with a per-message TTL of 60 seconds on this queue.
The DLQ is likewise set with x-dead-letter-exchange (E-A) and x-dead-letter-routing-key (DLRK), with a per-message TTL of 60 seconds on this queue.
With the above configuration I'm trying to route messages from Q-A to the DLQ after the TTL expires, and vice versa.
On the consumer side, which is another service, I throw AMQPRejectAndDontRequeueException with defaultRequeueRejected set to false.
The above configuration works fine when the consumer is up and throws the exception.
But when I limit my queue size to 1, publish 3 messages to Q-A, and shut down the consumer, I see all three messages placed in both Q-A and the DLQ, and eventually all of them are dropped.
If I don't set the queue limit to 1, or if I start the consumer, everything works fine.
I've also set x-overflow to reject-publish; when there is an overflow I get a nack at the publisher, and then a scheduler publishes the message to Q-A again.
Note: Both exchanges are direct, and I'm using routing keys to bind them to their respective queues.
Kindly let me know if I'm missing something here, and whether I need to share my config.
After digging through, I think I finally found the answer, from the link Dead-lettering dead-lettered messages in RabbitMQ (answer by pinepain):
It is possible to form a cycle of dead-letter queues. For instance, this can happen when a queue dead-letters messages to the default exchange without specifying a dead-letter routing key. Messages in such cycles (i.e. messages that reach the same queue twice) will be dropped if the entire cycle is due to message expiry.
So I think that to solve the problem I need to create another consumer that consumes from the dead letter queue and publishes messages back to the original queue, rather than relying on TTL directly from the dead letter queue. Please correct me if my understanding is right.
I may have arrived at this too late, but I think I can help you with this.
Story:
You want a retry queue to send dead messages to and retrieve and re-queue them in the main queue after a certain amount of time.
Solution:
Declare your main queue and bind it to an exchange. We call them main_queue and main_exchange and add this feature to the main_queue: x-dead-letter-exchange: retry_exchange
Create your retry queue and bind it to another exchange. We call these retry_queue and retry_exchange and add these features to the retry queue: x-dead-letter-exchange: main_exchange and x-message-ttl: 10000
With this combination, dead messages from main_queue will be sent to retry_queue, and after 10 seconds they will be sent back to main_queue, where they will remain until a consumer either processes them or dead-letters them again.
Note: This method works only if you publish your messages to the exchange and not directly in the queue.
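The steps above can be sketched as follows, assuming a pika-style channel; the exchange and queue names are the ones used in this answer, while the routing key is an arbitrary placeholder of mine:

```python
def declare_retry_topology(channel, routing_key="work", retry_ttl_ms=10000):
    """main_queue dead-letters to retry_exchange; retry_queue dead-letters
    back to main_exchange after retry_ttl_ms. No consumer ever attaches
    to retry_queue, so messages simply sit there until the TTL expires."""
    channel.exchange_declare(exchange="main_exchange", exchange_type="direct")
    channel.exchange_declare(exchange="retry_exchange", exchange_type="direct")

    channel.queue_declare(
        queue="main_queue",
        durable=True,
        arguments={"x-dead-letter-exchange": "retry_exchange"},
    )
    channel.queue_declare(
        queue="retry_queue",
        durable=True,
        arguments={
            "x-dead-letter-exchange": "main_exchange",
            "x-message-ttl": retry_ttl_ms,
        },
    )

    # Same routing key on both sides, so a dead-lettered message keeps
    # finding its way between the two queues.
    channel.queue_bind(queue="main_queue", exchange="main_exchange",
                       routing_key=routing_key)
    channel.queue_bind(queue="retry_queue", exchange="retry_exchange",
                       routing_key=routing_key)
```

Publish to main_exchange (not straight to main_queue), per the note above, so the dead-letter routing keeps working.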

Can re-declaring the queue for every message decrease RabbitMQ performance?

Can declaring queues for every message decrease RabbitMQ performance?
We have a scenario in which we don't know whether the RabbitMQ queue exists or not. So is declaring a queue for every message a good approach? Or should we check for each message whether the queue exists? Is there any better alternative?
You don't need to declare the queue on every message. When you start your consumer, you create a connection and a channel inside it. Once you have those set up, make your queue declaration request.
I'd suggest one queue declaration per consumer's channel.
Look at the official tutorials for your language. In each of them, you create the connection and channel, declare the queue once, and then start doing your work.
Note: you don't need to declare a queue if you are running a producer. A producer sends messages to an exchange and should not be aware of any queues.
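A typical consumer skeleton, then, declares the queue exactly once per channel before consuming. A sketch with pika-style calls (the queue name and callback are placeholders):

```python
def run_consumer(channel, queue_name, on_message):
    """Declare the queue once on this channel, then consume from it."""
    # One declare per channel is enough; it is a no-op if the queue
    # already exists with matching arguments.
    channel.queue_declare(queue=queue_name, durable=True)
    channel.basic_consume(queue=queue_name, on_message_callback=on_message)
    channel.start_consuming()  # blocks, dispatching messages to on_message
```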

Dead-lettered messages not getting requeued to the original queue after TTL

I have planned to delay the processing of messages in a queue by following these two links: link1 link2. As suggested there, I have declared the original queue with the x-dead-letter-exchange and x-dead-letter-routing-key arguments, which publishes messages to the so-called dead-letter queue when a message either fails to be processed by the consumer, its TTL expires, or the queue length is exceeded. On the dead-letter queue, similar arguments have been set, along with the TTL parameter, which is supposed to republish the messages to the original queue after the TTL is exceeded. But the problem is that it drops all the messages.
Moreover, there is a catch here: if I explicitly publish the failed messages from the original queue to the dead-letter queue, then after the TTL it does republish them to the original queue. Why is that, and how do I make it work so that the dead-letter queue republishes the messages to the original queue instead of dropping them? I am using RabbitMQ 3.0.0.
FYI, I have created both exchanges as direct type, along with the routing keys.
When a queue has a TTL set up, messages in that queue will be sent to the dead-letter exchange (DLX) associated with that queue after the TTL has expired. If the queue has no DLX assigned, the messages go into the bit bucket.
If you want to send messages back into the queue from which they came to be re-processed then you need to have the setup that I described in this post.
Dead-lettering dead-lettered messages in RabbitMQ
Hopefully that is helpful for you.
Suppose your original exchange is x.notification and is bound to the queue q.A with routing key "A", and your dead-letter exchange name is dlx.notification. On the queue q.A, set the TTL to the interval you want to wait and the dead-letter exchange to dlx.notification. Then create another queue dlq.A, bound to dlx.notification with routing key "A", to receive the expired messages. I think that's all you need to do to achieve your goal.
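A sketch of that setup, assuming a pika-style channel (the 60-second TTL is an example value matching the questions above):

```python
def declare_notification_topology(channel, ttl_ms=60000):
    """q.A dead-letters expired messages via dlx.notification into dlq.A."""
    channel.exchange_declare(exchange="x.notification", exchange_type="direct")
    channel.exchange_declare(exchange="dlx.notification", exchange_type="direct")

    channel.queue_declare(
        queue="q.A",
        durable=True,
        arguments={
            "x-message-ttl": ttl_ms,                      # wait this long...
            "x-dead-letter-exchange": "dlx.notification", # ...then dead-letter
        },
    )
    channel.queue_declare(queue="dlq.A", durable=True)

    # The routing key is preserved on dead-lettering (no
    # x-dead-letter-routing-key set), so "A" routes into dlq.A.
    channel.queue_bind(queue="q.A", exchange="x.notification", routing_key="A")
    channel.queue_bind(queue="dlq.A", exchange="dlx.notification", routing_key="A")
```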

Binding to Non-Existing (Future) Queue

Is it at all possible to declare a binding between an existing exchange and a non-existing queue, so that when the queue (eventually) gets created by some other means in the future messages will start to get forwarded to it?
Is it at all possible to declare a binding between an existing exchange and a non-existing queue,
This is not possible.
You can only bind an exchange to an existing queue, and you can only set up a consumer to get messages from an existing queue.
so that when the queue (eventually) gets created by some other means in the future messages will start to get forwarded to it?
Sort of... when you create a queue and binding, messages will start flowing to that queue, but only new messages. Old messages are lost and will not flow to that queue.
If you are dynamically creating queues and bindings for your consumers, then your consumer should be the one to declare the queues. The problem, as you've probably run in to, is that you will not have any messages in the queue until the queue is created and bound.
If you need messages to be there before the consumer connects, then some other code needs to set up the queue and binding before the consumer connects and starts consuming from the queue.
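That setup step can be as small as this sketch (pika-style channel; all the names are placeholders), run from deploy or init code before any consumer starts:

```python
def setup_before_consuming(channel, exchange, queue, routing_key):
    """Declare the queue and bind it to the exchange so that messages
    accumulate even though no consumer is attached yet."""
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(queue=queue, exchange=exchange, routing_key=routing_key)
```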

Correct config using rabbitmq as celery backend

I'm building a Flask app with Celery, using RabbitMQ as Celery's backend.
my conf for celery is
CELERY_BROKER_URL='amqp://localhost:5672',
CELERY_RESULT_BACKEND='amqp://',
CELERY_QUEUE_HA_POLICY='all',
CELERY_TASK_RESULT_EXPIRES=None
Then, declaring a queue produced a whole bunch of errors
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue=new_task_id)
error
PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg
'durable' for queue '1419349900' in vhost '/':
received 'true' but current is 'false'
OK, I changed it to channel.queue_declare(queue=new_task_id, durable=True)
again, error
PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg
'auto_delete' for queue '1419350288' in vhost '/':
received 'true' but current is 'false'
OK, I changed it to channel.queue_declare(queue=new_task_id, durable=True, auto_delete=True)
This time error disappeared.
But how could I have known this before hitting these errors? I searched Celery's docs on this topic, including the detailed configuration documentation, but didn't find what I need: it just lists all the config items without telling me how to set them. Or is it RabbitMQ's docs I should refer to?
Thank you!
edit
So, all Queues declared in your configuration file, or in any registered tasks.
Could you explain this a bit more? And what's the difference between declare and create?
You said result queues will be created with the 'durable' and 'auto-delete' flags; where can I find this information? And how does Celery know a queue is a result queue?
Celery's default behaviour is to create all missing queues (see the CELERY_CREATE_MISSING_QUEUES documentation). Task queues will be created by default with the 'durable' flag. Result queues will be created with the 'durable' and 'auto-delete' flags, plus 'x-expires' if your CELERY_TASK_RESULT_EXPIRES parameter is not None (by default, it is set to 1 day).
This covers all queues declared in your configuration file or in any registered tasks. Moreover, as you use the amqp result backend, if you do not have the CELERY_IGNORE_RESULT parameter set, the worker creates the result queue at initialisation of the task, named after the task_id.
So, if you try to redeclare this queue with a conflicting configuration, RabbitMQ will refuse it. Thus, you don't have to create it yourself.
Edit
Queue "declaration", as described in the pika documentation, allows you to check the existence of a queue in RabbitMQ and, if it does not exist, to create it. If CELERY_CREATE_MISSING_QUEUES is set to True in your Celery configuration, then at initialization any queue listed in the CELERY_QUEUES or CELERY_DEFAULT_QUEUE parameters, any custom queue declared in registered task options such as @task(name="custom", queue="my_custom_queue"), or even any queue in a custom CELERY_ROUTES definition, will be "declared" to RabbitMQ, and thus created if it doesn't exist.
Queue parametrization documentation can be found here, in the Using Transient Queues paragraph, but the best way to see it is to use the RabbitMQ management plugin, which lets you monitor the declared queues and their configuration in a web UI (you can see a D flag for Durable and an A-D flag for Auto-Delete).
Finally, Celery doesn't "know" whether a queue is a result queue. When created, a task is assigned a unique identifier, and this identifier is used as the queue name for any result. This means that if the producer of the task waits for a result, it will listen on that queue as soon as it is created. And the consumer, once the task is acknowledged, before the task is really executed, and provided the task does not ignore results (through the CELERY_IGNORE_RESULT setting or custom task options), will check whether a queue named after the task identifier exists; if not, it will create it with the default result configuration (see the result backend configuration).
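So, when redeclaring such a result queue from plain pika, the flags have to repeat what Celery used. A minimal sketch, assuming a pika-style channel and using the flags the question's errors revealed (the function name is mine):

```python
def declare_result_queue(channel, task_id):
    """Declare a queue compatible with Celery's amqp result-backend queues.

    The durable/auto_delete flags must match what Celery already used for
    this queue; otherwise RabbitMQ answers 406 PRECONDITION_FAILED with an
    'inequivalent arg' error, as seen in the question.
    """
    return channel.queue_declare(
        queue=str(task_id),   # result queues are named after the task id
        durable=True,
        auto_delete=True,
    )
```

Alternatively, pika's queue_declare accepts passive=True, which only checks that the queue exists (raising if it doesn't) without sending any arguments that could conflict.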