RabbitMQ - Difference between 'delivery-limit' and 'x-delivery-limit' queue arguments

What is the difference between delivery-limit and x-delivery-limit?
When I set x-delivery-limit as a RabbitMQ queue argument, I can see it limiting my message requeue attempts as expected, but the official RabbitMQ documentation uses delivery-limit.

Both are valid settings.
The difference is that delivery-limit is a policy key, while x-delivery-limit is a queue argument.
The same difference applies to other RabbitMQ settings, for example:
dead-letter-exchange is a policy key vs x-dead-letter-exchange is a queue argument
max-length is a policy key vs x-max-length is a queue argument
A queue argument is prefixed with x- and is also referred to as an x-argument. The x stands for "extra" or "extended" because these arguments extend the mandatory queue settings. Mandatory queue settings are, for example, the durable and exclusive properties. x-arguments are optional queue settings, set by clients when they declare a queue.
That is, to change an x-argument, you would need to re-deploy the client and re-declare the queue. For an existing queue, changing an x-argument is not allowed and results in an inequivalent arg error that closes the channel.
This is where policies come in handy. They have the following benefits:
Policies are applied dynamically. Once a queue is declared, policies can change queue settings at run time. Note that not all queue settings can be changed by a policy. For example changing the x-queue-type (for example from classic queue to quorum queue) is not allowed since a queue process and how it stores messages cannot just be changed dynamically once it has been created. However, most queue settings (including delivery-limit) can be changed dynamically via a policy.
Policies can be applied to groups of queues (and groups of exchanges). A queue argument can only be applied to a single queue.
In general, it's good practice to use a policy instead of a queue argument where possible, because policies are more flexible.
More in the official docs: https://www.rabbitmq.com/parameters.html#why-policies-exist
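As a concrete sketch of the two approaches (assuming the pika client and an illustrative queue named "orders"; the rabbitmqctl command in the comment is the policy-side equivalent):

```python
# Hypothetical sketch: the same delivery limit expressed two ways.

# 1) As a queue argument (x-argument), fixed at declaration time.
#    With pika this dict would be passed to channel.queue_declare(), e.g.:
#    channel.queue_declare(queue="orders", durable=True, arguments=quorum_args)
quorum_args = {
    "x-queue-type": "quorum",   # delivery limits apply to quorum queues
    "x-delivery-limit": 5,      # stop redelivering after 5 attempts
}

# 2) As a policy, applied dynamically to every queue matching a pattern:
#    rabbitmqctl set_policy limit "^orders$" '{"delivery-limit": 5}' --apply-to queues
policy_definition = {"delivery-limit": 5}   # note: no "x-" prefix in policies
```

The policy key is simply the x-argument name without the "x-" prefix, which is the whole naming difference the question asks about.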

I believe delivery-limit is just the policy name for the x-delivery-limit header value; you can find it in the RabbitMQ management UI under the queue's arguments.
There is a blog post from RabbitMQ with screenshots (Fig 9. Quorum queue arguments) where they use the x-delivery-limit header, which works only with quorum queues (see the feature matrix).
Update: according to that screenshot, x-delivery-limit is part of the queue's features, whereas delivery-limit is part of the policy definition applied to the queue. Check this article for more details.

Related

RabbitMQ queue length limit not honored

I have tried using both the max-length and x-max-length arguments to limit queue lengths, to no avail. I can't tell if I'm using the arguments incorrectly, whether it's due to a limitation of the RabbitMQ Delayed Message Plugin, or if there's an actual bug in RabbitMQ.
There's an exchange for use by the RabbitMQ Delayed Message Plugin which has multiple queues attached to it (these queues are only used through this exchange). A message is sent to one of these queues.
Whenever I redeploy the application server, there are two instances running for a brief period of time (rolling updates). Since both applications are publishing messages to the queues, each queue now has two messages in it. Every time there's a redeploy of the application server, yet another duplicate message is enqueued even though the max-length and/or x-max-length arguments are set to 1. I've even tried setting them to 0 but it didn't make any difference.
Here's how I'm declaring the queue:
Here's the policy I've applied to the queues:
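The question's own snippets aren't shown above; a minimal sketch of such a declaration and a matching policy, assuming pika and illustrative names:

```python
# Hypothetical reconstruction (the asker's actual code is not shown).

# Queue declaration with a per-queue length limit as an x-argument:
declare_args = {"x-max-length": 1}          # keep at most one message
# channel.queue_declare(queue="cache.update", durable=True,
#                       arguments=declare_args)

# The same limit expressed as a policy definition (no "x-" prefix),
# e.g. applied with:
#   rabbitmqctl set_policy max-len "^cache\." '{"max-length": 1}' --apply-to queues
policy_definition = {"max-length": 1}
```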
Try the Message Deduplication Plugin. Seems like it could address your rolling-update use-case.

NServiceBus Delay Retries configure on only one queue

I have an instance of NServiceBus that is used for multiple queues. Now I have one queue that requires a special kind of delayed retries and a custom policy. In this queue I make a 3rd-party call, and I want it to be retried 10 times with a specific pattern of time intervals.
I have read the documentation about delayed retries, my understanding is that it will affect all the queues, not only the one I want.
How can this be implemented?
I'm using NServiceBus with RabbitMq for testing envs and Azure Service Bus for prod envs.
Recoverability policy is applied at the endpoint level. When you need a certain message type to be processed with a different recoverability policy, you can override the default recoverability policy to customize it to your needs. When you need a completely different number of delayed retries that does not match the rest of the messages, you should split the logical endpoint into two and have the message type that requires different recoverability handled by the new endpoint.

Routing Dead-Lettered Messages

Is there a way in EasyNetQ to set the routing key [x-dead-letter-routing-key] argument when creating a Queue? (as far as I can see you can only set a DeadLetterExchange.)
IQueue updateCacheQueue = advancedBus.QueueDeclare(name: "UpdateCache", deadLetterExchange: "UpdatesDeadLetter");
RabbitMQ assumes that exchanges are superior to queues. You can create an exchange that delivers to exactly one queue, and thus your DLQ addressing issue is solved. Should you decide you need to take additional actions in the future (e.g. store the message for potential reprocessing AND ALSO alert operations via email), you can do that in the exchange without mucking up the queue processor.
I added another parameter to the QueueDeclare method and created a pull request; you can set it as of version 0.40.6.355.
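Whatever the client library exposes, the routing key can always be supplied as a raw queue argument at declaration time; a sketch with pika (the routing-key value is illustrative):

```python
# Hypothetical sketch: dead-letter exchange plus dead-letter routing key
# set as raw x-arguments, for clients that don't expose named parameters.
dlx_args = {
    "x-dead-letter-exchange": "UpdatesDeadLetter",
    "x-dead-letter-routing-key": "update-cache.dead",  # illustrative key
}
# channel.queue_declare(queue="UpdateCache", durable=True, arguments=dlx_args)
```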

Correct config using rabbitmq as celery backend

I'm building a Flask app with Celery, using RabbitMQ as Celery's backend.
My Celery config is:
CELERY_BROKER_URL='amqp://localhost:5672',
CELERY_RESULT_BACKEND='amqp://',
CELERY_QUEUE_HA_POLICY='all',
CELERY_TASK_RESULT_EXPIRES=None
Then, declaring a queue produced a whole series of errors:
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue=new_task_id)
error
PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg
'durable' for queue '1419349900' in vhost '/':
received 'true' but current is 'false'
OK, I changed it to channel.queue_declare(queue=new_task_id, durable=True)
again, error
PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg
'auto_delete' for queue '1419350288' in vhost '/':
received 'true' but current is 'false'
OK, I changed it to channel.queue_declare(queue=new_task_id, durable=True, auto_delete=True)
This time error disappeared.
But how could I have known this before hitting these errors? I searched Celery's documentation on this topic but didn't find what I need; it just lists all the config items without telling me how to set them. Or is it RabbitMQ's documentation I should refer to?
Thank you!
edit
So, all Queues declared in your configuration file, or in any registered tasks.
Could you explain this a bit more? And what's the difference between declaring and creating a queue?
You said result queues will be created with the 'durable' and 'auto-delete' flags; where can I find this information? And how does Celery know a queue is a result queue?
Celery's default behaviour is to create all missing queues (see the CELERY_CREATE_MISSING_QUEUES documentation). Task queues will be created by default with the 'durable' flag. Result queues will be created with the 'durable' and 'auto-delete' flags, and with 'x-expires' if your CELERY_TASK_RESULT_EXPIRES parameter is not None (by default, it is set to 1 day).
So, all queues declared in your configuration file or in any registered tasks will be created. Moreover, as you use the amqp result backend, if you do not have the CELERY_IGNORE_RESULT parameter set, the result queue will be created by the worker at initialisation of the task and named after the task_id.
So if you try to redeclare this queue with a conflicting configuration, RabbitMQ will refuse it; thus, you don't have to create it yourself.
Edit
Queue "declaration", as indicated in the pika documentation, allows you to check the existence of a queue in RabbitMQ and, if it does not exist, create it. If CELERY_CREATE_MISSING_QUEUES is set to True in your Celery configuration, then at initialization any queue listed in the CELERY_QUEUES or CELERY_DEFAULT_QUEUE parameters, any custom queue declared in registered task options such as @task(name="custom", queue="my_custom_queue"), or even one in a custom CELERY_ROUTES definition, will be "declared" to RabbitMQ and thus created if it doesn't exist.
Queue parametrization documentation can be found here, in the Using Transient Queues paragraph, but the best way to see it is to use the RabbitMQ management plugin, which lets you monitor the declared queues and their configuration in a web UI (you can see a D flag for Durable and an A-D flag for Auto-Delete).
Finally, Celery doesn't "know" whether a queue is a result queue; rather, when created, a task is assigned a unique identifier, and this identifier is used as the queue name for any result. This means that if the producer of the task waits for a result, it will listen to that queue whenever it is created. The consumer, once the task is acknowledged, before the task is actually executed, and if the task does not ignore results (through the CELERY_IGNORE_RESULT setting or custom task options), will check whether a queue named after the task identifier exists; if not, it will create it with the default result configuration (see the result backend configuration).
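As for knowing a queue's settings before redeclaring it, one option is pika's passive declare, which checks a queue without asserting any flags; a minimal sketch (the channel object and broker connection are assumed):

```python
# Sketch: a passive declare asks the broker whether a queue exists,
# without asserting durable/auto_delete flags, so it cannot trigger
# the "inequivalent arg" PRECONDITION_FAILED error shown above.
def queue_exists(channel, name):
    """True if the queue exists; passive=True never creates or modifies it."""
    try:
        channel.queue_declare(queue=name, passive=True)
        return True
    except Exception:  # pika raises ChannelClosedByBroker (404) if absent
        return False
```

Note that a failed passive declare still closes the channel (the broker replies with a channel-level 404), so a fresh channel is needed afterwards.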

How to set a redelivery policy in RabbitMQ/AMQP

I'm currently using ActiveMQ for my queueing system, and I'm wanting to make the transition to RabbitMQ. One feature I've been using that belongs to ActiveMQ is a redelivery policy, as sometimes our consumer rejects a message because it cannot handle it at this time, but may want to try again later, so it requeues it.
Right now in AMQP, when I reject a message, it's immediately pulled off the queue and tried again.
Is there a way, in RabbitMQ, to specify a redelivery policy for a queue, consumer, or message?
I also had problems with that behaviour. According to the documentation (as far as I remember; maybe this changed in newer versions), after a requeue it is not specified where a message will be placed (it was described as undetermined). In my test cases (with version 2.8.2) some messages were put at the end of the queue, while one message (precisely, the first from the client's prefetch) landed at the beginning (and was consumed immediately). In our application this caused a livelock.
You could work around this by publishing a copy of the message to the queue and acking the already-delivered one in a single transaction (but I recommend carefully reading the section about transactions in the docs), or use dead-lettering to deal with temporarily unprocessable messages.
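The dead-lettering approach can be sketched as a pair of queues: rejected messages are dead-lettered into a "retry" queue, sit there for a TTL, and are then dead-lettered back to the work queue (assuming pika; all names and the 30-second delay are illustrative):

```python
# Hypothetical sketch of delayed redelivery via dead-lettering:
# work queue --(reject/nack)--> retry queue --(after TTL)--> work queue.

work_args = {
    # Rejected messages are dead-lettered into the retry queue.
    "x-dead-letter-exchange": "",            # default exchange
    "x-dead-letter-routing-key": "work.retry",
}
retry_args = {
    "x-message-ttl": 30000,                  # hold messages for 30 s
    # When the TTL expires, dead-letter back into the work queue.
    "x-dead-letter-exchange": "",
    "x-dead-letter-routing-key": "work",
}
# channel.queue_declare(queue="work", durable=True, arguments=work_args)
# channel.queue_declare(queue="work.retry", durable=True, arguments=retry_args)
# Consumers then reject without requeueing, letting the broker delay the retry:
# channel.basic_nack(delivery_tag=tag, requeue=False)
```

The delay is fixed per retry queue; a backoff pattern needs one retry queue per delay step.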