I'm building a Flask app with Celery, using RabbitMQ as the broker and result backend.
My Celery configuration is
CELERY_BROKER_URL='amqp://localhost:5672',
CELERY_RESULT_BACKEND='amqp://',
CELERY_QUEUE_HA_POLICY='all',
CELERY_TASK_RESULT_EXPIRES=None
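For reference, this is roughly how such a configuration gets wired from Flask into Celery (a minimal sketch following the common Flask/Celery pattern; module and app names are placeholders):

from flask import Flask
from celery import Celery

app = Flask(__name__)
app.config.update(
    CELERY_BROKER_URL='amqp://localhost:5672',
    CELERY_RESULT_BACKEND='amqp://',
    CELERY_QUEUE_HA_POLICY='all',
    CELERY_TASK_RESULT_EXPIRES=None,
)

# Standard wiring: the Celery app picks up its settings from the Flask config.
celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)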
Then, declaring a queue produced a whole bunch of errors:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue=new_task_id)  # new_task_id holds the task's id
The error:
PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg
'durable' for queue '1419349900' in vhost '/':
received 'true' but current is 'false'
OK, I changed it to channel.queue_declare(queue=new_task_id, durable=True)
Again, an error:
PreconditionFailed: Queue.declare: (406) PRECONDITION_FAILED - inequivalent arg
'auto_delete' for queue '1419350288' in vhost '/':
received 'true' but current is 'false'
OK, I changed it to channel.queue_declare(queue=new_task_id, durable=True, auto_delete=True)
This time the error disappeared.
But how could I have known this before running into these errors? I searched Celery's doc on this topic (detailed documentation), but didn't get what I need: it just lists all the conf items without telling me how to set them. Or is it RabbitMQ's documentation I should be referring to?
Thank you!
Edit
You said, "So, all Queues declared in your configuration file, or in any registered tasks."
Could you explain this a bit more? And what's the difference between declaring and creating a queue?
You also said result queues will be created with the 'durable' and 'auto-delete' flags; where can I find this information? And how does Celery know a queue is a result queue?
Celery's default behaviour is to create all missing queues (see the CELERY_CREATE_MISSING_QUEUES documentation). Task queues are created by default with the 'durable' flag. Result queues are created with the 'durable' and 'auto-delete' flags, plus an 'x-expires' argument if your CELERY_TASK_RESULT_EXPIRES parameter is not None (by default, it is set to 1 day).
So, all queues are declared in your configuration file or in any registered tasks. Moreover, since you use the amqp result backend, if you do not have the CELERY_IGNORE_RESULT parameter set, the worker creates the result queue at initialisation of the task, named after the task_id.
So, if you try to redeclare this queue with a conflicting configuration, RabbitMQ will refuse it. Hence, you don't have to create it yourself.
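If all you want is to verify from your own pika connection that the queue exists, a passive declare does that without re-sending any arguments; a minimal sketch, reusing new_task_id from your snippet:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# passive=True only asserts that the queue exists; it never creates or
# reconfigures it, so the broker doesn't compare durable/auto_delete arguments.
# If the queue doesn't exist, the broker returns a 404 and closes the channel.
channel.queue_declare(queue=new_task_id, passive=True)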
Edit
Queue "declaration", as indicated in the pika documentation, allow to check the existence of a Queue in RabbitMQ, and, if not, create it. If CELERY_CREATE_MISSING_QUEUESis set to True in your Celery configuration, at initialization, any queue listed in CELERY_QUEUES or CELERY_DEFAULT_QUEUE parameter, or any custom queue declared in registered tasks options, such as #task(name="custom", queue="my_custom_queue"), or even in a custom CELERY_ROUTING definition, will be "declared" to RabbitMQ, thus will be created if they don't exist.
Queue parametrization documentation can be found here, in the Using Transient Queues paragraph, but the best way to see it is to use the RabbitMQ management plugin, allowing you to monitor in a Web UI the declared queues and their configuration (you can see a D flag for Durable, and a A-D flag for Auto-Delete).
Finally, celery doesn't "know" if a queue is a result queue, but, when created, a task is assigned to an unique identifier. This identifier will be used a the queue name for any result. It means that if the producer of the task waits for a result, it will listen to that queue, whenever she will be created. And the consumer, once the task aknowledged, and before the task is really executed, and if the task does not ignore results (througt setting CELERY_IGNORE_RESULT or task custom options), will check if a queue named as the task identifier exists, if not, it will create it with the default Result configuration (see the Result Backend configuration)
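To make that concrete, a minimal sketch (the proj.celery module and my_task are placeholders): the producer only works with the AsyncResult and never declares the result queue itself.

from proj.celery import celery  # hypothetical import of the Celery app

@celery.task
def my_task(x):
    return x * 2

result = my_task.delay(21)
print(result.id)               # with the amqp backend, the result queue is named after this id
print(result.get(timeout=10))  # 42, consumed from that result queue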
Related
I have tried using both the max-length and x-max-length arguments to limit queue lengths, to no avail. I can't tell if I'm using the arguments incorrectly, whether it's due to a limitation of the RabbitMQ Delayed Message Plugin, or if there's an actual bug in RabbitMQ.
There's an exchange for use by the RabbitMQ Delayed Message Plugin which has multiple queues attached to it (these queues are only used through this exchange). A message is sent to one of these queues.
Whenever I redeploy the application server, there are two instances running for a brief period of time (rolling updates). Since both applications are publishing messages to the queues, each queue now has two messages in it. Every time there's a redeploy of the application server, yet another duplicate message is enqueued even though the max-length and/or x-max-length arguments are set to 1. I've even tried setting them to 0 but it didn't make any difference.
Here's how I'm declaring the queue:
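(The original declaration isn't reproduced here; the following pika sketch shows the shape of it, with the length limit as a queue argument. Names and values are illustrative.)

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(
    queue='my-delayed-queue',        # illustrative name
    durable=True,
    arguments={'x-max-length': 1},   # cap the queue at one message
)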
Here's the policy I've applied to the queues:
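(Likewise a reconstruction: the same limit expressed as a policy, applied through the management HTTP API on its default port 15672; the policy name and pattern are made up.)

import base64
import json
import urllib.request

# PUT /api/policies/{vhost}/{name}; %2F is the URL-encoded default vhost "/".
req = urllib.request.Request(
    'http://localhost:15672/api/policies/%2F/limit-length',
    data=json.dumps({
        'pattern': '^my-delayed-',          # queues the policy should match
        'definition': {'max-length': 1},
        'apply-to': 'queues',
    }).encode(),
    method='PUT',
)
req.add_header('Content-Type', 'application/json')
req.add_header('Authorization', 'Basic ' + base64.b64encode(b'guest:guest').decode())
urllib.request.urlopen(req)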
Try the Message Deduplication Plugin. It seems like it could address your rolling-update use case.
What is the difference between delivery-limit and x-delivery-limit?
When I set x-delivery-limit as a RabbitMQ queue argument, I can see that it limits my message requeue attempts as I expected, but in the official RabbitMQ documentation I see delivery-limit used instead.
Both are valid settings.
The difference is that delivery-limit is a policy vs x-delivery-limit is a queue argument.
The same difference applies to other RabbitMQ settings, for example
dead-letter-exchange is a policy vs x-dead-letter-exchange is a queue argument
max-length is a policy vs x-max-length is a queue argument
A queue argument is prefixed by x- and is also referred to as an x-argument. The x stands for "extra" or "extended" because these arguments extend the mandatory queue settings. Mandatory queue settings are for example the durable and exclusive properties. x-arguments are optional queue settings. x-arguments are set by clients when they declare a queue.
That is, to change an x-argument, you would need to re-deploy the client, and re-declare the queue. For an existing queue, changing an x-argument is not allowed and will result in an inequivalent arg error closing the channel.
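For example, setting the limit as an x-argument at declaration time (a pika sketch; the queue name is illustrative):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# Quorum queues must be durable; the delivery limit is fixed at declaration
# time, and re-declaring with a different value hits the inequivalent-arg error.
channel.queue_declare(
    queue='orders',  # illustrative name
    durable=True,
    arguments={
        'x-queue-type': 'quorum',   # delivery limits require a quorum queue
        'x-delivery-limit': 5,
    },
)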
This is where policies come in handy. They have the following benefits:
Policies are applied dynamically. Once a queue is declared, policies can change queue settings at run time. Note that not all queue settings can be changed by a policy. For example changing the x-queue-type (for example from classic queue to quorum queue) is not allowed since a queue process and how it stores messages cannot just be changed dynamically once it has been created. However, most queue settings (including delivery-limit) can be changed dynamically via a policy.
Policies can be applied to groups of queues (and groups of exchanges). A queue argument can only be applied to a single queue.
In general, it's good practice to use a policy instead of queue argument where possible because policies are more flexible.
More in the official docs: https://www.rabbitmq.com/parameters.html#why-policies-exist
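As a sketch of the dynamic side (management HTTP API; policy name and pattern are made up), re-issuing the PUT with a new value updates matching queues at run time, with no redeclaration needed:

import base64
import json
import urllib.request

req = urllib.request.Request(
    'http://localhost:15672/api/policies/%2F/delivery-limit',  # %2F = default vhost "/"
    data=json.dumps({
        'pattern': '^orders$',                # which queues the policy matches
        'definition': {'delivery-limit': 5},  # note: no x- prefix in a policy
        'apply-to': 'queues',
    }).encode(),
    method='PUT',
)
req.add_header('Content-Type', 'application/json')
req.add_header('Authorization', 'Basic ' + base64.b64encode(b'guest:guest').decode())
urllib.request.urlopen(req)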
I believe delivery-limit is just the policy name for the x-delivery-limit header value. You can find it in the RabbitMQ UI among the queue arguments.
There is a blog post from RabbitMQ with screenshots (Fig 9. Quorum queue arguments) where they use the x-delivery-limit header, which works only with quorum queues (see the feature matrix).
UPD: according to this screenshot, x-delivery-limit is part of the queue's features, whereas delivery-limit is part of the policy definition applied to that queue. Check this article for more details.
I have a producer in, say, Application A with the configuration below.
Producer Properties:
spring.cloud.stream.bindings.packageVersionUpdatesPublishChannel.destination=fabric-exchange
spring.cloud.stream.bindings.packageVersionUpdatesPublishChannel.producer.requiredGroups=version-updates
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.routingKeyExpression='package-version'
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.bindingRoutingKey=package-version
And I have a consumer for the same queue in another application, say B.
#Consumer Properties:
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.destination=fabric-exchange
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.group=package-version-updates
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.consumer.max-attempts=1
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.bindingRoutingKey=package-version
#DLQ
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.dlqDeadLetterExchange=
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.dlq-ttl=30000
#Error Exchange Creation and Bind the Same to Error Queue
spring.cloud.stream.bindings.packageVersionUpdatesErrorPublishChannel.destination=fabric-error-exchange
spring.cloud.stream.bindings.packageVersionUpdatesErrorPublishChannel.producer.requiredGroups=package-version-updates-error
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.routingKeyExpression='packageversionupdateserror'
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.bindingRoutingKey=packageversionupdateserror
Now say, for example, that Application A boots first; then the queue version-updates would be created without any dead letter queue associated with it.
And now when Application B starts, this is the exception I get and the channel gets shut down. I think this is because App B is trying to re-create the queue with a different configuration:
inequivalent arg 'x-dead-letter-exchange' for queue 'fabric-exchange.version-updates' in vhost '/': received the value 'DLX' of type 'longstr' but current is none
Can anyone please let me know how I can solve this? My requirement is to create a queue in App A, with App A simply producing messages onto that queue,
App B consuming them, and retries supported after X amount of time through a DLQ.
required-groups is simply a convenience to provision the consumer queue when the producer starts, to avoid losing messages if the producer starts first.
You must use identical exchange/queue/binding configuration on both sides.
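Concretely, since App A's requiredGroups provisions the queue first, the producer binding needs the same DLQ settings as the consumer so that both sides declare identical arguments. Something along these lines (the producer-side property names mirror the consumer ones above; verify them against your binder version):

spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.dlqDeadLetterExchange=
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.dlq-ttl=30000

Alternatively, delete the already-created queue so the consumer can re-declare it with the DLQ arguments.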
My problem:
I pull a message off a RabbitMQ queue. I try to process the message and realize that it can't be processed yet. So I would like to put it back on the queue and have it return only at a specific time + 5000 ms. Unfortunately, that is more challenging than I thought.
What I've tried:
RabbitMQ dead letter attributes -> My issue here is that, even though the manual says the default exchange is bound to every queue, it doesn't forward the message according to the routing criteria. I've tried adding expires = "5000" and x-dead-letter-routing-key = "queuename", as well as x-dead-letter-exchange = "", since the default exchange should work. The only part that works is the expires part: the message disappears and goes into the dark. This also occurs with the dead-letter exchange being amq.direct, including the binding on the targeted queue.
Open gaps for me:
Where I'm a bit left in the dark is whether the receivers have to be dead letter queues, and whether a dead letter queue is just a basic queue with extended functionality. It is also not clear whether those parameters (x-dead-letter-...) are only for DLX queues. I would like to make this delayed delivery persistent and do it purely via the message attributes, not via queue configuration (unless required).
I've searched the web and checked many different dead-letter write-ups. I'm trying to build a microservice-like architecture using RabbitMQ as the delivery mechanism (I use processes which take their work from the queue and forward it). I would assume other people already have something like this running, but I couldn't find any blog posts about it.
I had to come to the conclusion that it is not possible at the message level alone.
For each queue in use, I've now created a separate queue ("name.delayed") where I can add the message with the "expiration" argument set to 5000.
That queue itself has to be configured as a dead letter queue, routing expired messages back to the queue "name".
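Here is a minimal pika sketch of that setup (queue names and payload are placeholders):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# The real work queue ("name" is a placeholder).
channel.queue_declare(queue='name', durable=True)

# The holding queue: when a message expires here, it is dead-lettered to the
# default exchange ('') with routing key 'name', i.e. back onto the work queue.
channel.queue_declare(
    queue='name.delayed',
    durable=True,
    arguments={
        'x-dead-letter-exchange': '',
        'x-dead-letter-routing-key': 'name',
    },
)

# Re-publish the not-yet-processable message with a per-message TTL of 5000 ms.
channel.basic_publish(
    exchange='',
    routing_key='name.delayed',
    body=b'payload',
    properties=pika.BasicProperties(expiration='5000'),  # TTL is a string, in ms
)

Note that per-message TTLs are only enforced at the head of the queue, so mixing different delays in one holding queue can let a long-delay message block shorter ones behind it; one holding queue per delay value avoids that.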
I am not sure whether I need to call createQueueMessageStore for each queue that will be persisted, and if not, what is the purpose of this call? Is setting the adapter on the broker enough, without individual queue message stores?
createQueueMessageStore() is used by the ActiveMQ Broker - you don't need to call this.
ActiveMQ automatically creates queues and topics on demand, so if you send a message to a queue foo.bar, the ActiveMQ broker checks whether it exists and, if it doesn't, creates it for you (using the createQueueMessageStore() call).