I have an instance of NServiceBus that is used for multiple queues. Now I have only one queue that requires a special type of delayed retries and a custom policy. In this queue I make a 3rd-party call, and I want it to be retried 10 times with a specific pattern of time intervals.
I have read the documentation about delayed retries; my understanding is that it will affect all the queues, not only the one I want.
How can this be implemented?
I'm using NServiceBus with RabbitMQ for test environments and Azure Service Bus for production environments.
Recoverability policy is applied at the endpoint level. When a certain message type needs to be processed with a different recoverability policy, you can override the default recoverability policy and customize it to your needs. When you need a completely different number of delayed retries that does not match the rest of the messages, you should split the logical endpoint in two and have the message type that requires different recoverability handled by the new endpoint.
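If splitting the endpoint is not an option, the override can look roughly like this. This is only a sketch against the NServiceBus recoverability API; the message type name `CallThirdPartyCommand` and the interval pattern are made-up examples, and delayed retries still require a transport that supports delayed delivery (both RabbitMQ and Azure Service Bus transports do):

```csharp
using System;
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("MyEndpoint");
var recoverability = endpointConfiguration.Recoverability();

// Hypothetical pattern of intervals for the 3rd-party call: 10 attempts total.
var delays = new[]
{
    TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(10), TimeSpan.FromSeconds(30),
    TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(2), TimeSpan.FromMinutes(5),
    TimeSpan.FromMinutes(10), TimeSpan.FromMinutes(30), TimeSpan.FromHours(1),
    TimeSpan.FromHours(2)
};

recoverability.CustomPolicy((config, context) =>
{
    // Apply the custom pattern only to the message type doing the 3rd-party call.
    var isThirdPartyCall =
        context.Message.Headers.TryGetValue(Headers.EnclosedMessageTypes, out var types)
        && types.Contains("CallThirdPartyCommand");

    if (isThirdPartyCall)
    {
        return context.DelayedDeliveriesPerformed < delays.Length
            ? RecoverabilityAction.DelayedRetry(delays[context.DelayedDeliveriesPerformed])
            : RecoverabilityAction.MoveToError(config.Failed.ErrorQueue);
    }

    // All other message types keep the default recoverability behavior.
    return DefaultRecoverabilityPolicy.Invoke(config, context);
});
```

Since the custom policy runs for every failed message on the endpoint, the type check at the top is what scopes the special behavior to a single message type.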
What is the difference between delivery-limit and x-delivery-limit?
When I set the x-delivery-limit as RabbitMQ queue argument I can see it is limiting my message requeue attempts, as I expected, but in the official RabbitMQ documentation I see the usage of delivery-limit.
Both are valid settings.
The difference is that delivery-limit is a policy key, whereas x-delivery-limit is a queue argument.
The same difference applies to other RabbitMQ settings, for example:
dead-letter-exchange is a policy key vs x-dead-letter-exchange is a queue argument
max-length is a policy key vs x-max-length is a queue argument
A queue argument is prefixed by x- and is also referred to as an x-argument. The x stands for "extra" or "extended" because these arguments extend the mandatory queue settings. Mandatory queue settings are for example the durable and exclusive properties. x-arguments are optional queue settings. x-arguments are set by clients when they declare a queue.
That is, to change an x-argument, you need to re-deploy the client and re-declare the queue. For an existing queue, changing an x-argument is not allowed and results in an inequivalent arg error that closes the channel.
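To illustrate, here is a sketch of a client declaring a queue with x-arguments using the RabbitMQ .NET client; the queue name and limit value are examples:

```csharp
using System.Collections.Generic;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.QueueDeclare(
    queue: "orders",
    durable: true,       // mandatory-style queue setting
    exclusive: false,    // mandatory-style queue setting
    autoDelete: false,
    arguments: new Dictionary<string, object>
    {
        ["x-queue-type"] = "quorum",  // cannot be changed after creation
        ["x-delivery-limit"] = 10     // optional x-argument
    });
```

Re-declaring the same queue later with a different arguments dictionary is exactly what triggers the inequivalent arg channel error mentioned above.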
This is where policies come in handy. They have the following benefits:
Policies are applied dynamically: once a queue is declared, policies can change queue settings at run time. Note that not all queue settings can be changed by a policy. For example, changing the x-queue-type (say, from classic queue to quorum queue) is not allowed, since a queue process and how it stores messages cannot simply be changed once the queue has been created. However, most queue settings (including delivery-limit) can be changed dynamically via a policy.
Policies can be applied to groups of queues (and groups of exchanges). A queue argument can only be applied to a single queue.
In general, it's good practice to use a policy instead of a queue argument where possible, because policies are more flexible.
More in the official docs: https://www.rabbitmq.com/parameters.html#why-policies-exist
I believe that delivery-limit is just the policy name for the header value x-delivery-limit. You can find it in the RabbitMQ management UI under the queue's arguments.
There is a blog post from RabbitMQ with screenshots (Fig 9. Quorum queue arguments) where they use the x-delivery-limit header, which works only with quorum queues (see the feature matrix).
Update: according to this screenshot, x-delivery-limit is part of the queue's features, whereas delivery-limit is part of the policy definition applied to that queue. Check this article for more details.
Right now we complete sagas that were successful from a business point of view, but we store failed sagas for 3 months: we set a timeout and then mark the saga as completed.
Is there a more generic way for setting saga's Time-To-Live which does not involve underlying messaging service?
For example, the AWS SQS max delay is 15 minutes, but for us it would be enough to run the garbage-collection job once a week. Does NSB have this option?
Is there a more generic way for setting saga's Time-To-Live which does not involve underlying messaging service?
NServiceBus implements timeouts using delayed messages. SQS happens to be limited to a maximum delay of 15 minutes. To overcome this limitation, the transport reschedules the message multiple times to reach the desired delay. Other transports such as Azure Service Bus and SQL Server transport do not require this, as they can send a single delayed message for the necessary time. RabbitMQ doesn't support delayed delivery natively either, so its transport also implements the feature internally. There's no "general" implementation; it's always specific to the transport you're using.
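The timeout-plus-complete approach described in the question can be sketched as follows. This assumes hypothetical message and data types (`StartOrder`, `SagaTtlExpired`, `OrderSagaData`) and the standard NServiceBus saga API; the 90-day value stands in for the 3-month TTL:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

public class StartOrder : ICommand { public string OrderId { get; set; } }
public class SagaTtlExpired { }                       // timeout state, can be empty
public class OrderSagaData : ContainSagaData { public string OrderId { get; set; } }

public class OrderSaga : Saga<OrderSagaData>,
    IAmStartedByMessages<StartOrder>,
    IHandleTimeouts<SagaTtlExpired>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        mapper.MapSaga(saga => saga.OrderId)
              .ToMessage<StartOrder>(message => message.OrderId);
    }

    public async Task Handle(StartOrder message, IMessageHandlerContext context)
    {
        Data.OrderId = message.OrderId;
        // Schedule the "garbage collection" for this saga instance.
        await RequestTimeout<SagaTtlExpired>(context, TimeSpan.FromDays(90));
    }

    public Task Timeout(SagaTtlExpired state, IMessageHandlerContext context)
    {
        // The saga was not completed by business logic within the TTL.
        MarkAsComplete();
        return Task.CompletedTask;
    }
}
```

The timeout message itself is what gets delayed by the transport, which is why the SQS 15-minute limit shows up here at all; from the saga's perspective the TTL is a single `RequestTimeout` call.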
That's exactly what NServiceBus does by using delayed delivery. Unfortunately, as you've noticed, SQS doesn't support delays longer than 15 minutes.
You could use a 3rd party scheduling library that would send a message that would complete the saga. https://docs.particular.net/nservicebus/scheduling/
I'll also follow-up internally to see if there are other recommendations.
RabbitMQ supports message priority: https://www.rabbitmq.com/priority.html
MassTransit allows users to set this up when configuring endpoints and when sending/publishing a message.
Question: Would it be possible to set a message priority when using a Routing Slip in MassTransit?
My Problem: We have a screen that can schedule items or process them right away. If scheduled, items can be processed in batches. If hundreds of items are processed at the same time, saving a record on the screen can take minutes because the message would go to the end of the queue, which can lead to a bad user experience.
So, if it's not possible to set the priority, what is the alternative here?
Thanks!
Your easiest option? Set up your activity services so that they host two endpoints: one for execute (anything, including batch) and one for execute-interactive, which you use when it is an interactive request. When you build the routing slip, use the appropriate queues for the activity execution, and you're off and running. Batch won't interfere because it's on a separate set of endpoints.
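A rough sketch of this two-endpoint approach with the MassTransit Courier API (the queue names, the activity name "Process", and the v8-style namespaces are assumptions, not from the answer):

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;
using MassTransit.Courier;

public static class ProcessItemDispatcher
{
    public static async Task DispatchAsync(IBus bus, bool interactive)
    {
        var builder = new RoutingSlipBuilder(NewId.NextGuid());

        // Same activity implementation, hosted on two receive endpoints;
        // pick the execute queue per request.
        var executeAddress = interactive
            ? new Uri("queue:process-interactive_execute")
            : new Uri("queue:process-batch_execute");

        builder.AddActivity("Process", executeAddress);

        await bus.Execute(builder.Build());
    }
}
```

Because interactive requests never share a queue with batch work, no priority handling is needed at all: the interactive queue stays short regardless of batch depth.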
Your other option is a lot harder: it would involve creating send middleware that looks for a routing slip, checks some value, and sets the priority.
I have a camel route processing messages from a RabbitMQ endpoint. I am keeping the defaults for concurrentConsumers (1) and threadPoolSize(10).
I am relatively new to RabbitMQ and still do not quite understand the relationship between the concurrentConsumers and threadPoolSize properties. The messages in my queues need to be processed in sequence, which I think should be achieved by using a single consumer. However, will using a threadPoolSize value greater than one cause messages to be processed in parallel?
The default threadPoolSize value is 10 (source: https://camel.apache.org/components/latest/rabbitmq-component.html).
It won't affect your concurrency: with concurrentConsumers at 1, the single consumer simply has a pool of 10 threads available for its processing. Look at exclusiveConsumer if you want only one consumer shared between all your apps (needed if multiple applications could target the same queue).
I have more or less implemented the Reliability Pattern in my Mule application using persistent VM queues on CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram at the link above) that is throwing an exception because the endpoint is down, and I want to ensure that the in-flight message will eventually get delivered to the endpoint:
As detailed at the link above, I have observed that when the exception is thrown within my "application logic flow", and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is that the message is then repeatedly taken off the queue, processed by the flow, and the exception is thrown again, ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues, as is possible, for example, with ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with an until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub?
I have configured my until-successful to place the message on another VM queue, which I want to use as a dead-letter queue. Again, this works fine, and I can log in to CloudHub and see the messages populated on my DLQ, but then there appears to be no way of moving messages from this queue back into the flow when the endpoint comes back up. All you seem to be able to do in CloudHub is clear the queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)?
VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features.
You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control.
As an alternative to JMS, you could consider hosted queues like CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but would gain better control over the (re)delivery behaviour.