Is there a way in EasyNetQ to set the routing key [x-dead-letter-routing-key] argument when creating a Queue? (as far as I can see you can only set a DeadLetterExchange.)
IQueue updateCacheQueue = advancedBus.QueueDeclare(name: "UpdateCache", deadLetterExchange: "UpdatesDeadLetter");
RabbitMQ's model treats routing as the exchange's job, not the queue's. You can create an exchange that delivers to exactly one queue, and thus your DLQ addressing issue is solved. Should you decide you need to take additional actions in the future (e.g. store the message for potential reprocessing AND ALSO alert operations via email), you can do that in the exchange without mucking up the queue processor.
I added another parameter to the QueueDeclare method and created a pull request; you can set it from version 0.40.6.355 onwards.
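For reference, a minimal sketch against the EasyNetQ advanced API, assuming a version that already contains the parameter added in that pull request (the parameter name deadLetterRoutingKey and the key value are my assumptions; check the QueueDeclare signature of your version):

IQueue updateCacheQueue = advancedBus.QueueDeclare(
    name: "UpdateCache",
    deadLetterExchange: "UpdatesDeadLetter",
    deadLetterRoutingKey: "updates.dead");  // ends up as the x-dead-letter-routing-key argument on the queue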
In my system, I use topic exchanges with lots of consumer queues. Each queue has its own non-unique routing key (e.g. 'add.#' for all new entities, or just '#' to consume all events).
I want to add support for retrying failed messages with some delay. The biggest issue that I see with the dead letter exchange approach is getting the message back to exactly the queue in which it failed. Routing keys for queues are not unique, so even if I resubmit a message to the exchange with the original routing key, it will also be consumed by other queues.
One solution is to have a "retry" exchange to which every application subscribes with a unique routing key (e.g. the original queue name). But that sounds too complicated, and I want to hide this infrastructure complexity from developers.
I came up with the idea of a filter that checks the 'x-death' header, gets the first queue (the queue where the error occurred in the first place), and processes the message only for the appropriate queue; otherwise it acknowledges the message.
Is it possible to implement this behavior with Spring AMQP? I'm looking into MessagePostProcessor, but how do I acknowledge a message from it?
If you really only care about the target queue, consider a variant that republishes to the default exchange, which has these properties:
The default exchange is implicitly bound to every queue, with a routing key equal to the queue name. It is not possible to explicitly bind to, or unbind from the default exchange. It also cannot be deleted.
Pay attention to the "routing key equal to the queue name" part. I would consider using the AmqpHeaders.CONSUMER_QUEUE header and using its value as the routing key when republishing to the default exchange ("") during the retry process.
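To illustrate the routing idea only (not the Spring AMQP API itself), here is a minimal sketch with the raw RabbitMQ .NET client; the helper name and the originalQueue parameter are hypothetical, and in Spring AMQP you would take the queue name from the AmqpHeaders.CONSUMER_QUEUE header as described above:

using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static class RetryHelper
{
    // Republish a failed delivery straight back to the queue it came from.
    // The default exchange ("") routes by queue name, so only that queue
    // receives the retried message; no other binding sees it.
    public static void RepublishToOriginalQueue(IModel channel, BasicDeliverEventArgs failed, string originalQueue)
    {
        channel.BasicPublish(exchange: "",
                             routingKey: originalQueue,
                             basicProperties: failed.BasicProperties,  // keeps x-death and other headers
                             body: failed.Body);
    }
}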
I am using MassTransit with RabbitMQ as the transport layer, and I need message deduplication.
Adding a new message to the queue should be skipped if a duplicate message is already queued (even if that message is currently being processed by a consumer). Duplicates could be identified by the content of the message, for example.
Sending DoWork1, DoWork2, DoWork3 can be processed in parallel, but when sending DoWork1, DoWork2, DoWork2 the duplicate should be skipped; once DoWork1 and DoWork2 have been processed, the same messages can be enqueued again and should not be treated as duplicates.
Solution 1: use the "RabbitMQ Message Deduplication Plugin" at the exchange layer, which seems ideal to me, but I'm not sure it solves the described problem.
Solution 2: implement custom middleware with third party data storage.
Is there any better solution for the described problem?
Thanks for help in advance!
The RabbitMQ deduplication plugin was designed for that purpose.
You can de-duplicate either at the exchange or at the queue. The main difference is that the exchange de-duplicates a message if it has seen it previously, while the queue de-duplicates it if it already contains a copy of it.
When publishing a message, just set the x-deduplication-header header with a string which uniquely identifies a message (for example the MD5 hash of its body).
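As a hedged sketch with the raw RabbitMQ .NET client (the exchange name, cache size and the x-message-deduplication exchange type reflect my reading of the plugin's documentation, so double-check them; with MassTransit you would set the same header through the send/publish context):

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Exchange-level deduplication: the plugin registers a custom exchange type.
channel.ExchangeDeclare("work", "x-message-deduplication", durable: true, autoDelete: false,
    arguments: new Dictionary<string, object> { ["x-cache-size"] = 10000 });

byte[] body = Encoding.UTF8.GetBytes("DoWork2");

// Derive the deduplication key from the message content, e.g. an MD5 hash of the body.
string dedupKey;
using (var md5 = MD5.Create())
    dedupKey = BitConverter.ToString(md5.ComputeHash(body)).Replace("-", "");

var props = channel.CreateBasicProperties();
props.Headers = new Dictionary<string, object> { ["x-deduplication-header"] = dedupKey };

channel.BasicPublish(exchange: "work", routingKey: "", basicProperties: props, body: body);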
Using custom middleware will allow you more freedom of action at the cost of your own development.
I'm looking for a way to buffer messages received by an exchange until there is at least one queue bound to that exchange.
Is it supported by RabbitMQ?
Maybe there are some workarounds (I didn't find any).
EDIT
My use case:
1. I've got one data producer (which reads real-time data from an external system).
2. I've got one fanout exchange which receives data from the producer.
3. On system startup there might be no consumer, but after a few moments there should be at least one, which creates its own queue and binds it to the exchange from step 2.
The problem is the short time between steps 2 and 3, where there are no queues bound to the exchange created in step 1.
Of course, it's an edge case and after system initialization queues and exchanges are bound and everything works as expected.
Why do queues and bindings have to be created by consumers (not by the producer)? Because I need a flexible setup where I can add consumers without any changes to other components' code (e.g. the producer).
EDIT 2
I'm processing the output from another system which stores both real-time and historical data. There are cases where I want to read historical data first (on initialization) and then continue to handle real-time data.
I may have misled you by saying that there are multiple consumers. In the case where I need a buffer on the exchange there is only one consumer (which writes everything to a time-series DB as it appears in the queue).
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Why do queues and bindings have to be created by consumers (not by the producer)?
Queues and bindings can be created by producers or consumers or both. The requirement is that the exact same arguments are used when creating them if a client application tries to "re-create" a queue or binding. If different arguments are used, a channel-level error will happen.
As you have found, if a producer publishes to an exchange that can't route messages, they will be lost. Olivier's suggestion to use an alternate exchange is a good one, but I recommend you have your producers create queues and bindings as well.
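A minimal sketch of that recommendation with the RabbitMQ .NET client (the exchange and queue names are illustrative): the producer declares the exchange, the queue and the binding itself before publishing, so nothing is lost even if no consumer has started yet, and the declarations stay idempotent as long as every application uses exactly the same arguments.

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Producer-side declaration of the full topology it publishes to.
channel.ExchangeDeclare("data", ExchangeType.Fanout, durable: true, autoDelete: false);
channel.QueueDeclare("timeseries-writer", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("timeseries-writer", "data", routingKey: "");

channel.BasicPublish(exchange: "data", routingKey: "",
                     basicProperties: null, body: Encoding.UTF8.GetBytes("reading 42"));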
If you mean avoiding throwing away messages because there is no destination configured for them, then yes.
You should look at the alternate exchange feature.
This assumes that before (or when) you start, the alternate exchange is created (a fanout would typically be used) and a queue is bound to it (let's call it notroutedq).
That way the messages are not lost; they will be stored in notroutedq.
From there you can set up a mechanism that reprocesses the messages in that queue - most likely reinjecting them into the main exchange - once a given time has passed or when a binding has been added to your main exchange.
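A sketch with the RabbitMQ .NET client (exchange names are illustrative): the alternate exchange and notroutedq are declared first, then the main exchange points at the alternate via the alternate-exchange argument.

using System.Collections.Generic;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Catch-all side: a fanout alternate exchange with a single bound queue.
channel.ExchangeDeclare("unrouted", ExchangeType.Fanout, durable: true, autoDelete: false);
channel.QueueDeclare("notroutedq", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("notroutedq", "unrouted", routingKey: "");

// Main exchange: anything it cannot route goes to "unrouted" instead of being dropped.
channel.ExchangeDeclare("main", ExchangeType.Fanout, durable: true, autoDelete: false,
    arguments: new Dictionary<string, object> { ["alternate-exchange"] = "unrouted" });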
-- EDIT --
Thanks for the updated info.
Could you indicate how long typically you'd expect the past messages to be useful to the consumers?
In your description, you mention real-time data and possibly multiple consumers coming and going. Based on that, I'm not sure how much of the data kept in the notroutedq would be of value, and with which frequency you'd expect to resend them to the consumers.
The cases I had with alternate exchanges were mostly focused on identifying missing bindings, so that one could easily correct the bindings and reprocess the messages without loss.
If the number of consumers varies through time and the data content is real-time, I'd wonder a bit about the benefit of keeping the data.
I have implemented the example from the RabbitMQ website:
RabbitMQ Example
I have expanded it to have an application with a button to send a message.
Now I have started two consumers on two different computers.
When I send messages, the first message is sent to computer1, the second message is sent to computer2, the third to computer1, and so on.
Why is this, and how can I change the behavior to send each message to each consumer?
Why is this
As noted by Yazan, messages are consumed from a single queue in a round-robin manner. The behavior you are seeing is by design, making it easy to scale up the number of consumers for a given queue.
how can I change the behavior to send each message to each consumer?
To have each consumer receive the same message, you need to create a queue for each consumer and deliver the same message to each queue.
The easiest way to do this is to use a fanout exchange. This will send every message to every queue that is bound to the exchange, completely ignoring the routing key.
If you need more control over the routing, you can use a topic or direct exchange and manage the routing keys.
Whatever type of exchange you choose, though, you will need to have a queue per consumer and have each message routed to each queue.
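A minimal sketch of the fanout approach with the RabbitMQ .NET client (names are illustrative); each consumer runs the declare/bind/consume part against its own connection, so every consumer ends up with its own queue receiving every message:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare("broadcast", ExchangeType.Fanout);

// Consumer side: a server-named, exclusive queue bound to the fanout exchange.
var queueName = channel.QueueDeclare().QueueName;
channel.QueueBind(queue: queueName, exchange: "broadcast", routingKey: "");

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
    Console.WriteLine(Encoding.UTF8.GetString(ea.Body.ToArray()));
channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);

// Producer side: publish to the exchange; fanout ignores the routing key.
channel.BasicPublish(exchange: "broadcast", routingKey: "",
                     basicProperties: null, body: Encoding.UTF8.GetBytes("hello"));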
You can't; it's controlled by the server. Check the "Round-robin dispatching" section.
The server decides whose turn it is. I'm not sure if there is a set of algorithms you can pick from, but in the end the server will control this (I think the round-robin algorithm is the default),
unless you want to use routing keys and exchanges.
I would see this more as a design question. Ideally, producers should create the exchanges and consumers should create the queues: each consumer can create its own queue and hook it up to an exchange. This makes sure every consumer gets its messages in its private queue.
What you're doing is essentially the 'worker queues' model, which is used to distribute tasks among worker nodes. Since each task needs to be performed only once, the message is sent to only one node. If you want to send a message to all the nodes, you need a different model called 'pub-sub', where each message is broadcast to all the subscribers. The following link shows a simple pub-sub tutorial:
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
I have to implement this scenario:
An external application publishes messages to RabbitMQ.
Each message has a client_id property. We can place this id in the routing key, in a message header, or in some other property.
I have to implement sharding in the exchange routing logic - the message should be delivered to a specific queue based on the client_id range.
Is it possible to implement this with the standard exchanges?
If not, which exchange should I take as the base?
How can I dynamically change the client_id ranges?
Take a look at the RabbitMQ sharding plugin. It's included in the RabbitMQ distribution from v3.6.0 onwards.
Just have your producer put enough info into the routing key that causes the message to go into the right queue on the other side of the Exchange.
So for example, create two queues called 1 and 2 and bind them with routing keys matching the names. Then have your producer decide which routing key to use when producing the event message. Customers with names starting with letters a-m go to 1, n-z go to 2, you get the idea. It pushes the sharding to the producer but that might be OK for your application.
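A sketch of that layout with the RabbitMQ .NET client (queue and exchange names are illustrative): two shard queues bound to a direct exchange by routing keys matching their names, with the producer picking the key from the customer name.

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare("customers", ExchangeType.Direct, durable: true);
foreach (var shard in new[] { "1", "2" })
{
    channel.QueueDeclare(shard, durable: true, exclusive: false, autoDelete: false);
    channel.QueueBind(shard, "customers", routingKey: shard);
}

// Producer decides the shard: names starting with a-m go to "1", n-z go to "2".
string customerName = "alice";
string routingKey = char.ToLowerInvariant(customerName[0]) <= 'm' ? "1" : "2";
channel.BasicPublish("customers", routingKey,
                     basicProperties: null, body: Encoding.UTF8.GetBytes(customerName));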
AMQP doesn't have any explicit implementation of sharding, but its architecture should help you to do that.
Spreading messages across several queues is just a RabbitMQ routing problem (and part of the AMQP specification), and with routing keys you can attach heterogeneous consumers to handle specific messages routed via the same exchange. Therefore, the producer should publish with a specific key so the message is consumed by a specific queue/consumer.
You can decide to do static sharding; perhaps you have 10 queues with one consumer per queue. You could implement a hashing function such that the key is CLIENT_ID % 10.
Other, non-static solutions could be proposed as well, and you can build on top of this architecture.