I'm thinking of using RabbitMQ for a new project (though I have little RabbitMQ experience of my own) to solve the following problem:
Upon an event, a long-running computation has to be performed. The "work queue" pattern described in https://www.rabbitmq.com/tutorials/tutorial-two-python.html seems perfect, but I want an additional twist: no two jobs with the same routing key (or some part of the payload or metadata, however that ends up being implemented) should run on the workers at the same time. In other words: when one worker is processing job XY and another job XY is queued, message XY must not be delivered to an idle worker until the running worker has completed the job.
What would be the best strategy to implement that? The only real solution I came up with is that when a worker gets a job, it checks with all other workers whether they are currently processing a similar job, and if so, rejects the message (for requeueing).
Depending on your architecture, there are two approaches to your problem.
The first: the consumers share a cache of jobs in progress, and if a job of the same type shows up, they reject or requeue it.
This requires maintaining a shared cache and a bit of logic on the consumer side.
The side effect is that, with rejection, duplicated jobs will keep returning to the consumers, while with requeueing they will be processed after an unpredictable delay (depending on how deep the queue is).
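For illustration, here is a minimal consumer sketch of this first approach, using pika with Redis as the shared cache (the queue name, lock-key format, and lock TTL are assumptions, not anything RabbitMQ prescribes):

    import pika
    import redis

    r = redis.Redis()
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.basic_qos(prefetch_count=1)  # hand each worker one job at a time

    def process(body):
        pass  # placeholder for the long-running computation

    def on_message(ch, method, properties, body):
        lock_key = "lock:" + method.routing_key
        # SET NX takes the lock only if no other worker holds it; the TTL
        # guards against a crashed worker leaving the lock behind forever.
        if r.set(lock_key, "busy", nx=True, ex=3600):
            try:
                process(body)
                ch.basic_ack(delivery_tag=method.delivery_tag)
            finally:
                r.delete(lock_key)
        else:
            # A job with the same key is already running somewhere: requeue.
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

    channel.basic_consume(queue="jobs", on_message_callback=on_message)
    channel.start_consuming()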
The second: use the deduplication plugin on the queue.
You won't need any additional cache, only a few lines of code on the publisher side.
The downside of this approach is that duplicated messages are dropped outright. If you want them delivered eventually, you will need the publisher to retry whenever it gets a negative acknowledgment.
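And a publisher-side sketch of the second approach, assuming the rabbitmq-message-deduplication plugin is installed (the queue argument and header name follow that plugin's documentation; verify them against the version you deploy):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Deduplication is enabled per queue via a declaration argument.
    channel.queue_declare(
        queue="jobs",
        durable=True,
        arguments={"x-message-deduplication": True},
    )

    channel.confirm_delivery()  # let the publisher learn the message's fate

    try:
        channel.basic_publish(
            exchange="",
            routing_key="jobs",
            body=b"payload for job XY",
            properties=pika.BasicProperties(
                # Messages carrying the same value here count as duplicates.
                headers={"x-deduplication-header": "job-XY"},
            ),
        )
    except pika.exceptions.NackError:
        # The broker refused the message; retry later if the duplicate
        # must eventually be delivered.
        pass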
Is there any way to make a RabbitMQ queue behave as a stack, i.e. the client gets the last message that was posted to the queue (LIFO) rather than the first one? Or, alternatively, make it a priority queue using a timestamp that the client could set?
RabbitMQ does support priority queues, but the priority is just a number up to 255 (using no more than 10 levels is recommended).
What I want to achieve is that the latest messages are processed first, because they contain the latest information about the source. I still want to process the old messages, but when the client cannot keep up (or there was some downtime and the client is recovering), I want to process the latest state information first.
The only solution I have come up with so far is to set a TTL on the messages of the main queue and have them go to a dead-letter queue when they expire, which the client also processes. This is not very clean, though, and if the source of the messages takes longer than the TTL to send a new status update, the latest state will be stuck in the queue behind older expired messages that are still to be processed.
If this is not possible with RabbitMQ, is there another messaging framework that supports this requirement?
Kafka Log Compaction was created for exactly the use case you describe:
Log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition. It addresses use cases and scenarios such as restoring state after application crashes or system failure, or reloading caches after application restarts during operational maintenance. Let's dive into these use cases in more detail and then describe how compaction works.
So, RabbitMQ is a queue, not a stack. It is specifically designed NOT to do what you are asking (a queue is always a first-in, first-out data structure).
However, there are options:
Presumably some process (e.g. a web service) exists between the client and the message server. This process could save the data off to an additional storage location (e.g. memcached) for immediate access to the latest value, thus leaving the queue untouched.
You could configure a secondary queue/service combination. When messages are published, they are routed to both queues. The first queue is for your heavy processing; the second feeds a service whose only task is to update the latest value in memcached or some other fast storage/retrieval system. Message lifetime in this queue would presumably be much shorter (see the sketch after this list of options).
You could implement multiple processing steps. The first step would update the current state (presumably a quick operation), after which the message is re-published to the longer processing step's queue.
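A minimal pika sketch of the second option, using a fanout exchange so every published message is copied to both queues (all names here are placeholders):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # A fanout exchange copies each message to every bound queue.
    channel.exchange_declare(exchange="status", exchange_type="fanout")
    channel.queue_declare(queue="heavy_processing")  # slow workers
    channel.queue_declare(queue="latest_value")      # fast cache-updater service
    channel.queue_bind(queue="heavy_processing", exchange="status")
    channel.queue_bind(queue="latest_value", exchange="status")

    channel.basic_publish(exchange="status", routing_key="", body=b"state update")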
I want to implement a priority work queue in which the priority of a group of messages can change once they are in the queue. Since it is a work queue with variable processing time, the messages are not assigned with a round-robin algorithm but are pulled from the queue when a resource is free (using a per-consumer limit).
I came up with two ideas for an implementation:
Use RabbitMQ's priority queue, and when a request for a priority change comes in, read the messages with that priority from the queue and re-send them with the new priority. (I am not sure this is a good approach, given the O(n) complexity.)
Use several queues with distinct names for each group of messages, and use a separate queue to communicate the current priority list (an ordered list of queue names) to workers. (With this approach, I am not sure how to make the priority list persistent, so that a newly joined worker knows what the current priority list is.)
How would you implement it? Is RabbitMQ a viable option for this use case?
your idea "priority of a message can change once they are in the queue" IMO is not possible with rabbitmq because rabbitmq only allows you to get messages from the head of a queue.
for example:
you have N queues each used for a different priority
each queue has 100+ messages
your idea requires you to reach into the middle of a queue to get a specific message but this is not possible with rabbitmq so the thought experiment stops here because you can only get messages at the head of a queue
your idea IMO would require using something else besides rabbitmq.
A quick-and-dirty idea that works with RabbitMQ today and is similar to yours:
create one RabbitMQ queue with N priorities
submit a message with priority x
if you need to raise its priority to some higher priority y, send the same message again with the new, higher priority y
this ensures the new copy is processed sooner
the side effect is that you may process the same request twice
you can fix that side effect in your design by having some sort of database for synchronization that keeps track of which jobs are completed; that would avoid processing a job twice
There are many other details that would need to be addressed, like keeping the original message around somewhere outside RabbitMQ, concurrency, etc.
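A minimal sketch of that quick-and-dirty idea with pika (the queue name, job-id header, and priority levels are placeholders):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # A single queue with up to 10 priority levels (RabbitMQ's recommended cap).
    channel.queue_declare(queue="work", arguments={"x-max-priority": 10})

    def submit(job_id, body, priority):
        channel.basic_publish(
            exchange="",
            routing_key="work",
            body=body,
            properties=pika.BasicProperties(
                priority=priority,
                headers={"job-id": job_id},  # lets consumers spot duplicates
            ),
        )

    submit("job-42", b"payload", priority=1)  # original submission
    submit("job-42", b"payload", priority=9)  # "bump": same message, higher priority

    # Consumers must check a shared job-status store keyed by job-id before
    # working, so the stale low-priority copy is skipped when it surfaces.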
I have a question about multi-consumer concurrency.
I want to send work items, which come from web requests, to distributed queues.
I just want to be sure about the order of the work items across multiple queues (FIFO).
Because these requests come from different users, each user's requests/work items must be ordered.
I have found this feature under different names on Azure Service Bus and as ActiveMQ message grouping.
Is there any way to do this in RabbitMQ?
I want to guarantee that each customer's requests are ordered relative to one another.
Each customer may have multiple requests, but the requests for a given customer must be processed in order.
I want to process incoming requests quickly, using multiple consumers on different nodes.
For example, different customers, 1 to 1000, send over 1 million requests.
If I put this huge number of requests into only one queue, it takes a long time to consume, so I want to share the processing load between n (say 5) nodes. But customer X's requests must stay in the same sequence for processing.
When working with event-based systems, and especially when using multiple producers and/or consumers, it is important to come to terms with the fact that there usually is no such thing as a guaranteed order of events. To get a robust system, it is also wise to design the message handlers to be idempotent; they should tolerate getting the same message twice (or more).
There are far too many things that may (and actually should be allowed to) interfere with the order:
The producers may deliver the messages at a slightly different pace
One producer might miss an ack (due to a lost packet) and will resend the message
One consumer may get and process a message, but the ack is lost on the way back, so the message is delivered again (to another consumer).
Some other service that your handlers depend on might be down, so that you have to reject the message.
That being said, there is one pattern that service-bus systems like NServiceBus use to enforce the order in which messages are consumed. There are some requirements:
You will need a centralized storage (like a sql-server or document store) that allows for conditional updates; for instance you want to be able to store the sequence number of the last processed message (or how far you have come in the process), but only if the already stored sequence/progress is the right/expected one. Storing the user-id and the progress even for millions of customers should be a very easy operation for most databases.
You make sure the queue is configured with a dead-letter queue/exchange for retries, and then set your original queue as the dead-letter queue for that one in turn.
You set a TTL (for instance 30 seconds) on the retry/dead-letter queue. This way, messages that land on the dead-letter queue are automatically pushed back to your original queue after that timeout.
When processing a message, you check your storage/database to see whether you are in the right state to handle it (i.e. the needed previous steps are already done).
If you are OK to handle it, you do so and update the storage (conditionally!).
If not, you nack the message so that it is thrown onto the dead-letter queue. Basically you are saying: "Nah, I can't handle this message yet; there is probably some other message in the queue that should be handled first."
This way, the happy path is to process a great number of messages in the right order.
But if something happens and you get a message out of band, you throw it onto the retry queue (the dead-letter queue) and Rabbit will make sure it gets back into the original queue to be retried at a later stage, but only after a delay.
The beauty of this is that you are able to handle most of the situations that may interfere with processing the message (out-of-order messages, dependent services being down, your handler being shut down in the middle of handling a message) in exactly the same way: by rejecting the message and letting your infrastructure (Rabbit) take care of retrying it after a while.
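A minimal pika sketch of the queue topology described above (the queue names and the 30-second TTL are placeholders):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # The work queue: nacked messages are dead-lettered to the retry queue.
    channel.queue_declare(
        queue="work",
        durable=True,
        arguments={
            "x-dead-letter-exchange": "",  # default exchange
            "x-dead-letter-routing-key": "work.retry",
        },
    )

    # The retry queue has no consumers; messages sit for 30 seconds, expire,
    # and are dead-lettered straight back onto the work queue.
    channel.queue_declare(
        queue="work.retry",
        durable=True,
        arguments={
            "x-message-ttl": 30000,
            "x-dead-letter-exchange": "",
            "x-dead-letter-routing-key": "work",
        },
    )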
(Assuming the OP is asking about something like ActiveMQ's "message grouping".)
This isn't currently built into RabbitMQ AFAIK (it wasn't as of 2013, per this answer), and I'm not aware of it now (though I haven't kept up lately).
However, RabbitMQ's model of exchanges and queues is very flexible: exchanges and queues can easily be created dynamically. (This can be done in other messaging systems too, but if you read the ActiveMQ or Red Hat AMQ documentation, you'll find that all the examples in the user guides use queues pre-declared in configuration files loaded at system startup, except for RPC-like request/response communication.)
Also it is very easy in RabbitMQ for a consumer (i.e., message consuming thread) to consume from multiple queues.
So you could build, on top of RabbitMQ, a system where you got your desired grouping semantics.
One way would be to create dynamic queues: the first time a customer order (or a new group of customer orders) is seen, a queue with a unique name would be created for all messages of that group. That queue name would be communicated (via another queue) to a consumer whose sole purpose is to load-balance among the consumers responsible for handling customer order groups. I.e., the load-balancer would pull off its queue a message saying "new group with queue name XYZ", find in its pool of order-group consumers one that can take the load, and pass it a message saying "start listening to XYZ".
Another way to do it is with pub/sub and topic routing: each customer order group gets a unique topic, and you proceed as above.
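A rough producer-side sketch of the dynamic-queue variant with pika; the control-queue protocol and all names here are invented for illustration:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="group-control")  # the load-balancer listens here

    known_groups = set()  # groups this producer has already announced

    def publish_order(group_id, body):
        queue_name = "orders." + group_id
        if group_id not in known_groups:
            # First message for this group: create its queue (declares are
            # idempotent) and announce it so the load-balancer can assign
            # a consumer to it.
            channel.queue_declare(queue=queue_name)
            channel.basic_publish(exchange="", routing_key="group-control",
                                  body=queue_name.encode())
            known_groups.add(group_id)
        channel.basic_publish(exchange="", routing_key=queue_name, body=body)

    publish_order("customer-17", b"order payload")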
RabbitMQ Consistent Hash Exchange Type
We are using RabbitMQ and we found a plugin for this. It uses a consistent-hashing algorithm to distribute messages to queues according to their routing keys, so messages with the same key consistently land in the same queue.
For more information about consistent hashing:
https://en.wikipedia.org/wiki/Consistent_hashing
https://www.youtube.com/watch?v=viaNG1zyx1g
You can find this plugin on the RabbitMQ plugins page:
plugin: rabbitmq_consistent_hash_exchange
https://www.rabbitmq.com/plugins.html
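A minimal pika sketch, assuming the plugin is enabled; note that with this exchange type the binding key is a numeric weight, and all messages with the same routing key hash to the same queue:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    channel.exchange_declare(exchange="orders", exchange_type="x-consistent-hash")

    # The binding key is a weight: queues bound with equal weights receive
    # roughly equal shares of the hash space.
    for queue in ("orders-1", "orders-2", "orders-3"):
        channel.queue_declare(queue=queue)
        channel.queue_bind(queue=queue, exchange="orders", routing_key="1")

    # Every message for customer 42 lands in the same queue, preserving
    # per-customer ordering while spreading customers across queues.
    channel.basic_publish(exchange="orders", routing_key="customer-42",
                          body=b"request payload")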
My users are editing data collaboratively on the web. My product needs their edits to be atomic. I can't guarantee that at the database level, so I would like the updates to be performed one at a time.
Here is what I would need in order to parallelize across multiple documents:
Let's say we have two documents A and B
1) The queue server starts empty
2) 1 user submits an update for document A
3) The queue server receives the update, creates QueueA and puts the update in it
4) 3 other users submit updates to document A, which are queued in QueueA
5) 2 other users submit changes for document B, which are queued in new queue QueueB
6) The worker pool is started.
7) Worker1 makes a request; the first message of QueueA is delivered (although it would not be an issue if the message from QueueB came first). QueueA is marked as busy until it gets a response
8) Another worker makes a request, the item from QueueB is returned. QueueB is marked as busy.
9) On the third request, nothing is returned as both queues are busy.
10) The first worker finishes its task, calls the broker and QueueA is not busy anymore.
11) A worker makes a request, it should get the message from QueueA.
12) The worker handling QueueB's message times out, which frees QueueB for message consumption.
I have started reading about RabbitMQ, AWS SQS/SNS, Kafka... I am not very knowledgeable in this field, but to my great surprise I haven't been able to find a system matching my requirements on the web.
For now I don't know whether my design has issues I haven't seen, or whether I just haven't found the right keyword or software for my use case. Scalability should be easy, which is why I have looked at these tools.
How could I easily implement this design ?
This is an application-design question that is hard to address accurately in a Stack Overflow answer. What you are doing sounds like asynchronous processing of data, using a queue both to buffer and to scale. The scale part is easy: you add more consumers (i.e. running service processes) and requests can be processed individually in parallel.
I think the best way to think of the problem is to break it down into individual steps of data processing and use the queues as on and off ramps into other distinct processes. More than that, and I'd need some whiteboard time to walk through the entire problem space.
ActiveMQ and RabbitMQ sound like more of a fit here. Pressed to recommend one, I tend to lean toward ActiveMQ because it's Java-based and most shops know how to monitor and support Java-based apps. SQS is limited, and given that this sounds like business data, using HTTP as the transport is not a robust solution. Kafka doesn't sound like a fit here.
I'm new to RabbitMQ and I'm wondering how to implement the following: a producer creates tasks for multiple sites; a bunch of consumers should process these tasks one by one, but each talking to only one site with a concurrency of 1, never starting a new task for a site before its previous one has ended. This way, slow sites would be processed slowly and fast ones fast (as opposed to slow sites taking up all the worker capacity).
Ideally a site would be processed by only one worker at a time, with another worker taking over if it dies. This seems like a task for exclusive queues, but apparently there is no easy way to list and subscribe to new queues. What is the proper way to achieve such results with RabbitMQ?
I think you may have things the wrong way round. For workers, you have one or more producers sending to one exchange. The exchange has one queue (you can send directly to the queue, but all that really does is go via the default exchange; I prefer to be explicit). All consumers connect to the single queue and take tasks in turn. You should require messages to be ACKed before they are removed from the queue; that way, if a process dies, its message is returned to the queue and picked up by the next consumer/worker.
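A minimal consumer sketch of that setup with pika (the queue name is a placeholder). Manual acks plus a prefetch of 1 mean each worker holds at most one unacknowledged task, and a task whose worker dies is requeued and handed to another consumer:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    channel.queue_declare(queue="tasks", durable=True)
    channel.basic_qos(prefetch_count=1)  # one in-flight task per worker

    def do_work(body):
        pass  # placeholder for the actual task

    def on_message(ch, method, properties, body):
        do_work(body)
        # Only ack once the work is done; if this process dies first,
        # the broker redelivers the task to another consumer.
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="tasks", on_message_callback=on_message)
    channel.start_consuming()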