I'm working with RabbitMQ and I'd like to have multiple consumers doing different things for the same message, with the message sitting in exactly one queue. Each consumer would work on its own, and the moment a consumer finishes its part, it would mark the message as having completed phase "x". When all the phases are completed for a message, basicAck() would be called to remove the message from the queue.
I suspect this is impossible; if so, I would approach it another way: multiple queues receiving the same message (via an exchange), each queue with a different consumer, each of which would communicate with a server. This server would then work with a database, checking and updating the completed phases. When all the phases are completed, it would log that in some way.
But this workaround seems exceedingly inefficient, and I'd like to skip it if possible.
Is it possible to set "states" or "phases" on a message in RabbitMQ?
So, first of all, in the context you're talking about, a "message" is an order to do some unit of work.
The first part of your question, by referring to "marking the message," treats the message as a stateful object. This is incorrect. Once a message is produced, it is immutable: no changes are permitted to it. If you violate, or attempt to violate, this principle, you have stepped outside the realm of sound design.
So, let's reframe. In a properly-architected message-oriented system, a message can represent either a command ("do something") or an event ("something happened"). Note that a reply message (something sent in response to a command) is sometimes treated as a third category, but it's really a sub-category of event.
Thus, we are led to the possibility of having (a) one message going to one queue, to be picked up by one consumer, or (b) one message going to many queues, to be picked up by many consumers. You combine (a) and (b) to compose larger system behaviors that evolve over time as each of these small behaviors executes, and suddenly you have a complex system.
Messages do, in fact, have state. Their state is "processed" or "unprocessed", as appropriate. That is the limit to their statefulness.
Bottom Line
Your situation describes a series of activities (what each consumer does) acting upon some sort of shared state. The role of messages and the message broker is to assist in orchestrating these activities, by providing instruction on what to do (via commands) and recording what took place (via events). The messages themselves cannot be the shared state, so you still need a database or some other means to persist the state of your system. There is no way to avoid this.
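For illustration, here is a minimal, hypothetical sketch of that shared state in C#: each consumer records completion of its own phase against a correlation id carried by the message, and the last phase to finish detects that everything is done. The class, table, and Dapper usage are all assumptions, not part of your existing system.

using System;
using System.Data;
using Dapper; // assumed for brevity; plain ADO.NET or any other store works just as well

public class PhaseTracker
{
    private readonly IDbConnection _db;

    public PhaseTracker(IDbConnection db) => _db = db;

    // Record that one phase finished and report whether every phase has now completed.
    public bool CompletePhase(Guid correlationId, string phase, int totalPhases)
    {
        _db.Execute(
            "INSERT INTO completed_phases (correlation_id, phase) VALUES (@id, @phase) " +
            "ON CONFLICT DO NOTHING", // PostgreSQL syntax; adjust for your database
            new { id = correlationId, phase });

        var completed = _db.ExecuteScalar<int>(
            "SELECT COUNT(*) FROM completed_phases WHERE correlation_id = @id",
            new { id = correlationId });

        return completed == totalPhases; // the caller can then log "all phases complete"
    }
}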
I'm new to MassTransit and I would like to understand if it can help with my scenario.
I'm building a sample application implemented with a CQRS event sourcing architecture and I need a service bus in order to dispatch the events created by the command stack to the query stack denormalizers.
Suppose we have a single aggregate in our domain, call it Photo, and two different domain events: PhotoUploaded and PhotoArchived.
Given this scenario, we have two different message types, and the default MassTransit behaviour is to create two different RabbitMQ exchanges: one for the PhotoUploaded message type and one for the PhotoArchived message type.
Suppose we have a single denormalizer called PhotoDenormalizer: this service will consume both message types, because the photo read model must be updated whenever a photo is uploaded or archived.
Given the default MassTransit topology, there will be two different exchanges, so the message processing order cannot be guaranteed across events of different types: the only guarantee we have is that all events of the same type will be processed in order, but not events of different types (and notice that, given the semantics of the events in my example, the processing order matters).
How can I handle such a scenario? Is MassTransit suitable for my needs? Am I completely missing the point of domain event dispatching?
Disclaimer: this is not an answer to your question, but rather a preventive note on why you should not do what you are planning to do.
Whilst message brokers like RMQ and messaging middleware libraries like MassTransit are perfect for integration, I strongly advise against using message brokers for event-sourcing. I can refer you to my old answer Event-sourcing: when (and not) should I use Message Queue? that explains the reasons behind it.
You have already found one of the reasons yourself: event order will never be guaranteed.
Another obvious reason is that building read models from events published via a message broker effectively removes the possibility of replay: a new read model that needs to start processing events from the beginning of time cannot be built, because all it will ever receive are the events being published now.
Aggregates form transactional boundaries, so every command needs to guarantee that it completes within one transaction. Whilst MT supports the transaction middleware, it only guarantees that you get a transaction for dependencies that support them, but not for context.Publish(@event) in the consumer body, since RMQ doesn't support transactions. There is a good chance of committing changes yet never getting the events on the read side. So, the rule of thumb for event stores is that you should be able to subscribe to the stream of changes from the store, and not publish events from your code, unless those are integration events rather than domain events.
For event-sourcing, it is crucial that each read model keeps its own checkpoint in the stream of events it is projecting. Message brokers don't give you that kind of power, since the "checkpoint" is actually your queue, and as soon as a message is gone from the queue, it is gone forever; there's no coming back.
Concerning the actual question:
You can use the message topology configuration to set the same entity name for different messages, and then they'll be published to the same exchange, but that falls into the "abuse" category, as Chris wrote on that page. I haven't tried it, but you can definitely experiment. The message CLR type is part of the metadata, so there shouldn't be deserialization issues.
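For example, something along these lines should route both event types to one exchange. This is untested (as said above), and the exact Host(...) overload depends on your MassTransit version; the exchange name is just an example.

using System;
using MassTransit;

var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host("localhost");

    // Publish both event types to a single "photo-events" exchange
    // instead of the default exchange-per-message-type.
    cfg.Message<PhotoUploaded>(m => m.SetEntityName("photo-events"));
    cfg.Message<PhotoArchived>(m => m.SetEntityName("photo-events"));
});

// The domain events from the question.
public record PhotoUploaded(Guid PhotoId);
public record PhotoArchived(Guid PhotoId);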
But again, putting messages in the same exchange won't give you any ordering guarantees, except that all messages will land in one queue for the consuming service.
You will have to at least set the partitioning filter based on your aggregate id, to prevent multiple messages for the same aggregate from being processed in parallel. That, by the way, is also useful for integration. That's how we do it:
// Partition handling per aggregate so messages for the same stream are not processed in parallel
void AddHandler<T>(Func<ConsumeContext<T>, string> partition) where T : class
    => ep.Handler<T>(
        c => appService.Handle(c, aggregateStore),
        hc => hc.UsePartitioner(8, partition));

// Partition key: the aggregate (stream) id carried by the message
AddHandler<InternalCommands.V1.Whatever>(c => c.Message.StreamGuid);
I have a Java application which publishes events to RabbitMQ. It has one very important characteristic: message order must be preserved at all times. The consumer can handle duplicates, but it cannot handle message 2 being enqueued before message 1, so to speak.
I have been reading a lot about RabbitMQ lately, and I feel there is only one solution: set the channel in confirm mode (https://www.rabbitmq.com/confirms.html - basically, it forces the broker to acknowledge the publication) and publish one by one. By one by one I mean that message 2 is only published after RabbitMQ has confirmed (via an asynchronous ACK response) that message 1 was actually received and persisted.
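To make the approach concrete, here is a minimal sketch (using the .NET client purely for illustration; the Java client has the equivalent confirmSelect()/waitForConfirmsOrDie() calls, and the queue name, timeout and connection details are placeholders):

using System;
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ConfirmSelect(); // put the channel in confirm mode

foreach (var payload in new[] { "message 1", "message 2", "message 3" })
{
    channel.BasicPublish(exchange: "", routingKey: "my-queue",
                         basicProperties: null, body: Encoding.UTF8.GetBytes(payload));

    // Block until the broker confirms this publish before sending the next one.
    // This is exactly what serialises publishing and makes it so slow.
    channel.WaitForConfirmsOrDie(TimeSpan.FromSeconds(5));
}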
I tried this in a conceptual implementation, and while it works fine, it is extremely slow, without exaggeration. Which makes sense: after all, we are now limiting our message rate to one message at a time.
So this leads me to my question: are there other, more performant, ways to ensure that message ordering is always preserved (either in RabbitMQ or via different approaches)?
Although my concern is RabbitMQ, I believe this question might be applied to any kind of asynchronous message queue service.
RabbitMQ enqueues messages in the same order you sent them. It's when subscribers go down, you get network splits, or a subscriber NACKs messages that they can get re-ordered; and even then, RMQ tries to keep them in approximately the same order by re-queueing at the same position, or as close to it as possible.
You can do it the way you suggest: take one message at a time, because if you take a message but crash before you've ACKed it to the broker, it will pop up again, at the same position, when your service comes back up.
This assumes you only have a single service instance consuming from the queue at any given time. That, in turn, is a distributed-systems problem of its own if you have a scheduler like Kubernetes or Mesos spawning your service instances.
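A minimal sketch of that consuming side with the .NET client (queue name and processing logic are placeholders): prefetch of one plus a manual ack only after the work is done, so an unacked message is redelivered at the head of the queue if the service crashes.

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false); // at most one unacked message

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
{
    Process(Encoding.UTF8.GetString(ea.Body.ToArray())); // your handling logic
    channel.BasicAck(ea.DeliveryTag, multiple: false);   // ack only after successful processing
};
channel.BasicConsume(queue: "my-queue", autoAck: false, consumer: consumer);

Console.ReadLine(); // keep the process alive

static void Process(string message) { /* ... */ }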
Another solution would be to ensure ordering of processing in the receiving service, by "resequencing" the messages based on their logical timestamps/sequence numbers.
I've written a much more thorough guide as annotated code here: https://github.com/haf/rmq-publisher-confirms-hopac/blob/master/src/Server/Shared/RabbitMQ.fs. With batching you can resequence. Furthermore, if your idempotence logic builds on consecutive sequence numbers, you can take batches, and each event will be idempotent despite being re-consumed.
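The core of a resequencer can be sketched roughly like this (hypothetical, in-memory only; a real one also needs persistence and a policy for gaps that never fill):

using System.Collections.Generic;

public class Resequencer<T>
{
    private readonly SortedDictionary<long, T> _buffer = new();
    private long _nextExpected = 1;

    // Accept a possibly out-of-order message and return everything that is now in order.
    public List<T> Accept(long sequenceNumber, T message)
    {
        if (sequenceNumber >= _nextExpected)
            _buffer[sequenceNumber] = message; // older sequence numbers are duplicates, drop them

        var ready = new List<T>();
        while (_buffer.TryGetValue(_nextExpected, out var next))
        {
            _buffer.Remove(_nextExpected);
            _nextExpected++;
            ready.Add(next);
        }
        return ready;
    }
}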
In some exceptional situations I need to somehow tell the consumer at the receiving endpoint that certain messages shouldn't be processed. Otherwise two systems will become out of sync (we deal with some outdated external systems, and if, for example, a connection is dropped, we have to discard all queued operations in the scope of that connection).
Take a risk and resolve problem messages manually? Compensation actions (that could be tough to support in my case)? Anything else?
There are a few ways:
You can set a time-to-live when sending a message: await endpoint.Send(myMessage, c => c.TimeToLive = TimeSpan.FromHours(1));, but this will apply to all messages that are sent (or published) like this. I would consider this, after looking at your requirements. This is technical, but it is a proper messaging pattern.
Make the TTL and generation timestamp properties of the message itself and let the consumer decide if the message is still worth processing. This is more business-oriented and probably the most correct way (a sketch follows after this list).
Combine tech and business - keep the timestamp and TTL in message headers so they don't pollute your message contracts, and filter them out using a custom middleware. In this case, you need to be careful to log such drops so you won't be left wondering why messages disappear now and then.
Almost any unreliable integration can be monitored using sagas, with timeouts. For example, we use a saga to integrate with Twilio. Since we have no ability to open a webhook for them, we poll after some interval to check the message status. You can start a saga when you get a message and schedule a message to check if the processing is still waiting. As discussed in comments, you can either use the "human intervention required" way to fix the issue or let the saga decide to drop the message.
A similar way could be to use a lookup table, where you put the list of messages that aren't relevant for processing. Such a table would be similar to the list of sagas. It seems that this way would also require scheduling. Both here and for the saga, I'd recommend using a separate receive endpoint (a queue) for the DropIt message, with only one consumer. It would prevent DropIt messages from getting stuck behind the integration messages that are waiting to be processed (some of which should already have been dropped).
Use the RMQ management API to remove messages from the queue. This is the worst method; I don't recommend it.
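As a rough sketch of the second way above (the TTL travels inside the contract and the consumer decides), something like this MassTransit consumer would do; the contract, names, and threshold are made up for illustration:

using System;
using System.Threading.Tasks;
using MassTransit;

public record SyncExternalSystem(Guid OperationId, DateTime IssuedAtUtc, TimeSpan TimeToLive);

public class SyncExternalSystemConsumer : IConsumer<SyncExternalSystem>
{
    public Task Consume(ConsumeContext<SyncExternalSystem> context)
    {
        var message = context.Message;
        if (DateTime.UtcNow - message.IssuedAtUtc > message.TimeToLive)
        {
            // Log the drop so messages don't silently disappear (see the header-based option above).
            Console.WriteLine($"Dropping stale operation {message.OperationId}");
            return Task.CompletedTask;
        }

        return ProcessAsync(message); // normal processing
    }

    private Task ProcessAsync(SyncExternalSystem message) => Task.CompletedTask;
}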
From what I understand, you're building a system that sends messages to 3rd-party systems; in other words, systems you don't control. Each has an API, but compensating actions aren't always possible, either because the API doesn't provide them or because actions are performed inside the 3rd-party system that can't be compensated or rolled back?
If possible, try to solve this via sagas. Make sure the saga executes the different steps (the sending of messages) in the right order, so that messages that cannot be compensated are sent last. This way, messages that can be compensated will be compensated by the saga if they fail, and the ones that cannot be compensated are sent last, when you're as sure as possible that they won't need to be compensated, because that last message is the final step in synchronizing all systems.
All in all, this is one of the problems with distributed systems: keeping everything in sync. Compensating actions are the way to deal with it. If compensating actions aren't possible, you're in a very difficult situation. See if the business can help by becoming more flexible and accepting compensation in places where they would currently tell you it's not possible.
In some exceptional situations I need to somehow tell the consumer at the receiving endpoint that certain messages shouldn't be processed.
Can't you invert this into:
Tell the consumer that an earlier message can be processed.
This way you can easily turn this into a state machine (like a saga) that acts on two messages. If the 2nd message never arrives, you can discard the 1st after a while, or do something else.
The strategy here is to halt and wait until you're certain that no actions need to be reverted.
We want to use Akka to implement a scenario where messages are fetched from a message queue (RabbitMQ) and then processed by a chain of actors. The queue is durable and messages must not be lost, so we need to send an acknowledgement (BasicAck in RabbitMQ) back to the queue in order to finalize the dequeued message. Because of that, the very last actor in the processing chain needs to do the acknowledgement. This seems to be a rather common need, and I wonder if there is a known pattern for it. Vaughn Vernon in his book writes about using a Return Address, so all messages sent along the chain would carry the return address (of the MQ channel actor) and a correlation identifier that specifies the queue message tag. Is this the proper way to do it?
An alternative is to ack the message right after receipt and then use persistent actors to provide guaranteed delivery, but I was advised against such an approach because the use of AMQP eliminates the need for actor persistence in this particular scenario.
I'm not really familiar with Akka, but I think I get the gist of what it does (very similar to a "process" in Erlang, I think, which is what RMQ is built on).
In general, your first suggestion from Vaughn Vernon's book is the way to go.
In my specific scenarios, I have taken a "middleware" approach to what you are suggesting. My specific middleware implementation forwards the message itself through a chain of commands that process the message. Each command calls an action.next() method to continue forwarding to the next command.
Prior to sending the message through the middleware, I create a default last-command-in-the-chain. This default command simply calls actions.ack(), which, behind the scenes, acknowledges the message.
I do things this way so that the commands never have to know anything about how to actually implement the mechanics of completing and moving on to the next thing. They have an API specific to themselves, being commands in a chain.
This allows me to change the implementation of acknowledging the message, or how I handle messages from RMQ, etc., without changing the commands directly.
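My implementation isn't public, but the shape is roughly this (names invented, C# purely for illustration): commands only ever see a small actions API, and a default command appended to the end of the chain is the only thing that knows how to ack.

using System;
using System.Collections.Generic;

public interface IMessageActions
{
    void Next(); // hand off to the next command in the chain
    void Ack();  // acknowledge the message with the broker
}

public interface ICommand<TMessage>
{
    void Execute(TMessage message, IMessageActions actions);
}

// Default last command: nothing left to do except acknowledge.
public class AckCommand<TMessage> : ICommand<TMessage>
{
    public void Execute(TMessage message, IMessageActions actions) => actions.Ack();
}

public class CommandChain<TMessage>
{
    private readonly List<ICommand<TMessage>> _commands = new();

    public CommandChain<TMessage> Use(ICommand<TMessage> command)
    {
        _commands.Add(command);
        return this;
    }

    // The broker-specific ack callback is injected once, here; commands never see it.
    public void Run(TMessage message, Action brokerAck)
    {
        _commands.Add(new AckCommand<TMessage>()); // default last-command-in-the-chain
        Execute(0, message, brokerAck);
    }

    private void Execute(int index, TMessage message, Action brokerAck) =>
        _commands[index].Execute(message, new Actions(
            () => Execute(index + 1, message, brokerAck),
            brokerAck));

    private sealed class Actions : IMessageActions
    {
        private readonly Action _next;
        private readonly Action _ack;
        public Actions(Action next, Action ack) { _next = next; _ack = ack; }
        public void Next() => _next();
        public void Ack() => _ack();
    }
}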
Ack'ing the message immediately introduces danger, as your actor could crash, Akka itself could crash, and a host of other problems can (and will) occur, and you'll be more likely to lose the message.
Remember, though: there is no 100% perfect setup. You will, at some point, lose a message or process the same message twice. Your system needs to handle these scenarios in some way, at some point. Everything you're doing is heading down the right path to make this less likely, but nothing will ever prevent crashes and message loss 100% of the time.
We have a requirement for all our messages to be processed in the order of arrival to MSMQ.
We will be exposing a WCF service to the clients, and this WCF service will post the messages using NServiceBus (Sendonly Bus) to MSMQ.
We are going to develop a Windows service (MessageHandler) which will use NServiceBus to read the messages from MSMQ and save them to the database. Our database will not be available for a few hours every day.
During the DB downtime we expect the process to retry the first message in MSMQ and halt processing of other messages until the database is up. Once the database is up, we want NServiceBus to process messages in the order they were sent.
Will setting MaximumConcurrencyLevel="1" and MaximumMessageThroughputPerSecond="1" help in this scenario?
What is the best way using NServiceBus to handle this scenario?
We have a requirement for all our messages to be processed in the order of arrival to MSMQ.
See the answer to this question How to handle message order in nservicebus?, and also this post here.
I am in agreement that while in-order delivery is possible, it is much better to design your system such that order does not matter. The linked article outlines the following solution (sketched in code after the list):
Add a sequence number to all messages
In the receiver, check that the sequence number is the last seen number + 1; if not, throw an out-of-sequence exception
Enable second-level retries (so if messages arrive out of order they will be retried later, hopefully after the correct message has been received)
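Here is a rough sketch of that in an NServiceBus handler (v6+ style); the message contract, the sequence store, and the exception type are all invented for illustration:

using System;
using System.Threading.Tasks;
using NServiceBus;

public class MyMessage : ICommand
{
    public Guid StreamId { get; set; }
    public long SequenceNumber { get; set; }
}

public class MyMessageHandler : IHandleMessages<MyMessage>
{
    private readonly ISequenceStore _store; // e.g. a table with the last seen number per stream

    public MyMessageHandler(ISequenceStore store) => _store = store;

    public async Task Handle(MyMessage message, IMessageHandlerContext context)
    {
        var lastSeen = await _store.GetLastSeen(message.StreamId);
        if (message.SequenceNumber != lastSeen + 1)
            throw new OutOfSequenceException(); // recoverable failure, so retries kick in later

        // ...process the message...

        await _store.SetLastSeen(message.StreamId, message.SequenceNumber);
    }
}

public interface ISequenceStore
{
    Task<long> GetLastSeen(Guid streamId);
    Task SetLastSeen(Guid streamId, long sequenceNumber);
}

public class OutOfSequenceException : Exception { }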
However, in the interest of answering your specific question:
Will setting MaximumConcurrencyLevel="1" and MaximumMessageThroughputPerSecond="1" help in this scenario?
Not really.
Whenever you have a requirement for ordered delivery, the fundamental laws of logic dictate that somewhere along your message processing pipeline you must have a single-threaded process in order to guarantee in-order delivery.
Where this happens is up to you (check out the resequencer pattern), but you could certainly throttle the NServiceBus handler to a single thread (I don't think you need to set MaximumMessageThroughputPerSecond to make it single-threaded, though).
However, even if you did this, and even if you used transactional queues, you could still not guarantee that each message would be dequeued and processed to the database in order, because if there are any permanent failures on any of the messages they will be removed from the queue and the next message processed.
During the DB downtime we expect the process to retry the first message in MSMQ and halt processing of other messages until the database is up. Once the database is up, we want NServiceBus to process messages in the order they were sent.
This is not recommended. The second level retry functionality in NServiceBus is designed to handle unexpected and short-term outages, not planned and long-term outages.
For starters, when your NServiceBus message handler endpoint tries to process a message in its input queue and finds the database unavailable, it will apply its second-level retry policy, which by default will attempt the dequeue 5 times at increasing intervals and then fail permanently, sticking the failed message in its error queue. It will then move on to the next message in the input queue.
While this doesn't violate your in-order delivery requirement on its own, it will make life very difficult for two reasons:
The permanently failed messages will need to be re-processed with priority once the database becomes available again, and
there will be a ton of unwanted failure logging, which will obfuscate any genuine handling errors.
If you have regular planned outages which you know about in advance, then the simplest way to deal with them is to implement a service window, which is another term for a schedule.
However, the Windows service manager does not support the concept of service windows, so you would have to use a scheduled task to stop and then start your service, or look at other options such as Hangfire, Quartz.NET, or some other cron-type library.
It kind of depends on why you need the messages to arrive in order. If, for example, you first receive an Order message and then various OrderLine messages that all belong to a certain order, there are multiple possibilities.
One is to just accept that there can be OrderLine messages without an Order. The Order will come in later anyway. Eventual Consistency.
Another is to collect messages (and possibly state) in an NServiceBus saga. If normally MessageA needs to arrive first, with MessageB and MessageC only received later, give all three messages the ability to start the saga. All three messages need something that ties them together, like a unique GUID. The saga then makes sure it collects them properly and, when all messages have arrived, perhaps stores its final state and marks itself as completed (a sketch follows after these options).
Another option is to just persist all messages directly into the database and have something else figure out what belongs to what. This is useful in a data-warehouse scenario where the data just needs to be collected, no matter what. Some data might not be 100% accurate (or consistent), but that's okay.
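A bare-bones sketch of the saga option (NServiceBus v6+ style, with message and data types invented for illustration):

using System;
using System.Threading.Tasks;
using NServiceBus;

public class OrderSagaData : ContainSagaData
{
    public Guid OrderId { get; set; }
    public bool OrderReceived { get; set; }
    public int OrderLinesReceived { get; set; }
    public int ExpectedOrderLines { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>,
    IAmStartedByMessages<OrderReceived>,
    IAmStartedByMessages<OrderLineReceived>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        // The shared id ties the messages together, regardless of arrival order.
        mapper.ConfigureMapping<OrderReceived>(m => m.OrderId).ToSaga(s => s.OrderId);
        mapper.ConfigureMapping<OrderLineReceived>(m => m.OrderId).ToSaga(s => s.OrderId);
    }

    public Task Handle(OrderReceived message, IMessageHandlerContext context)
    {
        Data.OrderReceived = true;
        Data.ExpectedOrderLines = message.LineCount;
        return CompleteIfDone(context);
    }

    public Task Handle(OrderLineReceived message, IMessageHandlerContext context)
    {
        Data.OrderLinesReceived++;
        return CompleteIfDone(context);
    }

    private Task CompleteIfDone(IMessageHandlerContext context)
    {
        if (Data.OrderReceived && Data.OrderLinesReceived == Data.ExpectedOrderLines)
        {
            // Store the final state / publish a "fully collected" event here, then finish.
            MarkAsComplete();
        }
        return Task.CompletedTask;
    }
}

public class OrderReceived : IMessage { public Guid OrderId { get; set; } public int LineCount { get; set; } }
public class OrderLineReceived : IMessage { public Guid OrderId { get; set; } }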
Asynchronous messaging makes it hard to process messages 100% in order, especially when the client calling the WCF service makes mistakes and/or sends them out of order. It wouldn't be the first time I've had such a requirement along with out-of-order messages.