We have two long-running sagas that both run indefinitely and respond to timeouts. The first subscribes to a timeout every 15 minutes, the second every 24 hours. Each saga keeps track of its own execution and notifies the other saga when it starts running and when it has completed. The bulk data loads these sagas are responsible for cannot run at the same time due to database contention.
When the first saga (Saga A, 15 min) kicks off, it first checks an internal variable to see whether the second saga (Saga B, 24 hr) is currently running. If not, it begins its processing steps (shelling out to another process and polling it over time to see when it has completed). The two sagas communicate by sending messages to notify each other when they start up and when they complete.
For some reason this seems smelly to me on two levels:
We've essentially got a singleton saga that never completes. Is this an anti-pattern in its own right?
We're sending messages bidirectionally with the sole intention of modifying state. It seems as though there should be a better way to handle this type of scenario. With the release of NSB 4.0, we started getting errors when sending commands. The errors cleared up when we used a pub-sub approach instead.
Is this considered an NServiceBus implementation anti-pattern, and is there a better pattern for this sort of requirement?
Generally speaking, I don't think sagas communicating with each other is an antipattern. In your specific case, however, it does sound smelly.
From what you've said about the behaviors, it appears this could be a single saga. A saga can request multiple timeouts of different types, so you could effectively merge these sagas, and in doing so get rid of all the messages that exist just to modify state in the sibling, because the state would be shared.
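For illustration, here is a rough sketch of what such a merged saga might look like, using NServiceBus 6-style APIs. All the message, property, and type names here are invented for the example, and the completion message is assumed to come back as a Reply from whatever handler performs the load, so the saga clears its own flag instead of messaging a sibling:

using System;
using System.Threading.Tasks;
using NServiceBus;

// Illustrative messages
public class StartBulkLoadCoordination : ICommand { public string CoordinatorId { get; set; } }
public class RunIncrementalLoad : ICommand { }
public class RunFullLoad : ICommand { }
public class BulkLoadCompleted : IMessage { }
// Timeout state classes
public class FifteenMinuteTick { }
public class DailyTick { }

public class BulkLoadSagaData : ContainSagaData
{
    public string CoordinatorId { get; set; }
    public bool LoadInProgress { get; set; }   // shared state: no sibling messages needed
}

public class BulkLoadSaga : Saga<BulkLoadSagaData>,
    IAmStartedByMessages<StartBulkLoadCoordination>,
    IHandleTimeouts<FifteenMinuteTick>,
    IHandleTimeouts<DailyTick>,
    IHandleMessages<BulkLoadCompleted>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<BulkLoadSagaData> mapper)
    {
        mapper.ConfigureMapping<StartBulkLoadCoordination>(m => m.CoordinatorId).ToSaga(s => s.CoordinatorId);
    }

    public async Task Handle(StartBulkLoadCoordination message, IMessageHandlerContext context)
    {
        // One saga, two recurring timeouts of different types.
        await RequestTimeout<FifteenMinuteTick>(context, TimeSpan.FromMinutes(15));
        await RequestTimeout<DailyTick>(context, TimeSpan.FromHours(24));
    }

    public async Task Timeout(FifteenMinuteTick state, IMessageHandlerContext context)
    {
        if (!Data.LoadInProgress)
        {
            Data.LoadInProgress = true;
            await context.Send(new RunIncrementalLoad());
        }
        await RequestTimeout<FifteenMinuteTick>(context, TimeSpan.FromMinutes(15));
    }

    public async Task Timeout(DailyTick state, IMessageHandlerContext context)
    {
        if (!Data.LoadInProgress)
        {
            Data.LoadInProgress = true;
            await context.Send(new RunFullLoad());
        }
        await RequestTimeout<DailyTick>(context, TimeSpan.FromHours(24));
    }

    // Assumed to arrive as a Reply from the handler that performs the load.
    public Task Handle(BulkLoadCompleted message, IMessageHandlerContext context)
    {
        Data.LoadInProgress = false;
        return Task.CompletedTask;
    }
}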
In a general sense, however, it's perfectly fine for sagas to communicate. Communicating through commands is possible but should be treated carefully, as it creates direct coupling between the two. An example would be a parent and child saga pair, where the parent workflow commands a child workflow to begin, but the child workflow runs independently until it replies to its parent that it's done. We just have to recognize that these are tightly coupled processes within the same service boundary. We might do this to keep each saga more focused, or because the parent starts multiple child sagas with different data.
An even better example is saga communication through events. One saga publishes an event, and another saga responds with its own long-running process. This is all decoupled and good. However, if the second saga publishes an event that the first one responds to, then even though you're using events you've created a loop, so it's not that dissimilar from commands at that point, although it is still decoupled from any other external subscribers.
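As a sketch of the event-driven variant (again NServiceBus-style, with invented names): the first saga publishes an event when it reaches the relevant point, e.g. context.Publish(new ImportCompleted { ImportId = ... }), and a second saga is started by that event and carries on with its own work, never sending anything back:

using System;
using System.Threading.Tasks;
using NServiceBus;

// Illustrative contracts.
public class ImportCompleted : IEvent { public Guid ImportId { get; set; } }
public class BuildReports : ICommand { public Guid ImportId { get; set; } }

public class ReportingSagaData : ContainSagaData
{
    public Guid ImportId { get; set; }
}

// Started by the other saga's event; runs its own long-running process and
// never replies, so there is no command-style loop between the two.
public class ReportingSaga : Saga<ReportingSagaData>,
    IAmStartedByMessages<ImportCompleted>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<ReportingSagaData> mapper)
    {
        mapper.ConfigureMapping<ImportCompleted>(e => e.ImportId).ToSaga(s => s.ImportId);
    }

    public Task Handle(ImportCompleted message, IMessageHandlerContext context)
    {
        // Kick off this saga's own long-running work.
        return context.Send(new BuildReports { ImportId = message.ImportId });
    }
}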
In some exceptional situations I need a way to tell the consumer on the receiving end that certain messages shouldn't be processed. Otherwise the two systems will become out of sync (we deal with some outdated external systems, and if, for example, a connection is dropped, we have to discard all queued operations in the scope of that connection).
Should I take the risk and resolve problem messages manually? Use compensating actions (which could be tough to support in my case)? Anything else?
There are a few ways:
You can set a time-to-live when sending a message: await endpoint.Send(myMessage, c => c.TimeToLive = TimeSpan.FromHours(1));, but it will apply to every message that is sent (or published) this way. Looking at your requirements, I would consider this. It is purely technical, but it is a proper messaging pattern.
Make the TTL and generation timestamp properties of the message itself and let the consumer decide whether the message is still worth processing. This is more of a business-level approach and probably the most correct way (see the sketch after this list).
Combine tech and business: keep the timestamp and TTL in message headers so they don't pollute your message contracts, and filter expired messages out using custom middleware. In this case, you need to be careful to log such drops so you won't be left wondering why messages disappear now and then.
Almost any unreliable integration can be monitored using sagas with timeouts. For example, we use a saga to integrate with Twilio. Since we have no ability to expose a webhook to them, we poll after some interval to check the message status. You can start a saga when you get a message and schedule a message to check whether the processing is still waiting. As discussed in the comments, you can either go the "human intervention required" route to fix the issue or let the saga decide to drop the message.
A similar approach is to use a lookup table listing the messages that are no longer relevant for processing. Such a table would be similar to the list of sagas, and it would also require scheduling. Both here and for the saga, I'd recommend using a separate receive endpoint (a queue) for the DropIt message, with only one consumer. That prevents DropIt messages from getting stuck behind the integration messages that are still waiting to be processed (some of which should already have been dropped).
Use the RabbitMQ management API to remove messages from the queue. This is the worst option and I wouldn't recommend it.
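A minimal sketch of the second option above (the message-level TTL), assuming a MassTransit consumer; the contract, property names, and logger wiring are made up for the example:

using System;
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.Logging;

// Illustrative contract: the message carries its own generation time and TTL.
public class SyncExternalSystem
{
    public Guid ConnectionId { get; set; }
    public DateTime GeneratedAtUtc { get; set; }
    public TimeSpan TimeToLive { get; set; }
}

public class SyncExternalSystemConsumer : IConsumer<SyncExternalSystem>
{
    readonly ILogger<SyncExternalSystemConsumer> _logger;

    public SyncExternalSystemConsumer(ILogger<SyncExternalSystemConsumer> logger) => _logger = logger;

    public Task Consume(ConsumeContext<SyncExternalSystem> context)
    {
        var message = context.Message;

        // Business rule: stale instructions are dropped, not applied.
        if (DateTime.UtcNow - message.GeneratedAtUtc > message.TimeToLive)
        {
            // Log the drop so you aren't left wondering where messages went.
            _logger.LogInformation("Dropping stale message {MessageId}", context.MessageId);
            return Task.CompletedTask;
        }

        // ...perform the actual integration work here...
        return Task.CompletedTask;
    }
}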
From what I understand, you're building a system that sends messages to 3rd party systems, in other words systems you don't control. It has an API, but compensating actions aren't always possible, either because the API doesn't provide them or because actions performed inside the 3rd party system can't be compensated or rolled back?
If possible, try to solve this with sagas. Make sure the saga executes the different steps (the sending of messages) in the right order, so that messages that cannot be compensated are sent last. That way, messages that can be compensated will be compensated by the saga if they fail, and the ones that cannot be compensated are only sent once you're as sure as possible that they won't need to be, because that last message is the final step in synchronizing all the systems.
All in all, this is one of the hard problems with distributed systems: keeping everything in sync. Compensating actions are the way to deal with it, and if they aren't possible, you're in a very difficult situation. See whether the business can help by becoming more flexible and accepting compensation in places where they'll initially tell you it's not possible.
In some exceptional situations I need a way to tell the consumer on the receiving end that certain messages shouldn't be processed.
Can't you invert this into:
Tell the consumer that an earlier message can be processed.
This way you can easily turn this into a state machine (like a saga) that acts on two messages. If the 2nd message never arrives, you can discard the 1st after a while or do something else.
The strategy here is to halt and wait until you're certain that no actions need to be reverted.
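For what it's worth, here is a rough sketch of that shape as a MassTransit (7-era Automatonymous) state machine. It assumes a message scheduler is configured, and every type, event, and property name here is invented for the example; treat it as a sketch of the idea, not a drop-in implementation:

using System;
using Automatonymous;
using MassTransit;

// Illustrative contracts: the queued operation, the confirmation that the
// connection survived, and the scheduled "give up" message.
public class OperationQueued { public Guid OperationId { get; set; } }
public class ConnectionConfirmed { public Guid OperationId { get; set; } }
public class DiscardHeldOperation { public Guid OperationId { get; set; } }

public class HeldOperation : SagaStateMachineInstance
{
    public Guid CorrelationId { get; set; }
    public string CurrentState { get; set; }
    public Guid? DiscardTimeoutTokenId { get; set; }
}

public class HeldOperationStateMachine : MassTransitStateMachine<HeldOperation>
{
    public State Held { get; private set; }
    public Event<OperationQueued> OperationQueued { get; private set; }
    public Event<ConnectionConfirmed> ConnectionConfirmed { get; private set; }
    public Schedule<HeldOperation, DiscardHeldOperation> DiscardTimeout { get; private set; }

    public HeldOperationStateMachine()
    {
        InstanceState(x => x.CurrentState);

        Event(() => OperationQueued, x => x.CorrelateById(ctx => ctx.Message.OperationId));
        Event(() => ConnectionConfirmed, x => x.CorrelateById(ctx => ctx.Message.OperationId));

        Schedule(() => DiscardTimeout, x => x.DiscardTimeoutTokenId, s =>
        {
            s.Delay = TimeSpan.FromMinutes(30);
            s.Received = r => r.CorrelateById(ctx => ctx.Message.OperationId);
        });

        Initially(
            When(OperationQueued)
                // Hold the operation and start the clock.
                .Schedule(DiscardTimeout, ctx => new DiscardHeldOperation { OperationId = ctx.Instance.CorrelationId })
                .TransitionTo(Held));

        During(Held,
            When(ConnectionConfirmed)
                // The earlier message may now be processed for real.
                .Unschedule(DiscardTimeout)
                .Then(ctx => { /* dispatch the held operation here */ })
                .Finalize(),
            When(DiscardTimeout.Received)
                // Confirmation never arrived: drop the held operation.
                .Finalize());

        SetCompletedWhenFinalized();
    }
}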
I'm struggling to understand how to implement eventual consistency with the BacklogItem and Task example from Vaughn Vernon. The rule I've understood so far is (considering the case where he splits BacklogItem and Task into separate aggregate roots):
A BacklogItem can contain one or more Tasks. When the remaining hours of all the Tasks of a BacklogItem reach 0, the status of the BacklogItem should change to "DONE".
I'm aware of the rule that says you should not update two aggregate roots in the same transaction, and that you should accomplish this with eventual consistency.
Once a Domain Service updates the number of hours of a Task, a TaskRemainingHoursUpdated event should be published to a DomainEventPublisher which lives in the same thread as the executing code. And here is where I'm at a loss, with the following questions:
I suppose there should be a subscriber (also living in the same thread, I guess) that reacts to TaskRemainingHoursUpdated events. At which point in your desktop/web application do you perform this subscription to the bus? At the very initialization of your app? In the application code? Is there any reasoning about where domain subscribers should be placed?
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would violate the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
If I want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a message broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
If I use this message broker, should I have a background thread or something that just consumes events from a RabbitMQ queue and then dispatches the event to update the product?
I'd appreciate if someone can shed some clear light over this since it is quite complex to picture in its completeness.
So to start with, you need to recognize that, if the BacklogItem is the authority for whether or not it is "Done", then it needs to have all of the information to compute that for itself.
So somewhere within the BacklogItem is data that is tracking which Tasks it knows about, and the known state of those tasks. In other words, the BacklogItem has a stale copy of information about the task.
That's the "eventually consistent" bit; we're trying to arrange the system so that the cached copy of the data in the BacklogItem boundary includes the new changes to the task state.
That in turn means we need to send a command to the BacklogItem advising it of the changes to the task.
From the point of view of the backlog item, we don't really care where the command comes from. We could, for example, make it a manual process "After you complete the task, click this button here to inform the backlog item".
But for the sanity of our users, we're more likely to arrange for an event handler to be running: when you see the output from the task, forward it to the corresponding backlog item.
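A bare-bones sketch of that arrangement in plain C#, with no particular framework. The repository interface and all names here are invented, and a real BacklogItem would already know its full list of tasks rather than only the ones that have reported progress:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Illustrative event published when a Task's remaining hours change.
public class TaskRemainingHoursUpdated
{
    public Guid BacklogItemId { get; set; }
    public Guid TaskId { get; set; }
    public int RemainingHours { get; set; }
}

// The BacklogItem keeps its own (possibly stale) copy of task progress and
// is the sole authority for deciding when it is "DONE".
public class BacklogItem
{
    readonly Dictionary<Guid, int> _remainingHoursByTask = new Dictionary<Guid, int>();
    public string Status { get; private set; } = "COMMITTED";

    public void RecordTaskProgress(Guid taskId, int remainingHours)
    {
        _remainingHoursByTask[taskId] = remainingHours;
        // Simplification: only tasks that have reported are considered here.
        if (_remainingHoursByTask.Values.All(hours => hours == 0))
            Status = "DONE";
    }
}

public interface IBacklogItemRepository
{
    Task<BacklogItem> Get(Guid id);
    Task Save(BacklogItem item);
}

// The "forwarding" handler: it reacts to the task event and applies it to
// the BacklogItem aggregate in its own transaction.
public class TaskProgressRelay
{
    readonly IBacklogItemRepository _backlogItems;

    public TaskProgressRelay(IBacklogItemRepository backlogItems) => _backlogItems = backlogItems;

    public async Task When(TaskRemainingHoursUpdated @event)
    {
        var backlogItem = await _backlogItems.Get(@event.BacklogItemId);
        backlogItem.RecordTaskProgress(@event.TaskId, @event.RemainingHours);
        await _backlogItems.Save(backlogItem);
    }
}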
At which point in your desktop/web application do you perform this subscription to the bus? At the very initialization of your app?
That seems pretty reasonable.
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would violate the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
Same thread and same transaction are not necessarily coincident. It can all be coordinated in the same thread; but it probably makes more sense to let the consequences happen in the background. At their core, events and commands are just messages - write the message, put it into an inbox, and let the next thread worry about processing.
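To make the "inbox" idea concrete, here is a minimal in-process sketch reusing the event and relay types from the earlier sketch. In a real system a durable table or a broker would play this role; the Channel only illustrates the handoff between threads:

using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public class DomainEventInbox
{
    readonly Channel<TaskRemainingHoursUpdated> _inbox =
        Channel.CreateUnbounded<TaskRemainingHoursUpdated>();

    // Called from the thread/transaction that updated the Task.
    public ValueTask Enqueue(TaskRemainingHoursUpdated @event) =>
        _inbox.Writer.WriteAsync(@event);

    // Runs in the background; each event is applied to the BacklogItem in a
    // separate transaction, which is where "eventually consistent" comes in.
    public async Task ProcessAsync(TaskProgressRelay relay, CancellationToken cancellation)
    {
        await foreach (var @event in _inbox.Reader.ReadAllAsync(cancellation))
        {
            await relay.When(@event);
        }
    }
}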
If I want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a message broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
No; the mechanics of the plumbing matter not at all.
I have a task that can be started by the user, that could take hours to run, and where there's a reasonable chance that the user will start the task multiple times during a run.
I've broken the processing of the task up into smaller batches, but the way the data looks, it's very difficult to tell what still needs to be processed. I batch it using messages that each process a bite-sized chunk of the data.
I have thought of using a saga to control access to starting this process, with a saga property called Processing that I set at the start of the handler and unset at the end. The handler does some work and sends the messages that process the data. I check the value at the start of the handler, and if it's already set, I just return.
I'm using Azure Storage for saga storage, if it makes a difference for the next bit. I'm also using NSB 6.
I have a few questions though:
Is this the correct approach to re-entrancy with NSB?
When is a change to Saga data persisted? (and is it different depending on the transport?)
Following on from the above, if I set a saga value in a handler, wait a while and then reset it to its original value, will it change the persistent storage at all?
This seems to be cross-posted in the Particular Software Google group:
https://groups.google.com/forum/#!topic/particularsoftware/p-qD5merxZQ
Sagas are very often used for such patterns. The saga instance would track progress and guard against the (sub)tasks being invoked multiple times, but it could also take action if the expected task(s) didn't complete or are overdue.
The saga instance data is persisted after the message has been processed, not when any of the saga data properties are updated, so the logic you described would not work.
The correct way would be to have a saga that orchestrates your process and regular handlers that do the actual work.
In the saga handle method that creates the saga, check whether the saga already exists or is already in the 'busy' status; if it isn't, send a message to do the work. This guards that the task is only initiated once, and after that the saga is stored.
The handler can now do the actual task; when it completes, it can 'Reply' back to the saga.
When the saga receives the reply, it can start any follow-up task or raise an event, and it can also be marked as complete.
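Put together, that flow looks roughly like this (NServiceBus 6-style APIs; all names are invented for the example). Note that the caveats discussed further down about concurrent copies of the start message still apply:

using System;
using System.Threading.Tasks;
using NServiceBus;

// Illustrative messages.
public class StartLongTask : ICommand { public string TaskKey { get; set; } }
public class ProcessChunks : ICommand { public string TaskKey { get; set; } }
public class LongTaskCompleted : IMessage { public string TaskKey { get; set; } }

public class LongTaskSagaData : ContainSagaData
{
    public string TaskKey { get; set; }
    public bool Busy { get; set; }
}

public class LongTaskSaga : Saga<LongTaskSagaData>,
    IAmStartedByMessages<StartLongTask>,
    IHandleMessages<LongTaskCompleted>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<LongTaskSagaData> mapper)
    {
        mapper.ConfigureMapping<StartLongTask>(m => m.TaskKey).ToSaga(s => s.TaskKey);
    }

    public async Task Handle(StartLongTask message, IMessageHandlerContext context)
    {
        // If the user starts the task again while it is running, the existing
        // saga instance is found and the request is simply ignored.
        if (Data.Busy)
            return;

        Data.Busy = true;
        await context.Send(new ProcessChunks { TaskKey = message.TaskKey });
    }

    public Task Handle(LongTaskCompleted message, IMessageHandlerContext context)
    {
        // Follow-up work or a published event could go here instead.
        MarkAsComplete();
        return Task.CompletedTask;
    }
}

// A regular handler does the actual (long) work and replies to the saga.
public class ProcessChunksHandler : IHandleMessages<ProcessChunks>
{
    public async Task Handle(ProcessChunks message, IMessageHandlerContext context)
    {
        // ...do the batched processing here...
        await context.Reply(new LongTaskCompleted { TaskKey = message.TaskKey });
    }
}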
Optimistic concurrency control and batched sends
If two messages are received that create/update the same saga instance, only the first writer wins. The other will fail because of optimistic concurrency control.
However, if these messages are not processed in parallel but sequentially, both will be processed unless the saga checks whether the saga instance is already initialized.
The following sample demonstrates this: https://github.com/ramonsmits/docs.particular.net/tree/azure-storage-saga-optimistic-concurrency-control/samples/azure/storage-persistence/ASP_1
The client sends two identical message bodies. The saga is launched and only 1 message succeeds due to optimistic concurrency control.
Due to retries, the second copy will eventually be processed too, but the saga checks the saga data for a field that it knows would normally be initialized by a message that 'starts' the saga. If that field is already initialized, it assumes the message has already been processed and just returns:
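The guard in the linked sample is essentially this shape (illustrative names; note the checked field is a plain saga-data property, not the correlation id):

public async Task Handle(StartOrder message, IMessageHandlerContext context)
{
    if (Data.Initialized)
    {
        // A second copy of the start message: the work was already kicked
        // off by the first copy, so just return.
        return;
    }

    Data.Initialized = true;
    await context.Send(new ProcessOrder { OrderId = message.OrderId });
}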
It also demonstrates batched sends: messages are not actually sent until all handlers/sagas have completed.
Saga design
The following video might help you with designing your sagas and understand the various patterns:
Integration Patterns with NServiceBus: https://www.youtube.com/watch?v=BK8JPp8prXc
Keep in mind that Azure Storage isn't transactional and does not provide locking; it is only atomic. Any work you do within a handler or saga can potentially be invoked more than once, and if you use non-transactional resources, make sure that logic is idempotent.
So, after a lot of testing, I don't believe that this is the right approach.
As Archer says, you can manipulate the saga data properties as much as you like; they are only saved at the end of the handler.
So if the saga receives two simultaneous messages, the check for Processing will pass both times and I'll have two processes running (and, in my case, processing the same data twice).
The saga within a saga faces a similar problem too.
What I believe will work (and has done during my PoC testing) is using a database unique index to help out. I'm using Entity Framework and Azure SQL, so database access is not contained within the handler's transaction (this is the important difference between the database and the saga data). The database also operates across all instances of the endpoint, and this generally seems like a good solution.
The table that I'm using has each of the columns that make up the saga 'id', and there is a unique index on them.
At the beginning of the handler I look for a matching row in the database. If one exists, the handler returns (in my case this is okay; in other cases you could throw an exception to get the handler to run again). Otherwise, the first thing the handler does (before any real work, although I'm not 100% sure that it matters) is write a row to the table. If the write fails (most likely because the unique constraint was violated), the exception puts the message back on the queue. It doesn't really matter why the database write fails, as NSB will handle it.
Then the handler does the work.
Then remove the row.
Of course there is a chance that something goes wrong while processing the work, so I'm also using a timestamp and another process to reset the row if it stays busy for too long. (I still need to define 'too long', though :) )
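For anyone wanting to see the shape of this, a minimal sketch of the unique-index guard, written here with EF Core (the original used Entity Framework against Azure SQL). The message, table, and DbContext are invented, the DbContext is assumed to be configured elsewhere, and WorkKey is assumed to have a unique index defined in the model or a migration:

using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using NServiceBus;

public class ProcessData : ICommand { public string WorkKey { get; set; } }

public class InProgressWork
{
    public int Id { get; set; }
    public string WorkKey { get; set; }       // unique index on this column
    public DateTime StartedAtUtc { get; set; }
}

public class WorkContext : DbContext
{
    public DbSet<InProgressWork> InProgressWork { get; set; }
}

public class ProcessDataHandler : IHandleMessages<ProcessData>
{
    public async Task Handle(ProcessData message, IMessageHandlerContext context)
    {
        using (var db = new WorkContext())
        {
            // 1. Someone already claimed this work: bail out.
            if (await db.InProgressWork.AnyAsync(w => w.WorkKey == message.WorkKey))
                return;

            // 2. Claim it. A concurrent claim violates the unique index, the
            //    insert throws, and NSB puts the message back for a retry.
            db.InProgressWork.Add(new InProgressWork
            {
                WorkKey = message.WorkKey,
                StartedAtUtc = DateTime.UtcNow
            });
            await db.SaveChangesAsync();
        }

        // 3. Do the actual work (outside the handler's message transaction,
        //    which is the whole point of using the database for the guard).
        // ...

        // 4. Release the claim.
        using (var db = new WorkContext())
        {
            var claim = await db.InProgressWork.SingleAsync(w => w.WorkKey == message.WorkKey);
            db.InProgressWork.Remove(claim);
            await db.SaveChangesAsync();
        }
    }
}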
Maybe this can help someone with a similar problem.
I have converted the Starbucks example to use RabbitMQ and NHibernate. However, there is a bug/challenge/issue with the DrinkPreparationSaga and when it actually gets saved to the database vs. when the PaymentCompleteMessage gets submitted.
How the code works (out of the box; this isn't anything I changed): the new instance of the saga isn't saved to the database until AFTER the Initial state completes and it transitions to its next state.
The problem is that in the sample Starbucks application the DrinkPreparationSaga starts off with a very slow method that prints out coffee-making sounds once a second, ten times.
So there are 10 seconds between when the saga is actually created and when it's saved to the database. The bigger problem is that any other messages destined for that instance of the saga (by CorrelationId) are thrown into the error queue because the saga doesn't exist yet.
Shouldn't the NHibernateSagaRepository immediately save the new saga instance, then run the workflow, then update the saga after the workflow? I can't think of another way to make the example work, but that would require a decent bit of reorganizing in the NHibernateSagaRepository.
Thanks in advance.
The reason sagas are not saved before processing the message is that some members of the saga may not be nullable (or allow nulls) and they are not set before the initial saga message is processed.
And you're correct. Take a look at the Riktig sample (http://github.com/phatboyg/Riktig) to see how the Automatonymous sagas are used and how correlation with another service (in this case, the image retrieval service) is done. Sagas should not actually perform work, but coordinate the state of a transaction. The earlier Starbucks example was a naive implementation that we built one morning in Austin. It is long overdue for an update (for many reasons, including that it still uses Magnum state machines, which are soon to be deprecated in favor of Automatonymous).
I have a situation where I have a service subscribing to event messages and performing some work when they arrive. There is a certain class of events which can arrive in short bursts of many events that reference the same underlying data. I would like to defer processing of related events for a short period of time, so that I only do the calculation once for each batch of related events, rather than in response to each individual event. Is there some kind of pattern I can follow which will allow me to collect related events for a period of time and then process them all at once? I was thinking a saga + timeout might be able to achieve this, but I'm not sure whether this is an appropriate use for it.
Thanks!
Yes, a saga could be the way to go. However, consider the performance of the saga persistence (NHibernate over a DB in the current version, RavenDB in the next version) as compared to your fault-tolerance needs (if a machine crashes, would it be acceptable to lose some messages?).
No easy answers, I'm afraid.
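For anyone landing on this later, the saga + timeout debounce idea would look roughly like this in today's NServiceBus (names invented; the API shown is the modern one, not the NHibernate-era one the answer refers to). The first event for a given key starts the saga and opens a collection window; later events just correlate to the same instance; the timeout does the calculation once and completes the saga:

using System;
using System.Threading.Tasks;
using NServiceBus;

// Illustrative messages.
public class DataChanged : IEvent { public string DataKey { get; set; } }
public class RecalculateFor : ICommand { public string DataKey { get; set; } }
public class FlushBatch { }   // timeout state

public class RelatedWorkSagaData : ContainSagaData
{
    public string DataKey { get; set; }
    public bool WindowOpen { get; set; }
}

public class RelatedWorkSaga : Saga<RelatedWorkSagaData>,
    IAmStartedByMessages<DataChanged>,
    IHandleTimeouts<FlushBatch>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<RelatedWorkSagaData> mapper)
    {
        mapper.ConfigureMapping<DataChanged>(e => e.DataKey).ToSaga(s => s.DataKey);
    }

    public async Task Handle(DataChanged message, IMessageHandlerContext context)
    {
        if (!Data.WindowOpen)
        {
            // First event of the burst: open a short collection window.
            Data.WindowOpen = true;
            await RequestTimeout<FlushBatch>(context, TimeSpan.FromSeconds(30));
        }
        // Later events in the burst just fall through; nothing else to do.
    }

    public async Task Timeout(FlushBatch state, IMessageHandlerContext context)
    {
        // One calculation for the whole burst, then the saga goes away.
        await context.Send(new RecalculateFor { DataKey = Data.DataKey });
        MarkAsComplete();
    }
}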