NServiceBus: How to archive completed or terminated sagas

NServiceBus removes saga data, at least in the RavenDB persistence store, when this.MarkAsComplete(); is called from the saga itself.
Is there a built-in way to archive the Saga data when the Saga becomes completed or terminated? We need such a feature for traceability reasons.

You can put an internal flag in your saga data, set it to complete instead of calling MarkAsComplete, and check it in your saga handlers.
(This way you can restart a saga if you want, and your sagas will live forever.)
Does that make sense?
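For example, a minimal sketch of that approach (the message and property names are illustrative, and the mapping syntax assumes an NServiceBus 5-style API):

```csharp
using NServiceBus;
using NServiceBus.Saga;

public class StartOrder : ICommand { public string OrderId { get; set; } }
public class CloseOrder : ICommand { public string OrderId { get; set; } }

public class OrderSagaData : ContainSagaData
{
    public virtual string OrderId { get; set; }
    // The internal flag, set instead of calling MarkAsComplete(),
    // so the saga data survives for traceability.
    public virtual bool Done { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>,
    IAmStartedByMessages<StartOrder>,
    IHandleMessages<CloseOrder>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        mapper.ConfigureMapping<StartOrder>(m => m.OrderId).ToSaga(s => s.OrderId);
        mapper.ConfigureMapping<CloseOrder>(m => m.OrderId).ToSaga(s => s.OrderId);
    }

    public void Handle(StartOrder message)
    {
        if (Data.Done)
        {
            return; // or reset the flag here to 'restart' the saga
        }
        // ... normal processing ...
    }

    public void Handle(CloseOrder message)
    {
        Data.Done = true; // instead of MarkAsComplete()
    }
}
```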

When using the rest of the Particular Service Platform, all actions on a saga get audited automatically, including the state that the saga was in when it completed.
ServiceInsight provides visualization of all of these state changes.

Related

How to make a Saga handler Reentrant

I have a task that can be started by the user, that could take hours to run, and where there's a reasonable chance that the user will start the task multiple times during a run.
I've broken the processing of the task up into smaller batches, but the way the data looks it's very difficult to tell what's still to be processed. I batch it using messages that each process a bite sized chunk of the data.
I have thought of using a Saga to control access to starting this process, with a Saga property called Processing that I set at the start of the handler and then unset at the end of the handler. The handler does some work and sends the messages to process the data. I check the value at the start of the handler, and if it's set, then just return.
I'm using Azure storage for saga storage, if it makes a difference for the next bit. I'm also using NSB 6.
I have a few questions though:
1. Is this the correct approach to re-entrancy with NSB?
2. When is a change to saga data persisted? (And is it different depending on the transport?)
3. Following on from the above, if I set a saga value in a handler, wait a while, and then reset it to its original value, will the persistent storage change at all?
This seems to be cross-posted in the Particular Software Google group:
https://groups.google.com/forum/#!topic/particularsoftware/p-qD5merxZQ
Sagas are very often used for such patterns. The saga instance would track progress and guard that the (sub)tasks aren't invoked multiple times, but it could also take action if the expected task(s) didn't complete in time.
The saga instance data is stored after the message has been processed, not whenever a saga data property is updated, so the logic you described would not work.
The correct way would be having a saga that orchestrates your process and having regular handlers that do the actual work.
In the saga handler method that starts the saga, check whether the saga was already created or already has the 'busy' status; if it does not, send a message to do the work. This guards that the task is only initiated once, and after that the saga is stored.
The handler can now do the actual task; when it completes, it can 'Reply' back to the saga.
When the saga receives the reply, it can start any follow-up task or raise an event, and it can also complete.
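Putting that together, a rough NServiceBus 6-style sketch (all message, saga, and property names here are made up for illustration):

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Illustrative messages; none of these names come from the question.
public class StartProcess : ICommand { public string ProcessId { get; set; } }
public class DoWork : ICommand { public string ProcessId { get; set; } }
public class WorkDone : IMessage { public string ProcessId { get; set; } }

public class ProcessSagaData : ContainSagaData
{
    public string ProcessId { get; set; }
    public bool Busy { get; set; }
}

public class ProcessSaga : Saga<ProcessSagaData>,
    IAmStartedByMessages<StartProcess>,
    IHandleMessages<WorkDone>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<ProcessSagaData> mapper)
    {
        mapper.ConfigureMapping<StartProcess>(m => m.ProcessId).ToSaga(s => s.ProcessId);
        mapper.ConfigureMapping<WorkDone>(m => m.ProcessId).ToSaga(s => s.ProcessId);
    }

    public async Task Handle(StartProcess message, IMessageHandlerContext context)
    {
        if (Data.Busy)
        {
            return; // the task was already initiated; ignore the duplicate start
        }
        Data.Busy = true; // only persisted once this handler completes
        await context.Send(new DoWork { ProcessId = message.ProcessId });
    }

    public Task Handle(WorkDone message, IMessageHandlerContext context)
    {
        // Start any follow-up task, raise an event, and/or complete here.
        MarkAsComplete();
        return Task.CompletedTask;
    }
}

// A regular handler does the actual work and replies back to the saga.
public class DoWorkHandler : IHandleMessages<DoWork>
{
    public async Task Handle(DoWork message, IMessageHandlerContext context)
    {
        // ... the long-running work happens here, ideally idempotently ...
        await context.Reply(new WorkDone { ProcessId = message.ProcessId });
    }
}
```

Because Data.Busy is only persisted when the handler completes, the guard holds once the saga has been stored; the optimistic concurrency notes below cover the window where two copies race.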
Optimistic concurrency control and batched sends
If two messages are received that create/update the same saga instance, only the first writer wins; the other will fail because of optimistic concurrency control.
However, if these messages are processed sequentially rather than in parallel, both will be processed successfully, and the work will be done twice unless the saga checks whether the saga instance is already initialized.
The following sample demonstrates this: https://github.com/ramonsmits/docs.particular.net/tree/azure-storage-saga-optimistic-concurrency-control/samples/azure/storage-persistence/ASP_1
The client sends two identical message bodies. The saga is started, and only one message succeeds, due to optimistic concurrency control.
Due to retries, the second copy will eventually be processed too, but the saga checks its saga data for a field that it knows would normally be initialized by the message that 'starts' the saga. If that field is already initialized, it assumes the message was already processed and just returns:
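That guard boils down to a saga handler fragment like this (a sketch with made-up names; see the linked sample for the real code):

```csharp
public Task Handle(StartOrder message, IMessageHandlerContext context)
{
    // 'Description' is only ever assigned by the message that starts the
    // saga, so a non-null value means a copy of this message was already
    // processed; just return.
    if (Data.Description != null)
    {
        return Task.CompletedTask;
    }
    Data.Description = message.Description;
    // ... continue with the normal start-up work ...
    return Task.CompletedTask;
}
```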
It also demonstrates batched sends: outgoing messages are not dispatched immediately, but only once all handlers/sagas have completed.
Saga design
The following video might help you with designing your sagas and understand the various patterns:
Integration Patterns with NServiceBus: https://www.youtube.com/watch?v=BK8JPp8prXc
Keep in mind that Azure Storage isn't transactional and does not provide locking; it is only atomic. Any work you do within a handler or saga can potentially be invoked more than once, so if you use non-transactional resources, make sure that logic is idempotent.
So, after a lot of testing, I don't believe that this is the right approach.
As Archer says, you can manipulate the saga data properties as much as you like; they are only saved at the end of the handler.
So if the saga receives two simultaneous messages, the check for Processing will pass both times and I'll have two processes running (in my case, processing the same data twice).
The saga within a saga faces a similar problem too.
What I believe will work (and has done during my PoC testing) is using a database unique index to help out. I'm using Entity Framework and Azure SQL, so database access is not contained within the handler's transaction (this is the important difference between the database and the saga data). The database also operates across all instances of the endpoint, and generally this seems like a good solution.
The table that I'm using has each of the columns that make up the saga 'id', and there is a unique index on them.
At the beginning of the handler I retrieve a row from the database. If there is a row, the handler returns (in my case this is okay; in other cases you could throw an exception to get the handler to run again). The first thing the handler does (before any work, although I'm not 100% sure that it matters) is write a row to the table. If the write fails (probably because the unique constraint was violated), the exception puts the message back on the queue; it doesn't really matter why the write fails, as NSB will handle the retry.
Then the handler does the work, and finally it removes the row.
Of course there is a chance that something happens during the processing of the work, so I'm also using a timestamp and another process to reset the row if it's been busy for too long (still need to define 'too long', though :) ).
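For what it's worth, here is a sketch of the pattern described above (the table, context, and message names are mine, and I've collapsed the saga 'id' columns into a single ProcessId for brevity):

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Threading.Tasks;
using NServiceBus;

public class StartTask : ICommand { public string ProcessId { get; set; } }

public class ProcessLock
{
    [Key] // the primary key doubles as the unique index here
    public string ProcessId { get; set; }
    public DateTime StartedAtUtc { get; set; } // lets a watchdog reset stuck rows
}

public class LockContext : DbContext
{
    public DbSet<ProcessLock> ProcessLocks { get; set; }
}

public class StartTaskHandler : IHandleMessages<StartTask>
{
    public async Task Handle(StartTask message, IMessageHandlerContext context)
    {
        using (var db = new LockContext())
        {
            // A row already present means another run is in progress: return.
            if (await db.ProcessLocks.AnyAsync(l => l.ProcessId == message.ProcessId))
            {
                return;
            }
            db.ProcessLocks.Add(new ProcessLock
            {
                ProcessId = message.ProcessId,
                StartedAtUtc = DateTime.UtcNow
            });
            // A concurrent insert violates the unique constraint and throws,
            // which puts the message back on the queue for NSB to retry.
            await db.SaveChangesAsync();
        }

        // ... do the work / send the batch messages here ...

        using (var db = new LockContext())
        {
            var row = await db.ProcessLocks.FindAsync(message.ProcessId);
            if (row != null)
            {
                db.ProcessLocks.Remove(row);
                await db.SaveChangesAsync();
            }
        }
    }
}
```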
Maybe this can help someone with a similar problem.

NServiceBus saga performance

I'm having trouble with NServiceBus saga performance. We have one single saga that orchestrates a long-running session. The saga sends a lot of messages to different processors and then gets their replies.
I see that the saga's queue contains tons of incoming messages. Each message is processed very quickly, but there is a delay before the next message is handled. Here is a part of the log file:
16:26:42 [14][DEBUG] Finished handling message.
16:26:46 [15][DEBUG] ChildContainerBehavior
16:26:46 [15][DEBUG] MessageHandlingLoggingBehavior
16:26:46 [15][DEBUG] Received message with ID 28b285ce-3b77-4a69-a13a-a3bf009717fd from sender xxxHost#PROCESSOR01
We see a four-second delay, which is very slow. What is wrong with my saga?
Thanks!
Since you have a monolithic saga, you will have some contention on the state record that backs the saga in storage. You will want to consider breaking up your endpoint or redesigning how you gather the information. Check out this Routing Slip implementation.

MassTransit Saga saves late with NHibernateSagaRepository (Starbucks example)

I have converted the Starbucks example to use RabbitMQ and NHibernate. However, there is a bug/challenge/issue with the DrinkPreparationSaga and when it actually gets saved to the database versus when the PaymentCompleteMessage gets submitted.
How the code works (out of the box; this isn't anything I changed): the new instance of the saga isn't saved to the database until AFTER the Initial state completes and it transitions to its next state.
The problem is that in the sample Starbucks application the DrinkPreparationSaga starts off with a very slow method that prints out coffee-making sounds once a second, ten times.
So there are ten seconds between when the saga is actually created and when it is saved to the database. The bigger problem is that any other messages destined for that instance of the saga (by CorrelationId) are thrown into the error queue, because the saga doesn't exist yet.
Shouldn't the NHibernateSagaRepository immediately save the new saga instance, then run the workflow, then update the saga after the workflow? I can't seem to think of another way to make the example work, but that would require a decent bit of reorganizing in the NHibernateSagaRepository.
Thanks in advance.
The reason sagas are not saved before processing the message is that some members of the saga may not be nullable (or allow nulls) and they are not set before the initial saga message is processed.
And you're correct. Take a look at the Riktig sample (http://github.com/phatboyg/Riktig) to see how the Automatonymous sagas are used and how the correlation of another service (in this case, the image retrieval service) is used. Sagas should not actually perform work, but coordinate the state of a transaction. The earlier Starbucks example was a naive implementation that we built one morning in Austin. It is long overdue for an update (for more reasons, including that it still uses Magnum state machines, which are soon deprecated in favor of Automatonymous).

Are NServiceBus saga to saga communications an anti-pattern?

We have two long-running sagas that both run indefinitely and respond to timeouts. The first requests a timeout every 15 minutes, and the second every 24 hours. Each saga keeps track of its own execution time and notifies the other saga when it starts running and when it has completed. The bulk data loads these sagas are responsible for cannot run at the same time due to database contention.
When the first saga (Saga A - 15 min) kicks off, it first checks (using an internal variable) to see if the second saga (Saga B - 24 hr) is currently running. If not, it begins its processing steps (shelling off to another process, and polling it over time to see when it's completed). These two sagas communicate through sending messages to notify each other when they're starting up or completed.
For some reason this seems smelly to me on two levels:
1. We've essentially got a singleton saga that never completes. Is this an anti-pattern in its own right?
2. We're sending messages bidirectionally with the sole intention of modifying state. It seems as though there should be a better way to handle this type of scenario. (With the release of NSB 4.0 we started getting errors when sending commands; the errors cleared up when we used a pub-sub approach instead.)
Is this considered an NServiceBus implementation anti-pattern, and is there a better pattern for this sort of requirement?
Generally speaking, I don't think sagas communicating with each other is an anti-pattern. In your specific case, however, it does sound smelly.
From what you've said about the behaviors, it appears as if this could be a single saga. A saga can request multiple timeouts of different types, so you could effectively merge these sagas and get rid of all the messages that exist just for the purpose of modifying state in the sibling, because the state would be shared.
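As a rough NServiceBus 5-style sketch of the merged saga (all names invented; the mapping is left empty because the timeouts are correlated by saga id):

```csharp
using System;
using NServiceBus;
using NServiceBus.Saga;

public class StartLoads : ICommand { }
public class FifteenMinuteTick { }
public class DailyTick { }

public class LoadSagaData : ContainSagaData
{
    // Shared state replaces the notification messages between the two sagas.
    public bool DailyJobRunning { get; set; }
}

public class LoadSaga : Saga<LoadSagaData>,
    IAmStartedByMessages<StartLoads>,
    IHandleTimeouts<FifteenMinuteTick>,
    IHandleTimeouts<DailyTick>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<LoadSagaData> mapper)
    {
        // Singleton-style saga: timeouts find the instance by saga id,
        // so no message mappings are needed here.
    }

    public void Handle(StartLoads message)
    {
        RequestTimeout<FifteenMinuteTick>(TimeSpan.FromMinutes(15));
        RequestTimeout<DailyTick>(TimeSpan.FromHours(24));
    }

    public void Timeout(FifteenMinuteTick state)
    {
        if (!Data.DailyJobRunning)
        {
            // ... kick off the 15-minute load here ...
        }
        RequestTimeout<FifteenMinuteTick>(TimeSpan.FromMinutes(15));
    }

    public void Timeout(DailyTick state)
    {
        Data.DailyJobRunning = true;
        // ... kick off the 24-hour load; clear the flag when it completes ...
        RequestTimeout<DailyTick>(TimeSpan.FromHours(24));
    }
}
```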
In a general sense, however, it's perfectly fine for sagas to communicate. Doing so through commands should be treated carefully, as that creates direct coupling between the two, although this is still possible. An example would be a parent and child saga pair, where the parent workflow commands a child workflow to begin, but the child workflow is independent until it replies to its parent that it's done. We just realize that these are tightly coupled processes within the same service boundary. We might do this just to keep each saga more focused, or because the parent starts multiple child sagas with different data.
An even better example is saga communication through events: one saga publishes an event, and another saga responds with its own long-running process. This is all decoupled and good. However, if the second saga publishes an event that the first one responds to, then even though you're using events you've created a loop, so it's not that dissimilar from commands at that point, although it is still decoupled from any other external subscribers.

How to write handler for Error queues in NServiceBus Saga?

I have a situation where the MaxRetries setting for my MSMQ transport is 5. After 5 attempts, NServiceBus sends the message to the error queue that I have defined. Now I want to perform some further action when this happens (I have to update the status of some processes to Error).
Is it possible to write a handler in my Saga class to read these error queues?
Thanks in Advance
Haris
If you are using 2.x, you may want to consider writing a separate endpoint that uses the error queue as its input queue. The downside to this is that the messages will come off the queue; assuming you still want to store them, you'll have to push them off to a database or some other type of storage.
You could also write a Saga that polls the error queue to check for messages and updates the appropriate status. After each time you check the queue, you would need to request another Timeout.
In 3.0, you have more control over the exceptions, and can implement your own way to handle the errors. If you implement the interface IManageMessageFailures, you can do your work there.
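A skeleton of that might look like the following (a sketch; the namespaces shown are those of NServiceBus 4, so check them against your version):

```csharp
using System;
using NServiceBus;
using NServiceBus.Faults;

public class CustomFaultManager : IManageMessageFailures
{
    public void SerializationFailedForMessage(TransportMessage message, Exception e)
    {
        // The message could not even be deserialized.
    }

    public void ProcessingAlwaysFailsForMessage(TransportMessage message, Exception e)
    {
        // All retries are exhausted: update your process status to Error here,
        // e.g. using an id taken from message.Headers.
    }

    public void Init(Address address)
    {
        // 'address' is the input queue of the endpoint this manager serves.
    }
}
```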
As an alternative to the solutions provided by Adam, you can subscribe to the events raised by ServiceControl when a message is sent to the error queue. See the official documentation about this here: http://docs.particular.net/servicecontrol/contracts
Another approach would be the notification API, as described here: http://docs.particular.net/nservicebus/errors/subscribing-to-error-notifications. It allows you to subscribe to certain in-process events (not event messages), like MessageSentToErrorQueue, directly on the endpoint, so you wouldn't need to consume the error queue.
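If you go the notification-API route, the wiring might look roughly like this (an NServiceBus 5 sketch from memory; verify the payload type and the Rx dependency against the linked docs page):

```csharp
using System;
using NServiceBus;

// Assumes NServiceBus 5: BusNotifications is resolved from the container,
// and Subscribe(Action<T>) needs the Rx extension method the docs reference.
class ErrorNotificationSubscriber : IWantToRunWhenBusStartsAndStops
{
    BusNotifications notifications;
    IDisposable subscription;

    public ErrorNotificationSubscriber(BusNotifications notifications)
    {
        this.notifications = notifications;
    }

    public void Start()
    {
        subscription = notifications.Errors.MessageSentToErrorQueue
            .Subscribe(failedMessage =>
            {
                // Update the status of the affected process to Error here.
            });
    }

    public void Stop()
    {
        if (subscription != null)
        {
            subscription.Dispose();
        }
    }
}
```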