NServiceBus: Custom behavior after every handler

We want to log every occurrence of a handler running to completion and we're wondering what's the cleanest way to do it.
More specifically, when a handler completes, we want to write some basic information, such as the type of the message that was processed, to a database.
One way to do it is by creating and sending a new message (publishing an event) at the end of each handler.
But we're wondering if there is another way to do this without "polluting" the message handlers with those extra lines of code :) For example, after a handler runs to completion, another method defined elsewhere could pick up execution and handle the logic of writing to the database.
Hope I made myself clear enough. Thanks

You could use the auditing pipeline and forward the audit messages to your audit queue and handle a copy of all messages there...
Here is some more info: https://docs.particular.net/nservicebus/operations/auditing?version=core_7.2
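For reference, a minimal sketch of that approach, assuming NServiceBus 7 and an audit queue named "audit" (both the queue name and the consuming endpoint are your choice):

// On every endpoint whose handlers you want to observe:
var endpointConfiguration = new EndpointConfiguration("Sales");
endpointConfiguration.AuditProcessedMessagesTo("audit");

Each successfully processed message is then copied to that queue along with headers such as NServiceBus.EnclosedMessageTypes and the processing timestamps, so a separate endpoint (or ServiceControl) can drain the queue and write the rows to your database without touching the handlers themselves.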
Does that make sense?

Related

How to prevent NServiceBus from not sending messages on errors

I'm new to NServiceBus, so maybe I'm asking something pretty silly here, but is there a way to make NServiceBus still send the messages that were sent while handling a message whose handler fails?
Let me explain with a simple example.
Suppose I have an OrderPaidEvent that has a handler that does the following:
1. Look for the customer
2. Start a DB transaction
3. Update the customer to a good customer
4. Send a CustomerUpgradedToGoodCustomerEvent message
5. Commit the DB transaction
Fairly straightforward, all is well in the world. Now a few months later someone else figures that an email would be nice when an order is paid and thus adds another handler to the OrderPaidEvent to send an email.
Unfortunately, now whenever the mail server has an issue, this second handler will fail with an error, which in turn prevents the original CustomerUpgradedToGoodCustomerEvent message from being sent (step 4). But because the DB transaction was already committed (step 5), the customer has already been upgraded to a good customer in the database.
This means that even if the OrderPaidEvent handler is retried, the customer no longer changes and thus the CustomerUpgradedToGoodCustomerEvent message is never sent. Worse yet, this all stems from a change that has nothing to do with the original message handler, and will thus be difficult to detect.
This seems like a massive flaw and since I'm new to this I'm certain there's something I'm doing wrong, but I can't seem to figure out what it is.
Any help from you fine people would be great.
Thanks in advance.
How about breaking down your procedural code into separate handlers?
Each logical operation will then either be done or not be done, based on the successful completion of each granular task.
If you add a Saga to the mix then you can make business decisions based on the completed steps in your Saga.
Also maybe read more about transactions and NServiceBus here
First of all I would send out the CustomerUpgradedToGoodCustomerEvent after the commit. At that point you are sure that the event actually took place.
And in response to your question: you could handle the email in some 'SendEmail' command that is sent after the DB commit and before the event is published. If that command's handler fails, it will not hurt the handling of the OrderPaid event. When the mail server is up again, the command can be retried and handled normally.
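A rough sketch of that shape, assuming NServiceBus 6+ async handlers (all type names here are hypothetical; the message classes and the DB code are assumed to exist elsewhere):

public class OrderPaidHandler : IHandleMessages<OrderPaidEvent>
{
    public async Task Handle(OrderPaidEvent message, IMessageHandlerContext context)
    {
        // Update the customer and commit the DB transaction first (not shown),
        // then delegate the email and announce what happened.
        await context.Send(new SendEmail { CustomerId = message.CustomerId });
        await context.Publish(new CustomerUpgradedToGoodCustomerEvent { CustomerId = message.CustomerId });
    }
}

// Hosted in its own endpoint, so a mail-server outage only retries this message.
public class SendEmailHandler : IHandleMessages<SendEmail>
{
    private readonly IMailSender mailSender; // hypothetical mail gateway

    public SendEmailHandler(IMailSender mailSender) => this.mailSender = mailSender;

    public Task Handle(SendEmail message, IMessageHandlerContext context)
    {
        return mailSender.SendOrderPaidMail(message.CustomerId);
    }
}

The key point is that the mail concern now fails and retries independently of the handler that upgrades the customer.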

AXON framework synchronous response

I am new to the Axon framework and am using it for our development. We have a requirement where a command (command side) is created for persisting data, and for it an event is triggered which is consumed on the query side. Now we need a response back from the query side to the command side saying whether the record was persisted into the database successfully (custom success message) or, if it failed, the reason for the failure (custom exception message as the response). Kindly help if there is any way to achieve such a scenario.
Here the command side and query side are two different microservices and we are using RabbitMQ for the event-driven communication.
Thanks in advance
I think what you are asking is whether there is a way for the command and the event to be processed in a single transaction?
If you use a subscribing event processor, running in the same JVM, the event is processed synchronously and the whole transaction is rolled back in case of an exception in an event handler. This is not the case here, because you have loosely coupled separate services, which is good.
It's best practice for the aggregate with the command handler to have all the information available to decide whether or not the command can successfully be processed. When an event is applied, that is a signal that it has happened, and the other services (the query side in this case) have to be informed. It's not good practice for a query module to overrule this ("you say it happened, I say it didn't"). If there is an error on the query side, you fix it and replay the event.
If it really is an error in the event handler that the whole system must know about, that is really a separate event. You can apply such an event directly on the event bus and notify the whole system. Something like this:
@Autowired
private EventBus eventBus;
(...)
// publish directly on the event bus, bypassing any aggregate:
CatastrophicFailureEvent failureEvent = new CatastrophicFailureEvent("OH NO!");
eventBus.publish(GenericEventMessage.asEventMessage(failureEvent));
I think you might need to reconsider your architecture. Keep in mind that events should encapsulate the irreversible state changes of your system. These state changes should not be questioned after they have happened. Your query side should only need to care about projecting these valid state changes that your command side has decided on.
If you need to check whether a user already existed, you need to do this on the command side in your aggregate. The aggregate can keep a list of all the existing usernames and throw an exception if an invalid command is given. The command response (tip: the sendAndWait() method on the CommandGateway returns a response) can then be used to inform your user about the success or failure of the action.
The following flow might solve your problem, but keep in mind that the user will get a callback on the success of the action even though the query side might not have processed its result yet. This part is eventually consistent.
Command Side:
1. A request from the frontend is handled by a Controller class, which creates a corresponding command.
2. The command is dispatched and handled by a command handler, which applies the corresponding event or throws an exception if the user already exists.
3. The invoker of the command is informed of its success, or the exception is handled and the error is shown to the user.
4. If the command succeeded, the event is published through the RabbitMQ event bus.
Query side:
5. The event published in step 4 is consumed by the event handler on the query side. No checks or validations should be necessary, since they were already handled on the command side.
@Mzzl
Series of activities
Command Side:
1. A request from the frontend is handled by a Controller class, which creates a corresponding command
2. The above command is invoked and handled by a command handler, which in turn creates the corresponding event
3. The above event is then published through the RabbitMQ event bus
Query Side:
4. The event that is published in step 3 is consumed by the event handler on the query side.
5. The event handler has the logic to perform the DB transaction (let's assume adding a user). Once a user is added, a success message or a failure message (let's assume the user is already in the DB, so a duplicate entry could not be created) should flow from the query side to the command side and eventually back to the UI as a response.
I'm not sure I've fully understood your issue (especially the microservice part :)),
but if your problem is related to having the query side up to date after the command execution, then you can have a look at this project.
In this example, you can see that he uses a SubscriptionQueryResult in conjunction with a QueryUpdateEmitter (see here)
Basically you will subscribe to query-side changes before the command is issued, and you will block after the command execution until the query side sends a notification that it is up to date.
This way you can avoid exposing the eventual consistency to the caller.

How to make a Saga handler Reentrant

I have a task that can be started by the user, that could take hours to run, and where there's a reasonable chance that the user will start the task multiple times during a run.
I've broken the processing of the task up into smaller batches, but the way the data looks it's very difficult to tell what's still to be processed. I batch it using messages that each process a bite sized chunk of the data.
I have thought of using a Saga to control access to starting this process, with a Saga property called Processing that I set at the start of the handler and then unset at the end of the handler. The handler does some work and sends the messages to process the data. I check the value at the start of the handler, and if it's set, then just return.
I'm using Azure storage for saga persistence, if it makes a difference for the next bit. I'm also using NSB 6.
I have a few questions though:
1. Is this the correct approach to re-entrancy with NSB?
2. When is a change to saga data persisted? (And is it different depending on the transport?)
3. Following on from the above: if I set a saga value in a handler, wait a while and then reset it to its original value, will it change the persistent storage at all?
This seems to be cross-posted in the Particular Software Google group:
https://groups.google.com/forum/#!topic/particularsoftware/p-qD5merxZQ
Sagas are very often used for such patterns. The saga instance would track progress and guard against the (sub)tasks being invoked multiple times, but could also take action if the expected task(s) didn't complete or ran over time.
The saga instance data is stored after processing the message and not when updating any of the saga data properties. The logic you described would not work.
The correct way would be having a saga that orchestrates your process and having regular handlers that do the actual work.
In the saga handler method that starts the saga, check whether the saga was already created or already has the 'busy' status; if it does not have this status, send a message to do the work. This guards against the task being initiated more than once, and after that the saga is stored.
The handler can now do the actual task; when it completes, it can do a 'Reply' back to the saga.
When the saga receives the reply, it can start any follow-up task or raise an event, and it can also 'complete'.
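A minimal sketch of that orchestration, assuming NServiceBus 6 saga syntax (all message, property and class names here are hypothetical):

public class TaskSagaData : ContainSagaData
{
    public Guid TaskId { get; set; }
    public bool Started { get; set; }
}

public class TaskSaga : Saga<TaskSagaData>,
    IAmStartedByMessages<StartTask>,
    IHandleMessages<WorkCompleted>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<TaskSagaData> mapper)
    {
        mapper.ConfigureMapping<StartTask>(m => m.TaskId).ToSaga(s => s.TaskId);
        mapper.ConfigureMapping<WorkCompleted>(m => m.TaskId).ToSaga(s => s.TaskId);
    }

    public async Task Handle(StartTask message, IMessageHandlerContext context)
    {
        if (Data.Started)
            return; // a run is already in flight: ignore the duplicate start

        Data.Started = true; // persisted (with OCC) when this handler completes
        await context.Send(new DoWork { TaskId = message.TaskId });
    }

    public Task Handle(WorkCompleted message, IMessageHandlerContext context)
    {
        // start follow-up tasks or raise an event here, then:
        MarkAsComplete();
        return Task.CompletedTask;
    }
}

The actual work lives in an ordinary DoWork handler that replies with WorkCompleted when it is done.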
Optimistic concurrency control and batched sends
If two messages are received that create/update the same saga instance, only the first writer wins; the other will fail because of optimistic concurrency control.
However, if these messages are processed sequentially rather than in parallel, both will be processed unless the saga checks whether the saga instance was already initialized.
The following sample demonstrates this: https://github.com/ramonsmits/docs.particular.net/tree/azure-storage-saga-optimistic-concurrency-control/samples/azure/storage-persistence/ASP_1
The client sends two identical message bodies. The saga is launched and only 1 message succeeds due to optimistic concurrency control.
Due to retries the second copy will eventually be processed too, but the saga checks the saga data for a field that it knows would normally be initialized by a message that 'starts' the saga. If that field is already initialized, it assumes the message was already processed and just returns:
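(The sample's actual code isn't reproduced here; a minimal sketch of such a guard, where OrderId stands in for whichever field the 'starting' message initializes:)

public Task Handle(StartOrder message, IMessageHandlerContext context)
{
    if (Data.OrderId != Guid.Empty)
        return Task.CompletedTask; // already initialized: this is a duplicate, ignore it

    Data.OrderId = message.OrderId;
    // ... kick off the actual work here ...
    return Task.CompletedTask;
}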
It also demonstrates batched sends. Outgoing messages are not actually sent until all handlers/sagas have completed.
Saga design
The following video might help you with designing your sagas and understand the various patterns:
Integration Patterns with NServiceBus: https://www.youtube.com/watch?v=BK8JPp8prXc
Keep in mind that Azure Storage isn't transactional and does not provide locking; it is only atomic. Any work you do within a handler or saga can potentially be invoked more than once, and if you use non-transactional resources then make sure that logic is idempotent.
So, after a lot of testing, I don't believe that this is the right approach.
As Archer says, you can manipulate the saga data properties as much as you like; they are only saved at the end of the handler.
So if the saga receives two simultaneous messages, the check for Processing will pass both times and I'll have two processes running (and in my case processing the same data twice).
The saga within a saga faces a similar problem too.
What I believe will work (and has done during my PoC testing) is using a database unique index to help out. I'm using Entity Framework and Azure SQL, so database access is not contained within the handler's transaction (this is the important difference between the database and the saga data). The database will also operate across all instances of the endpoint, and generally this seems like a good solution.
The table that I'm using has each of the columns that make up the saga 'id', and there is a unique index on them.
At the beginning of the handler I retrieve a row from the database. If there is a row, the handler returns (in my case this is okay; in others you could throw an exception to get the handler to run again). The first thing the handler does (before any work, although I'm not 100% sure that it matters) is write a row to the table. If the write fails (probably because the unique constraint was violated), the exception puts the message back on the queue. It doesn't really matter why the database write fails, as NSB will handle it.
Then the handler does the work.
Then it removes the row.
Of course there is a chance that something happens during processing of the work, so I'm also using a timestamp and another process to reset the row if it has been busy for too long. (Still need to define 'too long', though :) )
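A rough sketch of that guard (table, column and type names are hypothetical; the original uses Entity Framework, but plain ADO.NET shows the idea more compactly):

public async Task Handle(StartProcessing message, IMessageHandlerContext context)
{
    // Deliberately a separate connection, NOT enlisted in the handler's transaction.
    using (var con = new SqlConnection(connectionString))
    {
        await con.OpenAsync();

        // 1. Is a run already in progress? Then just return.
        using (var check = new SqlCommand(
            "SELECT COUNT(*) FROM ProcessingLock WHERE TaskId = @id", con))
        {
            check.Parameters.AddWithValue("@id", message.TaskId);
            if ((int)await check.ExecuteScalarAsync() > 0)
                return;
        }

        // 2. Claim the lock. A concurrent duplicate insert violates the unique
        //    index and throws, which puts this message back on the queue.
        using (var claim = new SqlCommand(
            "INSERT INTO ProcessingLock (TaskId, StartedUtc) VALUES (@id, SYSUTCDATETIME())", con))
        {
            claim.Parameters.AddWithValue("@id", message.TaskId);
            await claim.ExecuteNonQueryAsync();
        }
    }

    // 3. Do the actual work, then delete the row again (not shown).
}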
Maybe this can help someone with a similar problem.

NServiceBus: possible to publish an event when a message gets moved to the error queue?

I have a saga that does a bulk import by creating a bunch of commands (it keeps track of the number of commands sent) and then listens for an event indicating the task succeeded. I would also like to be notified when a command fails (moves into the error queue).
I want to take advantage of NServiceBus's retry functionality, so I don't want to simply wrap it in a try/catch; I really only want to publish this event when the message is being moved to the error queue.
Is it possible to create another end point that handles the generated commands but listens to the error queue? Or is there another better way to accomplish this?
You can take control over how the exceptions are handled using a custom fault handler.
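In newer versions the same hook is exposed as a recoverability notification. A sketch assuming NServiceBus 7 (the logger and what you do inside the callback are up to you; note the callback gives you no message session, so publishing from it needs your own sender):

var recoverability = endpointConfiguration.Recoverability();
recoverability.Failed(failed => failed.OnMessageSentToErrorQueue(
    message =>
    {
        // message.Headers, message.Body and message.Exception describe the failure.
        log.Error($"Sent to error queue: {message.Headers["NServiceBus.EnclosedMessageTypes"]}",
            message.Exception);
        return Task.CompletedTask;
    }));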

Chaining events/commands?

I have a feature I'm attempting to implement using NServiceBus but not sure the pattern to use here. (I'm fairly new to NServiceBus)
I'll try to explain where my uncertainty comes from:
User interaction triggers MVC controller to send a command to perform a domain operation. This command raises an event to notify others that this occurred.
A handler that subscribes to this event determines whether or not another domain operation should occur.
This is where I'm unclear as to the proper pattern to follow. At this point should the event handler:
just make the changes required?
send a new command to do it? If so, send it back to the originating service/process?
another option?
Part of me is wondering if I should be using an in-proc domain event to handle this, but I don't think the first command should have to wait on the second one before it returns. In fact, it could happen much later. That is why I went the route of using the bus to handle it asynchronously. Also, an email will need to be generated once the second operation finishes. Should that be triggered from yet another event/command?
Any and all guidance appreciated.
If there is no need to wait for the second action then yes, it should be done asynchronously so the processing of the first command should publish an NServiceBus event. The handler for that event would (likely) be hosted in a separate endpoint which would then just do the work - no need to send another command there.
To add to Udi's answer, I would only turn around and send a command back to the originating service if the service at the originating endpoint is really the one that should be responsible for the behavior of that command. Otherwise, the service (endpoint) receiving the event should just do what it needs to do in response to the event (which sounds like your case).
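To make that concrete, a hedged sketch in NServiceBus terms (all type and endpoint names are hypothetical): the command handler publishes the event, and a handler hosted in the second endpoint simply does the follow-on work, raising the email command when it finishes:

// Endpoint A: handles the command sent by the MVC controller.
public class DoOperationHandler : IHandleMessages<DoOperation>
{
    public async Task Handle(DoOperation message, IMessageHandlerContext context)
    {
        // ... perform the first domain operation ...
        await context.Publish(new OperationOccurred { EntityId = message.EntityId });
    }
}

// Endpoint B: subscribes to the event and just does the work;
// no command needs to be sent back to endpoint A.
public class OperationOccurredHandler : IHandleMessages<OperationOccurred>
{
    public async Task Handle(OperationOccurred message, IMessageHandlerContext context)
    {
        // ... decide whether the second operation applies, then perform it ...
        await context.Send(new SendNotificationEmail { EntityId = message.EntityId });
    }
}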