Add data to database and queue without transactions using NServiceBus - nservicebus

I'm currently developing a REST API that performs basic CRUD operations. Data is synced to a legacy system using RabbitMQ, and the API runs against SQL Server as its database.
I'm wondering how to make sure that data is saved in the DB and that a message is put on the bus.
The lack of distributed transactions looks like a very general issue to me, so I'm wondering whether there are any best practices for solving it with NServiceBus?

RabbitMQ doesn't support distributed transactions on its own, so there isn't much NServiceBus can do in this scenario. One option, though, is:
The endpoint is configured to use the Outbox feature.
When the HTTP request is received by the REST endpoint, a message is sent locally to self. No DB operations are performed at this stage.
When the sent-to-self message is received, you're now in the context of an incoming message and you can:
execute CRUD operations
send outgoing messages
The Outbox will guarantee consistency even though there are no distributed transactions.
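A minimal sketch of that flow, assuming the NServiceBus 6+ API with the RabbitMQ transport and SQL persistence packages; the CreateOrder/OrderCreated types, endpoint name, and connection string are illustrative, not from the original question:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

public class CreateOrder : ICommand { public Guid OrderId { get; set; } }
public class OrderCreated : IEvent { public Guid OrderId { get; set; } }

public static class Program
{
    public static async Task Main()
    {
        var cfg = new EndpointConfiguration("RestApi.Endpoint");
        cfg.UseTransport<RabbitMQTransport>().ConnectionString("host=localhost");
        var persistence = cfg.UsePersistence<SqlPersistence>();
        persistence.SqlDialect<SqlDialect.MsSqlServer>();
        cfg.EnableOutbox(); // outgoing messages are stored with the business data

        var endpoint = await Endpoint.Start(cfg);

        // Step 1: from the HTTP request, send to self. No DB work happens here.
        await endpoint.SendLocal(new CreateOrder { OrderId = Guid.NewGuid() });
    }
}

// Step 2: the handler runs in the context of an incoming message, so the
// outbox makes the CRUD work and the outgoing messages atomic together.
public class CreateOrderHandler : IHandleMessages<CreateOrder>
{
    public async Task Handle(CreateOrder message, IMessageHandlerContext context)
    {
        // ... execute CRUD operations against SQL Server here ...
        await context.Publish(new OrderCreated { OrderId = message.OrderId });
    }
}
```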

Related

NServiceBus persistence data - what is it?

As I was reading through the documentation on NServiceBus, I wasn't able to find what is persisted under the Persistence section.
If NServiceBus is a loosely coupled distributed library sending self-contained messages, what is there to persist? I don't understand.
With a web app, when a user has a Session, we may choose to persist the Session in SQL Server, in memory, or somewhere else, but with NServiceBus there is no session to persist.
So, what actually is the Persistence in NServiceBus?
What sort of data could be persisted, and for what reason?
Transports like RabbitMQ and Azure Service Bus natively support publish/subscribe. If an endpoint wants to receive published events, a 'subscription' to those events is stored inside those queuing technologies. Other queuing technologies, like MSMQ and Azure Storage Queues, don't support publish/subscribe natively. NServiceBus mimics the behavior but needs to store those subscriptions somewhere else.
Other things we can store are timeouts, deferred messages, and saga state. A saga is a kind of state machine (a workflow), and this state needs to be stored somewhere. Another feature NServiceBus supports is the outbox, which removes the need for distributed transactions by putting the message transaction and the business transaction in the same database.
If you only use certain features, some transports support them natively, which removes the need for a persister. Sagas and the Outbox always need persistence.
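As an illustration, a minimal sketch (assuming the NServiceBus SQL persistence package; the endpoint name is illustrative) of wiring up a persister that then backs subscriptions, saga state, timeouts, and the outbox:

```csharp
using NServiceBus;

var cfg = new EndpointConfiguration("Sample.Endpoint");

// One persister can back several features: subscription storage (for
// transports without native pub/sub), saga state, timeouts, and the outbox.
var persistence = cfg.UsePersistence<SqlPersistence>();
persistence.SqlDialect<SqlDialect.MsSqlServer>();

cfg.EnableOutbox(); // sagas and the outbox always require a persister
```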

NServiceBus Pub/Subscribe using SQLServer transport - can the subscriber scale out?

Using the latest version of NServiceBus, 4.4 I believe.
We are looking to implement NServiceBus, and for this solution we are using SQL Server as a transport. We want to publish/subscribe, which is fine, but how would it work when scaling out the subscribers?
I have done a PoC where I ran the receiving endpoint of a SQL Server transport multiple times, and when a message came in, the first instance of the running receiver got the message and processed it, resulting in the other process NOT processing it, which is correct.
In a pub/subscribe architecture using SQL Server, would this same method of running multiple instances of the subscriber work? Since we are using a common queue (SQL Server), will it just sort itself out and not process the message multiple times?
When using SQL Server persistence, the subscribers for your events and messages are held in the Subscription table within the NServiceBus database, so you can check which endpoints are subscribing to which messages or events by viewing the contents of that table.
It's worth noting that you can only publish "message" classes with NServiceBus that implement the IEvent interface (unless you make use of unobtrusive mode).
When you publish a message or event using bus.Publish, all subscribers to that type will receive it, as long as the individual endpoint names are different.
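As a rough sketch in the NServiceBus 4.x style the question mentions (the OrderPlaced event and its handler are illustrative, not from the original post):

```csharp
using System;
using NServiceBus;

// Events must implement IEvent (unless unobtrusive mode is used).
public class OrderPlaced : IEvent
{
    public Guid OrderId { get; set; }
}

// Publisher side, somewhere with an IBus instance:
//   bus.Publish(new OrderPlaced { OrderId = Guid.NewGuid() });

// Subscriber side: run this endpoint multiple times against the same queue
// table and the instances compete for messages, so each event delivered to
// this endpoint is processed by exactly one instance.
public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public void Handle(OrderPlaced message)
    {
        Console.WriteLine("Handled order {0}", message.OrderId);
    }
}
```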

CommonDomain / EventStore with Raven persistence for a multi-tenant app

How should I set up EventStore's RavenPersistence in a multi-tenant application?
I have an Azure worker role that processes commands received through service bus.
Each message may belong to a different tenant. The actual tenant is sent in the message header, which means that I know which database to use only after I receive each message.
I'm using CommonDomain so my command handlers have IRepository injected.
Right now I build a new store while processing each message (I set DefaultDatabase), but I have a feeling this may not be optimal.
Is there a way to create a single event store and then just switch databases?
If not, can I cache the stores for each tenant?
Do you know about any multi-tenant sample that uses EventStore with RavenDB?
We do exactly the same: spawn a new instance of EventStore for every request. JOliver EventStore was designed without multi-tenancy support in mind, so this is the only way...
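If spawning a store per message ever proves too costly, caching one store per tenant is a possible middle ground. A rough sketch, assuming the JOliver EventStore 3.x wire-up the question describes (the "Raven" connection-string name is hypothetical; DefaultDatabase is the setting mentioned in the question):

```csharp
using System.Collections.Concurrent;
using EventStore;

public class TenantStoreCache
{
    private readonly ConcurrentDictionary<string, IStoreEvents> stores =
        new ConcurrentDictionary<string, IStoreEvents>();

    // The tenant (and thus database) is read from the message header.
    public IStoreEvents GetStore(string tenantDatabase)
    {
        return stores.GetOrAdd(tenantDatabase, db =>
            Wireup.Init()
                .UsingRavenPersistence("Raven") // connection string name (hypothetical)
                .DefaultDatabase(db)            // per-tenant database, as in the question
                .Build());
    }
}
```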

NServiceBus Sagas and REST API Integration best-practices

What is the most sensible approach to integrating NServiceBus Sagas with REST APIs?
The scenario is as follows,
We have a load balanced REST API. Depending on the load we can add more nodes.
REST API is a wrapper around a DomainServices API. This means the API can be consumed directly.
We would like to use Sagas for workflow and implement the NServiceBus Distributor to scale out.
The question is: if we use the REST API from Sagas, the actual processing happens in the API farm, which in a way defeats the purpose of implementing the distributor pattern.
On the other hand, using the DomainServices API directly from Sagas allows processing locally within worker nodes. With this approach we will have to maintain API assemblies in multiple locations, but the throughput could be higher.
I am trying to understand the best approach. Personally, I’d prefer to consume the API (if readily available), but this could introduce chattiness to the system and could take longer to complete compared to in-process calls.
A typical sequence could be similar to publishing an online advertisement:
Advertiser submits a new advertisement request via a web application.
Web application invokes the relevant API endpoint and sends a command message.
The command message initiates a new publish-advertisement Saga instance.
Saga sends a command to validate caller permissions (in-process/out-of-process API call).
Saga sends a command to validate the advertisement data (in-process/out-of-process API call).
Saga sends a command to the fraud service (third-party service).
Once the content and fraud verifications are successful, Saga sends a command to the billing system.
Saga invokes an API call to save ad details (in-process/out-of-process API call).
And this goes on until the advertisement expires; there are a number of retry and failure-condition paths.
After a number of design iterations we came up with the following guidelines:
Treat the REST API layer as the integration platform.
Assume API endpoints are capable of abstracting fairly complex micro work-flows. Micro work-flows are operations that execute in a single burst (not interruptible) and complete within a short time span (<1 second).
Assume the API farm is capable of serving many concurrent requests and can be easily scaled out.
Favor synchronous invocations over asynchronous message-based invocations when the target operation is fairly straightforward.
When asynchronous processing is required, use a single message handler and invoke the API from the handler (see the sketch below). This delegates work to the API farm and also eliminates the need for a distributor and extra hardware resources.
Avoid Sagas unless the business work-flow contains multiple transactions, compensation logic, and resumes. Tests reveal Sagas do not perform well under load.
Avoid consuming DomainServices directly from a message handler. This will do the work locally and also introduces a deployment hassle by distributing business logic.
Happy to hear your thoughts.
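A sketch of the "single message handler that invokes the API" guideline, in the synchronous NServiceBus 4.x handler style (the command type and API route are illustrative, not from the original post):

```csharp
using System;
using System.Net.Http;
using NServiceBus;

// Hypothetical command representing one micro work-flow.
public class ValidateAdvertisement : ICommand
{
    public Guid AdvertisementId { get; set; }
}

// The handler delegates the work to the load-balanced REST API farm instead
// of hosting domain logic in the worker node itself.
public class ValidateAdvertisementHandler : IHandleMessages<ValidateAdvertisement>
{
    static readonly HttpClient client = new HttpClient();

    public void Handle(ValidateAdvertisement message)
    {
        var url = "https://api.example.com/advertisements/"
                  + message.AdvertisementId + "/validate";

        // v4-era handlers are synchronous, so we block on the HTTP call.
        var response = client.PostAsync(url, content: null).Result;

        // Throwing on failure lets NServiceBus retry the message.
        response.EnsureSuccessStatusCode();
    }
}
```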
You are right on in identifying that you will need Sagas to manage workflow. I'm willing to bet that your Domain hooks up to a common database. If that is true, then it will be faster to use your Domain directly and remove the serialization/network overhead. You will also gain the ability to easily manage the transactions at the database level.
Assuming you are directly calling your Domain, the performance becomes a question of how the Domain performs. You may take steps to optimize the database, drive down distributed-transaction costs, shard the data, etc. You may end up using the Distributor to have multiple Saga processing nodes, but it sounds like you have some more testing to do once a design is chosen.
Generically speaking, we use REST APIs to model commands as resources (via POST) to allow interaction with NSB from clients who don't have direct access to messaging. This is a potential solution for getting things onto NSB from your web app.
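As a sketch of that last point, assuming an ASP.NET Web API 2 host with an IBus injected by the container (the route, controller, and SubmitAdvertisement command are illustrative):

```csharp
using System.Net;
using System.Web.Http;
using NServiceBus;

// Hypothetical command modelled as a resource.
public class SubmitAdvertisement : ICommand
{
    public string Title { get; set; }
}

public class AdvertisementsController : ApiController
{
    public IBus Bus { get; set; } // property-injected by the container

    // POST api/advertisements
    [HttpPost]
    public IHttpActionResult Post(SubmitAdvertisement command)
    {
        Bus.Send(command); // hand the command to NServiceBus
        return StatusCode(HttpStatusCode.Accepted); // 202: processed asynchronously
    }
}
```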

WCF basicHttpBinding: Rollback when reply to client fails

I am exposing a WCF service through a basicHttpBinding that executes several operations on a database.
I want to guarantee that, if the client does not receive the reply, the database operations are rolled back (without any transaction flowing through WCF).
E.g. the client calls the "DoX" method, which executes on the server, but before it finishes the client crashes. The database operations should then be rolled back as soon as the reply cannot be sent to the client.
Is there any way to do that? Will the [OperationBehavior(TransactionScopeRequired=true)] attribute work in such a manner? Is there a possibility to handle communication errors on the server side?
Update 1:
It seems [OperationBehavior(TransactionScopeRequired=true)] commits the transaction before the reply is sent to the client, and thus cannot be used to perform a rollback if the client does not receive the reply.
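For reference, a minimal sketch of the attribute-based approach from Update 1, which turns out not to cover reply delivery (the DoX contract is illustrative):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    void DoX();
}

public class MyService : IMyService
{
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void DoX()
    {
        // Database operations enlist in the ambient transaction here.
        // The transaction commits when the method returns, i.e. before WCF
        // writes the reply, so a failed reply cannot roll it back.
    }
}
```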
Update 2:
To state it clearly again: I have no need for the transaction to interact in any way with the client side. The client should neither know of the transaction, nor have the ability to cancel or commit it, nor should any transaction flow through the binding. The only place I want the transaction to roll back is on the server side, if the transport channel cannot deliver the message to the receiving client. In the case of TCP/IP this information should be readily available to the server (no ACK of the TCP packet sent back to the client).
So a hypothetical execution flow on the server side (notice the lack of client side) should be:
Receive client request
Start transaction
Execute all logic inside the service operation
Send reply back to client
if (reply.failedToReceive) { transaction.Rollback() } // due to a failing TCP/IP transmission
There is no easy answer to this question. You are asking for behaviour that is implemented in WS-* but done over basic SOAP. I think your only option, if you REALLY can't switch to wsHttpBinding or use duplex as suggested by @Trevor Pilley, is to try to mimic the behaviour of WS-Transaction in your own custom protocol based on basic SOAP.
You should be able to get some simplification over the full WS-Transaction specification because:
You will probably only need to support transactions over a single service - you will not be doing a distributed transaction over several independent services
You will not need to support both short transactions (WS-AtomicTransaction) and long-running transactions (WS-BusinessActivity); probably atomic transactions would do.
You would not need to support any kind of extensibility model (WS-Coordination)
You would not need to implement a discovery/metadata model that describes the protocol (e.g. like WSDL) because you would be coding the protocol behaviour directly into the client and service.
However, you would probably need elements of both WS-Coordination and WS-AtomicTransaction. This is not a simple task by any means, and it will be easy to miss something subtle that could either cause rollbacks not to happen or (just as bad) destroy the performance of your service through long-duration locks all over your database due to crashed clients.
Like I say, this is a complex behaviour and if you cannot use ready-made, standardised protocols, there is no simple answer.