In my new project I started developing the WPF client. For a better unit testing experience I chose to use synchronous service calls.
Now I'm facing the situation that the amount of data I request from the service is not that small, so the call will take some time to return.
I'm not sure whether to wrap my synchronous service calls into TPL Tasks or to generate asynchronous service proxies instead.
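For illustration, a minimal sketch of the first option: wrapping the blocking proxy call in a TPL Task so the caller (e.g. a WPF event handler) stays responsive. `FetchReport` here is a hypothetical stand-in for the generated synchronous proxy method, not part of any real client.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program
{
    // Hypothetical stand-in for the generated synchronous proxy call.
    public static string FetchReport()
    {
        Thread.Sleep(200); // simulate a slow service round-trip
        return "report-data";
    }

    public static async Task Main()
    {
        // Task.Run moves the blocking call onto a thread-pool thread;
        // in WPF, await resumes on the UI thread, so it is safe to
        // bind the result to the view afterwards.
        string result = await Task.Run(() => FetchReport());
        Console.WriteLine(result);
    }
}
```

The synchronous proxy stays easy to mock in unit tests, while the call site gets asynchrony; a generated asynchronous proxy would achieve the same without occupying a thread-pool thread for the duration of the call.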
I am working on an application with a few components: a single-page application and a back-end API written in .NET Core. My back-end application calls an Azure Function that runs for 2 to 10 minutes depending on the data it processes, so I do not want the API to wait for the Azure Function to finish. After googling around for quite some time I came up with the approach below.
I will place a Service Bus queue between my back end and the Azure Function. As soon as the UI triggers something, my back-end API is called and adds a message to the queue. The Azure Function has a Service Bus trigger, so it starts as soon as a message arrives on the queue. It then executes for, say, around 5 minutes and calls one more Azure Function, and in that last Azure Function I use SignalR to push a notification to the UI.
This is the solution I came up with to handle long-running processing jobs. All my web apps/API apps are deployed to Azure App Service. My only question is: is this an appropriate solution, or is there a better way to handle this? Can someone tell me whether this is the best solution or whether a better workaround exists?
Since you are already using a Service Bus, you might consider creating a worker process that handles the 2-to-10-minute processing loads. You have lots of choices for writing that sort of app and running it in Azure, including console applications or a function app. You can also host long-running services inside your web app; the trade-off is between more complicated code (co-hosting the web and long-running app) and deploying an extra app.
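As a rough sketch of the function-app option, a Service Bus-triggered function picks up the work the moment the back-end API enqueues a message (the queue name and connection setting name here are made-up placeholders):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessDataFunction
{
    // Fires as soon as the back-end API drops a message on the queue;
    // the API itself has already returned to the UI by this point.
    [FunctionName("ProcessData")]
    public static void Run(
        [ServiceBusTrigger("process-requests", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation($"Started long-running processing for: {message}");
        // ... 2-10 minutes of processing, then notify the UI,
        // e.g. via a SignalR output binding or a second function.
    }
}
```

Note that the default function timeout on the App Service Consumption plan is shorter than 10 minutes, so a run of this length needs a Premium/Dedicated plan or an adjusted `functionTimeout` setting.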
What is the most sensible approach to integrating NServiceBus Sagas with REST APIs?
The scenario is as follows,
We have a load balanced REST API. Depending on the load we can add more nodes.
REST API is a wrapper around a DomainServices API. This means the API can be consumed directly.
We would like to use Sagas for workflow and the NServiceBus Distributor to scale out.
The question is: if we use the REST API from the Sagas, the actual processing happens in the API farm, which in a way defeats the purpose of implementing the Distributor pattern.
On the other hand, using the DomainServices API directly from the Sagas allows processing locally within the worker nodes. With this approach we would have to maintain API assemblies in multiple locations, but the throughput could be higher.
I am trying to understand the best approach. Personally, I'd prefer to consume the API (if readily available), but this could introduce chattiness to the system and could take longer to complete compared to in-process calls.
A typical sequence could be similar to publishing an online advertisement:
1. Advertiser submits a new advertisement request via a web application.
2. Web application invokes the relevant API endpoint and sends a command message.
3. Command message initiates a new publish advertisement Saga instance.
4. Saga sends a command to validate caller permissions (in process/out of process API call).
5. Saga sends a command to validate the advertisement data (in process/out of process API call).
6. Saga sends a command to the fraud service (third party service).
7. Once the content and fraud verifications are successful, Saga sends a command to the billing system.
8. Saga invokes an API call to save ad details (in process/out of process API call).
And this goes on until the advertisement is expired, there are a number of retry and failure condition paths.
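For reference, a bare-bones sketch of how the start of such a saga might look in NServiceBus. All message types, property names, and the correlation mapping are hypothetical, chosen only to mirror the sequence above; replies correlate back to the saga automatically.

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical messages mirroring the first steps of the sequence.
public class PublishAdvertisement : ICommand { public Guid AdvertisementId { get; set; } }
public class ValidateCallerPermissions : ICommand { public Guid AdvertisementId { get; set; } }
public class ValidateAdvertisementData : ICommand { public Guid AdvertisementId { get; set; } }
public class PermissionsValidated : IMessage { public Guid AdvertisementId { get; set; } }

public class PublishAdSagaData : ContainSagaData
{
    public Guid AdvertisementId { get; set; }
}

public class PublishAdvertisementSaga : Saga<PublishAdSagaData>,
    IAmStartedByMessages<PublishAdvertisement>,
    IHandleMessages<PermissionsValidated>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<PublishAdSagaData> mapper)
    {
        // Correlate all messages for one advertisement to one saga instance.
        mapper.ConfigureMapping<PublishAdvertisement>(m => m.AdvertisementId)
              .ToSaga(s => s.AdvertisementId);
    }

    public Task Handle(PublishAdvertisement message, IMessageHandlerContext context)
    {
        Data.AdvertisementId = message.AdvertisementId;
        // First validation step: check caller permissions.
        return context.Send(new ValidateCallerPermissions { AdvertisementId = Data.AdvertisementId });
    }

    public Task Handle(PermissionsValidated message, IMessageHandlerContext context)
    {
        // Next step: validate the advertisement data, and so on
        // through fraud check, billing, and saving the ad details.
        return context.Send(new ValidateAdvertisementData { AdvertisementId = Data.AdvertisementId });
    }
}
```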
After a number of design iterations we came up with the following guidelines,
Treat REST API layer as the integration platform.
Assume API endpoints are capable of abstracting fairly complex micro work-flows. Micro work-flows are operations that execute in a single burst (not interruptible) and complete within a short time span (&lt;1 second).
Assume API farm is capable of serving many concurrent requests and can be easily scaled-out.
Favor synchronous invocations over asynchronous message based invocations when the target operation is fairly straightforward.
When asynchronous processing is required, use a single message handler and invoke the API from the handler. This delegates the work to the API farm and also eliminates the need for a distributor and extra hardware resources.
Avoid Sagas unless the business work-flow contains multiple transactions, compensation logic, and resumes. Tests reveal Sagas do not perform well under load.
Avoid consuming DomainServices directly from a message handler. This will do the work locally and also introduces a deployment hassle by distributing business logic.
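A minimal sketch of the single-handler guideline above: a plain message handler (no saga) that delegates the actual work to the load-balanced API farm over HTTP. The message type and URL are hypothetical.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using NServiceBus;

public class ValidateAdvertisement : ICommand
{
    public System.Guid AdvertisementId { get; set; }
}

public class ValidateAdvertisementHandler : IHandleMessages<ValidateAdvertisement>
{
    private static readonly HttpClient http = new HttpClient();

    public async Task Handle(ValidateAdvertisement message, IMessageHandlerContext context)
    {
        // The heavy lifting happens on the API farm, which is already
        // load balanced, so no Distributor or extra workers are needed.
        var payload = new StringContent(
            JsonConvert.SerializeObject(message), Encoding.UTF8, "application/json");
        var response = await http.PostAsync(
            "https://api.example.com/advertisements/validate", payload);
        response.EnsureSuccessStatusCode(); // throw => NServiceBus retries the message
    }
}
```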
Happy to hear your thoughts.
You are right on with identifying that you will need Sagas to manage workflow. I'm willing to bet that your Domain hooks up to a common database. If that is true, then it will be faster to use your Domain directly and remove the serialization/network overhead. Going through the REST API, you would also lose the ability to easily manage the transactions at the database level.
Assuming you are directly calling your Domain, the performance becomes a question of how the Domain performs. You may take steps to optimize the database, drive down distributed transaction costs, shard the data, etc. You may end up using the Distributor to run multiple Saga processing nodes, but it sounds like you have some more testing to do once a design is chosen.
Generically speaking, we use REST APIs to model the commands as resources (via POST) to allow interaction with NSB from clients who don't have direct access to messaging. This is a potential solution for getting things onto NSB from your web app.
I want to implement a WCF service that responds immediately to the caller, but queues up an asynchronous job to be handled later. What is the best way to go about doing this? I've read the MSDN article on how to implement an asynchronous service operation, but that solution seems to still require the task to finish before responding to the caller.
There are many ways to accomplish this depending on what you want to do and what technologies you are using (e.g. unless you are using Silverlight, you may not need to have your app call the service asynchronously). The most straightforward way to achieve your goal would be to have your service method start up a thread to perform the bulk of the processing and return immediately.
Another option would be to record some kind of request (e.g. create an entry in a datastore) and return. Another process (e.g. a Windows service) could then pick up the request and perform the processing.
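One way to sketch the first option (contract and method names are made up): WCF one-way operations return to the caller as soon as the message is delivered, so the service can queue the work internally and finish it in the background.

```csharp
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IJobService
{
    // IsOneWay = true: the client call returns once the message is
    // received, without waiting for processing to complete. One-way
    // operations must return void and cannot declare fault contracts.
    [OperationContract(IsOneWay = true)]
    void SubmitJob(string jobPayload);
}

public class JobService : IJobService
{
    public void SubmitJob(string jobPayload)
    {
        // Hand the long-running work to the thread pool and return.
        Task.Run(() => ProcessJob(jobPayload));
    }

    private void ProcessJob(string jobPayload)
    {
        // ... the actual long-running processing ...
    }
}
```

For durability, the datastore/Windows-service variant above is safer, since work queued to the thread pool is lost if the host recycles.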
Any WCF service can be made asynchronous -
One of the nice things about WCF is you can write a service synchronously. When you add a ServiceReference in the client, you have the option of generating asynchronous methods.
This will automatically make the service call asynchronous. The service will return when it's done, but the client will get two methods - BeginXXX and EndXXX, as well as XXXAsync + an XXXCompleted event, either of which allows for completely asynchronous operation.
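To illustrate with a hypothetical generated client (`CatalogClient` with a `GetItems` operation; these names are not from any real proxy), both client-side patterns look like this:

```csharp
using System;

// Hypothetical proxy generated with "Generate asynchronous operations" checked.
var client = new CatalogClient();

// Event-based pattern: the Completed event fires when the call returns.
client.GetItemsCompleted += (sender, e) =>
{
    if (e.Error == null)
        Console.WriteLine(e.Result);
};
client.GetItemsAsync(); // returns immediately; the service itself is unchanged

// Equivalent APM pattern with Begin/End:
client.BeginGetItems(asyncResult =>
{
    var proxy = (CatalogClient)asyncResult.AsyncState;
    var items = proxy.EndGetItems(asyncResult);
}, client);
```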
Is it possible to easily call a long-running WF service from another long-running workflow service, and have the calling service wait for the called service to complete? Is there any out-of-the-box support for this scenario?
I am not talking about using library services, but rather a whole contained sub workflow service.
One of the reasons for doing this would be so as to decouple parts of a complex system so that they can version independently.
An example might be an order fulfillment system with a separate customer service workflow: the ordering system might want to wait for customer service to process and correct a problem order before continuing. From a systems point of view, the ordering system would version independently of the customer support workflow, unless the customer support workflow's inputs and outputs changed.
Yes you can. Because you are using two long-running workflows, your best option is to use duplex communication, with the second workflow calling back into the first workflow when it is done.
See here and here for two blog posts I did on duplex WCF and workflow services. They use a simple console app as the client, but with a workflow the principle is the same.
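As a rough sketch of what the duplex contract pair might look like (the names are illustrative, not taken from the linked posts):

```csharp
using System.ServiceModel;

// Contract implemented by the called (customer service) workflow.
[ServiceContract(CallbackContract = typeof(IOrderProblemCallback))]
public interface IOrderProblemService
{
    [OperationContract(IsOneWay = true)]
    void ResolveProblemOrder(string orderId);
}

// Callback contract implemented by the calling (ordering) workflow;
// the sub-workflow invokes this when it finishes, which wakes the
// ordering workflow's waiting Receive activity so it can continue.
public interface IOrderProblemCallback
{
    [OperationContract(IsOneWay = true)]
    void ProblemOrderResolved(string orderId);
}
```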
We have a Silverlight 4 application utilizing an IDuplexSessionChannel to send data to/from the application to a WCF service.
I've noticed that during very high UI thread usage (such as application startup when the UI is building itself) calls to our WCF service via IDuplexSessionChannel.BeginSend are not sent until the UI has completed rendering.
The calls to BeginSend are being done within a BackgroundWorker.
Does the actual execution of BeginSend occur on the main thread? I couldn't find anything "official" that documents this.
It would seem so, since even when I make the main thread Sleep or WaitOne, the messages still do not go through (note this was just a test).
What would be the best way to get those calls to go out immediately?
Thanks