Long-running workflow in ASP.NET MVC - asp.net-mvc-4

I'm developing an intranet site using ASP.NET MVC 4 to manage some of our data. One important feature of this site is to trigger import/export jobs. These jobs can take anywhere from 5 minutes to 1 hour. Users of the site need to be able to determine whether a job is currently running, as well as the status of prior jobs. Many jobs will often include warning messages concerning duplicate data, and these warnings need to be visible on the site.
My plan is to implement these long-running processes as a WCF Workflow Service that the ASP.NET site will interact with. I've got much of the business logic implemented via activities and have tested it using a simple console application. I should note I'm using a correlation handle to partition the service based on specific "Projects" on the site.
My problem is how to query the status of an active job (if one exists) as well as the warning messages of previous jobs. I suspect the best way to do this would be to use the AppFabric tracking service and have my ASP.NET site query the SQL monitoring store and report back on the current status. After setting up AppFabric and adding custom tracking messages, I ran into a few issues. The first is that I cannot figure out how to filter out workflow instances that were not using the correct correlation handle, as I'd like to show only the workflows for a specific project. The other is that the tracking database can lag quite a bit, which makes it hard to determine whether a workflow is currently running.
Another possible solution could be to have the workflow explicitly update a database with its current status and any error messages. I'm leaning towards this solution but could use some expert advice.
TL;DR: I need to know the best way to query the execution status and any warning messages of a WCF Workflow service.

As you want to query workflow status and messages even after the workflow has finished, I would start by creating a table that maps the correlation values a client sends to the related workflow instance ID. I would create a custom activity to do that and drop it right after the Receive that creates the workflow.
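For illustration, a minimal sketch of such a mapping activity might look like this (the ProjectWorkflowMap table, the connection string, and the ProjectId argument are my assumptions, not part of the original design):

    using System;
    using System.Activities;
    using System.Data.SqlClient;

    public sealed class RegisterCorrelationMapping : CodeActivity
    {
        // The correlation value the client sent, e.g. the project identifier.
        public InArgument<string> ProjectId { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            // Map the client-facing key to the instance ID that the
            // persistence and tracking stores know about.
            Guid instanceId = context.WorkflowInstanceId;
            string projectId = ProjectId.Get(context);

            using (var conn = new SqlConnection("...your connection string..."))
            using (var cmd = new SqlCommand(
                "INSERT INTO ProjectWorkflowMap (ProjectId, WorkflowInstanceId) " +
                "VALUES (@projectId, @instanceId)", conn))
            {
                cmd.Parameters.AddWithValue("@projectId", projectId);
                cmd.Parameters.AddWithValue("@instanceId", instanceId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

With that table in place, the site can translate a project ID into a workflow instance ID before querying the persistence or tracking stores.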
Next I would create a regular WCF service the client app uses to query the status. This WCF service can query the WF persistence store to see if a given workflow is still running. If so, the active bookmarks column will tell you what SOAP messages the workflow is currently waiting for.
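As a rough sketch, assuming the standard WF4 SqlWorkflowInstanceStore schema (verify the [Instances] view and ActiveBookmarks column names against your own persistence database before relying on them):

    using System;
    using System.Data.SqlClient;

    public static class WorkflowStatusQueries
    {
        // Returns the active bookmarks for a running instance, or null if
        // no row exists, i.e. the workflow has completed and its record
        // was removed from the instance store.
        public static string GetActiveBookmarks(Guid instanceId, string connectionString)
        {
            const string sql =
                "SELECT ActiveBookmarks " +
                "FROM [System.Activities.DurableInstancing].[Instances] " +
                "WHERE InstanceId = @instanceId";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@instanceId", instanceId);
                conn.Open();
                return cmd.ExecuteScalar() as string;
            }
        }
    }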
As far as messages go, you can either use the AppFabric tracking infrastructure to store and retrieve them, or you can create a custom activity and store them in your own database. It really depends on whether you are also interested in the standard WF tracking messages that are generated.
Update on checking for running workflow instances:
There are several downsides to adding an IsRunning message to your workflow. For one, you would need to make sure one branch keeps looping and waiting for the message but stops as soon as the other, real workflow branch is done. That is certainly possible, but it complicates the workflow and is a possible source of errors. And as it is not part of the business problem, it really has no place in the workflow as far as I am concerned. It also means that you have to load a workflow from disk and persist it back just to tell you that it is there. If it has finished, you will need to wait for a fault to indicate there was no workflow instance, and that usually means a timeout exception after, by default, 60 seconds. Add throttling to that, and your request might be queued because there are too many other workflow instances or SOAP requests being processed. So a timeout might mean that a workflow instance exists but is unreachable due to system constraints.
Instead I would opt for the simple approach and check whether the record in the instance store is still available. The additional info from the active bookmarks column will tell you what the workflow is waiting on; I have used that information in the past to dynamically update the UI by enabling/disabling UI elements.

Related

Starting a Saga with Bus.SendLocal(IMessage) instead of Bus.Publish(IEvent)

I'm working on an application that requires regular polling of a 3rd party API. We've used NServiceBus heavily throughout this project and I decided to use the benefits of a Saga to help maintain the state of my poller.
In short, the Saga is used to maintain information required to ensure the polling is done correctly, and also to give us the simplicity of creating a timeout (after each poll) in order to ensure the next poll takes place, even if the service is stopped/compromised/blocked for whatever reason.
My first issue arose when I decided to initiate the Saga by having the service subscribe to its own events, and then publish one of those events when the service started (using IWantToRunWhenBusStartsAndStops). The problem was that the service would start and publish the event before the subscriptions had been created. The service would therefore not handle the event that was meant to kick off the whole Saga, unless I restarted it. Restarting the service in order to bypass this problem is not a solution I want to even consider.
Since then, and with some playing around, I have discovered that using
Bus.SendLocal(new MyMessage()); (MyMessage implements IMessage)
will effectively start the Saga, without the need for subscription. The Saga is created in the database (I use NHibernate & MSSQL for persistence), and the timeouts are correctly created and function exactly as expected.
My only problem with this solution is that I am doing something that I cannot find any reference to in the NServiceBus documentation, and I'm concerned that I may be utilising a "feature" that may disappear in a future version, due to actually being an unintended side-effect.
In a nutshell - I'm starting a Saga by sending an IMessage using SendLocal. It works 100% and fixes all my issues, but is it "correct"?
Your solution is absolutely correct and there is no reason I can think of not to do that.
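For reference, a minimal sketch of the pattern (NServiceBus 4-era API; the saga, message, and timeout names are illustrative):

    using System;
    using NServiceBus;
    using NServiceBus.Saga;

    public class MyMessage : IMessage { }

    public class PollTimeout { }

    public class PollerSagaData : IContainSagaData
    {
        public Guid Id { get; set; }
        public string Originator { get; set; }
        public string OriginalMessageId { get; set; }
    }

    public class PollerSaga : Saga<PollerSagaData>,
                              IAmStartedByMessages<MyMessage>,
                              IHandleTimeouts<PollTimeout>
    {
        public void Handle(MyMessage message)
        {
            // SendLocal drops MyMessage into this endpoint's own queue, and
            // IAmStartedByMessages creates the saga instance right here;
            // no subscription is involved.
            RequestTimeout<PollTimeout>(TimeSpan.FromMinutes(5));
        }

        public void Timeout(PollTimeout state)
        {
            // Poll the 3rd-party API, then schedule the next poll.
            RequestTimeout<PollTimeout>(TimeSpan.FromMinutes(5));
        }
    }

    // Kick the saga off at service startup:
    public class Startup : IWantToRunWhenBusStartsAndStops
    {
        public IBus Bus { get; set; }

        public void Start() { Bus.SendLocal(new MyMessage()); }
        public void Stop() { }
    }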

Best way to queue WCF requests so that only one is processed at a time

I'm building a WCF service to handle all QuickBooks SDK functionality for two companies. Since the QuickBooks SDK needs to open/close the actual QuickBooks application to process a request, only one can be handled at a time or QuickBooks goes into a really bad state. I'm looking for the best way to allow end users to make a QuickBooks data request, and have my WCF application hold that request until the previous request is completed.
If nothing is currently being processed, then the request will go through immediately.
Does anyone know of the best method to handle that type of functionality? Anything third party/built-in .NET libraries?
Thanks!
Use WCF throttling. It's configurable and will solve your problem without code changes.
See my answer for WCF ConcurrencyMode Single and InstanceContextMode PerCall.
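For example (a sketch; the contract and names are placeholders), making the service a singleton with ConcurrencyMode.Single lets WCF itself serialize the calls; the equivalent knob in configuration is a serviceThrottling behavior with maxConcurrentCalls="1":

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IQuickBooksService
    {
        [OperationContract]
        string Process(string request);
    }

    // One service instance, one request at a time: WCF queues all other
    // callers (subject to binding timeouts) until the current call returns.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Single)]
    public class QuickBooksService : IQuickBooksService
    {
        public string Process(string request)
        {
            // Placeholder for the QuickBooks SDK open/process/close cycle.
            throw new NotImplementedException();
        }
    }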
One way to do this is to place a queue between the user and the QuickBooks application:
The request from the user is placed in a queue or data table.
A background process reads one item at a time out of the queue, sends it to QuickBooks and places the result in a result table.
The client application reads the result from the result table.
This requires some work, but the user will always be able to submit requests and only one will be processed at a time.
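A minimal in-process sketch of this pattern using BlockingCollection (in production the queue and the result table would be durable, e.g. MSMQ or SQL, so requests survive a restart):

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public class QuickBooksRequestQueue
    {
        private readonly BlockingCollection<QbRequest> _queue =
            new BlockingCollection<QbRequest>();

        public QuickBooksRequestQueue()
        {
            // A single long-running consumer guarantees one-at-a-time processing.
            Task.Factory.StartNew(ProcessLoop, TaskCreationOptions.LongRunning);
        }

        // Called by the WCF operation: returns immediately, work is queued.
        public void Enqueue(QbRequest request)
        {
            _queue.Add(request);
        }

        private void ProcessLoop()
        {
            foreach (var request in _queue.GetConsumingEnumerable())
            {
                string result = SendToQuickBooks(request); // only one in flight
                SaveResult(request.Id, result);            // client polls this table
            }
        }

        private string SendToQuickBooks(QbRequest request)
        {
            // Placeholder for the QuickBooks SDK call.
            throw new NotImplementedException();
        }

        private void SaveResult(Guid id, string result)
        {
            // Placeholder: write to the result table the client reads.
        }
    }

    public class QbRequest
    {
        public Guid Id { get; set; }
    }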
The solution given by ErnieL will also work if you use ConcurrencyMode.Single, but in heavy-load scenarios the users will get timeouts.

Raise an event or send a command?

We've created a web application that is an e-book reader, so one thing to keep in mind is that the domain is not exactly that of reading a physical book. We are now trying to gather users' reading behavior by storing information about the e-book pages accessed by our users. Since this information goes to a data warehouse, we thought raising an event from the book controller was the right way to do it.
bus.Publish()
But we are not sure if it should be a publish or a send, since there is really only one consumer of this event, and that is our business intelligence team. We've also read that it is not advisable to publish from the web app (http://www.make-awesome.com/2010/10/why-not-publish-nservicebus-messages-from-a-web-application/). So now the alternative is to use bus.Send(RecordPageAccessedCommand).
But the above command does not change our application state in any way. So is it truly a command? I have a feeling that the mistake we are making is taking NServiceBus's features (Publish, Send) and trying to equate them with what a command or event is.
Please let me know what the solution to this is.
Based on the information you provided, I would recommend "sending" to your endpoint.
Sending a command implies that the endpoint handling the message should do something. In your case, recording that the page was accessed is the thing the endpoint should do.
Publishing an event implies that you are notifying 0..n subscribers that something occurred. You could publish an event from your command handler if some other service in your system was interested in the fact that a page was accessed. The key point here is that it's not a "fact" until you've recorded it.
I've found that consumers tend to grow once data is available. Having the ability to publish an event from your command handler will make it trivial to notify new consumers without changing/risking your existing code base.
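A sketch of that flow (the message and property names are illustrative):

    using System;
    using NServiceBus;

    public class RecordPageAccessedCommand : ICommand
    {
        public Guid UserId { get; set; }
        public string BookId { get; set; }
        public int PageNumber { get; set; }
    }

    public class PageAccessedEvent : IEvent
    {
        public Guid UserId { get; set; }
        public string BookId { get; set; }
        public int PageNumber { get; set; }
    }

    // In the web application's controller: Bus.Send(new RecordPageAccessedCommand { ... });

    public class RecordPageAccessedHandler : IHandleMessages<RecordPageAccessedCommand>
    {
        public IBus Bus { get; set; }

        public void Handle(RecordPageAccessedCommand message)
        {
            // 1. Record the access in the data warehouse; only now is it a "fact".
            // 2. Then, if other services care, publish the event.
            Bus.Publish(new PageAccessedEvent
            {
                UserId = message.UserId,
                BookId = message.BookId,
                PageNumber = message.PageNumber
            });
        }
    }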
The RecordPageAccessedCommand is a command as it is commanding the system to do something, in this case, record that a page has been accessed.
If I've understood your scenario correctly, a message should be sent from your controller to the "Business Intelligence Team Service", telling the system to record that a page has been accessed. This service would store this information and would be the owner/technical authority of this information.
No other services should store or require this information in its pure form; they can, however, subscribe to events from this service. In a highly contrived scenario, for example, when a user reads 1000 pages the "Business Intelligence Team Service" can publish an event that 1000 pages have been read, i.e. Bus.Publish(), which may be handled by a billing service that gives the user a discount on their next purchase.
The data warehouse can have access to this information stored in your "Business intelligence Team Service" as it would fall under IT/OPS.

NServiceBus design ideas

Can any developers/architects with experience with NServiceBus offer guidance and help on the following?
We have a requirement in the business (and not a lot of money) to create a robust interface between an externally hosted application and our internal ERPs (yup, more than one).
When certain activities take place in the third-party application, it will send us a message, i.e. call a web service, passing various fields of information in the message, etc. We are not in control of, nor can we change, this third-party application.
My responsibility is creating this web service and the processing of the messages into each ERP. The third party dictates how the web service will look, but not what it's responsible for. We have to accept that if they get a response back of 'success', then at that point we have taken responsibility for that message! That is, we need to ensure that as close to no data loss as possible takes place.
This is where I'm interested in the use of NServiceBus: use it to store/accept a message at first. At this point I get lost; I can't tell what design should follow. Does another machine (process) subscribe to and grab the message to process it into an ERP? If so, since each ERP's integration logic differs, do I make a subscriber per ERP? However, a message may have two destination ERPs, so is it better for the message to be sent rather than subscribed to?
Obviously in the whole design I need to have some business rules that help determine the destination ERPs, and then business rules that determine what actually takes place within each ERP. So I also have a question on BREs, but this can wait, although it may still be a driver for what the message has to do.
So:
Third party > web service call > store message (& return success) > determine which ERP is target > process each into ERP > mark message complete
If anything fails along the way, I need to make sure the message does not get lost. P.S. How does MSMQ prevent loss, given that the whole machine may die? Is this just disk resilience, etc.?
Many thanks if you've read and even more for any advice.
This sounds like a perfect application for NServiceBus.
Your web service should ONLY parse the request from the third party and translate it into an NServiceBus message, which it should Bus.Send(). You don't respond with a 200 status code until that message is on the bus, at which point you are responsible for it, and NServiceBus's built-in error/retry and error queue facilities become your best friend.
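A sketch of that thin web service layer (the contract and field names are assumptions; MessageFromWebServiceCommand is the message type mentioned below):

    using System.ServiceModel;
    using NServiceBus;

    public class MessageFromWebServiceCommand : ICommand
    {
        public string ExternalId { get; set; }
        public string Payload { get; set; }
    }

    [ServiceContract]
    public interface IThirdPartyService
    {
        [OperationContract]
        string Submit(string externalId, string payload);
    }

    public class ThirdPartyService : IThirdPartyService
    {
        public IBus Bus { get; set; }

        public string Submit(string externalId, string payload)
        {
            // Translate only; no business logic lives in the web service.
            Bus.Send(new MessageFromWebServiceCommand
            {
                ExternalId = externalId,
                Payload = payload
            });

            // Acknowledge only once the message is safely on the bus; from
            // here, NServiceBus retries and the error queue own failure handling.
            return "success";
        }
    }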
This message should be received by another endpoint, but it needs to be able to account for duplicate messages or use idempotence so that duplicates aren't a problem. If the third party hits your web service, and the message is successfully placed on the bus, but then some error prevents them from receiving the 200 response code, you will get duplicates from them.
At this point, the endpoint receiving the MessageFromWebServiceCommand message could Bus.Publish() a SomeBusinessEventHappenedEvent that contains the command data.
For each ERP, create an additional endpoint that subscribes to the SomeBusinessEventHappenedEvent and uses your business logic to decide what to do with respect to that ERP. In some cases, that "something" may be "nothing". Keep idempotence in mind here too, because if the message fails, it will be retried.
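One ERP endpoint's handler might look like this sketch (the dedup helpers are hypothetical placeholders):

    using NServiceBus;

    public class SomeBusinessEventHappenedEvent : IEvent
    {
        public string ExternalId { get; set; }
        public string Payload { get; set; }
    }

    public class Erp1Handler : IHandleMessages<SomeBusinessEventHappenedEvent>
    {
        public void Handle(SomeBusinessEventHappenedEvent message)
        {
            // Idempotence guard: retries and web-service duplicates re-deliver
            // the same logical message, so skip work that is already done.
            if (AlreadyProcessed(message.ExternalId))
                return;

            // ERP-specific business rules; "do nothing" is a valid outcome.
            PushToErp(message);
            MarkProcessed(message.ExternalId);
        }

        private bool AlreadyProcessed(string externalId)
        {
            // Placeholder: check a dedup table keyed by the external id.
            return false;
        }

        private void PushToErp(SomeBusinessEventHappenedEvent message)
        {
            // Placeholder: call this ERP's integration API.
        }

        private void MarkProcessed(string externalId)
        {
            // Placeholder: record the external id in the dedup table.
        }
    }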
All the other things you're worried about (preventing loss of messages, what happens if machines die) will be taken care of thanks to NServiceBus and MSMQ being naturally resilient to such problems.
Here is a blog post, including a sample project, that shows how to receive messages from an external partner via a web service and handle them with NServiceBus, and a link straight to the sample project on GitHub:
Robust 3rd Party Integrations with NServiceBus
Project Source Code on GitHub

REST API Design - is the Plancake end_timestamp for synchronization a good practice?

I need to design a REST API for my project. It is needed for a two-way synchronization of a mobile application with a web application.
Obviously, before starting, I have studied how other projects have implemented a similar scenario.
Commonly when the mobile app wants to get all the new items from the web application, it sends a 'start_timestamp' in the request as a time reference.
I have found out that Plancake also requires an 'end_timestamp', in order to define a clear time window. You can read about that in the last point of 'About the request:' in this paragraph:
http://www.plancake.com/apiDocumentation#api_doc_overview
Do you think I should consider using an 'end_timestamp', or is it a complication that brings very little benefit?
Thanks,
Dan
start_timestamp and end_timestamp are powerful pieces of information that enhance task management and give you more control over your tasks. In addition, they add a monitoring feature to your application.
Imagine your application fires two tasks to sync with your web server. Both could write the same data, and your data would become inconsistent. So you need a way to control and guarantee that no other task is operating on a specific part of your web server at that moment.
"everytime you can pass a 'start timestamp' as parameter (typically the server timestamp of the latest synchronization), you are forced to pass also an 'end timestamp' (that should be the server timestamp of when you start the current syncing). Although many products don't require that, we believe that is very important in order to assure consistency (think of what can happen if a task is added while a synchronization takes place)"
Of course you may use your own technique to accomplish that.
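As a sketch of the client side of that technique (the method names are placeholders, not the actual Plancake API):

    using System;
    using System.Collections.Generic;

    public class SyncClient
    {
        // Persisted between runs: the server timestamp of the last sync.
        private DateTime _lastSyncServerTime;

        public void Sync()
        {
            // end_timestamp = the server time at the moment this sync starts.
            // Items created while the sync runs fall into the *next* window,
            // so they are neither lost nor processed twice.
            DateTime windowEnd = GetServerTime();

            List<string> newItems = GetItems(_lastSyncServerTime, windowEnd);
            Apply(newItems);

            _lastSyncServerTime = windowEnd;
        }

        private DateTime GetServerTime()
        {
            // Placeholder: ask the server for its current timestamp.
            throw new NotImplementedException();
        }

        private List<string> GetItems(DateTime start, DateTime end)
        {
            // Placeholder: e.g. GET /items?start_timestamp=...&end_timestamp=...
            throw new NotImplementedException();
        }

        private void Apply(List<string> items)
        {
            // Placeholder: merge the items into the local store.
        }
    }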