Windows Workflow 4.5 Paradigm Questions - asp.net-mvc-4

I've been digging into the technical details and implementation of Windows Workflow 4.5 as a beginner, with decent results. My question is more of a "why and when" than a "how to" question, so bear with me.
I've taken a concept familiar to us all and abstracted the business logic into WF, namely the universal log-on process. What I wanted to accomplish was reusable logic that I could call from an MVC website, a Windows Forms application, etc., with everything running through the same workflow, and I have achieved that.
Now I have two conceptual questions about when to apply WF and when to use code.
1 - Take simple validation as an example. I'm trying to log in but I've passed an empty user name or password string. Obviously, I want to send a message back to the end user such as "UserName Required" and "Password Required", which I've done. The way I did it is with a validation class (the FluentValidation NuGet package, if it matters), but the important thing is that the validation happens in code. In WF I call my validation code via an ExecuteMethod and everything works just fine. My question is: is this the wrong approach with a WF mindset? Should I instead be building up the validation messages inside WF directly, using inline "If" activities and decisions, rather than calling out to a chunk of code? I'm asking about validation only because it's a concept we can all relate to; more generally, should I be attempting to put anything and everything I can into WF itself, or is it better to call custom code? I'm looking for best practice with reasoning from seasoned software architects with WF experience rather than opinion, if possible.
2 - Picking up a workflow on another machine. Part of the same login workflow requires a service method call. I've written the code and workflow in such a way that the workflow receives an In parameter of ILogOnService, which has an interface method AuthenticateUser. The concrete implementation I'm passing in calls out to an MVC4 Web API post method, asynchronously, to do a standard ASP.NET membership ValidateUser. Again, should I be calling this Web API PostAsync from inside the workflow? If so, doesn't that tightly couple my workflow to ASP.NET Membership and my particular service choice? It seems there are ways to get the workflow to a certain point and then resume it on another machine, e.g. where a service is running, but I'm not able to find good examples of attempting that.
Just looking for some guidelines and ideas from the pros at this technology but I will pick the most informative answer.

There is nothing wrong with using C# code to implement details of a workflow. In fact I always tell people that if they are using WF4 with just the standard out-of-the-box activities, they are probably doing things wrong. You really need to be creating, or have someone else create for you, custom activities that model business activities for your business. If that means creating an activity that validates a login using FluentValidation, that is perfectly fine. Another time you might build a higher-level business activity out of lower-level WF4 activities; combine them however works best in your case.
Calling a service with something like PostAsync can work well if you know the action is short-lived and the service is normally available. However, when you get into SOA styles you really want to start using temporal decoupling, so one service is not dependent on another service being available right away. And when you get into temporal decoupling you really want to be using queues, maybe MSMQ or maybe another similar technology. In that case you really want to send a one-way message with a response queue and have the workflow go idle and wait for the response message to arrive. This would reload the workflow, possibly on another machine. That might not always be appropriate (for example, in your login it would not be much use to grant the login a day later because the membership service was unavailable), but it can result in very scalable and fault-tolerant systems. Of course there is no free lunch, as these systems are very hard to design properly.

Related

Testing microservices?

I know this question is a little subjective, but I am lost on what to do here. At the moment I am using Go + Go-kit to write some microservices. I'd like to test the endpoints of these microservices in an integration-test fashion, but I am unsure how to go about it. The only thing I can think of is to have shell scripts that hit the endpoints and check the responses, but that seems like a kludge and not a very smart practice. I feel like there should be a better way to do this. Does anyone have any suggestions?
An alternative approach to end-to-end testing is Consumer-Driven Contract (CDC).
Although it is useful to have some end-to-end tests, they have some disadvantages, such as:
the consumer service must know how to start the provider service. This sounds like unnecessary information, and it is likely difficult to maintain once the number of services starts ramping up;
starting up a service can be slow. Even if we're only talking a few seconds, this adds overhead to build times. If a consumer depends on multiple services, it all starts adding up;
the provider service might depend on a data store or other services to work as expected. This means that not only the provider needs to be started, but also a few other services and maybe a database.
The idea of CDC is described shortly as:
The consumer defines what it expects from a specific request to a service
The provider and the consumer agree on this contract
The provider continuously verifies that the contract is fulfilled
This information is taken from here; read more in that article, as it can be useful even if it is specific to Java.
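To make the idea concrete, here is a minimal, hand-rolled sketch in Go of a provider-side contract check (real projects usually reach for a dedicated CDC tool such as Pact). The handler, route, and expected fields are all invented for the example:

    package main

    import (
        "encoding/json"
        "net/http"
        "net/http/httptest"
        "testing"
    )

    // contract captures what the consumer expects from GET /users/42.
    // In practice this definition would be shared and agreed on by both sides.
    type contract struct {
        Path       string
        WantStatus int
        WantFields []string
    }

    var userContract = contract{
        Path:       "/users/42",
        WantStatus: http.StatusOK,
        WantFields: []string{"id", "name"},
    }

    // usersHandler stands in for the provider's real handler.
    func usersHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]any{"id": 42, "name": "Alice"})
    }

    // TestProviderHonoursContract runs in the provider's build and fails
    // whenever the provider stops fulfilling what the consumer agreed on.
    func TestProviderHonoursContract(t *testing.T) {
        srv := httptest.NewServer(http.HandlerFunc(usersHandler))
        defer srv.Close()

        resp, err := http.Get(srv.URL + userContract.Path)
        if err != nil {
            t.Fatal(err)
        }
        defer resp.Body.Close()

        if resp.StatusCode != userContract.WantStatus {
            t.Fatalf("want status %d, got %d", userContract.WantStatus, resp.StatusCode)
        }
        var body map[string]any
        if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
            t.Fatal(err)
        }
        for _, field := range userContract.WantFields {
            if _, ok := body[field]; !ok {
                t.Errorf("response is missing field %q required by the consumer", field)
            }
        }
    }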
You can do this in a standard Go unit test using the httptest package. This allows you to create mock Request and ResponseWriter objects that can be passed to any Handler or HandleFunc. You create the appropriate Request, pass it to your handler, then read the response out of the ResponseRecorder and check it against the expected response.
If you're using the default mux (calling http.Handle() to register handlers) you can test against http.DefaultServeMux. I've used it for microservices in the past with good results. Works for benchmarking handlers, routing, and middleware as well.
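For illustration, a minimal sketch of that flow; the healthHandler and its route are invented for the example:

    package main

    import (
        "net/http"
        "net/http/httptest"
        "testing"
    )

    // healthHandler is a hypothetical handler under test.
    func healthHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.Write([]byte(`{"status":"ok"}`))
    }

    func TestHealthHandler(t *testing.T) {
        // Build a fake request and a recorder that captures the response.
        req := httptest.NewRequest(http.MethodGet, "/health", nil)
        rec := httptest.NewRecorder()

        healthHandler(rec, req)

        if rec.Code != http.StatusOK {
            t.Fatalf("expected status 200, got %d", rec.Code)
        }
        if rec.Body.String() != `{"status":"ok"}` {
            t.Errorf("unexpected body: %s", rec.Body.String())
        }
    }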
You should always use Go's native testing framework to test each individual service (please, no shell scripts!). httptest seems fine, but I would argue it is helpful to have finer-grained test boundaries: you should really have one _test.go file for each functional block of your code. Smaller tests are easier to maintain.
In terms of overall integration tests that involve multiple microservices, you shouldn't run them at development time. Set up a staging area and run the tests there.
My 2 cents.

What are some good ways of converting business logic errors to REST API errors?

I have a business logic layer that acts as a service and is agnostic to any application-facing or user-facing interface. For example, I've got a UserService which takes care of operations related to users (e.g. creating users), and at the moment it returns a custom error object that includes a message explaining what went wrong. My RESTful API would use these services to handle API requests, but how do I handle business errors? How do I know what status code to use? I obviously don't want to put a lot of if statements in every single API call. I also thought about having a global error handler that would map every single business logic error to a status code and return that, but that's also very verbose code. I'd love to hear some good ideas for handling this elegantly.
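To make the "global error handler" idea concrete, here is the kind of thing I mean, sketched in Go with made-up error names; the point is just that the error-to-status mapping lives in one place rather than in every API call:

    package main

    import (
        "encoding/json"
        "errors"
        "net/http"
    )

    // Hypothetical business-layer errors; the service layer knows nothing about HTTP.
    var (
        ErrUserNotFound = errors.New("user not found")
        ErrUserExists   = errors.New("user already exists")
        ErrInvalidInput = errors.New("invalid input")
    )

    // statusFor maps business errors to HTTP status codes in one place.
    func statusFor(err error) int {
        switch {
        case errors.Is(err, ErrUserNotFound):
            return http.StatusNotFound
        case errors.Is(err, ErrUserExists):
            return http.StatusConflict
        case errors.Is(err, ErrInvalidInput):
            return http.StatusBadRequest
        default:
            return http.StatusInternalServerError
        }
    }

    // writeError is the single place every handler forwards errors to.
    func writeError(w http.ResponseWriter, err error) {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(statusFor(err))
        json.NewEncoder(w).Encode(map[string]string{"message": err.Error()})
    }

    func main() {
        http.HandleFunc("/users/42", func(w http.ResponseWriter, r *http.Request) {
            // Handlers just pass business errors through; no status-code logic here.
            writeError(w, ErrUserNotFound)
        })
        http.ListenAndServe(":8080", nil)
    }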

Call WF Workflow from Browser

OK, I've searched the Internet for the answer to this and haven't found anything... maybe I'm missing the obvious here or just asking the wrong question, but...
How do you call a WF WCF workflow just by its URL with parameters? I have a workflow xamlx, we'll call it DeepThought.xamlx, with an operation named TheQuestion, and I need to pass the parameter Answer = 42 to it.
I've tried http://localhost:8042/DeepThought.xamlx/TheQuestion?Answer=42 and just about everything else I can think of. I've scoured the Internet and even the wsdl but am either just flat out missing the answer or simply not seeing it.
I assume it's possible, otherwise, what's the point? Clues appreciated.
At least out of the box this is not possible. The standard Receive activity uses SOAP. I'm sure it's possible to implement a custom Receive, but I guess it would be a non-trivial amount of work.
You can also take a look at the following questions. They are REST-related, but they may still give you some options (a community RESTful endpoint is mentioned, though I have no idea of its current state):
RESTful Workflow Service Endpoints in WF4 / WCF
WCF Workflow Service REST interface
I ended up implementing the workflow as a regular activity (non-service) inside WCF. This gave me the ability to take the service operation's parameters and pass them to the workflow directly. In the end, it was not too difficult to implement.

WCF - quick response with status then continue doing longer process

I spent a few days searching on this case. I checked out all the WCF asynchronous implementations.
I wasn't able to find what I was looking for.
Below is the scenario.
WCF is running to accept XML.
WCF needs to respond to the user confirming successful receipt of the XML and release the request immediately.
WCF then needs to process the XML: save it to the database and parse it to convert it into something else.
I don't want to use a separate service to handle the above; I want one service to handle all three cases.
I checked out the asynchronous way of coding in WCF, but that doesn't release the request right away. What is the best practice for this? Is there any sample code I can use?
Thank you in advance.
I think you would be better suited to using a different technology. Maybe look at Windows Workflow Foundation.
You can host WCF Workflow Services the same way you host a standard WCF service; the main difference is that you can create workflows that continue running after acknowledging receipt of the original message.
You do this by persisting the message and returning to the user. WF allows you to create activities that continue after a response has been sent back to the caller.
Visual Studio provides a design surface that allows you to drag and drop components to create custom workflows. You can also make calls to other services if required.
With .NET 4.5 you can now write workflow expressions in C#; in previous versions of WF you had to use VB.NET.
You can read about it on the MSDN site here:
http://msdn.microsoft.com/en-us/vstudio/jj684582.aspx
Hope this helps

good practice: REST API as the interface between the interface layer and business layer?

I was thinking about the architecture of a web application that I am planning on building and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an android application to access it, I was already thinking about having an API.
Given that I will want an external API for my application from day one, is it a good idea to use that API as the interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? Performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to decide on the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra effort of developing an API. Often that simply means more than one client, as the benefits of having an API will be evident when you come to release changes or bug fixes. Also weigh the added complexity and the extra code maintenance overhead against any benefits that might come from separating the interface and business layers, such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architecture, which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the business layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a web service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this; however, I consider the correct architectural guideline to be a well-defined Service Interface on the server, accessed by a Gateway on the client. You would then use POCO DTOs (plain old DTOs) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid dealing with serialization, as it should be handled transparently by the Client Gateway and Service Interface libraries.
It really depends on how big your project/app is whether or not you want to go through the effort of mapping your DTOs to the client and server domain models. For large applications the general approach on the client would be to map your DTOs to your UI models and have your UI views bind to those. On the server you would map your DTOs to your domain models and, depending on the implementation of the service, persist them.
REST is an architectural pattern which, for small projects, I consider additional overhead/complexity, as it is not as good a programmatic fit as RPC / document-centric web services. In a few words, the general idea of REST is to develop your services around resources. These resources can have multiple representations, which your web service should provide depending on the preferred content type indicated by your HTTP client (i.e. in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g. /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically, REST is a nice approach promoting a loosely coupled architecture and reuse; however, it requires more effort to adhere to than standard RPC/document-based web services, and its benefits are unlikely to be visible in small projects.
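As a rough sketch of the resource style described above (the CustomerReport type, route and data are invented for the example), a Go handler might pick the representation from the Accept header like this:

    package main

    import (
        "encoding/json"
        "encoding/xml"
        "net/http"
        "strings"
    )

    // CustomerReport is an invented resource type for the example.
    type CustomerReport struct {
        ID    string `json:"id" xml:"id"`
        Title string `json:"title" xml:"title"`
    }

    // reportHandler serves /customers/reports/{id}; the representation is chosen
    // from the client's Accept header, with JSON as the default.
    func reportHandler(w http.ResponseWriter, r *http.Request) {
        id := strings.TrimPrefix(r.URL.Path, "/customers/reports/")
        report := CustomerReport{ID: id, Title: "Quarterly summary"}

        if strings.Contains(r.Header.Get("Accept"), "application/xml") {
            w.Header().Set("Content-Type", "application/xml")
            xml.NewEncoder(w).Encode(report)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(report)
    }

    func main() {
        // e.g. GET /customers/reports/1 rather than /GetCustomerReports?Id=1
        http.HandleFunc("/customers/reports/", reportHandler)
        http.ListenAndServe(":8080", nil)
    }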
If you're still evaluating what web service technology to use, you may want to consider using my open source web framework, as it is optimized for this task. The DTOs that you use to define your web service interface can be re-used on the client (which is not normally the case) to provide a strongly typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling each web service you create to be called via SOAP 1.1/1.2, XML and JSON endpoints automatically, without any extra configuration, so you can choose the most optimal endpoint for every client scenario, i.e. native desktop or web app, etc.
My recent preference, which is based on J2EE6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
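As a small illustration of that point (the route and cache lifetime are made up), a read-mostly GET handler can opt in to HTTP-infrastructure caching simply by setting cache headers:

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // catalogHandler serves a read-mostly resource; the Cache-Control header
    // lets proxies and CDNs answer repeat GETs without hitting the application.
    func catalogHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Cache-Control", "public, max-age=300")
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode([]string{"video-1", "video-2"})
    }

    func main() {
        http.HandleFunc("/catalog", catalogHandler)
        http.ListenAndServe(":8080", nil)
    }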
But REST doesn't seem to fit some kinds of business logic very well. For instance, in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since the operation read data from a remote source, used it to do a lot of creates and updates to a local database, then deleted old data, and finally told an external system to do its own work. So I settled on making it a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher-level workflow.
Since you're using Android, take a look at the Java Restlet Framework. There's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true, it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself: does it make sense? REST is a tool like any other, and while a good one, it's not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data, maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is: what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info; it's very extensible, so you can add a lot of custom metadata and still have it be easily parsed with any Atom library. Microsoft uses AtomPub in its cloud [Azure] platform to talk between the data producers and consumers. Just a thought.