Jumping into N-Tier architecture with WCF?

I work for a large state government agency that is a tad behind the times. Our skill sets are outdated and budgetary freezes prevent any training or hiring of new employees/consultants (firing people is also impossible). Designing business objects, implementing design patterns, establishing code libraries and services, unit testing, source control, etc. are all things that you will not find being done here. We are as much of a 0 on the Joel Test as you can possibly get. The good news is that we can only go up from here!
We develop desktop CRUD applications (in C++, C#, or Java) that hit the Oracle database directly through an ODBC connection. We basically have GUIs littered with SQL statements and patchwork code. We have been told to move towards a service-oriented n-tier architecture to prevent direct access to the database and remove the need for the Oracle Client on user machines.
Is WCF the path we should be headed down? We've done a few of the n-tier application walkthroughs (like this one) and they seem easy to implement, but we just don't know enough to understand if we are even considering the right technologies. Utilizing the .NET-generated typed DataSets seems like a nice stopgap to save us months/years of work (as opposed to creating new business objects from the ground up for numerous projects). Is this canned approach viable for a first step?

I recently started using WCF services for my Data Layer in some web applications and I must say, it's frustrating at the beginning (the first week or so), but it is totally worth it once the code is deployed.
You should first try it out with a small existing app, or maybe a proof of concept to make sure it will fit your needs.
From the description of the environment you are in, I'm sure you'll realize the benefit almost immediately.

The last company I worked for chose WCF for almost the exact reason you describe above. There is lots of good documentation and there are books for WCF, it's relatively easy to get working, and WCF supports a lot of configuration options.
There can be some headaches when you start trying to bend WCF to work in ways it wasn't specifically designed for out of the box. These are generally configuration issues. But sites like this or IDesign can help you through those.
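To give a feel for those configuration options, here is a minimal sketch of self-hosting a WCF service and wiring up an endpoint in code (the contract, implementation, and address are made-up placeholders); the same endpoint/binding/contract triple can instead be driven from app.config, which is where the configuration headaches usually show up:

    using System;
    using System.ServiceModel;

    // Placeholder contract and implementation, purely for illustration.
    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        string GetOrderStatus(int orderId);
    }

    public class OrderService : IOrderService
    {
        public string GetOrderStatus(int orderId)
        {
            // A real implementation would call the data layer here.
            return "Order " + orderId + ": Shipped";
        }
    }

    class Program
    {
        static void Main()
        {
            // Self-hosting keeps the example small; in production you would
            // typically host in IIS and configure the endpoint in app.config.
            using (var host = new ServiceHost(typeof(OrderService),
                                              new Uri("http://localhost:8000/")))
            {
                host.AddServiceEndpoint(typeof(IOrderService),
                                        new BasicHttpBinding(),
                                        "OrderService");
                host.Open();
                Console.WriteLine("Service running. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }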

First of all, I would definitely not (sorry for the emphasis) worry about the time you'll save using typed DataSets versus creating your own business objects. That is usually not where you will spend most of your development time. I prefer using business objects myself.
In your situation I would want to implement a proof of concept first, one that addresses all the issues you may encounter. This proof of concept should implement an entire use case, starting on the client, retrieving data from the database, and returning it to the client. You should feel confident about your implementation before continuing.
Then there is the choice of technology. WCF is definitely a good choice for communication between your client applications and the service layer. I suppose that both your clients and your service layer will become C# applications? That makes things a lot easier, since interoperability between different platforms (Java/C#, for example) is still not trivial, although it should work in most cases.
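As a rough sketch of what that client-to-service-layer boundary looks like in C# (the names here are placeholders, not taken from your project), the service layer exposes an operation contract plus plain data contracts, and the desktop client only ever talks to this interface instead of hitting Oracle directly:

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Placeholder DTO; the client works with this shape rather than raw tables.
    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // Placeholder service contract sitting between the desktop apps and Oracle.
    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        List<CustomerDto> FindCustomers(string nameFilter);

        [OperationContract]
        void SaveCustomer(CustomerDto customer);
    }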

Take a look at Entity Framework (there are a couple of Oracle providers available for it already) in conjunction with .NET 3.5 SP1, which enables built-in WCF serialization of your EF-generated classes.
Here is a good blog to get started: http://blogs.msdn.com/dsimmons
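A minimal sketch of what that can look like, assuming an EF-generated context named OrdersEntities with an Orders entity set (both names, and the entity properties, are placeholders); in .NET 3.5 SP1 the generated entity classes carry DataContract attributes, so they can be returned straight from a WCF operation:

    using System.Collections.Generic;
    using System.Linq;
    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderQueryService
    {
        [OperationContract]
        List<Order> GetOrdersForCustomer(int customerId);
    }

    public class OrderQueryService : IOrderQueryService
    {
        public List<Order> GetOrdersForCustomer(int customerId)
        {
            // OrdersEntities and Order come from the EF designer (placeholders here).
            using (var context = new OrdersEntities())
            {
                // Materialize the results before the context is disposed.
                return context.Orders
                              .Where(o => o.CustomerId == customerId)
                              .ToList();
            }
        }
    }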

CSLA might be a good fit for your N-Tier desktop apps. It supports WCF, has a large dev community, and is well documented. It is very object-oriented.
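As a taste of the style, a CSLA business object looks roughly like this (exact API details vary by CSLA version, so treat it as a sketch rather than the definitive pattern); data access then goes through the data portal (e.g. DataPortal.Fetch<Customer>(id)), which is where the WCF channel plugs in:

    using System;
    using Csla;

    [Serializable]
    public class Customer : BusinessBase<Customer>
    {
        // Managed property registration is the usual CSLA pattern; business
        // and validation rules can then be attached to the property.
        public static readonly PropertyInfo<string> NameProperty =
            RegisterProperty<string>(c => c.Name);

        public string Name
        {
            get { return GetProperty(NameProperty); }
            set { SetProperty(NameProperty, value); }
        }
    }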

Related

SOA Solution Design and Service Encapsulation: Should I keep service codebases in separate solutions, or all together in one?

I am developing an SOA project using principles I learned in Udi Dahan's Advanced Distributed Systems Design course. So far the project has 6 completely independent services, as well as an IT/Ops service for aggregation and integration with third-party services. I also have a set of libraries I've developed for use across all of these services, containing infrastructure code, I/O, and utilities.
Is it better to keep these services' code together under one solution, or to keep them in entirely separate solutions? Or both? Integration testing will be much simpler with one solution, but I'm concerned about service encapsulation - my gut tells me I don't want a developer working on one service to know how another service works, because it can lead to unintentional coupling. Has anyone run into real-world issues with service encapsulation when keeping all codebases under one solution?
I would keep the services in separate solutions or even separate repositories.
I don't think it has much to do with service encapsulation; it's more about dealing with code contention and making it hard to take dependencies between services.
As for common code (like contracts), put it in a separate repository and use NuGet (or similar) to reference it from your solutions.
Make sense?

Where does the business logic go in a WCF Data Service using Entity Framework?

I was looking at how you can create a WCF Data Service around an Entity Framework context and consume it as an EF context as well.
Creating an OData API for StackOverflow including XML and JSON in 30 minutes
I have really just started looking at this, but I was wondering where the business logic would go. As a service, I would expect that you couldn't just freely add/delete etc. without some validation.
If I wrote an MVC app to consume this service, how would I best implement business logic? Not simple property-level validation that you could do with attributes, but more complex rules that need to check the data store first, etc.
It sounds like you need a custom data service provider (msdn link). They're quite powerful and give you full control over all of your reading/writing logic.
For example I wrote one that enforces our licensing logic in the update provider.
You can put some in the Data Service class, but you are limited as to what you can and can't do there. And then of course you can put some in the client above the service, but that's not ideal either.
I've only spent a few weeks with WCF Data Services but you highlight (one of) the big problems with it - lack of flexibility. It's fantastic for rapid development and banging out LOB applications, but anything with a deliberate design is very difficult to implement. I needed to include objects in my entity model simply to allow them to be exposed through the service, and I had huge headaches just trying to extend those classes with a simple property.
I'd only recommend using WCF Data Services for trivially simple applications that needed extremely fast development - a one or two day development cycle, for example. Anything else is worth doing thoroughly with regular WCF services, writing your own data layer and so on.
Depending on your specific needs, it sounds like Web API might be a good fit. Web API may never get the full range of OData support that WCF Data Services has, but it does make certain things easier (like adding business logic). I'm quite confident that Web API's initial support for OData will cover a significant number of use cases, and that support will grow over time.
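For what it's worth, the business-logic story in Web API is just ordinary controller code. A rough sketch (Product and the in-memory store are placeholders; the [Queryable] attribute from the initial OData support was later renamed EnableQuery in newer versions):

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Http;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool Discontinued { get; set; }
    }

    public class ProductsController : ApiController
    {
        // Stand-in data source; a real app would use a repository or EF context.
        private static readonly List<Product> Store = new List<Product>();

        // Clients can append $filter/$orderby/$top to the request, but the
        // method body is plain C# where business rules run first.
        [Queryable]
        public IQueryable<Product> Get()
        {
            return Store.Where(p => !p.Discontinued).AsQueryable();
        }
    }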
While a custom data provider will almost certainly do anything you want, and may well be a great solution if you have a complex architecture, I wasn't really thrilled when I attempted to save back through the client and found out I had to implement the IUpdatable interface as part of my context. (I was attempting to build a repository pattern out of my context and DataService.)
I'm sure it's very useful for many people, but I really only needed the functionality the Entity provider already contained and didn't have the time in my project schedule to figure out the IUpdatable piece of the custom provider, so my team, specifically Geoff, stuck with the Entity provider and used change and query interceptors to route the DataService requests through our business logic classes on the server. It provides a central point of control. We used these for security checks, running calculations and other operations on insert/update, etc. It turned out great. You can also use service methods as another way to provide specific business logic functions to your clients.
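A sketch of the interceptor approach, assuming an EF context called OrdersEntities with an Orders entity set (placeholder names, including the entity properties used in the rules):

    using System;
    using System.Data.Services;
    using System.Linq.Expressions;

    public class OrdersDataService : DataService<OrdersEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Orders", EntitySetRights.All);
        }

        // Runs for every query against Orders; the returned predicate is
        // merged into the query, e.g. as a security filter.
        [QueryInterceptor("Orders")]
        public Expression<Func<Order, bool>> OnQueryOrders()
        {
            return o => o.IsVisibleToCurrentUser;
        }

        // Runs before each insert/update/delete on Orders; throw to reject
        // the change, or adjust the entity to enforce business rules.
        [ChangeInterceptor("Orders")]
        public void OnChangeOrders(Order order, UpdateOperations operations)
        {
            if (operations == UpdateOperations.Add && order.Total < 0)
                throw new DataServiceException(400, "Order total cannot be negative.");
        }
    }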

MPI vs. Microsoft WCF vs. Microsoft TPL

I have a scientific program written in F# that I want to parallelize and run on one server with multiple processors (64), and in the future also in the cloud (Windows Azure?). The program will have simple 1-1 communication between the nodes (no broadcast etc.).
If I used WCF, would it be as fast as MPI? What does MPI have that WCF does not? There is also Pure MPI.NET, written for WCF, which puzzles me even more. I do not know whether to use WCF, MPI.NET, or Pure MPI running on WCF.
PS: I guess that TPL is out of the game for 64 processors and more, right?
It is difficult to give a concrete answer, because it all depends on the specific aspects of your application, its current architecture (I suppose you already have some app) etc.
As you mention MPI and WCF, I assume that the application is written as several components that communicate with each other. The best way to structure this kind of application is to use F# agents.
As far as I understand, you want to run the application on a single server first. If you write it using agents, the agents can just communicate directly with each other (so you don't need MPI or WCF).
TPL should work well on a single server (with lots of CPUs), but it will not scale to the distributed setting - you cannot run a Task on another machine. However, you can use it inside the individual components (e.g. agents) that will be distributed.
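For the single-server part, a minimal sketch of what TPL gives you (shown in C# here for brevity; the same Task Parallel Library is callable from F#):

    using System;
    using System.Threading.Tasks;

    class ParallelDemo
    {
        static void Main()
        {
            var results = new double[1000];

            // TPL spreads the iterations across the available cores on this
            // machine; it does not distribute work to other machines.
            Parallel.For(0, results.Length, i =>
            {
                results[i] = Math.Sqrt(i) * Math.Sin(i);
            });

            Console.WriteLine("Computed " + results.Length + " values in parallel.");
        }
    }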
Regarding MPI vs. WCF - I don't have enough experience to answer that. However, if you use an agent-based architecture, it should be easy to try various options. You may also check out fracture and related projects, which aim to implement high-performance sockets for F# (and possibly distributed agents in the future).
If you're running on one server, you could just use a single process and execute the code in parallel. That way you can share memory more easily and faster than passing messages as MPI and WCF do, although the communication overhead might not be that large, depending on your problem and solution.
The changes to your code would also be much smaller that way; F# can usually be turned into parallel code with little effort, whereas going to MPI/WCF would require you to rewrite large portions.
Googling for F# + parallel gives plenty of useful info that you should read first, like this for a good start:
http://blogs.msdn.com/b/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx
So on one server I would use the parallel features of F#; it's designed to parallelize easily.
Later, when you want to go to the cloud, that means turning it into client-server. That's a different problem than parallelization; I would treat and solve them separately.
On MPI vs. WCF: WCF is designed as an RPC technology, i.e. you call remote procedures and get answers. If you want to use it for parallel programming with separate processes, you would have to write the boilerplate code for that yourself (keeping track of subscribed clients, etc.).
MPI was designed to run that kind of architecture and handles it much more easily (the first process gets rank 0 and is the master, the others are slaves and get numbered incrementally, etc.).
However, I don't think MPI will be very good for going to the cloud, since that involves HTTP, protocols, security, etc. I'm not sure how well MPI works for those kinds of things; WCF will handle them very well indeed.
The fact that there is an MPI.NET for WCF is because MPI represents a style of parallelizing code that a lot of people are familiar with, so you can take those programming concepts and use them on the .NET platform, leveraging WCF for the communication.
Something else you might want to look into, if you need to exchange a lot of data over the wire, is protocol buffers (see protobuf-net, for instance). That can easily be combined with WCF for communication and is very lean at serializing structured data, so you can send it over the wire efficiently.
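A small sketch of the protobuf-net style (the DTO is made up); each member gets a stable numeric tag that defines the wire format, and with WCF you would normally plug protobuf-net in as an endpoint behavior so the serialization happens transparently:

    using System;
    using System.IO;
    using ProtoBuf;

    [ProtoContract]
    public class TickData
    {
        [ProtoMember(1)] public long TimestampTicks { get; set; }
        [ProtoMember(2)] public double Price { get; set; }
        [ProtoMember(3)] public int Volume { get; set; }
    }

    class ProtoDemo
    {
        static void Main()
        {
            var tick = new TickData { TimestampTicks = 1, Price = 101.5, Volume = 10 };

            // Manual round-trip just to show the API; the payload is far
            // smaller than the equivalent XML from DataContractSerializer.
            using (var stream = new MemoryStream())
            {
                Serializer.Serialize(stream, tick);
                stream.Position = 0;
                TickData copy = Serializer.Deserialize<TickData>(stream);
                Console.WriteLine(copy.Price);
            }
        }
    }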
Gert-Jan
WCF and MPI are different concepts. WCF is like person A asking person B to do something, whereas MPI is like person A creating clones of himself (all clones have the same abilities/logic); these clones then work on specific parts of the problem to be solved, and once done they combine their results.
So choosing which one fits your specific application depends on the problem your application is trying to solve. It may even be a combination of both WCF and MPI, where your client application asks the WCF service to do some task, the WCF service creates clones of the "problem solver" using MPI, and when the clones are done solving the problem (in parallel) they return the aggregated result back to the WCF service, which then sends that result to the client application.
You might also want to take a look at the 'mbrace' product, which provides a cloud monad (http://blogs.msdn.com/b/dsyme/archive/2011/08/23/m-brace-f-in-the-cloud.aspx). It's still at a fairly early stage, though. I'm no expert, but it may be that you can run an mbrace-based solution as effectively a private cloud on your 64-processor setup. When you outgrow that, a move to Azure would be seamless.

Is it advisable to use the canonical form in a Silverlight application?

We are developing a LOB application using Silverlight and several team members are advocating the use of the canonical design pattern instead of creating simple WCF services. As the lead, I'm trying to balance best practices with an incredibly tight timeline.
Here are the reasons I do NOT think Canonical is a good approach for our project.
We have no immediate (<5 years) requirement to expose any internal services to the enterprise.
Time required for governance. (Developing adapters with data transformation logic, developing XSDs, and developing contracts [fault, data, and operation]).
No need to expose different data contracts than what already exists in the data layer.
It doesn't appear that we can easily use 'self-tracking entities' with the Canonical approach.
Here are some reasons I’m considering using Canonical approach.
We can use the XSD schemas for data type and length validation.
We will be prepared to allow consumption of our services to the enterprise, whether it’s 5 years or 1 year.
We can feel good that we’re implementing best practices. :)
So, is it advisable to follow the Canonical approach with a Silverlight application? It does not seem that the benefits the Canonical approach provides outweigh the additional work... or perhaps I'm wrong and it's not additional work.
I think you should definitely go with WCF RIA Services. It's extensible at every possible point, it's fast to develop with, it's accessible as regular WCF services, it has plenty of different endpoint types available, and it's generally very mature. It also implements best practices, and the validation process is fully customizable. It really is a no-brainer; if you have additional questions about it, shoot away and I'll gladly answer them. :)
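For context, the shape of a RIA Services domain service over an EF model looks roughly like this (the context and entity names are placeholders, and exact namespaces depend on the RIA Services version you install); the Silverlight client gets generated proxy code plus the shared validation rules:

    using System.Linq;
    using System.ServiceModel.DomainServices.EntityFramework;
    using System.ServiceModel.DomainServices.Hosting;

    // Exposes query and update operations to the Silverlight client.
    [EnableClientAccess]
    public class InvoicingDomainService : LinqToEntitiesDomainService<InvoicingEntities>
    {
        public IQueryable<Invoice> GetOpenInvoices()
        {
            // Server-side business/security filtering happens here.
            return ObjectContext.Invoices.Where(i => !i.IsPaid);
        }

        public void InsertInvoice(Invoice invoice)
        {
            // Custom validation/business rules can run before the change is applied.
            ObjectContext.Invoices.AddObject(invoice);
        }
    }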

RIA Services vs. WCF Data Services

My team is evaluating a bigger business portal (invoicing, bookkeeping, salaries, ...).
We are all used to working with DDD and O/R mappers, with NHibernate as our first choice.
We have chosen to work with CompositeWPF to keep modularity between all the modules and subsystems in the business portal.
Now we have evaluated RIA Services and are kind of disappointed by how data-oriented it is. Data-oriented can be good in a service-oriented scenario, but we feel we can achieve that with an object-oriented approach too, and that we can get an application with less complexity with the OO approach than with the data-oriented one.
For example, it doesn't allow value objects or many-to-many relations, everything needs to have keys, and so on.
We haven't looked at WCF Data Services yet, so our question is: is WCF Data Services our answer? Does it integrate well with Silverlight 4? Can we work with it in an OO manner?
RIA / WCF Data Services are not about replacing O/R mappers etc. They are about exposing data in an open format to another application. Not high-end, but basically for integration. It is IMHO pretty stupid to put that within an application, but it is a great external interface, especially as it gets tooling support.
Good examples:
Bank account access. If only I could do home banking using OData ;) and get my account statements into Excel.
Trading ;) Yeah, OK - I have a trading server (that then connects to various brokers) and a web front end. I will now expose certain data through OData, too, so I can easily get things out into Excel etc., or even use a Silverlight application for some stuff... but I will NOT use OData within one application to replace my object infrastructure - way too much overhead.
eBay could provide an OData interface for larger customers. Nice to get an overview of your auctions AND do some basic maintenance on your account. Nothing high-performance, but again, TOOLING support. Excel, Reporting Services, etc. will all soon support OData.
If you look at it from that integration point of view, it makes a LOT more sense. It is not a full environment - that "never" works. It is a great standard, though, for opening up an application with semantics (better than plain web services - standardized query and filter logic) AND tooling support.
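Concretely, opening an application up that way can be as small as the following sketch of a read-only WCF Data Services endpoint over an EF model (ReportingEntities is a placeholder for the generated context):

    using System.Data.Services;
    using System.Data.Services.Common;

    public class ReportingService : DataService<ReportingEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Expose every entity set read-only: enough for Excel, reporting
            // tools and other OData consumers, with no writes allowed.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }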
I somehow don't really run into many of the problems you mention, though:
Anything I work with has a key by definition.
I never do many-to-many relations. I always have an intermediate object WITH A KEY... so that I can add properties to it (even if it's only a timestamp).
The services ARE data-oriented, and seriously - I love them. I am a big OO fan, but the tooling support makes them a PERFECT external interface for applications.