Layers on a WCF service - wcf

Okay, I've got this WCF service going. Its public face is the main service class itself (HydSQLService), which contains a DataContext for access to the database. This DataContext was generated by SQLMetal.exe, although I created a partial class to fill in the partial methods.
So this question is more about how to layer this application. At the moment, the service (i.e. the publicly exposed bit) holds a reference to the DataContext object and goes through it to access the SQL database.
I intend to add a layer between these for server side validation, but I'm not sure if I'm missing a layer or something (I'm somewhat new to all this).
So is this the right amount of layers? Is it structured correctly, or have I made some horrendous oversight? Suggestions would be welcome.

The answer is - as always - it depends.
To understand the pros and cons of your architecture as described, we would need to know a whole lot more about the requirements and environment that you're working with. However, the fact that you have layers is likely a good thing. The fact that you're thinking about this aspect of your application is definitely a good thing.
In general, we add layers to solve a few challenges:
Separation of concerns. Having a layer handle one aspect of the application (and handle it well) is seldom a bad thing. This allows you to rip out that layer and replace it without rewriting the rest of the application.
Testability - It's often beneficial to test the layers in isolation (e.g. with automated unit tests) to ensure each piece is working correctly.
Abstract away common functions (data access, validation, etc). This can make the application easier to maintain. For example, not having to maintain a bunch of data access specific code in the middle of a business object layer is nice.
This sort of question is difficult to answer specifically in this context. You would need a much more in-depth review to get the kind of feedback/direction you're looking for.
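That said, to make the validation layer you mention concrete, here is one minimal sketch of a layer sitting between the service class and the DataContext. All the names (CustomerRepository, HydDataContext, ValidationFault, Customer) are invented for illustration; only the LINQ to SQL calls (InsertOnSubmit, SubmitChanges) and FaultException are real APIs.

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical fault contract carrying validation errors back to the client.
[DataContract]
public class ValidationFault
{
    public ValidationFault(IList<string> errors) { Errors = errors; }

    [DataMember]
    public IList<string> Errors { get; set; }
}

// The middle layer: the service class calls this instead of touching the
// DataContext directly, so all server-side validation lives in one place.
public class CustomerRepository
{
    private readonly HydDataContext _context; // the SQLMetal-generated context

    public CustomerRepository(HydDataContext context)
    {
        _context = context;
    }

    public void Save(Customer customer)
    {
        // Validate before the DataContext ever sees the object.
        var errors = Validate(customer);
        if (errors.Count > 0)
            throw new FaultException<ValidationFault>(new ValidationFault(errors));

        _context.Customers.InsertOnSubmit(customer); // LINQ to SQL
        _context.SubmitChanges();
    }

    private static IList<string> Validate(Customer customer)
    {
        var errors = new List<string>();
        if (string.IsNullOrEmpty(customer.Name))
            errors.Add("Name is required.");
        return errors;
    }
}
```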

Related

How to design a business logic layer

To be perfectly clear, I do not expect a solution to this problem. A big part of figuring this out is obviously solving the problem. However, I don't have a lot of experience with well architected n-tier applications and I don't want to end up with an unruly BLL.
At the moment of writing this, our business logic is largely an intermingled ball of twine, an intergalactic mess of dependencies with identical business logic replicated more than once. My focus right now is to pull the business logic out of the thing we refer to as a data access layer, so that I can define well-known events that can be subscribed to. I think I want to support an event-driven/reactive programming model.
My hope is that there are certain attainable goals that tell me how to design this collection of classes in a manner well suited to business logic. If there are things that differentiate a good BLL from a bad BLL, I'd like to hear more about them.
As a seasoned programmer but fairly modest architect I ask my fellow community members for advice.
Edit 1:
So the validation logic goes into the business objects, but that means that the business objects need to communicate validation errors/logic back to the GUI. That gets me thinking of implementing business operations as objects rather than methods, to provide a lot more metadata about the necessities of an operation. I'm not a big fan of code cloning.
Kind of a broad question. Separate your DB from your business logic (horrible term) with ORM tech (NHibernate perhaps?). That lets you stay in OO land mostly (obviously) and you can largely ignore the DB side of things from an architectural point of view.
Moving on, I find Domain Driven Design (DDD) to be the most successful method for breaking a complex system into manageable chunks, and although it gets no respect I genuinely find UML, especially activity and class diagrams, to be critically useful in understanding and communicating system design.
General advice: Interface everything, build your unit tests from the start, and learn to recognise and separate the reusable service components that can exist as subsystems. FWIW if there's a bunch of you working on this I'd also agree on and aggressively use stylecop from the get go :)
I have found some of the practices of Domain Driven Design to be excellent when it comes to splitting up complex business logic into more manageable/testable chunks.
Have a look through the sample code from the following link:
http://dddpds.codeplex.com/
DDD focuses on your Domain layer or BLL if you like, I hope it helps.
We're just talking about this from an architecture standpoint, and the gist of it remains "abstraction, abstraction, abstraction".
You could use EBC (Event-Based Components) to design top-down and pass the interface definitions to the programmer teams. Using a methodology like this (or any other visualization technique) to visualize the dependencies prevents you from duplicating business logic anywhere in your project.
Hmm, I can tell you the technique we used for a rather large database-centered application. We had one class which managed the data layer, as you suggested, with the suffix DL. We had a program which automatically generated this source file (which was quite convenient), though it also meant that if you wanted to extend the functionality, you needed to derive from the class, since regenerating the source would otherwise overwrite your changes.
We had another file ending with OBJ which simply defined the actual database row handled by the data layer.
And last but not least, with a well-formed base class there was a file ending in BS (standing for business logic) as the only file not generated automatically. It defined event methods such as "New" and "Save" such that calling the base performed the default action. Therefore, any deviation from the norm could be handled in this file (including complete rewrites of the default functionality if necessary).
You should create a single group of such files for each master table and the child (or grandchild) tables which derive from it. You'll also need a factory which contains the full names of all objects so that any object can be created via reflection. So to patch the program, you'd merely have to derive from the base functionality and update a line in the database so that the factory creates that object rather than the default.
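As a rough illustration of that factory idea (all type names below are invented, and CustomerBS stands in for a generated business-logic class with a virtual Save), reflection-based creation in .NET is just Type.GetType plus Activator.CreateInstance:

```csharp
using System;

// The database stores a full type name per table, so a patch only has to
// point it at a derived class instead of the default.
public static class BusinessObjectFactory
{
    public static object Create(string fullTypeName)
    {
        // e.g. "MyApp.Business.CustomerBS" by default, or
        // "MyApp.Patches.CustomerBSPatched" after the database row is updated.
        Type type = Type.GetType(fullTypeName, true /* throwOnError */);
        return Activator.CreateInstance(type);
    }
}

// A deviation from the generated default is then just a derived class:
public class CustomerBSPatched : CustomerBS
{
    public override void Save()
    {
        base.Save(); // default behaviour from the generated/base layer
        // ...patched behaviour goes here.
    }
}
```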
Hope that helps, though I'll leave this a community wiki response so perhaps you can get some more feedback on this suggestion.
Have a look at this thread; it may give you some thoughts.
How should my business logic interact with my data layer?
This guide from Microsoft could also be helpful.
Regarding "Edit 1" - I've encountered exactly that problem many times. I agree with you completely: there are multiple places where the same validation must occur.
The way I've resolved it in the past is to encapsulate the validation rules somehow. Metadata/XML, separate objects, whatever. Just make sure it's something that can be requested from the business objects, taken somewhere else and executed there. That way, you're writing the validation code once, and it can be executed by your business objects or UI objects, or possibly even by third-party consumers of your code.
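For example, a rule could be a small, self-describing object that any layer can execute. This is just one sketch of the idea, with invented names:

```csharp
using System.Collections.Generic;

// Each rule carries its own description, so it doubles as metadata.
public interface IValidationRule<T>
{
    string Description { get; }
    bool IsSatisfiedBy(T candidate);
}

public class LastNameRequiredRule : IValidationRule<Customer>
{
    public string Description
    {
        get { return "Last name is a required field."; }
    }

    public bool IsSatisfiedBy(Customer candidate)
    {
        return !string.IsNullOrEmpty(candidate.LastName);
    }
}

public partial class Customer
{
    public string LastName { get; set; }

    // The business object hands out its rules; the UI can run the same
    // list client-side, and the business layer runs it again on save.
    public IEnumerable<IValidationRule<Customer>> GetRules()
    {
        yield return new LastNameRequiredRule();
    }

    public IEnumerable<string> BrokenRules()
    {
        foreach (var rule in GetRules())
            if (!rule.IsSatisfiedBy(this))
                yield return rule.Description;
    }
}
```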
There is one caveat: some validation rules are easy to encapsulate/transport; "last name is a required field" for example. However, some of your validation rules may be too complex and involve far too many objects to be easily encapsulated or described in metadata: "user can include that coupon only if they aren't an employee, and the order is placed on labor day weekend, and they have between 2 and 5 items of this particular type in their cart, unless they also have these other items in their cart, but only if the color is one of our 'premiere sale' colors, except blah blah blah...." - you know how business 'logic' is! ;)
In those cases, I usually just accept the fact that there will be some additional validation done only at the business layer, and ensure there's a way for those errors to be propagated back to the UI layer when they occur (you're going to need that communication channel anyway, to report back persistence-layer errors).

approaches to WCF service version control

We are implementing numerous services in our company and running into versioning issues with data contracts. One of the problems we have is that our data contracts are also used as the model of the actual application behind the service. I was wondering what approach others have taken in this kind of situation, or to service versioning in general. I am aware of the Microsoft best practices guide but wanted to see if anybody has any other ideas on how to version.
The first rule of Services: Business Object != Message Object. Basically, never expose your business objects as data contracts. Or as I like to say, you can't fax a cat. You can send a facsimile of a cat, but you can't send a cat over the wire. Here's a great picture to remind you: http://www.humorhound.com/2009/04/demotivational-poster-youre-doing-it-wrong/
In more modern terms, it is really the MVVM pattern. The view of the model that the domain layer uses is not built for a client, so you have to create a separate model and view for the other layers. Yes, it seems like a lot more work, but in the end it is a much easier and better way to build service-oriented applications. Versioning is just one of the ways that it makes life easier. The other important thing is that you tend to build models that are geared around how they are going to be used, and you wind up with more explicit code (less crazy branching).
The way that we have implemented this is to build a facade layer on top of the business layer.
The facade layer talks to the rest of the world using the objects defined in the data contracts.
The facade layer maps the objects to internal objects before sending the data into the business layer.
This isolates the internal functionality of your system from the objects used in the data contracts.
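A minimal sketch of that facade shape, with all type names invented, might look like this:

```csharp
using System.Runtime.Serialization;

// The contract type that crosses the wire, versioned independently of the model.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// The facade maps contracts to internal objects at the boundary, so the
// business layer never sees a CustomerDto and clients never see a Customer.
public class CustomerFacade
{
    private readonly CustomerService _service = new CustomerService(); // business layer

    public CustomerDto GetCustomer(int id)
    {
        Customer customer = _service.Load(id);
        return new CustomerDto { Id = customer.Id, Name = customer.Name };
    }

    public void SaveCustomer(CustomerDto dto)
    {
        _service.Save(new Customer { Id = dto.Id, Name = dto.Name });
    }
}
```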

traversing object graph from n-tier client

I'm a student currently dabbling in a .Net n-tier app that uses Nhibernate+WCF+WPF.
One of the things that is done quite terribly is object graph serialisation. In fact, it isn't done at all; currently associations are ignored and we are using DTOs everywhere.
As far as I can tell, one method to proceed is to predefine which objects and collections should be loaded and serialised to go across the wire, thus being able to present some associations to the client; however, this seems limited, inflexible and inconsistent (can you tell that I don't like this idea?).
One option that occurred to me was to simply replace the NHProxies that lazy-load collections on the client tier with a "disconnected proxy" that would retrieve the associated stuff over the wire. This would mean that we'd have to expand our web service signature a little and do some hackery on our generated proxies, but this seemed like a good T4/other code-gen experiment.
As far as I can tell this seems to be a common stumbling block but after doing a lot of reading I haven't been able to figure out any good/generally accepted solutions. I'm looking for a bit of direction as much as any particular solution, but if there is an easy way to make the client "feel" connected please let me know.
You ask a very good question that unfortunately does not have a very clean answer. Even if you were able to get lazy loading to work over WCF (which we were able to do) you still would have issues using the proxy interceptor. Trust me on this one, you want POCO objects on the client tier!
What you really need to consider... what has been conceived as the industry-standard approach to this problem, from the research I have seen, is called persistence vs. usage, or persistence ignorance. In other words, your object model and mappings represent your persistence domain, but that does not match your ideal usage scenarios. You don't want to bring the whole database down to the client just to display a couple of properties, right?
It seems like such a simple problem but the solution is either very simple, or very complex. On one hand you can design your entities around your usage scenarios but then you end up with proliferation of your object domain making it difficult to maintain. On the other, you still want the rich object model relationships in order to write granular business logic.
To simplify this problem, let's examine the two main gaps we need to fill: the one between the database and the service layer, and the one between the service and the client. NHibernate fills the first one just fine by providing an ORM to load data into your objects. It does a decent job, but in order to achieve great performance it needs to be tweaked using custom loading strategies. I digress...
The second gap, between the server and client, is where things get dicey. To simplify, imagine if you did not send any mapped entities over the wire to the client. Try creating a mechanism that exchanges business entities for DTO objects, and likewise DTO objects for business entities. That way your client deals only with DTOs (POCOs, of course), and your business logic can maintain its rich structure. This allows you to leverage not only NHibernate's lazy loading mechanism, but other benefits from the session such as the L1 cache.
For brevity and intellectual property reasons I will not go into the design of said mechanism, but hopefully this is enough information to point you in the right direction. If you don't care about performance or latency at all... just turn lazy loading off altogether and work through the serialization issues.
It has been a while for me, but the injection/disconnected proxies may not be as bad as they sound. Since you are a student I am going to assume you have some time and want to muck around a bit.
If you want to inject your own custom serialization/deserialization logic you can use IDataContractSurrogate which can be applied using DataContractSerializerOperationBehavior. I have only done a few basic things with this but it may be worth looking into. By adding some fun logic (read: potentially hackish) at this layer you might be able to make it more connected.
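For reference, a skeleton surrogate might look like the following. The members are the real IDataContractSurrogate interface; the proxy-unwrapping logic is only a placeholder, since real proxy detection depends on the proxy library in use:

```csharp
using System;
using System.CodeDom;
using System.Collections.ObjectModel;
using System.Reflection;
using System.Runtime.Serialization;

// Sketch: swap NHibernate proxy types for their underlying entity types
// at the serialization boundary.
public class ProxyUnwrappingSurrogate : IDataContractSurrogate
{
    public Type GetDataContractType(Type type)
    {
        // If 'type' is a runtime proxy, report its entity base type instead.
        // (Placeholder check; real detection depends on your proxy library.)
        return type.Name.Contains("Proxy") ? type.BaseType : type;
    }

    public object GetObjectToSerialize(object obj, Type targetType)
    {
        // Hook: copy the proxy's state into a plain instance here.
        return obj;
    }

    public object GetDeserializedObject(object obj, Type targetType)
    {
        // Hook: re-wrap or re-attach the object on the receiving side.
        return obj;
    }

    // The remaining members only matter for schema import/export.
    public object GetCustomDataToExport(MemberInfo memberInfo, Type dataContractType) { return null; }
    public object GetCustomDataToExport(Type clrType, Type dataContractType) { return null; }
    public void GetKnownCustomDataTypes(Collection<Type> customDataTypes) { }
    public Type GetReferencedTypeOnImport(string typeName, string typeNamespace, object customData) { return null; }
    public CodeTypeDeclaration ProcessImportedType(CodeTypeDeclaration typeDeclaration, CodeCompileUnit compileUnit) { return typeDeclaration; }
}
```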
Here is an MSDN post from someone who came to the same realization: the DynamicProxy used by NHibernate makes it impossible to directly serialize NHibernate objects that do lazy loading.
If you are really determined to transport the object graph across the network and preserve lazy loading functionality, take a look at some code I produced over here: http://slagd.com/?page_id=6 . Basically it creates a fake session on the other side of the wire and allows the NHibernate proxies to retain their functionality. Not saying it's the right way to do things, but it might give you some ideas.

I've never encountered a well written business layer. Any advice?

I look around and see some great snippets of code for defining rules, validation, business objects (entities) and the like, but I have to admit to having never seen a great and well-written business layer in its entirety.
I'm left knowing what I don't like, but not knowing what a great one is.
Can anyone point out some good OO business layers (or great business objects) or let me know how they judge a business layer and what makes one great?
Thanks
I’ve never encountered a well written business layer.
Here is Alex Papadimoulis's take on this:
[...] If you think about it, virtually every line of code in a software application is business logic:
The Customers database table, with its CustomerNumber (CHAR-13), ApprovedDate (DATETIME), and SalesRepName (VARCHAR-35) columns: business logic. If it wasn't, it'd just be Table032 with Column01, Column02, and Column03.
The subroutine that extends a ten-percent discount to first time customers: definitely business logic. And hopefully, not soft-coded.
And the code that highlights past-due invoices in red: that's business logic, too. Internet Explorer certainly doesn't look for the strings "unpaid" and "30+ days" and go, hey, that sure would look good with a #990000 background!
So how then is it possible to encapsulate all of this business logic in a single layer of code? With terrible architecture and bad code of course!
[...] By implying that a system’s architecture should include a layer dedicated to business logic, many developers employ all sorts of horribly clever techniques to achieve that goal. And it always ends up in a disaster.
I imagine this is because business logic, as a general rule, is arbitrary and nasty. Garbage in, garbage out.
Also, most of the really good business layers are most probably proprietary. ;-)
Good business layers have been designed after a thorough domain analysis. If you can capture the business' semantics and isolate it from any kind of implementation, whether that be in data storage or any specific application (including presentation), then the logic should be well-factored and reusable in different contexts.
Just as a good database schema design should capture business semantics and isolate itself from any application, a business layer should do the same. Even if a database schema and a business layer describe the same entities and concepts, the two should be usable in separate contexts: a database schema shouldn't have to change even when the business logic changes, unless the schema doesn't reflect the current business. A business layer should work with any storage schema, provided that it's abstracted via an intermediate layer. For example, the ADO.NET Entity Framework lets you design a conceptual schema which maps to the business layer and has a separate mapping to the storage schema, which can be changed without recompiling the business object layer or conceptual layer.
If a person from the business side of things can look at code written with the business layer and have a rough idea of what's going on, then it might be a good indication that the objects were designed right: you've successfully conveyed a solution in the problem domain without obfuscating it with artifacts from the solution domain.
I've always been stuck between a rock and a hard place. Ideally, your business logic wouldn't be at all concerned with database or UI-related issues.
Keys Cause Problems
Still, I find things like primary and foreign keys causing problems. Even tools like Entity Framework don't completely eliminate this creep. It can be extremely inefficient to convert IDs passed as POST data into their respective objects, only to pass them to the business layer, which then passes them to the data layer just to be stripped down again.
Even NoSQL databases come with problems. They tend to return full object models, but they usually return more than you need and can lead to problems because you're assuming that object model won't change. And keys are still found in NoSQL databases.
Reuse vs. Overhead
There's also the issue of code reuse. It's pretty common for data layers to return fully populated objects, including every column in that particular table or tables. However, often the business logic only cares about a limited subset of this information. This lends itself to specialized data transfer objects that only carry the relevant data. Of course, you need to convert between representations, so you create a mapper class. Then, when you save, you need to somehow convert these lesser objects back into the full database representation or do a partial UPDATE (meaning another SQL command).
So, I see a lot of business layer classes accepting objects mapping directly to database tables (data transfer objects). I also see a lot of business layers accepting raw UI values (presentation objects) as well. It's also not unusual to see business layers calling out to the database mid-computation to retrieve needed data. Trying to grab it up-front would probably be inefficient (think about how an if-statement can affect the data that gets retrieved), and lazy-loaded values result in a lot of magic or unintended calls out to the database.
Write Your Logic First
Recently, I've been trying to write the "core" code first. This is the code that performs the actual business logic. I don't know about you, but many times when going over someone else's code, I ask the question, "But where does it do [business rule]?" Often, the business logic is so crowded with concerns about grabbing data, transforming it and whatnot that I can't even see it (a needle in a haystack). So now I implement the logic first, and as I figure out what data I need, I add it as a parameter or add it to a parameter object. Getting the rest of the code to fit this new interface usually falls to a mediator class of some kind.
Like I said, though, you have to keep a lot in mind when writing business layers, including performance. The approach above has been useful lately because I don't have rights to version control or the database schema yet. I am working in a dark room with just my understanding of the requirements so far.
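As an illustration of that style (the discount rule and all names here are invented), the logic reads as a pure method over a parameter object:

```csharp
// Where the inputs come from is somebody else's (the mediator's) problem.
public class DiscountInput
{
    public bool IsFirstTimeCustomer { get; set; }
    public decimal OrderTotal { get; set; }
}

public static class DiscountRules
{
    // "First-time customers get ten percent off" is visible at a glance,
    // with no data access tangled into it.
    public static decimal CalculateDiscount(DiscountInput input)
    {
        return input.IsFirstTimeCustomer ? input.OrderTotal * 0.10m : 0m;
    }
}
```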
Write with Testing in Mind
Utilizing dependency injection can be useful for designing a good architecture up-front. Try to think about how you would test your code without hitting a database or other service. This also lends itself to small, reusable classes that can run in multiple contexts.
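For instance, a minimal constructor-injection sketch (Invoice and the other names are invented) keeps the database behind an interface so a test can substitute an in-memory stub:

```csharp
using System.Collections.Generic;

public interface IInvoiceRepository
{
    IEnumerable<Invoice> GetUnpaidOlderThan(int days);
}

public class OverdueInvoiceFinder
{
    private readonly IInvoiceRepository _repository;

    // The dependency comes in through the constructor, so a unit test can
    // pass a stub that never touches a database.
    public OverdueInvoiceFinder(IInvoiceRepository repository)
    {
        _repository = repository;
    }

    public IEnumerable<Invoice> FindOverdue()
    {
        return _repository.GetUnpaidOlderThan(30);
    }
}
```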
Conclusion
My conclusion is that there really is no such thing as a perfect business layer. Even in the same application, there can be times when one approach only works 90% of the time. The best we can do is try to write the simplest thing that works. For the longest time I avoided DTOs and wrapped ADO.NET DataRows with objects so that updates were immediately recorded in the underlying DataTable. This was a HUGE mistake, because I couldn't copy objects and constraints caused exceptions to be thrown at weird times. I only did it to avoid setting parameter values explicitly.
Martin Fowler has blogged extensively about DSLs. I would recommend starting there.
http://martinfowler.com/bliki/dsl.html
It was helpful to me to learn and play with CSLA.Net (if you are a MS guy). I've never implemented a "pure" CSLA application, but have used many of the ideas presented in the architecture.
Your best bet is to keep looking for that elusive magic bullet and use the ideas that best fit the problem you are solving. Keep it simple.
One problem I find is that even when you have a nicely designed business layer it is hard to stop business logic leaking out, and development tools tend to encourage this. For example as soon as you add a validator control to an ASP.NET WebForm you have let business logic leak out into the view. The validation should occur in the business layer and only the results of it displayed in the view. And as soon as you add constraints to a database you then have business logic in your database as well. DBA types tend to disagree strongly with this last point though.
Neither have I. We don't create a business layer in our applications. Instead we use MVC-ARS. The business logic is embedded in the (S) state machine and the (A) action.
Possibly this is because in reality we are never able to fully decouple the business logic from the "process" (the inputs, outputs, and interface), and because people ultimately find it hard to deal with the abstract, let alone relate it back to reality.

Code generators or ORMs?

What do you suggest for Data Access layer? Using ORMs like Entity Framework and Hibernate OR Code Generators like Subsonic, .netTiers, T4, etc.?
For me, this is a no-brainer, you generate the code.
I'm going to go slightly off topic here because there's a bigger underlying fallacy at play. The fallacy is that these ORM frameworks solve the object/relational impedance mismatch. This claim is a barefaced lie.
I find the best way to resolve the object/relational impedance mismatch is to either use OOP exclusively and use an object database or use the idioms of the relational database exclusively and ignore OOP.
The abstraction "everything is a table" is to me, much more powerful than the abstraction "everything is a class." It takes less code, less intellectual effort and leads to faster code when you code to the database rather than to an object model.
To me this seems obvious. If your application is data driven then surely your code should be data driven too? Yet to say this is hugely controversial.
The central problem here is that OOP becomes a really leaky abstraction when used in conjunction with a database. Code that looks perfectly sensible when written to the idioms of OOP looks completely insane when you see the traffic that code generates at the database. When that messiness becomes a performance problem, OOP is the first casualty.
There is really no way to resolve this. Databases work with sets of data. OOP focus on instances of classes. Trying to marry the two is always going to end in divorce.
So to answer your question, I believe you should generate your classes and try and make them map the underlying database structure as closely as possible.
Perhaps controversially, I've always felt that using code generators for the ADO.NET plumbing is fundamentally solving the wrong problem.
At some point, hopefully not too long after learning about Connection Strings, SqlCommands, DataAdapters, and all that, we notice that:
Such code is ugly
It is very boring to write
It's very easy to miss something if you're doing it by hand
It has to be repeated every time you want to access the database
So, the problem to solve is "how to do the same thing lots of times fast"?
I say no.
Using code generators to make this process quick still means that you have a ton of code, all the same, all over your business classes (or your data access layer, if you separate that from your business logic).
And then, if you want to do something generically (like track stored procedure usage, for instance), you end up having to customise your code generator if it doesn't already have the feature you want. And even if it does, you still have to regenerate everything all the time.
I like to do things once, not many times, no matter how fast I can do them.
So I rolled my own Data Access class that knows how to add parameters, set up and close connections, manage transactions, and other cool stuff. It only had to be written once, and calling its methods from a Business object that needs some database stuff done consists of one line of code.
When I needed to make the application support multithreaded database accesses, it required a change to the Data Access class only, and all the business classes just do what they already did.
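The rough shape of such a class, assuming SQL Server and plain ADO.NET (error handling and connection-string management omitted), might be:

```csharp
using System.Data;
using System.Data.SqlClient;

public class DataAccess
{
    private readonly string _connectionString;

    public DataAccess(string connectionString)
    {
        _connectionString = connectionString;
    }

    // All the plumbing lives here once; callers pass a proc name and parameters.
    public DataTable ExecuteProc(string procName, params SqlParameter[] parameters)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(procName, connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddRange(parameters);

            var table = new DataTable();
            new SqlDataAdapter(command).Fill(table); // Fill opens/closes the connection
            return table;
        }
    }
}

// A business object call then collapses to one line:
// var orders = db.ExecuteProc("GetOrders", new SqlParameter("@CustomerId", 42));
```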
There is no right answer; it all depends on your project. As Simon points out, if your application is all data driven, then it might make sense, depending on the size and complexity of the domain, to use a non-OOP paradigm. I had a lot of success building a system using a Transaction Script pattern, which passed XML messages around the system.
However, this system started to break down after five or six years as the application grew in size and complexity (5 or 6 webs, several web services, tons of COM+ components, legacy and .NET code, 8+ databases with 800+ tables and 4,000+ procedures). No one knew what anything was, and duplication was running rampant.
There are other ways to ease maintenance than OOP; however, if you have a very complex domain then having a rich domain model is ideal IMHO, as it allows the business rules to be expressed in nicely encapsulated components.
To answer your question: avoid code generators if you can. Code generators are a recipe for disaster, but if you do go with code generation, do not modify the generated code. Also be sure to have a good process in place that makes it easy for developers to get the newly generated code.
I recommend either using an ORM or hand-rolling a lightweight DAL. I am currently transitioning a project over to NHibernate off my hand-rolled DAL and am having a lot of success; however, I like having the option of using either. Further, if you properly separate your concerns (data access from business layer from presentation) you can have a single service layer that talks to a DAO (Data Access Object) that for one object is an ORM but for another is hand rolled. I like this flexibility as it allows me to apply the best tool to the job.
I like NHibernate over a hand-rolled DAL because, while my DAL does abstract away most of the ADO.NET code, you still have to write the code that maps a data reader to an object, or takes an object and creates the parameters.
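That mapping code looks something like this (Customer and the column names are invented); it's exactly the boilerplate an ORM takes off your hands:

```csharp
using System.Collections.Generic;
using System.Data;

public static class CustomerMapper
{
    // Reader-to-object mapping, written by hand for every entity and query
    // shape; an ORM generates the equivalent of this for you.
    public static IList<Customer> MapAll(IDataReader reader)
    {
        var customers = new List<Customer>();
        while (reader.Read())
        {
            customers.Add(new Customer
            {
                Id = reader.GetInt32(reader.GetOrdinal("Id")),
                Name = reader.GetString(reader.GetOrdinal("Name"))
            });
        }
        return customers;
    }
}
```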
I've always preferred to go the code generator route, especially in C# where you can make use of extended classes to add functionality to the basic data objects.
Hate to say this, but it depends. If you find an ORM tool that fits your needs, go for it. We wrote our own system in small steps while developing the application. We are using C++ and there are not that many tools out there anyway. Ours ended up being an XML description of the database; from that, the SQL generation script and the basic object layer and metadata were generated.
Do your homework and evaluate these tools and you will find a good fit for your needs.