WCF and n-tier architecture and serialization performance

When working with a 5-tier architecture (front-end => interface tier => business tier => database tier => database) using a WCF service as the interface tier, with the client applications calling its methods, should I also use WCF services for the business and database tiers? I ask because all the serialization/deserialization going on between the 3 services will probably consume a lot of CPU on the server and have a performance impact on the application as a whole, or am I wrong?
In the case of running all 3 component layers on a single machine, is it better to build the business and database tiers as simple class libraries and leave only the interface layer as a WCF service?
Thanks

WCF is useful for communication between physical machines. Although you can use WCF to communicate intra-process, there are simpler and more efficient ways to accomplish the same thing. You would only use WCF intra-process if you were thinking about putting the different layers on different machines at some point. As for using WCF for the database tier, you wouldn't: you would use the classes in the System.Data.* namespaces (e.g. System.Data.SqlClient if you are using a SQL Server database, or possibly the Entity Framework).
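For illustration, here is a minimal sketch of that arrangement, with all type names, the connection string, and the query invented for the example: only the interface tier is exposed through WCF, while the business and data tiers are plain class libraries called in-process, and the data tier uses System.Data.SqlClient directly.

```csharp
using System.Data.SqlClient;
using System.ServiceModel;

// Interface tier: the only WCF-exposed component.
[ServiceContract]
public interface ICustomerContract
{
    [OperationContract]
    string GetCustomerName(int customerId);
}

public class CustomerFacadeService : ICustomerContract
{
    // Plain in-process call into the business tier: no serialization involved.
    public string GetCustomerName(int customerId) =>
        new CustomerLogic().GetName(customerId);
}

// Business tier: an ordinary class library.
public class CustomerLogic
{
    public string GetName(int customerId) =>
        new CustomerData().LoadName(customerId);
}

// Data tier: talks to SQL Server through System.Data.SqlClient, not WCF.
public class CustomerData
{
    public string LoadName(int customerId)
    {
        using (var connection = new SqlConnection(
            "Server=.;Database=Shop;Integrated Security=true"))  // placeholder
        using (var command = new SqlCommand(
            "SELECT Name FROM Customer WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", customerId);
            connection.Open();
            return (string)command.ExecuteScalar();
        }
    }
}
```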
Edit:
When people talk about 3-tier architecture, they are mixing two concepts into one: physical tiers (a client machine, a middleware machine and a database machine) and logical layers in the software architecture (client UI code, business logic code and data access code). When code from two different logical layers residing on the same physical machine needs to communicate, the simplest model is one class calling into another; the amount of decoupling depends on your requirements. You want to use the simplest model that satisfies your requirements. Rockford Lhotka has an excellent description of this in the first chapter of his book Expert C# 2008 Business Objects.

Tiered architecture is a pre-SOA approach, although we still build logical tiers in our software today. But physical tiers, if there is more than one (apart from the UI and the database), will cause you pain and heartache. Sometimes you end up having two, but I personally advise against it.
The trend is toward parallel/decoupled processing using a Service Bus or similar mechanisms; building chains of serial services is not recommended.
You have pointed out the serialisation overhead, but that is just the beginning: you also get method execution delay, more points of failure, degraded performance since the layers talk out of process, maintenance overhead, ...
So do not be apologetic about having only one physical middleware tier; an extra one is not an asset, it is a liability.

Related

Facade Pattern for distributed application?

We are in the process of defining the architecture of a fairly large customer-facing financial application where performance and scalability are the key requirements.
We proposed an n-tier architecture which primarily consists of a Web Tier, an Application Tier (Mid-Tier) and a Data Tier, with WCF being the communication mechanism between the web tier and the app tier. Some of the stakeholders are concerned that WCF would cause performance overhead and want a configurable architectural provision to support both in-process calls and WCF. Their vision is to start with in-process calls and change to WCF-based communication if horizontal scalability becomes a concern.
We are considering the following approaches:
One architectural approach would be to introduce a client facade layer which can act as a facade between the web and application layers. The facade layer would simply hide the complexity of the remote calls and allow easy swapping of the facade for another one that might implement a different remote-call technology (i.e. WCF).
Another approach is to simply use WCF and use different bindings for different scenarios. For example, use the IPC binding (named pipes) when the web and application components are deployed on the same machine, or use the TCP binding when the application components are deployed on a different server (both ends use .NET, so interoperability is not a concern).
We are looking for the right architectural approach for the above mentioned scenario.
Kindly advise.
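For what it's worth, here is a minimal sketch of the second approach, with the contract, addresses and port made up for illustration: the calling code stays identical, and only the binding choice varies with how the tiers are deployed.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

public static class OrderServiceClientFactory
{
    public static IOrderService Create(bool sameMachine)
    {
        ChannelFactory<IOrderService> factory = sameMachine
            // Same machine: named pipes bypass the network stack entirely.
            ? new ChannelFactory<IOrderService>(
                  new NetNamedPipeBinding(),
                  new EndpointAddress("net.pipe://localhost/OrderService"))
            // Separate server: binary-encoded TCP, the fastest cross-machine WCF transport.
            : new ChannelFactory<IOrderService>(
                  new NetTcpBinding(),
                  new EndpointAddress("net.tcp://appserver:8523/OrderService"));

        return factory.CreateChannel();
    }
}
```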

Azure Service Bus Queues integration approaches in .NET

There are different approaches to implementing brokered messaging communication between services using Service Bus Queues (or Topics):
CloudFX Messaging
QueueClient
WCF integrated approach
Which of those approaches are more useful in which cases?
Any comparison of performance, abstraction level, testability, flexibility or facilities would be great.
OK, now that I understand your question better, I see where the confusion is.
All 3 of the options that you are looking into are written by Microsoft.
Also, all 3 of those options are simply an abstraction - a client interface into the service that MS is providing.
None of them is inherently faster or slower than the others. However, I would say that if you go the WCF route, you can abstract the technology choice a bit better.
What I mean by that is: you can develop a "GetMessage" contract in WCF that points to the Service Bus... and then later change the design and configure WCF to point to some other service without having to change the code.
So, that's one advantage for WCF.
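As a sketch of that idea (the contract, method and endpoint configuration name are all hypothetical): the contract stays fixed in code while the endpoint it points at lives in configuration, so re-pointing it away from the Service Bus later is a config edit rather than a code change.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IMessageSource
{
    [OperationContract]
    string GetMessage();
}

public static class MessageSourceClient
{
    public static string Fetch()
    {
        // "MessageSourceEndpoint" is resolved from the <client> section of
        // app.config; its binding can be NetMessagingBinding (Service Bus)
        // today and netTcpBinding tomorrow without touching this code.
        var factory = new ChannelFactory<IMessageSource>("MessageSourceEndpoint");
        IMessageSource channel = factory.CreateChannel();
        return channel.GetMessage();
    }
}
```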
That being said, CloudFX is built by Microsoft to give extra common functionality around the usage of the Azure Service Bus ... so don't ignore that. Look into the benefits of that API and decide if you and your team need those features.
Lastly, QueueClient is simply what CloudFX improves upon, and it adds no abstraction benefit the way WCF does. So you probably don't want to go that route (considering your other 2 options).
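For comparison, here is roughly what the plain QueueClient route looks like with the classic Microsoft.ServiceBus.Messaging API; the connection string and queue name are placeholders.

```csharp
using Microsoft.ServiceBus.Messaging;

class QueueClientSketch
{
    static void Main()
    {
        // Placeholder connection string; take yours from the Azure portal.
        const string connectionString =
            "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";

        var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

        // Send a brokered message.
        client.Send(new BrokeredMessage("hello"));

        // Receive it back and mark it complete so it is removed from the queue.
        BrokeredMessage received = client.Receive();
        if (received != null)
        {
            string body = received.GetBody<string>();
            received.Complete();
        }
    }
}
```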
Keep in mind that Azure uses a REST API under the hood for most of the communication, so you might hit some unexpected performance issues if you don't configure your application correctly: http://tk.azurewebsites.net/2012/12/10/greatly-increase-the-performance-of-azure-storage-cloudblobclient/

What are the biggest advantages to moving from n-tier to SOA?

At my company we are currently using the classic n-tier architecture, with NHibernate as our persistence layer and fat objects. Having seen many issues with this pattern, such as full hydration of the object graph when entities are retrieved from the database, we have been looking into other alternatives.
In this process we have moved to a more scalable Command and Query architecture, and now we are looking into the viability of SOA.
In your experience, what are the biggest advantages of SOA over n-tier? Have you encountered any major hurdles?
Any advice and reading material would be helpful.
Besides scalability, SOA offers architectural flexibility. If you decide at some point to move your application from WebForms to Silverlight, both can take equal advantage of a well-designed SOA interface.
You can also decide at some point down the road to offer a new service that takes advantage of some of the features and/or data in your current offering. You just build a new application that is authorized to access your existing interface and away you go.
Loose coupling and governance.

SOA architecture data access

In my SOA architecture, I have several WCF services.
All of my services need to access the database.
Should I create a specialized WCF service in charge of all the database access?
Or is it OK if each of my services has its own database access?
In one version, I have just one Entity layer instantiated in one service, and all the other services depend on that service.
In the other, the Entity layer is duplicated in each of my services.
The main drawback of the first version is the coupling it induces.
The drawback of the other version is the layer duplication, and maybe it is bad SOA practice?
So, what do you think, good people of Stack Overflow?
Just my personal opinion: if you create one service for all database access, then multiple services depend on that ONE service, which sort of defeats the point of SOA (i.e. services are autonomous), as you have articulated. As for layer duplication: if each service has its own data to deal with, is it really duplication? I realize that you probably have the same means of interacting with your relational databases, or, back from the OOA days, a common class library that encapsulated data access for you.
This is one of those things I struggle with myself, but I see no problem in each service having its own data layer. In fact, in Michele Bustamante's book (Chapter 1, page 8) she actually depicts this and adds that "services encapsulate business components and data access". If you notice, each service has a separate DALC layer. This is a good question.
It sounds as if you have several services but a single database.
If this is correct, you do not really have a pure SOA architecture, since the services are not independent. (There is nothing wrong with not having a pure SOA architecture; it can often be the correct choice.)
Adding an extra WCF layer would just complicate and slow down your solution.
I would recommend that you create a single data access dll which contains all the data access code and is referenced by each WCF service. That way you do not have any duplication of code. Since you have a single database, any change in the database/data layer would require a redeployment of all services in any case.
Why not just use a dependency injection framework? If the services are currently using the same database, just let them share the same code; if they were in the same project, they would all use the same dll.
That way, if you later need to add code that you don't want the others to share, you can make your changes and just create a new DAO layer.
If there is a certain singleton that all of them will use, you can inject it along with the DAO layer.
But this will require that they all use the same DI container.
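A minimal sketch of that idea, with every name hypothetical: each service depends on an interface from the shared data-access dll, and the DI container decides which implementation gets injected, so a service-specific DAL can be swapped in later without touching the service code.

```csharp
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Contract and implementation live in the shared data-access dll.
public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // The real implementation would query the shared database here.
        return new Customer { Id = id, Name = "placeholder" };
    }
}

// Each service takes the repository through its constructor; the DI
// container (Unity, Ninject, ...) injects the shared implementation, or a
// service-specific one later, without the service code changing.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public Customer GetCustomer(int id) => _repository.GetById(id);
}
```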
The real win that SOA brings is that it reduces the number of linkages between applications.
In the past I've worked with organizations that have done it many different ways. Some data layers are integrated, and some are abstracted.
The way I've seen it done most successfully is when you create generic data-layer services for each app/database and build the higher-level services on top of that newly created data layer.

What's the difference between a "Data Service Layer" and a "Data Access Layer"?

I remember reading that one abstracts the low-level calls into a data-agnostic framework (e.g. ExecuteCommand methods, etc.), and the other usually contains business-specific methods (e.g. UpdateCustomer).
Is this correct? Which is which?
To me this is a personal decision about how you want to handle your project's design. At times data access and data service are one and the same; for .NET and LINQ that is the case.
To me, the data service layer is what actually makes the call to the database. The data access layer receives the objects and creates or modifies them for the data service layer, which makes the call to the database.
In my designs, the Business Logic Layer manipulates the objects based on the business rules, then passes them to the data access layer, which formats them to go into the database (or formats the objects coming out of the database), and the data service layer handles the actual database call.
I think in general the two terms are interchangeable, but could have more specific meanings depending on the context of your development environment.
A Data Access Layer sits on the border between data and the application. The "data" is simply the diverse set of data sources used by the application. This can mean that substantial coding must be done in each application to pull data together from multiple sources. The code which creates the data views required will be redundant across some applications.
As the number of data sources grows and becomes more complex, it becomes necessary to isolate various tasks of data access to address details of data access, transformation, and integration. With well-designed data services, Business Services will be able to interact with data at a higher level of abstraction. The data logic that handles data access, integration, semantic resolution, transformation, and restructuring to address the data views and structures needed by applications is best encapsulated in the Data Services Layer.
It is possible to break the Data Services Layer down even further into its constituent parts (i.e. data access, transformation, and integration). In such a case you might have a "Data Access Layer" that concerns itself with only retrieving data, and a "Data Service Layer" that retrieves its data through the Data Access Layer and combines and transforms the retrieved data into the various objects required by the Business Service Layer.
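A small sketch of that decomposition, with every name invented for illustration: the DAL only fetches raw rows, while the DSL integrates and reshapes them into the view the Business Service Layer needs.

```csharp
using System.Collections.Generic;
using System.Linq;

// Raw shapes as they come out of the data sources.
public class CustomerRow { public int Id; public string Name; }
public class OrderRow { public int CustomerId; public decimal Total; }

// Data Access Layer: retrieval only, no combining or reshaping.
public interface ICustomerOrderDal
{
    CustomerRow GetCustomer(int customerId);
    IEnumerable<OrderRow> GetOrders(int customerId);
}

// The structure the Business Service Layer actually wants.
public class CustomerSpendView
{
    public string CustomerName;
    public decimal LifetimeSpend;
}

// Data Service Layer: pulls from the DAL, then integrates and transforms.
public class CustomerDataService
{
    private readonly ICustomerOrderDal _dal;

    public CustomerDataService(ICustomerOrderDal dal) { _dal = dal; }

    public CustomerSpendView GetCustomerSpend(int customerId)
    {
        CustomerRow customer = _dal.GetCustomer(customerId);
        IEnumerable<OrderRow> orders = _dal.GetOrders(customerId);

        return new CustomerSpendView
        {
            CustomerName = customer.Name,
            // Aggregation is the kind of "transformation" the DSL owns.
            LifetimeSpend = orders.Sum(o => o.Total)
        };
    }
}
```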
Here's another perspective deep from the trenches! A Data Access Layer is a software abstraction layer which hides the complexity / implementation of actually getting the data. The application asks the Data Access Layer (see the DAO design pattern) to "get me this" or "update that", etc. (indirection). The Data Access Layer is responsible for performing implementation-specific operations, such as reading/updating various data sources: Oracle, MySQL, Cassandra, RabbitMQ, Redis, a simple file system, a cache, or it may even delegate to another Data Service Layer.
If all this work happens inside a single machine and in the same application, the term Data Service Layer is equivalent to a Service Facade (indirection). It is responsible for servicing and delegating application calls to the correct Data Access Layer.
Somewhat confusingly, in a distributed computing world, or a Service-Oriented Architecture, a Data Service Layer can actually be a web service that acts as a standalone application. In this context, the Data Service Layer delegates the data requests it receives from upstream applications to the correct Data Access Layer. Here, web services indirect data access away from applications: the application only needs to know which service to call to get the data, so as a rule of thumb, in distributed computing environments this approach reduces application complexity (though there will always be exceptional cases).
So, just to be clear: the application uses a DSL and a DAL. The DSL in the app should talk to a DAL in the same application. DALs have the choice of using a single data source or delegating to another web service. A web service DSL can in turn delegate the work to the DAL for that request. Indeed, it's possible for a single web service request to draw on a number of data sources in order to build its response.
With all that said, from a pragmatic perspective, it's only when systems become increasingly complex that more attention should be paid to architectural patterns. It's good practice to do things right, but there's no point in unnecessarily gold-plating your work. Remember YAGNI? Well, that stops resonating the moment the extra layer actually is needed!
To conclude: a famous aphorism of David Wheeler goes, "All problems in computer science can be solved by another level of indirection";[1] this is often deliberately misquoted with "abstraction layer" substituted for "level of indirection". Kevlin Henney's corollary to this is, "...except for the problem of too many layers of indirection."
The Data Service Layer concept as described in the WebSphere Commerce documentation is straightforward:
The data service layer (DSL) provides an abstraction layer for data access that is independent of the physical schema.
The purpose of the data service layer is to provide a consistent interface (called the data service facade) for accessing data, independent of the object-relational mapping framework.
Currently on the internet the DSL concept is mainly associated with SOAs (Service-Oriented Architectures), but not exclusively. Here it is mentioned in an example of N-tier applications.