Facade Pattern for a distributed application? - wcf

We are in the process of defining the architecture of a fairly large customer-facing financial application where performance and scalability are the key requirements.
We proposed an n-tier architecture which primarily consists of a Web Tier, an Application Tier (Mid-Tier) and a Data Tier, with WCF as the communication mechanism between the web tier and the app tier. Some of the stakeholders are concerned that WCF would cause performance overhead and want a configurable architectural provision that supports both in-process calls and WCF. Their vision is to start with in-process calls and change to WCF-based communication if horizontal scalability becomes a concern.
We are considering the following approaches:
One architectural approach would be to introduce a client façade layer which acts as a façade between the web and application layers. The façade layer would simply hide the complexity of the remote calls and allow for easy swapping of the façade for another one that might implement a different remote-call technology (i.e. WCF). (A sketch of this idea follows after the two options.)
Another approach is to simply use WCF with different bindings for different scenarios. For example, use the IPC binding (named pipes) when the web and application components are deployed on the same machine, or use the TCP binding when the application components are deployed on a different server. (Both ends use .NET, so interoperability is not a concern.)
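For illustration, here is a minimal sketch of how a façade could hide the choice between in-process calls and WCF, including the binding selection. All type names (IAccountFacade, AccountService) and addresses are hypothetical, not part of any existing design:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IAccountFacade
{
    [OperationContract]
    decimal GetBalance(string accountId);
}

// Application-layer class (stub for illustration).
public class AccountService
{
    public decimal GetBalance(string accountId) { return 0m; }
}

// In-process facade: calls the application layer directly, no serialization.
public class InProcessAccountFacade : IAccountFacade
{
    private readonly AccountService _service = new AccountService();

    public decimal GetBalance(string accountId)
    {
        return _service.GetBalance(accountId);
    }
}

// WCF facade: forwards the same call over a channel. The binding
// (named pipes for same-machine, TCP for cross-machine) is chosen here.
public class WcfAccountFacade : IAccountFacade
{
    private readonly IAccountFacade _channel;

    public WcfAccountFacade(bool sameMachine)
    {
        Binding binding = sameMachine
            ? (Binding)new NetNamedPipeBinding()
            : new NetTcpBinding();
        string address = sameMachine
            ? "net.pipe://localhost/AccountFacade"
            : "net.tcp://appserver:8000/AccountFacade";
        _channel = ChannelFactory<IAccountFacade>.CreateChannel(
            binding, new EndpointAddress(address));
    }

    public decimal GetBalance(string accountId)
    {
        return _channel.GetBalance(accountId);
    }
}

The web tier would depend only on IAccountFacade and resolve the concrete facade from configuration or a DI container, so switching from InProcessAccountFacade to WcfAccountFacade becomes purely a deployment decision.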
We are looking for the right architectural approach for the above-mentioned scenario.
Kindly advise.

Related

What design patterns are used in e-commerce web apps?

What design patterns are commonly used, or used together, in developing e-commerce applications with a microservices or multi-tier architecture? Let's say, just as an example, that we write the code in an object-oriented language such as Java or .NET 5 and develop the client app using a JavaScript framework.
Would the suggested design patterns change if I choose to implement a microservices architecture?
There is a pattern called "Pattern: Microservice Architecture":
Define an architecture that structures the application as a set of loosely coupled, collaborating services. This approach corresponds to the Y-axis of the Scale Cube. Each service is:
Highly maintainable and testable - enables rapid and frequent development and deployment
Loosely coupled with other services - enables a team to work independently the majority of the time on their service(s) without being impacted by changes to other services and without affecting other services
Independently deployable - enables a team to deploy their service without having to coordinate with other teams
Capable of being developed by a small team - essential for high productivity by avoiding the high communication overhead of large teams
Services communicate using either synchronous protocols such as HTTP/REST or asynchronous protocols such as AMQP. Services can be developed and deployed independently of one another. Each service has its own database in order to be decoupled from other services. Data consistency between services is maintained using the Saga pattern.
To learn more about the nature of a service, please read this article.
An e-commerce application is considered here as an example of applying the Microservice Architecture pattern.
So it is possible to create multiple services divided by entities or business domains:
customers
inventory
shipping
Then it is necessary to provide a way for the services to communicate. This can be an event streaming platform such as Kafka.
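As a rough illustration, one service could publish a domain event that the other services consume. This is a minimal sketch assuming the Confluent.Kafka .NET client; the topic name, event type and broker address are made up:

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;

public record OrderShipped(string OrderId, DateTime ShippedAt);

public class ShippingEventPublisher
{
    private readonly IProducer<string, string> _producer =
        new ProducerBuilder<string, string>(
            new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();

    // The shipping service publishes; customers/inventory services subscribe
    // to the "order-shipped" topic with their own consumer groups.
    public async Task PublishAsync(OrderShipped evt)
    {
        await _producer.ProduceAsync("order-shipped", new Message<string, string>
        {
            Key = evt.OrderId,
            Value = JsonSerializer.Serialize(evt)
        });
    }
}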

Azure Service Bus Queues integration approaches in .NET

There are different approaches to implementing brokered messaging communication between services using Service Bus Queues (Topics):
CloudFX Messaging
QueueClient
WCF integrated approach
Which of these approaches is more useful in which cases?
Any comparison of performance, abstraction level, testability, flexibility or facilities would be great.
OK, now that I understand your question better, I see where the confusion is.
All 3 of the options that you are looking into are written by Microsoft.
Also, all 3 of those options are simply an abstraction - a client interface into the service that MS is providing.
None of them is faster or slower than the others. However, I would say that if you go the WCF route, you can abstract the technology choice a bit more easily.
What I mean by that is: you can develop a "GetMessage" contract in WCF that points to the Service Bus... and then later on change the design and configure WCF to point to some other service, and you wouldn't have to change the code.
So, that's one advantage for WCF.
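For example, a sketch of that idea (the contract, the "messages" endpoint name and the types are hypothetical): only the endpoint configuration decides whether the channel points at a Service Bus queue via NetMessagingBinding or at some other service.

using System.ServiceModel;

[ServiceContract]
public interface IMessageService
{
    [OperationContract(IsOneWay = true)]
    void SubmitMessage(string payload);
}

public class MessageSender
{
    public void Send(string payload)
    {
        // The endpoint named "messages" is defined in configuration. Today it
        // can use NetMessagingBinding against a Service Bus queue; later it
        // could be re-pointed at, say, a netTcpBinding endpoint with no code change.
        var factory = new ChannelFactory<IMessageService>("messages");
        IMessageService channel = factory.CreateChannel();
        channel.SubmitMessage(payload);
        ((IClientChannel)channel).Close();
        factory.Close();
    }
}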
That being said, CloudFX is built by Microsoft to give extra common functionality around the usage of the Azure Service Bus ... so don't ignore that. Look into the benefits of that API and decide if you and your team need those features.
Lastly, QueueClient is simply the lower-level client that CloudFX improves on, and it adds none of the abstraction benefits of WCF. So you probably don't want to go with this route (considering your other 2 options).
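For comparison, the raw QueueClient usage that the other two options build on looks roughly like this; the connection string and queue name are placeholders:

using Microsoft.ServiceBus.Messaging;

public class QueueSender
{
    public void Send(string payload)
    {
        // Placeholder connection string and queue name.
        var client = QueueClient.CreateFromConnectionString(
            "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",
            "orders");
        client.Send(new BrokeredMessage(payload));
        client.Close();
    }
}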
Keep in mind that Azure uses a REST API under the hood for most of the communication... and so you might hit some unexpected performance issues if you don't configure your application correctly: http://tk.azurewebsites.net/2012/12/10/greatly-increase-the-performance-of-azure-storage-cloudblobclient/

What is a good WCF SOA strategy?

I've worked on enterprise level SOA applications that have a whole lot of very simple one-off WCF services.
The code for some of these services could easily be placed into one central service and accessed through different method calls.
What are the advantages or disadvantages of having many of these one-off services?
As you have recognised, there is a tension between decomposing services into small, reusable, separately deployed building blocks and the manageability of large numbers of services.
Separate services
For: Flexibility of deployment, reuse and composition
Against: Manageability, overhead of invocation if the services need to talk to each other
One big service
For: Simplified deployment and management, in-memory invocation between "services"
Against: Reuse means taking the entire service, added contention for unrelated functionality, potential scalability problems
As with most of these questions, the best solution lies somewhere in the middle: grouping similar services into single deployments while retaining the flexibility to scale out the groups of services with heavier usage.
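As a sketch of that middle ground, several small related contracts can be grouped into one host/deployment while still being exposed as separate endpoints, so they can be split out later if one of them needs to scale independently. All names here are illustrative:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    void PlaceOrder(string orderId);
}

[ServiceContract]
public interface IInvoiceService
{
    [OperationContract]
    void IssueInvoice(string orderId);
}

// One implementation class serves several related contracts.
public class BillingService : IOrderService, IInvoiceService
{
    public void PlaceOrder(string orderId) { /* ... */ }
    public void IssueInvoice(string orderId) { /* ... */ }
}

class Program
{
    static void Main()
    {
        // Single deployment, but each contract keeps its own endpoint.
        var host = new ServiceHost(typeof(BillingService),
            new Uri("net.tcp://localhost:8000/billing"));
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(), "orders");
        host.AddServiceEndpoint(typeof(IInvoiceService), new NetTcpBinding(), "invoices");
        host.Open();
        Console.WriteLine("Billing host running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}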

Advantages of having non interoperable services in WCF?

We are having some discussions about the use of WCF and the creation of services and client support.
Currently we support a Silverlight client by providing Silverlight versions of our service libraries on the client side, so that we can keep the strong typing of our service contract, which is defined using interfaces.
This is OK, but having the service defined with interfaces makes it awkward for other clients, as the WSDL has a lot of methods returning ArrayOfAnyType and everything is just objects at the client end (which can be cast to the correct type, but as I said, it's awkward).
We could rewrite our services to use explicit DTOs for the message transfer and recreate our business objects in similar client-side libraries, which would make our services much more interoperable.
Doing this, though, would seem to block off some options for us, such as using Entity Framework and the self-tracking entities it provides, as these require the same libraries to be shared on client and server and are not interoperable (correct me if I've got this wrong).
It seems like there is a trade-off between being interoperable and having access to more functionality out of the box, allowing for quicker development of solutions.
So my question is: what advantages do we gain by deciding to be non-interoperable and only supporting .NET and Silverlight clients (if supporting Silverlight clients can be considered non-interoperable)? And what useful .NET features do we block ourselves off from by deciding to be interoperable?
Are there standard techniques for allowing both types of solution to coexist, so you can support .NET clients using the full range of features available to you, but still support other non-.NET clients well?
You can use the Facade Pattern for this.
Move your current logic to the business layer, do not expose it via WCF.
Now create two WCF services, one for each of the contracts you wish to support. This layer will map the business layer objects to the contract objects and call the functionality in the business layer.
You then have a central place for all logic and custom services for each client.
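A rough sketch of what that could look like (all type names are hypothetical):

using System.Runtime.Serialization;
using System.ServiceModel;

// Business layer (not exposed via WCF). Customer could be a rich object,
// e.g. an EF self-tracking entity shared with .NET/Silverlight clients.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerLogic
{
    public Customer Load(int id) { return new Customer { Id = id, Name = "Sample" }; }
}

// Facade 1: rich contract for .NET/Silverlight clients (shared types).
[ServiceContract]
public interface ICustomerServiceInternal
{
    [OperationContract]
    Customer GetCustomer(int id);
}

// Facade 2: interoperable contract using explicit DTOs only.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerServicePublic
{
    [OperationContract]
    CustomerDto GetCustomer(int id);
}

// The public facade maps business objects to contract objects.
public class CustomerServicePublic : ICustomerServicePublic
{
    private readonly CustomerLogic _logic = new CustomerLogic();

    public CustomerDto GetCustomer(int id)
    {
        Customer c = _logic.Load(id);
        return new CustomerDto { Id = c.Id, Name = c.Name };
    }
}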

WCF and n-tier architecture and serialization performance

When working with a 5-tier architecture (front end => interface tier => business tier => database tier => database) using a WCF service as the interface tier, with the client applications calling its methods, should I also use WCF services for the business and database tiers? I ask because all of the serialization/deserialization going on between the 3 services is probably going to consume a lot of CPU on the server and have a performance impact on the application as a whole, or am I wrong?
In the case of running all 3 component layers on a single machine, is it better to build the business and database tiers as simple class libraries and leave only the interface layer as WCF?
Thanks
WCF is useful for communication between physical machines. Although you can use WCF to communicate intra-process, there are simpler and more efficient ways to accomplish the same thing. You would only use WCF intra-process if you were thinking about putting the different layers on different machines at some point. As far as using WCF for the database tier: you wouldn't; you would use the classes in the System.Data.xxx namespaces (i.e. System.Data.SqlClient if you are using a SQL Server database, or possibly the Entity Framework).
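A minimal sketch of that layout, with hypothetical names: only the interface tier is a WCF service, and the business and data tiers are plain class libraries called in-process.

using System.ServiceModel;

[ServiceContract]
public interface IOrderInterface
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

// Interface tier: the only WCF-exposed layer.
public class OrderInterfaceService : IOrderInterface
{
    private readonly OrderLogic _logic = new OrderLogic();

    public string GetOrderStatus(int orderId)
    {
        // Plain in-process call: no serialization between these layers.
        return _logic.GetStatus(orderId);
    }
}

// Business tier: class library, no WCF.
public class OrderLogic
{
    private readonly OrderRepository _repository = new OrderRepository();

    public string GetStatus(int orderId)
    {
        return _repository.LoadStatus(orderId);
    }
}

// Data tier: class library using ADO.NET / Entity Framework, no WCF.
public class OrderRepository
{
    public string LoadStatus(int orderId)
    {
        // e.g. SqlConnection / EF query here
        return "Shipped";
    }
}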
Edits:
When people talk about 3-tier architecture they are mixing two concepts into one: physical tiers (a client machine, a middleware machine and a database machine) and logical layers in the software architecture (client UI code, business logic code and data access code). When code from two different logical layers that reside on the same physical machine needs to communicate, the simplest model is one class calling into another; the amount of decoupling depends on your requirements. You want to use the simplest model that satisfies your requirements. Rockford Lhotka has an excellent description of this in the first chapter of his book Expert C# 2008 Business Objects.
Tiered architecture is a pre-SOA approach, although we still build logical tiers in our software today. But physical tiers, if there is more than one (apart from the UI and the database), will cause you pain and heartache. Sometimes you end up having two, but I personally advise against it.
The trend is towards parallel/decoupled processing using a Service Bus or similar mechanisms; building chains of serial services is not recommended.
You have pointed out the serialisation overhead. But that is just the beginning: you also get method execution delay, more points of failure, degraded performance because layers talk out of process, maintenance overhead, ...
So do not be apologetic about having only one physical middleware layer; an extra physical tier is not an asset, it is a liability.