I have a class library called ServiceLayer which acts as a repository for an ASP.NET MVC application. This service layer has a reference to a WCF service called ProfileService, which contains Profile methods to perform CRUD operations on a database, etc.
I now need to allow mobile devices to communicate with my application, so I have created another WCF service called ProfileService. This service has a reference to the ServiceLayer class library and calls into it to perform Profile operations.
Now this is quite confusing, as I have two ProfileServices: the first communicating with my database and exposing itself to my service layer; the second communicating with my service layer and exposing itself to mobile devices.
What is the best way to name your services in an SOA environment to avoid confusion over which type is which, especially when mapping between types?
I may also want to create another service which acts as an API for users of the system. What would I name this service, ProfileAPI? I know each ProfileService is in its own namespace, but this doesn't help with readability when creating AutoMapper settings or performing manual mapping.
So if anybody out there knows of a good way to name services in this environment, it would be much appreciated.
You are looking for a Service Facade.
You would end up with a facade, which is just a specialized interface into your real service. You would define a different facade for each consumer as needed (mobile, users, database).
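For illustration, a minimal sketch of what that could look like, with hypothetical names (IProfileService, ProfileMobileFacade, ProfileSummary); naming each facade after its audience also answers the naming question:

    using System.ServiceModel;

    // The "real" profile operations used internally by the service layer.
    public interface IProfileService
    {
        Profile GetProfile(int id);
    }

    public class Profile
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Address { get; set; }
    }

    // The facade exposed to mobile devices: a client-specific view of the
    // same operations, with its own lightweight DTO.
    [ServiceContract]
    public interface IProfileMobileFacade
    {
        [OperationContract]
        ProfileSummary GetProfileSummary(int id);
    }

    public class ProfileSummary
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProfileMobileFacade : IProfileMobileFacade
    {
        private readonly IProfileService _inner;

        public ProfileMobileFacade(IProfileService inner)
        {
            _inner = inner;
        }

        public ProfileSummary GetProfileSummary(int id)
        {
            // Map the full profile down to what mobile clients need.
            Profile p = _inner.GetProfile(id);
            return new ProfileSummary { Id = p.Id, Name = p.Name };
        }
    }

A ProfileApiFacade for API users would follow the same pattern, keeping the single core service behind audience-specific names.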
Setting: I'm developing an intranet tool set for my department, the main point of which is to centrally manage data quality and accessibility, but also to automate and scale some partial processes.
Problem: I currently have my business logic in a CLR assembly, which is available on my SQL Server to other CLR assemblies that run automated ETL directly on the SQL Server. I am also developing an intranet site, which also needs the code in that business logic assembly, but referencing the CLR assembly code has been working out sub-optimally in terms of deployment and code maintenance. Also, another department has voiced interest in using the code base and data for their own intranet site.
Question(s): I've read quite a few Q&As (1, 2, 3, 4, ...) on SO on this topic, but I find it a very encompassing subject, so I'll try to ask questions for a more specific case (i.e. a single BL and data-access code base):
Is a WCF service the solution I want? All my potential service clients run on the same server; is there maybe another way to reference the same code base in both CLR assembly and website projects? I don't need support for different platforms (e.g. Java) - everything is .NET (yay for in-house progr!) - is WCF overkill?
Can code from a WCF service be used like a class library, or do I need to program a new way for accessing classes/methods from the service?
Separation of Development, Test and Productive instances?
Can a WCF service be updated while clients are accessing it, or do I need to schedule maintenance windows? When I update the service, do I need to update the client as well in some way?
Can I dynamically set the service reference, like I currently dynamically set the database connection string, depending on whether StageConfig = dev, test, or prod?
My CLR assemblies are written for .NET 3.5, but the websites for .NET 4.0; will that pose a problem?
What minimum set of .NET service architecture programming do I need to know to accomplish this? I'll learn more about WCF with time, but I need to evaluate architecting effort and weigh it against getting things done(feature requests). Does the MS tutorial get me the desired skill?
I appreciate answers to only single questions, if you feel you know something, I'll +1 whatever helps me get closer to a complete answer.
OK, so you want to make your code enterprise-wide. There are two fundamental problems to talk about when you want to do this, so I'll structure the answer that way:
You have to understand what WCF is all about.
You have to manage your dependencies correctly.
What WCF is about
WCF is a way of doing RPC/RMI (Remote procedure call/remote method invocation) which means that some client code can call code that is located somewhere else through the network.
A callable WCF service is determined by the ABC triplet:
The service specification is implemented as a .NET interface with a "ServiceContract" attribute. This is the Contract ("C")
The "location" of the service is determined by a pair : Address ("A") and Binding ("B"). The Binding determines the protocol suite to be used for communication between client and server (NetPipe, TCP, HTTP, ...). The Address is a URI following the scheme determined by the Binding ("net.pipe", "net.tcp", "http", ...)
When the client code calls a WCF service at a specific Address, with a specific Binding and a specific Contract (which must match what the server at that Address and Binding is delivering), WCF generates a proxy object implementing the interface of the contract.
The program delivering the service is any .NET executable. It has to create one or more WCF hosts, which will register objects or classes that implement the service contract and associate each delivered service with a specific Address and Binding (possibly many thereof).
The configuration can be done through the app's .config file, in which you specify ABC triplets and associate each triplet with a name that you use in your application. You can also do it programmatically, which is very easy.
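For example, a minimal self-hosting sketch showing the ABC triplet set up programmatically; the service and contract names here are placeholders:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IProfileService
    {
        [OperationContract]
        string GetName(int id);
    }

    public class ProfileService : IProfileService
    {
        public string GetName(int id) { return "profile-" + id; }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(ProfileService));
            host.AddServiceEndpoint(
                typeof(IProfileService),                 // C: the Contract
                new NetTcpBinding(),                     // B: the Binding
                "net.tcp://localhost:8523/Profile");     // A: the Address
            host.Open();

            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }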
WCF does not address your problem of deploying your application, or the configuration of addresses and bindings. It just addresses the problem of letting two executables communicate with strongly-typed objects (through a specific interface). Sharing the service configuration is up to you: you may use a shared .config file on a Windows share, or even set up an LDAP server that delivers all the data you need to find your service (namely A and B).
Managing your dependencies correctly
In your scenario, there are three actors that want to use your WCF infrastructure:
Your SQLCLR assembly, which will be a client.
The intranet site, which will be another client.
The service host, which will be a server.
The bare minimum number of assemblies is four: one for each of the aforementioned actors, plus one specifying the contract, which will be used by all three. The contract assembly should contain the following things:
The interface specifying the contract.
All types needed by the interface, which will of course be sent through the network, and therefore must be serializable.
There should be nothing more in it; otherwise it will become a maintenance nightmare.
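As a sketch, the entire content of such a contract assembly could look like this (names are illustrative):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // The interface specifying the contract.
    [ServiceContract]
    public interface IDataQualityService
    {
        [OperationContract]
        QualityReport GetReport(int datasetId);
    }

    // A type crossing the wire must be serializable; DataContract/DataMember
    // is the usual WCF mechanism.
    [DataContract]
    public class QualityReport
    {
        [DataMember] public int DatasetId { get; set; }
        [DataMember] public double ErrorRate { get; set; }
    }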
Answer to your questions
I hope that my answer is clear. Let's sum up the answers to your questions.
Is a WCF service the solution I want? All my potential service clients run on the same server; is there maybe another way to reference the same code base in both CLR assembly and website projects? I don't need support for different platforms (e.g. Java) - everything is .NET (yay for in-house progr!) - is WCF overkill?
Everything is overkill. WCF is rather easy to use and scales down very well.
Can code from a WCF service be used like a class library, or do I need to program a new way for accessing classes/methods from the service?
Setting up a WCF service on existing code requires only the implementation of an additional class, and some code creating the hosts which will serve that class.
Calling a WCF service requires the creation of a Channel, which is a .NET (proxy) object implementing the interface.
So basically, your business code remains in the same state.
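For illustration, a hypothetical client-side call, reusing the IDataQualityService contract from the sketch above; the address and binding must match what the host registered:

    using System.ServiceModel;

    class ClientExample
    {
        static void CallService()
        {
            // The ChannelFactory builds a proxy implementing the contract.
            var factory = new ChannelFactory<IDataQualityService>(
                new NetTcpBinding(),
                new EndpointAddress("net.tcp://localhost:8523/DataQuality"));

            IDataQualityService proxy = factory.CreateChannel();
            QualityReport report = proxy.GetReport(42);

            // Channels implement IClientChannel and should be closed.
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }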
Separation of Development, Test and Productive instances?
WCF does not take care of that. Different environments, different service addresses. You have to take care of this yourself.
Can a WCF service be updated while clients are accessing it, or do I need to schedule maintenance windows?
It depends on your maintenance policy. Killing the serving process and launching the new version is the basic upgrade mechanism.
When I update the service, do I need to update the client as well in some way?
Provided that you manage your dependencies correctly like I sketched in the previous section, you need to update the clients only if the service specification (the interface) changes.
Can I dynamically set the service reference, like I currently am dynamically setting the database connection string, depending on if StageConfig = dev, test, or prod?
You have to manage that yourself, probably by setting the Address and Binding for a service programmatically.
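One hypothetical way to do it, mirroring how a connection string is usually switched; the addresses and stage names are placeholders:

    static class ServiceAddresses
    {
        public static string Resolve(string stageConfig)
        {
            // Pick the endpoint address per stage, just like a
            // stage-dependent connection string.
            switch (stageConfig)
            {
                case "dev":  return "net.tcp://devserver:8523/DataQuality";
                case "test": return "net.tcp://testserver:8523/DataQuality";
                default:     return "net.tcp://prodserver:8523/DataQuality";
            }
        }
    }

    // Usage: feed the resolved address into the ChannelFactory.
    // var factory = new ChannelFactory<IDataQualityService>(
    //     new NetTcpBinding(),
    //     new EndpointAddress(ServiceAddresses.Resolve("test")));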
My CLR assemblies are written for .Net 3.5, but the websites for .NET 4.0, will that pose a problem?
Provided that you manage your dependencies correctly like I sketched in the previous section, the only constraint will be the minimum CLR version required by the "contract" assembly.
What minimum set of .NET service architecture programming do I need to know to accomplish this? I'll learn more about WCF with time, but I need to evaluate architecting effort and weigh it against getting things done(feature requests). Does the MS tutorial get me the desired skill?
You'll need the result of these exercises:
Make two executables, a client and a server, that will communicate through a WCF contract located in a separate DLL. The configuration should be located in the app .config file.
Make two executables, a client and a server, that will communicate through a WCF contract located in a separate DLL. The configuration should be determined programmatically.
Try to send a serializable class as a parameter to your service.
Try to send a serializable class as a return value of your service.
After that, you'll need to think about the best/cheapest way to share the Addresses and Bindings of your services.
Hope it helps.
Is there a clever way to structure my WCF service such that I can implement a service once and have it return different data contracts for different callers (e.g. mobile clients)?
We have already developed a set of services which are consumed by a desktop application and are now building a mobile version of the application. The problem is that the data transfer objects (DTOs) returned are too big and contain unnecessary members for the mobile application. With it going over a mobile network, we would like to cut these out to improve performance; however, the implementation of the services will be identical.
Ideas we have so far:
Setting EmitDefaultValue to false and then not mapping all properties on the DTO for mobile callers (we are using AutoMapper, so we may be able to do something with multiple mapping configurations); see the sketch after this list.
Inherited DTO types for desktop that extend the basic mobile type using the KnownType attribute.
Just building a separate service entirely, but making sure all logic is in a shared business service layer (which it should be already)
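A minimal sketch of the first idea, with illustrative member names: with EmitDefaultValue = false, a member left at its default value (null for reference types) is omitted from the serialized message, so if the mobile mapping profile never populates the heavy members, they never go over the wire:

    using System.Runtime.Serialization;

    [DataContract]
    public class DocumentDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Title { get; set; }

        // Omitted from the message when left null (e.g. by the
        // mobile AutoMapper configuration).
        [DataMember(EmitDefaultValue = false)]
        public string FullText { get; set; }

        [DataMember(EmitDefaultValue = false)]
        public byte[] Thumbnail { get; set; }
    }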
Does anyone know if there is any guidance out there for this requirement?
Personally I would keep the implementations separate. As you point out, each set of clients - mobile and desktop - has different requirements. You can share the contracts for your service, just have different implementations/services. This will allow you to specialise the services for each client, and makes them easier to extend, modify and test.
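As a hedged sketch of that shape, with placeholder names: one thin contract per client type, both backed by the same business layer, so only the DTO mapping differs:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract] public class DocumentDto        { /* full member set */ }
    [DataContract] public class DocumentSummaryDto { /* slim subset */ }

    [ServiceContract]
    public interface IDesktopDocumentService
    {
        [OperationContract]
        DocumentDto GetDocument(int id);
    }

    [ServiceContract]
    public interface IMobileDocumentService
    {
        [OperationContract]
        DocumentSummaryDto GetDocument(int id);
    }

    // The two service classes implementing these contracts would both
    // call the shared business service layer and differ only in the
    // mapping step to their respective DTO.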
I am writing an application that is consuming an in-house WCF-based REST service and I'll admit to being a REST newbie. Since I can't use the "Add Service Reference", I don't have ready-made proxy objects representing the return types from the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types contained in the assembly to the service contract and DTOs. I don't like to use DTOs just for the sake of using them, though, as they add another layer of abstraction and the processing time to convert from domain object to DTO and vice versa. Plus, if I want business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app to call the method and not the client but this seems "smelly."
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs, and share it between the server and the client. You did mention that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours, where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes holding the data that needs to be passed between server and client; the classes have no code apart from properties. These are short-lived objects which survive only long enough to transfer data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist both on server and client are DTO classes.
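To make the split concrete, here is a hedged sketch with illustrative names (DocumentDto, Document, DocumentRepository):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // --- DTO package (server and client): data only ---
    [DataContract]
    public class DocumentDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Title { get; set; }
    }

    // --- Service contract, shared so the client can build a channel ---
    [ServiceContract]
    public interface IDocumentService
    {
        [OperationContract]
        DocumentDto GetDocument(int id);
    }

    // --- Model package (client only): long-lived, can carry behaviour ---
    public class Document
    {
        public int Id { get; private set; }
        public string Title { get; set; }
        public Document(int id, string title) { Id = id; Title = title; }
    }

    // --- Repository package (client only): calls the service and maps
    // DTOs to Model objects (and vice versa) ---
    public class DocumentRepository
    {
        private readonly IDocumentService _service;

        public DocumentRepository(IDocumentService service) { _service = service; }

        public Document Get(int id)
        {
            DocumentDto dto = _service.GetDocument(id);
            return new Document(dto.Id, dto.Title);
        }
    }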
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain the existing clients and to create new clients that consume my service. They are used to clicking "Add Service Reference" and going.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with their own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
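As a hedged sketch of that round-trip, with illustrative names and minimal stubs standing in for the real domain layer:

    using System.ServiceModel;

    // Shared Contracts assembly:
    [ServiceContract]
    public interface IDocumentService
    {
        [OperationContract]
        void Publish(int documentId);
    }

    // Client side: the service agent just relays the request.
    public class DocumentServiceAgent
    {
        private readonly IDocumentService _channel;
        public DocumentServiceAgent(IDocumentService channel) { _channel = channel; }
        public void Publish(int documentId) { _channel.Publish(documentId); }
    }

    // Server side: stubs standing in for the real domain layer.
    public class Document
    {
        public void Publish() { /* internal state transition */ }
    }

    public class DocumentRepository
    {
        public Document Get(int id) { /* load from store */ return new Document(); }
        public void Update(Document d) { /* persist changes */ }
    }

    // The service orchestrates repository and domain object; the Publish
    // behaviour never needs to exist on the client.
    public class DocumentService : IDocumentService
    {
        private readonly DocumentRepository _repository = new DocumentRepository();

        public void Publish(int documentId)
        {
            Document document = _repository.Get(documentId);
            document.Publish();
            _repository.Update(document);
        }
    }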
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
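For example (names are illustrative), two different CLR types declaring the same data contract, so data serialized from one deserializes into the other:

    using System.Runtime.Serialization;

    // Server-side type, with behaviour that never crosses the wire:
    [DataContract(Name = "Document", Namespace = "http://example.com/docs")]
    public class ServerDocument
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Title { get; set; }

        public void Publish() { /* server-only behaviour, not serialized */ }
    }

    // Client-side type, data only, matching Name/Namespace/members:
    [DataContract(Name = "Document", Namespace = "http://example.com/docs")]
    public class ClientDocument
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Title { get; set; }
    }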
Hi, we have used a WCF service in my project. Our application has the following layers:
1. UI (aspx file) and code-behind (Customer.aspx).
2. UI entity model (CustomerModel.cs), which holds the customer detail properties.
3. The UI layer calls the Business Façade class with the CustomerModel object.
4. In the Business Façade I have referenced the WCF service.
5. The Business Façade converts the CustomerModel to a CustomerContract.
6. The Business Façade calls the CustomerService.
7. In the service, I convert the CustomerContract into a CustomerBO and call the CustomerBO class.
8. In the CustomerBO class, I initialize the provider object and call the customer Provider class.
In the Provider class, I access the database, and the data is sent back from layer 8 to layer 1.
I don't know which design pattern is being used in our project. Can anybody help me identify the design pattern?
Thanks
This is not about a design pattern but about architecture. Your architecture itself is not bad, but it is really complex, so unless you are creating a very big project it can be overkill. Moreover, your naming of the different layers is not correct. What you call the BusinessFacade is usually called a ServiceAgent. The BusinessFacade should be placed on the service side and wrapped by the WCF service. The usual responsibility of a BusinessFacade is to aggregate several business operations into single calls, i.e. to create a coarser-grained API which can be easily and effectively exposed for remote calls.
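As a hedged sketch of what a coarse-grained BusinessFacade on the service side could look like; all names are placeholders, and the BO classes stand in for your real business layer:

    using System.Collections.Generic;

    // Minimal stubs so the sketch compiles.
    public class Customer { }
    public class Order { }

    public class CustomerBO
    {
        public Customer Load(int id) { return new Customer(); }
    }

    public class OrderBO
    {
        public List<Order> LoadRecent(int customerId) { return new List<Order>(); }
    }

    public class CustomerOverview
    {
        public Customer Customer { get; set; }
        public List<Order> RecentOrders { get; set; }
    }

    // The facade aggregates two fine-grained operations into one call,
    // which is what makes it cheap to expose over WCF.
    public class CustomerFacade
    {
        private readonly CustomerBO _customers = new CustomerBO();
        private readonly OrderBO _orders = new OrderBO();

        public CustomerOverview GetCustomerOverview(int customerId)
        {
            return new CustomerOverview
            {
                Customer = _customers.Load(customerId),
                RecentOrders = _orders.LoadRecent(customerId)
            };
        }
    }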
I am not understanding how my model can be a WCF service. It makes sense when it's an Astoria partial class residing on the client that allows remote calls to do persistence, but a WCF service doesn't have properties for model fields that can be used to update a data store.
Even if I could factor out an interface for a model/domain object class into a separate assembly, a Silverlight project will not allow me to add that as a reference.
How should my ViewModel encompass my WCF calls? Ultimately the WCF service will call a repository assembly implemented in LINQ to SQL, but apparently those entities are not my model in this scenario; my WCF classes are?
Thanks for any guidance on this.
Also, posts I have read to give a frame of reference:
http://development-guides.silverbaylabs.org/Video/Silverlight-Prism#videolocation_0
http://blogs.conchango.com/davidwynne/archive/2008/12/15/silverlight-and-the-view-viewmodel-pattern.aspx
http://msdn.microsoft.com/en-us/magazine/dd458800.aspx
When you create a service reference to a WCF service in a Silverlight project, it also generates an interface for that service; this is similar to David Wynne's IFeedService in the articles you listed above. The service reference will also generate proxy objects that represent the objects used by the service (Product, Category etc.).
The important thing to note is that the service interface isn't the model; it's how you access the model. Going back to David's example, his ViewModel exposes a list of items (his model), and this list is retrieved using the service.
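As a hedged sketch of that shape: FeedServiceClient, GetItemsAsync, GetItemsCompleted and FeedItem are placeholders for whatever "Add Service Reference" generated in your project, but Silverlight proxies follow this event-based async pattern:

    using System.Collections.ObjectModel;

    // The ViewModel owns the model (Items) and uses the generated
    // service client only to populate it.
    public class FeedViewModel
    {
        private readonly FeedServiceClient _client = new FeedServiceClient();

        public ObservableCollection<FeedItem> Items { get; private set; }

        public FeedViewModel()
        {
            Items = new ObservableCollection<FeedItem>();

            _client.GetItemsCompleted += (s, e) =>
            {
                // e.Result is the generated proxy's return payload.
                foreach (FeedItem item in e.Result)
                    Items.Add(item);
            };
            _client.GetItemsAsync(); // fire the service call
        }
    }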
If you're looking to share code between the client and server, I'd recommend looking into something like RIA Services. If that isn't for you, then I'd look at the articles out there about sharing code between server and client (via Add as Link).
Hope this helps