I miss the .Net remoting days when I could just send an object over the wire and it would work on both sides of the middle layer without much work. Here's why:
I've been given an assignment. I'm building a Logic/Data Abstraction layer (stupid PCI Compliance) so that we can move our database servers off of the corporate network into a protected network. I did a project like this once under .Net 2.0 with remoting. I built the object on the middleware layer and sent that object to the client and the client had my .Net object to work with. But WCF requires serialization to be able to send stuff up and down the pipe and serialization takes away from my fancy methods that do incredible things with the fields I have in place.
I've come up with two different strategies to get around this: (1) Move the methods from the class itself to a static utility class and (2) "Deserialize" the data on the client side and rebuild the native object with data from the serialized object.
nativeObject.Name = serializedObject.Name;
The flaw of the second method is that I have to re-serialize the object before I can send it back to the middleware layer.
serializedObject.Name = nativeObject.Name;
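For reference, strategy (1) ends up looking something like this (a minimal sketch with made-up names; the contract keeps only state and the behavior moves out):

using System.Runtime.Serialization;

[DataContract]
public class Order
{
    [DataMember] public decimal Subtotal { get; set; }
    [DataMember] public decimal TaxRate { get; set; }
}

// The "fancy method" that used to live on the object now lives here,
// so both the client and the middleware can compile against it.
public static class OrderUtility
{
    public static decimal CalculateTotal(Order order)
    {
        return order.Subtotal * (1 + order.TaxRate);
    }
}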
Both methods work, but they make writing objects take much longer than it should because of the whole serialization mess the middle layer introduces. I would go back to .Net Remoting, but the architect says he wants this Abstraction Layer done in WCF because (my words, not his) it's new and sexy.
So how does one go about working with .Net native objects on both sides of a WCF connection... without writing 1,000 lines of glue code.
You can generate a proxy and tell it to use a specific set of classes instead of creating new ones. I believe this is done using the /r (/reference) parameter of svcutil.exe. If you're using the IDE (VS2008), you can do this when adding a service reference: click Advanced and make sure "Reuse types in referenced assemblies" is selected (which I think is the default).
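For example, with a shared contracts assembly it would look something like this (the address and assembly name are made up):

svcutil.exe http://localhost:8000/MyService?wsdl /reference:Shared.Contracts.dll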
Related
I am writing an application that consumes an in-house WCF-based REST service, and I'll admit to being a REST newbie. Since I can't use "Add Service Reference", I don't have ready-made proxy objects representing the return types of the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types contained in the assembly to the service contract and DTOs. I don't like to use DTOs just for the sake of using them, though, as they add another layer of abstraction and processing time to convert from domain object to DTO and vice versa. Plus, if I want to have business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app to call the method and not the client but this seems "smelly."
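For what it's worth, the "smelly" version would look roughly like this (the service assembly name is made up):

using System.Runtime.CompilerServices;
using System.Runtime.Serialization;

// In the shared assembly's AssemblyInfo.cs:
[assembly: InternalsVisibleTo("MyCompany.DocumentService")]

[DataContract]
public class Document
{
    [DataMember] public int ID { get; set; }
    [DataMember] public string Title { get; set; }

    // Callable from the service app (via InternalsVisibleTo) but not from clients.
    internal void Publish()
    {
        // state transition logic would go here
    }
}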
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs, and share these between the server and the client. You did mention that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes that hold the data to be passed between server and client. The classes have no code apart from properties. These are short-lived objects that survive only long enough to transfer the data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist both on server and client are DTO classes.
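To make the split concrete, here is a minimal sketch (all names are mine, not a prescription):

using System.Runtime.Serialization;
using System.ServiceModel;

// DTO package - shared between server and client.
[DataContract]
public class DocumentDto
{
    [DataMember] public int ID { get; set; }
    [DataMember] public string Title { get; set; }
}

// Service contract - also shared, so the client can build a channel.
[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    DocumentDto GetDocument(int id);
}

// Model package - client only; can carry behavior and validation.
public class DocumentModel
{
    public int ID { get; set; }
    public string Title { get; set; }
    public bool IsValid() { return !string.IsNullOrEmpty(Title); }
}

// Repository package - client only; calls the service and maps DTO <-> Model.
public class DocumentRepository
{
    private readonly IDocumentService _service; // WCF channel, injected

    public DocumentRepository(IDocumentService service) { _service = service; }

    public DocumentModel Get(int id)
    {
        DocumentDto dto = _service.GetDocument(id);
        return new DocumentModel { ID = dto.ID, Title = dto.Title };
    }
}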
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain the existing clients and to create new ones that consume my service. They are used to clicking "Add Service Reference" and going.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with their own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
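A minimal sketch of that idea: two distinct CLR types that match because they declare the same data contract name and namespace (the namespace URI here is made up):

using System.Runtime.Serialization;

// Server-side type.
[DataContract(Name = "Document", Namespace = "http://schemas.example.com/docs")]
public class ServerDocument
{
    [DataMember] public int ID { get; set; }
    [DataMember] public string Title { get; set; }

    public void Publish() { /* server-only behavior */ }
}

// Client-side type: same contract on the wire, different class, no Publish method.
[DataContract(Name = "Document", Namespace = "http://schemas.example.com/docs")]
public class ClientDocument
{
    [DataMember] public int ID { get; set; }
    [DataMember] public string Title { get; set; }
}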
Let me first apologise for the length of the entire topic. It will be fairly long, but I wish to be sure that the message comes across clearly and without errors.
Here at the company, we have an existing ASP.NET WebApplication, written in C# on the .NET Framework 3.5 SP1. Some time ago an initial API was developed for this web application using WCF and SOAP, to allow external parties to communicate with the application without relying on browsers.
This API survived for some time, but eventually the request came to create a new API that was RESTful and relied on newer technologies. I was given this assignment, and I created the initial API using the Microsoft MVC 2 Framework, running inside our ASP.NET WebApplication. Initially it took quite some time to get it running properly, but at the moment we're able to make REST calls on the application to receive XML detailing our resources.
I've attended a Microsoft WebCamp, and I was immediately sold on the OData concept. It was very similar to what we were doing, but it was a protocol supported by many more players instead of our own implementation. Currently I'm working on a PoC (Proof of Concept) to recreate the API I developed using the OData protocol and the WCF Data Services technology.
After searching the Internet for how to get NHibernate 2 working with Data Services, I succeeded in creating a read-only version of the API that allows us to read out the entities from the internal business layer by mapping the incoming query requests to our Business layer.
However, we wish to have a functional API that also allows the creation of entities using the OData protocol. So now I'm a bit stuck on how to proceed. I've been reading the following article: http://weblogs.asp.net/cibrax/default.aspx?PageIndex=3
The above article nicely explains how to map a custom DataService to the NHibernate layer. I've used this as a base to continue from, but I have the "problem" that I don't want to map my requests directly to the database using NHibernate; I want to map them to our Business layer (a separate DLL) that performs a large batch of checks, constraints, and updates based on access rights, privileges, and triggers.
So what I want to ask is: if I, for example, create my own NHibernateContext class as in the above article, but rely on our Business layer instead of NHibernate sessions, could it work? I'd probably have to rely heavily on reflection to figure out the type of object I'm working with at runtime and call the correct business classes to perform the updates and deletes.
To demonstrate with a small ASCII picture:
*-----------------*
* Database *
*-----------------*
*------------------------*
* DAL(Data Access Layer) *
*------------------------*
*------------------------*
* BUL (Business Layer)   *
*------------------------*
*---------------* *-------------------*
* My OData stuff* * Internal API *
*---------------* *-------------------*
*------------------*
* Web Application *
*------------------*
So, would this work, or would the performance make it useless?
Or am I just missing the ball here?
The idea is that I wish to reuse whatever logic is stored in the BUL & DAL layer from the OData WCF DataService.
I was thinking about creating new classes that inherit from the EntityModel classes in the Data.Services namespace and creating a new DataService object that maps all calls to the BUL & DAL & API layers. I'm, however, not sure where/how to intercept the requests for creating and deleting resources.
I hope it's a bit clear what I'm trying to explain, and I hope someone can help me on this.
The devil is in the details, but it sounds like the design you're proposing should work.
The DataService class is where you get to define the access rights applicable to everyone, configuration settings, and custom operations. In this scenario, I think you will be focusing more on the data context instead (the 'T' in DataService).
For the context, there are really two interesting paths: reads and writes. Reads happen through the IQueryable entry points. Writing a LINQ provider is a good chunk of work, but NHibernate already supports this, although it would return what I imagine we're calling DAL entities. You can use query interceptors to do access checks here if you can express those in terms that the database would understand.
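For example, a query interceptor on the service class can express a per-entity-set access check (the context, entity, and user lookup here are hypothetical):

using System;
using System.Data.Services;
using System.Linq.Expressions;

public class MyDataService : DataService<NHibernateContext>
{
    // Runs on every read of the Documents set; the returned expression is
    // composed into the query, so the check can still run in the database.
    [QueryInterceptor("Documents")]
    public Expression<Func<Document, bool>> OnQueryDocuments()
    {
        return doc => doc.OwnerId == CurrentUserId;
    }

    private static int CurrentUserId
    {
        get { return 0; /* stand-in for the real user lookup */ }
    }
}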
The update path is from what I understand where you are trying to run more business logic (you mentioned validation, extra updates, etc). To do this, you'll want to focus on the IUpdatable implementation (IDataServiceUpdateProvider if you're using the latest version). Here you can use whichever objects you want - they could be DAL objects or business objects. You can do everything in the DAL and then run validation on SaveChanges(), or do everything on business objects if they validate as they go.
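A heavily trimmed sketch of an IUpdatable implementation that funnels everything through a business layer might look like this (BusinessLayer and its Save/Delete methods are stand-ins for your BUL entry points, and the relationship methods are stubbed out):

using System;
using System.Collections.Generic;
using System.Data.Services;
using System.Linq;

// Stand-in for the real BUL entry points.
public static class BusinessLayer
{
    public static void Save(object entity) { }
    public static void Delete(object entity) { }
}

public class BusinessUpdateProvider : IUpdatable
{
    private readonly List<object> _pendingSaves = new List<object>();

    public object CreateResource(string containerName, string fullTypeName)
    {
        // Reflection decides which business entity to instantiate.
        object resource = Activator.CreateInstance(Type.GetType(fullTypeName));
        _pendingSaves.Add(resource);
        return resource;
    }

    public object GetResource(IQueryable query, string fullTypeName)
    {
        // The runtime hands us a query; materialize the single entity it targets.
        return query.Cast<object>().Single();
    }

    public void SetValue(object targetResource, string propertyName, object propertyValue)
    {
        targetResource.GetType().GetProperty(propertyName)
            .SetValue(targetResource, propertyValue, null);
    }

    public object GetValue(object targetResource, string propertyName)
    {
        return targetResource.GetType().GetProperty(propertyName)
            .GetValue(targetResource, null);
    }

    public void DeleteResource(object targetResource)
    {
        BusinessLayer.Delete(targetResource);
    }

    public void SaveChanges()
    {
        // Everything funnels through the business layer so its checks,
        // constraints, and triggers all run.
        foreach (object resource in _pendingSaves)
            BusinessLayer.Save(resource);
        _pendingSaves.Clear();
    }

    public object ResolveResource(object resource) { return resource; }
    public object ResetResource(object resource) { return resource; }
    public void ClearChanges() { _pendingSaves.Clear(); }

    // Relationship plumbing omitted in this sketch.
    public void SetReference(object targetResource, string propertyName, object propertyValue)
    { throw new NotImplementedException(); }
    public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded)
    { throw new NotImplementedException(); }
    public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved)
    { throw new NotImplementedException(); }
}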
There are two places where you might 'jump' from one kind of object to another. One is in the GetResource() API, where you get an IQueryable, presumably in terms of DAL entities. The other is in ResolveResource(), where the runtime is asking for an object to serialize, just like it would get from an IQueryable, so it's presumably also a DAL entity.
Hope this helps - doing uniform access over non-uniform APIs can be hard, but often well worth it!
We have a typical multi-tier/layer architecture. Application + WCF Service + Repository/EF4/Database.
We are using a customized version of the EF POCO T4 template to generate our entities, which we use across the tiers/layers. We have decided not to use DTOs because of the additional time/work involved.
An example object would be a forest which could have navigation properties of trees which could have navigation properties of leaves.
What is the best approach to add leaves and deal with the object graph? The data is being imported from the client side, so we don't necessarily know if the parent forest/tree already exists in the database.
1. Query the service and retrieve any existing related objects. Attach the graph for related objects, or create new objects and attach the graph on the client side.
example: public Forest GetForest(string forestid)
then --- public void AddLeaf(Leaf leaf)
2. Create the forest, tree, and leaf objects on the client side and attach the graphs. Send the leaf object across and then, on the server side, perform logic to compare the objects to existing objects in the database. Strip graphs if required, add items that do not exist, and/or attach to existing objects.
example: public void AddLeaf(Leaf leaf)
3. Create the forest, tree, and leaf objects on the client side, but don't attach the graphs. Send the objects across and then, on the service side, perform the logic to compare the objects to existing objects in the database. Add items that do not exist and/or attach to existing objects.
example: public void AddLeaf(Leaf leaf, Tree tree, Forest forest)
The question boils down to where should the logic take place to attach the graphs of these related objects.
On a side note, I am a little concerned about the "fixup" logic for the navigation properties when dealing with graphs being serialized and deserialized. It seems like that could become an expensive operation.
Note: The client application is a Windows service that is importing data... so it is not necessarily a lightweight client. (We are not necessarily afraid of adding logic to it.)
I had a similar question a few months ago. After playing with this problem a lot, my final decision was to use your third solution (my client is always a web application). This solution requires writing a lot of code, and it involves some additional database queries, because each time you want to update your objects you have to load the whole object graph first. The reason is that when you work with detached objects you have to deal with change tracking manually.
When you use the third solution you can also involve DTOs and transfer only the data that is really needed between client and server.
In the case of a stateful client (a Windows app written in .NET, or maybe Silverlight) you can also use self-tracking entities and your first approach. Self-tracking entities are an implementation of the Changeset pattern. They can track all changes after detaching from the context, but you have to load your entities from the DB first. Self-tracking entities are not a good choice for a web application client or for a service consumed by non-.NET clients.
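A rough sketch of what the server-side logic for option 3 might look like (the entity and context names are hypothetical; with detached EF4 objects, the comparison and attachment is yours to write):

public void AddLeaf(Leaf leaf, Tree tree, Forest forest)
{
    using (var context = new ForestEntities()) // hypothetical EF4 ObjectContext
    {
        // Load the existing graph first so we can compare against it.
        Forest existingForest = context.Forests
            .FirstOrDefault(f => f.ForestId == forest.ForestId);
        if (existingForest == null)
        {
            existingForest = forest;              // forest is new: use the client copy
            context.Forests.AddObject(existingForest);
        }

        Tree existingTree = existingForest.Trees
            .FirstOrDefault(t => t.TreeId == tree.TreeId);
        if (existingTree == null)
        {
            existingTree = tree;                  // tree is new: attach it under the forest
            existingForest.Trees.Add(existingTree);
        }

        existingTree.Leaves.Add(leaf);            // the leaf is always new in this operation

        context.SaveChanges();
    }
}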
I have been writing web services for about a year now and it seems that the process I use to get data from the Database all the way to display to the user and back again has some inefficiencies.
The purpose of this question is to make sure that I am following best practices and not just adding extra work in.
Here is the path for data from the DB, to the end user and back.
Service gets it from the database into a Data Access Layer (DAL) object.
Service Converts it to a DataContract to send to the client.
Client gets the DataContract and converts it to a client-side object
Client displays the object / the user makes changes / objects are added
Client converts the client-side object to a DataContract and sends it to the Service
Service receives the DataContract and converts it to a Data Access Layer object.
Service updates the Database with the changes/new objects.
If you were keeping track, the object is converted 4 times (DAL -> Contract -> Client Object -> Contract -> DAL). That seems like a lot of conversions when your app starts to scale out its data.
Is this the "Best" way to do this? Am I missing something?
In case it matters, I am using Visual Studio 2008, WCF, LinqToSQL and Windows Mobile 5.0 (NETCF).
You may be missing the issue of what happens if you reduce the number of conversions (that is, if you couple the layers more tightly together).
The service could directly return a DAL object. The problem is that DAL objects are likely to contain data that is about the fact that they are DAL objects, and not about the data they carry. For instance, LINQ to SQL classes derive from base classes that contain LINQ to SQL functionality - this base class data is not required on the client, and should not be sent.
The client could directly use the DAL object sent back from the server. But that requires the client and server use the same platform - .NET, for instance. They would also have to use compatible versions of .NET, so that the client can use the server-side DAL object.
The client could now display the DAL object however it likes, assuming it doesn't need client-side interfaces like INotifyPropertyChanged. The server doesn't need such code to run, but the client might need it for data binding and validation.
Note that each layer contributes its own requirements. By keeping these requirements independent, the code is easier to design and maintain. Yes, you have to do some copying of data, but that's cheap compared to the cost of maintaining code that has to do four different things at the same time.
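For what it's worth, the copying itself can stay boring and centralized; a minimal sketch (PersonContract is made up, and Person stands in for the generated LINQ to SQL class):

using System.Runtime.Serialization;

[DataContract]
public class PersonContract
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public static class PersonMapper
{
    // LINQ to SQL entity -> wire format (leaves the LINQ to SQL plumbing behind).
    public static PersonContract ToContract(Person entity)
    {
        return new PersonContract { Id = entity.Id, Name = entity.Name };
    }

    // Wire format -> LINQ to SQL entity, ready to attach and submit.
    public static Person ToEntity(PersonContract contract)
    {
        return new Person { Id = contract.Id, Name = contract.Name };
    }
}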
Greetings!
I am using a WCF library on an application server, which is referenced by an IIS server (which is therefore the client). I would like to put my validation in a place where I can just call .Validate(), which returns a string array of errors (field too short, missing, etc.). The problem is, such methods don't cross the WCF boundary, and I really don't want to code the same logic in both the WCF service and the IIS/WCF client. Is there a way to use extension methods or something similar so that both sides can use a .Validate() method which calls the same code?
Many thanks for any ideas!
Steve
If you control both sides of the wire, i.e. the server-side (service) and the client-side, then you could do the following:
put all your service and data contracts into a shared assembly
reference that "Contracts" assembly from both the server and the client
manually create the client proxy (by deriving from ClientBase<T> or by creating it from a ChannelFactory<T>) - do not use "Add Service Reference" or svcutil.exe!
put all validation logic into a shared assembly
reference that shared validation assembly from both projects
If you want to use a shared validation assembly, you must make sure the data types used on your server and client are identical - this can only be accomplished if you also share service and data contracts. Unfortunately, that requires manual creation of the client proxy (which is really not a big deal!).
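A minimal sketch of the whole arrangement (the contract, types, and address are made up):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// In the shared "Contracts" assembly:
[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    Person GetPerson(int id);
}

[DataContract]
public class Person
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// In the shared validation assembly - both sides use the same Person type,
// so the same extension method works on both sides:
public static class PersonValidation
{
    public static string[] Validate(this Person person)
    {
        var errors = new List<string>();
        if (string.IsNullOrEmpty(person.Name)) errors.Add("Name is required.");
        return errors.ToArray();
    }
}

// On the client - no "Add Service Reference" anywhere:
public class Client
{
    public static void Run()
    {
        var factory = new ChannelFactory<IPersonService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://server/PersonService.svc"));
        IPersonService proxy = factory.CreateChannel();

        string[] errors = proxy.GetPerson(42).Validate();
    }
}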
If you'd use "Add Service Reference", then Visual Studio will inspect the service based on its metadata, and create a new set of client-side objects, which look the same in terms of their fields and all, but they're a separate, distinct type, and thus you wouldn't be able to use your shared validation on both the server-side and the client-side objects.
Do you have a problem with sending the data over to the server to be validated? In other words, your service interface actually offers a "Validate" method that takes a data contract full of data, validates it, and returns a List<T> where T is some kind of custom ValidationResult data contract that contains all the info you need about validation warnings/errors.
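In contract terms, that might look like this (the names are made up):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ValidationResult
{
    [DataMember] public string PropertyName { get; set; }
    [DataMember] public string Message { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // The client ships the data across and gets the service's verdict back.
    [OperationContract]
    List<ValidationResult> Validate(CustomerContract customer);
}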
In a service architecture, you can't trust the client, who could theoretically be some other company altogether, to have done proper data validation for you. You always need to do it at the service layer and design for communication of those validation issues back to your client. So if you're doing that work at the server anyway, why not open that logic up to the clients so they can use it directly? Certainly the clients can (should) still do some kind of basic input validation such as checking for null values, empty strings, values out of range, etc, but core business logic checks should be shipped off to the service.