WCF Business Objects or DataContracts

I have three projects:
WCF Service project (Interface and Implementation)
aspx web project (client) that consumes the WCF Service
class library project that holds my business objects (shared by both WCF project and client)
I have a method in the WCF Service implementation class file that retrieves a generic list of data from SQL (referencing the project that holds the business objects), serializes the data using System.Web.Script.Serialization.JavaScriptSerializer, and returns the result as a string.
The web client takes this string and deserializes it back into the appropriate business object (referencing the project that holds the business objects).
This is an intranet app and I want to make sure I am doing this correctly.
My questions are:
Should I be using DataContracts instead of business objects? I'm not sure when to use DataContracts and when to use the business objects.
If I am using DataContracts, should I not use System.Web.Script.Serialization.JavaScriptSerializer?
Any clarification would be appreciated.

Of course, there is no single answer. I think the question is whether you want to use business objects in the first place; otherwise my fourth point pretty much covers it.
Do use the business objects if they look like the data contracts would, i.e. they are a bunch of public properties and do not contain collections of children/grandchildren, etc.
Don't use the business objects if they contain a bunch of data you don't need. For example, populating a grid with hundreds of entities begs for a data contract specific to that grid.
Do use the business objects if they contain validation logic etc that you would otherwise have to duplicate in your web service.
Do use the business objects if you are just going to use the data contracts to fully inflate business objects anyway.
Don't use the business objects if you ever want to consume that service interface from non .net code.
Don't use the business objects if you have to massively configure their serialization.
Don't use the business objects if they need to "know" where they are (web server or app server).
Not your case but: Do use the business objects if you are building a rich client for data entry.
That's all for now; I'll see if anything more occurs to me. :)
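On the second question above: if you expose data contracts from the service, you generally would not use System.Web.Script.Serialization.JavaScriptSerializer at all - you return the typed objects from the operation and let WCF's DataContractSerializer handle the wire format. A minimal sketch, with invented names (CustomerDto, ICustomerService), just to show the shape:

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class CustomerDto
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        // WCF serializes the list itself; no manual string
        // serialization is needed on either the service or the client.
        [OperationContract]
        List<CustomerDto> GetCustomers();
    }

The client then works with CustomerDto objects directly instead of deserializing a string.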

Related

Entity Framework Code First DTO or Model to the UI?

I am creating a brand new application, including the database, and I'm going to use Entity Framework Code First. It will also use WCF for services, which opens it up to multiple UIs for different devices, as well as making the service API usable by other, as-yet-unknown apps.
I have seen this batted around in several posts here on SO, but I don't see direct questions or answers pertaining to Code First, although there are a few mentioning POCOs. I am going to ask the question again, so here it goes: do I really need DTOs with Entity Framework Code First, or can I use the model as a set of common entities for all boundaries? I am really trying to follow the YAGNI train of thought, so while I have a clean sheet of paper I figured I would get this out of the way first.
Thanks,
Paul Speranza
There is no definitive answer to this problem, which is also the reason why you didn't find one.
Are you going to build services providing CRUD operations? That generally means your services will return, insert, update and delete entities as they are, i.e. you will always expose the whole entity (or a single, exactly defined serializable part of it) to all clients. If you go this route, it is probably worth checking out WCF Data Services.
Are you going to expose a business facade working with entities? The facade will provide real business methods instead of just CRUD operations. These business methods will receive some data object and decompose it into multiple entities inside the wrapped business logic. Here it makes sense to use a specific DTO for every operation. The DTO will transfer only the data needed for the operation and return only the data the client is allowed to see.
A very simple example: suppose that your entities keep information like LastModifiedBy. This is probably information you want to pass back to the client. In the first scenario you have a single serializable set, so you will pass it to the client and the client will pass it back, modified, to the service. Now you must verify that the client didn't change the field, because the client probably didn't have permission to do that - and you must do this for every single field the client wasn't allowed to change. In the second scenario, the DTO carrying the updated data will simply not include this property (a specialized DTO for your operation), so the client will not be able to send you a new value at all.
It is also related to how you want to work with the data and where your real logic will be applied. Will it be on the service or on the client? How will you ensure that the client does not post invalid data? Do you want to guard against invalid data with logic or with specifically shaped transfer objects?
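To make the LastModifiedBy example concrete, here is a sketch of a specialized update DTO under the second scenario (the names are illustrative, not from any real project): the protected field is simply not part of the contract, so the client cannot post it back.

    using System.Runtime.Serialization;

    [DataContract]
    public class UpdateOrderDto
    {
        [DataMember]
        public int OrderId { get; set; }

        // Only fields the client may change are included;
        // LastModifiedBy is deliberately absent from this contract.
        [DataMember]
        public string ShippingAddress { get; set; }

        [DataMember]
        public string Notes { get; set; }
    }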
I strongly recommend a dedicated view model.
Doing this means:
You can design the UI (and iterate on it) without having to wait to design the data model first.
There is less friction when you want to change the UI.
You can avoid security problems with auto-mapping/model binding "accidentally" updating fields which shouldn't be editable by the user -- just don't put them in the view model.
However, with a WCF Data Service, it's hard to ignore the advantage of being able to write the service in essentially one line when you expose entities directly. So that might make the most sense for the WCF/server side.
But when it comes to UI, you're "gonna need it."
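For reference, this is roughly what the near one-line service looks like when entities are exposed directly through a WCF Data Service (assuming an EF context class called MyContext; the wide-open read rule is only for illustration):

    using System.Data.Services;
    using System.Data.Services.Common;

    // Exposes the EF model as an OData endpoint with almost no code.
    public class MyDataService : DataService<MyContext>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Read-only access to every entity set; tighten as needed.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }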
do I really need DTOs with Entity Framework Code First or can I use the model as a set of common entities for all boundaries?
Yes, the same set of POCOs / entities can be used for all boundaries.
But a set of mappers / converters / configurators will be needed to adapt the entities to the generic structures of each layer.
For example, when entities are configured with DataContract and DataMember attributes, WCF is able to transfer domain objects' state without creating any special classes.
Similarly, when entities are mapped using Entity Framework fluent mapping api, EF is able to persist domain objects' state in database without creating any special classes.
In the same way, entities can be configured for use in any layer by means of that layer's infrastructure, without creating any special classes.
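As a rough sketch (entity and property names invented), the same POCO can carry the WCF attributes while an EF fluent configuration class keeps the persistence mapping outside the entity:

    using System.Data.Entity.ModelConfiguration;   // EF fluent mapping (EF 4.1+)
    using System.Runtime.Serialization;

    [DataContract]
    public class Customer
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }
    }

    // Fluent mapping keeps persistence concerns out of the entity itself.
    public class CustomerConfiguration : EntityTypeConfiguration<Customer>
    {
        public CustomerConfiguration()
        {
            HasKey(c => c.Id);
            Property(c => c.Name).HasMaxLength(100).IsRequired();
        }
    }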

Is shared assembly the only way to create objects from WCF REST service

I am writing an application that is consuming an in-house WCF-based REST service and I'll admit to being a REST newbie. Since I can't use the "Add Service Reference", I don't have ready-made proxy objects representing the return types from the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types contained in the assembly to the service contract and DTOs. I don't like to use DTOs just for the sake of using them, though, as they add another layer of abstraction and the processing time to convert from domain object to DTO and vice versa. Plus, if I want business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that lets my service app, but not the client, call the method - but this seems "smelly."
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs, and share it between the server and the client. You did mention that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours, where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes holding the data that needs to be passed between server and client. The classes have no code apart from properties; they are short-lived objects that survive only long enough to transfer data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist both on server and client are DTO classes.
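A rough sketch of how those packages might relate on the client side (all type names here are placeholders, not code from the question):

    // DTO package - shared by server and client; properties only.
    public class DocumentDto
    {
        public int Id { get; set; }
        public string Title { get; set; }
    }

    // Model package - client only; long-lived business object with behaviour.
    public class Document
    {
        public int Id { get; set; }
        public string Title { get; set; }

        public bool CanPublish()
        {
            // Client-side rules/validation live here.
            return !string.IsNullOrEmpty(Title);
        }
    }

    // Hypothetical proxy for the server's REST/WCF interface.
    public interface IDocumentServiceProxy
    {
        DocumentDto GetDocument(int id);
    }

    // Repository package - client only; calls the server and maps DTO <-> Model.
    public class DocumentRepository
    {
        private readonly IDocumentServiceProxy _service;

        public DocumentRepository(IDocumentServiceProxy service)
        {
            _service = service;
        }

        public Document Get(int id)
        {
            DocumentDto dto = _service.GetDocument(id);
            return new Document { Id = dto.Id, Title = dto.Title };
        }
    }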
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain the existing code and to create new clients that consume my service. They are used to "Add Service Reference" and going from there.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with their own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
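As a rough illustration of that layout (the contract below is invented for the example, not the actual project code), the shared Contracts assembly holds the service contracts and data contracts, and the client's service agent is the only piece that talks WCF:

    using System.ServiceModel;
    using System.ServiceModel.Web;

    // Shared Contracts assembly: service contracts + data contracts only.
    [ServiceContract]
    public interface IDocumentService
    {
        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "documents/{id}/publish")]
        void Publish(string id);
    }

    // Client-side service agent: the UI layer calls this, never WCF directly.
    public class DocumentServiceAgent
    {
        private readonly IDocumentService _channel;

        public DocumentServiceAgent(IDocumentService channel)
        {
            _channel = channel;
        }

        public void Publish(int documentId)
        {
            _channel.Publish(documentId.ToString());
        }
    }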
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
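A small sketch of what this looks like in practice (type names and namespace invented): two different classes that deserialize interchangeably because they declare the same data contract.

    using System.Runtime.Serialization;

    // Server-side type.
    [DataContract(Name = "Document", Namespace = "http://schemas.example.com/docs")]
    public class Document
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Title { get; set; }
    }

    // Client-side type: a different class, but with the same contract name,
    // namespace and members, so the serializer treats it as equivalent on the wire.
    [DataContract(Name = "Document", Namespace = "http://schemas.example.com/docs")]
    public class DocumentModel
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Title { get; set; }
    }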

WCF Data Contracts DTO

In my application I am making a service call and getting back a populated WCF data contract object. I have to display this data in a grid. Is it good practice to bind the data contract to the grid?
Josh
Is it good practice to bind the data contract to the grid?
Yes. There is nothing wrong with what you are doing.
Let me elaborate: what you have received back from the WCF service is a standard object (sometimes referred to as a DTO - Data Transfer Object). You have not received a DataContract - you have received an object that used a DataContract to control the serialization process between the WCF service and your client. The DataContract can control or dictate what you get, but once you have that object you are free to treat it as you wish.
Assuming that all of your DTOs are friendly for data binding, you shouldn't have a problem binding your WCF DTOs to a grid.
Some scenarios where you might not want to bind directly to your DTOs are:
Your DTOs are not easy to bind with their current definition (e.g. nested objects/properties)
You need to support notification of changes to the binding client (typically done using INotifyPropertyChanged)
You wish to insulate your UI code from changes to the WCF DTOs. This could be because you don't control the DTO definition or you expect frequent changes to the DTO definitions and you don't want to frequently change your UI code. Of course, if the DTO does change then you will have to modify code but you could isolate those changes to a small translation layer.
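To illustrate the second and third points above, a minimal sketch of a view model wrapping a DTO (OrderDto and the property names are invented):

    using System.ComponentModel;

    // DTO as returned by the WCF service; no binding support of its own.
    public class OrderDto
    {
        public int Id { get; set; }
        public string CustomerName { get; set; }
    }

    // View model wraps the DTO and adds change notification for the UI.
    public class OrderViewModel : INotifyPropertyChanged
    {
        private readonly OrderDto _dto;

        public OrderViewModel(OrderDto dto)
        {
            _dto = dto;
        }

        public string CustomerName
        {
            get { return _dto.CustomerName; }
            set
            {
                _dto.CustomerName = value;
                OnPropertyChanged("CustomerName");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

If the DTO definition changes, only this thin wrapper needs to be updated, not every piece of UI code bound to it.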
I'd recommend the use of view models for any data binding or data display (MVVM), whether for server-side rendering (e.g. MVC) or client-side (JavaScript) rendering.
The main risk of using the DTOs returned by the domain is that if the DTOs are refactored for any reason (e.g. properties are renamed, extracted into other objects, or several objects are merged into one) the UI will break and will require updating.
DTOs are the contract for what is returned by your domain, whereas view models are the contract for what the UI requires. The two are driven by different requirements, and although those requirements can be applied to the same set of objects, the result is usually a mixture that is just wrong - not to mention that requirements which apply only to the UI or only to the domain will trigger changes in the other party.
For example, views often require data from multiple DTOs, or different views require different subsets of data from the same DTO; in both cases the DTO should not change only to accommodate what a particular view requires.
It is also easier to identify the requirements for a view if each view has a view model, rather than having the same DTO passed into several views.
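For example (invented types), a single view model might pull together only the fields one particular view needs from two separate DTOs:

    // DTOs returned by the domain/service layer.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class OrderSummaryDto
    {
        public int OpenOrders { get; set; }
    }

    // View model shaped purely by what this one view displays.
    public class CustomerDashboardViewModel
    {
        public string CustomerName { get; set; }
        public int OpenOrders { get; set; }

        public static CustomerDashboardViewModel From(CustomerDto customer, OrderSummaryDto orders)
        {
            return new CustomerDashboardViewModel
            {
                CustomerName = customer.Name,
                OpenOrders = orders.OpenOrders
            };
        }
    }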

WCF and Inheritance

I'm working on a project where I have an abstract class called Appointment. There are Workouts, Meals and Measurements that all derive from Appointment. My architecture looks like this so far:
DAO - with the data access layer being Entity Framework 4 right now
POCO classes using the T4 templates
WCF
Silverlight client, ASP.NET MVP, mobile clients
Would I put business rules in the POCO classes? Or map my entities to business objects with rules and then map those to DTOs and pass those through WCF? And when I pass the DTOs, do I pass them as type Appointment? Or write a service method for each subclass like Workout or Meal?
I haven't found any good material on using table-per-type inheritance with WCF.
thanks in advance!
-ajax
It mainly depends on the complexity you require. You are using POCO classes, which is a good starting point. You now have to choose how complex an application you are going to build, how much business logic you want to add, and what you want to expose to your clients.
The POCO entity can be just a DTO, or you can turn the POCO entity into a business object by adding business methods and rules directly to that entity - transforming it into the Active Record pattern or a domain object. I don't see any reason to map your POCOs to another set of business objects.
Exposing the POCO entity in the WCF service is the simplest way. You can use operations that work directly with the Appointment class. Additionally, you have to give your service information about all the classes derived from Appointment - check KnownTypeAttribute and ServiceKnownTypeAttribute. Using entities often means that service calls transport more data than is needed - this can be a problem for mobile clients with slow internet connections. There is one special point you have to be aware of when exposing an entity which is an aggregation root (it contains references to other entities and collections of entities): if you don't have full control over the client applications and you allow clients to send back a fully modified object graph, you have to validate not only each entity but also that the client changed only what it was allowed to. Example: suppose that a client wants to modify an Order entity. You send the Order with all of its OrderItem entities, and each item has a reference to its Product entity - a full object graph. What happens if, instead of modifying the Order and OrderItems, the client changes one of the Products (for example, the price)? If you don't check for this in the business logic exposed by WCF and you pass the modified object graph into the EF context, it will modify the price in your database.
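A short sketch of the known-types part (the class names follow the question; the members are invented):

    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    [KnownType(typeof(Workout))]
    [KnownType(typeof(Meal))]
    [KnownType(typeof(Measurement))]
    public abstract class Appointment
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public DateTime ScheduledFor { get; set; }
    }

    [DataContract]
    public class Workout : Appointment
    {
        [DataMember]
        public int DurationMinutes { get; set; }
    }

    [DataContract]
    public class Meal : Appointment
    {
        [DataMember]
        public int Calories { get; set; }
    }

    [DataContract]
    public class Measurement : Appointment
    {
        [DataMember]
        public double Weight { get; set; }
    }

    [ServiceContract]
    public interface IAppointmentService
    {
        // The operation is typed against the base class; the KnownType
        // attributes let the serializer handle the derived types.
        [OperationContract]
        Appointment GetAppointment(int id);
    }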
If you decide to use your entities as business objects, you usually don't expose the entities themselves; instead you create a larger set of DTOs. Each operation works with a precisely defined DTO for its request and response. That DTO carries only the information that is really needed - this reduces the data payload of service calls and avoids the modified-price problem, because you simply define your DTO so that it does not transfer the price, or even the whole Product, from the client. This solution is much more time-consuming to implement and adds an additional layer of complexity.
Because I have mentioned object graphs, I must clarify that there is another hidden level of complexity when using them: change tracking. The EF context needs to know what has changed in the object graph (at least which OrderItem was modified, which was added or deleted, etc.) for correct persistence. Tracking changes in a multi-tier solution is a challenge. The simplest solution does not track changes and instead uses an additional query to EF: this query returns the currently persisted state of the object graph, and the modified object graph is merged with it (special care is needed for concurrency checks). Other solutions use some tracking support in the entity - check Tracking changes in POCO and Self-tracking entities. But this is only for entities; if you want to track changes in a DTO, you have to implement your own change tracking. You can also read these MSDN Magazine articles about multi-tier applications and EF:
Anti-Patterns To Avoid In N-Tier Applications;
Building N-Tier Apps with EF4

Perceived Inefficiencies in Data Translation in Web Services

I have been writing web services for about a year now, and it seems that the process I use to get data from the database all the way to the user and back again has some inefficiencies.
The purpose of this question is to make sure that I am following best practices and not just adding extra work in.
Here is the path for data from the DB, to the end user and back.
Service gets it from the database into a Data Access Layer (DAL) object.
Service converts it to a DataContract to send to the client.
Client gets the DataContract and converts it to a client-side object.
Client displays the object / the user makes changes / objects are added.
Client converts the client-side object to a DataContract and sends it to the Service.
Service receives the DataContract and converts it to a Data Access Layer object.
Service updates the database with the changes/new objects.
If you were keeping track, the object is converted four times (DAL -> Contract -> Client Object -> Contract -> DAL). That seems like a lot of conversions when your app starts to scale out its data.
Is this the "Best" way to do this? Am I missing something?
In case it matters, I am using Visual Studio 2008, WCF, LinqToSQL and Windows Mobile 5.0 (NETCF).
You may be missing the issue of what happens if you reduce the number of conversions (that is, if you couple the layers more tightly together).
The service could directly return a DAL object. The problem is that DAL objects are likely to contain data that is about the fact that they are DAL objects, and not about the data they carry. For instance, LINQ to SQL classes derive from base classes that contain LINQ to SQL functionality - this base class data is not required on the client, and should not be sent.
The client could directly use the DAL object sent back from the server. But that requires the client and server use the same platform - .NET, for instance. They would also have to use compatible versions of .NET, so that the client can use the server-side DAL object.
The client could now display the DAL object however it likes, assuming it doesn't need client-side interfaces like INotifyPropertyChanged. The server doesn't need such code to run, but the client might need it for data binding and validation.
Note that each layer contributes its own requirements. By keeping these requirements independent, the code is easier to design and maintain. Yes, you have to do some copying of data, but that's cheap compared to the cost of maintaining code that has to do four different things at the same time.
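To show what "cheap" means in practice, each conversion is typically just a few lines of property copying. A hedged sketch of step 2 from the flow above, with invented names:

    using System.Runtime.Serialization;

    // DAL object (stands in for a designer-generated LINQ to SQL class).
    public class CustomerEntity
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }

    // DataContract sent over WCF to the client.
    [DataContract]
    public class CustomerContract
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }
    }

    public static class CustomerMapper
    {
        // Step 2 of the flow: DAL object -> DataContract.
        public static CustomerContract ToContract(CustomerEntity entity)
        {
            return new CustomerContract
            {
                Id = entity.CustomerId,
                Name = entity.Name
            };
        }
    }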