WCF Data Services: Processing an object at save time

We are using a WCF Data Service to broker our data server side, and give third parties easy OData access to our data. The server side of things has been relatively easy. The client side, on the other hand, is giving us fits.
We are converting from regular Entity Framework to Data Services, and we've created an assembly which contains the generated client objects that talk to the data service (via a Service Reference). Those classes are partial, so we've added some logic and extended properties to them. This all works great.
The issue we are having is that we need to process our objects at save time, because they need to do some advanced serialization before they are sent over the wire. The DataServiceContext class contains two events: WritingEntity and ReadingEntity. The ReadingEntity event actually happens at the correct time for us (post object deserialization). The WritingEntity event happens at the WRONG time for us (post object serialization).
Is there any way to catch an object before it's written to the request, so that we can call a method on the entity that is about to be written?
Obviously we could just loop through the Entities list, looking for any entity that is not in a state of Unchanged or Deleted, and call the appropriate method there...but this would require us to add special code every time we wanted to call SaveChanges on the context. This may be what we need to do, but it would be nice if there was a way to catch the entities before they are written to XML for sending to the service.

Currently there's no hook in the DataServiceContext to do what you want. The closest I can think of is the approach you suggested: walking all the entities and finding those which were modified. You could do this in your own SaveChanges-like method on the context class (which is also partial).
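A minimal sketch of that approach, assuming a hypothetical IPreSerialize interface on the partial entity classes and a generated context named MyDataContext:

using System.Data.Services.Client;

// IPreSerialize is an assumed interface; implement it on your partial entity
// classes to run the advanced serialization before the entity goes on the wire.
public interface IPreSerialize
{
    void PrepareForSave();
}

// The generated context class is partial, so the wrapper can live alongside it.
// "MyDataContext" stands in for your generated context's name.
public partial class MyDataContext
{
    public DataServiceResponse SaveChangesWithProcessing()
    {
        foreach (EntityDescriptor descriptor in this.Entities)
        {
            // Skip entities that won't be serialized into the request.
            if (descriptor.State != EntityStates.Added &&
                descriptor.State != EntityStates.Modified)
                continue;

            var entity = descriptor.Entity as IPreSerialize;
            if (entity != null)
                entity.PrepareForSave();
        }

        return this.SaveChanges();
    }
}

Callers then use SaveChangesWithProcessing in place of SaveChanges, so the special code lives in one place instead of at every call site.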

What is the naming convention for DTOs in a web service?

I'm designing a RESTful web service and I was wondering what I should name my DTOs. Can I use suffixes like Request and Response for them? For example, for an addUser service, there would be two DTOs named AddUserRequest and AddUserResponse.
Does your organization already have a schema that describes a canonical user that you pass in? If that's what you're using, of course you would use the name from that schema. Otherwise, describe them just as you would any class or schema element.
Note that since a DTO doesn't contain its own methods, you probably would not give it a name with an action verb.
However, consider calling them AddUserRequest and AddUserResponse, especially if the method requires more info than just your regular user DTO. This fits with the Interface Segregation Principle in that your interface parameters should be specifically tailored to the request itself (it shouldn't require elements that are unrelated to the request, and you shouldn't have function-type parameters that change the request; those should be extracted into their own calls). The AddUserRequest might then contain an element called User that holds the user-specific data, and another element holding the set of other associated data on the request, perhaps groups or access permissions, that sort of thing.
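A hedged sketch of those shapes; everything beyond the AddUserRequest/AddUserResponse names is an illustrative assumption:

public class UserDto
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public class AddUserRequest
{
    public UserDto User { get; set; }    // the user-specific data
    public string[] Groups { get; set; } // other associated request data
}

public class AddUserResponse
{
    public long UserId { get; set; }     // identifier assigned by the service
}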
DTOs (Data Transfer Objects) are like POJOs (Plain Old Java Objects): they should only have getters and setters, not any business logic.
From Wikipedia:
A data transfer object is an object that carries data between processes. The motivation for its use is that communication between processes is usually done resorting to remote interfaces (e.g., web services), where each call is an expensive operation. Because the majority of the cost of each call is related to the round-trip time between the client and the server, one way of reducing the number of calls is to use an object (the DTO) that aggregates the data that would have been transferred by the several calls, but that is served by one call only.
The difference between data transfer objects and business objects or data access objects is that a DTO does not have any behavior except for storage and retrieval of its own data (mutators and accessors). DTOs are simple objects that should not contain any business logic that would require testing.
This pattern is often incorrectly used outside of remote interfaces. This has triggered a response from its author[3] where he reiterates that the whole purpose of DTOs is to shift data in expensive remote calls.
So ideally, any logic for those actions should live in helper classes or controllers rather than in the DTOs themselves.
Since it is a RESTful service, ideally the user addition/creation request should send back a 201 Created HTTP status code, with the userId in the Location header and no response body. For the request DTO, you could name it something like UserDetails or UserData or simply User. See https://pontus.ullgren.com/view/Return_Location_header_after_resource_creation
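A minimal sketch of that pattern using WCF's REST support (webHttpBinding); the contract, User type, and persistence helper are assumptions:

using System;
using System.ServiceModel;
using System.ServiceModel.Web;

public class User { /* the request DTO, e.g. UserDetails/UserData from above */ }

[ServiceContract]
public interface IUserService
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "users")]
    void AddUser(User user);
}

public class UserService : IUserService
{
    public void AddUser(User user)
    {
        long newId = SaveUser(user); // assumed persistence helper returning the identity value

        // 201 Created with the new resource's URI in the Location header; no body.
        WebOperationContext.Current.OutgoingResponse
            .SetStatusAsCreated(new Uri("users/" + newId, UriKind.Relative));
    }

    private long SaveUser(User user) { /* persist and return the new id */ return 1; }
}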

Validating a Self Tracking Entity (EF) through WCF

I'm having trouble defining what my OperationContract should be when adding / updating an entity. I want to send an entity (or list of entities) to the ObjectContext via the WCF Service (which will instantiate a Business Manager for me to do the actual validation).
If the entity passes all of the validation rules (which could very well require querying the database to determine pass/fail for more complex business rules), it'll be saved to the database, and I'll need to be able to pass back its ID (Identity Column primary key) and the value of the concurrency token (timestamp column), but if it fails, obviously we want to have a message or messages saying what was wrong. In the case of an update, all we would need would be the new value of a concurrency token, but again we'd want the validation message(s).
To make it trickier, an entity could have multiple child/grandchild entities as well. For instance, a Trip will have Stops, which could potentially have Orders.
I'm just wondering how people handle this in the real world. The simplest examples just show the WCF service's operations like:
[OperationContract]
bool AddEntity(Entity e);
[OperationContract]
bool UpdateEntity(Entity e);
Does anyone have any great ideas for handling this? I guess I'm really just looking for practical advice here.
Should we be trying to save a collection of objects in one service call?
Should we be conveying the validation messages through a fault contract?
Any advice/input would be helpful, thanks!
Should we be trying to save a collection of objects in one service call?
If you mean saving a whole object graph in one call, then the answer is definitely yes. If you mean saving multiple independent object graphs (a collection) in one call, then the answer is probably yes. It is a good idea to reduce the number of round trips between client and service to a minimum, but at the same time doing this can introduce complications. You must decide whether the whole collection must be saved as an atomic operation, or whether you are happy with saving only part of the collection and returning errors for the rest. This will influence the rest of your architecture.
Should we be conveying the validation messages through a fault contract?
Yes, but only if you make the save operation atomic, because a fault contract is an exception, and the exception should break the current operation and return only the validation errors. It should be enough to have a single fault contract that transfers all the validation errors. Don't fire an exception for each single validation error, because that can make your application pretty annoying and useless.
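A sketch of such a single fault contract, reusing the Entity type from the question; the ValidationFault name and members are assumptions:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ValidationFault
{
    [DataMember]
    public List<string> Errors { get; set; } // all validation messages, delivered in one fault
}

[ServiceContract]
public interface IEntityService
{
    [OperationContract]
    [FaultContract(typeof(ValidationFault))]
    Entity AddEntity(Entity e); // on success, returns the entity with its new id and timestamp
}

// Server side, when validation fails:
// throw new FaultException<ValidationFault>(new ValidationFault { Errors = messages });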
If you want to save only the part of the collection that passes validation and return errors for the rest, you should not use fault contracts. Instead, have a container data contract for the response which carries both the ids and timestamps for saved data and the ids and errors for unsaved data.
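A sketch of such a container contract; all names are assumptions:

using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class SaveResult
{
    [DataMember]
    public List<SavedEntity> Saved { get; set; }       // accepted part of the collection

    [DataMember]
    public List<RejectedEntity> Rejected { get; set; } // failed part, with reasons
}

[DataContract]
public class SavedEntity
{
    [DataMember] public int Id { get; set; }           // identity value from the database
    [DataMember] public byte[] Timestamp { get; set; } // new concurrency token
}

[DataContract]
public class RejectedEntity
{
    [DataMember] public int Id { get; set; }
    [DataMember] public List<string> Errors { get; set; }
}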
One little note on STEs: passing back just the ids and timestamps can be tricky. I'm not sure whether you have to turn off change tracking when you want to set them, and then turn tracking back on afterwards.

Is a shared assembly the only way to create objects from a WCF REST service?

I am writing an application that is consuming an in-house WCF-based REST service and I'll admit to being a REST newbie. Since I can't use the "Add Service Reference", I don't have ready-made proxy objects representing the return types from the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types contained in the assembly to the service contract and DTOs. I don't like to use DTOs just for the sake of using them, though, as they add another layer of abstraction and processing time to convert from domain object to DTO and vice versa. Plus, if I want to have business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app to call the method and not the client but this seems "smelly."
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs, and share these between the server and the client. You did mention that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes which contain data which need to be passed between server and client. Classes have no code apart from properties. These are short lived objects which survive long enough only to transfer data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist both on server and client are DTO classes.
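A hedged sketch of how those packages might fit together; every class and member name here is an illustrative assumption:

// DTO package - shared by server and client; data only, no behavior.
public class DocumentDto
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Assumed service contract exposed by the server.
public interface IDocumentService
{
    DocumentDto GetDocument(int id);
    void PutDocument(DocumentDto dto);
}

// Model package - client only; long-lived business objects.
public class Document
{
    public int Id { get; private set; }
    public string Title { get; set; }

    public Document(DocumentDto dto)
    {
        Id = dto.Id;
        Title = dto.Title;
    }

    public DocumentDto ToDto()
    {
        return new DocumentDto { Id = Id, Title = Title };
    }
}

// Repository package - client only; calls the server and maps DTO <-> Model.
public class DocumentRepository
{
    private readonly IDocumentService _service; // assumed WCF client proxy

    public DocumentRepository(IDocumentService service)
    {
        _service = service;
    }

    public Document Get(int id)
    {
        return new Document(_service.GetDocument(id)); // DTO in, Model out
    }

    public void Save(Document document)
    {
        _service.PutDocument(document.ToDto());        // Model in, DTO out
    }
}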
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain the existing clients and to create new clients that consume my service. They are used to clicking "Add Service Reference" and going.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with their own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
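A minimal sketch of that round trip; the type names are illustrative, and the key is that both classes declare the same data contract name and namespace:

using System.IO;
using System.Runtime.Serialization;

// Server-side type: has behavior, never leaves the server.
[DataContract(Name = "Document", Namespace = "http://example.org/docs")]
public class ServerDocument
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }

    public void Publish() { /* server-only behavior, not serialized */ }
}

// Client-side type: same contract name and namespace, different class.
[DataContract(Name = "Document", Namespace = "http://example.org/docs")]
public class ClientDocument
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }
}

public static class RoundTripDemo
{
    // XML written from one type deserializes cleanly into the other.
    public static ClientDocument Copy()
    {
        var stream = new MemoryStream();
        new DataContractSerializer(typeof(ServerDocument))
            .WriteObject(stream, new ServerDocument { Id = 1, Title = "Spec" });
        stream.Position = 0;
        return (ClientDocument)new DataContractSerializer(typeof(ClientDocument))
            .ReadObject(stream);
    }
}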

Can a WCF service transmit type information (when the client doesn't know the type)?

I'm working on a simple plug-in framework. The WCF client needs to create an instance of 'ISubject' and then send it back to the service side. 'ISubject' can be extended by the user. The only thing the client knows at runtime is the ID of a subclass of 'ISubject'.
Firstly, the client needs to get the type information of a specific subclass of 'ISubject'. Secondly, the client uses reflection to enumerate all of its members and build a custom property editor so that each member can be assigned a proper value. Lastly, the client creates an instance of that subclass and sends it back to the service.
The problem is: how does the client get the type information through WCF communication?
I don't want the client to load the assembly in which the subclass of 'ISubject' is defined.
Thanks
First, you need to be aware that there is no magic way that WCF will provide any type information to your client in the scenario you have described. If you are going to do it, you will have to provide a mechanism yourself.
Next, understand that WCF does not really pass objects from server to client or vice versa. All it passes are XML infosets. Often, the XML infoset passed includes a serialized representation of some object which existed on the sender's side; in this case, if the client knows about that type (i.e. can load the type's metadata from its assembly), it can deserialize the XML to instantiate an identical object on the client side. If the client doesn't have the type metadata, it can't: this is the normal case with WCF unless data contract types are in assemblies shared by both server and client implementations (generally not a good idea).
The way WCF is normally used (for example if the client is implemented using a "Service Reference" in Visual Studio), what happens is that the service publishes WSDL metadata describing its operations and the XML schemas for the operation parameters and return values, and from these a set of types is generated for use in the client implementation. These are NOT the same .NET types as the data contract types used by the service implementation, but they are "equivalent" in the sense that they can be serialized to the same XML data passed over the network. Normally this type generation is done at design time in Visual Studio.
In order to do what you are trying to do, which is essentially to do this type generation at runtime, you will need some mechanism by which the client can get sufficient knowledge of the structure of the XML representing the various types of object implementing ISubject so that it can understand the XML received from the service and generate the appropriate XML the service is expecting back (either working with the XML directly, or deserializing/serializing it in some fashion). If you really, really want to do this, possible ways might be:
some out-of-band mechanism whereby the client is preconfigured with the relevant type information corresponding to each subclass of ISubject that it might see. The link provided in blindmeis's answer is one way to do that.
provide a separate service operation by which the client can translate the ID of the subclass to type metadata for the subclass (perhaps as an XSD schema from which the client could generate a suitable serializable .NET type to round trip the XML).
it would also be feasible in principle for the service to pass type metadata in some format within the headers of the response containing the serialized object. The client would need to read, interpret and act on the type information in an appropriate fashion.
Whichever way, it would be a lot of effort and is not the standard way of using WCF. You will have to decide if it's worth it.
I think you might be missing something :)
A major concept with web services and WCF is that we can pass our objects across the network, and the client can work with the same objects as the server. Additionally, when a client adds a service reference in Visual Studio, the server will send the client all the details it needs to know about any types which will be passed across the network.
There should be no need for reflection.
There's a lot to cover, but I suggest you start with this tutorial which covers WCF DataContracts - http://www.codeproject.com/KB/WCF/WCFHostingAndConsuming.aspx
To deserialize an object the receiving side will need to have the assembly the type is defined in.
Perhaps you should consider some type of remoting or proxying setup where the instance of ISubject lives on one side and the other side calls back to it. This may be problematic if you need to marshal large amounts of data across the wire.
WCF needs to know the real object type (not an interface!) that will be sent across the wire, so you have to satisfy both the server AND the client proxy side of the WCF service that they know the types. If you don't know the object type while creating the WCF service, you have to find a way to supply it dynamically. I use the solution from here to get the known types into my WCF service.
[ServiceContract(SessionMode = SessionMode.Required)]
[ServiceKnownType("GetServiceKnownTypes", typeof(KnownTypeHelper))] // <-- !!!
public interface IWCFService
{
    [OperationContract(IsOneWay = false)]
    object DoSomething(object obj);
}
If you have something "universal" like the code above, you have to be sure that whatever your object turns out to be at runtime, your WCF service knows that type.
You wrote that your client creates a subclass and sends it back to the service. If you want to do that, WCF (client proxy and server!) needs to know the real type of your subclass.
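For completeness, a hedged sketch of what such a KnownTypeHelper might look like. The method signature is the one ServiceKnownTypeAttribute requires; the plug-in-folder discovery strategy is an assumption:

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

public static class KnownTypeHelper
{
    // ServiceKnownTypeAttribute requires exactly this shape: a static method
    // taking an ICustomAttributeProvider and returning IEnumerable<Type>.
    public static IEnumerable<Type> GetServiceKnownTypes(ICustomAttributeProvider provider)
    {
        var types = new List<Type>();

        // Assumed strategy: scan a plug-in folder for concrete ISubject implementations.
        foreach (string file in Directory.GetFiles("Plugins", "*.dll"))
        {
            foreach (Type type in Assembly.LoadFrom(file).GetTypes())
            {
                if (typeof(ISubject).IsAssignableFrom(type) && !type.IsAbstract)
                    types.Add(type);
            }
        }

        return types;
    }
}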

Perceived inefficiencies in data translation in web services

I have been writing web services for about a year now and it seems that the process I use to get data from the Database all the way to display to the user and back again has some inefficiencies.
The purpose of this question is to make sure that I am following best practices and not just adding extra work in.
Here is the path for data from the DB, to the end user and back.
Service gets it from the database into a Data Access Layer (DAL) object.
Service Converts it to a DataContract to send to the client.
Client gets the DataContract and converts it to a client side object
Client displays the object / the user makes changes / objects are added
Client converts the client side object to a DataContact and sends it to the Service
Service receives the DataContract and converts it to a Data Access Layer object.
Service updates the Database with the changes/new objects.
If you were keeping track, the object is converted 4 times (DAL -> Contract -> Client Object -> Contract -> DAL). That seems like a lot of conversions when your app starts to scale out its data.
Is this the "Best" way to do this? Am I missing something?
In case it matters, I am using Visual Studio 2008, WCF, LinqToSQL and Windows Mobile 5.0 (NETCF).
You may be missing the issue of what happens if you reduce the number of conversions (that is, if you couple the layers more tightly together).
The service could directly return a DAL object. The problem is that DAL objects are likely to contain data that is about the fact that they are DAL objects, and not about the data they carry. For instance, LINQ to SQL classes derive from base classes that contain LINQ to SQL functionality - this base class data is not required on the client, and should not be sent.
The client could directly use the DAL object sent back from the server. But that requires the client and server use the same platform - .NET, for instance. They would also have to use compatible versions of .NET, so that the client can use the server-side DAL object.
The client could now display the DAL object however it likes, assuming it doesn't need client-side interfaces like INotifyPropertyChanged. The server doesn't need such code to run, but the client might need it for data binding and validation.
Note that each layer contributes its own requirements. By keeping these requirements independent, the code is easier to design and maintain. Yes, you have to do some copying of data, but that's cheap compared to the cost of maintaining code that has to do four different things at the same time.
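To make that trade-off concrete, a minimal sketch of one of the four conversions (DAL -> Contract); the Customer types are illustrative assumptions:

// Customer stands in for a LINQ to SQL generated class;
// CustomerContract is the WCF data contract for the same data.
public static class CustomerMapper
{
    public static CustomerContract ToContract(Customer dal)
    {
        // A straight property copy: the LINQ to SQL plumbing (change tracking,
        // EntityRef/EntitySet fields) stays on the server side.
        return new CustomerContract
        {
            Id = dal.Id,
            Name = dal.Name,
            Email = dal.Email
        };
    }

    public static Customer ToDal(CustomerContract contract)
    {
        return new Customer
        {
            Id = contract.Id,
            Name = contract.Name,
            Email = contract.Email
        };
    }
}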