Is a shared assembly the only way to create objects from a WCF REST service?

I am writing an application that is consuming an in-house WCF-based REST service and I'll admit to being a REST newbie. Since I can't use the "Add Service Reference", I don't have ready-made proxy objects representing the return types from the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types contained in the assembly to the service contract and DTOs. I don't like to use DTOs just for the sake of using them, though, as they add another layer of abstraction and the processing time to convert from domain object to DTO and vice versa. Plus, if I want to have business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app to call the method but not the client, but this seems "smelly."
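For illustration, that "smelly" approach would look roughly like this (the assembly name is hypothetical):

// AssemblyInfo.cs of the shared domain assembly;
// "MyCompany.DocumentService" stands in for the service application's assembly name
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyCompany.DocumentService")]

public class Document
{
    public int ID { get; set; }

    // callable by the service app via InternalsVisibleTo, invisible to other client assemblies
    internal void Publish()
    {
        // update internal state...
    }
}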
Am I on the right track or completely missing something?

The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs and share it between the server and the client. You did mention that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes which hold the data that needs to be passed between server and client. These classes have no code apart from properties; they are short-lived objects which survive only long enough to transfer data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs and vice versa (a sketch of this split follows at the end of this answer).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist both on server and client are DTO classes.
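To make the package split concrete, here is a minimal sketch (all class names are hypothetical, not taken from the question):

// DTO package: shared by server and client; properties only, no behavior
[DataContract]
public class DocumentDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }
}

// Model package: client only; can carry validation and business rules
public class DocumentModel
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Repository package: client only; calls the service and maps DTOs to Model objects
public class DocumentRepository
{
    public DocumentModel GetById(int id)
    {
        DocumentDto dto = CallService(id); // however you invoke the service
        return new DocumentModel { Id = dto.Id, Title = dto.Title };
    }

    private DocumentDto CallService(int id) { /* WCF call goes here */ return null; }
}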

I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain existing clients and to create new clients that consume my service. They are used to clicking "Add Service Reference" and going.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with their own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
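A rough sketch of that server-side Publish flow, with hypothetical names for the repository and contract:

// implements the service contract from the shared Contracts assembly
public class DocumentService : IDocumentService
{
    private readonly IDocumentRepository _repository;

    public DocumentService(IDocumentRepository repository)
    {
        _repository = repository;
    }

    public void Publish(int documentId)
    {
        Document document = _repository.GetById(documentId);
        document.Publish();           // domain behavior never leaves the server
        _repository.Update(document); // persist the updated state
    }
}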
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.

You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
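For example, something along these lines should round-trip, as long as the contract name and namespace match on both sides (the values here are hypothetical):

// server side: the domain object
[DataContract(Name = "Document", Namespace = "http://example.com/docs")]
public class Document
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }

    internal void Publish() { /* server-only behavior, never serialized */ }
}

// client side: a different type with an equivalent data contract
[DataContract(Name = "Document", Namespace = "http://example.com/docs")]
public class DocumentModel
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }
}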

Related

WCF Business Objects or DataContracts

I have three projects:
WCF Service project (Interface and Implementation)
aspx web project (client) that consumes the WCF Service
class library project that holds my business objects (shared by both WCF project and client)
I have a method in the WCF Service implementation class file that retrieves a generic list of data from SQL (referencing the project that holds the business objects), serializes the data using System.Web.Script.Serialization.JavaScriptSerializer, and returns the result as a string.
The web client takes this string and deserializes it back to the appropriate business object (referencing the project that holds the business objects).
This is an intranet app and I want to make sure I am doing this correctly.
My questions are:
Should I be using DataContracts instead of business objects? Not sure when to use DataContracts and when to use the business objects.
If I am using DataContracts, should I not use System.Web.Script.Serialization.JavaScriptSerializer?
Any clarification would be appreciated.
Of course there is no one answer. I think the real question is whether you want to use business objects in the first place; otherwise, my fourth point pretty much covers it.
Do use the business objects if they look like the data contracts would, i.e. they are a bunch of public properties and do not contain collections of children/grandchildren, etc.
Don't use the business objects if they contain a bunch of data you don't need. For example, populating a grid with hundreds of entities begs for a data contract specific to that grid (a sketch follows after this list).
Do use the business objects if they contain validation logic etc that you would otherwise have to duplicate in your web service.
Do use the business objects if you are just going to use the data contracts to fully inflate business objects anyway.
Don't use the business objects if you ever want to consume that service interface from non .net code.
Don't use the business objects if you have to massively configure their serialization.
Don't use the business objects if they need to "know" where they are (web server or app server).
Not your case but: Do use the business objects if you are building a rich client for data entry.
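To illustrate the second point, a data contract tailored to one grid might look like the following (hypothetical names); WCF's DataContractSerializer then handles the wire format, with no JavaScriptSerializer string in sight:

[DataContract]
public class CustomerGridRow
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public DateTime LastOrderDate { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // returns typed data instead of a manually serialized string
    [OperationContract]
    List<CustomerGridRow> GetCustomersForGrid();
}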
That's all for now; I'll see if anything more occurs to me. :)

Interaction of services in the service layer

What is the best way to organize interaction between services in the service layer?
For example, I have a document service and a product service. In my case, products can have their own documents, and to manage a product's documents I call the appropriate methods of the document service from within the product service. So I need to create an instance of the document service in the product service. And I need to call some methods of the product service in the document service too. So each of these services refers to the other, and I get a StackOverflowException.
Which design solutions should I use to eliminate this problem?
Application Services are supposed to provide external clients an API for executing cohesive business operations. An application service method generally matches a use case of your application.
While an application service operation may require calling another service (e.g., the Create Product use case includes the Create Document use case, which can also be called separately), this is not the norm, and you should look to make your application services as cohesive as possible. In particular, just because at some point in your business case you start to manipulate another kind of entity doesn't mean you should delegate that part to another application service - in other words, one application service per entity is not necessarily right.
In any case, from your domain it should appear clearly in which direction the dependency between two application services points. In your example, Product Service seems to depend on Document Service - it's difficult to imagine why it would be the other way around.
If you really need a round-trip between service A and service B (which I wouldn't do unless I have no other option), you could try and have the instance of A inject itself into B instead of relying on a DI container to resolve the dependency with a new instance, solving the stack overflow problem - if that's why you get a stack overflow in the first place.
Obviously, circular dependencies are wrong.
You can use shared identifiers to decouple Products and Documents.
Moreover, you can orchestrate the service interaction from outside them, in the application: in the ProductService you can have a LoadProducts(ProductIdentifiers[] identifiers) returning an immutable collection of products, and in the DocumentService you can have a LoadDocuments(DocumentIdentifiers[] identifiers) returning an immutable collection of documents.
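A minimal sketch of that shape (the identifier types and the DocumentIdentifiers member are hypothetical):

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    Product[] LoadProducts(ProductIdentifiers[] identifiers);
}

[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    Document[] LoadDocuments(DocumentIdentifiers[] identifiers);
}

// orchestrated in the application layer rather than service-to-service:
Product[] products = productService.LoadProducts(productIds);
DocumentIdentifiers[] docIds = products.SelectMany(p => p.DocumentIdentifiers).ToArray();
Document[] documents = documentService.LoadDocuments(docIds);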

Need some advice for a web service API?

My company has a product that I feel can benefit from a web service API. We are using MSMQ to route messages back and forth through the backend system. Currently we are building an ASP.Net application that communicates with a web service (WCF) that, in turn, talks to MSMQ for us. Later on down the road, we may have other client applications (not necessarily written in .Net). The message going into MSMQ is an object that has a property made up of an array of strings. There is also a property that contains the command (a string) that will be routed through the system. Personally, I am not a huge fan of this, but I was told it is for scalability and every system can use strings.
My thought regarding the web services was to model some objects based on our data that can be passed into and out of the web services so they are easily consumed by the client. Initially, I was passing the message object, mentioned above, with the array of strings in it. I found that I was creating objects on the client to represent that data, making the client responsible for creating those objects. I feel the web service layer should really be handling this. That is how I have always worked with services. I did this so it was easier for me to move data around the client.
It was recommended to our group that we maintain the "single entry point" into the system by offering an object that contains commands and having one web service to take care of everything. So the web service would have one method in it (let's call it MakeRequest) and it would return an object (either serialized XML or JSON). The suggestion was to have a base object that may contain some sort of list of commands that other objects can inherit from. Any other object may have its own command structure, but still inherit base commands. What is passed back from the service is not clear right now, but it could be that "message object" with an object attached to it representing the data. I don't know.
My recommendation was to model our objects after our actual data and create services for the types of data we are working with. We would create a base service interface that would house any common methods used for all services. So for example, GetById, GetByName, GetAll, Save, etc. Anything specific to a given service would be implemented for that specific implementation. So a User service may have a method GetUserByUsernameAndPassword, but since it implements the base interface it would also contain the “base” methods. We would have several methods in a service that would return the type of object expected, based on the service being called. We could house everything in one service, but I still would like to get something back that is more usable. I feel this approach leaves the client out of making decisions about what commands to be passed. When I connect to a User service and call the method GetById(int id) I would expect to get back a User object.
I had the luxury of working with MS when I started developing WCF services. So, I have a good foundation and understanding of the technology, but I am not the one designing it this time.
So, I am not opposed to the “single entry point” idea, but any thoughts about why either approach is more scalable than the other would be appreciated. I have never worked with such a systematic approach to a service layer before. Maybe I need to get over that?
I think there are merits to both approaches.
Typically, if you are writing an API that is going to be consumed by a completely separate group of developers (perhaps in another company), then you want the API to be as self-explanatory and discoverable as possible. Having specific web service methods that return specific objects is much easier to work with from the consumer's perspective.
However, many companies use web services as one of many layers to their applications. In this case, it may reduce maintenance to have a generic API. I've seen some clever mechanisms that require no changes whatsoever to the service in order to add another column to a table that is returned from the database.
My personal preference is for the specific API. I think that the specific methods are much easier to work with - and are largely self-documenting. The specific operation needs to be executed at some point, so why not expose it for what it is? You'd get laughed at if you wrote:
public void MyApiMethod(string operationToPerform, params object[] args)
{
    switch (operationToPerform)
    {
        case "InsertCustomer":
            InsertCustomer(args);
            break;
        case "UpdateCustomer":
            UpdateCustomer(args);
            break;
        ...
        case "Juggle5BallsAtOnce":
            Juggle5BallsAtOnce(args);
            break;
    }
}
So why do that with a Web Service? It'd be much better to have:
public void InsertCustomer(Customer customer)
{
    ...
}

public void UpdateCustomer(Customer customer)
{
    ...
}

...

public void Juggle5BallsAtOnce(bool useApplesAndEatThemConcurrently)
{
    ...
}

Can WCF service transmit type (client doesn't know this type) information?

I'm working on a simple plug-in framework. The WCF client needs to create an instance of 'ISubject' and then send it back to the service side. 'ISubject' can be extended by the user. The only thing the client knows at runtime is the ID of a subclass of 'ISubject'.
First, the client needs to get type information for a specific subclass of 'ISubject'. Second, the client uses reflection to enumerate all members to create a custom property editor so that each member can be assigned a proper value. Lastly, the client creates an instance of that subclass and sends it back to the service.
The problem is: how does the client get the type information through WCF communication?
I don't want the client to load the assembly where the subclass of 'ISubject' exists.
Thanks
First, you need to be aware that there is no magic way that WCF will provide any type information to your client in the scenario you have described. If you are going to do it, you will have to provide a mechanism yourself.
Next, understand that WCF does not really pass objects from server to client or vice versa. All it passes are XML infosets. Often, the XML infoset passed includes a serialized representation of some object which existed on the sender's side; in this case, if the client knows about that type (i.e. can load the type's metadata from its assembly), it can deserialize the XML to instantiate an identical object on the client side. If the client doesn't have the type metadata, it can't: this is the normal case with WCF unless data contract types are in assemblies shared by both server and client implementations (generally not a good idea).
The way WCF is normally used (for example if the client is implemented using a "Service Reference" in Visual Studio), what happens is that the service publishes WSDL metadata describing its operations and the XML schemas for the operation parameters and return values, and from these a set of types is generated for use in the client implementation. These are NOT the same .NET types as the data contract types used by the service implementation, but they are "equivalent" in the sense that they can be serialized to the same XML data passed over the network. Normally this type generation is done at design time in Visual Studio.
In order to do what you are trying to do, which is essentially to do this type generation at runtime, you will need some mechanism by which the client can get sufficient knowledge of the structure of the XML representing the various types of object implementing ISubject so that it can understand the XML received from the service and generate the appropriate XML the service is expecting back (either working with the XML directly, or deserializing/serializing it in some fashion). If you really, really want to do this, possible ways might be:
some out-of-band mechanism whereby the client is preconfigured with the relevant type information corresponding to each subclass of ISubject that it might see. The link provided in blindmeis's answer is one way to do that.
provide a separate service operation by which the client can translate the ID of the subclass to type metadata for the subclass (perhaps as an XSD schema from which the client could generate a suitable serializable .NET type to round-trip the XML) - a sketch follows after this list.
it would also be feasible in principle for the service to pass type metadata in some format within the headers of the response containing the serialized object. The client would need to read, interpret and act on the type information in an appropriate fashion.
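A sketch of that second option (the contract and method are hypothetical; the schema format is your choice):

[ServiceContract]
public interface ISubjectMetadataService
{
    // translates the ID of an ISubject subclass into an XSD schema
    // describing the XML representation the service expects
    [OperationContract]
    string GetSubjectSchema(string subjectTypeId);
}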
Whichever way, it would be a lot of effort and is not the standard way of using WCF. You will have to decide if it's worth it.
I think you might be missing something :)
A major concept with web services and WCF is that we can pass our objects across the network, and the client can work with the same objects as the server. Additionally, when a client adds a service reference in Visual Studio, the server will send the client all the details it needs to know about any types which will be passed across the network.
There should be no need for reflection.
There's a lot to cover, but I suggest you start with this tutorial which covers WCF DataContracts - http://www.codeproject.com/KB/WCF/WCFHostingAndConsuming.aspx
To deserialize an object, the receiving side will need to have the assembly the type is defined in.
Perhaps you should consider some type of remoting or proxying setup where the instance of ISubject lives on one side and the other side calls back to it. This may be problematic if you need to marshal large amounts of data across the wire.
WCF needs to know the real object type (not an interface!) that will be sent across the wire, so you have to make sure both the server and the client proxy side of the WCF service know the types. If you don't know the object type while creating the WCF service, you have to find a way to supply it dynamically. I use the solution from here to get the known types into my WCF service.
[ServiceContract(SessionMode = SessionMode.Required)]
[ServiceKnownType("GetServiceKnownTypes", typeof(KnownTypeHelper))] //<--!!!
public interface IWCFService
{
    [OperationContract(IsOneWay = false)]
    object DoSomething(object obj);
}
If you have something "universal" like the code above, you have to be sure that, whatever your object turns out to be at runtime, your WCF service knows that type.
You wrote that your client creates a subclass and sends it back to the service. If you want to do that, WCF (client proxy and server!) needs to know the real type of your subclass.
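For reference, the provider method named in the ServiceKnownType attribute must be static, take an ICustomAttributeProvider, and return an IEnumerable<Type>; the concrete types returned here are hypothetical:

static class KnownTypeHelper
{
    public static IEnumerable<Type> GetServiceKnownTypes(ICustomAttributeProvider provider)
    {
        // e.g. discover ISubject implementations in plug-in assemblies at runtime
        return new[] { typeof(SubjectA), typeof(SubjectB) };
    }
}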

Can Server-side and Client-side WCF Share Validation Library?

Greetings!
I am using a WCF library on an application server, which is referenced by an IIS server (which is therefore the client). I would like to put my validation in a place so that I can just call .Validate(), which returns a string array of errors (field too short, missing, etc.). The problem is, such functions don't cross the WCF boundary, and I really don't want to code the same logic in the WCF service and in the IIS/WCF client. Is there a way to use extension methods or something similar so both sides can use a .Validate() method which calls the same code?
Many thanks for any ideas!
Steve
If you control both sides of the wire, i.e. the server-side (service) and the client-side, then you could do the following:
put all your service and data contracts into a shared assembly
reference that "Contracts" assembly from both the server and the client
manually create the client proxy (by deriving from ClientBase<T> or by creating it from a ChannelFactory<T>) - do not use "Add Service Reference" or svcutil.exe!
put all validation logic into a shared assembly
reference that shared validation assembly from both projects
If you want to use a shared validation assembly, you must make sure the data types used on your server and client are identical - this can only be accomplished if you also share service and data contracts. Unfortunately, that requires manual creation of the client proxy (which is really not a big deal!).
If you'd use "Add Service Reference", then Visual Studio would inspect the service based on its metadata and create a new set of client-side objects, which look the same in terms of their fields and all, but are separate, distinct types, and thus you wouldn't be able to use your shared validation on both the server-side and the client-side objects.
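Creating the proxy manually is only a couple of lines. A sketch, assuming a shared IDocumentService contract and a client endpoint configuration named "documentEndpoint":

// IDocumentService comes from the shared Contracts assembly
ChannelFactory<IDocumentService> factory =
    new ChannelFactory<IDocumentService>("documentEndpoint");

IDocumentService proxy = factory.CreateChannel();
Document doc = proxy.GetById(42); // same type on both sides, so a shared .Validate() works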
Do you have a problem with sending the data over to the server to be validated? In other words, your service interface actually offers the "Validate" method and takes a data contract full of data, validates it, and returns a List<T> where T is some kind of custom ValidationResult data contract that contains all the info you need about validation warnings/errors.
In a service architecture, you can't trust the client, who could theoretically be some other company altogether, to have done proper data validation for you. You always need to do it at the service layer and design for communication of those validation issues back to your client. So if you're doing that work at the server anyway, why not open that logic up to the clients so they can use it directly? Certainly the clients can (should) still do some kind of basic input validation, such as checking for null values, empty strings, values out of range, etc., but core business logic checks should be shipped off to the service.
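A sketch of what such a service-side Validate operation might look like (all names hypothetical):

[ServiceContract]
public interface IDocumentValidationService
{
    [OperationContract]
    List<ValidationResult> Validate(DocumentDto document);
}

[DataContract]
public class ValidationResult
{
    [DataMember] public string FieldName { get; set; }
    [DataMember] public string Message { get; set; }
}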