I have a WCF service in one project and an object model that holds all my objects in another project. I add a reference to the object model in the service project and can use the objects in my service without incident.
When I publish the service and other users consume it, they are able to send data that violates the schema and the service does not fail.
I need the service to be tied to the object model: if users do not adhere to the schema of the objects, the service should fail automatically.
I'm not sure whether I have to set some configuration, maybe in the web.config?
What I am not understanding is this: if I set a property on an object to required, and the user does not include that property on the object passed to the service, why isn't the service automatically rejecting the call?
[DataMember(IsRequired = true)]
public string VendorName { get; set; }
WCF automated approaches
To automate the WCF validation against its WSDL contract, you could use the WsdlExporter as shared in this MSDN blog.
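As a minimal sketch (not the blog's exact code), WsdlExporter can be used to pull the XML schemas out of a service contract so they can later be fed into a validation step; the IVendorService name used when calling it is an assumption, not something from the original post.

using System;
using System.ServiceModel.Description;
using System.Xml.Schema;

public static class ContractSchemaExporter
{
    // Exports the XML schemas that describe a service contract's data contracts,
    // so they can later be used to validate incoming messages.
    public static XmlSchemaSet Export(Type contractType)
    {
        var exporter = new WsdlExporter();
        exporter.ExportContract(ContractDescription.GetContract(contractType));
        return exporter.GeneratedXmlSchemas;
    }
}

Calling ContractSchemaExporter.Export(typeof(IVendorService)) would hand back the schema set; actually validating requests against it is a separate step, such as the behavior extensions described below.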
WCF raw approaches
You could use a WCF schema validation behavior extension. A custom BehaviorExtension lets you enforce validation of incoming messages against a defined schema.
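A rough sketch of the message-level half of that idea, assuming an IDispatchMessageInspector that is handed an XmlSchemaSet (the SchemaValidationInspector name and the way the schemas are supplied are mine, not from the linked sample):

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Xml;
using System.Xml.Schema;

public class SchemaValidationInspector : IDispatchMessageInspector
{
    private readonly XmlSchemaSet _schemas;

    public SchemaValidationInspector(XmlSchemaSet schemas)
    {
        _schemas = schemas;
    }

    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        // Buffer the message so it can be validated and then handed on untouched.
        MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
        Message copy = buffer.CreateMessage();

        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        settings.Schemas.Add(_schemas);
        settings.ValidationEventHandler += (sender, e) =>
        {
            // Reject the call instead of silently accepting invalid data.
            throw new FaultException(e.Message);
        };

        using (XmlReader validatingReader = XmlReader.Create(copy.GetReaderAtBodyContents(), settings))
        {
            while (validatingReader.Read()) { }   // schema errors surface via the handler above
        }

        request = buffer.CreateMessage();          // restore the original message for dispatch
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
    }
}

Registering the inspector still takes an endpoint or service behavior (plus a BehaviorExtensionElement if it should be switched on from web.config), which is what the behavior-extension approach above refers to.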
You could also use a WCF parameter validation behavior extension to enforce parameter constraints.
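Along the same lines, a parameter-level sketch might look like the following: an IParameterInspector that checks each incoming request object, attached per operation through a custom attribute. The ValidateParametersAttribute name and the simple null check are illustrative only; real checks would go where the comment indicates.

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Runs before each operation and rejects calls whose parameters fail basic checks.
public class RequiredParameterInspector : IParameterInspector
{
    public object BeforeCall(string operationName, object[] inputs)
    {
        foreach (object input in inputs)
        {
            if (input == null)
            {
                throw new FaultException("A required parameter was not supplied for " + operationName + ".");
            }
            // Additional per-type checks (e.g. required string members) would go here.
        }
        return null;   // no correlation state needed
    }

    public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState)
    {
    }
}

// Apply this attribute to an operation to plug the inspector into the dispatch pipeline.
[AttributeUsage(AttributeTargets.Method)]
public class ValidateParametersAttribute : Attribute, IOperationBehavior
{
    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
    {
        dispatchOperation.ParameterInspectors.Add(new RequiredParameterInspector());
    }

    public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation) { }
    public void Validate(OperationDescription operationDescription) { }
}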
See MSDN for WCF Input/Data Validation FAQ.
WCF Validation Commentary
Also review this great SO post regarding why WCF input/data validation isn't performed.
The Four Tenets of XML Messaging with WCF also provides an interesting perspective on Schema validation.
Related
My WCF service has an API to create an 'Employee' object which needs to be sent to the client app. This object has a set of methods and properties. The client needs to access the methods in order to set its fields (the API has some validation logic for setting those fields). How can a WCF service send a custom object so that the client is able to access its methods?
The design here is that my WCF service provides a 'template' (from the API) to the client; the client uses the object's methods to set/update fields and then sends it back to the service.
If the objects you send and receive have logic associated with them (not a very good idea), you will need the assembly where those objects are implemented on both sides, since the metadata exposed by WCF only describes fields, not methods.
I'd split that in two: keep the data contracts clean, and if you need validation logic, either do it in the WCF service and return errors to the client, or do it in the client, but that adds extra logic to the client that you'll need to provide.
I'd go with validation logic on the server and clean data contracts. It's the best way to ensure your services are interoperable.
It's not a good idea to return objects from a WCF service that contain any functions. Keep the data contract simple by having only fields (properties); if any additional operation is needed, make it available as part of the operation contract.
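A minimal sketch of what that split might look like, using hypothetical Employee/IEmployeeService names based on the question:

using System.Runtime.Serialization;
using System.ServiceModel;

// Data contract: data only, no methods - this is all the client's proxy will see anyway.
[DataContract]
public class Employee
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string Department { get; set; }
}

// Behaviour lives in the operation contract instead of on the object.
[ServiceContract]
public interface IEmployeeService
{
    [OperationContract]
    Employee CreateEmployeeTemplate();          // hands the client a "template" to fill in

    [OperationContract]
    Employee UpdateEmployee(Employee employee); // server-side validation runs here
}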
I am writing an application that is consuming an in-house WCF-based REST service and I'll admit to being a REST newbie. Since I can't use the "Add Service Reference", I don't have ready-made proxy objects representing the return types from the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types contained in the assembly to the service contract and DTOs. I don't like to use DTOs just for the sake of using them, though, as they add another layer of abstraction and processing time to convert from domain object to DTO and vice versa. Plus, if I want to have business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app to call the method and not the client but this seems "smelly."
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs, and share these between the server and the client. You did mention that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes which hold the data that needs to be passed between server and client. The classes have no code apart from properties. These are short-lived objects which survive only long enough to transfer data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist both on server and client are DTO classes.
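To make the package split concrete, here is a rough sketch under assumed names (DocumentDto, Document, DocumentRepository, IDocumentService); none of these come from the original posts.

using System.Runtime.Serialization;

// Assumed service contract exposed by the server.
public interface IDocumentService
{
    DocumentDto GetDocument(int id);
}

// DTO package - shared by server and client; data only, no behaviour.
[DataContract]
public class DocumentDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }
}

// Model package - client only; can carry validation and business rules.
public class Document
{
    public int Id { get; private set; }
    public string Title { get; set; }

    public Document(int id, string title)
    {
        Id = id;
        Title = title;
    }

    public bool IsValid()
    {
        return !string.IsNullOrWhiteSpace(Title);
    }
}

// Repository package - client only; calls the service and maps DTOs to Model objects.
public class DocumentRepository
{
    private readonly IDocumentService _service;   // proxy for the WCF service

    public DocumentRepository(IDocumentService service)
    {
        _service = service;
    }

    public Document GetById(int id)
    {
        DocumentDto dto = _service.GetDocument(id);
        return new Document(dto.Id, dto.Title);
    }
}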
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain the existing clients and to create new clients that consume my service. They are used to clicking "Add Service Reference" and going from there.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with their own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
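A compressed sketch of both halves of that Publish flow, under assumed names (IDocumentService, DocumentServiceAgent, IDocumentRepository); the poster's real contracts are not shown here.

using System.ServiceModel;

// Shared Contracts assembly (assumed shape).
[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    void Publish(int documentId);
}

// Client side: the ViewModel/command only talks to this agent.
public class DocumentServiceAgent
{
    private readonly IDocumentService _service;   // channel created from the shared contract

    public DocumentServiceAgent(IDocumentService service)
    {
        _service = service;
    }

    public void Publish(int documentId)
    {
        _service.Publish(documentId);   // relays the request to the REST Publish operation
    }
}

// Minimal domain stubs so the sketch hangs together.
public class Document
{
    public int Id { get; set; }
    public bool IsPublished { get; private set; }
    public void Publish() { IsPublished = true; }   // real rules live here on the server
}

public interface IDocumentRepository
{
    Document GetById(int id);
    void Update(Document document);
}

// Server side: the service works with the domain object and repository.
public class DocumentService : IDocumentService
{
    private readonly IDocumentRepository _repository;

    public DocumentService(IDocumentRepository repository)
    {
        _repository = repository;
    }

    public void Publish(int documentId)
    {
        Document document = _repository.GetById(documentId);
        document.Publish();               // domain logic stays on the server
        _repository.Update(document);     // persist the updated state
    }
}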
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
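For example (the contract Name/Namespace values below are made up), a server-side type and a differently shaped client-side type can share one data contract and therefore the same wire format:

using System.Runtime.Serialization;

// Server-side type, with behaviour that never leaves the server.
[DataContract(Name = "Document", Namespace = "http://example.com/documents")]
public class Document
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }

    public void Publish() { /* server-only behaviour */ }
}

// Client-side type: a different class, but the same contract name/namespace and members,
// so it deserializes from exactly the same XML.
[DataContract(Name = "Document", Namespace = "http://example.com/documents")]
public class DocumentModel
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }

    public bool IsDirty { get; set; }   // client-only state, not serialized
}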
I am developing an application that exposes a WCF service using the Message/Response pattern for service methods. The application is using Unity 2.0 for dependency injection and the Validation Application Block from MS Patterns & Practices. I've already gotten Unity tied into WCF using a custom HttpModule I picked up from several websites a while back and everything works great.
In my service interface I have a method such as:
DoSomethingResponse DoSomething(DoSomethingRequest request)
I can easily attach VAB attributes to the service contract to verify that 'request' is never null but I also want to validate the contents of the request object.
To do this, I inject the validator into the DoSomethingRequest constructor and include an internally scoped IsValid property which handles interacting with the VAB validator. Unfortunately, this constructor doesn't get called because WCF deserializes the object and constructors aren't used.
Without getting into the merits of having the request object be a simple DTO versus having some server-side business logic, is there a way to cleanly inject dependencies into an object passed into WCF service as an argument?
If I'm understanding your issue correctly, you have properties on DoSomethingRequest that are instances of some other classes (data contracts) and you want to validate your data contracts as well? Is there some reason you can't just apply validation attributes to your data contract classes as well? This is the approach I've used when using WCF with VAB integration and it's worked out quite nicely.
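Roughly what that looks like, assuming the Enterprise Library validators are referenced and the VAB/WCF integration behavior is already applied to the contract (the member name is illustrative):

using System.Runtime.Serialization;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

// Validation attributes applied directly to the data contract members,
// so the VAB/WCF integration can validate the deserialized request.
[DataContract]
public class DoSomethingRequest
{
    [DataMember(IsRequired = true)]
    [NotNullValidator]
    [StringLengthValidator(1, 50)]
    public string VendorName { get; set; }
}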
So it turns out that adding the validation attributes to my DataContract actually works with no additional code. Unfortunately, it doesn't work if validation is defined in the app's config file (app.config or web.config).
As a result, I've stripped out the constructor injection and IsValid property on my DataContract (request object) which makes it more of an annotated DTO which I think is preferred anyway. I only wish that it would work the same with the XML configuration.
I am not understanding how my model can be a WCF service. It makes sense when it's an Astoria partial class residing on the client that allows remote calls to do persistence, but a WCF service doesn't have properties for model fields that can be used to update a data store.
Even if I could factor out an interface for a model/domain object class into a separate assembly, a Silverlight project will not allow me to add that as a reference.
How should my ViewModel encompass my WCF calls? Ultimately the WCF will call a repository assembly implemented in Linq-to-Sql, but apparently those entities are not my model in this scenario, my WCF classes are?
Thanks for any guidance on this.
Also, posts I have read to give a frame of reference:
http://development-guides.silverbaylabs.org/Video/Silverlight-Prism#videolocation_0
http://blogs.conchango.com/davidwynne/archive/2008/12/15/silverlight-and-the-view-viewmodel-pattern.aspx
http://msdn.microsoft.com/en-us/magazine/dd458800.aspx
When you create a service reference to a WCF service in a Silverlight project, it also generates an interface for that service; this is similar to David Wynne's IFeedService in the articles you listed above. The service reference will also generate proxy objects that represent the objects used by the service (Product, Category etc.).
The important thing to note is that the service interface isn't the model, it's how you access the model. Going back to David's example, his ViewModel exposes a list of items (his model), this list is retrieved using the service.
If you're looking to share code between the client and server, I'd recommend looking into something like RIA Services. If this isn't for you, then I'd look at a few of the articles around about sharing code between the server and client (via Add as Link).
Hope this helps
Greetings!
I am using a WCF library on an application server, which is referenced by an IIS server (which is therefore the client). I would like to put my validation in a place so that I can just call .Validate(), which returns a string array of errors (field too short, missing, etc.). The problem is, such functions don't cross the WCF boundary and I really don't want to code the same logic in the WCF service and in the IIS/WCF client. Is there a way to use extension methods or something similar so both sides can use a .Validate() method which calls the same code?
Many thanks for any ideas!
Steve
If you control both sides of the wire, i.e. the server-side (service) and the client-side, then you could do the following:
put all your service and data contracts into a shared assembly
reference that "Contracts" assembly from both the server and the client
manually create the client proxy (by deriving from ClientBase<T> or by creating it from a ChannelFactory<T>) - do not use "Add Service Reference" or svcutil.exe!
put all validation logic into a shared assembly
reference that shared validation assembly from both projects
If you want to use a shared validation assembly, you must make sure the data types used on your server and client are identical - this can only be accomplished if you also share service and data contracts. Unfortunately, that requires manual creation of the client proxy (which is really not a big deal!).
If you'd use "Add Service Reference", then Visual Studio will inspect the service based on its metadata, and create a new set of client-side objects, which look the same in terms of their fields and all, but they're a separate, distinct type, and thus you wouldn't be able to use your shared validation on both the server-side and the client-side objects.
Do you have a problem with sending the data over to the server to be validated? In other words, your service interface actually offers the "Validate" method and takes a data contract full of data, validates it and returns a List<T> where T is some kind of custom ValidationResult data contract that contains all the info you need about validation warnings/errors.
In a service architecture, you can't trust the client, who could theoretically be some other company altogether, to have done proper data validation for you. You always need to do it at the service layer and design for communication of those validation issues back to your client. So if you're doing that work at the server anyway, why not open that logic up to the clients so they can use it directly? Certainly the clients can (should) still do some kind of basic input validation such as checking for null values, empty strings, values out of range, etc, but core business logic checks should be shipped off to the service.
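One way that service-side validation operation might be shaped (all names here are illustrative, not from the question):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// A validation result the service can hand back to any client.
[DataContract]
public class ValidationIssue
{
    [DataMember] public string PropertyName { get; set; }
    [DataMember] public string Message { get; set; }
}

[DataContract]
public class VendorData
{
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface IVendorService
{
    // Clients send the data across and get the server's authoritative verdict back.
    [OperationContract]
    List<ValidationIssue> Validate(VendorData data);

    [OperationContract]
    void Save(VendorData data);   // would run the same validation internally before persisting
}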