I am looking to develop a .NET WCF API, and we may need to update the APIs frequently.
How do we manage deployment of multiple versions of the API?
Versioning your services is a huge topic with many considerations and guidelines.
For a start, there are different classes of changes you can make; fully-breaking, semi-breaking, and non-breaking.
Non-breaking changes (no change needed to existing clients) include:
changing the internal implementation of the service while keeping the exposed contract unchanged
changing the contract types in a way which does not break clients, for example, by adding fields to your operation return types (most serializers will raise an event rather than throw an exception when encountering an unexpected field on deserialization)
polymorphically exposing new types (using ServiceKnownType attribute)
changing the instance management settings of the service (per-call to singleton, sessionless to sessionful etc, although sometimes this will require configuration or even code changes)
Semi-breaking changes (usually can be configured on the client) include:
changing the location of a service
changing the transport type a service is exposed across (although changing from a bi-directional to a uni-directional transport, e.g. HTTP to MSMQ, can be a fully-breaking change)
changing the availability of the service (through use of service windows etc)
Fully-breaking changes (need new version of the client) include:
changing service operation signatures
changing exposed types in a breaking manner (removing fields, etc)
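To make these categories concrete, here is a minimal sketch of a non-breaking contract change in WCF; the OrderSummary type and its members are invented for illustration:

    using System.Runtime.Serialization;

    [DataContract]
    public class OrderSummary
    {
        [DataMember]
        public int OrderId { get; set; }

        [DataMember]
        public decimal Total { get; set; }

        // Added in a later release. IsRequired = false (the default) means
        // existing clients that never send or expect this field still work;
        // most deserializers simply ignore the unexpected element.
        [DataMember(IsRequired = false)]
        public string TrackingCode { get; set; }
    }

By contrast, removing OrderId or changing Total to a string would be a fully-breaking change of the kind listed above.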
When you are going to make a semi- or fully-breaking change, you should evaluate the best way of doing this. Do you force all your clients to upgrade to the new version, or do you co-host both versions of the service at different endpoints? If you choose the latter, how will you control and manage the propagation of the different versioning dependencies this may introduce?
Taken to an extreme, you could look into dynamic endpoint resolution, whereby the client resolves the suitable endpoint to call at runtime using some kind of resolver service.
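A minimal sketch of that idea, assuming a hypothetical IEndpointResolver service which maps a logical service name to its current address (all names are illustrative; this is not a built-in WCF facility):

    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        string GetOrder(int id);
    }

    // Hypothetical resolver contract; in practice this could be backed by
    // UDDI, a routing service or a simple database lookup.
    [ServiceContract]
    public interface IEndpointResolver
    {
        [OperationContract]
        string ResolveAddress(string logicalServiceName);
    }

    public static class DynamicClient
    {
        // Ask the resolver where "OrderService" currently lives, then open
        // a channel to that address at runtime rather than baking the
        // endpoint into client config. (Real code would cache the factory.)
        public static IOrderService CreateOrderServiceChannel(IEndpointResolver resolver)
        {
            string address = resolver.ResolveAddress("OrderService");
            var factory = new ChannelFactory<IOrderService>(
                new BasicHttpBinding(),
                new EndpointAddress(address));
            return factory.CreateChannel();
        }
    }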
There's good reading about this here:
http://msdn.microsoft.com/en-us/library/ms731060.aspx
I am planning to use gRPC to build my search API, but I am wondering how the gRPC service definition files (e.g. .proto) are kept in sync between the server and the clients (assuming they all use different technologies).
Also, if the server changes one of the .proto files, how will the clients be notified to regenerate their stubs in accordance with those changes?
To summarize: how do you share the definitions (.proto) with clients, and how are clients notified when changes to those files occur?
Simple: they aren't. All sync here is manual and usually requires a rebuild and redeploy, after you've become aware of a change, and have updated your .proto files.
Without updating, the fields and methods that you know about should at least keep working. You just won't have the new bits.
Note also: while you can extend schemas by adding new fields and services / methods, if you change the meaning of a field, or the field type, or the message types on a service: expect things to go very badly wrong.
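As a sketch of those rules in a .proto file (the message and field names are invented):

    syntax = "proto3";

    message SearchRequest {
      string query = 1;
      int32 page = 2;

      // Safe: a new field with a NEW field number. Peers built against the
      // old definition simply ignore it until they regenerate their stubs.
      int32 page_size = 3;
    }

    // Unsafe, by contrast: reusing or renumbering an existing field, or
    // changing its type (e.g. turning "query" into an int32), silently
    // corrupts data for any peer still built against the old definition.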
I'm about to begin writing a suite of WCF services for a variety of business applications. This SOA will be very immature to begin with and eventually evolve into a strong middle-ware layer.
Unfortunately I do not have the luxury of writing a full set of services and then refactoring applications to use them; it will be an iterative process done over time. The question I have is around evolving (changing, adding, removing properties of) business objects.
For example: you have an SOA exposing a service that returns obj1, and that service is being consumed by app1, app2 and app3. Imagine that object is changed for app1; I don't want to have to update app2 and app3 for changes made for app1. If the change adds a property it will work fine (the new property will simply not be mapped), but what happens when you remove a property? Or change a property from a string to an int? How do you manage the change?
Thanks in advance for your help.
PS: I did do a little picture but apparently I need a reputation of 10 so you will have to use your imagination...
The goal is to limit the changes you force your clients to have to make immediately. They may eventually have to make some changes, but hopefully it is only under unavoidable circumstances like they are multiple versions behind and you are phasing it out altogether.
Non-breaking changes can be:
Adding optional properties (see the sketch after these lists)
Adding new operations
Breaking changes include:
Adding required properties
Removing properties
Changing data types of properties
Changing name of properties
Removing operations
Renaming operations
Changing the order of the properties if explicitly specified
Changing bindings
Changing service namespace
Changing the meaning of the operation. What I mean by this, for example, is if the operation always returned all records but was changed to return only certain records. This would break the client's expected response.
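One WCF feature worth knowing in this context: implementing IExtensibleDataObject on your data contracts lets an older client round-trip properties it does not know about, which makes the adding-optional-properties case above even safer. A minimal sketch (the Customer type is hypothetical):

    using System.Runtime.Serialization;

    [DataContract]
    public class Customer : IExtensibleDataObject
    {
        [DataMember]
        public string Name { get; set; }

        // When an old client deserializes a newer version of this type,
        // unknown fields are parked here instead of being lost, and are
        // written back out if the object is serialized again.
        public ExtensionDataObject ExtensionData { get; set; }
    }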
Options:
Add a new operation to handle the updated properties and logic. Modify the code behind the original operation to set the new properties, and refactor the service logic if you can. Just remember not to change the meaning of the operation.
If you want to remove an operation that you no longer support, you are forcing the client to change at some point. You could add documentation in the WSDL to let clients know that the operation is being deprecated. If you are letting the client use your contract DLL, you could use the [Obsolete] attribute (it is not emitted into the final WSDL, which is why you can't just use it for everything).
If it is a big change altogether, a new version of the service and/or interface and endpoint can easily be created, i.e. v2, v3, etc. Then you can have the clients upgrade to the new version when the time is right.
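A sketch of that side-by-side approach with invented names; the key is giving each version its own contract namespace and endpoint so clients can migrate independently:

    using System.ServiceModel;

    [ServiceContract(Namespace = "http://example.com/orders/v1")]
    public interface IOrderServiceV1
    {
        [OperationContract]
        string GetOrder(int id);
    }

    [ServiceContract(Namespace = "http://example.com/orders/v2")]
    public interface IOrderServiceV2
    {
        [OperationContract]
        string GetOrder(int id, bool includeHistory);
    }

    // One implementation can host both contracts at different endpoints
    // (e.g. .../orders/v1 and .../orders/v2), so v1 clients keep working
    // untouched while new clients move to v2.
    public class OrderService : IOrderServiceV1, IOrderServiceV2
    {
        public string GetOrder(int id) { return GetOrder(id, false); }
        public string GetOrder(int id, bool includeHistory) { return "..."; }
    }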
Here is also a good flowchart from “Apress - Pro WCF4: Practical Microsoft SOA Implementation” that may help.
I am writing an application that is consuming an in-house WCF-based REST service and I'll admit to being a REST newbie. Since I can't use the "Add Service Reference", I don't have ready-made proxy objects representing the return types from the service methods. So far the only way I've been able to work with the service is by sharing the assembly containing the data types exposed by the service.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types contained in the assembly to the service contract and DTOs. I don't like to use DTOs just for the sake of using them, though, as they add another layer of abstraction plus the processing time to convert from domain object to DTO and vice versa. Also, if I want to have business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app to call the method and not the client but this seems "smelly."
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs, and share it between the server and the client. You did mention that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of adding extra layers is negligible, and layering actually helps make your application easier to develop and maintain (avoiding situations like yours, where the client has access to server code).
I suggest starting with the following packages:
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes holding the data that needs to be passed between server and client. The classes have no code apart from properties; they are short-lived objects which survive only long enough to transfer data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name); the only classes which should exist on both server and client are the DTO classes.
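To make the split concrete using the Document example from the question (a minimal sketch; type and member names are illustrative):

    using System.Runtime.Serialization;

    // DTO package - shared between server and client. Data only.
    [DataContract]
    public class DocumentDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Title { get; set; }
    }

    // Model package - client only. Carries client-side validation, but
    // deliberately has no Publish method: publishing is server logic.
    public class DocumentModel
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public bool IsValid { get { return !string.IsNullOrEmpty(Title); } }
    }

    // Repository package - client only. Calls the service and translates
    // between DTOs and Model objects.
    public class DocumentRepository
    {
        public DocumentModel ToModel(DocumentDto dto)
        {
            return new DocumentModel { Id = dto.Id, Title = dto.Title };
        }

        public DocumentDto ToDto(DocumentModel model)
        {
            return new DocumentDto { Id = model.Id, Title = model.Title };
        }
    }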
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain existing clients and to create new clients that consume my service. They are used to doing "Add Service Reference" and going.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with their own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts so that a client can easily create a channel to the server (using WCF on the client-side) and data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object which updates the internal state of the object and then the service passes the updated object to the Update method of the repository to persist the changes.
I am pleased with the outcome as I believe this gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern over polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
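A minimal sketch of that technique: two distinct classes that declare the same data contract name and namespace can be used on opposite ends of the wire (all names here are invented):

    using System.IO;
    using System.Runtime.Serialization;

    // Server-side type; behavior such as Publish never crosses the wire.
    [DataContract(Name = "Document", Namespace = "http://example.com/docs")]
    public class ServerDocument
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Title { get; set; }
        public void Publish() { /* server-only logic */ }
    }

    // Client-side type: same contract, different class, no Publish method.
    [DataContract(Name = "Document", Namespace = "http://example.com/docs")]
    public class ClientDocument
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Title { get; set; }
    }

    public static class ContractDemo
    {
        // Serialize as the server type, deserialize as the client type;
        // this works because the two classes share one data contract.
        public static ClientDocument RoundTrip(ServerDocument source)
        {
            var stream = new MemoryStream();
            new DataContractSerializer(typeof(ServerDocument)).WriteObject(stream, source);
            stream.Position = 0;
            return (ClientDocument)new DataContractSerializer(typeof(ClientDocument)).ReadObject(stream);
        }
    }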
I have a universal service hosted on IIS7 that accepts a Message and returns a Message (with Action="*"). This service still publishes metadata for the clients.
This metadata is explicitly specified using LocationUrl property in ServiceMetadataBehavior.
We have a requirement that the metadata can change during lifetime of the service, so in essence metadata has a lifetime.
I tried adding IWsdlExportExtension to the service endpoint behavior, but the ExportEndpoint method only gets called once (when the service is loaded the first time). Is there a way for me to invalidate the loaded metadata so that any time there is a call for the WSDL using HttpGet, the behavior gets called?
What you are asking for (changing the published service definition at runtime) is not possible - you need to remove the requirement which specifies that the metadata can change over time.
Once you've published a service, the only reason the service specification should change is because the service has been upgraded.
You should look closer at the business requirement which is making this technical requirement necessary, and try to find another way to satisfy it (perhaps post in programmers.stackexchange). Perhaps you can have multiple services available, and switch between the services over time - but this is a bit of a stab in the dark without knowing the business requirement.
No, there is no way. Moreover, if you need this, you are looking at a fully custom solution, because it is outside the scope of web services. Changing metadata means changing the service itself, i.e. its internal logic, which always results in restarting the hosting process and publishing new metadata.
Is sharing a project containing the WCF interface and data contracts, and using these via ChannelFactory to consume the service, against SOA principles?
My architect is advising that generating a proxy using Add Service Reference is preferable.
I guess that depends on some things: your infrastructure, security policies, governance, etc.
We design our WSDLs (service and message contracts) and XML Schemas (data contracts) and then use svcutil.exe* to generate a proxy. At that point, we have code we can either use to consume or stand up a service. Of course, I am just talking about the code, the output.config will be modified with proper behaviors, bindings and endpoints as those are decided.
Once the service is stood up, it's fronted by an XML gateway, at which point we can begin testing the services using 'Add Service Reference...'. If you're just looking to save some time and hand someone else your pre-generated proxy, or your WSDLs aren't exposed (because they're behind an XML gateway that does not echo them), then what you're doing seems fine.
Otherwise, I'd expect consumers to be able to 'Add Service Reference...' and generate their own clients.
*Java-based applications use something else (WSDL2Java/ClientGen/built-in IDE tool).
Sharing pre-packaged service interfaces along with data contracts isn't against SOA principles as long as consuming services are not expected to use them. This is exactly what enables potential clients to speed up development against an existing third-party service, or to begin development against one which is yet to be built. Providing interfaces/data contracts in code form is less ambiguous than describing these things via documentation only (of course, they may not be useful if the client is using a different programming language).
However, if some sort of pre-packaged implementation of the service interface is provided in the shared package, and this implementation is required in order to use the service successfully, then this would be against SOA principles unless an implementation were written for all types of clients. Being pragmatic, though, this can be a good idea, as the clients can be more loosely coupled with respect to things such as transport choice, service contract changes and service versioning.
I would recommend using the ChannelFactory (from a dotnet client of course) whether consuming the services via a shared pre-packaged interfaces/datacontracts project or dll, or generating your own proxy (via 'Add Service Reference' or 'svcutil.exe'). This will allow you to code against the service interface and therefore your client will be much more friendly to using concepts such as dependency injection for stubbing, testing, etc.
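For reference, consuming a shared contract via ChannelFactory looks roughly like this; the IDocumentService contract and the address are hypothetical:

    using System.ServiceModel;

    [ServiceContract]
    public interface IDocumentService
    {
        [OperationContract]
        string GetDocument(int id);
    }

    public static class Client
    {
        public static string FetchDocument(int id)
        {
            // The interface comes from the shared contracts assembly, so
            // the client codes against an abstraction it can also stub
            // out for testing or dependency injection.
            var factory = new ChannelFactory<IDocumentService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://localhost/DocumentService.svc"));
            IDocumentService channel = factory.CreateChannel();
            try
            {
                return channel.GetDocument(id);
            }
            finally
            {
                ((IClientChannel)channel).Close();
                factory.Close();
            }
        }
    }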
Both methods of generating a proxy are valid; it depends on how much control you wish to have over the proxy, and whether you own both sides of the code. A third option also exists: you can hand-craft your own proxy. Let me explain further:
In SOA we pass messages; this is a different paradigm from passing pointers to objects on a heap/stack, which is the norm in the OO world.
Thus in SOA, the contract (what you can do) and the message (the state to act upon) are important and need to be shared with the consumers of the service so they can all agree on the contract, or "rules of engagement". Here we have the most basic form of SOA.
Enter WS-*, a set of specifications for adding more functionality to our service call (distributed transactions, security, etc.). But if we use these, we all need to agree on the rules and the flavor of interaction we intend to use, so the service and its clients need to agree exactly on how this is to occur, and so it too needs to be shared.
The combination of the contract definitions and WS-* specifications is called a WSDL, and this is typically what gets shared between clients and services. This is in line with the SOA tenets that we share schema and contract, not class, and that compatibility is based on policy (WS-*).
So if you use ChannelFactory, you generate the proxy on the fly, based on the interface definition you have and the config you have set up; if you use Add Service Reference, you let the IDE generate a proxy class based on the WSDL of the service as it exists at that time.
If you hand-craft the proxy, you have full control over how this happens, and you can jump into the interception chain and do things on the client side to manipulate the call.
Depends on what you want to do.
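As a sketch of the hand-crafted option (contract and names invented), deriving from ClientBase<T> gives you a natural seam for client-side interception:

    using System.ServiceModel;

    [ServiceContract]
    public interface ISearchService
    {
        [OperationContract]
        string Search(string query);
    }

    // Hand-crafted proxy: full control over construction, and a place to
    // add logging, retries or header manipulation around each call.
    public class SearchServiceProxy : ClientBase<ISearchService>, ISearchService
    {
        public SearchServiceProxy(string endpointConfigurationName)
            : base(endpointConfigurationName) { }

        public string Search(string query)
        {
            // Anything here runs before/after the actual service call.
            return Channel.Search(query);
        }
    }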
The standards we have carefully considered and adopted at my company are that we distribute service contracts in two ways: as a shared assembly when delivering to teams within the company, and as a WSDL when providing them to clients and other third parties. It is a standard we discussed with Microsoft during a design/process review, and they agreed it was the correct approach.