I am planning to use gRPC to build my search API, but I am wondering how the gRPC service definition files (e.g. .proto) are synced between the server and the clients (assuming they all use different technologies).
Also, if the server changes one of the .proto files, how will the clients be notified so they can regenerate their stubs in accordance with those changes?
To summarize: how do you share the definitions (.proto) with clients, and how are clients notified if any changes to those files have occurred?
Simple: they aren't. All sync here is manual and usually requires a rebuild and redeploy after you've become aware of a change and have updated your .proto files.
Without updating, the fields and methods that you know about should at least keep working. You just won't have the new bits.
Note also: while you can extend schemas by adding new fields, services, and methods, if you change the meaning of a field, the field type, or the message types on a service, expect things to go very badly wrong.
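To make those compatibility rules concrete, here is a minimal .proto sketch (the service and field names are hypothetical) of a backward-compatible change: adding a field and a method is safe because old clients simply ignore what they don't know about, while reusing or retyping an existing field number is exactly the kind of change that breaks peers.

syntax = "proto3";

service SearchService {
  // Existing method: its request/response types must keep their meaning.
  rpc Search (SearchRequest) returns (SearchResponse);
  // New method: old clients are unaffected, they just never call it.
  rpc Suggest (SearchRequest) returns (SearchResponse);
}

message SearchRequest {
  string query = 1;     // existing field: never reuse or retype tag 1
  int32 page_size = 2;  // new field: old clients ignore it, old servers see it as unset
}

message SearchResponse {
  repeated string results = 1;
}

A common way to reduce the manual copying is to keep the .proto files in a shared repository (or publish them as a versioned package) that both server and client builds pull from, but the regeneration of stubs still has to be triggered by each team.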
I'm new to web development and I'm attempting to understand REST. The tutorial I'm watching mentions the difference between "procedures" and "state transformation", stating that REST is based on the notion of "state transformation", but it does not delineate the difference between the two.
This has left me wondering: what is the difference between the two? Why can't an operation which transforms the state of a resource also be considered a procedure? After all, 'procedure' sounds like a generic enough term that it would also encompass an operation that transforms the state of a resource.
So, what is the difference between performing a procedure on a resource, and performing a state transformation? Or is it merely a matter of semantics?
I have also tried searching for the answer but can't seem to find anything that will shed light on this.
TL;DR
RPC focuses on sending a payload containing method names and arguments in a predefined format. Clients couple tightly to servers through a shared interface (skeleton classes, WSDL, or other interface definition languages (IDLs)).
REST focuses on decoupling clients from servers and on introducing indirections, like support for multiple different media types to marshal resource state in, and the whole interaction concept summarized by HATEOAS, where hypertext controls are used to drive the application state forward through a domain application protocol / state machine on the server side. Responses usually contain semi-structured data that follows the rules of the corresponding media type definition (e.g. the HTML spec); such data usually doesn't map well onto simple CRUD applications. If you will, the state of a resource is transformed into a representation format adhering to the rules of the media type definition and transferred to the remote side.
In network programming, remote procedure call (RPC)-style invocations, as often used in RMI, CORBA, SOAP, or similar frameworks, will usually send a method name that should be invoked at the server alongside parameters to feed the method with. The return value is then marshalled into a corresponding response and sent back to the caller. What a client can invoke is usually exposed via external artifacts, i.e. skeleton classes, WSDL, or other forms of contracts. So far, so simple. This is how most networking stuff works. However, the drawback here is that a client is tightly coupled to the exposed interface (skeleton classes, WSDL, external documentation), and many problems in internet computing arise due to changes over time that are not adequately depictable in those interfaces.
If you take a closer look at how the Web has worked for decades, though, change is an inherent part of it. Your browser will just show the most recent state of a resource (Web page) it has. It might have gotten it either from its cache or from a server it asked. If the version available in its cache is older than a predefined threshold, it will ignore the cached value and request a new version. If an update happened since the last version, your browser is automatically served the new version. Fielding, who was working on the HTTP 1.0 and 1.1 specs back then, analyzed how interaction on the Web takes place and generalized his findings into the REST architectural style. So, if you will, REST is just Web surfing for applications.
Unfortunately, a majority of enthusiasts and professionals have not yet understood what REST really is, and there is a lot of false information available regarding REST; even here at Stack Overflow most people don't seem to care, and posts explaining the true nature of REST are downvoted while wrong information is upvoted.
So, what does REST do differently than typical RPC-like method invocations?
First, REST relies on a certain set of uniform interfaces that are the same for every participant in that architecture. These are, for example, HTTP as the transfer protocol and a naming scheme for resources (URIs), so that everyone acts on these fixed principles. This helps to reduce interoperability issues that are just way too common in traditional network programming.
Next, it relies on a basic principle: servers teach clients what they need to know. But how does a server know what a client needs to know? Well, as Jim Webber pointed out, the designer of the application develops a state machine (or domain application protocol) a client will follow. Think of a checkout system at your favorite online shop. At one point it presents you the items in your trolley and offers you the choice to progress to the next "page", where you enter the shipping address; on further progressing through the state machine you are asked for your payment options, and so on, until at one point you have finished the checkout and are served a "Thank you" page that summarizes your order. Under the hood you just progressed through their protocol for placing orders and used application controls to progress your client further through their state machine. You were served Web forms and links that you used to fulfill your task. In essence, this is what hypertext as the engine of application state (or HATEOAS for short) is all about.
On the Web, HTML forms are used to teach a client which properties a resource supports, which ones are editable, and so on. Besides that, a form also teaches clients the actual URI to send input data to, the HTTP operation to utilize, as well as, mostly implicitly given, the media type to marshal the request into. E.g. a regular HTML form will use application/x-www-form-urlencoded as its default media type to send the data to the server. So a full HTTP request for an input of a first and last name may look like this:
POST /path/to/resource HTTP/1.1
Host: acme.org
Connection: close
Accept: */*
User-Agent: ...
Content-Type: application/x-www-form-urlencoded
Content-Length: 32
firstName=Roman&lastName=Vottner
The same data could be sent using a different representation format, if it were supported by the media type the form was issued for. Unfortunately, HTML does not support that many.
Links provided by a server should usually be annotated (or accompanied) by so-called link relation names that put the current resource in relation to the linked URI. If you will, they are the predicate in a triple of subject (current page), predicate (link relation name), and object (link target resource). Such names, of course, should be standardized or at least follow the Web linking extension mechanism. URIs themselves are opaque, meaning they don't carry meaning on their own and should therefore not be parsed or analyzed at all. A common mistake often seen in so-called "REST APIs" is that they have typed resources, i.e. a user resource or a car resource that is marshalled on the client side to a programming-language-specific object (e.g. a Java object of class User or the like), which is pretty common in traditional RPC-style programming. In a REST architecture, however, the representation format is usually semi-structured data, i.e. a mix of syntax-defining control elements and actual data. As such, a direct mapping from DB entry to model object to resource, as done by so many CRUD applications, is not possible.
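For illustration, a link relation might show up in a response header like this (the URI is hypothetical; "payment" is a registered link relation name):

Link: <https://acme.org/orders/42/payment>; rel="payment"

A client that understands the "payment" relation can follow that link to the right resource without parsing or interpreting the URI itself.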
Why is this all done in the first place?
If you compare this with traditional network programming: a client is usually only able to work with that one server, and if something at that server changes, clients may be affected and thus stop working. There is an apparent tight coupling between the two. The REST architecture introduces a couple of indirections, i.e. the usage of link relations instead of attempting to analyze meaningful URIs, as well as the usage of a multitude of possible media types instead of relying on one specified format, which help to decouple clients from servers. That is, instead of coupling to the server in regards to the messages exchanged, both client and server couple to media types. Through content-type negotiation a client simply tells the server of its capabilities, and the server should generate a response the client can process. Instead of focusing on one message format, REST has the freedom of almost infinitely many, as long as both client and server support them. The more media types a peer supports, the more likely it will be able to interact with other peers in that network.
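As a minimal sketch of that negotiation from the client side (the URL and the media-type preferences are hypothetical), a client announces what it can process and then dispatches on whatever representation the server actually chose:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class NegotiatingClient
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, "https://acme.org/orders/42");
        // Announce supported representations, with quality preferences.
        request.Headers.TryAddWithoutValidation("Accept",
            "application/hal+json, application/json;q=0.8, text/html;q=0.5");

        var response = await client.SendAsync(request);

        // Couple to the media type, not to the URI or a fixed message layout.
        switch (response.Content.Headers.ContentType?.MediaType)
        {
            case "application/hal+json":
                // process hypermedia controls ...
                break;
            case "application/json":
                // fall back to plain JSON processing ...
                break;
            default:
                // representation not understood; a well-behaved server would
                // have answered 406 Not Acceptable instead of sending this
                break;
        }
    }
}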
All the points I've mentioned above lead to a strict decoupling of client and server, which allows the latter to evolve freely without having to fear that introduced changes will break clients, as neither the transport protocol nor the naming scheme has changed and the introduced changes are still within the scope of the media-type definition. So, well-behaved peers in that network will be able to pick up changes on the fly automatically. This is especially handy if you develop an application that should withstand the sands of time and still serve clients in years to come.
If you don't need such properties, there is nothing wrong with not being "RESTful" at all; just don't call such services/APIs REST then. Also, developing REST is for sure more overhead compared to typical RPC-style interactions.
I'm about to begin writing a suite of WCF services for a variety of business applications. This SOA will be very immature to begin with and eventually evolve into a strong middle-ware layer.
Unfortunately I do not have the luxury of writing a full set of services and then refactoring applications to use them; it will be an iterative process done over time. The question I have is around evolving (changing, adding, removing properties) business objects.
For example: say you have an SOA exposing a service that returns obj1, and that service is being consumed by app1, app2, and app3. Imagine that object is changed for app1; I don't want to have to update app2 and app3 for changes made for app1. If the change is an added property it will work fine, it will simply not be mapped, but what happens when you remove a property? Or change a property from a string to an int? How do you manage the change?
Thanks in advance for your help!
PS: I did do a little picture but apparently I need a reputation of 10 so you will have to use your imagination...
The goal is to limit the changes your clients are forced to make immediately. They may eventually have to make some changes, but hopefully only under unavoidable circumstances, such as when they are multiple versions behind and you are phasing the old version out altogether.
Non-breaking changes can be:
Adding optional properties
Adding new operations
Breaking changes include:
Adding required properties
Removing properties
Changing data types of properties
Changing name of properties
Removing operations
Renaming operations
Changing the order of the properties if explicitly specified
Changing bindings
Changing service namespace
Changing the meaning of the operation. What I mean by this, for example, is if the operation always returned all records but it was changed to only return certain records. This would break the client's expected response.
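To make the two lists concrete, here is a minimal sketch (the contract and property names are hypothetical) of a non-breaking, additive change to a WCF data contract:

using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    // Existing property: renaming it, retyping it, or making a new property
    // required would be a breaking change for deployed clients.
    [DataMember(IsRequired = true)]
    public string Name { get; set; }

    // Added in a later version: optional, so messages from old clients
    // (which simply omit it) still deserialize without errors.
    [DataMember(IsRequired = false)]
    public string Email { get; set; }
}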
Options:
Add a new operation to handle the updated properties and logic. Modify the code behind the original operation to set the new properties, and refactor the service logic if you can. Just remember not to change the meaning of the operation.
If you want to remove an operation that you no longer wish to support, you are forcing the client to change at some point. You could add documentation in the WSDL to let clients know that it is being deprecated. If you are letting the client use your contract DLL you could use the [Obsolete] attribute (it is not generated in the final WSDL, which is why you can't just use it for everyone); see the sketch after these options.
If it is a big change altogether, a new version of the service and/or interface and endpoint can be created easily, i.e. v2, v3, etc. Then you can have the clients upgrade to the new version when the time is right.
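A minimal sketch of the first two options combined (the names are hypothetical), assuming clients consume your contract assembly directly:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IReportService
{
    // Original operation: keep it working, and keep its meaning unchanged.
    [OperationContract]
    [Obsolete("Use GetRecordsFiltered instead; this operation will be removed in v3.")]
    string[] GetRecords();

    // New operation added alongside it, so existing clients are unaffected.
    [OperationContract]
    string[] GetRecordsFiltered(string filter);
}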
Here is also a good flowchart from “Apress - Pro WCF4: Practical Microsoft SOA Implementation” that may help.
I am looking to develop a .NET WCF API. We may need to update the APIs frequently.
How do I manage the deployment of multiple versions of the API?
Versioning your services is a huge topic with many considerations and guidelines.
For a start, there are different classes of changes you can make; fully-breaking, semi-breaking, and non-breaking.
Non-breaking changes (no change needed to existing clients) include:
changing the internal implementation of the service while keeping the exposed contract unchanged
changing the contract types in a way which does not break clients, for example by adding fields to your operation return types (most serializers will raise an event rather than throw an exception when encountering an unexpected field on deserialization; see the sketch after these lists)
polymorphically exposing new types (using ServiceKnownType attribute)
changing the instance management settings of the service (per-call to singleton, sessionless to sessionful etc, although sometimes this will require configuration or even code changes)
Semi-breaking changes (usually can be configured on the client) include:
changing the location of a service
changing the transport type a service is exposed across (although changing from a bi-directional to a uni-directional transport - e.g. http to msmq - can be a fully-breaking change)
changing the availability of the service (through use of service windows etc)
Fully-breaking changes (need new version of the client) include:
changing service operation signatures
changing exposed types in a breaking manner (removing fields, etc)
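As a sketch of the tolerant deserialization behavior mentioned under non-breaking changes (the contract name is hypothetical), a WCF data contract can implement IExtensibleDataObject so that fields added by a newer version survive a round trip through an older peer:

using System.Runtime.Serialization;

[DataContract]
public class Order : IExtensibleDataObject
{
    [DataMember]
    public int Id { get; set; }

    // Fields unknown to this version are captured here on deserialization
    // and written back out on serialization, instead of being silently lost.
    public ExtensionDataObject ExtensionData { get; set; }
}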
When you are going to make a semi- or fully-breaking change, you should evaluate the best way of doing this. Do you force all your clients to upgrade to use the new version, or do you co-host both versions of the service at different endpoints? If you choose the latter, how will you control and manage the propagation of the different versioning dependencies this may introduce?
Taken to an extreme, you could look into dynamic endpoint resolution, whereby the client resolves the suitable endpoint to call at runtime using some kind of resolver service.
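Co-hosting, the simpler of the two approaches, might look like this minimal sketch (the names, namespaces, and addresses are hypothetical):

using System;
using System.ServiceModel;

[ServiceContract(Namespace = "http://example.org/orders/v1")]
public interface IOrderServiceV1
{
    [OperationContract] string GetOrder(int id);
}

[ServiceContract(Namespace = "http://example.org/orders/v2")]
public interface IOrderServiceV2
{
    [OperationContract] string GetOrder(int id);
    [OperationContract] void CancelOrder(int id);
}

// One implementation satisfies both contract versions.
public class OrderService : IOrderServiceV1, IOrderServiceV2
{
    public string GetOrder(int id) { return "order " + id; }
    public void CancelOrder(int id) { }
}

class Host
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderService),
            new Uri("http://localhost:8080/orders"));

        // Old clients keep calling the /v1 endpoint; new clients use /v2.
        host.AddServiceEndpoint(typeof(IOrderServiceV1), new BasicHttpBinding(), "v1");
        host.AddServiceEndpoint(typeof(IOrderServiceV2), new BasicHttpBinding(), "v2");

        host.Open();
        Console.WriteLine("Both versions are up. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}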
There's good reading about this here:
http://msdn.microsoft.com/en-us/library/ms731060.aspx
If I have a web service and a client consuming this web service, and then I change the service location, or I add another parameter, what is the usual way to change the client?
Do you necessarily need to update the client? Was UDDI helping in this kind of situation?
You should definitely read Service Versioning - it has the information you need.
But the answer to your question is: maybe.
There are two types of changes: breaking and non-breaking. Unfortunately, sometimes it's not obvious what is a breaking or non-breaking change since it could depend on what the client is doing (and you may not have knowledge of how your service is being used).
In terms of changing the service location this is usually a breaking change. However, as you mention, if the client is using UDDI then they should be able to retrieve the new endpoint location and that change would not be a breaking change.
If you add another parameter then that might be a breaking change (or it might not). If the parameter is optional and the client is using lax versioning (e.g. WCF, .asmx) then the change should not be a breaking one. But it might be that the client is expecting a very specific format or they are doing some schema validation etc. and the optional parameter might cause a failure.
It depends on the nature of the change you apply to the service definition. If you add something optional that only new clients can consume but old clients can omit, you have introduced a backward-compatible change, so the clients shouldn't need to be updated unless they decide to use this new feature. Any change that affects the way existing clients use the service will require a client update, as it represents a breaking change.
In the case of WCF, the latest version, 4.0, introduces an implementation of the WS-Discovery protocol, which can help clients find the service URL and the right version to use. Using this approach you can, for instance, deploy a new version at a different URL, and client applications can discover it automatically.
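As a minimal sketch of that discovery step (the contract name is hypothetical), a WCF 4.0 client can probe the network for a matching endpoint instead of hard-coding its address:

using System;
using System.ServiceModel.Discovery;

class DiscoveringClient
{
    static void Main()
    {
        // Probe for services implementing the contract via UDP multicast.
        var discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());
        var response = discoveryClient.Find(new FindCriteria(typeof(IOrderService)));

        if (response.Endpoints.Count > 0)
        {
            // Use the discovered address rather than a configured one.
            var address = response.Endpoints[0].Address;
            Console.WriteLine("Found service at: " + address.Uri);
        }
    }
}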
Regards
Pablo.
Hey, without fully understanding your problem, from what I can get from your question it sounds like you need to update your web reference on the client.
If you have updated your references, not changed the location:
Load up your client solution, then find your references - not DLL references but Web/Service References - then right-click and select "update web references".
If you have changed the location, you can change the endpoint by going to properties, but I would just delete the existing reference and create a new one using the new location.
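If the client is a WCF client, the new location can also be targeted in code; a minimal sketch, assuming a hypothetical contract and the new service address:

using System;
using System.ServiceModel;

// Hypothetical service contract, matching whatever the service exposes.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetStatus(int orderId);
}

class Example
{
    static void Main()
    {
        // Build a channel against the service's new location directly,
        // instead of relying on the address baked into the old reference.
        var factory = new ChannelFactory<IOrderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://newhost/OrderService.svc"));

        IOrderService proxy = factory.CreateChannel();
        Console.WriteLine(proxy.GetStatus(42));
        factory.Close();
    }
}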
Hope it helps.
For more info check out http://msdn.microsoft.com/en-us/library/bb628652.aspx
Desktop clients will be pushing data using WCF to a central server.
When the schema changes, say 100 computers still have the old version of the desktop client while the rest are using the latest build.
What do I have to do on the server end to handle both versions?
Do I create 2 endpoints in WCF or a single smart endpoint that will figure out the version and act accordingly?
Note: I will be passing the version info from the client (if required, that is).
You have a choice:
Firstly, you should be versioning your service contracts anyway, via their namespaces; e.g. http://idunno.org/2008/10/numpty would change to http://idunno.org/2008/11/numpty if the service operations have breaking changes.
Ditto with data contracts; however, if all you are doing to the data contract is additive, then you can mark the new fields as optional:
[DataMember(IsRequired = false)]
and old clients will work. This should indicate to you that the parameters into a service and the parameters out should also be data contracts; it gives you that flexibility.
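Putting both points together, a minimal sketch (the service and type names are hypothetical, reusing the namespace from the example above):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class NumptyRequest
{
    [DataMember(IsRequired = true)]
    public int Id { get; set; }

    // Added later: optional, so old clients that omit it keep working.
    [DataMember(IsRequired = false)]
    public string Comment { get; set; }
}

[ServiceContract(Namespace = "http://idunno.org/2008/11/numpty")]
public interface INumptyService
{
    // Parameters in and out are data contracts, giving room to evolve them.
    [OperationContract]
    string DoWork(NumptyRequest request);
}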
MSDN has more