Semantic versioning of REST APIs?

I've evaluated a number of versioning schemes for REST APIs (header, URL, …). So far, the most reliable approach seems to be the URL option: it works with proxies and does not rely on obscure schemes such as dates for versioning.
Now, when I look around, everybody who uses the URL-based approach seems to use versions such as v1, v2, and so on. Nobody uses minor versions, let alone a scheme such as semantic versioning.
This raises some questions:
When do you increase the version number of a REST API (surely you update it more than once every five years)?
If you just have a bug fix, you probably do not increase the version number, but then how do you tell the two versions apart?
If you use a very fine-grained approach, you end up with lots of versions you need to host in parallel. How do you handle that?
In other words: how does a company such as GitHub, for example, manage to be at only v3 today (2015), when they have been in business for seven years already? Does that mean they have actually only changed their API twice? I can hardly believe that.
Any hints?

The major version number is all you need for a web service. Your consumers are only concerned about backward-incompatible changes, and (if you're following semantic versioning correctly) those will only be introduced when a new major version is released.
All other changes (including new features, bugfixes, patches etc.) should be 'safe' for your consumers. Those new features don't have to be used by your consumers, and you probably don't want to continue to run that unpatched version that contains bug X or Y any longer than necessary.
Using the complete version number in your URLs (or whatever method you use for API versioning) would actually mean that your consumers have to update the URL of your API every time you make an update or bugfix to your API, and that they would keep using an unpatched version until they do so.
This doesn't mean that you can't use semantic versioning internally, of course! Just use the first part (major version) as the version number for your API. It just doesn't make sense to include the full version number in your URL / header / other versioning system.
So, to answer your question: you update your API every time you make a new release, but you only release a new API version when you have a new major version. This way you only have to host a couple of different versions (and you can of course deprecate old versions over time).
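To make this concrete, here is a minimal sketch (the class, paths, and handler names are mine, not anything the answer prescribes) of routing on the major version only while the full semantic version stays internal:

```java
// Minimal sketch: only the major version appears in the URL, the full
// semantic version is an internal detail consumers never see.
import java.util.Map;
import java.util.function.Function;

public class MajorVersionRouter {
    // Full internal version; consumers never see the minor/patch parts.
    static final String INTERNAL_VERSION = "2.3.1";

    // One handler per *major* version only.
    static final Map<String, Function<String, String>> HANDLERS = Map.of(
            "v1", path -> "v1 handler for " + path,
            "v2", path -> "v2 handler for " + path   // all v2.x.y releases served here
    );

    static String route(String requestPath) {
        // e.g. "/api/v2/foo/bar" -> ["", "api", "v2", "foo/bar"] (no error handling, sketch only)
        String[] parts = requestPath.split("/", 4);
        Function<String, String> handler = HANDLERS.get(parts[2]);
        if (handler == null) {
            return "404: unknown API version " + parts[2];
        }
        return handler.apply("/" + parts[3]);
    }

    public static void main(String[] args) {
        System.out.println(route("/api/v1/foo/bar"));
        System.out.println(route("/api/v2/foo/bar"));
    }
}
```

A bugfix release (2.3.1 → 2.3.2) changes nothing in this table; only a new major version adds an entry.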

Multi-version API architecture

I have designed a multi-version API architecture. Please give me feedback and suggestions about what is good and what is bad, or whether there is a better way to achieve this.
A version server (MVC4) routes requests to the correct API interface (Web API).
The version is sent to the server in a header and defaults to 1.0 if no header is found.
This is the first time I have designed a multi-version API architecture, and Google doesn't have much information on this.
Any suggestions, critique, or feedback are welcome.
Cheers,
It depends on how many versions you will maintain and what the NFRs (non-functional requirements) are for the older versions.
Your current setup is fine for only a handful of versions. Pros and cons:
+ The same NFRs are possible for all versions
+ Quick to achieve at first
- Changes to common resources (like the DB) have an impact on all supported versions, so they all have to be re-released, tested, ... which can become quite expensive
Another option is to build a chain of adapters, from version 2.2 to 1.0 and from 3.7 to 2.2 (sketched below):
+ Easier to maintain a large set of versions
+ Changes only require one new or updated adapter in the chain
- Harder to set up at first
- Performance drops with each adapter that is being used
For the chain of adapters there are several possible setups: have them all in one process, or run a separate service for each. Again, both have pros and cons.
As usual, it all depends on your situation.
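For illustration, a rough sketch of the adapter-chain option; the interfaces, version numbers, and method names below are invented for the example, not taken from the question's design:

```java
// Adapter-chain sketch: only the newest version has a real implementation;
// each older version is an adapter that translates calls to the next newer one.
interface ApiV1 { String getUser(int id); }                        // oldest contract
interface ApiV2 { String getUser(int id, boolean includeEmail); }  // current contract

// The only "real" implementation lives behind the newest contract.
class CurrentService implements ApiV2 {
    public String getUser(int id, boolean includeEmail) {
        return "user-" + id + (includeEmail ? " <user@example.com>" : "");
    }
}

// Adapter from the old contract to the new one; a change in the DB or in the
// current service only requires touching the chain, not re-releasing V1 itself.
class V1ToV2Adapter implements ApiV1 {
    private final ApiV2 next;
    V1ToV2Adapter(ApiV2 next) { this.next = next; }
    public String getUser(int id) {
        return next.getUser(id, false);   // V1 never exposed the e-mail address
    }
}

public class AdapterChainDemo {
    public static void main(String[] args) {
        ApiV2 v2 = new CurrentService();
        ApiV1 v1 = new V1ToV2Adapter(v2); // requests with header version 1.0 go here
        System.out.println(v1.getUser(42));
        System.out.println(v2.getUser(42, true));
    }
}
```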

Choosing between dnx451 and dnxcore50 for an Azure Web App in terms of functionality, performance, etc.

I am creating a new project that will run in an Azure Web App on the new ASP.NET 5. We are not planning to run it on Linux or anything like that, at least for now. So the question is: should I try to keep both frameworks if possible, just in case, or should I prefer one of them? For example, there are far fewer dependencies that I can use with dnxcore50, which is not so nice. So the main question is: are there any benefits of using dnxcore50 over dnx451 when running in an Azure Web App, such as performance, stability, etc.?
I have to start by saying that I'm still a beginner in ASP.NET 5 (like most others), which is why I didn't post my answer earlier; you should also ignore my reputation, because it comes from other subjects that I know better.
I think that everybody who switches to ASP.NET 5 asks the same question: does it make sense to keep both frameworks in a project? Below are my personal thoughts on the subject.
My personal choice, and my short recommendation to you: keep both frameworks until you find some really important reason to drop one of them.
ASP.NET 5 is still not final. The strategy is not fully fixed and could change again on short notice. Just some examples: previous beta versions supported "Helios" as an option for hosting ASP.NET 5 applications on IIS, but that option was dropped later (see the statement). Even the name dnxcore50 has now been renamed to dotnet5.4, at least in all internal Microsoft components (see the announcement). One can assume that other things could change in the future as well. Thus I think that putting all your eggs in one basket would be too dangerous right now: keeping both frameworks reduces the risk.
The next thing I found was the following: dnxcore50 (dotnet5.4, or CoreFX, the .NET Core foundational libraries) doesn't support many features of the full .NET Framework. One important example for me was the missing XSD schema validation (see here and here); I use XML only in combination with XSD schema validation, and prefer JSON in most other cases. Keeping both frameworks in your project helps you locate the parts of your code that are not yet implemented in CoreFX, which in turn helps you move that code into a separate component or change the implementation.
About performance: one should distinguish the potential of both frameworks from their current implementations. In general, CoreFX has been redesigned and decomposed; many parts of the one big mscorlib have been separated out or removed (remoting, AppDomains and so on). This means the performance of CoreFX should eventually be better: theoretically, the factored API can provide better performance, and it is easier to improve one part of CoreFX and publish a new version of just that module with the improvement. Having many modules instead of one monolith gives a new way to improve performance and fix bugs. On the other hand, updating dependencies to new versions can introduce new compatibility problems, which increases risk and can decrease stability. By keeping both frameworks we can test whether a new problem exists in the alternative framework too, which lets us tell whether the latest dependency updates, rather than the latest changes to our own code, are the origin of a new problem.
I could continue with the pros and cons of each framework, but nobody likes to read a long text, and all my arguments lead me to the same practical decision: keep both frameworks in my projects by default, until I find a real requirement to drop one of them.
No major advantages really so far.
This might change in the future, which is why I'm planning to target both (CoreCLR and .NET 4.6). A lot of investment is going into CoreCLR, but also into Docker and Service Fabric.
Just my 2 cents.

What is the benefit of versioning a REST API by date as Twilio does?

Basically, I think it's a good idea to version your REST API. That's common sense. Usually you come across two approaches for doing this:
Either you have a version identifier in your URL, such as /api/v1/foo/bar,
or you use a header, such as Accept: vnd.myco+v1.
So far, so good. This is what almost all big companies do. Both approaches have their pros and cons, and lots of this stuff is discussed here.
Now I have seen an entirely different approach, at Twilio, as described here. They use a date:
At compilation time, the developer includes the timestamp of the application when the code was compiled. That timestamp goes in all the HTTP requests.
When the request comes into Twilio, they do a lookup: based on the timestamp, they identify the API that was valid when this code was created and route accordingly.
It's a very clever and interesting approach, although I think it is a bit complex. It can be confusing to understand whether the timestamp is compilation time or the timestamp when the API was released, for example.
Now while I somehow find this quite clever as well, I wonder what the real benefits of this approach are. Of course, it means that you only have to document one version of your API (the current one), but on the other hand it makes traceability of what has changed more difficult.
Does anyone know what the advantages of this approach are, so why Twilio decided to do so?
Please note that I am aware that this question sounds as if the answer(s) are primarily opinion-based, but I guess that Twilio had a good technical reason to do so. So please do not close this question as primarily opinion-based, as I hope that the answer is not.
Interesting question, +1, but from what I see they only have two versions: 2008-08-01 and 2010-04-01. So from my point of view that's just another way to spell v1 and v2, and I don't think there was a technical reason, just a preference.
This is all I could find on their decision: https://news.ycombinator.com/item?id=2857407
EDIT: make sure you read the comments below, where @kelnos and @andes mention an advantage of using such an approach to version the API.
There's another thing I can think of that makes this an interesting approach, if you are the developer of such an API:
You have 20 methods, and you need to introduce a breaking change in just one of them.
Using semver (v1, v2, v3, etc.) you need a v2 API.
All 20 of your methods now need to respond under v2, even though in reality 19 of them haven't changed at all and aren't new.
Using dates, you can keep your unchanged methods as they are, and when a request comes in, the server just picks the best match.
I don't know how this is implemented; any information on that would be very welcome.
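Since the author asks how this could be implemented, here is one speculative sketch (it does not reflect Twilio's real code; the two dates are just the public Twilio versions mentioned earlier): keep a sorted map of release dates per endpoint and serve the newest version that is not newer than the date the client sends.

```java
// Speculative sketch of date-based best-match routing, per endpoint.
import java.time.LocalDate;
import java.util.Map;
import java.util.TreeMap;

public class DateVersionPicker {
    // Release date -> handler description for one endpoint. Only endpoints that
    // actually changed gain new entries; the other 19 methods keep a single entry.
    static final TreeMap<LocalDate, String> MESSAGES_ENDPOINT = new TreeMap<>(Map.of(
            LocalDate.parse("2008-08-01"), "original /Messages handler",
            LocalDate.parse("2010-04-01"), "breaking-change /Messages handler"
    ));

    static String pick(TreeMap<LocalDate, String> versions, LocalDate clientDate) {
        // Newest version whose release date is <= the client's date.
        Map.Entry<LocalDate, String> best = versions.floorEntry(clientDate);
        return best != null ? best.getValue() : "400: client predates the API";
    }

    public static void main(String[] args) {
        // A client compiled in 2009 keeps getting the behaviour it was built against.
        System.out.println(pick(MESSAGES_ENDPOINT, LocalDate.parse("2009-06-15")));
        System.out.println(pick(MESSAGES_ENDPOINT, LocalDate.parse("2015-01-01")));
    }
}
```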
I used to work for a company that used date versioning (as in: each API call had a parameter with the desired API date, e.g. ?v=20200630) and loved it.
It lets you be less strict than with traditional versioning (v1, v2, v3), as client developers don't even need to care about the version number and can just use the current build time. Everything else is pretty much the same as with traditional versioning, plus the small benefit of seeing date checks in the server code: you can easily see how old this or that code path is.
I believe the situation would have been different if we had had to support a number of external clients and, for example, fix a bug in ?v=20200630: there is no elegant way to specify something like ?v=20200630.1. As you can see from Twilio's experience, they were simply changing what API version 2010-04-01 was, so clients couldn't be sure exactly which version they were seeing.
So my takeaway from this:
Date-based versioning seems easier and more flexible when you are a typical startup or a small company with a few apps (e.g. frontend, iOS, Android) and no or few third-party clients. It makes it a bit easier for client developers to "just write code", and since you control all the code, most of the time you can fix old API bugs by just releasing a new version and asking clients to switch to it.
Once you start having a real need to maintain old API versions (i.e. when you have a number of important clients who are not likely to update quickly), semver-style versioning becomes more reliable.
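A tiny illustration of the "date checks in the server code" point above; the handler and payloads are hypothetical:

```java
// The date guard itself documents how old the legacy code path is.
import java.time.LocalDate;

class AccountHandler {
    String render(LocalDate clientApiDate) {
        if (clientApiDate.isBefore(LocalDate.parse("2020-06-30"))) {
            return legacyPayload();   // obviously a pre-2020-06-30 code path
        }
        return currentPayload();
    }
    String legacyPayload()  { return "{\"name\":\"Ada\"}"; }
    String currentPayload() { return "{\"name\":\"Ada\",\"id\":7}"; }
}
```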

API level: the lower, the better?

Is it true that:
If the application does not require new features from newer APIs (i.e. higher API levels), it is better to target lower API levels?
The major consideration is that lower API levels mean better compatibility, and thus a larger market.
Is there anything else I have to keep in mind when I make such decisions?
I came up with this question when coming across some questions about Android API levels, but I think it is a general question, not only about Android.
In general, yes, but only if the older API is still supported by the newer implementations, of course. (For example, Lucene Java changes its API in incompatible ways on major updates, so you do not have this option there.)
There could also be cases where the host platform looks at which API version you require and then behaves differently, in a way that you may not want (I cannot think of a good real-world example right now).
For Android, at the moment, I'd say, yes, declare the lowest API level you need.
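For the Android case, a minimal sketch of the usual companion technique (assuming a low minSdkVersion is declared in the manifest; the class and method names here are invented): guard calls to newer APIs at runtime so the low declared level still holds.

```java
// Android-flavoured sketch: code compiled against a low minSdkVersion can still
// opt into newer features on devices that have them.
import android.os.Build;

public final class FeatureGate {
    // True when the device runs API level 21 (Lollipop) or newer, so the newer
    // code path is taken only where it is actually available.
    public static boolean supportsLollipopFeatures() {
        return Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP;
    }
}
```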

Developing an API: balance between new features and backward compatibility

I'm currently working on the "API for developers" feature of our product.
The first version has been released and has a small number of users at the moment. While developing its second version, some parts have been reworked and some removed to make the API more elegant and clear.
But deploying the second version may be painful for users of the old version.
Our marketing department is planning to enhance our API product a lot and add more features to it.
How should I build the system so that:
1) we are not constrained by the "old version" when adding interesting new features;
2) current API users are not dissatisfied by having to rework their systems to comply with the changed API?
Or should API products be tested in a sandbox for quite a long period of time before the public release, so that there are no significant modifications to the specification afterwards?
When you have to make changes to the API which already has some users, probably the best route is to deprecate the old API calls and encourage use of the new calls.
Removing the capability of the old API calls would probably break the functionality of old code, so that is probably going to cause some developers using your "old" API to become somewhat dissatisfied.
If your language provides a way to indicate that certain functionality has been deprecated, it can serve as an indication for users to stop using old API calls and transition to the new calls instead. In Java, the @deprecated Javadoc tag can note in the documentation that a feature has been deprecated, and from Java 5 onwards the @Deprecated annotation can be used to raise compile-time warnings on calls to deprecated API features.
Also, it would probably be a good idea to provide some tips and hints on migrating from the old API to the new API to encourage people to use the new way of interacting with the API. Having examples and sample code on what to do and what not to do, the users of the API would be able to write code according to the new, preferred way.
It's going to be difficult to change a public API, but with some care taken in the transition from old to new, I believe that the amount of pain inflicted on the users of the API can be mitigated to a certain extent.
Here's an article on How and When to Deprecate APIs from Sun, which might provide some more information on when it would be appropriate to deprecate parts of APIs.
Also, thank you to David Schmitt, who added that the Obsolete attribute in .NET is similar to the @Deprecated annotation in Java. (Unfortunately his edit was overwritten by mine, as we were both editing this answer at the same time.)
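To illustrate the deprecation advice above, a small Java sketch (the class and method names are invented): the old call is marked @Deprecated, documented with @deprecated, and delegates to its replacement so existing callers keep working.

```java
// Deprecating an old call while pointing users at its replacement.
public class WidgetApi {

    public static class Widget {
        final String name;
        final int size;
        Widget(String name, int size) { this.name = name; this.size = size; }
    }

    /**
     * @deprecated since 2.0, use {@link #createWidget(String, int)} instead;
     *             this overload assumes a fixed size of 10.
     */
    @Deprecated
    public Widget createWidget(String name) {
        return createWidget(name, 10);   // delegate so existing callers keep working
    }

    /** Preferred replacement: lets callers choose the size explicitly. */
    public Widget createWidget(String name, int size) {
        return new Widget(name, size);
    }
}
```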
Microsoft is pretty famous for their insane backwards compatibility. One of the things they did was to keep all the old obsolete calls, and then add new ones that new programs could use to access the enhanced features that they could not work into the old API.
You did not specify which programming language you use, but both .NET and Java have mechanisms to mark certain API calls as obsolete. If backward compatibility is very important to you, you might want to take the same route.
It's a balance you will have to strike with your community:
Keep old functions for aeons, and you'll end up with the Win32 API (30,000 public functions).
Rewrite the API all the time, and you get something similar to .NET, where a new revision comes out every so often (1.0, 2.0, 3.0, 3.5, ...) and breaks existing programs or introduces new and improved ways of doing UIs, etc.
If the community is tolerant of change and open to experimenting, you will strive for a lean, current API and know that some breakage, aka bit rot, will result. If, on the other hand, the community has tons of legacy code and no resources or desire to bring it up to the latest version, you must keep backward compatibility or all of their stuff will simply not work on the new API.
Note to one of the other answers: deprecating APIs is an often-used way of indicating which functions are "on the way out", but as long as they work, many developers will use them even in the new code because those are the functions they are used to. There are very few enlightened developers that have both the awareness to actually heed "Deprecated" warnings and the time to search the code for other instances of the old API and update them.
Backward compatibility should be the default. The only reason to compromise this principle is when the API is somehow insecure, which forces users to change to more secure methods.
Ideally, applications written against your original API will continue to work with the new version.
One way to add new features while at the same time making sure that old applications continue to run is to have two versions of an API call.
For example, suppose you currently have a function Foo that takes 2 parameters (arguments) in the API but you decide the new version really should take 3 parameters. Keep Foo the way it is and add a new function Foo2 which takes 3 parameters.
That way users can continue to code against Foo for backward compatibility or use the new Foo2 function if they require the new features.
This technique has been commonly used by Microsoft for the Windows APIs.
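A minimal sketch of the Foo/Foo2 pattern described above (the parameters and behaviour are invented; the point is only that the old signature keeps working unchanged while the new one adds a parameter):

```java
// Keep the old call frozen; add a new call for clients that need the new feature.
public class LegacyFriendlyApi {
    // Original call: 2 parameters, behaviour frozen for existing clients.
    public static int foo(int x, int y) {
        return foo2(x, y, 0);            // old behaviour == new behaviour with flags = 0
    }

    // New call: 3 parameters, used by clients that opt into the new feature.
    public static int foo2(int x, int y, int flags) {
        return (flags == 0) ? x + y : x * y;   // dummy logic, just to be runnable
    }

    public static void main(String[] args) {
        System.out.println(foo(2, 3));       // 5, same as it always was
        System.out.println(foo2(2, 3, 1));   // 6, new behaviour chosen explicitly
    }
}
```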