I'm currently working on a developer-facing API feature of our product.
The first version has been released and has a small number of users at the moment. While developing its second version, I reworked some parts and removed others to make the API more elegant and clear.
But deploying the second version could be painful for users of the old one.
Our marketing department plans to enhance the API product significantly and add more features to it.
How should I build the system so that
1) we aren't constrained by the "old version" when adding interesting new features, and
2) current API users won't be dissatisfied by having to rework their systems to comply with the changed API?
Or should an API be tested in a sandbox for a long period before its public release, so that no significant modifications to the specification are needed afterwards?
When you have to make changes to an API that already has users, the best route is probably to deprecate the old API calls and encourage use of the new ones.
Removing the old API calls outright would break existing code, so that is bound to leave some developers using your "old" API somewhat dissatisfied.
If your language provides a way to indicate that certain functionality has been deprecated, it can serve as an indication for users to stop using old API calls and transition to new ones. In Java, the @deprecated Javadoc tag can note in the documentation that a feature has been deprecated, and since Java 5 the @Deprecated annotation can be used to raise compile-time warnings on calls to deprecated API features.
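For instance, a minimal Java sketch (hypothetical names) showing both mechanisms together:

```java
public class Greeter {
    /**
     * @deprecated As of version 2.0, replaced by {@link #greet(String)}.
     */
    @Deprecated
    public String sayHello(String name) {
        // Forward to the replacement so existing callers keep working.
        return greet(name);
    }

    /** Replacement introduced in version 2.0. */
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```

Any code that still calls sayHello compiles with a deprecation warning, and the Javadoc tells the developer exactly where to migrate.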
Also, it would probably be a good idea to provide some tips and hints on migrating from the old API to the new one, to encourage people to use the new way of interacting with the API. With examples and sample code showing what to do and what not to do, users of the API will be able to write code the new, preferred way.
It's going to be difficult to change a public API, but with some care taken in the transition from old to new, I believe the amount of pain inflicted on the users of the API can be mitigated to a certain extent.
Here's an article on How and When to Deprecate APIs from Sun, which might provide some more information on when it would be appropriate to deprecate parts of APIs.
Also, thank you to David Schmitt, who added that the Obsolete attribute in .NET is similar to the @Deprecated annotation in Java. (Unfortunately his edit was overwritten by mine, as we were both editing this answer at the same time.)
Microsoft is pretty famous for its insane backwards compatibility. One of the things they did was to keep all the old, obsolete calls and add new ones that new programs could use to access enhanced features that couldn't be worked into the old API.
You did not specify which programming language you use, but both .NET and Java have mechanisms to mark certain API calls as obsolete. If backward compatibility is very important to you, you might want to take the same route.
It's a balance you will have to strike with your community:
Keep old functions for aeons and you'll end up with the Win32 API (30,000 public functions).
Rewrite the API all the time and you get something similar to .NET, where a new revision goes out every so often (1.0, 2.0, 3.0, 3.5...) and breaks existing programs or introduces new and improved ways of doing UIs, etc.
If the community is tolerant of change and open to experimenting, you will strive for a lean, current API and know that some breakage, aka bit rot, will result. If, on the other hand, the community has tons of legacy code and no resources or desire to bring it up to the latest version, you must keep backward compatibility or all of their stuff will simply not work on the new API.
Note to one of the other answers: deprecating APIs is an often-used way of indicating which functions are "on the way out", but as long as they work, many developers will use them even in the new code because those are the functions they are used to. There are very few enlightened developers that have both the awareness to actually heed "Deprecated" warnings and the time to search the code for other instances of the old API and update them.
Backward compatibility should be the default. The only reason to compromise this principle is when the API is somehow insecure, which forces users to switch to more secure methods.
Ideally, applications written against your original API will continue to work with the new version.
One way to add new features while at the same time making sure that old applications continue to run is to have two versions of an API call.
For example, suppose you currently have a function Foo that takes 2 parameters (arguments) in the API but you decide the new version really should take 3 parameters. Keep Foo the way it is and add a new function Foo2 which takes 3 parameters.
That way users can continue to code against Foo for backward compatibility or use the new Foo2 function if they require the new features.
This technique has commonly been used by Microsoft for the Windows APIs.
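Sketched in Java with hypothetical names, the idea looks like this; the old call simply delegates to the new one with a default for the added parameter:

```java
public class Api {
    /** Original two-parameter call, kept unchanged for backward compatibility. */
    public String foo(String name, int count) {
        // Delegate to the new call so the real logic lives in exactly one place.
        return foo2(name, count, false);
    }

    /** New three-parameter call for clients that need the new feature. */
    public String foo2(String name, int count, boolean verbose) {
        String result = name + " x" + count;
        return verbose ? "[verbose] " + result : result;
    }
}
```

Old clients keep calling foo unchanged, while new clients opt into foo2 for the extra behaviour.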
Related
I am trying to build an object-oriented wrapper around an API specification; this includes many structures, events, and APIs.
The API specification is revised every year, whereby a new specification is released; the updates are likely to add newer structures, events, and APIs. Updates will also include:
Updates to existing structures, events, and APIs; the API calls as such do not change, but they take various structures as parameters, and those structures eventually receive updates.
The challenges
The API specification is essentially an SDK for a lower layer; what I am trying to build is also an SDK, but it will be an object-oriented wrapper over that SDK.
The requirement is that the users want objects and methods, not C-like structures and APIs.
Frequent version changes should not have any impact on the high-level application, which should work seamlessly with any underlying API version.
Older applications should work with newer APIs.
Newer applications should work with older APIs.
The last one is tricky: what I mean is that when a newer application sees it is running against an older version of the SDK, it should somehow adapt itself to that older version of the API.
Is there any design pattern that will help me achieve this, tide over the frequent changes to internal data, and achieve both backward and forward compatibility?
OS: Windows
Dev Environment : Visual C++
Your problem is too high level to be answerable by a design pattern.
What you are asking for are architectural principles.
These you should base on your well-founded design decisions ("API is backwards compatible using versioning because...") which in turn are based on your requirements (e.g. "Older application should work on newer APIs").
Look into this (a keynote presentation about API design by Joshua Bloch):
How to Design a Good API and Why it Matters
1) If the SDK API involves manual resource allocation, all that comes to mind at the moment is:
RAII, i.e. ctor/dtor resource management: https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
2-5) Determine a function decomposition of the API you're building, one that becomes expressible in terms of each version tier of the SDK API. Some details on semi-formal function decomposition are here (towards the bottom):
http://jfeltz.com/posts/2015-08-30-cost-decreasing-software-architecture.html
You can then take the resulting function compositions and make them construct-able objects if you have to. Don't worry about the final object model until you have a working understanding of the function compositions involved. This is hard at first, but trust me, it is far more powerful than iterating through several possible object model designs.
For C++, you'll probably need to perform #define pre-processing against a scheme of versions for each upstream SDK API, unless your SDK encodes its version in a file somewhere such that you can do DLL loading instead (in which case this may be the Factory design pattern), but I suspect you already knew that.
I've evaluated a number of versioning schemes for REST APIs (header, URL, ...). So far, the most reliable approach seems to be the URL option: it works with proxies and does not rely on obscure schemes such as dates for versioning.
Now, when I look around, everybody who uses the URL-based approach seems to use versions such as v1, v2, and so on. Nobody uses minor versions, let alone a scheme such as semantic versioning.
This raises some questions:
When do you increase the version number of a REST API (for sure, you have more updates to it than just once every five years)?
If you just have a bug fix, you probably do not increase the version number, but then how do you distinguish the two versions?
If you use a very fine-grained approach, you end up with lots of versions you need to host in parallel. How do you handle that?
In other words: how does a company such as GitHub manage to be at only v3 today (2015), when they have been in business for 7 years now? Does that mean they have actually changed their API only twice? I can hardly believe that.
Any hints?
The major version number is all you need for a web service, because your consumers are only concerned about backward-incompatible changes, and (if you're following semantic versioning correctly) those will only be introduced when a new major version is released.
All other changes (including new features, bugfixes, patches etc.) should be 'safe' for your consumers. Those new features don't have to be used by your consumers, and you probably don't want to continue to run that unpatched version that contains bug X or Y any longer than necessary.
Using the complete version number in your URLs (or whatever method you use for API versioning) would actually mean that your consumers have to update the URL of your API every time you make an update or bug fix, and they would keep using an unpatched version until they do so.
This doesn't mean that you can't use semantic versioning internally, of course! Just use the first part (major version) as the version number for your API. It just doesn't make sense to include the full version number in your URL / header / other versioning system.
So to answer your question: you update your API every time you make a new release, but you only publish a new API version when you have a new major version. This way you only have to host a couple of different versions (and you can, of course, deprecate old versions over time).
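As a tiny illustration (hypothetical names), the full semantic version can live inside the service while only the major part is exposed in the URL:

```java
public final class ApiVersion {
    // Full version, following semantic versioning, used internally only.
    static final String INTERNAL_VERSION = "2.4.1";

    // Consumers only ever see the major version in the URL.
    static String urlPrefix() {
        String major = INTERNAL_VERSION.split("\\.")[0];
        return "/api/v" + major;
    }

    public static void main(String[] args) {
        System.out.println(urlPrefix()); // prints /api/v2
    }
}
```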
Basically, I think it's a good idea to version your REST API. That's common sense. Usually you meet two approaches to doing this:
Either you have a version identifier in your URL, such as /api/v1/foo/bar,
or you use a header, such as Accept: vnd.myco+v1.
So far, so good. This is what almost all big companies do. Both approaches have their pros and cons, and lots of this stuff is discussed here.
Now I have seen an entirely different approach, at Twilio, as described here. They use a date:
At compilation time, the developer includes the timestamp of the application when the code was compiled. That timestamp goes in all the HTTP requests.
When the request comes into Twilio, they do a lookup. Based on the timestamp, they identify the API version that was valid when the code was created and route accordingly.
It's a very clever and interesting approach, although I think it is a bit complex. It can be confusing to understand whether the timestamp is compilation time or the timestamp when the API was released, for example.
Now while I somehow find this quite clever as well, I wonder what the real benefits of this approach are. Of course, it means that you only have to document one version of your API (the current one), but on the other hand it makes traceability of what has changed more difficult.
Does anyone know what the advantages of this approach are, so why Twilio decided to do so?
Please note that I am aware this question sounds as if the answer(s) are primarily opinion-based, but I guess Twilio had a good technical reason for their decision. So please do not close this question as primarily opinion-based, as I hope the answer is not.
Interesting question, +1, but from what I see they only have two versions: 2008-08-01 and 2010-04-01. So from my point of view that's just another way to spell v1 and v2, and I don't think there was a technical reason, just a preference.
This is all I could find on their decision: https://news.ycombinator.com/item?id=2857407
EDIT: make sure you read the comments below, where @kelnos and @andes mention an advantage of using such an approach to version the API.
There's another thing that makes this an interesting approach, I think: being the developer of such an API.
You have 20 methods, and you need to introduce a breaking change in one of them.
Using semver (v1, v2, v3, etc.), you need a v2 API.
All 20 of your methods now need to respond at v2, but in reality 19 of them haven't changed at all; nothing about them is new.
Using dates, you can keep your unchanged methods as they are, and when a request comes in, the server just picks the best match.
I don't know how this is implemented; any information on that would be really welcome.
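One plausible implementation (a sketch under assumptions of my own, not Twilio's actual mechanism) keeps release dates in a sorted map and routes each request to the newest version released on or before the requested date:

```java
import java.time.LocalDate;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Function;

public class DateVersionRouter {
    // Each API release keyed by its release date (dates from the question).
    private final TreeMap<LocalDate, Function<String, String>> releases = new TreeMap<>();

    public DateVersionRouter() {
        releases.put(LocalDate.of(2008, 8, 1), req -> "2008-08-01 API handled: " + req);
        releases.put(LocalDate.of(2010, 4, 1), req -> "2010-04-01 API handled: " + req);
    }

    /** Route to the newest release that existed on the requested date. */
    public String route(LocalDate requestedVersion, String request) {
        Map.Entry<LocalDate, Function<String, String>> match = releases.floorEntry(requestedVersion);
        if (match == null) {
            throw new IllegalArgumentException("Date precedes the first API release");
        }
        return match.getValue().apply(request);
    }

    public static void main(String[] args) {
        DateVersionRouter router = new DateVersionRouter();
        // A client stamped 2009-06-15 gets the 2008-08-01 behaviour.
        System.out.println(router.route(LocalDate.of(2009, 6, 15), "GET /calls"));
    }
}
```

For the per-method case described above, you would keep one such map per method; unchanged methods simply never gain a new entry and keep serving their original implementation.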
I used to work for a company that used date versioning (each API call had a parameter with the desired API date, e.g. ?v=20200630) and loved it.
It lets you be less strict than with traditional versioning (v1, v2, v3), as client developers don't need to care about the version number at all and can just use the current build time. Everything else is pretty much the same as with traditional versioning, plus the small benefit of seeing date checks in the server code: you can easily tell how old this or that code path is.
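Such a date check in the server code might look like this (a hypothetical sketch):

```java
public class FeatureGate {
    /** Clients built before this date still get the legacy payload shape. */
    private static final int RENAMED_FIELD_SINCE = 20200630;

    public static String buildResponse(int clientApiDate) {
        // The constant doubles as documentation of when the old path became legacy.
        if (clientApiDate >= RENAMED_FIELD_SINCE) {
            return "{\"displayName\": \"Ada\"}";
        }
        return "{\"name\": \"Ada\"}"; // pre-2020-06-30 shape
    }

    public static void main(String[] args) {
        System.out.println(buildResponse(20200501)); // old shape
        System.out.println(buildResponse(20210101)); // new shape
    }
}
```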
I believe the situation would have been different if we had had to support a number of external clients and, for example, fix a bug in ?v=20200630: there is no elegant way to specify something like ?v=20200630.1. As you can see from Twilio's experience, they simply changed what API version 2010-04-01 meant, so a client couldn't be sure exactly which version it was seeing.
So my outcome from this:
Date-based versioning seems easier and more flexible when you are a typical startup or a small company with a few apps (e.g. frontend, iOS, Android) and no or few third-party clients. It makes it a bit easier for client developers to "just write code", and since you control all the code, most of the time you can fix old API bugs by simply releasing a new version and asking clients to switch to it.
Once you have a real need to maintain old API versions (i.e. when you have a number of important clients who are unlikely to update quickly), semver-style versioning becomes more reliable.
Suppose you are developing a platform which has a web-based interface for its users and APIs for third-party developers. Something similar to Salesforce (or even Facebook).
Salesforce and Facebook both have a normal web-based interface for their users and APIs for third-party developers.
Ideally, any API internally calls the same function that is used by the web-based interface. For example, the "Create a Project" button and the "CreateProject" API call the same createProject() function internally. So you can maintain the same version for both, as in most cases they are tightly integrated.
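In code, I picture it roughly like this (a hypothetical sketch):

```java
public class ProjectService {
    /** Shared core logic used by both the web interface and the public API. */
    public String createProject(String name) {
        // Validation, persistence, etc. would live here, in one place.
        return "created project: " + name;
    }
}

class WebController {
    private final ProjectService service = new ProjectService();

    /** Invoked by the "Create a Project" button in the web interface. */
    String onCreateProjectClicked(String name) {
        return service.createProject(name);
    }
}

class PublicApi {
    private final ProjectService service = new ProjectService();

    /** Invoked by third-party developers via the "CreateProject" endpoint. */
    String handleCreateProject(String name) {
        return service.createProject(name);
    }
}
```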
Now sometimes you add a feature which makes you increment the minor version of the web-based interface but since you are not implementing that feature in API, API version should remain as is.
How do you handle such cases? Should you maintain separate versions of your web-based interface and APIs for your platform?
It depends, because it is difficult to offer a conclusive answer to this question. But I will share some ideas and drill down into some scenarios to help as best I can.
I would suggest there should be two kinds of APIs: internal APIs and public APIs. From a caller's end they would be two physically distinct APIs/endpoints, so that security policies and a lot of other things can be handled without exposing much information, and without relying on code to enforce the distinction based on who is calling from where, as that can quite easily fail.
It is OK if both kinds of APIs consolidate down the line to some extent without involving any security risk, but a separate team of expert engineers should design this consolidation to be seamless yet safe. It's a trade-off between code reuse and everything else. This is very subjective and can turn into an endless discussion, but the software evolves very well as a result of this design process if it is agile and iterative.
The APIs should be externalizable and interoperable. If the project is very large, different teams working on separate parts of the project should interact with each other's work using internal APIs only. No hanky-panky. No direct data access. Only APIs.
This approach will help create awareness across all teams of what's being done in the project, provided the APIs are discoverable, which I personally believe is a very good thing for collaborative teamwork. It also helps reusability. Testing becomes unified and automated. Every team becomes responsible for its own work, so accountability is easy to address.
There's more stuff that can go in here but I think this is sufficient at a high level.
If allowed, I would also read this question as
"Should you have a purely service-oriented architecture or not?"
And my answer would again be: **it depends**, because it is difficult to offer a conclusive answer to this...
Do not publish core functions directly via the API; instead, create all API functions as proxy functions, and handle all changes to the core functions inside those proxies.
Change the public API version only when there is a change in the API's input/output.
This way you achieve code reusability without having to bump the public API version frequently.
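A minimal sketch of the proxy idea in Java (hypothetical names): the public function keeps a stable signature and absorbs a change to the core function.

```java
class Core {
    // Core function whose signature changed during an internal refactor:
    // it now requires a locale parameter.
    static String formatPrice(long cents, String locale) {
        return locale + ":" + (cents / 100.0);
    }
}

public class PriceApi {
    /**
     * Public proxy with a stable contract. The core's new 'locale'
     * parameter is absorbed here with a default, so the public API
     * version does not need to change.
     */
    public static String formatPrice(long cents) {
        return Core.formatPrice(cents, "en-US");
    }

    public static void main(String[] args) {
        System.out.println(formatPrice(1999)); // prints en-US:19.99
    }
}
```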
Edit:
If you are talking about the software version number, my answer is yes.
I'm finding myself using WCF more and more for projects I implement for internal use (automating company tasks, making sure all clients are on the same page, etc.). This is largely due to the 3-10 clients I am automating at once whenever I implement a solution, and (even if it was a small sample) the company is growing, which continually adds more clients to the pool and thus raises the demand for reliability and consistency. With that said, I'm recognizing how important it is to make things expandable, as pushing a release was previously getting harder the more clients depended on the service.
My latest project has the potential of being externalized. Until now I've done things the way I know works, but I'd still like to travel down the "right" path in terms of future updates. How should I set up my project files to make this as easy and seamless as possible to keep maintained, up to date, and expandable? Should I be placing version numbers into the namespace (as in Company.Interfaces.Contracts.June2011.IMyService), using pseudo-folders, ...?
I just don't feel confident in this aspect of moving forward. I'd like to know that whatever groundwork I put in place now won't place burdens on future expansion and customization. I'd also like to stick to the "development norm" as much as possible, as it's becoming more plausible that we'll hire additional programmers to help with the workload.
Does anyone with this kind of experience have any thoughts, suggestions, guidance in this field? I would really appreciate any examples, books, documentation, etc. that you can provide.
Update (06-17-2011)
To give some insight, I'm also looking for some specific questions. These include:
How do you decorate a service class versus a DTO in terms of namespace? I've seen http://service.domain.com/ServerName/Version used on the service class itself and http://types.domain.com/ServiceName/Version used on the DTOs. Is this common? (Separating the namespaces into a type collection and a service collection?)
Should I implement IExtensibleDataObject on all my objects, on the basis that they could potentially evolve in future releases? (Lay the groundwork now.)
If my database has constraints on it for, e.g., string length, should I extend IParameterInspector and use that mechanism for validation (keeping logic and validation separate)?
Should the "actual service" be broken out into its own class so that, as I version, the service contract classes just call into that code (keeping each new version release as minimal as possible)? Or should I keep it within the service class and inherit from it for any new methods (and likewise, what happens when you remove a method)?
I'm sorry if I have a lot of questions; I just see two ends of the spectrum in the documentation. I see "setting up WCF" and then, directly after, "this is a versioned WCF", with no segue or steps in between. I'm assuming it's going to just "click" once I get enough information, but I'm (sadly) not there yet.
tl;dr
When you start writing a WCF service that you know is going to go through several iterations, how do you set up your project(s) to make it as easy as possible in the future (on yourself and your teammates)?
I have had success using a "strict" versioning policy (it seems from past experience you are heading in this direction anyway), where you simply create new endpoint(s) each time a new definition is released. This means you won't have any contract backward-compatibility concerns for legacy clients; older versions can easily be turned off once logging indicates all clients have upgraded. However, it is generally necessary to write bridging code for any legacy endpoints so they can continue to call into the modified business logic.
In terms of project organisation, I would create a new project for each version so they can easily be deployed separately. Namespaces using v1, v2 normally work well enough. The endpoint names can also include a version number, which should easily distinguish them from each other.
Alternately you could try using a "lax" versioning policy where you can have the ability to add or remove data members by implementing the IExtensibleDataObject interface in all your services. Some useful MSDN article links can be found in a popular response to a similar question: WCF client's and versioning.
Another "lax" kind of option is to move more towards a messaging solution (which WCF can support through message contracts and/or the MSMQ binding). Here podcast by SOA guru Udi Dahan that provides an interesting perspective and is definitely worth a listen - there is no IDog2.
Finally here is a good blog post with some further more fine-grained guidelines on whichever strategy you end up using:
http://wcfpro.wordpress.com/2010/12/21/wcf-versioning-guidelines-2/.