API level: the lower, the better?

Is it true that if the application does not require new features from newer APIs (i.e. higher API levels), it is better to target lower API levels?
The major concern is that lower API levels mean better compatibility, and thus a larger market.
Is there anything else I have to keep in mind when making such decisions?
I came up with this question when coming across some questions about Android API levels, but I think it can be a general question, not only for Android.

In general, yes, but only if the older API is still supported by the newer implementations, of course. (For example, Lucene Java changes its API in incompatible ways on major updates, so there you do not have this option.)
There could also be cases where the host platform looks at what API version you require and then behaves differently, in a way that you may not want (I cannot think of a good real-world example right now).
For Android, at the moment, I'd say yes: declare the lowest API level you need.

Design issue in wrapping "C" API into OO wrapper

I am trying to build an object-oriented wrapper around an API specification; this includes many structures, events, and APIs.
The API specification is revised every year, thereby releasing a new specification; the updates are likely to add newer structures, events, and APIs. Updates will also include changes to existing structures, events, and APIs: the API signatures as such do not change, but the structures they take as parameters are updated over time.
The challenges:
The API specification is essentially an SDK to a lower layer; what I am trying to build is also an SDK, but it will be an object-oriented wrapper over that SDK.
The users want objects and methods, not C-style structures and APIs.
Frequent version changes should have no impact on the high-level application, which should work seamlessly with any underlying API version.
Older applications should work on newer APIs.
Newer applications should work on older APIs.
The last one is tricky: what I mean is that when a newer application sees that it is running against an older version of the SDK, it should somehow adapt itself to that older version of the API.
Is there any design pattern which will help me achieve this, cope with the frequent changes to the internal data, and achieve both backward and forward compatibility?
OS: Windows
Dev Environment : Visual C++
Your problem is too high level to be answerable by a design pattern.
What you are asking for are architectural principles.
These you should base on your well-founded design decisions ("API is backwards compatible using versioning because...") which in turn are based on your requirements (e.g. "Older application should work on newer APIs").
Look into this (a presentation keynote about API design by Joshua Bloch):
How to Design a Good API and Why it Matters
1) All that comes to mind at the moment, if the SDK API involves manual resource allocation: RAII, i.e. constructor/destructor resource management: https://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
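To make point 1 concrete, here is a minimal sketch of the shape (written in Python for brevity, since its context managers are the closest analog to C++ RAII; in Visual C++ the release would live in the destructor). The sdk_open/sdk_close calls are hypothetical stand-ins for the SDK's C-style allocation API.

```python
# Hypothetical C-style SDK calls, stubbed out for illustration.
def sdk_open(device_id: int) -> int:
    print(f"opened device {device_id}")
    return device_id

def sdk_close(handle: int) -> None:
    print(f"closed handle {handle}")

class Session:
    """OO wrapper that owns a raw SDK handle, RAII-style:
    acquire on construction, guarantee release afterwards."""

    def __init__(self, device_id: int):
        self._handle = sdk_open(device_id)   # acquire on construction

    def close(self) -> None:
        if self._handle is not None:
            sdk_close(self._handle)          # guaranteed release
            self._handle = None

    # Context-manager protocol (Python's analog of ctor/dtor scoping).
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()

with Session(device_id=1) as session:
    pass   # callers use methods on `session`, never the raw handle
```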
2-5) Determine a function decomposition of the API you're building, one that becomes expressible in terms of each version tier of the SDK API. Some details on semi-formal function decomposition are here (towards the bottom):
http://jfeltz.com/posts/2015-08-30-cost-decreasing-software-architecture.html
You can then take the resulting function compositions and make them constructible objects if you have to. Don't worry about the final object model until you have a working understanding of the function compositions involved. This is hard at first, but trust me, it is far more powerful than iterating through several possible object-model designs.
For C++, you'll probably need to perform #define pre-processing against a scheme of versions for each upstream SDK API, unless your SDK encodes its version in a file somewhere, such that you can do DLL loading instead (in which case this may be the Factory design pattern), but I suspect you already knew that.
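As a concrete illustration of the factory idea: one common shape is a version-neutral interface that the high-level application codes against, with one adapter per SDK version and a small factory that picks the newest adapter the installed SDK supports. This is only a sketch under assumed names (Api, ApiV1, ApiV2, and the version numbers are all hypothetical), in Python for brevity; in C++ the factory would typically decide which versioned DLL to load.

```python
from abc import ABC, abstractmethod

class Api(ABC):
    """Version-neutral interface the high-level application sees."""
    @abstractmethod
    def send(self, payload: dict) -> None: ...

class ApiV1(Api):
    def send(self, payload: dict) -> None:
        print("v1 wire format:", payload)

class ApiV2(Api):
    def send(self, payload: dict) -> None:
        # Newer wire format; callers are unaffected because they
        # only ever program against the Api interface.
        print("v2 wire format:", {"wrapped": payload})

# Registry of adapters, keyed by the SDK major version they target.
ADAPTERS = {1: ApiV1, 2: ApiV2}

def make_api(installed_sdk_version: int) -> Api:
    """Factory: newest adapter not newer than the installed SDK.
    This is what lets a newer application 'transform itself' to an
    older API when it finds an older SDK underneath."""
    best = max(v for v in ADAPTERS if v <= installed_sdk_version)
    return ADAPTERS[best]()

api = make_api(installed_sdk_version=1)   # newer app on an older SDK
api.send({"id": 42})                      # routed to the v1 adapter
```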

What does "Opinionated API" mean?

I came across the term "Opinionated API" when reading about the ssl.create_default_context() function introduced by Python 3.4. What does it mean? What's the style of such an API? Why do we call it an "opinionated" one?
Thanks a lot.
It means that the creator of the API makes some choices for you that are in her opinion the best.
For example, a web application framework could choose to work best with (or even bundle or work exclusively with) a selection of lower-level libraries (for stuff like logging, database access, session management) instead of letting you choose (and then have to configure) your own.
In the case of ssl.create_default_context some security experts have thought about reasonably secure defaults to configure SSL connections. In particular, it limits the available algorithms to those that are still considered secure, at the expense of complete compatibility with legacy systems, a trade-off that is beneficial in their (and my) opinion.
Essentially they are saying "we have a lot of experience in this domain, and we really think you should do things in the following way".
I suppose this is a response to "enterprise" APIs that claim to work with every implementation of as many standard interfaces as possible (at the expense of complexity in configuration and combination, requiring costly consultants to set everything up).
Or a natural extension of "Convention over Configuration".
Things should work very well out-of-the-box, so that you only have to twiddle around with expert settings in special cases (and by then you should know what you are doing), as opposed to even a beginner having to make informed decisions about every aspect of the application (which can end in disaster).
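To make the contrast concrete, here is what the two styles look like side by side with Python's ssl module (a minimal sketch; the point is who makes the security decisions, not the specific knobs):

```python
import ssl

# Opinionated: one call, with defaults chosen by people who know TLS
# (certificate verification, hostname checking, restricted protocols).
ctx = ssl.create_default_context()

# Unopinionated: a bare context verifies nothing until you say so,
# and every security-relevant decision is yours to make and maintain.
manual = ssl.SSLContext(ssl.PROTOCOL_TLS)
manual.verify_mode = ssl.CERT_REQUIRED   # off by default here
manual.check_hostname = True             # also your job
manual.load_default_certs()              # and so is the trust store
# ...plus keeping the allowed protocol versions and ciphers current.
```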

Semantic versioning of REST apis?

I've evaluated a number of versioning schemes for REST APIs (header, URL, …). So far, the most reliable approach seems to be the URL option: it works with proxies and does not rely on obscure schemes such as dates for versioning.
Now, when I look around, everybody who uses the URL-based approach seems to use versions such as v1, v2, and so on. Nobody uses minor versions, or even a scheme such as semantic versioning.
This raises some questions:
When do you increase the version number of a REST API? (Surely you update it more often than once every five years.)
If you just have a bug fix, you probably do not increase the version number, but then how do you distinguish the two versions?
If you use a very fine-grained approach, you end up with lots of versions you need to host in parallel. How do you handle that?
In other words: how does a company such as GitHub manage to be at only v3 today (2015), when they have already been in business for 7 years? Does that mean that they actually only changed their API twice? I can hardly believe that.
Any hints?
The major version number is all you need for a web service, because your consumers are only concerned about backward-incompatible changes, and (if you're following semantic versioning correctly) those will only be introduced when a new major version is released.
All other changes (including new features, bugfixes, patches etc.) should be 'safe' for your consumers. Those new features don't have to be used by your consumers, and you probably don't want to continue to run that unpatched version that contains bug X or Y any longer than necessary.
Using the complete version number in your URLs (or whatever method you use for API versioning) would actually mean that your consumers have to update the URL of your API every time you make an update or bugfix, and that they would keep using an unpatched version until they do so.
This doesn't mean that you can't use semantic versioning internally, of course! Just use the first part (the major version) as the version number for your API. It simply doesn't make sense to include the full version number in your URL / header / other versioning mechanism.
So to answer your question: you update your API every time you make a new release, but you only release a new API version when you have a new major version. This way you only have to host a couple of different versions (and you can of course deprecate old versions over time).
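A small sketch of what that looks like in practice (the paths, version numbers, and header name are all made up): only the major version appears in the URL, while the full semantic version stays an internal detail that the server can expose for diagnostics.

```python
# Hypothetical routing sketch: URLs carry the major version only.
API_VERSION = "2.3.1"   # full semver, internal/diagnostic use only

HANDLERS = {
    "v1": lambda resource: f"legacy representation of {resource}",
    "v2": lambda resource: f"current representation of {resource}",
}

def handle(path: str) -> tuple[str, dict]:
    # e.g. path == "/api/v2/foo" -> ["", "api", "v2", "foo"]
    _, _, major, resource = path.split("/", 3)
    body = HANDLERS[major](resource)
    # Consumers keep calling /api/v2/... across every 2.x bugfix.
    return body, {"X-API-Version": API_VERSION}

print(handle("/api/v2/foo"))
```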

What is the benefit of versioning a REST api by date as Twilio does?

Basically, I think it's a good idea to version your REST api. That's common sense. Usually you meet two approaches on how to do this:
Either, you have a version identifier in your url, such as /api/v1/foo/bar,
or, you use a header, such as Accept: vnd.myco+v1.
So far, so good. This is what almost all big companies do. Both approaches have their pros and cons, and lots of this stuff is discussed here.
Now I have seen an entirely different approach, at Twilio, as described here. They use a date:
At compilation time, the developer includes the timestamp of the application when the code was compiled. That timestamp goes in all the HTTP requests.
When the request comes into Twilio, they do a look up. Based on the timestamp they identify the API that was valid when this code was created and route accordingly.
It's a very clever and interesting approach, although I think it is a bit complex. For example, it can be confusing whether the timestamp is the compilation time or the time when the API version was released.
Now while I somehow find this quite clever as well, I wonder what the real benefits of this approach are. Of course, it means that you only have to document one version of your API (the current one), but on the other hand it makes traceability of what has changed more difficult.
Does anyone know what the advantages of this approach are, so why Twilio decided to do so?
Please note that I am aware that this question sounds as if the answer(s) are primarily opinion-based, but I guess that Twilio had a good technical reason to do so. So please do not close this question as primarily opinion-based, as I hope that the answer is not.
Interesting question, +1, but from what I see they only have two versions: 2008-08-01 and 2010-04-01. So from my point of view that's just another way to spell v1 and v2 so I don't think there was a technical reason, just a preference.
This is all I could find on their decision: https://news.ycombinator.com/item?id=2857407
EDIT: make sure you read the comments below where #kelnos and #andes mention an advantage of using such an approach to version the API.
There's another thing I can think of that makes this an interesting approach, if you are the developer of such an API.
Say you have 20 methods, and you need to introduce a breaking change in 1 of them.
Using semver (v1, v2, v3, etc.) you need a v2 API.
All 20 of your methods now need to respond to v2, but in reality those methods haven't changed at all.
Using dates, you can keep your unchanged methods as they are, and when a request comes in, the server just picks the best match.
I don't know how this is implemented; any information on that would be really welcome.
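Lacking inside knowledge of Twilio's code, one plausible minimal implementation (a sketch with made-up dates and handlers) is a sorted list of release dates plus a binary search: route each request to the newest release that is not newer than the date the client pinned. Because the lookup can be kept per method, a method that never changed keeps its single dated entry, which is exactly the advantage described above.

```python
import bisect

# Hypothetical API releases, oldest first. ISO dates sort
# lexicographically, so plain string comparison works.
RELEASES = ["2008-08-01", "2010-04-01"]
HANDLERS = {
    "2008-08-01": lambda req: f"2008-era response to {req}",
    "2010-04-01": lambda req: f"2010-era response to {req}",
}

def route(client_date: str, request: str) -> str:
    """Dispatch to the newest release <= the client's pinned date."""
    i = bisect.bisect_right(RELEASES, client_date)
    if i == 0:
        raise ValueError("client predates the oldest API release")
    return HANDLERS[RELEASES[i - 1]](request)

print(route("2009-06-15", "GET /calls"))   # served by 2008-08-01
print(route("2010-04-01", "GET /calls"))   # served by 2010-04-01
```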
I used to work for a company that used date versioning (as in, each API call had a param with the desired API date, ?v=20200630) and loved it.
It lets you be less strict than with traditional versioning (v1, v2, v3), as client developers don't even need to care about the version number and can just use the current build time. Everything else is pretty much the same as with traditional versioning, plus the small benefit of seeing date checks in the server code: you can easily tell how old this or that code path is.
I believe the situation would have been different if we had had to support a number of external clients and, for example, fix a bug in ?v=20200630: there is no elegant way to specify something like ?v=20200630.1. As you can see from Twilio's experience, they were just changing what API version 2010-04-01 was, so a client couldn't be sure which version exactly it was seeing.
So my takeaway from this:
Date-based versioning seems easier and more flexible when you are a typical startup or a small company with a few apps (e.g. frontend, iOS, Android) and no or few 3rd-party clients. It makes it a bit easier for client developers to "just write code", and since you control all the code, most of the time you can fix old API bugs by just releasing a new version and asking clients to switch to it.
Once you have a real need to maintain old API versions (i.e. when you have a number of important clients who are not likely to update quickly), semver versioning becomes more reliable.

What is the status of HTML5 Database?

This spec http://www.w3.org/TR/webdatabase/ says:
This document was on the W3C Recommendation track but specification work has stopped. The specification reached an impasse: all interested implementors have used the same SQL backend (Sqlite), but we need multiple independent implementations to proceed along a standardisation path.
Does this mean that HTML5 database is going away, and that for some time we will have a de facto standard using SQLite, possibly with browser differences? Or has the W3C published a plan of attack for finishing the standard?
According to this article:
[...] we think it is worth explaining our design choices, and why we think IndexedDB is a better solution for the web than Web SQL Database.
In another article, we compare IndexedDB with Web SQL Database, and note that the former provides much syntactic simplicity over the latter. IndexedDB leaves room for a third-party JavaScript library to straddle the underlying primitives with a BTree API, and we look forward to seeing initiatives like BrowserCouch built on top of IndexedDB. Intrepid web developers can even build a SQL API on top of IndexedDB. We’d particularly welcome an implementation of the Web SQL Database API on top of IndexedDB, since we think that this is technically feasible. Starting with a SQL-based API for use with browser primitives wasn’t the right first step, but certainly there’s room for SQL-based APIs on top of IndexedDB.
I'm not personally swayed by the arguments put forth in the article, but it seems clear that (for the time being) Mozilla has decided that Web SQL Database is dead.
Further interesting comments about this article may be found on Hacker News.
My understanding is that this is now called "IndexedDB"
http://www.w3.org/TR/IndexedDB/
Apparently the Firefox team has started implementing this:
http://hacks.mozilla.org/2011/01/indexeddb-in-firefox-4/
I don't know if anyone knows the answer. Mozilla doesn't like the dependence upon SQLite and has decided to go a different way. However, all WebKit based browsers already have it implemented and I don't see them removing it as any websites built to take advantage of the spec would be broken.
This means that at least in certain contexts, mostly within the mobile sphere where most browsers have a WebKit implementation, it can still make sense to use the HTML5 Web SQL spec. I see this as especially true for developers who are looking to create mobile applications using a framework like PhoneGap.
There are times when, as an application developer, you want to provide users with access to data even if they aren't connected to the internet or the connection is slow, and some types of data are just more efficiently stored in a database than in a cookie or a JSON cache. For example, if you have data that has relationships, it is much easier and quicker to do a join query to pull the data you need than it is to search a JSON map, as the sketch below illustrates.
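To illustrate the join claim (using Python's sqlite3 module purely as a stand-in, since Web SQL is backed by SQLite; in the browser these would be executeSql calls): one declarative join versus the equivalent hand-rolled scan over JSON-like maps.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER,
                         item TEXT);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1, 'book'), (11, 2, 'pen'),
                              (12, 1, 'ink');
""")

# One declarative join does the relational work...
rows = db.execute("""
    SELECT users.name, orders.item
    FROM orders JOIN users ON users.id = orders.user_id
""").fetchall()
print(rows)

# ...versus manually stitching the relationship back together.
users = {1: "ada", 2: "bob"}
orders = [(10, 1, "book"), (11, 2, "pen"), (12, 1, "ink")]
print([(users[uid], item) for _, uid, item in orders])
```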
I don't think the spec is dead, and I actually hope that Mozilla will reverse their stance so that developers can use it to solve problems outside of the mobile webkit world.