Nginx Caching of REST API

We have a mobile app that calls a REST API to get the list of tiles to display on the app's primary screen. The authentication mechanism is an auth token, which we use to uniquely identify a user. The menu changes depending on the version of the app. For this we have two approaches.
/api/tilemenus (Pass auth header only and not version)
Retrieve the auth header, look up the user's app version in a DB table (we also store each user's version in our database and update it whenever the user upgrades the app), and return the data accordingly.
/api/tilemenus/1.2.2 (Pass auth header and version as well since client knows its version itself)
Here no DB lookup is required, since the version is passed in the REST request itself.
Which approach is better? I think approach 2 is better, since we can set caching headers to cache this API per version. With approach 1 there is no straightforward way to invalidate the cache when the user upgrades the app.

It is common to pass the API version in the URI path (check out this question too). I'd suggest using the second option, although rewritten as /api/1.2.2/tilemenus, which is closer to how the APIs of many popular sites are structured.

In my opinion, #2 is better, because you enforce the guarantee that a specific URL always returns the same resource/data, and, yes, you can safely cache it.
Plus, it makes it easier to track version usage just by analyzing HTTP server logs.
And it even spares you the effort of keeping track of the user's version, since #2 makes it explicit in the request URL itself.
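To make the caching difference concrete, here is a minimal sketch (Flask is assumed purely for illustration; the menu data and auth lookup are stubbed, not your real app) of why the versioned URL is safe to cache while the auth-only endpoint is not:

```python
# Minimal sketch, not the actual app: Flask is assumed just to show the
# caching difference; menu data and the user-version lookup are stubbed.
from flask import Flask, jsonify, make_response, request

app = Flask(__name__)

MENUS = {"1.2.2": ["home", "offers", "profile"]}   # stand-in menu data
USER_VERSIONS = {"token-abc": "1.2.2"}             # stand-in DB lookup by auth token

# Approach 2: the version is in the URL, so /api/1.2.2/tilemenus always maps
# to the same representation and nginx/CDN/browser caches can reuse it.
@app.route("/api/<version>/tilemenus")
def tilemenus_versioned(version):
    resp = make_response(jsonify(MENUS.get(version, [])))
    resp.headers["Cache-Control"] = "public, max-age=86400"
    return resp

# Approach 1: the representation depends on per-user state (the stored app
# version), so the URL alone is not a safe cache key; the response has to be
# private, and it goes stale the moment the user upgrades.
@app.route("/api/tilemenus")
def tilemenus_unversioned():
    token = request.headers.get("Authorization", "")
    version = USER_VERSIONS.get(token, "1.0.0")
    resp = make_response(jsonify(MENUS.get(version, [])))
    resp.headers["Cache-Control"] = "private, no-cache"
    return resp
```

The point is simply that with approach 2 the cache key (the URL) already encodes everything the response depends on, so intermediaries can cache aggressively without any invalidation logic.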

Related

How do we version a new endpoint being added to an existing API

We have our API versioning strategy based on URL.
I have a couple of scenarios for adding new endpoints where I could not find any strategic reference.
Scenario 1:
An existing API has endpoints ranging from v1 to v4: some endpoints go up to v2, some up to v3, and some up to v4.
In this situation, if I have to add a new endpoint, should I begin the version for the new endpoint at v4? Are there any standards for this?
Scenario 2:
This is a different scenario: an API gateway spans multiple microservices, and the microservices are grouped by resource within the gateway, so each resource has a one-to-one mapping to a service.
Similarly, the API versions differ between resources here: some resources go up to v3 and some up to v5. If a new endpoint has to be added to a resource that is currently at v3, should we add the new endpoint in v3, or should we create a v5 version of the resource to add that specific endpoint?
Any suggestions would be helpful.
You're unlikely to find a standard way of doing things. The closest thing to a standard is what Fielding and the HTTP specifications themselves say. You should expect these questions to elicit many subjective opinions. Here's my biased opinion, based on experience and a deep understanding of the specifications...
Conceptually, there's no real problem with adding new endpoints to an existing API. Where this might be problematic is if your API is public, with public documentation. Once an API version is released, it should be immutable so that clients can rely on it. If you're adding surface area to your API, then I would recommend creating a new version. If you're unsure what that version will shape up to be in totality, you can always start with a pre-release version; for example, 4.0-preview.1.
Your second question seems to ask whether you should have symmetrical versions. You can, but it's solely at your discretion. You indicated that you have microservices, so unless you are building out an API for an entire product or suite, it is more flexible to allow each API to evolve independently. This will organically result in heterogeneous API versions over time. That shouldn't be a problem IMHO. The key to making it manageable is to define a sound versioning policy, such as N-2 supported versions.
You've already elected to version by URL segment, so there's no going back. Versioning this way leads to a spider web of different URLs when the versions are not symmetrical. This is just one of the many problems you may encounter. Hypermedia is all but impossible to achieve when versioning by URL unless the versions are symmetrical. Ultimately, versioning by URL segment is not RESTful, despite being popular, because it violates the Uniform Interface constraint. The URL path is the resource identifier: v3/order/42 and v4/order/42 are not different resources, they are different representations. In the same way, I can ask for order/42 as application/json or application/xml; those are not different API versions, even though they look completely different over the wire.
As an example, if you retrieve v2/order/42 and it has a link to customer/42, but the Customer API supports 2.0 and 3.0, how do you know which link to provide? If the client only knows how to talk to v3/customer/42 and you give them v2/customer/42, it might break them. Furthermore, what happens if the Customer API doesn't support 2.0 at all? The Order API either has to incorrectly assume v2 is valid or has to be coupled to knowledge of which versions are supported; neither is good. In all cases, the server still doesn't know what the client really wants. It is the client's responsibility to tell the server what it wants. Every other method of versioning avoids this problem because the URLs stay consistent. Say you version by query string with api-version, another popular choice. If you provide a link to customer/42, the link is valid regardless of API version. It's the client's job to append ?api-version=<value> to indicate to the server how it wants to query the resource. This is why Fielding says that media type negotiation is the only way to version an API. It's hard to argue with the G.O.A.T., but using the query string or another header doesn't explicitly violate any constraints, even if media type negotiation would be better.
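To make that concrete, here is a small illustrative sketch in Python (the host, the vendor media type, and the api-version parameter are all made-up names, not a real API; the requests are prepared but never sent) showing that with query-string or media-type versioning the resource identifier stays stable and the client states the version it wants:

```python
# Illustrative only: example.com, the vendor media type, and the api-version
# parameter are hypothetical. Requests are prepared but never sent.
from requests import Request

# URL-segment versioning: the identifier itself changes with the version, so
# links embedded in responses have to guess which version the client speaks.
url_versioned = Request("GET", "https://api.example.com/v4/order/42").prepare()

# Query-string versioning: order/42 stays the same identifier; the client
# states the version it can consume.
query_versioned = Request(
    "GET", "https://api.example.com/order/42", params={"api-version": "4.0"}
).prepare()

# Media-type negotiation: same identifier again; the version rides on Accept.
media_versioned = Request(
    "GET", "https://api.example.com/order/42",
    headers={"Accept": "application/vnd.example.order+json; version=4"},
).prepare()

for r in (url_versioned, query_versioned, media_versioned):
    print(r.url, dict(r.headers))
```

In the second and third cases any link the server hands out is just customer/42; the client decorates it with whatever version it understands.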

For the Dropbox API, is there a way to pull a list of users and see if MFA is enabled?

I want to pull all users in my company's Dropbox and then check whether their accounts have MFA enabled. I read over the documentation for the Dropbox API but did not see anything indicating this is possible.
It's very sad to realize that a popular platform such as Dropbox doesn't expose a lot of basic features through its API (and the SDK itself is far from OK, compared to G Suite). Anyway, there are two hacky methods you can use to pull out that information (with some limitations).
First method:
By analyzing the team events using team_members_list() you can filter for tfa_change_status_details events. When new_value=TfaConfiguration('[sms|other]', None) is specified, 2FA is enabled.
The information I found that can be retrieved using this method is:
has_2fa - whether 2FA was ever configured.
is_tfa_enabled - whether 2FA is currently enabled.
tfa_type - whether 2FA is by SMS or by app.
However, keep in mind that you have to track changes constantly, and also that Dropbox retains team events for only two years.
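For what it's worth, here is a rough sketch of the first method against the raw HTTP API. It assumes the team events are paged through the /2/team_log/get_events endpoint, and the exact layout of each event below is an assumption based on the description above, so double-check it against the team_log reference:

```python
# Rough sketch only: the event/field layout below is an assumption based on
# the description above; verify it against Dropbox's team_log documentation.
import requests

TOKEN = "YOUR_TEAM_TOKEN"  # placeholder; needs a team token with audit-log access
GET_EVENTS = "https://api.dropboxapi.com/2/team_log/get_events"
GET_EVENTS_CONTINUE = GET_EVENTS + "/continue"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

def iter_team_events(limit=1000):
    """Page through the team event log (Dropbox keeps roughly two years)."""
    resp = requests.post(GET_EVENTS, headers=HEADERS, json={"limit": limit}).json()
    while True:
        yield from resp.get("events", [])
        if not resp.get("has_more"):
            break
        resp = requests.post(GET_EVENTS_CONTINUE, headers=HEADERS,
                             json={"cursor": resp["cursor"]}).json()

# Print every 2FA status change; the most recent new_value per member tells
# you whether 2FA ended up enabled (sms/authenticator) or disabled.
for event in iter_team_events():
    if event.get("event_type", {}).get(".tag") != "tfa_change_status":
        continue
    new_value = event.get("details", {}).get("new_value", {}).get(".tag")
    print(event.get("timestamp"), event.get("actor"), "->", new_value)
```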
Second method:
Using the front-end dashboard API, this information can be retrieved (I can't remember the API name; I think it is /2/get_multifactor, and inside you'd find some information about its status and the organizational policy regarding 2FA). However, to use the front-end dashboard API (which is totally undocumented) you'd need to simulate a successful login (and correctly use the lid and jar cookies), and you'd also need to bypass the captcha that appears when you abuse the service with too many requests.
To be honest, Dropbox's API is weak, neglected, and ugly. I wish I never had to use it. Anyway, I would recommend using the first method and praying for a significant update to the API.
No, unfortunately the Dropbox API doesn't expose this. We'll consider it a feature request.
There's a feature request open for this one (https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/MFA-status-for-users/m-p/468564#M23886). But I wouldn't hold your breath; as @Aviv mentioned, the Dropbox API seems surprisingly neglected at the moment.

Is it necessary to build a separate API endpoint for mobile apps to access a Rails web app?

I have a web app implemented in Ruby on Rails 4 and need an Android native app for it; I am really new to mobile development.
I am a bit confused as to what the mobile-web architecture should look like in this case. I've done some research online, and there seem to be a few ways of doing this, but I still have some questions that I haven't been able to find answers for. Thanks in advance for all pointers.
1) Do I really need a separate API for the mobile app? What are the issues in using my Rails app's existing controllers with respond_to format.json?
2) I've seen some online examples that suggest using a separate API namespace in the Rails app to serve mobile requests, e.g. class Api::ApiController < ActionController::Base for the new controller, then adding namespace :api do in routes.rb. Doesn't this approach imply that I'll need to duplicate quite a bit of my controller functionality in the new namespace just for mobile?
3) Regarding authentication, many examples suggest using token authentication. Is the built-in Rails session management framework not good enough for mobile apps? Or is it because session cookies work completely differently in a mobile app?
Appreciate your time.
It is not necessary, but it is, like you said, considered a best practice.
1+2) Having the same controllers with respond_to/respond_with logic is a nice idea at first sight. But from my experience I can say there always comes a day when the API code starts to differ from the HTML client code. The mobile client might have a different UI, and it is only natural that it will expect to consume your data in a different way than your web client does. The web client is specialized for one use case, whereas an API should be more generic, allowing multiple ways of consuming it.
The second issue that will arise is that you cannot rely on your mobile users always having the latest app version, whereas with a web app you can. So for the HTML app you can easily introduce incompatible changes, because you are delivering the client right along with the change, whereas breaking the mobile API is at least a concern. You will probably want to maintain backwards compatibility, which will make your all-purpose controllers ugly as hell. And without a proper api/v1 namespace you can't even serve two different API versions at the same time.
You can avoid duplicating your logic by keeping your controllers very skinny and moving the logic out into models (service objects are models too, not only Active Records).
3) Your mobile HTTP lib will, in all probability, have proper automatic cookie management. Token-based authentication is, again, just a best practice. If it is merely a token vs. a session_id inside a cookie, there is not much of a win. I can only think that it is automatically secure against CSRF attacks, and you can disable that protection entirely for the API, because your website users won't be able to consume the API just by logging in to the site (an additional benefit, perhaps). With session-based authentication you would have to generate a CSRF token on the first API request and send it in the X-CSRF-Token header.
The big advantage of token-based authentication is that it can be extended for more security, such as expiring tokens or HMAC tokens, whereas session authentication cannot.
See Using Sessions vs Tokens for API authentication
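To illustrate the "expiring tokens, HMAC tokens" point, here is a framework-agnostic Python toy rather than anything Rails-specific (the secret and token layout are made up): a token can be made self-expiring by signing the user id plus an expiry timestamp and verifying both on every request.

```python
# Toy illustration of an expiring HMAC token; not the Rails mechanism itself.
import hashlib, hmac, time

SECRET = b"server-side-secret"          # placeholder secret kept on the server

def issue_token(user_id, ttl_seconds=3600):
    expires = int(time.time()) + ttl_seconds
    payload = f"{user_id}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token):
    try:
        user_id, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return None
    payload = f"{user_id}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                      # signature mismatch: token was tampered with
    if int(expires) < time.time():
        return None                      # token expired; client must re-authenticate
    return user_id

token = issue_token("42")
print(verify_token(token))               # -> "42" while the token is still valid
```

Nothing like this is possible with a bare session id, which is why tokens give you more room to grow.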
I would also encourage you to look at JSON:API. It comes from the creators of Ember.js, who have thought hard about minimizing the decisions you have to make when building APIs. Another interesting thing is the active_model_serializers gem; an intro to it is given in Rails: The Next Five Years by Yehuda Katz.

Single page app + frequently changing REST API - deployment - a lot of API versions OR no deletion and renaming

Overview:
We have a Rails REST API app that is going to be published soon, together with a client app (a single-page app written in JS). We intend to deploy new things on an almost daily basis. For now the API is only used by us internally.
Questions:
How do we manage deployment, given that we deploy each app separately?
Should we always deploy them together? Should we bump the API version for each deployed change?
How do we reload the JS app when a new version of the API is available?
Should we not change the API version but keep backwards compatibility in mind? So only add new keys in JSON responses, no renaming, no deletions, no changing of URLs, ...
Are there any best practices described somewhere about this particular problem?
It is considered a best practice to only version a RESTful API when making a breaking change.
I would strongly suggest not releasing the API to the public until it has stabilized more. As long as it is internal-facing only, the frequent changes won't be as big of a deal. Once your API goes public, you'll want to limit the number of breaking changes (new versions) as much as possible, so work with your best customers before your public release to make sure the API is adequate, and try to design it with backwards compatibility in mind.
You should only need to pair the releases if there's a breaking change on the API side. Pairing them or not may be a business decision ("It's easier for our customers to have one version number to worry about than two").
You might also consider resetting the version number at release time so that the public is given the stable API at version 1 instead of 18.
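As a tiny illustration of the backwards-compatibility idea (the field names here are hypothetical): as long as clients read only the keys they know about and tolerate extra ones, additive changes don't require a new version.

```python
# Hypothetical payloads: the old and new responses differ only by added keys.
old_response = {"id": 7, "title": "Dashboard"}
new_response = {"id": 7, "title": "Dashboard", "icon": "grid", "badge_count": 3}

def render_tile(payload):
    # The client reads only the fields it understands and ignores the rest,
    # so an additive change on the server is not a breaking change.
    return f'{payload["id"]}: {payload["title"]}'

assert render_tile(old_response) == render_tile(new_response)
```

Renaming or removing a key, on the other hand, breaks the old client and is exactly the case where a new version (or a paired deployment) is needed.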

Is requiring a REST API request to include a cookie a good idea?

My idea is to treat URIs in my REST API as unique resources, except in the context of the client's location, which is stored in a cookie. Are there any downsides to this approach?
From a philosophical perspective, it's not really REST if you don't uniquely identify the resource via URL (at least, per my reading of Fielding).
From a practical perspective -- and this is based on experience -- you're in for a world of pain if you require web service calls to use cookies. Primarily because it's a piece of information that has to be managed on a different code path, making your client-side code more complex. You'll also run into issues with domain and proxies (particularly if you share the cookie between the service and a traditional web-app), and it isn't portable between clients.
If you're looking to generate different content based on location, why not use a geolocation service?
Edit: why not make the location part of the request URL? You can still use a cookie to store this information and retrieve it using JavaScript. This would keep your service interface clean and allow you to easily use the service from other clients.
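As a quick sketch of that edit (the endpoint and parameter names are hypothetical), carrying the location in the request itself keeps the full URL as the cache key and keeps the interface usable from any client:

```python
# Hypothetical endpoint: the point is that the location is part of the
# request URL, so two different locations produce two different cache keys.
import requests

# Build the request without sending it, just to show the resulting URL.
req = requests.Request(
    "GET",
    "https://api.example.com/stores",                # made-up endpoint
    params={"lat": 40.7128, "lng": -74.0060},        # location supplied by the client
    headers={"Accept": "application/json"},
).prepare()

print(req.url)   # https://api.example.com/stores?lat=40.7128&lng=-74.006
```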
As an API designer, you should make ease of use for the client programmer a high priority. In many libraries that support HTTP, putting cookies into the request is more difficult than putting, say, a query parameter into the URL.
I'd be concerned about caching. Do one request with the user at location A and it gets cached; the user moves to B, makes the request again, and gets the location-A version of the response.