Creating my own custom header for versioning a REST API

I am re-designing my previous-century app (which works very well, mind you), and the chance to start fresh has me investigating several aspects. This question is about versioning.
The app is really a collection of data apps that deliver resources -- the query is HTTP and the response is JSON. Now I want to change one of the responses. After reading around, I have decided not to implement versioning via the URI or query string. Instead, as advised in Best practices for API versioning?, especially the second answer (is there not a way on Stack Overflow to link to a specific answer?), I am using headers. However, instead of using Accept: application/vnd.company.myapp.customer-v3+json I am using
Accept: application/json
X-Requested-With: XMLHttpRequest
X-Version: 2
On the server side I am able to check the value of X-Version. If it doesn't exist, the query uses the latest API. If X-Version does exist, then the requested version is used. The above works fine.
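For concreteness, the server-side check looks roughly like this. This is only a minimal sketch using Sinatra purely for illustration (the route, response shapes, and "latest is 2" are made up); the point is just that the header shows up in the Rack env as HTTP_X_VERSION:

require 'sinatra'
require 'json'

LATEST_VERSION = 2 # hypothetical: v2 is the current latest

get '/customers/:id' do
  # a missing X-Version header falls back to the latest API version
  version = (request.env['HTTP_X_VERSION'] || LATEST_VERSION).to_i
  content_type :json
  if version == 1
    { id: params[:id], name: 'Alice' }.to_json              # old response shape
  else
    { id: params[:id], full_name: 'Alice Example' }.to_json # current response shape
  end
end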
My question is -- are there any gotchas I should be aware of? Especially since I pulled this X-Version header out of thin air. As far as I know, it is not an officially sanctioned header.
Update: Drat! Even before posting this I read Custom HTTP headers : naming conventions, and it seems I should not use the X- prefix. However, then there is a possibility of my home-grown header conflicting with an existing standard unless I first do due diligence. Thoughts?

I prefer the custom header, especially with the current trend of reverse proxies, microservices, and traffic management across application versions.
X-App: <app_name>:<app_version>

There is a lot of debate around the "right" way to do this. For some high-level thoughts, check out mnot's Web API Versioning Smackdown.
For a very in-depth discussion, check out this post to API-Craft: https://groups.google.com/forum/#!topic/api-craft/E8MBkzirdcw
Personally, I favor a combination of media type versioning and link relation versioning to accomplish this.

Related

How do we version a new endpoint being added to an existing API

Our API versioning strategy is based on the URL.
I have a couple of scenarios for adding new endpoints where I could not find any strategic guidance.
Scenario 1:
An existing API has endpoints ranging from v1 to v4. Some endpoints go up to v2, some up to v3, and some up to v4.
In this situation, if I have to add a new endpoint, should I begin the version for the new endpoint at v4? Is there any standard for this?
Scenario 2:
This is a different scenario. One API gateway spans multiple microservices, and the microservices are grouped by resource within the gateway, so a resource has a one-to-one mapping to a service.
Similarly, different API versions exist between resources here. Some resources go up to v3 and some up to v5. If a new endpoint needs to be added to a resource that is currently at v3, should we add the new endpoint in v3, or should we create a v5 version of the resource for that specific endpoint?
Any suggestions would be helpful.
You're unlikely to find a standard way of doing things. The closest thing to a standard is what Fielding and the HTTP specifications themselves say. You should expect these questions to elicit many subjective opinions. Here's my biased opinion based on experience and a deep understanding of the specifications...
Conceptually, there's no real problem with adding new endpoints to an existing API. Where this might be problematic is if your API is public and with public documentation. Once an API version is released, it should be immutable so that clients can rely on it. If you're adding surface area to your API, then I would recommend you create a new version. If you're unsure what that version will shape up to be in totality, you can always start with a pre-release version; for example, 4.0-preview.1.
Your second question seems to ask whether you should have symmetrical versions. You can, but it's solely at your discretion. You indicated that you have microservices, so unless you are building out an API for an entire product or suite, it is more flexible to allow each API to evolve independently. This will organically result in heterogeneous API versions over time. That shouldn't be a problem IMHO. The key to making that manageable is to define a sound versioning policy, such as N-2 supported versions.
You've already elected to version by URL segment, so there's no going back. Versioning this way leads to a spider web of different URLs when the versions are not symmetrical. This is just one of the many problems you may encounter. Hypermedia is all but impossible to achieve when versioning by URL unless the versions are symmetrical. Ultimately, versioning by URL segment is not RESTful, despite being popular, because it violates the Uniform Interface constraint. The URL path is the resource identifier. v3/order/42 and v4/order/42 are not different resources, they are different representations. In the same way, I can ask for order/42 as application/json or application/xml, but they are not different API versions, even though they look completely different over the wire.
As an example, if you retrieve v2/order/42 and it has a link to customer/42, but the Customer API supports 2.0 and 3.0, how do you know which link to provide? If the client only knows how to talk to v3/customer/42 and you give them v2/customer/42, it might break them. Furthermore, what happens if the Customer API doesn't support 2.0 at all? The Order API either has to incorrectly assume v2 is valid, or it has to be coupled into knowing which versions are supported; neither of which is good. In all cases, the server still doesn't know what the client really wants. It is the client's responsibility to tell the server what it wants.
Every other method of versioning does not have this problem, because the URLs are always consistent. Let's say you version by query string with api-version, another popular choice. If you provide a link to customer/42, the link is valid regardless of API version. It's the client's job to know and append ?api-version=<value> to indicate to the server how they want to query the resource. This is why Fielding says that media type negotiation is the only way to version an API. It's hard to argue with the G.O.A.T., but using the query string or another header doesn't explicitly violate any constraints, even if media type negotiation would be better.
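To make the query-string variant concrete, here is a rough client-side sketch (plain Ruby; the URLs and the links field are hypothetical). The server hands out unversioned links, and the client appends api-version whenever it dereferences one:

require 'net/http'
require 'json'
require 'uri'

API_VERSION = '3.0' # the version this client speaks

def get_resource(url)
  uri = URI(url)
  params = URI.decode_www_form(uri.query || '') << ['api-version', API_VERSION]
  uri.query = URI.encode_www_form(params)
  JSON.parse(Net::HTTP.get(uri))
end

order = get_resource('https://api.example.com/order/42')
# the link in the payload stays unversioned, so it is valid for every client
customer = get_resource(order['links']['customer'])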

What is the difference between a rest api and web api

I want to know what's the difference between a web service (web API) and a REST API.
So far I have only learnt the GET and POST methods in the backend to communicate with my apps, but people always talk about PUT, DELETE, UPDATE, REST, etc. I am unable to understand the benefits and the meaning.
What is the difference between a rest api and web api
Neither of these terms is well enough defined to assert with any confidence "the" difference.
REST is an architectural style; the most important application of that style is the world wide web. The web has been so catastrophically successful that there really hasn't been a second REST application - if you need what REST offers, you use the web, because the hard work has already been done for you.
I am unable to understand the benefits and meaning.
For every standardized HTTP method, you can use the HTTP method registry to find the reference that defines the meaning of the method.
Most of the methods that people talk about on a regular basis have their meanings defined by RFC 7231.
The benefit comes from the fact that the meanings of the different methods are standardized; they provide certain semantic guarantees that allow general-purpose components to do clever things.
For example, knowing that a method has idempotent semantics means that we can resend the HTTP request when we don't get a response the first time; this is an important guarantee when your network is unreliable. Because that's true of all idempotent requests, regardless of which URI is being targeted, we can build the retry into the browser itself.
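As a rough illustration of that guarantee (plain Ruby, with a hand-rolled retry rather than any particular library's built-in one): a general-purpose component can safely resend a failed request without knowing anything about the URI, as long as the method promises idempotence.

require 'net/http'

IDEMPOTENT_METHODS = %w[GET HEAD PUT DELETE OPTIONS].freeze

def send_with_retry(request, uri, attempts: 3)
  attempts.times do |i|
    begin
      return Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
        http.request(request)
      end
    rescue Timeout::Error, IOError, SystemCallError
      # only safe to retry blindly because the method promises idempotence
      raise unless IDEMPOTENT_METHODS.include?(request.method) && i < attempts - 1
      sleep 2**i
    end
  end
end

uri = URI('https://api.example.com/order/42') # hypothetical resource
response = send_with_retry(Net::HTTP::Get.new(uri), uri)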
It may help to think about POST as "the" basic message, and all of the others as specializations:
GET is a specialization of POST that is used to retrieve copies of representations
HEAD is a specialization of GET that is used to retrieve metadata
PUT is a specialization of POST that is used for "upserting" new representations
PATCH is a specialization of POST that applies patch documents to a resource

Are REST API's really RESTful?

I'm new to this game so I might be misunderstanding things. Actually, if someone tells me that I'm misunderstanding things, it will be a favor. Maybe this person will be considerate enough to show me the right path. But...
One of the "guidelines" or "best practices" of REST as it applies to Web Services (http://en.wikipedia.org/wiki/Representational_state_transfer#Applied_to_web_services) is that you should use the proper HTTP methods when making calls (did I misunderstand it?) to REST API's.
But looking at many API implementations on the web, what I see is that 100% of the calls made to them are actually GET calls that, depending on their URI, will be interpreted by the API as being of one of the HTTP verbs or methods.
So, for example, looking at the REST API documentation for Twitter (https://dev.twitter.com/rest/public) which, in principle, only defines two verbs/methods (GET and POST), actually has all calls sent as GET and, based on the URI in the GET call, interpreted by the API and acted upon.
Example:
GET statuses/lookup: https://api.twitter.com/1.1/statuses/lookup.json
POST statuses/update (PUT?): https://api.twitter.com/1.1/statuses/update.json
In both cases, the call itself was made using GET and the last part of the URI defined it as a real GET or as a POST.
In summary, to be truly RESTful, shouldn't client side implementations of REST API's for web services use the proper HTTP verbs/methods?
What am I missing?
You're missing a lot, but don't worry about it, most people are.
The fact is that very few so-called REST APIs publicly available on the internet are really RESTful, mostly because they are not hypertext driven. REST became a buzzword to refer to any HTTP API that isn't SOAP, so don't expect an API to really be RESTful just because it says it's a REST API. I recommend reading this answer.
From my experience, most API developers aren't aware of what REST really is and believe that any HTTP API that uses HTTP and avoids verbs in URIs is REST.
REST is defined by a set of constraints. Among them is the uniform interface, which in simple terms means that you should not change the expected behavior of the underlying protocol. REST isn't coupled to any particular protocol, but since it's commonly used with HTTP, the two sometimes get conflated.
HTTP has very well defined semantics for the GET, POST, PUT, DELETE, PATCH and HEAD methods, and the POST method has its semantics determined by the server. Ideally, a REST API should respond to the methods other than POST exactly as determined in RFC 7231, but as you noticed, there are many APIs that call themselves REST but don't do that. This happens for many reasons. Sometimes there's a simple misunderstanding about the correct semantics, or it's done to keep consistency, or because of backwards compatibility with intermediaries that don't support all methods, and many other reasons.
So, there's a lot more that has to be done to be truly RESTful other than using the HTTP methods correctly. If an API doesn't get even that right, it needs to find another buzzword, because it's definitely not REST.
I can't exactly tell what your question is, but I believe there are some concepts that will help you. Allow me to elaborate...
You are correct that many APIs use a limited number of HTTP "verbs" in their API. GET/POST are the most common. PUT less so, and then all others (DELETE, HEAD, OPTIONS etc) are used with vanishing probabilities.
Dropbox Core API for file uploads allows optional PUT / POST and their stated reason is "For compatibility with browser environments, the POST HTTP method is also recognized."
Indeed the limitation is the browser. Popular web servers have no problem with all HTTP request methods and even made up ones. After all, the request method is just some string with regard to the web server.
HTML4 and HTML5 only allow GET and POST requests for form requests. If you want your API to be used through a browser at all - and hey why not, it sounds like a useful thing - then you're limited to GET/POST. For a useful discussion on this see: https://softwareengineering.stackexchange.com/questions/114156/why-are-there-are-no-put-and-delete-methods-on-html-forms
Further complicating things is the fact that REST is not an industry standard. There exists no RFC, ISO standard, or other document detailing what a "compliant" implementation must and must not do. While many folks had been playing with concepts related to REST for some time, the REST concept was "invented" in the PhD dissertation of Roy Fielding. A fantastic read if you're interested in such things.
Yes, according to REST, APIs should be using the correct verbs. However, as long as the documentation is clear and all GET requests are idempotent, then life should continue smoothly.
(Source: I wrote PipeThru.com which integrates 40+ APIs, Dropbox and Twitter included)
I think that this link could give you some hints about the design of RESTful services / Web API: https://templth.wordpress.com/2014/12/15/designing-a-web-api/.
It's clear that not all Web services that claim to be RESTful are really RESTful ;-)
In short, RESTful services should leverage HTTP methods for what they are designed for:
method GET: return the state of a resource
method POST: execute an action (creation of an element in a resource list, ...)
method PUT: update the complete state of a resource
method PATCH: update partially the state of a resource
method DELETE: delete a resource
You also need to be aware that these methods can apply at different levels, so they won't do the same things at each level (see the sketch after the list below):
a list resource (for example, path /elements)
an element resource (for example, path /elements/{elementid})
a field of an element resource (for example, path /elements/{elementid}/fieldname). This is convenient for managing field values with multiple cardinality: you don't have to send the complete value of the field (the whole list) but can add / remove elements from it.
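A minimal Sinatra sketch of those levels (the in-memory store, the element shape, and the tags field are all hypothetical; it only illustrates how the same methods map onto the list, element, and field resources):

require 'sinatra'
require 'json'

ELEMENTS = {} # hypothetical in-memory store: id => attributes hash

before { content_type :json }

# list resource
get '/elements' do
  ELEMENTS.to_json
end

post '/elements' do
  id = (ELEMENTS.size + 1).to_s
  ELEMENTS[id] = JSON.parse(request.body.read)
  status 201
  ELEMENTS[id].to_json
end

# element resource
get '/elements/:id' do
  ELEMENTS.fetch(params[:id]).to_json
end

put '/elements/:id' do
  ELEMENTS[params[:id]] = JSON.parse(request.body.read) # replace the whole state
  ELEMENTS[params[:id]].to_json
end

patch '/elements/:id' do
  ELEMENTS[params[:id]].merge!(JSON.parse(request.body.read)) # partial update
  ELEMENTS[params[:id]].to_json
end

delete '/elements/:id' do
  ELEMENTS.delete(params[:id])
  status 204
end

# field of an element resource: add one value to a multi-valued field
post '/elements/:id/tags' do
  ELEMENTS[params[:id]]['tags'] << JSON.parse(request.body.read)['tag']
  status 201
end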
Another important thing is to leverage HTTP headers. For example, the header Accept for content negotiation...
I find the Web API of GitHub well designed and its documentation is also great. You could browse it to get an idea. See its documentation here: https://developer.github.com/v3/.
Hope it helps you,
Thierry
You are correct. If they want to be "RESTful", their API should respect the semantics of each HTTP method.
Roughly, REST is about method information (what the server should do), scoping information (where the server should do it) and, I almost forgot to mention, being hypermedia driven (make sure you check Pedro Werneck's great answer to this question, as it talks about it a little more and references a blog post from Fielding on the matter).
What the API you mentioned does is put both method and scoping information in the URL. That does not fit the RESTful architecture very well, which, in general terms, tells us to:
1) use the HTTP methods the proper way (respecting their properties, such as idempotency and others), and
2) use unique URIs to identify unique resources.
Point 1 says "use HTTP methods to convey method information" and point 2 says "use URIs to convey scoping information".
Again, if an API uses GET with a specific parameter in the URI to do something (and not get something), then it is using URI to convey method information.
Now, don't be alarmed. Most APIs out there are just RESTful-ish (like Twitter's or Flickr's), meaning they are an animal between REST and something else. That is not bad per se, it just means they will not fully benefit from what RESTful architectures (and HTTP) have to offer.
Remember that being RESTful isn't just a matter of fashion; it does have its benefits, such as statelessness, addressability, and so on. And those can only be fully achieved by using the HTTP verbs like they were supposed to be used.
About using POST instead of PUT: considering they have different properties (PUT is idempotent, POST is not), it is not bad to use POST, as long as it is uniformly designed, that is, a programmer should not wonder what POST will do for each and every URI in the API: they should all behave the same. (PUT does not suffer from that because it already is uniform.) I talked a little more about this - and quoted Roy Fielding's take on it - in another question (check out the "Wrapping Up" part).
Consider looking at the Richardson Maturity Model for REST.
This model describes how RESTful a particular API is:
Level 0:
Simple GET and POST request to descriptive url
/getUserByName?name=Greg
Level 1:
Divide all content into resources and define actions on resource groups
/user/getByName?name=Greg
Level 2:
Proper use of HTTP verbs.
GET /user/Greg
Level 3:
Use hypermedia controls
Different APIs on the internet implement different maturity levels of REST. That's why some APIs don't support all HTTP features.
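For a feel of Level 3, the Level 2 example above would also carry hypermedia controls in its response. A minimal sketch (Sinatra; the link relations are made up for illustration):

require 'sinatra'
require 'json'

get '/user/:name' do
  content_type :json
  {
    name: params[:name],
    links: [
      { rel: 'self',   href: "/user/#{params[:name]}" },
      { rel: 'orders', href: "/user/#{params[:name]}/orders" },
      { rel: 'edit',   href: "/user/#{params[:name]}" }
    ]
  }.to_json
end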

Separate REST JSON API server and client? [closed]

I'm about to create a bunch of web apps from scratch. (See http://50pop.com/code for overview.) I'd like for them to be able to be accessed from many different clients: front-end websites, smartphone apps, backend webservices, etc. So I really want a JSON REST API for each one.
Also, I prefer working on the back-end, so I daydream of me keeping my focus purely on the API, and hiring someone else to make the front-end UI, whether a website, iPhone, Android, or other app.
Please help me decide which approach I should take:
TOGETHER IN RAILS
Make a very standard Rails web-app. In the controller, do the respond_with switch, to serve either JSON or HTML. The JSON response is then my API.
Pro: Lots of precedent. Great standards & many examples of doing things this way.
Con: Don't necessarily want API to be same as web app. Don't like if/then respond_with switch approach. Mixing two very different things (UI + API).
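For reference, the respond_with switch I mean looks roughly like this (classic Rails style; Product is a placeholder model name):

class ProductsController < ApplicationController
  respond_to :html, :json

  def index
    @products = Product.all # placeholder model
    respond_with(@products) # renders index.html.erb for HTML, or JSON for API clients
  end
end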
REST SERVER + JAVASCRIPT-HEAVY CLIENT
Make a JSON-only REST API server. Use Backbone or Ember.js for client-side JavaScript to access API directly, displaying templates in browser.
Pro: I love the separation of API & client. Smart people say this is the way to go. Great in theory. Seems cutting-edge and exciting.
Con: Not much precedent. Not many examples of this done well. Public examples (twitter.com) feel sluggish & are even switching away from this approach.
REST SERVER + SERVER-SIDE HTML CLIENT
Make a JSON-only REST API server. Make a basic HTML website client, that accesses the REST API only. Less client-side JavaScript.
Pro: I love the separation of API & client. But serving plain HTML5 is quite foolproof & not client-intensive.
Con: Not much precedent. Not many examples of this done well. Frameworks don't support this as well. Not sure how to approach it.
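A rough sketch of what option 3 can look like, with Sinatra chosen purely for illustration (the API root URL and the products template are hypothetical). The HTML client is just another consumer of the JSON API:

require 'sinatra'
require 'net/http'
require 'json'

API_ROOT = 'http://localhost:9292' # hypothetical JSON-only REST API server

get '/products' do
  # the HTML client talks to the REST API exactly like any other client would
  products = JSON.parse(Net::HTTP.get(URI("#{API_ROOT}/products")))
  erb :products, locals: { products: products } # views/products.erb renders plain HTML
end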
Especially looking for advice from experience, not just in-theory.
At Boundless, we've gone deep with option #2 and rolled it out to thousands of students. Our server is a JSON REST API (Scala + MongoDB), and all of our client code is served straight out of CloudFront (ie: www.boundless.com is just an alias for CloudFront).
Pros:
Cutting-edge/exciting
A lot of bang for your buck: API gives you basis for your own web client, mobile clients, 3rd party access, etc.
exceedingly fast site loading / page transitions
Cons:
Not SEO friendly/ready without a lot more work.
Requires top-notch web front-end folks who are ready to cope with the reality of a site experience that is 70% JavaScript, and what that means.
I do think this is the future of all web-apps.
Some thoughts for the web front end folks (which is where all the new-ness/challenge is given this architecture):
CoffeeScript. Much easier to produce high-quality code.
Backbone. Great way to organize your logic, and active community.
HAMLC. Haml + CoffeeScript templates => JS.
SASS
We've built a harness for our front-end development called 'Spar' (Single Page App Rocketship), which is effectively the asset pipeline from Rails tuned for single-page app development. We'll be open-sourcing it within the next couple of weeks on our GitHub page, along with a blog post explaining how to use it and the overall architecture in greater detail.
UPDATE:
With respect to people's concerns about Backbone, I think they are overrated. Backbone is far more an organizational principle than it is a deep framework. Twitter's site is a giant beast of JavaScript covering every corner case across millions of users & legacy browsers, while loading tweets in real time, garbage collecting, displaying lots of multimedia, etc. Of all the 'pure' JS sites I've seen, Twitter is the odd one out. There have been many impressively complicated apps delivered via JS that fare very well.
And your choice of architecture depends entirely on your goals. If you are looking for the fastest way to support multiple clients and have access to good front-end talent, investing in a standalone API is a great way to go.
Very well asked. +1. For sure, this is a useful future reference for me. Also, @Aaron and others added value to the discussion.
Though asked about Ruby, this question is equally applicable to other programming environments.
I have used the first two options: the first one for numerous applications and the second one for my open source project Cowoop.
Option 1
This one is no doubt the most popular one. But I find implementations are very much HTTP-ish. Every API's initial code goes into dealing with the request object. So API code is more than pure Ruby/Python/other-language code.
Option 2
I always loved this.
This option also implies that HTML is not generated at runtime on the server (this is how option 2 differs from option 3). Pages are instead built as static HTML by a build script. When loaded on the client side, these HTML pages call the API server as a JS API client.
Separation of concerns is a great advantage. And, very much to your liking (and mine), backend experts implement backend APIs and test them easily like usual language code, without worrying about framework/HTTP request code.
This really is not as difficult as it sounds on the frontend side. Make API calls and the resulting data (mostly JSON) is available to your client-side template or MVC.
Less server-side processing. It means you may go for commodity hardware / a less expensive server.
Easier to test layers independently, and easier to generate API docs.
It does have some downsides.
Many developers find this over-engineered and hard to understand. So chances are that the architecture may get criticized.
i18n/l10n is hard. Since the HTML is essentially generated at build time and is static, one needs multiple builds per supported language (which isn't necessarily a bad thing). But even with that you may have corner cases around l10n/i18n and need to be careful.
Option 3
Backend coding in this case stays the same as in the second option. Most points for option 2 apply here as well.
Web pages are rendered at runtime using server-side templates. This makes i18n/l10n much easier, with more established/accepted techniques. It may also mean one less HTTP call for some essential context needed for page rendering, like user, language, currency, etc. So server-side processing increases with rendering, but is possibly compensated by fewer HTTP calls to the API server.
Now that pages are rendered on the server, the frontend is more tied to your programming environment. This might not even be a consideration for many applications.
Twitter case
As I understand it, Twitter does its initial page rendering on the server, but for page updates it still makes some API calls and uses client-side templates to manipulate the DOM. So in such a case you have two sets of templates to maintain, which adds some overhead and complexity. Not everyone can afford this option, unlike Twitter.
Our project Stack
I happen to use Python. I use JSON-RPC 2.0 instead of REST. I suggest REST, though I like the idea of JSON-RPC for various reasons. I use the libraries below. Somebody considering option 2/3 might find them useful.
API server: Flask, a fast Python web micro-framework
Frontend server: Nginx
Client side MVC: Knockout.js
Other relevant tools/libs:
Jquery
Accounting.js for money currency
Webshim : Cross browser polyfill
director: Client side routing
sphc: HTML generation
My conclusion and recommendation
Option 3!
All said, I have used option 2 successfully but am now leaning towards option 3 for some simplicity. Generating static HTML pages with a build script and serving them with an ultra-fast server that specializes in serving static pages is very tempting (option 2).
We opted for #2 when building gaug.es. I worked on the API (ruby, sinatra, etc.) and my business partner, Steve Smith, worked on the front-end (javascript client).
Pros:
Move quickly in parallel. If I worked ahead of Steve, I could keep creating APIs for new features. If he worked ahead of me, he could fake out the API very easily and build the UI.
API for free. Having open access to the data in your app is quickly becoming a standard feature. If you start with an API from the ground up, you get this for free.
Clean separation. It is better to think of your app as an API with clients. Sure, the first and most important client may be a web one, but it sets you up for easily creating other clients (iPhone, Android).
Cons:
Backwards compatibility. This is more related to an API than to your direct question, but once your API is out there, you can't just break it or you break all your clients too. This doesn't mean you have to move slower, but it does mean you often have to make two things work at once. Adding to the API or adding new fields is fine, but changing/removing shouldn't be done without versioning.
I can't think of any more cons right now.
Conclusion: API + JS client is the way to go if you plan on releasing an API.
P.S. I would also recommend fully documenting your API before releasing it. The process of documenting the Gaug.es API really helped us improve it.
http://get.gaug.es/documentation/api/
I prefer to go the route of #2 and #3. Mainly because #1 violates separation of concerns and intermingles all kinds of stuff. Eventually you'll find the need to have an API endpoint that does not have a matching HTML page/etc., and you'll be up a creek with intermingled HTML and JSON endpoints in the same code base. It turns into a freaking mess, even if it's an MVP; you'll have to rewrite it eventually because it's so messy that it's not even worth salvaging.
Going with #2 or #3 allows you to have an API that acts the same (for the most part) regardless of the client. This provides great flexibility. I'm not 100% sold on Backbone/Ember/whatever.js just yet. I think it's great, but as we're seeing with Twitter this is not optimal. BUT... Twitter is also a huge beast of a company and has hundreds of millions of users. So any improvement can have a huge impact on the bottom line in various areas of various business units. I think there is more to the decision than speed alone, and they're not letting us in on that. But that's just my opinion. However, I do not discount Backbone and its competitors. These apps are great to use, very clean, and very responsive (for the most part).
The third option has some valid allure as well. This is where I'd follow the Pareto principle (80/20 rule) and have 20% of your main markup (or vice versa) rendered on the server, and then have a nice JS client (Backbone/etc.) run the rest of it. You may not be communicating 100% with the REST API via the JS client, but you will be doing some work where necessary to make the user experience better.
I think this is one of those "it depends" kinds of problems and the answer is "it depends" on what you're doing, whom you're serving and what kind of experience you want them to receive. Given that I think you can decide between 2 or 3 or a hybrid of them.
I'm currently working on converting a huge CMS from option 1 to option 3, and it's going well. We chose to render the markup server-side because SEO is a big deal to us, and we want the sites to perform well on mobile phones.
I'm using node.js for the client's back-end and a handful of modules to help me out. I'm somewhat early in the process but the foundation is set and it's a matter of going over the data ensuring it all renders right. Here's what I'm using:
Express for the app's foundation.
(https://github.com/visionmedia/express)
Request to fetch the data.
(https://github.com/mikeal/request)
Underscore templates that get rendered server side. I reuse these on the client.
(https://github.com/documentcloud/underscore)
UTML wraps underscore's templates to make them work with Express.
(https://github.com/mikefrey/utml)
Upfront collects templates and lets you choose which get sent to the client.
(https://github.com/mrDarcyMurphy/upfront)
Express Expose passes the fetched data, some modules, and templates to the front-end.
(https://github.com/visionmedia/express-expose)
Backbone creates models and views on the front-end after swallowing the data that got passed along.
(https://github.com/documentcloud/backbone)
That's the core of the stack. Some other modules I've found helpful:
fleck (https://github.com/trek/fleck)
moment (https://github.com/timrwood/moment)
stylus (https://github.com/LearnBoost/stylus)
smoosh (https://github.com/fat/smoosh)
…though I'm looking into grunt (https://github.com/cowboy/grunt)
console trace (https://github.com/LearnBoost/console-trace).
No, I'm not using coffeescript.
This option is working really well for me. The models on the back-end are non-existent because the data we get from the API is well structured and I'm passing it verbatim to the front-end. The only exception is our layout model, where I add a single attribute that makes rendering smarter and lighter. I didn't use any fancy model library for that, just a function that adds what I need on initialization and returns itself.
(sorry for the weird links, I'm too much of a n00b for stack overflow to let me post that many)
We use the following variant of #3:
Make a JSON-only REST API server. Make an HTML website server. The HTML web server is not, as in your variant, a client of the REST API server. Instead, the two are peers. Not far below the surface, there is an internal API that provides the functionality the two servers need.
We're not aware of any precedent, so it's kind of experimental. So far (about to enter beta), it has worked out pretty well.
I'm usually going for the 2nd option, using Rails to build the API, and backbone for the JS stuff. You can even get an admin panel for free using ActiveAdmin.
I've shipped tens of mobile apps with this kind of backend.
However it heavily depends if your app is interactive or not.
I did a presentation on this approach at the last RubyDay.it: http://www.slideshare.net/matteocollina/enter-the-app-era-with-ruby-on-rails-rubyday
For the third option, in order to get the responsiveness of the second one, you might want to try pjax as GitHub does.
I'm about 2 months into a 3-month project which takes the second approach you've outlined here. We use a RESTful API server-side with Backbone.js on the front. Handlebars.js manages the templates and jQuery handles the AJAX and DOM manipulation. For older browsers and search spiders we've fallen back to server-side rendering, but we're using the same HTML templates as the Handlebars frontend, via Mozilla Rhino.
We chose this approach for many different reasons but are very aware it's a little risky given it hasn't been proven on a wide scale yet. All the same, everything's going pretty smoothly so far.
So far we've just been working with one API, but in the next phase of the project we'll be working with a second API. The first is for large amounts of data, and the second acts more like a CMS via an API.
Having these two pieces of the project act completely independently of each other was a key consideration in selecting this infrastructure. If you're looking for an architecture to mash up different independent resources without any dependencies, then this approach is worth a look.
I'm afraid I'm not a Ruby guy so I can't comment on the other approaches. Sometimes it's okay to take a risk. Other times it's better to play it safe. You'll know yourself depending on the type of project.
Best of luck with your choice here. Keen to see what others share as well.
I like #3 when my website is not going to be a 100% CRUD implementation of my data. Which is yet to happen.
I prefer Sinatra and will just split up the app into a few different Rack apps with different purposes. I'll make an API-specific Rack app that covers what I need for the API. Then perhaps a user Rack app that presents my webpage. Sometimes that version will query the API if needed, but usually it just concerns itself with the HTML site.
I don't worry about it and just do a persistence-layer query from the user side if I need it. I'm not overly concerned with creating a complete separation, as they usually end up serving different purposes.
Here is a very simple example of using multiple Rack apps. I added a quick jQuery example in there for you to see it hitting the API app. You can see how simple it can be with Sinatra, mounting multiple Rack apps with different purposes.
https://github.com/dusty/multi-rack-app-app
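For a feel of how the mounting works, a config.ru along those lines might look roughly like this (ApiApp and WebApp are hypothetical Sinatra::Base subclasses defined in the required files):

# config.ru
require './api_app' # defines ApiApp (serves JSON)
require './web_app' # defines WebApp (serves the HTML site)

run Rack::URLMap.new(
  '/api' => ApiApp,
  '/'    => WebApp
)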
Some great answers here already - I'd definitely recommend #2 or #3 - the separation is good conceptually but also in practice.
It can be hard to predict things like load and traffic patterns on an API and customers we see who serve the API independently have an easier time of provisioning and scaling. If you have to do that munged in with human web access patterns it's less easy. Also your API usage might end up scaling up a lot faster than your web client and then you can see where to direct your efforts.
Between #2 and #3 it really depends on your goals - I'd agree that #2 is probably the future of webapps - but maybe you want something more straightforward if that channel is only going to be one of many!
For atyourservice.com.cy we use server-side rendered templates for pages, especially to cover the SEO part, and use the API for interactions after the page loads.
Since our framework is MVC, all controller functions are duplicated for JSON output and HTML output. Templates are clean and receive just an object. This can be transformed into JS templates in seconds. We always maintain the server-side templates and just reconvert them to JS on request.
Isomorphic rendering and progressive enhancement. Which is what I think you were headed for in option three.
Isomorphic rendering means using the same template to generate markup server-side as you use in the client-side code. Pick a templating language with good server-side and client-side implementations. Create fully baked HTML for your users and send it down the wire. Use caching too.
Progressive enhancement means starting client-side execution, rendering, and event listening once you've got all the resources downloaded and you can determine the client's capabilities. Fall back to functional, client-script-less functionality wherever possible for accessibility and backwards compatibility.
Yes, of course write a standalone json api for this app functionality. But don't go so far that you write a json api for things that work fine as static html documents.
REST server + JavaScript-heavy client was the principle I've followed in my recent work.
The REST server was implemented in node.js + Express + MongoDB (very good write performance) + Mongoose ODM (great for modelling data, validations included) + CoffeeScript (I'd go with ES2015 now instead), which worked well for me. Node.js might be relatively young compared to other possible server-side technologies, but it made it possible for me to write a solid API with payments integrated.
I've used Ember.js as JavaScript framework and most of the application logic was executed in the browser. I've used SASS (SCSS specifically) for CSS pre-processing.
Ember is a mature framework backed by a strong community. It is a very powerful framework, with lots of recent work focused on performance, like the brand new Glimmer rendering engine (inspired by React).
The Ember Core Team is in the process of developing FastBoot, which lets you execute your Ember JavaScript logic on the server side (node.js specifically) and send pre-rendered HTML of your application (which would normally be run in the browser) to the user. It is great for SEO and user experience, as the user doesn't wait as long for the page to be displayed.
Ember CLI is a great tool that helps you organise your code, and it scaled well with a growing codebase. Ember also has its own addon ecosystem, and you can choose from a variety of Ember addons. You can easily grab Bootstrap (in my case) or Foundation and add it to your app.
Rather than serve everything via Express, I chose to use nginx for serving images and the JavaScript-heavy client. Using an nginx proxy was helpful in my case:
upstream app_appName.com {
    # replace 0.0.0.0 with your IP address and 1000 with your port of node HTTP server
    server 0.0.0.0:1000;
    keepalive 8;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    client_max_body_size 32M;

    access_log /var/log/nginx/appName.access.log;
    error_log  /var/log/nginx/appName.error.log;

    server_name appName.com appName;

    location / {
        # frontend assets path
        root /var/www/html;
        index index.html;

        # to handle Ember routing
        try_files $uri $uri/ /index.html?/$request_uri;
    }

    location /i/ {
        alias /var/i/img/;
    }

    location /api/v1/ {
        proxy_pass http://app_appName.com;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Pro: I love the separation of API & client. Smart people say this is the way to go. Great in theory. Seems cutting-edge and exciting.
I can say it's also great in practice. Another advantage of separating out the REST API is that you can re-use it later for other applications. In a perfect world you should be able to use the same REST API not only for the webpage, but also for mobile applications if you decide to write one.
Con: Not much precedent. Not many examples of this done well. Public examples (twitter.com) feel sluggish & are even switching away from this approach.
Things look different now. There are lots of examples of doing REST API + many clients consuming it.
I decided to go for the architecture of Option #2 for Infiniforms, as it provided a great way to separate the UI from the business logic.
An advantage of this is that the API servers can scale independently of the web servers. If you have multiple clients, then the website will not need to scale to the same extent as the API servers, since some clients will be phone / tablet or desktop based.
This approach also gives you a good base for opening up your API to your users, especially if you use your own API to provide all of the functionality for your website.
A very nice question, and I'm surprised, as I thought this would be a very common task nowadays and that I would find plenty of resources for this problem; however, that turned out not to be true.
My thoughts are as follows:
- Create a module that holds the common logic between the API controllers and the HTML controllers, without returning JSON or rendering HTML; include this module in both the HTML controller and the API controller, then do whatever you want. For example:
module WebAndAPICommon
  module Products
    def index
      @products = [] # do some logic here that sets @products
    end
  end
end

# default products controller, for rendering HTML pages
class ProductsController < ApplicationController
  include WebAndAPICommon::Products

  def index
    super
  end
end

module API
  class ProductsController < ApplicationController
    include WebAndAPICommon::Products

    def index
      super
      render json: @products
    end
  end
end
I've gone for a hybrid approach where we use Sinatra as a base, with ActiveRecord / Postgres etc. to serve up page routes (Slim templates) and expose a REST API the web app can use. In early development, stuff like populating select options is done via helpers rendering into the Slim template, but as we approach production this gets swapped out for an AJAX call to the REST API, as we start to care more about page-load speeds and so forth.
Stuff that's easy to render out in Slim gets handled that way, and stuff like populating forms, receiving form POST data from jQuery.Validation's submitHandler, etc., is all obviously AJAX.
Testing is an issue. Right now I'm stumped trying to pass JSON data to a Rack::Test POST test.
I personally prefer option (3) as a solution. It's used in just about all the sites a former (household name) employer of mine has. It means that you can get some front-end devs who know all about Javascript, browser quirks and whatnot to code up your front end. They only need to know "curl xyz and you'll get some json" and off they go.
Meanwhile, your heavy-weight back-end guys can code up the Json providers. These guys don't need to think about presentation at all, and instead worry about flaky backends, timeouts, graceful error handling, database connection pools, threading, and scaling etc.
Option 3 gives you a good, solid three tier architecture. It means the stuff you spit out of the front end is SEO friendly, can be made to work with old or new browsers (and those with JS turned off), and could still be Javascript client-side templating if you want (so you could do things like handle old browsers/googlebot with static HTML, but send JS built dynamic experiences to people using the latest Chrome browser or whatever).
In all the cases I've seen Option 3, it's been a custom implementation of some PHP that isn't especially transferable between projects, let alone out in to Open Source land. I guess more recently PHP may have been replaced with Ruby/Rails, but the same sort of thing is still true.
FWIW, $current_employer could do with Option 3 in a couple of important places. I'm looking for a good Ruby framework in which to build something. I'm sure I can glue together a load of gems, but I'd prefer a single product that broadly provides a templating, 'curling', optional-authentication, optional memcache/nosql connected caching solution. There I'm failing to find anything coherent :-(
Building a JSON API in Rails is first class; the JSONAPI::Resources gem does the heavy lifting for a http://jsonapi.org spec'd API.

What should a developer know before building an API for a community based website?

What things should a developer designing and implementing an API for a community based website know before starting the heavy coding? There are a bunch of APIs out there like Twitter API, Facebook API, Flickr API, etc which are all good examples. But how would you build your own API?
What technologies would you use? I think it's a good idea to use REST-like interface so that the API is accessible from different platforms/clients/browsers/command line tools (like curl). Am I right? I know that all the principles of web development should be met like caching, availability, scalability, security, protection against potential DOS attacks, validation, etc. And when it comes to APIs some of the most important things are backward compatibility and documentation. Am I missing something?
On the other hand, thinking from user's point of view (I mean the developer who is going to use your API), what would you look for in an API? Good documentation? Lots of code samples?
This question was inspired by Joel Coehoorn's question "What should a developer know before building a public web site?".
This question is a community wiki, so I hope you will help me put in one place all the things that should be addressed when building an API for a community based website.
If you really want to define a REST api, then do the following:
forget all technology issues other than HTTP and media types.
Identify the major use cases where a client will interact with the API
Write client code that performs those "use cases" against a hypothetical HTTP server. The only information the client should start with is the response from a GET request to the root API URL. The client should identify the media type of the response from the HTTP Content-Type header, and it should parse the response. That response should contain links to other resources that allow the client to perform all of the API's required operations.
When creating a REST api it is easier to think of it as a "user interface" for a machine rather than exposing an object model or process model. Imagine the machine navigating the api programmatically by retrieving a response, following a link, processing the response and following the next link. The client should never construct a URL based on its knowledge of how the server organizes resources.
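A toy sketch of that style of client (plain Ruby; the root URL and the 'orders' link relation are hypothetical and would be defined by the media type documentation):

require 'net/http'
require 'json'

# the only URL the client knows in advance is the API root
root = JSON.parse(Net::HTTP.get(URI('https://api.example.com/')))

# follow the link the server advertises instead of constructing the URL
orders_link = root['links'].find { |link| link['rel'] == 'orders' }
orders = JSON.parse(Net::HTTP.get(URI(orders_link['href'])))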
How those links are formatted and identified is critical. The most important decision you will make in defining a REST API is your choice of media types. You either need to find standard ways of representing that link information (think Atom, microformats, Atom link relations, HTML5 link relations), or, if you have specialized needs and don't need really wide reach to many clients, you could create your own media types.
Document how those media types are structured and what links/link relations they may contain. Specific information about media types is critical to the client. Having a server return Content-Type: application/xml is useless to a client if it wants to do anything more than parse the response. The client cannot know what is contained in a response of type application/xml. Some people do believe you can use XML Schema to define this, but there are several disadvantages to that approach, and it violates the REST "self-descriptive message" constraint.
Remember that what the URL looks like has absolutely no bearing on how the client should operate. The only exception to this, is that a media type may specify the use of templated URIs and may define parameters of those templates. The structure of the URL will become significant when it comes to choosing a server side framework. The server controls the URL structure, the client should not care. However, do not let the server side framework dictate how the client interacts with the API and be very cautious about choosing a framework that requires you to change your API. HTTP should be the only constraint regarding the client/server interaction.