What are the best practices for caching 3rd party API calls?

Our team currently operates 4-5 applications, and all of those applications use various 3rd party services (SimpleGeo, FB graph API, Yelp API, StrikeIron, etc.). There is a large overlap between the applications, and we frequently call the same APIs with the same input parameters multiple times. Obviously that is not ideal: it is slow, and it is expensive (some of the APIs are not free).
What are the best practices for caching these API calls across multiple applications? I see several options:
1. Write a custom app that acts as a facade for all of those APIs, and change all of my apps to use it.
2. Configure some sort of HTTP proxy in a very aggressive caching mode, and perform connections to the APIs via that proxy.
Are there any other options I am missing?
Is there anything wrong with option 2? Which HTTP proxy would you recommend for it (Squid, Varnish, Nginx, etc.)?

You can use any of the three, but I would go with Squid. Squid was created (and is heavily used) for exactly this purpose: a caching forward proxy. Varnish is designed as a reverse proxy (a cache in front of your own back-end), and Nginx is more of a load balancer and web server (serving files and dynamic pages).
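For completeness, option 1 does not have to be a big project either. Below is a minimal sketch of a shared caching facade, assuming Python with the `redis` and `requests` packages and a Redis instance that all applications can reach; the host name and TTL are made-up values, not recommendations:

```python
import hashlib
import json

import redis     # assumed available: pip install redis
import requests  # assumed available: pip install requests

# Shared cache that every application talks to (host/TTL are assumptions).
cache = redis.Redis(host="cache.internal", port=6379)

def cached_get(url, params, ttl=3600):
    """GET url with params, serving repeated calls from the shared cache."""
    # Key on URL + sorted params so identical calls from any app hit the
    # same entry regardless of parameter order.
    key = "api:" + hashlib.sha1(
        (url + json.dumps(params, sort_keys=True)).encode()
    ).hexdigest()
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    cache.setex(key, ttl, resp.content)  # expire so stale data ages out
    return resp.json()
```

Each application would call `cached_get` instead of hitting the providers directly. Whichever option you pick, make sure the cache lifetime respects each provider's terms of service.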

Related

Is there any benefit in creating a proxy for consuming 3rd party APIs?

I'm building an Apache Cordova mobile app that consumes a specific Google API.
What I'm planning to do is consume the Google APIs directly from the app.
Is there any benefit to creating a proxy service that consumes the Google APIs and having my app consume the proxy API instead?
I ask because it seems to be a common practice, but I don't see any benefit.
Is it a best practice or a bad practice?
Maybe yes for a legacy system, or to do transforms, but nowadays so many "more modern" RESTful API providers have pretty nice client libs. A proxy seems like one more link in the system that can fail, and extra load to deal with (I wouldn't proxy requests to YouTube or to some storage account for large static assets).
I do find it nice to proxy my own APIs if they're not public. It usually helps remove the whole CORS mess and eliminates the performance penalty of extra preflight requests; most public APIs don't have heavy CORS issues since they are, well, public, and don't restrict allowed origins.
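For what it's worth, that kind of same-origin proxy can be very small. A minimal sketch, assuming Python/Flask and `requests`; the `/api` prefix and upstream host are hypothetical:

```python
import requests
from flask import Flask, Response, request

app = Flask(__name__)
UPSTREAM = "https://internal-api.example.com"  # hypothetical private API

@app.route("/api/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    # The browser talks to the same origin, so no CORS preflight is needed;
    # the server forwards the call to the real API.
    r = requests.request(
        request.method,
        f"{UPSTREAM}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "")},
        timeout=10,
    )
    return Response(r.content, status=r.status_code,
                    content_type=r.headers.get("Content-Type"))
```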
Even for simple JS libs like jQuery, I usually leverage 3rd party CDNs if available, rather than bundle them and host them on my own CDN through Azure or Google, which I would have to pay for.

Web API + Client Architecture

We're building:
A bunch of services exposed through a web API.
A mobile app and a browser app.
Is it common practice for the apps to talk to their own conduit servers, which in turn talk to the API services? We're going to be setting up a reverse proxy - is it enough to hit our APIs directly through that (instead of setting up a conduit)? This is definitely a general architecture question.
I'm not sure what you mean by a "conduit", but a lot depends on how complete and hardened your APIs are. Do they already handle things like authentication, abuse detection/control, SSL, versioning, etc...
There are companies that specialize in providing this API "middleware" (Apigee, Amazon API Gateway, Azure API Management, and many others). Your reverse proxy is a start, and is probably good enough to get going with (at the least you can terminate SSL and lock down your API servers behind a firewall). If you make your API services stateless, you will probably be able to add new layers at a later date without too much pain and complexity.
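To make the "stateless" point concrete, here is one way to authenticate requests without server-side session state, so that new layers (a gateway, more instances) can be dropped in front later. A rough sketch in Python; the token format and shared secret are hypothetical, and a real system would use a vetted scheme such as JWT:

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret"  # assumption: same secret on every API instance

def make_token(user_id, ttl=3600):
    # Self-contained token: any instance can verify it with no session store.
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{user_id}:{expires}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{user_id}:{expires}:{sig}"

def check_token(token):
    # Assumes user_id contains no ":".
    user_id, expires, sig = token.split(":")
    expected = hmac.new(SECRET, f"{user_id}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expires)
```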

Unconventional to bundle web socket server with REST API?

For an enterprise REST API (PHP in this case), is it a bad practice to include a web socket server along with a REST API? The pairing of the two makes a nice mix with event dispatching services, but I'm not sure whether these two services are different enough to warrant separation. The only con I can see at the moment is that if the REST API were to go down, your web socket servers would also be down, which removes the possibility of fail-over for any connected clients.
If you're looking for a really robust way to manage web sockets, check out http://faye.jcoglan.com/ - it has libraries for JavaScript, Ruby, etc., and runs independently of your other servers.
If you don't need that kind of resilience, then I wouldn't worry about mixing your REST and web socket APIs on the same server.
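If you do decide to split them, the web socket side can be a tiny standalone process. A minimal sketch, assuming Python's third-party `websockets` package (version 11+) rather than Faye itself; the port and naive broadcast behaviour are placeholders:

```python
import asyncio

import websockets  # assumed available: pip install websockets

CLIENTS = set()

async def handler(ws):
    # Track connections and fan every incoming message out to all clients.
    CLIENTS.add(ws)
    try:
        async for message in ws:
            websockets.broadcast(CLIENTS, message)
    finally:
        CLIENTS.discard(ws)

async def main():
    # Runs on its own port, independent of the REST API process, so either
    # side can fail or restart without taking down the other.
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```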

How to serve a mobile app

What would be a good way to provide a non-trivial backend for mobile apps, regarding both the protocol used for communication, and the actual hosting?
Most backend platforms (such as parse.com) provide some basic API for performing trivial CRUD data operations, but if the server logic needs to be more complex than that, what would be a good strategy (preferably .NET/C#, secondarily Java, but not JavaScript or any custom scripting approaches)? SOAP web services (for example WCF)?
Regarding hosting, I have looked at Azure and AppHarbor, but can't decide between the two. AppHarbor seems like the only place to co-locate the web server and a MongoDB instance in Northern Europe, as Azure (apparently) only provides MongoDB in a US region. Any suggestions?

Crossing subdomain AJAX calls

We wish to build a web app that will consume our REST API, and we are looking for a way to circumvent the browser's Same Origin Policy.
We have a REST API which is served from api.ourdomain.com on SERVER_1.
We have a web app which is served from dashboard.ourdomain.com on SERVER_2.
The Web App communicates with the REST API using ajax calls that include GET, POST, DELETE and PUT requests.
At some point in the future, we might consider allowing 3rd party sites to access the API from their own sites and domains.
Due to the Same Origin Policy security feature of the browsers, these requests are not allowed.
We are looking for ways to circumvent this.
Solutions we have encountered:
1. Tunneling the requests through our own proxy. This will slow down the app and require more resources.
2. JSONP - will only work for GET requests. We do not wish to overload the GET requests with post/put/delete capabilities.
3. Using an iFrame with document.domain set to the same domain. Will only work for sites under ourdomain.com.
4. Frameworks such as EasyXDM. Seems like a good solution.
Thank you!
I don't know EasyXDM, but I have the same architecture you are describing in more than one application, and we use your suggested solution (1). In my opinion, proxying the requests through a common domain is the cleanest solution, and I don't think it is a performance problem: many sites use something like Nginx anyway to do some sort of reverse proxying (and caching). You could easily tunnel your API through http://[yourhost]/api and serve the rest of the HTML, CSS and image resources through other paths.
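A rough sketch of that single-origin layout, assuming Python/Flask and `requests` in place of Nginx; the backend host mirrors the question and the paths are hypothetical:

```python
import requests
from flask import Flask, Response, request, send_from_directory

app = Flask(__name__)
API_BACKEND = "http://api.ourdomain.com"  # SERVER_1 from the question

@app.route("/api/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def api(path):
    # /api/* is tunneled to the REST backend; the browser only ever sees
    # one origin, so the Same Origin Policy never comes into play.
    r = requests.request(request.method, f"{API_BACKEND}/{path}",
                         params=request.args, data=request.get_data(),
                         timeout=10)
    return Response(r.content, status=r.status_code,
                    content_type=r.headers.get("Content-Type"))

@app.route("/<path:filename>")
def assets(filename):
    # HTML, CSS and images are served from the same origin on other paths.
    return send_from_directory("static", filename)
```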