Is there any benefit in creating a proxy for consuming 3rd party APIs?

I'm building an Apache Cordova mobile app that consumes a specific Google API.
What I'm planning to do is consume the Google APIs directly from the app.
Is there any benefit to creating a proxy service that consumes the Google APIs and having my app consume the proxy API?
I ask because it seems to be a common practice, but I don't see any benefit.
Is it a best practice or a bad practice?

Maybe yes for a legacy system, or to do transforms, but nowadays so many "more modern" RESTful API providers have pretty nice client libs. A proxy seems like one more link in the system that can fail, plus extra load to deal with... (I wouldn't proxy requests to YouTube or to some storage account for large static assets).
I do find it nice to proxy my own APIs if they're not public. It usually helps to remove the whole CORS mess and eliminates the performance penalty of extra preflight requests, but most public APIs don't have heavy CORS issues since they are, well, public and don't restrict allowed origins.
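As a minimal sketch of that same-origin pattern (assuming an Express server and a hypothetical internal API at http://internal-api:8080, both placeholders): the browser only ever calls its own origin, so no CORS headers or preflight round-trips are involved.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical internal API; the browser never talks to it directly.
const INTERNAL_API = "http://internal-api:8080";

// Same-origin route: the browser calls /api/..., the server forwards it,
// so the request never crosses origins and no preflight is triggered.
app.all("/api/*", async (req, res) => {
  const upstreamUrl = INTERNAL_API + req.originalUrl.replace(/^\/api/, "");
  const upstream = await fetch(upstreamUrl, {
    method: req.method,
    headers: { "content-type": req.get("content-type") ?? "application/json" },
    body: ["GET", "HEAD"].includes(req.method) ? undefined : JSON.stringify(req.body),
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);
```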
Even for simple JS libs hosted on CDNs, like jQuery, I usually leverage 3rd party CDNs if available rather than bundle and host them on my own CDN through Azure or Google that I have to pay for.

Are there any benefits to using the NextJS API?

I'm about to build out the API for my service, and I'm wondering if there are any benefits to building it with the NextJS API integration. It seems a little quirky for my own preferences, and the resources aren't as exhaustive as they are for my ideal solution: expressJS. That said, I'm aware there are some integrations that would allow me to use express inside the NextJS API, but that seems excessive compared to standing up a separate repo for my express API.
So, I'm curious are there any under-the-hood benefits or perks to leveraging the NextJS API within the same repository as the rest of my code?
Thanks!
It depends on your application. The truth is, a good portion of websites and projects just need a simple server-side request to retrieve data. The Next.js API helps you manage your serverless functions. You could easily do it with AWS Lambdas and API Gateway, but managing many serverless functions can get messy. You can also use CLI tools, but Next.js with Now or Netlify can do all of this automatically during build/CI/CD.
Under the hood it's an AWS Lambda, so the usual things can be an issue: cold starts and limited computing resources. But it's great for things like accessing private APIs, lower on-demand costs, and developing as functions. If it's heavy computational work, I would avoid serverless and just spin up a separate service. The plumbing is all the same as an Express or KOA server; it's just on-demand, with tooling to manage it. I see it as a really easy and simple way to do a little bit of backend work without maintaining infrastructure.
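To make that concrete: an API route is just a file under pages/api/, and each file is deployed as its own function. A minimal sketch (the upstream URL, env var, and route name are placeholders) that keeps a private API key server-side:

```typescript
// pages/api/profile.ts
import type { NextApiRequest, NextApiResponse } from "next";

// Hypothetical private API; the key stays on the server, never in the browser.
const PRIVATE_API = "https://private-api.example.com";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const upstream = await fetch(`${PRIVATE_API}/profile`, {
    headers: { Authorization: `Bearer ${process.env.PRIVATE_API_KEY}` },
  });
  const data = await upstream.json();
  res.status(upstream.status).json(data);
}
```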

Implement rate limiting for public facing API deployed in AWS

Our public APIs are deployed in AWS and are developed with different tech stacks.
We want to introduce rate limiting (based on IP, access key, etc.) for the APIs across many services in a generic way. Our requirements:
Little or no ops effort to run it.
Introducing new services, or new paths on existing services, should not require effort to configure the API gateway.
We are considering the following.
AWS API Gateway: looks easy, but I'm not sure whether adding routes requires effort to keep it in sync with the services.
traefik: looks good, but we would need to run and maintain it ourselves.
What would be the suggested approach for this? Any better tools/suggestions?
API Gateway with Usage Plans, which enable rate limiting via API key, is going to be the recommended solution on AWS. You can also look into doing something like this in order to support rate limiting by IP (although if I had to do all that just for IP rate limiting, I'd probably look hard at third-party products like traefik).
As mentioned in the comments, you can configure catch-all routes in API Gateway so that you don't have to modify the configuration every time you add a new route.
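A rough sketch of that setup with the AWS CDK in TypeScript; the backend URL and throttle numbers are placeholders, and an HTTP proxy integration behind a greedy catch-all route is just one way to wire it:

```typescript
import * as cdk from "aws-cdk-lib";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

const app = new cdk.App();
const stack = new cdk.Stack(app, "RateLimitedApiStack");

// Catch-all REST API that forwards everything to an existing backend,
// so new paths on the backend need no extra gateway configuration.
const api = new apigateway.RestApi(stack, "PublicApi");
api.root.addProxy({
  defaultIntegration: new apigateway.HttpIntegration("https://backend.example.com/{proxy}", {
    httpMethod: "ANY",
    proxy: true,
    options: {
      requestParameters: { "integration.request.path.proxy": "method.request.path.proxy" },
    },
  }),
  defaultMethodOptions: {
    apiKeyRequired: true, // callers must present an API key
    requestParameters: { "method.request.path.proxy": true },
  },
});

// Usage plan: per-key throttling (rate limiting by access key).
const plan = api.addUsagePlan("DefaultPlan", {
  throttle: { rateLimit: 100, burstLimit: 20 }, // requests/second, burst
});
plan.addApiStage({ stage: api.deploymentStage });
plan.addApiKey(api.addApiKey("CustomerKey"));

app.synth();
```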

Web API + Client Architecture

We're building:
A bunch of services exposed through a web API.
A mobile app and a browser app.
Is it common practice for the apps to talk to their own conduit servers, which in turn talk to the API services? We're going to be setting up a reverse proxy - is it enough to hit our APIs directly (instead of setting up a conduit)? This is definitely a general architecture question.
I'm not sure what you mean by a "conduit", but a lot depends on how complete and hardened your APIs are. Do they already handle things like authentication, abuse detection/control, SSL, versioning, etc.?
There are companies that specialize in providing this API "middleware" (Apigee, Amazon API Gateway, Azure API Management, and many others). Your reverse proxy is a start, and is probably good enough to get going with (at the very least you can terminate SSL there and lock your API servers down behind a firewall). If you make your API services stateless, you will probably be able to add new layers at a later date without too much pain and complexity.
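As one illustration of the kind of hardening such a layer can add in front of stateless API services, here is a minimal sketch using Express with the http-proxy-middleware and jsonwebtoken packages; the upstream address, secret, and port are assumptions, not a prescribed setup:

```typescript
import express from "express";
import jwt from "jsonwebtoken";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// Hypothetical internal, firewalled API server and signing secret.
const API_UPSTREAM = "http://10.0.0.5:8080";
const JWT_SECRET = process.env.JWT_SECRET ?? "change-me";

// Edge concern #1: authentication, before anything reaches the API servers.
app.use((req, res, next) => {
  const token = (req.headers.authorization ?? "").replace(/^Bearer /, "");
  try {
    jwt.verify(token, JWT_SECRET);
    next();
  } catch {
    res.status(401).json({ error: "invalid or missing token" });
  }
});

// Edge concern #2: forward everything else to the stateless API services.
app.use("/", createProxyMiddleware({ target: API_UPSTREAM, changeOrigin: true }));

// TLS termination would sit in front of this (nginx/ELB) or be added here.
app.listen(3000);
```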

Cross-subdomain AJAX calls

We wish to build a web app that will consume our REST API, and we are looking for a way to circumvent the Same Origin Policy security feature.
We have a REST API which is served from api.ourdomain.com on SERVER_1.
We have a Web App which is served from dashboard.ourdomain.com on SERVER_2.
The Web App communicates with the REST API using ajax calls that include GET, POST, DELETE and PUT requests.
At some point in the future, we might consider allowing 3rd party sites to access the API from their own sites and domains.
Due to the Same Origin Policy security feature of the browsers, these requests are not allowed.
We are looking for ways to circumvent this.
Solutions we have encountered:
Tunneling the requests through our proxy. This will slow down the app and require more resources.
JSONP - Will only work for GET requests. We do not wish to overload the GET requests with post/put/delete capabilities.
Using an iFrame with document.domain set to the same domain. Will only work for sites under ourdomain.com.
Frameworks such as EasyXDM. Seems like a good solution.
Thank you!
I don't know EasyXDM, but I have the same architecture you are talking about in more than one application, and we use your suggested solution (1). In my opinion, proxying the requests through a common subdomain is the cleanest solution, and I don't think it is a performance problem. Many sites use something like nginx anyway to do some sort of reverse proxying (as a cache). You could easily tunnel your API through http://[yourhost]/api and serve the rest of the HTML, CSS and image resources through other paths.
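A minimal sketch of that path-based setup (Express with the http-proxy-middleware package; the API host and asset directory are placeholders), so the browser only ever sees one origin:

```typescript
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// Everything under /api is tunneled to the REST API on its own host;
// to the browser it is all the same origin, so SOP never kicks in.
app.use(
  "/api",
  createProxyMiddleware({
    target: "http://api.ourdomain.com", // internal API host (placeholder)
    changeOrigin: true,
    pathRewrite: { "^/api": "" },       // strip the /api prefix before forwarding
  })
);

// HTML, CSS, JS and images are served directly from this server.
app.use(express.static("public"));

app.listen(80);
```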

What are the best practices for caching 3rd party API calls?

Our team currently operates 4-5 applications, and all of those applications use various 3rd party services (SimpleGeo, FB graph API, Yelp API, StrikeIron, etc). There is a large overlap between the applications, and we frequently call the same APIs with the same input parameters multiple times. Obviously that is not ideal: it is slow and it is expensive (some of the APIs are not free).
What are the best practices for caching these API calls across multiple applications? I see several options:
Write a custom app that creates a facade for all of those APIs, and change all of my apps to use it.
Configure some sort of HTTP proxy in a very aggressive caching mode, and perform connections to APIs via that proxy.
Are there any other options I am missing?
Is there anything wrong with option 2? What HTTP proxy would you recommend for it (Squid, Varnish, Nginx, etc)?
You can use any of the three, but I would go with Squid. Squid was created (and is heavily used) for exactly this purpose: a caching proxy. Varnish is designed as a reverse proxy (a cache in front of your own back end), and nginx is more of a load balancer and web server (serving files and dynamic pages).
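For comparison, option 1 does not have to be heavyweight either. A minimal sketch of a shared caching facade in TypeScript/Express; the upstream URL, route name, and TTL are all placeholders, and the in-memory Map would need to become something shared (e.g. a cache server) for multiple applications:

```typescript
import express from "express";

const app = express();

// Hypothetical upstream 3rd-party API and a simple in-memory cache with a TTL.
const UPSTREAM = "https://api.example-vendor.com/business";
const TTL_MS = 15 * 60 * 1000; // 15 minutes; tune per API and its terms of use
const cache = new Map<string, { at: number; body: string }>();

// All of our apps call this one endpoint; identical queries are served from cache.
app.get("/business", async (req, res) => {
  const key = req.originalUrl.split("?")[1] ?? ""; // raw query string as cache key
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) {
    res.type("application/json").send(hit.body);
    return;
  }
  const upstream = await fetch(`${UPSTREAM}?${key}`);
  const body = await upstream.text();
  if (upstream.ok) cache.set(key, { at: Date.now(), body });
  res.status(upstream.status).type("application/json").send(body);
});

app.listen(3000);
```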