Building an API but giving priority access for certain requests

Not sure how others have addressed this, but generally speaking what is the best practice for giving your own apps priority treatment when it comes to using one of your own public APIs?

Use Cache Priority
Caching responses or interim calculations in RAM is typically the first optimization point, because caching is easier than micro-optimizing all your code. Controlling what goes into the cache and how long it stays there is a top-level place to apply "priority treatment".
I like the cache management approach better than thread priority because, when you are under load, delaying the execution of a request often creates complex thread-pool problems and decreases overall server throughput.
Caching Based on Load (Rather than App Ownership) Will Expand the Resource Pie
We take the RAM cache priority approach with the MapLarge Tile Server and Geocoding API. However, we don't actually give our own apps priority; instead, we base priority on request frequency and the time required to render a response. Unless you have large numbers of low-value API users, I would recommend doing something similar, because this approach reduces overall load and enables the server to handle more API requests.
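As an illustration, here is a minimal sketch of what load-based cache priority could look like (the names and the scoring formula are my assumptions, not MapLarge's actual implementation): entries are scored by request frequency times render cost, so expensive, frequently requested responses survive eviction longest, regardless of which app asked for them.

```python
import time
import heapq

class LoadAwareCache:
    """Toy cache prioritized by (request frequency x render cost).

    Illustrative sketch only. Cheap-to-recompute, rarely requested
    entries are evicted first; app ownership plays no role.
    """

    def __init__(self, max_entries=1000):
        self.max_entries = max_entries
        self.entries = {}  # key -> [value, hit_count, render_seconds]

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None
        entry[1] += 1  # count the hit so popular entries score higher
        return entry[0]

    def put(self, key, value, render_seconds):
        if len(self.entries) >= self.max_entries:
            self._evict()
        self.entries[key] = [value, 1, render_seconds]

    def _evict(self):
        # Drop the ~10% of entries that are cheapest to recompute
        # and least requested.
        victims = heapq.nsmallest(
            max(1, self.max_entries // 10),
            self.entries.items(),
            key=lambda kv: kv[1][1] * kv[1][2],  # hits * render cost
        )
        for key, _ in victims:
            del self.entries[key]

# Typical use: on a miss, time the render and store its cost.
cache = LoadAwareCache()
tile = cache.get("tile/3/4/5")
if tile is None:
    start = time.time()
    tile = "...render the tile..."
    cache.put("tile/3/4/5", tile, time.time() - start)
```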
I recently wrote a white paper that highlights the different load profiles of cached and non-cached responses in a multi-tenant API environment. You can see it here:
http://maplarge.com/Tile-Server-Performance
API Policies Can Drive Revenue
If you have free or low-paying users who are generating massive load, you might want to review your business plan and consider instituting account-based rate limits that match user revenue to server costs in a scalable way. If you do limit API users, I would recommend having explicit and predictable policies so they can project their usage and know when to purchase an API account upgrade.
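As a sketch of what an explicit, predictable policy can look like (the plan names and numbers here are hypothetical), a fixed hourly window per account makes it trivial for a user to project usage and see exactly when an upgrade pays off:

```python
import time

# Hypothetical published tiers: callers can project their usage
# against these numbers and know when to upgrade.
PLAN_LIMITS = {"free": 1_000, "basic": 10_000, "pro": 100_000}  # calls/hour

class AccountRateLimiter:
    """Fixed-window counter per account; the window resets every hour."""

    def __init__(self):
        self.windows = {}  # account_id -> (window_start, call_count)

    def allow(self, account_id, plan):
        now = time.time()
        window_start, count = self.windows.get(account_id, (now, 0))
        if now - window_start >= 3600:
            window_start, count = now, 0   # new hour, fresh allowance
        if count >= PLAN_LIMITS[plan]:
            return False                   # caller gets a 429 plus a reset time
        self.windows[account_id] = (window_start, count + 1)
        return True
```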

Related

Mulesoft best practices for API-led connectivity: is it okay to invoke a System API directly from the client application (be it web or mobile)?

The main reason for this question is to understand the reasoning behind the best practices for using System APIs. If the System API itself is good enough to serve the purpose of my client application, do we still need to write an Experience API to invoke the System API indirectly, or can we break the rule and invoke the System API directly from the client application? Sometimes the extra layer is just overhead, adding more API calls over the network.
A System API unlocks or exposes a system asset (back-end data). Now, one could write the System API in such a way that it fetches the data from the system database, does the required processing (for instance, converts the table rows to JSON format), then does some enrichment and trimming of fields, and exposes the result to Customer A. This is a coarse-grained approach. Now another customer, B, requires similar data but needs some of the fields that were trimmed to serve Customer A, who wanted only a few of the many fields you picked from the system (database). You would have to write a separate coarse-grained API for Customer B.
Also, if the backend system is replaced with a new one in the future, you would have to rewrite/update both APIs, one for each of Customer A and Customer B.
The coarse-grained approach will solve your problem each time, but architecturally, a fine-grained approach of breaking a large service down into layers of Experience, Process, and System APIs enables reuse, reduces work effort, shortens time to market, lowers total cost of ownership, and lets you apply separate policies (security, SLAs, etc.) to each client through the Experience API layer. You can then scale your integration landscape much better.
A fine-grained approach increases the usage of resources such as network and disk space (more logging), but that is the trade-off for the many advantages you gain. Again, the decision to go with either approach should align with the current circumstances of your ecosystem, so it all depends.
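To make the layering concrete, here is a minimal sketch (FastAPI; the endpoint paths, fields, and System API URL are all made up for illustration): the System API exposes the asset once, and thin per-client Experience APIs do the shaping, so serving Customer B, or swapping the backend, never forces a rewrite of the other layer.

```python
# Sketch of API-led layering. The System API exposes the backend data
# once; each Experience API reshapes it per client.
import httpx
from fastapi import FastAPI

app = FastAPI()
SYSTEM_API = "http://internal/system/customers"  # hypothetical System API

@app.get("/experience/web/customers/{cid}")
async def web_view(cid: str):
    async with httpx.AsyncClient() as client:
        record = (await client.get(f"{SYSTEM_API}/{cid}")).json()
    # Customer A's app wants only a couple of fields.
    return {"name": record["name"], "email": record["email"]}

@app.get("/experience/mobile/customers/{cid}")
async def mobile_view(cid: str):
    async with httpx.AsyncClient() as client:
        record = (await client.get(f"{SYSTEM_API}/{cid}")).json()
    # Customer B needs fields the web view trims away; the System API
    # itself is untouched.
    return {"name": record["name"], "phone": record["phone"],
            "address": record["address"]}
```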

IBM Visual Recognition limits

I was not able to find any information about the maximum number of concurrent requests or the maximum number of requests per second for the Visual Recognition service.
Can you provide me with this information, or a link where I can read about limits in general?
There is no hard limit on concurrency per user. The service is designed to support many users simultaneously and therefore has the capacity to process many requests in parallel. During periods of heavy use, however, you may occasionally receive a 500 return code, in which case the request should be resubmitted.
Unfortunately, there is a legacy error message in the system that tells users they may be submitting too many concurrent requests, but it is very unlikely that the error is actually caused by a user's concurrency. It should be treated like a 500 error code: just resubmit the request.
You should have no problem submitting 30 or 40 requests concurrently.
Standard API keys are limited to 25,000 events per day by default, to prevent something like an infinite loop from generating a huge bill. You can have that limit increased by creating a Bluemix support ticket. If you really want to do some large-scale processing in parallel, it would also make sense to get in touch with Support via a Bluemix ticket so that we can guide you toward the most cost-effective and efficient way to use the service.
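Since the guidance above is simply to resubmit on a 500, a small retry-with-backoff wrapper on the client side is usually enough; this is a generic sketch, not a feature of any official Watson SDK:

```python
import random
import time
import requests

def post_with_retry(url, max_attempts=5, **kwargs):
    """Resubmit on 500s (including the legacy 'too many concurrent
    requests' response), backing off exponentially with jitter."""
    for attempt in range(max_attempts):
        response = requests.post(url, **kwargs)
        if response.status_code != 500:
            return response
        # Transient server-side hiccup: wait 1s, 2s, 4s, ... plus jitter.
        time.sleep(2 ** attempt + random.random())
    return response
```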

Podio API limit

I am working on a product which fetches all of a customer's organization, workspace, and app details. The customer can refresh them at any time.
So let's say I have one customer who has 100 applications across multiple workspaces; that means around 110 calls to get each application's details plus the workspace and organization details.
Now if that customer refreshes the applications multiple times, say 10 times in an hour, that alone amounts to over 1,000 API calls. If I have 50 such active users doing this, it will be something like 50,000.
AFAIK I cannot make that many API calls in an hour, so how do I handle this scenario? I know a lot of applications do such things, so I want to understand how everyone else handles this.
If you need a higher rate limit, I would encourage you to contact Podio support and ask specifically for what you need. We have internal guidelines for evaluating these kinds of requests and may increase the limit for your user and client ID if appropriate.
In general, though, I would expect your app to implement some kind of batching, transient storage, and/or caching layers, especially if your customers are interacting with Podio exclusively or primarily through your system.
Please see our official statement here: https://developers.podio.com/index/limits
Summary:
The general limit is 5,000 API calls per hour, but if an API call is marked as "Rate limited" in the API reference, the call is deemed resource-intensive and a lower limit of 1,000 calls per hour is enforced. If you hit the rate limits, the API will begin returning 420 HTTP error codes for all API calls. Rate limits are per user per API key.
Contacting support:
If you have a project that requires a higher rate limit, contact support@podio.com with a brief description of your project, your estimated usage, and the client_id of the API key you are using.
Usage tips for reducing API usage:
- Avoid making API requests inside loops. Instead of fetching individual objects inside a loop, fetch a collection of objects in one API operation, e.g. filter items.
- Cache results whenever possible. This is especially true when you are displaying data to the public (i.e. everyone sees the same output).
- Don't poll for changes. Instead of polling Podio to see if your content has changed, use webhooks or push to receive a notification (see the sketch after this list). This might save you thousands of requests: https://developers.podio.com/doc/hooks
- Use logging to see how many requests you're making.
- Bundle responses with the "fields" parameter.
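Since "don't poll" is the biggest saver on that list, here is a minimal sketch of a webhook receiver (Flask; validate_hook and refresh_cached_item are hypothetical helpers you would implement with your authenticated Podio client), following the verification handshake described at the hooks link above:

```python
from flask import Flask, request

app = Flask(__name__)

def validate_hook(hook_id, code):
    # Hypothetical helper: POST the code to Podio's
    # /hook/{hook_id}/verify/validate endpoint to activate the hook.
    ...

def refresh_cached_item(item_id):
    # Hypothetical helper: re-fetch just this one item and update
    # your local cache, instead of re-pulling everything.
    ...

@app.route("/podio-hook", methods=["POST"])
def podio_hook():
    data = request.form
    if data.get("type") == "hook.verify":
        # One-time handshake: Podio sends a code to prove you own the URL.
        validate_hook(data["hook_id"], data["code"])
    elif data.get("type") in ("item.create", "item.update", "item.delete"):
        refresh_cached_item(data.get("item_id"))
    return "", 200
```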
You might want to build an API proxy app; you would need a messaging queue and a rate limiter, as sketched below. This would let you keep track of API call consumption across apps and users.
Also worth noting: some API routes are more expensive than others because they are more resource-intensive on the Podio side. The term in use is "rate limited": rate-limited API routes are bound to 1,000 calls an hour, so in effect they cost five times as much as regular routes.
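A minimal sketch of that proxy idea (the numbers and names are illustrative): every app funnels its calls through one queue, and a single worker drains it no faster than the limit allows, which also gives you one place to meter consumption per user:

```python
import queue
import threading
import time

CALLS_PER_HOUR = 5000
MIN_INTERVAL = 3600 / CALLS_PER_HOUR      # ~0.72s between calls

api_queue = queue.Queue()

def enqueue_call(user_id, func, *args):
    """All apps submit work here instead of calling Podio directly."""
    api_queue.put((user_id, func, args))

def worker():
    while True:
        user_id, func, args = api_queue.get()
        func(*args)                        # the actual Podio call
        # This is the single choke point: record per-user consumption
        # here if you need per-user accounting.
        time.sleep(MIN_INTERVAL)

threading.Thread(target=worker, daemon=True).start()
```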
Hope this helps!

"reasonable" use of web APIs to sync data

My goal is to synchronize a web-application with an internal database. The web-application has a public API, but in order to fully synchronize the two sources I would need to make around 2000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web-application is Asana, but I've encountered similar situations before with other services. Is there any way to know if you're abusing a service through excessive API calls? I know I'm not going to DoS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web-service only when I know there's been a change in the database, but I'll lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users gave us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, we do sometimes handle requests at that volume from a single user.
If you must poll, try to poll on an interval like 15 minutes. But please do not poll your entire workspace on that interval; it's likely to be too much traffic/data. We're working on providing you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors).
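Putting that together, a polite polling loop might look like this sketch; the modified_since parameter and the Retry-After header reflect my reading of Asana's documented /tasks endpoint and error behavior, so double-check the current docs before relying on them:

```python
import time
import requests

ASANA = "https://app.asana.com/api/1.0"

def poll_changes(session, project_id, last_sync):
    """Fetch only tasks changed since the last sync; back off on 429."""
    resp = session.get(f"{ASANA}/tasks",
                       params={"project": project_id,
                               "modified_since": last_sync})
    if resp.status_code == 429:
        # Honor the server's requested pause instead of hammering it.
        time.sleep(int(resp.headers.get("Retry-After", 60)))
        return None
    resp.raise_for_status()
    return resp.json()["data"]

# Run something like this every 15 minutes, persisting last_sync
# as an ISO-8601 timestamp between runs.
```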

Rate limiting to protect against DoS attacks (Heroku)

I have 2 POST, 1 PUT and 1 DELETE APIs in my application. My application is deployed on Heroku. I want to rate limit these APIs, but only to protect against a DoS attack or the case where someone calls an API in an infinite loop by mistake. What's the ideal rate limit for this scenario, e.g. x per minute or y per hour? What are the ideal numbers for x and y?
One way to do it is to drop in the 3scale API plugin (https://github.com/3scale/3scale_ws_api_for_ruby), which enforces rate limits, does analytics, etc. using external infrastructure.
That way you can rate limit individual users and have different quotas for each one (plus all the signup etc.).
It won't strictly prevent a true DoS, because even unauthenticated requests will still reach you, but it will stop them getting deep into your stack and cut off the people who are doing the damage first.
This really depends on your app.
1) It depends on how "heavy" your app is:
If each of your requests is processor-heavy and/or hits your database a lot, then you'll want to set a low limit, because each request is "expensive". If your app is fast and efficient, then you have more room to maneuver and can set a higher API limit.
2) It depends on the use cases of your app.
Does your app require a lot of API hits to be usable? How does an API limit affect the features your app provides?
3) It depends on how many processors you have.
Heroku allows you to scale your app by adding web dynos and worker dynos. The more you have, the better you'll be able to offer high API limits.
In any case, if you want to prevent someone from DoS-ing you or calling your API in an infinite loop, a blanket limit is the wrong approach, because an API limit affects the good guys as well as the bad guys. A better way is to detect the bad behaviour and respond appropriately (i.e. deny the offending IP address for a limited time).
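As a sketch of that detect-and-respond idea (all thresholds are illustrative), a sliding one-minute window per IP with a temporary ban keeps the good guys unaffected while cutting off the offender:

```python
import time
from collections import defaultdict, deque

WINDOW = 60        # seconds of history kept per IP
THRESHOLD = 120    # requests per window we treat as abuse
BAN_SECONDS = 600  # temporary ban, not a permanent block

hits = defaultdict(deque)   # ip -> timestamps of recent requests
banned_until = {}           # ip -> time the ban lifts

def allow(ip):
    now = time.time()
    if banned_until.get(ip, 0) > now:
        return False                      # still banned: respond 403/429
    window = hits[ip]
    window.append(now)
    while window and window[0] < now - WINDOW:
        window.popleft()                  # forget requests older than 60s
    if len(window) > THRESHOLD:
        banned_until[ip] = now + BAN_SECONDS
        return False
    return True
```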