I have two POST, one PUT, and one DELETE API in my application. My application is deployed on Heroku. I want to rate limit these APIs, but only to protect against a DoS attack or someone accidentally calling an API in an infinite loop. What's an ideal rate limit for this scenario, e.g. x per minute, y per hour? What are ideal numbers for x and y?
One way to do it is to drop in the 3scale API plugin (https://github.com/3scale/3scale_ws_api_for_ruby), which enforces rate limits, does analytics, etc. using external infrastructure.
That way you can rate limit individual users and have different quotas for each one (plus all the signup etc.).
It won't strictly prevent a true DoS, because even unauthenticated requests will still reach you, but it will stop them from going further down into your stack and cut off the people doing the damage first.
This really depends on your app.
1) It depends on how "heavy" your app is:
If each of your requests is processor-heavy and/or hits your database a lot, then you'll want to set a low limit, because each request is "expensive". If your app is fast and efficient, you have more room to maneuver and can set a higher API limit.
2) It depends on the use cases of your app.
Does your app require a lot of API hits to be usable? How does an API limit affect the features your app provides?
3) It depends on how many processors you have.
Heroku allows you to scale your app by adding web dynos and worker dynos. The more you have, the better you'll be able to offer high API limits.
In any case, if you want to prevent someone from DoS-ing you or calling your API in an infinite loop, an API limit is the wrong approach to take, because it affects the good guys as well as the bad guys. A better way is to detect the bad behaviour and respond appropriately (e.g. deny the offending IP address for a limited time).
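A minimal sketch of that idea, assuming a Python/Flask app and an in-memory store (the thresholds are placeholders, and a real Heroku deployment would keep this state in something like Redis so it survives dyno restarts):

```python
# Sketch: per-IP sliding-window abuse detection with a temporary ban.
# Thresholds and the in-memory store are placeholders, not recommendations.
import time
from collections import defaultdict, deque
from flask import Flask, request, abort

app = Flask(__name__)

WINDOW_SECONDS = 60           # look at the last minute of traffic
MAX_REQUESTS_PER_WINDOW = 60  # placeholder threshold
BAN_SECONDS = 600             # deny the offender for 10 minutes

recent_requests = defaultdict(deque)  # ip -> timestamps of recent requests
banned_until = {}                     # ip -> time the ban expires

@app.before_request
def detect_abuse():
    ip = request.remote_addr
    now = time.time()

    if banned_until.get(ip, 0) > now:
        abort(429)  # still banned

    window = recent_requests[ip]
    window.append(now)
    while window and window[0] < now - WINDOW_SECONDS:
        window.popleft()

    if len(window) > MAX_REQUESTS_PER_WINDOW:
        banned_until[ip] = now + BAN_SECONDS
        abort(429)
```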
Related
My goal is to maintain "real-time" (or as close as possible) information about the email messages sent by a group of users.
My ideal solution would be to periodically query the API for messages by all users in the group. This feature is not (yet?) implemented.
My second choice would be to create subscriptions (https://graph.microsoft.io/en-us/docs/api-reference/v1.0/api/subscription_post_subscriptions) for every member in the group and then request message information after I become aware of an event. The problem is that, in practice, I'm only allowed to create 20-30 simultaneous subscriptions (see "Issues to use Webhook for Microsoft Graph API"), which might not be enough.
So I'm stuck with polling all the users in a cycle. The main problem with this approach is that I can't find any information on how many requests are "too many", i.e. when I get throttled. I want to maximize the number of requests to minimize the time it takes to get through one cycle.
A solution that comes to mind is to develop an adaptive program that slowly decreases the time between requests until it gets throttled, then abruptly adds some time back until a nice balance is found and maintained. This seems like a lot of work, though. Right now I'm working on the assumption that 1/second is about the highest I can safely go (0.5 seconds on average per round trip, then a cool-down of another 0.5 seconds).
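For what it's worth, a bare-bones version of that adaptive pacing idea might look something like this (the endpoint path is illustrative and the step sizes are guesses, not Graph-documented values):

```python
# Sketch of adaptive pacing: shrink the delay slowly while requests succeed,
# and add time back whenever the API signals throttling (HTTP 429).
# Step sizes are arbitrary choices, not Microsoft-documented limits.
import time
import requests

MIN_DELAY = 0.1   # seconds, floor for the inter-request delay
DECREASE = 0.05   # additive decrease while things are healthy
INCREASE = 0.5    # additive penalty when throttled

def poll_users(users, access_token, delay=1.0):
    for user in users:
        resp = requests.get(
            f"https://graph.microsoft.com/v1.0/users/{user}/messages",
            headers={"Authorization": f"Bearer {access_token}"},
        )
        if resp.status_code == 429:
            # Prefer the server's hint if it provides one.
            retry_after = float(resp.headers.get("Retry-After", INCREASE))
            delay += max(retry_after, INCREASE)
        else:
            delay = max(MIN_DELAY, delay - DECREASE)
        time.sleep(delay)
    return delay  # carry the learned pacing into the next cycle
```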
What is the best way to deal with an unknown throttling limit in general, and Microsoft Graph in particular?
Edit:
While I think the accepted answer is a good response, it might not be suitable for all cases. For instance, if you don't want to use the Office 365 API and you don't mind using beta features, you might check out this (delta tokens), which seems to be designed for real-time syncing with the data.
The only potential downside of the accepted answer is that you still need a subscription for each user you want to track (I think...), and there are limits on those. Very curious how other people have tackled this problem.
While still in preview, you may want to take a look at Outlook Streaming Notifications. These APIs provide a more robust notification model than simple webhooks. You would still need to establish multiple subscriptions, but I expect you'll see far less latency.
I am working on a product that fetches all of the customer's organization/workspace and app details. The customer can refresh them at any time.
So let's say I have one customer with 100 applications across multiple workspaces; that means around 110 calls to get each application's details plus the workspace and organization details.
Now, if that customer refreshes the applications multiple times, say 10 times in an hour, that alone is around 1,000 API calls. If I have 50 such active users doing the same, it will be something like 50,000.
AFAIK I cannot make that many API calls in an hour, so how do I handle this scenario? I know a lot of applications do similar things, so I want to understand how everyone handles this.
If you need a higher rate limit, I would encourage you to contact Podio support and ask specifically for what you need. We have internal guidelines for evaluating these kinds of requests and may increase the limit for your user and client ID if appropriate.
In general, though, I would expect your app to implement some kind of batching, transient storage, and/or caching layers, especially if your customers are interacting with Podio exclusively or primarily through your system.
Please see our official statement here: https://developers.podio.com/index/limits
Summary:
The general limit is 5,000 API calls per hour, but if an API call is marked as "Rate limited" in the API reference, the call is deemed resource-intensive and a lower limit of 1,000 calls per hour is enforced. If you hit the rate limits, the API will begin returning HTTP 420 error codes for all API calls. Rate limits are per user per API key.
Contacting support:
If you have a project that requires a higher rate limit, contact support@podio.com with a brief description of your project, your estimated usage, and the client_id of the API key you are using.
Usage tips:
Tips for reducing API usage
Avoid making API requests inside loops. Instead of fetching individual objects inside a loop, fetch a collection of objects in one API operation. E.g. filter items
Cache results whenever possible. This is especially true when you are displaying data to the public (i.e. everyone sees the same output); see the sketch after this list.
Don't poll for changes. Instead of polling Podio to see if your content has changed use webhooks or push to receive a notification. This might save you thousands of requests: https://developers.podio.com/doc/hooks
Use logging to see how many requests you're making
Bundle responses with "fields" parameter
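To illustrate the caching tip above, a small TTL cache in front of your Podio fetches means repeated "refresh" clicks within a few minutes don't burn API calls (fetch_app_details is a hypothetical function and the TTL is a placeholder):

```python
# Sketch: wrap an expensive fetch in a time-to-live cache so repeated
# refreshes serve cached data instead of making new API calls.
import time

CACHE_TTL = 300  # seconds; placeholder value
_cache = {}      # key -> (expires_at, value)

def cached(key, fetch_fn):
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]          # serve from cache, no API call
    value = fetch_fn()         # only hit the API on a miss or expiry
    _cache[key] = (now + CACHE_TTL, value)
    return value

# Usage (fetch_app_details is hypothetical):
# details = cached(("app", app_id), lambda: fetch_app_details(app_id))
```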
You might want to build an API proxy app; you would need a messaging queue and a rate limiter. This would let you keep track of API call consumption across apps and users.
Also worth noting: some API routes are more expensive than others because they are more resource-intensive on the Podio side. The term in use is "rate limited": rate-limited API routes are bound to 1,000 calls an hour, so in effect they cost five times as much as regular routes.
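A rough sketch of the rate-limiter half of such a proxy, charging rate-limited routes five units so the numbers mirror the 5,000 / 1,000 per-hour split described above (in-memory counters for brevity; a real proxy would keep them in Redis or the queue layer):

```python
# Sketch: per-(user, API key) hourly budget inside an API proxy, where
# "rate limited" Podio routes cost 5 units each. Illustrative only.
import time

HOURLY_BUDGET = 5000
EXPENSIVE_ROUTE_COST = 5

_usage = {}  # (user_id, client_id) -> (window_start, units_used)

def allow_request(user_id, client_id, route_is_rate_limited):
    now = time.time()
    key = (user_id, client_id)
    window_start, used = _usage.get(key, (now, 0))

    if now - window_start >= 3600:           # new hour, reset the budget
        window_start, used = now, 0

    cost = EXPENSIVE_ROUTE_COST if route_is_rate_limited else 1
    if used + cost > HOURLY_BUDGET:
        return False                          # caller should queue or reject

    _usage[key] = (window_start, used + cost)
    return True
```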
Hope this helps!
I've been using the LinkedIn API in my application, and I noticed that the call count for profile requests in the LinkedIn dashboard is significantly higher than the number of calls I'm actually making (something like 10 times as many).
I'm using bulk calls and field selectors.
Did anyone else have such experience? Any idea why this happens?
From the Throttle Limits documentation:
Note: Bulk requests are throttled as if you make multiple individual requests, so they provide efficiency, not extra information.
So each part of your bulk requests counts as one request.
But without knowing exactly what you are doing, there's no way to say whether this is really the reason for what you're experiencing.
My goal is to synchronize a web-application with an internal database. The web-application has a public API, but in order to fully synchronize the two sources I would need to make around 2000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web-application is Asana, but I've encountered similar situations before with other services. Is there any way to know if you're abusing a service through excessive API calls? I know I'm not going to DOS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web-service only when I know there's been a change in the database, but I'll lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
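To make the scaling argument concrete, here is the rough shape of a change-driven sync loop, assuming, purely hypothetically, that the service exposed some kind of modified-since filter (fetch_changed_since and apply_change are made-up helpers, not real Asana API calls):

```python
# Hypothetical change-driven sync: each pass asks only for objects changed
# since the last pass, so the work scales with the number of changes rather
# than with the total number of objects.
import time
from datetime import datetime, timezone

POLL_INTERVAL = 15 * 60  # seconds

def sync_forever(fetch_changed_since, apply_change):
    last_sync = datetime.now(timezone.utc)
    while True:
        cycle_start = datetime.now(timezone.utc)
        for obj in fetch_changed_since(last_sync):  # small result set
            apply_change(obj)                       # update internal database
        last_sync = cycle_start  # changes made mid-fetch are caught next pass
        time.sleep(POLL_INTERVAL)
```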
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users gave us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, we might sometimes be able to handle requests at that volume from a single user.
If you must poll, try to poll at intervals of something like 15 minutes. But please do not poll your entire workspace that frequently; it's likely to be too much traffic/data. We're working on providing you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors).
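A defensive client might handle that 429 response with a simple retry wrapper like the one below (the backoff constants are arbitrary choices, not Asana-documented values):

```python
# Sketch: on HTTP 429, wait (using Retry-After when the server provides it,
# otherwise an exponential backoff) and retry a few times before giving up.
import time
import requests

def get_with_backoff(url, headers, max_retries=5):
    wait = 1.0
    resp = requests.get(url, headers=headers)
    for _ in range(max_retries):
        if resp.status_code != 429:
            break
        time.sleep(float(resp.headers.get("Retry-After", wait)))
        wait *= 2  # fall back to exponential backoff if no hint is given
        resp = requests.get(url, headers=headers)
    return resp
```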
Not sure how others have addressed this, but generally speaking what is the best practice for giving your own apps priority treatment when it comes to using one of your own public APIs?
Use Cache Priority
Caching responses or interim calculations in RAM is typically the first optimization point, because caching is easier than micro-optimizing all your code. Controlling what goes into the cache and how long it stays there is a natural place to apply "priority treatment".
I like the cache-management approach better than thread priority because, when you are under load, delaying the execution of a request often creates complex thread-pool problems and decreases overall server throughput.
Caching Based on Load (rather than on app ownership) will Expand the Resource Pie
We take the RAM cache priority approach with the MapLarge Tile Server and Geocoding API. However, we don't actually give our own apps priority; instead, we base priority on request frequency and the time required to render a response. Unless you have large numbers of low-value API users, I would recommend doing something similar, because this approach should reduce overall load and enable the server to handle more API requests.
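As a toy illustration of scoring cache entries by load rather than by owner (the scoring formula, capacity, and eviction policy below are just one plausible choice, not MapLarge's actual implementation):

```python
# Toy sketch: keep the cache entries whose absence would cost the most,
# i.e. score = request frequency * time it takes to render the response.
CACHE_CAPACITY = 1000  # placeholder

class LoadPriorityCache:
    def __init__(self):
        self.entries = {}  # key -> (hits, render_seconds, value)

    def put(self, key, value, render_seconds):
        hits = self.entries.get(key, (1, 0, None))[0]
        self.entries[key] = (hits, render_seconds, value)
        if len(self.entries) > CACHE_CAPACITY:
            self._evict()

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None
        hits, render_seconds, value = entry
        self.entries[key] = (hits + 1, render_seconds, value)
        return value

    def _evict(self):
        # Drop the entry that is cheapest to recompute and least requested.
        cheapest = min(self.entries,
                       key=lambda k: self.entries[k][0] * self.entries[k][1])
        del self.entries[cheapest]
```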
I recently wrote a white paper that highlights the different load profiles of cached and non-cached responses in a multi-tenant API environment. You can see it here:
http://maplarge.com/Tile-Server-Performance
API Policies can drive revenue
If you have free or low-paying users who are generating massive load, you might want to review your business plan and consider instituting account-based rate limits that match user revenue to server costs in a scalable way. If you do limit API users, I would recommend having explicit and predictable policies so they can project their usage and know when to purchase an API account upgrade.