How does LinkedIn throttling count my calls?

I've been using the LinkedIn API in my application, and I noticed that the call count for profile requests in the LinkedIn dashboard is significantly higher than the number of calls I'm actually making (something like 10 times as many).
I'm using bulk calls and field selectors.
Has anyone else had this experience? Any idea why this happens?

From the Throttle Limits documentation:
Note: Bulk requests are throttled as if you make multiple individual requests, so they provide efficiency, not extra information.
So each part of your bulk requests counts as one request.
But without knowing exactly what you are doing, there's no way to say whether this is really the reason for what you're experiencing.
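To make that accounting concrete, here is a minimal sketch; the IDs and field names are illustrative placeholders, not LinkedIn's actual API shapes:

```python
# Sketch: why one bulk call can consume many quota units.
# The IDs and field selectors below are illustrative placeholders,
# not LinkedIn's actual API shapes.

def throttle_cost(member_ids):
    """Each member in a bulk request is billed as an individual call."""
    return len(member_ids)

bulk_request = {
    "ids": ["id1", "id2", "id3", "id4", "id5"],
    "fields": ["first-name", "last-name", "headline"],  # field selectors
}

http_requests_sent = 1                               # one request on the wire
quota_consumed = throttle_cost(bulk_request["ids"])  # five against the limit

print(f"HTTP requests: {http_requests_sent}, quota consumed: {quota_consumed}")
```

So a job that fetches 10 profiles per bulk call, 100 times over, sends 100 HTTP requests but consumes 1,000 calls of quota, which would match the roughly 10x discrepancy described above.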

Related

IBM Visual Recognition limits

I was not able to find any information about the maximum number of concurrent requests or the maximum number of requests per second to the Visual Recognition service.
Can you provide me with information or a link where I can read about limits in general?
There is not a hard limit on concurrency per user. The service is designed to support many users simultaneously, and therefore has the capacity to process many requests in parallel. During periods of heavy use, however, you may occasionally receive a 500 return code, in which case the request should be resubmitted.
Unfortunately, there is a legacy error message in the system that tells users they may be submitting too many concurrent requests, but it is very unlikely that the error is actually caused by a user's concurrency. It should be treated like a 500 error code: just resubmit the request.
You should have no problem submitting 30 or 40 requests concurrently.
Standard API keys are limited to 25,000 events per day by default to prevent something like an infinite loop from generating a huge bill. You can have that limit increased by creating a Bluemix support ticket. If you really want to do some large-scale processing in parallel, it would also make sense to get in touch with Support via a Bluemix ticket so that we can guide you toward the most cost-effective and efficient way to use the service.
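Following the resubmit-on-500 advice above, a retry wrapper might look like the sketch below. The endpoint URL is a placeholder, and the use of `requests` plus the backoff delays are my own choices, not anything prescribed by the service:

```python
import time
import requests

def classify_with_retry(url, payload, max_retries=3):
    """POST to a Visual Recognition endpoint (placeholder URL),
    resubmitting on 500s - including the legacy "too many concurrent
    requests" message, which should be treated the same way."""
    for attempt in range(max_retries + 1):
        resp = requests.post(url, json=payload, timeout=30)
        if resp.status_code != 500:
            resp.raise_for_status()  # surface real client errors
            return resp.json()
        time.sleep(2 ** attempt)  # brief, arbitrary backoff before resubmitting
    raise RuntimeError(f"Still receiving 500s after {max_retries} retries")
```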

Podio API limit

I am working on a product that fetches all of a customer's organization, workspace, and app details. The customer can refresh them at any time.
So let's say I have one customer who has 100 applications across multiple workspaces; that means around 110 calls to fetch each application's details, the workspace details, and the organizations.
Now if that customer refreshes the applications multiple times, say 10 times in an hour, that action alone is about 1,000 API calls. If I have 50 such active users doing this, it comes to something like 50,000.
AFAIK I cannot make that many API calls in an hour, so how do I handle this scenario? I know a lot of applications do such things, so I want to understand how everyone handles it.
If you need a higher rate limit, I would encourage you to contact Podio support and ask specifically for what you need. We have internal guidelines for evaluating these kinds of requests and may increase the limit for your user and client ID if appropriate.
In general, though, I would expect your app to implement some kind of batching, transient storage, and/or caching layers, especially if your customers are interacting with Podio exclusively or primarily through your system.
Please see our official statement here: https://developers.podio.com/index/limits
Summary:
The general limit is 5,000 API calls per hour, but if an API call is marked as "Rate limited" in the API reference, the call is deemed resource-intensive and a lower rate of 1,000 calls per hour is enforced. If you hit the rate limits, the API will begin returning 420 HTTP error codes for all API calls. Rate limits are per user per API key.
Contacting support:
If you have a project that requires a higher rate limit, contact support@podio.com with a brief description of your project, your estimated usage, and the client_id of the API key you are using.
Usage tips for reducing API usage:
Avoid making API requests inside loops. Instead of fetching individual objects inside a loop, fetch a collection of objects in one API operation. E.g. filter items
Cache results whenever possible. This is especially true when you are displaying data to the public (i.e. everyone sees the same output); see the caching sketch after this list.
Don't poll for changes. Instead of polling Podio to see if your content has changed use webhooks or push to receive a notification. This might save you thousands of requests: https://developers.podio.com/doc/hooks
Use logging to see how many requests you're making
Bundle responses with the "fields" parameter
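To make the caching tip concrete, here is a minimal sketch. The TTL is arbitrary, and while the `OAuth2` authorization header format matches Podio's documented scheme, treat the details as assumptions to verify:

```python
import time
import requests

_cache = {}        # url -> (fetched_at, payload)
CACHE_TTL = 300    # seconds; tune to how fresh your data must be

def cached_get(url, token):
    """Serve repeated reads from a local cache instead of the API.

    With a cache in front, a customer refreshing 10 times an hour costs
    one API call per TTL window instead of one per click.
    """
    now = time.time()
    if url in _cache and now - _cache[url][0] < CACHE_TTL:
        return _cache[url][1]
    resp = requests.get(url, headers={"Authorization": f"OAuth2 {token}"})
    resp.raise_for_status()
    _cache[url] = (now, resp.json())
    return resp.json()
```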
You might want to build an API proxy app; you would need a messaging queue and a rate limiter. This would let you keep track of API call consumption across apps and users.
Also worth noting: some API routes are more expensive than others because they are more resource-intensive on the Podio side. The term in use is "rate limited": rate-limited API routes are bound to 1,000 calls an hour, so in effect they cost five times as much as regular routes.
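As a sketch of the rate-limiter half of such a proxy, a token bucket per (user, API key) pair could look like this. The 5,000/hour figure comes from the limits above; the class names, in-memory storage, and the cost=5 convention for rate-limited routes are my own illustration (a real proxy would keep counters in shared storage such as Redis):

```python
import time

class TokenBucket:
    """In-memory token bucket: 5,000 regular calls per hour."""

    def __init__(self, capacity=5000, refill_period=3600):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_rate = capacity / refill_period  # tokens per second
        self.last_refill = time.monotonic()

    def allow(self, cost=1):
        """Spend `cost` tokens; pass cost=5 for rate-limited routes,
        mirroring the "five times as much" accounting above."""
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Limits are per user per API key, so bucket on that pair.
buckets = {}

def proxy_call(user_id, client_id, make_request, cost=1):
    bucket = buckets.setdefault((user_id, client_id), TokenBucket())
    if not bucket.allow(cost):
        raise RuntimeError("Local quota exhausted; queue or delay this call")
    return make_request()
```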
Hope this helps!

Limits of the Wikipedia API

I read that Wikipedia's API is called MediaWiki. My question is regarding this API. Does this API have a maximum number of calls per day/hour/minute? I can't seem to find it.
See the Wikimedia REST API "Terms and Conditions" for the latest rate limits (200 requests per second in 2022). What do you plan to do with the Wikipedia API?
They have API:Etiquette and API:FAQ pages.
There is no hard and fast limit on read requests, but we ask that you
be considerate and try not to take a site down. Most sysadmins reserve
the right to unceremoniously block you if you do endanger the
stability of their site.
If you make your requests in series rather than in parallel (i.e. wait
for the one request to finish before sending a new request, such that
you're never making more than one request at the same time), then you
should definitely be fine. Also try to combine things into one request where you can (e.g. use multiple titles in a titles parameter instead of making a new request for each title).
The API:FAQ states that you can retrieve up to 50 pages per API request.
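Combining titles into one request looks like this against the real MediaWiki endpoint; batching at 50 matches the FAQ limit just quoted, and the descriptive User-Agent follows the etiquette guidance (the bot name and contact address are placeholders):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
# Per API:Etiquette, identify your client with a descriptive User-Agent.
HEADERS = {"User-Agent": "ExampleBot/0.1 (contact: you@example.com)"}

def fetch_pages(titles):
    """Fetch page info for many titles in batches of 50 per request."""
    results = {}
    for i in range(0, len(titles), 50):
        batch = titles[i:i + 50]
        resp = requests.get(API, headers=HEADERS, params={
            "action": "query",
            "prop": "info",
            "titles": "|".join(batch),  # one request for the whole batch
            "format": "json",
        })
        data = resp.json()
        # Check the API's status messages so the script can stop gracefully.
        if "error" in data:
            raise RuntimeError(data["error"])
        if "warnings" in data:
            print("API warnings:", data["warnings"])
        results.update(data["query"]["pages"])
    return results

pages = fetch_pages(["Python (programming language)", "MediaWiki"])
```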
You can use Data Dumps as well if you need content offline (likely a little outdated).
For a graceful termination of your script in case you hit any of the limits, handle the errors and warnings returned as status messages in API responses.
If there is no need for a "live sample", it would be better to use a data dump.

"reasonable" use of web APIs to sync data

My goal is to synchronize a web application with an internal database. The web application has a public API, but in order to fully synchronize the two sources I would need to make around 2000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web application is Asana, but I've encountered similar situations before with other services. Is there any way to know if you're abusing a service through excessive API calls? I know I'm not going to DoS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web-service only when I know there's been a change in the database, but I'll lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users gave us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, though, we do sometimes handle requests at that volume from a single user.
If you must poll, try to poll on intervals like 15 minutes. But please do not poll your entire workspace at that interval; it's likely to be too much traffic/data. We're working on trying to provide you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors).
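Putting that advice together, a polling loop that respects the 15-minute interval and backs off on 429 might look like the sketch below. The `modified_since` filter (for acting only on changes) and the Retry-After handling are assumptions to check against the current Asana documentation, and `handle_change` is a hypothetical callback:

```python
import time
import requests

BASE = "https://app.asana.com/api/1.0"

def handle_change(task):
    """Hypothetical hook: apply one changed task to the internal database."""
    print("changed:", task.get("gid"), task.get("name"))

def poll_project(project_id, token, interval=15 * 60):
    """Poll one project every 15 minutes, fetching only changed tasks."""
    last_sync = "2024-01-01T00:00:00Z"  # placeholder starting point
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        resp = requests.get(f"{BASE}/tasks", headers=headers, params={
            "project": project_id,
            "modified_since": last_sync,  # assumed filter; verify in the docs
        })
        if resp.status_code == 429:
            # Rate limited: wait as long as the server asks, then retry.
            time.sleep(int(resp.headers.get("Retry-After", 60)))
            continue
        resp.raise_for_status()
        for task in resp.json()["data"]:
            handle_change(task)
        last_sync = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
        time.sleep(interval)
```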

Rate limiting to protect against DoS attacks (Heroku)

I have 2 POST, 1 PUT, and 1 DELETE APIs in my application. My application is deployed on Heroku. I want to rate limit these APIs, but only to protect against a DoS attack or the case where someone calls an API in an infinite loop by mistake. What's the ideal rate limit for this scenario, e.g. x per minute, y per hour? What are the ideal numbers for x and y?
One way to do it is to drop in the 3scale API plugin (https://github.com/3scale/3scale_ws_api_for_ruby), which enforces rate limits, does analytics, etc. using external infrastructure.
That way you can rate limit individual users and have different quotas for each one (plus all the signup etc.).
It won't strictly prevent a true DoS, because even unauthenticated requests will still reach you, but it will stop them from getting deeper into your stack and cut off the people doing the damage first.
This really depends on your app.
1) It depends on how "heavy" your app is:
If each of your requests is processor-heavy and/or hits your database a lot, then you'll want to set a low limit, because each request is "expensive". If your app is fast and efficient, then you have more room to maneuver and can set a higher API limit.
2) It depends on the use cases of your app.
Does your app require a lot of API hits to be useable? How does an API limit affect the features your app provides?
3) It depends on how many processors you have.
Heroku allows you to scale your app by adding web dynos and worker dynos. The more you have, the better you'll be able to offer high API limits.
In any case, if you want to prevent someone from DoS-ing you or calling your API in an infinite loop, a blanket limit is the wrong approach, because an API limit affects the good guys as well as the bad guys. A better way is to detect the bad behaviour and respond appropriately (e.g. deny the offending IP address for a limited time).
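To illustrate the detect-and-deny approach, here is a minimal fixed-window sketch. Flask, the thresholds, and the ban duration are all my own choices, and a Heroku app with multiple dynos would need shared storage such as Redis rather than process-local dicts:

```python
import time
from collections import defaultdict

from flask import Flask, abort, request

app = Flask(__name__)

WINDOW = 60          # seconds per counting window
MAX_PER_WINDOW = 30  # requests allowed per IP per window (tune per app)
BAN_SECONDS = 600    # how long to deny an offending IP

hits = defaultdict(list)  # ip -> recent request timestamps
banned_until = {}         # ip -> time the ban lifts

@app.before_request
def throttle():
    ip = request.remote_addr
    now = time.time()
    if banned_until.get(ip, 0) > now:
        abort(429)  # still banned
    recent = [t for t in hits[ip] if now - t < WINDOW]
    recent.append(now)
    hits[ip] = recent
    if len(recent) > MAX_PER_WINDOW:
        # Looks like an infinite loop or an attack: deny this IP for a while.
        banned_until[ip] = now + BAN_SECONDS
        abort(429)
```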