We use the REST API to check the last 20 transactions for a specific user
What is the maximum number of requests per second we can make using the Elrond REST API?
As far as I'm aware, the rate limits for the official API aren't publicly documented.
If you plan to make many requests per second, you might want to consider setting up your own observer squad and API so you are independent of the Elrond infrastructure. This not only gives you greater control over response times (and downtime), but also reduces the load on the official servers, so others aren't affected by the volume of requests you make.
Related
If I may ask, how does the API request limit handle calls to multiple stores?
Scenario:
We have one backend service "polling" a store every 5 seconds with one core request plus 'x' requests for images by productId for each item in "LineItems" (I doubt this will be an extraordinary figure). If we had five stores and five background services polling their respective stores, would those requests all count toward the same request limit? I.e. how is usage tracked: by IP?
I'm hoping that it's on a per-store basis, so other stores have their own "bucket". I have read through some documentation, but I'm not sure it gives me the answer I need.
There are a lot of older posts and articles, all with conflicting info.
I fully appreciate this is not a programming issue per se, but I was hoping this was the place for such a question. As ever, I appreciate everyone's time.
Regards,
All shops are allotted a limit of two API calls per second, which only kicks in once you've drained your bucket of 40 calls. If you are polling a shop every 5 seconds, you are in no danger of hitting any limits. Any one app can call N stores; the limit is per store, not per app.
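Below is a minimal sketch of how a client could stay inside that kind of limit, assuming the figures above (a 40-call bucket that drains at 2 calls per second); the store keys and request function are hypothetical placeholders, and this is client-side throttling only, not the platform's actual accounting.

```python
import time

class LeakyBucket:
    """Per-store bucket: capacity 40 calls, draining at 2 calls per second."""

    def __init__(self, capacity=40, leak_rate=2.0):
        self.capacity = capacity      # maximum burst of calls
        self.leak_rate = leak_rate    # calls forgiven per second
        self.level = 0.0              # current fill level
        self.last = time.monotonic()

    def acquire(self):
        """Block until one more call fits in the bucket, then record it."""
        while True:
            now = time.monotonic()
            # drain the bucket for the time that has passed
            self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
            self.last = now
            if self.level + 1 <= self.capacity:
                self.level += 1
                return
            time.sleep((self.level + 1 - self.capacity) / self.leak_rate)

# one bucket per store, because the limit is per store, not per app
buckets = {store: LeakyBucket() for store in ("store-a", "store-b")}

def throttled_call(store, request_fn):
    buckets[store].acquire()
    return request_fn()
```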
I am working on a product that fetches all of a customer's organization, workspace, and app details. The customer can refresh them at any time.
So let's say I have one customer with 100 applications across multiple workspaces; that comes to around 110 calls to get each application's details, the workspace details, and the organizations.
Now if that customer refreshes the applications several times, say 10 times in an hour, that alone is roughly 1,000 API calls. If I have 50 such active users doing this, it comes to something like 50,000.
AFAIK I cannot make that many API calls in an hour, so how should I handle this scenario? I know a lot of applications do this kind of thing, so I want to understand how everyone else handles it.
If you need a higher rate limit, I would encourage you to contact Podio support and ask specifically for what you need. We have internal guidelines for evaluating these kinds of requests and may increase the limit for your user and client ID if appropriate.
In general, though, I would expect your app to implement some kind of batching, transient storage, and/or caching layers, especially if your customers are interacting with Podio exclusively or primarily through your system.
Please see our official statement here: https://developers.podio.com/index/limits
Summary:
The general limit is 5,000 API calls per hour, but if a call is marked as "Rate limited" in the API reference it is deemed resource intensive and a lower limit of 1,000 calls per hour is enforced. If you hit the rate limits, the API will begin returning HTTP 420 error codes for all API calls. Rate limits are per user per API key.
Contacting support:
If you have a project that requires a higher rate limit, contact support@podio.com with a brief description of your project, your estimated usage, and the client_id of the API key you are using.
Usage tips for reducing API usage:
Avoid making API requests inside loops. Instead of fetching individual objects inside a loop, fetch a collection of objects in one API operation. E.g. filter items
Cache results whenever possible. This is especially true when you are displaying data to the public (i.e. everyone sees the same output); see the sketch after this list.
Don't poll for changes. Instead of polling Podio to see if your content has changed use webhooks or push to receive a notification. This might save you thousands of requests: https://developers.podio.com/doc/hooks
Use logging to see how many requests you're making
Bundle responses with "fields" parameter
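To make the caching and batching tips concrete, here is a minimal sketch; fetch_filtered_items is a hypothetical helper standing in for one bulk "filter items"-style request, and the TTL is an arbitrary choice, not a Podio recommendation.

```python
import time

CACHE_TTL = 300  # seconds; tune to how fresh the data must be
_cache = {}      # app_id -> (timestamp, items)

def get_items(app_id, fetch_filtered_items):
    """Return cached items when fresh, otherwise refetch the whole collection."""
    cached = _cache.get(app_id)
    if cached and time.time() - cached[0] < CACHE_TTL:
        return cached[1]                      # serve from cache: zero API calls
    items = fetch_filtered_items(app_id)      # one bulk call instead of a per-item loop
    _cache[app_id] = (time.time(), items)
    return items
```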
You might want to build an API proxy app; you would need a messaging queue and a rate limiter. This would let you keep track of API call consumption across apps and users.
Also worth noting: some API routes are more expensive than others because they are more resource intensive on the Podio side. The term in use is "rate limited": rate-limited API routes are bound to 1,000 calls an hour, so in effect they cost five times as much as regular routes.
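As a rough illustration of that proxy idea, here is a sketch that checks each outgoing call against an hourly budget per (user, client_id, kind). The budgets mirror the limits quoted above (5,000 regular / 1,000 rate-limited calls per hour); the bookkeeping itself is an assumption, not how Podio tracks usage.

```python
import collections
import time

BUDGETS = {"regular": 5000, "rate_limited": 1000}  # calls per hour
WINDOW = 3600  # seconds

calls = collections.defaultdict(list)  # (user, client_id, kind) -> call timestamps

def allow(user, client_id, kind="regular"):
    """Return True if one more call of this kind fits in the rolling hour."""
    key = (user, client_id, kind)
    now = time.time()
    calls[key] = [t for t in calls[key] if now - t < WINDOW]  # drop stale entries
    if len(calls[key]) < BUDGETS[kind]:
        calls[key].append(now)
        return True
    return False  # over budget: queue the call or return an error to the caller
```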
Hope this helps!
I understand the Twitter REST API has strict request limits (a few hundred requests per 15 minutes), and that the Streaming API is sometimes better for retrieving live data.
My question is: what exactly are the Streaming API limits? Twitter references a percentage in their docs, but not a specific amount. Any insight is greatly appreciated.
What I'm trying to do:
A simple page where I can view the latest tweet (and the date/time it was posted) from ~1000 Twitter users. It seems I would rapidly hit the limit using the REST API, so would the Streaming API be required for this application?
You should be fine using the Streaming API, unless those ~1000 users combined are tweeting more than (very) roughly 60 tweets per second at any moment.
Using the Streaming API endpoint statuses/filter with the follow parameter, you are allowed up to 5000 users. There is no rate limit except when the stream returns more than about 1% of all tweets being tweeted at that moment. (60 tweets per second is 1% of the average rate of tweets, which is always fluctuating, so don't rely on that number.)
If your stream does go above the 1% threshold, you can detect this. (See the LIMIT notice.) Then you would use the REST API to find missed tweets.
Twitter simply will not allow multiple streams from one registered app/account; opening a second stream will cause the older one to be closed.
Too many connection attempts are also not allowed and will result in the user being blocked.
Reference docs: Public Streaming API (outdated)
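For illustration, here is a minimal sketch of spotting the LIMIT notice in a stream; it assumes stream_lines is an iterable of raw JSON lines from the (now retired) statuses/filter endpoint, and the message handling is a simplification of what a real client would do.

```python
import json

def handle_stream(stream_lines):
    missed_so_far = 0
    for line in stream_lines:
        if not line.strip():
            continue                      # keep-alive newline
        message = json.loads(line)
        if "limit" in message:
            # running count of tweets withheld since the connection opened
            missed_so_far = message["limit"]["track"]
            print(f"Over the 1% cap; {missed_so_far} tweets withheld so far")
            # at this point you would backfill the gap via the REST API
        elif "text" in message:
            print(message["user"]["screen_name"], message["text"])
```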
Does anybody know if there are API call limits per second, hour, or day for the Tumblr API? It seems to me the limits do exist, because I hit them when I make a lot of API calls in a short period via OAuth. However, I couldn't find any documentation on the Tumblr API website or via Google. Many thanks.
I have been using the Tumblr API for about 2 years now, and I must admit that the "Rate Limit Exceeded" issue has no deterministic and, more importantly, no officially confirmed answer.
In Tumblr's API Agreement you can find a reference to limitations under the section "Respect for Limitations", which says:
In addition, you will comply with any limitations imposed by Tumblr on the frequency of access, calls and use of the Tumblr API and Tumblr Firehose
We ask that you respect these limitations, as well as any rate limits that we may place on actions, which are designed to protect our systems
Notes:
There is a special Tumblr tag, "rate-limit-exceeded", dedicated to this. However, it does not say much about the number of requests per period of time that an affected person was making when they hit the problem.
For example, here you can find an average of 1,000 requests per minute reported as the limit.
As for my application, the request rate is approximately 1 request per second, and it has been running 24/7 for about a year. This issue has still occurred several times even at this relatively low rate, but I consider the failure rate insignificant.
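For what it's worth, here is a minimal retry sketch for those occasional rate-limit responses; do_request is a hypothetical function returning the HTTP status and body, and the backoff schedule is an arbitrary choice rather than documented Tumblr behaviour.

```python
import time

def call_with_backoff(do_request, max_retries=5):
    delay = 2  # seconds
    for attempt in range(max_retries):
        status, body = do_request()
        if status != 429:          # 429 is the usual "Limit Exceeded" response
            return body
        time.sleep(delay)          # wait, then retry with a longer delay
        delay = min(delay * 2, 60)
    raise RuntimeError("still rate limited after retries")
```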
From: https://www.tumblr.com/oauth/apps
Newly registered consumers are rate limited to 1,000 requests per hour, and 5,000 requests per day.
If you go to that link it looks like you can get the rate limit removed if you ask nicely! :)
I read that Wikipedia's API is called MediaWiki. My question is regarding this API: does it have a maximum number of calls per day/hour/minute? I can't seem to find it documented.
See the Wikimedia REST API "Terms and Conditions" for the latest rate limits (200 requests per second as of 2022). What do you plan to do with the Wikipedia API?
They publish guidance in API:Etiquette and the API:FAQ.
There is no hard and fast limit on read requests, but we ask that you be considerate and try not to take a site down. Most sysadmins reserve the right to unceremoniously block you if you do endanger the stability of their site.
If you make your requests in series rather than in parallel (i.e. wait for one request to finish before sending a new request, such that you're never making more than one request at the same time), then you should definitely be fine. Also try to combine things into one request where you can (e.g. use multiple titles in a titles parameter instead of making a new request for each title).
The API FAQ states you can retrieve up to 50 pages per API request.
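As a small illustration of combining titles into one request (and keeping requests serial), here is a sketch against the standard MediaWiki Action API; the title list and User-Agent string are just examples.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
titles = ["Python (programming language)", "Rate limiting", "MediaWiki"]

params = {
    "action": "query",
    "prop": "info",
    "titles": "|".join(titles),   # one request covering many pages
    "format": "json",
    "maxlag": 5,                  # back off when the servers are lagging
}
response = requests.get(API, params=params,
                        headers={"User-Agent": "example-script/0.1"})
data = response.json()
for page in data["query"]["pages"].values():
    print(page["title"], page.get("touched"))
```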
You can use Data Dumps as well if you need content offline (likely a little outdated).
For a graceful termination of your script in case you hit any of the limits, you can handle errors & warnings in API calls using these status messages.
If there is no need for a "live sample", it would be better to use a data dump.