Geocoding API usage limits at project level or account level?

Would someone be kind enough to tell me whether the Google API usage limits specified here: https://developers.google.com/maps/documentation/geocoding/usage-limits are set at the project level or the account level, please?
I'm using one API key for several maps on our website. Total calls per day limit is no problem at all. We're occasionally clocking more than 50 requests per second in peak times though.
If I create a new project, and get a new API key in the same account, will that mean we can hit 50 requests per second on one API key, and 50 requests per second separately on another API key...or are they calculated at the account level?
Many thanks everyone!

The documentation states the following:
Most of the Google Maps APIs have a complimentary per-day quota that can be set in the Google API Console. The daily default and maximum query limits vary by API. You can increase the complimentary daily limits by enabling billing, or purchasing a Google Maps APIs Premium Plan license. Quota limits are enforced on a unique project basis, and you may not take any action to circumvent quota limits. For example, you may not create multiple projects to compound and exceed quota limits.
https://developers.google.com/maps/faq#usage-limits
So, as you can see, the usage quota is calculated on a per-project basis. If you use two API keys from different projects, each one will have its own usage limits. Note that you cannot create an unlimited number of projects for one account; as far as I know you can create approximately 16 projects within one account.
I hope this clarifies your doubt.
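Whether or not you split traffic across projects, you can also smooth out peak-time bursts client-side so a single key never exceeds its per-second cap. A minimal sketch in Python (the `QpsThrottle` name and the 50 QPS figure are illustrative; this is not a Google library):

```python
import time
from collections import deque

class QpsThrottle:
    """Client-side limiter: blocks until a call fits under max_calls per window."""
    def __init__(self, max_calls, window=1.0):
        self.max_calls = max_calls
        self.window = window
        self.stamps = deque()  # timestamps of recent calls

    def wait(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) >= self.max_calls:
            # Sleep until the oldest call ages out, then re-check.
            time.sleep(self.window - (now - self.stamps[0]))
            return self.wait()
        self.stamps.append(time.monotonic())

# One throttle per API key, since the quota is enforced per project.
throttle = QpsThrottle(max_calls=50, window=1.0)
# throttle.wait()  # call before each geocoding request
```

If you do create a second project with its own key, each key would get its own `QpsThrottle` instance, mirroring the per-project enforcement described in the FAQ.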

The usage limits are calculated at the account level, not the project or key level. They do this to prevent people from just creating unlimited projects to get around the acceptable usage limits that they are providing.

Related

Is it possible to increase the Google Sheets API quota limit beyond 2500 per account and 500 per user?

The problem: Running into Google Sheets API read/write quota limits. Specifically, the read/write requests per 100 seconds and read/write requests per 100 seconds per user quotas.
Some background:
For the past few months I've been developing a web app for students and staff in our school district which uses a Google spreadsheet as the database. Each school in our district was assigned a different Google spreadsheet, and a service account was created to make read and write calls to these spreadsheets on behalf of the web app.
We started with one school of approximately 1000 students, but it has now expanded to two other schools with a total user load of around 4000. Due to the nature of a school day schedule, we started hitting our quota limit (per 100 sec & per 100 sec per user) since almost everyone uses the app at the same time.
I found the usage limits guide for the Google sheets API, and as per the instructions I created a billing account, and linked the associated service account project to it. I then went to the quotas section in the developers console and applied for a higher quota. This involved filling out a Google form which asked "How much quota do you need? Express in number of API queries per day." Again, queries per day is not the problem, rather it's the number of queries per 100 seconds and per user (service account). After a couple of weeks our limit was increased to 2500 read/write requests per 100 seconds and 500 read/write requests per 100 seconds per user. The billing account was not charged, and after a little searching, I realized this was a free increase. This bump in our quota limit helped, but it's still going to be an issue because our district wants to add more schools in the future.
Here's what I need to know:
1) [ESSENTIAL QUESTION] Does Google have an upper limit or maximum to the number of read/write requests a single service account/user/IP can make within the 100 second time frame, and if so what is it?
2) If it is possible to go beyond our current quota limit (2500/500), is there another way of requesting/applying for the increase. Once again we have a billing account established for the project and are willing to pay for the service.
I've been pulling (what's left of) my hair out trying to find definitive answers to my questions. This post came close to what I was looking for, and I even did some of the things the OP suggested, but I just need a direct answer to my "essential" question.
Couple more things.
I understand that Google Charts Visualization doesn't have a quota limitation, and I'd consider using it; however, for privacy reasons I can't have the spreadsheet keys exposed in plain JavaScript. Are there other options here?
Also, one might suggest creating multiple service accounts, but I'd rather avoid this if possible.
Thank you for your help. I'm very much a novice and I greatly appreciate your time and expertise.
To answer your questions:
1) [ESSENTIAL QUESTION] Does Google have an upper limit or maximum to the number of read/write requests a single service account/user/IP can make within the 100 second time frame, and if so what is it?
*The documentation only states that the Google Sheets API has a limit of 500 requests per 100 seconds per project, and 100 requests per 100 seconds per user. Check this post for additional information.*
2) If it is possible to go beyond our current quota limit (2500/500), is there another way of requesting/applying for the increase. Once again we have a billing account established for the project and are willing to pay for the service.
AFAIK, you can request a higher quota limit, and the Google engineers may grant it as long as you are making a reasonable request.
Also, you may check this thread for additional tips:
You can use spreadsheets.get to read the entire spreadsheet in a single call, rather than 1 call per request. Alternately, you can use spreadsheets.values.batchGet to read multiple different ranges in a single call, if all you need are the values.
The Drive API offers "push notifications", so you can get notified when changes occur and react to those, instead of polling for them. The latency of the notifications is a little on the slow side, but it gets the job done.
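Batching aside, per-100-second quota errors are transient, so wrapping each Sheets call in exponential backoff with jitter (the retry pattern Google's API docs generally recommend for quota errors) helps ride out bursts when everyone opens the app at once. A minimal sketch, assuming the rate-limit condition can be detected from the raised exception; the helper name is illustrative:

```python
import random
import time

def call_with_backoff(request, max_retries=5, base_delay=1.0,
                      is_rate_limited=lambda e: "429" in str(e)):
    """Retry `request` with exponential backoff + jitter on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return request()
        except Exception as exc:
            if not is_rate_limited(exc) or attempt == max_retries - 1:
                raise
            # 1s, 2s, 4s, ... plus jitter so clients don't retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Usage: call_with_backoff(lambda: sheets_read(spreadsheet_id))
# where sheets_read is whatever function performs the actual API request.
```

This doesn't raise the quota, but it turns hard failures during the morning rush into short delays.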

How to implement IoT with GCP: What are the limits of both cloud projects and service accounts per project? To what number can they be increased?

In short: What are the limits of both cloud projects and service accounts per project? How can they be increased? Is the architecture a good idea at all?
I am developing an IoT application with tens of thousands of planned devices in the field, having a few devices per customer and hundreds of customers. Every device will continuously (24/7) stream measurement data directly to BigQuery with one dataset (or table) per device at sample rates of at least 100Hz.
Of course, every device needs to be authenticated and authorized to gain tightly restricted access to its cloud dataset. As stated in the Auth Guide, API keys are not very secure. Therefore, the most appropriate solution seems to be one service account per customer with one account key per device (as suggested in this GCP article). However, the FAQs of Cloud IAM state that the number of service accounts is limited to 100 per project.
This limit could be reached quickly. If so, how easily/costly is it to increase this limit towards thousands or tens of thousands of service accounts per project?
In such a scenario also the number of needed projects could easily grow to hundreds or thousands. Would this be feasible?
Is this overall concept practical or are there better approaches within GCP?

Podio API limit

I am working on a product which fetches all of a customer's organization, workspace, and app details. The customer can refresh them at any time.
So let's say I have one customer with 100 applications across multiple workspaces; fetching each application's details plus the workspace and organization details comes to around 110 calls.
Now if that customer refreshes the applications 10 times in an hour, that alone is around 1,100 API calls. With 50 such active users doing the same thing, it would be something like 55,000.
AFAIK I cannot make that many API calls in an hour, so how should I handle this scenario? I know a lot of applications do similar things, so I want to understand how everyone else is handling it.
If you need a higher rate limit, I would encourage you to contact Podio support and ask specifically for what you need. We have internal guidelines for evaluating these kinds of requests and may increase the limit for your user and client ID if appropriate.
In general, though, I would expect your app to implement some kind of batching, transient storage, and/or caching layers, especially if your customers are interacting with Podio exclusively or primarily through your system.
Please see our official statement here: https://developers.podio.com/index/limits
Summary:
The general limit is 5,000 API calls per hour, but if the API call is marked as "Rate limited" in the API reference the call is deemed resource intensive and a lower rate of 1000 calls per hour is enforced. If you hit the rate limits the API will begin returning 420 HTTP error codes for all API calls. Rate limits are per user per API key.
Contacting support:
If you have a project that requires a higher rate limit contact support#podio.com with a brief description of your project, your estimated usage and the client_id of the API key you are using.
Usage tips:
Tips for reducing API usage
Avoid making API requests inside loops. Instead of fetching individual objects inside a loop, fetch a collection of objects in one API operation. E.g. filter items
Cache results whenever possible. This is especially true when you are displaying data to the public (i.e. everyone sees the same output).
Don't poll for changes. Instead of polling Podio to see if your content has changed use webhooks or push to receive a notification. This might save you thousands of requests: https://developers.podio.com/doc/hooks
Use logging to see how many requests you're making
Bundle responses with "fields" parameter
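The caching tip above can be as simple as a time-to-live map in front of the Podio client, so ten refreshes in an hour cost one real API call instead of ten. A minimal sketch (the `podio_get_apps` call and `org_id` in the usage comment are hypothetical placeholders for your actual fetch function):

```python
import time

class TTLCache:
    """Cache API responses so repeated refreshes don't re-hit the API."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]          # still fresh: serve without an API call
        value = fetch()              # stale or missing: one real call
        self.store[key] = (time.monotonic() + self.ttl, value)
        return value

cache = TTLCache(ttl_seconds=300)
# apps = cache.get_or_fetch(("apps", org_id), lambda: podio_get_apps(org_id))
```

Pairing this with webhooks (invalidate the cached key when Podio notifies you of a change) keeps the data fresh without polling.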
You might want to build an API proxy app; you would need a messaging queue and a rate limiter. This would let you keep track of API call consumption across apps and users.
Also worth noting: some API routes are more expensive than others because they are more resource-intensive on the Podio side. The term in use is rate-limited: a rate-limited API route is bound to 1,000 calls an hour, so in effect it costs five times as much as a regular route.
Hope this helps!

Google Sheets API QPS quota

I am trying to find out the quota limits for google sheet api and google drive api.
I can find most of them here
https://console.developers.google.com/iam-admin/quotas?project=
Then I came across the following documentation https://developers.google.com/analytics/devguides/config/mgmt/v3/limits-quotas
Which states the following for the google analytics apis
10 queries per second (QPS) per IP.
In the API Console there is a similar quota which is referred to as "request per 100 seconds per user". By default, it is set to 100 requests per 100 seconds and can be adjusted to a maximum value of 1,000. Despite being listed as "per 100 seconds" the API is restricted to a maximum of 10 requests per second per user.
Is there any QPS limits for google sheets api?
And if there is, and I apply to increase the "requests per 100 seconds per user" quota, my thought is that the QPS limit should also increase. Is that correct?
Here is the default Quota limit for the Sheets API that you will find in your Developer Console.
If you want to increase this quota based on the demands of your project, then you need to apply for a higher quota. Just click the pencil icon and it will direct you to the link for applying for a higher quota.
For more information, check this Usage Limits of Sheets API.

Tumblr API call or request limits

Anybody know if there are any API call limits per second, hour, or day for the Tumblr API? The limits seem to exist when I make a lot of API calls in a short period via OAuth. However, I couldn't find any documentation on the Tumblr API website or via Google. Many thanks.
I have been using the Tumblr API for about 2 years now, and I must admit that the "Rate Limit Exceeded" issue has no deterministic and, more importantly, no officially confirmed answer.
In Tumblr's API Agreement you may find some reference to limitations under section "Respect for Limitations" which says
In addition, you will comply with any limitations imposed by Tumblr on the frequency of access, calls and use of the Tumblr API and Tumblr Firehose
We ask that you respect these limitations, as well as any rate limits that we may place on actions, which are designed to protect our systems
Notes:
There is a special Tumblr tagged blog, "rate-limit-exceeded", dedicated to this. However, it does not say much about the number of requests per period of time that a reporter was making when facing this problem.
For example here you can find avg 1000 requests per minute to be the limit.
As for my application, the request rate is approximately 1 request per second, and it has been running 24/7 for about a year already. There were several times, though, when this issue occurred even at this relatively low rate. However, I consider the failure rate insignificant.
From: https://www.tumblr.com/oauth/apps
Newly registered consumers are rate limited to 1,000 requests per hour, and 5,000 requests per day.
If you go to that link it looks like you can get the rate limit removed if you ask nicely! :)
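Until the limit is lifted, a client-side token bucket tuned to the published 1,000 requests/hour starter cap keeps a new consumer from tripping the limit. A minimal sketch (the class name is illustrative, not part of any Tumblr SDK):

```python
import time

class TokenBucket:
    """Token bucket sized for a fixed request budget over a refill period."""
    def __init__(self, capacity=1000, refill_period=3600.0):
        self.capacity = capacity
        self.rate = capacity / refill_period   # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_acquire(self):
        # Top up tokens earned since the last call, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over budget: caller should wait or queue the request
```

A consumer with both an hourly and a daily cap would simply check two buckets (1,000/hour and 5,000/day) before each request.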