How to increase Google Sheets v4 API quota limitations

The new Google Sheets API v4 currently has an unlimited read/write quota per day (which is fantastic), but is restricted to 500 reads/writes per account per 100 seconds and 100 reads/writes per key per 100 seconds (or, I have found, per multiple keys coming from the same IP). This is probably plenty for most use cases, but I have an edge case that requires bringing a frequently updated Google Sheet with 70 tabs down to a Node.js server that distributes the data to users' clients every ~30-60 seconds (the users are data annotators who are student research assistants). This wasn't so bad early in the project when there were only 20-30 tabs, but now that the data is large, the server blows through the 100-call quota and returns errors every 10-15 minutes.
The problem breaks down as follows:
Frequent data updates: Only data on 1-5 of the 70 tabs is likely to be updated in any given minute, but which tabs have new data is random (so I am pulling down the whole spreadsheet of 70 tabs = 70 reads).
Update interval: The need for updates happens randomly at roughly 30-second to 5-minute intervals (so some fall within the quota, and some run about 3-5x over it).
Throttling: I have tried throttling the updates to stay within the 100 calls/100 seconds (my previous solution), but this introduces significant usability issues and noticeably decreases productivity and work quality.
Quota increase: The Sheets API does not currently appear to include a way to pay to increase the quota. It does allow filling out a form to request an increase, but I'm not sure what the typical response time is (my request is only a few days old).
Multiple service accounts: I have tried using multiple service accounts to get the full 500 requests/100 seconds quota (rather than the per-user quota), since this is a server, but Google Sheets appears to rate-limit to 100 requests/100 seconds from a given IP.
Alternatives: I have considered that this project may simply have grown beyond the size that Sheets can easily handle, but there do not appear to be any good, usable, self-hosted, collaborative spreadsheets with APIs that are easy to interface with.
Are there any suggested settings or methods for achieving the full 500 calls/100 seconds for a server?

You can request a quota increase in Google Cloud Platform, and it will be increased to 2,500 per account and 500 per user (regarding your fourth point, the quota increase request).

You can use spreadsheets.get to read the entire spreadsheet in a single call, rather than one call per tab. Alternatively, you can use spreadsheets.values.batchGet to read multiple ranges in a single call, if all you need are the values.
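For example, a single values.batchGet call can cover all 70 tabs in one read. Below is a minimal Python sketch (the question's server is Node.js, but the shape of the call is the same); the spreadsheet ID, tab names, and credentials path are placeholders:

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SPREADSHEET_ID = "your-spreadsheet-id"   # placeholder
    TAB_NAMES = ["Tab1", "Tab2", "Tab3"]     # in practice, all 70 tab names

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",              # placeholder path
        scopes=["https://www.googleapis.com/auth/spreadsheets.readonly"],
    )
    sheets = build("sheets", "v4", credentials=creds)

    # One API call (one read against the quota) instead of one call per tab.
    response = sheets.spreadsheets().values().batchGet(
        spreadsheetId=SPREADSHEET_ID,
        ranges=TAB_NAMES,  # a bare sheet name is a valid range covering the whole tab
    ).execute()

    for value_range in response.get("valueRanges", []):
        print(value_range["range"], len(value_range.get("values", [])), "rows")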
The Drive API offers "push notifications", so you can get notified when changes occur and react to those, instead of polling for them. The latency of the notifications is a little on the slow side, but it gets the job done.
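A minimal sketch of registering a Drive push-notification channel for the spreadsheet file; the file ID, webhook address, and credentials path are placeholders, the address must be an HTTPS endpoint you control, and channels expire and need to be renewed:

    import uuid
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",  # placeholder path
        scopes=["https://www.googleapis.com/auth/drive.readonly"],
    )
    drive = build("drive", "v3", credentials=creds)

    channel = drive.files().watch(
        fileId="your-spreadsheet-file-id",  # placeholder: the spreadsheet's Drive file ID
        body={
            "id": str(uuid.uuid4()),        # unique channel ID
            "type": "web_hook",
            "address": "https://example.com/sheets-changed",  # placeholder HTTPS endpoint
        },
    ).execute()

    # Drive POSTs to the webhook whenever the file changes; re-fetch the sheet
    # on notification instead of polling on a fixed interval.
    print(channel)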

Related

Google Pub/Sub + Cloud Run scalability

I have a Python application writing Pub/Sub messages into BigQuery. The Python code uses the google-cloud-bigquery library, and the TableData.insertAll() method quota is 10,000 requests per second per table (see the quotas documentation).
Cloud Run container autoscaling is set to 100 instances with 1,000 requests per container. So technically, I should be able to reach 10,000 requests/sec, right? With the BQ insert API being the biggest bottleneck.
I only get a few hundred requests per second at the moment, with multiple services running at the same time.
CPU and RAM are at about 50%.
Confirming your project structure and a few details given in the comments, I would review the Pub/Sub quotas and limits, especially the Quotas and Resource limits tables, where you can check this information depending on message size; the Throughput quota units section tells you how to calculate quota usage.
I would answer your question with a yes: you should be able to reach 10,000 req/sec. And, as in this related question, depending on the byte size you can insert up to 10,000 rows per request, although the recommendation is 500.
The concurrency in Cloud Run can be modified in case you need to change it.
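As a rough sketch of the 500-rows-per-request recommendation, this batches buffered rows before streaming them to BigQuery; the table ID and row shape are placeholders and error handling is kept minimal:

    from google.cloud import bigquery

    client = bigquery.Client()
    TABLE_ID = "my-project.my_dataset.my_table"  # placeholder

    def insert_in_batches(rows, batch_size=500):
        # Keep each streaming insert (insertAll) call at or below 500 rows.
        for start in range(0, len(rows), batch_size):
            batch = rows[start:start + batch_size]
            errors = client.insert_rows_json(TABLE_ID, batch)
            if errors:
                # Row-level errors come back per batch; log or retry as needed.
                print("Insert errors:", errors)

    # Example: rows decoded from Pub/Sub messages.
    insert_in_batches([{"user_id": 1, "event": "click"}] * 1200)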

Google BigQuery backfill takes very long

I am new to Stack Overflow. I use Google BigQuery to connect data from multiple sources together. I have made a connection to Google Ads (using a BigQuery data transfer) and this works well. But when I run a backfill of older data, it takes more than 3 days to get 180 days of data into BigQuery. Google advises 180 days as the maximum per backfill, but it takes so long. I want to do this for the past 2 years and for multiple clients (we are an agency), in chunks of 180 days.
Does anybody have a solution for this taking so long?
Thanks in advance.
According to the documentation, BigQuery Data Transfer Service supports a maximum of 180 days (as you said) per backfill request and simultaneous backfill requests are not supported [1].
BigQuery Data Transfer Service limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis [2] and other BigQuery tasks in the project may be limiting the amount of resources used by the Transfer. Load jobs created by transfers are included in BigQuery's quotas on load jobs. It's important to consider how many transfers you enable in each project to prevent transfers and other load jobs from producing quotaExceeded errors.
If you need to increase the number of transfers, you can create other projects.
If you want to speed up the transfers for all your clients, you could split them across several projects, since it sounds like you are going to run a large number of transfers.
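A rough sketch of scripting the 180-day chunks with the BigQuery Data Transfer API in Python follows; the transfer config name is a placeholder, and since simultaneous backfills are not supported, in practice you would wait for each chunk's runs to complete before requesting the next:

    from datetime import datetime, timedelta, timezone
    from google.cloud import bigquery_datatransfer

    client = bigquery_datatransfer.DataTransferServiceClient()

    # Placeholder: resource name of your existing Google Ads transfer config.
    TRANSFER_CONFIG = "projects/my-project/locations/us/transferConfigs/1234567890"

    def backfill_in_chunks(start, end, chunk_days=180):
        # Queue manual backfill runs, each covering at most 180 days.
        cursor = start
        while cursor < end:
            chunk_end = min(cursor + timedelta(days=chunk_days), end)
            request = bigquery_datatransfer.StartManualTransferRunsRequest(
                parent=TRANSFER_CONFIG,
                requested_time_range={"start_time": cursor, "end_time": chunk_end},
            )
            client.start_manual_transfer_runs(request=request)
            cursor = chunk_end

    # Backfill the past two years in 180-day chunks.
    now = datetime.now(timezone.utc)
    backfill_in_chunks(now - timedelta(days=730), now)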

Keeping sustained responsiveness

Is there any method for obtaining reliable throughput with BigQuery? I've been using it since around March or May, and we have always seen wild swings in latency, generally not lasting longer than a few days; however, now we are constantly slow.
It seems that this is due to the shared resource pool nature of the service, but they do not make it easy to purchase private resources.
How does one obtain consistent, reliable results from BigQuery?
Details of my usage:
Streaming inserts (1000/s)
Queries on the streaming dataset (up to 250 GB table size)
Querying via the API using the PHP library
My query times vary from 2 seconds to 10 minutes. Ideally, a query should never exceed the 15-30 second range. Any information on what might improve my response times would be helpful.
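One way to narrow down where the time goes is to compare a query job's queued time against its execution time. A minimal Python sketch (the question uses the PHP library, but the job statistics are the same through any client; the table name is a placeholder):

    from google.cloud import bigquery

    client = bigquery.Client()

    # Placeholder query against the streaming table.
    job = client.query("SELECT COUNT(*) FROM `my-project.my_dataset.streamed_events`")
    job.result()  # wait for the query to finish

    # Comparing created/started/ended shows whether time is spent waiting for
    # shared slots (queued) or actually executing.
    print("queued for:  ", job.started - job.created)
    print("executed in: ", job.ended - job.started)
    print("bytes processed:", job.total_bytes_processed)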

Finding an application's scalability point using JMeter

I am trying to find an application's scalability point using JMeter. I define the scalability point as "the minimum number of concurrent users beyond which any increase no longer increases throughput per second".
I am using the following technique: schedule the load test to run for an hour, starting a new thread that sends SOAP/XML-RPC requests every 30 seconds. I do this by setting the number of threads to 120 and the ramp-up period to 3600 seconds.
I then look at the TOTAL row's throughput in my Summary Report listener. A new thread is added every 30 seconds, and the total throughput rises until it plateaus at about 123 requests per second once 80 of the threads are active (in my case). It then slowly drops to 120 requests per second as the last 20 threads are added. I conclude that my application's scalability point is 123 requests per second with 80 active users.
My question: is this a valid way to find an application's scalability point, or is there a different technique I should be trying?
From a technical perspective what you're doing does answer your question regarding one specific user scenario, though I think you might be missing the big picture.
First of all, keep in mind that the actual HTTP requests you're sending and the ramp-up times can often affect what you call a scalability point. Are your requests hitting a cache? Are they not random enough? Are they too random? Do they represent real-world requests? Is 30 seconds going to give you the same results as 20 seconds or 10 seconds?
From my personal experience, it's MUCH easier and more intuitive to look at graphs when trying to analyze app performance. It's not just a question of raw numbers but also of looking at trends and rates of change.
For example, here is a test of the ghost.org blogging platform using JMeter with an interactive JMeter results graph:
http://blazemeter.com/blog/ghost-performance-benchmark
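If you export the (active threads, throughput) pairs from the Summary Report, a small script can locate the plateau programmatically. A minimal sketch in Python; the 1% gain threshold and the sample data are arbitrary assumptions:

    def find_scalability_point(samples, min_gain=0.01):
        # samples: list of (active_threads, throughput_per_sec), ordered by thread count.
        # Returns the first point where adding threads no longer raises throughput
        # by at least min_gain (1% by default).
        for (t1, x1), (t2, x2) in zip(samples, samples[1:]):
            if x1 > 0 and (x2 - x1) / x1 < min_gain:
                return t1, x1
        return samples[-1]

    # Example: throughput climbs, plateaus around 80 threads (~123 req/s), then dips.
    data = [(20, 60.0), (40, 100.0), (60, 118.0), (80, 123.0), (100, 122.0), (120, 120.0)]
    print(find_scalability_point(data))  # -> (80, 123.0)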

Bloomberg API request limit

Is there any way to determine how many requests, or how much data, you have remaining in your request limit for the Bloomberg API?
From the Bloomberg HelpDesk in April 2014 (this is valid for a basic desktop client):
We have three kinds of limits.
You can have no more than 3,500 real-time fields open at the same time. If you exceed this limit you will see "NA Limit" as the error message, and you just need to delete some securities/fields in order for the error message to disappear and the values to show.
We also have a daily limit. The daily API limit is 500,000 hits per day. A "hit" is defined as one request for a single security/field pairing. Therefore, if you request static data for 5 fields and 10 securities, that translates into a total of 50 hits. So try to refresh just the portion of the spreadsheet that really needs to be refreshed, and avoid refreshing it all or reopening it many times a day.
The last limit is a monthly limit. Our monthly limit comes from a proprietary model; only about 0.4% of our user database ever goes over this limit. The limit is based on unique securities and depends on the type of data being downloaded. For example, some data on the system, such as intra-day data, is valued a little higher than historical end-of-day data for any given list of securities. We do not recommend more than 5,000 to 7,000 unique identifiers per month, and a limit upgrade will only allow you to get the data needed to complete your project. Once a security has been used in a month, using it again will not count again towards the monthly limit.
We normally grant 2 resets per month in case you exceed your daily limit, and if you exceed your monthly limit we grant 1 extension per month (10% more). If you breach the limit again, you will then need to wait until midnight for the daily limit to reset automatically, or until the end of the month for the monthly limit to reset.
Bloomberg do not state what the explicit limits are, and there is no programmatic way of finding out what the limits are or what proportion of your limits you have used.
The best information from Bloomberg that I have found is on the WAPI page (in the terminal). On the menus on the LHS, go to WAPI Home > API Resources > API Data Limits. There are two pages, 'Extended Rules and Usage Limits' and 'Managing Your API Data Limits' that shed some further light on the matter.
Broadly speaking... there is a daily limit of individual data requests (i.e. security/field pairs - but duplicates are counted for each request). However, your limit for subscriptions is based on the number of securities you are subscribing to concurrently - i.e. if you expect to be requesting the price of a security every 5 mins, you are much better off subscribing to that security's price. Then there is a monthly limit that is based on the number of unique securities that you are making requests for.
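As a rough illustration of the hit arithmetic described above (just the securities x fields x refreshes multiplication, not an official Bloomberg calculation):

    def estimate_daily_hits(num_securities, num_fields, refreshes_per_day):
        # One hit = one security/field pair per request; duplicates count each time.
        return num_securities * num_fields * refreshes_per_day

    DAILY_LIMIT = 500_000

    # 10 securities x 5 fields, refreshed 100 times a day = 5,000 hits.
    hits = estimate_daily_hits(10, 5, 100)
    print(hits, "hits =", 100 * hits / DAILY_LIMIT, "% of the daily limit")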
There is an upper limit on the Bloomberg API of 500,000 hits per day.
-- information from the Bloomberg help desk (HELP HELP)
The daily limit is clearly stated; it is the monthly limit that is not, to my knowledge, disclosed in writing. I have been told the following in the context of discussions about Data Licence, which is one Bloomberg product for bulk data subscription. The monthly limit is expressed as a budget in dollars: the equivalent price of your requests if they were priced under the Data Licence schema, which clearly is not secret if you enquire about that product.
So why the secrecy about the budget? The reason it is commercially sensitive is that this budget is many times the monthly cost of the Terminal Licence, so clearly if you (a) know what it is and (b) either have API access to the budget spent (nope) or write software to 'count the cost' (not hard), then you could pony up a couple of terminals and vastly reduce your Data Licence spend. Bloomberg naturally frown on this sort of activity because it represents an arbitrage opportunity in their pricing model, and it is not really 'playing nice'. They likewise do not like it if you hit 'the wrong kind of data' too often, or the monthly limit at all often, and these activities may prompt them to investigate your business model to be sure you are in compliance with all the terms and conditions of the Data Addendum.
Out of courtesy to Bloomberg I am not posting that budget number here, but you should be able to get it from your salesperson and confirm the validity of what I have said; it may change at any time, as it is not part of any contract.
I don't believe this is possible programmatically, however if you speak to the Bloomberg helpdesk they will be able to tell you whether you are near the limit, and reset it for you if necessary. Obviously they will only do that a certain number of times. I have not managed to get a definitive answer as to what the limit is, but it's designed to be large enough that you would not hit it just running spreadsheets, which have a limit of 3500 Bloomberg real-time formulas.
If you feel the download limit is not breached but you still get the error message, you can run the following steps to solve the issue:
Close Excel completely.
From the Windows "Start" menu, select All Programs > Bloomberg > Stop API Process. A command prompt window appears.
Press <Enter> to close the window.
From the Windows Start menu, select All Programs > Bloomberg > API Environment Diagnostics.
Click the Start button.
When the test is complete, if there are any red errors, click the "Repair" button.
Re-open Excel and test a formula.
500,000 data points is the approximate daily limit; however, remember that different types of data use up varying amounts. It is not 1 for 1. Typically, requests for esoteric securities and fields will use up more data per request than, say, PX_LAST for AAPL US. Also, there are different types of requests, such as reference or historical, which also consume your limit differently.
If you are requesting intra-day realtime data, these fields are typically not charged to your usage limit. Rather you have limits on how many times the realtime 'pipe' can be opened.
Bloomberg are typically very helpful about resetting your monthly data usage limit should you exceed it on an ad hoc basis. This is not written company policy, but it seems to be part of their customer care. If you are persistently breaching limits each month, they are likely to stop resetting your limits and try to move you to B-PIPE. But otherwise, in my experience, they are flexible.