Google Pub/Sub + Cloud Run scalability

I have a Python application writing Pub/Sub messages into BigQuery. The Python code uses the google-cloud-bigquery library, and the TableData.insertAll() method quota is 10,000 requests per second per table (Quotas documentation).
Cloud Run container autoscaling is set to 100 instances with 1,000 requests per container. So technically, I should be able to reach 10,000 requests/sec, right? With the BQ insert API being the biggest bottleneck.
At the moment I only reach a few hundred requests per second, with multiple services running at the same time.
CPU and RAM are at 50%.

Having confirmed your project structure and a few details given in the comments, I would review the Pub/Sub quotas and limits, in particular the Quota and Resource limits tables, where you can check the relevant limits by size; the Throughput quota units section explains how to calculate quota usage.
I would answer your question with a yes: you are able to reach 10,000 req/sec. And, as in this related question, depending on the byte size you can send up to 10,000 rows per insert, although the recommendation is 500 rows per request.
The concurrency setting in Cloud Run can also be modified if you need to change it.
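As an illustration of the insert path, here is a minimal sketch of a Cloud Run push handler that streams each Pub/Sub message into BigQuery with the Python client; the table ID, payload shape, and endpoint are placeholder assumptions, not your actual code.

```python
# Minimal sketch: Cloud Run handler that streams Pub/Sub push messages into
# BigQuery via the streaming insert API (insert_rows_json). Table name and
# payload shape are illustrative placeholders.
import base64
import json

from flask import Flask, request
from google.cloud import bigquery

app = Flask(__name__)
bq_client = bigquery.Client()
TABLE_ID = "my-project.my_dataset.my_table"  # placeholder


@app.route("/", methods=["POST"])
def pubsub_push():
    envelope = request.get_json()
    message = envelope["message"]
    payload = json.loads(base64.b64decode(message["data"]).decode("utf-8"))

    # insert_rows_json issues one insertAll request; if you batch rows,
    # keep each request at or below ~500 rows per the recommendation above.
    errors = bq_client.insert_rows_json(TABLE_ID, [payload])
    if errors:
        # A non-2xx response makes Pub/Sub redeliver the message.
        return (f"Insert errors: {errors}", 500)
    return ("", 204)
```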

Related

Google big query backfill takes very long

I am new to Stack Overflow. I use Google BigQuery to connect data from multiple sources together. I have made a connection to Google Ads (using a BigQuery data transfer) and this works well. But when I run a backfill of older data, it takes more than 3 days to get 180 days of data into BigQuery. Google advises 180 days as the maximum per backfill, but it still takes very long. I want to do this for the past 2 years and for multiple clients (we are an agency), so I need to do it in chunks of 180 days.
Does anybody have a solution for this taking so long?
Thanks in advance.
According to the documentation, BigQuery Data Transfer Service supports a maximum of 180 days (as you said) per backfill request and simultaneous backfill requests are not supported [1].
BigQuery Data Transfer Service limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis [2] and other BigQuery tasks in the project may be limiting the amount of resources used by the Transfer. Load jobs created by transfers are included in BigQuery's quotas on load jobs. It's important to consider how many transfers you enable in each project to prevent transfers and other load jobs from producing quotaExceeded errors.
If you need to run more transfers, you can create additional projects.
If you want to speed up the transfers for all your clients, you could split them across several projects, since it sounds like you are going to run a significant number of transfers.
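If it helps to see the shape of the API, below is a rough sketch of requesting a single 180-day backfill window with the Python Data Transfer client; the transfer config name and dates are placeholders, and since simultaneous backfill requests are not supported, you would request the windows covering your two-year range one after another (per project).

```python
# Rough sketch: request one 180-day backfill window for an existing
# transfer config via the BigQuery Data Transfer Service client.
# The transfer config name and the date range are placeholders.
import datetime

from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()

transfer_config_name = (
    "projects/my-project/locations/us/transferConfigs/1234567890"  # placeholder
)

end = datetime.datetime.now(datetime.timezone.utc)
start = end - datetime.timedelta(days=180)  # 180 days is the maximum per request

response = client.start_manual_transfer_runs(
    request={
        "parent": transfer_config_name,
        "requested_time_range": {"start_time": start, "end_time": end},
    }
)
for run in response.runs:
    print(run.name, run.state)
```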

s3 rate limit against website endpoint

I'm hitting my S3 bucket via its website endpoint with various paths/keys. I'm able to get OK (200) responses when hitting it at 1,000 requests per second over the course of 5 minutes. I'm using a popular tool, https://github.com/tsenart/vegeta, so I have confidence in these stats.
This is surprising, considering the documentation says that anything above 800 requests per second is problematic.
Is using a website endpoint different from an API call in terms of throttling? Is 800 a real rate limit or a crude threshold?
It's a soft limit, and not really a limit from the bucket-level perspective. Read carefully: the documentation warns that a rapid increase in the request rate beyond 800 requests per second may result in temporary rate limits on your request rate.
S3 increases available capacity by keyspace partition splitting and it takes some time for this to happen... but buckets scale up with workload.
If you are requesting the same object(s) repeatedly, you are also not likely to be imposing as much load on the available resources as you would be if you were hitting 800 unique objects per second and reading between the lines, that is the threshold under discussion -- the time to look up keys in the bucket index. Recent hits are probably already more accessible than cold spots in the index.
The problem that document highlights is that if your object keys are lexically sequential, then S3 will be unable to split the partitions meaningfully, because you will always be creating new objects on only one side of the split or the other, thus working against S3's scaling algorithm.
The documentation has since been updated and the limits have been increased. The limits are now per bucket prefix, and 1,000 req/s is no longer a problem. For more, see the linked documentation.
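To illustrate the key-naming point, here is a small sketch of the classic workaround from the older guidance: prepending a short hash-derived prefix so keys are not lexically sequential and writes spread across keyspace partitions. The bucket layout and key names are hypothetical, and with the current per-prefix limits this is rarely necessary.

```python
# Illustration of the older S3 guidance: avoid lexically sequential keys
# (e.g. logs/2024-01-01/event-000001, -000002, ...) by prepending a short
# hash prefix so writes spread across keyspace partitions.
# Key names here are hypothetical.
import hashlib


def partitioned_key(sequential_key: str, prefix_len: int = 4) -> str:
    """Derive a short, stable hash prefix from the original key."""
    digest = hashlib.md5(sequential_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{sequential_key}"


for i in range(3):
    original = f"logs/2024-01-01/event-{i:06d}.json"
    print(partitioned_key(original))
    # e.g. 'a1b2/logs/2024-01-01/event-000000.json'
```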

How to increase Google Sheets v4 API quota limitations

The new Google Sheets API v4 currently has an unlimited read/write quota per day (which is fantastic), but is restricted to 500 reads/writes per account per 100 seconds, and 100 reads/writes per key per 100 seconds (or, I have found, multiple keys coming from the same IP). This is probably plenty for most use cases, but I have an edge case that requires bringing a frequently updated Google Sheet with 70 tabs down to a Node.js server that distributes the data to users' clients every ~30-60 seconds (the users are data annotators who are student research assistants). This wasn't so bad early in the project when there were only 20-30 tabs, but now that the data is large, the server is blowing through the 100-call quota and returning errors every 10-15 minutes.
The problem is such that:
1. Frequent data updates: Only data on 1-5 of the 70 tabs is likely to be updated in any given minute, but which tabs have new data is random (so I am pulling down the whole sheet of 70 tabs = 70 reads).
2. Update interval: The need for updates happens randomly at roughly 30-second to 5-minute intervals (so some within the quota, some about 3-5x the quota).
3. Throttling: I have tried throttling the updates to stay within the 100 calls/100 seconds (my previous solution), but this introduces large usability issues, significantly decreasing usability/productivity/work quality.
4. Quota increase: The Sheets API does not currently appear to include a way to pay to increase the quota. It does allow filling out a form to request an increase, but I'm not sure what the mean response time on this is (my request is only a few days old).
5. Multiple service accounts: I have tried using multiple service accounts to get the full 500 requests/100 seconds quota (rather than the per-user quota), since this is a server, but Google Sheets looks to rate-limit to 100 requests/100 seconds from a given IP.
6. Alternatives: I have considered that this project may have just grown beyond the size that Sheets can easily handle, but there do not appear to be any good, usable, self-hosted, collaborative spreadsheets with easy-to-interface-to APIs out there.
Are there settings/methods suggested to achieve the full 500 calls/100 seconds for a server?
You can request a quota update in Google Cloud Platform, and it will be increased to 2,500 per account and 500 per user (regarding your point 4).
You can use spreadsheets.get to read the entire spreadsheet in a single call, rather than one call per tab. Alternatively, you can use spreadsheets.values.batchGet to read multiple ranges in a single call, if all you need are the values.
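As a rough sketch of the batchGet approach with google-api-python-client (the spreadsheet ID, tab names, and service-account file are placeholders):

```python
# Rough sketch: read many tabs in one API call with spreadsheets.values.batchGet
# instead of one values.get call per tab. Spreadsheet ID, tab names, and the
# credentials file path are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("sheets", "v4", credentials=creds)

spreadsheet_id = "YOUR_SPREADSHEET_ID"
tab_names = [f"Sheet{i}" for i in range(1, 71)]  # 70 tabs, one range each

result = (
    service.spreadsheets()
    .values()
    .batchGet(spreadsheetId=spreadsheet_id, ranges=tab_names)
    .execute()
)
for value_range in result.get("valueRanges", []):
    print(value_range["range"], len(value_range.get("values", [])), "rows")
```

This turns 70 reads into a single request against the per-100-seconds quota, at the cost of transferring all tabs every time.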
The Drive API offers "push notifications", so you can get notified when changes occur and react to those, instead of polling for them. The latency of the notifications is a little on the slow side, but it gets the job done.

Dataflow to BigQuery quota

I found a couple related questions, but no definitive answer from the Google team, for this particular question:
Is a Cloud DataFlow job, writing to BigQuery, limited to the BigQuery quota of 100K rows-per-second-per-table (i.e. BQ streaming limit)?
google dataflow write to bigquery table performance
Cloud DataFlow performance - are our times to be expected?
Edit:
The main motivation is to find a way to predict runtimes for various input sizes.
I've managed to run jobs which show > 180K rows/sec processed via the Dataflow monitoring UI, but I'm unsure whether this is somehow throttled on the insert into the table, since the job runtime was about 2x slower than a naive calculation suggests (500M rows / 180K rows/sec ≈ 45 min, yet it actually took almost 2 hours).
From your message, it sounds like you are executing your pipeline in batch, not streaming, mode.
In batch mode, jobs run on the Google Cloud Dataflow service do not use BigQuery's streaming writes. Instead, we write all the rows to be imported to files on GCS and then invoke a BigQuery load job. Note that this reduces your costs (load jobs are cheaper than streaming writes) and is more efficient overall (BigQuery can be faster doing a bulk load than doing per-row imports). The tradeoff is that no results are available in BigQuery until the entire load job finishes successfully.
Load jobs are not limited to a certain number of rows per second; rather, they are limited by the daily quotas.
In Streaming mode, Dataflow does indeed use BigQuery's streaming writes. In that case, the 100,000 rows per second limit does apply. If you exceed that limit, Dataflow will get a quota_exceeded error and will then retry the failing inserts. This behavior will help smooth out short-term spikes that temporarily exceed BigQuery's quota; if your pipeline exceeds quota for a long period of time, this fail-and-retry policy will eventually act as a form of backpressure that slows your pipeline down.
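For reference, here is a sketch of where that choice surfaces in the current Apache Beam Python SDK; the table, schema, input values, and GCS temp location are placeholders, and the default method resolves to FILE_LOADS for batch pipelines and STREAMING_INSERTS for streaming ones.

```python
# Sketch of selecting the BigQuery write method in the Apache Beam Python SDK.
# Table, schema, input values, and the GCS temp location are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

table_spec = "my-project:my_dataset.my_table"  # placeholder
table_schema = "user:STRING,score:INTEGER"     # placeholder

with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Create" >> beam.Create([{"user": "alice", "score": 1}])
        | "WriteToBQ"
        >> beam.io.WriteToBigQuery(
            table_spec,
            schema=table_schema,
            # FILE_LOADS writes to GCS and runs a load job (batch behavior);
            # STREAMING_INSERTS uses the streaming API and its per-table quota.
            method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
            custom_gcs_temp_location="gs://my-bucket/tmp",  # placeholder
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```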
--
As for why your job took 2 hours instead of 45 minutes: your job has multiple stages that proceed serially, so using the throughput of the fastest stage is not an accurate way to estimate end-to-end runtime. For example, the BigQuery load job is not initiated until after Dataflow finishes writing all rows to GCS. Your rates seem reasonable, but please follow up if you suspect a performance degradation.

Finding an application's scalability point using JMeter

I am trying to find an application's scalability point using JMeter. I define the scalability point as "the minimum number of concurrent users beyond which any increase no longer increases the throughput per second".
I am using the following technique: I schedule my load test to run for an hour, starting a new thread sending SOAP/XML-RPC requests every 30 seconds. I do this by setting my number of threads to 120 and my ramp-up period to 3600 seconds.
Then I look at the Throughput in the TOTAL row of my Summary Report listener. A new thread is added every 30 seconds, and the total throughput rises until it plateaus at about 123 requests per second once 80 of the threads are active (in my case). It then slowly drops to 120 requests per second as the last 20 threads are added. I then conclude that my application's scalability point is 123 requests per second with 80 active users.
My question: is this a valid way to find an application's scalability point, or is there a different technique I should be trying?
From a technical perspective, what you're doing does answer your question for one specific user scenario, though I think you might be missing the big picture.
First of all, keep in mind that the actual HTTP requests you're sending and the ramp-up times can often impact what you call a scalability point. Are your requests hitting a cache? Are they not random enough? Are they too random? Do they represent real-world requests? Is 30 seconds going to give you the same results as 20 seconds or 10 seconds?
From my personal experience, it's MUCH easier and more intuitive to look at graphs when trying to analyze application performance. It's not just a question of raw numbers but also of looking at trends and rates of change.
For example, here is a benchmark of the Ghost.org blogging platform using JMeter with an interactive JMeter results graph:
http://blazemeter.com/blog/ghost-performance-benchmark
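As a small illustration of that graph-first approach, here is a sketch that plots throughput against active threads from a JMeter results CSV; the file name is hypothetical, and the column names (timeStamp, allThreads) assume the default CSV sample-log configuration.

```python
# Small sketch: plot throughput vs. active threads from a JMeter results CSV
# to eyeball the plateau. The file name and column names ("timeStamp",
# "allThreads") assume the default CSV sample-log format.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("results.csv")  # hypothetical JMeter CSV output

# Bucket samples into 30-second windows and compute requests/sec per window.
df["window"] = (df["timeStamp"] // 30000) * 30000
summary = df.groupby("window").agg(
    throughput=("timeStamp", lambda s: len(s) / 30.0),
    active_threads=("allThreads", "max"),
)

plt.plot(summary["active_threads"], summary["throughput"], marker="o")
plt.xlabel("Active threads")
plt.ylabel("Throughput (req/s)")
plt.title("Throughput vs. concurrency")
plt.show()
```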