Very different file sizes when exporting data from BigQuery to GCS - google-bigquery

I am exporting data from BQ to GCS with the following query:
export_query = f"""
EXPORT DATA
OPTIONS(
  uri='{uri}',
  format='PARQUET',
  overwrite=true,
  compression='GZIP')
AS {query}"""
and I am seeing that the resulting files are of very different sizes; a few of them are 10x larger than the rest. I am wondering why this happens. And how can I make sure the files all have a similar size?

BigQuery limits the maximum amount of data exported to a single file to 1 GB. To export more than 1 GB, a wildcard can be used to split the data into multiple files. When exporting data to multiple files, the file sizes vary, as mentioned in the documentation. You can check the possible options for the destinationUris property in this link.
When you export data to multiple files, the file sizes will vary because the number of files depends on the number of workers that export the table/query to GCS in Parquet format. Combining the results into one file would require an additional shuffling step to ensure that all of the data ends up on the same partition, which is not something BigQuery currently does.
If you want to control the number of files, then you need to use Dataflow.
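For illustration, a minimal Apache Beam / Dataflow sketch (the table, bucket, schema, and shard count below are placeholders, not from the original post) that fixes the number of output Parquet files via num_shards:

import apache_beam as beam
import pyarrow
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder schema for the illustration; use your table's real columns.
schema = pyarrow.schema([("id", pyarrow.int64()), ("name", pyarrow.string())])

with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromBigQuery(
            query="SELECT id, name FROM `my_project.my_dataset.my_table`",
            use_standard_sql=True,
            gcs_location="gs://your_bucket/tmp")  # temp area for the BigQuery export
        | "Write" >> beam.io.WriteToParquet(
            file_path_prefix="gs://your_bucket/export/part",
            schema=schema,
            num_shards=10)  # fixed number of output files
    )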

Related

Unable to extract data in a single .csv file from Google Big Query (though data is smaller than 1GB)

I am able to export the data in 4 different files of about 90 MB each (which doesn't make sense).
I have read the limitations of Google Big Query and it says that data with more than 1 GB in size cannot be downloaded in a single CSV file.
My data size is about 250 - 300 MB in size.
This is what usually I do to export data from GBQ:
I saved the table in Google Big Query (as it has more than 16000 rows)
Then exported it to the bucket as follows:
gs://[your_bucket]/file-name-*.csv
I think 2M rows of data is less than 1 GB. (Let me know if I am wrong)
Can I get this data in a single CSV file?
Thank you.
You should take out the wildcard from the name of the blob you want to write to; the wildcard is what tells BQ you want to export as multiple files.
So you should instead export to gs://[your_bucket]/file-name.csv
As you noted, this won't work if your data is bigger than 1 GB, but you should be fine if the total is about 300 MB.
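For example, a minimal sketch with the Python client (the table and bucket names are placeholders); a destination URI without a wildcard produces a single file:

from google.cloud import bigquery

client = bigquery.Client()
# No wildcard in the destination URI -> BigQuery writes a single CSV file.
# This only succeeds while the exported data stays under the 1 GB per-file limit.
# Table and bucket names below are placeholders.
extract_job = client.extract_table(
    "my_project.my_dataset.my_table",
    "gs://your_bucket/file-name.csv",
)
extract_job.result()  # wait for the export job to finish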
You can get a Node.js readable stream that contains the result of your query (https://cloud.google.com/nodejs/docs/reference/bigquery/2.0.x/BigQuery#createQueryStream).
Each chunk of data is a row of the result set.
You can then write the data (row by row) to CSV, either locally or to Cloud Storage.
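The same row-by-row idea, sketched in Python for consistency with the rest of this page (the query and output path are placeholder assumptions):

import csv
from google.cloud import bigquery

client = bigquery.Client()
# Stream the query results directly instead of going through a GCS export.
# Query and output filename are placeholders.
rows = client.query("SELECT * FROM `my_project.my_dataset.my_table`").result()

with open("export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([field.name for field in rows.schema])  # header row
    for row in rows:
        writer.writerow(row.values())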

Exporting Data using BigQuery multiple wildcard URIs

Trying to export the table data in BigQuery to buckets created in Google Cloud Storage.
When I export the table in BigQuery to GCS with a single wildcard URI, it automatically splits the table into multiple sharded files (around 368 MB per file), which land in the designated buckets in GCS.
Here is the command:
bq --nosync extract --destination_format=CSV '<bq table>' 'gs://<gcs_bucket>/*.csv'
The file size and number of files remain the same (around 368 MB per file) even with the use of multiple URIs:
bq --nosync extract --destination_format=CSV '<bq table>' 'gs://<gcs_bucket>/1-*.csv','gs://<gcs_bucket>/2-*.csv','gs://<gcs_bucket>/3-*.csv','gs://<gcs_bucket>/4-*.csv','gs://<gcs_bucket>/5-*.csv'
I am trying to figure out how to use multiple URIs option to reduce the file size.
I believe BigQuery does not provide any guarantee on the file sizes it produces, so what you observed is correct: the file sizes may not differ whether or not multiple wildcard URIs are specified.
The common use case for multiple wildcard URIs is to tell BigQuery to distribute the output files evenly into N patterns, so that you can feed each output URI pattern to a downstream worker.
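As a sketch of that pattern with the Python client (the bucket, table, and worker count are placeholders), each downstream worker would then read one URI pattern:

from google.cloud import bigquery

client = bigquery.Client()
# One wildcard pattern per downstream worker; BigQuery distributes the exported
# shards across the patterns, but the per-file size is still up to BigQuery.
# Bucket and table names are placeholders.
destination_uris = [
    "gs://your_bucket/worker-1/part-*.csv",
    "gs://your_bucket/worker-2/part-*.csv",
    "gs://your_bucket/worker-3/part-*.csv",
]
job = client.extract_table("my_project.my_dataset.my_table", destination_uris)
job.result()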

Google bigquery export big table to multiple objects in Google Cloud storage

I have two bigquery tables, bigger than 1 GB.
To export them to Storage, I followed
https://googlecloudplatform.github.io/google-cloud-php/#/docs/google-cloud/v0.39.2/bigquery/table?method=export
$destinationObject = $storage->bucket('myBucket')->object('tableOutput_*');
$job = $table->export($destinationObject);
I used a wildcard.
The strange thing is that one BigQuery table is exported to 60 files, each of them 3-4 MB in size.
The other table is exported to 3 files, each of them close to 1 GB (900 MB).
The code is the same. The only difference is that, for the table exported to 3 files, I put the output into a subfolder.
The one exported to 60 files is one level above the subfolder.
My question is: how does BigQuery decide whether a table will be broken into dozens of smaller files or just a few big files (as long as each file is less than 1 GB)?
Thanks!
BigQuery makes no guarantees on the sizes of the exported files, and there is currently no way to adjust this.

Increasing Spark Read and Parquet Conversion Performance for Gzipped Text File

Use case:
A> Gzipped text files in an AWS S3 location
B> A Hive table created on top of the files, to access the data from the files as a table
C> A Spark DataFrame used to read the table and convert it into Parquet data with Snappy compression
D> The number of fields in the table is 25, which includes 2 partition columns. The data type is String, except for two fields which have Decimal as their data type.
Used the following Spark options: --executor-memory 37G --executor-cores 5 --num-executors 20
Cluster size - 10 data nodes of type r3.8xlarge
I found that the number of vCores used in AWS EMR is always equal to the number of files, maybe because gzip files are not splittable. The gzipped files come from a different system, and each file is around 8 GB.
The total time taken is more than 2 hours for the Parquet conversion of 6 files with a total size of 29.8 GB.
Is there a way to improve the performance via Spark, using version 2.0.2?
Code Snippet:
val srcDF = spark.sql(stgQuery)
srcDF.write
  .partitionBy("data_date", "batch_number")
  .options(Map(
    "compression" -> "snappy",
    "spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version" -> "2",
    "spark.speculation" -> "false"))
  .mode(SaveMode.Overwrite)
  .parquet(finalPath)
It doesn't matter how many nodes you ask for or how many cores there are: if you have 6 files, six threads will be assigned to work on them. Try to do one of the following:
save in a splittable format (snappy)
get the source to save their data as many smaller files
do some incremental conversion into a new format as you go along (e.g. a single Spark Streaming core polling for new gzip files, then saving them elsewhere into snappy files; see the sketch below). Maybe try AWS Lambda as the trigger for this, to save dedicating a single VM to the task.
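A minimal PySpark Structured Streaming sketch of that incremental approach (the paths, schema, delimiter, and trigger interval are all placeholder assumptions):

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DecimalType

spark = SparkSession.builder.appName("incremental-gzip-to-parquet").getOrCreate()

# Streaming file sources need an explicit schema; this one is a placeholder.
schema = StructType([
    StructField("col_a", StringType()),
    StructField("col_b", StringType()),
    StructField("amount", DecimalType(18, 2)),
])

stream = (
    spark.readStream
    .schema(schema)
    .option("sep", "\t")                  # assumes tab-delimited text; adjust as needed
    .csv("s3://source-bucket/incoming/")  # picks up new .gz files as they land
)

query = (
    stream.writeStream
    .format("parquet")
    .option("path", "s3://target-bucket/parquet/")
    .option("checkpointLocation", "s3://target-bucket/checkpoints/")
    .option("compression", "snappy")
    .trigger(processingTime="10 minutes")
    .start()
)
query.awaitTermination()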

how to limit the size of the files exported from BigQuery to GCS?

I used Python code to export data from BigQuery to GCS, and then gsutil to copy it to S3. But after exporting to GCS, I noticed that some files are more than 5 GB, which gsutil cannot deal with. So I want to know how to limit the file size.
So, following the issue tracker, the correct way to understand this is:
Single URI ['gs://[YOUR_BUCKET]/file-name.json']
Use a single URI if you want BigQuery to export your data to a single
file. The maximum exported data with this method is 1 GB.
Please note that the limit applies to the data size, up to a maximum of 1 GB; it is not a limit on the size of the file that is exported.
Single wildcard URI ['gs://[YOUR_BUCKET]/file-name-*.json']
Use a single wildcard URI if you think your exported data set will be
larger than 1 GB. BigQuery shards your data into multiple files based
on the provided pattern. Exported file sizes may vary, and files won't
be equal in size.
So again, you need to use this method when your data size is above 1 GB; the resulting file sizes may vary and may go beyond 1 GB, so the 5 GB and 160 MB pair you mentioned can happen with this method.
Multiple wildcard URIs
['gs://my-bucket/file-name-1-*.json',
'gs://my-bucket/file-name-2-*.json',
'gs://my-bucket/file-name-3-*.json']
Use multiple wildcard URIs if you want to partition the export output.
You would use this option if you're running a parallel processing job
with a service like Hadoop on Google Cloud Platform. Determine how
many workers are available to process the job, and create one URI per
worker. BigQuery treats each URI location as a partition, and uses
parallel processing to shard your data into multiple files in each
location.
The same applies here as well: exported file sizes may vary and can go beyond 1 GB.
Try using a single wildcard URI.
See documentation for Exporting data into one or more files
Use a single wildcard URI if you think your exported data will be
larger than BigQuery's 1 GB per file maximum value. BigQuery shards
your data into multiple files based on the provided pattern. If you
use a wildcard in a URI component other than the file name, be sure
the path component does not exist before exporting your data.
Property definition:
['gs://[YOUR_BUCKET]/file-name-*.json']
Creates:
gs://my-bucket/file-name-000000000000.json
gs://my-bucket/file-name-000000000001.json
gs://my-bucket/file-name-000000000002.json ...
Property definition:
['gs://[YOUR_BUCKET]/path-component-*/file-name.json']
Creates:
gs://my-bucket/path-component-000000000000/file-name.json
gs://my-bucket/path-component-000000000001/file-name.json
gs://my-bucket/path-component-000000000002/file-name.json