I am working with a 3rd-party dataset that is loaded into a Google Cloud Storage bucket, and I am attempting to load it into BigQuery to run some pruning steps. The dataset has a set of columns that don't follow the BigQuery column naming convention, and thus the load fails when considering the full file (which is approximately 4,000 partitions and a little more than a TB).
I can build out a distributed Python process to drop/rename the offending columns, but this seems like an ineffective use of computational resources. Does BigQuery offer some direct method of loading a subset of columns from a parquet file?
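For context, the per-partition cleanup I'd rather not have to distribute would look roughly like this pyarrow sketch (the paths and the rename rule are hypothetical, and it assumes pyarrow can reach GCS via its GCS filesystem support):
import re
import pyarrow.parquet as pq

# Hypothetical per-partition paths in the bucket.
SRC = "gs://my-bucket/raw/part-00000.parquet"
DST = "gs://my-bucket/cleaned/part-00000.parquet"

def to_bq_name(name: str) -> str:
    # BigQuery column names allow only letters, digits, and underscores,
    # and cannot start with a digit.
    name = re.sub(r"\W", "_", name)
    return "_" + name if name[0].isdigit() else name

table = pq.read_table(SRC)
table = table.rename_columns([to_bq_name(c) for c in table.column_names])
pq.write_table(table, DST)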
Our team drops parquet files on blob storage, and one of their main uses is to allow analysts (whose comfort zone is SQL syntax) to query them as tables. They will do this in Azure Databricks.
We've mapped the blob storage and can access the parquet files from a notebook. Currently, they are loaded and "prepped" for SQL querying in the following way:
Cell1:
%python
# Load the data for the specified job
dbutils.widgets.text("JobId", "", "Job Id")
results_path = f"/mnt/{getArgument('JobId')}/results_data.parquet"
df_results = spark.read.load(results_path)
df_results.createOrReplaceTempView("RESULTS")
The cell following this can now start running SQL queries, e.g.:
SELECT * FROM RESULTS LIMIT 5
This takes a bit of time to get up and running, but not too much. I'm concerned about two things:
Am I loading this in the most efficient way possible, or is there a way to skip the creation of the df_results dataframe, which is only used to create the RESULTS temp view?
Am I loading the table for SQL in a way that lets it be used most efficiently? For example, if the user plans to execute a few dozen queries, I don't want to re-read from disk each time if I don't have to, but there's no need to persist beyond this notebook. Is createOrReplaceTempView the right method for this?
For your first question:
Yes, you can use the Hive Metastore on Databricks and query any tables in there without first creating DataFrames. The documentation on Databases and Tables is a fantastic place to start.
As a quick example, you can create a table using SQL or Python:
# SQL
CREATE TABLE <example-table>(id STRING, value STRING)
# Python
dataframe.write.saveAsTable("<example-table>")
Once you've created or saved a table this way, you'll be able to access it directly in SQL without creating a DataFrame or temp view.
# SQL
SELECT * FROM <example-table>
# Python
spark.sql("SELECT * FROM <example-table>")
For your second question:
Performance depends on multiple factors but in general, here are some tips.
If your tables are large (tens or hundreds of GB at least), you can partition on a column your analysts commonly use to filter data. For example, if their queries typically include a WHERE clause on a date range or state, it might make sense to partition the table by one of those columns. The key concept here is data skipping (there's a rough sketch after these tips).
Use Delta Lake to take advantage of OPTIMIZE and ZORDER. OPTIMIZE helps right-size files for Spark and ZORDER improves data skipping.
Choose Delta Cache Accelerated instance types for the cluster that your analysts will be working on.
I know you said there's no need to persist beyond the notebook but you can improve performance by creating persistent tables and taking advantage of data skipping, caching, and so on.
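Pulling these tips together, here's a minimal sketch that reuses the df_results DataFrame from the question; the table name and the event_date/state columns are assumptions for illustration:
# Persist the prepped data as a Delta table, partitioned by a column
# analysts commonly filter on (event_date is an assumption).
(df_results.write
    .format("delta")
    .partitionBy("event_date")
    .mode("overwrite")
    .saveAsTable("results_prepped"))

# Compact small files and co-locate rows on a second common filter column,
# so subsequent queries can skip files they don't need.
spark.sql("OPTIMIZE results_prepped ZORDER BY (state)")
Analysts can then run plain SQL against results_prepped, and WHERE clauses on event_date or state benefit from data skipping.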
I have a use case for designing storage for 30 TB of text files as part of deploying a data pipeline on Google Cloud. My input data is in CSV format, and I want to minimize the cost of querying aggregate values for multiple users who will query the data in Cloud Storage with multiple engines. Which of the options below would be better for this use case?
Using Cloud Storage for storage and linking permanent tables in BigQuery for querying, or using Cloud Bigtable for storage and installing the HBase shell on Compute Engine to query the Bigtable data.
Based on my analysis below for this specific use case, I see that Cloud Storage can be queried through BigQuery, and that Bigtable supports CSV imports and querying. The BigQuery quotas also mention a maximum size per load job of 15 TB across all input files for CSV, JSON, and Avro, which I assume means I could run multiple load jobs if loading more than 15 TB.
https://cloud.google.com/bigquery/external-data-cloud-storage#temporary-tables
https://cloud.google.com/community/tutorials/cbt-import-csv
https://cloud.google.com/bigquery/quotas
So, does that mean I can use BigQuery for the above usecase?
The short answer is yes.
I wrote about this in:
https://medium.com/google-cloud/bigquery-lazy-data-loading-ddl-dml-partitions-and-half-a-trillion-wikipedia-pageviews-cd3eacd657b6
And when loading, cluster your tables for massive cost improvements on the most common queries:
https://medium.com/google-cloud/bigquery-optimized-cluster-your-tables-65e2f684594b
In summary:
BigQuery can read CSVs and other files straight from GCS.
You can define a view that parses those CSVs in any way you might prefer, all within SQL.
You can run a CREATE TABLE statement to materialize the CSVs into BigQuery native tables for better performance and lower costs.
Instead of CREATE TABLE you can do imports via the API; those are free (whereas CREATE TABLE incurs the cost of a query). See the sketch after this list.
15 TB can be handled easily by BigQuery.
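As a rough sketch of the free load-via-API route using the BigQuery Python client (the bucket, dataset, and table names are made up):
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical locations; adjust to your project/dataset/bucket.
gcs_uri = "gs://my-bucket/csv/*.csv"
table_id = "my-project.my_dataset.events"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,      # assumes a header row
    autodetect=True,          # or supply an explicit schema
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load jobs are free; you only pay for storage of the resulting native table.
load_job = client.load_table_from_uri(gcs_uri, table_id, job_config=job_config)
load_job.result()  # wait for completion
print(client.get_table(table_id).num_rows, "rows loaded")
The external table/view route from the first two points skips the load entirely, but every query then re-reads the CSVs from GCS.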
Say in a Dataflow/Apache Beam program, I am trying to read a table whose data is growing exponentially. I want to improve the performance of the read.
BigQueryIO.Read.from("projectid:dataset.tablename")
or
BigQueryIO.Read.fromQuery("SELECT A, B FROM [projectid:dataset.tablename]")
Will the performance of my read improve if I select only the required columns from the table, rather than the entire table as above?
I am aware that selecting fewer columns reduces cost, but I would like to know about the read performance of the two approaches above.
You're right that selecting only the required columns, rather than referencing all of them in the SQL/query, will reduce cost. Also, when you use from() instead of fromQuery(), you don't pay for any table scans in BigQuery. I'm not sure if you were aware of that or not.
Under the hood, whenever Dataflow reads from BigQuery, it actually calls its export API and instructs BigQuery to dump the table(s) to GCS as sharded files. Then Dataflow reads those files in parallel into your pipeline. It does not read "directly" from BigQuery.
As such, yes, this should improve performance, because the amount of data that needs to be exported to GCS under the hood and read into your pipeline will be smaller, i.e. fewer columns = less data.
However, I'd also consider using partitioned tables, and perhaps even clustering them too. Also, use WHERE clauses to further reduce the amount of data to be exported and read.
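For reference, a minimal sketch of the same column-pruned, filtered read using the Beam Python SDK (the event_date column, the filter, and the bucket are made up):
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical query: only the needed columns, plus a filter that prunes data.
QUERY = """
    SELECT A, B
    FROM `projectid.dataset.tablename`
    WHERE event_date >= '2020-01-01'
"""

# temp_location is needed because the read goes through a BigQuery export to GCS.
options = PipelineOptions(temp_location="gs://my-bucket/tmp")

with beam.Pipeline(options=options) as p:
    rows = (
        p
        | "ReadFromBQ" >> beam.io.ReadFromBigQuery(query=QUERY, use_standard_sql=True)
        | "ToTuples" >> beam.Map(lambda row: (row["A"], row["B"]))
    )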
I am currently extracting a BigQuery table into sharded .csv's in Google Cloud Storage -- is there any way to shuffle/randomize the rows for the extract? The GCS .csv's will be used as training data for a GCMLE model, and the current exports are in a non-random order as they are bunched up by similar "labels".
This causes issues when training a GCMLE model, as you must hand the model random examples of all labels within each batch. GCMLE/TF has the ability to randomize the order of rows WITHIN individual .csv's, but there is not (to my knowledge) any way to randomize the rows across multiple .csv's. So, I am looking for a way to ensure that the rows being output to the .csv are indeed random.
Can BigQuery table extracted rows be randomized?
No. The Extract Job API (and thus any client built on top of it) has nothing that would allow you to do so.
I am looking for a way to ensure that the rows being output to the .csv are indeed random.
You could first create tables corresponding to your csv files and then extract them one by one into separate csv files. That way you can control what goes into which csv.
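A rough sketch of that approach with the BigQuery Python client, assuming the table has a unique example_id column to hash on (the project, bucket, and shard count are made up):
from google.cloud import bigquery

client = bigquery.Client()
source = "my-project.my_dataset.training_data"   # hypothetical source table
num_shards = 4                                    # how many csv files you want

for i in range(num_shards):
    shard = f"my-project.my_dataset.training_shard_{i}"
    # Hashing a unique key assigns each row to exactly one pseudo-random shard.
    sql = f"""
        CREATE OR REPLACE TABLE `{shard}` AS
        SELECT *
        FROM `{source}`
        WHERE MOD(ABS(FARM_FINGERPRINT(CAST(example_id AS STRING))), {num_shards}) = {i}
    """
    client.query(sql).result()
    # Each shard then gets its own extract job / csv.
    client.extract_table(shard, f"gs://my-bucket/training/shard_{i}.csv").result()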
If your concern is the cost of processing (you will need to scan the table as many times as the number of csv files you need), you can check the partitioning approaches in Migrating from non-partitioned to Partitioned tables. This still involves a cost, but a substantially reduced one.
Finally, a zero-cost option is to use the Tabledata.list API with paging, distributing the response across your csv files - you can do this easily in the client of your choice.
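And a sketch of that zero-cost route via the Python client, which pages through tabledata.list and deals rows round-robin across csv files (the table name and file count are made up); note this distributes the label-bunched rows across files rather than truly shuffling them:
import csv
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my-project.my_dataset.training_data")  # hypothetical table
num_files = 4  # assumption: number of csv files wanted

# One local csv per output shard; header row taken from the table schema.
files = [open(f"shard_{i}.csv", "w", newline="") for i in range(num_files)]
writers = [csv.writer(f) for f in files]
for w in writers:
    w.writerow([field.name for field in table.schema])

# list_rows() pages through tabledata.list, so no query cost is incurred.
for i, row in enumerate(client.list_rows(table, page_size=10000)):
    writers[i % num_files].writerow(list(row.values()))

for f in files:
    f.close()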
I want to upload a dataset of Tweets from Cloud Storage. I have a schema based on https://github.com/twitterdev/twitter-for-bigquery, but simpler, because I don't need all the fields.
I uploaded several files to Cloud Storage and then manually imported them from BigQuery. I have tried loading each file both ignoring unknown fields and not ignoring them, but I always end up with a table with fewer rows than the original dataset. Just in case, I took care of eliminating redundant rows from each dataset.
Example job-ids:
cellular-dream-110102:job_ZqnMTr17Yx_KKGEuec3qfA0DWMo (loaded 1,457,794 rows, but the dataset contained 2,387,666)
cellular-dream-110102:job_2xfbTFSvvs-unpP6xZXAfDeDjic (loaded 1,151,122 rows, but the dataset contained 3,265,405).
I don't know why this happens. I have tried simplifying the schema further, as well as ensuring that the dataset is clean (no repeated rows, no invalid data, and so on). The curious thing is that if I take a small sample of tweets (say, 10,000) and upload the file manually, it works - it loads all 10,000 rows.
How can I find what is causing the problem?
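For reference, one place to start is the error stream of the load jobs themselves; a minimal sketch with the BigQuery Python client, using the job IDs above, that prints any per-record errors reported by the jobs:
from google.cloud import bigquery

client = bigquery.Client(project="cellular-dream-110102")

# Job IDs from above (the part after the project prefix).
for job_id in ("job_ZqnMTr17Yx_KKGEuec3qfA0DWMo",
               "job_2xfbTFSvvs-unpP6xZXAfDeDjic"):
    job = client.get_job(job_id)
    print(job_id, job.state, "rows loaded:", job.output_rows)
    # error_result holds a fatal error (if any); errors holds the per-record stream.
    for err in job.errors or []:
        print("  ", err)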