select * from gpqueries:contracts.raw where fiscal_year =2015
ignore case
When generating data as JSON or CSV from Google BigQuery, the data I download from the storage bucket looks like the output below.
Please guide me on why this happens. Also, how can I combine multiple files if the export produces more than one?
Update:
From the screenshot I can see that the file starts with the bytes 1F8B08, which is the signature of a gzip-compressed file. Just uncompress it with gunzip.
http://www.filesignatures.net/index.php?page=search&search=1F8B08&mode=SIG
My guess: Did you pick "compression: gzip" when exporting?
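If that guess is right, here is a rough sketch of where the compression setting comes into play when the export is done with an EXPORT DATA statement instead of the console. The bucket path is a placeholder, and the table reference is just the standard-SQL form of the query above:
-- Sketch only: the compression option below is what produces .gz output;
-- drop that line to get plain, uncompressed newline-delimited JSON.
EXPORT DATA OPTIONS (
  uri = 'gs://your-bucket/contracts/export-*.json',  -- a single * wildcard is required
  format = 'JSON',
  compression = 'GZIP',
  overwrite = true
) AS
SELECT *
FROM `gpqueries.contracts.raw`
WHERE fiscal_year = 2015;
Even without compression, BigQuery may shard the output across several files matching the wildcard, which is why you can end up with multiple objects to combine afterwards.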
I am trying to connect a third-party ranking management system (https://tranco-list.eu/) with Metabase. Tranco offers the option to view its records in Google BigQuery, but when I try to connect Tranco to Metabase, it asks for a dataset ID from my Google Cloud console project. Tranco is an external data source, so I don't have a dataset ID for it.
If you want to query the Tranco list in Google BigQuery, run the query below.
select * from `tranco.daily.daily` where domain = 'google.com' limit 10
When I search for Tranco in the public datasets, I can't find it there either. Is anyone aware of how to add a third-party dataset to a Google Cloud project?
Thanks in advance.
Unfortunately, you can't read the Tranco dataset directly from BigQuery; what you can do is load the CSV data from Tranco into a Cloud Storage bucket and then read the bucket from BigQuery.
When you load data from Cloud Storage into a BigQuery table, the dataset that contains the table must be in the same regional or multi-regional location as the Cloud Storage bucket.
Note that this approach has the following limitations:
CSV files do not support nested or repeated data.
Remove byte order mark (BOM) characters. They might cause unexpected issues.
If you use gzip compression, BigQuery cannot read the data in parallel. Loading compressed CSV data into BigQuery is slower than loading uncompressed data.
You cannot include both compressed and uncompressed files in the same load job.
The maximum size for a gzip file is 4 GB.
When you load CSV or JSON data, values in DATE columns must use the dash (-) separator and the date must be in the following format: YYYY-MM-DD (year-month-day).
When you load JSON or CSV data, values in TIMESTAMP columns must use a dash (-) separator for the date portion of the timestamp, and the date must be in the following format: YYYY-MM-DD (year-month-day). The hh:mm:ss (hour-minute-second) portion of the timestamp must use a colon (:) separator.
Also, see the documentation on loading CSV data if you're not sure how to upload and read your CSV files.
The next link is a step-by-step guide on how to create or select the bucket you will use.
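As a minimal sketch of the load itself: assuming you have already copied a Tranco CSV to gs://your-bucket/tranco/top-1m.csv (a made-up path), and created a dataset tranco_data in the same location as the bucket, a LOAD DATA statement could look like the following. The rank/domain column layout is an assumption about the Tranco file, not something from the question:
-- Sketch only; project, dataset, bucket and column names are placeholders.
LOAD DATA INTO `your_project.tranco_data.daily`
(
  rank INT64,
  domain STRING
)
FROM FILES (
  format = 'CSV',
  uris = ['gs://your-bucket/tranco/top-1m.csv']
  -- add skip_leading_rows = 1 here if your copy of the file has a header row
);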
Every day I upload a new file to a Cloud Storage bucket. The file is stored in JSON-NL (newline-delimited JSON) format. I have a BigQuery table (set up as an external table) connected to this bucket. Each file is named with the date of its upload. If I want to query only the most recent file, the best option I've found so far is to parse the _FILE_NAME pseudo-column in my SQL query and match it with the current date. However, the parsing is a bit messy, so I'm wondering if there is a better solution.
What are other options to query only the most recent file? Should I set this up differently?
There isn't a better solution. Use a script to parse the pseudo-column with the file name, get the latest one, and then query it (with an EXECUTE IMMEDIATE). There is no other solution so far.
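A minimal sketch of such a script, assuming the external table is `your_project.your_dataset.daily_files` (a placeholder) and that the date-based file names sort lexicographically, so the most recent upload has the largest name:
-- Sketch only: find the newest file via the _FILE_NAME pseudo-column, then query it.
DECLARE latest_file STRING;

SET latest_file = (
  SELECT MAX(_FILE_NAME)   -- date-named files: the lexicographic max is the newest
  FROM `your_project.your_dataset.daily_files`
);

EXECUTE IMMEDIATE """
  SELECT *
  FROM `your_project.your_dataset.daily_files`
  WHERE _FILE_NAME = ?
""" USING latest_file;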
I created a table using an Athena CTAS statement. Per Glue, I can see that the table is stored in my S3 bucket, and I confirmed that there are files in the expected place in that bucket.
These files, however, are not Parquet files (they are extension-less). When I try to read them into Python using pd.read_parquet, I get the error "Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file." A similar error occurs when I try to query the table and read the CSV output using pd.read_csv; there, the error is "'utf-8' codec can't decode byte 0xee in position 0: invalid continuation byte". I tried using awswrangler and got the same errors.
I'm pretty sure these errors are related to the SSE_S3 encryption I put on the bucket. However, I'm at a loss as to how I can actually interact with these files outside of Athena.
The resolution is that the default Athena workgroup had CSE_KMS encryption turned on. I couldn't quickly figure out how to pass these options via awswrangler, so I took the shortcut of recreating the table using another workgroup that didn't have encryption.
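For anyone taking the same shortcut, a hedged sketch of the recreate step (database, table, and bucket names are placeholders; the workgroup without result encryption is selected in the Athena console or client, not in the SQL itself):
-- Sketch only: Athena CTAS run from a workgroup without encryption,
-- writing plain Parquet files to a location readable outside Athena.
CREATE TABLE my_db.my_table_plain
WITH (
  format = 'PARQUET',
  external_location = 's3://my-bucket/my_table_plain/'
) AS
SELECT *
FROM my_db.my_table;
The files written under that prefix should then be readable directly with pd.read_parquet or awswrangler.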
Problem
I'm attempting to create a BigQuery table from a CSV file in Google Cloud Storage.
I'm explicitly defining the schema for the load job (below) and setting the number of header rows to skip to 1.
Data
$ cat date_formatting_test.csv
id,shipped,name
0,1/10/2019,ryan
1,2/1/2019,blah
2,10/1/2013,asdf
Schema
id:INTEGER,
shipped:DATE,
name:STRING
Error
BigQuery produces the following error:
Error while reading data, error message: Could not parse '1/10/2019' as date for field shipped (position 1) starting at location 17
Questions
I understand that this date isn't in ISO format (2019-01-10), which I assume would load correctly.
However, I'm trying to define a more flexible input configuration whereby BigQuery will correctly load any date that the average American would consider valid.
Is there a way to specify the expected date format(s)?
Is there a separate configuration / setting to allow me to successfully load the provided CSV in with the schema defined as-is?
According to the listed limitations:
When you load CSV or JSON data, values in DATE columns must use the dash (-) separator and the date must be in the following format: YYYY-MM-DD (year-month-day).
So this leaves us with 2 options:
Option 1: ETL
Place new CSV files in Google Cloud Storage
That in turn triggers a Google Cloud Function or Google Cloud Composer job to:
Edit the date column in all the CSV files
Save the edited files back to Google Cloud Storage
Load the modified CSV files into Google BigQuery
Option 2: ELT
Load the CSV file as-is to BigQuery (i.e. your schema should be modified to shipped:STRING)
Create a BigQuery view that transforms the shipped field from a string to a recognised date format. Use SELECT id, PARSE_DATE('%m/%d/%Y', shipped) AS shipped, name (a fuller sketch follows this list)
Use that view for your analysis
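A sketch of that view, assuming the raw data was loaded as `your_project.your_dataset.shipments_raw` with shipped:STRING (project, dataset, and table names are placeholders):
-- Sketch only: expose the raw string column as a proper DATE.
CREATE OR REPLACE VIEW `your_project.your_dataset.shipments` AS
SELECT
  id,
  PARSE_DATE('%m/%d/%Y', shipped) AS shipped,  -- '1/10/2019' becomes 2019-01-10
  name
FROM `your_project.your_dataset.shipments_raw`;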
I'm not sure, from your description, if this is a once-off job or recurring. If it's once-off, I'd go with Option 2 as it requires the least effort. Option 1 requires a bit more effort, and would only be worth it for recurring jobs.
I am exporting a table larger than 1 GB from BigQuery into GCS, but the export splits it into very small files of 2-3 MB. Is there a way to get bigger files, say 40-60 MB per file, rather than 2-3 MB?
I do the export via the API:
https://cloud.google.com/bigquery/docs/exporting-data#exporting_data_into_one_or_more_files
https://cloud.google.com/bigquery/docs/reference/v2/jobs
The source table is 60 GB in BigQuery. I extract the data in NEWLINE_DELIMITED_JSON format with GZIP compression:
destination_cloud_storage_uris=[
'gs://bucket_name/main_folder/partition_date=xxxxxxx/part-*.gz'
]
Are you trying to export a partitioned table? If yes, each partition is exported as a different table, and that might cause the small files.
I ran the export from the CLI with each of the following commands and in both cases received files of 49 MB:
bq extract --compression=GZIP --destination_format=NEWLINE_DELIMITED_JSON project:dataset.table gs://bucket_name/path5-component/file-name-*.gz
bq extract --compression=GZIP project:dataset.table gs://bucket_name/path5-component/file-name-*.gz
Please add more details to the question so we can provide specific advice: how exactly are you requesting this export?
Nevertheless, if you have many files in GCS and you want to merge them all into one, you can do:
gsutil compose gs://bucket/obj1 [gs://bucket/obj2 ...] gs://bucket/composite
https://cloud.google.com/storage/docs/gsutil/commands/compose