Issues creating table from bucket file - google-bigquery

I have a big table (about 10 million rows) that I'm trying to pull into BigQuery. I had to upload the CSV to a bucket due to the size constraints when creating the table. When I try to create the table using the Datastore Backup option, the job fails with the error:
Error Reason:invalid. Get more information about this error at Troubleshooting Errors: invalid.
Errors:
gs://es_main/provider.csv does not contain valid backup metadata.
Job ID: liquid-cumulus:job_KXxmLZI0Ulch5WmkIthqZ4boGgM
Start Time: Dec 16, 2015, 3:00:51 PM
End Time: Dec 16, 2015, 3:00:51 PM
Destination Table: liquid-cumulus:ES_Main.providercloudtest
Source URI: gs://es_main/provider.csv
Source Format: Datastore Backup
To troubleshoot, I took a small sample of rows from the same table and uploaded it using the CSV option in the table creation dialog; that worked without any errors and I can view the data just fine.
I'm just wondering what the metadata should be set to with the "Edit metadata" option within the bucket, or whether there is some other workaround I'm missing. Thanks

The error message for the job that you posted is telling you that the file you're providing is not a Datastore Backup file. Note that "Datastore" here means Google Cloud Datastore, which is another storage solution that it sounds like you aren't using. A Cloud Datastore Backup is a specific file type from that storage product which is different from CSV or JSON.
Setting the file metadata within the Google Cloud Storage browser, which is where the "Edit metadata" option you're talking about lives, should have no impact on how BigQuery imports your file. It might be important if you were doing something more involved with your file from Cloud Storage, but it isn't important to BigQuery as far as I know.
To load a CSV file from Google Cloud Storage into BigQuery, make sure you select CSV as the source format and Google Cloud Storage as the load source.
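If you prefer to script the load instead of using the UI, here is a minimal sketch with the Python client library, reusing the paths from your job; the header and autodetect settings are assumptions you may need to adjust:

from google.cloud import bigquery

client = bigquery.Client(project="liquid-cumulus")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,  # CSV, not Datastore Backup
    skip_leading_rows=1,                      # assumes the file has a header row
    autodetect=True,                          # or pass an explicit schema instead
)

load_job = client.load_table_from_uri(
    "gs://es_main/provider.csv",
    "liquid-cumulus.ES_Main.providercloudtest",
    job_config=job_config,
)
load_job.result()  # waits for the load job to complete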

Related

Azure Data Factory creates .CSV that's incompatible with Power Query

I have a pipeline that creates a dataset from a stored procedure on Azure SQL Server.
I then want to manipulate it in a Power Query step within the factory, but it fails to load in the Power Query editor with an error.
It opens up the JSON file (to correct it, I assume) but I can't see anything wrong with it.
If I download the extract from blob and upload it again as a .csv then it works fine.
The only difference I can find is that if I upload a blob direct to storage then the file information for the blob looks like this:
If I just let ADF create the .csv in blob storage the file info looks like this:
So my assumption is that somewhere in the process in ADF that creates the .csv file it's getting something wrong, and the Power Query module can't recognise it as a valid file.
All the other parts of the pipeline (Data Flows, other datasets) recognise it fine, and the 'preview data' brings it up correctly. It's just PQ that won't read it.
Any thoughts? TIA
I reproduced the same behaviour. When data is copied from the SQL database to blob storage as a .csv file, Power Query is unable to read it. (Power Query also doesn't support JSON files.) But when I downloaded the .csv file and re-uploaded it, it worked.
Below are the steps to overcome this issue.
When I uploaded the file to blob storage myself and created the dataset for that file in Power Query, the schema was imported from the connection/store. The dataset forces you to import the schema either from the connection/store or from a sample file; there is no "None" option here.
When the data was copied from the SQL database to Azure blob storage by the pipeline, the dataset that points at the blob did not have a schema imported by default.
Once I imported the schema, the Power Query activity ran successfully.
Output before importing schema in dataset
After importing schema in dataset

Reading data from GCS with BigQuery fails with "Not Found", but the data (files) exist

I have a service that is constantly updating files in a GCS bucket using a Hive-style layout:
bucket
  device_id=aaaa
    month=01
      part-0.parquet
    month=02
      part-0.parquet
    ....
  device_id=bbbb
    month=01
      part-0.parquet
    month=02
      part-0.parquet
    ....
If today we are in month=02 and I run the following in BigQuery:
SELECT DISTINCT event_id
FROM `project_id.dataset.table`
WHERE month = '02';
I get the error: Not found: Files /bigstore/bucket_name/device_id=aaaa/month=02/part-0.parquet
I checked, and the file was there when the query ran.
If I run
SELECT DISTINCT event_id
FROM `project_id.dataset.table`
WHERE month = '01';
I get results without any errors. I guess the error is related to the fact that I'm modifying the data while querying it, but as I understand it this should not be an issue with GCS. This is from their docs:
Because uploads are strongly consistent, you will never receive a 404 Not Found response or stale data for a read-after-write or read-after-metadata-update operation.
I saw some posts suggesting this could be related to my bucket being multi-region.
Any other insights?
There are a few reasons you could be getting this error.
When you load data from Cloud Storage into a BigQuery table, the dataset that contains the table must be in the same regional or multi-regional location as the Cloud Storage bucket.
Regarding consistency: while metadata updates on buckets are strongly consistent for read-after-metadata-update operations, the process can take time to finish the changes.
Using a multi-region bucket is not recommended here.
In this case, it is likely due to consistency: you are updating the files in GCS at the same time you are executing the query. On one run the parquet file was available to read and you didn't get the error, but on the next run the file wasn't available because the service was rewriting it, and you got the error.
Unfortunately, there is not a simple way to solve this problem, but here are some options:
You can add a Pub/Sub routine to the bucket and/or file and kick off your query after the service has finished updating the files.
Make a workflow that blocks the updating of the files in the bucket until the query finishes.
If the query fails with "not found" for file ABCD and you have verified ABCD exists in GCS, then retry the query X times (see the sketch below).
Back up your data to another location where you don't update the files constantly, just once a day.
You could move the data into managed storage where you won't have this problem, because you can do snapshotting.
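For the retry option, a minimal sketch with the Python BigQuery client might look like the following; it assumes the failure surfaces as google.api_core.exceptions.NotFound, and the retry count and wait time are placeholders:

import time
from google.cloud import bigquery
from google.api_core.exceptions import NotFound

client = bigquery.Client()

sql = """
SELECT DISTINCT event_id
FROM `project_id.dataset.table`
WHERE month = '02'
"""

def query_with_retry(sql, max_attempts=3, wait_seconds=10):
    for attempt in range(1, max_attempts + 1):
        try:
            return list(client.query(sql).result())
        except NotFound:
            # A parquet file was replaced while the query was reading it;
            # wait a bit and try again.
            if attempt == max_attempts:
                raise
            time.sleep(wait_seconds)

rows = query_with_retry(sql)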

Loading 50GB CSV File Azure Blob to Azure SQL DB in Less time- Performance

I am loading a 50 GB CSV file from Azure Blob to Azure SQL DB using OPENROWSET.
It takes 7 hours to load this file.
Can you please help me with possible ways to reduce this time?
The easiest option IMHO is to just use BULK INSERT. Move the CSV file into Blob Storage and then import it directly with BULK INSERT from Azure SQL. Make sure Azure Blob Storage and Azure SQL are in the same Azure region.
To make it as fast as possible:
split the CSV into more than one file (for example using something like a CSV splitter; https://www.erdconcepts.com/dbtoolbox.html looks nice, never tried it and it just came up in a simple search, but it looks good)
run more than one BULK INSERT in parallel using the TABLOCK option (https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-2017#arguments). If the target table is empty, this allows multiple concurrent bulk operations in parallel; see the sketch below.
make sure you are using a higher SKU for the duration of the operation. Depending on the SLO (Service Level Objective) you're using (S4? P1? vCore?) you will get a different amount of log throughput, up to close to 100 MB/sec. That's the maximum speed you can actually achieve. (https://learn.microsoft.com/en-us/azure/sql-database/sql-database-resource-limits-database-server)
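As an illustration of the parallel BULK INSERT idea, here is a rough sketch run from Python with pyodbc; the connection string, the external data source MyAzureBlob (which must already exist with a credential that can read the container), the file name and the target table are all placeholders:

import pyodbc

# Placeholder connection string for the Azure SQL database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;UID=myuser;PWD=mypassword"
)
conn.autocommit = True

# One chunk of the split CSV; run one of these per chunk, each in its own process.
bulk_insert = """
BULK INSERT dbo.TargetTable
FROM 'input/bigfile_part_01.csv'
WITH (
    DATA_SOURCE = 'MyAzureBlob',  -- external data source pointing at the blob container
    FORMAT = 'CSV',
    FIRSTROW = 2,                 -- skip the header row
    TABLOCK                       -- enables parallel bulk loads into an empty table
);
"""
conn.cursor().execute(bulk_insert)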
Please try using Azure Data Factory.
First create the destination table on Azure SQL Database, let's call it USDJPY. After that upload the CSV to an Azure Storage Account. Now create your Azure Data Factory instance and choose Copy Data.
Next, choose "Run once now" to copy your CSV files.
Choose "Azure Blob Storage" as your "source data store", specify your Azure Storage which you stored CSV files.
Provide information about Azure Storage account.
Choose your CSV files from your Azure Storage.
Choose "Comma" as your CSV files delimiter and input "Skip line count" number if your CSV file has headers
Choose "Azure SQL Database" as your "destination data store".
Type your Azure SQL Database information.
Select your table from your SQL Database instance.
Verify the data mapping.
Execute the data copy from the CSV files to SQL Database by confirming the remaining wizard steps.

How to take the backup of a dataset in BigQuery?

We want to create a backup copy of a BigQuery dataset in case a table is accidentally dropped, as it is only recoverable within 7 days.
Is there a way to extend the duration of the recovery period? If not, how can we create a backup of a dataset with a retention period of 30 days in BigQuery?
It is currently not possible to extend the duration of the recovery period. A feature request for this ability has already been created, as commented by Katayoon.
Here is a public link to monitor the progress on that issue: https://issuetracker.google.com/120038872
To back up datasets in BigQuery you could either make copies of your dataset or, as a more workable solution, export the data to Cloud Storage so you can import it back at a later time. Cloud Storage allows you to set a retention period and a lifecycle policy, which together let you make sure that the data stays undisturbed for the desired amount of time and that it removes itself after a given time, should you wish to save on storage costs.
Here is how you do the export in BigQuery:
You can export the tables as Avro, JSON or CSV files to Cloud Storage via the web UI, the command line, the API, or client libraries in languages like C#, Go, Python and Java, as long as the dataset and the bucket are in the same location. There are other limitations to exporting a table, such as file size, integer encoding, data compression, etc.
Link to table export and limitations:
https://cloud.google.com/bigquery/docs/exporting-data
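As an example, here is a minimal sketch of such an export using the Python client; the project, dataset, table and bucket names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

extract_job = client.extract_table(
    "myproject.mydataset.mytable",                      # table to back up
    "gs://my-backup-bucket/mydataset/mytable-*.avro",   # wildcard shards large tables
    job_config=bigquery.ExtractJobConfig(destination_format="AVRO"),
)
extract_job.result()  # waits for the export to finish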
You can find the instructions on the procedures here:
Retention Policies and Bucket Lock: https://cloud.google.com/storage/docs/using-bucket-lock#lock-bucket
Object Lifecycle Management: https://cloud.google.com/storage/docs/managing-lifecycles
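For instance, a rough sketch of a 30-day retention policy plus a matching lifecycle delete rule using the Python storage client (the bucket name is a placeholder):

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-backup-bucket")

# Objects cannot be deleted or overwritten for 30 days after upload.
bucket.retention_period = 30 * 24 * 60 * 60

# Once objects are older than 30 days, let GCS delete them automatically.
bucket.add_lifecycle_delete_rule(age=30)

bucket.patch()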
Loading data back into BigQuery can be done using various file formats, such as CSV, JSON, Avro, Parquet or ORC. At the moment you can load directly only from local storage or from Google Cloud Storage. More on loading data, file formats, data sources and limitations by following the link: https://cloud.google.com/bigquery/docs/loading-data
More information on
Exporting tables: https://cloud.google.com/bigquery/docs/exporting-data
Export limitations: https://cloud.google.com/bigquery/docs/exporting-data#export_limitations
Loading data into BigQuery: https://cloud.google.com/bigquery/docs/loading-data
Wildcards: https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames
Merging the file: https://cloud.google.com/storage/docs/gsutil/commands/compose
You can take a snapshot of a table using either SQL or CLI:
SQL
CREATE SNAPSHOT TABLE
myproject.library_backup.books
CLONE myproject.library.books
OPTIONS(expiration_timestamp = TIMESTAMP "2022-04-27 12:30:00.00-08:00")
CLI
bq cp --snapshot --no_clobber --expiration=86400 library.books library_backup.books
You can backup and restore using the tools in https://github.com/GoogleCloudPlatform/bigquery-oreilly-book/tree/master/blogs/bigquery_backup:
Backup a table to GCS
./bq_backup.py --input dataset.tablename --output gs://BUCKET/backup
This saves a schema.json, a tabledef.json, and extracted data in AVRO format to GCS.
You can also backup all the tables in a data set:
./bq_backup.py --input dataset --output gs://BUCKET/backup
Restore tables one-by-one by specifying a destination data set
./bq_restore.py --input gs://BUCKET/backup/fromdataset/fromtable --output destdataset
For views, the backup stores the view definition and the restore creates a view.

Exporting query results as JSON via Google BigQuery API

I've got jobs/queries that return a few hundred thousand rows. I'd like to get the results of the query and write them as json in a storage bucket.
Is there any straightforward way of doing this? Right now the only method I can think of is:
set allowLargeResults to true
set a randomly named destination table to hold the query output
create a 2nd job to extract the data in the "temporary" destination table to a file in a storage bucket
delete the random "temporary" table.
This just seems a bit messy and roundabout. I'm going to be wrapping all this in a service hooked up to a UI that would have lots of users hitting it and would rather not be in the business of managing all these temporary tables.
1) As you mention, the steps are good. You need to use Google Cloud Storage for your export job. Exporting data from BigQuery is explained here; also check the variants for different path syntaxes.
Then you can download the files from GCS to your local storage.
The gsutil tool can help you download the files from GCS to your local machine.
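If you would rather script that step, a small sketch of the same download with the Python storage client (bucket name and prefix are placeholders):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-export-bucket")

# Download every exported shard under the given prefix.
for blob in client.list_blobs(bucket, prefix="exports/my_query/"):
    blob.download_to_filename(blob.name.split("/")[-1])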
With this approach you first need to export to GCS and then transfer the files to your local machine. If you have a message queue system (like Beanstalkd) in place to drive all of this, it's easy to do a chain of operations: submit the job, monitor its state, initiate the export to GCS when it's done, then delete the temp table.
Please also note that you can update a table via the API and set the expirationTime property; with this approach you don't need to delete it yourself.
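Putting those pieces together, here is a rough sketch of the whole chain with the Python client; the query, the scratch dataset, the bucket and the one-hour expiration are placeholders, and the destination table cleans itself up via the expiration instead of an explicit delete:

import datetime
from google.cloud import bigquery

client = bigquery.Client()

# 1) Run the query into a named destination table.
destination = "myproject.scratch.query_result_12345"  # e.g. a randomly named table
job_config = bigquery.QueryJobConfig(
    destination=bigquery.TableReference.from_string(destination)
)
client.query("SELECT * FROM `mydataset.mytable`", job_config=job_config).result()

# 2) Set an expiration so BigQuery deletes the table by itself.
table = client.get_table(destination)
table.expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1)
client.update_table(table, ["expires"])

# 3) Export the destination table to GCS as newline-delimited JSON.
client.extract_table(
    destination,
    "gs://my-export-bucket/exports/query_result_12345-*.json",
    job_config=bigquery.ExtractJobConfig(destination_format="NEWLINE_DELIMITED_JSON"),
).result()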
2) If you use the bq CLI tool, you can set the output format to JSON and redirect it to a file. This way you can achieve some export locally, but it has certain other limits.
This exports the first 1000 lines as JSON:
bq --format=prettyjson query --n=1000 "SELECT * from publicdata:samples.shakespeare" > export.json