BigQuery: pass property in command line - SQL

I am getting the following error in BigQuery:
Error: Response too large to return.
After a couple of Google searches I found that the workaround is to set configuration.query.allowLargeResults=true.
But I'm not sure how to pass this property value in the bq command-line tool.
Any help? Thanks.

$ bq help query
USAGE: bq.py [--global_flags] <command> [--command_flags] [args]
[...]
--[no]allow_large_results: Enables larger destination table sizes
--destination_table: Name of destination table for query results.
So:
$ bq query --allow_large_results --destination_table "dataset.table" "SELECT 1"
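To confirm the results actually landed in the destination table, you can inspect it afterwards (a minimal sketch; the dataset and table names are placeholders):

$ bq show dataset.table
$ bq head -n 10 dataset.table

bq show prints the table's schema and row count, and bq head previews the first rows without running another query.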

Related

BigQuery CLI: load command stays pending

I have a CSV file on my computer. I would like to load this CSV file into a BigQuery table.
I'm using the following command from a terminal:
bq load --apilog=./logs --field_delimiter=$(printf ';') --skip_leading_rows=1 --autodetect dataset1.table1 mycsvfile.csv myschema.json
The command in my terminal doesn't give any output. In the GCP interface, I see no job being created, which makes me think the request doesn't even reach GCP.
In the log file (from the --apilog parameter) I get information about the request being made, and it ends with this:
INFO:googleapiclient.discovery:URL being requested: POST https://bigquery.googleapis.com/upload/bigquery/v2/projects/myproject/jobs?uploadType=resumable&alt=json
and that's it. No matter how long I wait, nothing happens.
You are mixing --autodetect with myschema.json; something like the following should work:
bq load --apilog=logs \
--source_format=CSV \
--field_delimiter=';' \
--skip_leading_rows=1 \
--autodetect \
dataset.table \
mycsvfile.csv
If you continue having issues, please post the content of the apilog; the line you shared doesn't seem to be an error. There should be more than one line, and the error is normally contained in a JSON structure, for instance:
"reason": "invalid",
"message": "Provided Schema does not match Table project:dataset.table. Field users is missing in new schema"
I'm not sure why you are using
--apilog=./logs
I did not find this in the bq load documentation; please clarify.
Based on that, maybe the bq load command itself could be the issue; you can try something like:
bq load \
--autodetect \
--source_format=CSV \
--skip_leading_rows=1 \
--field_delimiter=';' \
dataset1.table1 \
gs://mybucket/mycsvfile.csv \
./myschema.json
If it fails, please check your job list to find the job that was created, then use bq show to view the information about that job; there you should find an error message which can help you determine the cause of the issue.
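For example, to list recent jobs and inspect one (a minimal sketch; the job ID below is a placeholder):

# List the most recent jobs in the current project
bq ls -j -n 10

# Show the details for a specific job
bq show -j bqjob_r1234567890abcdef_0000000000000001_1

If the job failed, the error reason and message should appear in the output.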

How to upload a .sql file to BigQuery using the bq command line

When using the bq command-line tool, can I directly upload a .sql file? It reports that the specified file is missing when I execute the command.
I have tried this approach:
while read -r q; do
bq query --project_id=my-proj --dataset_id=sample_db --nouse_legacy_sql "$q"
done < <(grep '^INSERT' sample_db_export.sql)
These PowerShell commands also read lines beginning with INSERT and run the queries using the bq command-line tool.
Select-String -pattern '^INSERT' ./sample_db_export.sql |
%{ bq query --project=my-proj --dataset_id=sample_db --nouse_legacy_sql $_.Line }
It's hard to tell what you are asking. If you have the query in a file called sample_db_export.sql, just pipe it as input to bq query. For example,
bq query --use_legacy_sql=false < sample_db_export.sql
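If the file contains a SELECT whose results you want to keep, you can still pass flags when piping (a sketch; the dataset and table names are placeholders):

# Run the query from the file and write the results to a table
bq query --use_legacy_sql=false \
  --destination_table=sample_db.results_table \
  < sample_db_export.sql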

Google BigQuery bq command syntax error

Could somebody explain why I am getting a syntax error with the following command:
$ bq query --allow_large_results --destination_table=clients.tab_cl1 "SELECT * from adagency-167918:sourcedataset.src_table$20170516 where advertiserid=1 and timestamp="2017-05-16""
and this is the error I am getting:
Error in query string: Error processing job 'adagency-167918:bqjob_r215d56938dbaa2b7_0000015c1a4c2932_1': Encountered " "-" "- "" at line 1, column 31.
Was expecting:
Edit: the problem is unrelated to using bq, actually, although the $ is problematic. When you are using legacy SQL, you need to use [ and ] to escape the table name if the project includes a hyphen. For example,
[your-project:dataset.table]
With standard SQL, you use backticks:
`your-project.dataset.table`
So your query should be:
bq query --allow_large_results \
--destination_table=clients.tab_cl1 \
"SELECT * from [adagency-167918:sourcedataset.src_table\$20170516] where advertiserid=1 and timestamp=timestamp('2017-05-16')"

Running Query from text file

I'm trying to run a BigQuery query from the command line, but because my query is very long I've written it in a text file. The query works from the GUI, and I'm overwriting a table that already exists:
bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable '`cat query.txt`'
However, I'm getting error results:
Error in query string: Error processing job
'dev:bqjob_r_00000123456789456123_1': Encountered "
"\'cat query.txt\' "" at line 1, column 1.
Was expecting: EOF
Do I need to put the entire file path in the .txt filename? (this doesn't seem to make a difference)
Are there any characters I need to be careful with in the text file (e.g. "\" or quotation marks)?
I'm using where clauses and group by clauses - is that an issue?
Instead of cat, just pipe the input from the file. The command would be:
bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable < query.txt
This will send the contents of query.txt to the bq tool.
Elliot is right. Now if you want to cat, sed, or anything else, pipe it:
cat query.txt | bq query
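For instance, with the original flags (using the table name from the question):

cat query.txt | bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable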

BigQuery bq command with asterisk (*) doesn't work in Compute Engine

I have a directory with a file named file1.txt
And I run the command:
bq query "SELECT * FROM [publicdata:samples.shakespeare] LIMIT 5"
In my local machine it works fine but in Compute Engine I receive this error:
Waiting on bqjob_r2aaecf624e10b8c5_0000014d0537316e_1 ... (0s) Current status: DONE
BigQuery error in query operation: Error processing job 'my-project-id:bqjob_r2aaecf624e10b8c5_0000014d0537316e_1': Field 'file1.txt' not found.
If the directory is empty, it works fine. I'm guessing the asterisk is being expanded to the file name(s) inside the query, but I don't know why.
Apparently the bq command, which is located at /usr/bin/bq, contains the following script:
#!/bin/sh
exec /usr/lib/google-cloud-sdk/bin/bq ${@}
which expands the asterisk.
As a current workaround I'm calling /usr/lib/google-cloud-sdk/bin/bq directly.
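Another option would be a wrapper that quotes its arguments, so the shell does not re-expand them (a sketch of what a corrected wrapper could look like; note that editing files under /usr/bin may be undone by SDK updates):

#!/bin/sh
# Quoting "$@" passes every argument through verbatim, so the * inside
# the SQL string is no longer subject to pathname expansion.
exec /usr/lib/google-cloud-sdk/bin/bq "$@"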