google bigquery bq command syntax error - google-bigquery

Could somebody explain to me why I am getting a syntax error in the following code:
$ bq query --allow_large_results --destination_table=clients.tab_cl1 "SELECT * from adagency-167918:sourcedataset.src_table$20170516 where advertiserid=1 and timestamp="2017-05-16""
and this is the error I am getting:
Error in query string: Error processing job 'adagency-167918:bqjob_r215d56938dbaa2b7_0000015c1a4c2932_1': Encountered " "-" "- "" at line 1, column 31.
Was expecting:

Edit: the problem is unrelated to using bq, actually, although the unescaped $ is also problematic (the shell tries to expand it, which is why the corrected command below escapes it as \$). When you are using legacy SQL, you need to use [ and ] to escape the table name if the project includes a hyphen. For example,
[your-project:dataset.table]
With standard SQL, you use backticks:
`your-project.dataset.table`
So your query should be:
bq query --allow_large_results \
--destination_table=clients.tab_cl1 \
"SELECT * from [adagency-167918:sourcedataset.src_table\$20170516] where advertiserid=1 and timestamp=timestamp('2017-05-16')"

Related

Bigquery Command Line Query with ">" not working

It initially took me a while to figure out that ">" causes an issue in cmd, and I learned from one of the answers that escaping it with ^ works. So after changing my query to the format below, it works fine.
bq query --use_legacy_sql=false --destination_table= (SELECT * FROM `dataset_ID.table_nm` GROUP BY AAA HAVING SUM(BBB)^>0 )
But now I am trying to run the same query through a file.
bq query --use_legacy_sql=false --destination_table= --flagfile="Y:query.sql"
However, the above gives me an error if I have "^>" in the file.
Error in query string: Error processing job Syntax error: Unexpected ">" at [1:227]
If I don't have "^" in front of the ">", the query doesn't return an error, but the result is empty.
Hoping someone could help out with the issue above.
Thanks!
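A guess, assuming this is cmd.exe on Windows: the caret only escapes characters while cmd parses the command line, and cmd never looks inside a file, so in the file the condition should be written with a plain >. A literal ^> would reach BigQuery as-is, and since ^ is the bitwise XOR operator in standard SQL, the parser then trips over the >, which matches the Unexpected ">" error. Also, --flagfile is really meant for passing flags rather than a bare query; one sketch (dataset.table stands in for your real destination) is to keep the query alone in a file and pipe it in:
bq query --use_legacy_sql=false --destination_table=dataset.table < query.sql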

Running Query from text file

I'm trying to run a BigQuery query from the command line, but because my query is very long I've written it in a text file. The query works from the GUI, and I'm overwriting a table that already exists:
bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable '`cat query.txt`'
However, I'm getting the following error:
Error in query string: Error processing job
'dev:bqjob_r_00000123456789456123_1': Encountered "
"\'cat query.txt\' "" at line 1, column 1.
Was expecting: EOF
Do I need to put the entire file path in the .txt filename? (this doesn't seem to make a difference)
Are there any characters I need to be careful with in the text file (e.g. "\" or quotation marks) ?
I'm using where clauses and group by clauses - is that an issue?
Instead of cat, just pipe the input from the file. The command would be:
bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable < query.txt
This will send the contents of query.txt to the bq tool.
Elliot is right; now if you want to use cat, sed, or anything else, pipe it:
cat query.txt | bq query
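For completeness, a piped version that keeps the original flags, plus a sketch of a sed step that rewrites part of the query on the way through (the dates here are made up for illustration):
cat query.txt | bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable
sed 's/2017-01-01/2017-02-01/' query.txt | bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable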

passing values using hivevar in HIVE

I've got a parameter that looks like "This is a param", and I'm going to pass it to the HiveQL below:
hive -hivevar sys_nm="This is a param" -e 'select * from rd_sys where rd_sys_nm=${hivevar:sys_nm}'
But Hive returned below error message:
Logging initialized using configuration in jar:file:/opt/mapr/hive/hive-0.13/lib/hive-common-0.13.0-mapr-1409.jar!/hive-log4j.properties
FAILED: ParseException line 1:49 missing EOF at 'is' near 'This'
g4t7491_[mgr#g4t7491 ~]$
Does anyone know how to pass it normally?
Hive vars don't work like hiveconf, where you need to use "hiveconf:something" in the code.
When referencing a declared hivevar, just use the var name like this -> ${var_name}
for example:
through command line:
hive -hivevar MONTH_VAR='11' -e "select * from table where month=${MONTH_VAR};"
You can also declare it through the script:
set hivevar:MONTH_VAR=11;
-- so the query would look like this (no hiveconf prefix):
set hivevar:MONTH_VAR=11;
SELECT * from table where month=${MONTH_VAR};
You need to put the string in single quotes so that, after interpolation, it parses correctly as a string inside the SQL.
hive -hivevar sys_nm="'This is a param'" -e 'select * from rd_sys where rd_sys_nm=${hivevar:sys_nm}'

Using sql within shell script

I am currently trying to integrate an SQL statement into a shell script, but I am facing a major syntax issue:
My statement in the script:
su - <sid>adm -c 'hdbsql -U SYSTEM export "'SCHEMA'"."'*'" as binary into "'Export Location'" with reconfigure'
I get the following error:
* 257: sql syntax error: incorrect syntax near "*": line 1 col 16 (at pos 16) SQLSTATE: HY000
Would really appreciate if anyone could help me with this.
Thanks and Regards,
AK
Your command line doesn't make much sense to me. It starts with
su - <sid>adm
which means that you are redirecting the contents of the file "sid" into "su" and then redirecting the result of that operation into the file "adm".
The second problem is that, in the command you are passing with -c, the single quotes end right before the "*", which means that the "*" will get interpreted by the shell as a file glob:
-c 'hdbsql -U SYSTEM export "'SCHEMA'"."'*'" as binary into "'Export Location'" with reconfigure'
You'll need to escape those single quotes like this: "\'".
But I think your problem-solving approach is not good. Try to reduce the problem first, and only then start adding things to it. So first try to execute the SQL statement from the "hdbsql" shell. Does it work?
$ hdbsql
> YOUR SQL STATEMENT HERE
Once that works, try to execute the SQL statement from the unix shell as a user:
$ hdbsql -U SYSTEM export ...
Once that works, try to execute it via su
$ su - ...
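Once each step works on its own, you can build the quoting back up. One possible sketch, keeping your statement but treating MYSCHEMA, /hana/export and the abcadm user as placeholders, writing the export path as a single-quoted SQL string literal, and using '\'' to get literal single quotes inside the outer single-quoted string:
su - abcadm -c 'hdbsql -U SYSTEM "EXPORT \"MYSCHEMA\".\"*\" AS BINARY INTO '\''/hana/export'\'' WITH RECONFIGURE"'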

bigquery pass property in commandline

I am getting the following error in BigQuery:
Error: Response too large to return.
After a couple of Google searches I found that the workaround is to set configuration.query.allowLargeResults=true
But I am not sure how to pass this property value in the bq command-line tool.
Any help?
Thanks
$ bq help query
USAGE: bq.py [--global_flags] <command> [--command_flags] [args]
[...]
--[no]allow_large_results: Enables larger destination table sizes
--destination_table: Name of destination table for query results.
So:
$ bq query --allow_large_results --destination_table "dataset.table" "SELECT 1"
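As a side note (worth double-checking against the current docs), --allow_large_results is a legacy SQL setting; with standard SQL, large results are handled without it, so the standard SQL equivalent would simply be:
$ bq query --use_legacy_sql=false --destination_table "dataset.table" "SELECT 1"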