I'm trying to run a BigQuery query from the command line, but because my query is very long I've written it in a text file. The query works from the GUI, and I'm overwriting a table that already exists:
bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable '`cat query.txt`'
However, I'm getting this error:
Error in query string: Error processing job
'dev:bqjob_r_00000123456789456123_1': Encountered "
"\'cat query.txt\' "" at line 1, column 1.
Was expecting: EOF
Do I need to put the entire file path in the .txt filename? (this doesn't seem to make a difference)
Are there any characters I need to be careful with in the text file (e.g. "\" or quotation marks) ?
I'm using where clauses and group by clauses - is that an issue?
Instead of using cat, just pipe the input from the file. (In your original command the single quotes stop the shell from expanding the backticks, so bq receives the literal string `cat query.txt` as its query, which is exactly what the parser is complaining about.) The command would be:
bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable < query.txt
This will send the contents of query.txt to the bq tool.
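If you would rather keep the query as a command-line argument, a minimal sketch of the quoting fix (same file and table names as above) is to wrap a $( ) command substitution in double quotes so the shell actually expands it:

bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable "$(cat query.txt)"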
Elliot is right. And if you want to use cat, sed, or any other tool, pipe its output in:
cat query.txt | bq query
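With the flags from the question, that pipeline would be:

cat query.txt | bq query --allow_large_results --replace --destination_table=me.Tbl_MyTable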
I am exporting query results from Hive using Beeline; here is my command:
beeline -u 'jdbc:hive2://myhost.com:10000/mydb;principal=hive/myhost.COM' --incremental=true --silent=true --outputformat=dsv --disableQuotingForSV=true --delimiterForDSV=\, --showHeader=false --nullemptystring=true -f myquery.hql --hiveconf DT_ID=${DT_ID} > ${spoolFile}
This is my query:
SELECT id, concat('"',c_name,'"'), app_name from mytab where dt_id='${hiveconf:DT_ID}';
But for fields that contain my field separator (,) in the column value, I get results like this:
66,**^#**"(Chat\, Social\, Music\, Utilities)"**^#**,Default
Note the ^#. Why is it appearing? How can I avoid it? What is that character? If it is a quote character, I'm happy to keep it, so that I can remove the concat from my query. I tried playing with --disableQuotingForSV=true/false, but that did not help.
When using the bq command-line tool, can I directly run a .sql file? When I execute the command, it reports that the specified file is missing.
I have tried this approach:
# run each INSERT statement from the export file as its own bq query job
while read -r q; do
  bq query --project_id=my-proj --dataset_id=sample_db --nouse_legacy_sql "$q"
done < <(grep '^INSERT' sample_db_export.sql)
These PowerShell commands also read lines beginning with INSERT and run the queries using the bq command-line tool.
Select-String -pattern '^INSERT' ./sample_db_export.sql |
  %{ bq query --project_id=my-proj --dataset_id=sample_db --nouse_legacy_sql $_.Line }
It's hard to tell what you are asking. If you have the query in a file called sample_db_export.sql, just pipe it as input to bq query. For example,
bq query --use_legacy_sql=false < sample_db_export.sql
I have a BigQuery table that has a column with some values of '\N' (without the quotes). I want to write a query with a WHERE clause on that field.
This is my command "SELECT barcode FROM [mydataset1.mytab1] where barcode = '\N' and length(barcode) < 5"
The above command works perfectly on Windows and returns the records for which barcode is \N. However, the same command returns an error on Linux. I think the special character needs to be written differently there.
I tried "SELECT barcode FROM [mydataset1.mytab1] where barcode = '/\N' and length(barcode) < 5" and this does not work either. Could you let me know who to modify the above query to work it on Linux environment?
I have attached screenshots of the working and non-working screens.
http://goo.gl/9p6cwD (Windows works)
http://goo.gl/DeAHij (Linux gives error)
Try using \\\ (one level of escaping is consumed by the shell, one by BigQuery's string literal). For instance, this query works:
$ bq query "SELECT '\\\N';"
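Applied to the query from the question (a sketch keeping your table and column names):

$ bq query "SELECT barcode FROM [mydataset1.mytab1] WHERE barcode = '\\\N' AND LENGTH(barcode) < 5"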
I am trying to delete the last row in the file generated by nzsql. Please find the query below.
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" > abc.out
When I execute this query, the output will be generated and stored in abc.out. This includes both the header columns as well as some time information at the bottom. But I don't need the bottom metadata and want to keep only my header columns. How can I do this using only nzsql? Please help me. Thanks in advance.
Use the -r flag in the nzsql command to avoid getting that row (assuming the metadata referred to in the question is the row count summary line, e.g. (3 rows)):
-r Suppresses the row count that is displayed at the end of the SQL output.
Reference: http://pic.dhe.ibm.com/infocenter/ntz/v7r0m3/index.jsp?topic=%2Fcom.ibm.nz.adm.doc%2Fr_sysadm_nzsql_command.html
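Applied to the command from the question:

nzsql -A -r -c "SELECT * FROM AM_MAS_DIVISION_DIM" > abc.out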
Why don't you just pipe the output to a unix command to remove it? I think something like this will work:
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" | sed '$d' > abc.out
sed '$d' seems to be the commonly recommended solution for getting rid of the last line (although ed, gawk, and other tools can handle it too).
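For instance, an equivalent gawk version prints each line one step late, so the final line is never emitted:

nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" | gawk 'NR > 1 { print prev } { prev = $0 }' > abc.out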
I'm trying to load SQL from a file in bash and execute it. The SQL file needs to stay versatile, meaning it cannot be altered just to make things easy in bash (e.g. by escaping special characters like *).
So I have run into some problems:
If I read my sample.sql
SELECT * FROM SAMPLETABLE
to a variable with
ab=`cat sample.sql`
and execute it
db2 `echo $ab`
I receive an SQL error, because the unquoted expansion of $ab lets the shell replace the * with the names of all the files in the current directory.
An easy solution would be to replace "*" with "\*". But I cannot do this, because the file needs to stay executable in programs like DB Visualizer etc.
Could someone give me a hint in the right direction?
The DB2 command line processor has options that accept a filename as input, so you shouldn't need to load statements from a text file into a shell variable.
This command will execute all SQL statements in the file, with newline treated as the statement terminator:
db2 -f sample.sql
This command will execute all SQL statements in the file, with semicolon treated as the statement terminator:
db2 -t -f sample.sql
Other useful CLP flags are:
-x : Suppress the column headings
-v : Echo the statement text immediately before execution
-z : Tee a copy of all CLP output to the filename immediately following this flag
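Combining those flags, a typical invocation might look like this (the log file name output.log is just an illustration):

db2 -t -v -f sample.sql -z output.log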
Redirect stdin from the file.
db2 < sample.sql
In case you have a variable in your script and want it replaced by the shell before the SQL is executed in DB2, use this approach:
Contents of File.sql:
cat <<xEOF
insert into ${MY_SCHEMA}.${MY_TABLE} values (1, 2);
select * from ${MY_SCHEMA}.${MY_TABLE};
xEOF
In command prompt do:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
Then you are all good to get it executed in DB2:
sh File.sql | db2 +p -t
The shell will substitute the exported variables, and then DB2 will execute the resulting statements.
Hope it helps.
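An alternative sketch, assuming GNU gettext's envsubst is available: keep File.sql as plain SQL containing the ${MY_SCHEMA}.${MY_TABLE} placeholders (no cat <<xEOF wrapper) and let envsubst perform the substitution:

envsubst < File.sql | db2 +p -t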