Is it possible to run something like this in Hive CLI?
I am trying to pass file contents as a variable to another query.
set column_list=!cat /home/user/filename.lst ;
create table tabname as select $column_list from ...
If you have a query file, you can pass variables to it as hiveconf:
hive -hiveconf var1=abcd -f file.txt
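Inside file.txt the variable is then referenced through the hiveconf namespace; a minimal sketch with placeholder table and column names:
select * from some_table where some_col = '${hiveconf:var1}';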
Or you can construct your query and then pass it to the hive CLI using -e:
hive -e "create table ..."
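Combining that with the original question, the shell can read the file contents and hand them to Hive as a hiveconf variable; a sketch, assuming the query is stored in a file named query.hql:
column_list=$(cat /home/user/filename.lst)
hive -hiveconf column_list="$column_list" -f query.hql
with query.hql referring to the variable as:
create table tabname as select ${hiveconf:column_list} from ...;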
Suppose the file filename.lst contains the single line:
line
Make a file test.sh:
temp=$(cat /home/user/filename.lst)
hive -f test.hql -hiveconf var="$temp"
Make another file, test.hql:
create table test(${hiveconf:var} string);
On the terminal, run:
sh -x test.sh
It will pass the line to test.hql, which will create a table with line as the column name.
Note: all the files should be in the same directory. This script passes only one variable; see the sketch below for passing more than one.
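If you need more than one variable, -hiveconf can simply be repeated; a sketch with hypothetical file and variable names:
temp1=$(cat /home/user/file1.lst)
temp2=$(cat /home/user/file2.lst)
hive -f test.hql -hiveconf var1="$temp1" -hiveconf var2="$temp2"
and test.hql would reference them as ${hiveconf:var1} and ${hiveconf:var2}.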
Related
I want to figure out how to run multiple sql files in one go. Suppose I have this test.sql file, which references file1.sql, file2.sql, file3.sql and so on, along with some DML/DDL.
use database &{db};
use schema &{sc};
file1.sql;
file2.sql;
file3.sql;
create table snow_test1
(
name varchar
,add1 varchar
,id number
)
comment = 'this is snowsql testing table' ;
desc table snow_test1;
insert into snow_test1
values('prachi', 'testing', 1);
select * from snow_test1;
Here is what I run in PowerShell:
snowsql -c pp_conn -f ...\test.sql -D db=tbc -D sc=testing;
Is there any way to do this? I know it is possible in Oracle, but I want to do this using snowsql. Please guide me. Thanks in advance!
You can run multiple files in a single call:
snowsql -c pp_conn -f file1.sql -f file2.sql -f file3.sql -D db=tbc -D sc=testing;
You might need to put the additional DML/DDL statements in a file as well.
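For example, if the remaining DDL/DML from test.sql were saved in a hypothetical file4.sql, the same pattern would extend to:
snowsql -c pp_conn -f file1.sql -f file2.sql -f file3.sql -f file4.sql -D db=tbc -D sc=testing;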
I have tried referencing the .sql files with !source inside my test.sql file, and it's working:
!source file1.sql;
!source file2.sql;
!source file3.sql;
....
Also, I ran the same command in PowerShell using a single .sql file, and it works.
I want to do a insert overwrite to hdfs folder as csv /textfile.
In hive-site.xml, hive.exec.compress.output is set to true.
I cannot do a set hive.exec.compress.output=false as the code is being executed in a custom build framework.
Is there an option to turn off hive compression like an attribute of the insert overwrite statement?
If you cannot modify properties in hive-site.xml, one option is to set them from the hive CLI or beeline. The change only applies to the current session; if you close the session and start a new one the next day, you will have to do the same again.
As an example:
Log in hive CLI or beeline
$ hive
to see the value of the property:
hive> SET hive.execution.engine;
to overwrite its value for the current session
hive> SET hive.execution.engine=tez;
or in your case
hive> SET hive.exec.compress.output;
hive> SET hive.exec.compress.output=false;
Other commands that can be useful from the Linux shell are:
$ hive -e "SET" > hive_properties
to write a file with all hive properties, or
$ hive -e "SET;" | grep compress
to see a group of hive properties from the console
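If the statement itself is submitted through the hive CLI, the session-level setting can also travel with the same invocation; a sketch with a hypothetical output directory and table name, on Hive versions that support ROW FORMAT DELIMITED in INSERT OVERWRITE DIRECTORY:
$ hive -hiveconf hive.exec.compress.output=false -e "INSERT OVERWRITE DIRECTORY '/tmp/csv_out' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' SELECT * FROM tabname;"
This avoids touching hive-site.xml, though it only helps if the custom framework lets you control the invocation.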
I have a shell file named test.sh which invokes another sql file, 'table.sql'. The 'table.sql' file will create some tables, but I want to create the tables in a particular schema, 'bird'.
Content of the sql file:
create schema bird; --bird should not be hard coded it should be in variable
set search_path to 'bird';
create table bird.sparrow(id int, name varchar2(20));
Content of the shell file:
dbname=$1
cnport=$2
schemaname=$3
filename=$4
gsql -d ${dbname} -p ${cnport} -f ${filename} #[how to give the schema name here so that it can be used in table.sql without hardcoding?]
I will execute my shell file like this
sh test.sh db1 9999 bird table.sql
It is easier to do it in the shell, e.g.:
dbname=$1
cnport=$2
schemaname=$3
filename=$4
gsql -d ${dbname} -p ${cnport} <<EOF
create schema ${schemaname}; -- the schema name comes from the shell variable instead of being hard coded
set search_path to '${schemaname}';
create table ${schemaname}.sparrow(id int, name varchar2(20));
EOF
Otherwise, use psql variables.
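A minimal sketch of that route, using psql syntax (whether gsql accepts the same -v flag and :variable substitution is an assumption worth verifying):
psql -d ${dbname} -p ${cnport} -v schemaname=${schemaname} -f table.sql
and table.sql would then read:
create schema :schemaname;
set search_path to :schemaname;
create table :schemaname.sparrow(id int, name varchar2(20));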
When using the bq command-line tool, can I directly upload the .sql file?
It reports that the specified file is missing when I execute the command.
I have tried this one approach:
while read -r q; do
bq query --project_id=my-proj --dataset_id=sample_db --nouse_legacy_sql "$q"
done < <(grep '^INSERT' sample_db_export.sql)
These PowerShell commands also read lines beginning with INSERT and run the queries using the bq command-line tool.
Select-String -pattern '^INSERT' ./sample_db_export.sql |
%{ bq query --project_id=my-proj --dataset_id=sample_db --nouse_legacy_sql $_.Line }
It's hard to tell what you are asking. If you have the query in a file called sample_db_export.sql, just pipe it as input to bq query. For example,
bq query --use_legacy_sql=false < sample_db_export.sql
I'm trying to load SQL from a file in bash and execute the loaded SQL. The sql file needs to stay versatile, meaning it cannot be altered just to make it easier to run from bash (e.g. by escaping special characters like *).
So I have run into some problems:
If I read my sample.sql
SELECT * FROM SAMPLETABLE
into a variable with
ab=`cat sample.sql`
and execute it
db2 `echo $ab`
I receive an SQL error because the unquoted * has been expanded by the shell into the names of all the files in the directory of sample.sql.
An easy solution would be to escape the * as \*, but I cannot do this because the file needs to stay executable in programs like DB Visualizer etc.
Could someone give me a hint in the right direction?
The DB2 command line processor has options that accept a filename as input, so you shouldn't need to load statements from a text file into a shell variable.
This command will execute all SQL statements in the file, with newline treated as the statement terminator:
db2 -f sample.sql
This command will execute all SQL statements in the file, with semicolon treated as the statement terminator:
db2 -t -f sample.sql
Other useful CLP flags are:
-x : Suppress the column headings
-v : Echo the statement text immediately before execution
-z : Tee a copy of all CLP output to the filename immediately following this flag
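These flags are commonly combined into a single invocation; for instance (sample.log is just an illustrative name):
db2 -tvf sample.sql -z sample.log
runs the semicolon-terminated statements in sample.sql, echoes each statement, and copies all of the output to sample.log.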
Redirect stdin from the file.
db2 < sample.sql
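If you specifically want to keep the statement in a shell variable, quoting it when it is handed to the CLP also stops the shell from expanding the *; a sketch, assuming the file holds a single statement:
ab=$(cat sample.sql)
db2 "$ab"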
In case you have a variable in your script and want the shell to replace it before the statement is executed in DB2, use this approach:
Contents of File.sql:
cat <<xEOF
insert into ${MY_SCHEMA}.${MY_TABLE} values(1,2);
select * from ${MY_SCHEMA}.${MY_TABLE};
xEOF
In command prompt do:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
Then you are all good to get it executed in DB2:
sh File.sql | db2 +p -t
The shell will replace the exported variables and then DB2 will execute the statements.
Hope it helps.