How can I run a .sql file that references multiple .sql files using SnowSQL?

I want to figure out how to run multiple SQL files in one go. Suppose I have this test.sql file, which references file1.sql, file2.sql, file3.sql and so on, along with some DML/DDL:
use database &{db};
use schema &{sc};
file1.sql;
file2.sql;
file3.sql;
create table snow_test1
(
name varchar
,add1 varchar
,id number
)
comment = 'this is snowsql testing table' ;
desc table snow_test1;
insert into snow_test1
values('prachi', 'testing', 1);
select * from snow_test1;
Here is what I run in PowerShell:
snowsql -c pp_conn -f ...\test.sql -D db=tbc -D sc=testing;
Is there any way to do this? I know it is possible in Oracle, but I want to do it using SnowSQL. Please guide me. Thanks in advance!

You can run multiple files in a single call:
snowsql -c pp_conn -f file1.sql -f file2.sql -f file3.sql -D db=tbc -D sc=testing;
You might need to put the additional DML statements in a file of their own.
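For example, a minimal sketch, assuming the remaining DDL/DML from test.sql is moved into a hypothetical file4.sql:
snowsql -c pp_conn -f file1.sql -f file2.sql -f file3.sql -f file4.sql -D db=tbc -D sc=testing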

I have tried referencing each .sql file with !source inside my test.sql file, and it works:
!source file1.sql;
!source file2.sql;
!source file3.sql;
....
I also ran the same command in PowerShell using the single test.sql file, and it works.
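For reference, a sketch of what the reworked test.sql might look like, combining !source with the statements from the question:
use database &{db};
use schema &{sc};
!source file1.sql;
!source file2.sql;
!source file3.sql;
-- remaining DDL/DML (create table, insert, select) as in the original test.sql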

Related

Exporting SQL Table into a CSV file using Windows Batch Script

I am trying to create a Windows batch file to export data from an SQL file to a CSV file.
I have an SQL file in %MYHOME%\database\NET-DB.sql which contains data like this:
NET-DB.sql
insert into net_network (id, a_id, alias, address, domain, mask) values('NET_10.10.1.0_10', 1, 'Local Network', '10.10.1.0', '', '255.255.252.0');
What I have tried so far to export the data from the net_network table into a CSV file from my .bat file is this command:
export.bat
if not exist "%MYHOME%\net\NUL" mkdir "%MYHOME%\net"
COPY net_network TO '%MYHOME%\net\CSV-EXPORT_FILE.csv' DELIMITER ',' CSV HEADER;
pause
Since that does not work for me, what should be the correct approach for this implementation? Any help will be much appreciated.
Use SQLCMD
You need to modify the code to make it work in your environment, but here goes.
if not exist "%MYHOME%\net\NUL" mkdir "%MYHOME%\net"
cd "C:\your path\to\sqlcmd"
sqlcmd -S YourDBServer -d DB_NAME -E -Q "select id, a_id, alias, address, domain, mask from net_network" ^
 -o "CSV-EXPORT-FILE.csv" -s"," -w 255
Some explanations:
-S The database server to connect to.
-d Name of the database to connect to.
-Q Query to run, can also be insert, delete, update etc.
-o select the output file
-s"," separated by comma
-w column width; this has to be at least as big as your widest column's character count.
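Putting it together, a sketch of export.bat under the same assumptions (server and database names are placeholders; set nocount on keeps the trailing row-count message out of the file, and -W trims padding instead of a fixed -w width):
if not exist "%MYHOME%\net\NUL" mkdir "%MYHOME%\net"
sqlcmd -S YourDBServer -d DB_NAME -E -s"," -W -Q "set nocount on; select id, a_id, alias, address, domain, mask from net_network" -o "%MYHOME%\net\CSV-EXPORT_FILE.csv"
pause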

run os command and set output to hive variable

Is it possible to run something like this in Hive CLI?
I am trying to pass file contents as a variable to another query.
set column_list=!cat /home/user/filename.lst ;
create table tabname as select $column_list from ...
If you have a query file, you pass the variables as hiveconf:
hive -hiveconf var1=abcd -f file.txt
Or you can construct your query and then pass it to the Hive CLI using -e:
hive -e "create table ..."
The file filename.lst contains a single line:
line
Make a file test.sh:
temp=$(cat /home/user/filename.lst)
hive -f test.hql -hiveconf var="$temp"
Make another file, test.hql:
create table test(${hiveconf:var} string);
On the terminal:
sh -x test.sh
It will pass the line to test.hql and create a table with line as the column name.
Note: all files should be in the same directory. This script passes only one variable.
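If you need more than one variable, the same pattern extends; a sketch, assuming hypothetical list files file1.lst and file2.lst:
temp1=$(cat /home/user/file1.lst)
temp2=$(cat /home/user/file2.lst)
hive -f test.hql -hiveconf var1="$temp1" -hiveconf var2="$temp2"
Inside test.hql the values are then available as ${hiveconf:var1} and ${hiveconf:var2}.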

Correct Output to File When Using SQLCMD

I've been playing around with SQLCMD to output the results of a .SQL file to a file. It is working, but the actual SQL statements are also being displayed in the output file, which I obviously don't want. I am using a .bat file to run the SQL file. The following command is what I have in my .bat:
SQLCMD -S MyServerName -d MyDBName -i C:\test\start.SQL -o C:\test\out.txt -e
In my start.SQL I have the following:
set nocount on
SELECT '<HEADER>', getDate(), '<HHEADER>'
SELECT UPPER(ACCTNUM),
CONVERT(VARCHAR(10),DATEEXP,126),
UPPER(OPERID),
CONVERT(VARCHAR,CREATE_TMSTMP,120)
FROM RETURNS
WHERE CREATE_TMSTMP < getDate()
AND CREATE_TMSTMP > getDate() - 1
SELECT '<TRAILER>', getDate(), '<HHEADER>'
set nocount off
In the output file the correct information is shown, but it's a little ugly with dashes and such, and it even shows the SQL command that was executed. Is there any way to fix this problem? Am I doing something wrong here? Any help would be greatly appreciated.
Why are you using the -e option?
-e
Writes input scripts to the standard output device (stdout).
The dashes are part of the header. Use -h-1 to specify no header. If that is not acceptable, use something like FINDSTR /B /V "----" to exclude that header line... assuming none of the lines you want to keep start with dashes.
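Putting both suggestions together, a sketch of the adjusted call: -e is dropped so the statements are no longer echoed, and -h-1 suppresses the column headers along with their dashed underlines:
SQLCMD -S MyServerName -d MyDBName -i C:\test\start.SQL -o C:\test\out.txt -h-1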

connect to sqlplus only once without writing to a file in a loop

I have a requirement for which I need to write a ksh script that reads command-line parameters into arrays and creates DML statements to insert records into an Oracle database. I've created a script as below to achieve this. However, the user invoking the script doesn't have permission to write into the directory where the script has to run. So, is there a way to fire multiple inserts on the database without connecting to sqlplus multiple times within the loop, and at the same time NOT create a temp SQL file as below? Any ideas are highly appreciated. Thanks in advance!
i=0
while (( i<$src_tbl_cnt ))
do
echo "insert into temp_table values ('${src_tbl_arr[$i]}', ${ins_row_arr[$i]}, ${rej_row_arr[$i]});" >> temp_scrpt.sql
(( i+=1 ))
done
echo "commit; disc; quit" >> temp_scrpt.sql
sqlplus user/pass@db @temp_scrpt.sql
Just use the /tmp directory.
The /tmp directory is guaranteed to be present on any unix-family server. It is there precisely for needs like this. Definitely add something like the current process ID to the file name so that multiple users don't step on each other; the full name would then be something like /tmp/temp_$$_scrpt.sql (the shell expands $$ to the current process ID).
When done, be sure to also delete that file--say, in a line right after the sqlplus call. Thus be sure to store the file name in a variable and delete what's in that variable.
It should go without saying, but in a well-run shop: 1) the admins should have put more than enough space in /tmp; 2) all the users in the community should not be deleting others' files in /tmp or overloading it so it runs out of space; 3) the admins should set up a job that deletes files from /tmp after a certain age, so that if your script fails before it deletes the temporary file, it won't sit there forever.
So really, this answer is more about /tmp and managing it effectively--but that really is what you need. Using temporary files is a powerful technique, so your design is good. And the reality that users often won't have rights in a directory is common, so /tmp is your answer.
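A sketch of the question's script reworked along these lines (names and credentials as in the question; $$ supplies the process ID, and the file is removed right after the sqlplus call):
tmpfile=/tmp/temp_$$_scrpt.sql
i=0
while (( i<$src_tbl_cnt ))
do
echo "insert into temp_table values ('${src_tbl_arr[$i]}', ${ins_row_arr[$i]}, ${rej_row_arr[$i]});" >> "$tmpfile"
(( i+=1 ))
done
echo "commit; disc; quit" >> "$tmpfile"
sqlplus user/pass@db @"$tmpfile"
rm -f "$tmpfile"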
Instead of creating a temporary file, you can pipe the output of an input-generating block directly into sqlplus in your shell script.
Example:
{
echo 'set auto off;'
for ((i=0; i<100; i++)); do
echo "insert into itest(i) values ($i);"
done
# echo 'rollback;' # for testing
echo 'commit;'
} | sqlplus -S juser/secret@db > /dev/null
This works with Ksh 93 and Bash (perhaps even with Ksh 88, modulo the ((...)) expression syntax).
The corresponding DDL statement for the test table:
create table itest ( i number(36) ) ;
PS: By the way, even when creating a temporary file is preferred, redirecting the output of the whole block once is far more efficient than doing an append-style redirect for each line, e.g.:
{ for ((i=0; i<100; i++)); do echo "line $i"; done; echo end; } > foo.tmp
Will the piece of code below keep connecting to SQL*Plus multiple times, or will it connect only once? (Only once: the whole block runs first, and its combined output is piped into a single sqlplus invocation.)
{
echo 'set auto off;'
for ((i=0; i<100; i++)); do
echo "insert into itest(i) values ($i);"
done
echo 'rollback;' # for testing
echo 'commit;'
} | sqlplus -S juser/secret@db > /dev/null

Execute SQL from file in bash

I'm trying to load SQL from a file in bash and execute it. The SQL file needs to stay versatile, meaning it cannot be altered just to make things easy in bash (e.g. by escaping special characters like *).
So I have run into some problems:
If I read my sample.sql
SELECT * FROM SAMPLETABLE
to a variable with
ab=`cat sample.sql`
and execute it
db2 `echo $ab`
I receive an SQL error, because with the unquoted variable the * has been replaced by all the files in the directory of sample.sql.
An easy solution would be to replace "*" with "\*", but I cannot do this, because the file needs to stay executable in programs like DB Visualizer etc.
Could someone give me a hint in the right direction?
The DB2 command line processor has options that accept a filename as input, so you shouldn't need to load statements from a text file into a shell variable.
This command will execute all SQL statements in the file, with newline treated as the statement terminator:
db2 -f sample.sql
This command will execute all SQL statements in the file, with semicolon treated as the statement terminator:
db2 -t -f sample.sql
Other useful CLP flags are:
-x : Suppress the column headings
-v : Echo the statement text immediately before execution
-z : Tee a copy of all CLP output to the filename immediately following this flag
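For example, to run a semicolon-terminated script with column headings suppressed while keeping a copy of all output (a sketch; output.log is a hypothetical file name):
db2 -t -x -z output.log -f sample.sql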
Redirect stdin from the file.
db2 < sample.sql
In case you have variables in your script and want them replaced by the shell before the SQL is executed in DB2, use this approach:
Contents of File.sql:
cat <<xEOF
insert into ${MY_SCHEMA}.${MY_TABLE} values(1,2);
select * from ${MY_SCHEMA}.${MY_TABLE};
xEOF
In command prompt do:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
Then you are all good to get it executed in DB2:
sh File.sql | db2 +p -t
Executing File.sql with the shell substitutes the exported variables, and DB2 then executes the resulting SQL.
Hope it helps.