connect to sqlplus only once without writing to a file in a loop - sql

I have a requirement for which I need to write a ksh script that reads command line parameters into arrays and creates DML statements to insert records into an Oracle database. I've created a script as below to achieve this. However, the user invoking the script doesn't have permission to write into the directory where the script has to run. So, is there a way we can fire multiple inserts on the database without connecting to sqlplus multiple times within the loop, and at the same time NOT create a temp sql file as below? Any ideas are highly appreciated. Thanks in advance!
i=0
while (( i < src_tbl_cnt ))
do
    echo "insert into temp_table values ('${src_tbl_arr[$i]}', ${ins_row_arr[$i]}, ${rej_row_arr[$i]});" >> temp_scrpt.sql
    (( i+=1 ))
done
echo "commit; disc; quit" >> temp_scrpt.sql
sqlplus user/pass@db @temp_scrpt.sql

Just use the /tmp directory.
The /tmp directory is guaranteed to be present on any unix-family server. It is there precisely for needs like this. Definitely do something like adding the current process ID to the file name so that multiple users don't step on each other; in the shell, $$ expands to the process ID, so the total name is something like /tmp/temp_$$_scrpt.sql or the like.
When done, be sure to also delete that file--say, in a line right after the sqlplus call. Thus be sure to store the file name in a variable and delete what's in that variable.
It should go without saying, but in a well-run shop:
1) The admins should have put more than enough space in /tmp.
2) The users in the community should not be deleting each other's files in /tmp or overloading it so it runs out of space.
3) The admins should set up a job that deletes files from /tmp after a certain age, so that if your script fails before it deletes the temporary file, it won't be there forever.
So really, this answer is more about /tmp and managing it effectively--but that really is what you need. Using temporary files is a powerful technique, so your design is good. And since users often won't have write rights in a script's directory, /tmp is your answer.
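For illustration, here is a minimal sketch of that approach (using the question's placeholder connect string and array variables, plus the shell's EXIT trap for cleanup):
tmp_sql="/tmp/temp_$$_scrpt.sql"      # $$ = current process ID
trap 'rm -f "$tmp_sql"' EXIT          # delete the temp file even if the script exits early

i=0
while (( i < src_tbl_cnt ))
do
    echo "insert into temp_table values ('${src_tbl_arr[$i]}', ${ins_row_arr[$i]}, ${rej_row_arr[$i]});" >> "$tmp_sql"
    (( i+=1 ))
done
echo "commit; disc; quit" >> "$tmp_sql"

sqlplus user/pass@db @"$tmp_sql"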

Instead of creating a temporary file, you can directly pipe the output of an input-generating block into sqlplus in your shell script.
Example:
{
    echo 'set auto off;'
    for ((i=0; i<100; i++)); do
        echo "insert into itest(i) values ($i);"
    done
    # echo 'rollback;' # for testing
    echo 'commit;'
} | sqlplus -S juser/secret@db > /dev/null
This works with ksh93 and Bash (perhaps even with ksh88, modulo the for ((...)) expression syntax).
The corresponding DDL statement for the test table:
create table itest ( i number(36) ) ;
PS: Even in cases where a temporary file is preferred, redirecting the output of the whole block once is far more efficient than doing an append-style redirect for each line, e.g.:
{ for ((i=0; i<100; i++)); do echo "line $i"; done; echo end; } > foo.tmp

Will the piece of code below keep connecting to sqlplus multiple times, or will it connect only once?
{
    echo 'set auto off;'
    for ((i=0; i<100; i++)); do
        echo "insert into itest(i) values ($i);"
    done
    echo 'rollback;' # for testing
    echo 'commit;'
} | sqlplus -S juser/secret@db > /dev/null

Related

How can I run a .sql file that references multiple .sql files inside it, using Snowsql?

I want to figure out how to run multiple sql files in one go. Suppose I have this test.sql file, which references file1.sql, file2.sql, file3.sql and so on, along with some DML/DDL:
use database &{db};
use schema &{sc};
file1.sql;
file2.sql;
file3.sql;
create table snow_test1
(
name varchar
,add1 varchar
,id number
)
comment = 'this is snowsql testing table' ;
desc table snow_test1;
insert into snow_test1
values('prachi', 'testing', 1);
select * from snow_test1;
Here is what I run in PowerShell:
snowsql -c pp_conn -f ...\test.sql -D db=tbc -D sc=testing;
Is there any way to do this? I know it is possible in Oracle, but I want to do this using snowsql. Please guide me. Thanks in advance!
You can run multiple files in a single call:
snowsql -c pp_conn -f file1.sql -f file2.sql -f file3.sql -D db=tbc -D sc=testing;
You might need to put the additional DML statements in a file.
I have tried referencing the .sql files with !source inside my test.sql file, and it's working:
!source file1.sql;
!source file2.sql;
!source file3.sql;
....
I also ran the same command in PowerShell using the single .sql file, and it is working.
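For reference, a combined test.sql along those lines might look like this (an untested sketch that simply merges the !source calls with the question's DDL/DML):
use database &{db};
use schema &{sc};
!source file1.sql;
!source file2.sql;
!source file3.sql;
create table snow_test1
(
name varchar
,add1 varchar
,id number
)
comment = 'this is snowsql testing table';
desc table snow_test1;
insert into snow_test1 values('prachi', 'testing', 1);
select * from snow_test1;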

Save all my work on SQL in a file?

Actually, I am working with MySQL in a Linux terminal.
I want a way or a command to save all the queries I write, and their outputs, to a file.
Writing every query and redirecting it to a file by hand is very hard and impractical!
If there is any bash script or command for this, it would be helpful.
Yes, the tee command can be used for this purpose.
While logging into mysql you can add this redirection:
mysql -u username -pPassword | tee -a outputfilename
Your whole session will be stored in the file.
This is a bit advanced, but I've just started playing with org-babel, and it's pretty great for SQL.
Set up org-babel in your init.el:
(org-babel-do-load-languages 'org-babel-load-languages
                             '((sql . t)))
(setq org-confirm-babel-evaluate nil
      org-src-fontify-natively t
      org-src-tab-acts-natively t)
And create an org-mode buffer. You can just run M-x org-mode in *scratch* if you want.
Then write your SQL:
#+BEGIN_SRC sql :engine "mysql" :dbhost "db.example.com" :dbuser "jqhacker" :dbpassword "passw0rd" :database "the_db"
show tables;
select * from the_table limit 10;
#+END_SRC
Evaluate it by putting the cursor in the SQL block and typing C-c C-c. The results show up in the buffer. You can write as many source blocks as you like, and evaluate them in any order.
There's a lot more to org-babel: http://orgmode.org/worg/org-contrib/babel/languages/ob-doc-sql.html
I just found that there is a mysql command to save the queries and their output to a file:
mysql> tee filename ;
example :
mysql> tee tmp/output.out;
..logging to file 'tmp/output.out'
Now every query and its output will be saved in the output.out file.
Note: remember to write the file name without quotes.

Cronjob does not execute command line in perl script

I am unfamiliar with the Linux environment, so do pardon me if I make any mistakes; do comment to clarify.
I have created a simple perl script. This script creates a sql file and, as shown, runs every .sql file in the directory through mysql so that the records are inserted into the database.
#!/usr/bin/perl
use strict;
use warnings;
use POSIX 'strftime';

my $SQL_COMMAND;
my $HOST     = "i";
my $USERNAME = "need";
my $PASSWORD = "help";
my $NOW_TIMESTAMP = strftime '%Y-%m-%d_%H-%M-%S', localtime;

open my $out_fh, '>>', "$NOW_TIMESTAMP.sql" or die 'Unable to create sql file';
printf {$out_fh} "INSERT INTO BOL_LOCK.test(name) VALUES ('wow');";

sub insert()
{
    my $SQL_COMMAND = "mysql -u $USERNAME -p'$PASSWORD' ";
    while ( my $sql_file = glob '*.sql' )
    {
        my $status = system( "$SQL_COMMAND < $sql_file" );
        if ( $status == 0 )
        {
            print "pass";
        }
        else
        {
            print "fail";
        }
    }
}

insert();
This works if I execute it while I am logged in as a user (I do not have access to Admin). However, when I set a cronjob to run this file, let's say at 10.08am, by using the line (in crontab -e):
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl > /dev/null 2>&1
I know the script is being executed, as the sql file is created. However, no new rows are inserted into the database after 10.08am. I've searched for solutions, and some have suggested using the DBI module, but it's not available on the server.
EDIT: Didn't manage to solve it in the end. A root/admin account was used to execute the script, so that "solved" the problem.
First things first, get rid of the > /dev/null 2>&1 at the end of your crontab entry (at least temporarily) so you can actually see any errors that may be occurring.
In other words, change it temporarily to something like:
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl >/tmp/myfile 2>&1
Then you can examine the /tmp/myfile file to see what's being output.
The most likely case is that mysql is not actually on the path in your cron job, because cron itself gives a rather minimal environment.
To fix that problem (assuming that's what it is), see this answer, which gives some guidelines on how best to expand the cron environment to give you what you need. That will probably just involve adding the MySQL executable directory to your PATH variable.
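For example, the crontab itself can set PATH above the job line. This is only a sketch; substitute whatever directory "which mysql" reports on your system:
PATH=/bin:/usr/bin:/usr/local/mysql/bin
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl >/tmp/myfile 2>&1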
The other thing you may want to consider is closing the out_fh file before trying to pass it to mysql - if the buffers haven't been flushed, it may still be an empty file as far as other processes are concerned.
The expression glob(".* *") matches all files in the current working directory.
- http://perldoc.perl.org/functions/glob.html
You should not rely on the working directory in a cron job. If you want to use a glob (or any file operation) with a relative path, set the working directory with chdir first.
source: http://www.perlmonks.org/bare/?node_id=395387
So if your working directory is, for example, /home/user, you should insert
chdir('/home/user/');
before the while loop, i.e.:
sub insert()
{
    my $SQL_COMMAND = "mysql -u $USERNAME -p'$PASSWORD' ";
    chdir('/home/user/');
    while ( my $sql_file = glob '*.sql' )
    {
        ...
Replace /home/user with wherever your sql files are being created.
It's better to do as much processing within Perl as possible. It avoids the overhead of spawning a separate shell process and leaves everything under the control of the program, so that you can handle any errors much more simply.
Database access from Perl is done using the DBI module. This program demonstrates how to achieve what your code does with the mysql utility, using DBI instead. As you can see, it's also much more concise.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
my $host = "i";
my $username = "need";
my $password = "help";
my $dbh = DBI->connect("DBI:mysql:database=test;host=$host", $username, $password);
my $insert = $dbh->prepare('INSERT INTO BOL_LOCK.test(name) VALUES (?)');
my $rv = $insert->execute('wow');
print $rv ? "pass\n" : "fail\n";

Looping SQL query in Bash script

I am new to bash scripting and I was wondering if anyone could help me with the following.
I am trying to retrieve the competition name from an Oracle database using competition_id, with the following statement:
select name, competition_type from competitions where competition_id=' ';
However, I want to use a separate text file which has the list of competition_ids I want to identify; I want my script to find the name and type for all my IDs and output the results to a txt file. This is what I have so far:
#!/bin/bash
echo Start Executing SQL commands
cat comps_ids.txt | while read ID
var=$ID
do
sqlplus "details"
<< EOF
select name, competition_type
from competitions
where competition_id=$var;
exit;
EOF
I tried to add a done at the end but I get an "unexpected line ending" error message. Can anyone solve this?
Many thanks in advance :)
I'm not sure what your command line should look like, but it's more like this:
sqlplus "details" <<EOF
select name, competition_type from competitions where competition_id=$ID;
exit;
EOF
If your list of IDs isn't too big, it may be a better idea to build a comma-separated list and run a single query, as sketched below.
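For example, a rough sketch, assuming the IDs are numeric and sit one per line in comps_ids.txt, with "details" standing in for your connect string as in the question:
ids=$(paste -s -d, comps_ids.txt)   # e.g. turns the file's lines into 1,2,3

sqlplus -S "details" <<EOF
select name, competition_type
from competitions
where competition_id in ($ids);
exit;
EOF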
#!/bin/bash

function get_comp () {
    sqlplus -S user/pass@database <<EOF
set pagesize 0
set feedback off
set head off
select name, competition_type
from competitions
where competition_id=$1;
EOF
}
while read -r id ; do
    get_comp "$id"
done
Put it in a file (get_comps.sh), and then call it like this:
$ ./get_comps.sh < comp_ids.txt > text_file_out.txt
-S makes sqlplus quieter.
The other settings make it return just your data, not row headers or anything else.
Of course the database credentials will be stored in your history, and available to other users using 'ps' or 'top'.
This is also horribly inefficient because it connects to the database for each row in your original file. If you have a lot of rows, you might try using python or ruby as their database stuff is pretty easy to use.

Execute SQL from file in bash

I'm trying to load SQL from a file in bash and execute it. The sql file needs to stay versatile, meaning it cannot be altered just to make things easy while being run in bash (e.g. escaping special characters like *).
So I have run into some problems:
If I read my sample.sql
SELECT * FROM SAMPLETABLE
into a variable with
ab=`cat sample.sql`
and execute it with
db2 `echo $ab`
I receive an SQL error, because the unquoted * is expanded by the shell into all the file names in the current directory.
An easy solution would be to replace "*" with "\*". But I cannot do this, because the file needs to stay usable in programs like DB Visualizer etc.
Could someone give me a hint in the right direction?
The DB2 command line processor has options that accept a filename as input, so you shouldn't need to load statements from a text file into a shell variable.
This command will execute all SQL statements in the file, with newline treated as the statement terminator:
db2 -f sample.sql
This command will execute all SQL statements in the file, with semicolon treated as the statement terminator:
db2 -t -f sample.sql
Other useful CLP flags are:
-x : Suppress the column headings
-v : Echo the statement text immediately before execution
-z : Tee a copy of all CLP output to the filename immediately following this flag
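For example, a single invocation combining those flags might look like this (output.log is just an illustrative file name):
db2 -t -v -f sample.sql -z output.log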
Redirect stdin from the file.
db2 < sample.sql
In case you have a variable used in your script and want it replaced by the shell before the SQL is executed in DB2, use this approach:
Contents of File.sql:
cat <<xEOF
insert into ${MY_SCHEMA}.${MY_TABLE} values (1, 2);
select * from ${MY_SCHEMA}.${MY_TABLE};
xEOF
In the command prompt do:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
Then you are all good to get it executed in DB2:
sh File.sql | db2 +p -t
The shell will substitute the exported variables, and then DB2 will execute the resulting statements.
Hope it helps.