I have a shell script which calls some SQL like so
sqlplus system/$password@$instance @./oracle/mysqlfile.sql $var1 $var2 $var3
Then in mysqlfile.sql, I define properties like this:
DEFINE var1=&1
DEFINE var2=&2
DEFINE var3=&3
Later in the file, I call another SQL script:
-- I wish to wrap this in an if statement - pseudo-code:
if (var3 = "true") do the following
@./oracle/myOthersqlfile.sql &&varA &&varB
I am not sure how to implement this, though; any suggestions appreciated.
You could (ab)use substitution variables:
set termout off
column var3_path new_value var3_path
select case
when '&var3' = 'true' then './oracle/myOthersqlfile.sql &&varA &&varB'
else '/dev/null'
end as var3_path
from dual;
set termout on
@&var3_path
The query between the set termout commands - which just hide the output of the query - uses a case expression to pick either your real file path or a dummy file; I've used /dev/null, but you could have a 'no-op' file of your own that does nothing if that's clearer. The query gives that result the alias var3_path, and the column ... new_value line before it turns that into a substitution variable. The @ then expands that variable.
So if var3 is 'true' then that runs:
@./oracle/myOthersqlfile.sql &&varA &&varB
(or, actually, with the varA and varB variables already replaced with their actual values) and if it is false it runs:
@/dev/null
which does nothing, silently.
You can set verify on around that code to see when and where substitution is happening.
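For example, while debugging you could run the same query with verification switched on, so SQL*Plus echoes the old and new text of each substituted line:
set verify on
column var3_path new_value var3_path
select case
when '&var3' = 'true' then './oracle/myOthersqlfile.sql &&varA &&varB'
else '/dev/null'
end as var3_path
from dual;
set verify off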
You can't implement procedural logic in SQL*Plus. You have these options:
Implement the IF-THEN-ELSE logic inside the shell script that runs sqlplus.
Use PL/SQL, but then your SQL script's logic has to run inside an anonymous block, not as an external script.
In your case the easiest way is to change your shell script.
#!/bin/bash
#
# load Oracle environment variables
sqlplus system/$password@$instance @./oracle/mysqlfile.sql $var1 $var2 $var3
# if then
if [ "$var3" = "true" ]
then
    sqlplus system/$password@$instance @./oracle/myOthersqlfile.sql
fi
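If the second script also expects arguments (like varA and varB in the question), you can pass them from the shell in the same way; a small sketch, assuming varA and varB are shell variables set earlier in the script:
sqlplus system/$password@$instance @./oracle/myOthersqlfile.sql "$varA" "$varB"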
You should realise that sqlplus is just a CLI (command line interface), so you can't apply procedural logic to it.
I have no idea what you do in those SQL scripts (running DMLs, creating files, etc.), but the best approach would be to convert them to PL/SQL; then you can apply whatever logic you need.
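For instance, a rough sketch of the PL/SQL route, assuming the work in myOthersqlfile.sql could be moved into a (hypothetical) procedure called my_other_work; &3 is the third argument passed to the script, as in the question:
begin
  if '&3' = 'true' then
    -- hypothetical procedure standing in for whatever myOthersqlfile.sql does
    my_other_work('&varA', '&varB');
  end if;
end;
/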
Related
I want to execute an SQL file with sqlplus, but when I try to in PowerShell ISE the output just shows the sqlplus usage text.
The code I used in the example in ISE is:
sqlplus "username/password#database #C:Path\To\file.sql"
But when I run this code in CMD or regular PowerShell it works without problems; the result is just a dummy select 1 from dual.
I have tried to put the path in single quotes (') with and without the @ (inside and outside of the quotes), but nothing is working. I also didn't find much when googling the issue.
I also tried just to connect, and that works without problems, although I can't type anything after it connects.
Because you are doing it wrong. The real syntax is:
sqlplus username/password@TnsAlias '@c:\path\to\DBscript.sql' | Out-File 'c:\temp\sql-output.txt'
I think you used the quote (') too early.
Or try this without Out-File, storing the output in a variable:
$output = sqlplus username/password@TnsAlias '@c:\path\to\DBscript.sql'
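If ISE is mangling the single quoted argument, it may also help to pass the logon string and the script reference as two separate quoted arguments; a sketch using the same placeholder names as above:
$output = sqlplus -s "username/password@TnsAlias" "@c:\path\to\DBscript.sql"
$output | Out-File 'c:\temp\sql-output.txt'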
The case is that I have an SQL query inside a Unix script, like:
sqlplus -s user/pass << END_SQL1 >> outfile.txt
set echo off feedback off heading off tab off;
select .....
from ....
where ...
and ...
and ... ;
END_SQL1
If outfile.txt is not empty, which means that I get a result from the above SQL, then I run an update SQL that should change some DB elements.
Then I need to reuse the same SQL above to check whether the DB elements I wanted have indeed changed. So, is it possible to reuse this same SQL WITHOUT including the same SQL code again later in the script, running it again and putting the result in another output file, e.g. outfile2.txt?
You can use the RETURNING ... INTO ... clause inside the script:
UPDATE myTable
SET col1 = <something1>
WHERE col2 = <something2>
RETURNING col3, col1 INTO v_col3, v_col1;
to return the results into the variables v_col3 and v_col1.
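Note that RETURNING ... INTO needs a PL/SQL context, so inside the sqlplus heredoc it would sit in an anonymous block; a minimal sketch, with the table and column names from the snippet above and made-up literal values:
set serveroutput on
declare
  v_col1 myTable.col1%type;
  v_col3 myTable.col3%type;
begin
  update myTable
     set col1 = 'something1'
   where col2 = 'something2'
   returning col3, col1 into v_col3, v_col1;
  -- this form expects the update to hit exactly one row;
  -- use RETURNING ... BULK COLLECT INTO if several rows can match
  dbms_output.put_line('col3=' || v_col3 || ', col1=' || v_col1);
end;
/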
You could put your hairy SELECT query in a file, say select.sql. Then whenever you need to run the SQL, you could just do:
sqlplus -s user/pass @select.sql >> outfile.txt
You can adapt the output file as you wish:
sqlplus -s user/pass @select.sql >> outfile2.txt
NB: you said
If the outfile.txt is not empty, which means that I get a result from the above SQL
You probably want to use > when writing to outfile.txt : >> appends to the file, while > replaces it.
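Putting it together, a sketch of the whole flow (update.sql is a placeholder for whatever file holds your update statement):
#!/bin/bash
# run the shared query once
sqlplus -s user/pass @select.sql > outfile.txt

# only update (and re-check) if the query returned any rows
if [ -s outfile.txt ]; then
    sqlplus -s user/pass @update.sql
    # same query again, result into a second file
    sqlplus -s user/pass @select.sql > outfile2.txt
fi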
Does anyone know if what I'm trying to do in the code below is possible and, if so, what the syntax is? The issue is around the connect call: the username doesn't seem to be generated correctly. The commented-out connect call is another variant I tried.
-- myscript.sql
-- #params:
-- 1 - Oracle database name eg. localhost
-- 2 - Site (site01, site02, site03)
connect systemname_%2_admin/mypassword@&1;
--connect "systemname_" || "%2" || "_admin"/mypassword@&1;
begin
--execution code here.
end;
/
disconnect;
NOTE: Call does need to be this way as this is going to be an automated script doing different things for different usernames.
Your arguments will be stored in substitution variables 1, 2 and so on.
You access them in your script with &1, &2 (so forget about %2, it's meaningless).
Now your problem is that &2_admin looks to sqlplus like a substitution variable named 2_admin, so you just need to add a dot (.) after the 2. The dot is the character that separates the name of a substitution variable from what follows.
Your connect will look like:
connect systemname_&2._admin/mypassword@&1
(With no ;: this is a SQL*Plus command, not a SQL statement.)
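Since the connect happens inside the script, you would typically start SQL*Plus with /nolog so it doesn't prompt for a logon first; for example, with the argument values from your comments:
sqlplus /nolog @myscript.sql localhost site01
which makes the script connect as systemname_site01_admin@localhost.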
In booggie 2, how can I execute a script (programmed in Python) out of a rule and pass the script's return value to the rule?
Please note: the booggie project does not exist anymore, but it led to the development of Soley Studio, which covers the same functionality.
exec is the command to execute rules and scripts out of a rule. It is followed by parentheses containing a sequence composed of rules and scripts.
There is a strict order in which the application sequence in a rule is executed (cf. Is there a fixed order of how the right-hand side of a rule is executed in GrGen.NET?). exec is always the last statement that's executed (before return, of course). Hence, we can't pass a variable from exec to eval. Therefore, variables resulting from the execution of scripts in exec have to be assigned to node/edge attributes within the exec statement. To do so, we use curly brackets and write the same code as we would in an eval statement.
In the following example, a script is called that returns the highest value of three given values (a.value, b.value, c.value) and stores it in a node's attribute (d.value).
exec ((max_value) = getMaxValue(a.value, b.value, c.value) ;>
{
d.value = max_value;
}
);
I'm trying to load SQL from a file in bash and execute the loaded SQL. The SQL file needs to stay versatile, meaning it cannot be altered just to make it easier to run from bash (e.g. by escaping special characters like *).
So I have run into some problems:
If I read my sample.sql
SELECT * FROM SAMPLETABLE
to a variable with
ab=`cat sample.sql`
and execute it
db2 `echo $ab`
I receive an SQL error, because the * has been replaced by all the file names in the directory of sample.sql.
An easy solution would be to replace * with \*. But I cannot do this, because the file needs to stay usable in programs like DB Visualizer etc.
Could someone give me a hint in the right direction?
The DB2 command line processor has options that accept a filename as input, so you shouldn't need to load statements from a text file into a shell variable.
This command will execute all SQL statements in the file, with newline treated as the statement terminator:
db2 -f sample.sql
This command will execute all SQL statements in the file, with semicolon treated as the statement terminator:
db2 -t -f sample.sql
Other useful CLP flags are:
-x : Suppress the column headings
-v : Echo the statement text immediately before execution
-z : Tee a copy of all CLP output to the filename immediately following this flag
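The flags can be combined; for example, to run semicolon-terminated statements, echo each one, and keep a log (run.log is just a placeholder name):
db2 -tvf sample.sql -z run.log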
Redirect stdin from the file.
db2 < sample.sql
In case you have a variable in your script and want it replaced by the shell before the SQL is executed in DB2, use this approach:
Contents of File.sql:
cat <<xEOF
insert into ${MY_SCHEMA}.${MY_TABLE} values(1,2);
select * from ${MY_SCHEMA}.${MY_TABLE};
xEOF
In command prompt do:
export MY_SCHEMA='STAR'
export MY_TABLE='DIMENSION'
Then you are all good to get it executed in DB2:
sh File.sql | db2 +p -t
The shell will replace the environment variables and then DB2 will execute the resulting statements.
Hope it helps.
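To see what DB2 will actually receive, you can run the shell part on its own; with the exports above, sh File.sql prints:
insert into STAR.DIMENSION values(1,2);
select * from STAR.DIMENSION;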