Store my "Sybase" query result /output into a script variable - sql

I need a variable to keep the results retrieved from a query (Sybase) that's in a script.
I have built the following script; it works fine and I get the desired result when I run it.
Script: EXECUTE_DAILY:
isql -U database_dba -P password <<EOF!
select the_name from table_name where m_num="NUMB912" and date="17/01/2019"
go
quit
EOF!
echo "All Done"
Output:
"EXECUTE_DAILY" 97 lines, 293 characters
user#zp01$ ./EXECUTE_DAILY
the_name
-----------------------------------
NAME912
(1 row affected)
But now I would like to keep the output (the_name: NAME912) in a variable.
So far this is basically what I'm trying, with no success:
variable=$(isql -U database_dba -P password -se "select the_name from table_name where m_num="NUMB912" and date="17/01/2019" ")
But it's not working; I can't save NAME912 in a variable.

You need to parse the output for the desired string/piece-of-data that you wish to store in your variable. I tend to make my life a bit easier by making sure I can easily/quickly search/parse out what I want.
Keeping a few issues in mind ...
I tend to use isql -s"|" -w10000 to ensure (most of the time) that a) the result set has all columns delimited with the pipe ('|') and b) a single row of data does not span multiple rows; the pipe delimiter makes it easier to parse out columns that may contain white space; obviously (?) use a different delimiter if a pipe may be part of your actual data
to make parsing of the isql output a bit easier I tend to add a unique, grep-able (literal) string to the rows that I'm looking to search/parse
some databases (eg, SQLAnywhere, Oracle) tend to mimic a literal value as the column header if said literal string has not been assigned an explicit alias/header; this means that if you do a simple search on your literal string then you'll get a match for the result set header as well as the actual data row
I tend to capture all isql output to a temporary file; this allows for easier follow-on processing, eg, error checking, data parsing, dumping contents to a logfile, etc
So, with the above in mind my code typically looks something like:
$ outfile=/tmp/.$$.isql.outfile
$ isql -s"|" -w10000 -U database_dba -P password <<-EOF > ${outfile} 2>&1
-- 'GREP'||'ME' ensures that 'GREPME' only shows up in the data row
select 'GREP'||'ME',the_name
from table_name
where m_num = "NUMB912"
and date = "17/01/2019"
go
EOF
$ cat ${outfile}
... snip ...
|'GREP'||'ME'|the_name | # notice the default column header = 'GREP'||'ME' which won't match my search for 'GREPME'
|------------|----------|
|GREPME |NAME912 | # this is the line I want to search/parse
... snip ...
$ read -r namevar < <(egrep GREPME ${outfile} | awk -F"|" '{print $3}')
$ echo ${namevar}
NAME912
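For the error checking mentioned above, a minimal sketch (the 'Msg' marker is an assumption about how your Sybase errors look, e.g. "Msg 102, Level 15, State 1"; adjust it for your server):
# abort if isql reported an error in the captured output
if grep -q "Msg" "${outfile}"; then
    echo "isql reported an error:" >&2
    cat "${outfile}" >&2
    exit 1
fi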

Related

Reuse the same SQL clause in a script

The case is that I have an SQL clause inside a Unix script, like:
sqlplus -s user/pass << END_SQL >> outfile.txt
set echo off feedback off heading off tab off;
select .....
from ....
where ...
and ...
and ... ;
END_SQL
If outfile.txt is not empty, which means that I got a result from the above SQL, then I run an update SQL that changes some DB elements.
Then I need to reuse the same SQL as above to check whether those DB elements have indeed changed. So, is it possible to reuse this same SQL, but WITHOUT including the same SQL code again later in the script, and instead run it again and, moreover, put the result in another output file, e.g. outfile2.txt?
You can use the RETURNING ... INTO ... clause inside the script:
UPDATE myTable
SET col1 = <something1>
WHERE col2 = <something2>
RETURNING col3, col1 INTO v_col3, v_col1;
to return the results into the variables v_col3 and v_col1.
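Note that RETURNING ... INTO is typically used inside a PL/SQL block. A minimal sketch of how that could be embedded in the script (assuming the UPDATE matches a single row; myTable and the column names come from the snippet above, and the literal values are placeholders):
sqlplus -s user/pass <<'END_SQL'
set serveroutput on
DECLARE
    v_col1 myTable.col1%TYPE;
    v_col3 myTable.col3%TYPE;
BEGIN
    UPDATE myTable
       SET col1 = 'something1'
     WHERE col2 = 'something2'
    RETURNING col3, col1 INTO v_col3, v_col1;
    dbms_output.put_line('col3=' || v_col3 || ' col1=' || v_col1);
END;
/
END_SQL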
You could put your hairy SELECT query in a file, say select.sql. Then whenever you need to run the SQL, you can just do:
sqlplus -s user/pass @select.sql >> outfile.txt
You can adapt the output file as you wish:
sqlplus -s user/pass @select.sql >> outfile2.txt
NB: you said
If the outfile.txt is not empty, which means that I get a result from the above SQL
You probably want to use > when writing to outfile.txt: >> appends to the file, while > replaces it.
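Putting both points together, a minimal sketch of the whole flow (select.sql and update.sql are hypothetical file names; credentials are placeholders):
#!/bin/sh
# run the check query; '>' truncates the file, so the emptiness test is meaningful
sqlplus -s user/pass @select.sql > outfile.txt

if [ -s outfile.txt ]; then                           # -s: file exists and is non-empty
    sqlplus -s user/pass @update.sql                  # run the update
    sqlplus -s user/pass @select.sql > outfile2.txt   # same query, second output file
fi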

How can I insert the content of the variable into single quotes inside the INSERT INTO command?

I created a text file named "test.txt", whose content is the first part below. I also created a script named insert.sh.
I run the command with ./insert.sh test.txt.
If the words/strings are in single quotes, it will insert the words into the columns; it will also insert numbers without single quotes. The CSV that I will eventually use won't have single quotes, and I don't want to change the data.
How can I insert the content of the variable into single quotes inside the INSERT INTO command?
I am using psql.
Text file, test.txt
'one','ten','hundred'
'two','twenty','twohundred'
Script, insert.sh:
#!/bin/bash
while read cell
do
name=$cell
echo "$cell"
####Insert from txt into table####
sudo -u username -H -- psql -d insert_test -c "
INSERT INTO first (ten, hundred, thousend) VALUES ($cell);
"
done < $1
What I want is something like this:
INSERT INTO first (ten, hundred, thousend) VALUES (INSERT" $cell "QUOTES);
UPDATE:
I changed the code and added the single quotes around $cell, as you suggested.
#!/bin/bash
while read cell
do
name=$cell
echo "$cell"
####Insert from txt into table####
sudo -u username -H -- psql -d insert_test -c "
INSERT INTO first (ten, hundred, thousend) VALUES ('$cell');
"
done < $1
and I removed the quotes from the text file, since the CSV file that I want to use later won't have any single quotes.
New text file:
one,ten,hundred
two,twenty,twohundred
and I'm getting the error:
one,two,three
ERROR: INSERT has more target columns than expressions
LINE 2: INSERT INTO first (ten, hundred, thousend) VALUES ('one,two,...
You need to set the $IFS (Internal Field Separator) variable so that Bash knows how to split each line into fields. Since you are using a CSV-like file, the separator is the , character, i.e. IFS=,. Note that if you need to do other things in your script, you should restore $IFS to its original value afterwards, so store it in a temporary variable first, e.g. OLDIFS=$IFS.
read reads the entire line and splits it into values according to $IFS, so you need to supply as many variables as there are words on the line: if your line has 3 words, give read 3 variables, e.g. for a file line foo,baz,bar use read -r word1 word2 word3. If you don't give enough variables, read stores the rest of the line in the last variable; that is your problem.
So, a solution to your problem would be:
#!/bin/bash
OLDIFS=$IFS # Save the original value if you need to do more stuff later.
while IFS=, read -r word1 word2 word3
do
sudo -u username -H -- psql -d insert_test -c \
"INSERT INTO first (ten, hundred, thousend) VALUES ('${word1}', '${word2}', '${word3}');"
done < "$1"
IFS=$OLDIFS # Restore the original value (see line 2).
# ...
NOTE: This is insecure, because it leads easily to SQL injection. If you use this, only use it against a local database that doesn't hold any sensitive data.
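Since the input really is a CSV, a safer alternative is to let psql parse the file itself with \copy, which avoids building SQL strings from untrusted input entirely. A sketch, reusing the database, table, and column names from the question (assumes the file is readable by that user):
#!/bin/bash
# psql reads and quotes the CSV values itself, so no SQL is built from the data
sudo -u username -H -- psql -d insert_test \
    -c "\copy first (ten, hundred, thousend) FROM '$1' WITH (FORMAT csv)"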

Executing the SQL from shell scripting

I have a table called query_master which has 4 columns, and the 4th column holds a SQL query as its value. In total there are 5 entries in the query table.
Table Structure:
S.No  Key  Title  Query
1     100  EG     select * from dual
Now my objective is: I have to fetch the SQL queries from query_master using a shell script and execute them. The output of each SQL query should be written to a separate log file, and the log filename should be equal to the title.
Can you please help in achieving this scenario, using stored procedures or stored functions if that would be more helpful.
I need to achieve this using shell scripting.
Try this, assuming you're using mysql:
awk -F'\t' 'NR!=1 {system("mysql -u user -p -e \"" $4 "\" database")}' file
Where file is the file containing the table, user is the user, and database is the database. Alternatively, set these as variables instead of hard-coding them, like this:
awk -F'\t' -v db="database" -v user="user" 'NR!=1 {system("mysql -u " user " -p -e \"" $4 "\" " db)}' file
Make a shell script that accepts a SQL statement from the command line (or an input file, or stdin) and does all the plumbing for you: exporting ORACLE_HOME, tnsnames, username, password, redirecting output, calling sqlplus, output formatting, deleting column headers, and other sqlplus settings.
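A minimal sketch of such a wrapper (every value below is a placeholder to adapt: Oracle path, credentials, formatting settings):
#!/bin/sh
# magicsql.sh -- run the SQL statement passed as $1 and print bare rows
export ORACLE_HOME=/opt/oracle/product/current   # placeholder: your install
export PATH=$ORACLE_HOME/bin:$PATH

# note: the statement is passed without a trailing semicolon
sqlplus -s user/pass <<EOF
set pagesize 0 feedback off heading off echo off
${1};
EOF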
With your magicsql.sh (after testing), aim for a solution like
magicsql.sh "select key, query from query_master order by key" | while read key query; do
magicsql.sh "${query}" > /tmp/${key}.out
done

Generate a Properties File using Shell Script and Results from a SQL Query

I am trying to create a properties file like this...
firstname=Jon
lastname=Snow
occupation=Nights_Watch
family=Stark
...from a query like this...
SELECT
a.fname as firstname,
a.lname as lastname,
b.occ as occupation...
FROM
names a,
occupation b,
family c...
WHERE...
How can I do this? I am only aware of using spool to write a CSV file, which won't work here.
These property files will be picked up by shell scripts to run automated tasks. I am using Oracle DB.
Perhaps something like this?
psql -c 'select id, name from test where id = 1' -x -t -A -F = dbname -U dbuser
Output would be like:
id=1
name=test1
(For the full list of options: man psql.)
Since you mentioned spool, I will assume you are running on Oracle. This should produce a result in the desired format, which you can spool straight away.
SELECT
'firstname=' || firstname || CHR(10) ||
'lastname=' || lastname || CHR(10) -- and so on for all fields
FROM your_tables;
The same approach should be possible with all database engines, as long as you know the correct incantation for a literal newline and the syntax for string concatenation.
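For example, wrapped in a small shell snippet that spools the result straight to a file (app.properties is a hypothetical output name; your_tables is from the query above):
sqlplus -s user/pass <<'EOF'
set pagesize 0 feedback off heading off echo off trimspool on
spool app.properties
SELECT 'firstname=' || firstname || CHR(10) ||
       'lastname='  || lastname
FROM your_tables;
spool off
EOF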
It is possible to do this from your command-line SQL client, but as STTLCU notes it might be better to have the query output something "standard" (like CSV) and then transform the results with a shell script. Otherwise, because a lot of the features you would use are not part of any SQL standard, they depend on the database server and client application. Think of this step as sort of the obverse of ETL, where you clean up the data you "unload" so that it is useful for some other application.
There are certainly ways to build this into your query application: e.g. if you use something like Perl's DBI::Shell as your client (which allows you to connect to many different servers using the DBI module) you can jazz up your output in various ways. But here you'd probably be best off if you could send the query output to a text file and run it through awk.
Having said that ... here's how the PostgreSQL client can do what you want. Notice how the commands that set up the formatting are not SQL but specific to the client.
~/% psql -h 192.168.2.69 -d cropdusting -u stubblejumper
psql (9.2.4, server 8.4.14)
WARNING: psql version 9.2, server version 8.4.
Some psql features might not work.
You are now connected to database "cropdusting" as user "stubblejumper".
cropdusting=# \pset border 0 \pset format unaligned \pset t \pset fieldsep =
Border style is 0.
Output format is unaligned.
Showing only tuples.
Field separator is "=".
cropdusting=# select year,wmean_yld from bckwht where year=1997 AND freq > 13 ;
1997=19.9761904762
1997=14.5533333333
1997=17.9942857143
cropdusting=#
With the psql client, the \pset command sets options affecting the output of query result tables. You can probably figure out which option is doing what. If you want to do this using your own SQL client, tell us which one it is, or read through its manual page for tips on how to format the output of your queries.
My answer is very similar to the two already posted for this question, but I'll try to explain the options and provide a precise answer.
When using Postgres, you can use the psql command-line utility to get the intended output:
psql -F = -A -x -X <other options> -c 'select a.fname as firstname, a.lname as lastname from names as a ... ;'
The options are:
-F : Use '=' sign as the field separator, instead of the default pipe '|'
-A : Do not align the output; so there is no space between the column header, separator and the column value.
-x : Use expanded output, so column headers are on left (instead of top) and row values are on right.
-X : Do not read $HOME/.psqlrc, as it may contain commands/options that can affect your output.
-c : The SQL command to execute
<other options> : Any other options, such as connection details, database name, etc.
You have to choose whether you want to maintain such a file from the shell or from PL/SQL. Both solutions are possible and both are correct.
Because Oracle has to read from and write to the file, I would do it from the database side.
You can write data to file using UTL_FILE package.
DECLARE
fileHandler UTL_FILE.FILE_TYPE;
BEGIN
fileHandler := UTL_FILE.FOPEN('test_dir', 'test_file.txt', 'W');
UTL_FILE.PUTF(fileHandler, 'firstname=Jon\n');
UTL_FILE.PUTF(fileHandler, 'lastname=Snow\n');
UTL_FILE.PUTF(fileHandler, 'occupation=Nights_Watch\n');
UTL_FILE.PUTF(fileHandler, 'family=Stark\n');
UTL_FILE.FCLOSE(fileHandler);
EXCEPTION
WHEN utl_file.invalid_path THEN
raise_application_error(-20000, 'ERROR: Invalid PATH FOR file.');
END;
Example's source: http://psoug.org/snippet/Oracle-PL-SQL-UTL_FILE-file-write-to-file-example_538.htm
At the same time, you can read from the file using an Oracle external table.
CREATE TABLE parameters_table
(
parameters_coupled VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY test_dir
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS
(
parameters_coupled VARCHAR2(4000)
)
)
LOCATION ('test_file.txt')
);
At this point you can write data to your table, which has one column holding the coupled parameter and value, e.g. 'firstname=Jon'.
You can read it from Oracle.
You can read it from any shell script, because it is plain text.
Then it is just a matter of a query, e.g.:
SELECT MAX(CASE WHEN INSTR(parameters_coupled, 'firstname=') = 1 THEN REPLACE(parameters_coupled, 'firstname=') ELSE NULL END) AS firstname
, MAX(CASE WHEN INSTR(parameters_coupled, 'lastname=') = 1 THEN REPLACE(parameters_coupled, 'lastname=') ELSE NULL END) AS lastname
, MAX(CASE WHEN INSTR(parameters_coupled, 'occupation=') = 1 THEN REPLACE(parameters_coupled, 'occupation=') ELSE NULL END) AS occupation
FROM parameters_table;
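And on the shell side, reading the same file back is a short loop; a sketch, assuming the test_file.txt produced above holds one key=value pair per line:
#!/bin/sh
# split each line on '=' into a key and a value
while IFS='=' read -r key value; do
    printf '%s is %s\n' "$key" "$value"
done < test_file.txt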

How to delete last row in output file generated by nzsql

I am trying to delete the last row in the file generated by nzsql. Please find the below query:
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" > abc.out
When I execute this query, the output is generated and stored in abc.out. It includes both the header columns and some time information at the bottom, but I don't need that bottom metadata and want to keep only my header columns. How can I do this using only nzsql? Please help me. Thanks in advance.
Use the -r flag in the nzsql command to avoid getting that row [assuming the metadata referred to in the question is the row-count summary line, e.g.: (3 rows)].
-r Suppresses the row count that is displayed at the end of the SQL output.
reference: http://pic.dhe.ibm.com/infocenter/ntz/v7r0m3/index.jsp?topic=%2Fcom.ibm.nz.adm.doc%2Fr_sysadm_nzsql_command.html
Why don't you just pipe the output through a Unix command to remove it? I think something like this will work:
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" | sed '$d' > abc.out
This seems to be a recommended solution for getting rid of the last line (although ed, gawk, and other tools can handle it too).
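If GNU coreutils is available, head can do the same (a sketch; head -n -1 is a GNU extension that drops the last line):
nzsql -A -c "SELECT * FROM AM_MAS_DIVISION_DIM" | head -n -1 > abc.out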