Thanks for taking the time to look at this.
I recently started scripting in Bash and wanted to write a small script where user input, depending on the chosen parameter, gets written to an SQLite database. I'm completely stuck, so if you have a minute and an idea, I'd be very grateful for an answer.
My code currently looks something like this:
#!/bin/bash
### checking if database is available etc.
## ...
if [ $# -gt 0 ]; then
    case $1 in
    "--add")
        case $2 in
        "-t")
            sqlite3 DatabaseFile <<'END_SQL'
INSERT INTO databasenaem (tablename) ($3);
END_SQL
            ;;
        # ....
        esac
        ;;
    "--change")
        sqlite3 DatabaseFile <<'END_SQL'
UPDATE tablename SET tablename=$3 where ID=3;
END_SQL
        ;;
    esac
fi
Thank you very much and have a great day.
It's hard to provide a complete answer, as the structure of the tables in the database is not given and there is no sample data to reproduce the problem.
With the little information that is available, there are two likely problems:
From the SQL statements, it looks like the injected parameters are strings. You have to quote them to create valid SQL.
The code quotes the here-document delimiter (<<'END_SQL'). This disables the parameter substitution that is needed to expand $3 and friends.
Also, check the spelling (e.g., 'databasename'), and note that an INSERT needs the VALUES keyword.
Consider the following:
sqlite3 DatabaseFile <<END_SQL
INSERT INTO databasename (tablename) VALUES ('$3');
END_SQL
...
sqlite3 DatabaseFile <<END_SQL
UPDATE tablename SET tablename='$3' where ID=3;
END_SQL
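Pulling those fixes together, here is a minimal sketch of the whole dispatch. The table "entries" and column "title" are made-up placeholders (the real schema wasn't posted), and single quotes in the input are doubled so they can't break the SQL string:
#!/bin/bash
db=DatabaseFile

# Double any single quotes in the user input so it stays a valid SQL literal.
escape() { printf %s "$1" | sed "s/'/''/g"; }

if [ $# -gt 0 ]; then
    case $1 in
    "--add")
        if [ "$2" = "-t" ]; then
            sqlite3 "$db" <<END_SQL
INSERT INTO entries (title) VALUES ('$(escape "$3")');
END_SQL
        fi
        ;;
    "--change")
        sqlite3 "$db" <<END_SQL
UPDATE entries SET title = '$(escape "$3")' WHERE ID = 3;
END_SQL
        ;;
    esac
fi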
Related
I have an SQL query inside a Unix shell script, like:
sqlplus -s user/pass << END_SQL >> outfile.txt
set echo off feedback off heading off tab off;
select .....
from ....
where ...
and ...
and ... ;
END_SQL
If outfile.txt is not empty, which means that the SQL above returned a result, I run an UPDATE statement that changes some DB elements.
Then I need to rerun the same SQL to check whether those DB elements have indeed changed. Is it possible to reuse that SQL without repeating the same SQL code later in the script, and moreover to send the second result to another output file, e.g. outfile2.txt?
You can use the RETURNING ... INTO ... clause inside the script:
UPDATE myTable
SET col1 = <something1>
WHERE col2 = <something2>
RETURNING col3, col1 INTO v_col3, v_col1;
to return the results into the variables v_col3 and v_col1.
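Note that RETURNING ... INTO needs a PL/SQL block in which the receiving variables are declared, so inside a here-document it would look roughly like this (the column names and %TYPE declarations are assumptions based on the snippet above):
sqlplus -s user/pass <<END_SQL
set serveroutput on
DECLARE
  v_col1 myTable.col1%TYPE;
  v_col3 myTable.col3%TYPE;
BEGIN
  UPDATE myTable
     SET col1 = 'new value'
   WHERE col2 = 'some key'
  RETURNING col3, col1 INTO v_col3, v_col1;
  dbms_output.put_line(v_col3 || ' ' || v_col1);
END;
/
END_SQL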
You could put your hairy SELECT query in a file, say select.sql. Then whenever you need to run the SQL, you can just do:
sqlplus -s user/pass @select.sql >> outfile.txt
You can adapt the output file as you wish:
sqlplus -s user/pass @select.sql >> outfile2.txt
NB: you said
If the outfile.txt is not empty, which means that I get a result from the above SQL
You probably want to use > when writing to outfile.txt : >> appends to the file, while > replaces it.
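Putting the pieces together, the flow could look roughly like this (update.sql stands in for the UPDATE step, which wasn't shown in the question):
sqlplus -s user/pass @select.sql > outfile.txt
if [ -s outfile.txt ]; then   # -s: file exists and is not empty
    sqlplus -s user/pass @update.sql
    sqlplus -s user/pass @select.sql > outfile2.txt
fi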
We have a simple SQL script we maintain that sets up a schema and populates a set of test/example values - so it's just create table, create table, insert into table... and we run it with a simple shell script which calls psql.
One of our tables requires files. What I wanted to do was have the files in the same directory as the script and do something like insert into repository (id, picture) values ('first', lo_import('first.jpg')),
but I get errors saying I must be superuser to use server-side lo_import. Is there any way I can achieve this? I have just a .sql file and a bunch of image files - can I import them by running psql against the file?
Running as superuser is not an option.
Using psql, you could write a shell script like:
# \lo_import prints a line like "lo_import 16437"; the cut grabs the OID
oid=$(psql -At -c "\lo_import 'first.jpg'" | tail -1 | cut -d " " -f 2)
psql -Aqt -c "INSERT INTO repository (id, picture) values ('first', $oid)"
Because comments can't have code - thanks to Laurenz, I got it "working" like this:
drop table if exists some_landing_table;
create table some_landing_table( load_time timestamp, filename varchar, data bytea);
\set the_file 'example.jpg';
\lo_import 'example.jpg';
insert into some_landing_table
select now(), 'example.jpg', string_agg(data,decode('','escape') order by pageno)
from
pg_largeobject
where
loid = (select max(loid) from pg_largeobject);
select lo_unlink( max(loid) ) from pg_largeobject;
However, that is ugly for a few reasons:
- I don't seem to be able to get the result of \lo_import into a variable in any way; \lo_import filename works, but there is no select ... into x equivalent to capture the OID it returns.
- I can't use a variable: if I do \lo_import :the_file, it just says example.jpg doesn't exist, even though it works perfectly if I put the filename in directly.
- I can't find a simpler way of producing a zero-length bytea value than decode('','escape').
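On the first point, psql does in fact expose the result: after \lo_import it stores the new OID in the built-in variable :LASTOID. Combined with lo_get() (PostgreSQL 9.4 or later), which returns a large object as bytea, the landing-table insert can be reduced to something like:
\lo_import 'example.jpg'
insert into some_landing_table
select now(), 'example.jpg', lo_get(:LASTOID);
select lo_unlink(:LASTOID);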
This is equivalent to my earlier question here, but for sqlite.
As before, I am trying to do the following using the sqlite3 command line client.
UPDATE my_table set my_column=CONTENT_FROM_FILE where id=1;
I have looked at the documentation on .import, but that seems to be a little heavyweight for what I am trying to do.
What is the correct way to set the value of one field from a file?
The method I seek should not impose constraints on the contents of the file.
Assuming the file content is all UTF-8 text and doesn't have any quote characters that would be misinterpreted, you could do this (assuming a POSIX shell - on Windows, try Cygwin):
$ echo "UPDATE my_table set my_column='" > temp.sql
$ cat YourContentFile >> temp.sql
$ echo "' where id=1;" >> temp.sql
$ sqlite3 your.db
SQLite version 3.7.13 2012-07-17 17:46:21
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .read temp.sql
If the content does have single quotes, escape them first with a simple find-and-replace (you'd need to do that anyway), as sketched below.
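For example, sed can double the quotes on the fly while appending; use this in place of the cat line above:
$ sed "s/'/''/g" YourContentFile >> temp.sql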
hth!
See: http://www.sqlite.org/cli.html#fileio
sqlite> INSERT INTO images(name,type,img)
...> VALUES('icon','jpeg',readfile('icon.jpg'));
In your case:
UPDATE my_table set my_column=readfile('yourfile') where id=1;
If you don't have readfile, you need to .load the module first.
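From a shell this can be a one-liner, assuming a sqlite3 CLI recent enough to ship readfile() (my.db stands in for your database file):
$ sqlite3 my.db "UPDATE my_table set my_column=readfile('yourfile') where id=1;"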
Note
I found that the provided fileio module: http://www.sqlite.org/src/artifact?ci=trunk&filename=ext/misc/fileio.c uses sqlite3_result_blob. When I use it in my project with text columns, it results in Chinese characters being inserted into the table rather than the bytes read from file. This can be fixed by changing it to sqlite3_result_text. See http://www.sqlite.org/loadext.html for instructions on building and loading run-time extensions.
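If rebuilding the extension is not an option, a possible workaround is to let SQLite convert the blob itself with a standard CAST, assuming the file content really is UTF-8 text:
UPDATE my_table set my_column = CAST(readfile('yourfile') AS TEXT) where id=1;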
I am new to Bash scripting and I was wondering if anyone could help me with the following.
I am trying to retrieve the competition name from an Oracle database using competition_id, with the following statement:
select name, competition_type from competitions where competition_id=' ';
However, I want to use a separate text file which has a list of the competition_ids I want to identify; I want my script to find the name and type for all my IDs and output the results to a txt file. This is what I have so far:
#!/bin/bash
echo Start Executing SQL commands
cat comps_ids.txt | while read ID
var=$ID
do
sqlplus "details"
<< EOF
select name, competition_type
from competitions
where competition_id=$var;
exit;
EOF
I tried to add a done at the end, but I get an "unexpected line ending" error message. Can anyone solve this?
Many thanks in advance :)
I'm not sure what your connection details should look like, but the here-document must be attached directly to the sqlplus command, more like:
sqlplus "details" <<EOF
select name, competition_type from competitions where competition_id=$ID;
exit;
EOF
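With that, the whole loop becomes something like the following - note the do directly after the while, no stray assignment between them, and a done to close the loop:
#!/bin/bash
echo "Start executing SQL commands"
while read -r ID; do
    sqlplus "details" <<EOF
select name, competition_type from competitions where competition_id=$ID;
exit;
EOF
done < comps_ids.txt > results.txt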
If your list of IDs isn't too big, it may be a better idea to build a comma-separated list and run a single query, as sketched here:
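ids=$(paste -sd, comps_ids.txt)   # join the lines of the file with commas; assumes numeric IDs
sqlplus "details" <<EOF
select name, competition_type from competitions where competition_id in ($ids);
exit;
EOF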
#!/bin/bash
function get_comp () {
sqlplus -S user/pass@database << EOF
set pagesize 0
set feedback off
set head off
select name, competition_type
from competitions
where competition_id=$1;
EOF
}
# read IDs from standard input, one per line, to match the invocation below
while read -r id ; do
get_comp "$id"
done
Put it in a file (get_comps.sh), and then call it like this
$ ./get_comps.sh < comp_ids.txt > text_file_out.txt
-S makes sqlplus quieter
The other settings make it return just your data, not row headers or anything else.
Of course the database credentials will be visible to other users via 'ps' or 'top', and will end up in your history if you type them on the command line.
This is also horribly inefficient, because it connects to the database once for each row in your original file. If you have a lot of rows, you might try using Python or Ruby, as their database libraries are pretty easy to use.
I need to loop an Oracle sqlplus query using Bash.
My scenario is like this: I have a set of names in a text file and I need to find out details of those names using a sqlplus query.
textfile.txt content:
john
robert
samuel
chris
bash script
#!/bin/bash
while read line
do
/opt/oracle/bin/sqlplus -s user@db/password @query.sql $line
done < /tmp/textfile.txt
sql query: query.sql
set verify off
set heading off
select customerid from customers where customername like '%&1%';
exit
The problem is that when I run the script I get errors like:
SP2-0734: unknown command beginning
"robert..." - rest of line ignored.
Can someone tell me how to solve this?
The way I do this all the time is as follows:
#!/bin/bash
cat textfile.txt | while read Name
do
sqlplus -s userid/password@db_name >> output.log <<EOF
set verify off
set heading off
select customerid from customers where customername like '%${Name}%'
/
exit
EOF
done
Bash will automagically expand ${Name} for each line and place it into the SQL command before sending it to sqlplus.
Do you have set define on? Is your wildcard &? You could check glogin.sql to find out.
And yes, establishing n connections to pass n queries is probably not a good solution. Maybe it's faster for you to develop and you will only do it once, but if not, you should think about crafting a better procedure, such as running everything over a single connection as sketched below.
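For instance, generate one script with all the queries and call sqlplus once (reusing the file names and credentials from the question):
{
echo "set verify off"
echo "set heading off"
while read -r name; do
    echo "select customerid from customers where customername like '%${name}%';"
done < /tmp/textfile.txt
echo "exit"
} > /tmp/all_queries.sql
/opt/oracle/bin/sqlplus -s user@db/password @/tmp/all_queries.sql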