We maintain a simple SQL script that sets up the schema and populates a set of test/example values - so it's just create table, create table, insert into table... - and we run it with a simple shell script which calls psql.
One of our tables requires files. What I wanted to do was keep the image files in the same directory as the script and do something like insert into repository (id, picture) values ('first', lo_import('first.jpg')).
But I get an error saying I must be superuser to use server-side lo_import. Is there any way I can achieve this with just a .sql file and a bunch of image files, so that running psql against the file imports them?
Running as superuser is not an option.
Using psql, you could write a shell script like
oid=`psql -At -c "\lo_import 'first.jpg'" | tail -1 | cut -d " " -f 2`
psql -Aqt -c "INSERT INTO repository (id, picture) values ('first', $oid)"
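If there are several image files, a small wrapper loop along the same lines might do - a minimal sketch only; deriving the id from the file name and relying on the usual PG* environment variables for the connection are assumptions:
#!/bin/sh
for f in *.jpg; do
    id="${f%.jpg}"    # e.g. first.jpg -> 'first'
    oid=$(psql -At -c "\lo_import '$f'" | tail -1 | cut -d " " -f 2)
    psql -Aqt -c "INSERT INTO repository (id, picture) VALUES ('$id', $oid)"
done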
Since comments can't contain code - thanks to Laurenz, I got it "working" like this:
drop table if exists some_landing_table;
create table some_landing_table( load_time timestamp, filename varchar, data bytea);
\set the_file 'example.jpg';
\lo_import 'example.jpg';
insert into some_landing_table
select now(), 'example.jpg', string_agg(data, decode('', 'escape') order by pageno)
from pg_largeobject
where loid = (select max(loid) from pg_largeobject);
select lo_unlink( max(loid) ) from pg_largeobject;
However, that is ugly for a few reasons:
I don't seem to be able to get the result of \lo_import into a variable in any way: even though select \lo_import filename works, select \lo_import filename into x doesn't.
I can't use a variable - if I do \lo_import :the_file it just says example.jpg doesn't exist, even though putting the name in directly works perfectly.
I can't find a simpler way of providing a zero-length bytea value than decode('','escape').
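On the first point, psql stores the OID returned by \lo_import in its LASTOID variable (at least in the versions I have used), and lo_get() (available from PostgreSQL 9.4) returns a large object's contents as bytea, so something like this may avoid both the max(loid) lookup and the string_agg trick - a sketch, not verified on every version:
psql <<'SQL'
\lo_import 'example.jpg'
\set img_oid :LASTOID
insert into some_landing_table
select now(), 'example.jpg', lo_get(:img_oid);
select lo_unlink(:img_oid);
SQL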
I have a table called query_master with 4 columns, where the 4th column contains an SQL query as its value. In total there are 5 entries in the query table.
Table structure:
S.No --> Key --> Title --> Query
1 --> 100 --> EG --> select * from dual
My objective is to fetch the SQL queries from query_master using a shell script and execute them. The output of each SQL query should be written to a separate log file, with the log file name equal to the value of the Title column.
Can you please help me achieve this scenario using stored procedures or stored functions, which would be more helpful for me?
I need to achieve this using shell scripting.
Try this, assuming you're using mysql:
awk -F'\t' 'NR!=1 {system("mysql -u user -p -e \"" $4 "\" database")}' file
Where file is the file containing the table, user is the user and database is the database. Alternatively, set these as variables instead of hard-coding them, like this:
awk -F'\t' -v db="database" -v user="user" 'NR!=1 {system("mysql -u " user " -p -e \"" $4 "\" " db)}' file
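If you also want each query's output in a log file named after the Title column, as the question describes, the same idea can redirect per row - a sketch, again with placeholder credentials and assuming a tab-separated file with a header row:
awk -F'\t' -v db="database" -v user="user" \
    'NR!=1 {system("mysql -u " user " -p -e \"" $4 "\" " db " > " $3 ".log")}' file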
Make a shell script that accepts a SQL statement from the command line (or an input file, or stdin) and does everything for you, like exporting ORACLE_HOME, setting up tnsnames, username, password, redirecting output, calling sqlplus, output formatting, deleting column headers and other sqlplus settings.
With your magicsql.sh (after testing), aim for a solution like
magicsql.sh "select key, query from query_master order by key" | while read key query; do
magicsql.sh "${query}" > /tmp/${key}.out
done
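For illustration, a bare-bones magicsql.sh could look something like this - only a sketch: the ORACLE_HOME path and the DB_USER/DB_PASS/DB_TNS variables are placeholders you would set for your environment:
#!/bin/sh
# magicsql.sh - run the SQL statement given as $1 and print bare rows.
export ORACLE_HOME=/path/to/oracle          # placeholder: your Oracle install
export PATH="$ORACLE_HOME/bin:$PATH"

sqlplus -S "$DB_USER/$DB_PASS@$DB_TNS" <<EOF
set pagesize 0 feedback off heading off trimspool on
$1;
exit
EOF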
Let us say I have thousands of comma-separated text files with 1050 columns each (no header). Is there a way to concatenate and import all the text files into one table in one SQLite database? (Ideally I'd use R and sqldf to communicate with SQLite.)
I.e.,
Each file is called table1.txt, table2.txt, table3.txt, etc.; they have different numbers of rows, but the same column types, and different unique IDs in the ID column (the first column of each file).
table1.txt
id1,20.3,1.2,3.4
id10,2.1,5.2,9.3
id21,20.5,1.2,8.4
table2.txt
id2,20.3,1.2,3.4
id92,2.1,5.2,9.3
table3.txt
id3,1.3,2.2,5.4
id30,9.1,4.4,9.3
The real example is pretty much the same but with more columns and more rows. As you can see, the first column in each file contains a unique ID.
Now I'd like my new table mysupertable, in the database super.db, to be (and also be uniquely indexed):
super.db - name of the DB
mysupertable - name of the table in the DB
myids,v1,v2,v3
id1,20.3,1.2,3.4
id10,2.1,5.2,9.3
id21,20.5,1.2,8.4
id2,20.3,1.2,3.4
id92,2.1,5.2,9.3
id3,1.3,2.2,5.4
id30,9.1,4.4,9.3
For reference, I am using SQLite3, and I am looking for a SQL command that I can run in the background without logging interactively into the sqlite3 interpreter, i.e. something like IMPORT bla INTO, ...
I could try in unix:
cat *.txt > allmyfiles.txt
and then a .sql file,
CREATE TABLE test (myids varchar(255), v1 float, v2 float, v3 float);
.separator ,
.import allmyfiles.txt test
But this command does not work, since I am using the R sqldf library with dbGetQuery(db, sql), and I have no idea how to create such a statement string in R without getting an error.
P.S. I asked a similar question about appending tables from a DB, but this time I need to append/import text files, not tables from a DB.
If you are using sqlite database files anyway, you might want to consider working with RSQLite.
install.packages( "RSQLite" ) # will install package "DBI"
library( RSQLite )
db <- dbConnect( dbDriver("SQLite"), dbname = "super.db" )
You can still use the unix command from within R, which should be faster than any loop in R, using the system() command:
system( "cat *.txt > allmyfiles.txt" )
Provided that your allmyfiles.txt has a consistent format, you can import it as a data.frame into R
allMyFiles <- read.table( "allmyfiles.txt", header = FALSE, sep = "," )
and write it to your database, following @Martín Bel's advice, with something like
dbWriteTable( db, "mysupertable", allMyFiles, overwrite = TRUE, append = FALSE )
EDIT:
Or, if you don't want to route your data through R, you can again resort to using the system() command. This may get you started:
You have a file with the data you want to get into SQLite called allmyfiles.txt. Create a file called table.sql with this content (obviously the structure must match):
CREATE TABLE mysupertable (myids varchar(255), v1 float, v2 float, v3 float);
.separator ,
.import allmyfiles.txt mysupertable
and call it from R with
system( "sqlite3 super.db < table.sql" )
That should avoid routing the data through R but still do all the work from within R.
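The question also asks for the table to be uniquely indexed; once the import has run, that can be added with one more sqlite3 call (the index name here is just an example), either directly or wrapped in system() from R like the commands above:
sqlite3 super.db "CREATE UNIQUE INDEX idx_myids ON mysupertable (myids);"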
Take a look at termsql:
https://gitorious.org/termsql/pages/Home
cat *.txt | termsql -d ',' -t mysupertable -c 'myids,v1,v2,v3' -o mynew.db
This should do the job.
I am trying to create a properties file like this...
firstname=Jon
lastname=Snow
occupation=Nights_Watch
family=Stark
...from a query like this...
SELECT
a.fname as firstname,
a.lname as lastname,
b.occ as occupation...
FROM
names a,
occupation b,
family c...
WHERE...
How can I do this? I am only aware of using spool to write to a CSV file, which won't work here.
These property files will be picked up by shell scripts to run automated tasks. I am using Oracle DB
Perhaps something like this?
psql -c 'select id, name from test where id = 1' -x -t -A -F = dbname -U dbuser
Output would be like:
id=1
name=test1
(For the full list of options: man psql.)
Since you mentioned spool, I will assume you are running on Oracle. This should produce a result in the desired format, which you can spool straight away.
SELECT
'firstname=' || firstname || CHR(10) ||
'lastname=' || lastname || CHR(10) -- and so on for all fields
FROM your_tables;
The same approach should be possible with all database engines, if you know the correct incantation for a literal new line and the syntax for string concatenation.
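For example, spooled from SQL*Plus it could look roughly like this - a sketch only; the connection string, the spool path and your_tables are placeholders, as in the query above:
sqlplus -S user/pass@db <<'EOF'
set pagesize 0 feedback off heading off trimspool on
spool /tmp/person.properties
SELECT 'firstname=' || firstname || CHR(10) ||
       'lastname='  || lastname
FROM your_tables;
spool off
exit
EOF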
It is possible to do this from your command-line SQL client, but as STTLCU notes it might be better to get the query to output something "standard" (like CSV) and then transform the results with a shell script. Otherwise, because a lot of the features you would use are not part of any SQL standard, they would depend on the database server and client application. Think of this step as sort of the obverse of ETL, where you clean up the data you "unload" so that it is useful for some other application.
For sure there are ways to build this into your query application: e.g. if you use something like perl DBI::Shell as your client (which allows you to connect to many different servers using the DBI module) you can jazz up your output in various ways. But here you'd probably be best off if you could send the query output to a text file and run it through awk.
Having said that ... here's how the Postgresql client could do what you want. Notice how the commands to set up the formatting are not SQL but specific to the client.
~/% psql -h 192.168.2.69 -d cropdusting -u stubblejumper
psql (9.2.4, server 8.4.14)
WARNING: psql version 9.2, server version 8.4.
Some psql features might not work.
You are now connected to database "cropdusting" as user "stubblejumper".
cropdusting=# \pset border 0 \pset format unaligned \pset t \pset fieldsep =
Border style is 0.
Output format is unaligned.
Showing only tuples.
Field separator is "=".
cropdusting=# select year,wmean_yld from bckwht where year=1997 AND freq > 13 ;
1997=19.9761904762
1997=14.5533333333
1997=17.9942857143
cropdusting=#
With the psql client the \pset command sets options affecting the output of query results tables. You can probably figure out which option is doing what. If you want to do this using your SQL client tell us which one it is or read through the manual page for tips on how to format the output of your queries.
My answer is very similar to the two already posted for this question, but I try to explain the options, and try to provide a precise answer.
When using Postgres, you can use the psql command-line utility to get the intended output:
psql -F = -A -x -X <other options> -c 'select a.fname as firstname, a.lname as lastname from names as a ... ;'
The options are:
-F : Use '=' sign as the field separator, instead of the default pipe '|'
-A : Do not align the output; so there is no space between the column header, separator and the column value.
-x : Use expanded output, so column headers are on left (instead of top) and row values are on right.
-X : Do not read $HOME/.psqlrc, as it may contain commands/options that can affect your output.
-c : The SQL command to execute
<other options> : Any other options, such as connection details, database name, etc.
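Since the question says these property files are picked up by shell scripts, you can redirect that output straight to a file and source it later - a rough sketch; the connection details and the WHERE clause are placeholders, and the fname/lname columns come from the question:
psql -X -A -x -t -F = -U dbuser -d dbname \
     -c "select fname as firstname, lname as lastname from names where id = 1" \
     > person.properties

# later, in a consuming shell script (works as long as the values contain no
# spaces or shell metacharacters)
. ./person.properties
echo "$firstname $lastname"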
You have to choose whether you want to maintain such a file from the shell or from PL/SQL. Both solutions are possible and both are correct.
Because Oracle has to both read from and write to the file, I would do it from the database side.
You can write data to the file using the UTL_FILE package.
DECLARE
fileHandler UTL_FILE.FILE_TYPE;
BEGIN
fileHandler := UTL_FILE.FOPEN('test_dir', 'test_file.txt', 'W');
UTL_FILE.PUTF(fileHandler, 'firstname=Jon\n');
UTL_FILE.PUTF(fileHandler, 'lastname=Snow\n');
UTL_FILE.PUTF(fileHandler, 'occupation=Nights_Watch\n');
UTL_FILE.PUTF(fileHandler, 'family=Stark\n');
UTL_FILE.FCLOSE(fileHandler);
EXCEPTION
WHEN utl_file.invalid_path THEN
raise_application_error(-20000, 'ERROR: Invalid PATH FOR file.');
END;
Example's source: http://psoug.org/snippet/Oracle-PL-SQL-UTL_FILE-file-write-to-file-example_538.htm
At the same time, you can read from the file using an Oracle external table.
CREATE TABLE parameters_table
(
parameters_coupled VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY test_dir
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS
(
parameters_coupled VARCHAR2(4000)
)
)
LOCATION ('test_file.txt')
);
At this point you have a table with a single column containing the coupled parameter and value, e.g. 'firstname=Jon'. You can read it from Oracle, and you can read it from any shell script, because it is plain text.
Then it is just a matter of a query, i.e.:
SELECT MAX(CASE WHEN INSTR(parameters_coupled, 'firstname=') = 1 THEN REPLACE(parameters_coupled, 'firstname=') ELSE NULL END) AS firstname
, MAX(CASE WHEN INSTR(parameters_coupled, 'lastname=') = 1 THEN REPLACE(parameters_coupled, 'lastname=') ELSE NULL END) AS lastname
, MAX(CASE WHEN INSTR(parameters_coupled, 'occupation=') = 1 THEN REPLACE(parameters_coupled, 'occupation=') ELSE NULL END) AS occupation
FROM parameters_table;
I'm trying to run a query using SQLCMD.EXE and have trouble with the LIKE portion.
WHERE email LIKE '%%#%%'
I think it is an error with the cmd prompt rather than SQLCMD.EXE, since I get the error:
Syntax error "#%'"
I am running this via Notepad++ (NppExec) pointing to the bat file like so:
H:\scripts\SQL.bat "$(CURRENT_WORD)"
This causes the query to be wrapped in double quotes before being used by the SQLCMD.EXE call. The SQLCMD.EXE call then runs in the bat file like so:
SQLCMD.EXE -U user -P %pass% -S %server% -Q %sql% -d %table%
It works perfectly on any query I use aside from this LIKE '%%#%%' part.
UPDATE
I've done a few more tests and think I have narrowed it down to being a problem with the % and the #.
So queries like these work fine:
SELECT name FROM table WHERE name LIKE 'test'
SELECT name FROM table WHERE name LIKE 'test%'
SELECT name FROM table WHERE name LIKE '%%test'
But these will cause errors:
SELECT name FROM table WHERE name LIKE '%test'
SELECT name FROM table WHERE name LIKE '%test%'
This is fine since I am ok with doubling the % in my queries, but I've tried %%#% and %%#%% and they throw errors. Syntax error "#'"" or Syntax error "#%'"", respectively.
Also, the reason for the variables is that I included some logic so it can detect table names and run for different servers and databases.
Here is the bat file
set sql=%1
iff %#index[%sql%,sur_] GT -1 THEN
SET SERVER=server1
SET table=tablename
SET pass=password
else
SET SERVER=server2
SET table=tablename
SET pass=password
endiff
SQLCMD.EXE -U usr -P %pass% -S %server% -Q %sql% -d %table%
The reason for the weird syntax is due to the command being run through TCC/LE (see here)
I'm not quite sure what your reasoning is for doubling up the %s, but it looks like your intent is to find values in the email column that contain #. If so, you can try rewriting the clause as such:
WHERE CHARINDEX('#', email) > 0
If it's the # symbol that is tripping things up, use CHAR(64) instead.
WHERE CHARINDEX(CHAR(64), email) > 0
When running a query with sqlcmd, I found that the % symbol gets removed. Let's say your query is:
SELECT name FROM table WHERE name LIKE 'test%'
sqlcmd will read your query as
SELECT name FROM table WHERE name LIKE 'test'
so sqlcmd will not filter your results. Use %% in the query instead:
SELECT name FROM table WHERE name LIKE 'test%%'
and it will be executed as
SELECT name FROM table WHERE name LIKE 'test%'
I have tested this on SQL Server 2005 and 2008.
I am new to bash scripting and I was wondering if anyone could help me with the following.
I am trying to retrieve the competition name from an Oracle database using competition_id, with the following statement:
select name, competition_type from competitions where competition_id=' ';
However, I want to use a separate text file which has a list of the competition_ids I want to identify. I want my script to find the name and type for all of my IDs and output the results to a txt file. This is what I have so far:
#!/bin/bash
echo Start Executing SQL commands
cat comps_ids.txt | while read ID
var=$ID
do
sqlplus "details"
<< EOF
select name, competition_type
from competitions
where competition_id=$var;
exit;
EOF
I tried to add a done at the end, but I get an "unexpected line ending" error message. Can anyone solve this?
Many thanks in advance :)
I'm not sure what your command line should look like, but it's more like
sqlplus "details" <<EOF
select name, competition_type from competitions where competition_id=$ID;
exit;
EOF
If your list of IDs isn't too big, it may be a better idea to build a comma-separated list and run a single query.
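For example, the file can be collapsed into an IN (...) list in one go - a rough sketch, assuming numeric IDs and a list small enough for a single statement:
ids=$(paste -sd, comps_ids.txt)      # e.g. 101,102,103
sqlplus "details" <<EOF
select name, competition_type
from competitions
where competition_id in ($ids);
exit;
EOF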
#!/bin/bash
function get_comp () {
sqlplus -S user/pass@database << EOF
set pagesize 0
set feedback off
set head off
select name, competition_type
from competitions
where competition_id=$1;
EOF
}
while read id ; do
get_comp $id
done
Put it in a file (get_comps.sh), and then call it like this
$ ./get_comps.sh < comp_ids.txt > text_file_out.txt
-S makes sqlplus quieter
The other settings make it return just your data, not row headers or anything else.
Of course the database credentials will be stored in your history, and available to other users using 'ps' or 'top'.
This is also horribly inefficient because it connects to the database for each row of your original file. If you have a lot of rows, you might try using Python or Ruby, as their database libraries are pretty easy to use.
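If you would rather stay in shell but avoid one connection per ID, another option is to generate a single SQL script and run it in one sqlplus session - a sketch along these lines, with placeholder credentials:
#!/bin/bash
# Build one script containing all the queries, then run it in a single connection.
{
  echo "set pagesize 0 feedback off heading off"
  while read id; do
    echo "select name, competition_type from competitions where competition_id=$id;"
  done < comp_ids.txt
  echo "exit"
} > all_queries.sql

sqlplus -S user/pass@database @all_queries.sql > text_file_out.txt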