Is there a way to run the following command line statement from an SQL script in a PostgreSQL database?
raster2pgsql -s 4326 -I -C -M D:\postgresql\data\input.tif -F -t 100x100 public.demelevation > output.sql
I found something promising for Microsoft's SQL Server here, but I couldn't find something similar for PostgreSQL.
You can use COPY for that. Think about whether it's a good idea to call a program from psql, though. Also, you will need superuser permissions for that.
Example:
t=# create table c (t text);
t=# copy (select 1) to program $$echo "whatever" -v >/tmp/12$$;
COPY 1
t=# copy c from program 'echo "whatever" -v';
COPY 1
t=# select * from c;
t
-------------
whatever -v
(1 row)
Time: 0.380 ms
t-# \! cat /tmp/12
whatever -v
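Applied to the original raster2pgsql question, a rough sketch (untested; the staging table name and the server-side path are assumptions) could capture the generated SQL into a table:

-- Hedged sketch: the program runs on the *server*, so the input path must exist there.
-- Requires superuser (or membership in pg_execute_server_program on PostgreSQL 11+).
CREATE TABLE raster_sql (line text);
COPY raster_sql FROM PROGRAM 'raster2pgsql -s 4326 -I -C -M /data/input.tif -F -t 100x100 public.demelevation';

Note that COPY's default text format treats tabs and backslashes specially, so lines of generated SQL may need the CSV format or further escaping.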
Related
I have the following file queries.sql that contains a number of queries, structured like this:
/* Query 1 */
SELECT cab_type_id,
       Count(*)
FROM   trips
GROUP  BY 1;

/* Query 2 */
SELECT passenger_count,
       Avg(total_amount)
FROM   trips
GROUP  BY 1;

/* Query 3 */
SELECT passenger_count,
       Extract(year FROM pickup_datetime),
       Count(*)
FROM   trips
GROUP  BY 1,
       2;
Then I wrote a regex that finds all those queries in the file:
/\*[^\*]*\*/[^;]*;
What I'd like to achieve is the following:
Select all the queries with the regex.
Prefix each query with EXPLAIN ANALYZE.
Execute each query and output the results to a new file. That means query 1 will create a file q1.txt with the corresponding output, query 2 will create q2.txt, etc.
One of my main challenges (there are no problems, right? ;-)) is that I'm rather unfamiliar with the Linux bash I have to use.
I tried cat queries.sql | grep '/\*[^\*]*\*/[^;]*;' but that doesn't return anything.
So a solution could look like:
count = 0
for query in (cat queries.sql | grep 'somehow-here-comes-my-regex') do
count = $count+1
query = 'EXPLAIN ANALYZE '+query
psql -U postgres -h localhost -d nyc-taxi-data -c query > 'q'$count'.txt'
Except that this doesn't work, and I don't know how to make it work.
You have to omit spaces in variable assignments.
The following script should help. Save it in a file, e.g. explain.sh, make it executable using chmod 0700 explain.sh, and run it in the following way: ./explain.sh queries.sql.
#!/bin/bash
qfile="$1"
# query numbers found in the comment markers
n="$(grep -oP '(?<=Query )[0-9]+' "$qfile")"
count=1
for q in $n; do
    # corrected solution, modified after the remarks of @EdMorton
    qn="EXPLAIN ANALYZE $(awk -v n="Query $q" 'flag; $0 ~ n {flag=1} /;/{flag=0}' "$qfile")"
    # psql -U postgres -h localhost -d nyc-taxi-data -c "$qn" > "q$count.txt"
    echo "$qn" > "q$count.txt"
    count=$((count + 1))
done
First of all, the script takes one argument (your example queries.sql file). It reads out the query numbers and saves them into the variable n. Then, in a for loop, it iterates through those numbers and uses awk to extract query number q, prefixing it with EXPLAIN ANALYZE. Then you can run psql with the desired query; here the psql part is commented out, so this example script only creates a qN.txt file for each EXPLAIN query.
UPDATE:
The awk part: it is possible to use a shell variable in awk via the -v flag. Here we create an awk variable n with the value of the shell variable q; n is used to build the start pattern, i.e. Query 1. awk -v n="Query $q" 'flag; $0 ~ n {flag=1} /;/{flag=0}' "$qfile" matches everything between Query 1 and the first occurrence of a semicolon (;), excluding the Query 1 line itself, from queries.sql. The $(...) is command substitution in bash, which saves the output of a shell command into a variable; here we save the output of awk and prefix it with the EXPLAIN ANALYZE string.
Here is a great answer about awk pattern matching.
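For illustration, this is what the flag technique prints for one query (a minimal sketch, assuming queries.sql is exactly the file quoted in the question):

# print the body of Query 2: every line after the marker, up to and including the ';'
awk -v n="Query 2" 'flag; $0 ~ n {flag=1} /;/{flag=0}' queries.sql
# output:
# SELECT passenger_count,
#        Avg(total_amount)
# FROM   trips
# GROUP  BY 1;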
It sounds like this is what you're looking for:
awk -v RS= -v ORS='\0' '{print "EXPLAIN ANALYZE", $0}' queries.sql |
while IFS= read -r -d '' query; do
psql -U postgres -h localhost -d nyc-taxi-data -c "$query" > "q$((++count)).txt"
done
The awk statement outputs each query as a NUL-terminated string (RS= puts awk into paragraph mode, so the queries must be separated by blank lines), and the shell loop reads them one at a time and calls psql on each. Simple, robust, efficient, etc...
When using psql's variables, I can run it as follows:
psql -d database -v var="'123'"
And I will then have access to the variable var when I type the following in the psql terminal:
select * from table where column = :var;
This variable feature also works when the SQL is read from a file:
psql -d database -v var="'123'" -f file.sql
But when I try to run the SQL as a single command:
psql -d database -v var="'123'" -c "select * from table where column = :var;"
I can't access the variable and get the following error:
ERROR: syntax error at or near ":"
Is it possible to pass variables to single SQL commands in PSQL?
It turns out that, as man psql explains, the -c option is limited to SQL that "contains no psql-specific features":

-c command, --command=command
    Specifies that psql is to execute one command string, command, and then exit. This is useful in shell scripts. Start-up files (psqlrc and ~/.psqlrc) are ignored with this option.
    command must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command. Thus you cannot mix SQL and psql meta-commands with this option. To achieve that, you could pipe the string into psql, for example: echo '\x \\ SELECT * FROM foo;' | psql. (\\ is the separator meta-command.)
It looks like I can do what I want by passing in the SQL using stdin:
echo "select * from table where column = :var;" | psql -d database -v var="'123'"
I want to export all data from sql server table to a csv, I know I can get the desired result by:
sqlcmd -S . -d database -E -s, -W -Q "SELECT * FROM TABLENAME" > file.csv
I have many tables, so I want to create a .bat file that do the work for me, I have this:
set "list = A B C D"
for %%x in (%list%) do (
sqlcmd -S . -d database -E -s, -W -Q "SELECT * FROM %%x" > %%x.csv
)
But I am getting errors I don't understand (I am not an expert in bat files). Why does this not work? How can I do what I want?
Spacing is important when using set (unless you're doing math with the /A switch). As written, the variable you're setting isn't %list%. It's %list %. Change your set command as follows:
set "list=A B C D"
How do you hide the column names and row count in the output from psql?
I'm running a SQL query via psql with:
psql --user=myuser -d mydb --output=result.txt -c "SELECT * FROM mytable;"
and I'm expecting output like:
1,abc
2,def
3,xyz
but instead I get:
id,text
-------
1,abc
2,def
3,xyz
(3 rows)
Of course, it's not impossible to filter the top two rows and the bottom row out after the fact, but is there a way to do it with only psql? Reading over its manpage, I see options for controlling the field delimiter, but nothing for hiding extraneous output.
You can use the -t or --tuples-only option:
psql --user=myuser -d mydb --output=result.txt -t -c "SELECT * FROM mytable;"
Edited (more than a year later) to add:
You also might want to check out the COPY command. I no longer have any PostgreSQL instances handy to test with, but I think you can write something along these lines:
psql --user=myuser -d mydb -c "COPY mytable TO 'result.txt' DELIMITER ','"
(except that result.txt will need to be an absolute path). The COPY command also supports a more-intelligent CSV format; see its documentation.
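A related option is psql's client-side \copy meta-command, which writes the file relative to the client and needs no server-side file access (a sketch reusing the same hypothetical names):

psql --user=myuser -d mydb -c "\copy mytable TO 'result.txt' CSV"

Because \copy is a single backslash command, it is permitted inside -c.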
You can also redirect output from within psql and use the same option. Use \o to set the output file, and \t to output tuples only (or \pset to turn off just the rowcount "footer").
\o /home/flynn/queryout.txt
\t on
SELECT * FROM a_table;
\t off
\o
Alternatively,
\o /home/flynn/queryout.txt
\pset footer off
. . .
Usually, when you want to parse the psql-generated output, you will want to set the -A (unaligned output) and -F (field separator) options ...

# generate t.col1, t.col2, t.col3 ...
while read -r c; do
    test -z "$c" || echo ", $table_name.$c" | perl -ne 's/\n//gm; print'
done < <(cat << EOF | PGPASSWORD=${postgres_db_useradmin_pw:-} \
    psql -A -F ',' -q -t -X -w -U "${postgres_db_useradmin:-}" \
    --port "$postgres_db_port" --host "$postgres_db_host" \
    -d "$postgres_db_name" -v table_name="${table_name:-}"
SELECT column_name
FROM information_schema.columns
WHERE 1=1
AND table_schema = 'public'
AND table_name = :'table_name';
EOF
)
echo -e "\n\n"
You can find an example of the full bash call here:
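For reference, here is a minimal sketch of what -A and -F do on their own (the database, table, and column names are assumptions):

# -A: unaligned output, -F ',': comma as field separator, -t: tuples only, -X: skip psqlrc
psql -A -F ',' -t -X -d mydb -c "SELECT id, text FROM mytable"
# prints lines like: 1,abc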
UPDATE:
This is what works!
fgrep -ircl --include=*.sql -- -- *
I have various SQL files with '--' comments, and we migrated to the latest version of MySQL, which rejects these comments. I want to replace -- with #.
I am looking for a recursive, in-place replace one-liner.
This is what I have:
perl -p -i -e 's/--/# /g' `fgrep -- -- *`
A sample .sql file:
use myDB;
--did you get an error
I get the following error:
Unrecognized switch: --did (-h will show valid options).
P.S.: fgrep skipping 2 dashes was just discussed here, if you are interested.
Any help is appreciated.
The command-line arguments after the -e 's/.../.../' argument should be filenames. Use fgrep -l to return names of files that contain a pattern:
perl -p -i -e 's/--/# /g' `fgrep -l -- -- *`
I'd use a combination of find and in-place sed:
find . -name '*.sql' -exec sed -i -e "s/^--/#/" '{}' \;
Note that it will only replace lines beginning with --.
The regex will become vastly more complex if you want to replace this, for example:
INSERT INTO stuff VALUES (...) -- values used for xyz
because the -- might just as well be in your data (I guess you don't want to replace those):
INSERT INTO stuff VALUES (42, "<!-- sboing -->") -- values used for xyz
The equivalent of that in script form is:
#!/usr/bin/perl -i
use warnings;
use strict;
while (<>) {
    s/--/# /g;
    print;
}
If I have several files with comments of the form --comment and feed any number of names to this script, they are changed in place to # comment. You could use find, ls, grep, etc. to find the files...
There is nothing per se wrong with using a one-liner.
Is that what you are looking for?