How to hide result set decoration in psql output

How do you hide the column names and row count in the output from psql?
I'm running a SQL query via psql with:
psql --user=myuser -d mydb --output=result.txt -c "SELECT * FROM mytable;"
and I'm expecting output like:
1,abc
2,def
3,xyz
but instead I get:
id,text
-------
1,abc
2,def
3,xyz
(3 rows)
Of course, it's not impossible to filter out the top two rows and the bottom row after the fact, but is there a way to do it with psql alone? Reading over its manpage, I see options for controlling the field delimiter, but nothing for hiding extraneous output.

You can use the -t or --tuples-only option:
psql --user=myuser -d mydb --output=result.txt -t -c "SELECT * FROM mytable;"
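Note that -t by itself keeps the default aligned format (columns separated by |); to get the comma-separated output shown in the question, combine it with -A (unaligned output) and -F (field separator):
psql --user=myuser -d mydb --output=result.txt -t -A -F',' -c "SELECT * FROM mytable;"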
Edited (more than a year later) to add:
You also might want to check out the COPY command. I no longer have any PostgreSQL instances handy to test with, but I think you can write something along these lines:
psql --user=myuser -d mydb -c "COPY mytable TO 'result.txt' DELIMITER ','"
(except that result.txt will need to be an absolute path). The COPY command also supports a more-intelligent CSV format; see its documentation.
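A related note: a server-side COPY ... TO writes the file with the permissions of the server process and typically requires superuser rights. psql's \copy meta-command runs the same command but reads and writes the file on the client, so a relative path works; something along these lines should do it:
psql --user=myuser -d mydb -c "\copy mytable TO 'result.txt' WITH (FORMAT csv)"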

You can also redirect output from within psql and use the same option. Use \o to set the output file, and \t to output tuples only (or \pset to turn off just the rowcount "footer").
\o /home/flynn/queryout.txt
\t on
SELECT * FROM a_table;
\t off
\o
Alternatively,
\o /home/flynn/queryout.txt
\pset footer off
. . .
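The same settings are also available non-interactively: -o (or --output) sets the output file, and -P (or --pset) sets any \pset option from the command line, e.g.:
psql --user=myuser -d mydb -o /home/flynn/queryout.txt -P footer=off -c "SELECT * FROM a_table;"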

Usually, when you want to parse psql-generated output, you will want to set -A (unaligned output) and -F (field separator; not needed below, since only one column is selected). For example:
# generate "t.col1, t.col2, t.col3 ..." from a table's column names
while read -r c; do
  test -z "$c" || printf ', %s.%s' "$table_name" "$c"
done < <(PGPASSWORD=${postgres_db_useradmin_pw:-} \
  psql -A -q -t -X -w \
    -U "${postgres_db_useradmin:-}" \
    --port "$postgres_db_port" --host "$postgres_db_host" \
    -d "$postgres_db_name" \
    -v table_name="${table_name:-}" <<'EOF'
SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = :'table_name';
EOF
)
echo -e "\n\n"
You can find an example of the full bash call here:
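Alternatively, the comma-joining can be pushed into the query itself with string_agg (PostgreSQL 9.0+), avoiding the shell loop entirely; a minimal sketch, with mytable standing in for your table name:
psql -A -q -t -X -c "SELECT string_agg('mytable.' || column_name, ', ' ORDER BY ordinal_position) FROM information_schema.columns WHERE table_schema = 'public' AND table_name = 'mytable';"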

Related

Bash purge script

I'm trying to create a script that removes my images that are not in the DB. Here is my code (updated).
I have one problem: the LIKE syntax with '%$f%'.
#!/bin/bash
db="intranet_carc_development"
user="benjamin"
for f in public/uploads/files/*
do
if [[ -f "$f" ]]
then
psql $db $user -t -v "ON_ERROR_STOP=1" \
-c 'select * from public.articles where content like "%'"$(basename "$f")"'%"' | grep . \
&& echo "exist" \
|| echo "doesn't exist"
fi
done
And I get the following error:
ERROR: column "%1YOLV3M4-VFb2Hydb0VFMw.png%" does not exist
LINE 1: select * from public.articles where content like "%1YOLV3M4-...
^
doesn't exist
ERROR: column "%wnj8EEd8wuJp4TdUwqrJtA.png%" does not exist
LINE 1: select * from public.articles where content like "%wnj8EEd8w...
EDIT: if I use \'%$f%\' for the LIKE:
/purge_files.sh: line 12: unexpected EOF while looking for matching `"'
./purge_files.sh: line 16: syntax error: unexpected end of file
There are several issues with your code:
$f is public/uploads/files/FILENAME and you want only the FILENAME
You can use basename to circumvent that, by writing :
f="$(basename "$f")"
psql $db $user -c "select * from public.articles where content like '%$f%'"...
(The extra quotes are here to prevent issues if you have spaces and special characters in your file name)
your psql request will always return true even if no rows are found
your psql command will return true even if the request fails, unless you set the variable 'ON_ERROR_STOP' to 1
As shown in the linked questions, you can use the following syntax:
#!/bin/bash
set -o pipefail #needed because of the pipe to grep later on
db="intranet_carc_development"
user="benjamin"
for f in public/uploads/files/*
do
if [[ -f "$f" ]]
then
f="$(basename "$f")"
psql $db $user -t -v "ON_ERROR_STOP=1" \
-c "select * from public.articles where content like '%$f%'" | grep . \
&& echo "exist" \
|| echo "doesn't exist"
fi
done
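One caveat the above doesn't cover: % and _ are LIKE wildcards, so a filename containing either could match rows it shouldn't. A sketch of the psql call that sidesteps LIKE with strpos instead (still assuming filenames contain no single quotes):
psql $db $user -t -v "ON_ERROR_STOP=1" \
-c "select 1 from public.articles where strpos(content, '$f') > 0 limit 1" | grep . \
&& echo "exist" \
|| echo "doesn't exist"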

Use multiline regex on cat output

I have the following file queries.sql, which contains a number of queries structured like this:
/* Query 1 */
SELECT cab_type_id,
Count(*)
FROM trips
GROUP BY 1;
/* Query 2 */
SELECT passenger_count,
Avg(total_amount)
FROM trips
GROUP BY 1;
/* Query 3 */
SELECT passenger_count,
Extract(year FROM pickup_datetime),
Count(*)
FROM trips
GROUP BY 1,
2;
Then I wrote a regex that finds all those queries in the file:
/\*[^\*]*\*/[^;]*;
What I'd like to achieve is the following:
Select all the queries with the regex.
Prefix each query with EXPLAIN ANALYZE
Execute each query and output the results to a new file. That means query 1 will create a file q1.txt with the corresponding output, query 2 creates q2.txt, etc.
One of my main challenges (there are no problems, right? ;-)) is that I'm rather unfamiliar with the Linux bash I have to use.
I tried cat queries.sql | grep '/\*[^\*]*\*/[^;]*;' but that doesn't return anything.
So a solution could look like:
count = 0
for query in (cat queries.sql | grep 'somehow-here-comes-my-regex') do
count = $count+1
query = 'EXPLAIN ANALYZE '+query
psql -U postgres -h localhost -d nyc-taxi-data -c query > 'q'$count'.txt'
Except that this doesn't work, and I don't know how to make it work.
You have to omit the spaces around = in variable assignments.
The following script should help. Save it in a file, e.g. explain.sh, make it executable with chmod 0700 explain.sh, and run it like this: ./explain.sh query.sql.
#!/bin/bash
qfile="$1"
# the query numbers found in the file
n="$(grep -oP '(?<=Query )[0-9]+' "$qfile")"
count=1
for q in $n; do
# Corrected solution, modified after the remarks of @EdMorton
qn="EXPLAIN ANALYZE $(awk -v n="Query $q" 'flag; $0 ~ n {flag=1} /;/{flag=0}' "$qfile")"
#qn="EXPLAIN ANALYZE $(awk -v n=$q "flag; /Query $q/{flag=1} /;/{flag=0}" $qfile)"
# psql -U postgres -h localhost -d nyc-taxi-data -c "$qn" > q$count.txt
echo "$qn" > q$count.txt
count=$(( $count + 1 ))
done
First of all, the script expects one argument (your example input query.sql file). It reads out the query numbers and saves them into the variable n. Then, in a for loop, it iterates through the query numbers and uses awk to extract query number q, appending EXPLAIN ANALYZE to the beginning. Then you can run psql with the desired query. Here I commented out the psql part; this example script only creates a qN.txt file for each EXPLAIN query.
UPDATE:
The awk part: it is possible to use a shell variable in awk via the -v flag. Here we create an awk variable n with the value of the q shell variable; n is used to build the start pattern, i.e. Query 1. awk -v n="Query $q" 'flag; $0 ~ n {flag=1} /;/{flag=0}' "$qfile" matches everything between Query 1 and the first occurrence of a semicolon (;), excluding the Query 1 line itself. The $(...) is command substitution in bash, which lets us save the output of a shell command into a variable; here we save the output of awk and prefix it with the EXPLAIN ANALYZE string.
Here is a great answer about awk pattern matching.
It sounds like this is what you're looking for:
awk -v RS= -v ORS='\0' '{print "EXPLAIN ANALYZE", $0}' queries.sql |
while IFS= read -r -d '' query; do
psql -U postgres -h localhost -d nyc-taxi-data -c "$query" > "q$((++count)).txt"
done
The awk statement outputs each query as a NUL-terminated string; the shell loop reads them one at a time and calls psql on each. Simple, robust, efficient, etc...
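One caveat: -v RS= puts awk into paragraph mode, where records are separated by blank lines, so the above assumes every query in queries.sql is followed by an empty line. If your file has none (as in the sample at the top), a sketch that inserts them first, assuming GNU sed:
sed 's/;$/;\n/' queries.sql |
awk -v RS= -v ORS='\0' '{print "EXPLAIN ANALYZE", $0}' |
while IFS= read -r -d '' query; do
psql -U postgres -h localhost -d nyc-taxi-data -c "$query" > "q$((++count)).txt"
done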

Using a for loop in a Windows bat file for multiple command calls

I want to export all data from a SQL Server table to a CSV file. I know I can get the desired result with:
sqlcmd -S . -d database -E -s, -W -Q "SELECT * FROM TABLENAME" > file.csv
I have many tables, so I want to create a .bat file that do the work for me, I have this:
set "list = A B C D"
for %%x in (%list%) do (
sqlcmd -S . -d database -E -s, -W -Q "SELECT * FROM %%x" > %%x.csv
)
But I am getting errors I don't understand (I am not an expert in bat files). Why does this not work? How can I do what I want?
Spacing is important when using set (unless you're doing math with the /A switch). As written, the variable you're setting isn't %list%. It's %list %. Change your set command as follows:
set "list=A B C D"

Sqlcmd to generate file without dashed line under header, without row count

Using the following sqlcmd script:
sqlcmd -S . -d MyDb -E -s, -W -Q "select account,rptmonth, thename from theTable" > c:\dataExport.csv
I get a CSV output file containing:
acctnum,rptmonth,facilname
-------,--------,---------
ALLE04,201406,Allendale Community for Senior Living-LTC
APPL02,201406,Applewood Estates
ARBO02,201406,Arbors Care Center
ARIS01,201406,AristaCare at Cherry Hill
. . .
(139 rows affected)
Is there a way to get rid of the dashed line under the column headers (-------,--------,) but keep the column headers themselves?
And also a way to get rid of the two lines used for the row count at the bottom?
I tried using the parameter -h -1, but that got rid of the column headers as well as the dashed line.
Solutions:
1) To remove the row count ("(139 rows affected)") you should use the SET NOCOUNT ON statement. See ref.
2) To remove the column headers you should use the -h parameter with the value -1. See ref (section Formatting Options).
Examples:
C:\Users\sqlservr.exe>sqlcmd -S(local)\SQL2012 -d Test -E -h -1 -s, -W -Q "set nocount on; select * from dbo.Account" > d:\export.txt
or
C:\Users\sqlservr.exe>sqlcmd -S(local)\SQL2012 -d Test -E -h -1 -s, -W -Q "set nocount on; select * from dbo.Account" -o "d:\export2.txt"
The guy with the top answer didn't answer how to remove the dashed line. This is my awesome solution.
First include -h -1, which removes both the dashed line and the header.
Then, before your SELECT statement, manually inject the header string you need with a PRINT statement. So in your case: PRINT 'acctnum,rptmonth,facilname' select..*...from...
Sorry I'm 4 years and 9 months late.
Use the following:
sqlcmd -S . -d MyDb -E -s, -h-1 -W -Q "set nocount on;select 'account','rptmonth', 'thename';select account,rptmonth, thename from theTable" > c:\dataExport.csv
remove the header -h-1
remove row count [set nocount on;]
add header select [select 'account','rptmonth', 'thename';]
add your select [select account,rptmonth, thename from theTable;]
To remove the Row Count:
Add the below to your SQL statement
SET NOCOUNT ON;
To remove the hyphen row try the following upon successful execution:
findstr /v /c:"---" c:\dataExport.csv > c:\finalExport.csv
I use "---" as all my columns are over 3 characters and I never have that string in my data but you could also use "-,-" to reduce the risk further or any delimiter based on your data in place of the ",".
In my case this worked well:
type Temp.txt | findstr /v -- > DestFile.txt
In addition, if you want to query out all records in a table, you can write:
SET NOCOUNT ON;
SELECT SUBSTRING((SELECT ','+ COLUMN_NAME FROM
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME=N'%table_name%' FOR XML
PATH('') ), 2, 9999);
SELECT * FROM %table_name%
Assign the above queries to a variable %query%. Then the command will look like this:
SQLCMD -h -1 -W -E -S %sql_server% -d %sql_dabase% -Q %query% -s"," -o output_file.csv
This is the one-line solution, without doing anything inside the stored procedure to append the column headers:
sqlcmd -S . -d MyDb -E -s, -W -Q "select account,rptmonth, thename from theTable" | findstr /v /c:"-" /b > "c:\dataExport.csv" & exit 0
What this does is filter the console output before it is redirected to the output file: /b matches at the beginning of a line, and /v keeps only the lines that do not match, so the dashed line is dropped. There is no need for an intermediary file. And you will need a one-liner command if you use an agent to run these commands remotely on the SQL Server machines, which most of the time are locked down against hosting *.bat files (which you'd need for multiline commands).
I added the "exit 0" at the end to not fail the caller application overall. You may remove it starting "& exit 0" if you don't care about that.
This one-liner is why I chose sqlcmd over bcp out, by the way. BCP, although optimized for speed, cannot output column headers unless you do the ugly trick within the stored proc of appending them there as a UNION ALL.
Just in case you have access to writing a bat file that contains this one-liner, you MUST add @ECHO OFF before it. Otherwise the console output will also include the actual command.
Hope it helps.
With SQL Server 2017 (14.x) and later you can print the header with:
SELECT string_agg(COLUMN_NAME, ', ') within group (order by ORDINAL_POSITION asc) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='YOUR_TABLE_NAME'
1. Create the file first with the header columns.
2. Append the sqlcmd output to the file using the -h-1 option.
echo acctnum,rptmonth,facilname > c:\dataExport.csv
sqlcmd -S . -d MyDb -E -s, -h-1 -W -Q "select account,rptmonth, thename from theTable" >> c:\dataExport.csv
I used another solution to solve the issue of removing the dashed line below the header.
DECLARE @combinedString VARCHAR(MAX);
SELECT @combinedString = COALESCE(@combinedString + '|', '') + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'YOUR_TABLE_NAME'
Then just use
PRINT @combinedString above your SELECT statement.
I used a pipe delimiter.
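Put together with sqlcmd, that approach might look like this (a sketch; SELECT * stands in for your column list, and the table name is from the question above):
sqlcmd -S . -d MyDb -E -s"|" -h -1 -W -Q "SET NOCOUNT ON; DECLARE @combinedString VARCHAR(MAX); SELECT @combinedString = COALESCE(@combinedString + '|', '') + COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'theTable'; PRINT @combinedString; SELECT * FROM theTable;" > c:\dataExport.csv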

PostgreSQL - dump each table into a different file

I need to extract SQL files from multiple tables of a PostgreSQL database. This is what I've come up with so far:
pg_dump -t 'thr_*' -s dbName -U userName > /home/anik/psqlTest/db_dump.sql
However, as you can see, all the tables that start with the prefix thr are being exported to a single unified file (db_dump.sql). I have almost 90 tables in total to extract SQL from, so it is a must that the data be stored in separate files.
How can I do it? Thanks in advance.
If you are happy to hard-code the list of tables, but just want each to be in a different file, you could use a shell script loop to run the pg_dump command multiple times, substituting in the table name each time round the loop:
for table in table1 table2 table3 etc;
do pg_dump -t $table -U userName dbName > /home/anik/psqlTest/db_dump_dir/$table.sql;
done;
EDIT: This approach can be extended to get the list of tables dynamically by running a query through psql and feeding the results into the loop instead of a hard-coded list:
for table in $(psql -U userName -d dbName -t -c "Select table_name From information_schema.tables Where table_type='BASE TABLE' and table_name like 'thr_%'");
do pg_dump -t $table -U userName dbName > /home/anik/psqlTest/db_dump_dir/$table.sql;
done;
Here psql -t -c "SQL" runs SQL and outputs the results with no header or footer; since there is only one column selected, there will be a table name on each line of the output captured by $(command), and your shell will loop through them one at a time.
Since version 9.1 of PostgreSQL (Sept. 2011), one can use the directory format output when doing backups,
and two versions/two years later (PostgreSQL 9.3), the --jobs/-j option makes it even more efficient by backing up every single object in parallel.
But what I don't understand in your original question is that you use the -s option, which dumps only the object definitions (schema), not the data.
If you want the data, you should not use -s but rather -a (data-only), or no option to get schema+data.
So, to back up all objects (tables, ...) whose names begin with 'thr_', for the database dbName, into the directory dbName_objects/, with 10 concurrent jobs/processes (this increases the load on the server):
pg_dump -Fd -f dbName_objects -j 10 -t 'thr_*' -U userName dbName
(you can also add -a or -s if you want only the data or only the schema of the objects)
As a result, the directory will be populated with a toc.dat (table of contents of all the objects) and one file per object (.dat.gz), in compressed form.
Each file is named after its object number, and you can retrieve the list with the following pg_restore command:
pg_restore --list -Fd dbName_objects/ | grep 'TABLE DATA'
To have each file uncompressed (in raw SQL):
pg_dump --data-only --compress=0 --format=directory --file=dbName_objects --jobs=10 --table='thr_*' --username=userName --dbname=dbName
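To get a single table back out of the directory dump as plain SQL, pg_restore can also filter by table; a sketch (thr_sometable is a placeholder name):
pg_restore -Fd dbName_objects/ --table=thr_sometable --file=thr_sometable.sql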
(Not enough reputation to comment on the right post.)
I used your script with some corrections and some modifications for my own use; it may be useful for others:
#!/bin/bash
# Config:
DB=rezopilotdatabase
U=postgres
# tablename searchpattern, if you want all tables enter "":
P=""
# directory to dump files without trailing slash:
DIR=~/psql_db_dump_dir
mkdir -p $DIR
TABLES="$(psql -d $DB -U $U -t -c "SELECT table_name FROM
information_schema.tables WHERE table_type='BASE TABLE' AND table_name
LIKE '%$P%' ORDER BY table_name")"
for table in $TABLES; do
echo backup $table ...
pg_dump $DB -U $U -w -t $table > $DIR/$table.sql;
done;
echo done
(I think you forgot to add $DB in the pg_dump command, and I added a -w: for an automated script it is better not to have a password prompt, I guess, so I created a ~/.pgpass file with my password in it.
I also passed the user, so the command knows which password to fetch from .pgpass.)
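For reference, each line of ~/.pgpass has the format hostname:port:database:username:password, and libpq ignores the file unless it is readable by the owner only; a sketch with placeholder values:
localhost:5432:rezopilotdatabase:postgres:yourpassword
(and chmod 0600 ~/.pgpass)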
Hope this helps someone someday.
This bash script will do a backup with one file per table:
#!/bin/bash
# Config:
DB=dbName
U=userName
# tablename searchpattern, if you want all tables enter "":
P=""
# directory to dump files without trailing slash:
DIR=~/psql_db_dump_dir
mkdir -p $DIR
AUTH="-d $DB -U $U"
TABLES="$(psql $AUTH -t -c "SELECT table_name FROM information_schema.tables WHERE table_type='BASE TABLE' AND table_name LIKE '%$P%' ORDER BY table_name")"
for table in $TABLES; do
echo backup $table ...
pg_dump $AUTH -t $table > $DIR/$table.sql;
done;
echo done