Save PL/pgSQL output from PostgreSQL to a CSV file - sql

What is the easiest way to save PL/pgSQL output from a PostgreSQL database to a CSV file?
I'm using PostgreSQL 8.4 with pgAdmin III and its PSQL plugin, which is where I run my queries.

Do you want the resulting file on the server, or on the client?
Server side
If you want something easy to re-use or automate, you can use Postgresql's built in COPY command. e.g.
Copy (Select * From foo) To '/tmp/test.csv' With CSV DELIMITER ',' HEADER;
This approach runs entirely on the remote server - it can't write to your local PC. It also needs to be run as a Postgres "superuser" (normally called "postgres") because Postgres can't stop it doing nasty things with that machine's local filesystem.
That doesn't actually mean you have to be connected as a superuser (automating that would be a security risk of a different kind), because you can use the SECURITY DEFINER option to CREATE FUNCTION to make a function which runs as though you were a superuser.
The crucial part is that your function is there to perform additional checks, not just bypass the security - so you could write a function which exports the exact data you need, or you could write something which can accept various options as long as they meet a strict whitelist. You need to check two things:
Which files should the user be allowed to read/write on disk? This might be a particular directory, for instance, and the filename might have to have a suitable prefix or extension.
Which tables should the user be able to read/write in the database? This would normally be defined by GRANTs in the database, but the function is now running as a superuser, so tables which would normally be "out of bounds" will be fully accessible. You probably don’t want to let someone invoke your function and add rows on the end of your “users” table…
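A minimal sketch of such a wrapper (not the author's exact code; the function name, export directory, table, and database are hypothetical), created while connected as a superuser and then granted to the role that needs it:
psql -d mydb <<'SQL'
CREATE OR REPLACE FUNCTION export_foo_csv(filename text)
RETURNS void
LANGUAGE plpgsql
SECURITY DEFINER  -- runs with the privileges of the (superuser) owner
AS $func$
BEGIN
    -- Check 1: only plain .csv file names, confined to one fixed directory
    IF filename !~ '^[A-Za-z0-9_]+[.]csv$' THEN
        RAISE EXCEPTION 'invalid file name: %', filename;
    END IF;
    -- Check 2: only this one query is ever exported; callers cannot
    -- supply their own SQL or target table
    EXECUTE 'COPY (SELECT * FROM foo) TO '
            || quote_literal('/srv/exports/' || filename)
            || ' WITH CSV HEADER';
END;
$func$;
REVOKE ALL ON FUNCTION export_foo_csv(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION export_foo_csv(text) TO reporting_role;  -- hypothetical role
SQL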
I've written a blog post expanding on this approach, including some examples of functions that export (or import) files and tables meeting strict conditions.
Client side
The other approach is to do the file handling on the client side, i.e. in your application or script. The Postgres server doesn't need to know what file you're copying to, it just spits out the data and the client puts it somewhere.
The underlying syntax for this is the COPY TO STDOUT command, and graphical tools like pgAdmin will wrap it for you in a nice dialog.
The psql command-line client has a special "meta-command" called \copy, which takes all the same options as the "real" COPY, but is run inside the client:
\copy (Select * From foo) To '/tmp/test.csv' With CSV DELIMITER ',' HEADER
Note that there is no terminating ;, because meta-commands are terminated by newline, unlike SQL commands.
From the docs:
Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and access rights depend on the client rather than the server when \copy is used.
Your application programming language may also have support for pushing or fetching the data, but you cannot generally use COPY FROM STDIN/TO STDOUT within a standard SQL statement, because there is no way of connecting the input/output stream. PHP's PostgreSQL handler (not PDO) includes very basic pg_copy_from and pg_copy_to functions which copy to/from a PHP array, which may not be efficient for large data sets.
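From a shell script, the same client-side streaming approach simply pipes the output wherever you need it (the database, query, and file name below are placeholders):
psql -d mydb -c 'COPY (SELECT * FROM foo) TO STDOUT WITH CSV HEADER' | gzip > foo.csv.gz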

There are several solutions:
1 psql command
psql -d dbname -t -A -F"," -c "select * from users" > output.csv
This has the big advantage that you can use it via SSH, like ssh postgres@host command, enabling you to get the output file in one step (see the sketch after this list).
2 postgres copy command
COPY (SELECT * from users) To '/tmp/output.csv' With CSV;
3 psql interactive (or not)
>psql dbname
psql>\f ','
psql>\a
psql>\o '/tmp/output.csv'
psql>SELECT * from users;
psql>\q
All of them can be used in scripts, but I prefer #1.
4 pgadmin but that's not scriptable.
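For example (host, database, and table names are placeholders), #1 can run over SSH in a single step:
ssh postgres@host "psql -d dbname -t -A -F',' -c 'select * from users'" > output.csv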

In the terminal (while connected to the db), set the output to the CSV file:
1) Set the field separator to ',':
\f ','
2) Set output format unaligned:
\a
3) Show only tuples:
\t
4) Set output:
\o '/tmp/yourOutputFile.csv'
5) Execute your query:
select * from YOUR_TABLE;
6) Reset the output back to the terminal:
\o
You will then be able to find your csv file in this location:
cd /tmp
Copy it using the scp command or edit using nano:
nano /tmp/yourOutputFile.csv
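The same steps can also be collapsed into a single non-interactive call (database, table, and path are placeholders):
psql -d dbname -t -A -F',' -o /tmp/yourOutputFile.csv -c 'select * from YOUR_TABLE;'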

CSV Export Unification
This information isn't really well represented. As this is the second time I've needed to derive this, I'll put this here to remind myself if nothing else.
Really, the best way to do this (get CSV out of Postgres) is to use the COPY ... TO STDOUT command, though not quite the way shown in some of the answers here. The correct way to use the command is:
COPY (select id, name from groups) TO STDOUT WITH CSV HEADER
Remember just one command!
It's great for use over ssh:
$ ssh psqlserver.example.com 'psql -d mydb -c "COPY (select id, name from groups) TO STDOUT WITH CSV HEADER"' > groups.csv
It's great for use inside docker over ssh:
$ ssh pgserver.example.com 'docker exec -tu postgres postgres psql -d mydb -c "COPY groups TO STDOUT WITH CSV HEADER"' > groups.csv
It's even great on the local machine:
$ psql -d mydb -c 'COPY groups TO STDOUT WITH CSV HEADER' > groups.csv
Or inside docker on the local machine?:
docker exec -tu postgres postgres psql -d mydb -c 'COPY groups TO STDOUT WITH CSV HEADER' > groups.csv
Or on a kubernetes cluster, in docker, over HTTPS??:
kubectl exec -t postgres-2592991581-ws2td -- psql -d mydb -c "COPY groups TO STDOUT WITH CSV HEADER" > groups.csv
So versatile, much commas!
Do you even?
Yes I did, here are my notes:
The COPYses
Using \copy effectively executes file operations on whatever system the psql command is running on, as the user who is executing it. If you connect to a remote server, it's simple to copy data files on the system executing psql to/from the remote server.
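A minimal sketch of that case (host, database, and table names are placeholders): psql runs locally, connects to the remote server, and \copy writes the CSV on the machine where psql runs.
psql -h pgserver.example.com -d mydb -c "\copy (SELECT * FROM groups) TO 'groups.csv' WITH CSV HEADER"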
COPY executes file operations on the server as the backend process user account (default postgres); file paths and permissions are checked and applied accordingly. If using TO STDOUT, file permission checks are bypassed.
Both of these options require subsequent file movement if psql is not executing on the system where you want the resultant CSV to ultimately reside. This is the most likely case, in my experience, when you mostly work with remote servers.
It is more complex to configure something like a TCP/IP tunnel over ssh to a remote system for simple CSV output, but for other output formats (binary) it may be better to \copy over a tunneled connection, executing a local psql. In a similar vein, for large imports, moving the source file to the server and using COPY is probably the highest-performance option.
PSQL Parameters
With psql parameters you can format the output like CSV, but there are downsides, like having to remember to disable the pager and not getting headers:
$ psql -P pager=off -d mydb -t -A -F',' -c 'select * from groups;'
2,Technician,Test 2,,,t,,0,,
3,Truck,1,2017-10-02,,t,,0,,
4,Truck,2,2017-10-02,,t,,0,,
Other Tools
No, I just want to get CSV out of my server without compiling and/or installing a tool.

The new version, psql 12, supports --csv.
From the psql (devel) documentation:
--csv
Switches to CSV (Comma-Separated Values) output mode. This is equivalent to \pset format csv.
csv_fieldsep
Specifies the field separator to be used in CSV output format. If the separator character appears in a field's value, that field is output within double quotes, following standard CSV rules. The default is a comma.
Usage:
psql -c "SELECT * FROM pg_catalog.pg_tables" --csv postgres
psql -c "SELECT * FROM pg_catalog.pg_tables" --csv -P csv_fieldsep='^' postgres
psql -c "SELECT * FROM pg_catalog.pg_tables" --csv postgres > output.csv

If you're interested in all the columns of a particular table along with headers, you can use
COPY table TO '/some_destdir/mycsv.csv' WITH CSV HEADER;
This is a tiny bit simpler than
COPY (SELECT * FROM table) TO '/some_destdir/mycsv.csv' WITH CSV HEADER;
which, to the best of my knowledge, is equivalent.

I had to use the \COPY because I received the error message:
ERROR: could not open file "/filepath/places.csv" for writing: Permission denied
So I used:
\Copy (Select address, zip From manjadata) To '/filepath/places.csv' With CSV;
and it works.

psql can do this for you:
edd@ron:~$ psql -d beancounter -t -A -F"," \
           -c "select date, symbol, day_close \
               from stockprices where symbol like 'I%' \
               and date >= '2009-10-02'"
2009-10-02,IBM,119.02
2009-10-02,IEF,92.77
2009-10-02,IEV,37.05
2009-10-02,IJH,66.18
2009-10-02,IJR,50.33
2009-10-02,ILF,42.24
2009-10-02,INTC,18.97
2009-10-02,IP,21.39
edd@ron:~$
See man psql for help on the options used here.

I'm working on AWS Redshift, which does not support the COPY TO feature.
My BI tool supports tab-delimited CSVs though, so I used the following:
psql -h dblocation -p port -U user -d dbname -F $'\t' --no-align -c "SELECT * FROM TABLE" > outfile.csv

In pgAdmin III there is an option to export to file from the query window. In the main menu it's Query -> Execute to file or there's a button that does the same thing (it's a green triangle with a blue floppy disk as opposed to the plain green triangle which just runs the query). If you're not running the query from the query window then I'd do what IMSoP suggested and use the copy command.

I tried several things but few of them were able to give me the desired CSV with header details.
Here is what worked for me.
psql -d dbname -U username \
    -c "COPY ( SELECT * FROM TABLE ) TO STDOUT WITH CSV HEADER " \
    > OUTPUT_CSV_FILE.csv

I've written a little tool called psql2csv that encapsulates the COPY query TO STDOUT pattern, resulting in proper CSV. Its interface is similar to psql.
psql2csv [OPTIONS] < QUERY
psql2csv [OPTIONS] QUERY
The query is assumed to be the contents of STDIN, if present, or the last argument. All other arguments are forwarded to psql except for these:
-h, --help show help, then exit
--encoding=ENCODING use a different encoding than UTF8 (Excel likes LATIN1)
--no-header do not output a header

If you have a longer query and you like to use psql, then put your query in a file and use the following command:
psql -d my_db_name -t -A -F";" -f input-file.sql -o output-file.csv

To download a CSV file with the column names as a header, use this command:
Copy (Select * From tableName) To '/tmp/fileName.csv' With CSV HEADER;

Since Postgres 12, you can change the output format:
\pset format csv
The following formats are allowed:
aligned, asciidoc, csv, html, latex, latex-longtable, troff-ms, unaligned, wrapped
If you want to export the result of a request, you can use the \o filename feature.
Example :
\pset format csv
\o file.csv
SELECT * FROM table LIMIT 10;
\o
\pset format aligned
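The same thing works non-interactively with the --csv and -o flags (database and table names are placeholders):
psql -d mydb --csv -o file.csv -c 'SELECT * FROM my_table LIMIT 10;'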

I found that psql --csv creates a CSV file with UTF8 characters but it is missing the UTF8 Byte Order Mark (0xEF 0xBB 0xBF). Without it, a default import of this CSV file may corrupt international characters such as CJK characters.
To fix it, I devised the following script:
# Define a connection to the Postgres database through environment variables
export PGHOST=your.pg.host
export PGPORT=5432
export PGDATABASE=your_pg_database
export PGUSER=your_pg_user
# Place credentials in $HOME/.pgpass with the format:
# ${PGHOST}:${PGPORT}:${PGDATABASE}:${PGUSER}:${PGPASSWORD}
# Populate long SQL query in a text file:
cat > /tmp/query.sql <<EOF
SELECT item.item_no,item_descrip,
invoice.invoice_no,invoice.sold_qty
FROM item
LEFT JOIN invoice
ON item.item_no=invoice.item_no;
EOF
# Generate CSV report with UTF8 BOM mark
printf '\xEF\xBB\xBF' > report.csv
psql -f /tmp/query.sql --csv | tee -a report.csv
Doing it this way lets me script the CSV creation process for automation and lets me maintain the whole thing succinctly in a single source file.

import json
import psycopg2  # assuming a psycopg2 connection; connection details are placeholders

conn = psycopg2.connect("dbname=your_db user=your_user")
cursor = conn.cursor()
qry = """ SELECT details FROM test_csvfile """
cursor.execute(qry)
rows = cursor.fetchall()
# Note: this writes the rows out as JSON, not CSV
value = json.dumps(rows)
with open("/home/asha/Desktop/Income_output.json", "w+") as f:
    f.write(value)
print('Saved to File Successfully')

JackDB, a database client in your web browser, makes this really easy. Especially if you're on Heroku.
It lets you connect to remote databases and run SQL queries on them.
Once your DB is connected, you can run a query and export the results to CSV or TXT.
Note: I'm in no way affiliated with JackDB. I currently use their free services and think it's a great product.

Per the request of @skeller88, I am reposting my comment as an answer so that it doesn't get lost by people who don't read every response...
The problem with DataGrip is that it puts a grip on your wallet. It is not free. Try the community edition of DBeaver at dbeaver.io. It is a FOSS multi-platform database tool for SQL programmers, DBAs and analysts that supports all popular databases: MySQL, PostgreSQL, SQLite, Oracle, DB2, SQL Server, Sybase, MS Access, Teradata, Firebird, Hive, Presto, etc.
DBeaver Community Edition makes it trivial to connect to a database, issue queries to retrieve data, and then download the result set to save it to CSV, JSON, SQL, or other common data formats. It's a viable FOSS competitor to TOAD for Postgres, TOAD for SQL Server, or Toad for Oracle.
I have no affiliation with DBeaver. I love the price and functionality, but I wish they would open up the DBeaver/Eclipse application more and made it easy to add analytics widgets to DBeaver / Eclipse, rather than requiring users to pay for the annual subscription to create graphs and charts directly within the application. My Java coding skills are rusty and I don't feel like taking weeks to relearn how to build Eclipse widgets, only to find that DBeaver has disabled the ability to add third-party widgets to the DBeaver Community Edition.
Do DBeaver users have insight as to the steps to create analytics widgets to add into the Community Edition of DBeaver?

Related

How to copy a table from a network database to my computer if I don't have superuser rights? POSTGRES

I want to copy a Postgres table in CSV format from a network database to my computer.
For example, here is its address:
psql postgresql://login:password@192.168.00.00:5432/test_table
The problem is that I don't have superuser rights and I can't copy the table via pgAdmin.
For example, if I run this in pgAdmin:
COPY test_table TO 'C:\tmp\test_table.csv' DELIMITER ',' CSV HEADER;
I get an error:
ERROR: must be superuser or a member of the pg_write_server_files role to COPY to a file
HINT: Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
SQL state: 42501
As I understand it, it is possible to copy the table - but through the command line, right? How do I do it in my case? Thanks
Instead of using COPY with a path, use STDOUT. Then, redirect the output to a local path:
psql -c "COPY test_table TO STDOUT DELIMITER ',' CSV HEADER" >> C:\tmp\test_table.csv
See the documentation for COPY.
In case you need this explanation: stdout stands for standard output, it means that the result of the command should be printed on your terminal. Using >> you redirect the output of the psql command to a file.
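Putting it together with the connection string from the question (credentials are placeholders), psql's \copy also works without superuser rights, as the HINT suggests:
psql "postgresql://login:password@192.168.00.00:5432/test_table" -c "\copy test_table TO 'C:/tmp/test_table.csv' WITH CSV HEADER"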
I would just learn how to use the command line, but if you want to stick with pgAdmin4 you can right click on the table in the browser tree and then choose "Import/Export Data" and follow the dialog box. Doing that is basically equivalent to using \copy from psql.

Move table from server1 to server2

I have two Postgresql servers (Windows), and I am trying to transfer a table from server1 to server2. This table is around 200 MB in size as it contains binary data.
I want to put the table onto a USB stick and then move it to the second server (assume the two servers are not connected by a LAN).
What is the simplest way to do that? Can you describe the way with the commands needed?
The easiest way would probably be to use pg_dump.
I haven't used it on Windows so I don't know the actual path to it, but it should be in the Postgres\bin directory and you need to execute it in a shell window (like PowerShell or CMD).
Assuming you have console access to each server, and that the table already exists in the second database:
pg_dump -a -b -Fc -t <tablename> <databasename> > <path to dump file>
Then, when you have moved it to the new server, run:
pg_restore -a -Fc -d <databasename> <path to dump file>
If you don't have direct access to each server, then you need to add the connection parameters to each command:
-h <server> -U <username>
Quick description of the parameters:
-a : dumps only the data and not the schema definition. This should be removed if the table is not already in place on the new server
-b : dumps blobs. You mentioned there is binary data in the table; if it is stored as large objects, this parameter needs to be included, otherwise you can skip it.
-Fc : The format to dump the data as. c stands for Postgres custom format, which is better suited for moving binary data. You could change it to d to use a directory format since you're using 9.2, but I prefer the custom format still. d however is useful when dumping large databases since it stores each table in one file within the specified directory.
-t : Specifies that you want to dump a table and not the entire database.
-d : the database that you want to restore to (this parameter can be used in pg_dump as well, but not needed if specified as above)
There is a possibility that you need to add the -t parameter to the restore as well, but as far as I remember, it should not be necessary since you only have that table in the dump (however, if you had several tables in the dump, for instance if it is a complete dump of the database, this can be used to only restore parts of the database).
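Putting it all together (database, table, and drive letter are placeholders; run from PowerShell in the Postgres bin directory), the round trip might look like this, using -f rather than shell redirection since PowerShell redirection can mangle a binary custom-format dump:
# on server1: dump only that table's data onto the USB stick
pg_dump -a -b -Fc -t mytable -f E:\mytable.dump mydb
# on server2, after moving the stick: restore into the existing table
pg_restore -a -Fc -d mydb E:\mytable.dump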

MySQL mysqldump command error(bug in mysql 5.5)

I am working on exporting a table from my server DB which is about a few thousand rows, and phpMyAdmin is unable to handle it. So I switched to the command-line option.
But I am running into this error after executing the mysqldump command. The error is:
Couldn't execute 'SET OPTION SQL_QUOTE_SHOW_CREATE=1': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_QUOTE_SHOW_CREATE=1' at line 1 (1064)
After doing some searching on this, I found it is a bug in MySQL version 5.5, which does not support the SET OPTION command.
I am running an EC2 instance with CentOS on it. My MySQL version is 5.5.31 (from my phpinfo).
I would like to know if there is a fix for this, as it won't be possible to upgrade the entire database for this error.
Or if there is any other alternative to do an export or dump, please suggest one.
An alternative to mysqldump is the SELECT ... INTO form of SELECT, which allows results to be written to a file (http://dev.mysql.com/doc/refman/5.5/en/select-into.html).
Some example syntax from the above help page is:
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM test_table;
Data can then be loaded back in using LOAD DATA INFILE (http://dev.mysql.com/doc/refman/5.5/en/load-data.html).
Again the page gives an example:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test
FIELDS TERMINATED BY ',' LINES STARTING BY 'xxx';
And with a complete worked example pair:
When you use SELECT ... INTO OUTFILE in tandem with LOAD DATA INFILE
to write data from a database into a file and then read the file back
into the database later, the field- and line-handling options for both
statements must match. Otherwise, LOAD DATA INFILE will not interpret
the contents of the file properly. Suppose that you use SELECT ...
INTO OUTFILE to write a file with fields delimited by commas:
SELECT * INTO OUTFILE 'data.txt' FIELDS TERMINATED BY ','
FROM table2;
To read the comma-delimited file back in, the correct statement would
be:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY ',';
Not tested, but something like this:
cat yourdumpfile.sql | grep -v "SET OPTION SQL_QUOTE_SHOW_CREATE" | mysql -u user -p -h host databasename
This inserts the dump into your database, but removes the lines containing "SET OPTION SQL_QUOTE_SHOW_CREATE". The -v option inverts the match, i.e. grep drops those lines.
Couldn't find the English manual entry for SQL_QUOTE_SHOW_CREATE to link here, but you don't need this option at all when your table and database names don't include special characters or the like (meaning they don't need to be put in quotes).
UPDATE:
mysqldump -u user -p -h host database | grep -v "SET OPTION SQL_QUOTE_SHOW_CREATE" > yourdumpfile.sql
Then when you insert the dump into database you have to do nothing special.
mysql -u user -p -h host database < yourdumpfile.sql
I used a quick and dirty hack for this.
Download MySQL 5.6 (from https://downloads.mariadb.com/archive/signature/p/mysql/f/mysql-5.6.13-linux-glibc2.5-x86_64.tar.gz/v/5.6.13).
Untar it and use the newly downloaded mysqldump.

How to take backup of functions only in Postgres

I want to take a backup of all the functions in my Postgres database. How do I take a backup of functions only in Postgres?
Use pg_get_functiondef; see system information functions. pg_get_functiondef was added in PostgreSQL 8.4.
SELECT pg_get_functiondef('proc_name'::regproc);
To dump all functions in a schema you can query the system tables in pg_catalog; say if you wanted everything from public:
SELECT pg_get_functiondef(f.oid)
FROM pg_catalog.pg_proc f
INNER JOIN pg_catalog.pg_namespace n ON (f.pronamespace = n.oid)
WHERE n.nspname = 'public';
it's trivial to change the above to say "from all schemas except those beginning with pg_" instead if that's what you want.
In psql you can dump this to a file with:
psql -At dbname > /path/to/output/file.sql <<"__END__"
... the above SQL ...
__END__
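For instance, filled in with the public-schema query from above (database name and output path are placeholders):
psql -At mydb > /tmp/public_functions.sql <<"__END__"
SELECT pg_get_functiondef(f.oid)
FROM pg_catalog.pg_proc f
INNER JOIN pg_catalog.pg_namespace n ON (f.pronamespace = n.oid)
WHERE n.nspname = 'public';
__END__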
To run the output in another DB, use something like:
psql -1 -v ON_ERROR_STOP -f /path/to/output/file.sql target_db_name
If you're replicating functions between DBs like this, though, consider storing the authoritative copy of the function definitions as a SQL script in a revision control system like svn or git, preferably packaged as a PostgreSQL extension. See packaging extensions.
You can't tell pg_dump to dump only functions. However, you can make a dump without data (-s or --schema-only) and filter it on restoring. Note the --format=c (also -Fc) part: this will produce a file suitable for pg_restore.
First take the dump:
pg_dump -U username --format=c --schema-only -f dump_test your_database
Then create a list of the functions:
pg_restore --list dump_test | grep FUNCTION > function_list
And finally restore them (-L or --use-list specifies the list file created above):
pg_restore -U username -d your_other_database -L function_list dump_test

Is there a tool to generate a full database DDL for SQL Server? What about Postgres and MySQL?

Using Toad for Oracle, I can generate full DDL files describing all tables, views, source code (procedures, functions, packages), sequences, and grants of an Oracle schema. A great feature is that it separates each DDL declaration into different files (a file for each object, be it a table, a procedure, a view, etc.) so I can write code and see the structure of the database without a DB connection. The other benefit of working with DDL files is that I don't have to connect to the database to generate a DDL each time I need to review table definitions. In Toad for Oracle, the way to do this is to go to Database -> Export and select the appropriate menu item depending on what you want to export. It gives you a nice picture of the database at that point in time.
Is there a "batch" tool that exports
- all table DDLs (including indexes, check/referential constraints)
- all source code (separate files for each procedure, function)
- all views
- all sequences
from SQL Server?
What about PostgreSQL?
What about MySQL?
What about Ingres?
I have no preference as to whether the tool is Open Source or Commercial.
For SQL Server:
In SQL Server Management Studio, right click on your database and choose 'Tasks' -> 'Generate Scripts'.
You will be asked to choose which DDL objects to include in your script.
In PostgreSQL, simply use the -s option to pg_dump. You can get it as a plain SQL script (one file for the whole database) or in a custom format that you can then post-process with a script to get one file per object if you want.
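For example (database name is a placeholder):
pg_dump -s mydb > schema.sql          # plain SQL script, whole schema in one file
pg_dump -s -Fc mydb -f schema.dump    # custom format, for post-processing or pg_restore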
The PgAdmin tool will also show you each object's SQL dump, but I don't think there's a nice way to get them all at once from there.
For MySQL, I use mysqldump. The command is pretty simple.
$ mysqldump [options] db_name [tables]
$ mysqldump [options] --databases db_name1 [db_name2 db_name3...]
$ mysqldump [options] --all-databases
Plenty of options for this. Take a look here for a good reference.
In addition to the "Generate Scripts" wizard in SSMS you can now use mssql-scripter which is a command line tool to generate DDL and DML scripts.
It's an open source and Python-based tool that you can install via:
pip install mssql-scripter.
Here's an example of what you can use to script the database schema and data to a file.
mssql-scripter -S localhost -d AdventureWorks -U sa --schema-and-data > ./adventureworks.sql
More guidelines: https://github.com/Microsoft/sql-xplat-cli/blob/dev/doc/usage_guide.md
And here is the link to the GitHub repository: https://github.com/Microsoft/sql-xplat-cli
MySQL has a great tool called MySQL workbench that lets you reverse and forward engineer databases, as well as synchronize, which I really like. You can view the DDL when executing these functions.
I wrote SMOscript which does what you are asking for (referring to MSSQL Server)
Following what Daniel Vassallo said, this worked for me:
pg_dump -f c:\filename.sql -C -n public -O -s -d Moodle3.1 -h localhost -p 5432 -U postgres -w
try this python-based tool: Yet another script to split PostgreSQL dumps into object files