Is there a simple way to create a copy of a database or schema in PostgreSQL 8.1?
I'm testing some software which does a lot of updates to a particular schema within a database, and I'd like to make a copy of it so I can run some comparisons against the original.
If it's on the same server, you just use the CREATE DATABASE command with the TEMPLATE parameter. For example:
CREATE DATABASE newdb WITH TEMPLATE olddb;
Use pg_dump with the --schema-only option.
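A minimal sketch of that approach, assuming the copy lives on the same server (database names are placeholders):
pg_dump --schema-only -U postgres olddb > olddb_schema.sql
createdb -U postgres olddb_copy
psql -U postgres -d olddb_copy -f olddb_schema.sql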
If you have to copy the schema from the local database to a remote database, you may use one of the following two options.
Option A
Copy the schema from the local database to a dump file.
pg_dump -U postgres -Cs database > dump_file
Copy the dump file from the local server to the remote server.
scp localuser@localhost:dump_file remoteuser@remotehost:dump_file
Connect to the remote server.
ssh remoteuser@remotehost
Copy the schema from the dump file to the remote database.
psql -U postgres database < dump_file
Option B
Copy the schema directly from the local database to the remote database without using an intermediate file.
pg_dump -h localhost -U postgres -Cs database | psql -h remotehost -U postgres database
This blog post might be helpful if you want to learn more about options for copying a database using pg_dump.
This can be done by running the following command:
CREATE DATABASE [Database to create] WITH TEMPLATE [Database to copy] OWNER [Your username];
Once filled in with your database names and your username, this will create a copy of the specified database. This will work as long as there are no other active connections to the database you wish to copy. If there are other active connections you can temporarily terminate the connections by using this command first:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = '[Database to copy]'
AND pid <> pg_backend_pid();
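Putting the two steps together with placeholder names (note that pg_terminate_backend and the pid column shown here require a reasonably recent PostgreSQL; before 9.2 the column was called procpid):
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'olddb'
AND pid <> pg_backend_pid();
CREATE DATABASE newdb WITH TEMPLATE olddb OWNER myuser;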
I wrote an article for Chartio's Data School that goes a bit more in depth on how to do this; it can be found here:
https://dataschool.com/learn/how-to-create-a-copy-of-a-database-in-postgresql-using-psql
Related
I need to play around with a data syncing program I wrote, but I want to copy the structure of the production database into a new table on my localhost Postgres database, without copying the data to my localhost db.
I was thinking along the lines of
CREATE TABLE new_table AS
TABLE existing_table
WITH NO DATA;
But I am unsure how to modify it to work with 2 different databases.
Any help would be appreciated
This boils down to the question "how to create the DDL script for a table" which can easily be done using pg_dump on the command line.
pg_dump -d some_db -h production_server -t existing_table --schema-only -f create.sql
The file create.sql then contains the CREATE TABLE script that you can run on your local Postgres installation.
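For example, assuming a local database named my_local_db (a placeholder):
psql -h localhost -d my_local_db -f create.sql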
I am trying to migrate a table from one remote database (Scratch) to another remote database (SourceData). I am using pg_dump to download an .sql file and then psql to recreate the table in the new database. I want to copy the table only, and I used -t to indicate this, but I still get these errors:
ERROR: schema "public" does not exist
ERROR: permission denied to set session authorization
These are the commands I used:
pg_dump -t table -d Scratch -U me -h host.com > table.sql
psql -d SourceData -U me -h host.com < table.sql
I know that the psql command uses the .sql text file to recreate the table, so I tried editing this file to remove any mention of the schema 'public'.
It didn't help. I got the same error.
Has anyone else encountered this?
It is not clear from your comment when the error happens; I will assume it happens on the second command. In that case, the first error shown might be because the second database is not ready to receive the data, i.e. the SQL contains INSERT statements for a table that doesn't exist yet in SourceData.
You need to create the table in the new database before being able to import data into it.
If you pg_dump the entire database, you would probably not encounter this exact problem.
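A rough sketch of the table-first approach, reusing the connection details from the question (--no-owner is an optional extra that omits the ownership/SET SESSION AUTHORIZATION commands, which may be behind the permission error):
pg_dump -t table --schema-only --no-owner -d Scratch -U me -h host.com > table_schema.sql
psql -d SourceData -U me -h host.com -f table_schema.sql
pg_dump -t table --data-only -d Scratch -U me -h host.com > table_data.sql
psql -d SourceData -U me -h host.com -f table_data.sql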
I have a dev version and a production version running in django.
I recently started populating it with a lot of data and found that Django's loaddata tries to load everything into memory before adding it to the db, and my files will be too big for that.
What is the proper way to push my data from my dev machine to my production?
I did...
pg_dump -U user -W db > ./filename.sql
and then on the production server I did...
psql dbname < filename.sql
It seems like it worked; all the data is there, but it came up with some errors such as:
relation xxx already exists
constraint xxx for relation xxx already exists
There were quite a few of them, but like I said, everything appears to be there. Is this the right way to do it?
Edit: The database on the production machine already has data in it, and I don't want to truncate the tables before the import.
This is the script that I use:
pg_dump -d DATABASE_NAME -U postgres --format plain --inserts > /FILE.sql
Edit: Since you say in the comments that you don't want to truncate the tables before the import, you can't do this type of import into your production database. I suggest emptying your production database before importing the dev database dump.
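If emptying production is acceptable, one possible way to do it (this destroys the existing production data; the names come from the command above):
dropdb -U postgres DATABASE_NAME
createdb -U postgres DATABASE_NAME
psql -U postgres -d DATABASE_NAME -f /FILE.sql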
I have two PostgreSQL servers (Windows), and I am trying to transfer a table from server1 to server2. This table is around 200 MB in size as it contains binary data.
I want to put the table onto a USB stick and then move it to the second server (assume the two servers are not connected by a LAN).
What is the simplest way to do that? Can you describe it with commands?
The easiest way would probably be to use pg_dump.
I haven't used it on Windows so I don't know the actual path to it, but it should be in the Postgres\bin directory and you need to execute it in a shell window (like PowerShell or CMD).
Assuming you have console access to each server, and that the table already exists in the second database:
pg_dump -a -b -Fc -t <tablename> <databasename> > <path to dump file>
Then, when you have moved it to the new server:
pg_restore -a -Fc -d <databasename> <path to dump file>
If you don't have direct access to each server, then you need to add the connection parameters to each command:
-h <server> -U <username>
Quick description of the parameters:
-a : dumps only the data, not the schema definition. This should be removed if the table is not already in place on the new server.
-b : dumps blobs. You mentioned there is binary data in the table; if it is stored as large objects, this parameter needs to be included, otherwise you can skip it.
-Fc : the format to dump the data in. c stands for the Postgres custom format, which is better suited for moving binary data. You could change it to d for the directory format since you're using 9.2, but I still prefer the custom format. The directory format is, however, useful when dumping large databases, since it stores each table in its own file within the specified directory.
-t : specifies that you want to dump a single table rather than the entire database.
-d : the database that you want to restore to (this parameter can be used in pg_dump as well, but is not needed if specified as above).
There is a possibility that you need to add the -t parameter to the restore as well, but as far as I remember it should not be necessary, since the dump contains only that one table. (If the dump contained several tables, for instance a complete dump of the database, -t could be used to restore only parts of it.)
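For completeness, a rough sketch of the two commands with connection parameters added (host names, database name, table name, and the dump file path are placeholders):
pg_dump -h server1 -U postgres -a -b -Fc -t my_table my_db > E:\my_table.dump
pg_restore -h server2 -U postgres -a -Fc -d my_db E:\my_table.dump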
Using Toad for Oracle, I can generate full DDL files describing all tables, views, source code (procedures, functions, packages), sequences, and grants of an Oracle schema. A great feature is that it separates each DDL declaration into different files (a file for each object, be it a table, a procedure, a view, etc.) so I can write code and see the structure of the database without a DB connection. The other benefit of working with DDL files is that I don't have to connect to the database to generate a DDL each time I need to review table definitions. In Toad for Oracle, the way to do this is to go to Database -> Export and select the appropriate menu item depending on what you want to export. It gives you a nice picture of the database at that point in time.
Is there a "batch" tool that exports
- all table DDLs (including indexes, check/referential constraints)
- all source code (separate files for each procedure, function)
- all views
- all sequences
from SQL Server?
What about PostgreSQL?
What about MySQL?
What about Ingres?
I have no preference as to whether the tool is Open Source or Commercial.
For SQL Server:
In SQL Server Management Studio, right click on your database and choose 'Tasks' -> 'Generate Scripts'.
You will be asked to choose which DDL objects to include in your script.
In PostgreSQL, simply use the -s option to pg_dump. You can get the output as a plain SQL script (one file for the whole database) or in a custom format that you can then post-process with a script to get one file per object if you want.
The pgAdmin tool will also show you each object's SQL dump, but I don't think there's a nice way to get them all at once from there.
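As a rough sketch (database name is a placeholder): a plain schema-only dump gives one SQL file, while the custom format can be listed and selectively extracted with pg_restore:
pg_dump -s mydb > mydb_schema.sql
pg_dump -s -Fc mydb -f mydb_schema.dump
pg_restore -l mydb_schema.dump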
For MySQL, I use mysqldump. The command is pretty simple.
$ mysqldump [options] db_name [tables]
$ mysqldump [options] --databases db_name1 [db_name2 db_name3...]
$ mysqldump [options] --all-databases
There are plenty of options for this. Take a look here for a good reference.
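For a DDL-only export comparable to the pg_dump -s approach above, something along these lines should work (database name is a placeholder; --no-data skips the row data, while --routines and --triggers include stored routines and triggers):
mysqldump -u root -p --no-data --routines --triggers mydb > mydb_ddl.sql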
In addition to the "Generate Scripts" wizard in SSMS you can now use mssql-scripter which is a command line tool to generate DDL and DML scripts.
It's an open-source, Python-based tool that you can install via:
pip install mssql-scripter
Here's an example of what you can use to script the database schema and data to a file.
mssql-scripter -S localhost -d AdventureWorks -U sa --schema-and-data > ./adventureworks.sql
More guidelines: https://github.com/Microsoft/sql-xplat-cli/blob/dev/doc/usage_guide.md
And here is the link to the GitHub repository: https://github.com/Microsoft/sql-xplat-cli
MySQL has a great tool called MySQL Workbench that lets you reverse and forward engineer databases, as well as synchronize them, which I really like. You can view the DDL when executing these functions.
I wrote SMOscript, which does what you are asking for (referring to MSSQL Server).
Following what Daniel Vassallo said, this worked for me:
pg_dump -f c:\filename.sql -C -n public -O -s -d Moodle3.1 -h localhost -p 5432 -U postgres -w
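To load that schema dump into another database, a psql call along these lines should work (the target database name is a placeholder):
psql -h localhost -p 5432 -U postgres -d target_db -f c:\filename.sql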
Try this Python-based tool: Yet another script to split PostgreSQL dumps into object files.