I am trying to restore a table I dumped to its current location. The table was dumped using:
pg_dump -t table db > table.sql -U username
Trying to restore, I'm using:
pg_restore -c --dbname=db --table=table table.sql
I am using -c since the table currently exists and has data; however, it returns:
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.
I've also tried:
-bash-4.2$ psql -U username -d db -1 -f table.sql
But since the data is there already and there's no --clean option for this psql command (I believe), it returns:
psql:table.sql:32: ERROR: relation "table" already exists
Is there a way to use pg_restore correctly or use psql with a --clean option?
Bad news for you, friend. You can’t pg_restore from a plain-text dump. It’s in the docs:
“pg_restore is a utility for restoring a PostgreSQL database from an archive created by pg_dump in one of the non-plain-text formats.” https://www.postgresql.org/docs/9.6/static/app-pgrestore.html
If you can re-execute pg_dump, you can do so with the -Fc flag, which will produce an artifact that you can restore with pg_restore. If you are stuck with the table.sql in plain-text format, you do have a few options (I’m calling your target table my_fancy_table in the two examples below):
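For example, with the names from your original commands, a fresh custom-format dump and restore would look roughly like this (connection options as in your question):
pg_dump -Fc -U username -t table db > table.dump
pg_restore -c -U username --dbname=db --table=table table.dump
Since -c drops the table before re-creating it, this also handles the existing-data case you hit with psql.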
Option 1:
Drop the table: drop table my_fancy_table;
Now run your psql command: psql <options> -f table.sql
If you have referential integrity where other tables necessitate rows being there, you might not be able to drop the table.
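Concretely, with the connection options from your question, Option 1 is roughly:
psql -U username -d db -c 'DROP TABLE my_fancy_table;'
psql -U username -d db -1 -f table.sql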
Option 2:
Upsert from a temp table
You can edit the sql file (since it’s in plain text) and change the table name to my_fancy_table_temp (or any table name that does not yet exist). Now you'll have two tables, a temp one from the sql file and the real one that's been there the whole time. You can then write some upsert SQL to insert or update the real table with the rows from the temp table like so:
insert into my_fancy_table (column1, column2)
select column1, column2 from my_fancy_table_temp
on conflict (my_fancy_unique_index)
do update set
column1 = excluded.column1,
column2 = excluded.column2;
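End to end, the temp-table route might look like the sketch below. The sed rename is a blunt instrument and assumes the table name doesn't occur anywhere else in the dump; upsert.sql is a hypothetical file holding the insert ... on conflict statement above:
sed 's/my_fancy_table/my_fancy_table_temp/g' table.sql > table_temp.sql
psql -U username -d db -f table_temp.sql
psql -U username -d db -f upsert.sql
psql -U username -d db -c 'DROP TABLE my_fancy_table_temp;'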
I have an SQL script that does a bunch of stuff to some existing tables in my DB, but part of that script needs to operate on data from a table that does not exist yet, in order to update some values.
That table is only temporary; I have an .sql file that contains:
CREATE TABLE big_table
POPULATION of that same big_table
So the idea is: my SQL script contains an import of that .sql file, which should execute it (creating big_table in my DB and populating it); then my existing script executes all its operations, and finally I just run DROP TABLE big_table since I'll no longer need it.
This being a PostgreSQL database, I've thought about:
db_name < file.sql
, also:
mysql -u username -p db_name < file.sql
, but that is something I cannot include in an SQL script; I can only run it through a shell, which is not the idea.
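If the outer script is run through psql, its \i meta-command can include and execute another file in place; a minimal sketch, with hypothetical file names, assuming the outer script is run with psql -d db_name -f main_script.sql:
-- main_script.sql
\i file.sql          -- creates big_table and populates it
-- ... the existing operations that use big_table ...
DROP TABLE big_table;
Note that \i is a psql feature rather than standard SQL, so this only works when the script is executed through psql.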
I am working on a PostgreSQL database. I would like to get the results of a SELECT query as a list of INSERT statements and have them saved into a .sql file (let's say exported_data.sql).
This is because later I would like to run the .sql file on another database and have the data "imported" into the table, so that the table gets "updated", for example with:
psql my_database -h my_host < exported_data.sql
Let's say my query is:
SELECT *
FROM params_table
WHERE params_type = 'tkt';
From this thread I see that one way to do it is to:
Create a temporary table filled with the results of the query
CREATE TABLE temporary_table
AS
SELECT *
FROM params_table
WHERE params_type = 'tkt';
Export the table as a list of INSERT statements into a .sql file
pg_dump --table=temporary_table --data-only --column-inserts my_database -h my_host > exported_data.sql
Remove the table, to keep the database clean.
DROP TABLE temporary_table;
Is there a way to do it in one shot?
I am asking just because I would not like to create and then remove a temporary table every time I need to export some data.
I saw from this thread that for a MySQL database it is possible to do it one-shot:
mysqldump -u root -p my_database -h my_host params_table --where="params_type = 'tkt'" > exported_data.sql
So I have read the PostgreSQL documentation for pg_dump, but I cannot find any equivalent of the --where option.
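Short of a pg_dump equivalent of --where, the three steps above can at least be collapsed into a single shell command; a minimal sketch, reusing the names from the question:
psql -h my_host my_database -c "CREATE TABLE temporary_table AS SELECT * FROM params_table WHERE params_type = 'tkt';" && \
pg_dump --table=temporary_table --data-only --column-inserts -h my_host my_database > exported_data.sql ; \
psql -h my_host my_database -c "DROP TABLE temporary_table;"
The temporary table still exists briefly, but it is created and removed in the same invocation.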
Given a set of complex, consecutive (Postgres) SQL SELECT statements, each stored in its own sql file, how can I write a bash script that drops (if it exists) and creates a table from the results of each statement, with the same table name as the file name?
Background: we share these sql files in a git repo where different users want to use the statements in different ways. I want to automatically create tables, while the others use temp tables, so I don't want to write the 'create table...' into the sql files' headers.
A skeleton for your shell script could look like this:
set -e # stop immediately on any error
for script in s1 s2 s3
do
    echo "processing $script"
    select=$(cat "$script")   # the file's SELECT statement
    psql -d dbname -U user <<EOF
DROP TABLE IF EXISTS "$script";
CREATE TABLE "$script" AS $select ;
EOF
done
Note, however, that not every SELECT is suitable as the source for a CREATE TABLE ... AS SELECT.
As the simplest example, consider the case where two different output columns share the same name. This is legal in a SELECT, but an error condition when creating a table from it.
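For instance (the table names a and b and the duplicated column id are hypothetical), the SELECT below runs fine on its own, but wrapped in CREATE TABLE ... AS Postgres rejects it because two output columns are both named id:
CREATE TABLE t AS
SELECT a.id, b.id
FROM a JOIN b ON a.id = b.id;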
I want to backup two tables: table1 and table2.
table1 is from database database1.
table2 is from database database2.
Is there a way to dump them with a single mysqldump call?
I know I can do:
mysqldump -S unixSocket --skip-comments --default-character-set=utf8 --databases database1 --tables table1 > /tmp/file.sql
But how to dump two tables from different databases?
Use mysqldump twice, but the second time redirect to the file with append: >> /tmp/file.sql.
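For example, reusing the options from the question (the positional db_name tbl_name form also works):
mysqldump -S unixSocket --skip-comments --default-character-set=utf8 database1 table1 > /tmp/file.sql
mysqldump -S unixSocket --skip-comments --default-character-set=utf8 database2 table2 >> /tmp/file.sql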
There are three general ways to invoke mysqldump:
shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases
Only the first one lets you select the database and table name, but doesn't allow multiple databases. If you use the second or third option you'll dump the selected databases (second) or all databases (third).
So, you can do it, but you'll need to dump the entire databases.
As Michał Powaga stated in the comments, you might also do it twice.
first time with "> /tmp/file.sql"
second time with ">> /tmp/file.sql" to append
The syntax is:
mysqldump --databases db_name1 [db_name2 ...] > my_databases.sql
Check for reference: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Hope it helps
This might be a workaround but you could ignore the other tables you DON'T want to backup.
Such as in your case:
mysqldump --databases database1 database2 --ignore-table=database1.table2 --ignore-table=database2.table1
You need to list each table you DON'T want to dump with its own --ignore-table option.
Good luck!
For linux/bash, oneliner:
(mysqldump dbname1 --tables table1; mysqldump dbname2 --tables table2) | gzip > dump.sql.gz
I have a database with a lot of tables, and I want to clear the data from all of them. How can I do it?
Something like a for loop over every table in the database, running DELETE FROM table_name;
Thanks.
Another alternative:
use mysqldump to dump the table schema only
use --add-drop-table
once you have the dump file, execute it
all the tables will be dropped and re-created
If you want to reset your auto-increment counters,
then edit the dump file to reset them before executing it (see the sketch below).
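A minimal sketch of that approach; --no-data and --add-drop-table are real mysqldump options, while the user and database names are placeholders:
mysqldump -u username -p --no-data --add-drop-table your_db > schema.sql
mysql -u username -p your_db < schema.sql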
If you are using Linux, this can be done with a bash script, like:
for table in $(mysql -N <<< "show tables from your_db")
do
    mysql -N your_db <<< "truncate table $table"
done
One way is to create a script which will generate all the sql statements for you. Here is a version for Linux:
echo "select concat('truncate ', table_name, ';') as '' from \
information_schema.tables where table_schema='YOURDATABASE';"\
|mysql -u USER -p > /tmp/truncate-all-tables.sql
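The generated file can then be fed back to mysql to actually truncate everything, e.g.:
mysql -u USER -p YOURDATABASE < /tmp/truncate-all-tables.sql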
The simplest way to do this is simply to issue a DELETE or TRUNCATE command on a per-table basis, either from a .sql batch file or via a scripting language. (Whilst it's possible to interrogate the table schema information and use that, it may not be suitable, depending on whether you have the required access privileges and whether the tables in question follow a SELECTable naming convention.)
Incidentally, whilst you can use DELETE, it won't reclaim the used disk space, and any auto_increment fields will remember the previous ID. (If you want to get the disk space back and reset the IDs to 1, use "TRUNCATE <table name>;".)