PostgreSQL, update existing rows with pg_restore - sql

I need to sync two PostgreSQL databases (some tables from development db to production db) sometimes.
So I came up with this script:
[...]
pg_dump -a -F tar -t table1 -t table2 -U user1 dbname1 | \
pg_restore -a -U user2 -d dbname2
[...]
The problem is that this works only for newly added rows. When I edit a non-PK column I get a constraint error and the row isn't updated. For each dumped row I would need to check whether it already exists in the destination database (by PK) and, if so, delete it before the INSERT/COPY.
Thanks for any advice.

Do this:
pg_dump -t table1 production_database > /tmp/old_production_database_table1.sql
pg_dump -t table1 devel_database > /tmp/devel_database_table1.sql
psql production_database
truncate table1;
\i /tmp/devel_database_table1.sql
\i /tmp/old_production_database_table1.sql
You'll get a lot of duplicate primary key errors on the second \i, but it'll do what you want: all rows from devel end up with the devel values, and rows that exist only in production are neither updated nor deleted.
If anything references table1 then you'll have to drop those foreign keys before importing and recreate them afterwards. Especially check for on delete cascade, set null or set default references to table1 - you'd lose data in other tables if you have those.
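If what you actually want is "insert new rows, update existing ones", another route on PostgreSQL 9.5+ is to load the devel rows into a staging table and upsert from there. A rough sketch, not a drop-in script - table1_staging, the key column id, the columns col1/col2 and the file table1_devel.tsv are placeholders you'd replace with your real names:
-- run in psql against dbname2; table1_devel.tsv was produced on the devel side
-- with:  \copy table1 TO 'table1_devel.tsv'
CREATE TEMP TABLE table1_staging (LIKE table1 INCLUDING DEFAULTS);
\copy table1_staging FROM 'table1_devel.tsv'
INSERT INTO table1
SELECT * FROM table1_staging
ON CONFLICT (id) DO UPDATE SET col1 = EXCLUDED.col1, col2 = EXCLUDED.col2;
This sidesteps the delete-before-insert bookkeeping entirely, since ON CONFLICT updates the existing row in place.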

Related

How to copy subset of a table from one database to another?

So I have a table, person_table:
name     age  gender
john     21   M
abraham  32   M
I want to copy the name and gender columns of this table from one database to another database on another server. So I have the following table in the other database:
name     gender
john     M
abraham  M
I tried to run the following command, but it does not seem to work.
psql -h localhost -d db_name -p 5432 -U postgres -c "copy(SELECT name, gender FROM person_table) to stdout" > dump.tsv
psql -h localhost -p 5431 -d leaves_db -U postgres -c "copy person_table from stdin" < dump.tsv
But I am getting this error:
ERROR: relation "person_table" does not exist
Can anyone tell me what I am doing wrong? Also, this is a dummy table; my actual table has millions of rows. Can anyone recommend a faster way to transfer the data, or is the way I mentioned already the fastest?
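The error usually means that person_table has not been created (or sits in a different schema) in leaves_db on port 5431, so it has to exist there before the COPY FROM can work. A rough sketch of the same transfer with the table created first and the columns listed explicitly - the text column types are an assumption:
psql -h localhost -p 5431 -d leaves_db -U postgres -c "CREATE TABLE person_table (name text, gender text)"
psql -h localhost -d db_name -p 5432 -U postgres -c "copy (SELECT name, gender FROM person_table) to stdout" > dump.tsv
psql -h localhost -p 5431 -d leaves_db -U postgres -c "copy person_table (name, gender) from stdin" < dump.tsv
COPY streams rows in bulk, so even with millions of rows this is generally about as fast as a client-side transfer gets; pg_dump of just that table is the other common option.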

Backup & remove specific records - SQL Server

Is there any way to:
Remove specific records from the table using a query?
Make a backup from specific records and restore them into another SQL Server instance somewhere else?
1) If ID is the table's PK (or is unique) you can just use DELETE FROM TABLE_NAME WHERE ID IN (3, 4). You'd better check first that this will not delete other items (or open a transaction, which is always a good idea).
2) If it is just those 4 records and both databases are on the same server (and both tables have the same schema) you can just do the following (with the same caveats expressed in point 1):
insert into DESTINATION
select * from SOURCE where id between 73 and 76;
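If the two databases sit on the same instance but you run the query from the source database, the destination usually needs the full three-part name; a small sketch where OtherDatabase and dbo are placeholder names:
insert into OtherDatabase.dbo.DESTINATION
select * from dbo.SOURCE where id between 73 and 76;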
Edit: If you really need to do something more like a row backup you can use the bcp utility:
bcp "select * from SOURCE where id between 73 and 76" queryout "file.dat" -T -c
bcp DESTINATION in file.dat -T -c
DELETE FROM ListsItems
WHERE ID IN (3, 4);
It will remove those records.
Modify it to suit your needs....
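If the goal is to keep a copy of the rows on the same instance before removing them, a minimal T-SQL sketch (ListsItems and the IDs come from this thread; ListsItems_backup is a made-up table name that SELECT INTO creates on the fly):
BEGIN TRANSACTION;
SELECT * INTO ListsItems_backup FROM ListsItems WHERE ID IN (3, 4);  -- copy the rows to a new table
DELETE FROM ListsItems WHERE ID IN (3, 4);                           -- then remove them from the original
COMMIT;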

How to back up some tables with data and some tables with only the schema in PostgreSQL

I want to dump a database.
I have three tables:
table1
table2
table3
From table1 I want the schema plus data.
From table2 and table3 I just want the schema.
How do I do that?
To get data from just a few tables:
pg_dump myDatabase --inserts -a -t table1 -t table2 > backup.sql;
pg_dump myDatabase --inserts -a -t seq1 -t seq2 > backupSequences.sql;
Parameter descriptions:
-a, --data-only      dump only the data, not the schema
-t, --table=TABLE    dump the named table(s) only
--inserts            dump data as INSERT commands, rather than COPY
This is what I wanted :)
Thanks all!
Use pg_dump, which has both schema-only and schema + data output.
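One way to combine the two pg_dump modes for the exact split asked for above (schema plus data for table1, schema only for table2 and table3; the output file names are arbitrary):
pg_dump -t table1 myDatabase > table1_schema_and_data.sql
pg_dump --schema-only -t table2 -t table3 myDatabase > table2_table3_schema_only.sql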

MYSQL: Display Skipped records after LOAD DATA INFILE?

In MySQL I've used LOAD DATA LOCAL INFILE which works fine. At the end I get a message like:
Records: 460377 Deleted: 0 Skipped: 145280 Warnings: 0
How can I view the line number of the records that were skipped? SHOW warnings doesn't work:
mysql> show warnings;
Empty set (0.00 sec)
If there were no warnings but some rows were skipped, it may mean that the primary key was duplicated for the skipped rows.
The easiest way to find duplicates is to open the local file in Excel and run a duplicate removal on the primary key column to see if there are any.
You could create a temp table with the primary key dropped so that it allows duplicates, and then load the data into it (a sketch follows at the end of this answer).
Construct a SQL statement like
select count(column_with_duplicates) AS num_duplicates,column_with_duplicates
from table
group by column_with_duplicates
having num_duplicates > 1;
This will show you the rows with redundancies. Another way is to just dump out the rows that were actually inserted into the table, and run a file difference command against the original to see which ones weren't included.
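A rough sketch of the temp-table idea above, assuming the primary key is a single non-AUTO_INCREMENT column called id and the input file is data.txt (all of these names are placeholders):
CREATE TABLE my_table_staging LIKE my_table;
ALTER TABLE my_table_staging DROP PRIMARY KEY;       -- let duplicate keys load into the staging copy
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE my_table_staging;
SELECT id, COUNT(*) AS num_duplicates
FROM my_table_staging
GROUP BY id
HAVING num_duplicates > 1;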
For anyone stumbling onto this:
Another option would be to do a SELECT INTO and diff the two files. For example:
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE my_table FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\r' IGNORE 1 LINES (title, `desc`, is_viewable);
SELECT title, `desc`, is_viewable INTO OUTFILE 'data_rows.txt' FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\r' FROM my_table;
Then compare data.txt and data_rows.txt with a diff tool (FileMerge on Mac OS X, for example) to see the differences. If you are getting an access denied error when doing the SELECT INTO, make sure you run:
GRANT FILE ON *.* TO 'mysql_user'@'localhost';
flush privileges;
As the root user in the mysql client.
Records are skipped when a database constraint is not met. Check for common ones like:
Primary key duplication
Unique key condition
Partition condition
I use the bash command line to find the duplicate rows in the CSV file:
awk -F\, '{print $1$2}' /my/source/file.csv| sort -n| uniq -c| grep -v "^\ *1"
where the first two columns are the primary key.

SQL Command for copying table

What is the SQL command to copy a table from one database to another database?
I am using MySQL and I have two databases x and y. Suppose I have a table in x called a and I need to copy that table to y database.
Sorry if the question is too novice.
Thanks.
If the target table doesn't exist....
CREATE TABLE dest_table AS (SELECT * FROM source_table);
If the target table does exist
INSERT INTO dest_table (SELECT * FROM source_table);
Caveat: Only tested in Oracle
If your two databases are on separate servers, the simplest thing to do would be to create a dump of your table and load it into the second database. Refer to your database manual to see how a dump can be performed.
Otherwise you can use the following syntax (for MySQL)
INSERT INTO database_b.table (SELECT * FROM database_a.table)
Since your scenario involves two different databases, the correct query should be...
INSERT INTO Y..dest_table (SELECT * FROM source_table);
The query assumes you are running it while connected to database X.
If you just want to copy the contents, you might be looking for select into:
http://www.w3schools.com/Sql/sql_select_into.asp. This will not create an identical copy though, it will just copy every row from one table to another.
At the command line
mysqldump somedb sometable -u user -p | mysql otherdb -u user -p
then type both passwords.
This works even if they are on different hosts (just add the -h parameter as usual), which you can't do with INSERT ... SELECT.
Be careful not to accidentally pipe into the wrong db or you will end up dropping the sometable table in that db! (The dump will start with DROP TABLE sometable.)
The INSERT ... SELECT suggested by others is good for copying the data under MySQL.
If you want to copy the table structure as well, you might want to use the SHOW CREATE TABLE Tablename; statement.
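Putting that together for the x/y/a names in the question, assuming both databases are on the same MySQL server and the connected user can read x and write y:
CREATE TABLE y.a LIKE x.a;            -- copies the column definitions and indexes
INSERT INTO y.a SELECT * FROM x.a;    -- copies the rows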