Mysqldump tables from different databases?

I want to backup two tables: table1 and table2.
table1 is from database database1.
table2 is from database database2.
Is there a way to dump them with a single mysqldump call?
I know I can do:
mysqldump -S unixSocket --skip-comments --default-character-set=utf8 --databases database1 --tables table1 > /tmp/file.sql
But how to dump two tables from different databases?

Use mysqldump twice, but the second time redirect to the file in append mode: >> /tmp/file.sql.

There are three general ways to invoke mysqldump:
shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases
Only the first form lets you select a database and specific tables, but it accepts just one database. The second and third forms dump the selected databases or all databases, respectively.
So you can do it in one call, but you would have to dump the entire databases.
As Michał Powaga stated in the comments, you can also run it twice:
the first time with "> /tmp/file.sql"
the second time with ">> /tmp/file.sql" to append
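Reusing the options from the question, the two-pass approach might look like this (the socket path is whatever your server uses):

```shell
# First pass: '>' creates/truncates the file; second pass: '>>' appends to it.
mysqldump -S /path/to/mysql.sock --skip-comments --default-character-set=utf8 --databases database1 --tables table1 >  /tmp/file.sql
mysqldump -S /path/to/mysql.sock --skip-comments --default-character-set=utf8 --databases database2 --tables table2 >> /tmp/file.sql
```

Because --databases is used, each dump includes the CREATE DATABASE/USE statements, so the combined file restores each table into its own database.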

The syntax for dumping several entire databases is:
mysqldump --databases db_name1 [db_name2 ...] > my_databases.sql
Check for reference: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Hope it helps

This might be a workaround, but you could ignore the tables you DON'T want to back up.
Such as in your case:
mysqldump --databases database1 database2 --ignore-table=database1.table2 --ignore-table=database2.table1
You need to name each table you DON'T want to dump with its own --ignore-table option.
Good luck!

For Linux/bash, a one-liner:
(mysqldump dbname1 --tables table1; mysqldump dbname2 --tables table2) | gzip > dump.sql.gz

Related

How to save SQL query results as a list of INSERT statements into a .sql file

I am working on a PostgreSQL database. I would like to get the results of a SELECT query as a list of INSERT statements and have them saved into a .sql file (let's say exported_data.sql).
This is because later I would like to run the .sql file on another database and have the data "imported" in the table, so that the table gets "updated".
psql my_database -h my_host < exported_data.sql
Let's say my query is:
SELECT *
FROM params_table
WHERE params_type = 'tkt';
From this thread I see that one way to do it is to:
Create a temporary table filled with the results of the query
CREATE TABLE temporary_table
AS
SELECT *
FROM params_table
WHERE params_type = 'tkt';
Export the table as a list of INSERT statements into a .sql file
pg_dump --table=temporary_table --data-only --column-inserts my_database -h my_host > exported_data.sql
Remove the table, to keep the database clean.
DROP TABLE temporary_table;
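Put together, the temporary-table round trip from the three steps above can be chained as a short shell script (table and database names follow the question):

```shell
# 1. Materialize the query result in a temporary table
psql my_database -h my_host -c "CREATE TABLE temporary_table AS SELECT * FROM params_table WHERE params_type = 'tkt';"
# 2. Dump only that table's rows as INSERT statements
pg_dump --table=temporary_table --data-only --column-inserts -h my_host my_database > exported_data.sql
# 3. Clean up
psql my_database -h my_host -c "DROP TABLE temporary_table;"
```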
Is there a way to do it in one shot?
I am asking just because I would not like to create and then remove a temporary table every time I need to export some data.
I saw from this thread that for a MySQL database it is possible to do it in one shot:
mysqldump -u root -p my_database -h my_host params_table --where="params_type = 'tkt'" > exported_data.sql
So I have read the PostgreSQL documentation for pg_dump, but I cannot find any equivalent of the --where clause.

pg_restore clean table psql

I am trying to restore a table I dumped to its current location. The table was dumped using:
pg_dump -t table db > table.sql -U username
Trying to restore, I'm using:
pg_restore -c --dbname=db --table=table table.sql
I am using -c since the table currently exists and has data, however it returns:
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.
I've also tried:
-bash-4.2$ psql -U username -d db -1 -f table.sql
But since the data is there already and there's no --clean option for this psql command (I believe), it returns:
psql:table.sql:32: ERROR: relation "table" already exists
Is there a way to use pg_restore correctly or use psql with a --clean option?
Bad news for you, friend. You can’t pg_restore from a plain-text dump. It’s in the docs:
“pg_restore is a utility for restoring a PostgreSQL database from an archive created by pg_dump in one of the non-plain-text formats.” https://www.postgresql.org/docs/9.6/static/app-pgrestore.html
If you can re-execute pg_dump, you can do so with the -Fc flag, which will produce an artifact that you can restore with pg_restore. If you are stuck with the table.sql in plain-text format, you do have a few options (I’m calling your target table my_fancy_table in the two examples below):
Option 1:
Drop the table: drop table my_fancy_table;
Now run your psql command: psql <options> -f table.sql
If you have referential integrity where other tables necessitate rows being there, you might not be able to drop the table.
Option 2:
Upsert from a temp table
You can edit the sql file (since it’s in plain text) and change the table name to my_fancy_table_temp (or any table name that does not yet exist). Now you'll have two tables, a temp one from the sql file and the real one that's been there the whole time. You can then write some upsert SQL to insert or update the real table with the rows from the temp table like so:
insert into my_fancy_table (column1, column2)
select column1, column2
from my_fancy_table_temp
on conflict (my_fancy_unique_column)
do update
set column1 = excluded.column1,
    column2 = excluded.column2;
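For future dumps, the custom-format round trip suggested above might look like this (names taken from the question):

```shell
# -Fc produces a custom-format archive that pg_restore understands
pg_dump -Fc -t table -U username db > table.dump
# -c drops the table before re-creating it from the archive
pg_restore -c --dbname=db table.dump
```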

Is there a safe way to modify the pg_constraint table so that no more checking is done (temporarily)?

In order to restore a database from a dump, I want to temporarily disable all the constraints in all the tables of my database, whose contents I have emptied (PostgreSQL 8.3).
Is there a safe way to modify a field in the pg_constraint table that easily allows bypassing the constraints?
I don't think so, looking at the documentation for pg_constraint.
Is it possible to drop the pg_constraint table content and then refill it again?
If not, how do people restore/copy databases with tables that are full of constraints and foreign keys? (My destination database has the schema, but all the tables are empty.)
Edit: although Erwin Brandstetter's answer is the correct answer (with wise warnings) to my precise question, the solution to my general problem of avoiding foreign-key errors during restore is to use the --disable-triggers flag with pg_restore.
Finally, my two commands are now:
pg_dump.exe -h %ipAddress% -p 5432 -U postgres -F c -a -v -f %file% mydb
pg_restore.exe -h localhost -p 5432 -U postgres -d mydb -v %file% --disable-triggers
and it works ok.
You can ...
ALTER TABLE tbl DISABLE TRIGGER ALL;
This disables all triggers of the table permanently. So don't forget to later run:
ALTER TABLE tbl ENABLE TRIGGER ALL;
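A minimal sketch of the disable/load/enable cycle via psql (tbl, mydb, and the dump file name are placeholders):

```shell
# Disable all triggers, including FK enforcement (requires superuser)
psql -d mydb -c "ALTER TABLE tbl DISABLE TRIGGER ALL;"
# Load the data with checks suspended
psql -d mydb -f data_only_dump.sql
# Re-enable the triggers afterwards -- easy to forget!
psql -d mydb -c "ALTER TABLE tbl ENABLE TRIGGER ALL;"
```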
-> 8.3 manual
You can ...
SET CONSTRAINTS ALL DEFERRED;
This makes all deferrable constraints wait until the end of the transaction.
-> 8.3 manual
You should never tinker with tables in the system catalog manually unless you are a hacker and know exactly what you are doing. Mortal humans should use DDL commands exclusively to affect the system catalog.
If your constraints are DEFERRABLE, then you can do this:
SET CONSTRAINTS ALL DEFERRED;
However, this just moves constraint checking to the next COMMIT; in other words, it's per transaction.
Generally for moving data, deploy some sort of backup strategy.

Clean everything from database tables

I have a database with a lot of tables, and I want to clean the data from the tables. How can I do it?
Something like a for loop over every table in the database, running DELETE FROM table_name;
Thanks.
Another alternative:
use mysqldump with --no-data to dump the table schema only
keep the default --add-drop-table behaviour
once you have the dump file, execute it
all the tables will be dropped and re-created empty
If you want to reset your auto-increment counters,
edit the dump file accordingly before running it.
If you are using Linux, this can also be done with a bash script, like:
for table in $(mysql -N <<< "show tables from your_db")
do
  mysql -N your_db <<< "truncate table $table"
done
One way is to create a script which will generate all the sql statements for you. Here is a version for linux:
echo "select concat('truncate ', table_name, ';') as '' from \
information_schema.tables where table_schema='YOURDATABASE';"\
|mysql -u USER -p > /tmp/truncate-all-tables.sql
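The generated file still has to be executed against the same database, e.g.:

```shell
# Run the generated TRUNCATE statements (prompts for the password)
mysql -u USER -p YOURDATABASE < /tmp/truncate-all-tables.sql
```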
The simplest way to do this is simply to issue a DELETE or TRUNCATE command on a per-table basis either from a .sql batch file or via a scripting language. (Whilst it's possible to interrogate the table schema information and use this, this may not be suitable depending on whether or not you have the required access privs and whether the tables in question follow a SELECTable naming convention.)
Incidentally, whilst you can use DELETE, this won't reclaim the used disk space and any auto_increment fields will remember the previous ID. (If you want to get the disk space back and reset the IDs to 1, use "TRUNCATE <table name>;".)

Overwrite a Backup

I need to set up a job to create a backup every day. I also need to overwrite the existing backup.
Can somebody please help me with it?
Thanks,
Have a look at mysqldump.
mysqldump db_name tbl_name > backupfile.sql
will dump the db/table and overwrite backupfile.sql if it exists.
Use rsync or scp to copy it to another host if needed.
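For the daily job itself, a crontab entry along these lines would re-run the dump every night and overwrite the previous file (paths, names, and schedule are examples):

```shell
# m h dom mon dow  command -- run at 02:00 every day; '>' overwrites the old backup
0 2 * * * mysqldump db_name tbl_name > /backups/backupfile.sql
```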