Will mysqldump break replication?

I have 2 databases: X ("production") and Y ("testing").
Database X should be identical to Y in structure. However, they are not, because I made many changes to production.
Now I need to somehow export X and import it into Y without breaking any replication.
I am thinking of doing a mysqldump, but I don't want to cause any issues with replication, which is why I am asking this question to confirm.
Here are the steps I want to follow:
back up production. (ie. mysqldump -u root -p --triggers --routines X > c:/Y.sql)
Restore it. (ie. mysql -u root -p Y < c:\Y.sql)
Will this cause any issues with replication?
I believe the dump will execute everything and save it into its binary log, and the slave will be able to see it and replicate it with no problems.
Is what I am trying to do correct? Will it cause any replication issues?
Thanks

Yes, backing up from X and restoring to Y is a normal operation. We often call this "reinitializing the replica."
This does interrupt replication. There's no reliable way to restore the data at the same time as letting the replica continue to apply changes, because the changes the replica is processing are not in sync with the snapshot of data represented by the backup. You could overwrite changed data, or miss changes, and this would make the replica totally out of sync.
So you have to stop replication on the replica while you restore.
Here are the steps for a typical replica reinitialization:
mysqldump from the master, with the --master-data option so the dump includes the binary log position that was current at the moment of the dump.
Stop replication on the replica.
Restore the dump on the replica.
Use CHANGE MASTER to alter what binary log coordinates the replica starts at. Use the coordinates that were saved in the dump.
Start replication on the replica.
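A minimal sketch of those steps, assuming classic binary-log-position replication and InnoDB tables (the log file name and position below are hypothetical placeholders; use the coordinates recorded in your own dump):
$ mysqldump -u root -p --master-data=2 --single-transaction --triggers --routines X > X.sql   (on the master)
mysql> STOP SLAVE;                                                                            (on the replica)
$ mysql -u root -p X < X.sql
mysql> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456789;
mysql> START SLAVE;
With --master-data=1 the CHANGE MASTER TO statement is written into the dump itself and executed during the restore, so you can skip running it by hand; --master-data=2 writes it as a comment for you to apply manually.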
Re your comment:
Okay, I understand what you need better now.
Yes, there's an option for mysqldump, --no-data, so the output only includes the CREATE TABLE and other DDL, but no INSERT statements with data. Then you can import that into a separate database on the same server. And you're right: by default DDL statements are written to the binary log, so any replicas will automatically run the same statements.
You can even do the export & import in just two steps like this:
$ mysqladmin create newdatabase
$ mysqldump --no-data olddatabase | mysql newdatabase


Restore SQL backups back to server from a directory

I have a list of SQL database backups stored in the D:\Backup\ directory. The task is to restore all of these backups back to SQL Server. I am looking for a stored procedure that will open each file from the directory and restore the files one by one to the server.
Sample list of restore commands:
restore database master from disk='D:\BACKUP\DB1.bak' with replace
restore database master from disk='D:\BACKUP\DB2.bak' with replace
restore database master from disk='D:\BACKUP\DB3.bak' with replace
restore database master from disk='D:\BACKUP\DB4.bak' with replace
restore database master from disk='D:\BACKUP\DB5.bak' with replace
restore database master from disk='D:\BACKUP\DB6.bak' with replace
restore database master from disk='D:\BACKUP\DB7.bak' with replace
Thanks
I would strongly encourage you to look at the dbatools PowerShell module for this. For your needs:
$server = 'yourServer';
$backups = get-dbabackupinformation -SqlInstance $server -Path 'D:\BACKUP';
$backups | restore-dbadatabase -SqlInstance $server -withreplace;
Note - restore-dbadatabase has a bunch of options. The defaults are pretty good for most cases but see what's available.
But also! Two things grab my attention about your sample:
You're restoring the master database. That should only be done if something extraordinary happened (e.g. disk corruption, or an admin or malicious actor trashed the server).
You're restoring the same database from multiple different files without specifying WITH NORECOVERY.
I'd guess that both of these come from playing fast and loose with the sample code. But it's worth calling out, as restoring master will have consequences for your server, and restoring multiple files could make your restore take longer than it should (e.g. if each of those backups represents a daily backup, restoring only the one that matches your desired RPO makes sense).
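For reference, a sequence where WITH NORECOVERY does matter is restoring a full backup followed by a differential; the database and file names here are hypothetical:
restore database DB1 from disk='D:\BACKUP\DB1_full.bak' with norecovery, replace
restore database DB1 from disk='D:\BACKUP\DB1_diff.bak' with recovery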

Import huge SQL file into SQL Server

I use the sqlcmd utility to import a 7 GB SQL dump file into a remote SQL Server. The command I use is this:
sqlcmd -S IP address -U user -P password -t 0 -d database -i file.sql
After about 20-30 min the server regularly responds with:
Sqlcmd: Error: Scripting error.
Any pointers or advice?
I assume file.sql is just a bunch of INSERT statements. For a large number of rows, I suggest using the BCP command-line utility. This will perform orders of magnitude faster than individual INSERT statements.
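For example, assuming the data has been exported to a flat file first, a bcp import could look like this (server, database, table, and file names are placeholders; -c means character format and -b sets the batch size):
bcp MyDatabase.dbo.MyTable in C:\data\mytable.dat -S serveraddress -U user -P password -c -b 10000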
You could also bulk insert data using the T-SQL BULK INSERT command. In that case, the file path needs to be accessible to the database server (i.e. a UNC path or a file copied to a drive on the server), along with the needed permissions. See http://msdn.microsoft.com/en-us/library/ms188365.aspx.
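A minimal BULK INSERT sketch, assuming a comma-delimited file on a share that the SQL Server service account can reach (table name, path, and terminators are placeholders):
BULK INSERT dbo.MyTable
FROM '\\fileserver\share\mytable.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000);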
Why not use SSIS? While I am a certified DBA, I always try to use the right tool for the job.
Here are some reasons to use SSIS.
1 - You can still use fast load (bulk copy). Make sure you set the batch size.
2 - Error handling is much better.
However, if you are using fast load, either the batch commits or it gets tossed.
If you are using single-record mode, you can direct each error row to a separate destination.
3 - You can perform transformations on the source data before loading it into the destination.
In short, Extract, Transform, Load.
4 - SSIS loves memory and buffers. If you want to get really in depth, read some articles from Matt Mason or Brian Knight.
Last but not least, the LAN/WAN always plays a factor if the job is not running on the target server with the input file on a local disk.
If you are on the same backbone with a good pipe, things go fast.
In summary, yeah you can use BCP. It is great for little quick jobs. Anything complicated with robust error handling should be done with SSIS.
Good luck,

Create database explicitly before restoring to it?

When I set up my PostgreSQL server, one of the first things I do is import a database from an external source. Which of the following is the right way to do it?
1 - Create a database called "NEWDB" on the PostgreSQL server and then import my external "BACKUPDB" database from my pg_dump into "NEWDB".
2 - Don't create a database on the PostgreSQL server, and import the "NEWDB" database, thereby automatically creating "NEWDB" on the PostgreSQL server.
I guess my question is, if I want to import an existing database to the PostgreSQL server, do I first need to create a database for it to go into?
You don't have to. It depends on what you want to achieve. If you dump a single database with pg_dump, CREATE DATABASE and ALTER DATABASE commands are not included. You are expected to connect to an existing database. So you have to create it first.
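A minimal sketch of that flow, assuming a plain-format dump and hypothetical database and host names:
$ createdb -T template0 NEWDB
$ pg_dump -h sourcehost BACKUPDB > backupdb.sql
$ psql NEWDB < backupdb.sql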
I quote advice from the manual:
If your database cluster has any local additions to the template1 database, be careful to restore the output of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from template0 not template1, for example:
CREATE DATABASE foo WITH TEMPLATE template0;
And also:
The dump file also does not contain any ALTER DATABASE ... SET commands; these settings are dumped by pg_dumpall, along with database users and other installation-wide settings.
pg_dumpall, on the other hand, dumps the whole DB cluster including meta-objects like users. It includes CREATE DATABASE statements and connects to each DB when restoring. You can even include DROP DATABASE statements with the -c (--clean) option. Careful with that.
Every instance of PostgreSQL has a default maintenance db named "postgres" that you can connect to - to create databases for instance or start a full restore (from pg_dumpall). But a single-DB dump (from pg_dump) has to be run against its target database.
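For the cluster-wide case, a sketch (the file name is arbitrary):
$ pg_dumpall > cluster.sql
$ psql -f cluster.sql postgres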
Finally:
Once restored, it is wise to run ANALYZE on each database so the optimizer has useful statistics. You can also run vacuumdb -a -z to analyze all databases.

Backup MySQL database

I have a MySQL Database of about 1.7GB. I usually back it up using mysqldump and this takes about 2 minutes. However, I would like to know the answers to the following questions:
Does mysqldump block read and/or write operations to the database? Because in a live scenario, I would not want to block users from using the database while it is being backed up.
It would be ideal for me to only back up the WHOLE database once a week, say, but on the intermediate days only one table needs to be backed up as the others won't change. Is there a way to achieve this?
Is mysqlhotcopy a better alternative for these purposes?
mysqlhotcopy does not work in certain cases, for example where the read lock is lost, and it does not work with InnoDB tables.
mysqldump is more commonly used because it can back up all kinds of tables.
From MySQL documentation
mysqlhotcopy is a Perl script that was originally written and contributed by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to make a database backup quickly. It is the fastest way to make a backup of the database or single tables, but it can be run only on the same machine where the database directories are located. mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables. It runs on Unix and NetWare
The mysqldump client is a backup program originally written by Igor Romanenko. It can be used to dump a database or a collection of databases for backup or transfer to another SQL server (not necessarily a MySQL server). The dump typically contains SQL statements to create the table, populate it, or both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML format.
Bye.
1) mysqldump only blocks when you ask it to (one of --lock-tables, --lock-all-tables, --single-transaction). But if you want your backup to be consistent, mysqldump should use one of those options (--single-transaction or --lock-all-tables), or you might get an inconsistent database snapshot. Note: --single-transaction works only for InnoDB.
2) Sure, just enumerate the tables you want to back up after the database name:
mysqldump OPTIONS DATABASE TABLE1 TABLE2 ...
Alternatively you can exclude the tables you don't want:
mysqldump ... --ignore-table=TABLE1 --ignore-table=TABLE2 .. DATABASE
So you can do a whole database dump once a week and back up only the changing tables once a day.
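For example, a pair of cron entries along those lines (database and table names are hypothetical, and credentials are assumed to come from ~/.my.cnf):
# weekly full dump, Sunday at 02:00
0 2 * * 0 mysqldump --single-transaction mydb > /backups/mydb_full.sql
# daily dump of the one table that changes, Monday to Saturday
0 2 * * 1-6 mysqldump --single-transaction mydb changing_table > /backups/changing_table.sql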
3) mysqlhotcopy only works on MyISAM tables, and in most applications you are better off with InnoDB. There are commercial tools (quite expensive) for hot backups of InnoDB tables. Lately there is also a new open-source one for this purpose - Xtrabackup.
Also, to automate the process you can use astrails-safe. It supports database backup with mysqldump and filesystem backup with tar, plus encryption, upload to S3, and many other goodies. There is no xtrabackup support yet, but it should be easy to add if this is what you need.
Adding a MySQL slave to your setup would allow you to take consistent backups without locking the production database.
Adding a slave also gives you a binary log of changes. A dump is a snapshot of the database at the time you took the dump. The binary log contains all statements that modified the data, along with a timestamp.
If you have a failure in the middle of the day and you're only taking backups once a day, you've lost half a day's worth of work. With binary logs and mysqldump, you could restore from the previous day and 'play' the logs forward to the point of failure.
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
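A sketch of that point-in-time recovery, assuming the dump was taken at 02:00 and the failure happened at 13:45 (the binary log file name and timestamps are hypothetical):
$ mysql -u root -p mydb < last_night_dump.sql
$ mysqlbinlog --start-datetime="2023-05-01 02:00:00" --stop-datetime="2023-05-01 13:45:00" mysql-bin.000042 | mysql -u root -p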
If you're running MySQL on a Linux server with LVM disks or a Windows server with VSS, you should check out Zmanda.
It takes binary diffs of the data on disk, which is much faster to read and restore than a text dump of the database.
No, you can specify tables to be locked using --lock-tables, but they aren't locked by default.
If you don't specify any tables then the whole DB is backed up, or you can specify a list of tables:
mysqldump [options] db_name [tables]
I haven't used it, sorry. However, I run a number of MySQL DBs, some bigger and some smaller than 1.7 GB, and I use mysqldump for all my backups.
Maatkit dump might be useful.
http://www.maatkit.org/doc/mk-parallel-dump.html
For MySQL and PHP, try this.
This will also remove backup files older than n days:
<?php
$dbhost = 'localhost';
$dbuser = 'xxxxx';
$dbpass = 'xxxxx';
$dbname = 'database1';
$folder = 'backups/'; // Folder where you want to place the dump file
$filename = $dbname . date("Y-m-d-H-i-s") . ".sql";
$remove_days = 7; // Number of days that each file will stay on the server

// Dump the database to a timestamped .sql file
$command = "mysqldump --host=$dbhost --user=$dbuser --password=$dbpass $dbname > $folder$filename";
system($command);

// Delete dumps older than $remove_days days
$files = glob($folder . "*.sql");
foreach ($files as $file) {
    if (is_file($file)
        && time() - filemtime($file) >= $remove_days * 24 * 60 * 60) { // days * seconds per day
        unlink($file);
        echo "$file removed\n";
    } else {
        echo "$file was last modified: " . date("F d Y H:i:s.", filemtime($file)) . "\n";
    }
}

Reducing Size Of SQL Backup?

I am using SQL Express 2005 and do a backup of all DBs every night. I noticed one DB getting larger and larger. I looked at the DB and cannot see why it's getting so big! I was wondering if it's something to do with the log file?
Looking for tips on how to find out why it's getting so big when it doesn't have that much data in it, and also on how to optimise / reduce the size.
Several things to check:
is your database in "Simple" recovery mode? If so, it'll produce far fewer transaction log entries, and the backup will be smaller. Recommended for development - but not for production
if it's in "FULL" recovery mode - do you do regular transaction log backups? That should limit the growth of the transaction log and thus reduce the overall backup size
have you run a DBCC SHRINKDATABASE(yourdatabasename) on it lately? That may help
do you have any log / logging tables in your database that are just filling up over time? Can you remove some of those entries?
You can find the database's recovery model in the Object Explorer: right-click on your database, select "Properties", and then select the "Options" tab on the dialog.
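If you'd rather check this with a query than the dialog, the standard catalog view has it:
SELECT name, recovery_model_desc FROM sys.databases;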
Marc
If it is the backup that keeps growing and growing, I had the same problem. It is not a 'problem' of course; this is happening by design - you are just making a backup 'set' that will simply expand until all available space is taken.
To avoid this, you've got to change the overwrite options. In SQL Server Management Studio, right-click your DB, choose Tasks - Back Up, then in the backup window you'll see it defaults to the 'General' page. Change this to 'Options' and you'll get a different set of choices.
The default option at the top is 'Append to the existing media set'. This is what makes your backup increase in size indefinitely. Change this to 'Overwrite all existing backup sets' and the backup will always be only as big as one full backup, the latest one.
(If you have a SQL script doing this, change 'NOINIT' to 'INIT'.)
CAUTION: This means the backup file will only contain the latest backup - if you made a mistake three days ago but you only have last night's backup, you're stuffed. Only use this method if you have a backup regime that copies your .bak file daily to another location, so you can go back to any one of those files from previous days.
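In T-SQL terms the difference is roughly this (database name and path are hypothetical):
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb.bak' WITH NOINIT;  -- append: the backup file keeps growing
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb.bak' WITH INIT;    -- overwrite: only the latest backup is kept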
It sounds like you are running with the FULL recovery model and the transaction log is growing continuously as a result of no transaction log backups being taken.
In order to rectify this you need to (see the sketch after this list):
1 - Take a transaction log backup. (See: BACKUP (Transact-SQL))
2 - Shrink the transaction log file down to an appropriate size for your needs. (See: How to use DBCC SHRINKFILE)
3 - Schedule regular transaction log backups according to your data recovery requirements.
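A minimal sketch of steps 1 and 2, assuming a database named MyDb whose logical log file name is MyDb_log (both hypothetical):
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.trn';
DBCC SHRINKFILE (MyDb_log, 512);  -- target size in MB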
I suggest reading the following Microsoft reference in order to ensure that you are managing your database environment appropriately.
Recovery Models and Transaction Log Management
Further Reading: How to stop the transaction log of a SQL Server database from growing unexpectedly
One tip for keeping databases small would be, at design time, to use the smallest data type that you can.
For example, you may have a status table: do you really need the key to be an int, when a smallint or tinyint will do?
Darknight
Since you do a daily FULL backup of your database, of course it will get big over time.
So you have to put a plan in place for yourself, such as this:
1st day: FULL
2nd day: DIFFERENTIAL
3rd day: DIFFERENTIAL
4th day: DIFFERENTIAL
5th day: DIFFERENTIAL
and then start over.
And when you restore your database, if you want to restore the FULL backup you can do it easily; but when you need to restore a DIFFERENTIAL version, you first restore the FULL backup before it WITH NORECOVERY, then the DIFF you need, and then you will have your data back safely.
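As a rough illustration of that schedule (the database name and paths are hypothetical):
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb_full.bak' WITH INIT;                -- day 1: FULL
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb_diff.bak' WITH DIFFERENTIAL, INIT;  -- days 2-5: DIFFERENTIAL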
7zip your backup file for archiving. I recently backed up a database to a 178 MB .bak file. After archiving it to a .7z file it was only 16 MB.
http://www.7-zip.org/
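From the command line that is just (file name hypothetical):
$ 7z a MyDb.bak.7z MyDb.bak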
If you need an archive tool that handles larger file sizes more efficiently and faster than 7zip, I'd recommend taking a look at LZ4 archiving. I have used it for archiving file backups for years with no issues:
http://lz4.github.io/lz4/