Currently I am using the mysqldump program to create backups; below is an example of how I run it:
mysqldump --opt --skip-lock-tables --single-transaction --add-drop-database
--no-autocommit -u user -ppassword --databases db > dbbackup.sql
I perform a lot of inserts and updates on my database throughout the day, but when this process starts it can really slow those inserts and updates down. Does anyone see any flaw in the way I am backing it up (e.g. tables being locked), or is there a way I can improve the backup process so it doesn't affect my inserts and updates as much?
Thanks.
By far the best way of doing this without disturbing the production database is to have master/slave replication set up; you then do the dump from the slave database.
More on MySQL replication here http://dev.mysql.com/doc/refman/5.1/en/replication-howto.html
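For illustration, a dump run against the replica might look something like this (the host name, user, and database below are placeholders, not anything from your setup):
# Point the same dump at the slave so the master keeps serving writes undisturbed.
mysqldump --opt --single-transaction --skip-lock-tables --add-drop-database \
  -h replica-host -u backupuser -p'secret' --databases db > dbbackup.sql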
Even with --skip-lock-tables, mysqldump will issue a READ lock on every table to keep consistency.
Using mysqldump will always lock your database. To perform hot MySQL backups, you will need to either set up a slave (which implies some cost) or use a dedicated tool like Percona XtraBackup (http://www.percona.com/software/percona-xtrabackup), and that only applies if your database is InnoDB. (We use XtraBackup for terabytes of data without an issue, on slaves. If your database is not that big, having a slave and locking it to perform the backup shouldn't be that big of a deal.)
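As a rough sketch, a hot backup with XtraBackup boils down to a single command (the user, password, and target directory here are assumptions, and the exact invocation varies a bit between versions):
# Physical backup of a running InnoDB server; writes keep flowing while the files are copied.
xtrabackup --backup --user=backupuser --password=secret --target-dir=/backups/full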
As I'm about to implement it myself, I'm curious to know how people handle incremental backups for their DBs.
The straightforward way, as I see it, is to shut down CouchDB and use a tool like rsync or duplicity to back up the DB files. It should do the job well and, as an added bonus, it could also be used to back up views.
Does anyone know if a similar backup could be done while CouchDB is still running (and the DB is being updated)?
Does anyone do incremental backups in CouchDB 2.0?
For incremental backups, you can query the changes feed of a database using the "since" parameter, passing the latest update sequence from your last backup, and then copy only the changes into a new database on the same or a different server. AFAIK, there is no "since" parameter for replication, so you will need to roll your own framework for this.
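A minimal sketch of that with curl (the host, database name, and saved sequence value are placeholders):
# Fetch only what changed after the sequence stored at the end of the previous backup run.
curl 'http://localhost:5984/mydb/_changes?since=LAST_SAVED_SEQ&include_docs=true'
# Save the "last_seq" field from the response and pass it as "since" on the next run.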
I know, really bad database design, but here we are: I have a forum platform (based on PunBB), and for each forum I've generated a new set of tables. Really bad idea.
Time has passed and now I have more than 100,000 tables (SHOW TABLES; SELECT FOUND_ROWS(); - 112965 rows in set (1.21 sec)). Performance is great, though, as the tables do the job of indexes, and when you make a direct reference to one table, it's ultra-fast.
The issue is that now I am trying to back everything up and move to another server. Of course, it takes forever. I've launched a mysqldump:
mysqldump --max_allowed_packet=500M --force --opt -u root -pXXXX a > fullbackup.sql
And it's still processing, a little more than 12 hours in! The backup is already 546 MB in size and MySQL is still alive and working.
I've tried copying the MySQL files directly, but I ran into the issue that a lot of the tables ended up corrupted.
Any idea to speed this up?
If you are using AWS RDS, take a snapshot.
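On RDS that is a one-liner with the AWS CLI (the instance and snapshot identifiers below are made up):
# Storage-level snapshot of the RDS instance; no table locks and no dump file.
aws rds create-db-snapshot --db-instance-identifier forum-db --db-snapshot-identifier forum-db-pre-migration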
If you are not, use some other snapshot-based tool. Percona has one: http://www.percona.com/software/percona-xtrabackup/. Using mysqldump to back up large databases is extremely slow.
If your source database is already corrupt, that's an independent issue.
If you are copying the database and the copy is corrupt, that is because you are doing a "hot copy", which means you can't copy a database while it's running without a special snapshot tool. Even file systems have such tools. You need a consistent set of files.
I presume from the fact that your tables are corrupted when you copy the files that you are using InnoDB.
It says in the MySQL documentation here
Physical backup tools include the mysqlbackup of MySQL Enterprise Backup for InnoDB or any other tables, or file system-level commands (such as cp, scp, tar, rsync) for MyISAM tables.
You can use MySQL Enterprise Backup to perform fast, reliable, physical hot backups (i.e. while the database is running). I believe it is quite pricey though.
At my last job we ran MySQL instances with over 160,000 tables.
With that many tables, we had to disable innodb_file_per_table and store all the tables in the central tablespace file ibdata1. If we hadn't, the server couldn't function efficiently because it had too many open files. This should be easier with MySQL 8.0, but in the old version of MySQL we were using, the data dictionary couldn't scale up to so many tables.
To do backups, we used Percona XtraBackup. This is an open-source tool that works very similarly to MySQL Enterprise Backup. It performs a physical backup of the data directory, but without the risk of file corruption you ran into when copying the files directly. Percona XtraBackup works by copying files, but it also copies the InnoDB transaction log continually, so the missing bits of the files can be restored. It's very reliable.
Backing up a database with Percona XtraBackup is a little bit faster, but the greater benefit comes when restoring the backup. Restoring a dump file from mysqldump is very slow. Restoring a physical backup like the one produced by Percona XtraBackup can be done as fast as you can copy the backup files into a new data directory, and then start up the MySQL Server.
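The restore path is roughly the following (the directories are assumptions; the data directory must be empty and the server stopped, and the service name depends on your distribution):
# Make the backup consistent, copy it into the empty data directory, then start the server.
xtrabackup --prepare --target-dir=/backups/full
xtrabackup --copy-back --target-dir=/backups/full
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql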
A recent blog from Percona shows the difference:
https://www.percona.com/blog/2018/04/02/migrate-to-amazon-rds-with-percona-xtrabackup/
I was wondering what's the best way to back up MySQL (v5.1.x) data:
creating an archive of the MySQL data dir
using mysqldump
What are the pros/cons of the above? I am guessing mysqldump has some performance impact on a live database. How much impact are we talking about?
We plan to take a backup every few hours, let's say every 4 hours. What's the best practice around MySQL backups, or database backups in general?
I think that the best way is using mysqldump.
Normally I create a cron task to run at a time of low traffic; it generates a dump named with a timestamp_databasename_environment.sql pattern, then checks for old backups and compresses them.
I think that is a good way to do database backups.
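For example, a nightly setup along those lines might look like this (the paths, credentials, schedule, and 14-day retention are all assumptions):
# /etc/cron.d/db-backup: dump at 03:30 (low traffic), compress, and prune dumps older than 14 days.
30 3 * * * root mysqldump --single-transaction -u backupuser -p'secret' mydb | gzip > /backups/$(date +\%Y\%m\%d)_mydb_production.sql.gz && find /backups -name '*_mydb_production.sql.gz' -mtime +14 -delete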
If your data size is huge then it's better to use the MySQL Enterprise Backup tools. They take online backups and will not impact live services.
XtraBackup, from Percona, is similar to MySQL Enterprise Backup.
I have 2 databases with MyISAM tables which are updated once a week. They are quite big in size (one DB is 2GB and the other is 6GB). I currently back them up once a week with mysqldump and keep the last 2 weeks' worth of .sql dumps on the same server where the DBs are running.
I would like, however, to be able to dump the backups to another server, as they are taking up server space unnecessarily. What is the best way to achieve this? If possible, I would like to keep the databases running during the backup. (no inserts or updates take place during the backup process, just selects).
Thanks in advance,
Tim
Were I you, I would create a script that does the backup and then sends it elsewhere. I know that is kind of what you are asking how to do, but you left out some things that would be good to know, such as: what OS are your two systems running?
If they are both Windows, you could mount a network drive and have the backup dump there (or copy the dump there). If they are Linux servers, I would recommend copying it across using the scp command. If it is a mix, then it gets fun and tricky.
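For the Linux-to-Linux case, a minimal sketch would be something like this (the user, password, database, and remote host are placeholders):
# Dump and compress locally, then push the file to the second box and clean up.
mysqldump -u backupuser -p'secret' db1 | gzip > /tmp/db1.sql.gz
scp /tmp/db1.sql.gz backupserver:/backups/db1-$(date +%F).sql.gz && rm /tmp/db1.sql.gz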
If you are working with Linux servers, the following guide should walk you through the backup process. Click me!
If you are still scratching your head after reading that, let me know what kind of OSes you are rolling with and I can provide more detailed instructions.
Good luck!
I have two MS SQL 2005 servers, one for production and one for test and both have a Recovery Model of Full. I restore a backup of the production database to the test server and then have users make changes.
I want to be able to:
Roll back all the changes made to the test SQL server
Apply all the transactions that have occurred on the production SQL server since the test server was originally restored so that the two servers have the same data
I do not want to do a full database restore from a backup file, as this takes far too long with our 200GB+ database, especially when all the changed data is less than 1GB.
EDIT
Based on the suggestions below, I have tried restoring a database with NORECOVERY, but you cannot create a snapshot of a database that is in that state.
I have also tried restoring it in standby/read-only mode, which works: I can take a snapshot of the database and still apply transaction logs to the original DB, but I cannot make the database writable again as long as there are snapshots against it.
Running:
restore database TestDB with recovery
Results in the following error:
Msg 5094, Level 16, State 2, Line 1 The operation cannot be performed on a database with database snapshots or active DBCC replicas
First off, once you've restored the backup and set the database to "recovered", that's it -- you will never be able to apply another transaction log backup to it.
However, there are database snapshots. I've never used them, but I believe you could use them for this purpose. I think you need to restore the database, leave it in "not restored" mode -- definitely not standby -- and then generate snapshots based on that. (Or was that mirroring? I read about this stuff years ago, but never had reason to use it.)
Then when you want to update the database, you drop the snapshot, restore the "next" set of transaction log backups, and create a fresh snapshot.
However, I don't think this would work very well. Above and beyond the management and maintenance overhead of doing this, if the testers/developers do a lot of modifications, your database snapshot could get very big, even bigger than the original database -- and that's hard drive space used in addition to the "original" database. For infrequently modified databases this could work, but for large OLTP systems, I have serious doubts.
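For reference, creating and dropping a snapshot is a one-statement affair; a hedged sketch via sqlcmd (the server name, logical data-file name, and sparse-file path are guesses you would replace):
# Create a snapshot of TestDB for the test round, then drop it when the round is over.
sqlcmd -S testserver -Q "CREATE DATABASE TestDB_snap ON (NAME = TestDB_Data, FILENAME = 'D:\Snapshots\TestDB_snap.ss') AS SNAPSHOT OF TestDB"
sqlcmd -S testserver -Q "DROP DATABASE TestDB_snap"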
So what you really want is a copy of Production to be made in Test. First, you must have a current backup of production somewhere. Usually on a database this size, full backups are made Sunday nights and then differential backups are made each night during the week.
Take the Sunday backup copy and restore it as a different database name on your server, say TestRestore. You should be able to kick this off at 5:00 pm and it should take about 10 hours. If it takes a lot longer see Optimizing Backup and Restore Performance in SQL Server.
When you get in in the morning, restore the last differential backup from the previous night; this shouldn't take long at all.
Then kick the users off the Test database and rename Test to TestOld (someone will need something), then rename your TestRestore database to be the Test database. See How to rename a SQL Server Database.
The long-range solution is to do log shipping from Production to TestRestore. Then, at a moment's notice, you can rename things and have a fresh Test database.
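A rough sqlcmd sketch of those steps (the server name, backup paths, logical file names, and database names are all assumptions):
# Restore Sunday's full backup as TestRestore, layer the latest differential on top, then swap names.
sqlcmd -S testserver -Q "RESTORE DATABASE TestRestore FROM DISK = 'E:\Backup\Prod_full.bak' WITH MOVE 'Prod_Data' TO 'E:\Data\TestRestore.mdf', MOVE 'Prod_Log' TO 'E:\Data\TestRestore.ldf', NORECOVERY"
sqlcmd -S testserver -Q "RESTORE DATABASE TestRestore FROM DISK = 'E:\Backup\Prod_diff.bak' WITH RECOVERY"
sqlcmd -S testserver -Q "ALTER DATABASE Test MODIFY NAME = TestOld"
sqlcmd -S testserver -Q "ALTER DATABASE TestRestore MODIFY NAME = Test"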
For the rollback, the easiest way is probably using a virtual machine and not saving changes when you close it.
For copying changes across from the production to the test, could you restore the differential backups or transaction log backups from production to the test db?
After having tried all of the suggestions offered here I have not found any means of accomplishing what I outlined in the question through SQL. If someone can find a way and post it or has another suggestion I would be happy to try something else but at this point there appears to be no way to accomplish this.
Storage vendors (such as NetApp) provide the ability to have writable snapshots.
It gives you the ability to create a snapshot of production within seconds, do your tests, and drop/recreate the snapshot.
It's a long-term solution, but... it works.
On Server1, a job exists that compresses the latest full backup
On Server2, there's a job that performs the following steps:
Copies the compressed file to a local drive
Decompresses the file to make the full backup available
Kills all sessions to the database that is about to be restored
Restores the database
Sets the recovery model to Simple
Grants db_owner privileges to the developers
Ref: http://weblogs.sqlteam.com/tarad/archive/2009/02/25/How-to-refresh-a-SQL-Server-database-automatically.aspx
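A hedged sqlcmd sketch of steps 3-6 from that list (the server, database name, backup path, and developer login are assumptions; the copy/decompress steps depend on whatever compression tool the first job uses):
# Kick sessions, restore over the old copy, switch to Simple recovery, and grant db_owner.
sqlcmd -S server2 -Q "ALTER DATABASE DevDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
sqlcmd -S server2 -Q "RESTORE DATABASE DevDB FROM DISK = 'D:\Restore\DevDB_full.bak' WITH REPLACE"
sqlcmd -S server2 -Q "ALTER DATABASE DevDB SET MULTI_USER"
sqlcmd -S server2 -Q "ALTER DATABASE DevDB SET RECOVERY SIMPLE"
sqlcmd -S server2 -d DevDB -Q "EXEC sp_addrolemember 'db_owner', 'dev_team'"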