I have 2 databases with MyISAM tables which are updated once a week. They are quite big in size (one DB is 2GB and the other is 6GB). I currently back them up once a week with mysqldump and keep the last 2 weeks' worth of .sql dumps on the same server where the DBs are running.
I would like, however, to be able to dump the backups to another server, as they are taking up server space unnecessarily. What is the best way to achieve this? If possible, I would like to keep the databases running during the backup (no inserts or updates take place during the backup process, just selects).
Thanks in advance,
Tim
Were I you, I would create a script that does the backup and then sends it elsewhere. I know that is more or less what you are asking how to do, but you left out some things that would be good to know, such as which OS your two systems are running.
If they are both Windows, you could mount a network drive and have the backup dump there (or copy the dump there). If they are Linux servers, I would recommend copying it across using the scp command. If it is a mix then it gets fun and tricky.
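For the Linux-to-Linux case, a minimal sketch of such a script might look like the following (the host, user, database names, and paths are placeholders you would need to adapt):
#!/bin/bash
# Dump each database, compress it, ship it to the backup host, then remove the local copy.
DATE=$(date +%F)
for DB in db1 db2; do
  mysqldump -u root -pSECRET "$DB" | gzip > "/tmp/${DB}-${DATE}.sql.gz"
  scp "/tmp/${DB}-${DATE}.sql.gz" backupuser@backuphost:/backups/mysql/
  rm -f "/tmp/${DB}-${DATE}.sql.gz"
done
Run that from cron once a week and no dumps need to stay on the source server.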
If you are working with Linux servers, the following guide should walk you through the backup process. Click me!
If you are still scratching your head after reading that, let me know what kind of OSes you are rolling with and I can provide more detailed instructions.
Good luck!
I know, really bad database design, but here we are. I have a forum platform (based on PunBB), and for each forum I've generated a new set of tables. Really bad idea.
Time has passed and now I have more than 100,000 tables (SHOW TABLES; SELECT FOUND_ROWS(); - 112965 rows in set (1.21 sec)). Performance is great though, as the tables do the job of indexes, and when you make a direct reference to one table it's ultra-fast.
The issue is that now I am trying to back everything up and move to another server. Of course, it takes forever. I've launched a mysqldump:
mysqldump --max_allowed_packet=500M --force --opt -u root -pXXXX a > fullbackup.sql
And it's still processing, a little more than 12 hours in! The backup is already 546 MB in size and MySQL is still alive and working.
I've tried to copy the MySQL files directly, but I ran into the issue that a lot of the tables were corrupted.
Any idea to speed this up?
If you are using AWS RDS, take a snapshot.
If you are not, use some other snapshot-based tool. Percona has one: http://www.percona.com/software/percona-xtrabackup/. Using mysqldump to back up large databases is extremely slow.
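For example, a full physical backup is roughly a one-liner (exact option names vary between XtraBackup versions, so treat this as a sketch):
xtrabackup --backup --user=root --password=XXXX --target-dir=/data/backups/full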
If your source database is already corrupt, that's an independent issue.
If you are copying the database and the copy is corrupt, that is because you are doing a "hot copy": you can't copy a database while it's running without a special snapshot tool. Even file systems have such tools. You need a consistent set of files.
I presume from the fact that your tables are corrupted when you copy the files that you are using InnoDB.
It says in the MySQL documentation here
Physical backup tools include the mysqlbackup of MySQL Enterprise Backup for InnoDB or any other tables, or file system-level commands (such as cp, scp, tar, rsync) for MyISAM tables.
You can use MySQL Enterprise Backup to perform fast, reliable, physical hot backups (i.e. while the database is running). I believe it is quite pricey though.
At my last job we ran MySQL instances with over 160,000 tables.
With that many tables, we had to disable innodb_file_per_table and store all the tables in the central tablespace file ibdata1. If we didn't, the server couldn't function efficiently because it had too many open files. This should be easier with MySQL 8.0 but in the old version of MySQL we were using, the data dictionary couldn't scale up to so many tables.
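For reference, that setting lives in my.cnf (this is illustrative, not a general recommendation), and tables that already have their own .ibd files only move into ibdata1 once they are rebuilt, e.g. with ALTER TABLE ... ENGINE=InnoDB:
[mysqld]
innodb_file_per_table = 0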
To do backups, we used Percona XtraBackup. This is an open-source tool that works very similarly to MySQL Enterprise Backup. It performs a physical backup of the data directory, but it does it without the risk of file corruption that you caused by copying files directly. Percona XtraBackup works by copying files, but also copying the InnoDB transaction log continually, so the missing bits of the files can be restored. It's very reliable.
Backing up a database with Percona XtraBackup is a little bit faster, but the greater benefit comes when restoring the backup. Restoring a dump file from mysqldump is very slow. Restoring a physical backup like the one produced by Percona XtraBackup can be done as fast as you can copy the backup files into a new data directory, and then start up the MySQL Server.
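As a rough outline of the restore side with a recent XtraBackup release (paths are placeholders; check the manual for your version):
xtrabackup --prepare --target-dir=/data/backups/full
# stop mysqld and empty the data directory, then:
xtrabackup --copy-back --target-dir=/data/backups/full
chown -R mysql:mysql /var/lib/mysql
# start mysqld again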
A recent blog from Percona shows the difference:
https://www.percona.com/blog/2018/04/02/migrate-to-amazon-rds-with-percona-xtrabackup/
I was wondering what's the best way to back up MySQL (v5.1.x) data -
creating an archive of the MySQL data dir
using mysqldump
What are the pros/cons of the above? I am guessing mysqldump has some performance impact on a live database. How much impact are we talking about?
We plan to take a backup every few hours, let's say every 4 hours. What's the best practice around MySQL backups, or database backups in general?
I think that the best way is using mysqldump.
Normally I create a cron task to run at a time of little traffic. It generates a dump named with a timestamp_databasename_environment.sql pattern, then it checks for old backups and compresses them.
I think that is a good way to do database backups.
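As a concrete (hypothetical) example, the crontab entries could look like this - the database name, credentials, paths, and 14-day retention are placeholders, and note that % has to be escaped inside crontab:
30 3 * * * mysqldump -u backup -pSECRET mydatabase | gzip > /backups/$(date +\%Y\%m\%d)_mydatabase_production.sql.gz
45 3 * * * find /backups -name '*_mydatabase_production.sql.gz' -mtime +14 -delete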
If your data size is huge then it's better to use MySQL Enterprise Backup tools. They take online backups and will not impact live services.
XtraBackup, from Percona, is similar to MySQL Enterprise Backup.
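With MySQL Enterprise Backup, for instance, the basic invocation is along these lines (it's a commercial tool and options differ between releases, so this is only a sketch; paths and credentials are placeholders):
mysqlbackup --user=root --password --backup-dir=/data/backups/meb backup-and-apply-log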
Our goal is to restore a test environment from our live environment, so basically we would like to simply back up our current live databases and restore them on our test server.
However... we do not have enough room to move the backups; one of our databases is 50 GB, and we only have around 20 GB free (the backup is 40 GB uncompressed).
We were thinking of dropping that database to make room for the backup, but I'm assuming that when we restore it, we will run out of space.
I am also thinking that we could just detach/attach the database file, but I am assuming that this would mean we have to take our live database down (which we don't want to do).
Another option is to restore from a network drive, so just point the restore to \\servername\X$\RestoreFolder.
But are there any things that I should be aware of if we do this?
I would like to thank everyone for your suggestions in advance.
When you say the database is 50 GB, is that just the MDF file? Or does the LDF file make up a large portion of the 50 GB? Do you need all the log information on a test machine?
To be honest, I'd be tempted to buy a new hard drive and put it in the test server. Even the cheapest one you can find should have more than enough space for several test copies of the database.
Upgrade your hardware. Even a development/test machine should have enough space to backup/restore the catalogs you're working with.
I would like to save my backups from my SQL 2008 server to another server location.
We have 2 servers:
Deployment server
File Server
The problem is that the deployment server doesn't have much space. And we keep 10 days backups of our databases. Therefore we need to store our backups on an external "file server". The problem is that SQL doesn't permit this.
I've tried to run the SQL 2008 service with an account that has admin rights on both PCs (a domain account), but this still doesn't work.
Any thoughts on this one?
Otherwise we'll have to put an external hard disk on a rack server, and that's kind of silly, no?
EDIT:
I've found a way to make it work.
You have to share the folder on the file server, then grant the deployment server (the computer account itself) write permissions. This will make external backups possible with SQL Server.
I don't know if it's safe though; I find it kind of strange to give a computer account rights on a folder.
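Once the share and permissions are in place, backing up straight to the UNC path works with a plain BACKUP DATABASE statement, e.g. via sqlcmd (the server, database, and share names below are made up):
sqlcmd -S DEPLOYSERVER -E -Q "BACKUP DATABASE MyDb TO DISK = N'\\FILESERVER\SqlBackups\MyDb.bak' WITH INIT"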
You can use 3rd party tools like SqlBackupAndFTP
There are several ways of doing this already described, but this one is based on my open source project, SQL Server Compressed Backup. It is a command line tool for backing up SQL Server databases, and it can write to anywhere the NT User running it can write to. Just schedule it in Scheduled Tasks running with a user with appropriate permissions.
An example of backing up to a share on another server would be:
msbp.exe backup "db(database=model)" "gzip" "local(path=\\server\share\path\model.full.bak.gz)"
All the BACKUP command options that make sense for backing up to files are available: log, differential, copy_only, checksum, and more (tape drive options are not available for instance).
You might use a scheduler with a batch file to move the backups a certain amount of time after the backup has started.
If I remember correctly, there's a hack to enable SQL Server to back up to remote storage, but I don't think a hack is the way to go.
Surely the best option may be to use an external backup tool that supports agents. They control when the backup starts and take care of moving the files around.
Sascha
You could create a nice and tidy little SQL Server Integration Services (SSIS) package to both carry out the backup and then move the data to your alternative file store.
Interestingly enough, the maintenance plans within SQL Server actually use SSIS components. These same components are available to use within the Business Intelligence Development Studio (BIDS).
I hope this is clear but let me know if you require further assistance.
Cheers, John
My experience is with older versions of MSSQL, so there may be things in SQL 2008 which help you better.
We are very tight on disk space on some of our old servers. These are machines at our ISP and their restore-from-tape lead time is not good - 24 hours is not uncommon :( so we need to keep a decent online backup history.
We take a full backup on Sunday, a differential backup on each of the other nights, and TLog backups every 10 minutes.
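For reference, the three backup types are just three flavours of the BACKUP statement, e.g. run via sqlcmd (or osql on SQL 2000); the database name and paths below are placeholders:
sqlcmd -E -Q "BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_full.bak'"
sqlcmd -E -Q "BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL"
sqlcmd -E -Q "BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn'"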
We force TLog backups every minute during table/index defrag and statistics updates - this is because these are the most TLog-intensive tasks that we run, and they were previously responsible for determining the size of the standing TLog file; since this change we've been able to run the TLog standing size about 60% smaller than before.
Worth watching the size of Diff backups though - if it turns out that by the Wednesday backup your DIFF backup is getting close to the size of the Full backup you might as well run a Full backup twice a week and have smaller Diff backups for the next day or two.
When you delete your Full or Diff backup files (to recover the disk space), make sure you delete the TLog backups that are earlier. But consider your recovery path - would you like to be able to restore last Sunday's Full backup and all TLogs since, or are you happy that for moment-in-time restore you can only go back as far as last night's DIFF backup? (i.e. to go back further you can only restore to a FULL / DIFF backup, and not to a point in time) If the latter, you can delete all earlier TLog backups as soon as you have made the DIFF backup.
(Obviously, regardless of that, you need to get the backups onto tape etc. and to your copy-server; you just don't have to be dependent on tape recovery to make your restore, but you are losing granularity of restore with time.)
We may recover the last Full (or the Last Full plus Monday's DIFF) to a "temporary database" to check out something, and then drop that DB, but if we really REALLY had to restore to "last Monday 15-minutes-past-10 AM" we would live with having to get backups back from tape or off the copy-server.
All our backup files are in an NTFS directory with Compress set. (You can probably make compressed backups directly from SQL 2008?? The servers we have which are disk-starved are running SQL 2000.) I have read that this is frowned upon, but never seen good reasoning why, and we've never had a problem with it - touch wood!
Some third-party backup software allows setting specific user permissions for network locations, so it doesn't matter what permissions the SQL Server service account has. I would recommend you try EMS SQL Backup, which also supports backup compression on the fly to save storage space.
The source database is quite large. The target database doesn't grow automatically. They are on different machines.
I'm coming from a MS SQL Server, MySQL background and IDS11 seems overly complex (I am sure, with good reason).
One way to move data from one server to another is to back up the database using the dbexport command.
Then after copying the backup files to the destination server run the dbimport command.
To create a new database you need to create the DBSpace for the new database using the onmonitor tool; at this point you could use the existing files from the other server.
You will then need to create the database on the destination server using the dbaccess tool. The dbaccess tool has a database option that allows you to create a database. When creating the database you specify what DBSpace to use.
The source database may be made up of many chunks which you will also need to copy and attach to the new database.
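Roughly, the dbexport/dbimport pair looks like this - the first command runs on the source server, the second on the destination after the export directory has been copied across (the database, directory, and DBSpace names are placeholders; check the exact syntax for your IDS version):
dbexport -o /backups/mydb_export mydb
dbimport -i /backups/mydb_export -d datadbs1 mydb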
The easiest way is dbexport/dbimport, as others have mentioned.
The fastest way is using onpload, the High Performance Loader. If you have lots of data, but not a ridiculous number of tables, this is definitely worth pursuing. There are some bits and pieces on the IIUG site that may be of assistance in scripting the HPL to generate all the config you'll need.
You have a few choices.
dbexport/dbimport
onunload/onload
HPL (high performance loader) options.
I have personally used onunload/onload and dbexport/dbimport. I have not used HPL. I'm using IDS 10.
onunload/onload IBM docs
Back up the raw database to disk or tape in page size chunks
faster (especially if you go to disk)
Issues if the database servers are on different operating systems or hardware, or if they just have different page sizes.
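As a very rough sketch of the onunload/onload route, using the tape device configured in ONCONFIG (the database and DBSpace names are illustrative; see the IBM docs linked above for the exact syntax on your version):
onunload -l mydb
onload -l -d datadbs1 mydb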
dbexport/dbimport IBM docs
backs up the database in delimited ASCII files
writes an ASCII schema of the database including all users, tables, views, indexes, etc. - everything about the structure of the database in one huge plain text file
separate plain text files for each table of the database as well
not so fast
issues on dbimport on any table that has bad data, any view with incorrect syntax, etc. (This can be a good thing, an opportunity to identify and clean)
DO NOT LEAVE THIS TAPE ON THE FRONT SEAT OF YOUR CAR WHEN YOU RUN INTO THE STORE FOR AN ICE CREAM (or you'll be on the news). Also read ... Not a very secure way to be moving data around. :)
Limitation: Requires exclusive access to the source database.
Here is a good place to start in the docs --> Migration of Data Between Database Servers
Have you used the export tool? There used to be a way: if you first put the DBs into quiescent mode, you could actually copy the DBSpaces across (the dbspaces tool, I think... it's been a few years now).
Because with Informix you used to be able to specify the DBSpace(s) to use for the table (maybe even in ALTER TABLE?).
Check - dbaccess tool - there is an export command.
Put the DBs into quiescent mode or shut down, copy the dbspaces, and then attach the table, telling it to point to the new dbspaces file. (The dbspaces tool could be worthwhile looking at... I have manuals around here; they are for 9.2, but it shouldn't have changed too much.)
If both machines use the same version of IDS then another option would be to use ontape to take a backup on one machine and restore it on another. You can use the STDIO option and just stream the backup onto the other machine, where the restore can read from STDIO.
From the "Data Replication for High Availability and Distribution" redbook:
ontape -s -L 0 -F | rsh secondary_server "ontape -p"
You could also create a passwordless ssh connection between the hosts and transfer it in a more secure way.
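For example, the same pipeline over ssh would look something like this (the user and host names are placeholders):
ontape -s -L 0 -F | ssh informix@secondary_server "ontape -p"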