Whenever I move a SQL Server database to a new server, I always do a database backup and restore. I have seen a lot of people, especially DBAs, who will do a detach / re-attach of the MDF file instead. What is the preferred method and why? I find a backup/restore to be safer, with less likelihood of corruption.
I just did this. For our small databases, I did a backup/restore - just because I felt like it was 'safer'. However, when moving LARGE databases, it is so much faster to just detach, copy and attach. This beats having to do a (usually slow) backup, followed by a copy and a (usually slow) restore.
Microsoft recommends using the ALTER DATABASE 'planned relocation procedure'.
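For reference, a minimal sketch of that procedure, assuming the usual file-move steps (database, logical file, and path names are illustrative):

ALTER DATABASE MyDb SET OFFLINE;
-- copy the .mdf/.ldf files to their new location at the OS level
ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_Data', FILENAME = 'E:\SQLData\MyDb.mdf');
ALTER DATABASE MyDb MODIFY FILE (NAME = 'MyDb_Log',  FILENAME = 'E:\SQLData\MyDb_log.ldf');
ALTER DATABASE MyDb SET ONLINE;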
Note, you can use attach/detach to upgrade between SQL versions as shown in the link.
Some background:
A customer asked a certified SQL Server consultant for his opinion on migrating from SQL Server 2005 to SQL Server 2008.
One of his most important recommendations was not to use backup/restore but instead use the migration wizard to copy all the data into a new database.
He said that this would ensure that the internal structure of the database would be in SQL Server 2008 format, and would ultimately result in better performance.
The customer is skeptical about this because they can't find anything in writing, in white papers or otherwise, to corroborate the consultant's statement.
So they posed me this question:
Given a SQL database which originally started out on SQL Server 2000 and has been migrated to newer versions of SQL Server using backup/restore (finally landing on SQL Server 2005):
Would migrating to SQL Server 2008 using the Migration Wizard, in effect copying all the raw data into a new database, result in better performance characteristics than using the backup/restore method again?
I'll repeat what I posted on Twitter, "your consultant is an idiot".
Doing a backup and restore will be much easier, and require a much shorter downtime. Also it will ensure that the data is consistent and that no objects are missed.
So long as you are doing index maintenance (rebuilding or reorging/defragging indexes) then any page splits which have happened are fixed and there will be no performance problems.
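As a minimal sketch of that maintenance (the table name is illustrative):

ALTER INDEX ALL ON dbo.MyTable REBUILD;     -- rebuild: writes the index to fresh pages
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;  -- or reorganize: lighter-weight defragmentation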
When the database is moved from one version to another the physical database file is updated to the new version. You'll notice when you restore the database that the compatibility level is set to the old version's number. This has nothing to do with the physical structure of the database file. You can change the compatibility level at any time to a lower or higher version. You can see this if you restore the database using T-SQL as after the database is restored you'll see the specific upgrade steps which are performed.
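A hedged sketch of what that looks like (names and paths are illustrative):

RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb.bak' WITH RECOVERY;
-- the message output lists the upgrade steps applied to the physical file
SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyDb';
-- the level still reports the old version; raise it whenever you choose
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;  -- 100 = SQL Server 2008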
In response to qwerty13579's comment, when the indexes are rebuilt the index is written to new physical database pages, so exporting and importing the data in a SQL Server database isn't needed.
For the record, the migration wizard is about the worst possible option for moving data from database to database.
I agree with Denny.
Backup/restore is the easiest way to upgrade.
For a no-downtime upgrade you can use database mirroring to the new server and fail over to the new version.
One important task that improves performance is refreshing all statistics when you upgrade to a new version.
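A minimal sketch (the table name is illustrative; the sampled sp_updatestats pass is usually sufficient):

EXEC sp_updatestats;                            -- refresh all out-of-date statistics
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;    -- or per table, with a full scan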
I know: really bad database design, but here we are. I have a forum platform (based on PunBB), and for each forum I've generated a new set of tables. Really bad idea.
Time has passed and now I have more than 100,000 tables (SHOW TABLES; SELECT FOUND_ROWS(); - 112965 rows in set (1.21 sec)). Performance is great though, as the tables do the job of indexes, and when you make a direct reference to one table it's ultra-fast.
The issue is that now I am trying to back everything up and move to another server. Of course, it takes forever. I've launched a mysqldump:
mysqldump --max_allowed_packet=500M --force --opt -u root -pXXXX a > fullbackup.sql
And it's still processing, a little more than 12 hours in! The backup is already 546 MB in size and MySQL is still alive and working.
I've tried copying the MySQL files directly, but I ran into the issue that a lot of the tables ended up corrupted.
Any idea to speed this up?
If you are using AWS RDS, take a snapshot.
If you are not, use some other snapshot-based tool. Percona has one: http://www.percona.com/software/percona-xtrabackup/. Using mysqldump to back up large databases is extremely slow.
If your source database is already corrupt, that's an independent issue.
If you are copying the database and the copy is corrupt, that is because you are doing a "hot copy" which means that you can't copy a database while it's running without a special "snapshot tool". Even file systems have such tools. You need a consistent set of files.
I presume from the fact that your tables are corrupted when you copy the files that you are using InnoDB.
It says in the MySQL documentation here:
Physical backup tools include the mysqlbackup of MySQL Enterprise Backup for InnoDB or any other tables, or file system-level commands (such as cp, scp, tar, rsync) for MyISAM tables.
You can use MySQL Enterprise Backup to perform fast, reliable, physical hot backups (i.e. while the database is running). I believe it is quite pricey though.
At my last job we ran MySQL instances with over 160,000 tables.
With that many tables, we had to disable innodb_file_per_table and store all the tables in the central tablespace file ibdata1. If we didn't, the server couldn't function efficiently because it had too many open files. This should be easier with MySQL 8.0 but in the old version of MySQL we were using, the data dictionary couldn't scale up to so many tables.
To do backups, we used Percona XtraBackup. This is an open-source tool that works very similarly to MySQL Enterprise Backup. It performs a physical backup of the data directory, but without the risk of file corruption that you ran into by copying files directly. Percona XtraBackup works by copying files, but it also copies the InnoDB transaction log continually, so the missing bits of the files can be restored. It's very reliable.
Backing up a database with Percona XtraBackup is a little bit faster, but the greater benefit comes when restoring the backup. Restoring a dump file from mysqldump is very slow. Restoring a physical backup like the one produced by Percona XtraBackup can be done as fast as you can copy the backup files into a new data directory, and then start up the MySQL Server.
A recent blog post from Percona shows the difference:
https://www.percona.com/blog/2018/04/02/migrate-to-amazon-rds-with-percona-xtrabackup/
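For reference, a hedged sketch of a typical XtraBackup run (paths are illustrative and flags vary between versions; older releases wrap this in the innobackupex script):

xtrabackup --backup --target-dir=/data/backups/full
xtrabackup --prepare --target-dir=/data/backups/full
# restoring = copying the prepared files into an empty data directory, then starting mysqld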
Is there a way to back up certain tables in a SQL Server database? I know I can move certain tables into different filegroups and perform a backup of those filegroups. The only issue with this is that I believe you need a backup of all the filegroups and transaction logs to restore the database on a different server.
The reason why I need to restore the backup on a different server is that these are backups of customers' databases. For example, we may have a remote customer and need to get a copy of their 4 GB database. 90% of this space is taken up by two tables; we don't need these tables as they only store images. Currently we have to take a copy of the database and upload it to an FTP site… With larger databases this can take a lot of time, so we need to reduce the backup size.
The other way I can think of doing this would be to take a full backup of the DB and restore it on the client's SQL Server. Then connect to the new temp DB and drop the two tables. Once this is done we could take a backup of the DB. The only issue with this solution is that it could use a lot of system resources while running, so it's less than ideal.
So my idea was to use two filegroups. The primary filegroup would host all of the tables except the two tables which would be in the second filegroup. Then when we need a copy of the database we just take a backup of the primary filegroup.
I have done some testing but have been unable to get it working. Any suggestions? Thanks
Basically your approach using 2 filegroups seems reasonable.
I suppose you're working with SQL Server on both ends, but you should clarify for each whether that is truly the case, as well as which editions (Enterprise, Standard, Express, etc.) and which releases (2000, 2005, 2008, 2012?).
Table backup in SQL Server is, as discussed here, a dead horse that still gets a good whippin' now and again. Basically, it's not a feature in the built-in backup feature set. As you rightly point out, the partial backup feature can be used as a workaround. Also, if you just want to transfer a snapshot from a subset of tables to another server using FTP, you might try working with the bcp utility, as suggested by one of the answers in the above linked post, or the export/import data wizards. To round out the list of table backup solutions and workarounds for SQL Server, there is also third-party software (and possibly others) that claims to allow individual recovery of table objects, but unfortunately doesn't seem to offer individual object backup: "Object Level Recovery Native" by Red Gate. (I have no affiliation with or experience using this particular tool.)
As for your more specific concern about restoring from partial database backups:
I believe you need a backup of all the filegroups and transaction logs
to restore the database on a different server
1) You might have some difficulties your first time trying to get it to work, but you can perform restores from partial backups as far back as SQL Server 2000 (as a reference, see here).
2) From 2005 onward you have the option of restoring partially today and, if you need to, restoring the remainder of your database later. You don't need to include all filegroups: you always include the primary filegroup, and if your database is in simple recovery mode you must also include all read-write filegroups. (See the sketch after this list.)
3) You need to apply log backups only if your DB is in bulk-logged or full recovery mode and you are restoring changes to a read-only filegroup that has become read-write since the last restore. Since you are expecting changes to these tables, you will likely not be concerned with read-only filegroups, and so not concerned with shipping and applying log backups.
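Here is a hedged sketch of the two-filegroup approach (database and path names are illustrative):

-- back up only the primary filegroup; the two image tables live in the secondary one
BACKUP DATABASE CustomerDb FILEGROUP = 'PRIMARY'
   TO DISK = N'C:\Backups\CustomerDb_primary.bak';

-- on the other server, perform a partial (piecemeal) restore
-- note: under the simple recovery model the partial restore must include all
-- read-write filegroups, so mark the image filegroup read-only first or use full recovery
RESTORE DATABASE CustomerDb FILEGROUP = 'PRIMARY'
   FROM DISK = N'C:\Backups\CustomerDb_primary.bak'
   WITH PARTIAL, RECOVERY;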
You might also investigate at some point whether any of the other SQL Server features, such as merge replication, or those mentioned above (bcp, import/export wizards), might provide a solution that is simpler or more adequately meets your needs.
I'm looking for a backup routine which allows our production database to be backed up with sensitive data stripped out of certain columns within the database, to be exported to our testing server.
The routine should require the least human intervention possible, hopefully just a simple customisable SQL script, without taking the production database offline.
Database server is SQL Server 2008.
I've run into similar requirements before, and the only sure solution I know of is to use a copy of your production database. You can mask/delete data on the copy and run backups from there. Yes it's ugly and a waste of resources, but to date I haven't found a solid alternative for this particular problem.
As for the copy method, you do have some options:
Replication
Scheduled DB copy
Backup/restore from production
So while I admit this solution is pretty cringe-worthy, it can be automated and serve your purposes. If you can find productive uses for the database copy that don't require your deleted information (e.g. reports, testing, development) then this can actually be a less-than-terrible solution. It can be a nice security boon to have a slightly out-of-date version of your production database with sensitive data removed.
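As a hedged illustration of that restore-mask-backup cycle (database, table, and column names are hypothetical):

RESTORE DATABASE Prod_Masked
   FROM DISK = N'C:\Backups\Prod.bak'
   WITH MOVE 'Prod_Data' TO N'D:\Data\Prod_Masked.mdf',
        MOVE 'Prod_Log'  TO N'D:\Data\Prod_Masked.ldf',
        RECOVERY;

UPDATE Prod_Masked.dbo.Customers        -- mask the sensitive columns
   SET Email = 'masked@example.com',
       CardNumber = NULL;

BACKUP DATABASE Prod_Masked
   TO DISK = N'C:\Backups\Prod_ForTesting.bak' WITH INIT;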
If you want to take a backup, just run:

BACKUP DATABASE DbName TO DISK = 'DbName.bak';

If you want to specify other options (COPY_ONLY, INIT, and so on), you can add them in a WITH clause. When you give just a file name rather than a full path, the backup file is generated in the default backup path of SQL Server 2008.
I am trying to figure out how SQL Server DBAs are doing their backups and verifies in 2005. I use Idera's free stored procs (which are no longer available to download, btw) to back up and verify, and have gotten around 65% compression. Is there any other free alternative?
Not sure if this is what Idera's scripts do, but you could script a (native) SQL backup to a temporary location, then call PKZip or 7zip or some command-line compression software to compress the backup to a permanent storage location.
Note that most of these zip utilities have a high CPU cost.
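A hedged sketch of that approach, including a verify pass (paths and the 7-Zip location are examples):

BACKUP DATABASE MyDb TO DISK = N'C:\Temp\MyDb.bak' WITH CHECKSUM, INIT;
RESTORE VERIFYONLY FROM DISK = N'C:\Temp\MyDb.bak' WITH CHECKSUM;
-- then compress from the OS, for example:
-- "C:\Program Files\7-Zip\7z.exe" a D:\Archive\MyDb.7z C:\Temp\MyDb.bak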
See the discussion in the comments of this post:
https://blog.stackoverflow.com/2009/02/our-backup-strategy-inexpensive-nas/
(Edit: Or just upgrade to SQL2008 R2, which supports native backup compression.)
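For example, once on 2008 R2 native compression is a single option (the database name is illustrative):

BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb.bak' WITH COMPRESSION, INIT;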
Idera, which you are using, is a third-party tool; with it you can do backup/restore and monitor your servers and databases.
SQL Server has its own native tooling where you can set up database backups to go to disk, usually with SSIS packages via a maintenance plan, or with T-SQL (where you can also configure full, differential, and log backups, and verify the integrity of the backup after it finishes). If the database grows more and more, you need to watch disk capacity, since the backups go to disk. For a big database (say 1 TB) you have to cut a backup strategy: taking a daily full backup causes a lot of I/O, so you may decide on a weekly full backup with differential backups in place on the other days. You should also set up cleanup for however many days of backups you want to keep (that also exists in the same maintenance plan).
See for example: http://bradmcgehee.com/2010/01/13/how-to-use-sql-backup-inside-a-maintenance-plan/
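A minimal sketch of that weekly-full-plus-daily-differential strategy (names are illustrative):

-- weekly full (e.g. Sunday)
BACKUP DATABASE BigDb TO DISK = N'D:\Backups\BigDb_full.bak' WITH INIT;
-- differentials on the other days
BACKUP DATABASE BigDb TO DISK = N'D:\Backups\BigDb_diff.bak' WITH DIFFERENTIAL, INIT;
-- plus regular log backups if the database uses the full recovery model
BACKUP LOG BigDb TO DISK = N'D:\Backups\BigDb.trn';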
But backup/restore ultimately depends on how well you manage it from your side, knowing the business risk and communicating with the business.
From SQL Server 2008 onwards you have backup compression, like what Idera SQL Safe does.
But how backups are done usually depends on what has been implemented, whether with the native tooling or with some other third-party product like Commvault, Idera, or TDP (which goes to tape), etc.; it depends on what has been agreed.
Backup reference: http://msdn.microsoft.com/en-us/library/ms186865.aspx
Free (good) SQL Server DBA tools are hard to find. = /
Have you considered Windows Backups?
It's definitely not the best thing out there, and it takes up a lot of space, but it is free, it does back up your data, and you already have it installed.