Some background:
A customer has asked a Certified SQL Server Consultant for his opinion on migrating from SQL Server 2005 to SQL Server 2008.
One of his most important recommendations was not to use backup/restore but instead use the migration wizard to copy all the data into a new database.
He said that this would ensure that the inner structure of the database would be in an SQL 2008 format, and would ultimately result in better performance.
The customer is skeptical about this because they can't find anything, in white papers or otherwise, to corroborate the consultant's statement.
So they posed me this question:
Given a SQL database which originally started out on SQL Server 2000 and has been migrated to newer versions of SQL Server using backup/restore (finally ending up on SQL Server 2005):
Would migrating to SQL Server 2008 using the Migration Wizard, in effect copying all the raw data into a new database, result in better performance characteristics than using the backup/restore method again?
I'll repeat what I posted on Twitter, "your consultant is an idiot".
Doing a backup and restore will be much easier, and require a much shorter downtime. Also it will ensure that the data is consistent and that no objects are missed.
As long as you are doing index maintenance (rebuilding or reorganizing/defragmenting indexes), any page splits which have happened are fixed and there will be no performance problems.
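For example, a typical maintenance pass looks like this (dbo.MyTable is a placeholder name):

-- rebuild all indexes on one table; this repairs fragmentation left by page splits
ALTER INDEX ALL ON dbo.MyTable REBUILD;
-- or the lighter-weight alternative:
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;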
When the database is moved from one version to another, the physical database file is updated to the new version. You'll notice when you restore the database that the compatibility level is set to the old version's number; this has nothing to do with the physical structure of the database file. You can change the compatibility level at any time to a lower or higher version. You can see this if you restore the database using T-SQL: after the database is restored, you'll see the specific upgrade steps which are performed.
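As a minimal sketch (the database name and path are illustrative, and the exact version numbers in the messages will vary):

RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb.bak' WITH RECOVERY;
-- the output includes lines such as:
-- "Database 'MyDb' running the upgrade step from version 611 to version 655."
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;  -- optionally raise the level to 2008 afterwards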
In response to qwerty13579's comment: when the indexes are rebuilt, the index is written to new physical database pages, so exporting and importing the data in a SQL Server database isn't needed.
For the record, the migration wizard is about the worst possible option for moving data from database to database.
I agree with Denny.
Backup/restore is the easiest way to upgrade.
For a no-downtime upgrade you can use database mirroring to the new server and fail over to the new version.
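Assuming mirroring to the new server is already established and synchronized, the cutover itself is a single command on the principal (MyDb is a placeholder):

ALTER DATABASE MyDb SET PARTNER FAILOVER;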
One important task that improves performance when you upgrade to a new version is refreshing all statistics.
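For example (the database and table names are illustrative):

USE MyDb;
EXEC sp_updatestats;  -- refresh all statistics in the database
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;  -- or per table, with a full scan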
There is an old SQL Server database which needs to be upgraded to a much improved version of its schema. Mostly, new columns have been added to existing tables. It is necessary to keep the original data in the old database. Is there any easier way to upgrade the schema than comparing and updating table by table manually?
I've used Adept SQL in the past with a lot of success. It will compare the databases for you, and even generate a script to bring one database up to date with the other. It is not a free product, but you can use it as a trial (with most features, I believe). If this is a one-time operation, it'll be just what you need.
In the interest of full disclosure, we liked the product so much that we did end up purchasing it.
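For the scenario described (mostly new columns), the change script such a tool generates largely boils down to ALTER TABLE statements, which preserve the existing rows, e.g. (table and column names are illustrative):

ALTER TABLE dbo.Customers ADD LoyaltyPoints int NULL;  -- existing data is untouched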
Is there a way to back up certain tables in a SQL Server database? I know I can move certain tables into different filegroups and perform a backup of these filegroups. The only issue with this is that I believe you need a backup of all the filegroups and transaction logs to restore the database on a different server.
The reason why I need to restore the backup on a different server is that these are backups of customers' databases. For example, we may have a remote customer and need to get a copy of their 4 GB database. 90% of this space is taken up by two tables; we don't need these tables, as they only store images. Currently we have to take a copy of the database and upload it to an FTP site… With larger databases this can take a lot of time, so we need to reduce the database size.
The other way I can think of doing this would be to take a full backup of the DB and restore it on the client's SQL server, then connect to the new temp DB and drop the two tables. Once this is done we could take a backup of the DB. The only issue with this solution is that it could use a lot of system resources while running, so it's less than ideal.
So my idea was to use two filegroups. The primary filegroup would host all of the tables except the two tables which would be in the second filegroup. Then when we need a copy of the database we just take a backup of the primary filegroup.
I have done some testing but have been unable to get it working. Any suggestions? Thanks
Basically your approach using 2 filegroups seems reasonable.
I suppose you're working with SQL Server on both ends, but you should clarify for each end whether that is truly the case, as well as which editions (Enterprise, Standard, Express, etc.) and which releases (2000, 2005, 2008, 2012?).
Table backup in SQL Server is a dead horse that still gets a good whippin' now and again. Basically, it's not a feature in the built-in backup feature set. As you rightly point out, the partial backup feature can be used as a workaround. Also, if you just want to transfer a snapshot of a subset of tables to another server using FTP, you might try working with the bcp utility, as suggested by one of the answers in the above linked post, or the export/import data wizards. To round out the list of table backup solutions and workarounds for SQL Server, there is this (and possibly other?) third-party software that claims to allow individual recovery of table objects, but unfortunately doesn't seem to offer individual object backup: "Object Level Recovery Native" by Red Gate. (I have no affiliation with or experience using this particular tool.)
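For instance, a native-format bcp round trip for a single table might look like this (server and table names are illustrative; -n uses native format, -T a trusted connection):

bcp CustomerDb.dbo.Orders out Orders.dat -n -S SourceServer -T
bcp CustomerDb.dbo.Orders in Orders.dat -n -S TargetServer -T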
As for your more specific concern about restoring from partial database backups:
I believe you need a backup of all the filegroups and transaction logs
to restore the database on a different server
1) You might have some difficulties your first time trying to get it to work, but you can perform restores from partial backups as far back as SQL Server 2000 (as a reference, see here).
2) From 2005 onward you have the option of partially restoring today, and if you need to, you can later restore the remainder of your database. You don't need to include all filegroups: you always include the primary filegroup, and if your database is in simple recovery mode you also need to add all read-write filegroups.
3) You need to apply log backups only if your DB is in bulk-logged or full recovery mode and you are restoring changes to a read-only filegroup that has become read-write since the last restore. Since you are expecting changes to these tables, you will likely not be concerned about read-only filegroups, and so not concerned about shipping and applying log backups.
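As a sketch of the overall mechanics (database, logical file, and path names are illustrative; the details depend on your recovery model, as noted above):

-- back up just the primary filegroup:
BACKUP DATABASE CustomerDb FILEGROUP = N'PRIMARY' TO DISK = N'C:\Backups\CustomerDb_primary.bak';
-- partial restore on the other server; tables in the omitted filegroup stay offline:
RESTORE DATABASE CustomerDb FILEGROUP = N'PRIMARY'
FROM DISK = N'C:\Backups\CustomerDb_primary.bak'
WITH PARTIAL, RECOVERY,
MOVE N'CustomerDb' TO N'C:\Data\CustomerDb.mdf',
MOVE N'CustomerDb_log' TO N'C:\Data\CustomerDb.ldf';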
You might also investigate at some point whether any of the other SQL Server features, such as merge replication, or those mentioned above (bcp, import/export wizards), might provide a solution that is simpler or that more adequately meets your needs.
I have noticed that when I restore a DB, it restores data + stored procedures. I want to restore only the data from my existing database in SQL Server 2008. How can I achieve this?
The scenario is:
I have a production DB and a development DB. While developing, I have made several changes to SPs and table structure. The file I was using to track those changes is lost, and now I want all the table structure + SP changes in the DB, which should also have the latest data from the production DB.
How can I achieve this?
You can't do a selective restore.
You have to restore the backup to another "work" database and then migrate the bits you want to recover into the target database. After that, you're free to drop the work database.
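A minimal sketch of that workflow (database, file, and table names are illustrative):

-- restore the backup under a new name:
RESTORE DATABASE MyDb_Work FROM DISK = N'C:\Backups\MyDb.bak'
WITH MOVE N'MyDb' TO N'C:\Data\MyDb_Work.mdf',
MOVE N'MyDb_log' TO N'C:\Data\MyDb_Work.ldf';
-- copy just the data you need into the real database (column lists must match):
INSERT INTO MyDb.dbo.Orders SELECT * FROM MyDb_Work.dbo.Orders;
-- when done:
DROP DATABASE MyDb_Work;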
You can use the following two tools, one to sync your objects and the other one to sync your data:
ApexSQL Diff – a SQL Server database comparison and synchronization tool which detects differences between database objects and resolves them without errors. It generates comprehensive reports on the found differences and can automate the synchronization process between live and versioned databases, backups, snapshots and script folders
ApexSQL Data Diff - a SQL Server data comparison and synchronization tool which detects data differences and resolves them without errors. It can compare and sync live databases and native or natively compressed database backups and generate comprehensive reports on the detected differences
Disclaimer: I work as a Product Support Engineer at ApexSQL
Simple situation. I've created an application which uses SQL Server as database. I could add a table to this database which contains the version number of my application, so my application can check if it's talking to the correct version of the database. But since there are no other settings that I store inside a database, this would mean that I would add a single table with a single field, which contains only one record.
What a waste of a good resource...
Is there another way that I can tell the SQL Server database about the product version that it's linked to?
I'm not interested in the version of SQL Server itself but of the database that it's using.
(Btw, this applies to both SQL Server 2000 and 2005.)
If you're using SQL 2005 and up, you can store version info as an extended property of the database itself and query the sys.extended_properties view to get the info, e.g.:
EXEC sys.sp_addextendedproperty @name = N'CurrentDBVersion', @value = N'1.4.2'
SELECT value FROM sys.extended_properties WHERE name = 'CurrentDBVersion' AND class_desc = 'DATABASE'
If SQL 2000, I think your only option is your own table with one row. The overhead is almost non-existent.
I'd go with the massive overhead of a varchar(5) field with a tinyint PK. It makes the most sense if you're talking about a product that already uses the SQL Server database.
You're worried about overhead on such a small part of the system that it becomes negligible.
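For example, the whole thing could be as small as this (names are illustrative):

CREATE TABLE dbo.AppVersion (Id tinyint NOT NULL PRIMARY KEY, Version varchar(5) NOT NULL);
INSERT INTO dbo.AppVersion (Id, Version) VALUES (1, '1.4.2');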
I would put the connection settings in the application or a config file that the application reads. Have the app check the version number in the connection settings.
Even if there was such a feature in SQL Server, I wouldn't use it. Why?
Adding a new table to store the information has a negligible impact on both the size and speed of the application and database
A new table could store other configuration data related to the application, and you've already got a mechanism in place for it (and if your application is that large, you will have other configuration data)
Coupling the application to a specific database engine (especially this way) is very rarely a good thing
Not standard practice, and not obvious to someone new looking at the system for the first time
I highly recommend writing the database version into the database.
In an application we maintained over a decade or so, we had updates to the database schema in every release.
When the user started the application after an update installation, it could detect if the database was too old and convert it to the newer schema. We actually did incremental updates: in order to get from 7 to 10 we did 7 -> 8, 8 -> 9, 9 -> 10.
Also imagine the scenario when somebody restores the database to an older state from a backup.
Don't even think about adding a single table, just do it (and think about the use cases).
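A minimal sketch of that incremental upgrade, assuming a one-row version table and one upgrade procedure per step (all names here are hypothetical):

DECLARE @v int;
SELECT @v = SchemaVersion FROM dbo.DbInfo;  -- hypothetical one-row version table
IF @v = 7 BEGIN EXEC dbo.Upgrade_7_to_8; SET @v = 8; END
IF @v = 8 BEGIN EXEC dbo.Upgrade_8_to_9; SET @v = 9; END
IF @v = 9 BEGIN EXEC dbo.Upgrade_9_to_10; SET @v = 10; END
UPDATE dbo.DbInfo SET SchemaVersion = @v;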
I'm working on a legacy project, written for the most part in Delphi 5 before it was upgraded to Delphi 2007. A lot has changed after this upgrade, except the database that's underneath. It still uses MS-Access for data storage.
Now we want to support SQL Server as an alternate database. Still just for single-user situations, although multi-user support will be a feature for the future. And although there won't be many migration problems (see below) when it needs to use a different database, keeping two database structures synchronized is a bit of a problem.
If I were to create an SQL script to generate the SQL Server database, then I would need a second script to keep the Access database up to date too. They don't speak the same dialect. (At least, not for our purposes.) So I need a way to maintain the database structure in a simple way, making sure it can generate both a valid SQL Server database and a valid Access database. I could write my own tool where I store the database structure inside an XML file, which, combined with some smart code and ADOX, would generate both database types.
But isn't there already a good tool that can do this?
Note: the application also uses ADO and all queries are just simple select statements. Although it has 50+ tables, there's one root "Document" table and the user selects one of the "documents" in this table. It then collects all records from all tables that are related to this document record and stores them in an in-memory structure. When the user saves the data, it just writes the document record and all changed data back to the database again. Basically, this read/write mechanism of documents is the only database interaction in the whole application. So using a different database is not a big problem.
We will drop the MS-Access database in the future but for now we have 4000 customers using this application. We first need to make sure the whole thing works with SQL Server and we need to continue to maintain the current code. As a result, we will have to support both databases for at least a year.
Take a look at DB Explorer; there is a trial download too.
OR
Use the migration wizard to go from MS Access to SQL Server.
After development in Access (schema changes), use the wizard again.
Use a tool to compare SQL Server schemata.