How to refresh a test instance of SQL Server with production data without using full backups

I have two MS SQL 2005 servers, one for production and one for test, and both have a Recovery Model of Full. I restore a backup of the production database to the test server and then have users make changes.
I want to be able to:
Roll back all the changes made to the test SQL server
Apply all the transactions that have occurred on the production SQL server since the test server was originally restored so that the two servers have the same data
I do not want to do a full database restore from a backup file, as this takes far too long with our 200+ GB database, especially when all the changed data is less than 1 GB.
EDIT
Based on the suggestions below I have tried restoring a database with NORECOVERY, but you cannot create a snapshot of a database that is in that state.
I have also tried restoring it to standby (read-only) mode, which works: I can take a snapshot of the database and still apply transaction logs to the original db, but I cannot make the database writable again as long as there are snapshots against it.
Running:
restore database TestDB with recovery
Results in the following error:
Msg 5094, Level 16, State 2, Line 1 The operation cannot be performed on a database with database snapshots or active DBCC replicas

First off, once you've restored the backup and set the database to "recovered", that's it -- you will never be able to apply another transaction log backup to it.
However, there are database snapshots. I've never used them, but I believe you could use them for this purpose. I think you need to restore the database, leave it in "not restored" mode -- definitely not standby -- and then generate snapshots based on that. (Or was that mirroring? I read about this stuff years ago, but never had reason to use it.)
Then when you want to update the database, you drop the snapshot, restore the "next" set of transaction log backups, and create a fresh snapshot.
However, I don't think this would work very well. Above and beyond the management and maintenance overhead of doing this, if the testers/developers do a lot of modifications, your database snapshot could get very big, even bigger than the original database -- and that's hard drive space used in addition to the "original" database. For infrequently modified databases this could work, but for large OLTP systems, I have serious doubts.
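For reference, the basic snapshot syntax for rolling test changes back looks something like this (the snapshot name, logical file name, and path below are made up, and as the edit to the question notes, combining this with a database kept in standby/restoring mode is exactly where it falls apart):
-- Create a snapshot of the test database before the testers start
CREATE DATABASE TestDB_Snap ON
    (NAME = TestDB_Data, FILENAME = N'D:\Snapshots\TestDB_Data.ss')
AS SNAPSHOT OF TestDB;
-- Throw away the testers' changes by reverting to the snapshot
RESTORE DATABASE TestDB FROM DATABASE_SNAPSHOT = 'TestDB_Snap';
-- Drop the snapshot before attempting any further restores against TestDB
DROP DATABASE TestDB_Snap;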

So what you really want is a copy of Production to be made in Test. First, you must have a current backup of production somewhere. Usually on a database this size, full backups are made Sunday nights and then differential backups are made each night during the week.
Take the Sunday backup copy and restore it as a different database name on your server, say TestRestore. You should be able to kick this off at 5:00 pm and it should take about 10 hours. If it takes a lot longer see Optimizing Backup and Restore Performance in SQL Server.
When you get in in the morning, restore the last differential backup from the previous night; this shouldn't take long at all.
Then kick the users off the Test database and rename Test to TestOld (someone will need something), then rename your TestRestore database to be the Test database. See How to rename a SQL Server Database.
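A rough T-SQL sketch of that sequence (the backup paths and logical file names below are placeholders, not taken from the question):
-- Restore Sunday's full backup under a new name, leaving it able to accept the differential
RESTORE DATABASE TestRestore
    FROM DISK = N'E:\Backups\Prod_Full.bak'
    WITH MOVE N'Prod_Data' TO N'D:\Data\TestRestore.mdf',
         MOVE N'Prod_Log'  TO N'D:\Data\TestRestore_log.ldf',
         NORECOVERY;
-- Next morning: apply the latest differential and bring the database online
RESTORE DATABASE TestRestore
    FROM DISK = N'E:\Backups\Prod_Diff.bak'
    WITH RECOVERY;
-- Swap the databases once the users are off Test
ALTER DATABASE Test SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE Test MODIFY NAME = TestOld;
ALTER DATABASE TestRestore MODIFY NAME = Test;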
The long range solution is to do log shipping from Production to TestRestore. Then at a moment's notice you can rename things and have a fresh Test database.

For the rollback, the easiest way is probably using a virtual machine and not saving changes when you close it.
For copying changes across from the production to the test, could you restore the differential backups or transaction log backups from production to the test db?

After having tried all of the suggestions offered here I have not found any means of accomplishing what I outlined in the question through SQL. If someone can find a way and post it or has another suggestion I would be happy to try something else but at this point there appears to be no way to accomplish this.

Storage vendors (such as NetApp) provide the ability to have writable snapshots.
This gives you the ability to create a snapshot within seconds on production, do your tests, and drop/recreate the snapshot.
It's a long-term solution, but... it works.

On Server1, a job exists that compresses the latest full backup
On Server2, there's a job that performs the following steps:
Copies the compressed file to a local drive
Decompresses the file to make the full backup available
Kills all sessions to the database that is about to be restored
Restores the database
Sets the recovery model to Simple
Grants db_owner privileges to the developers
Ref: http://weblogs.sqlteam.com/tarad/archive/2009/02/25/How-to-refresh-a-SQL-Server-database-automatically.aspx
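A hedged T-SQL sketch of the restore portion of those job steps (the database name, path, and developer group below are invented for illustration; the linked post has the complete job):
-- Kill all sessions to the database that is about to be restored
ALTER DATABASE DevDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- Restore the decompressed full backup over the existing database
RESTORE DATABASE DevDB
    FROM DISK = N'D:\Restore\DevDB_Full.bak'
    WITH REPLACE, RECOVERY;
-- Set the recovery model to Simple and reopen the database
ALTER DATABASE DevDB SET RECOVERY SIMPLE;
ALTER DATABASE DevDB SET MULTI_USER;
-- Grant db_owner privileges to the developers
USE DevDB;
EXEC sp_addrolemember N'db_owner', N'DOMAIN\Developers';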

Related

How to update my local SQL Server database with the latest backup?

I'm using SQL Server 2012 in a local environment. In fact, it is running on my Windows 7 machine. My problem is as follows: I receive a daily backup of my SQL database. Right now, I'm just restoring the whole database on a daily basis by deleting the existing one. This restore task takes quite some time to complete. My understanding of the restore process is that it overwrites the previous database with the new backup.
Is there a way for SQL Server 2012 to just modify the existing database with any changes that have occurred in the new backup? I mean, something like comparing the previous database with the updated one and making the necessary changes where needed.
Yes, instead of a full backup you will need a differential backup. Restore it to move to a "point in time" state of the original database.
Do some basic research on full/differential and log backups (too much info for a short answer).
I don't believe so. You can do things with database replication, but that's probably not appropriate.
If you did have something to just pull out changes it might not be faster than a restore anyway. Are you a C# or similar dev? If so, I'd be tempted to write a service which monitored the location of the backup and started the restore programmatically when it arrives; might save some time.
If your question is "Can I merge changes from an external DB to my current DB without having to restore the whole DB?" then the answer is "Yes, but not easily." You can set up log shipping, but that's fairly complicated to do automatically. It's also fairly complicated to do manually, but for different reasons: there's no "Microsoft" way to do it. You have to figure out manual log shipping largely on your own.
You could consider copying the tables manually via a Linked Server. If you're only updating a small number of tables this might work just fine, and you could save yourself some trouble. A linked server on your workstation, a few MERGE statements saved to a .sql file, and you could update the important tables in the DB as you need to.
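If you go the linked-server route, each statement could look roughly like this (the linked server, table, and column names are hypothetical):
-- Pull the latest rows for one table from the linked production server
MERGE dbo.Customers AS target
USING PRODSERVER.ProdDb.dbo.Customers AS source
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name, target.Email = source.Email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, Email)
    VALUES (source.CustomerID, source.Name, source.Email)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;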
You can avoid having to run the full backup on the remote server by using differential backups, but it's not particularly pleasant.
My assumption is that currently you're getting a full backup created with the COPY_ONLY option. This allows you to create an out-of-band backup copy that doesn't interfere with existing backups.
To do what you actually want, you'd have to do this: on the server you set up backup to do a full backup on day 1, and then do differential backups on days 2-X. Now, on your local system, you retain the full backup of the DB you created on day 1, plus the differential backups taken since. You restore the day 1 full DB and then restore the most recent differential on top of it (differentials are cumulative, so only the latest one is needed).
However, differential backups require the backup chain to be intact. You can't use COPY_ONLY with a differential backup. That means if you're also using backup to actually backup the database, you're going to either use these same backups for your data backups, or you'll need to have your data backups using COPY_ONLY, both of which seem philosophically wrong. Your dev database copies shouldn't be driving your prod backup procedures.
So, you can do it, but:
You still have to do a full DB restore.
You have considerably more work to do to restore the DB.
You might break your existing backup procedures of the original DB.
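Under those constraints, the local restore sequence would look something like this (file names and paths are placeholders):
-- Day 1: restore the full backup, but leave the database in the restoring state
RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb_Full.bak'
    WITH REPLACE, NORECOVERY;
-- Day N: apply the latest differential and bring the database online
RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb_Diff_DayN.bak'
    WITH RECOVERY;
-- To apply a later differential you have to start over from the full backup WITH NORECOVERY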

Database Filegroups - Only restore 1 filegroup on new server

Is there a way to back up certain tables in a SQL database? I know I can move certain tables into different filegroups and perform a backup of these filegroups. The only issue with this is I believe you need a backup of all the filegroups and transaction logs to restore the database on a different server.
The reason why I need to restore the backup on a different server is that these are backups of customers' databases. For example, we may have a remote customer and need to get a copy of their 4 GB database. 90% of this space is taken up by two tables; we don't need these tables as they only store images. Currently we have to take a copy of the database and upload it to an FTP site… With larger databases this can take a lot of time, and we need to reduce the database size.
The other way I can think of doing this would be to take a full backup of the DB and restore it on the client's SQL server, then connect to the new temp DB and drop the two tables. Once this is done we could take a backup of the DB. The only issue with this solution is that it could use a lot of system resources while running, so it's less than ideal.
So my idea was to use two filegroups. The primary filegroup would host all of the tables except the two tables which would be in the second filegroup. Then when we need a copy of the database we just take a backup of the primary filegroup.
I have done some testing but have been unable to get it working. Any suggestions? Thanks
Basically your approach using 2 filegroups seems reasonable.
I suppose you're working with SQL Server on both ends, but you should clarify whether that is truly the case for each, as well as which editions (Enterprise, Standard, Express, etc.) and which releases (2000, 2005, 2008, 2012?).
Table backup in SQL Server is a dead horse that still gets a good whippin' now and again. Basically, that's not a feature in the built-in backup feature set. As you rightly point out, the partial backup feature can be used as a workaround. Also, if you just want to transfer a snapshot from a subset of tables to another server via FTP, you might try working with the bcp utility, as suggested by one of the answers in the above-linked post, or the export/import data wizards. To round out the list of table backup solutions and workarounds for SQL Server, there is this (and possibly other) third-party software that claims to allow individual recovery of table objects but unfortunately doesn't seem to offer individual object backup: "Object Level Recovery Native" by Red Gate. (I have no affiliation or experience with this particular tool.)
As per your more specific concern about restore from partial database backups:
I believe you need a backup of all the filegroups and transaction logs
to restore the database on a different server
1) You might have some difficulties your first time trying to get it to work, but you can perform restores from partial backups as far back as SQL Server 2000 (as a reference, see here).
2) From 2005 onward you have the option of restoring only part of the database today and, if you need to, restoring the remainder later. You don't need to include all filegroups -- you always include the primary filegroup, and if your database is in the simple recovery model you also need to add all read-write filegroups.
3) You need to apply log backups only if your db is in the bulk-logged or full recovery model and you are restoring changes to a read-only filegroup that has become read-write since the last restore. Since you are expecting changes to these tables, you will likely not be concerned about read-only filegroups, and so not concerned about shipping and applying log backups.
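As a rough illustration of points 1-3 for a full-recovery-model database (the logical file names and paths are assumed, and the two image tables are presumed to live in a secondary filegroup):
-- On the customer's server: back up the primary filegroup and the log
BACKUP DATABASE CustomerDb FILEGROUP = 'PRIMARY'
    TO DISK = N'C:\Backups\CustomerDb_primary.bak';
BACKUP LOG CustomerDb
    TO DISK = N'C:\Backups\CustomerDb_log.trn';
-- On your server: piecemeal restore of just the primary filegroup
RESTORE DATABASE CustomerDb FILEGROUP = 'PRIMARY'
    FROM DISK = N'C:\Backups\CustomerDb_primary.bak'
    WITH PARTIAL, NORECOVERY,
         MOVE N'CustomerDb'     TO N'D:\Data\CustomerDb.mdf',
         MOVE N'CustomerDb_log' TO N'D:\Data\CustomerDb_log.ldf';
RESTORE LOG CustomerDb
    FROM DISK = N'C:\Backups\CustomerDb_log.trn'
    WITH RECOVERY;
-- The image filegroup stays offline (recovery pending) and its tables are unavailable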
You might also investigate whether any of the other SQL Server features, such as merge replication or those mentioned above (bcp, the import/export wizards), might provide a solution that is simpler or more adequately meets your needs.

How to rollback changes in SQL Server

By mistake I have updated data on the production database. Is there any way to roll back those transactions?
I executed the update statement from Management Studio and the script did not include BEGIN TRAN / ROLLBACK / COMMIT.
Thanks
Here is what I would do in this case:
Restore the backup into a separate database and compare the two databases to recover the rows that exist in the backup.
If your database is in full recovery mode, try reading the transaction log to recover the remaining rows.
In order to read the transaction log you can use a third-party tool such as ApexSQL Log, or try to do this yourself through the fn_dblog function (here is an example, but it's quite complex).
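A starting point for the fn_dblog route (the filter below simply narrows the output to row modifications; mapping the log records back to the affected rows is the hard part):
-- List row modifications recorded in the active transaction log
SELECT [Current LSN], Operation, [Transaction ID], AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_MODIFY_ROW', 'LOP_MODIFY_COLUMNS');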
Here are other posts on this topic:
Read the log file (*.LDF) in SQL Server 2008
How can I rollback an UPDATE query in SQL server 2005?
Without a transaction (or indeed even with a committed transaction), there is no easy way to revert the changes made.
Transactions are mostly useful to ensure that a series of changes to the database is performed as a single unit, i.e. so that either all of these changes get performed [in the order prescribed] or none of them get performed at all (or, more precisely, the database server rolls back whatever changes were already made should a problem occur before all changes complete normally).
Depending on the recovery model associated with your database, the SQL log file may be of help in one of two ways:
If you have a backup and the log file was started right after this backup, the log file may help "roll forward" the database to the point that preceded the unfortunate changes mentioned in the question (aka a point-in-time restore).
If no such backup is available, the log file may be suitable for reversing the unfortunate changes.
Both of these approaches imply that the SQL log was indeed maintained, as some of the recovery models are such that the log file gets truncated (its data lost) after each successful batch/transaction. And neither of these approaches is easy; the latter in particular probably requires third-party software (or a lengthy procedure), etc.
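For the first approach, a point-in-time restore looks roughly like this (the database name, paths, and stop time are placeholders, and an unbroken log backup chain is assumed):
-- Restore the last full backup without recovering
RESTORE DATABASE ProdDb
    FROM DISK = N'E:\Backups\ProdDb_Full.bak'
    WITH NORECOVERY;
-- Roll the log forward to just before the bad update
RESTORE LOG ProdDb
    FROM DISK = N'E:\Backups\ProdDb_Log.trn'
    WITH STOPAT = '2013-05-01 10:14:00', RECOVERY;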
Depending on how your backups are set up, you may be able to do a point in time restore. Talk to your DBA. You may also want to take the DB offline ASAP to prevent more changes that would eventually be lost when you do the restore.

Microsoft SQL Server 2008 External Backup

I would like to save my backups from my SQL 2008 server to another server location.
We have 2 servers:
Deployment server
File Server
The problem is that the deployment server doesn't have much space, and we keep 10 days of backups of our databases. Therefore we need to store our backups on an external "file server". The problem is that SQL Server doesn't permit this.
I've tried to run the SQL 2008 service with an account that has admin rights on both PCs (a domain account), but this still doesn't work.
Any thoughts on this one?
Otherwise we'll have to put an external hard disk on a rack server, and that's kinda silly, no?
EDIT:
I've found a way to make it work.
You have to share the folder on the file server, then grant the deployment server (the computer account itself) write permissions on it. This makes external backups possible with SQL Server.
Don't know if it's safe though; I find it kinda strange to give a computer rights on a folder.
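With the share and the machine-account permission in place, the backup itself just targets a UNC path (the server, share, and database names here are examples):
-- Back up straight to the file server over the network share
BACKUP DATABASE MyDb
    TO DISK = N'\\FileServer\SqlBackups\MyDb_Full.bak'
    WITH INIT, CHECKSUM;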
You can use 3rd party tools like SqlBackupAndFTP
There are several ways of doing this already described, but this one is based on my open source project, SQL Server Compressed Backup. It is a command line tool for backing up SQL Server databases, and it can write to anywhere the NT User running it can write to. Just schedule it in Scheduled Tasks running with a user with appropriate permissions.
An example of backing up to a share on another server would be:
msbp.exe backup "db(database=model)" "gzip" "local(path=\\server\share\path\model.full.bak.gz)"
All the BACKUP command options that make sense for backing up to files are available: log, differential, copy_only, checksum, and more (tape drive options are not available for instance).
You might use a scheduler and a batch file to move the backups a certain amount of time after the backup has started.
If I remember correctly there's a hack to enable SQL Server to back up to remote storage, but I don't think a hack is the way to go.
Surely the best option may be to use an external backup tool which supports the use of agents. They control when the backup starts and take care of moving the files around.
Sascha
You could create a nice and tidy little SQL Server Integration Services (SSIS) package to both carry out the backup and then move the data to your alternative file store.
Interestingly enough, the maintenance plans within SQL Server actually use SSIS components. These same components are available to use within the Business Intelligence Design Studio (BIDS).
I hope this is clear but let me know if you require further assistance.
Cheers, John
My experience is with older versions of MSSQL, so there may be things in SQL 2008 which help you better.
We are very tight on disk space on some of our old servers. These are machines at our ISP and their restore-from-tape lead time is not good - 24 hours is not uncommon :( so we need to keep a decent online backup history.
We take full backup on Sunday, differential backup each other night, and TLog backups every 10 minutes.
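In T-SQL terms, that cadence boils down to three scheduled statements along these lines (the names and paths are illustrative; a real job would generate timestamped file names):
-- Sunday night: full backup
BACKUP DATABASE MyDb TO DISK = N'F:\Backups\MyDb_Full_Sun.bak';
-- Every other night: differential backup
BACKUP DATABASE MyDb TO DISK = N'F:\Backups\MyDb_Diff_Mon.bak' WITH DIFFERENTIAL;
-- Every 10 minutes: transaction log backup
BACKUP LOG MyDb TO DISK = N'F:\Backups\MyDb_Log_0010.trn';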
We force TLog backups every minute during table/index defrag and statistics update - this is because these are the most TLog-intensive tasks that we run, and they were previously responsible for determining the size of the standing TLog file; since this change we've been able to run the TLog standing size about 60% smaller than before.
Worth watching the size of Diff backups though - if it turns out that by the Wednesday backup your DIFF backup is getting close to the size of the Full backup you might as well run a Full backup twice a week and have smaller Diff backups for the next day or two.
When you delete your Full or Diff backup files (to recover the disk space), make sure you delete the TLog backups that are earlier. But consider your recovery path - would you like to be able to restore last Sunday's Full backup and all TLogs since, or are you happy that for moment-in-time restore you can only go back as far as last night's DIFF backup? (i.e. to go back further you can only restore to a FULL / DIFF backup, and not to a point in time.) If the latter, you can delete all earlier TLog backups as soon as you have made the DIFF backup.
(Obviously, regardless of that, you need to get the backups onto tape etc. and to your copy-server; you just don't have to be dependent on tape recovery to make your restore, but you are losing granularity of restore with time.)
We may recover the last Full (or the Last Full plus Monday's DIFF) to a "temporary database" to check out something, and then drop that DB, but if we really REALLY had to restore to "last Monday 15-minutes-past-10 AM" we would live with having to get backups back from tape or off the copy-server.
All our backup files are in an NTFS directory with Compress set. (You can probably make compressed backups directly from SQL 2008?? - the servers we have which are disk-starved are running SQL 2000.) I have read that this is frowned upon, but never seen good reasoning why, and we've never had a problem with it - touch wood!
Some third-party backup software allows setting specific user permissions for network locations, so it doesn't matter what permissions the SQL Server service account has. I would recommend you try EMS SQL Backup, which also supports backup compression on the fly to save storage space.

Backup/Restore database for oracle 10g testing using sqlplus or rman

Using Oracle 10g with our testing server, what is the most efficient/easy way to back up and restore a database to a static point, assuming that you always want to go back to the given point once a backup has been created?
A sample use case would be the following
install and configure all software
Modify data to the base testing point
take a backup somehow (this is part of the question, how to do this)
do testing
return to step 3 state (restore back to backup point, this is the other half of the question)
Optimally this would be completed through sqlplus or rman or some other scriptable method.
You do not need to take a backup at your base time. Just enable flashback database, create a guaranteed restore point, run your tests and flashback to the previously created restore point.
The steps for this would be:
Startup the instance in mount mode.
startup force mount;
Create the restore point.
create restore point before_test guarantee flashback database;
Open the database.
alter database open;
Run your tests.
Shutdown and mount the instance.
shutdown immediate;
startup mount;
Flashback to the restore point.
flashback database to restore point before_test;
Open the database.
alter database open;
You could use a feature in Oracle called Flashback which allows you to create a restore point, which you can easily jump back to after you've done testing.
Quoted from the site,
Flashback Database is like a 'rewind button' for your database. It provides database point in time recovery without requiring a backup of the database to first be restored. When you eliminate the time it takes to restore a database backup from tape, database point in time recovery is fast.
From my experience import/export is probably the way to go. Export creates a logical snapshot of your DB so you won't find it useful for big DBs or exacting performance requirements. However it works great for making snapshots and whatnot to use on a number of machines.
I used it on a rails project to get a prod snapshot that we could swap between developers for integration testing and we did the job within rake scripts. We wrote a small sqlplus script that destroyed the DB then imported the dump file over the top.
Some articles you may want to check:
OraFAQ Cheatsheet
Oracle Wiki
Oracle apparently doesn't like imp/exp any more in favour of Data Pump; when we used Data Pump we needed things we couldn't have (i.e. SYSDBA privileges we couldn't get in a shared environment). So take a look, but don't be disheartened if Data Pump is not your bag - the old imp/exp tools are still there :)
I can't recommend RMAN for this kind of thing because RMAN takes a lot of setup and will need config in the DB (it also has its own catalog DB for backups, which is a pain in the proverbial for a bare-metal restore).
If you are using a filesystem that supports copy-on-write snapshots, you could set up the database to the state that you want. Then shut down everything and take a filesystem snapshot. Then go about your testing and when you're ready to start over you could roll back the snapshot. This might be simpler than other options, assuming you have a filesystem which supports snapshots.
@Michael Ridley's solution is perfectly scriptable and will work with any version of Oracle.
This is exactly what I do, I have a script which runs weekly to
Rollback the file system
Apply production archive logs
Take new "Pre-Data-Masking" FS snapshot
Reset logs
Apply "preproduction" data masking.
Take new "Post-Data-Masking" snapshot (allows rollback to post masked data)
Open database
This allows us to keep our development databases close to our production database.
To do this I use ZFS.
This method can also be used for your applications, or even your entire "environment" (e.g. you could "roll back" your entire environment with a single scripted command).
If you are running 10g, though, the first thing you'd probably want to look into is Flashback, as it's built into the database.