Is there a way to query a backup database without restoring it? - sql

I have a SQL Server 2005 instance running and a client of mine deleted some data that they would like to get back. It is four records. Is there a way to query the backups to see if the data exists without restoring the database?
They only just noticed the data was missing, and it could have been deleted three months ago or yesterday, so the backups may already have been overwritten and the data may not exist at all. I am just trying to cover my bases and see if I can find the data before telling them they should not have clicked OK the second time, when I asked them if they were sure they wanted to delete that record.

RedGate sells Virtual Restore, which can
Rapidly mount live, fully functional databases direct from backups
You could sign up for a trial and check your current backups.
P.S. I haven't used Virtual Restore, but the other RedGate products I used were of good quality.

No, there is no built-in way to do that. But if your backup media are files (for example, with different names in a folder), you can write a loop script that restores each one and then runs a query for the missing records.
The script just needs to do RESTORE DATABASE ... FROM ... for each file. I have a similar script, but it is not accessible to me right now.
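Not having that script to hand, here is a minimal sketch of the idea, assuming the backups are full backups with known file names. The database name ScratchDb, the logical file names MyDb and MyDb_log (check the real ones with RESTORE FILELISTONLY), and the table/key values are all placeholders you would swap for your own:

DECLARE @files TABLE (id INT IDENTITY(1,1), fname NVARCHAR(260));
INSERT INTO @files (fname) VALUES (N'D:\Backups\MyDb_week1.bak');
INSERT INTO @files (fname) VALUES (N'D:\Backups\MyDb_week2.bak');
INSERT INTO @files (fname) VALUES (N'D:\Backups\MyDb_week3.bak');

DECLARE @i INT, @f NVARCHAR(260), @sql NVARCHAR(MAX);
SET @i = 1;
WHILE @i <= (SELECT MAX(id) FROM @files)
BEGIN
    SELECT @f = fname FROM @files WHERE id = @i;

    -- Restore each backup under a scratch name, relocating the files so the
    -- live database is never touched.
    SET @sql = N'RESTORE DATABASE ScratchDb FROM DISK = N''' + @f + N'''
                 WITH MOVE N''MyDb'' TO N''D:\Temp\ScratchDb.mdf'',
                      MOVE N''MyDb_log'' TO N''D:\Temp\ScratchDb_log.ldf'',
                      REPLACE';
    EXEC (@sql);

    -- Check whether the four deleted rows exist in this backup.
    SET @sql = N'IF EXISTS (SELECT 1 FROM ScratchDb.dbo.SomeTable
                            WHERE SomeKey IN (1, 2, 3, 4))
                     PRINT N''Found the missing records in ' + @f + N'''';
    EXEC (@sql);

    SET @i = @i + 1;
END;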

Related

How to restore an existing backup from an existing database to a new one without affecting the original?

I've searched extensively and could not find a good answer to this question. I've looked into several articles on restoring databases and a few on rollbacks too, but still no success.
My situation is: I have a very large database in which I executed a wrong update query against a single column of a single table, and I have a full backup of this database from yesterday (which is more than enough to correct the problem). But other tables in the same DB have been updated in the meantime, and I need them to keep their current values.
So after all that reading, my plan was: restore the full backup to a new location, then get the values of the column I need and put those into the current database.
My problem is: I'm not able to restore this full backup without affecting the production DB. When I try to restore it, Management Studio says the .mdf file can't be overwritten (which is good, because I'll keep using that database), and then I saw some articles telling me to use the MOVE option. But if I use it, won't the .mdf files of the original/production database be relocated, thus affecting it?
I also saw a few articles telling me to roll it back if I have transaction log backups. I wasn't able to tell whether I have those, or even what they are, even after googling.
Any thoughts on how I should proceed ?
Sorry if it is a newbie question; I'm not originally a programmer, but I have been doing this for work and I really need it done fast! Any help would be greatly appreciated.
I'm using SQL Server 2005 Standard with SQL Server Management Studio 2008.
Restore the backup under a different name, e.g. DB_Temp, to any location (as sketched below).
Copy the table from the running DB using SELECT INTO.
Import records from the newly restored DB_Temp table into the running DB.
Delete the DB_Temp database.
Check the changes between the recently copied table and the original table.
Update records accordingly.
Thanks
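A rough sketch of the restore step, assuming the backup's logical file names are Prod and Prod_log (check them with RESTORE FILELISTONLY first); the paths, database names, and table/column names are placeholders:

RESTORE FILELISTONLY FROM DISK = N'D:\Backups\Prod_Full.bak';

-- Restore a copy under a new name; MOVE only redirects the files of this new
-- copy, so the production .mdf/.ldf are left untouched.
RESTORE DATABASE DB_Temp
FROM DISK = N'D:\Backups\Prod_Full.bak'
WITH MOVE N'Prod'     TO N'D:\Data\DB_Temp.mdf',
     MOVE N'Prod_log' TO N'D:\Data\DB_Temp_log.ldf';

-- Pull yesterday's values for the damaged column back into production.
UPDATE p
SET    p.SomeColumn = t.SomeColumn
FROM   Prod.dbo.SomeTable AS p
JOIN   DB_Temp.dbo.SomeTable AS t ON t.Id = p.Id;

The MOVE clauses answer the worry in the question: they apply only to the database being created, not to the production files.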

How to update my local SQL Server database with the latest backup?

I'm using SQL Server 2012 in a local environment. In fact, it is running on my Windows 7 machine. My problem is as follows: I receive a daily backup of my SQL database. Right now, I'm just restoring the whole database on a daily basis by deleting the existing one. This restore task takes quite some time to complete. My understanding of the restore process is that it overwrites the previous database with the new backup.
Is there a way for SQL Server 2012 to just modify the existing database with any changes that have occurred in the new backup? I mean, something like comparing the previous database with the updated one and making the necessary changes where needed.
Yes, instead of a full backup you will need a differential backup. Restore it to move the database to a "point in time" state of the original database.
Do some basic research on full, differential, and log backups (too much info for a short answer).
I don't believe so. You can do things with database replication, but that's probably not appropriate.
If you did have something to just pull out the changes, it might not be faster than a restore anyway. Are you a C# or similar dev? If so, I'd be tempted to write a service which monitors the location of the backup and starts the restore programmatically when it arrives; might save some time.
If your question is "Can I merge changes from an external DB to my current DB without having to restore the whole DB?" then the answer is "Yes, but not easily." You can set up log shipping, but that's fairly complicated to do automatically. It's also fairly complicated to do manually, but for different reasons: there's no "Microsoft" way to do it. You have to figure out manual log shipping largely on your own.
You could consider copying the tables manually via a linked server. If you're only updating a small number of tables this might work just fine, and you could save yourself some trouble. A linked server on your workstation, a few MERGE statements saved to a .sql file, and you could update the important tables in the DB as you need to.
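For example, a hedged sketch of one such MERGE, with the linked server name RemoteServer and the table/columns purely illustrative:

MERGE INTO LocalDb.dbo.Customers AS target
USING RemoteServer.RemoteDb.dbo.Customers AS source
      ON target.CustomerId = source.CustomerId
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name,
               target.Email = source.Email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Name, Email)
    VALUES (source.CustomerId, source.Name, source.Email)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;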
You can avoid having to run the full backup on the remote server by using differential backups, but it's not particularly pleasant.
My assumption is that currently you're getting a full backup created with the COPY_ONLY option. This allows you to create an out-of-band backup copy that doesn't interfere with existing backups.
To do what you actually want, you'd have to do this: on the server, set up a full backup on day 1, and then differential backups on days 2 through X. On your local system, you retain the full backup of the DB created on day 1, plus all differential backups taken since. You restore the day 1 full backup WITH NORECOVERY, and then restore the most recent differential on top of it.
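In restore terms, that sequence looks roughly like this (the paths and the DevCopy name are placeholders):

-- Day 1: restore the full backup but leave it able to accept further restores.
RESTORE DATABASE DevCopy
FROM DISK = N'C:\Backups\Prod_full_day1.bak'
WITH NORECOVERY, REPLACE;

-- Later: apply the most recent differential and bring the database online.
RESTORE DATABASE DevCopy
FROM DISK = N'C:\Backups\Prod_diff_latest.bak'
WITH RECOVERY;

Note that once the database is recovered you can't apply a newer differential to it; you have to start again from the day 1 full WITH NORECOVERY.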
However, differential backups require the backup chain to be intact. You can't use COPY_ONLY with a differential backup. That means if you're also using backup to actually backup the database, you're going to either use these same backups for your data backups, or you'll need to have your data backups using COPY_ONLY, both of which seem philosophically wrong. Your dev database copies shouldn't be driving your prod backup procedures.
So, you can do it, but:
You still have to do a full DB restore.
You have considerably more work to do to restore the DB.
You might break your existing backup procedures of the original DB.

roll back SQL query executed by mistake

I think the question says it all,
the following update query was executed - by mistake - in SQL Server Management Studio:
update kms_students set student_campus='4' where student_campus='KL'
The affected rows number more than 1,000, and I can't identify them since that table already had student_campus='4' for many previous rows.
Is it possible to roll back?
I believe ApexSQL should do the trick.
ApexSQL works by analyzing the physical transaction log, which basically has all the necessary info to restore specific transactions and data, but MS doesn't provide an out-of-the-box tool to manage it, other than restoring a backup and then manually restoring the transaction log up to a particular point in time using RESTORE LOG.
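If you do have a full backup plus log backups (i.e. the database is in the FULL recovery model), that manual route is a point-in-time restore to a copy, then copying the old values back. A sketch, with names, paths, and the STOPAT time as placeholders:

-- 1. Restore the last full backup taken before the mistake, without recovering.
RESTORE DATABASE kms_copy
FROM DISK = N'C:\Backups\kms_full.bak'
WITH MOVE N'kms'     TO N'C:\Data\kms_copy.mdf',
     MOVE N'kms_log' TO N'C:\Data\kms_copy_log.ldf',
     NORECOVERY;

-- 2. Roll the log forward to just before the bad UPDATE ran.
RESTORE LOG kms_copy
FROM DISK = N'C:\Backups\kms_log.trn'
WITH STOPAT = '2013-05-21 14:59:00', RECOVERY;

-- 3. Copy the pre-update student_campus values back into the live table.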
Backups. Most hosting companies keep one; try calling everyone ASAP.
Your own backups. Even if they're old they will be helpful.
Keep lots of backups and NEVER try out queries on a production environment. NEVER. (Bet you learned that, right?)
To make it a bit easier, you can try putting the backup DB online and running some PHP/Python/whatever to compare each record against the backup and change the current database from '4' back to 'KL' where needed.
May not be perfect, but can help you avoid a few days of work.

How to refresh a test instance of SQL server with production data without using full backups

I have two MS SQL 2005 servers, one for production and one for test and both have a Recovery Model of Full. I restore a backup of the production database to the test server and then have users make changes.
I want to be able to:
Roll back all the changes made to the test SQL server
Apply all the transactions that have occurred on the production SQL server since the test server was originally restored so that the two servers have the same data
I do not want to do a full database restore from a backup file, as this takes far too long with our 200+ GB database, especially when all the changed data is less than 1 GB.
EDIT
Based on the suggestions below I have tried restoring a database WITH NORECOVERY, but you cannot create a snapshot of a database in that state.
I have also tried restoring it to standby/read-only mode, which works: I can take a snapshot of the database and still apply transaction logs to the original DB, but I cannot make the database writable again as long as there are snapshots against it.
Running:
restore database TestDB with recovery
Results in the following error:
Msg 5094, Level 16, State 2, Line 1 The operation cannot be performed on a database with database snapshots or active DBCC replicas
First off, once you've restored the backup and set the database to "recovered", that's it -- you will never be able to apply another transaction log backup to it.
However, there are database snapshots. I've never used them, but I believe you could use them for this purpose. I think you need to restore the database, leave it in "not restored" mode -- definitely not standby -- and then generate snapshots based on that. (Or was that mirroring? I read about this stuff years ago, but never had reason to use it.)
Then when you want to update the database, you drop the snapshot, restore the "next" set of transaction log backups, and create a fresh snapshot.
However, I don't think this would work very well. Above and beyond the management and maintenance overhead of doing this, if the testers/developers do a lot of modifications, your database snapshot could get very big, approaching the size of the original database -- and that's hard drive space used in addition to the "original" database. For infrequently modified databases this could work, but for large OLTP systems, I have serious doubts.
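For reference, the snapshot mechanics being described look roughly like this (the path and logical data file name are assumptions, and a snapshot file is needed for every data file of the source):

-- Create a snapshot of the test database.
CREATE DATABASE TestDB_Snap
ON ( NAME = TestDB_Data, FILENAME = N'D:\Snapshots\TestDB_Snap.ss' )
AS SNAPSHOT OF TestDB;

-- ... testers make their changes to TestDB ...

-- Revert TestDB to the snapshot state (it must be the only snapshot at that
-- point), then drop and recreate the snapshot.
RESTORE DATABASE TestDB FROM DATABASE_SNAPSHOT = 'TestDB_Snap';
DROP DATABASE TestDB_Snap;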
So what you really want is a copy of Production to be made in Test. First, you must have a current backup of production somewhere. Usually on a database this size, full backups are made Sunday nights and then differential backups are made each night during the week.
Take the Sunday backup copy and restore it as a different database name on your server, say TestRestore. You should be able to kick this off at 5:00 pm and it should take about 10 hours. If it takes a lot longer see Optimizing Backup and Restore Performance in SQL Server.
When you get in in the morning, restore the last differential backup from the previous night; this shouldn't take long at all.
Then kick the users off the Test database and rename Test to TestOld (someone will need something), then rename your TestRestore database to be the Test database. See How to rename a SQL Server Database.
The long-range solution is to do log shipping from Production to TestRestore. Then at a moment's notice you can rename things and have a fresh Test database.
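The rename/swap step itself is short; a sketch using the database names from this answer:

-- Kick everyone off, swap the names, and reopen the old database.
ALTER DATABASE Test SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE Test MODIFY NAME = TestOld;
ALTER DATABASE TestRestore MODIFY NAME = Test;
ALTER DATABASE TestOld SET MULTI_USER;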
For the rollback, the easiest way is probably using a virtual machine and not saving changes when you close it.
For copying changes across from the production to the test, could you restore the differential backups or transaction log backups from production to the test db?
After having tried all of the suggestions offered here I have not found any means of accomplishing what I outlined in the question through SQL. If someone can find a way and post it or has another suggestion I would be happy to try something else but at this point there appears to be no way to accomplish this.
Storage vendors (such as NetApp) provide the ability to have writable snapshots.
That gives you the ability to create a snapshot within seconds on production, do your tests, and drop/recreate the snapshot.
It's a long-term solution, but... it works.
On Server1, a job exists that compresses the latest full backup
On Server2, there's a job that performs the following steps:
Copies the compressed file to a local drive
Decompresses the file to make the full backup available
Kills all sessions to the database that is about to be restored
Restores the database
Sets the recovery model to Simple
Grants db_owner privileges to the developers
Ref: http://weblogs.sqlteam.com/tarad/archive/2009/02/25/How-to-refresh-a-SQL-Server-database-automatically.aspx
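The core of that restore job, sketched in T-SQL (paths, names, and the developer login are placeholders, and it assumes the login already has a user in the restored database):

-- Kill existing sessions so the restore can take exclusive access.
ALTER DATABASE DevDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Restore the copied full backup over the dev database.
RESTORE DATABASE DevDb
FROM DISK = N'E:\Restore\Prod_Full.bak'
WITH MOVE N'Prod'     TO N'E:\Data\DevDb.mdf',
     MOVE N'Prod_log' TO N'E:\Data\DevDb_log.ldf',
     REPLACE;

-- Switch to the Simple recovery model and make sure the database is open to everyone.
ALTER DATABASE DevDb SET RECOVERY SIMPLE;
ALTER DATABASE DevDb SET MULTI_USER;

-- Give the developers db_owner.
USE DevDb;
EXEC sp_addrolemember N'db_owner', N'DOMAIN\Developer';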

What's your process for dealing with database schema changes in a dev team?

Here's a more general question on how you handle database schema changes in a development team.
We are a team of developers and the databases used during development are running locally on everyone's box as we want to avoid the requirement to have web access all the time. So running a single central database instance somewhere is not a real option.
Whenever one of us decides that it is time to extend/change the db schema, we mail database files (MYI/MYD) or SQL files to execute around, or give others instructions on the phone what they need to do to get the changed code running on their local DBs. That's not the perfect approach for sure. The same problem arises when we need to adjust the DB schema on staging or production once a new release is ready.
I was wondering ... how do you guys handle this kind of stuff? For source code, we use SVN.
Really appreciate your input!
Thanks,
Michael
One approach we've used in the past is to script the entire DDL for the database, along with any test/setup data needed. Store that in SVN, then when there's a change, any developer can pull down the changes, drop the database, and rebuild it from the script files.
At the very least you should have the scripts of all the objects in the database (tables, stored procedures, etc) under source control.
I don't think mailing schema changes is a real option for a professional development team.
We had a system on one of my previous teams that was the best I've encountered for dealing with this situation.
The nightly build of the application included a build of a database (SQL Server). The database got built to the Test DB server. Each developer then had a DTS package (this was a while ago, and I'm sure they upgraded to SSIS packages) to pull down that nightly DB build to their local DB environment.
This kept the master copy in one location and put the onus on the developers to keep their local dev databases fresh.
At my work, we deal with pretty large databases that are time-consuming to generate, so for us, starting from scratch with a new DB isn't ideal. Like Harper, we have our DDL in SVN. Additionally, we store a version number in a database table. Every check-in that changes the DB must be accompanied by a script that:
Will upgrade the database schema and modify any existing data appropriately, and
Will update the version number in the database.
Further, we number the scripts and database versions such that a script we've written knows how to upgrade further along a branch or from an older branch to a newer one without any input from the developer (apart from the database name and the directory to the upgrade scripts).
Thus, if I've got a copy of a customer's 4GB DB that's from a year old version and I want to test how their data will work with the version we cut yesterday, I can just run our script and let it handle the upgrades rather than having to start from scratch and redo every INSERT, UPDATE and DELETE performed since the database was created.
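A tiny sketch of that pattern (the table, column names, and version numbers here are illustrative, not the exact scheme described above):

-- The database records which schema version it is at.
CREATE TABLE dbo.SchemaVersion (Version INT NOT NULL);
INSERT INTO dbo.SchemaVersion (Version) VALUES (41);

-- Each checked-in upgrade script only runs against the version it expects,
-- migrates the data, and bumps the version number.
IF (SELECT Version FROM dbo.SchemaVersion) = 41
BEGIN
    ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
    -- The data fix runs via dynamic SQL so the batch compiles before the new column exists.
    EXEC (N'UPDATE dbo.Orders SET ShippedDate = OrderDate WHERE Status = ''Shipped''');
    UPDATE dbo.SchemaVersion SET Version = 42;
END;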
We have a non-SQL description of the database schema. When the application starts, it compares the desired database schema with the actual database schema, and performs whatever ADD TABLE, ADD COLUMN, ADD INDEX, etc. statements it needs to do to get the database to look right.
This doesn't handle every case; sometimes you have to delete the database and recreate it if you've changed something that the schema resolver can't handle, but most of the time we don't need to worry about it.
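In SQL terms, the statements such a schema resolver ends up issuing look something like this (the names are illustrative):

-- Create a table only if the catalog says it is missing.
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'AuditLog')
    CREATE TABLE dbo.AuditLog (Id INT IDENTITY PRIMARY KEY, LoggedAt DATETIME NOT NULL);

-- Add a column only if it does not exist yet.
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'Customers' AND COLUMN_NAME = 'Phone')
    ALTER TABLE dbo.Customers ADD Phone NVARCHAR(30) NULL;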
I'd certainly keep the database schema in source code control.
At my present job, every time there's a schema change, we write the SQL for the change (alter table xyz add column ...) and put it in SVN. Then developers can update test databases by running this script. It's pretty clumsy but it works.
At a previous job I wrote some code that at application start-up would automatically compare the actual database schema to what it expected, and if it was not up to date perform the updates. Mostly this was done for deployment reasons: When we shipped new copies of the software, it would then automatically update the user's database. But it was also handy for developers.
I think there should be some generic SQL tool to do this. Maybe there is, but I've never seen one.