My requirement: I have a task to copy the MDF and LDF files of one of the development databases from the E drive on the server to a shared folder on the network. I have scheduled a job to detach the DB on the 1st of every month. Is there any way I can schedule a task or use an SSIS package to copy these MDF and LDF files to a shared folder on the network? I would rather use a SQL job or a Windows task than an SSIS package. Thank you.
I assume you will be using these MDF and LDF files to create the same database on some other server.
Instead of detaching the database and moving the MDF and LDF files, I would suggest taking a backup of the database and moving that backup file across the network.
Benefits:
Using compression, the backup file can be a lot smaller compared to the MDF and LDF files.
Instead of using SSIS, a SQL Agent job can take that backup for you on a network share.
To reduce the pressure on the network, you can split the backup across multiple files and move several relatively smaller files across the network.
For security you can also encrypt your backup.
You can also use SSIS to take a backup and move it across the network; all of the above applies to that backup too (security, network load, etc.).
Basically, moving a backup file is much more efficient and safer than using the detach and attach method to move databases across the network.
The attach and detach method is also fine, but only when you are staying on the same server.
The answer to your question is: yes, you can detach and attach using SSIS, but don't do it, for all the reasons mentioned above.
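For illustration, a minimal sketch of such a backup, which a SQL Agent job step could run on the first of every month (the share path \\fileserver\SQLBackups and the database name DevDB are placeholders for your own):

BACKUP DATABASE [DevDB]
TO  DISK = N'\\fileserver\SQLBackups\DevDB_1.bak',  -- split across two files
    DISK = N'\\fileserver\SQLBackups\DevDB_2.bak'   -- to ease the network load
WITH COMPRESSION,   -- much smaller than the raw MDF/LDF
     CHECKSUM,      -- verify page checksums while writing
     INIT,          -- overwrite any previous backup in these files
     STATS = 10;    -- progress message every 10 percent
GO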
Do a nightly backup, as both files (mdf and ldf) are kept together, and the backup is smaller than the actual database if you compress it (a 100 GB database will come out at roughly 4 GB when the backup is compressed).
Do not email it; use FTP or something of that sort.
In my years of administering databases I have not come across copying mdf and ldf files, apart from relocating to another server by detach and attach.
Current resource and requirement
I have an Azure VM running Windows Server 2012 R2. I want to upgrade its size from current 4 cores, 7 GB Memory to 8 cores, 56 GB Memory. Before upgrading, I need to move the database files from the Temporary Storage D drive to another disk drive (say E).
Problem
I accidentally set up my database files in the temporary storage drive without realizing that upgrading the size of the VM would result in loss of data within that drive. The database files are used by an application running in the VM.
Plan
My current approach is to shut down the application and the SQL services (SQL Server 2008 R2) and set them to Disabled. Then, I need to move the .MDF files from temporary storage D to another drive E. Then, I plan to change temporary storage D into temporary storage E, and set drive E as Local Disk D. The next step would be to upgrade the VM size. The application is from another vendor, and they have confirmed that if the application and the SQL services are disabled, moving the SQL MDF files should not affect operation of the application.
Question
So, I would like to know the best method to move database files from temporary storage D to another drive E.
I'm not sure I followed all of your move-temp-storage-to-E-to-D steps, but it doesn't matter. There are a few ways to accomplish what you want. One of the easiest:
Detach database
Move Files to New Location
Attach database
That's it. When you re-attach, the directory, drive, whatever can be completely different from what you started with, so whatever you do with temp E/D doesn't matter to SQL Server.
Note: if you try moving the database without first detaching it, you are in for a big headache!
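A minimal T-SQL sketch of those three steps (the database name MyDB and the paths are placeholders):

USE master;
GO
-- 1. Detach (make sure nothing is connected to the database first)
EXEC sp_detach_db @dbname = N'MyDB';
GO
-- 2. Copy/move MyDB.mdf and MyDB_log.ldf from D:\Data to E:\Data
--    using Explorer, robocopy, etc.
-- 3. Re-attach from the new location
CREATE DATABASE [MyDB]
ON (FILENAME = N'E:\Data\MyDB.mdf'),
   (FILENAME = N'E:\Data\MyDB_log.ldf')
FOR ATTACH;
GO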
In SQL Server Management Studio I can take database backups and save them as .bak files, which can also be restored in SQL Server Management Studio.
Is there any difference between doing this and setting up a script which backs up the .MDF and .LDF files - and if there was ever an issue, I could just re-attach the .MDF and .LDF files?
Thanks
It depends on your restore needs and backup space. It's certainly possible to just reattach MDF and LDF files, but there are some limitations:
You have to detach the database first, make the backup (both files at the same time) and then reattach. This means downtime.
You cannot make incremental backups.
You cannot make differential backups.
You cannot do point-in-time restoration.
You basically have to make a full backup each time you copy the MDF and LDF files, which can really eat up space (thus, it can be better to do incremental or differential backups).
SQL Server has built-in mechanisms that can run without invoking external scripts to do regular backups. MDF and LDF backups require external scripts that have permission to access the data directory, the backup location and the server to attach/detach the database.
Basically, unless you have a really good reason not to use the built-in backup functionality, I'd avoid doing manual backups of the MDF and LDF files.
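For comparison, the built-in functionality boils down to three short statements (names and paths are placeholders; log backups require the FULL recovery model, and in practice each backup file should get a unique name so you don't break the chain):

-- Weekly full backup
BACKUP DATABASE [MyDB] TO DISK = N'D:\Backups\MyDB_full.bak' WITH INIT;
-- Nightly differential (all changes since the last full backup)
BACKUP DATABASE [MyDB] TO DISK = N'D:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL, INIT;
-- Frequent log backups enable point-in-time restore
BACKUP LOG [MyDB] TO DISK = N'D:\Backups\MyDB_log.trn' WITH INIT;

Each of these runs online, with no detach and no downtime.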
Database backups are much more powerful than backing up the files.
First of all, if you back up the files while the database is in use, because it is changing constantly you may get something that works, but more likely you will get a corrupted file. A proper database backup coordinates ongoing changes with the backup process so that the backup file is perfectly consistent. A file backup may give you a file that has half of the changes in a transaction and not the other half, or, worse, half the changes in a particular page and not the other half.
Secondly, proper database backups let you recover the database to ANY point in time beginning at the oldest full backup, not just the point in time that the backup was made. (You will need the chain of all log backups made since the full backup to do this).
EDIT: note that as pointed out in the comments, the built-in functions don't necessarily provide point-in-time recovery--only if you use the type of backups that provide that functionality (though there are other reasons to use that type of backup even if you don't need point-in-time recovery).
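As a sketch of what such a point-in-time restore looks like (the database name, file names, and STOPAT time are placeholders; you need the unbroken chain of log backups since the full backup):

RESTORE DATABASE [MyDB] FROM DISK = N'D:\Backups\MyDB_full.bak'
WITH NORECOVERY, REPLACE;
RESTORE LOG [MyDB] FROM DISK = N'D:\Backups\MyDB_log1.trn' WITH NORECOVERY;
-- Stop partway through the last log backup, just before the bad change
RESTORE LOG [MyDB] FROM DISK = N'D:\Backups\MyDB_log2.trn'
WITH STOPAT = N'2013-05-01 10:15:00', RECOVERY;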
The files generated by a SQL Server backup contain only data, not free, unused space. The .mdf and .ndf files will contain (sometimes very large amounts of) empty, unused space, making these files larger than the backup files. And you have to detach the database from SQL Server to copy it out.
I'm using SQL Server 2012 in a local environment. In fact, it is running on my Windows 7 machine. My problem is as follows: I receive a daily backup of my SQL database. Right now, I'm just restoring the whole database on a daily basis by deleting the existing one. This restore task takes quite some time to complete. My understanding of the restore process is that it overwrites the previous database with the new backup.
Is there a way for SQL Server 2012 to just modify the existing database with any changes that have occurred in the new backup? I mean, something like comparing the previous database with the updated one and making the necessary changes where needed.
Yes; instead of a full backup you will need a differential backup. Restore it to move to a "point in time" state of the original database.
Do some basic research on full, differential, and log backups (too much info for a short answer).
I don't believe so. You can do things with database replication, but that's probably not appropriate.
If you did have something to just pull out changes, it might not be faster than a restore anyway. Are you a C# or similar dev? If so, I'd be tempted to write a service which monitors the location of the backup and starts the restore programmatically when it arrives; it might save some time.
If your question is "Can I merge changes from an external DB to my current DB without having to restore the whole DB?" then the answer is "Yes, but not easily." You can set up log shipping, but that's fairly complicated to do automatically. It's also fairly complicated to do manually, but for different reasons: there's no "Microsoft" way to do it. You have to figure out manual log shipping largely on your own.
You could consider copying the tables manually via a Linked Server. If you're only updating a small number of tables this might work just fine, and you could save yourself some trouble. A linked server on your workstation, a few MERGE statements saved to a .sql file, and you could update the important tables in the DB as you need to.
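For illustration, one of those MERGE statements might look like this (PRODSERVER, ProdDB, and the Customers table and columns are hypothetical names):

MERGE INTO dbo.Customers AS target
USING PRODSERVER.ProdDB.dbo.Customers AS source   -- remote table via the linked server
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name, target.Email = source.Email
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, Email)
    VALUES (source.CustomerID, source.Name, source.Email)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;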
You can avoid having to run the full backup on the remote server by using differential backups, but it's not particularly pleasant.
My assumption is that currently you're getting a full backup created with the COPY_ONLY option. This allows you to create an out-of-band backup copy that doesn't interfere with existing backups.
To do what you actually want, you'd have to do this: on the server you set up backups to do a full backup on day 1, and then do differential backups on days 2-X. Now, on your local system, you retain the full backup of the DB you created on day 1. You then have all differential backups since day 1. You restore the day 1 full DB, and then restore the most recent differential (each differential is cumulative, containing all changes since the full backup, so only the latest one is needed).
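The restore sequence itself is short (database and file names are placeholders):

RESTORE DATABASE [DevCopy] FROM DISK = N'C:\Restore\day1_full.bak'
WITH NORECOVERY, REPLACE;   -- leave the database restoring; more to apply
RESTORE DATABASE [DevCopy] FROM DISK = N'C:\Restore\latest_diff.bak'
WITH RECOVERY;              -- the latest differential brings it up to date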
However, differential backups require the backup chain to be intact. You can't use COPY_ONLY with a differential backup. That means if you're also using backup to actually backup the database, you're going to either use these same backups for your data backups, or you'll need to have your data backups using COPY_ONLY, both of which seem philosophically wrong. Your dev database copies shouldn't be driving your prod backup procedures.
So, you can do it, but:
You still have to do a full DB restore.
You have considerably more work to do to restore the DB.
You might break your existing backup procedures of the original DB.
We are not hosting our databases. Right now, one person is manually creating a .bak file from the production server. The .bak is then copied to each developer's PC. Is there a better approach that would make this process easier? I am working on a build project right now for our team, and I am thinking about adding the .bak file into SVN so each person has the correct local version. I tried to generate a SQL script, but it has no data, just the schema.
Developers can't share a single dev database?
Adding the .bak file to SVN sounds bad. That's going to keep every version of it forever - you'd be better off (in most cases) leaving it on a network share visible by all developers and letting them copy it down.
You might want to use SSIS packages to let developers make ad hoc copies of production.
You might also be interested in the Data Publishing Wizard, an open source project that lets you script databases with their data. But I'd lean towards SSIS if developers need their own copy of the database.
If the production server has online connectivity to your site you can try the method called "log shipping".
This entails creating a baseline copy of your production database, then taking chunks of the transaction log written on the production server and applying the actions contained in the log chunks to your copy. This ensures that, after a certain delay, your backup database will be in the same state as the production database.
Detailed information can be found here: http://msdn.microsoft.com/en-us/library/ms187103.aspx
As you mentioned SQL 2008 among the tags: as far as I remember, SQL 2008 has some built-in automation to set this up.
You can create a scheduled backup and restore.
You don't have to use the developer's PC for the backup; SQL Server has its own backup folder you can use.
Also, you can have the restore script generated for each PC from one location, if the developers want to hold the database on their local systems:
RESTORE DATABASE [xxxdb]
FROM DISK = N'\xxxx\xxx\xxx\xxxx.bak'
WITH FILE = 1,    -- which backup set within the file to restore
     NOUNLOAD,    -- tape option; has no effect for disk backups
     REPLACE,     -- overwrite the existing database
     STATS = 10   -- progress message every 10 percent
GO
Check out SQL Source Control from Red Gate; it can be used to keep schema and data in sync with a source control repository (the docs say it supports SVN). It supports the database on a centrally deployed server, or on many developer machines as well.
Scripting out the data probably won't be a fun time for everyone depending on how much data there is, but you can also select which tables you're going to do (like lookups) and populate any larger business entity tables using SSIS (or data generator for testing).
I would like to save my backups from my SQL 2008 server to another server location.
We have 2 servers:
Deployment server
File Server
The problem is that the deployment server doesn't have much space. And we keep 10 days backups of our databases. Therefore we need to store our backups on an external "file server". The problem is that SQL doesn't permit this.
I've tried to run the SQL 2008 service with an account that has admin rights on both machines (a domain account), but this still doesn't work.
Any thoughts on this one?
Otherwise we'll have to put an external hard disk on a rack server, and that's kinda silly, no?
EDIT:
I've found a way to make it work.
You have to share the folder on the server, then grant the deployment server (the computer account itself) write permissions. This will make external backups possible with SQL Server.
I don't know if it's safe, though; I find it kinda strange to give a computer rights on a folder.
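For what it's worth, once the share permissions are in place, the backup itself is just a normal BACKUP to a UNC path (the share path and database name are placeholders; the account that needs write access on the share is the SQL Server service account, or the machine account such as DOMAIN\SERVERNAME$ when the service runs as a built-in account):

BACKUP DATABASE [MyDB]
TO DISK = N'\\fileserver\SQLBackups\MyDB.bak'
WITH INIT, STATS = 10;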
You can use 3rd party tools like SqlBackupAndFTP
There are several ways of doing this already described, but this one is based on my open source project, SQL Server Compressed Backup. It is a command line tool for backing up SQL Server databases, and it can write to anywhere the NT User running it can write to. Just schedule it in Scheduled Tasks running with a user with appropriate permissions.
An example of backing up to a share on another server would be:
msbp.exe backup "db(database=model)" "gzip" "local(path=\\server\share\path\model.full.bak.gz)"
All the BACKUP command options that make sense for backing up to files are available: log, differential, copy_only, checksum, and more (tape drive options are not available for instance).
You might use a scheduler to move the backups with a batch file, a certain amount of time after the backup started.
If I remember correctly there's a hack to enable SQL Server to back up to remote storage, but I don't think a hack is the way to go.
Probably the best option is to use an external backup tool which supports the use of agents. They control when the backup starts and take care of moving the files around.
Sascha
You could create a nice and tidy little SQL Server Integration Services (SSIS) package to both carry out the backup and then move the data to your alternative file store.
Interestingly enough, the maintenance plans within SQL Server actually use SSIS components. These same components are available to use within Business Intelligence Development Studio (BIDS).
I hope this is clear but let me know if you require further assistance.
Cheers, John
My experience is with older versions of MSSQL, so there may be things in SQL 2008 which help you better.
We are very tight on disk space on some of our old servers. These are machines at our ISP and their restore-from-tape lead time is not good - 24 hours is not uncommon :( so we need to keep a decent online backup history.
We take full backup on Sunday, differential backup each other night, and TLog backups every 10 minutes.
We force TLog backups every minute during table/index defrags and statistics updates - this is because these are the most TLog-intensive tasks that we run, and they were previously responsible for determining the size of the standing TLog file; since this change we've been able to run the TLog standing size about 60% smaller than before.
Worth watching the size of Diff backups though - if it turns out that by the Wednesday backup your DIFF backup is getting close to the size of the Full backup you might as well run a Full backup twice a week and have smaller Diff backups for the next day or two.
When you delete your Full or Diff backup files (to recover the disk space), make sure you delete the TLog backups that are earlier. But consider your recovery path - would you like to be able to restore last Sunday's Full backup and all TLogs since, or are you happy that for moment-in-time restore you can only go back as far as last night's DIFF backup? (i.e. to go back further you can only restore to a FULL / DIFF backup, and not to a point in time) If the latter, you can delete all earlier TLog backups as soon as you have made the DIFF backup.
(Obviously, regardless of that, you need to get the backups onto tape etc. and to your copy-server; you just don't have to be dependent on tape recovery to make your restore, but you are losing granularity of restore with time.)
We may recover the last Full (or the Last Full plus Monday's DIFF) to a "temporary database" to check out something, and then drop that DB, but if we really REALLY had to restore to "last Monday 15-minutes-past-10 AM" we would live with having to get backups back from tape or off the copy-server.
All our backup files are in an NTFS directory with Compress set. (You can probably make compressed backups directly from SQL 2008?? - the servers we have which are disk-starved are running SQL 2000.) I have read that this is frowned upon, but I've never seen good reasoning why, and we've never had a problem with it - touch wood!
Some third-party backup software allows setting specific user permissions for network locations, so it doesn't matter what permissions the SQL Server service account has. I would recommend trying EMS SQL Backup, which also supports backup compression on the fly to save storage space.