Setup:
Two MSSQL 2014 servers, one publisher/distributor and the other subscriber. Both are currently set to full recovery.
The database size is relatively small (100 GB), but we generate about 30 GB of logs every day. My goal is to have a setup where I can perform backups without downtime.
A full backup of this database takes about an hour, during which it is effectively unavailable and impacts our user base.
The replicated database is taking a log backup every hour and a full backup every day.
Questions
Is it possible and/or advised to have a Simple recovery model on the primary database (in order to prevent the need to do log and full backups), while maintaining a Full recovery model on the replicated database?
Related
I have a SQL Server 2012 database which is currently used as both a transactional database and a reporting database. The application reads/writes into the same database and the reports are also generated against the same database.
Due to some performance issue, I have decided to maintain the two copies of the database. One will be a transactional database which will be accessed by the application. The other database will be the exact copy of the transactional database and it will only be used by the reporting service.
Following are the requirements:
The reporting database should be synced with the transactional database every hour. That is, the reporting database can have stale data for a maximum of 1 hour.
It must be a read-only database.
The main intention is NOT recovery or availability.
I am not sure which strategy (transaction log shipping, mirroring or replication) is best suited to my case. Also, if I do the sync operation more frequently (say every 10 minutes), will there be any impact on the transactional database or the reporting service?
Thanks
I strongly recommend using a standby database in read-only state. Every 15 minutes a scheduled SQL Server Agent job would: a) generate a new .trn log backup from the main database, and b) restore it into the standby one (your reports database). The only issue is that with this technique your sessions will be disconnected while the agent restores the .trn file. But if you can stop the restore job, run your reports and then reactivate it, there is no problem. It seems to be exactly what you need. Or if your reports are fast to run, they probably will not be disconnected... If I'm not wrong, the restore job can also be configured to wait for open sessions to finish or to close them. I can check that last point for you tomorrow if you don't find it.
Since it runs in the same SQL Server instance, you don't have to worry about extra licensing...
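A minimal T-SQL sketch of the two job steps described above (database names, share paths and the undo file are placeholders; the standby copy must first be initialized from a full backup restored WITH STANDBY or NORECOVERY):
-- Step (a), on the main database: take a transaction log backup
BACKUP LOG MainDB TO DISK = N'\\share\logs\MainDB_latest.trn';
-- Step (b), on the standby copy: restore it, leaving the database read-only (STANDBY)
RESTORE LOG ReportsDB FROM DISK = N'\\share\logs\MainDB_latest.trn'
WITH STANDBY = N'C:\SQLBackups\ReportsDB_undo.dat';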
I have an online database which is updated daily from various sources.
I need to have a local database containing some tables from the server database. At particular intervals of time, it has to check for any changes or new rows in the server tables and update the local database. How can I achieve this?
You may want to look into SQL Server Replication.
Replication will manage the data synchronization between the two copies of your database. You can configure replication for any tables in the database, including all tables. Replication will take care of checking for updates, adds and deletes from the Server Database and transfer the changes to the local database.
You can set up replication to update the local database in near real time, or you can schedule periodic updates.
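As a rough illustration only (object, publication and server names are placeholders, and a real setup also requires configuring the distributor and the snapshot/log reader agents), a transactional publication and subscription can be created with the replication stored procedures:
-- On the publisher: enable the database for publishing
EXEC sp_replicationdboption @dbname = N'ServerDB', @optname = N'publish', @value = N'true';
-- Create a transactional publication and add the tables to replicate
EXEC sp_addpublication @publication = N'LocalCopyPub', @status = N'active';
EXEC sp_addarticle @publication = N'LocalCopyPub', @article = N'Customers', @source_object = N'Customers';
-- Create a push subscription pointing at the local server/database
EXEC sp_addsubscription @publication = N'LocalCopyPub', @subscriber = N'LOCALSERVER',
    @destination_db = N'LocalDB', @subscription_type = N'push';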
Replication is a high-maintenance solution. It's designed to maintain two copies of the same database with significant reliability. This makes replication a good solution when you must avoid data problems or recover from problems with little to no data loss.
If you don't require the high-maintenance solution, then SQL Server Integration Services (SSIS) may be a good alternative. With SSIS, you develop the data transfer and data management solution yourself: along with handling data problems, you design the solution to identify adds, deletes and updates.
One user has said to me:
"Applying incremental db backups is tedious, and a royal pain if you miss a step. I wouldn't bother with the approach on SQL Server or MySQL - that's what transaction logs are for, so you don't need to incorporate it into your data model."
So if I have transactions on MySQL or SQL Server, I can have a script to back up any data modified after or between dates X and Y? I ask because I am currently designing tables so I can do an incremental dump instead of a full one.
Yes, you could back up the transaction logs rather than incorporate that logic into your data model, provided the database supports it. Your previous question said that you were developing on SQLite...
Speaking from a SQL Server background, it can use transaction logs for both restoration and replication of a database. An ideal setup would have three RAID arrays set up - a mirror for the OS, RAID 5 (or better) for the data, and RAID 5 (or better) for the transaction logs. The key part is the transaction logs being on their own RAID set for optimal performance (not competing with reads/writes of the data) and failover (because RAID is not a backup). For more info - see link.
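For instance, a minimal SQL Server log backup looks like this (database name and path are placeholders; the database must be using the full or bulk-logged recovery model):
-- Take a transaction log backup that can later be restored or shipped to another server
BACKUP LOG MyDB TO DISK = N'D:\LogBackups\MyDB_20240101_1200.trn';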
A search for MySQL transaction logs turns up info on the MySQL Binary Log, which also covers replication, so I figure there's a fair amount of carryover in approach.
On SQL Server: the key factor is the SLA recovery time. A full disaster recovery starts from the latest full backup, applies the latest differential backup, then applies all the log backups taken after the latest differential backup. If you're missing differential backups from your recovery plan, then you must start from the full backup and then apply all log backups.
The differential backup thus reduces the recovery time by eliminating the need to apply all log backups taken between the last full backup and the last differential backup.
If your database is small, differential backups don't add much advantage because the recovery time is small to start with. But on large databases they make a difference, as the log backups can be quite large and going through days of logs adds to the recovery time. Adding differential backups can cut back the recovery time by a few hours.
I'm not sure I follow your argument about designing tables with differential backups in mind; the two subjects are orthogonal.
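A sketch of that recovery sequence in T-SQL (file names are placeholders; every step except the last is restored WITH NORECOVERY so further backups can still be applied):
RESTORE DATABASE MyDB FROM DISK = N'D:\Backups\MyDB_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDB FROM DISK = N'D:\Backups\MyDB_diff.bak' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_log1.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_log2.trn' WITH NORECOVERY;
-- Bring the database online once the last backup has been applied
RESTORE DATABASE MyDB WITH RECOVERY;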
I have two MS SQL 2005 servers, one for production and one for test and both have a Recovery Model of Full. I restore a backup of the production database to the test server and then have users make changes.
I want to be able to:
Roll back all the changes made to the test SQL server
Apply all the transactions that have occurred on the production SQL server since the test server was originally restored so that the two servers have the same data
I do not want to do a full database restore from a backup file as this takes far too long with our 200+ GB database, especially when all the changed data is less than 1 GB.
EDIT
Based on the suggestions below I have tried restoring a database with NORECOVERY, but you cannot create a snapshot of a database that is in that state.
I have also tried restoring it to standby read-only mode, which works: I can take a snapshot of the database and still apply transaction logs to the original db, but I cannot make the database writable again as long as there are snapshots against it.
Running:
restore database TestDB with recovery
Results in the following error:
Msg 5094, Level 16, State 2, Line 1 The operation cannot be performed on a database with database snapshots or active DBCC replicas
First off, once you've restored the backup and set the database to "recovered", that's it -- you will never be able to apply another transaction log backup to it.
However, there are database snapshots. I've never used them, but I believe you could use them for this purpose. I think you need to restore the database, leave it in "not recovered" mode -- definitely not standby -- and then generate snapshots based on that. (Or was that mirroring? I read about this stuff years ago, but never had reason to use it.)
Then when you want to update the database, you drop the snapshot, restore the "next" set of transaction log backups, and create a fresh snapshot.
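As a rough sketch only (snapshot, logical file and path names are placeholders), that cycle might look like this in T-SQL; note that the question's edit above reports snapshot creation failing while the database is in the NORECOVERY state:
-- Drop the old snapshot so log restores can proceed
DROP DATABASE TestDB_Snapshot;
-- Apply the next transaction log backup, keeping the database in the restoring state
RESTORE LOG TestDB FROM DISK = N'D:\Backups\TestDB_log.trn' WITH NORECOVERY;
-- Create a fresh snapshot for the testers/developers to use
CREATE DATABASE TestDB_Snapshot
ON (NAME = N'TestDB_Data', FILENAME = N'D:\Snapshots\TestDB_Data.ss')
AS SNAPSHOT OF TestDB;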
However, I don't think this would work very well. Above and beyond the management and maintenance overhead of doing this, if the testers/developers do a lot of modifications, your database snapshot could get very big, even bigger than the original database -- and that's hard drive space used in addition to the "original" database. For infrequently modified databases this could work, but for large OLTP systems, I have serious doubts.
So what you really want is a copy of Production to be made in Test. First, you must have a current backup of production somewhere? Usually on a database this size full backups are made Sunday nights and then differential backups are made each night during the week.
Take the Sunday backup copy and restore it as a different database name on your server, say TestRestore. You should be able to kick this off at 5:00 pm and it should take about 10 hours. If it takes a lot longer see Optimizing Backup and Restore Performance in SQL Server.
When you get in in the morning, restore the last differential backup from the previous night; this shouldn't take long at all.
Then kick the users off the Test database and rename Test to TestOld (someone will need something), then rename your TestRestore database to be the Test database. See How to rename a SQL Server Database.
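A rough T-SQL sketch of that swap, assuming these database names (SINGLE_USER with ROLLBACK IMMEDIATE is one way to kick the remaining users off):
-- Disconnect remaining users from the current Test database
ALTER DATABASE Test SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- Keep the old copy around under a new name, then promote the freshly restored copy
ALTER DATABASE Test MODIFY NAME = TestOld;
ALTER DATABASE TestRestore MODIFY NAME = Test;
-- Reopen the old copy in case someone still needs something from it
ALTER DATABASE TestOld SET MULTI_USER;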
The long-range solution is to do log shipping from Production to TestRestore. Then at a moment's notice you can rename things and have a fresh Test database.
For the rollback, the easiest way is probably using a virtual machine and not saving changes when you close it.
For copying changes across from the production to the test, could you restore the differential backups or transaction log backups from production to the test db?
After having tried all of the suggestions offered here I have not found any means of accomplishing what I outlined in the question through SQL. If someone can find a way and post it or has another suggestion I would be happy to try something else but at this point there appears to be no way to accomplish this.
Storage vendors (such as NetApp) provide the ability to have writable snapshots.
It gives you the ability to create a snapshot within seconds on the production, do your tests, and drop/recreate the snapshot.
It's a long-term solution, but... it works.
On Server1, a job exists that compresses the latest full backup
On Server2, there's a job that performs the following steps:
Copies the compressed file to a local drive
Decompresses the file to make the full backup available
Kills all sessions to the database that is about to be restored
Restores the database
Sets the recovery model to Simple
Grants db_owner privileges to the developers
Ref: http://weblogs.sqlteam.com/tarad/archive/2009/02/25/How-to-refresh-a-SQL-Server-database-automatically.aspx
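A hedged T-SQL sketch of the restore steps above (database, file and login names are placeholders; the restore may also need WITH MOVE if the file layout differs between servers, and the developers' user is assumed to already exist in the database):
-- Kill all sessions to the database that is about to be restored
ALTER DATABASE DevDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- Restore the decompressed full backup over the existing database
RESTORE DATABASE DevDB FROM DISK = N'D:\Refresh\DevDB_full.bak' WITH REPLACE;
-- Set the recovery model to Simple so the log doesn't grow unchecked
ALTER DATABASE DevDB SET RECOVERY SIMPLE;
ALTER DATABASE DevDB SET MULTI_USER;
-- Grant db_owner privileges to the developers
USE DevDB;
EXEC sp_addrolemember N'db_owner', N'DOMAIN\Developers';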
I'm interested in hearing people's thoughts about the pros and cons of database mirroring vs. log shipping in this scenario: we need to set up a database backup situation wherein there is exactly one secondary server that need not automatically pick up when the primary fails. Recovering and starting with the secondary should not take too long, though.
Mirroring
Database mirroring is limited to only two servers.
Mirroring with a Witness Server allows for High Availability and automatic fail over.
You can configure your DSN string to have both mirrored servers in it so that when they switch you notice nothing.
While mirrored, your Mirrored Database cannot be accessed. It is in Synchronizing/Restoring mode.
Mirroring with SQL Server 2005 standard edition is not good for load balancing (see sentence above)
Log Shipping
You can log ship to multiple servers.
Log shipping is only as current as how often the job runs. If you ship logs every 15 minutes, the secondary server could be as much as 15 minutes behind, making it more of a warm standby.
You can leave the database in read only mode while it is being updated. Good for reporting servers.
Good for disaster recovery
For backup purposes I would recommend mirroring: it keeps an always up-to-date copy of your database with no hassle. If you don't need automatic failover you need just two servers/instances. Note that High Performance mode is only available in the Enterprise edition!
Switching to the secondary database does take longer with log shipping, but it's not too bad. You'll have to manually copy any uncopied backup files, apply the transaction log backups to the secondary database, recover the secondary database, and change its role to primary. If the old primary database is accessible, you should back up its transaction log before beginning. Failing over with mirroring is somewhat simpler, and can be done automatically if you are using High Availability mode. Even when using High Performance mode, it's still a one-statement operation.
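For illustration (database and path names are placeholders), the log shipping failover amounts to something like the first three statements below, while the mirroring failover is the single ALTER DATABASE at the end:
-- Log shipping: back up the tail of the log on the old primary, if it is still accessible
BACKUP LOG MyDB TO DISK = N'\\share\MyDB_tail.trn' WITH NORECOVERY;
-- On the secondary: apply the remaining log backups, then recover and start using it
RESTORE LOG MyDB FROM DISK = N'\\share\MyDB_tail.trn' WITH NORECOVERY;
RESTORE DATABASE MyDB WITH RECOVERY;
-- Mirroring: a manual failover in high-safety mode is a single statement run on the principal
-- (in High Performance mode the equivalent forced failover is SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS, run on the mirror)
ALTER DATABASE MyDB SET PARTNER FAILOVER;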