Consequences of a full DB Backup on LSN - SQL Server

What are the consequences of a full DB Backup on the LSN?
Will it break log shipping that is already configured and running once a full DB backup completes (i.e. if it changes the LSN)?
Also, what is the best way to provide DB resilience:
Just have log shipping configured?
Have log shipping configured (15 min interval) with weekly full backups?
Have an incremental backup every 15 minutes and then a weekly full backup?
Are there any great tools that would simplify the above process?

Full backups have no effect on the log chain except for the first full backup taken after database creation or after a change to the FULL recovery model. It will not break log shipping. What could break log shipping is if you took a manual log backup and did not apply that to your secondary database.
The best recovery strategy for your database depends completely upon your situation. How much data loss is acceptable? How long of an outage can you reasonably allow for a restore to occur? It's not possible for someone on an online forum to provide an answer that you should rely on for your recovery strategy. You need to research backup and recovery very carefully, understand it inside and out, then apply the strategy that is best for your particular situation. There are many tools out there for backup from companies such as Red-Gate, Idera, Dell, etc.
That being said, just having log shipping is absolutely not good enough.
Also, @Anup: COPY_ONLY only prevents the differential bitmap from being reset. It does not impact log backups.
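For reference, an out-of-band full backup that must not disturb anything can be taken WITH COPY_ONLY. A minimal sketch; the database name and path are just examples:

    -- Ad-hoc full backup that leaves the differential base untouched and
    -- does not interfere with the log backups that log shipping relies on
    BACKUP DATABASE MyShippedDb
    TO DISK = N'E:\Backups\MyShippedDb_adhoc.bak'
    WITH COPY_ONLY, INIT;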

LiteSpeed works great for your full backups; I would run a full backup daily, NOT weekly. Log shipping (with or without LiteSpeed) every 15 minutes is very typical. If you wait too long, the data on your secondary gets too far out of sync with your primary and you end up with errors based on what SQL Server has.
I have always loved Red Gate products, but I have not used their log shipping / backup tools, only their data compare, schema compare, etc.
I believe Quest makes LiteSpeed, and I think Quest is owned by Dell.
Currently we are stuck having to monitor the log shipping, and it is not foolproof. I would prefer mirroring or replication.

Might a transaction log backup cause operations to deadlock, etc.?

In SQL Server, I take full and transaction log backups (full: once a day; transaction log: hourly during working hours). As far as I can see, there are some advantages of transaction log backups over differential backups. Regarding these issues, could you clarify the following points for me?
1. When taking a transaction log backup hourly while employees continue their operations on the data, might there be problems like deadlocks or corruption of the data? I use a job script in SQL Server Management Studio to take the backup, but I have no idea how SQL Server treats records that are currently being edited.
2. Generally speaking, what do you suggest for backups in addition to the full backup: transaction log or differential?
No :)
Backups using the backup command do not require locks on any user tables.
Transaction log backups are usually more frequent than hourly; would your company really be okay with losing an hour's worth of data if something bad happened to your database disks?
Your schedule needs to depend on what your requirements are for your RPO (recovery point objective) and RTO (recovery time objective). If you can only sustain 5 minutes' worth of lost data, then a 5 minute transaction log backup is required. If you can only cope with 1 hour of downtime, then you need to make sure you have data backups that can be restored and recovered in that amount of time. The first part will depend on how optimized your restore is (i.e. how long it takes to read the backups from your backup drives and write the data files back to your data drives - https://www.mssqltips.com/sqlservertip/4935/optimize-sql-server-database-restore-performance/ has some ideas). The second part will depend on how much transaction log data needs to be read and applied back to the database to recover it to the desired point.
You might find that you simply can't do full database backups fast enough; in those cases differential backups could work, as there's less data to write, but SQL Server will then have to put it back together at restore time.
Of course, if the restore is happening manually then you also need to account for human time in there!
It's a good idea to try out your backup and recovery process (before PROD!), this way you can tell if you're going to need to optimize the process further.
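As a rough illustration of that kind of schedule, here is a minimal T-SQL sketch; the database name, file paths and frequencies are just examples, and you would normally schedule each statement as its own SQL Server Agent job:

    -- Daily (or nightly) full backup
    BACKUP DATABASE MyDb
    TO DISK = N'E:\Backups\MyDb_full.bak'
    WITH INIT, CHECKSUM;

    -- Transaction log backup, run as often as your RPO requires
    -- (e.g. every 5-15 minutes); needs the FULL recovery model
    BACKUP LOG MyDb
    TO DISK = N'E:\Backups\MyDb_log.trn'
    WITH CHECKSUM;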

Delete records from SQL Server backup file

It sounds like an insane idea to delete records from a backup, since the whole point of a backup is to protect against disaster. But in our case, data deletion is a valid use-case.
Requirement: in brief, we are in need of a system which is capable of deleting a specific record from an active database instance and from all its backups.
We have a fully functional internal system which is capable of performing the mentioned requirement of deleting data from the active database. But what we don't know is how to do the same against all these database backups.
Question:
Is it possible to find a specific record from a backup?
Is there any predefined schema or data allocation style within SQL Server backup file, which allow us to isolate a specific record?
Can you share any thoughts or experience you have on such style of deletion?
Note: we take 2 full backups daily and keep a week's worth (14 in total) at any point in time.
I do understand the business concept of "deleted everywhere".
I do not know of any way to do this. I do not believe the format of the backup is even published. That doesn't mean that someone hasn't hacked it, but it certainly isn't a broadly known capability.
I think that, in order to do this, you will need to securely wipe all copies of backups and take new backups. You then lose the point in time recovery capability.
Solution: The way that I would address this business requirement is to recover each backup, delete the desired record(s), secure wipe the backup media (or destroy the old media and use new media), and then take a new backup of THAT recovered version. That will give you a point in time recovery of that data without the specific record(s).
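A minimal T-SQL sketch of that restore / delete / re-backup cycle for a single backup file; the database, table, column and path names here are placeholders, and the logical file names in the MOVE clauses must match what RESTORE FILELISTONLY reports for your backup:

    -- Restore one existing backup to a scratch database
    RESTORE DATABASE ScrubWork
    FROM DISK = N'E:\Backups\MyDb_full_20240101.bak'
    WITH MOVE 'MyDb_Data' TO N'D:\Scrub\ScrubWork.mdf',
         MOVE 'MyDb_Log'  TO N'D:\Scrub\ScrubWork.ldf',
         REPLACE, RECOVERY;

    -- Remove the record that must disappear everywhere
    DELETE FROM ScrubWork.dbo.Customers
    WHERE CustomerId = 12345;

    -- Take a fresh backup of the scrubbed copy, then securely wipe
    -- or destroy the original backup media
    BACKUP DATABASE ScrubWork
    TO DISK = N'E:\Backups\MyDb_full_20240101_scrubbed.bak'
    WITH INIT;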
You can't modify the contents of a .bak file. You shouldn't want to do that either. If you want to restore to a specific point in time you should use the Full recovery model and take differential and log backups instead of just full backups.
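If you do go that route, a point-in-time restore under the FULL recovery model looks roughly like this (names, paths and the STOPAT time are illustrative):

    -- Restore the latest full backup without recovering
    RESTORE DATABASE MyDb
    FROM DISK = N'E:\Backups\MyDb_full.bak'
    WITH NORECOVERY, REPLACE;

    -- Optionally apply the most recent differential
    RESTORE DATABASE MyDb
    FROM DISK = N'E:\Backups\MyDb_diff.bak'
    WITH NORECOVERY;

    -- Roll the log forward and stop just before the unwanted change
    RESTORE LOG MyDb
    FROM DISK = N'E:\Backups\MyDb_log.trn'
    WITH STOPAT = '2024-01-15T10:30:00', RECOVERY;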

How to do a quick differential backup and restore in SQL Server

I'm using SpecFlow with Selenium for doing UI testing on my ASP.NET MVC websites. I want to be able to restore the database (SQL Server 2012) to its pre-test condition after I have finished my suite of tests, and I want to do it as quickly as possible. I can do a full backup and restore with replace (or with STOPAT), but that takes well over a minute, when the differential backup itself took only a few seconds. I want to basically set a restore point and then revert to it as quickly as possible, dropping any changes made since the backup. It seems to me that this should be able to be done very quickly, without needing to overwrite the whole database. Is this possible, and if so, how?
Not with a differential backup. A differential backup is an image of all of the data pages that have changed since the last full backup. In order to restore a differential backup, you must first restore its base (i.e. full) backup and then the differential.
What you're asking for is some process that keeps track of the changes since the backup. That's where a database snapshot will shine. It's a copy-on-write technology which is to say that when you make a data modification, it writes the pre-change state of the data page to the snapshot file before writing the change itself. So reverting is quick as it need only pull back those changed pages from the snapshot. Check out the "creating a database snapshot" example in the CREATE DATABASE documentation.
Keep in mind that this isn't really a good guard against failure (which is one of the reasons to take a backup). But for your described use case, it sounds like a good fit.
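A minimal sketch of the snapshot approach, assuming a database called MyAppDb whose logical data file name is MyAppDb_Data (check sys.master_files for the real names); note that reverting requires exclusive access and that no other snapshots of the database exist:

    -- Create the snapshot before the test run
    CREATE DATABASE MyAppDb_PreTest
    ON ( NAME = MyAppDb_Data, FILENAME = N'D:\Snapshots\MyAppDb_PreTest.ss' )
    AS SNAPSHOT OF MyAppDb;

    -- ... run the SpecFlow/Selenium suite against MyAppDb ...

    -- Revert: only the pages changed since the snapshot are copied back
    USE master;
    RESTORE DATABASE MyAppDb
    FROM DATABASE_SNAPSHOT = 'MyAppDb_PreTest';

    -- Clean up
    DROP DATABASE MyAppDb_PreTest;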

sql2005 DB size is huge - how to truncate logs in production system?

I have a production DB that I need to periodically truncate logs in.
How can I get this done on a system that can have no downtime and is a stand-alone SQL Server?
I seem to remember there was a SQL command I can run... so I was thinking to set it up as a step in the backup job, so that after a backup is cut I would truncate the SQL logs.
You should not need to truncate logs.
If the logs are growing, then you probably have FULL recovery and no log backups. If those are in place, then you probably have a long-running open transaction or similar, but check the backups first.
If you have log backups, then do them more frequently. IMHO daily is pointless; we run every 15 minutes... or are you mixing up full and log backups?
If the recovery model is SIMPLE and the logs are growing, then either the log needs to be that size (e.g. to allow for a major index rebuild) or, again, you probably have a long-running open transaction.
See MSDN and Paul Randal's blog.
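A quick way to see what is going on before changing anything is to check the recovery model and the log reuse wait; the database name and path below are just examples:

    -- Why is the log not being reused?
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyProdDb';

    -- If recovery is FULL, a frequent log backup job is what keeps the
    -- log in check - this is the step to schedule, not a truncate
    BACKUP LOG MyProdDb
    TO DISK = N'E:\Backups\MyProdDb_log.trn';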

sql server 2005 mirrored database transaction log file maintenance

Ok so for standard, non-mirrored databases, the transaction log is kept in check either simply by having the database in simple mode or by doing regular backups. We keep ours in simple as we have SAN snapshot backups taking place and there is no need for SQL backups.
We're now going to mirroring. I obviously no longer have the choice of simple mode and must use full. This obviously leads to large log files and the need for log backups. That's fine, I can deal with that: a maintenance plan that takes a log backup and discards any previous ones. I realise that this backup is essentially useless without its predecessors, but the SAN snapshots are doing the backups.
My question is...
a) Is there a way to truncate the log file of all processed rows without creating a backup (as I can't use them anyway)?
b) A maintenance plan is local to a server and is not replicated across a mirrored pair. How should it be done on a mirrored setup, such that when the database fails over, the plan starts running on the new principal but doesn't get upset when it's on the mirror?
Thanks
A. If your server is important enough to mirror it, why isn't it important enough to take transaction log backups? SAN snapshots are point-in-time images of just one point in time, but they don't give you the ability to stop at different points of time along the way. When your developers truncate a table, you want to replay all of the logs right up until that statement, and stop there. That's what transaction log backups are good for.
B. Set up a maintenance plan (or even better, T-SQL scripts like Ola Hallengren's at http://ola.hallengren.com) to back up all of the databases, but check the boxes to only back up the online ones. (Off the top of my head, not sure if that's an option in 2005 - might be 2008 only.) That way, you'll always get whatever ones happen to fail over.
Of course, keep in mind that you need to be careful with things like cleanup scripts and copying those backup files. If you have half of your t-log backups on one share and half on the other, it's tougher to restore.
a) No, you cannot truncate a log that is part of a mirrored database; backing the logs up is your best option. I have several databases that are set up with mirroring simply based on the HA needs, but DR is not required for various reasons. That seems to be your situation? I would really still recommend keeping the log backups for a period of time. No reason to kill a perfectly good recovery plan that is aided by your HA strategy. :)
b) My own solution for this is to have a secondary agent job that monitors the status of the mirror. If the mirror is found to change, the secondary job on the mirror instance is enabled and, if possible, the job on the old principal is disabled. If the principal was down and it comes back up, the job stays disabled. The only way the jobs themselves would be switched back is in the event of, again, another forced failover.
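A sketch of what that watchdog step can look like, assuming a database called MyMirroredDb and an Agent job named 'MyMirroredDb - log backup' (both names are placeholders); run it on a short schedule on both instances:

    -- Enable the log-backup job only where this instance is the principal
    DECLARE @role nvarchar(60);

    SELECT @role = mirroring_role_desc
    FROM sys.database_mirroring
    WHERE database_id = DB_ID(N'MyMirroredDb');

    IF @role = N'PRINCIPAL'
        EXEC msdb.dbo.sp_update_job @job_name = N'MyMirroredDb - log backup', @enabled = 1;
    ELSE
        EXEC msdb.dbo.sp_update_job @job_name = N'MyMirroredDb - log backup', @enabled = 0;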