
Does a snapshot database have the same up-to-the-instant records as the source database in SQL Server?
This is a small production database.
We are considering having a snapshot database on the same server, purely for reporting purposes. I wonder whether the snapshot database will have instant records or a time lag in the records.
I have worked with replicated databases, which take about 5 or 10 minutes to receive fresh data records.

No, a database snapshot is purely a point-in-time view of your active database. Not only will it not be instant, it will never catch up. It is purely a point-in-time view of the data as it was.
In other words, the more time that elapses between the moment the snapshot is taken and the moment your query runs against the snapshot, the greater the difference will potentially be between the snapshot and the source database.
This is also evident in the way the snapshot is managed on disk. Snapshots maintain point-in-time views by referencing the original copies of the database pages. As modifications come in after the snapshot, a copy of each affected page is made to preserve the state of the snapshot. Hence, a snapshot on disk is very small at the time it is taken, but will continue to grow as time passes, since it keeps an exact version of the original state of the database at the moment the snapshot was taken.
As the documentation puts it: "A database snapshot is a read-only, static view of a SQL Server database (the source database)."
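For reference, creating a snapshot is a single DDL statement. A minimal sketch, assuming a hypothetical source database SalesDB with one data file whose logical name is SalesDB_Data (the snapshot file is a sparse file that grows as pages are copied):

```sql
-- Hypothetical names and paths; adjust to your environment.
CREATE DATABASE SalesDB_Snapshot_0800
ON ( NAME = SalesDB_Data,  -- logical name of the source data file
     FILENAME = 'D:\Snapshots\SalesDB_Snapshot_0800.ss' )  -- sparse file backing the snapshot
AS SNAPSHOT OF SalesDB;
```

Reporting queries would then target SalesDB_Snapshot_0800; to get fresher data you drop the snapshot and create a new one, since a snapshot never catches up.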

Related

Might transaction log backups cause deadlocks or other problems for ongoing operations?

In SQL Server, I take full and transaction log backups (full: once a day; transaction log: hourly during working hours). As far as I can see, there are some advantages of transaction log backups over differential backups. Regarding these issues, could you clarify the following points for me?
1. When taking a transaction log backup hourly while employees continue their operations on the data, might there be problems like deadlocks or corruption of the data? I use a job script in SQL Server Management Studio to take the backup, but have no idea how SQL Server treats records that are in the middle of being edited.
2. In general, what do you suggest for the backup strategy in addition to full backups: transaction log or differential backups?
No :)
Backups using the backup command do not require locks on any user tables.
Transaction log backups are usually more frequent than hourly. Would your company really be okay with losing an hour's worth of data if something bad happened to your database disks?
Your schedule needs to depend on your requirements for RPO (recovery point objective) and RTO (recovery time objective). If you can only sustain 5 minutes' worth of lost data, then a transaction log backup every 5 minutes is required. If you can only cope with 1 hour of downtime, then you need to make sure you have data backups that can be restored and recovered in that amount of time. The first part will depend on how optimized your restore is, i.e. how long it takes to read the backups from your backup drives and write the data files back to your data drives (https://www.mssqltips.com/sqlservertip/4935/optimize-sql-server-database-restore-performance/ has some ideas). The second part will depend on how much transaction log data needs to be read and applied back to the database to recover it to the desired point.
You might find that you simply can't do full database backups fast enough; in those cases differential backups could work, as there's less data to write, but SQL Server will then have to put it back together at restore time.
Of course, if the restore is happening manually then you also need to account for human time in there!
It's a good idea to try out your backup and recovery process (before PROD!); this way you can tell whether you're going to need to optimize the process further.
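To make the discussion concrete, here is a minimal sketch of a daily full backup plus a transaction log backup, using a hypothetical database name and paths; in practice the log backup would run as a scheduled Agent job at your RPO interval:

```sql
-- Hypothetical database name and paths; adjust to your environment.
BACKUP DATABASE SalesDB
TO DISK = 'E:\Backups\SalesDB_Full.bak'
WITH INIT, CHECKSUM, STATS = 10;

-- Log backups require the FULL (or BULK_LOGGED) recovery model.
BACKUP LOG SalesDB
TO DISK = 'E:\Backups\SalesDB_Log_0900.trn'
WITH CHECKSUM;
```

Neither statement takes locks on user tables, which is why the backup can run while employees keep working.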

SQL Server Snapshot on live DB

I am in need of some help. I want to reinitialize one of my subscribers with a new snapshot. The previous snapshot I generated was taken when activity was low on the production database. It took under 2 minutes.
My question is: can I generate a new snapshot during the day, when the applications are using the database live? Would it lock my tables badly, to the point where transactions won't make it through to the database?
Yes, you can create the initial snapshot during the day.
With transactional replication, the Snapshot Agent takes locks only during the initial phase of snapshot generation.
The performance impact of this operation depends on the current workload of your system. So, if the previous snapshot generation took 2 minutes, I think it will not take much more time.
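If you want to kick off the snapshot from T-SQL rather than from Replication Monitor, a minimal sketch (run in the publication database; the publication name is hypothetical):

```sql
-- 'OrdersPublication' is a hypothetical publication name.
EXEC sp_startpublication_snapshot @publication = N'OrdersPublication';
```

The reinitialized subscribers then pick up the new snapshot on their next synchronization.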

How to control a SQL database growing day by day

I have a SQL database with a main Orders table that gets 2-5 new rows per day.
The other table that receives daily records is the Log table. It gets new data every time a user accesses the login page of the web site, including the time and the IP address of the user. It gets 10-15 new rows per day for now.
As I monitor the daily backup of the database, I've realized that it is growing by about 2-3 MB per day. I have enough storage, but it makes me worried. Is it the Log table causing this growth? I deleted about 150 rows, but it didn't reduce the .bak file size; it increased! I haven't shrunk the database and I don't want to do it.
I'm not sure what to do about it. Is there any other decent way of logging user accesses?
I typically export the rows from the production server and import them into a database on a non-production server (like my local machine), then delete the exported rows from the production server. I also run an optimize (index rebuild) on the production table so the space is reclaimed. This is somewhat manual, but it keeps the production table size down, and the export/import process is rather quick.
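A minimal sketch of the archive-and-delete step, assuming hypothetical Log/LogArchive tables with an AccessTime column and a 90-day retention window:

```sql
-- Hypothetical table and column names; keep 90 days of login history online.
INSERT INTO dbo.LogArchive (UserName, IpAddress, AccessTime)
SELECT UserName, IpAddress, AccessTime
FROM dbo.Log
WHERE AccessTime < DATEADD(DAY, -90, GETDATE());

DELETE FROM dbo.Log
WHERE AccessTime < DATEADD(DAY, -90, GETDATE());
```

Note that deleting only 150 small rows frees a negligible amount of space, and the delete itself generates transaction log, which can make the next backup slightly larger rather than smaller.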

Does a simple-recovery database record transaction logs when selecting from a full-recovery database?

Does a simple-recovery database record transaction logs when the data is selected from a full-recovery database? I mean, we have a full-recovery database, and it records a lot of transaction log, causing its size to grow.
My question is: does the simple-recovery database still do its minimal logging even if the data is selected from a full-recovery-model database? Thank you!
One thing has nothing to do with the other. Where the data comes from does not affect logging of changes to the tables in the db it's going to.
However, as Martin Smith pointed out, this is solving a symptom: there's naff all point in having full recovery mode on if you (they??) aren't backing up the transaction logs frequently enough to make the overhead useful. The whole point of them, aside from restoring up to a particular transaction in the event of some catastrophe in your applications, is speed and granularity.
Please read the MSDN page for recovery models.
http://msdn.microsoft.com/en-us/library/ms189275.aspx
Here is a quick summary from MSDN.
1 - Simple model - Automatically reclaims log space to keep space requirements small, essentially eliminating the need to manage the transaction log space.
2 - Bulk-Logged model - An adjunct of the full recovery model that permits high-performance bulk copy operations.
**The first two do not support point-in-time recovery!**
3 - Full model - Can recover to an arbitrary point in time (for example, prior to application or user error). If no tail-log backup is possible, recover to the last log backup.
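For reference, a quick way to check and change the recovery model (the database name is hypothetical):

```sql
-- Check the current recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'SalesDB';

-- Switch models if appropriate (leaving FULL breaks the log backup chain).
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;
```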
So your problem is with either log usage or log backups.
A - Are you deleting from temporary tables instead of truncating?
http://msdn.microsoft.com/en-us/library/ms177570.aspx The DELETE operation will log each row in the transaction log, whereas TRUNCATE TABLE logs only the page deallocations.
B - Are you inserting large amounts of data via an ETL job? Each insert will get logged in the T-Log.
If you use bulk copy and an ETL tool that supports fast (minimally logged) data loads, it will be minimally logged.
However, page density and fill factor come into play when determining the size of the T-Log.
http://blogs.msdn.com/b/sqlserverfaq/archive/2011/01/07/using-bulk-logged-recovery-model-for-bulk-operations-will-reduce-the-size-of-transaction-log-backups-myths-and-truths.aspx
C - How often are you taking transaction log backups? After each backup, the T-Log space can be reused, resulting in a smaller overall size.
D - How fragmented is the T-LOG? I suggest reducing and re-growing the log during a maintenance period. A 20% log to data ratio with hourly backups worked fine at my old company. It all depends on how many changes you are making. http://craftydba.com/?p=3374
In summary, these are the places you should be looking, not the old data in the system, since it is probably not being modified.
Moving the old data to a read-only reporting database, so that ad hoc queries from novice T-SQL users stay off your OLTP system, might not be a bad idea. But that solves other problems: possible BLOCKING and DEADLOCKS in your OLTP database.
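To see which of these is biting you, a quick check (the database name is hypothetical):

```sql
-- How full is each transaction log right now?
DBCC SQLPERF(LOGSPACE);

-- Why can't the log space be reused? (e.g. LOG_BACKUP, ACTIVE_TRANSACTION)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'SalesDB';
```

A log_reuse_wait_desc of LOG_BACKUP is the classic sign of a full-recovery database whose log is never being backed up, which matches point C above.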

Creating tables in SQL Server 2005 master DB

I am adding a monitoring script to check the size of my DB files so I can deliver a weekly report which shows each files size and how much it grew over the last week. In order to get the growth, I was simply going to log a record into a table each week with each DB's size, then compare to the previous week's results. The only trick is where to keep that table. What are the trade-offs in using the master DB instead of just creating a new DB to hold these logs? (I'm assuming there will be other monitors we will add in the future)
The main reason is that master is not calibrated for additional load: it is not installed on an IO system with proper capacity planning, it is hard to move to a new IO location, its maintenance plan takes backups and log backups only as frequently as needed for a very low volume of activity, and its initial size and growth rate are planned as if no changes are expected. Another reason against it is that in many troubleshooting scenarios you would want a copy of the database to inspect, but you'd have to attach a copy of master to your instance. These are the main reasons why adding objects to master is discouraged. Also, many admins understandably prefer an application to use its own database so it can be properly accounted for, and ultimately easily uninstalled.
Similar problems exist for msdb, but if push comes to shove it would be better to store app data in msdb rather than master, since the former is an ordinary database (despite the widespread belief that it is a system database, it actually is not).
The Master DB is a system database that belongs to SQL Server. It should not be used for any other purposes. Create your own DB to hold your logs.
I would refrain from putting anything in master; it could be overwritten/recreated on an upgrade.
I have put a DBA-only ServerInfo database on each server for uses like this, as well as any application-specific environmental things (things that differ between prod, test, and dev).
You should add a separate database for the logging. It is not guaranteed that objects you leave in the master database won't break on the next patch of SQL Server.
And Microsoft itself advises you not to do it:
http://msdn.microsoft.com/en-us/library/ms187837.aspx
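Putting the advice together, a minimal sketch of the weekly size-logging table in a dedicated DBA database (all names hypothetical), fed from sys.master_files:

```sql
-- Hypothetical DBA database and table; never create these in master.
USE DbaMonitor;
GO

CREATE TABLE dbo.FileSizeHistory (
    CapturedAt   datetime      NOT NULL DEFAULT GETDATE(),
    DatabaseName sysname       NOT NULL,
    LogicalName  sysname       NOT NULL,
    SizeMB       decimal(18,2) NOT NULL
);
GO

-- Run weekly from an Agent job; sys.master_files reports size in 8 KB pages.
INSERT INTO dbo.FileSizeHistory (DatabaseName, LogicalName, SizeMB)
SELECT DB_NAME(database_id), name, size * 8.0 / 1024
FROM sys.master_files;
```

Week-over-week growth is then a simple self-join on DatabaseName and LogicalName, comparing consecutive CapturedAt values.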