SQL Server Snapshot on live DB

I am in need of some help. I want to reinitialize one of my subscribers with a new snapshot. The previous snapshot I generated was taken when activity on the production database was low, and it took under 2 minutes.
My question is: can I generate a new snapshot during the day while applications are using the database live? Would it lock my tables badly enough that transactions won't flow through to the database?

Yes, you can create the initial snapshot during the day.
With transactional replication, the Snapshot Agent takes locks only during the initial phase of snapshot generation.
The performance impact of this operation depends on the current workload of your system. So, if the previous snapshot generation took 2 minutes, I think it will not take much more time.
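If it helps, this is roughly how I would trigger it from T-SQL instead of Replication Monitor. The publication, subscriber and database names below are just placeholders for your own:

    -- Run at the Publisher, in the publication database (names are placeholders).
    USE SalesPublisherDB;
    GO

    -- Mark the subscription so it will be reinitialized from the next snapshot.
    EXEC sp_reinitsubscription
        @publication    = N'SalesPublication',
        @article        = N'all',
        @subscriber     = N'SUBSCRIBER01',
        @destination_db = N'SalesSubscriberDB';
    GO

    -- Start the Snapshot Agent for the publication to generate the new snapshot.
    EXEC sp_startpublication_snapshot
        @publication = N'SalesPublication';
    GO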

Related

Create a snapshot of a replicated SQL server

So my problem I hope someone can help me with is the following:
There is a Master SQL Server that is huge and updates pretty much 24/7...
So locking that database for a dump is not an option.
For that reason there is a second server configured to be the slave of the first instance, replicating every transaction from the binary log.
The problem in question is how to back up this second instance. Since it is a MariaDB server, it seems that snapshot replication is not possible, and even if we were to create a cron job for that, the tables would need to be locked for the duration of the snapshot creation (the DB is around 100 GB+).
We thought of two slave servers replicating from that one master, each creating a snapshot every other hour (one e.g. on the odd hours while the other does the same thing on the even hours).
Is there any way to get this to work?
Would this approach even work? I am able to delay the transactions from the master by x seconds, but how would I approach this?
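Something along these lines is what I am picturing for each slave during its hour (names are placeholders and I have not tested this):

    -- On the slave whose turn it is (run from cron; untested sketch).
    -- 1. Pause applying replicated transactions so the data stops changing.
    STOP SLAVE SQL_THREAD;

    -- 2. While the SQL thread is stopped, take the backup from the shell,
    --    e.g. with mariabackup or mysqldump --single-transaction.

    -- 3. Resume replication; the slave catches up from the relay log.
    START SLAVE SQL_THREAD;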
Thank you all in advance for your suggestions and ideas.
Best regards,
TheSaltyOne.

Might a transaction log backup cause operations to deadlock, etc.?

In SQL Server, I take full and transaction log backups (full: once a day, transaction log: hourly during working hours). As far as I can see, there are some advantages of transaction log backups over differential backups. Regarding these issues, could you clarify the following points for me?
1. When taking transaction log backups hourly while employees continue their operations on the data, might there be problems like deadlocks or corruption of the data? I use a job script in SQL Server Management Studio to take the backup, but have no idea how SQL Server treats records that are currently being edited.
2. Generally speaking, what do you suggest for backup selection in addition to the full backup: transaction log or differential backups?
No :)
Backups using the BACKUP command do not require locks on any user tables.
Transaction log backups are usually more frequent than hourly; would your company really be okay with losing an hour's worth of data if something bad happened to your database disks?
Your schedule needs to depend on your requirements for RPO (recovery point objective) and RTO (recovery time objective). If you can only sustain 5 minutes' worth of lost data, then a 5-minute transaction log backup schedule is required. If you can only cope with 1 hour of downtime, then you need to make sure you have data backups that can be restored and recovered in that amount of time. The first part will depend on how optimized your restore is (i.e. how long it takes to read the backups from your backup drives and write the data files back to your data drives - https://www.mssqltips.com/sqlservertip/4935/optimize-sql-server-database-restore-performance/ has some ideas). The second part will depend on how much transaction log data needs to be read and applied back to the database to recover it to the desired point.
You might find that you simply can't do full database backups fast enough; in those cases differential backups can work, since there's less data to write, but SQL Server will then have to put it back together at restore time.
Of course, if the restore is happening manually then you also need to account for human time in there!
It's a good idea to try out your backup and recovery process (before PROD!); this way you can tell if you're going to need to optimize the process further.
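For reference, a minimal sketch of the three backup types in T-SQL (the database name and paths are placeholders; adjust the WITH options to your environment):

    -- Full backup, e.g. once a day (database name and paths are placeholders).
    BACKUP DATABASE [YourDb]
        TO DISK = N'X:\Backups\YourDb_full.bak'
        WITH COMPRESSION, CHECKSUM;

    -- Differential backup: only pages changed since the last full backup.
    BACKUP DATABASE [YourDb]
        TO DISK = N'X:\Backups\YourDb_diff.bak'
        WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

    -- Transaction log backup: requires the FULL (or BULK_LOGGED) recovery model.
    -- Schedule this as often as your RPO demands (e.g. every 5-15 minutes).
    BACKUP LOG [YourDb]
        TO DISK = N'X:\Backups\YourDb_log.trn'
        WITH COMPRESSION, CHECKSUM;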

Does a snapshot database have up-to-the-instant records, the same as the source database, in SQL Server?

This is a small production database.
We are considering having a snapshot database on the same server purely for reporting purposes. I wonder whether the snapshot database will have instant records or a time lag in the records.
I have worked on replicated databases which take about 5 or 10 minutes to get fresh data records.
No, a database snapshot is purely a point in time view of your active database. Not only will it not be instant, it will not ever catch up. It is purely a point in time view of data as it was.
In other words, the more time that lapses between the time the snapshot is taken and the time your query runs against the snapshot, the greater the difference will potentially be between the snapshot's results and the same query run against the source database.
This is also evident in the way the snapshot is managed on disk. Snapshots maintain point in time views by referencing original copies of the database pages. As modifications come in post snapshot, a copy of the page is made to maintain the state of the snapshot. Hence, a snapshot on disk is very small at the time that it is taken, but will continue to grow larger and larger as time passes as it continues to keep an exact version of the original state of the database at the moment the snapshot was taken.
As quoted in the documentation: "A database snapshot is a read-only, static view of a SQL Server database (the source database)."
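To illustrate, creating and refreshing such a snapshot looks roughly like this (the database name, snapshot name, logical file name and path are placeholders):

    -- Create a point-in-time snapshot of the source database
    -- (names and file path are placeholders).
    CREATE DATABASE Sales_Snapshot_0800
    ON ( NAME = Sales_Data,   -- logical name of the source data file
         FILENAME = N'X:\Snapshots\Sales_Snapshot_0800.ss' )
    AS SNAPSHOT OF Sales;

    -- Reporting queries against the snapshot always see the data as it was
    -- at creation time, no matter what changes in Sales afterwards.
    SELECT COUNT(*) FROM Sales_Snapshot_0800.dbo.Orders;

    -- To get fresher data you drop the snapshot and create a new one.
    DROP DATABASE Sales_Snapshot_0800;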

Database creation/deletion times on Azure SQL DB Managed Instances take a long time

I have an Azure SQL Managed Instance and have noticed that database creation and deletion not only take a long time (3 minutes to create a new database and up to 5 minutes to then delete it), but that those times can also fluctuate a lot. What is the reason for this?
When a new database is created, its files are initialized on Azure Premium Storage (which is remote storage), including an initial backup so the database is ready for HA, and it also has to be registered with the Azure management service that controls database availability. This is not instant, but in most cases it should be under a minute. I frequently create databases and I don't remember any case when it took longer than 10-15 seconds.
For the delete operation you can expect up to a 5-minute delay, because we wait for the last log backup to be taken (log backups are taken every 5 minutes). In the future we might initiate a tail-log backup immediately when DROP DATABASE is executed, but in the current version Managed Instance waits for the last scheduled backup.
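If you want to see what the drop is waiting on, automated backup history should be visible in msdb just as on a regular SQL Server instance; a quick check of the last log backup for a database (the name is a placeholder) might look like this:

    -- Last automated log backup recorded for a given database
    -- (database name is a placeholder).
    SELECT TOP (1)
           database_name,
           backup_finish_date
    FROM   msdb.dbo.backupset
    WHERE  database_name = N'YourDb'
      AND  type = 'L'          -- 'L' = transaction log backup
    ORDER BY backup_finish_date DESC;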

Daily Data subset of main database

I have a large 500 GB database, and one of our customers wants a daily snapshot of only his data. He only has a 3 Mb connection, and I suspect that is the max! What would be the most effective method I could use?
1. Views that are updated - but he wants the underlying tables.
2. Replication - I don't know much about this.
3. An alternative method.
Merge replication, which allows you to initialize a subscriber without using a snapshot. You will have to initialize replication on the subscriber from a backup. You can possibly do the same with transactional replication, but it just never worked quite the same for me. YMMV. When it breaks (etc.) you will have to be prepared to ship a new backup and start over (hacking replication sometimes works, but don't count on it). Changing database structures is also a pain once in replication. I have seen ~500 GB with less than 3 Mbit work in production, though not without proper planning and preparation (and grey hair).
I have used transactional replication with a read-only subscription, where I invoked the Distribution Agent with a batch file on a task schedule once a day. Transactional replication did not feel as well maintained (by MS) or as stable as merge replication, though the data integrity with transactional was more consistent.
I have not tried transaction log shipping, but that might also be an option.
(p.s. notice I didn't say "If it breaks")
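For the transactional "initialize from a backup" route mentioned above, the setup is roughly the following (publication, subscriber, database and path names are placeholders):

    -- The publication must allow initialization from a backup, e.g. created with
    -- sp_addpublication @allow_initialize_from_backup = N'true'.

    -- Back up the publication database and restore it at the subscriber
    -- (ship the .bak on physical media if the 3 Mb link is too slow).

    -- At the Publisher: add the subscription, telling replication the
    -- subscriber was seeded from that backup rather than from a snapshot.
    EXEC sp_addsubscription
        @publication      = N'CustomerDataPub',
        @subscriber       = N'CUSTOMER-SRV',
        @destination_db   = N'CustomerDataSub',
        @sync_type        = N'initialize with backup',
        @backupdevicetype = N'disk',
        @backupdevicename = N'X:\Backups\CustomerDataPub.bak';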