I'm setting up SQL Server transactional replication that will be running continuously. The Distributor for this setup is at the server receiving the data.
Should I have any concerns about transaction log file sizes if this is running continuously?
Considerations for Transactional Replication: Transaction Log Space
For each database that will be published using transactional replication, ensure that the transaction log has enough space allocated. The transaction log of a published database might require more space than the log of an identical unpublished database, because the log records are not truncated until they have been moved to the distribution database.
If the distribution database is unavailable, or if the Log Reader Agent is not running, the transaction log of a publication database continues to grow. The log cannot be truncated past the oldest published transaction that has not been delivered to the distribution database. We recommend that you set the transaction log file to autogrow so that the log can accommodate these circumstances. For more information, see CREATE DATABASE (Transact-SQL) and ALTER DATABASE (Transact-SQL).
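A quick way to confirm whether replication is what is holding up log truncation on the Publisher (the database name below is a placeholder):

    -- A log_reuse_wait_desc of REPLICATION means the Log Reader Agent has
    -- not yet moved all pending log records to the distribution database.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourPublishedDb';

    -- Current log size and percentage in use, for every database.
    DBCC SQLPERF (LOGSPACE);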
Disk Space for the Distribution Database
Ensure that you have enough disk space to store replicated transactions in the distribution database:
If you do not make snapshot files available to Subscribers immediately (which is the default): transactions are stored until they have been replicated to all Subscribers or until the retention period has been reached, whichever is shorter.
If you create a transactional publication and make snapshot files available to Subscribers immediately: transactions are stored until they have been replicated to all Subscribers or until the Snapshot Agent runs and creates a new snapshot, whichever is longer. If the elapsed time between Snapshot Agent runs is greater than the maximum distribution retention period for the publication, which has a default of 72 hours, transactions older than the retention period are removed from the distribution database. For more information, see Subscription Expiration and Deactivation.
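To see the retention settings that drive this cleanup, and to get a rough sense of the backlog, you can run something like the following at the Distributor (a sketch; "distribution" is the default distribution database name):

    -- Shows each distribution database with its min/max retention, in hours.
    EXEC sp_helpdistributiondb;

    -- Rough count of commands still queued for delivery to Subscribers.
    SELECT COUNT(*) AS pending_commands
    FROM distribution.dbo.MSrepl_commands;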
How is consistency maintained in transactional systems in the event of a power outage?
For example, consider the operation below, which is performed within a transaction.
UPDATE table SET someColumn = someColumn + 1 WHERE <whatever>
In the event of a sudden power outage, how can we ensure that the operation either completes fully or does not happen at all?
From the SQL Server docs:
SQL Server uses a write-ahead logging (WAL) algorithm, which guarantees that no data modifications are written to disk before the associated log record is written to disk. This maintains the ACID properties for a transaction. ... The log records must be written to disk before the associated dirty page is removed from the buffer cache and written to disk.
As far as I understand, using the SQL increment operation above as an example, the following operations occur:
Write a record to the transaction log.
Flush changes to disk.
When a sudden shutdown occurs, how does the system know whether the increment operation completed? How does it know what to roll back? (A shutdown can occur after a record is added to the transaction log but before the changes are flushed to disk, right?)
I also have a second, similar question.
How can we guarantee atomicity? For example:
UPDATE table SET someColumn = someColumn + 1, otherColumn = otherColumn + 2 WHERE <whatever>
How can we ensure that in the event of a sudden power-off, either both someColumn and otherColumn are updated, or neither is?
Write-Ahead Logging (WAL). See e.g. Transaction Log Architecture.
a shutdown can occur after a record is added to the transaction log but before the changes are flushed to disk, right?
Log records are added to the transaction log in memory, but when a transaction commits, SQL Server waits for confirmation that all of its log records have been written to durable media. On restart, a transaction is rolled back if all of its log records had not been flushed to disk.
SQL Server also requires the storage system to honor these flushes to disk, which means any non-persistent write caching must be disabled.
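As a minimal sketch of how this plays out for an explicit transaction (the table, columns, and predicate are made up):

    BEGIN TRANSACTION;

    UPDATE dbo.SomeTable
    SET someColumn = someColumn + 1,
        otherColumn = otherColumn + 2
    WHERE id = 42;

    -- COMMIT does not return until every log record for this transaction has
    -- been hardened to the log file (write-ahead logging). If power fails
    -- before that point, crash recovery rolls the whole transaction back;
    -- after that point, it rolls the transaction forward. Either way, both
    -- columns change together or not at all.
    COMMIT TRANSACTION;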
I have a SQL Server 2008 instance (Distributor and Publisher) that uses both snapshot and transactional replication to replicate to a couple of Subscribers. There is plenty of information here https://learn.microsoft.com/en-us/sql/relational-databases/replication/disable-publishing-and-distribution on how to permanently disable replication.
I don't want to disable replication permanently, just temporarily, for a network outage that is scheduled for later this week.
I have learned the hard way that when things go amok it's a complete disable, remove, and re-set-up to get everything working again, and there are too many publications to make that an option just for a temporary pause.
It depends on whether the network split will be between the Publisher and the Distributor, or between the Distributor and the Subscriber. Both of the scenarios below deal with transactional replication.
Publisher and Distributor - the Log Reader Agent will not be able to move records to the distribution database and mark them as delivered, so they will stay in the Publisher's transaction log longer than normal. This may cause log growth (depending on how much free space is currently in your log file).
Distributor and Subscriber - assuming the network outage is shorter than the minimum retention period of the distribution database, you should be able to just suspend the distribution jobs (see the sketch below) and everything should pick back up once the network is back online. Depending on the size of the backlog, it may be easier to re-initialize some (or all!) of your articles.
For snapshot replication, you shouldn't need to do much since the only time there's activity is when a snapshot is being created and delivered to the subscriber. You can just disable those jobs for the duration of your event and re-enable them when you're done.
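If you go the suspend-the-jobs route, one possible approach (the job name below is a placeholder; look up the actual agent job names in msdb or Replication Monitor):

    -- Disable a replication agent job for the duration of the outage...
    EXEC msdb.dbo.sp_update_job @job_name = N'MyPublication-Distribution', @enabled = 0;

    -- ...and re-enable it once the network is back online.
    EXEC msdb.dbo.sp_update_job @job_name = N'MyPublication-Distribution', @enabled = 1;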
I updated data in my table, and before committing the transaction I shut the database down with SHUTDOWN ABORT. When I started the database again, the data was gone.
How can I recover an uncommitted transaction in Oracle 11g?
Two recovery steps are involved here (leaving aside some workarounds), and neither will bring your uncommitted changes back:
Cache Recovery
To solve this dilemma, two separate steps are generally used by Oracle Database for a successful recovery of a system failure: rolling forward with the redo log (cache recovery) and rolling back with the rollback or undo segments (transaction recovery).
The online redo log is a set of operating system files that record all changes made to any database block, including data, index, and rollback segments, whether the changes are committed or uncommitted. All changes to Oracle Database blocks are recorded in the online redo log.
The first step of recovery from an instance or media failure is called cache recovery or rolling forward, and involves reapplying all of the changes recorded in the redo log to the datafiles. Because rollback data is also recorded in the redo log, rolling forward also regenerates the corresponding rollback segments.
Rolling forward proceeds through as many redo log files as necessary to bring the database forward in time. Rolling forward usually includes online redo log files (instance recovery or media recovery) and could include archived redo log files (media recovery only).
After rolling forward, the data blocks contain all committed changes. They could also contain uncommitted changes that were either saved to the datafiles before the failure, or were recorded in the redo log and introduced during cache recovery.
Transaction Recovery
After the roll forward, any changes that were not committed must be undone. Oracle Database applies undo blocks to roll back uncommitted changes in data blocks that were either written before the failure or introduced by redo application during cache recovery. This process is called rolling back or transaction recovery.
Figure 12-2 illustrates rolling forward and rolling back, the two steps necessary to recover from any type of system failure.
[Figure 12-2, Basic Recovery Steps: Rolling Forward and Rolling Back]
Oracle Database can roll back multiple transactions simultaneously as needed. All transactions that were active at the time of failure are marked as terminated. Instead of waiting for SMON to roll back terminated transactions, new transactions can recover blocking transactions themselves to get the row locks they need.
Source: Oracle Database documentation.
A small addition, to shed some light on the case:
Oracle performs crash recovery and instance recovery automatically after an instance failure. In the case of media failure, a database administrator (DBA) must initiate a recovery operation. Recovering a backup involves two distinct operations: rolling the backup forward to a more recent time by applying redo data, and rolling back all changes made in uncommitted transactions to their original state.
In general, recovery refers to the various operations involved in restoring, rolling forward, and rolling back a backup. Backup and recovery refers to the various strategies and operations involved in protecting the database against data loss and reconstructing the database should a loss occur.
In brief, you cannot recover the updated data; it must be rolled back in order to preserve database consistency. Keep in mind that transactions are atomic, so they must be either committed or rolled back. Since the session that initiated the transaction has been killed (stopped), no one can commit it, so SMON performs a rollback.
Uncommitted transactions will be rolled back once the instance starts after the crash.
We are currently having issues with ASPState database mirroring. We have around 10,000 active users online 9-5 day to day, and the ASPState database is so write-heavy, and passes so much to the mirror, that the mirror's drive has very high I/O and keeps making both servers inaccessible due to the latency of writing the data on the mirror. We're using SQL Server 2012 Standard, so asynchronous mode is not available to us.
We're running SQL Server on Amazon EC2 instances with EBS-backed volumes at 1,000 IOPS; in your view, should this be enough? We have very smooth periods with over 15,000 users online, and then other times with only 10,000 users online where we hit disk queue length problems on the mirror (the backup server, not the principal).
The principal can be writing to the aspstate.mdf files at a constant 10-20 MB/s when the disk queue length goes up.
We're going to increase the IOPS to 2,000 in the meantime, as we've currently had to disable mirroring. Would you expect this, and has anyone handled this sort of volume before?
The bottleneck with a high-transaction workload like ASPState is not the data file but the transaction log. With synchronous mirroring, additional latency is introduced by both the network round trip and the synchronous commit at the mirror. This latency will not be tolerable with a large number of ASPState requests. Keep in mind that, unless specified otherwise, with session state enabled each ASP.NET page request requires 2 updates to a session state row. So if you have 10,000 active users clicking once every 15 seconds, that's about 667 requests per second, which works out to roughly 1,300 I/Os per second (667 x 2) for transaction log writes alone on each database.
If you must have HA for session state, I suggest failover clustering to eliminate the network latency. You might also consider tuning session state by specifying the read-only or none directive for pages that don't need it. If you need to support a large number of users, consider an in-memory session state solution instead of the out-of-the-box ASPState database. Also remember that session state data is temporary, so you can forgo durability.
Is the database transaction log automatically truncated after we create a backup when the database is in the full recovery model? Or do we need to take two different backups, say a full database backup and a separate one for the log file?
The t-log only has portions of it marked inactive when a transaction log backup is performed on the database, and a portion (a VLF, or virtual log file) is only marked inactive if there are no outstanding transactions within it.
A full backup, whether the database is in the full or bulk-logged recovery model, will not mark any portion of the t-log inactive.
Paul Randal devoted an entire post to this question: http://www.sqlskills.com/BLOGS/PAUL/post/Misconceptions-around-the-log-and-log-backups-how-to-convince-yourself.aspx
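If you want to convince yourself, here is a quick experiment on a full-recovery-model database (the database name and paths are placeholders):

    -- A full backup does not permit log truncation...
    BACKUP DATABASE YourDb TO DISK = N'C:\Backups\YourDb.bak';
    DBCC SQLPERF (LOGSPACE);  -- "Log Space Used (%)" stays high

    -- ...but a log backup marks inactive VLFs as reusable.
    BACKUP LOG YourDb TO DISK = N'C:\Backups\YourDb_log.trn';
    DBCC SQLPERF (LOGSPACE);  -- the used percentage drops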