Does the repeatable read isolation level lock the whole table for updates?

Given two transactions:
T1
set transaction isolation level repeatable read;
begin transaction
select * from tmp where val=1;
update tmp set txt='rerwer11' where val=1;
waitfor delay '00:00:7';
commit;
T2
set transaction isolation level repeatable read;
begin transaction
select * from tmp where val=2;
update tmp set txt='rerwer11' where val=2;
commit;
Start T1 and, while it is executing, launch T2. I thought the first transaction would lock only the rows with val=1, so the second transaction would not be blocked because it works on different rows. But it turns out the second transaction waits for the first one to complete.
If I use the default isolation level (read committed) for both of them and run the update with the XLOCK hint, everything works as I expected: the second one is blocked only if it tries to read the rows with val=1.

First of all, isolation levels never change how DDL/DML statements take their locks; they only affect SELECTs. Secondly, an UPDATE will never block the whole table on its own unless other factors come into play, such as a missing index (so a table scan) or lock escalation.
You are getting blocking in your example because of the REPEATABLE READ isolation level, which keeps the shared locks taken by the SELECT until the transaction is committed.
Coming to your example:
1. The SELECT will never block the whole table, but conflicting locks on the rows it has read are not allowed until the transaction is finished.
2. If your UPDATE acquires more than roughly 5,000 locks, lock escalation locks the whole table (blocking even SELECTs).
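One way to see this, and a possible fix, as a sketch that assumes tmp has only the val and txt columns from the question and no index on val: without an index, T2's statements have to scan every row and run into the shared locks T1 is still holding, while a narrow index lets each transaction touch only its own rows.
-- Hypothetical index name; lets the WHERE val=... predicates seek instead of scan
CREATE INDEX IX_tmp_val ON tmp (val);
-- While T1 is inside its WAITFOR, inspect the locks it is holding
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();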

Related

Snapshot isolation behaviour. "Triggered" at first query?

I am doing some tests to try to understand how snapshot isolation works... and so far I do not. I have SET ALLOW_SNAPSHOT_ISOLATION ON in my db (not interested in READ_COMMITTED_SNAPSHOT at the moment). Then I do the following tests. I will mark the different sessions (practically, different tabs in my SSMS) with [s1] and [s2], [s2] being the isolated session and [s1] simulating another, non-isolated session.
First, make a table, and let's give it a row. #[s1]:
create table _g1 (v int)
insert _g1 select 1
select * from _g1
(Output: 1)
Now let's begin an isolated transaction.
#[s2]:
set transaction isolation level snapshot
begin tran
Insert another row, #[s1]:
insert _g1 select 2
Now let's see what the isolated transaction "sees", #[s2]:
select * from _g1
(Output: 1,2)
Strange. Shouldn't the isolation "start counting" from the moment of the "Begin tran"? Here, it should not have returned the 2....Let's do this another time. #[s1]:
insert _g1 select 3
#[s2]:
select * from _g1
(Output: 1,2)
So, this time it worked as I expected and did not account the latest insert.
How is this behaviour explained? Does the isolation start working after the first access of each table?
Snapshot isolation works with row versioning. For each modification of a row, the database engine maintains the previous and the current version of the row, along with the transaction sequence number (XSN) of the transaction that made the modification.
When snapshot isolation is used for a transaction in [s2]:
The Database Engine reads a row within the transaction and retrieves
the row version from tempdb whose sequence number is closest to, and
lower than, the transaction sequence number.
(see "How Snapshot Isolation and Row Versioning Work", in https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server). The transaction sequence number XSN2 for the transaction in [s2] is not assigned until a DML statement is issued.
sys.dm_tran_active_snapshot_database_transactions is a DMV which returns a virtual table
for all active transactions that generate or potentially access row versions. You can query this view to get information about active transactions that access row versions.
To verify all the above, you could try:
#[s1]
create table _g1 (v int)
#[s2]
set transaction isolation level snapshot
begin tran
select * from sys.dm_tran_active_snapshot_database_transactions -- < No XSN has been assigned, yet. Zero rows are returned.
select * from _g1 --< XSN2 is now assigned.
(Output: zero rows)
select * from sys.dm_tran_active_snapshot_database_transactions -- < XSN2 has been assigned and the corresponding record is returned.
#[s1]
insert _g1 select 1
select * from _g1
(Output: 1)
#[s2]
select * from _g1
(Output: zero rows)
Please, see the remarks in https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-active-snapshot-database-transactions-transact-sql?view=sql-server-ver15 about when an XSN is issued:
sys.dm_tran_active_snapshot_database_transactions reports transactions that are assigned a transaction sequence number (XSN). The XSN is assigned when the transaction first accesses the version store. In a database that is enabled for snapshot isolation or read committed isolation using row versioning, the examples show when an XSN is assigned to a transaction:
If a transaction is running under serializable isolation level, an XSN is assigned when the transaction first executes a statement, such as an UPDATE operation, that causes a row version to be created.
If a transaction is running under snapshot isolation, an XSN is assigned when any data manipulation language (DML) statement, including a SELECT operation, is executed.
Therefore, to answer your question, snapshot isolation "starts counting" after the first SELECT or other DML statement issued within the transaction, not immediately after the BEGIN TRANSACTION statement.
You can use SET TRANSACTION ISOLATION LEVEL SNAPSHOT either at the database level or at the session level. In this example it is set at the session level, so snapshot isolation applies only to the session in which it was declared.
Secondly, you must issue a T-SQL statement against the data. In #s2,
Set Transaction Isolation Level Snapshot
Begin Tran
the transaction is open, but no statement has touched any table yet, so there is nothing for which a snapshot version could be maintained.
Set Transaction Isolation Level Snapshot
Begin Tran
select * from _g1
Here the isolation level takes effect for table _g1, and for whatever other tables are referenced by T-SQL statements inside the transaction. In other words, the engine maintains its own version of the rows of every table this transaction touches, in tempdb.
The transaction keeps reading from that version store until it is committed or rolled back; only after that does it read the current data from the table again.
In #s2, the Begin Tran has neither a commit nor a rollback. So although all the inserts are committed in #s1, it does not fetch 3; it fetches 1, 2, the rows that were committed before the first statement against that table was issued. If a commit or rollback is done in #s2, the output becomes (1, 2, 3), since every insert in #s1 is committed.
Another example: truncate table _g1, then start #s2 first:
Set Transaction Isolation Level Snapshot
Begin Tran
select * from _g1
Output: no rows.
Here the database engine has started maintaining its own version of table _g1; since _g1 has no rows, the version store has nothing for it either.
In #s1:
insert _g1 select 1
select * from _g1
(Output: 1)
In #s2, whether you run only
select * from _g1
or the whole script, the output is still nothing, because the transaction has not been committed or rolled back and it keeps reading its versioned view from tempdb.
After a commit or rollback, the versioned view is refreshed, so the next
select * from _g1
in #s2 returns 1.
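A sketch of that last step, continuing the truncated-table example (it assumes the single insert in #s1 above has committed):
#[s2]
commit
select * from _g1
(Output: 1)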

How can I lock against a race condition in SQL Server?

I have a Stored Procedure in SQL Server with the following scenario:
In my stored procedure I have a function for getting the max serial. I get the max serial and insert it in a table:
Set @Serial = GetMaxSerial(...)
Insert Into MyTable (Serial,...) Values (@Serial,...)
Sometimes my stored procedure is executed twice concurrently, in such a way that both executions get the same max serial, for example 100, and both try to insert it into MyTable. The first insert succeeds, but the second fails with a key-violation error.
How can I lock these two lines of code and force my SP to run them together?
Or is there a better solution?
This is a very good scenario for the SERIALIZABLE transaction isolation level. The transaction isolation level decides what level of access other transactions have to a row/resource while one transaction is already working with it. To read more about transaction isolation levels, read SET TRANSACTION ISOLATION LEVEL.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION
Set @Serial = GetMaxSerial(...)
Insert Into MyTable (Serial,...) Values (@Serial,...)
COMMIT TRANSACTION
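A slightly fuller sketch of the same approach; GetMaxSerial and MyTable come from the question, while the single Serial column, the parameterless call, and the TRY/CATCH scaffolding are assumptions added here so that an error does not leave the serializable transaction open:
SET XACT_ABORT ON;  -- any runtime error rolls the transaction back instead of leaving locks behind
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRY
    BEGIN TRANSACTION;
    DECLARE @Serial int;
    SET @Serial = dbo.GetMaxSerial();   -- under SERIALIZABLE the range read stays locked until COMMIT
    INSERT INTO MyTable (Serial) VALUES (@Serial);
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;  -- two sessions doing this can deadlock instead of duplicating; catch that error and retry if needed
END CATCH;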

SQL Server: a simple query takes forever to run due to transaction isolation level

I've come across a problem while learning transaction isolation levels in SQL server.
The problem is that after I run this code (and it finishes without errors):
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN T1;
SELECT (...)
WAITFOR DELAY '00:00:5'
SELECT (...)
WAITFOR DELAY '00:00:3'
COMMIT TRAN T1;
I want to run this query:
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION T2;
INSERT (...)
INSERT (...)
COMMIT TRANSACTION T2;
But it just says "Executing query", and does nothing.
I think it's because the lock on the tables somehow continues after the first transaction has been finished. Can someone help?
Of course the selects and the inserts refer to the same tables.
Either the first tran is still open (close the window to make sure it is not), or some other tran is open (exec sp_who2). You can't suppress X-locks taken by DML because SQL Server needs those locks during rollback.
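If sp_who2 is too coarse, a sketch like this (nothing in it is specific to the tables above) shows which sessions are blocked and by whom:
-- Requests that are currently waiting on another session
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
-- Open transactions in the current database
DBCC OPENTRAN;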
#usr offers good possibilities.
A related specific possibility is that you selected only part of the first transaction to execute while tinkering - i.e. executed BEGIN TRAN T1 and never executed COMMIT TRAN T1. It happens - part of Murphy's Law I think. Try executing just COMMIT TRAN T1, then re-trying the second snippet.
The following worked just fine for me on repeated, complete executions in a single session:
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN T1;
SELECT * from tbl_A
WAITFOR DELAY '00:00:5'
SELECT * from tbl_B
WAITFOR DELAY '00:00:3'
COMMIT TRAN T1;
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION T2;
INSERT tbl_A (ModifiedDate) values (GETDATE())
INSERT tbl_B (ModifiedDate) values (GETDATE())
INSERT tbl_A (ModifiedDate) select top 1 ModifiedDate from tbl_A
INSERT tbl_B (ModifiedDate) select top 1 ModifiedDate from tbl_B
COMMIT TRANSACTION T2;
1 - SET IMPLICIT_TRANSACTIONS is usually OFF unless you SET ANSI_DEFAULTS to ON, in which case it will be ON. Thus, you can remove this extra statement if it is not needed.
2 - I agree with Aaron. Read uncommitted (NOLOCK) should only be used with SELECT statements, and even then it can lead to invalid results: it is prone to missing rows, reading rows twice, or scan errors.
Read Committed Snapshot Isolation (RCSI) is a better option, at the expense of tempdb (version store) space. It allows your reports (readers) not to be blocked by transactions (writers); see the sketch at the end of this answer for how it is enabled.
3 - SET TRANSACTION ISOLATION LEVEL SERIALIZABLE takes the most locks and therefore increases the chances of blocking.
Why use this low-concurrency isolation level with two INSERT statements?
I can understand using this level to UPDATE multiple tables, for instance a bank transaction: debit one row and credit another row, in two tables, with no one having access to those records until the transaction is complete.
In short, I would use the READ COMMITTED isolation level for the insert statements. More than likely, the data being inserted is different.
However, the whole picture is not here.
There is some type of blocking that is occurring. You need to find the root cause.
Here is a code snippet to look at locks and objects that are locked.
--
-- Locked object details
--
-- Old school technique
EXEC sp_lock
GO
-- Lock details
SELECT
resource_type, resource_associated_entity_id,
request_status, request_mode,request_session_id,
resource_description
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('AdventureWorks2012')
GO
-- Page/Key details
SELECT object_name(object_id) as object_nm, *
FROM sys.partitions
WHERE hobt_id = 72057594047037440
GO
-- Object details
SELECT object_name(1266103551)
GO
If you still need help, please identify the two blocking transactions and the locks. Please post this information.
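Coming back to point 2: RCSI is enabled per database, not per session. A minimal sketch, with the database name as a placeholder:
-- The change only completes once this is the only open connection to the database
-- (or append WITH ROLLBACK IMMEDIATE to disconnect other sessions).
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;
-- After this, readers under the default READ COMMITTED level read row versions
-- from tempdb instead of waiting on writers.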

SQL Table not locked even using tablock & holdlock

I am using SQL Lock on a table. here is my query:
set transaction isolation level serializable
go
begin transaction
select * from emp
waitfor delay '00:00:40'
rollback transaction
Now, when I try to access table 'emp' from somewhere else (by opening another query window and firing a SELECT against the emp table), I still get data. It should not return data, as the table is locked for 40 seconds.
NOTE: I also tried "with (tablock, holdlock)"; still not working.
How can I make the table inaccessible for those 40 seconds?
I found my answer:
set transaction isolation level serializable
go
begin transaction
select * from emp with (TABLOCKX,holdlock)
waitfor delay '00:00:40'
rollback transaction
It locks the table; no one else can access it for those 40 seconds. The difference is that TABLOCK only requests a shared table lock, which other readers can share, whereas TABLOCKX requests an exclusive lock, so concurrent SELECTs have to wait.
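To confirm it from a second session, a sketch using the same emp table:
-- Run while the first session is inside its WAITFOR:
select * from emp   -- waits until the 40-second transaction rolls back and releases the exclusive table lock
-- Optionally, look at the lock itself:
select resource_type, request_mode, request_status
from sys.dm_tran_locks
where resource_type = 'OBJECT'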

Which SQL Read TRANSACTION ISOLATION LEVEL do I want for long running insert?

I have a long running insert transaction that inserts data into several related tables.
When this insert is running, I cannot perform a select * from MainTable. The select just spins its wheels until the insert is done.
I will be performing several of these inserts at the same/overlapping time. To check that the information is not inserted twice, I query the MainTable first to see if an entry is there and that its processed bit is not set.
During the insert transaction, it flips the MainTable processed bit for that row.
So I need to be able to read the table and also be able to tell if the specific row is currently being updated.
Any ideas on how to set this up in Microsoft SQL 2005? I am looking through the SET TRANSACTION ISOLATION LEVEL documentation.
Thank you,
Keith
EDIT: I do not think that the same insert batch will happen at the same time. These are binary files that are being processed and their data inserted into the database. I check that the file has not been processed before I parse and insert the data. When I do the check, if the file has not been seen before I do a quick insert into the MainTable with the processed bit set false.
Is there a way to lock the row being updated instead of the entire table?
You may want to rethink your process before you use READ UNCOMMITTED. There are many good reasons for isolated transactions. Even with READ UNCOMMITTED you may still get duplicates, because there is a chance that both inserts check for an existing row at the same time, both find nothing, and both insert, creating duplicates. Try breaking the work up into smaller batches or issuing periodic COMMITs.
EDIT
You can wrap the MainTable update in a transaction of its own, which frees up that table sooner, but you may still get conflicts with the other tables. For example:
DECLARE @ProcessedBit bit
BEGIN TRANSACTION
SELECT @ProcessedBit = ProcessedBit FROM MainTable WHERE ID = XXX
IF @ProcessedBit = 0
    UPDATE MainTable SET ProcessedBit = 1 WHERE ID = XXX
COMMIT TRANSACTION
IF @ProcessedBit = 0
BEGIN
    BEGIN TRANSACTION
    -- start long running process
    ...
    COMMIT TRANSACTION
END
EDIT to enable error recovery:
DECLARE @ProcessedStatus varchar(20)
BEGIN TRANSACTION
SELECT @ProcessedStatus = ProcessedStatus FROM MainTable WHERE ID = XXX
IF @ProcessedStatus = 'Not Processed'
    UPDATE MainTable SET ProcessedStatus = 'Processing' WHERE ID = XXX
COMMIT TRANSACTION
IF @ProcessedStatus = 'Not Processed'
BEGIN
    BEGIN TRANSACTION
    -- start long running process
    ...
    IF <no errors>
    BEGIN
        UPDATE MainTable SET ProcessedStatus = 'Processed' WHERE ID = XXX
        COMMIT TRANSACTION
    END
    ELSE
        ROLLBACK TRANSACTION
END
The only isolation level that allows one transaction to read changes executed by another transaction in progress (before it commits) is:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
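For the duplicate/progress check specifically, a sketch of what that looks like (keeping the MainTable name, ProcessedBit column, and the XXX placeholder used above):
-- Session doing the check:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT ID, ProcessedBit FROM MainTable WHERE ID = XXX  -- can see the not-yet-committed update made by the long-running insert
-- Per-query equivalent, without changing the session's isolation level:
SELECT ID, ProcessedBit FROM MainTable WITH (NOLOCK) WHERE ID = XXX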