I have a Stored Procedure in SQL Server with the following scenario:
In my stored procedure I have a function for getting the max serial. I get the max serial and insert it in a table:
Set @Serial = dbo.GetMaxSerial(...)
Insert Into MyTable (Serial, ...) Values (@Serial, ...)
Sometimes my stored procedure is executed twice concurrently, so that both executions get the same max serial (for example 100) and try to insert it into MyTable. The first insert succeeds, but the second fails with a key violation error.
How can I lock these two lines of code and force my stored procedure to run them as one atomic unit?
Or is there a better solution?
This is a very good scenario for the SERIALIZABLE transaction isolation level. The transaction isolation level determines what access other transactions have to a row or resource while one transaction is already working with it. To read more about transaction isolation levels, see SET TRANSACTION ISOLATION LEVEL.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION
SET @Serial = dbo.GetMaxSerial(...)
INSERT INTO MyTable (Serial, ...) VALUES (@Serial, ...)
COMMIT TRANSACTION
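Note that under SERIALIZABLE, if both sessions read the current max before inserting, they can deadlock on the shared range locks rather than hit a key error. A common variant is to take an update lock on the read; here is a sketch that assumes GetMaxSerial simply reads MAX(Serial) from MyTable (an assumption, since the function body is not shown):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION

DECLARE @Serial INT

-- UPDLOCK makes the second session wait here instead of deadlocking on the insert
SELECT @Serial = ISNULL(MAX(Serial), 0) + 1
FROM MyTable WITH (UPDLOCK)

INSERT INTO MyTable (Serial) VALUES (@Serial)  -- other columns omitted

COMMIT TRANSACTION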
Related
I am trying to set up a read/write lock in SQL Server. My stored procedure is
CREATE PROCEDURE test()
AS
BEGIN
SELECT VALUE FROM MYTABLE WHERE ID=1
UPDATE MYTABLE SET VALUE=VALUE+1 WHERE ID=1
END
I would like to be sure that no one else is going to read or update the VALUE column while this stored procedure is being executed.
I read lots of posts saying that in SQL Server it should be enough to wrap the statements in a transaction.
CREATE PROCEDURE test()
AS
BEGIN
BEGIN TRANSACTION
SELECT VALUE FROM MYTABLE WHERE ID=1
UPDATE MYTABLE SET VALUE=VALUE+1 WHERE ID=1
COMMIT TRANSACTION
END
But this is not enough: I launched two parallel connections, both using this stored procedure. With SQL Server Management Studio's debugger, I stopped the first execution inside the transaction and observed that the second transaction still executed!
So I tried adding an isolation level:
CREATE PROCEDURE test()
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRANSACTION
SELECT VALUE FROM MYTABLE WHERE ID=1
UPDATE MYTABLE SET VALUE=VALUE+1 WHERE ID=1
COMMIT TRANSACTION
END
but the result is the same.
I also tried setting the isolation level in the client code:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
EXEC test
but again nothing changed.
My question is: in such situation, which is the correct way to set up a lock that blocks the others?
thank you
in such situation, which is the correct way to set up a lock that blocks the others?
The correct lock here is to read the table with an UPDLOCK, in a transaction.
SELECT VALUE FROM MYTABLE with (UPDLOCK)
WHERE ID=1
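Putting it together with the procedure from the question, the pattern looks like this (a sketch using the question's table and column names):
CREATE PROCEDURE test
AS
BEGIN
    BEGIN TRANSACTION

    -- the UPDLOCK makes a second caller wait here until the first one commits
    SELECT VALUE FROM MYTABLE WITH (UPDLOCK) WHERE ID = 1

    UPDATE MYTABLE SET VALUE = VALUE + 1 WHERE ID = 1

    COMMIT TRANSACTION
END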
You can also use an OUTPUT clause to update and return the value in a single query, which will also prevent two sessions from reading and updating the same value
update MyTable set value = value + 1
output inserted.value
where ID = 1
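If the updated value has to be captured inside the procedure rather than just returned to the client, the OUTPUT clause can write into a table variable (a sketch, again using the question's table):
declare @NewValue table (value int)

update MyTable set value = value + 1
output inserted.value into @NewValue (value)
where ID = 1

select value from @NewValue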
However you should not generate keys like this. Only one session can generate a key at a time, and the locking to generate the key is held until the end of the session's current transaction. Instead use a SEQUENCE or an IDENTITY column.
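With a SEQUENCE, for example, each session gets a distinct value atomically and the counter row and its locks disappear entirely (a sketch; the sequence name is a placeholder):
CREATE SEQUENCE dbo.MySequence AS int START WITH 1 INCREMENT BY 1

-- each call hands out the next value, even to concurrent sessions, without blocking
SELECT NEXT VALUE FOR dbo.MySequence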
I am doing some tests to try to understand how snapshot isolation works... and so far I do not. I have SET ALLOW_SNAPSHOT_ISOLATION ON in my db (not interested in READ_COMMITTED_SNAPSHOT atm). Then I do the following tests. I will mark the different sessions (practically, different tabs in my SSMS) with [s1] and [s2], [s2] being the isolated session and [s1] simulating another, non-isolated session.
First, make a table, and let's give it a row. #[s1]:
create table _g1 (v int)
insert _g1 select 1
select * from _g1
(Output: 1)
Now let's begin an isolated transaction.
#[s2]:
set transaction isolation level snapshot
begin tran
Insert another row, #[s1]:
insert _g1 select 2
Now let's see what the isolated transaction "sees", #[s2]:
select * from _g1
(Output: 1,2)
Strange. Shouldn't the isolation "start counting" from the moment of the BEGIN TRAN? Here, it should not have returned the 2... Let's try this one more time. #[s1]:
insert _g1 select 3
#[s2]:
select * from _g1
(Output: 1,2)
So, this time it worked as I expected and did not include the latest insert.
How is this behaviour explained? Does the isolation start working after the first access of each table?
Snapshot isolation works with row versioning. For each modification of a row, the database engine keeps the previous and the current version of the row, along with the sequence number (XSN) of the transaction that made the modification.
When snapshot isolation is used for a transaction in [s2]:
The Database Engine reads a row within the transaction and retrieves the row version from tempdb whose sequence number is closest to, and lower than, the transaction sequence number.
(see "How Snapshot Isolation and Row Versioning Work", in https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server). The transaction sequence number XSN2 for the transaction in [s2] is not assigned until a DML statement is issued.
sys.dm_tran_active_snapshot_database_transactions is a DMV which returns a virtual table for all active transactions that generate or potentially access row versions. You can query this view to get information about active transactions that access row versions.
To verify all the above, you could try:
#[s1]
create table _g1 (v int)
#[s2]
set transaction isolation level snapshot
begin tran
select * from sys.dm_tran_active_snapshot_database_transactions -- < No XSN has been assigned, yet. Zero rows are returned.
select * from _g1 --< XSN2 is now assigned.
(Output: zero rows)
select * from sys.dm_tran_active_snapshot_database_transactions -- < XSN2 has been assigned and the corresponding record is returned.
#[s1]
insert _g1 select 1
select * from _g1
(Output: 1)
#[s2]
select * from _g1
(Output: zero rows)
Please see the remarks in https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-active-snapshot-database-transactions-transact-sql?view=sql-server-ver15 about when an XSN is issued:
sys.dm_tran_active_snapshot_database_transactions reports transactions that are assigned a transaction sequence number (XSN). The XSN is assigned when the transaction first accesses the version store. In a database that is enabled for snapshot isolation or read committed isolation using row versioning, the examples show when an XSN is assigned to a transaction:
If a transaction is running under serializable isolation level, an XSN is assigned when the transaction first executes a statement, such as an UPDATE operation, that causes a row version to be created.
If a transaction is running under snapshot isolation, an XSN is assigned when any data manipulation language (DML) statement, including a SELECT operation, is executed.
Therefore, to answer your question, snapshot isolation "starts counting" after the first SELECT or other DML statement issued within the transaction, and not immediately after the BEGIN TRANSACTION statement.
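So if you want the snapshot to be fixed right at the start of the transaction, issue a statement against a table immediately after BEGIN TRAN; per the remarks quoted above, any DML statement, including a SELECT, assigns the XSN (a sketch using the same table):
set transaction isolation level snapshot
begin tran
select * from _g1   -- this first statement pins the snapshot for the rest of the transaction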
You can set the SNAPSHOT isolation level either at the database level or at the session level. In your example it is set at the session level, so snapshot isolation applies only to the session in which it was declared.
Secondly, the transaction must actually issue a T-SQL statement against a table.
In #s2:
Set Transaction Isolation Level Snapshot
Begin Tran
Here a transaction is open, but no T-SQL statement has been issued yet, so there is no table for which a snapshot version could be maintained.
Set Transaction Isolation Level Snapshot
Begin Tran
select * from _g1
Here the snapshot applies to table _g1, and to whatever other tables are referenced by T-SQL statements within the transaction. In other words, the engine maintains its own version of the records in tempdb for every table referenced inside this transaction. It reads data from tempdb until the transaction is committed or rolled back; after that, it reads data from the table again.
In #s2, BEGIN TRAN was issued without a COMMIT or ROLLBACK. Even though all the records are committed in #s1, the select does not fetch 3; it fetches 1 and 2, which were committed before the first T-SQL statement was issued against that table. If a COMMIT or ROLLBACK is done in #s2, the output will then be (1, 2, 3), since all the insert statements in #s1 are committed.
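Expressed as a script in the same session (a minimal sketch):
-- #[s2]
commit
select * from _g1
-- (Output: 1, 2, 3)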
As another example, truncate table _g1 and start #s2 first:
Set Transaction Isolation Level Snapshot
Begin Tran
select * from _g1
Output: no records.
Here the database engine maintains its own version of table _g1; since there are no records in _g1, there is nothing for it in tempdb.
In #s1,
insert _g1 select 1
select * from _g1
(Output: 1)
In #s2, whether you simply run
select * from _g1
or run the whole script again, the output is still nothing, because we have not committed or rolled back, so the session keeps reading its own version from tempdb. After a commit or rollback, that version is refreshed, and the output in #s2 will be 1.
Given two transactions:
T1
set transaction isolation level repeatable read;
begin transaction
select * from tmp where val=1;
update tmp set txt='rerwer11' where val=1;
waitfor delay '00:00:7';
commit;
T2
set transaction isolation level repeatable read;
begin transaction
select * from tmp where val=2;
update tmp set txt='rerwer11' where val=2;
commit;
Start T1, and while it is executing, launch T2. I thought that the first transaction locks only the rows with val=1, and thus the second transaction should not be blocked, because it processes other rows. But it turned out that the second transaction waits for the first one to complete.
If I use the default isolation level (READ COMMITTED) for both of them and run the update with an XLOCK hint, everything works as I expected: the second one gets blocked only if it tries to read rows with val=1.
First of all, isolation levels do not change the locking behaviour of DDL and DML statements; they mainly affect how SELECT statements read data. Secondly, an update will not block the whole table unless factors like a missing index (and therefore a table scan) or lock escalation come into play.
You are getting blocking in your example because of the REPEATABLE READ isolation level, which keeps the shared locks taken by the SELECT until the transaction is committed.
Coming to your example:
1. The SELECT will not block the whole table, but conflicting locks on the rows it read are not allowed until the transaction is finished.
2. If your UPDATE acquires more than 5000 locks, lock escalation will block the whole table (then not even a SELECT gets through).
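If the blocking comes from a table scan because tmp has no index on val (an assumption; the table definition is not shown in the question), adding one lets each transaction take row locks only on the rows it actually reads, so T1 and T2 no longer collide:
-- hypothetical index; adjust the name to match your schema
create index IX_tmp_val on tmp (val);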
This question is related to "Is a stored procedure call inside a SQL Server trigger implicitly thread safe and atomic?", so I don't know whether I should re-post the same code. Be that as it may, here's the deal.
As it stands, the SQL Server trigger is an INSTEAD OF INSERT for the moment. It inserts data into a table called Foo. Then the trigger calls a stored procedure. One part of the stored procedure selects the last record inserted into Foo:
-- New transaction in stored procedure
BEGIN TRANSACTION
...
DECLARE @FooID INT
SELECT
TOP 1 @FooID = ID
FROM
Foo
ORDER BY
ID DESC
...
COMMIT TRANSACTION
Let's say two INSERT statements are executed at the same time (let's call the two INSERT transactions T1 and T2 for simplification). That's two simultaneous trigger calls. The trigger and stored procedure are both atomic in my case.
But do I need to worry about isolation for the SELECT statement in the stored procedure? Is it guaranteed that the last record inserted will be correctly selected? Or could I run into a situation where T1 selects the T2 record and vice versa?
Thank you.
Isolation levels are well covered in the MSDN documentation (Transaction Isolation Levels), and they most definitely can affect how the stored procedures operate. Also, as mentioned yesterday, the stored procedure in the trigger may not see the insert that caused the trigger.
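If the goal is for each trigger call to see only its own row, one alternative (not from the answer above, and assuming Foo.ID is an IDENTITY column; the other column name is a placeholder) is to capture the new ID inside the INSTEAD OF INSERT trigger with an OUTPUT clause and pass it to the stored procedure, instead of re-reading Foo with TOP 1:
-- inside the INSTEAD OF INSERT trigger
DECLARE @NewIDs TABLE (ID INT)

INSERT INTO Foo (SomeColumn)                  -- SomeColumn is a placeholder
OUTPUT inserted.ID INTO @NewIDs (ID)
SELECT SomeColumn FROM inserted

-- @NewIDs now holds exactly the ID(s) created by this call; pass them to the procedure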
I've come across a problem while learning transaction isolation levels in SQL server.
The problem is that after I run this code (and it finishes without errors):
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN T1;
SELECT (...)
WAITFOR DELAY '00:00:5'
SELECT (...)
WAITFOR DELAY '00:00:3'
COMMIT TRAN T1;
I want to run this query:
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION T2;
INSERT (...)
INSERT (...)
COMMIT TRANSACTION T2;
But it just says "Executing query", and does nothing.
I think it's because the lock on the tables somehow continues after the first transaction has been finished. Can someone help?
Of course the selects and the inserts refer to the same tables.
Either the first tran is still open (close the window to make sure it is not), or some other tran is open (exec sp_who2). You can't suppress X-locks taken by DML because SQL Server needs those locks during rollback.
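To check for an orphaned transaction, you can run something like this (a sketch; the database name is a placeholder):
-- oldest active transaction in the database, if any
DBCC OPENTRAN ('YourDatabase')

-- sessions that currently have an open transaction
SELECT s.session_id, s.login_name, t.transaction_id
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s ON s.session_id = t.session_id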
#usr offers good possibilities.
A related specific possibility is that you selected only part of the first transaction to execute while tinkering - i.e. executed BEGIN TRAN T1 and never executed COMMIT TRAN T1. It happens - part of Murphy's Law I think. Try executing just COMMIT TRAN T1, then re-trying the second snippet.
The following worked just fine for me on repeated, complete executions in a single session:
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN T1;
SELECT * from tbl_A
WAITFOR DELAY '00:00:5'
SELECT * from tbl_B
WAITFOR DELAY '00:00:3'
COMMIT TRAN T1;
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION T2;
INSERT tbl_A (ModifiedDate) values (GETDATE())
INSERT tbl_B (ModifiedDate) values (GETDATE())
INSERT tbl_A (ModifiedDate) select top 1 ModifiedDate from tbl_A
INSERT tbl_B (ModifiedDate) select top 1 ModifiedDate from tbl_B
COMMIT TRANSACTION T2;
1 - SET IMPLICIT_TRANSACTIONS is usually OFF unless you SET ANSI_DEFAULTS to ON, in which case it will be ON. Thus, you can remove this extra statement if it is not needed.
2 - I agree with Aaron. READ UNCOMMITTED (NOLOCK) should only be used with SELECT statements. However, it can lead to invalid results: it is prone to missing data, reading data twice, or scan errors.
Read Committed Snapshot Isolation (RCSI) is a better option, at the expense of tempdb (version store) space. It allows your reports (readers) not to be blocked by transactions (writers); see the sketch after this list.
3 - SET TRANSACTION ISOLATION LEVEL SERIALIZABLE uses the most restrictive locking and therefore increases the chances of blocking.
Why use this low concurrency isolation level with two INSERT statements?
I can understand using this level to UPDATE multiple tables. For instance, a bank transaction. Debit one row and Credit another row. Two tables. No one has access to the records until the transaction is complete.
In short, I would use READ COMMITTED isolation level for the insert statements. More than likely, the data being inserted is different.
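Enabling RCSI (point 2 above) is a database-level setting; a minimal sketch, where YourDatabase is a placeholder and WITH ROLLBACK IMMEDIATE rolls back other open transactions so the change can complete:
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE
GO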
However, the whole picture is not here.
There is some type of blocking that is occurring. You need to find the root cause.
Here is a code snippet to look at locks and objects that are locked.
--
-- Locked object details
--
-- Old school technique
EXEC sp_lock
GO
-- Lock details
SELECT
resource_type, resource_associated_entity_id,
request_status, request_mode,request_session_id,
resource_description
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('AdventureWorks2012')
GO
-- Page/Key details
SELECT object_name(object_id) as object_nm, *
FROM sys.partitions
WHERE hobt_id = 72057594047037440
GO
-- Object details
SELECT object_name(1266103551)
GO
If you still need help, please identify the two blocking transactions and the locks. Please post this information.