Getting deadlocks on an MS SQL stored procedure performing a read/update (where to put code to handle deadlocks) - sql

I have to admit I'm just learning about properly handling deadlocks but based on suggestions I read, I thought this was the proper way to handle it. Basically I have many processes trying to 'reserve' a row in the database for an update. So I first read for an available row, then write to it. Is this not the right way? If so, how do I need to fix this SP?
CREATE PROCEDURE [dbo].[reserveAccount]
-- Add the parameters for the stored procedure here
@machineId varchar(MAX)
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
declare @id BIGINT;
set @id = (select min(id) from Account_Data where passfail is null and reservedby is null);
update Account_data set reservedby = @machineId where ID = @id;
COMMIT TRANSACTION;
END

You can write this as a single statement, which may fix the update problem:
update Account_data
set reservedby = @machineId
where ID = (select min(id) from Account_Data where passfail is null and reservedby is null);
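If the caller also needs to know which row it reserved, an OUTPUT clause can return the id from the same statement (a sketch, assuming the caller reads the returned result set):
update Account_data
set reservedby = @machineId
output inserted.ID
where ID = (select min(id) from Account_Data where passfail is null and reservedby is null);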

Well, your problem is that you have two statements - a select and an update. If those run concurrently, the select will take a read lock and the update will then demand a write lock. When two machines do this at the same time, they deadlock.
The simple solution is to make the initial select demand an update lock (WITH (ROWLOCK, UPDLOCK) as a hint). That may or may not work (it depends on what else goes on), but it is a good start.
The second step - if that fails - is to use an application-level lock (sp_getapplock) that makes sure a critical section always has only one owner and thus executes transactions serially.
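Applied to the procedure above, the first suggestion would look roughly like this (a sketch; only the SELECT changes):
set @id = (select min(id)
           from Account_Data with (rowlock, updlock)
           where passfail is null and reservedby is null);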

Related

SQL Locking not implemented correctly

I have implemented SQL tablockx locking in a procedure. It works fine when running on one server, but duplicate policyNumber values occur when requests come from two different servers.
declare @plantype varchar(max),
@transid varchar(max),
@IsStorySolution varchar(10),
@outPolicyNumber varchar(max) output,
@status int output -- 0 means error and 1 means success
)
as
begin
BEGIN TRANSACTION
Declare @policyNumber varchar(100);
Declare @polseqid int;
-- GET POLICY NUMBER ON THE BASIS OF STORY SOLUTION..
IF (UPPER(@IsStorySolution)='Y')
BEGIN
select top 1 @policyNumber=Policy_no from PLAN_POL_NO with (tablockx, holdlock) where policy_no like '9%'
and pol_id_seq is null and status='Y';
END
ELSE
BEGIN
select top 1 @policyNumber=pp.Policy_no from PLAN_POL_NO pp with (tablockx, holdlock) ,PLAN_TYP_MST pt where pp.policy_no like PT.SERIES+'%'
and pt.PLAN_TYPE in (''+ISNULL(@plantype,'')+'') and pol_id_seq is null and pp.status='Y'
END
-- GET POL_SEQ_ID NUMBER
select @polseqid=dbo.Sequence();
--WAITFOR DELAY '00:00:03';
set @policyNumber= ISNULL(@policyNumber,'');
-- UPDATE POLICY ID INFORMATION...
Update PLAN_POL_NO set status='N',TRANSID =@transid , POL_ID_SEQ=ISNULL(@polseqid,0) where Policy_no =@policyNumber
set @outPolicyNumber=@policyNumber;
if(@@ERROR<>0) begin GOTO Fail end
COMMIT Transaction
set @status=1;
return;
Fail:
If @@TRANCOUNT>0
begin
Rollback transaction
set @status=0;
return;
end
end
This is the function I have called:
CREATE function [dbo].[Sequence]()
returns int
as
begin
declare @POL_ID int
/***************************************
-- Schema name is added with table name in below query
-- as there are two table with same name (PLAN_POL_NO)
-- on different schema (dbo & eapp).
*****************************************/
select @POL_ID=isnull(MAX(POL_ID_SEQ),2354) from dbo.PLAN_POL_NO
return @POL_ID+1
end
The problem you are facing is because concurrent requests are both getting the same POL_ID_SEQ from your table dbo.PLAN_POL_NO.
There are multiple solutions to your problem, but I can think of two that might help you and require no or only small code changes:
Using a higher transaction isolation level instead of table hints.
In your stored procedure you can use the following:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
This will make sure that any data read/modified during the SP block is transactionally consistent and avoids phantom reads, duplicate records etc. It will potentially create more deadlocks, and if these tables are heavily queried/updated throughout your application you might have a whole new set of issues.
Make sure your update to the dbo.PLAN_POL_NO only succeeds if the sequence has not changed. If it has changed, error out (if it changed it means a concurrent transaction obtained the ID and completed first)
Something like this:
Update dbo.PLAN_POL_NO
SET status ='N',
TRANSID = @transid,
POL_ID_SEQ = ISNULL(@polseqid,0)
WHERE Policy_no = @policyNumber
AND POL_ID_SEQ IS NULL
IF @@ROWCOUNT <> 1
BEGIN
-- Update failed, error out and let the SP rollback the transaction
END
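The "error out" branch inside that IF could, for example, raise an error and reuse the existing Fail label (a sketch; the message text is only illustrative):
RAISERROR('Policy number was already taken by a concurrent transaction - retry.', 16, 1)
GOTO Fail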
A WITH (HOLDLOCK) option in the function might suffice.
The HOLDLOCK in the first queries may be applied at a row or page level that doesn't include the row of interest queried in the function.
However, the function is not reliable since it is not self-contained. I'd be looking to redesign this such that the Sequence can generate AND "take" the sequence number before returning it. The current design is fragile at best.
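One way to make the "take" atomic (a sketch only; dbo.PolicySequence is a hypothetical one-row counter table, not part of the original schema) is to increment and read the counter in a single UPDATE:
DECLARE @POL_ID int;
UPDATE dbo.PolicySequence
SET @POL_ID = LastValue = LastValue + 1;  -- increments and captures the new value atomically
On SQL Server 2012 and later, a native SEQUENCE object (CREATE SEQUENCE / NEXT VALUE FOR) achieves the same thing without a counter table.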

Conditionally inserting records into a table in multithreaded environment based on a count

I am writing a T-SQL stored procedure that conditionally adds a record to a table only if the number of similar records is below a certain threshold, 10 in the example below. The problem is this will be run from a web application, so it will run on multiple threads, and I need to ensure that the table never has more than 10 similar records.
The basic gist of the procedure is:
BEGIN
DECLARE @c INT
SELECT @c = count(*)
FROM foo
WHERE bar = @a_param
IF @c < 10
BEGIN
INSERT INTO foo
(bar)
VALUES (@a_param)
END
END
I think I could solve any potential concurrency problems by replacing the select statement with:
SELECT @c = count(*) FROM foo WITH (TABLOCKX, HOLDLOCK) WHERE bar = @a_param
But I am curious if there are any methods other than lock hints for managing concurrency problems in T-SQL.
One option would be to use the sp_getapplock system stored procedure. You can place your critical section logic in a transaction and use the built-in locking of SQL Server to ensure synchronized access.
Example:
CREATE PROC MyCriticalWork(@MyParam INT)
AS
DECLARE @LockRequestResult INT
SET @LockRequestResult=0
DECLARE @MyTimeoutMiliseconds INT
SET @MyTimeoutMiliseconds=5000 --Wait only five seconds max, then time out
BEGIN TRAN
EXEC @LockRequestResult=SP_GETAPPLOCK 'MyCriticalWork','Exclusive','Transaction',@MyTimeoutMiliseconds
IF(@LockRequestResult>=0)BEGIN
/*
DO YOUR CRITICAL READS AND WRITES HERE
*/
--Release the lock
COMMIT TRAN
END ELSE
ROLLBACK TRAN
Use SERIALIZABLE. By definition it provides you the illusion that your transaction is the only transaction running. Be aware that this might result in blocking and deadlocking. In fact this SQL code is a classic candidate for deadlocking: Two transactions might first read a set of rows, then both will try to modify that set of rows. Locking hints are the classic way of solving that problem. Retry also works.
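A retry loop for deadlock victims (error 1205) could look roughly like this (a sketch; the count-then-insert logic from the question is assumed to go inside the TRY block):
DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        -- count-then-insert logic goes here
        COMMIT TRAN;
        SET @retry = 0;   -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;
        IF ERROR_NUMBER() = 1205 AND @retry > 1
            SET @retry = @retry - 1;  -- deadlock victim: try again
        ELSE
            THROW;  -- out of retries, or a different error (THROW needs SQL Server 2012+)
    END CATCH
END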
As stated in the comment: why are you trying to insert on multiple threads? You cannot write to a table any faster on multiple threads.
But you don't need a declare
insert into [Table_1] (ID, fname, lname)
select 3, 'fname', 'lname'
from [Table_1]
where ID = 3
having COUNT(*) <= 10
If you need to take a lock then do so
The data is not 3NF
Should start any design with a proper data model
Why rule out table lock?
That could very well be the best approach
Really, what are the chances?
Even without a lock you would have to have two at a count of 9 submit at exactly the same time. Even then it would stop at 11. Is the 10 an absolute hard number?

Deadlock on query that is executed simultaneously

I've got a stored procedure that does the following (Simplified):
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
DECLARE @intNo int
SET @intNo = (SELECT MAX(intNo) + 1 FROM tbl)
INSERT INTO tbl(intNo)
Values (@intNo)
SELECT intNo
FROM tbl
WHERE (intBatchNumber = @intNo - 1)
COMMIT TRANSACTION
My issue is that when two or more users execute this at the same time, I am getting deadlocks. As I understand it, the moment I do my first select in the proc, this should create a lock on tbl. If the second procedure is then called while the first procedure is still executing, it should wait for it to complete, right?
At the moment this is causing a deadlock, any ideas?
The insert query requires a different lock than the select. The lock for select blocks a second insert, but it does not block a second select. So both queries can start with the select, but they both block on the other's insert.
You can solve this by asking the first query to lock the entire table:
SET @intNo = (SELECT MAX(intNo) + 1 FROM tbl with (tablockx))
^^^^^^^^^^^^^^^
This will make the second transaction's select wait for the complete first transaction to finish.
Make it simpler so you have one statement and no transaction
--BEGIN TRANSACTION not needed
INSERT INTO tbl(intNo)
OUTPUT INSERTED.intNo
SELECT MAX(intNo) + 1 FROM tbl WITH (TABLOCK)
--COMMIT TRANSACTION not needed
Although, why aren't you using IDENTITY...?
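For reference, the IDENTITY approach would look roughly like this (a sketch; the table definition is a minimal stand-in for the simplified tbl in the question):
CREATE TABLE tbl (intNo INT IDENTITY(1,1) PRIMARY KEY);

INSERT INTO tbl DEFAULT VALUES;
SELECT SCOPE_IDENTITY() AS intNo;  -- the value generated by this session's insert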

sql server: Is this nesting in a transaction sufficient for getting a unique number from the database?

I want to generate a unique number from a table.
It has to be thread safe, of course: if I check for the last number and get '3', and then store '4' in the database, I don't want anybody else, in between those two actions (get the number and store it one higher in the database), to also get '3' back and then also store '4'.
So I thought, put it in a transaction like this:
begin transaction
declare @maxNum int
select @maxNum = MAX(SequenceNumber) from invoice
where YEAR = @year
if @maxNum is null
begin
set @maxNum = 0
end
set @maxNum = @maxNum + 1
INSERT INTO [Invoice]
([Year]
,[SequenceNumber]
,[DateCreated])
VALUES
(@year
,@maxNum
,GETUTCDATE()
)
commit transaction
return @maxNum
But I wondered: is it enough to put it in a transaction?
My first thought was that it locks this SP for usage by other people, but is that correct? How can SQL Server know what to lock at the first step?
Will this construction guarantee that nobody else does the select @maxNum part just as I am updating the @maxNum value, receiving the same @maxNum as I did, which would leave me in trouble?
I hope you understand what I want to accomplish, and can tell me whether I chose the right solution.
EDIT:
also described as 'How to Single-Thread a stored procedure'
If you want to have the year and a sequence number stored in the database, and create an invoice number from that, I'd use:
an InvoiceYear column (which could totally be computed as YEAR(InvoiceDate))
an InvoiceID INT IDENTITY column which you could reset every year to 1
create a computed column InvoiceNumber as:
ALTER TABLE dbo.InvoiceTable
ADD InvoiceNumber AS CAST(InvoiceYear AS VARCHAR(4)) + '-' +
RIGHT('000000' + CAST(InvoiceID AS VARCHAR(6)), 6) PERSISTED
This way, you automagically get invoice numbers:
2010-000001
......
2010-001234
......
2010-123456
Of course, if you need more than 6 digits (= 1 million invoices) - just adjust the RIGHT() and CAST() statements for the InvoiceID column.
Also, since this is a persisted computed column, you can index it for fast retrieval.
This way: you don't have to worry about concurrency, stored procs, transactions and stuff like that - SQL Server will do that for you - for free!
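Resetting the IDENTITY each year, as suggested above, could be done with a scheduled reseed (a sketch, using the dbo.InvoiceTable name from the example):
-- Run once at the start of each year, e.g. from a SQL Agent job
DBCC CHECKIDENT ('dbo.InvoiceTable', RESEED, 0);  -- next InvoiceID inserted will be 1, since the table already has rows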
No, it's not enough. The shared lock set by the select will not prevent anyone from reading that same value at the same time.
Change this:
select @maxNum = MAX(SequenceNumber) from invoice where YEAR = @year
To this:
select @maxNum = MAX(SequenceNumber) from invoice with (updlock, holdlock) where YEAR = @year
This way you replace the shared lock with an update lock, and two update locks are not compatible with each other.
The holdlock means that the lock is to be held until the end of the transaction. So you do still need the transaction bit.
Note that this will not help if there's some other procedure that also wants to do the update. If that other procedure reads the value without providing the updlock hint, it will still be able to read the previous value of the counter. This may be a good thing, as it improves concurrency in scenarios where the other readers do not intend to make an update later, but it also may be not what you want, in which case either update all procedures to use updlock, or use xlock instead to place an exclusive lock, not compatible with shared locks.
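The exclusive-lock variant mentioned above would then be (a sketch):
select @maxNum = MAX(SequenceNumber) from invoice with (xlock, holdlock) where YEAR = @year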
As it turned out, I didn't want to lock the table; I just wanted to execute the stored procedure one at a time.
In C# code I would place a lock on another object, and that's what was discussed here:
http://www.sqlservercentral.com/Forums/Topic357663-8-1.aspx
So that's what I used:
declare @Result int
EXEC @Result =
sp_getapplock @Resource = 'holdit1', @LockMode = 'Exclusive', @LockTimeout = 10000 --Time to wait for the lock
IF @Result < 0
BEGIN
ROLLBACK TRAN
RAISERROR('Procedure Already Running for holdit1 - Concurrent execution is not supported.',16,9)
RETURN(-1)
END
where 'holdit1' is just a name for the lock.
@Result returns 0 or 1 when it succeeds in getting the lock (0 when the lock is granted immediately, 1 when it is granted after waiting). Because the lock owner defaults to the transaction, the lock is released when the surrounding transaction commits or rolls back.

Having TRANSACTION In All Queries

Do you think always having a transaction around the SQL statements in a stored procedure is a good practice? I'm just about to optimize this legacy application in my company, and one thing I found is that every stored procedure has BEGIN TRANSACTION. Even a procedure with a single select or update statement has one. I thought it would be nice to have BEGIN TRANSACTION if performing multiple actions, but not just one action. I may be wrong, which is why I need someone else to advise me. Thanks for your time, guys.
It is entirely unnecessary, as each SQL statement executes atomically, i.e., as if it were already running in its own transaction. In fact, opening unnecessary transactions can lead to increased locking, even deadlocks. Forgetting to match COMMITs with BEGINs can leave a transaction open for as long as the connection to the database is open and interfere with other transactions in the same connection.
Such coding almost certainly means that whoever wrote the code was not very experienced in database programming and is a sure smell that there may be other problems as well.
The only possible reason I could see for this is if you have the possibility of needing to roll-back the transaction for a reason other than a SQL failure.
However, if the code is literally
begin transaction
statement
commit
Then I see absolutely no reason to use an explicit transaction, and it's probably being done because it's always been done that way.
I don't know of any benefit of not just using auto commit transactions for these statements.
Possible disadvantages of using explicit transactions everywhere might be that it just adds clutter to the code and so makes it less easy to see when an explicit transaction is being used to ensure correctness over multiple statements.
Additionally it increases the risk that a transaction is left open holding locks unless care is taken (e.g. with SET XACT_ABORT ON).
Also, there is a minor performance implication, as shown in @8kb's answer. This illustrates it another way using the Visual Studio profiler.
Setup
(Testing against an empty table)
CREATE TABLE T (X INT)
Explicit
SET NOCOUNT ON
DECLARE @X INT
WHILE ( 1 = 1 )
BEGIN
BEGIN TRAN
SELECT @X = X
FROM T
COMMIT
END
Auto Commit
SET NOCOUNT ON
DECLARE @X INT
WHILE ( 1 = 1 )
BEGIN
SELECT @X = X
FROM T
END
Both of them end up spending time in CMsqlXactImp::Begin and CMsqlXactImp::Commit but for the explicit transactions case it spends a significantly greater proportion of the execution time in these methods and hence less time doing useful work.
+--------------------------------+----------+----------+
| | Auto | Explicit |
+--------------------------------+----------+----------+
| CXStmtQuery::ErsqExecuteQuery | 35.16% | 25.06% |
| CXStmtQuery::XretSchemaChanged | 20.71% | 14.89% |
| CMsqlXactImp::Begin | 5.06% | 13% |
| CMsqlXactImp::Commit | 12.41% | 24.03% |
+--------------------------------+----------+----------+
When performing multiple insert/update/delete statements, it is better to have a transaction to ensure atomicity: it ensures that either all the tasks of the operation are executed or none are.
For a single insert/update/delete statement, it depends upon what kind of operation (from the business-layer perspective) you are performing and how important it is. If you are performing some calculation before the single insert/update/delete, it is better to use a transaction; some data may have changed after you retrieved the data for the insert/update/delete.
One plus point is you can add another INSERT (for example) and it's already safe.
Then again, you also have the problem of nested transactions if a stored procedure calls another one. An inner rollback will cause error 266.
If every call is simple CRUD with no nesting then it's pointless; but if you nest or have multiple writes per TXN then it's good to have a consistent template, as sketched below.
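Such a template might look roughly like this (a sketch only, not a prescribed standard; the procedure name is hypothetical). It opens and commits the transaction only at the outermost level and uses XACT_ABORT plus TRY/CATCH so a nested rollback does not trigger error 266:
CREATE PROCEDURE dbo.SomeWriteProc  -- hypothetical name
AS
BEGIN
    SET NOCOUNT, XACT_ABORT ON;
    DECLARE @OuterTran INT = @@TRANCOUNT;   -- remember whether a caller already opened a transaction
    BEGIN TRY
        IF @OuterTran = 0 BEGIN TRAN;
        -- ... writes go here ...
        IF @OuterTran = 0 COMMIT TRAN;
    END TRY
    BEGIN CATCH
        IF @OuterTran = 0 AND @@TRANCOUNT > 0 ROLLBACK TRAN;
        THROW;  -- re-raise for the caller (SQL Server 2012+)
    END CATCH
END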
You mentioned that you'll be optimizing this legacy app.
One of the first, and easiest, things you can do to improve performance is remove all the BEGIN TRAN and COMMIT TRAN for the stored procedures that only do SELECTs.
Here is a simple test to demonstrate:
/* Compare basic SELECT times with and without a transaction */
DECLARE @date DATETIME2
DECLARE @noTran INT
DECLARE @withTran INT
SET @noTran = 0
SET @withTran = 0
DECLARE @t TABLE (ColA INT)
INSERT @t VALUES (1)
DECLARE
@count INT,
@value INT
SET @count = 1
WHILE @count < 1000000
BEGIN
SET @date = GETDATE()
SELECT @value = ColA FROM @t WHERE ColA = 1
SET @noTran = @noTran + DATEDIFF(MICROSECOND, @date, GETDATE())
SET @date = GETDATE()
BEGIN TRAN
SELECT @value = ColA FROM @t WHERE ColA = 1
COMMIT TRAN
SET @withTran = @withTran + DATEDIFF(MICROSECOND, @date, GETDATE())
SET @count = @count + 1
END
SELECT
@noTran / 1000000. AS Seconds_NoTransaction,
@withTran / 1000000. AS Seconds_WithTransaction
/** Results **/
Seconds_NoTransaction Seconds_WithTransaction
--------------------------------------- ---------------------------------------
14.23600000 18.08300000
You can see there is a definite overhead associated with the transactions.
Note: this assumes these stored procedures are not using any special isolation levels or locking hints (for something like handling pessimistic concurrency). In that case, obviously you would want to keep the transactions.
So to answer the question, I would only leave in the transactions where you are actually attempting to preserve the integrity of the data modifications in case of an error in the code, SQL Server, or the hardware.
I can only say that placing a transaction block like this in every stored procedure might be a novice's work.
A transaction should be placed only around a block that has more than one insert/update statement; other than that, there is no need for a transaction block in the stored procedure.
BEGIN TRANSACTION / COMMIT syntax shouldn't be used in every stored procedure by default unless you are trying to cover the following scenarios:
You include the WITH MARK option because you want to support restoring the database from a backup to a specific point in time.
You intend to port the code from SQL Server to another database platform like Oracle. Oracle does not commit transactions by default.