SQL locking not implemented correctly

I have implemented TABLOCKX locking in a stored procedure. It works fine when everything runs on one server, but duplicate policy numbers occur when requests come from two different servers.
declare @plantype varchar(max),
@transid varchar(max),
@IsStorySolution varchar(10),
@outPolicyNumber varchar(max) output,
@status int output -- 0 means error and 1 means success
)
as
begin
BEGIN TRANSACTION
Declare @policyNumber varchar(100);
Declare @polseqid int;
-- GET POLICY NUMBER ON THE BASIS OF STORY SOLUTION..
IF (UPPER(@IsStorySolution)='Y')
BEGIN
select top 1 @policyNumber=Policy_no from PLAN_POL_NO with (tablockx, holdlock) where policy_no like '9%'
and pol_id_seq is null and status='Y';
END
ELSE
BEGIN
select top 1 @policyNumber=pp.Policy_no from PLAN_POL_NO pp with (tablockx, holdlock) ,PLAN_TYP_MST pt where pp.policy_no like PT.SERIES+'%'
and pt.PLAN_TYPE in (''+ISNULL(@plantype,'')+'') and pol_id_seq is null and pp.status='Y'
END
-- GET POL_SEQ_ID NUMBER
select @polseqid=dbo.Sequence();
--WAITFOR DELAY '00:00:03';
set @policyNumber= ISNULL(@policyNumber,'');
-- UPDATE POLICY ID INFORMATION...
Update PLAN_POL_NO set status='N',TRANSID =@transid , POL_ID_SEQ=ISNULL(@polseqid,0) where Policy_no =@policyNumber
set @outPolicyNumber=@policyNumber;
if(@@ERROR<>0) begin GOTO Fail end
COMMIT Transaction
set @status=1;
return;
Fail:
If @@TRANCOUNT>0
begin
Rollback transaction
set @status=0;
return;
end
end
This is the function I am calling:
CREATE function [dbo].[Sequence]()
returns int
as
begin
declare @POL_ID int
/***************************************
-- Schema name is added with the table name in the query below
-- as there are two tables with the same name (PLAN_POL_NO)
-- on different schemas (dbo & eapp).
*****************************************/
select @POL_ID=isnull(MAX(POL_ID_SEQ),2354) from dbo.PLAN_POL_NO
return @POL_ID+1
end

The problem you are facing is because concurrent requests are both getting the same POL_ID_SEQ from your table dbo.PLAN_POL_NO.
There are multiple solutions to your problem, but I can think of two that might help you and require no or only small code changes:
Using a higher transaction isolation level instead of table hints.
In your stored procedure you can use the following:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
This will make sure that any data read or modified during the SP block is transactionally consistent and avoids phantom reads, duplicate records, etc. It can also increase deadlocks, and if these tables are heavily queried/updated throughout your application you might get a whole new set of issues.
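For example, a minimal sketch of where the statement would go; the names are from the question, and the exact placement is my assumption:

-- Sketch: raise the isolation level for the whole transaction instead of
-- putting TABLOCKX/HOLDLOCK hints on individual queries.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION
select top 1 @policyNumber=Policy_no from PLAN_POL_NO where policy_no like '9%'
and pol_id_seq is null and status='Y';
-- ... rest of the procedure unchanged ...
COMMIT TRANSACTION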
Make sure your update to dbo.PLAN_POL_NO only succeeds if the sequence has not changed. If it has changed, error out (a change means a concurrent transaction obtained the ID and completed first).
Something like this:
Update dbo.PLAN_POL_NO
SET status ='N',
TRANSID = @transid,
POL_ID_SEQ = ISNULL(@polseqid,0)
WHERE Policy_no = @policyNumber
AND POL_ID_SEQ = @polseqid - 1
IF @@ROWCOUNT <> 1
BEGIN
-- Update failed; raise an error and let the SP roll back the transaction
RAISERROR('Policy number was taken by a concurrent transaction.', 16, 1)
END

A WITH (HOLDLOCK) hint in the function might suffice.
The HOLDLOCK from the first queries may be applied at a row or page level that then doesn't include the row of interest queried in the function.
However, the function is not reliable since it is not self-contained. I'd be looking to redesign this so that Sequence can generate AND "take" the sequence number before returning it. The current design is fragile at best.
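For example (a sketch only, assuming SQL Server 2012+ for the SEQUENCE object; the sequence name and the READPAST hint are my assumptions, not from the question), the generate-and-take step could be collapsed into one atomic statement:

-- One-time setup: a real sequence object replaces the MAX()+1 function.
-- Starting at 2355 continues from the function's ISNULL(MAX(...), 2354) + 1.
CREATE SEQUENCE dbo.POL_SEQ START WITH 2355 INCREMENT BY 1;

-- Claim a free policy number and stamp it in a single statement.
-- READPAST lets concurrent callers skip rows another session has locked
-- instead of queueing behind them.
UPDATE TOP (1) pp
SET status = 'N',
    TRANSID = @transid,
    POL_ID_SEQ = NEXT VALUE FOR dbo.POL_SEQ,
    @outPolicyNumber = pp.Policy_no
FROM PLAN_POL_NO pp WITH (UPDLOCK, READPAST)
WHERE pp.policy_no like '9%' and pp.pol_id_seq is null and pp.status = 'Y';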


SQL Server transaction fails and table gets locked

My stored procedure is like this
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[Bid_Create]
@BidType int,
@ClientId int,
@BidDate date,
@EmailNotificationStatus int,
@BidStatus int,
@BidAmount int,
@ProductId int
AS
DECLARE @highestBid int;
BEGIN
BEGIN TRY
BEGIN TRANSACTION
SET NOCOUNT ON;
SET @highestBid = (SELECT Max(wf_bid.BidAmount) AS HighestBidAmount
FROM wf_bid
WHERE wf_bid.ProductId = @ProductId)
IF @highestBid is NULL OR @highestBid < @BidAmount
BEGIN
UPDATE wf_bid
SET BidStatus = '1'
WHERE Id = (SELECT TOP 1 id
FROM [wf_bid]
WHERE BidAmount = (SELECT MAX(BidAmount)
FROM [wf_bid]
WHERE ProductId = @ProductId
AND ClientId = @ClientId))
INSERT INTO wf_bid (BidType, ClientId, BidDate, EmailNotificationStatus, BidStatus)
VALUES (@BidType, @ClientId, @BidDate, @EmailNotificationStatus, @BidStatus)
END
END
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
END CATCH
END
Everything looks okay to me. But once I run this, the table gets locked. No other query on the table works (I think it is because the transaction is not getting committed).
Can anyone point out what is wrong with this query? And how can I unlock the table?
But once I run this, table is getting locked
This may be due to the update taking many locks, which in turn may be due to the predicate not being sargable. Although the update's (U) locks are released as soon as a row fails the predicate, you will still experience blocking.
One more reason this update may block your whole table is lock escalation: once the transaction acquires more than roughly 5,000 locks, SQL Server escalates them to a table lock.
Another reason can be that your transaction fails after modifying many rows and has to do a lot of rollback work.
Those are the reasons I can think of where you could get the feeling the table is locked.
To troubleshoot, check locking and blocking using the query below:
select resource_type,resource_Database_id,
request_mode,request_type,request_Status,request_session_id
from sys.dm_tran_locks
where request_session_id=<<your update session id>>
Also, you are accessing the table many times to get the max; you can rewrite it like below:
;with cte
as
(
select top (1) with ties id, bidstatus
from wf_bid
where ProductId = @ProductId and ClientId = @ClientId
order by
row_number() over (partition by id order by BidAmount desc)
)
update cte
set bidstatus = 1

Getting deadlocks on MS SQL stored procedure performing a read/update (put code to handle deadlocks)

I have to admit I'm just learning about properly handling deadlocks but based on suggestions I read, I thought this was the proper way to handle it. Basically I have many processes trying to 'reserve' a row in the database for an update. So I first read for an available row, then write to it. Is this not the right way? If so, how do I need to fix this SP?
CREATE PROCEDURE [dbo].[reserveAccount]
-- Add the parameters for the stored procedure here
@machineId varchar(MAX)
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
declare @id BIGINT;
set @id = (select min(id) from Account_Data where passfail is null and reservedby is null);
update Account_data set reservedby = @machineId where ID = @id;
COMMIT TRANSACTION;
END
You can write this as a single statement, which may fix the update problem:
update Account_data
set reservedby = @machineId
where ID = (select min(id) from Account_Data where passfail is null and reservedby is null);
Well, your problem is that you have two statements: a select and an update. If those run concurrently, the select takes a read lock and the update then demands a write lock; when two machines do this at the same time, they deadlock.
The simple solution is to make the initial select demand an update lock (WITH (ROWLOCK, UPDLOCK) as a hint). That may or may not work (it depends on what else goes on), but it is a good start.
The second step, if that fails, is to use an application-level lock (sp_getapplock) that makes sure a critical section only ever has one owner and thus only executes transactions serially.
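Putting those two suggestions together, the procedure body might become something like this (a sketch using the names from the question):

-- Sketch: one statement, with the inner select taking an update lock so
-- two sessions cannot both pick the same row.
UPDATE Account_data
SET reservedby = @machineId
WHERE ID = (SELECT MIN(id)
            FROM Account_Data WITH (ROWLOCK, UPDLOCK)
            WHERE passfail IS NULL AND reservedby IS NULL);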

sql server: Is this nesting in a transaction sufficient for getting a unique number from the database?

I want to generate a unique number from a table.
It has to be thread safe, of course: when I check for the last number and get '3', and then store '4' in the database, I don't want anybody else, just in between those two actions (getting the number and storing it one higher), to also get '3' back and then also store '4'.
So I thought, put it in a transaction, like this:
begin transaction
declare @maxNum int
select @maxNum = MAX(SequenceNumber) from invoice
where YEAR = @year
if @maxNum is null
begin
set @maxNum = 0
end
set @maxNum = @maxNum + 1
INSERT INTO [Invoice]
([Year]
,[SequenceNumber]
,[DateCreated])
VALUES
(@year
,@maxNum
,GETUTCDATE()
)
commit transaction
return @maxNum
But I wondered, is that enough, to put it in a transaction?
My first thought was: it locks this SP for usage by other people, but is that correct? How can SQL Server know what to lock at the first step?
Will this construction guarantee that nobody else will do the select @maxNum part just when I am updating the @maxNum value, and at that moment receive the same @maxNum as I did, so I'm in trouble?
I hope you understand what I want to accomplish, and also whether I chose the right solution.
EDIT:
also described as 'How to Single-Thread a stored procedure'
If you want to have the year and a sequence number stored in the database, and create an invoice number from that, I'd use:
an InvoiceYear column (which could totally be computed as YEAR(InvoiceDate))
an InvoiceID INT IDENTITY column which you could reset every year to 1
create a computed column InvoiceNumber as:
ALTER TABLE dbo.InvoiceTable
ADD InvoiceNumber AS CAST(InvoiceYear AS VARCHAR(4)) + '-' +
RIGHT('000000' + CAST(InvoiceID AS VARCHAR(6)), 6) PERSISTED
This way, you automagically get invoice numbers:
2010-000001
......
2010-001234
......
2010-123456
Of course, if you need more than 6 digits (= 1 million invoices) - just adjust the RIGHT() and CAST() statements for the InvoiceID column.
Also, since this is a persisted computed column, you can index it for fast retrieval.
This way: you don't have to worry about concurrency, stored procs, transactions and stuff like that - SQL Server will do that for you - for free!
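The yearly reset of the IDENTITY value mentioned above could be done with DBCC CHECKIDENT; a sketch, assuming the dbo.InvoiceTable name from the ALTER TABLE example:

-- Run once at the start of each year. With rows already in the table,
-- the next insert after this reseed gets InvoiceID = 1.
DBCC CHECKIDENT ('dbo.InvoiceTable', RESEED, 0);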
No, it's not enough. The shared lock set by the select will not prevent anyone from reading that same value at the same time.
Change this:
select @maxNum = MAX(SequenceNumber) from invoice where YEAR = @year
To this:
select @maxNum = MAX(SequenceNumber) from invoice with (updlock, holdlock) where YEAR = @year
This way you replace the shared lock with an update lock, and two update locks are not compatible with each other.
The holdlock means that the lock is to be held until the end of the transaction. So you do still need the transaction bit.
Note that this will not help if there's some other procedure that also wants to do the update. If that other procedure reads the value without providing the updlock hint, it will still be able to read the previous value of the counter. This may be a good thing, as it improves concurrency in scenarios where the other readers do not intend to make an update later, but it also may be not what you want, in which case either update all procedures to use updlock, or use xlock instead to place an exclusive lock, not compatible with shared locks.
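For clarity, here is the question's block with that single change applied (everything else as posted):

begin transaction
declare @maxNum int
select @maxNum = MAX(SequenceNumber) from invoice with (updlock, holdlock)
where YEAR = @year
if @maxNum is null
begin
set @maxNum = 0
end
set @maxNum = @maxNum + 1
INSERT INTO [Invoice] ([Year], [SequenceNumber], [DateCreated])
VALUES (@year, @maxNum, GETUTCDATE())
commit transaction
return @maxNum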
As it turned out, I didn't want to lock the table; I just wanted to execute the stored procedure one at a time.
In C# code I would place a lock on another object, and that's what was discussed here:
http://www.sqlservercentral.com/Forums/Topic357663-8-1.aspx
So that's what I used:
declare @Result int
EXEC @Result =
sp_getapplock @Resource = 'holdit1', @LockMode = 'Exclusive', @LockTimeout = 10000 --Time to wait for the lock
IF @Result < 0
BEGIN
ROLLBACK TRAN
RAISERROR('Procedure Already Running for holdit1 - Concurrent execution is not supported.',16,9)
RETURN(-1)
END
where 'holdit1' is just a name for the lock.
@Result returns 0 or 1 if it succeeds in getting the lock (0 when it is granted immediately, 1 when it is granted after waiting).
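Note that with the default @LockOwner = 'Transaction' (which applies here, since the call happens inside a transaction), the lock is released automatically at COMMIT or ROLLBACK. A session-owned lock would need an explicit release; a sketch:

-- Only needed when the lock was taken with @LockOwner = 'Session';
-- transaction-owned locks are released when the transaction ends.
EXEC sp_releaseapplock @Resource = 'holdit1', @LockOwner = 'Session';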

SQLServer lock table during stored procedure

I've got a table where I need to auto-assign an ID 99% of the time (the other 1% rules out using an identity column it seems). So I've got a stored procedure to get next ID along the following lines:
select @nextid = lastid+1 from last_auto_id
check next available id in the table...
update last_auto_id set lastid = @nextid
Where the check has to check if users have manually used the IDs and find the next unused ID.
It works fine when I call it serially, returning 1, 2, 3 ... What I need to do is provide some locking where multiple processes call this at the same time. Ideally, I just need it to exclusively lock the last_auto_id table around this code so that a second call must wait for the first to update the table before it can run its select.
In Postgres, I can do something like 'LOCK TABLE last_auto_id;' to explicitly lock the table. Any ideas how to accomplish it in SQL Server?
Thanks in advance!
The following update increments your lastid by one and assigns the new value to your local variable in a single atomic statement.
Edit
thanks to Dave and Mitch for pointing out isolation level problems with the original solution.
UPDATE last_auto_id WITH (READCOMMITTEDLOCK)
SET @nextid = lastid = lastid + 1
Between you, you guys have answered my question. I'm putting in my own reply to collate the working solution I've got into one post. The key seems to have been the transaction approach, with locking hints on the last_auto_id table. Setting the transaction isolation level to serializable seemed to create deadlock problems.
Here's what I've got (edited to show the full code, so hopefully I can get some further answers...):
DECLARE @Pointer AS INT
DECLARE @NextId AS INT
BEGIN TRANSACTION
-- Check what the next ID to use should be
SELECT @NextId = LastId + 1 FROM Last_Auto_Id WITH (TABLOCKX) WHERE Name = 'CustomerNo'
-- Now check if this next ID already exists in the database
IF EXISTS (SELECT CustomerNo FROM Customer
WHERE ISNUMERIC(CustomerNo) = 1 AND CustomerNo = @NextId)
BEGIN
-- The next ID already exists - we need to find the next lowest free ID
CREATE TABLE #idtbl ( IdNo int )
-- Into temp table, grab all numeric IDs higher than the current next ID
INSERT INTO #idtbl
SELECT CAST(CustomerNo AS INT) FROM Customer
WHERE ISNUMERIC(CustomerNo) = 1 AND CustomerNo >= @NextId
ORDER BY CAST(CustomerNo AS INT)
-- Join the table with itself, based on the right hand side of the join
-- being equal to the ID on the left hand side + 1. We're looking for
-- the lowest record where the right hand side is NULL (i.e. the ID is
-- unused)
SELECT @Pointer = MIN( t1.IdNo ) + 1 FROM #idtbl t1
LEFT OUTER JOIN #idtbl t2 ON t1.IdNo + 1 = t2.IdNo
WHERE t2.IdNo IS NULL
END
UPDATE Last_Auto_Id SET LastId = @NextId WHERE Name = 'CustomerNo'
COMMIT TRANSACTION
SELECT @NextId
This takes out an exclusive table lock at the start of the transaction, which then successfully queues up any further requests until this request has updated the table and committed its transaction.
I've written a bit of C code to hammer it with concurrent requests from half a dozen sessions and it's working perfectly.
However, I do have one worry, which is the term locking 'hints': does anyone know if SQL Server treats this as a definite instruction or just a hint (i.e. maybe it won't always obey it)?
How about this solution? No table lock is required and it works perfectly!
DECLARE @NextId INT
UPDATE Last_Auto_Id
SET @NextId = LastId = LastId + 1
WHERE Name = 'CustomerNo'
SELECT @NextId
An UPDATE statement always takes locks to protect its update.
You might want to consider deadlocks. These usually happen when multiple users use the stored procedure simultaneously. To avoid deadlocks and make sure every query from the user succeeds, you need to handle update failures, and for that you need a TRY/CATCH. This works on SQL Server 2005 and later (TRY/CATCH is not available in SQL Server 2000).
DECLARE @Tries tinyint
SET @Tries = 1
WHILE @Tries <= 3
BEGIN
BEGIN TRANSACTION
BEGIN TRY
-- this line updates the last_auto_id
update last_auto_id set lastid = lastid+1
COMMIT
BREAK
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() as ErrorMessage
ROLLBACK
SET @Tries = @Tries + 1
CONTINUE
END CATCH
END
I prefer doing this using an identity field in a second table. If you make lastid an identity column, then all you have to do is insert a row into that table and select SCOPE_IDENTITY() to get your new value; you still have the concurrency safety of identity even though the id field in your main table is not an identity.
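A sketch of that idea, with hypothetical names (the dispenser table and its column are mine, not from the question):

-- One-time setup: a table whose only job is to hand out IDs.
CREATE TABLE id_dispenser (id INT IDENTITY(1,1) PRIMARY KEY);

-- Getting the next value needs no explicit locking; IDENTITY is
-- concurrency-safe by itself.
DECLARE @nextid INT;
INSERT INTO id_dispenser DEFAULT VALUES;
SET @nextid = SCOPE_IDENTITY();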

Having TRANSACTION In All Queries

Do you think always having a transaction around the SQL statements in a stored procedure is a good practice? I'm just about to optimize this legacy application in my company, and one thing I found is that every stored procedure has BEGIN TRANSACTION. Even a procedure with a single select or update statement has one. I thought it would be nice to have BEGIN TRANSACTION if performing multiple actions, but not just one action. I may be wrong, which is why I need someone else to advise me. Thanks for your time, guys.
It is entirely unnecessary, as each SQL statement executes atomically, i.e. as if it were already running in its own transaction. In fact, opening unnecessary transactions can lead to increased locking, even deadlocks. Forgetting to match COMMITs with BEGINs can leave a transaction open for as long as the connection to the database is open, interfering with other transactions on the same connection.
Such coding almost certainly means that whoever wrote the code was not very experienced in database programming and is a sure smell that there may be other problems as well.
The only possible reason I could see for this is if you have the possibility of needing to roll back the transaction for a reason other than a SQL failure.
However, if the code is literally
begin transaction
statement
commit
Then I see absolutely no reason to use an explicit transaction, and it's probably being done because it's always been done that way.
I don't know of any benefit of not just using auto commit transactions for these statements.
Possible disadvantages of using explicit transactions everywhere might be that it just adds clutter to the code and so makes it less easy to see when an explicit transaction is being used to ensure correctness over multiple statements.
Additionally it increases the risk that a transaction is left open holding locks unless care is taken (e.g. with SET XACT_ABORT ON).
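A sketch of that guard: with XACT_ABORT on, any runtime error rolls back the open transaction and aborts the batch, so locks are not left behind.

SET XACT_ABORT ON; -- a runtime error now rolls back the transaction automatically
BEGIN TRANSACTION;
-- ... statements ...
COMMIT TRANSACTION;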
Also there is a minor performance implication, as shown in #8kb's answer. The following illustrates it another way using the Visual Studio profiler.
Setup
(Testing against an empty table)
CREATE TABLE T (X INT)
Explicit
SET NOCOUNT ON
DECLARE @X INT
WHILE ( 1 = 1 )
BEGIN
BEGIN TRAN
SELECT @X = X
FROM T
COMMIT
END
Auto Commit
SET NOCOUNT ON
DECLARE @X INT
WHILE ( 1 = 1 )
BEGIN
SELECT @X = X
FROM T
END
Both of them end up spending time in CMsqlXactImp::Begin and CMsqlXactImp::Commit but for the explicit transactions case it spends a significantly greater proportion of the execution time in these methods and hence less time doing useful work.
+--------------------------------+----------+----------+
| | Auto | Explicit |
+--------------------------------+----------+----------+
| CXStmtQuery::ErsqExecuteQuery | 35.16% | 25.06% |
| CXStmtQuery::XretSchemaChanged | 20.71% | 14.89% |
| CMsqlXactImp::Begin | 5.06% | 13% |
| CMsqlXactImp::Commit | 12.41% | 24.03% |
+--------------------------------+----------+----------+
When performing multiple insert/update/delete statements, it is better to have a transaction to ensure the operation is atomic: either all of its tasks are executed or none are.
For a single insert/update/delete statement, it depends on what kind of operation (from the business-layer perspective) you are performing and how important it is. If you perform some calculation before the single insert/update/delete, then it is better to use a transaction, since the data may have changed after you retrieved it for the insert/update/delete.
One plus point is you can add another INSERT (for example) and it's already safe.
Then again, you also have the problem of nested transactions if a stored procedure calls another one. An inner rollback will cause error 266.
If every call is simple CRUD with no nesting then it's pointless; but if you nest calls or have multiple writes per transaction then it's good to have a consistent template.
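One common shape for such a template (a general pattern, not something from the question) opens and commits the transaction only at the outermost level, so nested calls don't trip error 266:

-- Sketch: only the outermost caller owns the transaction.
DECLARE @startedTran bit;
SET @startedTran = 0;
IF @@TRANCOUNT = 0
BEGIN
    BEGIN TRANSACTION;
    SET @startedTran = 1;
END
-- ... the procedure's real work, possibly calling other procedures ...
IF @startedTran = 1
    COMMIT TRANSACTION;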
You mentioned that you'll be optimizing this legacy app.
One of the first, and easiest, things you can do to improve performance is remove all the BEGIN TRAN and COMMIT TRAN for the stored procedures that only do SELECTs.
Here is a simple test to demonstrate:
/* Compare basic SELECT times with and without a transaction */
DECLARE @date DATETIME2
DECLARE @noTran INT
DECLARE @withTran INT
SET @noTran = 0
SET @withTran = 0
DECLARE @t TABLE (ColA INT)
INSERT @t VALUES (1)
DECLARE
@count INT,
@value INT
SET @count = 1
WHILE @count < 1000000
BEGIN
SET @date = GETDATE()
SELECT @value = ColA FROM @t WHERE ColA = 1
SET @noTran = @noTran + DATEDIFF(MICROSECOND, @date, GETDATE())
SET @date = GETDATE()
BEGIN TRAN
SELECT @value = ColA FROM @t WHERE ColA = 1
COMMIT TRAN
SET @withTran = @withTran + DATEDIFF(MICROSECOND, @date, GETDATE())
SET @count = @count + 1
END
SELECT
@noTran / 1000000. AS Seconds_NoTransaction,
@withTran / 1000000. AS Seconds_WithTransaction
/** Results **/
Seconds_NoTransaction Seconds_WithTransaction
--------------------------------------- ---------------------------------------
14.23600000 18.08300000
You can see there is a definite overhead associated with the transactions.
Note: this assumes these stored procedures are not using any special isolation levels or locking hints (for something like handling pessimistic concurrency). In that case, obviously you would want to keep the transactions.
So to answer the question, I would only leave in the transactions where you are actually attempting to preserve the integrity of the data modifications in case of an error in the code, SQL Server, or the hardware.
I can only say that placing a transaction block like this in every stored procedure might be a novice's work.
A transaction should be placed only around a block that has more than one insert/update statement; other than that, there is no need for a transaction block in a stored procedure.
BEGIN TRANSACTION / COMMIT syntax shouldn't be used in every stored procedure by default unless you are trying to cover the following scenarios:
You include the WITH MARK option because you want to support restoring the database from a backup to a specific point in time (a sketch follows this list).
You intend to port the code from SQL Server to another database platform like Oracle. Oracle does not commit transactions by default.
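For the first scenario, the mark records a named point in the transaction log that RESTORE can stop at; a sketch, with hypothetical transaction and description names:

BEGIN TRANSACTION DailyPriceUpdate WITH MARK 'Before daily price update';
-- ... statements ...
COMMIT TRANSACTION;

-- Later, to restore the log up to that mark:
-- RESTORE LOG MyDatabase FROM DISK = '...' WITH STOPATMARK = 'DailyPriceUpdate';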