SQL locking for batch updates

I'm using SQL Server 2008 R2 and C#.
I'm inserting a batch of rows in SQL Server with a column Status set with value P.
Afterwards, I check how many rows already have status R and, if there are fewer than 20, I update the row to status R.
While inserting and updating, more rows are being added and updated all the time.
I've tried transactions and locking in multiple ways, but still: at the moment a new batch is activated, there are more than 20 rows with status R for a few milliseconds. After those few milliseconds it stabilizes back to 20.
Does anyone have an idea why the locking doesn't seem to work during bursts?
Sample code, reasons, whatever you can share on this subject can be useful!
Thanks!
Following is my stored proc:
DECLARE @return BIT
SET @return = -1
DECLARE @previousValue INT

--insert the started orchestration
INSERT INTO torchestrationcontroller WITH (ROWLOCK)
([flowname],[orchestrationid],[status])
VALUES (@FlowName, @OrchestrationID, 'P')

--check settings
DECLARE @maxRunning INT
SELECT @maxRunning = maxinstances
FROM torchestrationflows WITH (NOLOCK)
WHERE [flowname] = @FlowName

--if maxinstances is 0, then you can pass; no limitation here
IF( @maxRunning = 0 )
BEGIN
    SET @return = 1
    UPDATE torchestrationcontroller WITH(ROWLOCK)
    SET [status] = 'R'
    WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- BEGIN
RETRY: -- Label RETRY
BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION T1
    --else: check how many orchestrations are now running
    --start lock table
    DECLARE @currentRunning INT
    SELECT @currentRunning = COUNT(*)
    FROM torchestrationcontroller WITH (TABLOCKX) -- exclusive lock, held until the end of the transaction, on all data processed by the statement
    WHERE [flowname] = @FlowName
    AND [status] = 'R'
    IF( @currentRunning < @maxRunning )
    BEGIN
        -- fewer orchestrations are running than allowed
        SET @return = 1
        UPDATE torchestrationcontroller WITH(TABLOCKX)
        SET [status] = 'R'
        WHERE [orchestrationid] = @OrchestrationID
    END
    ELSE
        -- as many or more orchestrations are running than allowed
        SET @return = 0
    --end lock table
    SELECT @return
    COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
    --PRINT 'Rollback Transaction'
    ROLLBACK TRANSACTION
    IF ERROR_NUMBER() = 1205 -- Deadlock error number
    BEGIN
        WAITFOR DELAY '00:00:00.05' -- Wait 50 ms
        GOTO RETRY -- Go to label RETRY
    END
END CATCH

I've been able to fix it by setting the SERIALIZABLE isolation level plus a transaction in the three stored procedures (two of which I didn't mention because I didn't think they were relevant to this problem). Apparently it was the combination of multiple stored procs interfering with each other.
If you know a better way to fix it that could give me a better performance, please let me know!
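One alternative that avoids holding a TABLOCKX across a count-then-update window is to fold the check and the claim into a single UPDATE statement. This is only a sketch built from the table and column names in the question; verify the hint behavior against your actual schema and load:

```sql
-- Sketch: claim a slot atomically. The subquery's UPDLOCK/HOLDLOCK keep the
-- counted 'R' rows stable for the duration of the statement, so two concurrent
-- callers cannot both slip past the < @maxRunning check.
DECLARE @return BIT = 0;

UPDATE torchestrationcontroller
SET    [status] = 'R', @return = 1
WHERE  [orchestrationid] = @OrchestrationID
  AND  ( SELECT COUNT(*)
         FROM torchestrationcontroller WITH (UPDLOCK, HOLDLOCK)
         WHERE [flowname] = @FlowName
           AND [status] = 'R' ) < @maxRunning;

SELECT @return;
```

Because everything happens in one statement, the locks are held for a much shorter time than a SERIALIZABLE transaction spanning several statements.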

Related

Generating custom ID for my application

I want to generate a custom ID for one of the feature in my application. Here is the procedure to do that:
CREATE PROCEDURE [dbo].[GetNextVendorInvoiceNo]
AS
BEGIN
    DECLARE @StartingVendorInvoiceNo INT = 0
    SELECT @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
    FROM SystemSettings WITH (TABLOCK)
    UPDATE SystemSettings
    SET StartingVendorInvoiceNo = @StartingVendorInvoiceNo
    SELECT @StartingVendorInvoiceNo
END
Would there be any issue if multiple users end up calling this procedure? Obviously I don't want multiple users to get the same ID. I am using TABLOCK, but I'm not sure if this is the right way or whether anything else is required.
SQL Server 2012 has SEQUENCE feature, which is definitely safe for multi-user environment. It is based on an integer type, though.
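For illustration, a SEQUENCE-based version might look like this (the sequence name is made up for the example):

```sql
-- SQL Server 2012+: a sequence hands out unique integers without explicit locking.
CREATE SEQUENCE dbo.VendorInvoiceNo
    AS INT
    START WITH 1
    INCREMENT BY 1;
GO

-- Each caller receives a distinct value, even under heavy concurrency:
DECLARE @NextNo INT = NEXT VALUE FOR dbo.VendorInvoiceNo;
SELECT @NextNo;
```

Note that sequences (like IDENTITY) can leave gaps: a value consumed inside a rolled-back transaction is not handed out again.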
If you have a complex procedure that generates a "next" ID and you want to make sure that only one instance of the procedure runs at any moment (at the expense of throughput), I'd use sp_getapplock. It is easy to use and understand, and you don't need to worry about placing the correct query hints.
Your procedure would look like this:
CREATE PROCEDURE [dbo].[GetNextVendorInvoiceNo]
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
    BEGIN TRY
        DECLARE @VarLockResult INT;
        EXEC @VarLockResult = sp_getapplock
            @Resource = 'GetNextVendorInvoiceNo_app_lock',
            @LockMode = 'Exclusive',
            @LockOwner = 'Transaction',
            @LockTimeout = 60000,
            @DbPrincipal = 'public';
        DECLARE @StartingVendorInvoiceNo INT = 0;
        IF @VarLockResult >= 0
        BEGIN
            -- Acquired the lock; generate the "next" ID
            SELECT @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
            FROM SystemSettings;
            UPDATE SystemSettings
            SET StartingVendorInvoiceNo = @StartingVendorInvoiceNo;
        END ELSE BEGIN
            -- TODO: handle the case when it takes too long to acquire the lock,
            -- i.e. return some error code
            -- For example, return 0
            SET @StartingVendorInvoiceNo = 0;
        END;
        SELECT @StartingVendorInvoiceNo;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
        -- TODO: handle the error
    END CATCH;
END
A simple TABLOCK as you wrote it is definitely not enough. You need to wrap everything in a transaction, then make sure the lock is held until the end of the transaction (see HOLDLOCK), and then make sure the lock you are getting is the correct one; you may need TABLOCKX. So, overall you need a pretty good understanding of all these hints and of how locking works. It is definitely possible to achieve the same effect with these hints, but if the logic in the procedure is more complicated than your simplified example, it can easily get pretty ugly.
To my mind, sp_getapplock is easy to understand and maintain.

About lock behavior when update row in sql server

I'm trying to generate sequential numbers in SQL Server, starting from a number provided by users.
I have a problem when multiple users insert rows at the same time with the same number.
I tried writing the user-provided number to a temporary table, and I expected that when I update the same table with the same condition, SQL Server would block any modification to this row until the current update finished, but it does not.
Here is the update statement I used:
UPDATE GlobalParam
SET ValueString = (CAST(ValueString as bigint) + 1)
WHERE Id = 'xxxx'
Could you tell me a way to force the other update command to wait until the current command has finished?
This is my entire command:
DECLARE @Result BIGINT;
UPDATE GlobalParam SET ValueString = (SELECT MAX(Code) FROM Item)
DECLARE @SelectTopStm NVARCHAR(MAX);
DECLARE @ExistRow INT
SET @SelectTopStm = 'SELECT @ExistRow = 1 FROM (SELECT TOP 1 Code FROM Item WHERE Code = ''999'') temp'
EXEC sp_executesql @SelectTopStm, N'@ExistRow INT OUTPUT', @ExistRow OUTPUT
IF (@ExistRow IS NOT NULL)
BEGIN
    DECLARE @MaxValue BIGINT
    DECLARE @ReturnUpdateTbl TABLE (ValueString NVARCHAR(MAX));
    UPDATE GlobalParam SET ValueString = (CAST(ValueString AS BIGINT) + 1)
    OUTPUT inserted.ValueString INTO @ReturnUpdateTbl
    WHERE [Id] = '333A8E1F-16DD-E411-8280-D4BED9D726B3'
    SELECT TOP 1 @MaxValue = CAST(ValueString AS BIGINT) FROM @ReturnUpdateTbl
    SET @Result = @MaxValue
END
ELSE
BEGIN
    SET @Result = 999
END
END
I wrote the code above as a stored procedure.
Here is the actual code that inserts one Item:
DECLARE @IncrementResult BIGINT
EXEC IncrementNumberUnique
    (some parameters)..
    , @Result = @IncrementResult OUTPUT
INSERT INTO ITEM (Id, Code) VALUES ('xxxx', @IncrementResult)
I created 3 threads and ran them at the same time.
The returned result:
Id Code
1 999
2 1000
3 1000
Thanks
If I understood your requirements, try the ROWLOCK hint to tell the optimizer to lock the rows one by one as the update needs them.
UPDATE GlobalParam WITH(ROWLOCK)
SET ValueString = (CAST(ValueString as bigint) + 1)
WHERE Id = 'xxxx'
By default SQL Server uses READ COMMITTED locking, which releases shared (read) locks as soon as the read operation completes. Once the UPDATE statement below is complete, all read locks on the table Item are released.
UPDATE GlobalParam SET ValueString = (SELECT MAX(Code) FROM Item)
Since your INSERT into Item is outside the scope of your procedure, you can run the thread at the SERIALIZABLE isolation level. Something like this:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
DECLARE @IncrementResult BIGINT
EXEC IncrementNumberUnique
    (some parameters)..
    , @Result = @IncrementResult OUTPUT
INSERT INTO ITEM (Id, Code) VALUES ('xxxx', @IncrementResult)
Changing the isolation level to SERIALIZABLE will increase blocking and contention of resources on item table.
To learn more about isolation levels, refer to the SQL Server documentation on SET TRANSACTION ISOLATION LEVEL.
You should also look into IDENTITY columns and remove such manual computation of incremental columns if possible.
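To illustrate that last suggestion: if the numbers don't have to start from a user-supplied value, an IDENTITY column makes the database assign them and removes the manual increment entirely (table and column names here are illustrative):

```sql
CREATE TABLE dbo.ItemDemo
(
    Id   INT IDENTITY(1000, 1) PRIMARY KEY, -- SQL Server assigns 1000, 1001, ...
    Code NVARCHAR(50) NOT NULL
);

INSERT INTO dbo.ItemDemo (Code) VALUES (N'first');
SELECT SCOPE_IDENTITY(); -- the identity value generated by this session's insert
```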

Optimize SQL Server Stored Procedure further

How can I optimize / refactor this stored procedure further?
Where are the performance improvements to be had?
ALTER PROCEDURE cc_test_setnumber
    @UUID AS VARCHAR(50),
    @Status AS VARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    IF @Status = 'ACTIVE'
    BEGIN
        DECLARE @Tbl AS TABLE (UUID VARCHAR(50))
        BEGIN TRANSACTION
        UPDATE dbo.cc_testagents
        SET Status = 'INCALL', LastUpdated = GETDATE()
        OUTPUT INSERTED.UUID INTO @Tbl
        WHERE ID = (SELECT TOP 1 ID FROM dbo.cc_testagents WHERE Status = 'IDLE')
        UPDATE cc_testnumbers
        SET Status = 'ACTIVE',
            AgentUUID = (SELECT UUID FROM @Tbl)
        OUTPUT INSERTED.AgentUUID AS 'UUID'
        WHERE CallUUID = @UUID
        COMMIT TRANSACTION
    END
    ELSE --BUSY, WRAPUP, NOANSWER, NOAGENT
    BEGIN
        UPDATE dbo.cc_testagents
        SET Status = 'WRAPUP'
        WHERE UUID = (SELECT AgentUUID FROM dbo.cc_testnumbers WHERE CallUUID = @UUID)
        SELECT @Status = REPLACE(@Status, 'NOAGENT', 'IDLE')
        UPDATE dbo.cc_testnumbers
        SET Status = @Status, CallUUID = ''
        WHERE CallUUID = @UUID
    END
END
In SQL Server Management Studio, the Database Engine Tuning Advisor is a good place to start, not only for this stored proc but for any other query running against these tables.
It's hard to give a specific recommendation without knowing a bit more about the table structure and the other queries that are contending for locks on these tables. Make sure there are clustered/nonclustered indexes on cc_testagents.ID, cc_testagents.UUID, and cc_testnumbers.CallUUID.
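As a sketch of that indexing advice (assuming ID is already the clustered primary key of cc_testagents; column types must match the real schema):

```sql
-- Supports the "next IDLE agent" lookup and the OUTPUT of UUID:
CREATE NONCLUSTERED INDEX IX_cc_testagents_Status
    ON dbo.cc_testagents (Status) INCLUDE (UUID);

-- Supports the WHERE CallUUID = @UUID updates:
CREATE NONCLUSTERED INDEX IX_cc_testnumbers_CallUUID
    ON dbo.cc_testnumbers (CallUUID) INCLUDE (AgentUUID);
```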
If you're experiencing deadlock issues and there are queries reading from cc_testagents/cc_testnumbers that can tolerate reading uncommitted data, a quick and dirty "fix" is to use the NOLOCK hint / SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED on those SELECT statements. Depending on who you ask this is considered bad practice; I'm just throwing it out there as something that might hold you over if you have some crippling deadlock issues in production.

How to simulate a deadlock in SQL Server in a single process?

Our client side code detects deadlocks, waits for an interval, then retries the request up to 5 times. The retry logic detects the deadlocks based on the error number 1205.
My goal is to test both the deadlock retry logic and deadlock handling inside of various stored procedures. I can create a deadlock using two different connections. However, I would like to simulate a deadlock inside of a single stored procedure itself.
A deadlock raises the following error message:
Msg 1205, Level 13, State 51, Line 1
Transaction (Process ID 66) was
deadlocked on lock resources with another process and has been chosen
as the deadlock victim. Rerun the transaction.
I see this error message is in sys.messages:
select * from sys.messages where message_id = 1205 and language_id = 1033
message_id  language_id  severity  is_event_logged  text
1205        1033         13        0                Transaction (Process ID %d) was deadlocked on %.*ls resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I can't raise this error using RAISERROR:
raiserror(1205, 13, 51)
Msg 2732, Level 16, State 1, Line 1
Error number 1205 is invalid.
The number must be from 13000 through 2147483647 and it cannot be 50000.
Our deadlock retry logic checks if the error number is 1205. The deadlock needs to have the same message ID, level, and state as a normal deadlock.
Is there a way to simulate a deadlock (with RAISERROR or any other means) and get the same message number out with just one process?
Our databases are using SQL 2005 compatibility, though our servers vary from 2005 through 2008 R2.
As many have pointed out, the answer is no, a single process cannot reliably deadlock itself. I came up with the following solution to simulate a deadlock on a development or test system.
Run the script below in a SQL Server Management Studio window. (Tested on 2008 R2 only.) You can leave it running as long as necessary.
In the place you want to simulate a deadlock, insert a call to sp_simulatedeadlock. Run your process, and the deadlock should occur.
When done testing, stop the SSMS query and run the cleanup code at the bottom.
/*
This script helps simulate deadlocks. Run the entire script in a SQL query window. It will continue running until stopped.
In the target script, insert a call to sp_simulatedeadlock where you want the deadlock to occur.
This stored procedure, also created below, causes the deadlock.
When you are done, stop the execution of this window and run the code in the cleanup section at the bottom.
*/
set nocount on
if object_id('DeadlockTest') is not null
drop table DeadlockTest
create table DeadlockTest
(
Deadlock_Key int primary key clustered,
Deadlock_Count int
)
go
if exists (select * from sysobjects where id = object_id(N'sp_simulatedeadlock')
AND objectproperty(id, N'IsProcedure') = 1)
drop procedure sp_simulatedeadlock
GO
create procedure sp_simulatedeadlock
(
    @MaxDeadlocks int = -1 -- specify the number of deadlocks you want; -1 = constant deadlocking
)
as begin
    set nocount on
    if object_id('DeadlockTest') is null
        return
    -- Volunteer to be a deadlock victim.
    set deadlock_priority low
    declare @DeadlockCount int
    select @DeadlockCount = Deadlock_Count -- this starts at 0
    from DeadlockTest
    where Deadlock_Key = 2
    -- Trace the start of each deadlock event.
    -- To listen to the trace event, set up a SQL Server Profiler trace with event class "UserConfigurable:0".
    -- Note that the user running this proc must have ALTER TRACE permission.
    -- Also note that only 128 characters are allowed in the trace text.
    declare @trace nvarchar(128)
    if @MaxDeadlocks > 0 AND @DeadlockCount > @MaxDeadlocks
    begin
        set @trace = N'Deadlock Test @MaxDeadlocks: ' + cast(@MaxDeadlocks as nvarchar) + N' @DeadlockCount: ' + cast(@DeadlockCount as nvarchar) + N' Resetting deadlock count. Will not cause deadlock.'
        exec sp_trace_generateevent
            @eventid = 82, -- 82 = UserConfigurable:0 through 91 = UserConfigurable:9
            @userinfo = @trace
        -- Reset the number of deadlocks.
        -- Hopefully if there is an outer transaction, it will complete and persist this change.
        update DeadlockTest
        set Deadlock_Count = 0
        where Deadlock_Key = 2
        return
    end
    set @trace = N'Deadlock Test @MaxDeadlocks: ' + cast(@MaxDeadlocks as nvarchar) + N' @DeadlockCount: ' + cast(@DeadlockCount as nvarchar) + N' Simulating deadlock.'
    exec sp_trace_generateevent
        @eventid = 82, -- 82 = UserConfigurable:0 through 91 = UserConfigurable:9
        @userinfo = @trace
    declare @StartedTransaction bit
    set @StartedTransaction = 0
    if @@trancount = 0
    begin
        set @StartedTransaction = 1
        begin transaction
    end
    -- lock 2nd record
    update DeadlockTest
    set Deadlock_Count = Deadlock_Count
    from DeadlockTest
    where Deadlock_Key = 2
    -- lock 1st record to cause deadlock
    update DeadlockTest
    set Deadlock_Count = Deadlock_Count
    from DeadlockTest
    where Deadlock_Key = 1
    if @StartedTransaction = 1
        rollback
end
go
insert into DeadlockTest(Deadlock_Key, Deadlock_Count)
select 1, 0
union select 2, 0
-- Force other processes to be the deadlock victim.
set deadlock_priority high
begin transaction
while 1 = 1
begin
begin try
begin transaction
-- lock 1st record
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 1
waitfor delay '00:00:10'
-- lock 2nd record (which will be locked when the target proc calls sp_simulatedeadlock)
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 2
rollback
end try
begin catch
print 'Error ' + convert(varchar(20), ERROR_NUMBER()) + ': ' + ERROR_MESSAGE()
goto cleanup
end catch
end
cleanup:
if @@trancount > 0
rollback
drop procedure sp_simulatedeadlock
drop table DeadlockTest
You can exploit a bug that Microsoft seems in no hurry to fix by running
begin tran
go
CREATE TYPE dbo.IntIntSet AS TABLE(
Value0 Int NOT NULL,
Value1 Int NOT NULL
)
go
declare @myPK dbo.IntIntSet;
go
rollback
This SQL will cause a deadlock with itself. Lots more details at Aaron Bertrand's blog: http://sqlperformance.com/2013/11/t-sql-queries/single-tx-deadlock
(Apparently I don't have enough reputation to add a comment. So posting as an answer.)
A deadlock requires at least two processes; the only exception is the intra-query parallel deadlock, which is practically impossible to reproduce on demand.
However, you can simulate a deadlock with two processes running the exact same query (or stored procedure).
This works reliably from a single session. Use service broker activation to invoke the second thread which is required for a deadlock.
NOTE1: cleanup script not included
NOTE2: service broker has to be enabled:
ALTER DATABASE dbname SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
EXEC sp_executesql N'
CREATE OR ALTER PROCEDURE DeadlockReceive
AS
DECLARE @MessageBody NVARCHAR(1000);
RECEIVE @MessageBody = CAST(message_body AS NVARCHAR(1000)) FROM DeadlockQueue
SELECT @MessageBody
EXEC sp_executesql @MessageBody;'
IF EXISTS (SELECT * FROM sys.services WHERE name = 'DeadlockService') DROP SERVICE DeadlockService
IF OBJECT_ID('DeadlockQueue') IS NOT NULL DROP QUEUE dbo.DeadlockQueue
IF EXISTS (SELECT * FROM sys.service_contracts WHERE name = 'DeadlockContract') DROP CONTRACT DeadlockContract
IF EXISTS (SELECT * FROM sys.service_message_types WHERE name = 'DeadlockMessage') DROP MESSAGE TYPE DeadlockMessage
DROP TABLE IF EXISTS DeadlockTable1 ;
DROP TABLE IF EXISTS DeadlockTable2 ;
CREATE MESSAGE TYPE DeadlockMessage VALIDATION = NONE;
CREATE QUEUE DeadlockQueue WITH STATUS = ON, ACTIVATION (PROCEDURE_NAME = DeadlockReceive, EXECUTE AS SELF, MAX_QUEUE_READERS = 1);
CREATE CONTRACT DeadlockContract AUTHORIZATION dbo (DeadlockMessage SENT BY ANY);
CREATE SERVICE DeadlockService ON QUEUE DeadlockQueue (DeadlockContract);
CREATE TABLE DeadlockTable1 (Value INT); INSERT dbo.DeadlockTable1 SELECT 1;
CREATE TABLE DeadlockTable2 (Value INT); INSERT dbo.DeadlockTable2 SELECT 1;
DECLARE @ch UNIQUEIDENTIFIER
BEGIN DIALOG @ch FROM SERVICE DeadlockService TO SERVICE 'DeadlockService' ON CONTRACT DeadlockContract WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @ch MESSAGE TYPE DeadlockMessage (N'
set deadlock_priority high;
begin tran;
update DeadlockTable2 set value = 5;
waitfor delay ''00:00:01'';
update DeadlockTable1 set value = 5;
commit')
SET DEADLOCK_PRIORITY LOW
BEGIN TRAN
UPDATE dbo.DeadlockTable1 SET Value = 2
waitfor delay '00:00:01';
UPDATE dbo.DeadlockTable2 SET Value = 2
COMMIT
The simplest way to reproduce it in C# is with Parallel.
e.g.
var list = ... // (add some items with the same ids)
Parallel.ForEach(list,
    (item) =>
    {
        ReportsDataContext erdc = null;
        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                erdc = new ReportsDataContext("....connection....");
                var report = erdc.Report.Where(x => x.id == item.id).Select(x => x).First();
                report.Count++;
                erdc.SubmitChanges();
                scope.Complete();
            }
            if (erdc != null)
                erdc.Dispose();
        }
        catch (Exception ex)
        {
            if (erdc != null)
                erdc.Dispose();
            ErrorLog.LogEx("multi thread victim", ex);
        }
    });
More interesting: how do you prevent that error in a real cross-thread situation?
I had difficulty getting Paul's answer to work. I made some small changes to get it working.
The key is to begin and rollback the sp_simulatedeadlock transaction within the procedure itself. I made no changes to the procedure in Paul's answer.
DECLARE @DeadlockCounter INT = NULL
SELECT @DeadlockCounter = 0
WHILE @DeadlockCounter < 10
BEGIN
    BEGIN TRY
        /* The procedure was leaving uncommitted transactions; I roll back the transaction in the catch block */
        BEGIN TRAN simulate
        EXEC sp_simulatedeadlock
        /* Code you want to deadlock */
        SELECT @DeadlockCounter = 10
    END TRY
    BEGIN CATCH
        ROLLBACK TRAN simulate
        PRINT ERROR_MESSAGE()
        IF (ERROR_MESSAGE() LIKE '%deadlock%' OR ERROR_NUMBER() = 1205) AND @DeadlockCounter < 10
        BEGIN
            SELECT @DeadlockCounter += 1
            PRINT @DeadlockCounter
            IF @DeadlockCounter = 10
            BEGIN
                RAISERROR('Deadlock limit exceeded or error raised', 16, 10);
            END
        END
    END CATCH
END
If you happen to run into problems with the GO keyword (Incorrect syntax near 'GO') in any of the scripts above, it's important to know that this instruction only works in Microsoft SQL Server Management Studio/sqlcmd:
The GO keyword is not T-SQL, but a SQL Server Management Studio artifact that allows you to separate the execution of a script file into multiple batches. I.e. when you run a T-SQL script file in SSMS, the statements are run in batches separated by the GO keyword. […]
SQL Server doesn't understand the GO keyword. So if you need an equivalent, you need to separate and run the batches individually on your own.
(from JotaBe's answer to another question)
So if you want to try out e.g. Michael J Swart's answer through DBeaver, for example, you'll have to remove the GOs and run each part of the query on its own.
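For clients without GO support, each former batch can be pushed through its own sp_executesql call instead. This is only a sketch of the batch-splitting idea; whether the deadlock bug still reproduces when run this way should be verified:

```sql
-- Each EXEC runs as a separate batch, replacing the GO separators.
BEGIN TRAN;
EXEC sp_executesql N'CREATE TYPE dbo.IntIntSet AS TABLE (
    Value0 INT NOT NULL,
    Value1 INT NOT NULL
);';
EXEC sp_executesql N'DECLARE @myPK dbo.IntIntSet;';
ROLLBACK;
```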

Using a table as a queue of work for multiple processes in SQL 2005

Given a table (JobTable) that has 2 columns: JobId, JobStatus (there are others but I'm excluding them as they have no impact on the question).
A process, the WorkGenerator, INSERTs rows into the table.
Another process, the Worker, executes a stored procedure called GetNextJob.
Right now, GetNextJob does a SELECT to find the next piece of work (JobStatus = 1) and then an UPDATE to mark that work as in-progress (JobStatus = 2).
We're looking to scale up by having multiple Worker processes but have found that it's possible for multiple workers to pick up the same piece of work.
I have the following queries:
1. In GetNextJob, can I combine the SELECT and UPDATE into a single query and use the OUTPUT clause to get the JobId?
2. How can I guarantee that only 1 process will pick up each piece of work?
I appreciate answers that work, but also explanations as to why they work.
See Processing Data Queues in SQL Server with READPAST and UPDLOCK
also Using SQL Server as resource locking mechanism
Let's build up a solution:
Ensure the UPDATE checks @@ROWCOUNT
Inspect @@ROWCOUNT after the UPDATE to determine which Worker process wins.
CREATE PROCEDURE [dbo].[GetNextJob]
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @jobId INT
    SELECT TOP 1 @jobId = Jobs.JobId FROM Jobs
    WHERE Jobs.JobStatus = 1
    ORDER BY JobId ASC
    UPDATE Jobs SET JobStatus = 2
    WHERE JobId = @jobId
    AND JobStatus = 1;
    IF (@@ROWCOUNT = 1)
    BEGIN
        SELECT @jobId;
    END
END
GO
Note that with the above procedure the process that does not win does not return any rows and needs to call the procedure again to get the next row.
The above fixes most of the cases where both Workers pick up the same piece of work, because the UPDATE guards against it. However, it's still possible for @@ROWCOUNT to be 1 for both workers for the same jobId!
Lock the row within a transaction so only 1 Worker can update the Status
CREATE PROCEDURE [dbo].[GetNextJob]
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION
    DECLARE @jobId INT
    SELECT TOP 1 @jobId = Jobs.JobId FROM Jobs WITH (UPDLOCK, ROWLOCK)
    WHERE Jobs.JobStatus = 1
    ORDER BY JobId ASC
    UPDATE Jobs SET JobStatus = 2
    WHERE JobId = @jobId
    AND JobStatus = 1;
    IF (@@ROWCOUNT = 1)
    BEGIN
        SELECT @jobId;
    END
    COMMIT
END
GO
Both UPDLOCK and ROWLOCK are specified here. UPDLOCK on the SELECT tells SQL Server to lock the row as if it were being updated, until the transaction is committed. ROWLOCK (which probably isn't strictly necessary) tells SQL Server to lock only the row returned by the SELECT.
Optimising the locking
When 1 process uses the ROWLOCK hint to lock a row, other processes are blocked waiting for that lock to be released. The READPAST hint can be specified. From MSDN:
When READPAST is specified, both row-level and page-level locks are skipped. That is, the Database Engine skips past the rows or pages instead of blocking the current transaction until the locks are released.
This will stop the other processes from being blocked and improve performance.
CREATE PROCEDURE [dbo].[GetNextJob]
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION
    DECLARE @jobId INT
    SELECT TOP 1 @jobId = Jobs.JobId FROM Jobs WITH (UPDLOCK, READPAST)
    WHERE Jobs.JobStatus = 1
    ORDER BY JobId ASC
    UPDATE Jobs SET JobStatus = 2
    WHERE JobId = @jobId
    AND JobStatus = 1;
    IF (@@ROWCOUNT = 1)
    BEGIN
        SELECT @jobId;
    END
    COMMIT
END
GO
To Consider: Combine SELECT and Update
Combine the SELECT and UPDATE and use a SET to get the ID out.
For example:
DECLARE @val INT
UPDATE JobTable
SET @val = JobId,
    JobStatus = 2
WHERE JobId = (SELECT MIN(JobId) FROM JobTable WHERE JobStatus = 1)
SELECT @val
This still requires the transaction to be SERIALIZABLE to ensure that each row is allocated to one Worker only.
To Consider: Combine SELECT and UPDATE again
Combine the SELECT and UPDATE and use the Output clause.
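Combining the SELECT and UPDATE with OUTPUT gives the usual one-statement claim pattern for a table used as a queue; a sketch against the Jobs table used above:

```sql
-- One atomic statement: no window between "find" and "mark", and READPAST
-- lets concurrent workers skip rows another worker has already locked.
UPDATE Jobs
SET    JobStatus = 2
OUTPUT inserted.JobId
WHERE  JobId =
       ( SELECT TOP 1 JobId
         FROM Jobs WITH (UPDLOCK, READPAST)
         WHERE JobStatus = 1
         ORDER BY JobId );
```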