I've got a table where I need to auto-assign an ID 99% of the time (the other 1% rules out using an identity column it seems). So I've got a stored procedure to get next ID along the following lines:
select @nextid = lastid + 1 from last_auto_id
-- check next available id in the table...
update last_auto_id set lastid = @nextid
Where the check has to check if users have manually used the IDs and find the next unused ID.
It works fine when I call it serially, returning 1, 2, 3 ... What I need to do is provide some locking where multiple processes call this at the same time. Ideally, I just need it to exclusively lock the last_auto_id table around this code so that a second call must wait for the first to update the table before it can run its select.
In Postgres, I can do something like 'LOCK TABLE last_auto_id;' to explicitly lock the table. Any ideas how to accomplish it in SQL Server?
Thanks in advance!
The following update increments your lastid by one and assigns the new value to your local variable in a single atomic statement.
Edit
Thanks to Dave and Mitch for pointing out the isolation level problems with the original solution.
UPDATE last_auto_id WITH (READCOMMITTEDLOCK)
SET @nextid = lastid = lastid + 1
You guys have between you answered my question. I'm putting in my own reply to collate the working solution I've got into one post. The key seems to have been the transaction approach, with locking hints on the last_auto_id table. Setting the transaction isolation to serializable seemed to create deadlock problems.
Here's what I've got (edited to show the full code so hopefully I can get some further answers...):
DECLARE @NextId AS INT
DECLARE @Pointer AS INT
BEGIN TRANSACTION
-- Check what the next ID to use should be
SELECT @NextId = LastId + 1 FROM Last_Auto_Id WITH (TABLOCKX) WHERE Name = 'CustomerNo'
-- Now check if this next ID already exists in the database
IF EXISTS (SELECT CustomerNo FROM Customer
WHERE ISNUMERIC(CustomerNo) = 1 AND CustomerNo = @NextId)
BEGIN
-- The next ID already exists - we need to find the next lowest free ID
CREATE TABLE #idtbl ( IdNo int )
-- Into temp table, grab all numeric IDs higher than the current next ID
INSERT INTO #idtbl
SELECT CAST(CustomerNo AS INT) FROM Customer
WHERE ISNUMERIC(CustomerNo) = 1 AND CustomerNo >= @NextId
ORDER BY CAST(CustomerNo AS INT)
-- Join the table with itself, based on the right hand side of the join
-- being equal to the ID on the left hand side + 1. We're looking for
-- the lowest record where the right hand side is NULL (i.e. the ID is
-- unused)
SELECT @Pointer = MIN( t1.IdNo ) + 1 FROM #idtbl t1
LEFT OUTER JOIN #idtbl t2 ON t1.IdNo + 1 = t2.IdNo
WHERE t2.IdNo IS NULL
-- Use the free ID we found in place of the original candidate
SET @NextId = @Pointer
END
UPDATE Last_Auto_Id SET LastId = @NextId WHERE Name = 'CustomerNo'
COMMIT TRANSACTION
SELECT @NextId
This takes out an exclusive table lock at the start of the transaction, which then successfully queues up any further requests until this request has updated the table and committed its transaction.
I've written a bit of C code to hammer it with concurrent requests from half a dozen sessions and it's working perfectly.
However, I do have one worry about the term locking 'hints': does anyone know whether SQL Server treats this as a definite instruction or just a hint (i.e. might it not always obey it)?
How about this solution? No table lock is required and it works perfectly.
DECLARE @NextId INT
UPDATE Last_Auto_Id
SET @NextId = LastId = LastId + 1
WHERE Name = 'CustomerNo'
SELECT @NextId
An UPDATE statement always takes an exclusive lock on the rows it modifies.
You might want to consider deadlocks, which usually happen when multiple users run the stored procedure simultaneously. To avoid deadlocks and make sure every call succeeds, you need to handle update failures with a retry, and for that you need TRY...CATCH (available in SQL Server 2005 and later).
DECLARE @Tries tinyint
SET @Tries = 1
WHILE @Tries <= 3
BEGIN
BEGIN TRANSACTION
BEGIN TRY
-- this line updates the last_auto_id
update last_auto_id set lastid = lastid+1
COMMIT
BREAK
END TRY
BEGIN CATCH
SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() as ErrorMessage
ROLLBACK
SET @Tries = @Tries + 1
CONTINUE
END CATCH
END
I prefer doing this using an identity field in a second table. If you make lastid an identity column, then all you have to do is insert a row into that table and SELECT SCOPE_IDENTITY() to get your new value, and you still have the concurrency safety of identity even though the id field in your main table is not an identity column.
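A minimal sketch of this approach (the IdGenerator table and Filler column names are illustrative, not from the original answer):
-- One-time setup: a table whose only job is to generate IDs
CREATE TABLE IdGenerator (LastId int IDENTITY(1,1) PRIMARY KEY, Filler bit NULL)
-- Getting the next ID: insert a throwaway row and read back the generated value
DECLARE @NewId int
INSERT INTO IdGenerator (Filler) VALUES (NULL)
SET @NewId = SCOPE_IDENTITY()
-- @NewId is concurrency-safe even though the main table's id column is not an identity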
Related
There is a table with IDU (PK) and stat columns. If the first bit of stat is 1, I need to set it to 0 and run some stored procedure in that case only; otherwise I do nothing.
Here is the simple query for this
DECLARE @s INT
-- get the current value of stat before the update
SET @s = (SELECT stat FROM myTable
WHERE IDU = 999999999)
-- check if the first bit is 1
IF (@s & 1) = 1
BEGIN
-- first bit is 1, set it to 0
UPDATE myTable
SET stat = stat & ~1
WHERE IDU = 999999999
-- first bit was 1, so in this case we run our SP
EXEC SOME_STORED_PROCEDURE
END
But I'm not sure this query is optimal. I've heard about the OUTPUT clause for UPDATE, but I've only found how to get the inserted value. Is there a way to get the value as it was before the update?
Yes, the OUTPUT clause allows you to get the previous value before the update. You need to look at the deleted and inserted pseudo-tables.
DELETED
Is a column prefix that specifies the value deleted by the
update or delete operation. Columns prefixed with DELETED reflect the
value before the UPDATE, DELETE, or MERGE statement is completed.
INSERTED
Is a column prefix that specifies the value added by the insert or
update operation. Columns prefixed with INSERTED reflect the value
after the UPDATE, INSERT, or MERGE statement is completed but before
triggers are executed.
-- Clear the first bit without checking what it was
DECLARE @Results TABLE (OldStat int, NewStat int);
UPDATE myTable
SET Stat = Stat & ~1
WHERE IDU = 999999999
OUTPUT
deleted.Stat AS OldStat
,inserted.Stat AS NewStat
INTO @Results
;
-- Copy data from the @Results table into variables for comparison
-- Assumes that IDU is a primary key and @Results can have only one row
DECLARE @OldStat int;
DECLARE @NewStat int;
SELECT @OldStat = OldStat, @NewStat = NewStat
FROM @Results;
IF @OldStat <> @NewStat
BEGIN
EXEC SOME_STORED_PROCEDURE;
END;
Regardless of optimality, this query is not 100% safe, because between SET @s = ... and UPDATE myTable there is no guarantee that the value of stat has not changed. If two executions run close together for the same IDU, the first thread will be fine but the second will not, since the first can change stat after the second has read it but before it updates it. Outside a transaction, a SELECT does not hold its locks beyond its own execution time, even under SERIALIZABLE isolation.
To be safe, you need to lock the record BEFORE reading it, and to do that you need an UPDATE statement, even a fake one:
DECLARE @s INT
BEGIN TRANSACTION
UPDATE myTable SET stat = stat WHERE IDU = 999999999 -- now you row-lock your row; no other thread can move along until you commit
-- get the current value of stat before the update
SET @s = (SELECT stat FROM myTable
WHERE IDU = 999999999)
-- check if the first bit is 1
IF (@s & 1) = 1
BEGIN
-- first bit is 1, set it to 0
UPDATE myTable
SET stat = stat & ~1
WHERE IDU = 999999999
-- first bit was 1, in this case we run our SP
-- COMMIT TRANSACTION here? depends on what SOME_STORED_PROCEDURE does
EXEC SOME_STORED_PROCEDURE
END
COMMIT TRANSACTION -- the row lock is released here
I am not sure what you mean by "Is there a way to get a value that was before insert", because you only update, and the only data, stat, you had already read from the old record regardless of whether you update or not.
You could do this with an INSTEAD OF UPDATE Trigger.
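A rough sketch of what that could look like (assuming the myTable/IDU/stat schema from the question, and single-row updates):
CREATE TRIGGER trg_myTable_stat ON myTable
INSTEAD OF UPDATE
AS
BEGIN
-- Capture the pre-update value from the deleted pseudo-table
DECLARE @OldStat int
SELECT @OldStat = stat FROM deleted -- assumes a single-row update
-- Apply the update the trigger intercepted
UPDATE t SET stat = i.stat
FROM myTable t INNER JOIN inserted i ON t.IDU = i.IDU
-- Run the procedure only if the first bit was set before the update
IF (@OldStat & 1) = 1
EXEC SOME_STORED_PROCEDURE
END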
I have implemented TABLOCKX locking in a procedure. It works fine when running on one server, but duplicate policy numbers occur when requests come from two different servers.
declare @plantype varchar(max),
@transid varchar(max),
@IsStorySolution varchar(10),
@outPolicyNumber varchar(max) output,
@status int output -- 0 means error and 1 means success
)
as
begin
BEGIN TRANSACTION
Declare @policyNumber varchar(100);
Declare @polseqid int;
-- GET POLICY NUMBER ON THE BASIS OF STORY SOLUTION..
IF (UPPER(@IsStorySolution)='Y')
BEGIN
select top 1 @policyNumber=Policy_no from PLAN_POL_NO with (tablockx, holdlock) where policy_no like '9%'
and pol_id_seq is null and status='Y';
END
ELSE
BEGIN
select top 1 @policyNumber=pp.Policy_no from PLAN_POL_NO pp with (tablockx, holdlock) ,PLAN_TYP_MST pt where pp.policy_no like PT.SERIES+'%'
and pt.PLAN_TYPE in (''+ISNULL(@plantype,'')+'') and pol_id_seq is null and pp.status='Y'
END
-- GET POL_SEQ_ID NUMBER
select @polseqid=dbo.Sequence();
--WAITFOR DELAY '00:00:03';
set @policyNumber= ISNULL(@policyNumber,'');
-- UPDATE POLICY ID INFORMATION...
Update PLAN_POL_NO set status='N',TRANSID =@transid , POL_ID_SEQ=ISNULL(@polseqid,0) where Policy_no =@policyNumber
set @outPolicyNumber=@policyNumber;
if(@@ERROR<>0) begin GOTO Fail end
COMMIT Transaction
set @status=1;
return;
Fail:
If @@TRANCOUNT>0
begin
Rollback transaction
set @status=0;
return;
end
This is the function I have called:
CREATE function [dbo].[Sequence]()
returns int
as
begin
declare @POL_ID int
/***************************************
-- Schema name is added with table name in below query
-- as there are two table with same name (PLAN_POL_NO)
-- on different schema (dbo & eapp).
*****************************************/
select @POL_ID=isnull(MAX(POL_ID_SEQ),2354) from dbo.PLAN_POL_NO
return @POL_ID+1
end
The problem you are facing is because concurrent requests are both getting the same POL_ID_SEQ from your table dbo.PLAN_POL_NO.
There are multiple solutions to your problem, but I can think of two that might help and require little or no code change:
Using a higher transaction isolation level instead of table hints.
In your stored procedure you can use the following:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
This will make sure that any data read or modified during the SP block is transactionally consistent and avoids phantom reads, duplicate records, etc. It can also produce more deadlocks, and if these tables are heavily queried/updated throughout your application you might get a whole new set of issues.
Make sure your update to the dbo.PLAN_POL_NO only succeeds if the sequence has not changed. If it has changed, error out (if it changed it means a concurrent transaction obtained the ID and completed first)
Something like this:
Update dbo.PLAN_POL_NO
SET status ='N',
TRANSID = @transid,
POL_ID_SEQ = ISNULL(@polseqid,0)
WHERE Policy_no = @policyNumber
AND POL_ID_SEQ = @polseqid - 1
IF @@ROWCOUNT <> 1
BEGIN
-- Update failed, error out and let the SP rollback the transaction
END
A WITH (HOLDLOCK) option in the function might suffice.
The HOLDLOCK from the first queries may be applied at a row or page level that doesn't include the row of interest queried in the function.
However, the function is not reliable since it is not self-contained. I'd be looking to redesign this such that the Sequence can generate AND "take" the sequence number before returning it. The current design is fragile at best.
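One way to make it self-contained, sketched under the assumption that the last used value can live in its own one-row table (SEQ_COUNTER is a hypothetical name; this cannot stay a scalar function, since T-SQL functions cannot modify data):
-- One-time setup
CREATE TABLE dbo.SEQ_COUNTER (LastSeq int NOT NULL)
INSERT INTO dbo.SEQ_COUNTER VALUES (2354)
-- Generate AND take the next number in one atomic statement;
-- the update lock serializes concurrent callers without table hints
DECLARE @POL_ID int
UPDATE dbo.SEQ_COUNTER SET @POL_ID = LastSeq = LastSeq + 1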
Just want to get some views/possible leads on an issue I have.
I have a stored procedure that updates/deletes a record from a table in my database. The table it deletes from is a live table that temporarily holds the data; it also updates records in an archive table (for reporting etc.). It works normally and I haven't had any issues.
However, I recently worked on a Windows service to monitor our system (running 24/7), which uses an HTTP call to initiate a program; once this program has finished, it runs the mentioned stored procedure to delete the redundant data. Basically the service just runs the program quickly to make sure it's functioning correctly.
I have noticed recently that the data isn't always being deleted. Looking through logs I see no errors being reported, and I can even see that the record in the database has been updated correctly. It just doesn't get deleted.
This unfortunately has a knock-on effect with the monitoring service: since it runs continuously, it sends out alerts because the data can't be duplicated in the live table, which is why the data needs to be deleted.
Currently I have a procedure in place to clear out any old data (3 hours).
In this case, @Result has the value 'Rejected'.
Below is the stored procedure:
DECLARE @PostponeUntil DATETIME;
DECLARE @Attempts INT;
DECLARE @InitialTarget VARCHAR(8);
DECLARE @MaxAttempts INT;
DECLARE @APIDate DATETIME;
--UPDATE tCallbacks SET Result = @Result WHERE CallbackID = @CallbackID AND UPPER(Result) = 'PENDING';
UPDATE tCallbacks SET Result = @Result WHERE ID = (SELECT TOP 1 ID FROM tCallbacks WHERE CallbackID = @CallbackID ORDER BY ID DESC)
SELECT @InitialTarget = C.InitialTarget, @Attempts = LCB.Attempts, @MaxAttempts = C.CallAttempts
FROM tConfigurations C WITH (NOLOCK)
LEFT JOIN tLiveCallbacks LCB ON LCB.ID = @CallbackID
WHERE C.ID = LCB.ConfigurationID;
IF ((UPPER(@Result) <> 'SUCCESSFUL') AND (UPPER(@Result) <> 'MAXATTEMPTS') AND (UPPER(@Result) <> 'DESTBAR') AND (UPPER(@Result) <> 'REJECTED')) BEGIN
--INSERT A NEW RECORD FOR RTNR/BUSY/UNSUCCESSFUL/REJECT
--Create Callback Archive Record
SELECT @APIDate = CallbackRequestDate FROM tCallbacks WHERE Attempts = 0 AND CallbackID = @CallbackID;
BEGIN TRANSACTION
INSERT INTO tCallbacks (CallbackID, ConfigurationID, InitialTarget, Agent, AgentPresentedCLI, Callee, CalleePresentedCLI, CallbackRequestDate, Attempts, Result, CBRType, ExternalID, ASR, SessionID)
SELECT ID, ConfigurationID, @InitialTarget, Agent, AgentPresentedCLI, Callee, CalleePresentedCLI, @APIDate, @Attempts + 1, 'PENDING', CBRType, ExternalID, ASR, SessionID
FROM tLiveCallbacks
WHERE ID = @CallbackID;
UPDATE LCB
SET PostponeUntil = DATEADD(second, C.CallRetryPeriod, GETDATE()),
Pending = 0,
Attempts = @Attempts + 1
FROM tLiveCallbacks LCB
LEFT JOIN tConfigurations C ON C.ID = LCB.ConfigurationID
WHERE LCB.ID = @CallbackID;
COMMIT TRANSACTION
END
ELSE BEGIN
-- Update the Callbacks archive, when Successful or Max Attempts or DestBar.
IF EXISTS (SELECT ID FROM tLiveCallbacks WHERE ID = @CallbackID) BEGIN
BEGIN TRANSACTION
UPDATE tCallbacks
SET Attempts = @Attempts
WHERE ID IN (SELECT TOP (1) ID
FROM tCallbacks
WHERE CallbackID = @CallbackID
ORDER BY Attempts DESC);
-- The live callback should no longer be active now, as it has either been answered or reached the max attempts.
DELETE FROM tLiveCallbacks WHERE ID = @CallbackID;
COMMIT
END
END
You need to fix your transaction processing. What is happening is that one statement is failing, but since you don't have a TRY...CATCH block, only the statement that failed is rolled back, not all of the changes.
You should never have a BEGIN TRAN without a TRY...CATCH block and a rollback on error. In something like this, I personally also prefer to put the errors and associated data into a table variable (which will not roll back) and then insert them into an exception table after the rollback. This way the data retains integrity and you can look up what the problem was.
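A sketch of that shape (the @Errors table variable and the ExceptionLog table are illustrative names, not from the original procedure):
DECLARE @Errors TABLE (ErrorNumber int, ErrorMessage nvarchar(4000), ErrorDate datetime)
BEGIN TRY
BEGIN TRANSACTION
-- ... the updates and the delete go here ...
COMMIT TRANSACTION
END TRY
BEGIN CATCH
-- Table variables survive a rollback, so the error details are preserved
INSERT INTO @Errors SELECT ERROR_NUMBER(), ERROR_MESSAGE(), GETDATE()
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
INSERT INTO ExceptionLog (ErrorNumber, ErrorMessage, ErrorDate)
SELECT ErrorNumber, ErrorMessage, ErrorDate FROM @Errors
END CATCH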
I have a primary key that I don't want to auto increment (for various reasons) and so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, so just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors, saying:
"Subqueries are not allowed in this context. Only scalar expressions are allowed."
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like this, and it might cause deadlocks, so be very sure what you are trying to accomplish here:
DECLARE @id int
BEGIN TRAN
SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
INSERT INTO Table1(id, data_field)
VALUES (@id, '[blob of data]')
COMMIT TRAN
To explain the collision thing, I have provided some code
first create this table and insert one row
CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
Insert Table1 values(1,'[blob of data]')
Go
Now open up two query windows and run this at the same time
declare @i int
set @i = 1
while @i < 10000
begin
BEGIN TRAN
INSERT INTO Table1(id, data_field)
SELECT MAX(id) + 1, '[blob of data]' FROM Table1
COMMIT TRAN;
set @i = @i + 1
end
You will see a bunch of these
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl
I wouldn't recommend doing it that way for any number of reasons though (performance, transaction safety, etc)
It could be because there are no records, so the subquery is returning NULL... try:
INSERT INTO tblTest(RecordID, Text)
VALUES ((SELECT ISNULL(MAX(RecordID), 0) + 1 FROM tblTest), 'asdf')
I don't know if somebody is still looking for an answer but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()
-- Real insert
BEGIN TRAN LockTran
-- Lock an object to block simultaneous calls.
UPDATE Lock WITH(TABLOCK)
SET LockID = LockID
INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps.
--increment the number
Update dbo.NumberTable
set number = number + 1
--find out what the incremented number is
select @number = number
from dbo.NumberTable
--use the number
insert into dbo.MyTable using the @number
commit or rollback
This causes simultaneous transactions to process in single file, as each concurrent transaction waits because the NumberTable is locked. As soon as a waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used, and if a transaction is rolled back, the NumberTable update is also rolled back, so there are no gaps.
Hope that helps.
Another way to cause single file execution is to use a SQL application lock. We have used that approach for longer running processes like synchronizing data between systems so only one synchronizing process can run at a time.
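For reference, a minimal sketch using sp_getapplock (the resource name is arbitrary):
BEGIN TRANSACTION
-- Blocks until no other session holds the 'SyncProcess' lock
EXEC sp_getapplock @Resource = 'SyncProcess',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction'
-- ... do the serialized work here ...
-- The lock is released automatically when the transaction ends
COMMIT TRANSACTION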
If you're doing it in a trigger, you could make sure it's an "INSTEAD OF" trigger and do it in a couple of statements:
DECLARE @next INT
SELECT @next = MAX(id) + 1 FROM Table1
-- In an INSTEAD OF trigger, read the remaining column values from the inserted pseudo-table
INSERT INTO Table1 (id, data_field)
SELECT @next, data_field FROM inserted
The only thing you'd have to be careful about is concurrency - if two rows are inserted at the same time, they could attempt to use the same value for @next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
you can use the idea above, but have a table whose purpose is to store the last (max) value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used (see the sketch after this list). This is only beneficial if you might need to do database merging.
Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
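For the sequential GUID option above, a minimal sketch (note that NEWSEQUENTIALID() is only valid as a column default):
CREATE TABLE Table1 (
id uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
data_field varchar(max)
)
-- Inserts omit the id entirely; the default generates ever-increasing GUIDs
INSERT INTO Table1 (data_field) VALUES ('[blob of data]')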
Sorry about rambling. Hope that helps.
How about creating a separate table to maintain the counter? It has better performance than MAX(id), as it will be O(1); MAX(id) is at best O(log n), depending on the implementation.
And then when you need to insert, simply lock the counter table for reading the counter and increment the counter. Then you can release the lock and insert to your table with the incremented counter value.
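Sketched out (the Counter table and value column names are illustrative), the read and increment can even be collapsed into one atomic statement:
BEGIN TRAN
DECLARE @next int
-- The UPDATE takes an exclusive row lock, so concurrent callers queue here
UPDATE Counter SET @next = value = value + 1
INSERT INTO Table1 (id, data_field) VALUES (@next, '[blob of data]')
COMMIT TRAN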
Have a separate table where you keep your latest ID and for every transaction get a new one.
It may be a bit slower but it should work.
DECLARE @NEWID INT
BEGIN TRAN
UPDATE [TABLE] SET ID = ID + 1
SELECT @NEWID = ID FROM [TABLE]
COMMIT TRAN
PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);
-- Real insert
DECLARE @newIncrement INT;
UPDATE increment
SET @newIncrement = val,
val = val + 1;
INSERT INTO Table1 (id, data_field)
SELECT #newIncrement, 'some data';
begin tran
declare @nextId int
set @nextId = (select MAX(id)+1 from Table1)
insert into Table1(id, data_field) values (@nextId, '[blob of data]')
commit;
But perhaps a better approach would be using a scalar function getNextId('table1')
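Note that a T-SQL scalar function cannot modify data, so a getNextId function could not persist the incremented value itself. On SQL Server 2012+, a SEQUENCE object gives the same effect (a sketch; the sequence name is assumed):
CREATE SEQUENCE dbo.Table1_Seq START WITH 1 INCREMENT BY 1
-- NEXT VALUE FOR atomically allocates a distinct number per call
INSERT INTO Table1 (id, data_field)
VALUES (NEXT VALUE FOR dbo.Table1_Seq, '[blob of data]')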
Any critiques of this? Works for me.
DECLARE @m_NewRequestID INT
, @m_IsError BIT = 1
, @m_CatchEndless INT = 0
WHILE @m_IsError = 1
BEGIN TRY
SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)
INSERT INTO Requests ( RequestID
, RequestName
, Customer
, Comment
, CreatedFromApplication)
SELECT RequestID = @m_NewRequestID
, RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
, Customer = @Customer
, Comment = [Description]
, CreatedFromApplication = @CreatedFromApplication
FROM RequestPatterns
WHERE PatternID = @PatternID
SET @m_IsError = 0
END TRY
BEGIN CATCH
SET @m_IsError = 1
SET @m_CatchEndless = @m_CatchEndless + 1
IF @m_CatchEndless > 1000
THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
END CATCH
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT for other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
id + 1, '[blob of data]'
FROM
Table1
ORDER BY
[id] DESC;
I am working on a work queueing solution. I want to query a given row in the database, where a status column has a specific value, modify that value and return the row, and I want to do it atomically, so that no other query will see it:
begin transaction
select * from table where pk = x and status = y
update table set status = z where pk = x
commit transaction
--(the row would be returned)
it must be impossible for 2 or more concurrent queries to return the row (one query execution would see the row while its status = y) -- sort of like an interlocked CompareAndExchange operation.
I know the code above runs (for SQL Server), but will the swap always be atomic?
I need a solution that will work for SQL Server and Oracle
Is PK the primary key? Then this is a non-issue: if you already know the primary key, there is no sport. If pk is the primary key, then this begs the obvious question of how you know the pk of the item to dequeue...
The problem is if you don't know the primary key and want to dequeue the next 'available' (ie. status = y) and mark it as dequeued (delete it or set status = z).
The proper way to do this is to use a single statement. Unfortunately the syntax differs between Oracle and SQL Server. The SQL Server syntax is:
update top (1) [<table>]
set status = z
output DELETED.*
where status = y;
I'm not familiar enough with Oracle's RETURNING clause to give an example similar to SQL's OUTPUT one.
Other SQL Server solutions require lock hints on the SELECT (with UPDLOCK) to be correct.
In Oracle the preferred avenue is to use FOR UPDATE, but that does not work in SQL Server, since in T-SQL FOR UPDATE is only used in conjunction with cursors.
In any case, the behavior you have in the original post is incorrect. Multiple sessions can all select the same row(s) and even all update it, returning the same dequeued item(s) to multiple readers.
As a general rule, to make an operation like this atomic you'll need to ensure that you set an exclusive (or update) lock when you perform the select so that no other transaction can read the row before your update.
The typical syntax for this is something like:
select * from table where pk = x and status = y for update
but you'd need to look it up to be sure.
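On SQL Server, the rough equivalent is the UPDLOCK hint on the SELECT (a sketch using the placeholder names from the question):
BEGIN TRANSACTION
-- UPDLOCK makes the read take an update lock that is held until commit,
-- so no other transaction can read-then-update the same row
SELECT * FROM [table] WITH (UPDLOCK, ROWLOCK)
WHERE pk = @x AND status = @y
UPDATE [table] SET status = @z WHERE pk = @x
COMMIT TRANSACTION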
I have some applications that follow a similar pattern. There is a table like yours that represents a queue of work. The table has two extra columns: thread_id and thread_date. When the app asks for work from the queue, it submits a thread id. Then a single update statement updates all applicable rows with the submitted thread id and sets the thread date column to the current time. After that update, it selects all rows with that thread id. This way you don't need to declare an explicit transaction. The "locking" occurs in the initial update.
The thread_date column is used to ensure that you do not end up with orphaned work items. What happens if items are pulled from the queue and then your app crashes? You have to have the ability to try those work items again. So you might grab all items off the queue that have not been marked completed but have been assigned to a thread with a thread date in the distant past. It's up to you to define "distant."
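A sketch of the claim-then-read pattern (work_queue and the batch size are illustrative; thread_id/thread_date are the columns described above):
-- Claim up to 10 unclaimed items in one atomic statement
UPDATE TOP (10) work_queue
SET thread_id = @thread_id,
thread_date = GETDATE()
WHERE thread_id IS NULL
-- Read back only the rows this thread just claimed
SELECT * FROM work_queue WHERE thread_id = @thread_id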
Try this. The validation is in the UPDATE statement.
Code
IF EXISTS (SELECT * FROM sys.tables WHERE name = 't1')
DROP TABLE dbo.t1
GO
CREATE TABLE dbo.t1 (
ColID int IDENTITY,
[Status] varchar(20)
)
GO
DECLARE @id int
DECLARE @initialValue varchar(20)
DECLARE @newValue varchar(20)
SET @initialValue = 'Initial Value'
INSERT INTO dbo.t1 (Status) VALUES (@initialValue)
SELECT @id = SCOPE_IDENTITY()
SET @newValue = 'Updated Value'
BEGIN TRAN
UPDATE dbo.t1
SET
@initialValue = [Status],
[Status] = @newValue
WHERE ColID = @id
AND [Status] = @initialValue
SELECT ColID, [Status] FROM dbo.t1
COMMIT TRAN
SELECT @initialValue AS '@initialValue', @newValue AS '@newValue'
Results
ColID Status
----- -------------
1 Updated Value
@initialValue @newValue
------------- -------------
Initial Value Updated Value