We have a table TAB1 that is accessed by an Oracle process P1 (say, SID=123). The process performs a dynamic SQL delete followed by a commit.
Process P1, initiated by SID=123, consists of many operations besides this TAB1-related one.
Scenario:
SID=123 is active; P1 has taken a row exclusive lock on TAB1 (seen by querying the V$LOCKED_OBJECT view).
Another Oracle process P2 is initiated by SID=124 (exactly the same process as P1, but for a different set of data inputs) shortly after P1 starts (say, 2-3 minutes later).
SID=124 waits until process P1 (SID=123) completes; P2 has also requested a row exclusive lock on TAB1 (again seen by querying V$LOCKED_OBJECT).
Question:
I think the row-level lock requested by P2 is waiting for a "go-ahead" from the row-level lock held by P1.
Can we MANUALLY OVERRIDE the lock imposed by process P1 on TAB1 (I hope it's possible) and release it once P1's operation on TAB1 is over? Would this reduce the long wait that P2 currently has on TAB1 until all of P1 is finished?
Any suggestion would be greatly appreciated. Please let me know if you need more information on this.
Locks are released at the transaction boundary, not at the process boundary.
In short, if you want P1 to release the lock immediately, P1 has to end the current transaction with an explicit commit or rollback just after the delete operation.
Of course, ending the transaction would also commit/rollback the other operations executed in the same session since the previous commit/rollback. If that is a problem, you have to rethink the business logic.
Wait, you wrote "dynamic SQL delete followed by commit"... if you mean "immediately followed", then the row exclusive lock is already released immediately.
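For illustration, a minimal sketch of that pattern in PL/SQL (the column some_col and the identifier value are my assumptions, not the actual names):

DECLARE
  p_id NUMBER := 123;  -- hypothetical identifier scoping the delete
BEGIN
  EXECUTE IMMEDIATE 'DELETE FROM TAB1 WHERE some_col = :id' USING p_id;
  COMMIT;  -- ends the transaction right here, releasing the lock on TAB1
END;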
I actually 'avoided the scenario', which means this answer is not a solution to the question as asked.
What I did to avoid the scenario:
Added one more column to TAB1 to hold a unique identification number for each process.
Used this column to delete only the rows belonging to that particular process. This, I believe, keeps processes P1 and P2 from waiting on the same rows.
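A rough sketch of the change (the column name process_id is my invention, not the actual one):

ALTER TABLE TAB1 ADD (process_id NUMBER);

-- Each process deletes only its own rows, so P1 (e.g. 123) and P2 (e.g. 124)
-- no longer contend for the same rows:
DELETE FROM TAB1 WHERE process_id = 123;
COMMIT;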
Thanks to @Codo, @a_horse_with_no_name, @Ben, @Justin Cave and @colemar for helping to tidy up the question and for your support.
@Justin Cave: I had been thinking of the same solution you proposed; if I had seen it yesterday, I wouldn't have wasted time until now. Anyway, thanks a lot for your support.
Related
I have a table that looks like this:
Id  GroupId
1   G1
2   G1
3   G2
4   G2
5   G2
It should be possible at any time to read all of the rows (committed only). When an update happens, I want a transaction that locks on GroupId, i.e. at any given time there should be only one transaction attempting an update per GroupId.
Ideally, it should still be possible to read all committed rows (i.e. other transactions/ordinary reads that do not try to acquire the "update per group lock" should still be able to read).
The reason I want this is that an update cannot rely on "outdated" data. That is, I do some calculations in a transaction, and another transaction must not edit the row with Id 1, or add a new row with the same GroupId, after those rows were read by the first transaction (even though the first transaction would never modify the rows itself, it depends on their values).
Another "nice to have" requirement: sometimes the update transaction would need the same guarantee "cross-group", i.e. it would have to lock two groups at the same time. (This is not a dynamic number of groups; it is always exactly 2.)
Here are some ideas. I don't think any of them is perfect - I think you will need to define a set of use-cases and try them out. These are the situations I tried after applying the locks:
SELECTs with a WHERE filter on another group
SELECTs with a WHERE filter on the locked group
UPDATEs on the table with a WHERE clause on another group
UPDATEs on the table where the Id (not the GrpId!) was not locked
UPDATEs on the table where the row was locked (e.g., Ids 1 and 2)
INSERTs into the table with the locked GrpId
I have the funny feeling that none of these will be 100%, but the most likely answer is the second one (setting the transaction isolation level). It will probably lock more than desired, but will give you the isolation you need.
Also, one thing to remember: if you lock many rows (e.g., there are thousands of rows with the GrpId you want), SQL Server can escalate the lock to a full-table lock. (I believe the tipping point is around 5,000 locks, but I'm not sure.)
Old-school hackjob
At the start of your transaction, update all the relevant rows somehow, e.g.:
BEGIN TRAN
UPDATE YourTable
SET GrpId = GrpId
WHERE GrpId = N'G1';
-- Do other stuff
COMMIT TRAN;
Nothing else can use those rows because (bravo!) they have been written within a transaction.
Convenient - set isolation level
See https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver15#isolation-levels-in-the-
Before your transaction, set the isolation level high, e.g., SERIALIZABLE.
You may want to read all the relevant rows at the start of your transaction (e.g., SELECT GrpId FROM YourTable WHERE GrpId = N'G1') to keep them from being updated.
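For instance, a minimal sketch of that approach, reusing the table and group names from the example above:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
-- Under SERIALIZABLE this SELECT takes key-range locks, so nobody can update
-- these rows or insert a new row with this GrpId until we commit.
SELECT GrpId FROM YourTable WHERE GrpId = N'G1';
-- Do the calculations and updates here.
COMMIT TRAN;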
Flexible but requires a lot of coding
Use resource locking with sp_getapplock and sp_releaseapplock.
These are used to lock resources, not tables or rows.
What is a resource? Well, anything you want it to be. In this case, I'd suggest 'Grp1', 'Grp2' etc. It doesn't actually lock rows. Instead, you ask (via sp_getapplock, or APPLOCK_TEST) whether you can get the resource lock. If so, continue. If not, then stop.
Any code referring to these tables needs to be reviewed and potentially modified to ask whether it's allowed to run. If something doesn't ask for permission and just goes ahead, there are no actual locks stopping it (except via any transactions you've explicitly specified).
You also need to ensure that errors are handled appropriately (e.g., the app lock is still released) and that blocked processes are retried.
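A minimal sketch of the pattern (the resource name N'Grp1' and the timeout are illustrative assumptions):

BEGIN TRAN;
DECLARE @result int;
EXEC @result = sp_getapplock @Resource = N'Grp1',
                             @LockMode = 'Exclusive',
                             @LockOwner = 'Transaction',
                             @LockTimeout = 5000;  -- wait up to 5 seconds
IF @result >= 0
BEGIN
    -- We own the 'Grp1' resource: safe to do the group G1 work here.
    COMMIT TRAN;  -- with @LockOwner = 'Transaction', this also releases the lock
END
ELSE
    ROLLBACK TRAN;  -- could not get the lock; retry or give up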
I think this is a cross-DBMS question, although I phrase it in SQL Server terminology.
Having read msdn documentation, for ex., [ 1 ], I could not understand:
Is it possible to read half-written (partially overwritten, updated, deleted or inserted) values with SELECT ... WITH (NOLOCK), and if not, how is the reading of half-written values prevented (given that no locks are respected)?
Which DBMS principle does reading a half-written value violate?
I have difficulty identifying the term for it (is it a consistency or an integrity violation?).
What is the name of the corresponding term?
Update:
I deleted from this post the questions on UPDATE (DELETE) WITH(NOLOCK).
The msdn docs, for example [1], and multiple articles say that SELECT WITH (NOLOCK) is the same as READUNCOMMITTED and that "No shared locks are issued to prevent other transactions from modifying data read by the current transaction, and exclusive locks set by other transactions do not block the current transaction from reading the locked data".
Do I understand correctly that the DBMS ensures that only completely written (committed or not) values can be read?
How is this ensured if no locks are used or respected?
This question is not about which transaction can read what and when, but about how the reading of incompletely written values is prevented.
Update2:
Since this question started to be downvoted and closed, I moved the questions on UPDATE (DELETE) WITH (NOLOCK) to the msdn forum:
What is the meaning of UPDATE and DELETE WITH(NOLOCK) statements?
I also repeated this same question in the msdn forum:
Is a half-written values reading prevented WITH (NOLOCK) hint?
which caused complete confusion there.
So why is it being closed here as if it were not an answerable question?
It is a very basic, fundamental concept with a simple answer, and a clear understanding of it is obligatory for database developers and DBAs.
[1] Table Hints (Transact-SQL), SQL Server 2008 R2, http://msdn.microsoft.com/en-us/library/ms187373.aspx
Oracle doesn't allow dirty reads under any circumstances (i.e. reading another session's uncommitted values). A dirty read violates 'Isolation' (the I in ACID) and can present an apparent inconsistency to the reading operation (e.g. seeing a child record without its parent).
There are two mechanisms in play.
Firstly, each record has a lock byte indicating whether it is currently locked or not. The value of the byte points to a transaction in the block header, so a session can determine whether the lock is its own or belongs to another session.
If a read sees that the byte is set, it uses a pointer in the block header to find an older version of the block. If the record is still locked there, it keeps following the pointers until it reaches a version of the block in which the record shows as unlocked, and returns the value from that version.
The same mechanism is also used for time based consistency. If a select started at 3pm and it finds a block modified at 3:02 pm then it follows the history back to find the version of the block that was current at 3:00. It may then find that the record it wants was locked at 3:00pm [it may have been committed at 3:01pm] and has to go back further to see what the committed value was at 3:00pm.
The other protection mechanism is a latch. When a process reads a block, it takes a latch on it for the duration of the read. This prevents another process (potentially running on another CPU) from touching the block mid-read (i.e. process A cannot set the lock byte while process B is reading the block - it has to wait until the read is finished). Latches are very low-level CPU operations and are only held for very short durations. On a single-core/CPU box, latching isn't strictly necessary, since only one thread can execute at a time anyway.
After some contemplation and experimenting, I believe that DBMSes do not themselves provide such value integrity:
Had we assumed they did, we would immediately conclude that the same transaction, in the same transaction scope, could use incompletely written (inserted, updated, deleted) values, which never actually happens;
Values are used not only by the DBMS but also outside of it, by the operating system and other software frameworks.
It is most probably a transaction feature implemented at the operating-system level, or at an even lower (machine, hardware) level.
Update:
Gary's answer supports this:
"These latches are very low level CPU operations and are only held for very short durations."
However, I did not intend to mix the discussion of DBMS transaction isolation phenomena with the low-level transaction support provided by hardware.
Update2:
And what would be a better, distinctive name for the corresponding low-level term (hardware transaction support), to avoid confusing it with DBMS transaction terminology? Is it value consistency or value integrity?
Update3:
Suppose I have a 2 GB string in an nvarchar(max) column on a machine with 1 GB of RAM. How would the CPU provide integrity for this value at the hardware level?
The answer by Razvan Socol in the msdn thread "Is a half-written values reading prevented WITH (NOLOCK) hint?" gives a script that catches the reading of partially updated values. That site is currently down, so I reproduce the code here:
1) Create a Test table and fill it with 10 rows:
IF OBJECT_ID('Test') IS NOT NULL
    DROP TABLE Test;
CREATE TABLE Test (
    ID int IDENTITY PRIMARY KEY,
    Txt nvarchar(max) NOT NULL
);
GO

-- Each row holds 100,000 copies of a single random letter.
INSERT INTO Test
SELECT REPLICATE(CONVERT(nvarchar(max),
       CHAR(65 + ABS(CHECKSUM(NEWID())) % 26)), 100000);
GO 10
2) In the first session (SSMS tab), launch the time-consuming update:
UPDATE Test
SET Txt=REPLICATE(CONVERT(nvarchar(max),
CHAR(65+ABS(CHECKSUM(NEWID()))%26)),100000)
GO 1000
3) In a second session (SSMS tab), run this loop to catch half-overwritten values:
DECLARE @rc int;
WHILE 1 = 1
BEGIN
    -- A fully written row is 100,000 copies of one character, so removing
    -- its first character leaves nothing; a non-empty remainder means we
    -- just read a half-overwritten value.
    SELECT Txt FROM Test WITH (NOLOCK)
    WHERE LEN(REPLACE(Txt, LEFT(Txt, 1), '')) <> 0;
    SET @rc = @@ROWCOUNT;  -- capture before the next statement resets it
    SELECT 'rowcount inside=', @rc;
    IF @rc <> 0 BREAK;
END
-- For those wishing to try this in a non-SQL-Server DBMS: the WITH (NOLOCK)
-- hint is another way of setting the READ UNCOMMITTED isolation level:
-- SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Well, SSMS on SQL Server 2008 R2 quite rapidly catches a bunch of half-overwritten values.
I am curious what the results are in other DBMSes.
I'm a bit confused about transactions vs. locking tables to ensure database integrity, making sure a SELECT and an UPDATE stay in sync and no other connection interferes. I need to:
SELECT * FROM table WHERE (...) LIMIT 1
if (condition passes) {
// Update row I got from the select
UPDATE table SET column = "value" WHERE (...)
... other logic (including INSERT some data) ...
}
I need to ensure that no other queries interfere by performing the same SELECT (reading the 'old value') before this connection finishes updating the row.
I know I can default to LOCK TABLES table to make sure only one connection does this at a time, and unlock when I'm done, but that seems like overkill. Would wrapping this in a transaction do the same thing (ensuring no other connection attempts the same process while another is still processing)? Or would a SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE be better?
Locking tables prevents other DB users from affecting the rows/tables you've locked. But locks, in and of themselves, will NOT ensure that your logic comes out in a consistent state.
Think of a banking system. When you pay a bill online, there are at least two accounts affected by the transaction: your account, from which the money is taken, and the receiver's account, into which the money is transferred. And the bank's account, into which they'll happily deposit all the service fees charged on the transaction. Given (as everyone knows these days) that banks are extraordinarily stupid, let's say their system works like this:
$balance = "GET BALANCE FROM your ACCOUNT";
if ($balance < $amount_being_paid) {
charge_huge_overdraft_fees();
}
$balance = $balance - $amount_being_paid;
UPDATE your ACCOUNT SET BALANCE = $balance;

$balance = "GET BALANCE FROM receiver ACCOUNT";
charge_insane_transaction_fee();
$balance = $balance + $amount_being_paid;
UPDATE receiver ACCOUNT SET BALANCE = $balance;
Now, with no locks and no transactions, this system is vulnerable to various race conditions, the biggest of which is multiple payments being performed on your account, or the receiver's account, in parallel. While your code has retrieved your balance and is doing the huge_overdraft_fees() and whatnot, it's entirely possible that some other payment will be running the same type of code in parallel. It'll retrieve your balance (say, $100), do its transactions (take out the $20 you're paying, and the $30 they're screwing you over with), and now both code paths have two different balances: $80 and $70. Depending on which one finishes last, you'll end up with either of those two balances in your account, instead of the $50 you should have ended up with ($100 - $20 - $30). In this case, "bank error in your favor".
Now, let's say you use locks. Your bill payment ($20) hits the pipe first, so it wins and locks your account record. Now you've got exclusive use, and can deduct the $20 from the balance, and write the new balance back in peace... and your account ends up with $80 as is expected. But... uhoh... You try to go update the receiver's account, and it's locked, and locked longer than the code allows, timing out your transaction... We're dealing with stupid banks, so instead of having proper error handling, the code just pulls an exit(), and your $20 vanishes into a puff of electrons. Now you're out $20, and you still owe $20 to the receiver, and your telephone gets repossessed.
So... enter transactions. You start a transaction, you debit your account $20, you try to credit the receiver with $20... and something blows up again. But this time, instead of exit(), the code can just do rollback, and poof, your $20 is magically added back to your account.
In the end, it boils down to this:
Locks keep anyone else from interfering with any database records you're dealing with. Transactions keep any "later" errors from interfering with "earlier" things you've done. Neither alone can guarantee that things work out ok in the end. But together, they do.
in tomorrow's lesson: The Joy of Deadlocks.
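To make the bank example concrete, here is a hedged sketch of the two combined, in MySQL/InnoDB syntax (the table name and ids are invented):

START TRANSACTION;
-- Lock both account rows so a parallel payment cannot read stale balances:
SELECT balance FROM accounts WHERE id IN (1, 2) FOR UPDATE;
UPDATE accounts SET balance = balance - 20 WHERE id = 1;  -- payer
UPDATE accounts SET balance = balance + 20 WHERE id = 2;  -- receiver
COMMIT;  -- or ROLLBACK on any error, so the $20 never vanishes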
I started to research this same topic for the same reasons as you indicated in your question. I was confused by the answers given on SO because they were partial answers that didn't provide the big picture. After reading a couple of documentation pages from different RDBMS providers, these are my takeaways:
TRANSACTIONS
Statements are database commands, mainly to read and modify data in the database. Transactions are the scope of single or multiple statement executions. They provide two things:
A mechanism which guarantees that all statements in a transaction are executed correctly, or, in case of an error, that any data modified by those statements is reverted to its last correct state (i.e. rolled back). What this mechanism provides is called atomicity.
A mechanism which guarantees that concurrent read statements can view the data without the occurrence of some or all of the phenomena described below.
Dirty read: A transaction reads data written by a concurrent uncommitted transaction.
Nonrepeatable read: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read).
Phantom read: A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.
Serialization anomaly: The result of successfully committing a group of transactions is inconsistent with all possible orderings of running those transactions one at a time.
What this mechanism provides is called isolation, and the mechanism that lets statements choose which phenomena must not occur in a transaction is called isolation levels.
As an example, this is the isolation-level/phenomena table for PostgreSQL:
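(The original image is missing; the table from the PostgreSQL documentation reads as follows.)

Isolation level   Dirty read              Nonrepeatable read  Phantom read            Serialization anomaly
Read uncommitted  Allowed, but not in PG  Possible            Possible                Possible
Read committed    Not possible            Possible            Possible                Possible
Repeatable read   Not possible            Not possible        Allowed, but not in PG  Possible
Serializable      Not possible            Not possible        Not possible            Not possible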
If any of the described promises is broken by the database system, the changes are rolled back and the caller is notified about it.
How these mechanisms are implemented to provide these guaranties is described below.
LOCK TYPES
Exclusive locks: When an exclusive lock is acquired over a resource, no other lock (shared or exclusive) can be acquired over that resource. Exclusive locks are always acquired before a modifying statement (INSERT, UPDATE or DELETE) and are released after the transaction finishes. To explicitly acquire exclusive locks before a modifying statement you can use hints like FOR UPDATE (PostgreSQL, MySQL) or UPDLOCK (T-SQL).
Shared locks: Multiple shared locks can be acquired over a resource at the same time. However, a shared lock and an exclusive lock cannot be held over a resource at the same time. Shared locks might or might not be acquired before a read statement (SELECT, JOIN), depending on the database's implementation of isolation levels.
LOCK RESOURCE RANGES
Row: the single row the statement executes on.
Range: a specific range based on the condition given in the statement (SELECT ... WHERE).
Table: the whole table. (Mostly used to prevent deadlocks on big statements like batch updates.)
As an example, here is the default shared-lock behavior of the different isolation levels for SQL Server:
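(The original image is missing; roughly, the default behavior is:)

Read uncommitted: no shared locks are taken for reads.
Read committed: shared locks are taken and released at the end of each statement.
Repeatable read: shared locks are held until the end of the transaction.
Serializable: key-range locks are taken and held until the end of the transaction.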
DEADLOCKS
One of the downsides of the locking mechanism is deadlocks. A deadlock occurs when a statement enters a waiting state because a requested resource is held by another waiting statement, which in turn is waiting for a resource held by yet another waiting statement. In such a case the database system detects the deadlock and terminates one of the transactions. Careless use of locks increases the chance of deadlocks, but they can occur even without human error.
SNAPSHOTS (DATA VERSIONING)
This is an isolation mechanism that provides a statement with a copy of the data taken at a specific time.
Statement start: provides the statement with a copy of the data as of the beginning of the statement's execution. It also supports the rollback mechanism by keeping this data until the transaction finishes.
Transaction start: provides the statement with a copy of the data as of the beginning of the transaction.
All of those mechanisms together provide consistency.
As for optimistic and pessimistic locks: these are just names for the two broad approaches to the concurrency problem.
Pessimistic concurrency control:
A system of locks prevents users from modifying data in a way that affects other users. After a user performs an action that causes a lock to be applied, other users cannot perform actions that would conflict with the lock until the owner releases it. This is called pessimistic control because it is mainly used in environments where there is high contention for data, where the cost of protecting data with locks is less than the cost of rolling back transactions if concurrency conflicts occur.
Optimistic concurrency control:
In optimistic concurrency control, users do not lock data when they read it. When a user updates data, the system checks to see if another user changed the data after it was read. If another user updated the data, an error is raised. Typically, the user receiving the error rolls back the transaction and starts over. This is called optimistic because it is mainly used in environments where there is low contention for data, and where the cost of occasionally rolling back a transaction is lower than the cost of locking data when read.
For example, PostgreSQL by default uses snapshots to make sure the read data hasn't changed, and rolls back if it has, which is an optimistic approach. SQL Server, on the other hand, uses read locks by default to provide these guarantees.
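As an illustration, a common way to apply the optimistic approach in application code is a version column; a minimal sketch (the table and column names are assumptions):

-- Read the row, remembering its version:
SELECT balance, version FROM accounts WHERE id = 1;  -- suppose this returns version = 7

-- Write back only if nobody changed the row in the meantime:
UPDATE accounts
SET balance = 80, version = version + 1
WHERE id = 1 AND version = 7;
-- 0 rows affected means another transaction won the race: re-read and retry.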
The implementation details may differ depending on the database system you choose. However, per the standards, they all need to provide the stated transaction guarantees, in one way or another, using these mechanisms. If you want to know more about the topic or about specific implementation details, below are some useful links.
SQL-Server - Transaction Locking and Row Versioning Guide
PostgreSQL - Transaction Isolation
PostgreSQL - Explicit Locking
MySQL - Consistent Nonlocking Reads
MySQL - Locking
Understanding Isolation Levels (Video)
You want a SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE inside a transaction, as you said, since normally SELECTs, whether in a transaction or not, will not lock a table. Which one you choose depends on whether you want other transactions to be able to read the row while your transaction is in progress.
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html
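A hedged sketch of the pattern (InnoDB assumed; the table, column and condition are placeholders):

START TRANSACTION;
-- Locks the matching row; other FOR UPDATE readers and writers block until COMMIT:
SELECT * FROM table_name WHERE id = 42 FOR UPDATE;
-- Check the condition in application code, then:
UPDATE table_name SET column_name = 'value' WHERE id = 42;
-- ... other logic (including INSERTs) ...
COMMIT;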
START TRANSACTION WITH CONSISTENT SNAPSHOT will not do the trick for you, as other transactions can still come along and modify that row. This is mentioned right at the top of the link below.
If other sessions simultaneously update the same table [...] you may see the table in a state that never existed in the database.
http://dev.mysql.com/doc/refman/5.0/en/innodb-consistent-read.html
Transaction concepts and locks are different things. However, transactions use locks to help them follow the ACID principles.
If you want the table to prevent others from reading/writing at the same point in time as your reads/writes, you need a lock.
If you want to ensure data integrity and consistency, you had better use transactions.
I think you have mixed up the concept of isolation levels in transactions with locks.
Look up transaction isolation levels; SERIALIZABLE should be the level you want.
I had a similar problem when attempting an IF NOT EXISTS ... check followed by an INSERT, which caused a race condition when multiple threads were updating the same table.
I found the solution to the problem here: How to write INSERT IF NOT EXISTS queries in standard SQL
I realise this does not directly answer your question, but the principle of performing the check and insert as a single statement is very useful; you should be able to modify it to perform your update.
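For reference, the linked technique boils down to something like this (table and values invented for illustration):

-- The check and the insert happen in one atomic statement:
INSERT INTO mytable (id, value)
SELECT 42, 'foo' FROM dual
WHERE NOT EXISTS (SELECT 1 FROM mytable WHERE id = 42);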
I'd use a
START TRANSACTION WITH CONSISTENT SNAPSHOT;
to begin with, and a
COMMIT;
to end with.
Anything you do in between is isolated from the other users of your database, provided your storage engine supports transactions (which InnoDB does).
You are confusing locks and transactions. They are two different things in an RDBMS: locks prevent concurrent operations, while transactions focus on data isolation. Check out this great article for clarification and some graceful solutions.
I have an Update query that recalculates -every- column value in exactly one row, each time.
I've been seeing more row-level lock contention, due to these Update queries occurring on the same row.
I'm thinking maybe one solution would be to have subsequent Updates simply preempt any Updates already in progress. Is this possible? Does Oracle support this kind of Update?
To spell out the idea in full:
update query #1 begins, in its own transaction
needs to update row X
acquires lock on row X
update query #2 begins, again in its own transaction
blocks, waiting for query #1 to release the lock on row X.
My thought is: can step 5 simply be "query #1 is aborted, query #2 proceeds"? Or maybe dispense with acquiring the row-level lock in the first place.
I realize this logic would be disastrously wrong should the update query be updating only a subset of columns in a given row. But it's not -- every column gets recalculated, each time.
I'd ask whether a physical table is the right mechanism for whatever you are doing. One factor is how transactions need to be handled: anything that means "don't lock for the duration of the transaction" will run into transactional issues.
There are a couple of non-transactional options:
Global context values might be useful; it depends on whether you are on RAC, and on how you handle persistence after a restart.
Another option is DBMS_PIPE where you'd have a background process maintaining that table and the separate sessions send messages to that process rather than update the table directly.
Queuing is another thought.
If you just need to reduce the time the record is locked, autonomous transactions could be the answer.
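For illustration, a hedged sketch of that last idea (the procedure, table and column names are invented):

CREATE OR REPLACE PROCEDURE refresh_row(p_id NUMBER, p_val NUMBER) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own transaction
BEGIN
  UPDATE stats_row SET val = p_val WHERE id = p_id;
  COMMIT;  -- releases the row lock immediately, independently of the caller
END;
/

Note that if the calling session already holds a lock on that row, the autonomous transaction will block against its own session, so this needs care.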
It's possible to do the opposite of what you're asking: have query 2 fail if query 1 is in progress, using SELECT FOR UPDATE and NOWAIT.
Alternatively, you could try to see if you can get the desired effect by adjusting the isolation level, but I do not recommend this without extensive testing, as you don't know what knock-on effects it may have.
Oracle's UPDATE doesn't support any locking hints.
But the OraFAQ forum suggests this hacky workaround:
DECLARE
  x CHAR(1);
BEGIN
  -- Try to lock the row first; NOWAIT raises ORA-00054 if another
  -- session already holds the lock.
  SELECT 'x' INTO x
  FROM tablea
  WHERE ...                      -- your update condition
  FOR UPDATE OF cola NOWAIT;

  UPDATE tablea
  SET cola = value
  WHERE ...;                     -- your update condition
EXCEPTION
  WHEN OTHERS THEN
    NULL;                        -- handle the exception (e.g. row already locked)
END;
I have a single process that queries a table for records where PROCESS_IND = 'N', does some processing, and then updates the PROCESS_IND to 'Y'.
I'd like to allow for multiple instances of this process to run, but don't know what the best practices are for avoiding concurrency problems.
Where should I start?
The pattern I'd use is as follows:
Create columns "lockedby" and "locktime" which are a thread/process/machine ID and timestamp respectively (you'll need the machine ID when you split the processing between several machines)
Each task would do a query such as:
UPDATE taskstable SET lockedby=(my id), locktime=now() WHERE lockedby IS NULL ORDER BY ID LIMIT 10
Where 10 is the "batch size".
Then each task does a SELECT to find out which rows it has "locked" for processing, and processes those
After each row is complete, you set lockedby and locktime back to NULL
All this is done in a loop for as many batches as exist.
A cron job or scheduled task periodically resets the "lockedby" of any row whose locktime is too long ago, as those rows were presumably claimed by a task that hung or crashed. Someone else will then pick them up.
The LIMIT 10 is MySQL-specific, but other databases have equivalents. The ORDER BY is important to avoid the query being nondeterministic.
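Putting the steps together, a hedged sketch in MySQL syntax (the worker id, batch size and timeout are placeholders):

-- Claim a batch of unclaimed rows:
UPDATE taskstable SET lockedby = 'worker-1', locktime = NOW()
WHERE lockedby IS NULL ORDER BY id LIMIT 10;

-- See which rows we claimed, then process them:
SELECT * FROM taskstable WHERE lockedby = 'worker-1';

-- Release each row once it is done:
UPDATE taskstable SET lockedby = NULL, locktime = NULL WHERE id = 123;

-- Periodic reaper for hung/crashed workers:
UPDATE taskstable SET lockedby = NULL, locktime = NULL
WHERE locktime < NOW() - INTERVAL 10 MINUTE;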
Although I understand the intention, I would disagree with going to row-level locking immediately. It will hurt your response times and may actually make your situation worse. If, after testing, you are seeing concurrency issues with APL, you should move iteratively: try "datapage" locking first!
To answer this question properly, more information would be needed about the table structure and the indexes involved, but to explain further:
DOL (datarow) locking uses many more locks than allpage/page-level locking. The overhead of managing all those locks, and the resulting decrease in available memory due to requests for more lock structures within the cache, will reduce performance and counter any gains you may get from the more concurrent approach.
Test your approach first without the change, on APL (allpage locking, the default); then, if issues appear, move to DOL (datapages first, then datarows). Keep in mind that when you switch a table to DOL, all responses on that table become slightly slower, the table uses more space, and it becomes more prone to fragmentation, which requires regular maintenance.
So, in short: don't move to datarows straight off. Try your concurrency approach first; if there are issues, use datapage locking first, and only as a last resort datarows.
You should enable row level locking on the table with:
CREATE TABLE mytable (...) LOCK DATAROWS
Then you:
Begin the transaction
Select your row with FOR UPDATE option (which will lock it)
Do whatever you want.
No other process can do anything to this row until the transaction ends.
P.S. Some mention overhead problems that can result from using LOCK DATAROWS.
Yes, there is overhead, though I'd hardly call it a problem for a table like this.
But if you switch to DATAPAGES, locking one row locks its whole page (2K by default), and processes whose rows reside in the same page will not be able to run concurrently.
If we are talking about a table with a dozen rows locked at once, there will hardly be any noticeable performance drop.
Process concurrency is of much more importance for design like that.
The most obvious way is locking. If your database doesn't have locks, you could implement them yourself by adding a "Locked" field.
One way to reduce the contention is to randomize access to unprocessed items: instead of everyone competing for the first item, workers pick items at random.
Convert the procedure to a single SQL statement and process multiple rows as a single batch. This is how databases are supposed to work.