SQL Server - Multiple Processes Inserting into a Table

I have a couple of stored procs (Add & Remove) which run some selects, inserts, deletes & updates on some tables. These seem fine.
Each of these procs uses a TRANSACTION.
I begin the transaction before I make any changes to data, and near the end of the proc I do:
IF @@TRANCOUNT > 0
    COMMIT TRANSACTION @transName;
Within the Add and Remove procs, and within the TRANSACTION I call another stored procedure (Adjust) to update a table which keeps a running total of values.
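For context, the overall shape of Add/Remove is roughly this (a hedged sketch; the parameter names and transaction name are assumed):
DECLARE @transName VARCHAR(32) = 'tranAdd';
BEGIN TRANSACTION @transName;

-- ... selects, inserts, deletes and updates ...

-- nested call to keep the running total
EXEC dbo.Adjust @ProductId, @LocationId, @BasketId, @Value;

IF @@TRANCOUNT > 0
    COMMIT TRANSACTION @transName;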
I am finding that this is getting out of sync.
Here is the body of that proc:
INSERT INTO L2 (ProductId, LocationId, POId, StockMoveId, BasketId, OrderId, AdjusterValue, CurrentValue)
SELECT TOP 1
    @ProductId, @LocationId, NULL, NULL, @BasketId, NULL, @Value, (CurrentValue + @Value)
FROM L2
WHERE LocationId = @LocationId
  AND ProductId = @ProductId
ORDER BY Id DESC
ProductId, LocationId, StockMoveId and OrderId are all foreign keys to the relevant tables but allow NULLs, so only the appropriate one needs to be populated with an actual value.
Here is an image showing an example of where it goes wrong:
The 19 should have been added to 324, making a new total of 343; however, as you can see, it seems to have been added to the 300, and 319 is inserted.
Questions:
Is this actually inside the transaction that was begun in the calling stored proc?
How can I prevent this situation?
I've tried using MAX to get the right row and speed things up, but the execution plan on that isn't as cost-effective as the simple TOP. Id, by the way, is an IDENTITY column and the primary key.
Do I need to lock the table? If I do, will the other processes calling Adjust wait, or will they error?
Any assistance much appreciated.
More info:
I have been experimenting, and it would seem the only solution that consistently works as desired is to have the Id column as a plain INT field and simply increment it myself on the INSERT.
This doesn't sit well with me, as it doesn't make sense to me why the IDENTITY column doesn't seem to cope.
I've tried the posted IDENTITY column solution, sequences, and incrementing the Id myself.

After lots of searching and experimenting, I SEEM to have a solution that is now very robust.
I now have the Id as a plain INT column and I manage it myself by taking MAX + 1 for each new insert.
I now wrap the body of the Adjust proc in its own transaction and use the following to get the next Id:
DECLARE @trxNam VARCHAR(10) = 'tranNextId';
DECLARE @newId INT;
DECLARE @currentLevelId INT;
BEGIN TRANSACTION @trxNam;
SELECT @newId = MAX(Id) + 1 FROM L2 WITH (UPDLOCK, SERIALIZABLE);
I then do my insert using @newId and COMMIT the named transaction.
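Put together, the Adjust body now looks roughly like this (a sketch; the column list is taken from the earlier INSERT, and ISNULL is added to cover an empty table):
DECLARE @trxNam VARCHAR(10) = 'tranNextId';
DECLARE @newId INT;

BEGIN TRANSACTION @trxNam;

-- take the next id under an update lock held to commit
SELECT @newId = ISNULL(MAX(Id), 0) + 1
FROM L2 WITH (UPDLOCK, SERIALIZABLE);

INSERT INTO L2 (Id, ProductId, LocationId, POId, StockMoveId, BasketId, OrderId, AdjusterValue, CurrentValue)
SELECT TOP 1
    @newId, @ProductId, @LocationId, NULL, NULL, @BasketId, NULL, @Value, (CurrentValue + @Value)
FROM L2
WHERE LocationId = @LocationId
  AND ProductId = @ProductId
ORDER BY Id DESC;

COMMIT TRANSACTION @trxNam;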
I have a scenario set up where a number of Win32 apps call my API hundreds of times; it was consistently failing due to intermittent primary key violations.
Now it doesn't.
Happy days!
Still, I'm looking to see if I can simply have an identity column again and use the transaction in the Adjust proc... that would be cleaner, I think.
I found an article that led me to the solution....

Related

Best way to generate a UniqueID for a group of rows?

This is very simplified, but I have a web service array of items that looks something like this:
[12345, 34131, 13431]
I am going to loop through the array and insert the items one by one into a database, and I want the table to look like this, with the values tied to a unique identifier showing that they were inserted as a group:
1 12345
1 34131
1 13431
and then if another array came along, it would insert all of its numbers with unique ID 2... basically this is to keep track of groups.
There will be multiple processes executing this, potentially at the same time, so what would be the best way to generate the unique identifier and also ensure that two processes can't use the same one?
You should fix your data model. It is missing an entity; call it batches.
create table batches (
    batch_id int identity(1, 1) primary key,
    created_at datetime default getdate()
);
You might have other information as well.
And your table should have a foreign key reference, batch_id to batches.
Then your code should do the following:
Insert a new row into batches. A new batch has begun.
Fetch the id that was just created.
Use this id for the rows that you want to insert.
Although you could do this with a sequence, a separate table makes more sense to me. You are tying a bunch of rows together into something. That something should be represented in the data model.
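A minimal sketch of those three steps (the items table and its columns are assumed for illustration):
DECLARE @batch_id INT;

-- A new batch has begun
INSERT INTO batches DEFAULT VALUES;

-- Fetch the id that was just created
SET @batch_id = SCOPE_IDENTITY();

-- Use it for the rows you want to insert
INSERT INTO items (batch_id, value)
SELECT @batch_id, v
FROM (VALUES (12345), (34131), (13431)) AS t(v);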
You can declare this:
DECLARE @UniqueID UNIQUEIDENTIFIER = NEWID();
and use it as your unique identifier when you insert your batch.
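For example (a sketch; here the assumed items table's batch_id column would be a uniqueidentifier):
DECLARE @UniqueID UNIQUEIDENTIFIER = NEWID();

INSERT INTO items (batch_id, value)
SELECT @UniqueID, v
FROM (VALUES (12345), (34131), (13431)) AS t(v);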
Since it isn't a primary key, an identity column is out. Honestly, I'd probably just track it using a separate id sequence table: create a proc that grabs the next available ID and then increments it. If you open a transaction at the beginning of the proc, it should prevent a second thread from getting the number until the first thread is done with its update.
Something like:
CREATE PROCEDURE getNextID
    @NextNumber INT OUTPUT
    ,@id_type VARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @NextValue TABLE (NextNumber int);

    BEGIN TRANSACTION;

    -- increment the stored value and capture the new number
    UPDATE id_sequence
    SET last_used_number = ISNULL(last_used_number, 0) + 1
    OUTPUT inserted.last_used_number INTO @NextValue (NextNumber)
    WHERE id_type = @id_type;

    SELECT @NextNumber = NextNumber FROM @NextValue;

    COMMIT TRANSACTION;
END
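A hedged usage example (assuming an id_sequence row with id_type = 'order' already exists):
DECLARE @n INT;
EXEC getNextID @NextNumber = @n OUTPUT, @id_type = 'order';
SELECT @n AS NextNumber;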

SQL number generation in concurrent environment (Transaction isolation level)

I am working with an application that generates invoice numbers (sequentially, based on a few parameters), and so far it has been using a trigger with a serialized transaction. Because the trigger is rather "heavy", it manages to time out the execution of the insert query.
I'm now working on a solution to that problem. So far I have a stored procedure that does the insert, and after the insert a transaction with isolation level SERIALIZABLE (which, by the way, applies to that transaction only, or should I set it back after the transaction has been committed?) that:
gets the number
if not found, inserts into that table; if found, updates the number (increments it)
commits the transaction
I'm wondering whether there's a better way to ensure the number is used only once and gets incremented while the table is locked (only the number table gets locked, right?).
I read about sp_getapplock; would that be a better way to achieve my goal?
I would optimize the routine for the update (and handle "insert if not there" separately), at which point it would be:
declare @number int;

update tbl
set @number = number, number += 1
where year = @year and month = @month and office = @office and type = @type;
You don't need any specific locking hints or isolation levels; SQL Server will ensure no two transactions read the same value before incrementing.
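The separate "insert if not there" step could look like this (a sketch; it assumes a unique key on (year, month, office, type) to catch the race where two sessions both find the row missing):
IF @number IS NULL
BEGIN
    -- seed the row: this caller takes number 1, the stored next value is 2
    INSERT INTO tbl (year, month, office, type, number)
    VALUES (@year, @month, @office, @type, 2);
    SET @number = 1;
END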
If you'd like to avoid handling the insert separately, you can:
merge into tbl
using (values (@year, @month, @office, @type)) as v(year, month, office, type)
on tbl.year = v.year and tbl.month = v.month and tbl.office = v.office and tbl.type = v.type
when not matched by target then
    insert (year, month, office, type, number) values (v.year, v.month, v.office, v.type, 1)
when matched then
    update set @number = tbl.number, tbl.number += 1
;
Logically this should provide the same guard against race conditions as the update, but I don't remember where the proof is.
If you first insert and then update, you have a time window where an invalid number is set and can be observed. Further, if the second transaction fails, which can always happen, you have inconsistent data.
Try this:
Take a fresh number in tran 1.
Insert in tran 2 with the number that was already taken.
That way you might burn a number, but there will never be inconsistent data.
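A sketch of that two-step approach (the table and column names are assumed):
DECLARE @number INT;

-- Tran 1: reserve the number and commit immediately
BEGIN TRAN;
UPDATE invoice_numbers
SET @number = next_number, next_number += 1
WHERE office = @office;
COMMIT TRAN;

-- Tran 2: use the reserved number; if this fails, the number is simply burned
BEGIN TRAN;
INSERT INTO invoices (office, invoice_number)
VALUES (@office, @number);
COMMIT TRAN;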

Conditionally inserting records into a table in multithreaded environment based on a count

I am writing a T-SQL stored procedure that conditionally adds a record to a table only if the number of similar records is below a certain threshold, 10 in the example below. The problem is that this will be run from a web application, so it will run on multiple threads, and I need to ensure that the table never has more than 10 similar records.
The basic gist of the procedure is:
BEGIN
    DECLARE @c INT;

    SELECT @c = COUNT(*)
    FROM foo
    WHERE bar = @a_param;

    IF @c < 10
        INSERT INTO foo (bar)
        VALUES (@a_param);
END
I think I could solve any potential concurrency problems by replacing the select statement with:
SELECT @c = COUNT(*) FROM foo WITH (TABLOCKX, HOLDLOCK) WHERE bar = @a_param
But I am curious whether there are any methods other than lock hints for managing concurrency problems in T-SQL.
One option would be to use the sp_getapplock system stored procedure. You can place your critical-section logic in a transaction and use the built-in locking of SQL Server to ensure synchronized access.
Example:
CREATE PROC MyCriticalWork (@MyParam INT)
AS
DECLARE @LockRequestResult INT;
SET @LockRequestResult = 0;

DECLARE @MyTimeoutMilliseconds INT;
SET @MyTimeoutMilliseconds = 5000; -- wait only five seconds max, then time out

BEGIN TRAN;
EXEC @LockRequestResult = sp_getapplock 'MyCriticalWork', 'Exclusive', 'Transaction', @MyTimeoutMilliseconds;
IF (@LockRequestResult >= 0)
BEGIN
    /*
        DO YOUR CRITICAL READS AND WRITES HERE
    */
    -- committing the transaction releases the lock
    COMMIT TRAN;
END
ELSE
    ROLLBACK TRAN;
Use SERIALIZABLE. By definition it provides the illusion that your transaction is the only one running. Be aware that this might result in blocking and deadlocking. In fact, this SQL code is a classic candidate for deadlocking: two transactions might first read a set of rows, then both will try to modify that set. Locking hints are the classic way of solving that problem. Retrying also works.
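A sketch of the original procedure under SERIALIZABLE (be prepared to retry on deadlock, as noted):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;

DECLARE @c INT;
SELECT @c = COUNT(*)
FROM foo
WHERE bar = @a_param;

IF @c < 10
    INSERT INTO foo (bar) VALUES (@a_param);

COMMIT TRAN;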
As stated in the comment, why are you trying to insert on multiple threads? You cannot write to a table faster on multiple threads.
But you don't need a DECLARE:
insert into [Table_1] (ID, fname, lname)
select 3, 'fname', 'lname'
from [Table_1]
where ID = 3
having COUNT(*) < 10
If you need to take a lock, then do so.
The data is not 3NF.
You should start any design with a proper data model.
Why rule out a table lock? That could very well be the best approach.
Really, what are the chances? Even without a lock you would have to have two sessions at a count of 9 submit at exactly the same time, and even then it would stop at 11. Is 10 an absolute hard number?

Possible to implement a manual increment with just simple SQL INSERT?

I have a primary key that I don't want to auto-increment (for various reasons), so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
    INSERT INTO Table1 (id, data_field)
    VALUES ((SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors with:
"Subqueries are not allowed in this context. Only scalar expressions are allowed."
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like the following, and this might cause deadlocks, so be very sure what you are trying to accomplish here:
DECLARE @id int
BEGIN TRAN
    SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
    INSERT INTO Table1 (id, data_field)
    VALUES (@id, '[blob of data]')
COMMIT TRAN
To explain the collision issue, I have provided some code.
First create this table and insert one row:
CREATE TABLE Table1 (id int primary key not null, data_field char(100))
GO
INSERT Table1 VALUES (1, '[blob of data]')
GO
Now open up two query windows and run this in both at the same time:
declare @i int
set @i = 1
while @i < 10000
begin
    BEGIN TRAN
        INSERT INTO Table1 (id, data_field)
        SELECT MAX(id) + 1, '[blob of data]' FROM Table1
    COMMIT TRAN;
    set @i = @i + 1
end
You will see a bunch of errors like this:
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]'
FROM (SELECT MAX(id) + 1 AS id FROM Table1) tbl
I wouldn't recommend doing it that way, though, for any number of reasons (performance, transaction safety, etc.).
It could be because there are no records, so the subquery is returning NULL. Try:
INSERT INTO tblTest(RecordID, Text)
VALUES ((SELECT ISNULL(MAX(RecordID), 0) + 1 FROM tblTest), 'asdf')
I don't know if somebody is still looking for an answer, but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()

-- Real insert
BEGIN TRAN LockTran
    -- Lock an object to block simultaneous calls
    UPDATE Lock WITH (TABLOCK)
    SET LockID = LockID

    INSERT INTO Test
    SELECT ISNULL(MAX(T.Value), 0) + 1
    FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps:
BEGIN TRAN

-- increment the number
UPDATE dbo.NumberTable
SET number = number + 1

-- find out what the incremented number is
SELECT @number = number
FROM dbo.NumberTable

-- use the number
INSERT INTO dbo.MyTable ... -- using @number

COMMIT -- or ROLLBACK
This causes simultaneous transactions to process in single file, as each concurrent transaction waits because NumberTable is locked. As soon as a waiting transaction gets the lock, it increments the current value and locks it from the others. That current value is the last number used, and if a transaction is rolled back, the NumberTable update is also rolled back, so there are no gaps.
Hope that helps.
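A compact variant of the increment-then-read uses T-SQL's compound assignment so both happen in one statement (a sketch against the same NumberTable):
DECLARE @number INT;

BEGIN TRAN;

-- assign and increment in a single atomic statement;
-- @number receives the new, incremented value
UPDATE dbo.NumberTable
SET @number = number = number + 1;

-- ... insert into dbo.MyTable using @number here ...

COMMIT TRAN;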
Another way to force single-file execution is to use a SQL application lock. We have used that approach for longer-running processes, like synchronizing data between systems, so only one synchronizing process can run at a time.
If you're doing it in a trigger, you could make sure it's an INSTEAD OF trigger and do it in a couple of statements:
DECLARE @next INT
SET @next = (SELECT MAX(id) + 1 FROM Table1)

INSERT INTO Table1 (id, data_field)
SELECT @next, datablob
FROM inserted
The only thing you'd have to be careful about is concurrency: if two rows are inserted at the same time, they could attempt to use the same value for @next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing without an IDENTITY (auto-increment) column; it makes me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID), and the trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) and to elevate the lock type from a shared lock to an update lock (IIRC).
you can use the idea above, but add a table whose purpose is to store the last, maximum value inserted into the table. Once that table is set up, you no longer have to do a SELECT MAX(ID) every time; you simply increment the value in the table. This is safe provided you use appropriate locking (as discussed). Again, this avoids repeated table scans on every INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used (see the sketch after this list). This is only beneficial if you might need to do database merging.
use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise), but sprocs are preferred over dynamic SQL (in the context of applications) and are often much more performant.
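For the sequential-GUID option above, a minimal sketch (NEWSEQUENTIALID() can only be used as a column default):
CREATE TABLE Table1 (
    id UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Table1_id DEFAULT NEWSEQUENTIALID(),
    data_field VARCHAR(100),
    CONSTRAINT PK_Table1 PRIMARY KEY (id)
);

-- no id needs to be supplied on insert
INSERT INTO Table1 (data_field) VALUES ('[blob of data]');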
Sorry about rambling. Hope that helps.
How about creating a separate table to maintain the counter? It has better performance than MAX(id), as it will be O(1); MAX(id) is at best O(log n), depending on the implementation.
And then when you need to insert, simply lock the counter table to read and increment the counter. Then you can release the lock and insert into your table with the incremented counter value.
Have a separate table where you keep your latest ID, and for every transaction get a new one.
It may be a bit slower, but it should work.
DECLARE @NEWID INT
BEGIN TRAN
    UPDATE [TABLE] SET ID = ID + 1
    SELECT @NEWID = ID FROM [TABLE]
COMMIT TRAN
PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);

-- Real insert
DECLARE @newIncrement INT;
UPDATE increment
SET @newIncrement = val,
    val = val + 1;

INSERT INTO Table1 (id, data_field)
SELECT @newIncrement, 'some data';
begin tran
declare @nextId int
set @nextId = (select MAX(id) + 1 from Table1)
insert into Table1 (id, data_field) values (@nextId, '[blob of data]')
commit;
But perhaps a better approach would be using a scalar function getNextId('table1')
Any critiques of this? It works for me.
DECLARE @m_NewRequestID INT
    , @m_IsError BIT = 1
    , @m_CatchEndless INT = 0

WHILE @m_IsError = 1
BEGIN
    BEGIN TRY
        SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)

        INSERT INTO Requests ( RequestID
            , RequestName
            , Customer
            , Comment
            , CreatedFromApplication)
        SELECT RequestID = @m_NewRequestID
            , RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
            , Customer = @Customer
            , Comment = [Description]
            , CreatedFromApplication = @CreatedFromApplication
        FROM RequestPatterns
        WHERE PatternID = @PatternID

        SET @m_IsError = 0
    END TRY
    BEGIN CATCH
        SET @m_IsError = 1
        SET @m_CatchEndless = @m_CatchEndless + 1;
        IF @m_CatchEndless > 1000
            THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
    END CATCH
END
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT for other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
    id + 1, '[blob of data]'
FROM
    Table1
ORDER BY
    [id] DESC;

atomic compare and swap in a database

I am working on a work-queueing solution. I want to query a given row in the database where a status column has a specific value, modify that value, and return the row, and I want to do it atomically, so that no other query will see the row:
begin transaction
select * from table where pk = x and status = y
update table set status = z where pk = x
commit transaction
--(the row would be returned)
it must be impossible for two or more concurrent queries to return the row (only one query execution should see the row while its status = y) -- sort of like an interlocked CompareAndExchange operation.
I know the code above runs (for SQL Server), but will the swap always be atomic?
I need a solution that will work for both SQL Server and Oracle.
Is pk the primary key? Then this is a non-issue: if you already know the primary key, there is no sport. If pk is the primary key, then this begs the obvious question: how do you know the pk of the item to dequeue?
The problem is when you don't know the primary key and want to dequeue the next 'available' row (i.e. status = y) and mark it as dequeued (delete it or set status = z).
The proper way to do this is to use a single statement. Unfortunately the syntax differs between Oracle and SQL Server. The SQL Server syntax is:
update top (1) [<table>]
set status = z
output DELETED.*
where status = y;
I'm not familiar enough with Oracle's RETURNING clause to give an example similar to SQL Server's OUTPUT one.
Other SQL Server solutions require lock hints on the SELECT (WITH (UPDLOCK)) to be correct.
In Oracle the preferred avenue is to use FOR UPDATE, but that does not work in SQL Server, where FOR UPDATE is to be used in conjunction with cursors.
In any case, the behavior you have in the original post is incorrect: multiple sessions can all select the same row(s) and even all update it, returning the same dequeued item(s) to multiple readers.
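The lock-hint variant mentioned above would look something like this (a sketch; READPAST is added so concurrent readers skip rows that are already locked, and an int pk is assumed):
DECLARE @pk INT;

BEGIN TRAN;

SELECT TOP (1) @pk = pk
FROM [<table>] WITH (UPDLOCK, READPAST)
WHERE status = y;

UPDATE [<table>]
SET status = z
WHERE pk = @pk;

COMMIT TRAN;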
As a general rule, to make an operation like this atomic you'll need to ensure that you take an exclusive (or update) lock when you perform the select, so that no other transaction can read the row before your update.
The typical syntax for this is something like:
select * from table where pk = x and status = y for update
but you'd need to look it up to be sure.
I have some applications that follow a similar pattern. There is a table like yours that represents a queue of work; it has two extra columns, thread_id and thread_date. When the app asks for work from the queue, it submits a thread id. A single update statement then updates all applicable rows, setting the thread id column to the submitted id and the thread date column to the current time. After that update, it selects all rows with that thread id. This way you don't need to declare an explicit transaction; the "locking" occurs in the initial update.
The thread_date column is used to ensure that you do not end up with orphaned work items. What happens if items are pulled from the queue and then your app crashes? You have to be able to try those work items again, so you might grab all items off the queue that have not been marked completed but were assigned to a thread with a thread date in the distant past. It's up to you to define "distant." A sketch of this pattern follows.
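A sketch of that pattern (the table and column names are assumed; the 30-minute window for "distant past" is illustrative):
-- claim available or orphaned work in a single atomic UPDATE
UPDATE work_queue
SET thread_id = @thread_id,
    thread_date = GETDATE()
WHERE completed = 0
  AND (thread_id IS NULL
       OR thread_date < DATEADD(MINUTE, -30, GETDATE()));

-- then read back what this thread claimed
SELECT *
FROM work_queue
WHERE thread_id = @thread_id
  AND completed = 0;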
Try this. The validation is in the UPDATE statement.
Code
IF EXISTS (SELECT * FROM sys.tables WHERE name = 't1')
    DROP TABLE dbo.t1
GO

CREATE TABLE dbo.t1 (
    ColID int IDENTITY,
    [Status] varchar(20)
)
GO

DECLARE @id int
DECLARE @initialValue varchar(20)
DECLARE @newValue varchar(20)

SET @initialValue = 'Initial Value'
INSERT INTO dbo.t1 ([Status]) VALUES (@initialValue)
SELECT @id = SCOPE_IDENTITY()

SET @newValue = 'Updated Value'

BEGIN TRAN
UPDATE dbo.t1
SET
    @initialValue = [Status],
    [Status] = @newValue
WHERE ColID = @id
  AND [Status] = @initialValue

SELECT ColID, [Status] FROM dbo.t1
COMMIT TRAN

SELECT @initialValue AS '@initialValue', @newValue AS '@newValue'
Results
ColID Status
----- -------------
1 Updated Value
#initialValue #newValue
------------- -------------
Initial Value Updated Value