Now, I'm trying to increment a sequential number in SQL Server, starting from a number provided by users.
I have a problem when multiple users insert a row at the same time with the same number.
I tried updating the number the user provided into a temporary table, and I expected that when I update the same table with the same condition, SQL Server would block any modification to that row until the current update finished, but it does not.
Here is the update statement I used:
UPDATE GlobalParam
SET ValueString = (CAST(ValueString as bigint) + 1)
WHERE Id = 'xxxx'
Could you tell me how to force other update commands to wait until the current one has finished?
Here is my entire command:
DECLARE @Result bigint;

UPDATE GlobalParam SET ValueString = (SELECT MAX(Code) FROM Item)

DECLARE @SelectTopStm nvarchar(MAX);
DECLARE @ExistRow int

SET @SelectTopStm = 'SELECT @ExistRow = 1 FROM (SELECT TOP 1 Code FROM Item WHERE Code = ''999'') temp'
EXEC sp_executesql @SelectTopStm, N'@ExistRow int output', @ExistRow OUTPUT

IF (@ExistRow is not null)
BEGIN
    DECLARE @MaxValue bigint
    DECLARE @ReturnUpdateTbl table (ValueString nvarchar(max));

    UPDATE GlobalParam SET ValueString = (CAST(ValueString as bigint) + 1)
    OUTPUT inserted.ValueString INTO @ReturnUpdateTbl
    WHERE [Id] = '333A8E1F-16DD-E411-8280-D4BED9D726B3'

    SELECT TOP 1 @MaxValue = CAST(ValueString as bigint) FROM @ReturnUpdateTbl
    SET @Result = @MaxValue
END
ELSE
BEGIN
    SET @Result = 999 -- the original read SET @Output = 999, but @Output is never declared; @Result appears intended
END
END
I wrote the code above as a stored procedure.
Here is the actual code used when I insert one Item:
DECLARE @IncrementResult BIGINT
EXEC IncrementNumberUnique
    , (some parameters)..
    ,@Result = @IncrementResult OUTPUT

INSERT INTO ITEM (Id, Code) VALUES ('xxxx', @IncrementResult)
I created 3 threads and ran them at the same time.
The returned result:
Id Code
1 999
2 1000
3 1000
Thanks
If I understood your requirements correctly, try the ROWLOCK hint to tell the optimizer to lock the rows one by one as the update needs them.
UPDATE GlobalParam WITH(ROWLOCK)
SET ValueString = (CAST(ValueString as bigint) + 1)
WHERE Id = 'xxxx'
By default SQL Server uses READ COMMITTED locking, which releases read locks as soon as the read operation completes. Once the UPDATE statement below is complete, all read locks are released from the table Item.
UPDATE GlobalParam SET ValueString = (SELECT MAX(Code) FROM Item)
Since your INSERT into Item is outside the scope of your procedure, you can run the thread in the SERIALIZABLE isolation level. Something like this:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
DECLARE @IncrementResult BIGINT
EXEC IncrementNumberUnique
    , (some parameters)..
    ,@Result = @IncrementResult OUTPUT

INSERT INTO ITEM (Id, Code) VALUES ('xxxx', @IncrementResult)
Changing the isolation level to SERIALIZABLE will increase blocking and contention for resources on the Item table.
To learn more about isolation levels, refer to the SQL Server documentation on SET TRANSACTION ISOLATION LEVEL.
You should also look into identity columns and remove this kind of manual computation of incremental values if possible.
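For illustration, here is a minimal sketch of the identity-based approach; the table definition and starting value are hypothetical, not taken from the question:
CREATE TABLE Item
(
    Id   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    Code BIGINT IDENTITY(1000, 1) NOT NULL  -- server-assigned: may leave gaps, but never duplicates
);

-- Each concurrent INSERT gets its own Code; no manual increment, no locking logic.
INSERT INTO Item (Id) VALUES (NEWID());
SELECT SCOPE_IDENTITY() AS NewCode;  -- the Code generated for this session's insert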
Related
I have the following sql:
UPDATE Customer SET Count=1 WHERE ID=1 AND Count=0
SELECT @@ROWCOUNT
I need to know if this is guaranteed to be atomic.
If 2 users try this simultaneously, will only one succeed and get a return value of 1? Do I need to use a transaction or something else in order to guarantee this?
The goal is to get a unique 'Count' for the customer. Collisions in this system will almost never happen, so I am not concerned with the performance if a user has to query again (and again) to get a unique Count.
EDIT:
The goal is to not use a transaction if it is not needed. Also this logic is ran very infrequently (up to 100 per day), so I wanted to keep it as simple as possible.
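For what it's worth, one way to stay transaction-free is to fold the increment and the read into a single statement with an OUTPUT clause; this is just a sketch against the question's Customer table:
DECLARE @NewCount TABLE (Count INT);

-- Increment and capture the post-update value in one atomic statement.
UPDATE Customer
SET Count = Count + 1
OUTPUT inserted.Count INTO @NewCount
WHERE ID = 1;

SELECT Count FROM @NewCount;  -- the unique Count reserved by this caller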
It may depend on the SQL server you are using; however, for most, the answer is yes. I guess you are implementing a lock.
Using SQL Server (v 11.0.6020), I can confirm that this is indeed an atomic operation, as best as I can determine.
I wrote some test stored procedures to try to test this logic:
-- Attempt to update a Customer row with a new Count, returns
-- the current count (used as customer order number) and a bit
-- which determines success or failure. If @Success is 0, re-run
-- the query and try again.
CREATE PROCEDURE [dbo].[sp_TestUpdate]
(
    @Count INT OUTPUT,
    @Success BIT OUTPUT
)
AS
BEGIN
    DECLARE @NextCount INT
    SELECT @Count = Count FROM Customer WHERE ID = 1
    SET @NextCount = @Count + 1
    UPDATE Customer SET Count = @NextCount WHERE ID = 1 AND Count = @Count
    SET @Success = @@ROWCOUNT
END
And:
-- Loop (many times) trying to get a number and insert it into another
-- table. Execute this loop concurrently in several different windows
-- using SSMS.
CREATE PROCEDURE [dbo].[sp_TestLoop]
AS
BEGIN
    DECLARE @Iterations INT
    DECLARE @Counter INT
    DECLARE @Count INT
    DECLARE @Success BIT

    SET @Iterations = 40000
    SET @Counter = 0

    WHILE (@Counter < @Iterations)
    BEGIN
        SET @Counter = @Counter + 1
        EXEC sp_TestUpdate @Count = @Count OUTPUT, @Success = @Success OUTPUT
        IF (@Success = 1)
        BEGIN
            INSERT INTO TestImage (ImageNumber) VALUES (@Count)
        END
    END
END
This code ran, creating unique sequential ImageNumber values in the TestImage table, which demonstrates that the UPDATE call above is indeed atomic. Neither procedure guaranteed that a given update succeeded, but they did guarantee that no duplicates were created and no numbers were skipped.
I'm implementing an event logging system in my application to save some event types from my code, so I've created a table to store the log type and an incremental id:
|LogType|CurrentId|
|info | 1 |
|error | 5 |
And also a table to save the concrete log record
|LogType|IdLog|Message |
|info |1 |Process started|
|error |5 |some error |
So, every time I need to save a new record I call a SPROC to calculate the new id for the log type, basically: newId = (currentId + 1). But I am facing an issue with that calculation: if multiple processes call the SPROC at the same time, the "generated id" is the same, so I'm getting log records with the same id, and every record must be id-unique.
This is my SPROC written for SQL Server 2005:
ALTER PROCEDURE [dbo].[usp_GetLogId]
    @LogType VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION
    BEGIN TRY
        DECLARE @IdCreated VARCHAR(MAX)
        IF EXISTS (SELECT * FROM TBL_ApplicationLogId WHERE LogType = @LogType)
        BEGIN
            DECLARE @CurrentId BIGINT
            SET @CurrentId = (SELECT CurrentId FROM TBL_ApplicationLogId WHERE LogType = @LogType)
            DECLARE @NewId BIGINT
            SET @NewId = (@CurrentId + 1)
            UPDATE TBL_ApplicationLogId
            SET CurrentId = @NewId
            WHERE LogType = @LogType
            SET @IdCreated = CONVERT(VARCHAR, @NewId)
        END
        ELSE
        BEGIN
            INSERT INTO TBL_ApplicationLogId VALUES(@LogType, 0)
            EXEC @IdCreated = usp_GetLogId @LogType
        END
    END TRY
    BEGIN CATCH
        DECLARE @ErrorMessage NVARCHAR(MAX)
        SET @ErrorMessage = ERROR_MESSAGE()
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        RAISERROR (@ErrorMessage, 16, 1)
    END CATCH
    IF @@TRANCOUNT > 0
        COMMIT TRANSACTION
    SELECT @IdCreated
END
I would appreciate your help fixing the sproc so that it returns a unique id on every call.
It has to work on SQL Server 2005. Thanks
Can you achieve what you want with an identity column?
Then you can just let SQL Server guarantee uniqueness.
Example:
create table my_test_table
(
ID int identity
,SOMEVALUE nvarchar(100)
);
insert into my_test_table(somevalue)values('value1');
insert into my_test_table(somevalue)values('value2');
select * from my_test_table
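As a side note, if the caller needs the generated id back (e.g. to write the concrete log record), it can be read with SCOPE_IDENTITY; this continues the example above:
insert into my_test_table(somevalue) values ('value3');
select SCOPE_IDENTITY() as new_id; -- the identity value generated by this session's insert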
If you must issue the new ID values yourself for some reason, try using a sequence, as shown here. (Note that CREATE SEQUENCE requires SQL Server 2012 or later, so it won't be available on 2005.)
if object_id('my_test_table') is not null
begin
drop table my_test_table;
end;
go
create table my_test_table
(
ID int
,SOMEVALUE nvarchar(100)
);
go
if object_id('my_test_sequence') is not null
begin
drop sequence my_test_sequence;
end;
go
CREATE SEQUENCE my_test_sequence
AS INT --other options are here: https://msdn.microsoft.com/en-us/library/ff878091.aspx
START WITH 1
INCREMENT BY 1
MINVALUE 0
NO MAXVALUE;
go
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value1');
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value2');
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value3');
select * from my_test_table
One more edit: I think this is an improvement to the existing stored procedure, given the requirements. It includes the new value calculation directly in the UPDATE, ultimately returns the value directly from the table (not from a variable, which could be out of date) and avoids recursion.
A full test script is below.
if object_id('STACKOVERFLOW_usp_getlogid') is not null
begin
drop procedure STACKOVERFLOW_usp_getlogid;
end
go
if object_id('STACKOVERFLOW_TBL_ApplicationLogId') is not null
begin
drop table STACKOVERFLOW_TBL_ApplicationLogId;
end
go
create table STACKOVERFLOW_TBL_ApplicationLogId(CurrentID int, LogType nvarchar(max));
go
create PROCEDURE [dbo].[STACKOVERFLOW_USP_GETLOGID](@LogType VARCHAR(MAX))
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION
    BEGIN TRY
        DECLARE @IdCreated VARCHAR(MAX)
        IF EXISTS (SELECT * FROM STACKOVERFLOW_TBL_ApplicationLogId WHERE LogType = @LogType)
        BEGIN
            UPDATE STACKOVERFLOW_TBL_APPLICATIONLOGID
            SET CurrentId = CurrentID + 1
            WHERE LogType = @LogType
        END
        ELSE
        BEGIN
            --first time: insert 0.
            INSERT INTO STACKOVERFLOW_TBL_ApplicationLogId(CurrentID,LogType) VALUES(0,@LogType);
        END
    END TRY
    BEGIN CATCH
        DECLARE @ErrorMessage NVARCHAR(MAX)
        SET @ErrorMessage = ERROR_MESSAGE()
        IF @@TRANCOUNT > 0
        begin
            ROLLBACK TRANSACTION;
        end
        RAISERROR(@ErrorMessage, 16, 1);
    END CATCH
    select CurrentID from STACKOVERFLOW_TBL_APPLICATIONLOGID where LogType = @LogType;
    IF @@TRANCOUNT > 0
    begin
        COMMIT TRANSACTION
    END
end
go
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
You want your increment and read to be atomic, with a guarantee that no other process can increment in between.
You also need to ensure that the log type exists, and that check again has to be thread-safe.
Here's how I would go about it, but you would be advised to read up on how it all works in SQL Server 2005, as I have not had to deal with these things in nearly 8 years.
This should complete the two actions atomically, and also without transactions, in order to prevent threads from blocking each other (not just for performance, but also to avoid deadlocks when interacting with other code).
ALTER PROCEDURE [dbo].[usp_GetLogId]
    @LogType VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    -- Hold our newly created id in a table variable, so we can use OUTPUT
    DECLARE @new_id TABLE (id BIGINT);

    -- I think this is thread safe, doing all things in a single statement
    ----> Check that the log-type has no records
    ----> If so, then insert an initialising row
    ----> Output the newly created id into our table variable
    ----> (NOT EXISTS is used here: a GROUP BY ... HAVING COUNT(*) = 0 over the
    ---->  missing row produces no group, so it would never insert anything)
    INSERT INTO
        TBL_ApplicationLogId (
            LogType,
            CurrentId
        )
    OUTPUT
        INSERTED.CurrentID
    INTO
        @new_id
    SELECT
        @LogType, 1
    WHERE
        NOT EXISTS (SELECT * FROM TBL_ApplicationLogId WHERE LogType = @LogType)
    ;

    -- I think this is thread safe, doing all things in a single statement
    ----> Ensure we don't already have a new id created
    ----> Increment the current id
    ----> Output it to our table variable
    UPDATE
        TBL_ApplicationLogId
    SET
        CurrentId = CurrentId + 1
    OUTPUT
        INSERTED.CurrentID
    INTO
        @new_id
    WHERE
        LogType = @LogType
        AND NOT EXISTS (SELECT * FROM @new_id)
    ;

    -- Select the result from our table variable
    ----> It must be populated either from the INSERT or the UPDATE
    SELECT
        MAX(id) -- MAX used to tell the system that it's returning a scalar
    FROM
        @new_id
    ;
END
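A quick usage sketch (the log type names are arbitrary): the first call for a type initialises the counter and returns 1, and each later call returns the next value.
EXEC [dbo].[usp_GetLogId] @LogType = 'info';  -- returns 1 on the first call
EXEC [dbo].[usp_GetLogId] @LogType = 'info';  -- returns 2
EXEC [dbo].[usp_GetLogId] @LogType = 'error'; -- independent counter, returns 1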
Not much you can do here, but validate that:
table TBL_ApplicationLogId is indexed on column LogType;
the @LogType parameter has the same data type as column LogType in table TBL_ApplicationLogId, so the query can actually use the index if/when it exists.
If you have a concurrency issue, forcing the lock level on table TBL_ApplicationLogId during the select and update may help. Just add WITH (ROWLOCK) after the table name, e.g. TBL_ApplicationLogId WITH (ROWLOCK).
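A sketch of both suggestions; note the question declares LogType as VARCHAR(MAX), which cannot be an index key, so a bounded length such as VARCHAR(100) is assumed here:
-- Index LogType so the per-type lookup doesn't scan the table.
CREATE INDEX IX_ApplicationLogId_LogType ON TBL_ApplicationLogId (LogType);

-- Row-level lock hint on the update, per the advice above.
UPDATE TBL_ApplicationLogId WITH (ROWLOCK)
SET CurrentId = CurrentId + 1
WHERE LogType = @LogType;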
How can I optimize / refactor this stored procedure further?
Where are the performance improvements to be had?
ALTER PROCEDURE cc_test_setnumber
    @UUID AS VARCHAR(50),
    @Status AS VARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    IF @Status = 'ACTIVE'
    BEGIN
        DECLARE @Tbl AS TABLE (UUID VARCHAR(50))

        BEGIN TRANSACTION
        UPDATE dbo.cc_testagents
        SET Status = 'INCALL', LastUpdated = GETDATE()
        OUTPUT INSERTED.UUID INTO @Tbl
        WHERE ID = (SELECT TOP 1 ID FROM dbo.cc_testagents WHERE Status = 'IDLE')

        UPDATE cc_testnumbers
        SET Status = 'ACTIVE',
            AgentUUID = (SELECT UUID FROM @Tbl)
        OUTPUT INSERTED.AgentUUID AS 'UUID'
        WHERE CallUUID = @UUID
        COMMIT TRANSACTION
    END
    ELSE --BUSY, WRAPUP, NOANSWER, NOAGENT
    BEGIN
        UPDATE dbo.cc_testagents
        SET Status = 'WRAPUP'
        WHERE UUID = (SELECT AgentUUID FROM dbo.cc_testnumbers WHERE CallUUID = @UUID)

        SELECT @Status = REPLACE(@Status, 'NOAGENT', 'IDLE')

        UPDATE dbo.cc_testnumbers
        SET Status = @Status, CallUUID = ''
        WHERE CallUUID = @UUID
    END
END
In SQL Server Management Studio, the Database Engine Tuning Advisor is a good place to start, not only for this stored proc but for any other query running against these tables.
It's hard to give you a specific recommendation without knowing a bit more about the table structure and the other queries that are contending for locks on these tables. Make sure there are clustered/nonclustered indexes on cc_testagents.ID, cc_testagents.UUID, and cc_testnumbers.CallUUID, as sketched below.
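For instance, something along these lines; the exact index choices are guesses, since the question doesn't show the table definitions:
-- Assumed: ID is the key of cc_testagents; Status/UUID/CallUUID support the lookups in the proc.
CREATE NONCLUSTERED INDEX IX_cc_testagents_Status ON dbo.cc_testagents (Status) INCLUDE (ID);
CREATE NONCLUSTERED INDEX IX_cc_testagents_UUID ON dbo.cc_testagents (UUID);
CREATE NONCLUSTERED INDEX IX_cc_testnumbers_CallUUID ON dbo.cc_testnumbers (CallUUID);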
If you're experiencing deadlock issues and there are queries reading from cc_testagents/cc_testnumbers that can tolerate reading uncommitted data, a quick and dirty "fix" is to use the NOLOCK hint / SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED on those SELECT statements. Depending on who you ask this is considered bad practice; I'm just throwing it out there as something that might hold you over if you have some crippling deadlock issues in production.
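If it helps, the quick-and-dirty variant looks like this (same caveat: the read may see uncommitted data that is later rolled back):
SELECT ID, UUID, Status
FROM dbo.cc_testagents WITH (NOLOCK)
WHERE Status = 'IDLE';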
Does anyone have an idea how to reset @@ROWCOUNT to zero in SQL?
I have used @@ROWCOUNT to fetch the number of records inserted by the insert statement. I have to use @@ROWCOUNT again to find the rows updated, so I am trying to reset it to zero first.
The code will look like this:
INSERT INTO Table A ......
INSERT INTO statistics (id, inserted_records) VALUES (1, @@ROWCOUNT)
---some operations--
UPDATE Table A ....
UPDATE statistics SET updated_records = @@ROWCOUNT WHERE id =
select @@rowcount
only returns the row count from the most recent statement. It doesn't need to be reset; executing another statement will automatically reset it.
If for some bizarre reason you want to make @@ROWCOUNT return 0, execute a query that will return 0 rows.
select 1 where 2=3
You can prove this like so.
declare @t table (i int)
declare @stats table (rc int)

insert @t values (1),(2),(3)
-- rowcount is 3
insert @stats values (@@ROWCOUNT)
-- rowcount is 1
update @t set i=5 where i=4
select @@ROWCOUNT -- rowcount will be 0
@bibinmatthew, @@ROWCOUNT automatically resets whenever you execute another statement.
What I would consider doing is
declare @rowsAffected int
insert into table A ...
select @rowsAffected = @@ROWCOUNT
insert into table statistics (id, inserted_records) values (iID, @rowsAffected)

declare @rowsAffected int
update table A ...
select @rowsAffected = @@ROWCOUNT
update table statistics set updated_records = @rowsAffected where id = iID
This way, you are not having to deal directly with the @@ROWCOUNT variable. I have declared @rowsAffected twice because I am assuming that you have the insert and the update scripts in different stored procs.
Information about @@ROWCOUNT is here:
http://technet.microsoft.com/en-us//library/ms187316.aspx
Statements such as USE, SET, DEALLOCATE CURSOR, CLOSE CURSOR,
BEGIN TRANSACTION or COMMIT TRANSACTION reset the ROWCOUNT value to 0
So maybe you can put the statements into a transaction, so they are better isolated against each other.
And you can always just send a request that does not update anything, like UPDATE mytable SET x=1 WHERE 0=1;
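A small sketch of both tricks, relying on the documented reset behaviour quoted above:
DECLARE @t TABLE (i INT);

INSERT INTO @t VALUES (1), (2), (3);    -- @@ROWCOUNT is now 3
BEGIN TRANSACTION;                      -- per the docs, this resets @@ROWCOUNT to 0
SELECT @@ROWCOUNT AS after_begin_tran;  -- returns 0
COMMIT TRANSACTION;

UPDATE @t SET i = 0 WHERE 1 = 0;        -- touches no rows
SELECT @@ROWCOUNT AS after_noop_update; -- returns 0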
I'm using SQL Server 2008 R2 and C#.
I'm inserting a batch of rows in SQL Server with a column Status set to the value P.
Afterwards, I check how many rows already have the status R, and if there are fewer than 20, I update the row to status R.
While inserting and updating, more rows are being added and updated all the time.
I've tried transactions and locking in multiple ways, but still: at the moment a new batch is activated, there are more than 20 rows with status R for a few milliseconds. After those few milliseconds it stabilizes back to 20.
Does anyone have an idea why the locking doesn't seem to work during these bursts?
Sample code, reasons, whatever you can share on this subject can be useful!
Thanks!
Following is my stored proc:
DECLARE @return BIT
SET @return = -1

DECLARE @previousValue INT

--insert the started orchestration
INSERT INTO torchestrationcontroller WITH (ROWLOCK)
    ([flowname], [orchestrationid], [status])
VALUES (@FlowName, @OrchestrationID, 'P')

--check settings
DECLARE @maxRunning INT

SELECT @maxRunning = maxinstances
FROM torchestrationflows WITH (NOLOCK)
WHERE [flowname] = @FlowName

--if maxRunning is 0, then you can pass, no limitation here
IF( @maxRunning = 0 )
BEGIN
    SET @return = 1
    UPDATE torchestrationcontroller WITH(ROWLOCK)
    SET [status] = 'R'
    WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- BEGIN
RETRY: -- Label RETRY
BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION T1
    --else: check how many orchestrations are now running
    --start lock table
    DECLARE @currentRunning INT

    SELECT @currentRunning = Count(*)
    FROM torchestrationcontroller WITH (TABLOCKX) --Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement
    WHERE [flowname] = @FlowName
      AND [status] = 'R'

    --CASE
    IF( @currentRunning < @maxRunning )
    BEGIN
        -- fewer orchestrations are running than allowed
        SET @return = 1
        UPDATE torchestrationcontroller WITH(TABLOCKX)
        SET [status] = 'R'
        WHERE [orchestrationid] = @OrchestrationID
    END
    ELSE
        -- more or equal orchestrations are running than allowed
        SET @return = 0

    --end lock table
    SELECT @return
    COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
    --PRINT 'Rollback Transaction'
    ROLLBACK TRANSACTION
    IF ERROR_NUMBER() = 1205 -- Deadlock Error Number
    BEGIN
        WAITFOR DELAY '00:00:00.05' -- Wait for 50 ms before retrying
        GOTO RETRY -- Go to Label RETRY
    END
END CATCH
I've been able to fix it by setting the SERIALIZABLE isolation level plus a transaction on the three stored procedures (two of which I didn't mention because I didn't think they were relevant to this problem). Apparently it was the combination of multiple stored procs interfering with each other.
If you know a better way to fix it that would give better performance, please let me know!
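For what it's worth, a commonly suggested alternative to TABLOCKX + SERIALIZABLE is to serialise just the count-and-claim step with an application lock (sp_getapplock), which tends to block less; this is only a sketch, reusing the variable and table names from the question:
BEGIN TRANSACTION;

-- Only one caller at a time may run the check-and-claim below; other tables stay unlocked.
DECLARE @lockResult INT;
EXEC @lockResult = sp_getapplock
    @Resource    = 'torchestration_throttle',  -- arbitrary lock name
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 5000;                       -- ms; a negative return value means not granted

IF @lockResult >= 0
BEGIN
    DECLARE @currentRunning INT;
    SELECT @currentRunning = COUNT(*)
    FROM torchestrationcontroller
    WHERE [flowname] = @FlowName AND [status] = 'R';

    IF @currentRunning < @maxRunning
        UPDATE torchestrationcontroller
        SET [status] = 'R'
        WHERE [orchestrationid] = @OrchestrationID;
END

COMMIT TRANSACTION; -- releases the application lock along with the transaction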