How can I optimize / refactor this stored procedure further?
Where are the performance improvements to be had?
ALTER PROCEDURE cc_test_setnumber
    @UUID AS VARCHAR(50),
    @Status AS VARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    IF @Status = 'ACTIVE'
    BEGIN
        DECLARE @Tbl AS TABLE (UUID VARCHAR(50))
        BEGIN TRANSACTION
        UPDATE dbo.cc_testagents
        SET Status = 'INCALL', LastUpdated = GETDATE()
        OUTPUT INSERTED.UUID INTO @Tbl
        WHERE ID = (SELECT TOP 1 ID FROM dbo.cc_testagents WHERE Status = 'IDLE')
        UPDATE cc_testnumbers
        SET Status = 'ACTIVE',
            AgentUUID = (SELECT UUID FROM @Tbl)
        OUTPUT INSERTED.AgentUUID AS 'UUID'
        WHERE CallUUID = @UUID
        COMMIT TRANSACTION
    END
    ELSE --BUSY, WRAPUP, NOANSWER, NOAGENT
    BEGIN
        UPDATE dbo.cc_testagents
        SET Status = 'WRAPUP'
        WHERE UUID = (SELECT AgentUUID FROM dbo.cc_testnumbers WHERE CallUUID = @UUID)
        SELECT @Status = REPLACE(@Status, 'NOAGENT', 'IDLE')
        UPDATE dbo.cc_testnumbers
        SET Status = @Status, CallUUID = ''
        WHERE CallUUID = @UUID
    END
END
In SQL Server Management Studio, the Database Engine Tuning Advisor is a good place to start, not only for this stored proc but for any other query running against these tables.
It's hard to give a specific recommendation without knowing a bit more about the table structure and the other queries contending for locks on these tables. Make sure there's a clustered or nonclustered index on cc_testagents.ID, cc_testagents.UUID, and cc_testnumbers.CallUUID.
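For illustration, a minimal sketch of what those supporting indexes might look like (the index names and the choice of nonclustered indexes are assumptions; adjust to your actual schema and clustering keys):

-- Hypothetical supporting indexes; adjust names and keys to your schema
CREATE NONCLUSTERED INDEX IX_cc_testagents_Status ON dbo.cc_testagents (Status) INCLUDE (ID, UUID);
CREATE NONCLUSTERED INDEX IX_cc_testagents_UUID ON dbo.cc_testagents (UUID);
CREATE NONCLUSTERED INDEX IX_cc_testnumbers_CallUUID ON dbo.cc_testnumbers (CallUUID);

The Status index also covers the TOP 1 lookup for idle agents, which is the hot path in the ACTIVE branch.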
If you're experiencing deadlock issues and there are queries reading from cc_testagents/cc_testnumbers that can tolerate reading uncommitted data, a quick-and-dirty "fix" is to use the NOLOCK hint (or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED) on those SELECT statements. Depending on who you ask this is considered bad practice; I'm just throwing it out there as something that might hold you over if you have crippling deadlock issues in production.
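A sketch of what that looks like (dirty reads: the count may include uncommitted changes from other transactions):

-- NOLOCK lets the read proceed without taking shared locks
SELECT COUNT(*)
FROM dbo.cc_testagents WITH (NOLOCK)
WHERE Status = 'IDLE';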
I want to generate a custom ID for one of the features in my application. Here is the procedure to do that:
CREATE PROCEDURE [dbo].[GetNextVendorInvoiceNo]
AS
BEGIN
    DECLARE @StartingVendorInvoiceNo int = 0
    SELECT @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
    FROM SystemSettings WITH (TABLOCK)
    UPDATE SystemSettings
    SET StartingVendorInvoiceNo = @StartingVendorInvoiceNo
    SELECT @StartingVendorInvoiceNo
END
Would there be any issue if multiple users end up calling this procedure? Obviously I don't want multiple users to get the same ID. I am using TABLOCK but am not sure if this is the right way or if anything else is required.
SQL Server 2012 has the SEQUENCE feature, which is definitely safe for multi-user environments. It is based on an integer type, though.
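A minimal sketch (the sequence name is an assumption):

-- Hypothetical sequence; NEXT VALUE FOR is atomic across concurrent sessions
CREATE SEQUENCE dbo.VendorInvoiceNo AS int START WITH 1 INCREMENT BY 1;

SELECT NEXT VALUE FOR dbo.VendorInvoiceNo;

Each call returns a distinct value, so two users can never receive the same ID.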
If you have a complex procedure that generates a "next" ID and you want to make sure that only one instance of the procedure runs at any moment (at the expense of throughput), I'd use sp_getapplock. It is easy to use and understand, and you don't need to worry about placing correct query hints.
Your procedure would look like this:
CREATE PROCEDURE [dbo].[GetNextVendorInvoiceNo]
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
    BEGIN TRY
        DECLARE @VarLockResult int;
        EXEC @VarLockResult = sp_getapplock
            @Resource = 'GetNextVendorInvoiceNo_app_lock',
            @LockMode = 'Exclusive',
            @LockOwner = 'Transaction',
            @LockTimeout = 60000,
            @DbPrincipal = 'public';
        DECLARE @StartingVendorInvoiceNo int = 0;
        IF @VarLockResult >= 0
        BEGIN
            -- Acquired the lock, generate the "next" ID
            SELECT @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
            FROM SystemSettings;
            UPDATE SystemSettings
            SET StartingVendorInvoiceNo = @StartingVendorInvoiceNo;
        END ELSE BEGIN
            -- TODO: handle the case when it takes too long to acquire the lock,
            -- i.e. return some error code
            -- For example, return 0
            SET @StartingVendorInvoiceNo = 0;
        END;
        SELECT @StartingVendorInvoiceNo;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
        -- TODO: handle the error
    END CATCH;
END
A simple TABLOCK as you wrote it is definitely not enough. You need to wrap everything in a transaction, make sure the lock is held until the end of the transaction (see HOLDLOCK), and make sure the lock you are getting is the correct one; you may need TABLOCKX. So overall you need a pretty good understanding of all these hints and of how locking works. It is definitely possible to achieve the same effect with these hints, but if the logic in the procedure is more complicated than your simplified example it can easily get ugly.
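For comparison, a sketch of the hint-based approach (this assumes the single-row SystemSettings table from the question; whether these hints are sufficient depends on the rest of your workload, so treat this as an illustration rather than a recommendation):

BEGIN TRANSACTION;
DECLARE @StartingVendorInvoiceNo int;
-- TABLOCKX takes an exclusive table lock; HOLDLOCK keeps it until COMMIT
SELECT @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
FROM SystemSettings WITH (TABLOCKX, HOLDLOCK);
UPDATE SystemSettings
SET StartingVendorInvoiceNo = @StartingVendorInvoiceNo;
SELECT @StartingVendorInvoiceNo;
COMMIT TRANSACTION;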
To my mind, sp_getapplock is easy to understand and maintain.
I've only been able to see transaction isolation level events in the Audit Login event. Are there any other ways to monitor transaction isolation level changes using SQL Profiler or some other tool? I ask because SQL Profiler does not seem to output the events in the right order, or it skips events: when setting the isolation level to Serializable in my app, it still shows transaction isolation level read committed.
Example Audit Login in SQL Profiler:
-- network protocol: Named Pipes
set quoted_identifier on
set arithabort off
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls on
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language us_english
set dateformat mdy
set datefirst 7
set transaction isolation level serializable
I am afraid there isn't one.
Even if there were one, what would you expect to see when multiple tables are queried in a join and one or more of them has NOLOCK (which is read uncommitted)?
The Profiler reports queries at the statement level, not the table level, so you would have a mix of transaction isolation levels (this is true of both the Profiler and Extended Events).
The best you could do is manually parse the statement-start events (batch and procedure) and look for SET TRANSACTION ISOLATION LEVEL.
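For example, a hypothetical Extended Events session along those lines (the session name and file target are assumptions, and SET options sent inside the driver's login/reset packets may not surface as batches here):

-- Capture batches whose text mentions setting the isolation level
CREATE EVENT SESSION IsoLevelBatches ON SERVER
ADD EVENT sqlserver.sql_batch_starting
(
    ACTION (sqlserver.session_id, sqlserver.client_app_name)
    WHERE (sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%transaction isolation level%'))
)
ADD TARGET package0.event_file (SET filename = N'IsoLevelBatches');
ALTER EVENT SESSION IsoLevelBatches ON SERVER STATE = START;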
I found this question while trying to determine why Entity Framework was connecting to our database with the serializable transaction isolation level. As the original answer stated, there is no straightforward way to collect this data either from SQL Trace or Extended Events.
Below is a query to repetitively collect the transaction isolation levels from the sys.dm_exec_sessions and sys.dm_exec_requests DMVs. The tables will grow quickly but are useful for brief tracking of isolation levels from external applications:
SET NOCOUNT ON
IF OBJECT_ID('dbo.Query', 'U') IS NOT NULL DROP TABLE dbo.Query
IF OBJECT_ID('dbo.SessionData', 'U') IS NOT NULL DROP TABLE dbo.SessionData
GO
CREATE TABLE dbo.Query
(
SessionID INT
,StartTime DATETIME
,IsolationLevel INT
,IsolationLevelName VARCHAR(20)
,ObjectName VARCHAR(300)
,StatementText VARCHAR(MAX)
,QueryPlan XML
)
CREATE TABLE dbo.SessionData
(
SessionID INT
,LoginTime DATETIME
,IsolationLevel INT
,IsolationLevelName VARCHAR(20)
,ProgramName VARCHAR(300)
,LoginName VARCHAR(300)
)
WHILE 1=1
BEGIN
INSERT INTO dbo.Query
SELECT
SessionID = req.session_id
,StartTime = req.start_time
,IsolationLevel = req.transaction_isolation_level
,IsolationLevelName =
CASE req.transaction_isolation_level
WHEN 1 THEN 'Read Uncommitted'
WHEN 2 THEN 'Read Committed'
WHEN 3 THEN 'Repeatable Read'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot'
ELSE 'Unknown' END
,ObjectName = OBJECT_NAME(st.objectid, st.[dbid])
,StatementText = SUBSTRING
(REPLACE
(REPLACE
(SUBSTRING
(ST.[text]
, (req.statement_start_offset/2) + 1
, (
(CASE statement_end_offset
WHEN -1
THEN DATALENGTH(st.[text])
ELSE req.statement_end_offset
END
- req.statement_start_offset)/2) + 1)
, CHAR(10), ' '), CHAR(13), ' '), 1, 512)
, QueryPlan = qp.query_plan
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(req.[sql_handle]) st
CROSS APPLY sys.dm_exec_query_plan(req.plan_handle) qp
WHERE ST.[text] NOT LIKE N'%INSERT INTO dbo.Query%'
INSERT dbo.SessionData
SELECT
SessionID = session_id
,LoginTime = login_time
,IsolationLevel = transaction_isolation_level
,IsolationLevelName =
CASE transaction_isolation_level
WHEN 1 THEN 'Read uncommitted'
WHEN 2 THEN 'Read committed'
WHEN 3 THEN 'Repeatable read'
WHEN 4 THEN 'Serializable'
ELSE 'Unknown' END
,ProgramName = [program_name]
,LoginName = login_name
FROM sys.dm_exec_sessions
WHERE security_id <> 0x01
AND session_id <> @@SPID
END
SELECT *
FROM dbo.Query
SELECT *
FROM dbo.SessionData
Now I'm trying to increment a sequential number in SQL Server, using a number provided by users.
I have a problem when multiple users insert a row at the same time with the same number.
I tried updating the number the user provided into a temporary table, and I expected that when I update the same table with the same condition, SQL Server would lock any modification to this row until the current update finished, but it does not.
Here is the update statement I used:
UPDATE GlobalParam
SET ValueString = (CAST(ValueString as bigint) + 1)
WHERE Id = 'xxxx'
Could you tell me any way to force the other update command to wait until the current command has finished?
This is my entire command:
DECLARE @Result bigint;
UPDATE GlobalParam SET ValueString = (SELECT MAX(Code) FROM Item)
DECLARE @SelectTopStm nvarchar(MAX);
DECLARE @ExistRow int
SET @SelectTopStm = N'SELECT @ExistRow = 1 FROM (SELECT TOP 1 Code FROM Item WHERE Code = ''999'') temp'
EXEC sp_executesql @SelectTopStm, N'@ExistRow int output', @ExistRow output
IF (@ExistRow is not null)
BEGIN
    DECLARE @MaxValue bigint
    DECLARE @ReturnUpdateTbl table (ValueString nvarchar(max));
    UPDATE GlobalParam SET ValueString = (CAST(ValueString as bigint) + 1)
    OUTPUT inserted.ValueString INTO @ReturnUpdateTbl
    WHERE [Id] = '333A8E1F-16DD-E411-8280-D4BED9D726B3'
    SELECT TOP 1 @MaxValue = CAST(ValueString as bigint) FROM @ReturnUpdateTbl
    SET @Result = @MaxValue
END
ELSE
BEGIN
    SET @Result = 999
END
I wrote the code above as a stored procedure.
Here is the actual code used when I insert one Item:
DECLARE @IncrementResult BIGINT
EXEC IncrementNumberUnique
    , (some parameters)..
    , @Result = @IncrementResult OUTPUT
INSERT INTO ITEM (Id, Code) VALUES ('xxxx', @IncrementResult)
I created 3 threads and ran them at the same time.
The returned result:
Id Code
1 999
2 1000
3 1000
Thanks
If I understood your requirements correctly, try the ROWLOCK hint to tell the optimizer to start by locking the rows one by one as the update needs them.
UPDATE GlobalParam WITH(ROWLOCK)
SET ValueString = (CAST(ValueString as bigint) + 1)
WHERE Id = 'xxxx'
By default, SQL Server uses READ COMMITTED locking, which releases read locks once the read operation is committed. Once the UPDATE statement below is complete, all read locks are released from the table Item.
UPDATE GlobalParam SET ValueString = (SELECT MAX(Code) FROM Item)
Since your INSERT into Item is outside the scope of your procedure, you can run the thread at the SERIALIZABLE isolation level. Something like this:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
DECLARE @IncrementResult BIGINT
EXEC IncrementNumberUnique
    , (some parameters)..
    , @Result = @IncrementResult OUTPUT
INSERT INTO ITEM (Id, Code) VALUES ('xxxx', @IncrementResult)
Changing the isolation level to SERIALIZABLE will increase blocking and contention for resources on the Item table.
To learn more about isolation levels, refer to the SQL Server documentation.
You should look into IDENTITY columns and remove such manual computation of incremental columns if possible.
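A minimal sketch of that approach (the table definition and seed values are assumptions):

-- Hypothetical table; SQL Server assigns Code atomically on each insert
CREATE TABLE dbo.Item
(
    Id   VARCHAR(50) NOT NULL,
    Code BIGINT IDENTITY(1000, 1) NOT NULL
);

INSERT INTO dbo.Item (Id) VALUES ('xxxx');
SELECT SCOPE_IDENTITY(); -- the Code value just generated

No explicit locking is needed; concurrent inserts can never receive the same Code.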
I'm using SQL Server 2008 R2 and C#.
I'm inserting a batch of rows in SQL Server with a column Status set with value P.
Afterwards, I check how many rows already have status R, and if there are fewer than 20, I update the row to status R.
While inserting and updating, more rows are getting added and updated all the time.
I've tried transactions and locking in multiple ways, but still: at the moment a new batch is activated, there are more than 20 rows with status R for a few milliseconds. After those few milliseconds it stabilizes back to 20.
Does anyone have an idea why at bursts the locking doesn't seem to work?
Sample code, reasons, whatever you can share on this subject could be useful!
Thanks!
Following is my stored proc:
DECLARE @return BIT
SET @return = -1
DECLARE @previousValue INT
--insert the started orchestration
INSERT INTO torchestrationcontroller WITH (ROWLOCK)
    ([flowname],[orchestrationid],[status])
VALUES (@FlowName, @OrchestrationID, 'P')
--check settings
DECLARE @maxRunning INT
SELECT @maxRunning = maxinstances
FROM torchestrationflows WITH (NOLOCK)
WHERE [flowname] = @FlowName
--if running is 0, then you can pass, no limitation here
IF( @maxRunning = 0 )
BEGIN
    SET @return = 1
    UPDATE torchestrationcontroller WITH(ROWLOCK)
    SET [status] = 'R'
    WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- BEGIN
RETRY: -- Label RETRY
BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION T1
    --else: check how many orchestrations are now running
    --start lock table
    DECLARE @currentRunning INT
    SELECT @currentRunning = Count(*)
    FROM torchestrationcontroller WITH (TABLOCKX) --Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement
    WHERE [flowname] = @FlowName
        AND [status] = 'R'
    --CASE
    IF( @currentRunning < @maxRunning )
    BEGIN
        -- fewer orchestrations are running than allowed
        SET @return = 1
        UPDATE torchestrationcontroller WITH(TABLOCKX)
        SET [status] = 'R'
        WHERE [orchestrationid] = @OrchestrationID
    END
    ELSE
        -- as many or more orchestrations are running than allowed
        SET @return = 0
    --end lock table
    SELECT @return
    COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
    --PRINT 'Rollback Transaction'
    ROLLBACK TRANSACTION
    IF ERROR_NUMBER() = 1205 -- Deadlock error number
    BEGIN
        WAITFOR DELAY '00:00:00.05' -- Wait for 50 ms
        GOTO RETRY -- Go to label RETRY
    END
END CATCH
I've been able to fix it by setting the isolation level to SERIALIZABLE plus a transaction on the three stored procedures (two of which I didn't mention because I didn't think they mattered for this problem). Apparently it was the combination of multiple stored procs interfering with each other.
If you know a better way to fix it that could give me better performance, please let me know!
Given a table (JobTable) that has 2 columns: JobId, JobStatus (there are others but I'm excluding them as they have no impact on the question).
A process, the WorkGenerator, INSERTs rows into the table.
Another process, the Worker, executes a stored procedure called GetNextJob.
Right now, GetNextJob does a SELECT to find the next piece of work (JobStatus = 1) and then an UPDATE to mark that work as in-progress (JobStatus = 2).
We're looking to scale up by having multiple Worker processes but have found that it's possible for multiple workers to pick up the same piece of work.
I have the following questions:
1. In GetNextJob, can I combine the SELECT and UPDATE into a single query and use the OUTPUT clause to get the JobId?
2. How can I guarantee that only one process will pick up each piece of work?
I appreciate answers that work, but also explanations as to why they work.
See Processing Data Queues in SQL Server with READPAST and UPDLOCK
also Using SQL Server as resource locking mechanism
Let's build up a solution:
Ensure the UPDATE checks @@ROWCOUNT
Inspect @@ROWCOUNT after the UPDATE to determine which Worker process wins.
CREATE PROCEDURE [dbo].[GetNextJob]
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @jobId INT
    SELECT TOP 1 @jobId = Jobs.JobId FROM Jobs
    WHERE Jobs.JobStatus = 1
    ORDER BY JobId ASC
    UPDATE Jobs SET JobStatus = 2
    WHERE JobId = @jobId
        AND JobStatus = 1;
    IF (@@ROWCOUNT = 1)
    BEGIN
        SELECT @jobId;
    END
END
GO
Note that with the above procedure the process that does not win does not return any rows and needs to call the procedure again to get the next row.
The above fixes most of the cases where both Workers pick up the same piece of work, because the UPDATE guards against it. However, it's still possible for @@ROWCOUNT to be 1 for both workers for the same jobId!
Lock the row within a transaction so only 1 Worker can update the Status
CREATE PROCEDURE [dbo].[GetNextJob]
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION
    DECLARE @jobId INT
    SELECT TOP 1 @jobId = Jobs.JobId FROM Jobs WITH (UPDLOCK, ROWLOCK)
    WHERE Jobs.JobStatus = 1
    ORDER BY JobId ASC
    UPDATE Jobs SET JobStatus = 2
    WHERE JobId = @jobId
        AND JobStatus = 1;
    IF (@@ROWCOUNT = 1)
    BEGIN
        SELECT @jobId;
    END
    COMMIT
END
GO
Both UPDLOCK and ROWLOCK are used here. UPDLOCK on the SELECT tells MSSQL to lock the row as if it were being updated, until the transaction is committed. ROWLOCK probably isn't strictly necessary, but it tells MSSQL to lock only the row returned by the SELECT.
Optimising the locking
When one process uses the ROWLOCK hint to lock a row, other processes are blocked waiting for that lock to be released. The READPAST hint can be specified to avoid this. From MSDN:
When READPAST is specified, both row-level and page-level locks are skipped. That is, the Database Engine skips past the rows or pages instead of blocking the current transaction until the locks are released.
This will stop the other processes from being blocked and improve performance.
CREATE PROCEDURE [dbo].[GetNextJob]
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION
    DECLARE @jobId INT
    SELECT TOP 1 @jobId = Jobs.JobId FROM Jobs WITH (UPDLOCK, READPAST)
    WHERE Jobs.JobStatus = 1
    ORDER BY JobId ASC
    UPDATE Jobs SET JobStatus = 2
    WHERE JobId = @jobId
        AND JobStatus = 1;
    IF (@@ROWCOUNT = 1)
    BEGIN
        SELECT @jobId;
    END
    COMMIT
END
GO
To Consider: Combine SELECT and Update
Combine the SELECT and UPDATE and use a SET to get the ID out.
For example:
DECLARE @val int
UPDATE JobTable
SET @val = JobId,
    JobStatus = 2
WHERE JobId = (SELECT MIN(JobId) FROM JobTable WHERE JobStatus = 1)
SELECT @val
This still requires the transaction to be SERIALIZABLE to ensure that each row is allocated to one Worker only.
To Consider: Combine SELECT and UPDATE again
Combine the SELECT and UPDATE and use the Output clause.
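A sketch of what that could look like, assuming the JobTable schema from the question (untested):

-- Atomically claim the lowest-numbered available job and return its JobId
UPDATE JobTable
SET JobStatus = 2
OUTPUT INSERTED.JobId
WHERE JobId = (SELECT MIN(JobId) FROM JobTable WHERE JobStatus = 1)
    AND JobStatus = 1;

Because the claim and the read happen in one statement, a second worker updating the same row should match zero rows once the status has already changed.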