How to simulate a deadlock in SQL Server in a single process? - sql

Our client-side code detects deadlocks, waits for an interval, then retries the request up to 5 times. The retry logic detects deadlocks based on error number 1205.
My goal is to test both the deadlock retry logic and the deadlock handling inside various stored procedures. I can create a deadlock using two different connections. However, I would like to simulate a deadlock inside a single stored procedure itself.
A deadlock raises the following error message:
Msg 1205, Level 13, State 51, Line 1
Transaction (Process ID 66) was
deadlocked on lock resources with another process and has been chosen
as the deadlock victim. Rerun the transaction.
I see this error message is in sys.messages:
select * from sys.messages where message_id = 1205 and language_id = 1033
message_id language_id severity is_event_logged text
1205 1033 13 0 Transaction (Process ID %d) was deadlocked on %.*ls resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I can't raise this error using RAISERROR:
raiserror(1205, 13, 51)
Msg 2732, Level 16, State 1, Line 1
Error number 1205 is invalid.
The number must be from 13000 through 2147483647 and it cannot be 50000.
Our deadlock retry logic checks if the error number is 1205. The deadlock needs to have the same message ID, level, and state as a normal deadlock.
Is there a way to simulate a deadlock (with RAISERROR or any other means) and get the same message number out with just one process?
Our databases are using SQL 2005 compatibility, though our servers vary from 2005 through 2008 R2.
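For context, the client-side retry pattern described in the question can be sketched as follows. This is a minimal sketch in Python (the question doesn't name the client language); `SqlError` and its `number` attribute are hypothetical stand-ins for however your driver surfaces the native SQL Server error number:

```python
import time

DEADLOCK_ERROR = 1205  # SQL Server's "chosen as deadlock victim" error number


class SqlError(Exception):
    """Hypothetical driver exception carrying the native error number."""
    def __init__(self, number, message):
        super().__init__(message)
        self.number = number


def run_with_deadlock_retry(operation, max_attempts=5, delay_seconds=0.05):
    """Run `operation`; if it fails as a deadlock victim (1205), wait and retry.

    Any other error, or exhausting the attempts, re-raises immediately.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except SqlError as e:
            if e.number != DEADLOCK_ERROR or attempt == max_attempts:
                raise  # not a deadlock, or out of retries
            time.sleep(delay_seconds)  # back off before retrying
```

This is exactly the logic the simulated deadlock needs to exercise, which is why the error must come out with number 1205 rather than a user-defined error.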

As many have pointed out, the answer is no: a single process cannot reliably deadlock itself. I came up with the following solution to simulate a deadlock on a development or test system.
Run the script below in a SQL Server Management Studio window. (Tested on 2008 R2 only.) You can leave it running as long as necessary.
In the place you want to simulate a deadlock, insert a call to sp_simulatedeadlock. Run your process, and the deadlock should occur.
When done testing, stop the SSMS query and run the cleanup code at the bottom.
/*
This script helps simulate deadlocks. Run the entire script in a SQL query window. It will continue running until stopped.
In the target script, insert a call to sp_simulatedeadlock where you want the deadlock to occur.
This stored procedure, also created below, causes the deadlock.
When you are done, stop the execution of this window and run the code in the cleanup section at the bottom.
*/
set nocount on
if object_id('DeadlockTest') is not null
drop table DeadlockTest
create table DeadlockTest
(
Deadlock_Key int primary key clustered,
Deadlock_Count int
)
go
if exists (select * from sysobjects where id = object_id(N'sp_simulatedeadlock')
AND objectproperty(id, N'IsProcedure') = 1)
drop procedure sp_simulatedeadlock
GO
create procedure sp_simulatedeadlock
(
@MaxDeadlocks int = -1 -- specify the number of deadlocks you want; -1 = constant deadlocking
)
as begin
set nocount on
if object_id('DeadlockTest') is null
return
-- Volunteer to be a deadlock victim.
set deadlock_priority low
declare @DeadlockCount int
select @DeadlockCount = Deadlock_Count -- this starts at 0
from DeadlockTest
where Deadlock_Key = 2
-- Trace the start of each deadlock event.
-- To listen to the trace event, set up a SQL Server Profiler trace with event class "UserConfigurable:0".
-- Note that the user running this proc must have ALTER TRACE permission.
-- Also note that there are only 128 characters allowed in the trace text.
declare @trace nvarchar(128)
if @MaxDeadlocks > 0 AND @DeadlockCount > @MaxDeadlocks
begin
set @trace = N'Deadlock Test @MaxDeadlocks: ' + cast(@MaxDeadlocks as nvarchar) + N' @DeadlockCount: ' + cast(@DeadlockCount as nvarchar) + N' Resetting deadlock count. Will not cause deadlock.'
exec sp_trace_generateevent
@eventid = 82, -- 82 = UserConfigurable:0 through 91 = UserConfigurable:9
@userinfo = @trace
-- Reset the number of deadlocks.
-- Hopefully if there is an outer transaction, it will complete and persist this change.
update DeadlockTest
set Deadlock_Count = 0
where Deadlock_Key = 2
return
end
set @trace = N'Deadlock Test @MaxDeadlocks: ' + cast(@MaxDeadlocks as nvarchar) + N' @DeadlockCount: ' + cast(@DeadlockCount as nvarchar) + N' Simulating deadlock.'
exec sp_trace_generateevent
@eventid = 82, -- 82 = UserConfigurable:0 through 91 = UserConfigurable:9
@userinfo = @trace
declare @StartedTransaction bit
set @StartedTransaction = 0
if @@trancount = 0
begin
set @StartedTransaction = 1
begin transaction
end
-- lock 2nd record
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 2
-- lock 1st record to cause deadlock
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 1
if @StartedTransaction = 1
rollback
end
go
insert into DeadlockTest(Deadlock_Key, Deadlock_Count)
select 1, 0
union select 2, 0
-- Force other processes to be the deadlock victim.
set deadlock_priority high
begin transaction
while 1 = 1
begin
begin try
begin transaction
-- lock 1st record
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 1
waitfor delay '00:00:10'
-- lock 2nd record (which will be locked when the target proc calls sp_simulatedeadlock)
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 2
rollback
end try
begin catch
print 'Error ' + convert(varchar(20), ERROR_NUMBER()) + ': ' + ERROR_MESSAGE()
goto cleanup
end catch
end
cleanup:
if @@trancount > 0
rollback
drop procedure sp_simulatedeadlock
drop table DeadlockTest

You can exploit a bug that Microsoft seems in no hurry to fix by running
begin tran
go
CREATE TYPE dbo.IntIntSet AS TABLE(
Value0 Int NOT NULL,
Value1 Int NOT NULL
)
go
declare #myPK dbo.IntIntSet;
go
rollback
This SQL will cause a deadlock with itself. Lots more details at Aaron Bertrand's blog http://sqlperformance.com/2013/11/t-sql-queries/single-tx-deadlock

(Apparently I don't have enough reputation to add a comment. So posting as an answer.)
A deadlock requires at least two processes; the only exception is an intra-query parallel deadlock, which is practically impossible to reproduce on demand.
However, you can simulate a deadlock with two processes running the exact same query (or stored procedure). Some ideas here

This works reliably from a single session. Use service broker activation to invoke the second thread which is required for a deadlock.
NOTE1: cleanup script not included
NOTE2: service broker has to be enabled:
ALTER DATABASE dbname SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
EXEC sp_executesql N'
CREATE OR ALTER PROCEDURE DeadlockReceive
AS
DECLARE @MessageBody NVARCHAR(1000);
RECEIVE @MessageBody = CAST(message_body AS NVARCHAR(1000)) FROM DeadlockQueue;
SELECT @MessageBody;
EXEC sp_executesql @MessageBody;'
IF EXISTS (SELECT * FROM sys.services WHERE name = 'DeadlockService') DROP SERVICE DeadlockService
IF OBJECT_ID('DeadlockQueue') IS NOT NULL DROP QUEUE dbo.DeadlockQueue
IF EXISTS (SELECT * FROM sys.service_contracts WHERE name = 'DeadlockContract') DROP CONTRACT DeadlockContract
IF EXISTS (SELECT * FROM sys.service_message_types WHERE name = 'DeadlockMessage') DROP MESSAGE TYPE DeadlockMessage
DROP TABLE IF EXISTS DeadlockTable1 ;
DROP TABLE IF EXISTS DeadlockTable2 ;
CREATE MESSAGE TYPE DeadlockMessage VALIDATION = NONE;
CREATE QUEUE DeadlockQueue WITH STATUS = ON, ACTIVATION (PROCEDURE_NAME = DeadlockReceive, EXECUTE AS SELF, MAX_QUEUE_READERS = 1);
CREATE CONTRACT DeadlockContract AUTHORIZATION dbo (DeadlockMessage SENT BY ANY);
CREATE SERVICE DeadlockService ON QUEUE DeadlockQueue (DeadlockContract);
CREATE TABLE DeadlockTable1 (Value INT); INSERT dbo.DeadlockTable1 SELECT 1;
CREATE TABLE DeadlockTable2 (Value INT); INSERT dbo.DeadlockTable2 SELECT 1;
DECLARE @ch UNIQUEIDENTIFIER
BEGIN DIALOG @ch FROM SERVICE DeadlockService TO SERVICE 'DeadlockService' ON CONTRACT DeadlockContract WITH ENCRYPTION = OFF ;
SEND ON CONVERSATION @ch MESSAGE TYPE DeadlockMessage (N'
set deadlock_priority high;
begin tran;
update DeadlockTable2 set value = 5;
waitfor delay ''00:00:01'';
update DeadlockTable1 set value = 5;
commit')
SET DEADLOCK_PRIORITY LOW
BEGIN TRAN
UPDATE dbo.DeadlockTable1 SET Value = 2
waitfor delay '00:00:01';
UPDATE dbo.DeadlockTable2 SET Value = 2
COMMIT

The simplest way to reproduce this in C# is with Parallel.ForEach, e.g.:
var list = ...; // add some items with the same ids
Parallel.ForEach(list,
(item) =>
{
    ReportsDataContext erdc = null;
    try
    {
        using (TransactionScope scope = new TransactionScope())
        {
            erdc = new ReportsDataContext("....connection....");
            var report = erdc.Report.Where(x => x.id == item.id).Single();
            report.Count++;
            erdc.SubmitChanges();
            scope.Complete();
        }
        if (erdc != null)
            erdc.Dispose();
    }
    catch (Exception ex)
    {
        if (erdc != null)
            erdc.Dispose();
        ErrorLog.LogEx("multi thread victim", ex);
    }
});
More interesting: how do you prevent that error in a real cross-thread situation?

I had difficulty getting Paul's answer to work. I made some small changes to get it working.
The key is to begin and rollback the sp_simulatedeadlock transaction within the procedure itself. I made no changes to the procedure in Paul's answer.
DECLARE @DeadlockCounter INT = NULL
SELECT @DeadlockCounter = 0
WHILE @DeadlockCounter < 10
BEGIN
BEGIN TRY
/* The procedure was leaving uncommitted transactions; I roll back the transaction in the catch block */
BEGIN tran simulate
Exec sp_simulatedeadlock
/* Code you want to deadlock */
SELECT @DeadlockCounter = 10
END TRY
BEGIN CATCH
Rollback tran simulate
PRINT ERROR_MESSAGE()
IF (ERROR_MESSAGE() LIKE '%deadlock%' OR ERROR_NUMBER() = 1205) AND @DeadlockCounter < 10
BEGIN
SELECT @DeadlockCounter += 1
PRINT @DeadlockCounter
IF @DeadlockCounter = 10
BEGIN
RAISERROR('Deadlock limit exceeded or error raised', 16, 10);
END
END
END CATCH
END

If you happen to run into problems with the GO/go keyword (Incorrect syntax near 'GO') in any of the scripts above, it's important to know that this instruction only works in Microsoft SQL Server Management Studio/sqlcmd:
The GO keyword is not T-SQL, but a SQL Server Management Studio artifact that allows you to separate the execution of a script file into multiple batches. I.e. when you run a T-SQL script file in SSMS, the statements are run in batches separated by the GO keyword. […]
SQL Server doesn't understand the GO keyword. So if you need an equivalent, you need to separate and run the batches individually on your own.
(from JotaBe's answer to another question)
So if you want to try out e.g. Michael J Swart's answer through DBeaver, for example, you'll have to remove the GOs and run each part of the script on its own.
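The batch splitting that SSMS and sqlcmd perform can be approximated client-side. A minimal sketch in Python (the split rule here, a line consisting only of GO plus an optional trailing comment, is a simplification; real sqlcmd also accepts forms like `GO 5` with a repeat count):

```python
import re

def split_batches(script):
    """Split a T-SQL script into batches on lines that contain only GO."""
    batches, current = [], []
    for line in script.splitlines():
        # A batch separator is a line of just GO (any case), optionally
        # followed by a -- comment; everything else belongs to the batch.
        if re.fullmatch(r"\s*go\s*(--.*)?", line, flags=re.IGNORECASE):
            if current:
                batches.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        batches.append("\n".join(current))
    return [b for b in batches if b.strip()]  # drop empty batches
```

Each returned batch would then be sent to the server as a separate `execute()` call by whatever driver you use.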


Service Broker queue "is currently disabled" several times right after upgrading from SQL Server 2012 to 2017, nothing in logs

Yesterday we took our SQL Server 2012 instance, which has been processing messages for several years without any issues (except a periodic performance issue that started several months ago, details below), and upgraded it from SQL Server 2012 to SQL Server 2017+CU29 (KB5010786). We have tried moving compat from 2012 to 2017, and while it helps with some new issues we're seeing, performance is not great.
But the big thing is: since then, the queue has spontaneously disabled itself twice. It works for minutes/hours, then poof, the queue is disabled. We have logging, and nothing's giving us anything. We had briefly turned on Query Store, but after the queue disabled the first time, we went looking for similar issues. We found a thread online that said they were having similar problems and that it was Query Store, so we immediately flipped it to read-only (we had turned it on to try to fix a performance issue we see on Mondays, where it grabs a bad plan and rebooting seems to be the only fix, but we don't see that the rest of the week). Also of note: this doesn't update rows, it just inserts new rows into a series of hourly tables.
We're also seeing massive waits on LCK_M_IX where we didn't before. Looking at that next. It's on a part of the code that does an insert into a table, where the output is generated from a CLR (INSERT INTO table FROM SELECT clr). Moving from 2012 to 2017 seems to have changed that behavior, but it still seems slow overall, and I'm terrified about it spontaneously disabling again.
We are running the same load on two separate servers, so I have the ability to compare things.
The "disabled" message in our logging table appears several times all at the same time (I'm guessing once per thread). Nothing in the SQL Error Log. Interestingly, in some of the rows in the logging table, the message_body is NULL, but has a body in others. But we see no errors for several minutes before it occurred in either.
The service queue "ODS_TargetQueue" is currently disabled.
We're also running a Extended Event that logs any severity 11+ errors.
All it's showing is
The service queue "ODS_TargetQueue" is currently disabled.
We are also seeing this sporadically which we normally don't see unless we're having log backup issues:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
We have also seen this a handful of times this morning, which seems to be new:
Process ID 57 attempted to unlock a resource it does not own: METADATA: database_id = 7 CONVERSATION_ENDPOINT_RECV($hash = 0x9904a343:0x9c8327f9:0x4b), lockPartitionId = 0. Retry the transaction, because this error may be caused by a timing condition. If the problem persists, contact the database administrator.
The queue:
CREATE QUEUE [svcBroker].[ODS_TargetQueue] WITH STATUS = ON , RETENTION = OFF , ACTIVATION ( STATUS = ON , PROCEDURE_NAME = [svcBroker].[ODS_TargetQueue_Receive] , MAX_QUEUE_READERS = 30 , EXECUTE AS OWNER ), POISON_MESSAGE_HANDLING (STATUS = ON) ON [PRIMARY]
GO
The procedure
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
GO
CREATE PROCEDURE [svcBroker].[ODS_TargetQueue_Receive]
AS
BEGIN
set nocount on
-- Variable table for received messages.
DECLARE @receive_table TABLE(
queuing_order BIGINT,
conversation_handle UNIQUEIDENTIFIER,
message_type_name SYSNAME,
message_body xml);
-- Cursor for received message table.
DECLARE message_cursor CURSOR LOCAL FORWARD_ONLY READ_ONLY
FOR SELECT
conversation_handle,
message_type_name,
message_body
FROM @receive_table ORDER BY queuing_order;
DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_type SYSNAME;
DECLARE @message_body xml;
-- Error variables.
DECLARE @error_number INT;
DECLARE @error_message VARCHAR(4000);
DECLARE @error_severity INT;
DECLARE @error_state INT;
DECLARE @error_procedure SYSNAME;
DECLARE @error_line INT;
DECLARE @error_dialog VARCHAR(50);
BEGIN TRY
WHILE (1 = 1)
BEGIN
BEGIN TRANSACTION;
-- Receive all available messages into the table.
-- Wait up to 2 seconds for messages.
WAITFOR (
RECEIVE TOP (1000)
[queuing_order],
[conversation_handle],
[message_type_name],
convert(xml, [message_body])
FROM svcBroker.ODS_TargetQueue
INTO @receive_table
), TIMEOUT 2000;
IF @@ROWCOUNT = 0
BEGIN
COMMIT;
BREAK;
END
ELSE
BEGIN
OPEN message_cursor;
WHILE (1=1)
BEGIN
FETCH NEXT FROM message_cursor
INTO @conversation_handle,
@message_type,
@message_body;
IF (@@FETCH_STATUS != 0) BREAK;
-- Process a message.
-- If an exception occurs, catch and attempt to recover.
BEGIN TRY
IF @message_type = 'svcBroker_ods_claim_request'
BEGIN
exec ParseRequestMessages @message_body
END
ELSE IF @message_type in ('svcBroker_EndOfStream', 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
BEGIN
-- initiator is signaling end of message stream: end the dialog
END CONVERSATION @conversation_handle;
END
ELSE IF @message_type = 'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
BEGIN
-- If the message_type indicates that the message is an error,
-- raise the error and end the conversation.
WITH XMLNAMESPACES ('http://schemas.microsoft.com/SQL/ServiceBroker/Error' AS ssb)
SELECT
@error_number = CAST(@message_body AS XML).value('(//ssb:Error/ssb:Code)[1]', 'INT'),
@error_message = CAST(@message_body AS XML).value('(//ssb:Error/ssb:Description)[1]', 'VARCHAR(4000)');
SET @error_dialog = CAST(@conversation_handle AS VARCHAR(50));
RAISERROR('Error in dialog %s: %s (%i)', 16, 1, @error_dialog, @error_message, @error_number);
END CONVERSATION @conversation_handle;
END
END
END TRY
BEGIN CATCH
SET @error_number = ERROR_NUMBER();
SET @error_message = ERROR_MESSAGE();
SET @error_severity = ERROR_SEVERITY();
SET @error_state = ERROR_STATE();
SET @error_procedure = ERROR_PROCEDURE();
SET @error_line = ERROR_LINE();
IF XACT_STATE() = -1
BEGIN
-- The transaction is doomed. Only rollback possible.
-- This could disable the queue if done 5 times consecutively!
ROLLBACK TRANSACTION;
-- Record the error.
BEGIN TRANSACTION;
INSERT INTO svcBroker.target_processing_errors (
error_conversation,[error_number],[error_message],[error_severity],
[error_state],[error_procedure],[error_line],[doomed_transaction],
[message_body])
VALUES (NULL, @error_number, @error_message, @error_severity,
@error_state, @error_procedure, @error_line, 1, @message_body);
COMMIT;
-- For this level of error, it is best to exit the proc
-- and give the queue monitor control.
-- Breaking to the outer catch will accomplish this.
RAISERROR ('Message processing error', 16, 1);
END
ELSE IF XACT_STATE() = 1
BEGIN
-- Record error and continue processing messages.
-- Failing message could also be put aside for later processing here.
-- Otherwise it will be discarded.
INSERT INTO svcBroker.target_processing_errors (
error_conversation,[error_number],[error_message],[error_severity],
[error_state],[error_procedure],[error_line],[doomed_transaction],
[message_body])
VALUES (NULL, @error_number, @error_message, @error_severity,
@error_state, @error_procedure, @error_line, 0, @message_body);
END
END CATCH
END
CLOSE message_cursor;
DELETE FROM @receive_table;
END
COMMIT;
END
END TRY
BEGIN CATCH
-- Process the error and exit the proc to give the queue monitor control
SET @error_number = ERROR_NUMBER();
SET @error_message = ERROR_MESSAGE();
SET @error_severity = ERROR_SEVERITY();
SET @error_state = ERROR_STATE();
SET @error_procedure = ERROR_PROCEDURE();
SET @error_line = ERROR_LINE();
IF XACT_STATE() = -1
BEGIN
-- The transaction is doomed. Only rollback possible.
-- This could disable the queue if done 5 times consecutively!
ROLLBACK TRANSACTION;
-- Record the error.
BEGIN TRANSACTION;
INSERT INTO svcBroker.target_processing_errors (
error_conversation,[error_number],[error_message],[error_severity],
[error_state],[error_procedure],[error_line],[doomed_transaction],
[message_body])
VALUES(NULL, @error_number, @error_message, @error_severity, @error_state, @error_procedure, @error_line, 1, @message_body);
COMMIT;
END
ELSE IF XACT_STATE() = 1
BEGIN
-- Record error and commit transaction.
-- Here you could also save anything else you want before exiting.
INSERT INTO svcBroker.target_processing_errors (
error_conversation,[error_number],[error_message],[error_severity],
[error_state],[error_procedure],[error_line],[doomed_transaction],
[message_body])
VALUES(NULL, @error_number, @error_message, @error_severity, @error_state, @error_procedure, @error_line, 0, @message_body);
COMMIT;
END
END CATCH
END;
GO

Generating custom ID for my application

I want to generate a custom ID for one of the features in my application. Here is the procedure to do that:
CREATE PROCEDURE [dbo].[GetNextVendorInvoiceNo]
AS
BEGIN
Declare @StartingVendorInvoiceNo int = 0
Select @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
From SystemSettings WITH (TABLOCK)
Update SystemSettings
Set StartingVendorInvoiceNo = @StartingVendorInvoiceNo
Select @StartingVendorInvoiceNo
END
Would there be any issue if multiple users end up calling this procedure? Obviously I don't want multiple users to get the same ID. I am using TABLOCK but am not sure whether this is the right way or whether anything else is required.
SQL Server 2012 has SEQUENCE feature, which is definitely safe for multi-user environment. It is based on an integer type, though.
If you have a complex procedure that generates a "next" ID and you want to make sure that only one instance of the procedure runs at any moment (at the expense of throughput), I'd use `sp_getapplock`. It is easy to use and understand, and you don't need to worry about placing correct query hints.
Your procedure would look like this:
CREATE PROCEDURE [dbo].[GetNextVendorInvoiceNo]
AS
BEGIN
SET NOCOUNT ON;
SET XACT_ABORT ON;
BEGIN TRANSACTION;
BEGIN TRY
DECLARE @VarLockResult int;
EXEC @VarLockResult = sp_getapplock
@Resource = 'GetNextVendorInvoiceNo_app_lock',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction',
@LockTimeout = 60000,
@DbPrincipal = 'public';
Declare @StartingVendorInvoiceNo int = 0;
IF @VarLockResult >= 0
BEGIN
-- Acquired the lock, generate the "next" ID
Select @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
From SystemSettings;
Update SystemSettings
Set StartingVendorInvoiceNo = @StartingVendorInvoiceNo;
END ELSE BEGIN
-- TODO: handle the case when it takes too long to acquire the lock,
-- i.e. return some error code
-- For example, return 0
SET @StartingVendorInvoiceNo = 0;
END;
Select @StartingVendorInvoiceNo;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
-- TODO: handle the error
END CATCH;
END
A simple TABLOCK as you wrote it is definitely not enough. You need to wrap everything in a transaction, then make sure the lock is held until the end of the transaction (see HOLDLOCK), then make sure the lock you are getting is the correct one; you may need TABLOCKX. So overall you need a pretty good understanding of all these hints and how locking works. It is definitely possible to achieve the same effect with these hints, but if the logic in the procedure is more complicated than your simplified example, it can easily get pretty ugly.
To my mind, sp_getapplock is easy to understand and maintain.

Error in stored procedure : current transaction cannot be commited and cannot support operations that write to the log file

The error message is:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
This part right here causes an error (once I comment the SELECT clause out everything runs smoothly).
DECLARE @TSV_Target_Counter INT
DECLARE @TargetTable nvarchar(255)
DECLARE @TargetColumn nvarchar(255)
DECLARE @Value nvarchar(4000)
DECLARE @SQLSTR nvarchar(4000)
SET @TSV_Target_Counter = ( SELECT MIN(Transition_Set_Variable_ID)
FROM #TSV_WithTarget )
SET @TargetTable = ( SELECT TargetTable
FROM #TSV_WithTarget
WHERE Transition_Set_Variable_ID = @TSV_Target_Counter )
SET @TargetColumn = ( SELECT TargetColumn
FROM #TSV_WithTarget
WHERE Transition_Set_Variable_ID = @TSV_Target_Counter )
SET @Value = ( SELECT Value
FROM #TSV_WithTarget
WHERE Transition_Set_Variable_ID = @TSV_Target_Counter )
-- problem starts here
SELECT @SQLSTR = 'UPDATE Business_Partner AS BP
INNER JOIN BP_Contact AS BPC ON BP.Business_Partner_ID = BPC.Business_Partner_ID
INNER JOIN Due_Diligence AS DD ON BPC.BP_Contact_ID = DD.BP_Contact_ID
SET ' + @TargetColumn + ' = ' + @Value + '
WHERE DD.Process_Instance_ID = ' + @Process_Instance_ID
-- ends here
EXEC(@SQLSTR);
Am I doing something wrong?
I am trying to test this SP with this transaction :
BEGIN TRANSACTION T1
EXEC Process_Instance_Value_AddAlter -- the name of the SP
REVERT
ROLLBACK TRANSACTION T1
You are operating in the context of an uncommitable (aka. 'doomed') transaction. Which implies there is more code that you did not show and probably the call occurs from a CATCH block. See Uncommittable Transactions and XACT_STATE:
If an error generated in a TRY block causes the state of the current transaction to be invalidated, the transaction is classified as an uncommittable transaction. An error that ordinarily ends a transaction outside a TRY block causes a transaction to enter an uncommittable state when the error occurs inside a TRY block. An uncommittable transaction can only perform read operations or a ROLLBACK TRANSACTION. The transaction cannot execute any Transact-SQL statements that would generate a write operation or a COMMIT TRANSACTION. The XACT_STATE function returns a value of -1 if a transaction has been classified as an uncommittable transaction. When a batch finishes, the Database Engine rolls back any active uncommittable transactions. If no error message was sent when the transaction entered an uncommittable state, when the batch finishes, an error message will be sent to the client application. This indicates that an uncommittable transaction was detected and rolled back.
The fix is quite simple: do not call the procedure from an uncommitable transaction context. Always check the XACT_STATE() in a CATCH block.

sql locking for batch updates

I'm using SQL Server 2008 R2 and C#.
I'm inserting a batch of rows in SQL Server with a column Status set with value P.
Afterwards, I check how many rows already have status R and, if there are fewer than 20, I update the row to status R.
While inserting and updating, more rows are getting added and updated all the time.
I've tried transactions and locking in multiple ways but still: at the moment that a new batch is activated, there are more than 20 rows with status R for a few milliseconds. After those few milliseconds it stabilizes back to 20.
Does anyone have an idea why at bursts the locking doesn't seem to work?
Sample code, reasons, whatever you can share on this subject can be useful!
Thanks!
Following is my stored proc:
DECLARE @return BIT
SET @return = -1
DECLARE @previousValue INT
--insert the started orchestration
INSERT INTO torchestrationcontroller WITH (ROWLOCK)
([flowname],[orchestrationid],[status])
VALUES (@FlowName, @OrchestrationID, 'P')
--check settings
DECLARE @maxRunning INT
SELECT @maxRunning = maxinstances
FROM torchestrationflows WITH (NOLOCK)
WHERE [flowname] = @FlowName
--if running is 0, then you can pass, no limitation here
IF( @maxRunning = 0 )
BEGIN
SET @return = 1
UPDATE torchestrationcontroller WITH(ROWLOCK)
SET [status] = 'R'
WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- BEGIN
RETRY: -- Label RETRY
BEGIN TRY
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION T1
--else: check how many orchestrations are now running
--start lock table
DECLARE @currentRunning INT
SELECT @currentRunning = Count(*)
FROM torchestrationcontroller WITH (TABLOCKX) --Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement
WHERE [flowname] = @FlowName
AND [status] = 'R'
--CASE
IF( @currentRunning < @maxRunning )
BEGIN
-- fewer orchestrations are running than allowed
SET @return = 1
UPDATE torchestrationcontroller WITH(TABLOCKX)
SET [status] = 'R'
WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- more or equal orchestrations are running than allowed
SET @return = 0
--end lock table
SELECT @return
COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
--PRINT 'Rollback Transaction'
ROLLBACK TRANSACTION
IF ERROR_NUMBER() = 1205 -- Deadlock Error Number
BEGIN
WAITFOR DELAY '00:00:00.05' -- Wait 50 ms before retrying
GOTO RETRY -- Go to Label RETRY
END
END CATCH
I've been able to fix it by setting the serializable isolation level plus a transaction on the three stored procedures (two of which I didn't mention because I didn't think they were relevant to this problem). Apparently it was the combination of multiple stored procs that were interfering with each other.
If you know a better way to fix it that could give me a better performance, please let me know!

SQL Create Procedure Abort Logic

Good afternoon all -
I have a temporary stored procedure that needs to be run as a hotfix in several places and I'd like to abort the creation and compilation of the SP if the version of the application is not exactly what I enter. I have the basic idea working but I'd like the messages to come out without all the schema issues from trying to compile the SP.
Here is basically what I have:
IF EXISTS ... DROP PROCEDURE
SELECT TOP 1 Version INTO #CurrentVersion FROM Application_Version ORDER BY UpdateDate DESC
IF NOT EXISTS (SELECT 1 FROM #CurrentVersion WHERE Version = 10)
RAISERROR ('This is for U10 only. Check the application version.', 20, 1) WITH LOG
CREATE PROCEDURE ....
The RAISERROR causes the SP to not end up in the DB and I do get an error but I also end up with schema errors due to schema changes in the past. Due to the SP needing to be the first statement in the batch I can't use IF / ELSE and NOEXEC yields the same results as RAISERROR (without the error).
Any ideas for what can be done to get all of the same results as above without the SP checking the schema if it hits the RAISERROR so I don't end up with a bunch of extra messages reported?
What you want is the error condition to stop execution of the script, which is possible in SQLCMD mode of the query editor with a simple :on error exit:
:on error exit
SELECT TOP 1 Version INTO #CurrentVersion FROM Application_Version ORDER BY UpdateDate DESC
IF NOT EXISTS (SELECT 1 FROM #CurrentVersion WHERE Version = 10)
RAISERROR ('This is for U10 only. Check the application version.', 16, 1);
go
IF EXISTS ... DROP PROCEDURE
go
CREATE PROCEDURE ....
...
go
With this in place there is no need to raise severity 20. Severity 16 is enough, which will take care of the ERRORLOG issue you complain about.
The RETURN statement will exit out of a SP. When doing error checking, put a BEGIN and END after your IF statement and after your RAISERROR put a RETURN statement.
There are a couple of options here. My approach would be as follows, because I feel that it provides the best flow:
IF EXISTS ... DROP PROCEDURE
IF EXISTS (SELECT * FROM Application_Version WHERE Version = 10)
BEGIN
DECLARE @sql NVARCHAR(MAX)
SET @sql = 'CREATE PROCEDURE blablabla AS
BEGIN
-- Your Procedure HERE
END'
EXEC sp_executesql @sql
END ELSE
RAISERROR ('This is for U10 only. Check the application version.', 20, 1) WITH LOG