Service Broker queue "is currently disabled" several times right after upgrading from SQL Server 2012 to 2017, nothing in logs

Yesterday we took our SQL Server 2012 instance, which had been processing messages for several years without any issues (except a periodic performance issue that started several months ago, details below), and upgraded it from SQL Server 2012 to SQL Server 2017 + CU29 (KB5010786). We have tried moving the compatibility level from 2012 to 2017, and while it helps with some of the new issues we're seeing, performance is not great.
But the big thing is: since then, the queue has spontaneously disabled itself twice. It works for minutes or hours, then poof, the queue is disabled. We have logging, and nothing's giving us anything. We had briefly turned on Query Store, but after the queue disabled itself the first time, we went looking for similar issues. We found a thread online describing similar problems that blamed Query Store, so we immediately flipped it to Read-Only. (We had turned it on to try to fix a performance issue we see on Mondays, where the server grabs a bad plan and rebooting seems to be the only fix; we don't see it the rest of the week.) Also of note: this workload doesn't update rows, it just inserts new rows into a series of hourly tables.
We're also seeing massive LCK_M_IX locking where we didn't before. Looking at that next. It's in a part of the code that inserts into a table, where the inserted rows are generated by a CLR function (INSERT INTO table SELECT ... FROM the CLR). Moving from 2012 to 2017 seems to have changed that behavior. It still seems slow overall, but I'm terrified about the queue spontaneously disabling again.
We are running the same load on two separate servers, so I have the ability to compare things.
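To chase the LCK_M_IX waits, a quick way to see who is waiting and who is blocking is the standard waiting-tasks DMV. This is a generic sketch against the built-in views, not anything specific to this workload:

```sql
-- Who is waiting on locks right now, and who is blocking them
SELECT wt.session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id,
       wt.resource_description,
       r.command,
       t.text AS sql_text
FROM sys.dm_os_waiting_tasks AS wt
JOIN sys.dm_exec_requests AS r
    ON r.session_id = wt.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE wt.wait_type LIKE N'LCK_M%';
```

Running this on both servers makes it easy to compare blocking chains side by side.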
The "disabled" message in our logging table appears several times, all at the same time (I'm guessing once per thread). Nothing in the SQL Server error log. Interestingly, message_body is NULL in some of the logging rows but populated in others. In either case, we see no errors for several minutes before it occurred.
The service queue "ODS_TargetQueue" is currently disabled.
We're also running an Extended Events session that logs any severity 11+ errors.
All it's showing is
The service queue "ODS_TargetQueue" is currently disabled.
We are also seeing this sporadically which we normally don't see unless we're having log backup issues:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
We have also seen this a handful of times this morning, which seems to be new:
Process ID 57 attempted to unlock a resource it does not own: METADATA: database_id = 7 CONVERSATION_ENDPOINT_RECV($hash = 0x9904a343:0x9c8327f9:0x4b), lockPartitionId = 0. Retry the transaction, because this error may be caused by a timing condition. If the problem persists, contact the database administrator.
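Since nothing reaches the error log when the queue shuts off, one option is an event notification on the queue itself, so SQL Server sends a message the moment the queue is disabled. This is a sketch; 'QueueWatcherService' is a placeholder for a broker service of your own that you monitor:

```sql
-- Get notified the moment poison-message handling disables the queue.
-- 'QueueWatcherService' is a hypothetical monitoring service name.
CREATE EVENT NOTIFICATION ODS_TargetQueue_Disabled
ON QUEUE [svcBroker].[ODS_TargetQueue]
FOR BROKER_QUEUE_DISABLED
TO SERVICE 'QueueWatcherService', 'current database';
```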
The queue:
CREATE QUEUE [svcBroker].[ODS_TargetQueue]
WITH STATUS = ON,
RETENTION = OFF,
ACTIVATION (
STATUS = ON,
PROCEDURE_NAME = [svcBroker].[ODS_TargetQueue_Receive],
MAX_QUEUE_READERS = 30,
EXECUTE AS OWNER ),
POISON_MESSAGE_HANDLING (STATUS = ON)
ON [PRIMARY]
GO
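For reference, this is how to check whether poison-message handling has disabled the queue, and how to turn it back on while investigating (standard catalog view, nothing exotic):

```sql
-- Is the queue still enabled?
SELECT name,
       is_receive_enabled,
       is_enqueue_enabled,
       is_activation_enabled,
       is_poison_message_handling_enabled
FROM sys.service_queues
WHERE name = N'ODS_TargetQueue';

-- Re-enable it after a poison-message shutdown
ALTER QUEUE [svcBroker].[ODS_TargetQueue] WITH STATUS = ON;
```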
The procedure
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
GO
CREATE PROCEDURE [svcBroker].[ODS_TargetQueue_Receive]
AS
BEGIN
set nocount on
-- Variable table for received messages.
DECLARE @receive_table TABLE(
queuing_order BIGINT,
conversation_handle UNIQUEIDENTIFIER,
message_type_name SYSNAME,
message_body xml);
-- Cursor for received message table.
DECLARE message_cursor CURSOR LOCAL FORWARD_ONLY READ_ONLY
FOR SELECT
conversation_handle,
message_type_name,
message_body
FROM @receive_table ORDER BY queuing_order;
DECLARE @conversation_handle UNIQUEIDENTIFIER;
DECLARE @message_type SYSNAME;
DECLARE @message_body xml;
-- Error variables.
DECLARE @error_number INT;
DECLARE @error_message VARCHAR(4000);
DECLARE @error_severity INT;
DECLARE @error_state INT;
DECLARE @error_procedure SYSNAME;
DECLARE @error_line INT;
DECLARE @error_dialog VARCHAR(50);
BEGIN TRY
WHILE (1 = 1)
BEGIN
BEGIN TRANSACTION;
-- Receive all available messages into the table variable.
-- Wait up to 2 seconds for messages.
WAITFOR (
RECEIVE TOP (1000)
[queuing_order],
[conversation_handle],
[message_type_name],
convert(xml, [message_body])
FROM svcBroker.ODS_TargetQueue
INTO @receive_table
), TIMEOUT 2000;
IF @@ROWCOUNT = 0
BEGIN
COMMIT;
BREAK;
END
ELSE
BEGIN
OPEN message_cursor;
WHILE (1=1)
BEGIN
FETCH NEXT FROM message_cursor
INTO @conversation_handle,
@message_type,
@message_body;
IF (@@FETCH_STATUS != 0) BREAK;
-- Process a message.
-- If an exception occurs, catch and attempt to recover.
BEGIN TRY
IF @message_type = 'svcBroker_ods_claim_request'
BEGIN
exec ParseRequestMessages @message_body
END
ELSE IF @message_type in ('svcBroker_EndOfStream', 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
BEGIN
-- initiator is signaling end of message stream: end the dialog
END CONVERSATION @conversation_handle;
END
ELSE IF @message_type = 'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
BEGIN
-- If the message_type indicates that the message is an error,
-- raise the error and end the conversation.
WITH XMLNAMESPACES ('http://schemas.microsoft.com/SQL/ServiceBroker/Error' AS ssb)
SELECT
@error_number = CAST(@message_body AS XML).value('(//ssb:Error/ssb:Code)[1]', 'INT'),
@error_message = CAST(@message_body AS XML).value('(//ssb:Error/ssb:Description)[1]', 'VARCHAR(4000)');
SET @error_dialog = CAST(@conversation_handle AS VARCHAR(50));
RAISERROR('Error in dialog %s: %s (%i)', 16, 1, @error_dialog, @error_message, @error_number);
END CONVERSATION @conversation_handle;
END
END TRY
BEGIN CATCH
SET @error_number = ERROR_NUMBER();
SET @error_message = ERROR_MESSAGE();
SET @error_severity = ERROR_SEVERITY();
SET @error_state = ERROR_STATE();
SET @error_procedure = ERROR_PROCEDURE();
SET @error_line = ERROR_LINE();
IF XACT_STATE() = -1
BEGIN
-- The transaction is doomed. Only rollback possible.
-- This could disable the queue if done 5 times consecutively!
ROLLBACK TRANSACTION;
-- Record the error.
BEGIN TRANSACTION;
INSERT INTO svcBroker.target_processing_errors (
error_conversation,[error_number],[error_message],[error_severity],
[error_state],[error_procedure],[error_line],[doomed_transaction],
[message_body])
VALUES (NULL, @error_number, @error_message, @error_severity,
@error_state, @error_procedure, @error_line, 1, @message_body);
COMMIT;
-- For this level of error, it is best to exit the proc
-- and give the queue monitor control.
-- Breaking to the outer catch will accomplish this.
RAISERROR ('Message processing error', 16, 1);
END
ELSE IF XACT_STATE() = 1
BEGIN
-- Record error and continue processing messages.
-- Failing message could also be put aside for later processing here.
-- Otherwise it will be discarded.
INSERT INTO svcBroker.target_processing_errors (
error_conversation,[error_number],[error_message],[error_severity],
[error_state],[error_procedure],[error_line],[doomed_transaction],
[message_body])
VALUES (NULL, @error_number, @error_message, @error_severity,
@error_state, @error_procedure, @error_line, 0, @message_body);
END
END CATCH
END
CLOSE message_cursor;
DELETE FROM @receive_table;
END
COMMIT;
END
END TRY
BEGIN CATCH
-- Process the error and exit the proc to give the queue monitor control
SET @error_number = ERROR_NUMBER();
SET @error_message = ERROR_MESSAGE();
SET @error_severity = ERROR_SEVERITY();
SET @error_state = ERROR_STATE();
SET @error_procedure = ERROR_PROCEDURE();
SET @error_line = ERROR_LINE();
IF XACT_STATE() = -1
BEGIN
-- The transaction is doomed. Only rollback possible.
-- This could disable the queue if done 5 times consecutively!
ROLLBACK TRANSACTION;
-- Record the error.
BEGIN TRANSACTION;
INSERT INTO svcBroker.target_processing_errors (
error_conversation,[error_number],[error_message],[error_severity],
[error_state],[error_procedure],[error_line],[doomed_transaction],
[message_body])
VALUES(NULL, @error_number, @error_message, @error_severity, @error_state, @error_procedure, @error_line, 1, @message_body);
COMMIT;
END
ELSE IF XACT_STATE() = 1
BEGIN
-- Record error and commit transaction.
-- Here you could also save anything else you want before exiting.
INSERT INTO svcBroker.target_processing_errors (
error_conversation,[error_number],[error_message],[error_severity],
[error_state],[error_procedure],[error_line],[doomed_transaction],
[message_body])
VALUES(NULL, @error_number, @error_message, @error_severity, @error_state, @error_procedure, @error_line, 0, @message_body);
COMMIT;
END
END CATCH
END;
GO

Related

SQL While Loop to retry Transaction until success

I need to have a transaction which, when it fails, rolls back and retries.
Here's what I have so far:
CREATE PROCEDURE [dbo].[Save]
@MasterID as bigint
AS
BEGIN
SET NOCOUNT ON;
SET XACT_ABORT ON;
Declare @successtrans bit = 0
While @successtrans = 0
Begin -- while
BEGIN TRY
BEGIN TRANSACTION
DELETE SomeRecords WHERE ParentRecordID = @MasterID
INSERT INTO SomeRecords
Select FieldOne, FieldTwo, FieldThree, ParentRecordID
FROM OtherRecords
WHERE ParentRecordID = @MasterID
INSERT INTO AnotherTable
SELECT FieldA, FieldB, FieldC, FieldD
FROM MoreRecords
WHERE FieldD = @MasterID
COMMIT TRANSACTION
set @successtrans = 1
END TRY
BEGIN CATCH
IF(XACT_STATE()) = -1 -- Determine the state of the current transaction (1 = committable, 0 = no active transaction (same as @@TRANCOUNT check), -1 = active, but uncommittable) only on severity 17 or higher
BEGIN
DECLARE @ErrorNumber bigint = ERROR_NUMBER(),
@ErrorSeverity int = ERROR_SEVERITY(),
@ErrorState int = ERROR_STATE(),
@ErrorProcedure varchar(100) = ERROR_PROCEDURE(),
@ErrorLine int = ERROR_LINE(),
@ErrorMessage varchar(Max) = ERROR_MESSAGE()
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
INSERT INTO ErrorLog(ErrorNumber, ErrorSeverity, ErrorState, ErrorProcedure, ErrorLine, ErrorMessage, ErrorDate)
VALUES(@ErrorNumber, @ErrorSeverity, @ErrorState, @ErrorProcedure, @ErrorLine, @ErrorMessage, GETDATE())
END -- if
END CATCH
End -- while
END -- proc
END -- proc
GO
The question is: will this work for what I'm doing? Logic says yes. But this is a methodology I need to implement across 10 or more stored procedures, and I'm having trouble making it fail and repeat.
You should check in the catch block why it's failing, or you could generate an infinite loop (for example, on already-existing primary keys).
As I don't know why you want to do this, I won't say it's a bad design, but it's really suspicious. If it's failing, it's failing for some reason, and the exception message will tell you why. Instead of "brute forcing" your database until it does what you're expecting, you should attack the cause of the problem (bad data? concurrency? lots of things could be causing the exception).
Edit: You should take care of your log too; on an infinite loop it will eventually fill your HDD.
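One way to avoid both the infinite loop and the full log is to bound the retries and only retry errors that are actually transient. A minimal sketch, assuming deadlocks (error 1205) are the retryable case; the work inside the transaction and the retry limit are illustrative:

```sql
DECLARE @retry int = 0,
        @maxRetries int = 5,
        @done bit = 0;

WHILE @done = 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the DELETE / INSERT work goes here ...
        COMMIT TRANSACTION;
        SET @done = 1;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        SET @retry += 1;
        -- Only retry deadlocks (1205); rethrow anything else,
        -- and give up after @maxRetries attempts instead of looping forever.
        IF ERROR_NUMBER() <> 1205 OR @retry >= @maxRetries
            THROW;
        WAITFOR DELAY '00:00:01'; -- brief back-off before retrying
    END CATCH
END
```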

How to log errors even if the transaction is rolled back?

Let's say we have the following commands:
SET XACT_ABORT OFF;
SET IMPLICIT_TRANSACTIONS OFF
DECLARE @index int
SET @index = 4;
DECLARE @errorCount int
SET @errorCount = 0;
BEGIN TRANSACTION
WHILE @index > 0
BEGIN
SAVE TRANSACTION Foo;
BEGIN TRY
-- commands to execute...
INSERT INTO AppDb.dbo.Customers VALUES('Jalal', '1990-03-02');
-- make a problem
IF @index = 3
INSERT INTO AppDb.dbo.Customers VALUES('Jalal', '9999-99-99');
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION Foo; -- I want to keep track of previous logs, but it doesn't work! :(
INSERT INTO AppDb.dbo.LogScripts VALUES(NULL, 'error', 'Customers', suser_name());
SET @errorCount = @errorCount + 1;
END CATCH
SET @index = @index - 1;
END
IF @errorCount > 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
I want to execute a batch, keep all errors in a log, and then, if no error occurred, commit all changes. How can I implement this in SQL Server?
The transaction is tied to the connection, and as such, all writes will be rolled back on the outer ROLLBACK TRANSACTION (irrespective of the nested savepoints).
What you can do is log the errors to an in-memory structure, like a Table Variable, and then, after committing / rolling back the outer transaction, you can then insert the logs collected.
I've simplified your Logs and Customers tables for the purpose of brevity:
CREATE TABLE [dbo].[Logs](
[Description] [nvarchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
CREATE TABLE [dbo].[Customers](
[ID] [int] NOT NULL,
[Name] [nvarchar](50) NULL
);
GO
And then you can track the logs in the table variable:
SET XACT_ABORT OFF;
SET IMPLICIT_TRANSACTIONS OFF
GO
DECLARE @index int;
SET @index = 4;
DECLARE @errorCount int
SET @errorCount = 0;
-- In-memory storage to accumulate logs, outside of the transaction
DECLARE @TempLogs AS TABLE (Description NVARCHAR(MAX));
BEGIN TRANSACTION
WHILE @index > 0
BEGIN
-- SAVE TRANSACTION Foo; As per commentary below, the savepoint is futile here
BEGIN TRY
-- commands to execute...
INSERT INTO Customers VALUES(1, 'Jalal');
-- make a problem
IF @index = 3
INSERT INTO Customers VALUES(NULL, 'Broken');
END TRY
BEGIN CATCH
-- ROLLBACK TRANSACTION Foo; -- Would roll back to the savepoint
INSERT INTO @TempLogs(Description)
VALUES ('Something bad happened on index ' + CAST(@index AS VARCHAR(50)));
SET @errorCount = @errorCount + 1;
END CATCH
SET @index = @index - 1;
END
IF @errorCount > 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
-- Finally, do the actual insertion of logs, outside the boundaries of the transaction.
INSERT INTO dbo.Logs(Description)
SELECT Description FROM @TempLogs;
One thing to note is that this is quite an expensive way to process data (i.e. attempt to insert all the data, then roll back the whole batch if any problems were encountered). An alternative would be to validate all the data (and report errors) before attempting to insert any of it.
Also, in the example above, the savepoint serves no real purpose, as even "successful" Customer inserts will eventually be rolled back if any errors were detected for the batch.
SqlFiddle here: the loop completes, and despite 3 customers being inserted, the ROLLBACK TRANSACTION removes all successfully inserted customers. The log is still written, however, as the table variable is not subject to the outer transaction.
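The validate-first alternative mentioned above can be sketched like this, reusing the simplified tables from the answer (the @Incoming table variable stands in for whatever staging data is being loaded):

```sql
-- Validate the whole batch up front; only insert if nothing is invalid.
-- @Incoming is a hypothetical staging structure for this sketch.
DECLARE @Incoming TABLE (ID int NULL, Name nvarchar(50) NULL);
INSERT INTO @Incoming VALUES (1, N'Jalal'), (NULL, N'Broken');

IF EXISTS (SELECT 1 FROM @Incoming WHERE ID IS NULL)
    -- Report every offending row instead of attempting the insert.
    INSERT INTO dbo.Logs(Description)
    SELECT N'Rejected batch: NULL ID for ' + ISNULL(Name, N'(unnamed)')
    FROM @Incoming WHERE ID IS NULL;
ELSE
    INSERT INTO dbo.Customers (ID, Name)
    SELECT ID, Name FROM @Incoming;
```

This trades the rollback cost for an extra validation pass, which is usually much cheaper than writing and undoing the data.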

SQL - Transaction count after EXECUTE indicates a mismatch

I am working on a stored procedure in MS SQL Server Management Studio 2012.
I am reusing some code from other procedures and this includes calling another stored procedure with the EXEC command.
This procedure takes information for an object as parameters, resets that object's status, and then looks for related (downstream) objects and resets their statuses as well.
The code was working, but required a small change: reset a related object's status even if it was not in an error status. To address this, I created a BIT variable before the main loop that iterates over the related objects; when the first object with an error status is encountered, the variable is set.
Here is the procedure:
USE [DB_TEST]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[resetTasksInErrorStatus]
@TaskId INT,
@SubjectId INT,
@ProjectProtocolId INT,
@TimePointId INT,
@WorkbookId INT,
@LogCreatorUserId INT,
@LogReasonTypeId INT,
@LogOtherReason VARCHAR(256),
@SignatureCaptured BIT,
@ManualChangeTaskStatus BIT,
@ErrorIfModifiedAfter DATETIME,
@NewTaskStatusID INT,
@ProcessorUserId INT
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRANSACTION ResetTasksInError
-- Make sure the task has not been modified since the reset was requested
IF(EXISTS(
SELECT ti.* FROM TaskInstance as ti
WHERE ti.ModifyDate > @ErrorIfModifiedAfter
AND ti.TaskId = @TaskId
AND ti.SubjectId = @SubjectId
AND ti.ProjectProtocolId = @ProjectProtocolId
AND ti.TimePointId = @TimePointId
AND ti.WorkbookId = @WorkbookId))
BEGIN
RAISERROR('The task to be reset was modified before the reset could complete', 16, 1);
GOTO error
END
-- Get all downstream task instances
SELECT *
INTO #downstreamTaskInstances
from dbo.fnGetTaskInstancesDownstream
(
@TaskId,
@SubjectId,
@TimePointId
)
-- Get the previous task status
DECLARE @OldTaskStatus INT;
SELECT TOP 1 @OldTaskStatus = ti.TaskStatusTypeId FROM TaskInstance as ti
WHERE ti.TaskId = @TaskId
AND ti.SubjectId = @SubjectId
AND ti.TimePointId = @TimePointId
-- Reset the task
EXEC setTaskStatus
@TaskID,
@SubjectId,
@ProjectProtocolId,
@TimePointId,
@WorkBookId,
@OldTaskStatus,
@NewTaskStatusID,
@ProcessorUserId,
@LogCreatorUserId,
@LogReasonTypeId,
@LogOtherReason,
@SignatureCaptured,
@ManualChangeTaskStatus,
NULL,
NULL,
NULL,
1
-- Check if setTaskStatus rolled back our transaction
IF(@@TRANCOUNT = 0)
BEGIN
RAISERROR('Error in sub procedure. Changes rolled back.', 16, 1);
RETURN
END
-- Set a boolean variable to determine whether downstream tasks should be reset
DECLARE @ResetDownstreamTasks BIT = 0;
--Set @ResetDownstreamTasks = 0;
-- Create a cursor of the downstream tasks
DECLARE downstreamCursor CURSOR FOR
SELECT TaskId, TimePointId, ProcessorUserId, TaskStatus, WorkBookId, ProjectProtocolId, IsManual, TaskStatus
FROM #downstreamTaskInstances
OPEN downstreamCursor
-- Reset each downstream task to unprocessed
DECLARE @CursorTaskID INT;
DECLARE @CursorTimePointID INT;
DECLARE @CursorProcessorUserId INT;
DECLARE @CursorTaskStatusID INT;
DECLARE @CursorWorkBookId INT;
DECLARE @CursorProjectProtocolId INT;
DECLARE @CursorIsManual BIT;
Declare @CursorTaskStatus INT;
FETCH NEXT FROM downstreamCursor INTO @CursorTaskID, @CursorTimePointId, @CursorProcessorUserId, @CursorTaskStatusID, @CursorWorkBookId, @CursorProjectProtocolId, @CursorIsManual, @CursorTaskStatus
WHILE @@FETCH_STATUS = 0 AND @@ERROR = 0 AND @@TRANCOUNT = 1
BEGIN
-- Check if the task is in error status, and then make sure it is not a manual task.
-- Manual tasks should never be in error status, so there is no need to reset them.
if @CursorTaskStatus = 10 and @CursorIsManual <> 1
begin
SET @ResetDownstreamTasks = 1;
EXEC setTaskStatus
@CursorTaskID,
@SubjectId,
@CursorProjectProtocolId,
@CursorTimePointId,
@CursorWorkBookId,
@CursorTaskStatusID,
1, -- Unprocessed
@CursorProcessorUserId,
@LogCreatorUserId,
@LogReasonTypeId,
@LogOtherReason,
@SignatureCaptured,
@ManualChangeTaskStatus,
NULL,
NULL,
NULL,
0
end;
if @ResetDownstreamTasks = 1
begin
EXEC setTaskStatus
@CursorTaskID,
@SubjectId,
@CursorProjectProtocolId,
@CursorTimePointId,
@CursorWorkBookId,
@CursorTaskStatusID,
6, -- Inspected
@CursorProcessorUserId,
@LogCreatorUserId,
@LogReasonTypeId,
@LogOtherReason,
@SignatureCaptured,
@ManualChangeTaskStatus,
NULL,
NULL,
NULL,
0
end
FETCH NEXT FROM downstreamCursor INTO @CursorTaskID, @CursorTimePointId, @CursorProcessorUserId, @CursorTaskStatusID, @CursorWorkBookId, @CursorProjectProtocolId, @CursorIsManual, @CursorTaskStatus
END
DROP TABLE #downstreamTaskInstances
CLOSE downstreamCursor
DEALLOCATE downstreamCursor
-- Check if setTaskStatus rolled back our transaction
IF(@@TRANCOUNT = 0)
BEGIN
RAISERROR('Error in sub procedure. Changes rolled back.', 16, 1);
RETURN
END
IF(@@ERROR <> 0)
BEGIN
GOTO ERROR
END
COMMIT TRANSACTION ResetTasksInError
RETURN
ERROR:
RAISERROR('Error encountered. Changes rolled back.', 16,1);
ROLLBACK TRANSACTION ResetTasksInError
RETURN
END
When I run the procedure I get these errors:
Msg 266, Level 16, State 2, Procedure setTaskStatus, Line 0
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0.
Msg 16943, Level 16, State 4, Procedure resetTasksInErrorStatus, Line 197
Could not complete cursor operation because the table schema changed after the cursor was declared.
Msg 3701, Level 11, State 5, Procedure resetTasksInErrorStatus, Line 200
Cannot drop the table '#downstreamTaskInstances', because it does not exist or you do not have permission.
Msg 50000, Level 16, State 1, Procedure resetTasksInErrorStatus, Line 207
Error in sub procedure. Changes rolled back.
If I comment out the SET statement, the procedure runs and works (though not as desired).
I have looked around for similar questions, but none of them solved my problem.
Is there something I am missing with regard to the SET statement?
Is it affecting @@TRANCOUNT somehow?
I did see some posts mentioning that the problem is likely in the stored procedure that this one calls, but I am a little hesitant, because that stored procedure works, and these errors only show up when trying to set the value of the variable.
Just a guess, but I think what happens is the following:
you create a temp table inside the transaction;
you start a cursor and iterate through it until one of the inner stored procedures raises an error;
the transaction is rolled back, so the temp table goes with it;
the cursor then gives the schema-change error;
and the DROP TABLE fails.
So you should check for the error in one of the inner stored procedures.
Hope it helps.
I ended up finding a way to make the stored procedure work.
What I did was remove the stored procedure EXEC statement from the IF block that sets the value of the variable.
So the code in the loop looks like this:
WHILE @@FETCH_STATUS = 0 AND @@ERROR = 0 AND @@TRANCOUNT = 1
BEGIN
-- Check if the task is in error status, and then make sure it is not a manual task.
-- Manual tasks should never be in error status, so there is no need to reset them.
if @CursorTaskStatus = 10 and @CursorIsManual <> 1
begin
SET @ResetDownstreamTasks = 1;
end;
if @ResetDownstreamTasks = 1
begin
EXEC setTaskStatus
@CursorTaskID,
@SubjectId,
@CursorProjectProtocolId,
@CursorTimePointId,
@CursorWorkBookId,
@CursorTaskStatusID,
6, -- Inspected
@CursorProcessorUserId,
@LogCreatorUserId,
@LogReasonTypeId,
@LogOtherReason,
@SignatureCaptured,
@ManualChangeTaskStatus,
NULL,
NULL,
NULL,
0
end
FETCH NEXT FROM downstreamCursor INTO @CursorTaskID, @CursorTimePointId, @CursorProcessorUserId, @CursorTaskStatusID, @CursorWorkBookId, @CursorProjectProtocolId, @CursorIsManual, @CursorTaskStatus
END

Batchwise Script Execution

I have a long script which contains CREATE TABLE, CREATE SCHEMA, INSERT, and UPDATE statements, etc. I have to run it as one script, batch by batch. When I ran it before, it raised errors partway through, which left some objects behind in the database. So I need a mechanism to handle the batch execution: if anything goes wrong, the whole script should be rolled back.
Appreciate your help and time.
Try this:
DECLARE @outer_tran int;
SELECT @outer_tran = @@TRANCOUNT;
-- find out whether we are inside an outer transaction
-- if yes, create a save point; if no, start our own transaction
IF @outer_tran > 0 SAVE TRAN save_point ELSE BEGIN TRAN;
BEGIN TRY
-- YOUR CODE HERE
-- if no errors and we have started our own transaction, commit it
IF @outer_tran = 0 COMMIT;
END TRY
BEGIN CATCH
-- if an error occurred, roll back the whole transaction if it is our own,
-- or roll back to the save point if we are inside an external transaction
IF @outer_tran > 0 ROLLBACK TRAN save_point ELSE ROLLBACK;
-- and rethrow the original exception to see what happened
DECLARE
@ErrorMessage nvarchar(max),
@ErrorSeverity int,
@ErrorState int;
SELECT
@ErrorMessage = ERROR_MESSAGE() + ' Line ' + cast(ERROR_LINE() as nvarchar(5)),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH
END CATCH
While I might not have caught all the nuances of your question, I believe XACT_ABORT will deliver the functionality you seek. Simply add a
SET XACT_ABORT ON;
to the beginning of your script.
With the 2005 release of SQL Server, you have access to try/catch blocks in TSQL as well.
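Combining the two suggestions, a minimal sketch (the table name dbo.Demo is made up for illustration; THROW requires SQL Server 2012+, use RAISERROR on older versions): with SET XACT_ABORT ON, any runtime error dooms the transaction, so the CATCH block only has to roll back and surface the error, and nothing from the script is left half-applied.

```sql
SET XACT_ABORT ON;

BEGIN TRY
    BEGIN TRANSACTION;
    -- Your long script: CREATE TABLE / INSERT / UPDATE statements, etc.
    -- (dbo.Demo is a placeholder table for this sketch.)
    INSERT INTO dbo.Demo (Id) VALUES (1);
    INSERT INTO dbo.Demo (Id) VALUES (1); -- e.g. a PK violation fails here
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- With XACT_ABORT ON the transaction is doomed (XACT_STATE() = -1),
    -- so roll everything back and rethrow the original error.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH
```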

How to simulate a deadlock in SQL Server in a single process?

Our client side code detects deadlocks, waits for an interval, then retries the request up to 5 times. The retry logic detects the deadlocks based on the error number 1205.
My goal is to test both the deadlock retry logic and deadlock handling inside of various stored procedures. I can create a deadlock using two different connections. However, I would like to simulate a deadlock inside of a single stored procedure itself.
A deadlock raises the following error message:
Msg 1205, Level 13, State 51, Line 1
Transaction (Process ID 66) was
deadlocked on lock resources with another process and has been chosen
as the deadlock victim. Rerun the transaction.
I see this error message is in sys.messages:
select * from sys.messages where message_id = 1205 and language_id = 1033
message_id language_id severity is_event_logged text
1205 1033 13 0 Transaction (Process ID %d) was deadlocked on %.*ls resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I can't raise this error using RAISERROR:
raiserror(1205, 13, 51)
Msg 2732, Level 16, State 1, Line 1
Error number 1205 is invalid.
The number must be from 13000 through 2147483647 and it cannot be 50000.
Our deadlock retry logic checks if the error number is 1205. The deadlock needs to have the same message ID, level, and state as a normal deadlock.
Is there a way to simulate a deadlock (with RAISERROR or any other means) and get the same message number out with just one process?
Our databases are using SQL 2005 compatibility, though our servers vary from 2005 through 2008 R2.
As many have pointed out, the answer is no: a single process cannot reliably deadlock itself. I came up with the following solution to simulate a deadlock on a development or test system.
Run the script below in a SQL Server Management Studio window. (Tested on 2008 R2 only.) You can leave it running as long as necessary.
In the place you want to simulate a deadlock, insert a call to sp_simulatedeadlock. Run your process, and the deadlock should occur.
When done testing, stop the SSMS query and run the cleanup code at the bottom.
/*
This script helps simulate deadlocks. Run the entire script in a SQL query window. It will continue running until stopped.
In the target script, insert a call to sp_simulatedeadlock where you want the deadlock to occur.
This stored procedure, also created below, causes the deadlock.
When you are done, stop the execution of this window and run the code in the cleanup section at the bottom.
*/
set nocount on
if object_id('DeadlockTest') is not null
drop table DeadlockTest
create table DeadlockTest
(
Deadlock_Key int primary key clustered,
Deadlock_Count int
)
go
if exists (select * from sysobjects where id = object_id(N'sp_simulatedeadlock')
AND objectproperty(id, N'IsProcedure') = 1)
drop procedure sp_simulatedeadlock
GO
create procedure sp_simulatedeadlock
(
@MaxDeadlocks int = -1 -- specify the number of deadlocks you want; -1 = constant deadlocking
)
as begin
set nocount on
if object_id('DeadlockTest') is null
return
-- Volunteer to be a deadlock victim.
set deadlock_priority low
declare @DeadlockCount int
select @DeadlockCount = Deadlock_Count -- this starts at 0
from DeadlockTest
where Deadlock_Key = 2
-- Trace the start of each deadlock event.
-- To listen to the trace event, set up a SQL Server Profiler trace with event class "UserConfigurable:0".
-- Note that the user running this proc must have ALTER TRACE permission.
-- Also note that only 128 characters are allowed in the trace text.
declare @trace nvarchar(128)
if @MaxDeadlocks > 0 AND @DeadlockCount > @MaxDeadlocks
begin
set @trace = N'Deadlock Test @MaxDeadlocks: ' + cast(@MaxDeadlocks as nvarchar) + N' @DeadlockCount: ' + cast(@DeadlockCount as nvarchar) + N' Resetting deadlock count. Will not cause deadlock.'
exec sp_trace_generateevent
@eventid = 82, -- 82 = UserConfigurable:0 through 91 = UserConfigurable:9
@userinfo = @trace
-- Reset the number of deadlocks.
-- Hopefully if there is an outer transaction, it will complete and persist this change.
update DeadlockTest
set Deadlock_Count = 0
where Deadlock_Key = 2
return
end
set @trace = N'Deadlock Test @MaxDeadlocks: ' + cast(@MaxDeadlocks as nvarchar) + N' @DeadlockCount: ' + cast(@DeadlockCount as nvarchar) + N' Simulating deadlock.'
exec sp_trace_generateevent
@eventid = 82, -- 82 = UserConfigurable:0 through 91 = UserConfigurable:9
@userinfo = @trace
declare @StartedTransaction bit
set @StartedTransaction = 0
if @@trancount = 0
begin
set @StartedTransaction = 1
begin transaction
end
-- lock 2nd record
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 2
-- lock 1st record to cause deadlock
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 1
if @StartedTransaction = 1
rollback
end
go
insert into DeadlockTest(Deadlock_Key, Deadlock_Count)
select 1, 0
union select 2, 0
-- Force other processes to be the deadlock victim.
set deadlock_priority high
begin transaction
while 1 = 1
begin
begin try
begin transaction
-- lock 1st record
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 1
waitfor delay '00:00:10'
-- lock 2nd record (which will be locked when the target proc calls sp_simulatedeadlock)
update DeadlockTest
set Deadlock_Count = Deadlock_Count
from DeadlockTest
where Deadlock_Key = 2
rollback
end try
begin catch
print 'Error ' + convert(varchar(20), ERROR_NUMBER()) + ': ' + ERROR_MESSAGE()
goto cleanup
end catch
end
cleanup:
if @@trancount > 0
rollback
drop procedure sp_simulatedeadlock
drop table DeadlockTest
You can exploit a bug that Microsoft seems in no hurry to fix by running
begin tran
go
CREATE TYPE dbo.IntIntSet AS TABLE(
Value0 Int NOT NULL,
Value1 Int NOT NULL
)
go
declare @myPK dbo.IntIntSet;
go
rollback
This SQL will cause a deadlock with itself. Lots more detail is at Aaron Bertrand's blog: http://sqlperformance.com/2013/11/t-sql-queries/single-tx-deadlock
(Apparently I don't have enough reputation to add a comment. So posting as an answer.)
A deadlock requires at least two processes; the only exception is intra-query parallelism deadlocks, which are practically impossible to reproduce on demand.
However, you can simulate a deadlock with two processes running the exact same query (or stored procedure). Some ideas here.
This works reliably from a single session. Use service broker activation to invoke the second thread which is required for a deadlock.
NOTE1: cleanup script not included
NOTE2: service broker has to be enabled:
ALTER DATABASE dbname SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
EXEC sp_executesql N'
CREATE OR ALTER PROCEDURE DeadlockReceive
AS
DECLARE @MessageBody NVARCHAR(1000);
RECEIVE @MessageBody = CAST(message_body AS NVARCHAR(1000)) FROM DeadlockQueue
SELECT @MessageBody
EXEC sp_executesql @MessageBody;'
IF EXISTS (SELECT * FROM sys.services WHERE name = 'DeadlockService') DROP SERVICE DeadlockService
IF OBJECT_ID('DeadlockQueue') IS NOT NULL DROP QUEUE dbo.DeadlockQueue
IF EXISTS (SELECT * FROM sys.service_contracts WHERE name = 'DeadlockContract') DROP CONTRACT DeadlockContract
IF EXISTS (SELECT * FROM sys.service_message_types WHERE name = 'DeadlockMessage') DROP MESSAGE TYPE DeadlockMessage
DROP TABLE IF EXISTS DeadlockTable1 ;
DROP TABLE IF EXISTS DeadlockTable2 ;
CREATE MESSAGE TYPE DeadlockMessage VALIDATION = NONE;
CREATE QUEUE DeadlockQueue WITH STATUS = ON, ACTIVATION (PROCEDURE_NAME = DeadlockReceive, EXECUTE AS SELF, MAX_QUEUE_READERS = 1);
CREATE CONTRACT DeadlockContract AUTHORIZATION dbo (DeadlockMessage SENT BY ANY);
CREATE SERVICE DeadlockService ON QUEUE DeadlockQueue (DeadlockContract);
CREATE TABLE DeadlockTable1 (Value INT); INSERT dbo.DeadlockTable1 SELECT 1;
CREATE TABLE DeadlockTable2 (Value INT); INSERT dbo.DeadlockTable2 SELECT 1;
DECLARE @ch UNIQUEIDENTIFIER
BEGIN DIALOG @ch FROM SERVICE DeadlockService TO SERVICE 'DeadlockService' ON CONTRACT DeadlockContract WITH ENCRYPTION = OFF ;
SEND ON CONVERSATION @ch MESSAGE TYPE DeadlockMessage (N'
set deadlock_priority high;
begin tran;
update DeadlockTable2 set value = 5;
waitfor delay ''00:00:01'';
update DeadlockTable1 set value = 5;
commit')
SET DEADLOCK_PRIORITY LOW
BEGIN TRAN
UPDATE dbo.DeadlockTable1 SET Value = 2
waitfor delay '00:00:01';
UPDATE dbo.DeadlockTable2 SET Value = 2
COMMIT
The simplest way to reproduce it in C# is with Parallel, e.g.:
var List = ... (add some items with same ids)
Parallel.ForEach(List,
(item) =>
{
ReportsDataContext erdc = null;
try
{
using (TransactionScope scope = new TransactionScope())
{
erdc = new ReportsDataContext("....connection....");
var report = erdc.Report.Where(x => x.id == item.id).First();
report.Count++;
erdc.SubmitChanges();
scope.Complete();
}
if (erdc != null)
erdc.Dispose();
}
catch (Exception ex)
{
if (erdc != null)
erdc.Dispose();
ErrorLog.LogEx("multi thread victim", ex);
}
});
More interesting: how do you prevent that error in a real cross-thread situation?
I had difficulty getting Paul's answer to work. I made some small changes to get it working.
The key is to begin and roll back the sp_simulatedeadlock transaction within the procedure itself. I made no changes to the procedure in Paul's answer.
DECLARE @DeadlockCounter INT = NULL
SELECT @DeadlockCounter = 0
WHILE @DeadlockCounter < 10
BEGIN
BEGIN TRY
/* The procedure was leaving uncommitted transactions; I roll back the transaction in the catch block */
BEGIN tran simulate
Exec sp_simulatedeadlock
/* Code you want to deadlock */
SELECT @DeadlockCounter = 10
END TRY
BEGIN CATCH
Rollback tran simulate
PRINT ERROR_MESSAGE()
IF (ERROR_MESSAGE() LIKE '%deadlock%' OR ERROR_NUMBER() = 1205) AND @DeadlockCounter < 10
BEGIN
SELECT @DeadlockCounter +=1
PRINT @DeadlockCounter
IF @DeadlockCounter = 10
BEGIN
RAISERROR('Deadlock limit exceeded or error raised', 16, 10);
END
END
END CATCH
END
If you happen to run into problems with the GO keyword (Incorrect syntax near 'GO') in any of the scripts above, it's important to know that this instruction only works in SQL Server Management Studio/sqlcmd:
The GO keyword is not T-SQL, but a SQL Server Management Studio artifact that allows you to separate the execution of a script file in multiple batches.I.e. when you run a T-SQL script file in SSMS, the statements are run in batches separated by the GO keyword. […]
SQL Server doesn't understand the GO keyword. So if you need an equivalent, you need to separate and run the batches individually on your own.
(from JotaBe's answer to another question)
So if you want to try out e.g. Michael J Swart's answer through DBeaver, for example, you'll have to remove the GOs and run each part of the script on its own.