Sniff a running SQL statement's variable values?

Is there a way to observe the values of variables inside a running query? Let's say I have a loop that has been running for hours, and I want to see the value of a variable that was explicitly set in the query at the start of the loop (@start).
Yes, I can determine this by deductive reasoning, by looking at what the procedure is doing (first value inserted/updated, etc.), but I'm looking for a way to actually dig into a running query.

Try RAISERROR(...) WITH NOWAIT with a severity <= 10.
DECLARE @i INT = 0;
DECLARE @out VARCHAR(4);

WHILE @i < 100
BEGIN
    SET @i = @i + 1;
    SET @out = CAST(@i AS VARCHAR(4));
    RAISERROR (@out, 1, 0) WITH NOWAIT;
    WAITFOR DELAY '00:00:00.250'; -- wait 250 milliseconds
END
Severity levels from 0 through 18 can be specified by any user.
When RAISERROR is run with a severity of 11 or higher in a TRY block, it transfers control to the associated CATCH block. The error is returned to the caller if RAISERROR is run outside the scope of any TRY block, or with a severity of 10 or lower in a TRY block.
@@ERROR is set to 0 by default for messages with a severity from 1 through 10.
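For example, a minimal sketch of that severity boundary: severity 10 and below just streams the message, while severity 11 and above inside a TRY block transfers control to the CATCH block:

BEGIN TRY
    RAISERROR ('severity 10: informational, stays in the TRY block', 10, 1) WITH NOWAIT;
    RAISERROR ('severity 16: control transfers to CATCH', 16, 1);
    PRINT 'never reached';
END TRY
BEGIN CATCH
    PRINT 'caught: ' + ERROR_MESSAGE();
END CATCH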


Service Broker queue "is currently disabled" several times right after upgrading from SQL Server 2012 to 2017, nothing in logs

Yesterday we took our SQL Server 2012 instance, which has been processing messages for several years without any issues (except a periodic performance issue that started several months ago; details below), and upgraded it from SQL Server 2012 to SQL Server 2017+CU29 (KB5010786). We have tried moving the compatibility level from 2012 to 2017, and while it helps with some new issues we're seeing, performance is not great.
But the big thing is: since then, the queue has spontaneously disabled itself twice. It works for minutes or hours, then poof, the queue is disabled. We have logging, and nothing's giving us anything. We had briefly turned on Query Store, but after the queue disabled itself the first time, we went looking for similar issues. We found a thread online from someone with similar problems who blamed Query Store, so we immediately flipped it to read-only. (We had turned it on to try to fix a performance issue we see on Mondays, where it grabs a bad plan and rebooting seems to be the only fix; we don't see it the rest of the week.) Also of note: this workload doesn't update rows, it just inserts new rows into a series of hourly tables.
We're also seeing massive LCK_M_IX lock waits where we didn't before; looking at that next. It's on a part of the code that inserts rows generated by a CLR function into a table (INSERT INTO table SELECT ... FROM clr). Moving from 2012 to 2017 seems to have changed that behavior. Overall it still seems slow, but mostly I'm terrified about the queue spontaneously disabling again.
We are running the same load on two separate servers, so I have the ability to compare things.
The "disabled" message in our logging table appears several times all at the same time (I'm guessing once per thread). Nothing in the SQL Error Log. Interestingly, in some of the rows in the logging table, the message_body is NULL, but has a body in others. But we see no errors for several minutes before it occurred in either.
The service queue "ODS_TargetQueue" is currently disabled.
We're also running an Extended Events session that logs any severity 11+ errors.
All it's showing is:
The service queue "ODS_TargetQueue" is currently disabled.
We are also seeing this sporadically, which we normally don't see unless we're having log backup issues:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
We have also seen the following a handful of times this morning, which seems to be new:
Process ID 57 attempted to unlock a resource it does not own: METADATA: database_id = 7 CONVERSATION_ENDPOINT_RECV($hash = 0x9904a343:0x9c8327f9:0x4b), lockPartitionId = 0. Retry the transaction, because this error may be caused by a timing condition. If the problem persists, contact the database administrator.
The queue:
CREATE QUEUE [svcBroker].[ODS_TargetQueue]
    WITH STATUS = ON,
         RETENTION = OFF,
         ACTIVATION (
             STATUS = ON,
             PROCEDURE_NAME = [svcBroker].[ODS_TargetQueue_Receive],
             MAX_QUEUE_READERS = 30,
             EXECUTE AS OWNER ),
         POISON_MESSAGE_HANDLING (STATUS = ON)
    ON [PRIMARY]
GO
The procedure:
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
GO
CREATE PROCEDURE [svcBroker].[ODS_TargetQueue_Receive]
AS
BEGIN
    SET NOCOUNT ON;

    -- Table variable for received messages.
    DECLARE @receive_table TABLE(
        queuing_order BIGINT,
        conversation_handle UNIQUEIDENTIFIER,
        message_type_name SYSNAME,
        message_body XML);

    -- Cursor for the received message table.
    DECLARE message_cursor CURSOR LOCAL FORWARD_ONLY READ_ONLY
    FOR SELECT
        conversation_handle,
        message_type_name,
        message_body
    FROM @receive_table ORDER BY queuing_order;

    DECLARE @conversation_handle UNIQUEIDENTIFIER;
    DECLARE @message_type SYSNAME;
    DECLARE @message_body XML;

    -- Error variables.
    DECLARE @error_number INT;
    DECLARE @error_message VARCHAR(4000);
    DECLARE @error_severity INT;
    DECLARE @error_state INT;
    DECLARE @error_procedure SYSNAME;
    DECLARE @error_line INT;
    DECLARE @error_dialog VARCHAR(50);

    BEGIN TRY
        WHILE (1 = 1)
        BEGIN
            BEGIN TRANSACTION;

            -- Receive all available messages into the table.
            -- Wait 2 seconds for messages.
            WAITFOR (
                RECEIVE TOP (1000)
                    [queuing_order],
                    [conversation_handle],
                    [message_type_name],
                    CONVERT(XML, [message_body])
                FROM svcBroker.ODS_TargetQueue
                INTO @receive_table
            ), TIMEOUT 2000;

            IF @@ROWCOUNT = 0
            BEGIN
                COMMIT;
                BREAK;
            END
            ELSE
            BEGIN
                OPEN message_cursor;
                WHILE (1 = 1)
                BEGIN
                    FETCH NEXT FROM message_cursor
                    INTO @conversation_handle,
                         @message_type,
                         @message_body;
                    IF (@@FETCH_STATUS != 0) BREAK;

                    -- Process a message.
                    -- If an exception occurs, catch and attempt to recover.
                    BEGIN TRY
                        IF @message_type = 'svcBroker_ods_claim_request'
                        BEGIN
                            EXEC ParseRequestMessages @message_body;
                        END
                        ELSE IF @message_type IN ('svcBroker_EndOfStream', 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
                        BEGIN
                            -- Initiator is signaling end of message stream: end the dialog.
                            END CONVERSATION @conversation_handle;
                        END
                        ELSE IF @message_type = 'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
                        BEGIN
                            -- If the message_type indicates that the message is an error,
                            -- raise the error and end the conversation.
                            WITH XMLNAMESPACES ('http://schemas.microsoft.com/SQL/ServiceBroker/Error' AS ssb)
                            SELECT
                                @error_number = CAST(@message_body AS XML).value('(//ssb:Error/ssb:Code)[1]', 'INT'),
                                @error_message = CAST(@message_body AS XML).value('(//ssb:Error/ssb:Description)[1]', 'VARCHAR(4000)');
                            SET @error_dialog = CAST(@conversation_handle AS VARCHAR(50));
                            RAISERROR('Error in dialog %s: %s (%i)', 16, 1, @error_dialog, @error_message, @error_number);
                            END CONVERSATION @conversation_handle;
                        END
                    END TRY
                    BEGIN CATCH
                        SET @error_number = ERROR_NUMBER();
                        SET @error_message = ERROR_MESSAGE();
                        SET @error_severity = ERROR_SEVERITY();
                        SET @error_state = ERROR_STATE();
                        SET @error_procedure = ERROR_PROCEDURE();
                        SET @error_line = ERROR_LINE();

                        IF XACT_STATE() = -1
                        BEGIN
                            -- The transaction is doomed. Only rollback possible.
                            -- This could disable the queue if done 5 times consecutively!
                            ROLLBACK TRANSACTION;

                            -- Record the error.
                            BEGIN TRANSACTION;
                            INSERT INTO svcBroker.target_processing_errors (
                                error_conversation, [error_number], [error_message], [error_severity],
                                [error_state], [error_procedure], [error_line], [doomed_transaction],
                                [message_body])
                            VALUES (NULL, @error_number, @error_message, @error_severity,
                                @error_state, @error_procedure, @error_line, 1, @message_body);
                            COMMIT;

                            -- For this level of error, it is best to exit the proc
                            -- and give the queue monitor control.
                            -- Breaking to the outer catch will accomplish this.
                            RAISERROR ('Message processing error', 16, 1);
                        END
                        ELSE IF XACT_STATE() = 1
                        BEGIN
                            -- Record the error and continue processing messages.
                            -- The failing message could also be put aside here for later processing.
                            -- Otherwise it will be discarded.
                            INSERT INTO svcBroker.target_processing_errors (
                                error_conversation, [error_number], [error_message], [error_severity],
                                [error_state], [error_procedure], [error_line], [doomed_transaction],
                                [message_body])
                            VALUES (NULL, @error_number, @error_message, @error_severity,
                                @error_state, @error_procedure, @error_line, 0, @message_body);
                        END
                    END CATCH
                END
                CLOSE message_cursor;
                DELETE FROM @receive_table;
            END
            COMMIT;
        END
    END TRY
    BEGIN CATCH
        -- Process the error and exit the proc to give the queue monitor control.
        SET @error_number = ERROR_NUMBER();
        SET @error_message = ERROR_MESSAGE();
        SET @error_severity = ERROR_SEVERITY();
        SET @error_state = ERROR_STATE();
        SET @error_procedure = ERROR_PROCEDURE();
        SET @error_line = ERROR_LINE();

        IF XACT_STATE() = -1
        BEGIN
            -- The transaction is doomed. Only rollback possible.
            -- This could disable the queue if done 5 times consecutively!
            ROLLBACK TRANSACTION;

            -- Record the error.
            BEGIN TRANSACTION;
            INSERT INTO svcBroker.target_processing_errors (
                error_conversation, [error_number], [error_message], [error_severity],
                [error_state], [error_procedure], [error_line], [doomed_transaction],
                [message_body])
            VALUES (NULL, @error_number, @error_message, @error_severity, @error_state, @error_procedure, @error_line, 1, @message_body);
            COMMIT;
        END
        ELSE IF XACT_STATE() = 1
        BEGIN
            -- Record the error and commit the transaction.
            -- Here you could also save anything else you want before exiting.
            INSERT INTO svcBroker.target_processing_errors (
                error_conversation, [error_number], [error_message], [error_severity],
                [error_state], [error_procedure], [error_line], [doomed_transaction],
                [message_body])
            VALUES (NULL, @error_number, @error_message, @error_severity, @error_state, @error_procedure, @error_line, 0, @message_body);
            COMMIT;
        END
    END CATCH
END;
GO
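As a stopgap while tracking down the root cause, you can check whether poison-message handling has disabled the queue and turn it back on. A minimal sketch, using the catalog view and ALTER QUEUE (object names taken from the post):

-- Is the queue still enabled for RECEIVE?
SELECT name, is_receive_enabled, is_enqueue_enabled
FROM sys.service_queues
WHERE name = 'ODS_TargetQueue';

-- Re-enable the queue and its activation after poison-message handling
-- has disabled it (five consecutive rolled-back receives).
ALTER QUEUE [svcBroker].[ODS_TargetQueue]
    WITH STATUS = ON,
         ACTIVATION (STATUS = ON);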

WAITFOR DELAY doesn't act separately within each WHILE loop iteration

I've been teaching myself to use WHILE loops and decided to try making a fun Russian Roulette simulation. That is, a query that will randomly SELECT (or PRINT) up to 6 statements (one for each of the chambers in a revolver), the last of which reads "you die!" and any prior to this reading "you survive."
I did this by first creating a table #Nums which contains the numbers 1-6 in random order. I then have a WHILE loop as follows, with a BREAK if the chamber containing the "bullet" (1) is selected (I know there are simpler ways of selecting a random number, but this is adapted from something else I was playing with before and I had no interest in changing it):
SET NOCOUNT ON

CREATE TABLE #Nums ([Num] INT)

DECLARE @Count INT = 1
DECLARE @Limit INT = 6
DECLARE @Number INT

WHILE @Count <= @Limit
BEGIN
    SET @Number = ROUND(RAND(CONVERT(varbinary, NEWID())) * @Limit, 0, 1) + 1
    IF NOT EXISTS (SELECT [Num] FROM #Nums WHERE [Num] = @Number)
    BEGIN
        INSERT INTO #Nums VALUES (@Number)
        SET @Count += 1
    END
END

DECLARE @Chamber INT

WHILE 1 = 1
BEGIN
    SET @Chamber = (SELECT TOP 1 [Num] FROM #Nums)
    IF @Chamber = 1
    BEGIN
        SELECT 'you die!' [Unlucky...]
        BREAK
    END
    SELECT 'you survive.' [Phew...]
    DELETE FROM #Nums WHERE [Num] = @Chamber
END

DROP TABLE #Nums
This works fine, but the results all appear instantaneously, and I want to add a delay between each one to add a bit of tension.
I tried using WAITFOR DELAY as follows:
WHILE 1 = 1
BEGIN
    WAITFOR DELAY '00:00:03'
    SET @Chamber = (SELECT TOP 1 [Num] FROM #Nums)
    IF @Chamber = 1
    BEGIN
        SELECT 'you die!' [Unlucky...]
        BREAK
    END
    SELECT 'you survive.' [Phew...]
    DELETE FROM #Nums WHERE [Num] = @Chamber
END
I would expect the WAITFOR DELAY to initially cause a 3 second delay, then for the first SELECT statement to be executed and for the text to appear in the results grid, and then, assuming the live chamber was not selected, for there to be another 3 second delay and so on, until the live chamber is selected.
However, before anything appears in my results grid, there is a delay of 3 seconds for each SELECT statement executed, after which all the results appear at the same time.
I tried using PRINT instead of SELECT but encountered the same issue.
Clearly there's something I'm missing here - can anyone shed some light on this?
It's called buffering. The server doesn't want to return a partially full response because, most of the time, there are networking overheads to account for: lots of very small packets are more expensive than a few larger packets¹.
If you use RAISERROR (don't worry about the name; at severity 10 it's just an informational message) you can specify WITH NOWAIT to say "send this immediately". There's no equivalent with PRINT or with returning result sets:
SET NOCOUNT ON

CREATE TABLE #Nums ([Num] INT)

DECLARE @Count INT = 1
DECLARE @Limit INT = 6
DECLARE @Number INT

WHILE @Count <= @Limit
BEGIN
    SET @Number = ROUND(RAND(CONVERT(varbinary, NEWID())) * @Limit, 0, 1) + 1
    IF NOT EXISTS (SELECT [Num] FROM #Nums WHERE [Num] = @Number)
    BEGIN
        INSERT INTO #Nums VALUES (@Number)
        SET @Count += 1
    END
END

DECLARE @Chamber INT

WHILE 1 = 1
BEGIN
    WAITFOR DELAY '00:00:03'
    SET @Chamber = (SELECT TOP 1 [Num] FROM #Nums)
    IF @Chamber = 1
    BEGIN
        RAISERROR('you die! Unlucky...', 10, 1) WITH NOWAIT
        BREAK
    END
    RAISERROR('you survive. Phew...', 10, 1) WITH NOWAIT
    DELETE FROM #Nums WHERE [Num] = @Chamber
END

DROP TABLE #Nums
As Larnu already alluded to in the comments, this isn't a good use of T-SQL.
SQL is a set-oriented language. We try not to write procedural code (do this, then do that, then run this block of code multiple times). We try to give the server as much as possible in a single query and let it work out how to process it. Whilst T-SQL does have language support for loops, we try to avoid them if possible. A set-based version of the roulette is sketched below.
¹ I'm using "packets" very loosely here. Note that the same buffering optimization applies no matter what networking (or no-networking, local-memory) option is actually used to carry the connection between client and server.
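For illustration only (not code from the answer above), a minimal set-based sketch: shuffle the numbers 1-6 once using NEWID() as a random sort key, then return every pull of the trigger up to and including the bullet in a single query:

DECLARE @Chambers TABLE (FireOrder INT, Num INT);

-- Shuffle 1-6 once, set-based, with NEWID() as the random sort key.
INSERT INTO @Chambers (FireOrder, Num)
SELECT ROW_NUMBER() OVER (ORDER BY NEWID()), Num
FROM (VALUES (1), (2), (3), (4), (5), (6)) AS v(Num);

-- One query returns every chamber fired, up to and including the bullet.
SELECT FireOrder,
       CASE WHEN Num = 1 THEN 'you die!' ELSE 'you survive.' END AS Outcome
FROM @Chambers
WHERE FireOrder <= (SELECT FireOrder FROM @Chambers WHERE Num = 1)
ORDER BY FireOrder;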

Purging tables in SQL Server: deleting time

I'm in charge of an OLAP database where I noticed that some cleaning would bring some benefits.
My first analysis found about 500 million rows to delete across approximately 50 tables, which represents roughly 70% of each table.
I've come across solutions that use a temp table: copy the rows to keep, drop the original table, then bring it back. But I have too many dependencies and I don't want to take the risk of going down that path.
So I went for another solution: deleting little by little, so there are no table locks.
This is code I found here on Stack Overflow and have tried to improve, deleting 4,000 rows at a time.
SET STATISTICS TIME OFF

DECLARE @BATCHSIZE INT, @WAITFORVAL VARCHAR(8), @ITERATION INT,
        @TOTALROWS INT, @MAXRUNTIME VARCHAR(8), @BSTOPATMAXTIME BIT, @MSG VARCHAR(500)

SET DEADLOCK_PRIORITY LOW;
SET @BATCHSIZE = 4000
SET @WAITFORVAL = '00:00:10'
SET @MAXRUNTIME = '18:00:00' -- 6 PM
SET @BSTOPATMAXTIME = 1 -- ENFORCE 6 PM STOP TIME
SET @ITERATION = 0 -- LEAVE THIS
SET @TOTALROWS = 0 -- LEAVE THIS

BEGIN TRY
    WHILE @BATCHSIZE > 0
    BEGIN
        -- IF @BSTOPATMAXTIME = 1, THEN WE'LL STOP THE WHOLE JOB AT A SET TIME...
        IF (CONVERT(VARCHAR(8), GETDATE(), 108) >= @MAXRUNTIME AND @BSTOPATMAXTIME = 1) OR @ITERATION > 2000
        BEGIN
            RETURN;
        END

        DELETE TOP (@BATCHSIZE)
        FROM FacY WHERE IdDimX NOT IN (SELECT IdDimX FROM vwX)

        SET @BATCHSIZE = @@ROWCOUNT
        SET @ITERATION = @ITERATION + 1
        SET @TOTALROWS = @TOTALROWS + @BATCHSIZE
        SET @MSG = 'Iteration: ' + CAST(@ITERATION AS VARCHAR) + ' Total deletes: ' + CAST(@TOTALROWS AS VARCHAR)
        RAISERROR (@MSG, 0, 1) WITH NOWAIT
        --COMMIT TRANSACTION;
    END
END TRY
BEGIN CATCH
    IF @@ERROR <> 0
       AND @@TRANCOUNT > 0
    BEGIN
        PRINT 'An error occurred. The database update failed.';
        ROLLBACK TRANSACTION;
    END;
END CATCH;
This took approximately 30 minutes to an hour to delete 4 million rows.
I then tried deleting 100,000 rows at a time, and it did so in about a minute; 1 million rows took 5-6 minutes.
Then I went for 10 million rows and it took 15 minutes (but the 50 GB log was about 60% full, so I think that's the limit).
So now I'm wondering: isn't it better to delete in big blocks after all, since small batches take so long?
And what I don't understand is why it takes less time to delete in big blocks.
Thanks for your help.
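One likely reason the small batches were slow is that the NOT IN (SELECT IdDimX FROM vwX) predicate is re-evaluated on every iteration, so each 4,000-row delete pays the full cost of that lookup again; bigger batches amortize it. A hedged sketch of a common alternative: materialize the keys to delete once, then batch-delete by joining on them (this assumes FacY has a single-column key, hypothetically named IdFacY here):

-- IdFacY is a hypothetical key column; substitute FacY's real primary key.
SELECT IdFacY
INTO #ToDelete
FROM FacY
WHERE IdDimX NOT IN (SELECT IdDimX FROM vwX);

CREATE CLUSTERED INDEX IX_ToDelete ON #ToDelete (IdFacY);

DECLARE @Batch INT = 100000, @Rows INT = 1;

WHILE @Rows > 0
BEGIN
    -- Each batch commits separately, keeping individual transactions small.
    DELETE TOP (@Batch) f
    FROM FacY AS f
    INNER JOIN #ToDelete AS d ON d.IdFacY = f.IdFacY;

    SET @Rows = @@ROWCOUNT;
END

DROP TABLE #ToDelete;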

Is this SQL update guaranteed to be atomic?

I have the following SQL:
UPDATE Customer SET Count = 1 WHERE ID = 1 AND Count = 0
SELECT @@ROWCOUNT
I need to know if this is guaranteed to be atomic.
If 2 users try this simultaneously, will only one succeed and get a return value of 1? Do I need to use a transaction or something else in order to guarantee this?
The goal is to get a unique 'Count' for the customer. Collisions in this system will almost never happen, so I am not concerned with the performance if a user has to query again (and again) to get a unique Count.
EDIT:
The goal is to not use a transaction if it is not needed. Also, this logic is run very infrequently (up to 100 times per day), so I wanted to keep it as simple as possible.
It may depend on the SQL engine you are using, but for most the answer is yes. I guess you are implementing a lock.
Using SQL Server (v11.0.6020), this is indeed an atomic operation, as best as I can determine.
I wrote some test stored procedures to exercise this logic:
-- Attempt to update a Customer row with a new Count. Returns
-- the current count (used as a customer order number) and a bit
-- which indicates success or failure. If @Success is 0, re-run
-- the query and try again.
CREATE PROCEDURE [dbo].[sp_TestUpdate]
(
    @Count INT OUTPUT,
    @Success BIT OUTPUT
)
AS
BEGIN
    DECLARE @NextCount INT
    SELECT @Count = [Count] FROM Customer WHERE ID = 1
    SET @NextCount = @Count + 1
    UPDATE Customer SET [Count] = @NextCount WHERE ID = 1 AND [Count] = @Count
    SET @Success = @@ROWCOUNT
END
And:
-- Loop (many times) trying to get a number and insert it into another
-- table. Execute this loop concurrently in several different windows
-- using SSMS.
CREATE PROCEDURE [dbo].[sp_TestLoop]
AS
BEGIN
    DECLARE @Iterations INT
    DECLARE @Counter INT
    DECLARE @Count INT
    DECLARE @Success BIT

    SET @Iterations = 40000
    SET @Counter = 0

    WHILE (@Counter < @Iterations)
    BEGIN
        SET @Counter = @Counter + 1
        EXEC sp_TestUpdate @Count = @Count OUTPUT, @Success = @Success OUTPUT
        IF (@Success = 1)
        BEGIN
            INSERT INTO TestImage (ImageNumber) VALUES (@Count)
        END
    END
END
This code ran, creating unique, sequential ImageNumber values in the TestImage table. This proves that the above SQL update call is indeed atomic. Neither procedure guaranteed that a given update succeeded, but they did guarantee that no duplicates were created and no numbers were skipped.
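A side note not covered above: a single UPDATE can increment and return the new value atomically on its own via the OUTPUT clause, which avoids the read-then-update retry loop entirely. A minimal sketch:

-- Atomically increment Count and capture the new value in one statement.
DECLARE @NewCount TABLE (NewCount INT);

UPDATE Customer
SET [Count] = [Count] + 1
OUTPUT inserted.[Count] INTO @NewCount
WHERE ID = 1;

SELECT NewCount FROM @NewCount;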

SQL locking for batch updates

I'm using SQL Server 2008 R2 and C#.
I'm inserting a batch of rows into SQL Server with a column Status set to the value 'P'.
Afterwards, I check how many rows already have the status 'R', and if there are fewer than 20, I update the row to status 'R'.
While inserting and updating, more rows are being added and updated all the time.
I've tried transactions and locking in multiple ways, but still: at the moment a new batch is activated, there are more than 20 rows with status 'R' for a few milliseconds. After those few milliseconds it stabilizes back to 20.
Does anyone have an idea why the locking doesn't seem to hold during these bursts?
Sample code, reasons, whatever you can share on this subject can be useful!
Thanks!
Following is my stored proc:
DECLARE @return INT -- INT rather than BIT: BIT cannot represent the -1 sentinel
SET @return = -1

DECLARE @previousValue INT

--insert the started orchestration
INSERT INTO torchestrationcontroller WITH (ROWLOCK)
    ([flowname], [orchestrationid], [status])
VALUES (@FlowName, @OrchestrationID, 'P')

--check settings
DECLARE @maxRunning INT
SELECT @maxRunning = maxinstances
FROM torchestrationflows WITH (NOLOCK)
WHERE [flowname] = @FlowName

--if maxinstances is 0, then you can pass; no limitation here
IF (@maxRunning = 0)
BEGIN
    SET @return = 1
    UPDATE torchestrationcontroller WITH (ROWLOCK)
    SET [status] = 'R'
    WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- BEGIN
RETRY: -- Label RETRY
BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION T1
    --else: check how many orchestrations are now running
    --start lock table
    DECLARE @currentRunning INT
    SELECT @currentRunning = COUNT(*)
    FROM torchestrationcontroller WITH (TABLOCKX) -- Exclusive lock, held until the end of the transaction, on all data processed by the statement
    WHERE [flowname] = @FlowName
      AND [status] = 'R'
    --CASE
    IF (@currentRunning < @maxRunning)
    BEGIN
        -- fewer orchestrations are running than allowed
        SET @return = 1
        UPDATE torchestrationcontroller WITH (TABLOCKX)
        SET [status] = 'R'
        WHERE [orchestrationid] = @OrchestrationID
    END
    ELSE
        -- as many or more orchestrations are running than allowed
        SET @return = 0
    --end lock table
    SELECT @return
    COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
    --PRINT 'Rollback Transaction'
    ROLLBACK TRANSACTION
    IF ERROR_NUMBER() = 1205 -- Deadlock Error Number
    BEGIN
        WAITFOR DELAY '00:00:00.05' -- Wait for 50 ms
        GOTO RETRY -- Go to Label RETRY
    END
END CATCH
I've been able to fix it by setting the isolation level to SERIALIZABLE plus a transaction in the three stored procedures (two of which I didn't mention because I didn't think they were relevant to this problem). Apparently it was the combination of multiple stored procs interfering with each other.
If you know a better way to fix it that would give better performance, please let me know!