Is transaction isolation level 100% reliable in SQL Server?

I'm doing some tests with isolation levels in SQL Server.
First I create a table called test with this structure:
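(The original table definition isn't included here; judging from the INSERT statements and the primary-key error below, it is presumably something like this:)
create table test
(
prefix varchar(10) not null, -- assumed type
sequence_no int not null,
thread int not null,
constraint PK_test primary key (prefix, sequence_no) -- implied by the duplicate-key error (AAA, 2402)
);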
Then I run the test code with 2 threads at the same time:
use [test-db];
go
declare @count int = 0;
while @count < 5000
begin
set transaction isolation level read committed;
begin transaction;
declare @max int;
select @max = coalesce(max(sequence_no), 0) from test;
print @max;
insert into test (prefix, sequence_no, thread) values ('AAA', @max+1, 1);
commit transaction;
set @count = @count + 1;
end;
The second thread just changes the thread number:
use [test-db];
go
declare @count int = 0;
while @count < 5000
begin
set transaction isolation level read committed;
begin transaction;
declare @max int;
select @max = coalesce(max(sequence_no), 0) from test;
print @max;
insert into test (prefix, sequence_no, thread) values ('AAA', @max+1, 2);
commit transaction;
set @count = @count + 1;
end;
Under read committed, I expected the read of the max sequence_no to be blocked while the other thread's transaction was still uncommitted, which would mean the two threads could never generate the same sequence_no.
But it often gives me an error:
Violation of PRIMARY KEY constraint 'PK_test'. Cannot insert duplicate key in object 'dbo.test'. The duplicate key value is (AAA, 2402).
I tested it again under repeatable read, and the result is the same.
Can someone explain why the read inside the transaction doesn't prevent this?
Does it mean that the isolation level isn't 100% reliable?

Under READ COMMITTED, shared locks are released as soon as each statement completes, so both threads can read the same MAX(sequence_no) before either of them inserts; REPEATABLE READ holds shared locks on the rows that were read, but it does not block new rows (phantoms), so the same race still happens. You need SERIALIZABLE isolation, and an UPDLOCK hint is also required to avoid deadlocks:
SELECT @max = coalesce(max(sequence_no), 0)
FROM dbo.test WITH (UPDLOCK, SERIALIZABLE);
This is awful for concurrency, so you should really use an IDENTITY column or a SEQUENCE instead.
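For example, a sketch of the sequence-based alternative (the sequence name is illustrative, not from the original answer):
-- Let the database hand out the numbers instead of computing MAX()+1 inside the transaction
CREATE SEQUENCE dbo.TestSequenceNo START WITH 1 INCREMENT BY 1;
GO
INSERT INTO dbo.test (prefix, sequence_no, thread)
VALUES ('AAA', NEXT VALUE FOR dbo.TestSequenceNo, 1);
An IDENTITY column on sequence_no achieves the same thing without the extra object, as long as a single numbering series across all prefixes is acceptable.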

Related

Transactions not being committed although I have a "Commit Transaction" statement

I'm using SQL Azure and trying to do a conditional delete in batches on a large table. Sample:
DECLARE
@LargestKeyProcessed BIGINT = 1,
@NextBatchMax BIGINT,
@msg varchar(max) = '';
WHILE (@LargestKeyProcessed <= 1000000)
BEGIN
Begin Transaction
SET @NextBatchMax = @LargestKeyProcessed + 50000;
DELETE From mytable
WHERE Id > @LargestKeyProcessed AND Id <= @NextBatchMax And some logic
SET @LargestKeyProcessed = @NextBatchMax;
set @msg = '' + @LargestKeyProcessed;
RAISERROR(@msg, 0, 1) WITH NOWAIT
Commit Transaction
END
After the command executes successfully I close the tab, but SSMS says there are uncommitted transactions, even though the commit statement is in every iteration. Also, the database size seems to remain the same.
I kindly seek your support in explaining why this happens.
Thank you very much.
I think you can try adding SET IMPLICIT_TRANSACTIONS OFF to the SQL, as follows, to see if it solves your issue.
DECLARE
@LargestKeyProcessed BIGINT = 1,
@NextBatchMax BIGINT,
@msg varchar(max) = '';
WHILE (@LargestKeyProcessed <= 1000000)
BEGIN
SET IMPLICIT_TRANSACTIONS OFF
Begin Transaction
SET @NextBatchMax = @LargestKeyProcessed + 50000;
DELETE From mytable
WHERE Id > @LargestKeyProcessed AND Id <= @NextBatchMax And some logic
SET @LargestKeyProcessed = @NextBatchMax;
set @msg = '' + @LargestKeyProcessed;
RAISERROR(@msg, 0, 1) WITH NOWAIT
Commit Transaction
END
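As a quick sanity check, you can ask the session whether a transaction is still open before closing the tab; something like this should do it:
-- Should return 0 if every BEGIN TRANSACTION was matched by a COMMIT
SELECT @@TRANCOUNT AS OpenTransactionCount;
-- Reports the oldest active transaction in the current database, if any
DBCC OPENTRAN;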

How to log errors even if the transaction is rolled back?

Let's say we have the following commands:
SET XACT_ABORT OFF;
SET IMPLICIT_TRANSACTIONS OFF
DECLARE @index int
SET @index = 4;
DECLARE @errorCount int
SET @errorCount = 0;
BEGIN TRANSACTION
WHILE @index > 0
BEGIN
SAVE TRANSACTION Foo;
BEGIN TRY
-- commands to execute...
INSERT INTO AppDb.dbo.Customers VALUES('Jalal', '1990-03-02');
-- make a problem
IF @index = 3
INSERT INTO AppDb.dbo.Customers VALUES('Jalal', '9999-99-99');
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION Foo; -- I want to keep the previous log entries, but this doesn't work! :(
INSERT INTO AppDb.dbo.LogScripts VALUES(NULL, 'error', 'Customers', suser_name());
SET @errorCount = @errorCount + 1;
END CATCH
SET @index = @index - 1;
END
IF @errorCount > 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
I want to execute a batch, keep all errors in a log, and then, if no error occurred, commit all the changes. How can I implement this in SQL Server?
The transaction is tied to the connection, and as such, all writes will be rolled back on the outer ROLLBACK TRANSACTION (irrespective of the nested savepoints).
What you can do is log the errors to an in-memory structure, like a table variable, and then, after committing or rolling back the outer transaction, insert the collected log entries.
I've simplified your Logs and Customers tables for the purpose of brevity:
CREATE TABLE [dbo].[Logs](
[Description] [nvarchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
CREATE TABLE [dbo].[Customers](
[ID] [int] NOT NULL,
[Name] [nvarchar](50) NULL
);
GO
And then you can track the logs in the table variable:
SET XACT_ABORT OFF;
SET IMPLICIT_TRANSACTIONS OFF
GO
DECLARE @index int;
SET @index = 4;
DECLARE @errorCount int
SET @errorCount = 0;
-- In-memory storage to accumulate logs, outside of the transaction
DECLARE @TempLogs AS TABLE (Description NVARCHAR(MAX));
BEGIN TRANSACTION
WHILE @index > 0
BEGIN
-- SAVE TRANSACTION Foo; As per the commentary below, a savepoint is futile here
BEGIN TRY
-- commands to execute...
INSERT INTO Customers VALUES(1, 'Jalal');
-- make a problem
IF @index = 3
INSERT INTO Customers VALUES(NULL, 'Broken');
END TRY
BEGIN CATCH
-- ROLLBACK TRANSACTION Foo; -- Would roll back to the savepoint
INSERT INTO @TempLogs(Description)
VALUES ('Something bad happened on index ' + CAST(@index AS VARCHAR(50)));
SET @errorCount = @errorCount + 1;
END CATCH
SET @index = @index - 1;
END
IF @errorCount > 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
-- Finally, do the actual insertion of logs, outside the boundaries of the transaction.
INSERT INTO dbo.Logs(Description)
SELECT Description FROM @TempLogs;
One thing to note is that this is quite an expensive way to process data (i.e. attempt to insert all data, and then roll back a batch if there were any problems encountered). An alternative here would be to validate all the data (and return and report errors) before attempting to insert any data.
Also, in the example above, the Savepoint serves no real purpose, as even 'successful' Customer inserts will be eventually rolled back if any errors were detected for the batch.
SqlFiddle here - The loop is completed, and despite 3 customers being inserted, the ROLLBACK TRANSACTION removes all successfully inserted customers. However, the log is still written, as the table variable is not subject to the outer transaction.

How to get actual exception in SQL transaction when xact_abort is ON

How can I preserve/retrieve the error state or return the actual error when using xact_abort ON?
Currently, when I execute this stored procedure with an outer transaction already initiated:
begin tran
exec TestFK 2
I get this generic error, which hides the actual error:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
But when I execute it without an external transaction:
exec TestFK 2
I get the proper error.
The INSERT statement conflicted with the FOREIGN KEY constraint "FK__t2__a__3B783965". The conflict occurred in database "XXX", table "dbo.t1", column 'a'.
Setup Code
ALTER procedure [dbo].[TestFK]
@Id int
as
begin
SET NOCOUNT ON
SET xact_abort ON
DECLARE @trancount INT
SET @trancount = @@TRANCOUNT
begin try
IF @trancount = 0
BEGIN TRANSACTION
INSERT INTO t2 VALUES (@Id); -- Foreign key error for @Id = 2
IF @trancount = 0
COMMIT TRANSACTION
end try
begin catch
IF Xact_state() <> 0 AND @trancount = 0
ROLLBACK TRANSACTION
Exec uspInsErrorInfo -- Here I want to preserve the Error State somehow
end catch
END
CREATE TABLE t1 (a INT NOT NULL PRIMARY KEY);
CREATE TABLE t2 (a INT NOT NULL REFERENCES t1(a));
INSERT INTO t1 VALUES (1);
INSERT INTO t1 VALUES (3);
INSERT INTO t1 VALUES (4);
INSERT INTO t1 VALUES (6);
So, the solution I used was to simply remove the try/catch block in this scenario, since:
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a
run-time error, the entire transaction is terminated and rolled back.
ALTER procedure [dbo].[TestFK]
@Id int
as
begin
SET NOCOUNT ON
SET xact_abort ON
DECLARE @trancount INT
SET @trancount = @@TRANCOUNT
IF @trancount = 0
BEGIN TRANSACTION
INSERT INTO t2 VALUES (@Id); -- Foreign key error for @Id = 2
-- + some other statements
IF @trancount = 0
COMMIT TRANSACTION
END
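If you do want to keep the TRY/CATCH, a rough alternative (a sketch only, not what the solution above does) is to re-raise the original error with THROW (SQL Server 2012+) instead of writing to the log while the transaction is doomed:
begin catch
IF Xact_state() <> 0 AND @trancount = 0
ROLLBACK TRANSACTION -- only roll back a transaction this procedure started
-- Re-raise the original error (the FK violation) to the caller; the generic
-- "cannot be committed" message only appears when you try to write to a table
-- while the transaction is doomed
;THROW;
end catch
Any logging (for example via uspInsErrorInfo) then has to happen after the caller rolls back, since a doomed transaction will reject the log INSERT.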

SQL Transactions with nolocks and TRANSACTION ISOLATION LEVEL READ COMMITTED

I have a stored procedure like this.
CREATE PROCEDURE [dbo].[mysp]
AS
SET NOCOUNT ON
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN
BEGIN TRAN
DECLARE @TrackingCode INT
SELECT @TrackingCode = DefaultsData
FROM dbo.Defaults
WHERE DefaultsID=77
UPDATE dbo.Defaults
SET DefaultsData = @TrackingCode+1
WHERE DefaultsID=77
SELECT @TrackingCode
COMMIT TRAN
END
Assuming we execute this stored procedure twice at the same time (concurrently), what will the returned @TrackingCode value be in each case? What if I used NOLOCK on the SELECT statement in the stored proc?
1) If your intention is to generate unique tracking codes then it's a bad idea. You could do this simple test:
CREATE TABLE dbo.Defaults (
DefaultsData INT,
DefaultsID INT PRIMARY KEY
);
GO
INSERT INTO dbo.Defaults VALUES (21, 77);
GO
CREATE PROCEDURE [dbo].[mysp]
AS
SET NOCOUNT ON
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN
BEGIN TRAN
DECLARE @TrackingCode INT
SELECT @TrackingCode = DefaultsData
FROM dbo.Defaults
WHERE DefaultsID=77
WAITFOR DELAY '00:00:05' -- 5 seconds delay
UPDATE dbo.Defaults
SET DefaultsData = @TrackingCode+1
WHERE DefaultsID=77
SELECT @TrackingCode -- Maybe it should return the new code
COMMIT TRAN
END;
GO
and then open a new window in SQL Server Management Studio (Ctrl + N) and execute (F5)
EXEC [dbo].[mysp]
and also open a second new window and execute (in less than 5 seconds)
EXEC [dbo].[mysp]
In this case, you will get the same value. NOLOCK is a bad idea (generally speaking) and in this case doesn't help you.
2) If (I repeat myself - sorry) your intention is to generate unique tracking codes, then you could use
ALTER PROCEDURE [dbo].[mysp]
@TrackingCode INT OUTPUT
AS
BEGIN
SET NOCOUNT ON;
UPDATE dbo.Defaults
SET @TrackingCode = DefaultsData = DefaultsData + 1 -- It generates the new code
WHERE DefaultsID=77;
END;
GO
or you could use sequences (SQL Server 2012+).
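A sequence-based version might look roughly like this (sketch only; the sequence name and start value are made up):
CREATE SEQUENCE dbo.TrackingCodeSeq START WITH 22 INCREMENT BY 1;
GO
ALTER PROCEDURE [dbo].[mysp]
@TrackingCode INT OUTPUT
AS
BEGIN
SET NOCOUNT ON;
-- Each caller gets its own value, with no shared row to read and update
SET @TrackingCode = NEXT VALUE FOR dbo.TrackingCodeSeq;
END;
GO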

Is it better to error and catch or check for existing?

I have two tables defined as
CREATE TABLE [dbo].[Foo](
FOO_GUID uniqueidentifier NOT NULL PRIMARY KEY CLUSTERED,
SECOND_COLUMN nvarchar(10) NOT NULL,
THIRD_COLUMN nvarchar(10) NOT NULL)
CREATE TABLE [dbo].[FooChanged](
[FooGuid] uniqueidentifier NOT NULL PRIMARY KEY CLUSTERED,
[IsNewRecord] bit NOT NULL)
And a trigger
CREATE TRIGGER dbo.[trg_FooChanged]
ON [dbo].[Foo]
AFTER INSERT,UPDATE
NOT FOR REPLICATION
AS
BEGIN
SET NOCOUNT ON;
DECLARE #isNewRecord as bit;
IF EXISTS(SELECT * FROM DELETED)
BEGIN
--We only want to make a record if the guid or one other column in Foo changed.
IF NOT(UPDATE(FOO_GUID) OR UPDATE(SECOND_COLUMN))
RETURN;
SET #isNewRecord = 0
END
ELSE
SET #isNewRecord = 1;
insert into FooChanged(FooGuid, IsNewRecord) SELECT FOO_GUID, #isNewRecord from INSERTED
END
The following simple test script fails on the last batch with a primary key constraint violation (as expected):
INSERT INTO [dbo].[Foo] ([FOO_GUID],[SECOND_COLUMN],[THIRD_COLUMN]) VALUES (cast(0x1 as uniqueidentifier), '1', '1')
GO
INSERT INTO [dbo].[Foo]([FOO_GUID],[SECOND_COLUMN],[THIRD_COLUMN]) VALUES (cast(0x2 as uniqueidentifier), '2', '2')
GO
UPDATE Foo SET THIRD_COLUMN = '1a' WHERE FOO_GUID = cast(0x1 as uniqueidentifier)
GO
UPDATE Foo SET SECOND_COLUMN = '1a' WHERE FOO_GUID = cast(0x1 as uniqueidentifier)
GO
My question is: what is the "proper" way to keep that error from propagating to the user and from interfering with any transaction the user may have open (potentially with settings like XACT_ABORT ON)?
The two options I see are either to check before inserting:
declare @guid uniqueidentifier
select @guid = FOO_GUID from INSERTED
BEGIN TRANSACTION
if NOT EXISTS(select * from FooChanged WITH (UPDLOCK, HOLDLOCK) where FooGuid = @guid)
insert into FooChanged(FooGuid, IsNewRecord) SELECT @guid, @isNewRecord from INSERTED
COMMIT TRANSACTION
But I would need to lock the table to prevent race conditions, which I think will cause performance problems on busy tables,
or to catch the error in a TRY/CATCH:
BEGIN TRY
insert into FooChanged(FooGuid, IsNewRecord) SELECT @guid, @isNewRecord from INSERTED
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
if(ERROR_NUMBER() != 2627)
RAISERROR (@ErrorMessage, -- Message text.
@ErrorSeverity, -- Severity.
@ErrorState -- State.
);
END CATCH
But I am concerned about code running inside an XACT_ABORT transaction getting its XACT_STATE mucked with.
What is the correct approach to use?
What is the correct approach to use?
Check for existence, then insert or update accordingly.
Error handling is expensive - and once an error is thrown there's no way to go back and do something different.
EDIT
Honestly, I may be wrong on the "expensive" part (and don't have any evidence to back it up) as I'm not a SQL expert, but the principle of not being able to go back still applies.
Here's an article from Aaron Bertrand, who is a much better source on SQL than I am. At first glance it seems to indicate that neither approach is significantly better than the other performance-wise.
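In the trigger itself, the check-then-insert (or update) approach could look roughly like this; it is only a sketch, and it also handles multi-row statements, which the single-variable version in the question does not:
-- Update rows that already exist in FooChanged (or leave them alone, if that fits the requirements better)
UPDATE fc
SET fc.IsNewRecord = @isNewRecord
FROM FooChanged fc
JOIN INSERTED i ON i.FOO_GUID = fc.FooGuid;
-- Insert only the rows that are not there yet, so the PK violation never happens
INSERT INTO FooChanged (FooGuid, IsNewRecord)
SELECT i.FOO_GUID, @isNewRecord
FROM INSERTED i
WHERE NOT EXISTS (SELECT 1 FROM FooChanged fc WHERE fc.FooGuid = i.FOO_GUID);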