Transaction not rolling back with PK violation - sql

As far as I know, if we start a transaction (begin tran / commit tran), it should either complete entirely or not at all. But when I execute the T-SQL code below, the first insert statement works while the second doesn't.
Background: table A has two columns (ID primary key, Name varchar), and it already has 3 rows of data (IDs 1, 2, 3).
begin tran
insert into A values (4, 'Tim') -- this works
insert into A values (2, 'Tom') -- this doesn't work because it violates the PK constraint
commit tran
select * from A
Here is my question: since the 2nd insert statement violates the PK constraint and cannot be committed, I was thinking that everything inside this transaction should be rolled back, because the transaction should succeed or fail as one unit. But in fact, 'Tim' is added to A while 'Tom' is not. Doesn't this violate the atomicity of the transaction?

It depends on how you handle errors in your transaction. If you catch them, or if you ignore them (it seems you are ignoring them), then the transaction will continue and will commit.
Any decent "transaction manager" of a programming language/framework:
Will stop the execution of the code and will roll the transaction back, or
Will doom the transaction, so a commit will never be carried out. It will be replaced by a roll back instead.
If you run these commands at the SQL prompt, you are probably not using any transaction manager, and that's why the error is effectively being ignored and execution carries on as if everything were fine.
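For example, a minimal sketch of catching the error yourself in plain T-SQL (assuming the same table A as above) could look like this:
begin try
    begin tran
    insert into A values (4, 'Tim')
    insert into A values (2, 'Tom') -- PK violation raises an error here
    commit tran
end try
begin catch
    if @@TRANCOUNT > 0
        rollback tran -- undo the first insert as well
end catch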

That's not how transactions work in SQL Server. If you have a "statement-terminating" error, SQL Server just continues to the next statement. If you have a "batch-terminating" error, the transaction is aborted and rolled back.
Now I don't want that behaviour ever.
So the first line I write in every stored procedure is:
SET XACT_ABORT ON;
That tells SQL Server that "statement-terminating" errors should be automatically promoted to "batch-terminating" errors. Add that statement to the beginning of your script and you'll see that it now works as expected.
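Applied to the script in the question, a minimal sketch would be:
SET XACT_ABORT ON;
begin tran
insert into A values (4, 'Tim')
insert into A values (2, 'Tom') -- PK violation now aborts the batch and rolls back the whole transaction
commit tran
go
select * from A -- run as a new batch; only the original rows 1, 2, 3 remain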

Related

SQL Server Prevent Duplication

The database contains a table with a primary key made up of two columns: ID and Version. When the application attempts to save to the database it uses a command that looks similar to this:
BEGIN TRANSACTION
INSERT INTO [dbo].[FirstTable] ([ID],[VERSION]) VALUES (41,19)
INSERT INTO [dbo].[SecondTable] ([ID],[VERSION]) VALUES (41,19)
COMMIT TRANSACTION
SecondTable has a foreign key constraint to FirstTable and matching column names.
If two computers execute this statement at the same time, does the first computer lock FirstTable while the second computer waits, and once the wait is over does the second computer find that its first insert statement fails, throw an error, and skip the second statement? Is it possible for the second insert to run successfully on both computers?
What is the safest way to ensure that the second computer does not write anything and returns an error to the calling application?
Since these are transactions, all operations must be successful otherwise everything gets rolled back. Whichever computer completes the first statement of the transaction first will ultimately complete its transaction. The other computer will attempt to insert a duplicate primary key and fail, causing its transaction to roll back.
Ultimately there will be one row added to both tables, via only one of the computers.
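If you also want to make sure the second computer writes nothing and returns an error to the calling application, one sketch (assuming SQL Server 2012 or later for THROW) is to combine SET XACT_ABORT ON with TRY/CATCH so the losing transaction rolls back cleanly and the duplicate-key error is re-raised:
SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRANSACTION
    INSERT INTO [dbo].[FirstTable] ([ID],[VERSION]) VALUES (41,19)
    INSERT INTO [dbo].[SecondTable] ([ID],[VERSION]) VALUES (41,19)
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW; -- report the duplicate-key error back to the calling application
END CATCH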
Of course, if there are 2 statements then one of them is going to be executed and the other one will throw an error. In order to avoid this, your best bet would be to use either "if exists" or "if not exists".
What this does is basically check whether the data is already present in the table: if not, you insert; otherwise you just select.
Take a look at the flow shown below:
begin tran
if not exists (select * from Table with (updlock, rowlock, holdlock) where ...)
    /* insert */
else
    /* update */
commit /* locks are released here */
The transaction does not roll back automatically when an error is encountered. It needs to have
set xact_abort on
Found this in this question
SQL Server - transactions roll back on error?

SQL Server trigger insert causes lock until commit

Having trouble with locks on a related table via triggers.
I'm inserting into table tCatalog (this has a trigger to simply insert a record in another table tSearchQueue). The insert to tCatalog is inside a transaction that has a lot of other functions that sometimes takes several seconds. However, the tSearchQueue table is locked until the transaction can be committed. Is there a way to avoid this?
INSERT INTO [dbo].[tSearchQueue] (Processed, SQL, sys_CreateDate)
SELECT 0, 'Test ' + cast(CatalogID as varchar(10)), getdate()
FROM inserted
BEGIN TRAN t1
DECLARE @catalogid int
INSERT INTO tCatalog (ProgramID, sys_CreatedBy, ItemNumber, Description, UOMID)
VALUES (233, 1263, 'brian catalog4', 'brian catalog4', 416)
SELECT @catalogid = SCOPE_IDENTITY()
INSERT INTO tCustomAttributeCatalog (CatalogID, CustomAttributeID, DefaultValue, DefaultValueRead, sys_CreatedBy)
VALUES (@catalogid, 299, 'No', 'No', 1263)
INSERT INTO tCustomAttributeCatalog (CatalogID, CustomAttributeID, DefaultValue, DefaultValueRead, sys_CreatedBy)
VALUES (@catalogid, 300, null, null, 1263)
COMMIT TRAN t1
It looks like you have a background process which wants to be notified of changes so it can do some sort of re-indexing. If that's the case it's not necessarily wrong that it is blocked, since if the transaction does not commit then it shouldn't index it anyway.
So the sequence is:
Begin transaction
insert tCatalog
trigger inserts tSearchQueue
Insert some other tables
Perform a long-running operation
Commit transaction.
And the problem is another process wants to read tSearchQueue but cannot since it is locked.
Option 1: Perform the background operation in batches.
If the process is getting behind because it has too few opportunities to read the table, then maybe reading multiple rows at a time would solve the issue. I.e. each time it gets a chance to read the queue it should read many rows, process them all at once, and then mark them all as done together (or delete them as the case may be).
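A minimal sketch of that batched read, assuming tSearchQueue has an ID identity column alongside the Processed flag shown in the trigger, could claim and return a batch in a single statement:
UPDATE TOP (100) q
SET Processed = 1
OUTPUT inserted.ID, inserted.[SQL] -- hand the claimed rows to the re-indexing step
FROM tSearchQueue q
WHERE q.Processed = 0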
Option 2: Perform the long running operation first, if possible:
begin transaction
perform long-running operation
insert tCatalog
trigger inserts tSearchQueue
insert some other tables
commit transaction
Other process now finds tSearchQueue is locked only for a short time.
Note that if the long-running operation is a file copy, these can be included in the transaction using CopyFileTransacted, or the copy can be rolled back in a "catch" statement if the operation fails.
Option 3: Background process avoids the lock
If the other process is trying primarily to read the table, then snapshot isolation may solve your problem. This will return only committed rows, as they existed at the point in time. Combined with row-level locking this may solve your problem.
Alternatively the background process might read with the NOLOCK hint (dirty reads). This may result in reading data from transactions which are later rolled back. However if the data is being validated in a separate step (e.g. you are just writing the identifier for the object which needs reindexing) this is not necessarily a problem. If the indexing process can cope with entries which no longer exist, or which haven't actually changed, then having spurious reads won't matter.
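As a sketch, the snapshot route is a one-time database setting and the dirty-read route is just a hint in the reader's query (the database name, ID column and Processed flag are assumptions here):
-- let readers see the last committed version of each row instead of blocking
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
-- or: dirty read of the queue (may see rows from transactions that later roll back)
SELECT ID, [SQL]
FROM tSearchQueue WITH (NOLOCK)
WHERE Processed = 0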

How to handle a transaction in Sybase ASE?

I have to insert records into a table in a test environment, and I know that this will throw a primary key constraint violation. Because these scripts will be run independently by other people when the time comes to migrate from one environment to another, I want my script to roll back whenever it encounters a problem.
My script is as follows:
use my_db
go
--
-- No rows of the given "my_code" must exist, as they shall be replaced.
--
if exists (
select 1
from my_table
where my_code like 'my_code'
) delete from my_table
where my_code like 'my_code'
--
-- All rows shall be inserted altogether, or rejected altogether at once.
--
begin tran a
insert into my_table (field1, field2, field3) values (value1, value2, value3)
if @@error != 0 or @@transtate != 0
begin
rollback tran a
end
insert into my_table (field1, field2, field3) values (value1, value2, value3)
if @@error != 0 or @@transtate != 0
begin
rollback tran a
end
commit tran a
go
I have tried what I could get from these posts:
Error Handling in Sybase
How to continue executing rest of while loop even if an exception occurs in sybase?
Transaction Handling in Sybase
I have tried verifying only @@error, only @@transtate, and both, and I always get the message box reporting the error, and no records are rolled back; that is, the passing rows are still inserted.
I wonder whether there is a way to make Sybase handle the transaction as expected, or else simply make sure it doesn't autocommit rows as they are inserted, the way SQL Server allows. I mean, SQL Server inherits from Sybase, after all, so I don't know whether this behaviour was already in Sybase or is new to SQL Server.
I wish to avoid the error, or preferably log it and roll back or delete the inserted rows inside the transaction.
Notice that I don't use a stored procedure. This script is a one-timer to update the database for recent changes that occurred in the software that uses the database.
The code looks mostly correct. If you hit a duplicate-key error upon the insert, then this will be caught by the IF-test.
However, you should also add some logic (GOTO or additional IF-ELSE tests based on a flag) that skips the second insert when you have decided to roll back the first insert.
Currently your code will always execute the second insert, regardless. Unlike some other databases, in ASE control flow is not affected by an error condition and you need to intercept this yourself.
Also note that both inserts are identical, so if there is unique index on the table the second insert will always generate dup-key error if the first insert was successful.
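A sketch of that flag-based approach, keeping the script's own style, dropping the transaction name as suggested in the remarks below, and assuming the second insert actually uses different values:
begin tran
declare @failed int
select @failed = 0
insert into my_table (field1, field2, field3) values (value1, value2, value3)
if @@error != 0 or @@transtate != 0
begin
    select @failed = 1
    rollback tran
end
if @failed = 0
begin
    insert into my_table (field1, field2, field3) values (value4, value5, value6)
    if @@error != 0 or @@transtate != 0
    begin
        select @failed = 1
        rollback tran
    end
end
if @failed = 0
    commit tran
go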
It sounds as if you are using a client that checks the status after every statement or something? ASE itself does not pop up any error message boxes.
To develop this, it is best to run the script from the command line with
isql [...] -i yourscriptfile.sql
Just two remarks:
You are using a transaction name ('a'). This is not necessary and can in fact cause problems: when you roll back to a named transaction, as you do, that must be the outermost transaction or the rollback will fail. Best don't use transaction names.
The problem in the previous remark can in fact occur if there is already a transaction active when you execute this code. You should always check for this at the start of your script with
if @@trancount > 0
begin
    print 'Error: transaction already active'
    return
end
or something similar.
HTH,
Rob V.

How to ignore errors in a trigger and perform the respective operation in MS SQL Server

I have created an AFTER INSERT trigger.
Now, if an error occurs while executing the trigger, it should not affect the insert operation on the triggered table.
In one word: if any error occurs in the trigger, it should be ignored.
I have used
BEGIN TRY
END TRY
BEGIN CATCH
END CATCH
But it gives the following error message and rolls back the insert operation on the triggered table:
An error was raised during trigger execution. The batch has been
aborted and the user transaction, if any, has been rolled back.
Interesting problem. By default, triggers are designed so that if they fail, they roll back the statement that fired them. So whenever a trigger is executing there is an active transaction, whether or not there was an explicit BEGIN TRANSACTION on the outside. BEGIN TRY inside the trigger will not work either. Best practice would be not to write any code in a trigger that could possibly fail - unless it is desired to also fail the firing statement.
In this situation, to suppress this behavior, there are some workarounds.
Option A (the ugly way):
Since a transaction is active at the beginning of the trigger, you can just COMMIT it and continue with your trigger commands:
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
COMMIT;
... do whatever trigger does
END;
Note that if there is an error in trigger code this will still produce the error message, but data in Test1 table are safely inserted.
Option B (also ugly):
You can move your code from the trigger to a stored procedure. Then call that stored procedure from a wrapper SP that implements BEGIN TRY/CATCH, and finally call the wrapper SP from the trigger. It might be a bit tricky to move data from the INSERTED table around if it is needed in the logic (which now lives in the SP) - probably using some temp tables.
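A rough sketch of Option B with hypothetical names (spWrapper, spDoWork) and a temp table to pass the inserted rows along; note that whether the CATCH can really swallow the error depends on its severity, as the next answer points out:
CREATE PROCEDURE spDoWork AS
BEGIN
    -- the real logic lives here and may fail
    INSERT INTO [Table]([Column1]) SELECT [Column1] FROM #TriggerRows
END
GO
CREATE PROCEDURE spWrapper AS
BEGIN
    BEGIN TRY
        EXEC spDoWork
    END TRY
    BEGIN CATCH
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE() -- optionally log it somewhere
    END CATCH
END
GO
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    SELECT [Column1] INTO #TriggerRows FROM inserted
    EXEC spWrapper
END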
You cannot, and any attempt to solve it is snake oil. No amount of TRY/CATCH or @@ERROR checks will work around the fundamental issue.
If you want to use the tightly coupling of a trigger then you must buy into the lower availability induced by the coupling.
If you want to preserve the availability (i.e. have the INSERT succeed) then you must give up coupling (remove the trigger). You must do all the processing you were planning to do in the trigger in a separate transaction, one that starts after your INSERT commits. A SQL Agent job that polls the table for newly inserted rows, a Service Broker activated procedure, or even an application-layer step would all fit the bill.
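A sketch of that decoupled pattern, assuming a hypothetical single-row watermark table and an IDENTITY column ID on Test1, run by a SQL Agent job on a schedule:
DECLARE @last int, @maxid int
SELECT @last = LastProcessedID FROM ProcessingWatermark
SELECT @maxid = ISNULL(MAX(ID), @last) FROM Test1
-- do the work the trigger used to do, but in its own transaction after the INSERTs committed
INSERT INTO [Table]([Column1])
SELECT [Column1] FROM Test1 WHERE ID > @last AND ID <= @maxid
UPDATE ProcessingWatermark SET LastProcessedID = @maxid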
The accepted answer's option A gave me the following error: "The transaction ended in the trigger. The batch has been aborted.". I circumvented the problem by using the SQL below.
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    SET XACT_ABORT OFF
    BEGIN TRY
        SELECT [Column1] INTO #TableInserted FROM [inserted]
        EXECUTE sp_executesql N'INSERT INTO [Table]([Column1]) SELECT [Column1] FROM #TableInserted'
    END TRY
    BEGIN CATCH
    END CATCH
    SET XACT_ABORT ON
END

Why does Microsoft SQL Server Implicitly Rollback when a CREATE statement fails?

I am working on pymssql, a python MSSQL driver. I have encountered an interesting situation that I can't seem to find documentation for. It seems that when a CREATE TABLE statement fails, the transaction it was run in is implicitly rolled back:
-- shows 0
select @@TRANCOUNT
BEGIN TRAN
-- will cause an error
INSERT INTO foobar values ('baz')
-- shows 1 as expected
select @@TRANCOUNT
-- will cause an error
CREATE TABLE badschema.t1 (
test1 CHAR(5) NOT NULL
)
-- shows 0, this is not expected
select @@TRANCOUNT
I would like to understand why this is happening and know if there are docs that describe the situation. I am going to code around this behavior in the driver, but I want to make sure that I do so for any other error types that implicitly rollback a transaction.
NOTE
I am not concerned here with typical transactional behavior. I specifically want to know why an implicit rollback is given in the case of the failed CREATE statement but not with the INSERT statement.
Here is the definitive guide to error handling in Sql Server:
http://www.sommarskog.se/error-handling-I.html
It's long, but in a good way, and it was written for Sql Server 2000 but most of it is still accurate. The part you're looking for is here:
http://www.sommarskog.se/error-handling-I.html#whathappens
In your case, the article says that Sql Server is performing a Batch Abortion, and that it will take this measure in the following situations:
Most conversion errors, for instance conversion of non-numeric string to a numeric value.
Superfluous parameter to a parameterless stored procedure.
Exceeding the maximum nesting-level of stored procedures, triggers and functions.
Being selected as a deadlock victim.
Mismatch in number of columns in INSERT-EXEC.
Running out of space for data file or transaction log.
There's a bit more to it than this, so make sure to read the entire section.
It is often, but not always, the point of a transaction to roll back the entire thing if any part of it fails:
http://www.firstsql.com/tutor5.htm
One of the most common reasons to use transactions is when you need the action to be atomic:
An atomic operation in computer science refers to a set of operations that can be combined so that they appear to the rest of the system to be a single operation with only two possible outcomes: success or failure.
en.wikipedia.org/wiki/Atomic_(computer_science)
It's probably not documented, because, if I understand your example correctly, it is assumed you intended that functionality by beginning a transaction with BEGIN TRAN
If you run as one batch (which I did first time), the transaction stays open because the INSERT aborts the batch and CREATE TABLE is not run. Only if you run line-by-line does the transaction get rolled back
You can also generate an implicit rollback for the INSERT by setting SET XACT_ABORT ON.
My guess (I just had a light-bulb moment as I typed the sentence above) is that CREATE TABLE uses SET XACT_ABORT ON internally = an implicit rollback in practice.
Some more stuff from me on SO about SET XACT_ABORT (we use it in all our code because it releases locks and rolls back TXNs on client CommandTimeout)
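As a sketch of that guess, repeating the experiment with the setting turned on should make the failing INSERT behave like the failing CREATE TABLE:
SET XACT_ABORT ON
BEGIN TRAN
-- same failing insert as above: with XACT_ABORT ON it aborts the batch
INSERT INTO foobar values ('baz')
GO
-- now shows 0: the transaction was rolled back implicitly
select @@TRANCOUNT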