How to handle a transaction in Sybase ASE?

I have to insert records into a table in a test environment, and I already know that the insert will throw a primary key constraint violation. Because these scripts will be run independently by other people when the time comes to migrate from one environment to another, I want my script to roll back whenever it encounters a problem.
My script is as follows:
use my_db
go
--
-- No rows of the given "my_code" must exist, as they shall be replaced.
--
if exists (
select 1
from my_table
where my_code like 'my_code'
) delete from my_table
where my_code like 'my_code'
--
-- All rows shall be inserted altogether, or rejected altogether at once.
--
begin tran a
insert into my_table (field1, field2, field3) values (value1, value2, value3)
if @@error != 0 or @@transtate != 0
begin
rollback tran a
end
insert into my_table (field1, field2, field3) values (value1, value2, value3)
if @@error != 0 or @@transtate != 0
begin
rollback tran a
end
commit tran a
go
I have tried what I could get from these posts:
Error Handling in Sybase
How to continue executing rest of while loop even if an exception occurs in sybase?
Transaction Handling in Sybase
I have tried verifying only @@error, only @@transtate, and both, and I always get the message box reporting the error, and no records are rolled back; that is, the rows that pass are still inserted.
I wonder whether there is a way to make Sybase handle transactions as expected, or failing that, to make sure it doesn't autocommit rows as they are inserted, the way SQL Server allows. SQL Server inherits from Sybase, after all, so I don't know whether this behaviour was in Sybase already or is new to SQL Server.
I wish to avoid the error, or preferably log it and roll back or delete the rows inserted inside the transaction.
Notice that I don't use a stored procedure. This script is a one-timer to update the database for recent changes that occurred in the software that uses the database.

The code looks mostly correct. If you hit a duplicate-key error upon the insert, then this will be caught by the IF-test.
However, you should also add some logic (a GOTO, or additional IF-ELSE tests based on a flag) that skips the second insert when you have decided to roll back the first one; see the sketch below.
Currently your code will always execute the second insert regardless. Unlike some other databases, in ASE control flow is not affected by an error condition, so you need to intercept this yourself.
Also note that both inserts are identical, so if there is a unique index on the table, the second insert will always generate a dup-key error whenever the first insert succeeded.
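A minimal sketch of that restructuring with a flag variable, reusing the placeholder names from the question (the second set of values is assumed to differ in the real script):
declare @failed int
select @failed = 0
begin tran
insert into my_table (field1, field2, field3) values (value1, value2, value3)
if @@error != 0 or @@transtate != 0
begin
    rollback tran
    select @failed = 1
end
if @failed = 0
begin
    insert into my_table (field1, field2, field3) values (value4, value5, value6)
    if @@error != 0 or @@transtate != 0
    begin
        rollback tran
        select @failed = 1
    end
end
if @failed = 0
    commit tran
go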
It sounds as if you are using a client that checks the status after every statement. ASE itself does not pop up any error message boxes.
To develop this, it is best to run the script from the command line with
isql [...] -i yourscriptfile.sql
Just two remarks:
You are using a transaction name ('a'). This is not necessary and can in fact cause problems: when you roll back to a named transaction, as you do, that must be the outermost transaction or the rollback will fail. Best not to use transaction names at all.
The problem in the previous bullet can in fact occur if there is already a transaction active when you execute this code (see the sketch after these remarks). You should always check for this at the start of your script with
if @@trancount > 0
begin
    print 'Error: transaction already active'
    return
end
or something similar.
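To illustrate both remarks, here is a hypothetical sequence in which the named rollback fails:
begin tran                -- already active, e.g. started earlier by the client
begin tran a              -- nested: 'a' is now not the outermost transaction
insert into my_table (field1, field2, field3) values (value1, value2, value3)
rollback tran a           -- fails, because 'a' is not the outermost transaction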
HTH,
Rob V.

Related

Transaction not rolling back with PK violation

As I understand it, if we start a transaction (begin tran/commit tran), it will either be done completely or not at all. But when I execute the T-SQL code below, the first insert statement works while the second doesn't.
Background: table A has two columns (ID, the primary key, and Name, a varchar), and it already had 3 rows of data (IDs 1, 2, 3).
begin tran
insert into A values (4, 'Tim') -- this works
insert into A values (2, 'Tom') -- this doesn't work because it violates the PK constraint
commit tran
select * from A
Here is my question: since the 2nd insert statement violates the PK constraint and couldn't be committed, I was thinking that everything inside this transaction should be rolled back, because the transaction should succeed or fail as one unit. But in fact 'Tim' is added into A while 'Tom' isn't. Doesn't this violate the atomicity of the transaction?
It depends on how you handle errors in your transaction. If you catch them, or if you ignore them (it seems you are ignoring them), then the transaction will continue and will commit.
Any decent "transaction manager" of a programming language/framework:
Will stop the execution of the code and will roll the transaction back, or
Will doom the transaction, so a commit will never be carried out; it will be replaced by a rollback instead.
If you run these commands at the SQL prompt, you are probably not using any transaction manager, and that's why you may be deliberately ignoring the error and carrying on as if everything was good.
That's not how transactions work in SQL Server. If you have a "statement-terminating" error, SQL Server just continues to the next statement. If you have a "batch-terminating" error, the transaction is aborted and rolled back.
Now I don't want that behaviour ever.
So the first line I write in every stored procedure is:
SET XACT_ABORT ON;
That tells SQL Server that "statement-terminating" errors should be automatically promoted to "batch-terminating" errors. Add that statement to the beginning of your script and you'll see that it now works as expected.
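For example, with the table A from the question above (a sketch of the expected effect):
SET XACT_ABORT ON;
BEGIN TRAN
INSERT INTO A VALUES (4, 'Tim') -- works
INSERT INTO A VALUES (2, 'Tom') -- PK violation now aborts the batch and rolls back the transaction
COMMIT TRAN -- never reached
GO
SELECT * FROM A -- neither 'Tim' nor 'Tom' is present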

SQL Server Prevent Duplication

A SQL database contains a table with a two-column primary key: ID and Version. When the application attempts to save to the database it uses a command that looks similar to this:
BEGIN TRANSACTION
INSERT INTO [dbo].[FirstTable] ([ID],[VERSION]) VALUES (41,19)
INSERT INTO [dbo].[SecondTable] ([ID],[VERSION]) VALUES (41,19)
COMMIT TRANSACTION
SecondTable has a foreign key constraint to FirstTable and matching column names.
If two computers execute this statement at the same time, does the first computer lock FirstTable while the second computer waits, and then, when it has finished waiting, does it find that the first insert statement fails, throw an error, and not execute the second statement? Is it possible for the second insert to run successfully on both computers?
What is the safest way to ensure that the second computer does not write anything and returns an error to the calling application?
Since these are transactions, all operations must be successful otherwise everything gets rolled back. Whichever computer completes the first statement of the transaction first will ultimately complete its transaction. The other computer will attempt to insert a duplicate primary key and fail, causing its transaction to roll back.
Ultimately there will be one row added to both tables, via only one of the computers.
Of course, if there are 2 statements then one of them is going to be executed and the other one will throw an error. To avoid this, your best bet would be to use either "if exists" or "if not exists".
What this does is basically check whether data is already present in the table: if not, you insert; otherwise you just select.
Take a look at the flow shown below:
begin tran
if not exists (select * from Table with (updlock, rowlock, holdlock)
               where ...)
    /* insert */
else
    /* update */
commit /* locks are released here */
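Applied to the tables from the question, a sketch might look like this (the update branch is omitted because the incoming row is identical to the existing one):
BEGIN TRANSACTION
IF NOT EXISTS (SELECT * FROM [dbo].[FirstTable] WITH (UPDLOCK, HOLDLOCK)
               WHERE [ID] = 41 AND [VERSION] = 19)
BEGIN
    INSERT INTO [dbo].[FirstTable] ([ID],[VERSION]) VALUES (41,19)
    INSERT INTO [dbo].[SecondTable] ([ID],[VERSION]) VALUES (41,19)
END
COMMIT TRANSACTION -- the second computer blocks on the range lock, then skips the inserts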
The transaction did not roll back automatically when an error was encountered. It needed to have
set xact_abort on
I found this in this question:
SQL Server - transactions roll back on error?

SQL Server query dry run

I run a lot of queries that perform INSERTs, INSERT...SELECTs, UPDATEs, and ALTERs on tables, and when developing these queries, the intermediate steps that are run to test that various parts of the query work can potentially change the table or the data within it.
Is it possible to do a dry run of a query and have SQL Management Studio give you what the results would be, without actually modifying the data or the table structure?
At the moment I have to back up the database and run the query. If it works, good; if it doesn't, I have to restore the database (which can take around an hour), and I'm trying to avoid wasting all this time restoring databases.
Use an SQL transaction to make your changes then back them out.
Before you execute your script:
BEGIN TRANSACTION;
After you execute your script and have done your checking:
ROLLBACK TRANSACTION;
Every change in your script will then be undone.
Note: Make sure you don't have a COMMIT in your script!
Begin the transaction, perform the table operations, and roll back as shown below:
BEGIN TRAN
UPDATE C
SET column1 = 'XXX'
FROM table1 C
SELECT *
FROM table1
WHERE column1 = 'XXX'
ROLLBACK TRAN
This will roll back all the operations performed since the beginning of this transaction.

How to ignore errors in a trigger and perform the respective operation in MS SQL Server

I have created an AFTER INSERT trigger.
Now, if an error occurs while executing the trigger, it should not affect the insert operation on the triggered table.
In one word: if any error occurs in the trigger, it should be ignored.
I have used
BEGIN TRY
END TRY
BEGIN CATCH
END CATCH
But it gives the following error message and rolls back the insert operation on the triggered table:
An error was raised during trigger execution. The batch has been
aborted and the user transaction, if any, has been rolled back.
Interesting problem. By default, triggers are designed so that if they fail, they roll back the command that fired them. So whenever a trigger is executing there is an active transaction, whether or not there was an explicit BEGIN TRANSACTION on the outside. BEGIN TRY inside the trigger will not work either. Best practice would be not to write any code in a trigger that could possibly fail, unless you want the firing statement to fail as well.
In this situation, to suppress this behavior, there are some workarounds.
Option A (the ugly way):
Since a transaction is active at the beginning of the trigger, you can just COMMIT it and continue with your trigger commands:
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
COMMIT;
... do whatever trigger does
END;
Note that if there is an error in the trigger code, this will still produce the error message, but the data in the Test1 table is safely inserted.
Option B (also ugly):
You can move your code from the trigger into a stored procedure, then call that stored procedure from a wrapper SP that implements BEGIN TRY, and finally call the wrapper SP from the trigger. It might be a bit tricky to move data from the INSERTED table around if the logic (which is now in the SP) needs it; probably using some temp tables, as sketched below.
SQLFiddle DEMO
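A hypothetical sketch of option B; all object names here are illustrative, not taken from the original demo:
CREATE PROCEDURE dbo.TriggerWork
AS
BEGIN
    -- the original trigger logic, reading rows from a temp table
    INSERT INTO dbo.AuditTable ([Column1])
    SELECT [Column1] FROM #TriggerRows;
END;
GO
CREATE PROCEDURE dbo.TriggerWorkWrapper
AS
BEGIN
    BEGIN TRY
        EXEC dbo.TriggerWork;
    END TRY
    BEGIN CATCH
        -- swallow the error so the firing INSERT is unaffected
    END CATCH;
END;
GO
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    -- a temp table created here is visible to the procedures called below
    SELECT [Column1] INTO #TriggerRows FROM [inserted];
    EXEC dbo.TriggerWorkWrapper;
END;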
You cannot, and any attempt to solve it is snake oil. No amount of TRY/CATCH or @@ERROR checks will work around the fundamental issue.
If you want the tight coupling of a trigger, then you must buy into the lower availability induced by that coupling.
If you want to preserve the availability (i.e. have the INSERT succeed), then you must give up the coupling (remove the trigger). You must do all the processing you were planning to do in the trigger in a separate transaction, one that starts after your INSERT commits. A SQL Agent job that polls the table for newly inserted rows, a Service Broker-launched procedure, or even an application-layer step would all fit the bill; a sketch of the polling variant follows.
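A hypothetical sketch of the polling variant (the procedure and the Test1_Processed bookkeeping table are illustrative):
-- Run periodically from a SQL Agent job instead of a trigger.
CREATE PROCEDURE dbo.ProcessNewTest1Rows
AS
BEGIN
    BEGIN TRAN;
    -- process rows not seen before; a failure here no longer affects the original INSERTs
    INSERT INTO dbo.Test1_Processed (ID)
    SELECT t.ID
    FROM dbo.Test1 AS t
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Test1_Processed AS p WHERE p.ID = t.ID);
    COMMIT TRAN;
END;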
The accepted answer's option A gave me the following error: "The transaction ended in the trigger. The batch has been aborted." I circumvented the problem by using the SQL below.
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
SET XACT_ABORT OFF
BEGIN TRY
SELECT [Column1] INTO #TableInserted FROM [inserted]
EXECUTE sp_executesql N'INSERT INTO [Table]([Column1]) SELECT [Column1] FROM #TableInserted'
END TRY
BEGIN CATCH
END CATCH
SET XACT_ABORT ON
END

Auto update with script file with transaction

I need to provide an auto-update feature for my application.
I am having a problem applying the SQL updates. I have the update SQL statements in a .sql file, and what I want to achieve is that if one statement fails, the entire script file must be rolled back.
Example:
create procedure [dbo].[test1]
@P1 varchar(200),
@C1 int
as
begin
Select 1
end
GO
Insert into test (name) values ('vv')
Go
alter procedure [dbo].[test2]
@P1 varchar(200),
@C1 int
as
begin
Select 1
end
GO
Now in the above example, if I get an error in the third statement, "alter procedure [dbo].[test2]", then I want to roll back the first two changes as well: creating the SP test1 and inserting data into the test table.
How should I approach this task? Any help will be much appreciated.
If you need any more info, let me know.
Normally, you would want to add a BEGIN TRAN at the beginning, remove the GO statements, and then handle the ROLLBACK TRAN/COMMIT TRAN with a TRY..CATCH block.
When dealing with DDL, though, there are often statements that have to be at the start of a batch, so you can't wrap them in a TRY..CATCH block. In that case you need to put together a system that knows how to roll itself back.
A simple system would be just to back up the database at the start and restore it if anything fails (assuming that you are the only one accessing the database the whole time). Another method would be to log each batch that runs successfully and to have corresponding rollback scripts which you can run to put everything back should a later batch fail; a sketch of this follows. It obviously requires much more work (writing an undo script for every script PLUS fully testing the rollbacks) and can also be a problem if people are still accessing the database while the upgrade is happening.
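A hypothetical sketch of that logging approach (the Upgrade_Log table and batch names are illustrative):
CREATE TABLE dbo.Upgrade_Log
(
    batch_name VARCHAR(100) NOT NULL,
    applied_at DATETIME NOT NULL DEFAULT GETDATE()
)
GO
-- ... run a batch of the upgrade script ...
-- after each batch that succeeds:
INSERT INTO dbo.Upgrade_Log (batch_name) VALUES ('001_create_test1')
GO
-- on failure, run the undo script for every batch recorded in
-- dbo.Upgrade_Log, in reverse order of applied_at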
EDIT:
Here's an example of a simple TRY..CATCH block with transaction handling:
BEGIN TRY
BEGIN TRANSACTION
-- All of your code here, with `RAISERROR` used for any of your own error conditions
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
END CATCH
However, the TRY..CATCH block cannot span batches (maybe that's what I was thinking of when I said transactions couldn't), so in your case it would probably be something more like:
IF (OBJECT_ID('dbo.Error_Happened') IS NOT NULL)
DROP TABLE dbo.Error_Happened
GO
BEGIN TRANSACTION
<Some line of code>
IF (@@ERROR <> 0)
CREATE TABLE dbo.Error_Happened (my_id INT)
IF (OBJECT_ID('dbo.Error_Happened') IS NOT NULL)
BEGIN
<Another line of code>
IF (@@ERROR <> 0)
CREATE TABLE dbo.Error_Happened (my_id INT)
END
...
IF (OBJECT_ID('dbo.Error_Happened') IS NOT NULL)
BEGIN
ROLLBACK TRANSACTION
DROP TABLE dbo.Error_Happened
END
ELSE
COMMIT TRANSACTION
Unfortunately, because of the separate batches from the GO statements you can't use GOTO, you can't use the TRY..CATCH, and you can't persist a variable across the batches. This is why I used the very kludgy trick of creating a table to indicate an error.
A better way would be to simply have an error table and look for rows in it, as sketched below. Just keep in mind that your ROLLBACK will remove those rows at the end as well.
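A hypothetical sketch of that variant (the Upgrade_Errors table and the placeholder steps are illustrative):
IF (OBJECT_ID('dbo.Upgrade_Errors') IS NOT NULL)
    DROP TABLE dbo.Upgrade_Errors
GO
CREATE TABLE dbo.Upgrade_Errors (batch_name VARCHAR(100))
GO
BEGIN TRANSACTION
<Some line of code>
IF (@@ERROR <> 0)
    INSERT INTO dbo.Upgrade_Errors (batch_name) VALUES ('step 1')
GO
<Another line of code>
IF (@@ERROR <> 0)
    INSERT INTO dbo.Upgrade_Errors (batch_name) VALUES ('step 2')
GO
IF EXISTS (SELECT * FROM dbo.Upgrade_Errors)
BEGIN
    SELECT * FROM dbo.Upgrade_Errors -- inspect before rolling back
    ROLLBACK TRANSACTION -- this also removes the error rows themselves
END
ELSE
    COMMIT TRANSACTION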