Why does the Try/Catch not complete in SSMS query window?

This sample script is supposed to create two tables and insert a row into each of them.
If all goes well, we should see OK and have two tables with data. If not, we should see FAILED and have no tables at all.
Running this in a query window displays an error for the second insert (as it should), but displays neither the success nor the failure message. The window just sits waiting for a manual rollback. What am I missing in either the transaction handling or the try/catch?
begin try
begin transaction
create table wpt1 (id1 int, junk1 varchar(20))
create table wpt2 (id2 int, junk2 varchar(20))
insert into wpt1 select 1,'blah'
insert into wpt2 select 2,'fred',0 -- <<< deliberate error on this line
commit transaction
print 'OK'
end try
begin catch
rollback transaction
print 'FAILED'
end catch

The problem is that your error is of a type that aborts the batch immediately. TRY-CATCH can handle softer errors, but it does not catch all errors.
Look for the documentation topic - What Errors Are Not Trapped by a TRY/CATCH Block.
Because the tables are created in the same batch, the INSERT statements are only compiled when they first run (deferred name resolution). The second INSERT supplies three values for a two-column table, so its statement-level recompilation fails; a compile error at the same execution level as the TRY...CATCH aborts the batch before the CATCH block can run, which leaves the transaction open and the window waiting.
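One way to make this error catchable, per the documentation, is to run the error-generating statement at a lower execution level, for example as a dynamic SQL batch via sp_executesql. A minimal sketch of the same script reworked that way:
begin try
begin transaction
create table wpt1 (id1 int, junk1 varchar(20))
create table wpt2 (id2 int, junk2 varchar(20))
insert into wpt1 select 1,'blah'
-- the compile error now happens in the inner batch, so the outer catch fires
exec sp_executesql N'insert into wpt2 select 2,''fred'',0'
commit transaction
print 'OK'
end try
begin catch
if @@trancount > 0
rollback transaction
print 'FAILED'
end catch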

Related

MS SQL Try Catch Transaction

I need to know if the sql below is correct to roll back if something bad happens.
I first create a temp table
IF OBJECT_ID(N'tempdb..#LocalSummery') IS NOT NULL
DROP TABLE #LocalSummery
CREATE TABLE #LocalSummery (
then I populate the table
insert into #LocalSummery
SELECT *....
then I start my try catch
BEGIN TRY
BEGIN TRANSACTION
delete from LocalSummeryRealTable
INSERT INTO LocalSummeryRealTable
SELECT S.*
FROM #LocalSummery AS S
COMMIT
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
...
I noticed that when I did the insert into LocalSummeryRealTable the fields didn't line up and I got an error. I expected things to roll back, but the SSMS query window gave me a message that the transaction wasn't committed and asked me if I wanted to commit it. Why would it ask me that when the transaction should have been rolled back?

DDL exception caught on table but not on column

Assuming that the table MyTable already exists, why is "in Catch" printed for the first statement but not for the second?
It seems that errors on duplicate table names are caught, but errors on duplicate column names are not.
First:
BEGIN TRY
BEGIN TRANSACTION
CREATE TABLE MyTable (id INT)
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT 'in Catch'
ROLLBACK TRANSACTION
END CATCH
Second:
BEGIN TRY
BEGIN TRANSACTION
ALTER TABLE MyTable ADD id INT
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT 'in Catch'
ROLLBACK TRANSACTION
END CATCH
The difference is that the ALTER TABLE statement generates a compile-time error, not a runtime error, so the CATCH block is never executed because the batch itself never runs.
You can check this with the "Display Estimated Execution Plan" button in SQL Server Management Studio: for the CREATE TABLE statement an estimated plan is displayed, whereas for the ALTER TABLE statement the error is thrown before SQL Server can even generate a plan, as it cannot compile the batch.
EDIT - EXPLANATION:
This is down to the way deferred name resolution works in SQL Server. If you are creating an object, SQL Server does not check whether that object already exists until runtime. However, if you reference columns in an object that does exist, the columns you reference must be correct or the statement will fail to compile.
An example of this is with stored procedures; say you have the following table:
create table t1
(
id int
)
then you create a stored procedure like this:
create procedure p1
as
begin
select * from t2
end
It will work, as deferred name resolution does not require the object to exist when the procedure is created, but it will fail when it is executed.
If, however, you create the procedure like this:
create procedure p2
as
begin
select id2 from t1
end
The procedure will fail to be created because you have referenced a nonexistent column in an object that does exist, so the deferred name resolution rules no longer apply.
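As with the question above, one way to get the ALTER TABLE error into the CATCH block is to compile it as a separate batch via dynamic SQL, since compile errors at a lower execution level are catchable. A minimal sketch:
BEGIN TRY
BEGIN TRANSACTION
-- EXEC compiles the ALTER in its own batch, so its compile error can be caught here
EXEC ('ALTER TABLE MyTable ADD id INT')
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT 'in Catch'
ROLLBACK TRANSACTION
END CATCH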

TRY CATCH Block in T-SQL

I encountered a stored procedure that had the following error handling block immediately after an update attempt. These were the last lines of the SP.
Is there any benefit to doing this? It appears to me as though this code just rethrows the same error that it caught without adding any value, and that the code would presumably behave 100% the same if the TRY block were omitted entirely.
Would there be ANY difference in the behavior of the resulting SP if the TRY block were omitted?
BEGIN CATCH
SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSev = ERROR_SEVERITY(), @ErrState = ERROR_STATE()
RAISERROR (@ErrMsg, @ErrSev, @ErrState)
END CATCH
Barring the fact that the "line error occurred on" part of any message returned would reference the RAISERROR line and not the line the error actually occurred on, there will be no difference. The main reason to do this is, as @Chris says, to allow you to programmatically use/manipulate the error data.
What we usually do in our stored procedure is to write the catch block like this
BEGIN CATCH
DECLARE @i_intErrorNo int
DECLARE @i_strErrorMsg nvarchar(1000)
DECLARE @i_strErrorProc nvarchar(1000)
DECLARE @i_intErrorLine int
SELECT @i_intErrorNo = Error_Number()
SELECT @i_strErrorMsg = Error_Message()
SELECT @i_strErrorProc = Error_Procedure()
SELECT @i_intErrorLine = Error_Line()
-- INSERT statement into your error log table goes here, e.g. (the table name is illustrative):
-- INSERT INTO dbo.ErrorLog (ErrorNo, ErrorMsg, ErrorProc, ErrorLine)
-- VALUES (@i_intErrorNo, @i_strErrorMsg, @i_strErrorProc, @i_intErrorLine)
END CATCH
This is what we do to store errors. For a proper message to the user, I always use an output parameter on the stored procedure to return the detailed/required reason for the error.
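For illustration, such an output parameter might look like this minimal sketch (the procedure name and the forced error are hypothetical):
CREATE PROCEDURE usp_DoWork
@o_strErrorMsg nvarchar(1000) OUTPUT
AS
BEGIN TRY
SET @o_strErrorMsg = NULL
SELECT 1/0 -- real work goes here; the division by zero just forces an error
END TRY
BEGIN CATCH
SET @o_strErrorMsg = ERROR_MESSAGE()
END CATCH
GO
DECLARE @msg nvarchar(1000)
EXEC usp_DoWork @o_strErrorMsg = @msg OUTPUT
PRINT ISNULL(@msg, 'no error')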
If you look on the MSDN page for RAISERROR, you see this general description:
Generates an error message and initiates error processing for the session. RAISERROR can either reference a user-defined message stored in the sys.messages catalog view or build a message dynamically. The message is returned as a server error message to the calling application or to an associated CATCH block of a TRY…CATCH construct.
It appears that the "calling application" will get the error message. It may be that the creator of the stored procedure wanted only the error message, severity, and state to be reported, and none of the other options that can be added. This may be because of security concerns, or just because the calling application did not need to know the extra information (which could have been verbose or overly detailed, perhaps).
There is a subtle difference, as demonstrated below.
First setup the following:
CREATE TABLE TMP
( ROW_ID int NOT NULL )
ALTER TABLE TMP ADD CONSTRAINT PK_TMP PRIMARY KEY CLUSTERED (ROW_ID)
GO
CREATE PROC pTMP1
AS
BEGIN TRY
INSERT INTO TMP VALUES(1)
INSERT INTO TMP VALUES(1)
INSERT INTO TMP VALUES(2)
END TRY
BEGIN CATCH
DECLARE @ErrMsg varchar(max) = ERROR_MESSAGE(),
@ErrSev int = ERROR_SEVERITY(),
@ErrState int = ERROR_STATE()
RAISERROR (@ErrMsg, @ErrSev, @ErrState)
END CATCH
GO
CREATE PROC pTMP2
AS
INSERT INTO TMP VALUES(1)
INSERT INTO TMP VALUES(1)
INSERT INTO TMP VALUES(2)
GO
Now run the following:
SET NOCOUNT ON
DELETE TMP
exec pTMP1
SELECT * FROM TMP
DELETE TMP
exec pTMP2
SELECT * FROM TMP
SET NOCOUNT OFF
--Cleanup
DROP PROCEDURE pTMP1
DROP PROCEDURE pTMP2
DROP TABLE TMP
You should get the following results:
Msg 50000, Level 14, State 1, Procedure pTMP1, Line 12
Violation of PRIMARY KEY constraint 'PK_TMP'. Cannot insert duplicate key in object 'dbo.TMP'. The duplicate key value is (1).
ROW_ID
-----------
1
Msg 2627, Level 14, State 1, Procedure pTMP2, Line 4
Violation of PRIMARY KEY constraint 'PK_TMP'. Cannot insert duplicate key in object 'dbo.TMP'. The duplicate key value is (1).
The statement has been terminated.
ROW_ID
-----------
1
2
Notice that the TRY..CATCH version did not execute the third INSERT statement, whereas the pTMP2 proc did. This is because control jumps to CATCH as soon as the error occurs.
NOTE: The behaviour of pTMP2 is affected by the XACT_ABORT setting.
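To see the effect, here is a minimal sketch re-using the TMP/pTMP2 setup above (run it before the cleanup):
SET XACT_ABORT ON
DELETE TMP
exec pTMP2 -- the duplicate key error now terminates the whole batch
GO
SELECT * FROM TMP -- returns only row 1; the third INSERT never ran
SET XACT_ABORT OFF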
Conclusion
The benefit of using TRY..CATCH as demonstrated depends on how you manage your transaction boundaries.
If you roll back on any error, then the changes will be undone, but this doesn't eliminate side effects such as additional processing. NOTE: if a different session simultaneously queries TMP using WITH(NOLOCK), it may even be able to observe the temporary change.
However, if you don't intend to roll back the transaction, you may find the technique quite important to prevent certain data changes from being applied in spite of an earlier error.

Logging into table in SQL Server trigger

I am coding a SQL Server 2005 trigger. I want to do some logging during trigger execution, using INSERT statements into my log table. When an error occurs during execution, I want to raise an error and cancel the action that caused the trigger to fire, but not lose the log records. What is the best way to achieve this?
Right now my trigger logs everything except the error case, because of the ROLLBACK. The RAISERROR statement is needed in order to inform the calling program about the error.
Now, my error handling code looks like:
if (@err = 1)
begin
INSERT INTO dbo.log(date, entry) SELECT getdate(), 'ERROR: ' + out from #output
RAISERROR (@msg, 16, 1)
rollback transaction
return
end
Another possible option is to use a table variable to capture the info you want to store in your permanent log table. Table variables are not rolled back when a ROLLBACK TRANSACTION command is issued. Sample code is below...
--- Declare table variable
DECLARE @ErrorTable TABLE
( [DATE] smalldatetime,
[ENTRY] varchar(64) )
DECLARE @nErrorVar int
--- Open Transaction
BEGIN TRANSACTION
--- Pretend to cause an error and catch the error code
SET @nErrorVar = 1 --- @@ERROR
IF (@nErrorVar = 1)
BEGIN
--- Insert error info into the table variable
INSERT INTO @ErrorTable
( [Date], [Entry] )
SELECT
getdate(), 'Error Message Goes Here'
RAISERROR('Error Message Goes Here', 16, 1)
ROLLBACK TRANSACTION
--- Change this to actually insert into your permanent log table
SELECT *
FROM @ErrorTable
END
IF @@TRANCOUNT > 0
PRINT 'Open Transactions Exist'
ELSE
PRINT 'No Open Transactions'
The problem here is that the logging is part of the transaction that modifies your data. Nested transactions will not help here. What you need is to put your logging actions into a separate context (connection), i.e. make them independent of your current transaction.
Two options come to my mind:
use Service Broker for logging - put the log data on a queue, then receive and save the data 'on the other side of the pipe' (i.e. in another process/connection/transaction)
use OPENQUERY - you will need to register your own server as a 'linked server' and execute queries 'remotely' (I know, this looks a little strange, but it is an option anyway ...)
HTH
Don't know if I'm thinking too simply, but why not just change the order of the error handler to do the insert AFTER the rollback?
if (@err = 1)
begin
RAISERROR (@msg, 16, 1)
rollback transaction
INSERT INTO dbo.log(date, entry) SELECT getdate(), 'ERROR: ' + out from #output
return
end
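For illustration, inside a full trigger that ordering might look like this minimal sketch (the trigger name, the table dbo.mydata, and the negative-amount check are hypothetical; dbo.log is as above):
CREATE TRIGGER trg_mydata_log ON dbo.mydata
AFTER INSERT
AS
BEGIN
DECLARE @err bit, @msg varchar(400)
SET @err = 0
-- hypothetical validation rule
IF EXISTS (SELECT 1 FROM inserted WHERE amount < 0)
SELECT @err = 1, @msg = 'negative amount rejected'
IF (@err = 1)
BEGIN
RAISERROR (@msg, 16, 1)
ROLLBACK TRANSACTION -- undoes the triggering statement
-- statements after the rollback run in a new transaction, so this log row is kept
INSERT INTO dbo.log(date, entry) SELECT getdate(), 'ERROR: ' + @msg
RETURN
END
END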
Check out error handling in triggers.

SQL Server 'Resume Next' Equivalent

I'm working on a project in VB.net which takes large text files containing T-SQL and executes them against a local SQL database, but I've hit a problem with regard to error handling.
I'm using the following technologies :
VB.net
Framework 3.5
SQL Express 2005
The SQL I'm trying to execute is mostly straightforward, but my app is completely unaware of the schema or the data contained within. For example:
UPDATE mytable SET mycol2='data' WHERE mycol1=1
INSERT INTO mytable (mycol1, mycol2) VALUES (1,'data')
UPDATE mytable SET mycol2='data' WHERE mycol1=2
INSERT INTO mytable (mycol1, mycol2) VALUES (1,'data')
UPDATE mytable SET mycol2='data' WHERE mycol1=3
The above is a sample of the sort of thing I'm executing, but these files will contain around 10,000 to 20,000 statements each.
My problem is that when using sqlCommand.ExecuteNonQuery(), I get an exception raised because the second INSERT statement will hit the Primary Key constraint on the table.
I need to know that this error happened and log it, but also process any subsequent statements. I've tried wrapping these statements in TRY/CATCH blocks but I can't work out a way to handle the error then continue to process the other statements.
The Query Analyser seems to behave in this way, but not when using sqlCommand.ExecuteNonQuery().
So is there a T-SQL equivalent of 'Resume Next' or some other way I can do this without introducing massive amounts of string handling on my part?
Any help greatly appreciated.
SQL Server does have a Try/Catch syntax. See:
http://msdn.microsoft.com/en-us/library/ms175976.aspx
To use this with your file, you would either have to rewrite the files themselves to wrap each line in the TRY/CATCH syntax, or else your code would have to programmatically modify the file contents to wrap each line.
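For example, each statement from the file could be wrapped like this (re-using the statements from the question):
BEGIN TRY
INSERT INTO mytable (mycol1, mycol2) VALUES (1,'data')
END TRY
BEGIN CATCH
-- log and carry on with the next statement
PRINT 'Statement failed: ' + ERROR_MESSAGE()
END CATCH
BEGIN TRY
UPDATE mytable SET mycol2='data' WHERE mycol1=2
END TRY
BEGIN CATCH
PRINT 'Statement failed: ' + ERROR_MESSAGE()
END CATCH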
There is no T-SQL equivalent of "On Error Resume Next", and thank Cthulhu for that.
Actually your batch executes to the end, since key violations do not interrupt batch execution. If you run the same SQL file from Management Studio, you'll see that all the valid statements are executed and the Messages panel contains an error for each key violation. The SqlClient of ADO.NET behaves much the same way, but at the end of the batch (when SqlCommand.ExecuteNonQuery returns) it parses the messages returned and throws an exception. The exception is one single SqlException, but its Errors collection contains a SqlError for each key violation that occurred.
Unfortunately there is no silver bullet. Ideally the SQL files should not cause errors. You can choose to iterate through the SqlErrors of the exception and decide, on an individual basis, whether the error was serious or can be ignored, knowing that the SQL files have data quality problems. Some errors may be serious and cannot be ignored. See Database Engine Error Severities.
Another alternative is to explicitly tell the SqlClient not to throw. If you set the FireInfoMessageEventOnUserErrors property of the connection to true, it will raise the SqlConnection.InfoMessage event instead of throwing an exception.
At the risk of slowing down the process (by making thousands of trips to the SQL server rather than one), you could handle this issue by splitting the file into multiple individual queries, each being either an INSERT or an UPDATE. Then you can catch each individual error as it takes place and log it or deal with it as your business logic requires.
begin try
--your critical commands
end try
begin catch
-- it is necessary to write something like this here
select ''
end catch
One technique I use is to use a try/catch and within the catch raise an event with the exception information. The caller can then hook up an event handler to do whatever she pleases with the information (log it, collect it for reporting in the UI, whatever).
You can also include the technique (used in many .NET framework areas, e.g. Windows.Forms events and XSD validation) of passing a CancelEventArgs-style object as the second event argument, with a Boolean property that the event handler can set to indicate that processing should abort after all.
Another thing I urge you to do is to prepare your INSERTs and UPDATEs, then call them many times with varying arguments.
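On the T-SQL side, preparing a statement once and re-executing it with varying arguments can be sketched with sp_executesql (table and column names taken from the question):
DECLARE @sql nvarchar(200)
SET @sql = N'UPDATE mytable SET mycol2 = @p2 WHERE mycol1 = @p1'
-- the plan for @sql is compiled once and reused for each execution
EXEC sp_executesql @sql, N'@p1 int, @p2 varchar(50)', @p1 = 1, @p2 = 'data'
EXEC sp_executesql @sql, N'@p1 int, @p2 varchar(50)', @p1 = 2, @p2 = 'data'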
I'm not aware of a way to support resume next, but one approach would be to use a local table variable to prevent the errors in the first place, e.g.
Declare @Table table(id int, value varchar(100))
UPDATE mytable SET mycol2='data' WHERE mycol1=1
--Insert values for later usage
INSERT INTO @Table (id, value) VALUES (1,'data')
--Insert only if data does not already exist.
INSERT INTO mytable (mycol1, mycol2)
Select id, Value from @Table t left join
mytable t2 on t.id = t2.MyCol1
where t2.MyCol1 is null and t.id = 1
EDIT
Ok, I don't know that I'd suggest this per se, but you could achieve a sort of resume next by wrapping the try catch in a while loop, if you set an exit condition at the end of all the steps and keep track of which step you performed last.
Declare @Table table(id int, value varchar(100))
Declare @Step int
set @Step = 0
While (1=1)
Begin
Begin Try
if @Step < 1
Begin
-- this insert fails: 's' cannot be converted to int
Insert into @Table (id, value) values ('s', 1)
Set @Step = @Step + 1
End
if @Step < 2
Begin
Insert into @Table values ( 1, 's')
Set @Step = @Step + 1
End
Break
End Try
Begin Catch
-- skip past the failing step and resume the loop
Set @Step = @Step + 1
End Catch
End
Select * from @Table
Unfortunately, I don't think there's a way to force the SqlCommand to keep processing once an error has been returned.
If you're unsure whether any of the commands will cause an error or not (and at some performance cost), you should split the commands in the text file into individual SqlCommands...which would enable you to use a try/catch block and find the offending statements.
...this, of course, depends on the T-SQL commands in the text file each being on a separate line (or otherwise delimited).
You need to check whether the PK value already exists. Additionally, one large transaction is generally faster than many little transactions, and the rows and pages don't need to be locked for as long overall that way.
-- load a temp/import table
create table #importables (mycol1 int, mycol2 varchar(50))
insert #importables (mycol1, mycol2) values (1, 'data for 1')
insert #importables (mycol1, mycol2) values (2, 'data for 2')
insert #importables (mycol1, mycol2) values (3, 'data for 3')
-- update the base table rows that are already there
update mt set MyCol2 = i.MyCol2
from #importables i (nolock)
inner join MyTable mt on mt.MyCol1 = i.MyCol1
where mt.MyCol2 <> i.MyCol2 -- no need to fire triggers and logs if nothing changed
-- insert new rows AFTER the update, so we don't update the rows we just inserted.
insert MyTable (mycol1, mycol2)
select mycol1, mycol2
from #importables i (nolock)
left join MyTable mt (nolock) on mt.MyCol1 = i.MyCol1
where mt.MyCol1 is null;
You could improve this further by opening a SqlConnection, creating a #temp table, using SqlBulkCopy to do a bulk insert to that temp table, and doing the delta from there (as opposed to my #importables in this example). As long as you use the same SqlConnection, a #temp table will remain accessible to subsequent queries on that connection, until you drop it or disconnect.