DDL exception caught on table but not on column - sql

Assuming that the table MyTable already exists, why is "in Catch" printed by the first statement, but not by the second?
It seems to be catching errors on duplicate table names but not on duplicate column names.
First:
BEGIN TRY
BEGIN TRANSACTION
CREATE TABLE MyTable (id INT)
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT 'in Catch'
ROLLBACK TRANSACTION
END CATCH
Second:
BEGIN TRY
BEGIN TRANSACTION
ALTER TABLE MyTable ADD id INT
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT 'in Catch'
ROLLBACK TRANSACTION
END CATCH

The difference is that the ALTER TABLE statement generates a compile-time error, not a runtime error, so the catch block is never executed because the batch itself never runs.
You can check this by using the Display Estimated Execution Plan button in SQL Server Management Studio: for the CREATE TABLE statement an estimated plan is displayed, whereas for the ALTER TABLE statement the error is thrown before SQL Server can even generate a plan, because it cannot compile the batch.
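One way to observe (and catch) the error at runtime is to push the statement into a child batch with EXEC, so it is only compiled once the outer batch is already running; a minimal sketch, assuming MyTable still has its id column:
BEGIN TRY
BEGIN TRANSACTION
EXEC ('ALTER TABLE MyTable ADD id INT') -- compiled in a child batch at runtime
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT 'in Catch' -- now reached, because the error is raised in a lower scope
ROLLBACK TRANSACTION
END CATCH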
EDIT - EXPLANATION:
This is to do with the way deferred name resolution works in SQL Server: if you are creating an object, SQL Server does not check whether that object already exists until runtime. However, if you reference columns in an object that does exist, the columns you reference must be correct or the statement will fail to compile.
An example of this is with stored procedures, say you have the following table:
create table t1
(
id int
)
then you create a stored procedure like this:
create procedure p1
as
begin
select * from t2
end
It will work, as deferred name resolution does not require the object to exist when the procedure is created, but it will fail if it is executed:
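exec p1
-- Msg 208, Level 16, State 1, Procedure p1
-- Invalid object name 't2'.
(The exact message text may vary slightly between SQL Server versions.)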
If, however, you create the procedure like this:
create procedure p2
as
begin
select id2 from t1
end
The procedure will fail to be created because you have referenced a column in an object that does exist, so the deferred name resolution rules no longer apply:
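-- Msg 207, Level 16, State 1, Procedure p2
-- Invalid column name 'id2'.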

Related

SQL Server behaves differently when stopping execution for different types of error

I have a list of ALTER TABLE statements:
BEGIN
ALTER TABLE TABLE1 ALTER COLUMN AA INT -- Error Here
ALTER TABLE TABLE1 ALTER COLUMN BB INT
PRINT('CONTINUE AFTER ERROR')
END
After the error it stops execution and skips the other statements.
The output shows only one error.
But in the second case, where I have a list of DROP INDEX statements:
BEGIN
DROP INDEX TABLE1.INDEX1 -- Error Here
DROP INDEX TABLE2.INDEX2
PRINT('CONTINUE AFTER ERROR')
END
Here, after the error, it continues execution, printing the error and then the text 'CONTINUE AFTER ERROR'.
Why the difference?
The difference in behavior is because the first batch of ALTER TABLE statements is a compilation error whereas the second batch of DROP INDEX statements is a runtime error.
When a compilation error occurs on a batch, no code executes and only the compilation error is returned. Also, since no code executes with a compilation error, the error cannot even be caught with structured error handling:
BEGIN TRY
ALTER TABLE TABLE1 ALTER COLUMN AA INT -- Error Here
ALTER TABLE TABLE1 ALTER COLUMN BB INT
PRINT('CONTINUE AFTER ERROR')
END TRY
BEGIN CATCH
PRINT 'CAUGHT ERROR';
END CATCH;
Msg 4902, Level 16, State 1, Line 4 Cannot find the object "TABLE1"
because it does not exist or you do not have permissions.
When compilation is successful and a runtime error happens, subsequent statements in the same batch may or may not execute after the error, depending on the error severity and the XACT_ABORT setting.
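A minimal sketch for observing the XACT_ABORT effect, assuming INDEX1 does not exist so the DROP INDEX fails at runtime:
SET XACT_ABORT OFF -- the default: a statement-level runtime error lets the batch continue
DROP INDEX TABLE1.INDEX1 -- Error Here
PRINT('CONTINUE AFTER ERROR') -- printed
GO
SET XACT_ABORT ON -- now the same runtime error aborts the whole batch
DROP INDEX TABLE1.INDEX1 -- Error Here
PRINT('CONTINUE AFTER ERROR') -- never reached
GO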
Most likely because the ALTER TABLE statements actually touch the data in the given tables. Removing an index does not have that impact, so I guess SQL Server decides it is OK to continue with the next statement.

How to rollback stored procedure that updates a table

Why doesn't this ROLLBACK TRANSACTION statement work?
BEGIN TRANSACTION;
DECLARE @foo INT
EXECUTE [database].[dbo].[get_counter] @CounterID='inventory_records', @nextValue=@foo OUTPUT;
ROLLBACK TRANSACTION;
Background
I'm inserting records into a customer's ERP system built on SQL Server 2019. The ERP database doesn't have auto-incrementing primary keys. It instead uses a table called counters, where each row has a counterID field and an integer value field.
To insert a new row into a table like inventory_record, I first need to call a stored procedure like this:
EXECUTE get_counter @counterID='inventory_record'
This procedure returns an OUT parameter called @nextValue, which I then INSERT into the inventory_record table as its uid.
I need to ROLLBACK this stored procedure's behavior if my insert fails. That way the counter doesn't increase boundlessly on failed INSERT attempts.
Contents of get_counter stored procedure
It's dirt simple but also subject to copyright. I've summarized and truncated here. The counters are stored as sequences in the DB. So get_counter calls sp_sequence_get_range after checking that the requested counter is legitimate.
ALTER PROCEDURE get_counter
@strCounterID varchar(64),
@iIncrementValue integer = 1,
@LastValue BIGINT = NULL OUTPUT
AS
SET NOCOUNT ON
BEGIN
DECLARE
@nextSeqVar SQL_VARIANT
, @lastSeqVar SQL_VARIANT
-- code that confirms valid counter name
BEGIN TRY
-- code that calls [sp_sequence_get_range]
END TRY
BEGIN CATCH
THROW
END CATCH
RETURN(@LastValue)
END
The Problem
The inventory_record counter always increments. I can't roll it back.
If I run the SQL at the top of this question from SSMS, then SELECT value FROM counters WHERE counterID = 'inventory_record', the counter increments each time I execute.
I'm new to transaction handling in SQL Server. Any ideas what I'm missing?
(Re-posting comments as an answer, for better readability.)
get_counter is using sequence numbers (sp_sequence_get_range). Please refer to the Limitations section of the documentation:
Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the transaction using the sequence number is committed or rolled back.
You may see a simple demo here
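A minimal sketch of such a demo, using NEXT VALUE FOR on a throwaway sequence (the same consumption rule applies to sp_sequence_get_range):
CREATE SEQUENCE dbo.DemoSeq START WITH 1 INCREMENT BY 1;
GO
BEGIN TRANSACTION;
SELECT NEXT VALUE FOR dbo.DemoSeq; -- returns 1
ROLLBACK TRANSACTION;
SELECT NEXT VALUE FOR dbo.DemoSeq; -- returns 2: the rollback did not give 1 back
GO
DROP SEQUENCE dbo.DemoSeq;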

StoredProc manipulating Temporary table throws 'Invalid column name' on execution

I have a number of stored procedures that create a temporary table #TempData with various fields. Within these SPs I call a processing SP that operates on #TempData. The processing depends on the SP's input parameters. The SP code is:
CREATE PROCEDURE [dbo].[tempdata_proc]
@ID int,
@NeedAvg tinyint = 0
AS
BEGIN
SET NOCOUNT ON;
if @NeedAvg = 1
Update #TempData set AvgValue = 1
Update #TempData set Value = -1;
END
Then this SP is called in an outer SP with the following code:
USE [BN]
--GO
--DBCC FREEPROCCACHE;
GO
Create table #TempData
(
tele_time datetime
, Value float
--, AvgValue float
)
Create clustered index IXTemp on #TempData(tele_time);
insert into #TempData(tele_time, Value ) values( GETDATE(), 50 ); --sample data
declare
@ID int,
@UpdAvg int;
select
@ID = 1000,
@UpdAvg = 1
;
Exec dbo.tempdata_proc @ID, @UpdAvg;
select * from #TempData;
drop table #TempData
This code throws an error: Msg 207, Level 16, State 1, Procedure tempdata_proc, Line 8: Invalid column name "AvgValue".
But if I just uncomment the AvgValue float declaration, everything works OK.
The question: is there any workaround that lets the stored procedure code remain the same while giving the optimizer a tip to skip the statement, because the AvgValue column will not be used by the SP given the parameters passed?
Dynamic SQL is not a welcome solution, BTW. Using an alternative table name to #TempData is also undesirable given the existing T-SQL code (huge modifications would be necessary for that).
I have tried SET FMTONLY, tempdb.sys.columns, and TRY-CATCH wrapping, without any success.
The way that stored procedures are processed is split into two parts: one part, checking for syntactical correctness, is performed at the time that the stored procedure is created or altered. The remaining part of compilation is deferred until the point in time at which the stored procedure is executed. This is referred to as deferred name resolution, and it allows a stored procedure to include references to tables (not just temp tables) that do not exist at the point in time that the procedure is created.
Unfortunately, when it comes to the point in time that the procedure is executed, it needs to be able to compile all of the individual statements, and it is at this time that it discovers that the table exists but the column doesn't - and so it generates an error and refuses to run the procedure.
The T-SQL compiler is unfortunately very simplistic and doesn't take runtime control flow into account when performing compilation. It doesn't analyse the control flow or defer the compilation of conditional paths - it just fails the compilation because the column doesn't (at this time) exist.
Unfortunately, there aren't any mechanisms built into SQL Server to control this behaviour - this is the behaviour you get, and anything that addresses it is going to be perceived as a workaround, as evidenced already by the (valid) suggestions in the comments: the two main ways to deal with it are to use dynamic SQL or to ensure that the temp table always contains all the required columns.
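For reference, the dynamic SQL route might look like this sketch of the inner procedure; the statement touching AvgValue is only compiled when the EXEC branch actually runs (the asker has already ruled dynamic SQL out, but it shows why it sidesteps the problem):
CREATE PROCEDURE [dbo].[tempdata_proc]
@ID int,
@NeedAvg tinyint = 0
AS
BEGIN
SET NOCOUNT ON;
if @NeedAvg = 1
EXEC('Update #TempData set AvgValue = 1'); -- child batch, compiled only if reached
Update #TempData set Value = -1;
END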
One way to work around your maintenance concerns, if you go down the "all uses of the temp table should have all columns" route, is to move the column definitions into a separate stored procedure that can then augment the temporary table with all of the required columns - something like:
create procedure S_TT_Init
as
alter table #TT add Column1 int not null
alter table #TT add Column2 varchar(9) null
go
create procedure S_TT_Consumer
as
insert into #TT(Column1,Column2) values (9,'abc')
go
create procedure S_TT_User
as
create table #TT (tmp int null)
exec S_TT_Init
insert into #TT(Column1) values (8)
exec S_TT_Consumer
select Column1 from #TT
go
exec S_TT_User
Which produces the output 8 and 9. You'd put your temp table definition in S_TT_Init, S_TT_Consumer is the inner query that multiple stored procedures call, and S_TT_User is an example of one such stored procedure.
Create the table with the column initially. If you're populating the temp table with stored procedure output, just make the column an INT IDENTITY(1,1) so the columns line up with your output.
Then drop the column and re-add it with the appropriate data type later on in the procedure.
The only (or maybe best) way I can think of beyond dynamic SQL is using checks against the database structure:
if exists (Select 1 From tempdb.sys.columns Where object_id=OBJECT_ID('tempdb.dbo.#TempData') and name = 'AvgValue')
begin
--do something AvgValue related
end
Maybe create a simple function that takes a table name and a column (or only a column, if it's always #TempTable) and returns 1/0 depending on whether the column exists; that would be useful in the long run, I think:
if dbo.TempTableHasField('AvgValue')=1
begin
-- do something AvgValue related
end
EDIT1: Dang, you are right, sorry about that - I was sure I had ... this ... :( Let me think a bit more.

Why does the Try/Catch not complete in SSMS query window?

This sample script is supposed to create two tables and insert a row into each of them.
If all goes well, we should see OK and have two tables with data. If not, we should see FAILED and have no tables at all.
Running this in a query window displays an error for the second insert (as it should), but does not display either a success or a failed message. The window just sits waiting for a manual rollback. What am I missing in either the transaction handling or the TRY/CATCH?
begin try
begin transaction
create table wpt1 (id1 int, junk1 varchar(20))
create table wpt2 (id2 int, junk2 varchar(20))
insert into wpt1 select 1,'blah'
insert into wpt2 select 2,'fred',0 -- <<< deliberate error on this line
commit transaction
print 'OK'
end try
begin catch
rollback transaction
print 'FAILED'
end catch
The problem is that your error is of a type that aborts the batch immediately, so control never reaches the CATCH block (and the open transaction is left for you to roll back manually). TRY-CATCH can handle softer errors, but it does not catch all errors.
Look for "What Errors Are Not Trapped by a TRY/CATCH Block" in the documentation.
It looks like after the table is created, the following inserts are parsed (recompiled), which triggers a statement-level recompilation error and aborts the batch.
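One hedged workaround, based on the script above: run the failing insert in a child batch via EXEC, so the recompile error is raised in a lower scope at runtime, where the outer CATCH can see it:
begin try
begin transaction
create table wpt1 (id1 int, junk1 varchar(20))
create table wpt2 (id2 int, junk2 varchar(20))
insert into wpt1 select 1,'blah'
exec('insert into wpt2 select 2,''fred'',0') -- error now raised in a child scope
commit transaction
print 'OK'
end try
begin catch
if @@trancount > 0 rollback transaction
print 'FAILED' -- now reached
end catch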

Why does Microsoft SQL Server check columns but not tables in stored procs?

Microsoft SQL Server seems to check column name validity, but not table name validity, when defining stored procedures. If it detects that a referenced table name currently exists, it validates the column names in the statement against the columns in that table. So, for example, this will run OK:
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
SELECT
Col1, Col2, Col3
FROM
NonExistentTable
END
GO
... as will this:
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
SELECT
ExistentCol1, ExistentCol2, ExistentCol3
FROM
ExistentTable
END
GO
... but this fails, with 'Invalid column name':
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
SELECT
NonExistentCol1, NonExistentCol2, NonExistentCol3
FROM
ExistentTable
END
GO
Why does SQL Server check columns, but not tables, for existence? Surely it's inconsistent; it should do both, or neither. It's useful for us to be able to define SPs which may refer to tables AND/OR columns which don't exist in the schema yet, so is there a way to turn off SQL Server's checking of column existence in tables which currently exist?
This is called deferred name resolution.
There is no way of turning it off. You can use dynamic SQL, or (a nasty hack!) add a reference to a nonexistent table so that compilation of that statement is deferred:
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
CREATE TABLE #Dummy (c int)
SELECT
NonExistentCol1, NonExistentCol2, NonExistentCol3
FROM
ExistentTable
WHERE NOT EXISTS(SELECT * FROM #Dummy)
DROP TABLE #Dummy
END
GO
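The dynamic SQL alternative mentioned above might look like this sketch; the inner batch is not compiled until the procedure actually runs:
CREATE PROCEDURE [dbo].[MyProcedure]
AS
BEGIN
EXEC sp_executesql N'
SELECT NonExistentCol1, NonExistentCol2, NonExistentCol3
FROM ExistentTable';
END
GO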
This article in MSDN should answer your question.
From the article:
When a stored procedure is executed for the first time, the query processor reads the text of the stored procedure from the sys.sql_modules catalog view and checks that the names of the objects used by the procedure are present. This process is called deferred name resolution because table objects referenced by the stored procedure need not exist when the stored procedure is created, but only when it is executed.