Transact-SQL transaction failing without GO

I'm looking at a trivial query and struggling to understand why SQL Server cannot execute it.
Say I have a table
CREATE TABLE [dbo].[t2](
    [id] [nvarchar](36) NULL,
    [name] [nvarchar](36) NULL
)
And I want to add a new column and set some value to it. So I do the following:
BEGIN TRANSACTION
ALTER TABLE [t2] ADD [name2] [nvarchar](255) NULL
UPDATE [t2] SET [name2] = CONCAT(name, '-XXXX')
COMMIT TRANSACTION
And if I execute the query, I get the error "Invalid column name 'name2'".
I know it's failing because SQL Server executes the query in a different order for optimization purposes, and one way to fix it would be to separate those two statements with a GO statement. Thus the following query passes without issues.
BEGIN TRANSACTION
ALTER TABLE [t2] ADD [name2] [nvarchar](255) NULL
GO
UPDATE [t2] SET [name2] = CONCAT(name, '-XXXX')
COMMIT TRANSACTION
Actually, not exactly without issues, as I have to use a GO statement, which makes the transaction scope useless, as discussed in this Stack Overflow question.
So I have two questions:
How can I make that script work without using the GO statement?
Why is SQL Server not smart enough to figure out such a trivial case? (This is more of a rhetorical question.)

This is a parser error. When you run a batch, it is parsed beforehand; however, only certain DDL operations are "cached" by the parser so that it is aware of them later. CREATE is something it will "cache"; ALTER is not. That is why you can CREATE a table and then reference it in the same batch.
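A minimal sketch of that asymmetry (dbo.demo_new is a hypothetical table): the first batch below parses and runs, while the second fails with an "Invalid column name" error before any of its statements execute.
-- Works: the table created in this batch can be referenced later in it
CREATE TABLE dbo.demo_new ([id] int NULL);
INSERT INTO dbo.demo_new ([id]) VALUES (1);
GO
-- Fails at parse time: the column added by ALTER is not yet known,
-- so the whole batch (including the ALTER) never executes
ALTER TABLE dbo.demo_new ADD [extra] int NULL;
SELECT [extra] FROM dbo.demo_new;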
As you have an ALTER, when the parser parses the batch and gets to the UPDATE statement it fails, and the error you see is raised. One method is to defer the parsing of the statement:
BEGIN TRANSACTION;
ALTER TABLE [t2] ADD [name2] [nvarchar](255) NULL;
EXEC sys.sp_executesql N'UPDATE [t2] SET [name2] = CONCAT(name, N''-XXXX'');';
COMMIT TRANSACTION;
If, however, N'-XXXX' is meant to be the default value, you could specify that in the DDL statement instead:
BEGIN TRANSACTION;
ALTER TABLE t2 ADD name2 nvarchar(255) NULL DEFAULT N'-XXXX' WITH VALUES;
COMMIT TRANSACTION;

Related

From within a TSQL block, can I retrieve the originating SQL statement?

I am wondering if I can retrieve the original SQL statement which fired off a particular SQL block.
Say I have a table with an AFTER INSERT, UPDATE trigger on it. From within the trigger, I would like to get the full text of the original INSERT or UPDATE statement that fired the trigger.
Is this possible? Mainly I want to be able to do this for logging/debugging purposes.
I haven't tried doing this in a trigger (nor would I necessarily), but you might try something like the following.
select top 100
    q.[text]
from sys.dm_exec_requests r
outer apply sys.dm_exec_sql_text(r.sql_handle) q
where r.session_id = @@spid;
Great question! I use EVENTDATA() all the time with DDL triggers, but hadn't thought about what to use for DML triggers. I think this is what you're looking for. It's definitely going into my toolbox!
Please note that the code below is for demonstration only. Returning output from a trigger is deprecated. In practice, you'd insert the output of the DBCC command into a log table.
if schema_id(N'log') is null execute (N'create schema log');
go
if object_id(N'[log].[data]', N'U') is not null
    drop table [log].[data];
go
create table [log].[data] (
    [id] [int] identity(1, 1)
    , [flower] [sysname]);
go
if object_id(N'[log].[get_log_dml]', N'TR') is not null
    drop trigger [log].[get_log_dml];
go
create trigger [get_log_dml] on [log].[data]
after insert, update
as
    declare @dbcc table (
        [event_type] [sysname]
        , [parameters] [int]
        , [event_info] [nvarchar](max)
    );

    select *
    from inserted;

    insert into @dbcc
        ([event_type], [parameters], [event_info])
    execute (N'dbcc inputbuffer(@@spid)');

    select [event_type]
        , [parameters]
        , [event_info]
    from @dbcc;
go
insert into [log].[data]
([flower])
values (N'rose');
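And, as mentioned above, a sketch of the production variant that persists the captured buffer instead of returning it; the [log].[input_buffer] table here is hypothetical:
-- Hypothetical log table for captured statements
create table [log].[input_buffer] (
    [id] [int] identity(1, 1)
    , [logged_at] [datetime2] not null default sysdatetime()
    , [event_info] [nvarchar](max) null);
go
alter trigger [log].[get_log_dml] on [log].[data]
after insert, update
as
    declare @dbcc table (
        [event_type] [sysname]
        , [parameters] [int]
        , [event_info] [nvarchar](max)
    );

    insert into @dbcc ([event_type], [parameters], [event_info])
    execute (N'dbcc inputbuffer(@@spid)');

    -- Persist the originating statement rather than selecting it back
    insert into [log].[input_buffer] ([event_info])
    select [event_info]
    from @dbcc;
go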

How can I update my FinishedOn and DeletedOn columns without hard coding a check against a different table's column?

I have a stored procedure as follows:
CREATE PROCEDURE [ODataTaskResult_Create]
    @ODataTaskId BIGINT,
    @ODataTaskResultTypeId INTEGER,
    @Details CHARACTER VARYING(MAX)
AS
BEGIN TRANSACTION

INSERT INTO [ODataTaskResult] WITH (ROWLOCK, XLOCK)
    ([ODataTaskId], [ODataTaskResultTypeId], [Details], [CreatedOn])
VALUES (@ODataTaskId, @ODataTaskResultTypeId, @Details, SYSDATETIMEOFFSET())

DECLARE @ODataTaskResultTypeName CHARACTER VARYING(255)
SET @ODataTaskResultTypeName = (
    SELECT TOP 1 [ODataTaskType].[Name] FROM [ODataTaskType]
    WHERE [ODataTaskType].[Id] = @ODataTaskResultTypeId)

IF (@ODataTaskResultTypeName = 'Finish')
BEGIN
    UPDATE [ODataTask]
    SET [ODataTask].[FinishedOn] = SYSDATETIMEOFFSET()
    WHERE [ODataTask].[Id] = @ODataTaskId
END
ELSE IF (@ODataTaskResultTypeName = 'Delete')
BEGIN
    UPDATE [ODataTask]
    SET [ODataTask].[DeletedOn] = SYSDATETIMEOFFSET()
    WHERE [ODataTask].[Id] = @ODataTaskId
END
ELSE
    RAISERROR('Invalid result type', 16, 1)

COMMIT TRANSACTION
GO
This procedure is supposed to look at the incoming @ODataTaskResultTypeId parameter, pull the result type down from another table, and do something based on the Name column in that record.
Basically when a result is entered against a task, it defines how it completed. If a task is finished, I need to modify the FinishedOn column on the parent task record and not alter the DeletedOn column. We have a constraint that indicates FinishedOn and DeletedOn may not both be NOT NULL.
I feel at this point that, since I have hard-coded the different case logic into the stored procedure, maintainability is difficult, and this only works properly if the ODataTaskType table has the correct initial entries.
Should I make the ODataTaskResult_Create procedure only create the result and then have another procedure called ODataTask_Finish as well as another procedure called ODataTask_Delete?
Is there a different approach to this problem that is generally easier to maintain?
We never hard delete entries, only soft delete.
If you want a flexible solution, you can add a column to your ODataTaskType table to hold the stored procedure to run afterwards. You can then use some dynamic SQL to dispatch. If the column is called PostComplete_Proc, say:
create proc dbo.[ODataTaskResult_Create]
    @ODataTaskId bigint,
    @ODataTaskResultTypeId int,
    @Details varchar(max)
as
declare
    @proc sysname,
    @params nvarchar(max) = N'@ODataTaskId bigint';

begin transaction;

insert into dbo.[ODataTaskResult] with (rowlock, xlock) (
    [ODataTaskId], [ODataTaskResultTypeId], [Details], [CreatedOn]
) values (
    @ODataTaskId,
    @ODataTaskResultTypeId,
    @Details,
    sysdatetimeoffset()
);

select top 1 -- Would there really be more than 1? Why hide potential errors?
    @proc = PostComplete_Proc
from
    dbo.[ODataTaskType]
where
    Id = @ODataTaskResultTypeId;

if @proc is null
    raiserror('Invalid result type', 16, 1);
else
    -- @proc holds the batch to run, e.g. N'exec dbo.ODataTask_Finish @ODataTaskId'
    exec sys.sp_executesql @proc, @params, @ODataTaskId;

commit transaction;
then create the relevant stored procedures. If you have many result types and few procedures, you can even add another level, where the procedures are stored on a separate table and referenced via foreign keys.
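For illustration, a sketch of what the column and one dispatch target might look like; the stored batch text (rather than a bare procedure name) is an assumption that matches the sp_executesql call above, and ODataTask_Finish follows the naming from the question:
-- Hypothetical: column holding the batch to run for this result type
alter table dbo.[ODataTaskType] add PostComplete_Proc nvarchar(max) null;
go

create proc dbo.[ODataTask_Finish]
    @ODataTaskId bigint
as
    update dbo.[ODataTask]
    set [FinishedOn] = sysdatetimeoffset()
    where [Id] = @ODataTaskId;
go

-- Register the dispatch target for the 'Finish' result type
update dbo.[ODataTaskType]
set PostComplete_Proc = N'exec dbo.ODataTask_Finish @ODataTaskId'
where [Name] = 'Finish';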
I find it hard to convince myself that rowlock, xlock is doing anything here.

SQL Server transaction visibility issue

When I execute the following code (Case 1) I get the value 2 for the count, which means that inside the same transaction the changes made to the table are visible. So this behaves the way I expect.
Case 1
begin tran mytran
begin try
    CREATE TABLE [dbo].[ft](
        [ft_ID] [int] IDENTITY(1,1) NOT NULL,
        [ft_Name] [nvarchar](100) NOT NULL,
        CONSTRAINT [PK_FileType] PRIMARY KEY CLUSTERED
        (
            [ft_ID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    INSERT INTO [dbo].[ft]([ft_Name])
    VALUES('xxxx')
    INSERT INTO [dbo].[ft]([ft_Name])
    VALUES('yyyy')

    select count(*) from [dbo].[ft]

    commit tran mytran
end try
begin catch
    rollback tran mytran
end catch
However, when I alter the table (e.g. add a new column) within the transaction, the change is not visible to the (self/same) transaction (Case 2). Let's assume there is a Products table without a column called ft_ID, and I am adding the column within the same transaction and then going to read it.
Case 2
begin tran mytran
begin try
    IF NOT EXISTS (
        SELECT *
        FROM sys.columns
        WHERE object_id = OBJECT_ID(N'dbo.Products')
          AND name = 'ft_ID'
    )
    begin
        alter table dbo.Products
        add ft_ID int null
    end

    select ft_ID from dbo.Products

    commit tran mytran
end try
begin catch
    rollback tran mytran
end catch
When trying to execute Case 2 I get the error "Invalid column name 'ft_ID'" because the newly added column is not visible within the same transaction.
Why does this discrepancy happen? CREATE TABLE is atomic (Case 1) and works the way I expect, but ALTER TABLE is not. Why are the changes made within the same transaction not visible to the statements that follow (Case 2)?
You get a compile error. The batch is never launched into execution. See Understanding how SQL Server executes a query. Transaction visibility and boundaries have nothing to do with what you're seeing.
You should always separate DDL and DML into separate requests. Without going into too much detail: due to the way recovery works, mixing DDL and DML in the same transaction is just asking for trouble. Just take my word on this one.
Rules for Using Batches
...
A table cannot be changed and then the new columns referenced in the same batch.
See this
An alternative is to spawn a child batch and reference your new column from there, like...
exec('select ft_ID from dbo.Products')
However, as Remus said, be very careful about mixing schema changes and reads of that schema, especially within one and the same transaction. Even WITHOUT a transaction this code will have side effects: try wrapping this exec() workaround in a stored procedure, and you will get a recompile every time you call it. Tough luck, but it simply works that way.
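Applied to Case 2 above, the workaround might look like this (a sketch; the new column is read from a child batch that is compiled only after the ALTER has run):
begin tran mytran
begin try
    alter table dbo.Products
    add ft_ID int null

    -- The child batch compiles after the ALTER has taken effect
    exec('select ft_ID from dbo.Products')

    commit tran mytran
end try
begin catch
    rollback tran mytran
end catch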

SCOPE_IDENTITY And Instead of Insert Trigger work-around

OK, I have a table with no natural key, only an integer identity column as its primary key. I'd like to insert and retrieve the identity value, but also use a trigger to ensure that certain fields are always set. Originally, the design was to use INSTEAD OF INSERT triggers, but that breaks SCOPE_IDENTITY(). The OUTPUT clause on the INSERT statement is also broken by the INSTEAD OF INSERT trigger. So, I've come up with an alternate plan and would like to know if there is anything obviously wrong with what I intend to do:
begin contrived example:
CREATE TABLE [dbo].[TestData] (
    [TestId] [int] IDENTITY(1,1) PRIMARY KEY NOT NULL,
    [Name] [nchar](10) NOT NULL)

CREATE TABLE [dbo].[TestDataModInfo](
    [TestId] [int] PRIMARY KEY NOT NULL,
    [RowCreateDate] [datetime] NOT NULL)

ALTER TABLE [dbo].[TestDataModInfo] WITH CHECK ADD CONSTRAINT
    [FK_TestDataModInfo_TestData] FOREIGN KEY([TestId])
    REFERENCES [dbo].[TestData] ([TestId]) ON DELETE CASCADE

CREATE TRIGGER [dbo].[TestData$AfterInsert]
ON [dbo].[TestData]
AFTER INSERT
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    INSERT INTO [dbo].[TestDataModInfo]
        ([TestId],
         [RowCreateDate])
    SELECT
        [TestId],
        current_timestamp
    FROM inserted
END
End contrived example.
No, I'm not doing this for one little date field - it's just an example.
The fields that I want to ensure are set have been moved to a separate table (TestDataModInfo), and the trigger ensures that it's updated. This works, it allows me to use SCOPE_IDENTITY() after inserts, and appears to be safe (if my AFTER trigger fails, my insert fails). Is this bad design, and if so, why?
As you mentioned, SCOPE_IDENTITY is designed for this situation. It's not affected by AFTER trigger code, unlike @@IDENTITY.
Apart from using stored procs, this is OK.
I use AFTER triggers for auditing because they are convenient... that is, write to another table in my trigger.
Edit: SCOPE_IDENTITY and parallelism in SQL Server 2005 can have a problem
Have you tried using OUTPUT to get the value back instead?
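For what it's worth, a minimal sketch of the OUTPUT approach against the contrived TestData table above; because the table has a trigger, the OUTPUT clause must write INTO a target table or table variable rather than returning rows directly:
DECLARE @ids TABLE ([TestId] int);

-- OUTPUT ... INTO is allowed even when the target table has triggers
INSERT INTO [dbo].[TestData] ([Name])
OUTPUT inserted.[TestId] INTO @ids ([TestId])
VALUES (N'example');

SELECT [TestId] FROM @ids;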
Have you tried using:
SELECT scope_identity();
http://wiki.alphasoftware.com/Scope_Identity+in+SQL+Server+with+nested+and+INSTEAD+OF+triggers
You can use an INSTEAD OF trigger just fine, by capturing the value in the trigger just after the insert to the main table, then spoofing the Scope_Identity() into @@IDENTITY at the end of the trigger:
-- Inside of trigger
SET NOCOUNT ON;
INSERT dbo.YourTable VALUES(blah, blah, blah);
SET @YourTableID = Scope_Identity();
-- ... other DML that inserts to another identity-bearing table

-- Last statement in trigger
SELECT YourTableID INTO #Trash FROM dbo.YourTable WHERE YourTableID = @YourTableID;
Or, here's an alternate final statement that doesn't use any reads, but may cause permission issues if the executing user doesn't have rights (though there are solutions to this).
SET @SQL =
    'SELECT identity(smallint, ' + Str(@YourTableID) + ', 1) YourTableID INTO #Trash';
EXEC (@SQL);
Note that Scope_Identity() may return NULL on a table with an INSTEAD OF trigger on it in some cases, even if you use this spoofing method. But you can at least get the value using @@IDENTITY. This can make MS Access ADP projects start working right again after breaking because you put a trigger on a table that the front end inserts to.
Also, be aware that any parallelism at all can make @@IDENTITY and Scope_Identity() return incorrect values, so use OPTION (MAXDOP 1) or TOP 1 or a single-row VALUES clause to defeat this problem.
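For instance, a sketch of forcing a serial plan on a multi-row insert (dbo.StagingTable is a placeholder source):
-- A serial plan sidesteps the parallelism-related identity bug
INSERT dbo.YourTable ([Name])
SELECT [Name]
FROM dbo.StagingTable
OPTION (MAXDOP 1);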

Atomic Upgrade Scripts

With my database upgrade scripts, I typically just have one long script that makes the necessary changes for that database version. However, if one statement fails halfway through the script, it leaves the database in an inconsistent state.
How can I make the entire upgrade script one atomic operation? I've tried just wrapping all of the statements in a transaction, but that does not work. Even with SET XACT_ABORT ON, if one statement fails and rolls back the transaction, the rest of the statements just keep going. I would like a solution that doesn't require me to write IF @@TRANCOUNT > 0... before each and every statement. For example:
SET XACT_ABORT ON;
GO
BEGIN TRANSACTION;
GO
CREATE TABLE dbo.Customer
(
      CustomerID int NOT NULL
    , CustomerName varchar(100) NOT NULL
);
GO
CREATE TABLE [dbo].[Order]
(
      OrderID int NOT NULL
    , OrderDesc varchar(100) NOT NULL
);
GO
/* This causes an error and should terminate the entire script. */
ALTER TABLE dbo.Order2 ADD
    A int;
GO
CREATE TABLE dbo.CustomerOrder
(
      CustomerID int NOT NULL
    , OrderID int NOT NULL
);
GO
COMMIT TRANSACTION;
GO
The way Red-Gate and other comparison tools work is exactly as you describe... they check @@ERROR and @@TRANCOUNT after every statement, jam it into a #temp table, and at the end they check the #temp table. If any errors occurred, they roll back the transaction, else they commit. I'm sure you could alter whatever tool generates your change scripts to add this kind of logic. (Or instead of re-inventing the wheel, you could use a tool that already creates atomic scripts for you.)
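A rough hand-rolled sketch of that pattern (not Red-Gate's actual output): each batch checks @@ERROR and records its progress in a #temp table, and the final batch commits only if the transaction survived and every step checked in. This also handles batches that are skipped entirely due to compile errors, since their step never gets recorded.
CREATE TABLE #progress (Step int NOT NULL);
GO
BEGIN TRANSACTION;
GO
CREATE TABLE dbo.Customer (CustomerID int NOT NULL);
IF @@ERROR = 0 INSERT #progress VALUES (1);
GO
ALTER TABLE dbo.Order2 ADD A int;   -- fails: dbo.Order2 does not exist
IF @@ERROR = 0 INSERT #progress VALUES (2);
GO
-- Commit only if the transaction survived and every step succeeded
IF @@TRANCOUNT > 0 AND (SELECT COUNT(*) FROM #progress) = 2
    COMMIT TRANSACTION;
ELSE IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION;
GO
DROP TABLE #progress;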
Something like:
BEGIN TRY
    BEGIN TRANSACTION;
    -- ... your statements here ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
END CATCH
http://msdn.microsoft.com/en-us/library/ms175976.aspx