Possible Duplicate:
Is it possible to run multiple DDL statements inside a transaction (within SQL Server)?
If I have the following script:
BEGIN TRAN
GO
ALTER TABLE [dbo].[Table1] CHECK CONSTRAINT [FK_1]
GO
ALTER TABLE [dbo].[Users] CHECK CONSTRAINT [FK_2]
GO
COMMIT TRAN
The transaction is not working; each statement still runs in its own transaction. For example, if statement 1 fails, statement 2 still executes when the script runs.
How can I make the transaction apply to the DDL statements?
You're running the DDL in separate batches, so if your first statement raises anything less than a connection-terminating error (a hardware problem, etc.), the second batch will still run.
Management Studio treats GO as a batch separator and runs each batch separately.
You could use SET XACT_ABORT ON to automatically roll back your transaction in the event of an error. You can also remove the GO statements, as ALTER TABLE statements do not need to be run in separate batches.
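For example, a minimal sketch of the original script with both suggestions applied (GO separators removed, XACT_ABORT on):
SET XACT_ABORT ON;

BEGIN TRAN;

-- With XACT_ABORT ON, an error in either statement aborts the batch
-- and rolls back the entire transaction.
ALTER TABLE [dbo].[Table1] CHECK CONSTRAINT [FK_1];
ALTER TABLE [dbo].[Users] CHECK CONSTRAINT [FK_2];

COMMIT TRAN;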
MagicMike is right, but I implemented another solution that I know to be reliable (even if his solution is more elegant).
FYI, here is my solution with two transactions and clean error handling (the @@ERROR function exists in SQL Server; check for the equivalent in your RDBMS. In Oracle it would be something like "EXCEPTION WHEN OTHERS" instead of "IF (@@ERROR = 0)"):
BEGIN TRAN
ALTER TABLE [dbo].[Table1] CHECK CONSTRAINT [FK_1]
IF (@@ERROR = 0)
BEGIN
    COMMIT TRAN
END
ELSE
BEGIN
    ROLLBACK TRAN
END

BEGIN TRAN
ALTER TABLE [dbo].[Users] CHECK CONSTRAINT [FK_2]
IF (@@ERROR = 0)
BEGIN
    COMMIT TRAN
END
ELSE
BEGIN
    ROLLBACK TRAN
END
You don't need to disable or enable anything for the DDL commands. You can just use:
Begin Try
.......
End Try
Begin Catch
.......
End Catch
In terms of your example, you can do it this way:
begin try
ALTER TABLE [dbo].temp CHECK CONSTRAINT [FK_1]
--GO
ALTER TABLE [dbo].temp CHECK CONSTRAINT [FK_2]
--GO
end try
begin catch
print 'Error in the Try Block'
end catch
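Note that TRY/CATCH alone will not undo the first ALTER if the second one fails; for all-or-nothing behavior, wrap the block in an explicit transaction. A sketch combining the two patterns:
begin try
    begin tran
    ALTER TABLE [dbo].temp CHECK CONSTRAINT [FK_1]
    ALTER TABLE [dbo].temp CHECK CONSTRAINT [FK_2]
    commit tran
end try
begin catch
    if @@TRANCOUNT > 0
        rollback tran
    print 'Error in the Try Block'
end catch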
Related
I have a stored procedure that basically inserts from one table to another.
While this procedure is running, I don't want anyone else to be able to start it. However, I don't want serialization either, where the other person's call simply runs after mine finishes.
What I want is for the other person trying to start it to get an error while I am running the procedure.
I've tried using sp_getapplock, but I can't manage to completely stop the person from running the procedure.
I also tried finding the procedure with sys.dm_exec_requests and blocking on it. While this does work, I think it's not optimal, because on some servers I don't have the permissions to run sys.dm_exec_sql_text(sql_handle).
What is the best way for me to do this?
Cunning stunts:
ALTER PROC MyProc
AS
BEGIN TRY
    -- The global temp table acts as a server-wide "lock" flag
    IF OBJECT_ID('tempdb..##lock_proc_MyProc') IS NOT NULL
        RAISERROR('Not now.', 16, 0)
    ELSE
        EXEC('CREATE TABLE ##lock_proc_MyProc (dummy int)')
    ...
    EXEC('DROP TABLE ##lock_proc_MyProc')
END TRY
BEGIN CATCH
    ...
    EXEC('DROP TABLE ##lock_proc_MyProc')
    ...
END CATCH
GO
This can be extended by storing the spid in the ## table and watching for zombie ## tables left behind by dead sessions.
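A hedged sketch of that extension (the spid column and the zombie check are illustrations, not from the original answer):
-- When taking the lock, record the creator's session id:
EXEC('CREATE TABLE ##lock_proc_MyProc (spid int)')
INSERT ##lock_proc_MyProc VALUES (@@SPID)

-- Later callers can reclaim a zombie: a lock table whose creating
-- session no longer appears in sys.dm_exec_sessions.
IF OBJECT_ID('tempdb..##lock_proc_MyProc') IS NOT NULL
BEGIN
    IF NOT EXISTS (SELECT 1
                   FROM ##lock_proc_MyProc l
                   JOIN sys.dm_exec_sessions s ON s.session_id = l.spid)
        EXEC('DROP TABLE ##lock_proc_MyProc')
END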
Also, you can just raise the isolation level/lock granularity with hints like TABLOCK or UPDLOCK:
ALTER PROC MyProc
AS
BEGIN TRAN
DECLARE @dummy INT
SELECT @dummy = 1
FROM t WITH (TABLOCKX, HOLDLOCK)
WHERE 1 = 0
...
COMMIT TRAN
This will have a different effect: concurrent callers will wait rather than fail.
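As a side note on the sp_getapplock attempt mentioned in the question: passing @LockTimeout = 0 makes the call fail fast instead of waiting. A sketch (the resource name is arbitrary):
ALTER PROC MyProc
AS
BEGIN TRAN

DECLARE @rc INT

-- Try to take an exclusive app lock without waiting; a negative
-- return code means another session already holds it.
EXEC @rc = sp_getapplock
    @Resource = 'MyProc_lock',
    @LockMode = 'Exclusive',
    @LockOwner = 'Transaction',
    @LockTimeout = 0

IF @rc < 0
BEGIN
    ROLLBACK TRAN
    RAISERROR('Not now.', 16, 0)
    RETURN
END

-- ... do the work ...

COMMIT TRAN -- releases the transaction-owned app lock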
I am working on an existing application that uses SQL Server 2014 on the backend. I've found that the pattern used to commit transactions looks like this:
USE AdventureWorks;
GO
BEGIN TRANSACTION;
GO
DELETE FROM HumanResources.JobCandidate WHERE JobCandidateID = 10;
DELETE FROM HumanResources.JobCandidate WHERE JobCandidateID = 11;
DELETE FROM HumanResources.JobCandidate WHERE JobCandidateID = 12;
GO
COMMIT TRANSACTION;
GO
I am wondering: if the query fails at the COMMIT TRANSACTION statement, do I need to have a ROLLBACK statement there?
According to this question, Can a COMMIT statement (in SQL) ever fail? How?, a COMMIT TRAN can fail. But do I have to roll it back myself, since the transaction hasn't committed successfully? Would SQL Server roll it back automatically when the connection is closed?
Please point me to the documentation on MSDN or wherever you got the information.
I believe it will, after the connection is closed. You should not count on this, though; there are many factors to consider, including connection pooling. I suggest you look into SET XACT_ABORT ON and/or using a TRY/CATCH block.
The way that I use transactions:
Begin Try
    Begin Tran
    -- do some work here...
    Commit Tran
End Try
Begin Catch
    If ( @@TranCount > 0 )
        Rollback Tran
End Catch
Doing your transaction commit/rollback in a TRY/CATCH is probably a best practice.
If, however, you want your code to automatically roll back all of the statements in the transaction, you need to add SET XACT_ABORT ON somewhere before the BEGIN TRAN statement. XACT_ABORT automatically rolls back all of the statements in a transaction if any of them fail. To understand its effect, execute the following code with XACT_ABORT set to ON and then OFF, and observe the contents of the table. The first INSERT inside the transaction will always fail because of a primary key violation: with XACT_ABORT ON the whole transaction is rolled back and the table keeps only its original row, while with it OFF the second INSERT commits and the table ends up with rows 1 and 2.
use tempdb
go
if exists (select * from sys.tables where name='t') drop table t
go
create table t (id int not null primary key)
go
insert t values(1)
go
set xact_abort on
begin transaction
insert t values(1)
insert t values(2)
commit transaction
go
select * from t
You can use TRY...CATCH like the below (note the explicit BEGIN TRANSACTION, without which there is nothing for COMMIT or ROLLBACK to act on).
BEGIN TRY
    BEGIN TRANSACTION;

    DELETE
    FROM HumanResources.JobCandidate
    WHERE JobCandidateID = 10;

    DELETE
    FROM HumanResources.JobCandidate
    WHERE JobCandidateID = 11;

    DELETE
    FROM HumanResources.JobCandidate
    WHERE JobCandidateID = 12;

    COMMIT;
END TRY
BEGIN CATCH
    ROLLBACK;

    SELECT Db_name()
          ,CONVERT(NVARCHAR(15), Error_number())
          ,CONVERT(NVARCHAR(10), Error_line())
          ,Error_message();
END CATCH
I'm trying to write a single T-SQL script which will upgrade a system which is currently in deployment. The script will contain a mixture of:
New tables
New columns on existing tables
New functions
New stored procedures
Changes to stored procedures
New views
etc.
As it's a reasonably large upgrade, I want the script to roll back if a single part of it fails. I have an outline of my attempted code below:
DECLARE @upgrade NVARCHAR(32);
SELECT @upgrade = 'my upgrade';
BEGIN TRANSACTION @upgrade
BEGIN
PRINT 'Starting';
BEGIN TRY
CREATE TABLE x ( --blah...
);
ALTER TABLE y --blah...
);
CREATE PROCEDURE z AS BEGIN ( --blah...
END
GO --> this is causing trouble!
CREATE FUNCTION a ( --blah...
END TRY
BEGIN CATCH
PRINT 'Error with transaction. Code: ' + @@ERROR + '; Message: ' + ERROR_MESSAGE();
ROLLBACK TRANSACTION @upgrade;
PRINT 'Rollback complete';
RETURN;
END CATCH
END
PRINT 'Upgrade successful';
COMMIT TRANSACTION @upgrade;
GO
Note - I know some of the syntax is not perfect - I'm having to re-key the code
It seems as though I can't put Stored Procedures into a transaction block. Is there a reason for this? Is it because of the use of the word GO? If so, how can I put SPs into a transaction block? What are the limitations as to what can go into a transaction block? Or, what would be a better alternative to what I'm trying to achieve?
Thanks
As Thomas Haratyk said in his answer, your issue was the "go". However, you can have as many batches in a transaction as you want. It's the try/catch that doesn't like this. Here's a simple proof-of-concept:
-- works: a transaction can span multiple batches
begin tran
go
select 1
go
select 2
go
rollback
go

-- fails: a TRY/CATCH block cannot span batches
begin try
select 1
go
select 2
go
end try
begin catch
select 1
end catch
Remove the GO and create your procedure by using dynamic sql or it will fail.
EXEC ('create procedure z
as
begin
print ''hello world''
end')
GO is not a SQL keyword; it is a batch separator interpreted by client tools, so it cannot appear inside the body of a procedure definition or a single statement block.
Please refer to those topics for further information :
sql error:'CREATE/ALTER PROCEDURE' must be the first statement in a query batch?
Using "GO" within a transaction
http://msdn.microsoft.com/en-us/library/ms188037.aspx
I am creating a script that will be run on a MS SQL server. This script will run multiple statements and needs to be transactional: if one of the statements fails, the overall execution is stopped and any changes are rolled back.
I am having trouble creating this transactional model when issuing ALTER TABLE statements to add columns to a table and then updating the newly added column. In order to access the newly added column right away, I use a GO command to execute the ALTER TABLE statement, and then call my UPDATE statement. The problem I am facing is that I cannot issue a GO command inside an IF statement. The IF statement is important within my transactional model. This is a sample of the script I am trying to run. Also notice that issuing a GO command discards the @errorCode variable, so it would need to be declared again further down the code before being used (this is not shown in the code below).
BEGIN TRANSACTION

DECLARE @errorCode INT
SET @errorCode = @@ERROR

-- **********************************
-- * Settings
-- **********************************

IF @errorCode = 0
BEGIN
    BEGIN TRY
        ALTER TABLE Color ADD [CodeID] [uniqueidentifier] NOT NULL DEFAULT ('{00000000-0000-0000-0000-000000000000}')
        GO
    END TRY
    BEGIN CATCH
        SET @errorCode = @@ERROR
    END CATCH
END

IF @errorCode = 0
BEGIN
    BEGIN TRY
        UPDATE Color
        SET CodeID = 'B6D266DC-B305-4153-A7AB-9109962255FC'
        WHERE [Name] = 'Red'
    END TRY
    BEGIN CATCH
        SET @errorCode = @@ERROR
    END CATCH
END

-- **********************************
-- * Check @errorCode to issue a COMMIT or a ROLLBACK
-- **********************************

IF @errorCode = 0
BEGIN
    COMMIT
    PRINT 'Success'
END
ELSE
BEGIN
    ROLLBACK
    PRINT 'Failure'
END
So what I would like to know is how to get around this problem of issuing ALTER TABLE statements to add a column and then updating that column, all within a script executing as a transactional unit.
GO is not a T-SQL command; it is a batch delimiter. The client tool (SSMS, sqlcmd, osql, etc.) uses it to effectively cut the file at each GO and send the individual batches to the server. So obviously you cannot use GO inside an IF, nor can you expect variables to span scope across batches.
Also, you cannot properly catch exceptions without checking XACT_STATE() to ensure the transaction is not doomed.
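A minimal sketch of such a check in a CATCH block (an illustration, not from the original answer):
BEGIN TRY
    BEGIN TRANSACTION;
    -- ... work ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- XACT_STATE() = -1 means the transaction is doomed: it cannot
    -- be committed and must be rolled back before doing anything else.
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
END CATCH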
Using GUIDs for IDs is always at least suspicious.
Using NOT NULL constraints and providing a default 'guid' like '{00000000-0000-0000-0000-000000000000}' also cannot be correct.
Updated:
Separate the ALTER and UPDATE into two batches.
Use sqlcmd extensions to break the script on error. This is supported by SSMS when sqlcmd mode is on and by sqlcmd itself, and it is trivial to support in client libraries too: dbutilsqlcmd.
Use XACT_ABORT to force an error to interrupt the batch. This is frequently used in maintenance scripts (schema changes). Stored procedures and application logic scripts generally use TRY/CATCH blocks instead, but with proper care: see Exception handling and nested transactions.
example script:
:on error exit
set xact_abort on;
go
begin transaction;
go
if columnproperty(object_id('Code'), 'ColorId', 'AllowsNull') is null
begin
alter table Code add ColorId uniqueidentifier null;
end
go
update Code
set ColorId = '...'
where ...
go
commit;
go
Only a successful script will reach the COMMIT. Any error will abort the script and roll back.
I used COLUMNPROPERTY to check for column existence; you could use any method you like instead (e.g. look it up in sys.columns).
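For instance, the sys.columns variant of the existence check might look like this (a sketch, matching the script above):
if not exists (select 1
               from sys.columns
               where object_id = object_id('Code')
                 and name = 'ColorId')
begin
    alter table Code add ColorId uniqueidentifier null;
end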
Orthogonal to Remus's comments, what you can do is execute the update in an sp_executesql.
ALTER TABLE [Table] ADD [Xyz] NVARCHAR(256);

DECLARE @sql NVARCHAR(2048) = 'UPDATE [Table] SET [Xyz] = ''abcd'';';
EXEC sys.sp_executesql @stmt = @sql;
We've needed to do this when creating upgrade scripts. Usually we just use GO, but it has been necessary to do things conditionally.
I almost agree with Remus, but you can do this with SET XACT_ABORT ON and XACT_STATE().
Basically:
SET XACT_ABORT ON will abort each batch on error and issue a ROLLBACK.
Each batch is separated by GO.
Execution jumps to the next batch on error.
XACT_STATE() will test whether the transaction is still valid.
Tools like Red Gate SQL Compare use this technique.
Something like:
SET XACT_ABORT ON
GO
BEGIN TRANSACTION
GO
IF COLUMNPROPERTY(OBJECT_ID('Color'), 'CodeID', 'ColumnId') IS NULL
    ALTER TABLE Color ADD CodeID [uniqueidentifier] NULL
GO
IF XACT_STATE() = 1
    UPDATE Color
    SET CodeID = 'B6D266DC-B305-4153-A7AB-9109962255FC'
    WHERE [Name] = 'Red'
GO
IF XACT_STATE() = 1
    COMMIT TRAN
--else it would have been rolled back
I've also removed the default. No value = NULL for GUID values. A GUID is meant to be unique: don't try to set every row to all zeros, because it will end in tears...
Have you tried it without the GO?
Normally you should not mix table changes and data changes in the same script.
Another alternative, if you don't want to split the code into separate batches, is to use EXEC to create a nested scope/batch, as here:
Can I run dynamic SQL in a transaction and roll it back using EXEC?
exec('SELECT * FROM TableA; SELECT * FROM TableB;');
Put this in a transaction and use @@ERROR after the EXEC statement to do rollbacks.
e.g.:
BEGIN TRANSACTION
exec('SELECT * FROM TableA; SELECT * FROM TableB;');
IF @@ERROR != 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN
END
ELSE
    COMMIT TRANSACTION
If there are n dynamic SQL statements and the error occurs in statement n/2, will statements 1 through (n/2 - 1) be rolled back?
Questions about the first answer
"@@ERROR won't pick up the error most likely"
Which means that it might not pick up the error, so the transaction might commit anyway? That defeats the purpose.
"TRY/CATCH in SQL Server 2005+"
Yes, I am using SQL Server 2005, but I haven't used TRY/CATCH before.
Would doing the below do the trick?
BEGIN TRANSACTION
BEGIN TRY
exec('SELECT * FROM TableA; SELECT * FROM TableB;');
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
END CATCH
Or, from some more examples I looked at on the net:
BEGIN TRY -- Start the Try Block..
    BEGIN TRANSACTION -- Start the transaction..
    exec('SELECT * FROM TableA; SELECT * FROM TableB;');
    COMMIT TRAN -- Transaction Success!
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN -- Roll back in case of error
    RAISERROR(ERROR_MESSAGE(), ERROR_SEVERITY(), 1)
END CATCH
Yes. The TXNs belong to the current session/connection and dynamic SQL uses the same context.
However, @@ERROR most likely won't pick up the error: the status has to be checked immediately after the offending statement. I'd use TRY/CATCH, assuming SQL Server 2005+.
Edit: The TRY/CATCH should work OK.
Don't take our word for it that TRY/CATCH will work; test it yourself. Since this is dynamic SQL, the easiest thing to do is to make the first statement correct (and of course it needs to be an UPDATE, INSERT, or DELETE, or there is no need for a transaction) and then make a deliberate syntax error in the second statement. Then test whether the UPDATE, INSERT, or DELETE in the first statement went through.
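A sketch of that test: TableA and TableB are from the question, SomeCol is a hypothetical column, and the second EXEC contains a deliberate "SELCT" typo:
BEGIN TRY
    BEGIN TRANSACTION
    exec('UPDATE TableA SET SomeCol = 1;') -- valid, runs first
    exec('SELCT * FROM TableB;')           -- deliberate syntax error
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION
END CATCH

-- If the rollback worked, SomeCol should be unchanged:
SELECT SomeCol FROM TableA;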
I also want to point out that dynamic SQL as a rule is a poor practice. Does this really need to be dynamic?