With my database upgrade scripts, I typically just have one long script that makes the necessary changes for that database version. However, if one statement fails halfway through the script, it leaves the database in an inconsistent state.
How can I make the entire upgrade script one atomic operation? I've tried just wrapping all of the statements in a transaction, but that does not work. Even with SET XACT_ABORT ON, if one statement fails and rolls back the transaction, the rest of the statements just keep going. I would like a solution that doesn't require me to write IF @@TRANCOUNT > 0... before each and every statement. For example:
SET XACT_ABORT ON;
GO
BEGIN TRANSACTION;
GO
CREATE TABLE dbo.Customer
(
CustomerID int NOT NULL
, CustomerName varchar(100) NOT NULL
);
GO
CREATE TABLE [dbo].[Order]
(
OrderID int NOT NULL
, OrderDesc varchar(100) NOT NULL
);
GO
/* This causes error and should terminate entire script. */
ALTER TABLE dbo.Order2 ADD
A int;
GO
CREATE TABLE dbo.CustomerOrder
(
CustomerID int NOT NULL
, OrderID int NOT NULL
);
GO
COMMIT TRANSACTION;
GO
The way Red Gate and other comparison tools work is exactly as you describe... they check @@ERROR and @@TRANCOUNT after every statement, jam it into a #temp table, and at the end they check the #temp table. If any errors occurred, they roll back the transaction; otherwise they commit. I'm sure you could alter whatever tool generates your change scripts to add this kind of logic. (Or instead of re-inventing the wheel, you could use a tool that already creates atomic scripts for you.)
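Sketched out, the pattern such tools emit looks roughly like this (#tmpErrors and the exact checks are illustrative, not any particular tool's verbatim output):
CREATE TABLE #tmpErrors (Error int);
GO
BEGIN TRANSACTION;
GO
CREATE TABLE dbo.Customer
(
CustomerID int NOT NULL
, CustomerName varchar(100) NOT NULL
);
GO
-- Repeated after every statement: if it failed, roll back,
-- record the error, and reopen a transaction so the script can continue
IF @@ERROR <> 0 AND @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
GO
IF @@TRANCOUNT = 0
BEGIN
    INSERT INTO #tmpErrors (Error) SELECT 1;
    BEGIN TRANSACTION;
END
GO
-- ...the remaining statements, each followed by the same two checks...
IF EXISTS (SELECT 1 FROM #tmpErrors) AND @@TRANCOUNT > 0
    ROLLBACK TRANSACTION;
GO
IF @@TRANCOUNT > 0
    COMMIT TRANSACTION
ELSE
    PRINT 'The upgrade failed; all changes were rolled back.';
GO
DROP TABLE #tmpErrors;
GO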
Something like:
BEGIN TRY
    ....
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;
END CATCH
http://msdn.microsoft.com/en-us/library/ms175976.aspx
I'm using raw SQL and trying to do an UPDATE and then an INSERT on the same table. Do you think I need some type of transaction or something?
If I add an INSERT and then an UPDATE in the same script file, will the order of execution be respected by SQL Server?
Ended up with this so far:
IF NOT EXISTS (SELECT TOP 1 * FROM dbo.Settings
WHERE Descr = 'GL for Credit Memo')
BEGIN
SET IDENTITY_INSERT dbo.Settings ON
DECLARE @newOrderNo INT;
SET @newOrderNo = 7 -- OrderNo for credit memo to display below 'GL for A/P'
-- Update order no. for settings below 'GL for Credit Memo'
UPDATE dbo.Settings
SET [OrderNo] = [OrderNo] + 1
WHERE [OrderNo] >= @newOrderNo
INSERT INTO [dbo].[Settings]
([SettingID],
[Created],
[Descr],
[Category],
[OrderNo],
[DataType],
[InActive],
[IsRequired])
VALUES
(198, -- New Enum value for setting ID for 'Gl for Credit Memo'
DEFAULT,
'GL for Credit Memo',
'Accounting',
@newOrderNo,
4,
NULL,
1)
SET IDENTITY_INSERT dbo.Settings OFF
END
Statements in a SQL script are always executed in order. There is no reordering except within a single statement (for example, the order in which rows are updated by a single UPDATE is undefined).
You probably do want a transaction here, if you want to ensure that the code works in an "all-or-nothing" fashion.
But you do not need any complex catch/rollback code. Contrary to popular opinion, it is almost never necessary, and usually actively harmful.
The only thing you need, and this you must do to ensure proper rollbacks, is SET XACT_ABORT ON. It tells the server that if any statement raises a run-time error, the entire transaction is terminated and rolled back.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
-- your code here
COMMIT;
If you are executing in SSMS then you will see any errors. If executing from a client app or script, you can catch the error in the client app, then show the error to the user and/or log it.
In SQL Server, autocommit is the default transaction mode. That means that each individual SQL statement opens a transaction in the background, executes, and then commits the transaction if the statement succeeds or rolls it back if the statement fails, all behind the scenes.
You can override that behavior by issuing a BEGIN TRANSACTION statement, which will open a transaction and keep it open until you issue an explicit COMMIT TRANSACTION or ROLLBACK statement. Usually the ROLLBACK will live in the CATCH clause of a TRY...CATCH. Everything between those explicit statements will be treated as a single transaction.
In your case, the UPDATE will be in one transaction and the INSERT will be in another.
Here's the Microsoft documentation.
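For example, a minimal sketch of wrapping both statements in one explicit transaction (table and column names here are placeholders):
BEGIN TRY
    BEGIN TRANSACTION;

    -- placeholders for your actual UPDATE and INSERT
    UPDATE dbo.MyTable SET SomeColumn = 'updated' WHERE ID = 1;
    INSERT INTO dbo.MyTable (ID, SomeColumn) VALUES (2, 'inserted');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- any error lands here; undo whatever part already ran
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH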
I'm looking at a trivial query and struggling to understand why SQL Server cannot execute it.
Say I have a table
CREATE TABLE [dbo].[t2](
[id] [nvarchar](36) NULL,
[name] [nvarchar](36) NULL
)
And I want to add a new column and set some value to it. So I do the following:
BEGIN TRANSACTION
ALTER TABLE [t2] ADD [name2] [nvarchar](255) NULL
UPDATE [t2] SET [name2] = CONCAT(name, '-XXXX')
COMMIT TRANSACTION
And if I execute the query, I get an error: Invalid column name 'name2'.
I know it's failing because SQL Server executes the query in a different order for optimization purposes, and one way to fix it would be to separate those two statements with a GO statement. Thus the following query passes without issues.
BEGIN TRANSACTION
ALTER TABLE [t2] ADD [name2] [nvarchar](255) NULL
GO
UPDATE [t2] SET [name2] = CONCAT(name, '-XXXX')
COMMIT TRANSACTION
Actually, not exactly without issues, as I have to use a GO statement, which makes the transaction scope useless, as discussed in this Stack Overflow question.
So I have two questions:
How can I make that script work without using the GO statement?
Why is SQL Server not smart enough to figure out such a trivial case? (it is more like a rhetorical question)
This is a parser error. When you run a batch, it is parsed beforehand; however, only certain DDL operations are "cached" by the parser so that it is aware of them later. CREATE is something it will "cache"; ALTER, however, is not. That is why you can CREATE a table and then reference it in the same batch.
As you have an ALTER, when the parser parses the batch and gets to the UPDATE statement, it fails, and the error you see is raised. One method is to defer the parsing of the statement:
BEGIN TRANSACTION;
ALTER TABLE [t2] ADD [name2] [nvarchar](255) NULL;
EXEC sys.sp_executesql N'UPDATE [t2] SET [name2] = CONCAT(name, N''-XXXX'');';
COMMIT TRANSACTION;
If, however, N'-XXXX' is meant to be the default value, you could qualify that in the DDL statement instead:
BEGIN TRANSACTION;
ALTER TABLE t2 ADD name2 nvarchar(255) NULL DEFAULT N'-XXXX' WITH VALUES;
COMMIT TRANSACTION;
I have the following situation: a stored procedure gathers data, performs the necessary joins, and inserts the results into a temp table (e.g., #Results).
Now, what I want to do is insert all the records from #Results into a table that was previously created but I first want to remove (truncate/delete) the destination and then insert the results. The catch is putting this process of cleaning the destination table and then inserting the new #Results in a transaction.
I did the following:
BEGIN TRANSACTION
DELETE FROM PracticeDB.dbo.TransTable
IF @@ERROR <> 0
ROLLBACK TRANSACTION
ELSE
BEGIN
INSERT INTO PracticeDB.dbo.TransTable
(
[R_ID]
,[LASTNAME]
,[FIRSTNAME]
,[DATASOURCE]
,[USER_STATUS]
,[Salary]
,[Neet_Stat]
)
SELECT [R_ID]
,[LASTNAME]
,[FIRSTNAME]
,[DATASOURCE]
,[USER_STATUS]
,[Salary]
,[Neet_Stat]
FROM #RESULT
Select @@TRANCOUNT TransactionCount, @@ERROR ErrorCount
IF @@ERROR <> 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
END
but I know it isn't working properly, and I'm having a hard time finding an example like this, though I don't know why, since it seems like something common. In this case it still deletes the target table even though the insert fails.
More than anything, some guidance would be nice as to how best to approach this situation, or best practices in a similar case (what's best to use and so forth). Thank you in advance...
I'm really not seeing anything wrong with this. So it DOES delete from your TransTable, but doesn't do the insert? Are you sure #RESULT has records in it?
The only thing that I see is that you're checking @@ERROR after Select @@TRANCOUNT TransactionCount, @@ERROR ErrorCount, which means @@ERROR is going to be from your SELECT statement and not the INSERT statement (although I would always expect that to be 0).
For more info on @@ERROR, see: http://msdn.microsoft.com/en-us/library/ms188790.aspx
You should check @@ERROR after each statement.
As far as best practices go, I think Microsoft now recommends TRY/CATCH instead of checking @@ERROR after each statement (as of SQL Server 2005 and later). Take a look at Example B here: http://msdn.microsoft.com/en-us/library/ms175976.aspx
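For instance, a minimal sketch of the same delete-and-reload written with TRY/CATCH (using the same table names as your script):
BEGIN TRY
    BEGIN TRANSACTION;

    DELETE FROM PracticeDB.dbo.TransTable;

    INSERT INTO PracticeDB.dbo.TransTable
        ([R_ID], [LASTNAME], [FIRSTNAME], [DATASOURCE],
         [USER_STATUS], [Salary], [Neet_Stat])
    SELECT [R_ID], [LASTNAME], [FIRSTNAME], [DATASOURCE],
           [USER_STATUS], [Salary], [Neet_Stat]
    FROM #RESULT;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- if the INSERT (or the DELETE) fails, the DELETE is undone too
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH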
I work using SQL Server. I have 2 tables, Animal and Cat. When I add a new cat to the database, I want to update both tables. I should add the cat to the Animal table first, so that I can add the animal_Id to the Cat table afterwards.
Is there a way to add the record to two tables at the same time? If there isn't, what is the best way to do it?
I just want an idea.
If you use a transaction, both inserts will be done, at least logically, "at the same time".
That means that no other query, done from outside of the transaction, can see the database "between the inserts". And if there is a failure between the two inserts (and no effective commit), the final state will ignore the first insert.
In order to get the id of a row just added in your session, use SCOPE_IDENTITY.
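A minimal sketch, assuming Animal has an identity column animal_Id (the Species and Name columns are made-up names):
BEGIN TRANSACTION;

-- parent row first; Species is a hypothetical column
INSERT INTO dbo.Animal (Species) VALUES ('Cat');

-- SCOPE_IDENTITY() returns the identity value generated by the INSERT above
DECLARE @animalId int;
SET @animalId = SCOPE_IDENTITY();

-- child row references the new id; Name is a hypothetical column
INSERT INTO dbo.Cat (animal_Id, Name) VALUES (@animalId, 'Kitty');

COMMIT TRANSACTION;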
You can't use INSERT against two tables in one statement.
SET XACT_ABORT ON
BEGIN TRANSACTION
INSERT INTO [A](...) VALUES(...);
INSERT INTO [B](...) VALUES(...);
COMMIT TRANSACTION
SET XACT_ABORT OFF
The transaction is there to make sure that either everything is committed or nothing is. SET XACT_ABORT ON ensures that if one insert fails with an error (so the COMMIT TRANSACTION is never reached), the whole transaction is forced to roll back.
I would suggest using a transaction here. For example (if you know the Id of the new row beforehand):
DECLARE @CAT TABLE(id int, name varchar(50));
DECLARE @ANIMAL TABLE(id int);
DECLARE @animalId INT = 1;
BEGIN TRAN
INSERT INTO @ANIMAL VALUES(@animalId);
INSERT INTO @CAT VALUES(@animalId, 'Kitty');
COMMIT TRAN
SELECT * FROM @CAT;
SELECT * FROM @ANIMAL;
You can use @@IDENTITY in the case of auto-increment columns.
Use triggers. That is the best way.
How about using a trigger on insertion into one table?
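A minimal sketch of that trigger idea (assuming Animal.animal_Id is an identity column; whether it fits depends on your schema):
-- Hypothetical sketch: whenever an Animal row is inserted,
-- automatically create the matching Cat row.
CREATE TRIGGER dbo.Animal_AfterInsert
ON dbo.Animal
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Cat (animal_Id)
    SELECT animal_Id
    FROM inserted;
END
Note that this fires for every Animal insert, so in practice you would need some way to tell cats apart from other animals, and the Cat row's remaining columns would still have to be filled in afterwards.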
OK, I have a table with no natural key, only an integer identity column as its primary key. I'd like to insert and retrieve the identity value, but also use a trigger to ensure that certain fields are always set. Originally, the design was to use INSTEAD OF INSERT triggers, but that breaks SCOPE_IDENTITY. The OUTPUT clause on the INSERT statement is also broken by an INSTEAD OF INSERT trigger. So, I've come up with an alternate plan and would like to know if there is anything obviously wrong with what I intend to do:
begin contrived example:
CREATE TABLE [dbo].[TestData] (
[TestId] [int] IDENTITY(1,1) PRIMARY KEY NOT NULL,
[Name] [nchar](10) NOT NULL)
CREATE TABLE [dbo].[TestDataModInfo](
[TestId] [int] PRIMARY KEY NOT NULL,
[RowCreateDate] [datetime] NOT NULL)
ALTER TABLE [dbo].[TestDataModInfo] WITH CHECK ADD CONSTRAINT
[FK_TestDataModInfo_TestData] FOREIGN KEY([TestId])
REFERENCES [dbo].[TestData] ([TestId]) ON DELETE CASCADE
CREATE TRIGGER [dbo].[TestData$AfterInsert]
ON [dbo].[TestData]
AFTER INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
INSERT INTO [dbo].[TestDataModInfo]
([TestId],
[RowCreateDate])
SELECT
[TestId],
current_timestamp
FROM inserted
END
End contrived example.
No, I'm not doing this for one little date field - it's just an example.
The fields that I want to ensure are set have been moved to a separate table (in TestDataModInfo) and the trigger ensures that it's updated. This works: it allows me to use SCOPE_IDENTITY() after inserts, and it appears to be safe (if my AFTER trigger fails, my insert fails). Is this bad design, and if so, why?
As you mentioned, SCOPE_IDENTITY is designed for this situation. It's not affected by AFTER trigger code, unlike @@IDENTITY.
Apart from using stored procs, this is OK.
I use AFTER triggers for auditing because they are convenient... that is, I write to another table in my trigger.
Edit: SCOPE_IDENTITY and parallelism in SQL Server 2005 can have a problem
Have you tried using OUTPUT to get the value back instead?
Have you tried using:
SELECT scope_identity();
http://wiki.alphasoftware.com/Scope_Identity+in+SQL+Server+with+nested+and+INSTEAD+OF+triggers
You can use an INSTEAD OF trigger just fine, by capturing the value in the trigger just after the insert to the main table, then spoofing the Scope_Identity() into @@Identity at the end of the trigger:
-- Inside of trigger
SET NOCOUNT ON;
INSERT dbo.YourTable VALUES(blah, blah, blah);
DECLARE @YourTableID int;
SET @YourTableID = Scope_Identity();
-- ... other DML that inserts to another identity-bearing table
-- Last statement in trigger
SELECT YourTableID INTO #Trash FROM dbo.YourTable WHERE YourTableID = @YourTableID;
Or, here's an alternate final statement that doesn't use any reads, but may cause permission issues if the executing user doesn't have rights (though there are solutions to this).
DECLARE @SQL varchar(500);
SET @SQL =
'SELECT identity(smallint, ' + Str(@YourTableID) + ', 1) YourTableID INTO #Trash';
EXEC (@SQL);
Note that Scope_Identity() may return NULL on a table with an INSTEAD OF trigger on it in some cases, even if you use this spoofing method. But you can at least get the value using @@Identity. This can make MS Access ADP projects start working right again after breaking because you put a trigger on a table that the front end inserts to.
Also, be aware that any parallelism at all can make @@Identity and Scope_Identity() return incorrect values, so use OPTION (MAXDOP 1) or TOP 1 or a single-row VALUES clause to defeat this problem.
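For instance, a hedged sketch of the hint (dbo.YourTable as above; dbo.Staging and Col1 are made-up names):
-- Force a serial plan so the identity functions aren't affected by parallelism
INSERT INTO dbo.YourTable (Col1)
SELECT Col1 FROM dbo.Staging
OPTION (MAXDOP 1);

SELECT Scope_Identity();  -- last identity value generated in this scope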