Deleting and then inserting into a target table in a transaction - sql

I have the following situation: a stored procedure gathers data, performs the necessary joins, and inserts the results into a temp table (e.g. #Results).
Now what I want to do is insert all the records from #Results into a table that was created previously, but I first want to empty (truncate/delete) the destination table and then insert the results. The catch is wrapping this process of cleaning the destination table and then inserting the new #Results rows in a single transaction.
I did the following:
BEGIN TRANSACTION
DELETE FROM PracticeDB.dbo.TransTable
IF @@ERROR <> 0
    ROLLBACK TRANSACTION
ELSE
BEGIN
    INSERT INTO PracticeDB.dbo.TransTable
    (
        [R_ID]
        ,[LASTNAME]
        ,[FIRSTNAME]
        ,[DATASOURCE]
        ,[USER_STATUS]
        ,[Salary]
        ,[Neet_Stat]
    )
    SELECT [R_ID]
        ,[LASTNAME]
        ,[FIRSTNAME]
        ,[DATASOURCE]
        ,[USER_STATUS]
        ,[Salary]
        ,[Neet_Stat]
    FROM #RESULT

    SELECT @@TRANCOUNT TransactionCount, @@ERROR ErrorCount

    IF @@ERROR <> 0
        ROLLBACK TRANSACTION
    ELSE
        COMMIT TRANSACTION
END
but I know it isn't working properly, and I'm having a hard time finding an example like this, though I don't know why, considering it seems like something common. In this case it still deletes the target table's rows even though the insert fails.
More than anything, some guidance would be nice on how best to approach this situation, or best practices in a similar case (what's best to use and so forth). Thank you in advance...

I'm really not seeing anything wrong with this. So it DOES delete from your TransTable, but doesn't do the insert? Are you sure #RESULT has records in it?
The only thing that I see is that you're checking @@ERROR after Select @@TRANCOUNT TransactionCount, @@ERROR ErrorCount, which means @@ERROR reflects that SELECT statement and not the INSERT statement (although I would always expect it to be 0 there).
For more info on @@ERROR, see: http://msdn.microsoft.com/en-us/library/ms188790.aspx
You should check @@ERROR after each statement.
As far as best practices, I think Microsoft now recommends you use TRY/CATCH instead of checking @@ERROR after each statement (as of SQL 2005 and later). Take a look at Example B here: http://msdn.microsoft.com/en-us/library/ms175976.aspx
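A minimal TRY/CATCH sketch of the delete-then-insert, using the table names from the question (the error message text is illustrative, not from the original proc):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    DELETE FROM PracticeDB.dbo.TransTable;

    INSERT INTO PracticeDB.dbo.TransTable
        ([R_ID], [LASTNAME], [FIRSTNAME], [DATASOURCE],
         [USER_STATUS], [Salary], [Neet_Stat])
    SELECT [R_ID], [LASTNAME], [FIRSTNAME], [DATASOURCE],
           [USER_STATUS], [Salary], [Neet_Stat]
    FROM #RESULT;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any error in the DELETE or the INSERT lands here; undo both.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- Re-raise so the caller still sees the failure.
    RAISERROR('Reload of TransTable failed', 16, 1);
END CATCH
```

If the INSERT fails, the CATCH block rolls back the DELETE too, which is exactly the behavior the question is after.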

Related

sql identity insert interruption

This might be an obvious question.
I have a sql sproc which runs a cursor.
The cursor will insert a certain amount of records into a table.
The problem is that if, say, the cursor runs 1000 insert statements, these records' identity values must follow on from each other. If someone else runs an insert while the cursor is running, it will take up an identity value between two of the records the cursor is inserting.
Can anyone give me some tips to ensure that all the cursor inserts follow each other?
Please note that my cursor might do 50,000 inserts, which means it will take a while to do them all, so that table must not be interrupted while performing the inserts.
You can try this:
INSERT INTO YourTable WITH (TABLOCK)
...
...
...
BEGIN TRANSACTION t_Transaction
BEGIN TRY
    INSERT INTO Table
    SELECT *
    FROM tablx
    WITH (HOLDLOCK)

    COMMIT t_Transaction
END TRY
BEGIN CATCH
    ROLLBACK t_Transaction
END CATCH

Can @@Error be non-zero on a successful insert?

I'm not sure how to make this happen. We're debugging an issue and we need to know if it's possible for @@Error to be non-zero if the insert succeeds. We have a stored procedure that exits if @error <> 0. And if we knew the answer to this, that would help. Anyone know?
The code is below. We want to know if it's possible to get to the goto statement if the insert succeeded.
-- This happened
insert into Workflow
    (SubID, ProcessID, LineID, ReadTime)
values
    (@sub_id, @proc_id, @line_id, @read_time)

set @error = @@Error
set @insertedWorkflowId = SCOPE_IDENTITY()

if @error <> 0
begin
    set @error_desc = 'insert into tw_workflow'
    goto ERROR_EXIT
end

-- This didn't happen
INSERT INTO Master.WorkflowEventProcessing (WorkflowId, SubId, ReadTime, ProcessId, LineId) VALUES (@insertedWorkflowId, @sub_id, @read_time, @proc_id, @line_id)
INSERT INTO Master.ProcessLogging (ProcessCode, WorkflowId, SubId, EventTime) VALUES (10, @insertedWorkflowId, @sub_id, GETDATE())
EDIT
Maybe a better way to say what's wrong is this: the first insert happened but the last two didn't. How is that possible? Maybe the last two inserts simply failed?
If this insert succeeds then there will be a non-zero @@rowcount, since you're simply using values (rather than a select...where, which could "successfully" insert 0 rows). You could use this to write some debug checks in there, or just include it as part of the routine for good.
insert into Workflow
    (SubID, ProcessID, LineID, ReadTime)
values
    (@sub_id, @proc_id, @line_id, @read_time)
if @@rowcount = 0 or @@error <> 0 -- Problems!
UPDATE
If a trigger fires on insert, an error in the trigger with severity of:
< 10: will run without a problem, @@error = 0
between 11 and 16: insert will succeed, @@error != 0
17, 18: insert will succeed, execution will halt
19 (with log): insert will succeed, execution will halt
> 20 (with log): insert will not succeed, execution will halt
I arrived at this by adding a trigger to the workflow table and testing various values for severity, so I can't readily say this would be the exact case in all environments:
alter trigger workflowtrig on workflow after insert as begin
raiserror(13032, 20, 1) with log -- with log is necessary for severity > 18
end
Soooo, after that, we have somewhat of an answer to this question:
Can @@Error be non-zero on a successful insert?
Yes...BUT, I'm not sure if there is another chain of events that could lead to this, and I'm not creative enough to put together the tests to prove such. Hopefully someone else knows for sure.
I know this all isn't a great answer, but it's too big for a comment, and I thought it might help!
According to the T-SQL documentation, this should not be possible. You could also wrap this in a try / catch, to attempt to catch most errors.
You might want to also consider the possibility that SCOPE_IDENTITY() is not returning the correct value, or the value you think it is. If you have FKs on the Master.WorkflowEventProcessing and Master.ProcessLogging tables, you could get a FK error attempting to insert into those tables because the value returned from SCOPE_IDENTITY is not correct.
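One hedged way to see exactly which statement is failing and why, rather than inferring it from @@Error: wrap the three inserts in TRY/CATCH and capture the error details with ERROR_NUMBER()/ERROR_MESSAGE() (available in SQL 2005+). This sketch reuses the question's variable names and its ERROR_EXIT label:

```sql
BEGIN TRY
    insert into Workflow (SubID, ProcessID, LineID, ReadTime)
    values (@sub_id, @proc_id, @line_id, @read_time)

    set @insertedWorkflowId = SCOPE_IDENTITY()

    INSERT INTO Master.WorkflowEventProcessing (WorkflowId, SubId, ReadTime, ProcessId, LineId)
    VALUES (@insertedWorkflowId, @sub_id, @read_time, @proc_id, @line_id)

    INSERT INTO Master.ProcessLogging (ProcessCode, WorkflowId, SubId, EventTime)
    VALUES (10, @insertedWorkflowId, @sub_id, GETDATE())
END TRY
BEGIN CATCH
    -- Records which error occurred instead of a bare @@Error check.
    set @error = ERROR_NUMBER()
    set @error_desc = ERROR_MESSAGE()
    goto ERROR_EXIT
END CATCH
```

If one of the last two inserts is failing (for example on an FK because SCOPE_IDENTITY() returned an unexpected value), the CATCH block will record the actual error number and message.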

Database transaction only partially committing

I've got a T-SQL stored procedure running on a Sybase ASE database server that is sometimes failing to commit all of its operations, even though it completes without exception. Here's a rough example of what it does.
BEGIN TRANSACTION

UPDATE TABLE1
SET FIELD1 = @NEW_VALUE1
WHERE KEY1 = @KEY_VALUE1

IF @@error <> 0 OR @@rowcount <> 1 BEGIN
    ROLLBACK
    RETURN 1
END

UPDATE TABLE2
SET FIELD2 = @NEW_VALUE2
WHERE KEY2 = @KEY_VALUE2

IF @@error <> 0 OR @@rowcount <> 1 BEGIN
    ROLLBACK
    RETURN 2
END

INSERT TABLE2 (FIELD2, FIELD3)
VALUES (@NEW_VALUE3a, @NEW_VALUE3b)

IF @@error <> 0 OR @@rowcount <> 1 BEGIN
    ROLLBACK
    RETURN 3
END

COMMIT TRANSACTION
RETURN 0
The procedure is called at least hundreds of times a day. In a small percentage of those cases (probably < 3%), only the INSERT statement commits. The proc completes and returns 0, but the two UPDATEs don't take. Originally we thought it might be that the WHERE clauses on the UPDATEs weren't matching anything, so we added the IF @@rowcount logic. But even with those checks in there, the INSERT is still happening and the procedure is still completing and returning 0.
I'm looking for ideas about what might cause this type of problem. Is there anything about the way SQL transactions work, or the way Sybase works specifically, that could be causing the COMMIT not to commit everything? Is there something about my IF blocks that could allow the UPDATE to not match anything but the procedure to continue? Any other ideas?
Is it possible that they are updating, but something is changing the values back? Try adding an update trigger on those tables, and within that trigger insert into a log table. For rows that appear not to have been updated, look in the log: is there a row or not?
Not knowing how you set the values for your variables, it occurs to me that if the value of @NEW_VALUE1 is the same as the previous value in FIELD1, the update would succeed and yet appear to have changed nothing, making you think the transaction had not happened.
You could also have a trigger that is affecting the update.

Atomic Upgrade Scripts

With my database upgrade scripts, I typically just have one long script that makes the necessary changes for that database version. However, if one statement fails halfway through the script, it leaves the database in an inconsistent state.
How can I make the entire upgrade script one atomic operation? I've tried just wrapping all of the statements in a transaction, but that does not work. Even with SET XACT_ABORT ON, if one statement fails and rolls back the transaction, the rest of the statements just keep going. I would like a solution that doesn't require me to write IF @@TRANCOUNT > 0... before each and every statement. For example:
SET XACT_ABORT ON;
GO
BEGIN TRANSACTION;
GO
CREATE TABLE dbo.Customer
(
CustomerID int NOT NULL
, CustomerName varchar(100) NOT NULL
);
GO
CREATE TABLE [dbo].[Order]
(
OrderID int NOT NULL
, OrderDesc varchar(100) NOT NULL
);
GO
/* This causes error and should terminate entire script. */
ALTER TABLE dbo.Order2 ADD
A int;
GO
CREATE TABLE dbo.CustomerOrder
(
CustomerID int NOT NULL
, OrderID int NOT NULL
);
GO
COMMIT TRANSACTION;
GO
The way Red-Gate and other comparison tools work is exactly as you describe... they check @@ERROR and @@TRANCOUNT after every statement, jam it into a #temp table, and at the end they check the #temp table. If any errors occurred, they roll back the transaction; otherwise they commit. I'm sure you could alter whatever tool generates your change scripts to add this kind of logic. (Or instead of re-inventing the wheel, you could use a tool that already creates atomic scripts for you.)
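A minimal sketch of that per-statement check pattern (the #Errors table name and structure are illustrative, not what any particular tool emits):

```sql
-- Collect errors per statement; roll back at the end if any occurred.
CREATE TABLE #Errors (ErrorNumber int, FailedStatement varchar(200));
DECLARE @err int;

BEGIN TRANSACTION;

CREATE TABLE dbo.Customer (CustomerID int NOT NULL);
SELECT @err = @@ERROR;   -- capture immediately; @@ERROR resets after every statement
IF @err <> 0
    INSERT INTO #Errors VALUES (@err, 'CREATE TABLE dbo.Customer');

-- ...repeat the capture-and-record step after every statement in the script...

IF EXISTS (SELECT * FROM #Errors)
    ROLLBACK TRANSACTION;
ELSE
    COMMIT TRANSACTION;
```

Note that @@ERROR must be copied into a variable right away, since reading it in a later statement would return that statement's result instead.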
Something like:
BEGIN TRY
    ....
END TRY
BEGIN CATCH
    ROLLBACK TRAN
END CATCH
http://msdn.microsoft.com/en-us/library/ms175976.aspx

Insert Update stored proc on SQL Server

I've written a stored proc that will do an update if a record exists, otherwise it will do an insert. It looks something like this:
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
    insert into myTable (Col1, Col2) values (@col1, @col2)
My logic behind writing it in this way is that the update will perform an implicit select using the where clause and if that returns 0 then the insert will take place.
The alternative would be to do a select and then, based on the number of rows returned, either do an update or an insert. I considered this inefficient because an update would then cause two selects (the first explicit select call and the second implicit in the where clause of the update). If the proc were to do an insert, there'd be no difference in efficiency.
Is my logic sound here?
Is this how you would combine an insert and update into a stored proc?
Your assumption is right, this is the optimal way to do it and it's called upsert/merge.
Importance of UPSERT - from sqlservercentral.com:
For every update in the case mentioned above, we are removing one additional read from the table if we use the UPSERT instead of EXISTS. Unfortunately for an insert, both the UPSERT and IF EXISTS methods use the same number of reads on the table. Therefore the check for existence should only be done when there is a very valid reason to justify the additional I/O. The optimized way to do things is to make sure that you have as few reads as possible on the DB.
The best strategy is to attempt the update. If no rows are affected by the update, then insert. In most circumstances, the row will already exist and only one I/O will be required.
Edit:
Please check out this answer and the linked blog post to learn about the problems with this pattern and how to make it work safe.
Please read the post on my blog for a good, safe pattern you can use. There are a lot of considerations, and the accepted answer on this question is far from safe.
For a quick answer try the following pattern. It will work fine on SQL 2000 and above. SQL 2005 gives you error handling which opens up other options and SQL 2008 gives you a MERGE command.
begin tran

update t with (serializable)
set hitCount = hitCount + 1
where pk = @id

if @@rowcount = 0
begin
    insert t (pk, hitCount)
    values (@id, 1)
end

commit tran
If it is to be used with SQL Server 2000/2005, the original code needs to be enclosed in a transaction to make sure that the data remains consistent in a concurrent scenario.
BEGIN TRANSACTION Upsert

update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
    insert into myTable (Col1, Col2) values (@col1, @col2)

COMMIT TRANSACTION Upsert
This will incur additional performance cost, but will ensure data integrity.
And, as already suggested, MERGE should be used where available.
MERGE is one of the new features in SQL Server 2008, by the way.
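For reference, a hedged sketch of the same upsert as a single MERGE statement (SQL Server 2008+), reusing the question's table and column names and assuming ID is not an identity column. The HOLDLOCK hint guards against the same race condition the serializable pattern above addresses:

```sql
MERGE myTable WITH (HOLDLOCK) AS target
USING (SELECT @ID AS ID, @col1 AS Col1, @col2 AS Col2) AS source
    ON target.ID = source.ID
WHEN MATCHED THEN
    -- Row exists: update it in place.
    UPDATE SET Col1 = source.Col1, Col2 = source.Col2
WHEN NOT MATCHED THEN
    -- Row missing: create it.
    INSERT (ID, Col1, Col2) VALUES (source.ID, source.Col1, source.Col2);
```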
You not only need to run it in a transaction, it also needs a higher isolation level. In fact the default isolation level is READ COMMITTED, and this code needs SERIALIZABLE.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION Upsert

UPDATE myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
begin
    INSERT into myTable (ID, Col1, Col2) values (@ID, @col1, @col2)
end

COMMIT TRANSACTION Upsert
Maybe adding the @@error check and rollback as well would be a good idea.
If you are not doing a merge in SQL 2008 you must change it to:
if @@rowcount = 0 and @@error = 0
otherwise, if the update fails for some reason, it will try to do an insert afterwards, because the rowcount of a failed statement is 0.
Big fan of the UPSERT, it really cuts down on the code to manage. Here is another way I do it: one of the input parameters is ID; if the ID is NULL or 0, you know it's an INSERT, otherwise it's an UPDATE. This assumes the application knows whether there is an ID, so it won't work in all situations, but it will cut the executes in half if you can do it.
Modified Dima Malenko post:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

BEGIN TRANSACTION UPSERT

UPDATE MYTABLE
SET COL1 = @col1,
    COL2 = @col2
WHERE ID = @ID

IF @@rowcount = 0
BEGIN
    INSERT INTO MYTABLE
        (ID, COL1, COL2)
    VALUES (@ID, @col1, @col2)
END

IF @@Error > 0
BEGIN
    INSERT INTO MYERRORTABLE
        (ID, COL1, COL2)
    VALUES (@ID, @col1, @col2)
END

COMMIT TRANSACTION UPSERT
You can trap the error and send the record to a failed insert table.
I needed to do this because we take whatever data is sent via WSDL and, if possible, fix it internally.
Your logic seems sound, but you might want to consider adding some code to prevent the insert if you had passed in a specific primary key.
Otherwise, if you always do an insert when the update didn't affect any records, what happens when someone deletes the record before your "UPSERT" runs? Now the record you were trying to update doesn't exist, so you'll create a record instead. That probably isn't the behavior you were looking for.
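One hedged way to implement that guard, assuming @ID is non-NULL only when the caller believes the row already exists (error message text is illustrative):

```sql
update myTable set Col1 = @col1, Col2 = @col2 where ID = @ID
if @@rowcount = 0
begin
    if @ID is not null
        -- Caller expected an existing row; refuse to silently re-create it.
        raiserror('Row %d no longer exists; refusing to re-create it.', 16, 1, @ID)
    else
        -- No key supplied, so this is a genuine insert.
        insert into myTable (Col1, Col2) values (@col1, @col2)
end
```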