I've got a T-SQL stored procedure running on a Sybase ASE database server that is sometimes failing to commit all of its operations, even though it completes without exception. Here's a rough example of what it does.
BEGIN TRANSACTION
UPDATE TABLE1
SET FIELD1 = @NEW_VALUE1
WHERE KEY1 = @KEY_VALUE1
IF @@error <> 0 OR @@rowcount <> 1 BEGIN
    ROLLBACK
    RETURN 1
END
UPDATE TABLE2
SET FIELD2 = @NEW_VALUE2
WHERE KEY2 = @KEY_VALUE2
IF @@error <> 0 OR @@rowcount <> 1 BEGIN
    ROLLBACK
    RETURN 2
END
INSERT TABLE2 (FIELD2, FIELD3)
VALUES (@NEW_VALUE3a, @NEW_VALUE3b)
IF @@error <> 0 OR @@rowcount <> 1 BEGIN
    ROLLBACK
    RETURN 3
END
COMMIT TRANSACTION
RETURN 0
The procedure is called at least hundreds of times a day. In a small percentage of those cases (probably < 3%), only the INSERT statement commits. The proc completes and returns 0, but the two UPDATEs don't take. Originally we thought the WHERE clauses on the UPDATEs might not be matching anything, so we added the IF @@rowcount logic. But even with those checks in place, the INSERT still happens and the procedure still completes and returns 0.
I'm looking for ideas about what might cause this type of problem. Is there anything about the way SQL transactions work, or the way Sybase works specifically, that could be causing the COMMIT not to commit everything? Is there something about my IF blocks that could allow the UPDATE to not match anything but the procedure to continue? Any other ideas?
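One thing worth ruling out (a sketch, not a diagnosis): in both Sybase ASE and SQL Server, @@error and @@rowcount are reset by nearly every statement, so an expression that reads both after an UPDATE may not see the values you expect. Capturing both into local variables in a single SELECT immediately after each statement avoids that. The variable names below are illustrative:

```sql
DECLARE @err int, @rc int

UPDATE TABLE1
SET FIELD1 = @NEW_VALUE1
WHERE KEY1 = @KEY_VALUE1
-- capture both globals in one statement, before anything can reset them
SELECT @err = @@error, @rc = @@rowcount

IF @err <> 0 OR @rc <> 1
BEGIN
    ROLLBACK
    RETURN 1
END
```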
Is it possible that the rows are updating, but something is changing the values back? Try adding an update trigger on those tables and, within that trigger, insert into a log table. For rows that appear not to have been updated, check the log: is there a row or not?
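A minimal sketch of such a logging trigger (the log table `TABLE1_LOG` and its columns are hypothetical; the `inserted`/`deleted` pseudo-tables exist in both ASE and SQL Server triggers):

```sql
-- Hypothetical log table: TABLE1_LOG (KEY1, OLD_FIELD1, NEW_FIELD1, CHANGED_AT)
CREATE TRIGGER trg_table1_update ON TABLE1
FOR UPDATE
AS
INSERT INTO TABLE1_LOG (KEY1, OLD_FIELD1, NEW_FIELD1, CHANGED_AT)
SELECT d.KEY1, d.FIELD1, i.FIELD1, getdate()
FROM deleted d
JOIN inserted i ON i.KEY1 = d.KEY1
```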
Not knowing how you set the values for your variables: if the value of @NEW_VALUE1 is the same as the previous value in FIELD1, the update would succeed yet appear to have changed nothing, making you think the transaction had not happened.
You also could have a trigger that is affecting the update.
Related
I have a doubt about the following code:
Update Statement on Table 1
Update Statement on Table 2
Select Statement which include both the Table 1
The result of this code is returned to the application; it is a "get all" function for the application.
I am frequently getting deadlock errors in the application.
Hundreds of users fetch from the same tables at the same time.
I need to make sure that the SELECT does not fire until the UPDATE statements complete, or find a way to lock the rows being updated.
One more doubt: suppose I am updating one row and another user tries to select from that table; will they get a deadlock?
(The user is selecting a different row, one not touched by the update.)
What will happen in this scenario?
Please help me.
Thanks in advance
You should use a transaction:
BEGIN TRANSACTION [Tran1]
BEGIN TRY
Update Statement on Table 1
Update Statement on Table 2
Select Statement which include both the Table 1
COMMIT TRANSACTION [Tran1]
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION [Tran1]
END CATCH
GO
If you want nobody to update or delete the row in the meantime, I would go with UPDLOCK on the SELECT statement. It is an indication that you will update the same row shortly, e.g.
select @Bar = Bar from oFoo WITH (UPDLOCK) where Foo = @Foo;
I have a SQL Server Stored Procedure (using SQL Server 2008 R2) where it performs several different table updates. When rows have been updated I want to record information in an Audit table.
Here is my pseudo code:
UPDATE tblName SET flag = 'Y' WHERE flag = 'N'
IF @@ROWCOUNT > 0
BEGIN
INSERT INTO auditTable...etc
END
Unfortunately, even when zero rows are updated it still records the action in the audit table.
Note: There are no related triggers on the table being updated.
Any ideas why this could be happening?
Any statement executed in T-SQL sets @@rowcount, even the IF statement, so the general rule is to capture the value in the statement immediately following the one you're interested in.
So after
update table set ....
you want
SELECT @mycount = @@rowcount
Then you use this value for your flow control or messages.
As the docs state, even a simple variable assignment sets @@rowcount to 1.
This is why, in cases like this, it's important to provide the actual code rather than pseudo code if you want people to diagnose the problem.
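Applied to the pseudo code above, a corrected sketch (same table and flag names as the question; the audit columns are hypothetical since the original elides them):

```sql
DECLARE @updated int

UPDATE tblName SET flag = 'Y' WHERE flag = 'N'
-- capture immediately; testing @@ROWCOUNT later risks it being reset
SELECT @updated = @@ROWCOUNT

IF @updated > 0
BEGIN
    -- hypothetical audit columns for illustration
    INSERT INTO auditTable (auditAction, changedAt)
    VALUES ('flag set', GETDATE())
END
```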
I have the following situation where a stored procedure gathers data and performs the necessary joins and inserts the results into a temp table (ex:#Results)
Now, what I want to do is insert all the records from #Results into a table that was previously created but I first want to remove (truncate/delete) the destination and then insert the results. The catch is putting this process of cleaning the destination table and then inserting the new #Results in a transaction.
I did the following:
BEGIN TRANSACTION
DELETE FROM PracticeDB.dbo.TransTable
IF @@ERROR <> 0
ROLLBACK TRANSACTION
ELSE
BEGIN
INSERT INTO PracticeDB.dbo.TransTable
(
[R_ID]
,[LASTNAME]
,[FIRSTNAME]
,[DATASOURCE]
,[USER_STATUS]
,[Salary]
,[Neet_Stat]
)
SELECT [R_ID]
,[LASTNAME]
,[FIRSTNAME]
,[DATASOURCE]
,[USER_STATUS]
,[Salary]
,[Neet_Stat]
FROM #RESULT
Select @@TRANCOUNT TransactionCount, @@ERROR ErrorCount
IF @@ERROR <> 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
END
but I know it isn't working properly, and I'm having a hard time finding an example like this, though I don't know why, since it seems like a common task. In this case it still deletes the target table even when the insert fails.
More than anything, I'd appreciate some guidance on how best to approach this situation, or best practices for a similar case (what's best to use and so forth). Thank you in advance.
I'm really not seeing anything wrong with this. So it DOES delete from your TransTable, but doesn't do the insert? Are you sure #RESULT has records in it?
The only thing that I see is that you're checking @@ERROR after Select @@TRANCOUNT TransactionCount, @@ERROR ErrorCount, which means @@ERROR is going to be from your SELECT statement and not the INSERT statement (although I would always expect that to be 0).
For more info on @@ERROR, see: http://msdn.microsoft.com/en-us/library/ms188790.aspx
You should check @@ERROR after each statement.
As far as best practices, I think Microsoft now recommends you use TRY/CATCH instead of checking @@ERROR after each statement (as of SQL 2005 and after). Take a look at Example B here: http://msdn.microsoft.com/en-us/library/ms175976.aspx
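Following that recommendation, a TRY/CATCH sketch of the delete-then-insert (same tables and columns as the question; the error re-raise at the end is one possible choice, not the only one):

```sql
BEGIN TRY
    BEGIN TRANSACTION
    DELETE FROM PracticeDB.dbo.TransTable

    INSERT INTO PracticeDB.dbo.TransTable
        ([R_ID], [LASTNAME], [FIRSTNAME], [DATASOURCE],
         [USER_STATUS], [Salary], [Neet_Stat])
    SELECT [R_ID], [LASTNAME], [FIRSTNAME], [DATASOURCE],
           [USER_STATUS], [Salary], [Neet_Stat]
    FROM #RESULT

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    -- any failure (including the DELETE) undoes the whole transaction
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION
    -- surface the error to the caller
    DECLARE @msg nvarchar(2048)
    SET @msg = ERROR_MESSAGE()
    RAISERROR(@msg, 16, 1)
END CATCH
```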
I'm not sure how to make this happen. We're debugging an issue and we need to know whether @@Error can be non-zero when the insert succeeds. We have a stored procedure that exits if @error <> 0, and knowing the answer to this would help. Anyone know?
The code is below. We want to know if it's possible to get to the goto statement if the insert succeeded.
-- This happened
insert into Workflow
(SubID, ProcessID, LineID, ReadTime)
values
(@sub_id, @proc_id, @line_id, @read_time)
set @error = @@Error
set @insertedWorkflowId = SCOPE_IDENTITY()
if @error <> 0
begin
    set @error_desc = 'insert into tw_workflow'
    goto ERROR_EXIT
end
-- This didn't happen
INSERT INTO Master.WorkflowEventProcessing (WorkflowId, SubId, ReadTime, ProcessId, LineId) VALUES (@insertedWorkflowId, @sub_id, @read_time, @proc_id, @line_id)
INSERT INTO Master.ProcessLogging (ProcessCode, WorkflowId, SubId, EventTime) VALUES (10, @insertedWorkflowId, @sub_id, GETDATE())
EDIT
Maybe a better way to say what's wrong is this: the first insert happened but the last two didn't. How is that possible? Maybe the last two inserts simply failed?
If this insert succeeds then there will be a non-zero @@rowcount, since you're simply using VALUES (rather than a select...where, which could "successfully" insert 0 rows). You could use this to write some debug checks in there, or just include it as part of the routine for good.
insert into Workflow
(SubID, ProcessID, LineID, ReadTime)
values
(@sub_id, @proc_id, @line_id, @read_time)
if @@rowcount = 0 or @@error <> 0 -- Problems!
UPDATE
If a trigger fires on insert, an error in the trigger with severity of:
< 10: will run without a problem, @@error = 0
between 11 and 16: insert will succeed, @@error != 0
17, 18: insert will succeed, execution will halt
19 (with log): insert will succeed, execution will halt
> 20 (with log): insert will not succeed, execution will halt
I arrived at this by adding a trigger to the workflow table and testing various values for severity, so I can't readily say this would be the exact case in all environments:
alter trigger workflowtrig on workflow after insert as begin
raiserror(13032, 20, 1) with log -- with log is necessary for severity > 18
end
Soooo, after that, we have somewhat of an answer to this question:
Can @@Error be non-zero on a successful insert?
Yes...BUT, I'm not sure if there is another chain of events that could lead to this, and I'm not creative enough to put together the tests to prove such. Hopefully someone else knows for sure.
I know this all isn't a great answer, but it's too big for a comment, and I thought it might help!
According to the T-SQL documentation, this should not be possible. You could also wrap this in a try / catch, to attempt to catch most errors.
You might want to also consider the possibility that SCOPE_IDENTITY() is not returning the correct value, or the value you think it is. If you have FKs on the Master.WorkflowEventProcessing and Master.ProcessLogging tables, you could get a FK error attempting to insert into those tables because the value returned from SCOPE_IDENTITY is not correct.
I've written a stored proc that will do an update if a record exists, otherwise it will do an insert. It looks something like this:
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
insert into myTable (Col1, Col2) values (@col1, @col2)
My logic behind writing it in this way is that the update will perform an implicit select using the where clause and if that returns 0 then the insert will take place.
The alternative to doing it this way would be to do a select and then based on the number of rows returned either do an update or insert. This I considered inefficient because if you are to do an update it will cause 2 selects (the first explicit select call and the second implicit in the where of the update). If the proc were to do an insert then there'd be no difference in efficiency.
Is my logic sound here?
Is this how you would combine an insert and update into a stored proc?
Your assumption is right; this is the optimal way to do it, and it's called an upsert/merge.
Importance of UPSERT - from sqlservercentral.com:
For every update in the case mentioned above we are removing one additional read from the table if we use the UPSERT instead of EXISTS. Unfortunately for an insert, both the UPSERT and IF EXISTS methods use the same number of reads on the table. Therefore the check for existence should only be done when there is a very valid reason to justify the additional I/O. The optimized way to do things is to make sure that you have as few reads as possible on the DB.

The best strategy is to attempt the update. If no rows are affected by the update then insert. In most circumstances, the row will already exist and only one I/O will be required.
Edit:
Please check out this answer and the linked blog post to learn about the problems with this pattern and how to make it work safe.
Please read the post on my blog for a good, safe pattern you can use. There are a lot of considerations, and the accepted answer on this question is far from safe.
For a quick answer try the following pattern. It will work fine on SQL 2000 and above. SQL 2005 gives you error handling which opens up other options and SQL 2008 gives you a MERGE command.
begin tran
update t with (serializable)
set hitCount = hitCount + 1
where pk = @id
if @@rowcount = 0
begin
    insert t (pk, hitCount)
    values (@id, 1)
end
commit tran
If used with SQL Server 2000/2005, the original code needs to be enclosed in a transaction to make sure that the data remains consistent in a concurrent scenario.
BEGIN TRANSACTION Upsert
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
insert into myTable (Col1, Col2) values (@col1, @col2)
COMMIT TRANSACTION Upsert
This will incur additional performance cost, but will ensure data integrity.
And, as already suggested, MERGE should be used where available.
MERGE is one of the new features in SQL Server 2008, by the way.
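For reference, a MERGE version of the same upsert (a sketch using the question's table and variable names; the WITH (HOLDLOCK) hint is needed to make the upsert atomic under concurrency, since MERGE alone does not serialize the existence check and the insert):

```sql
MERGE myTable WITH (HOLDLOCK) AS target
USING (SELECT @ID AS ID, @col1 AS Col1, @col2 AS Col2) AS source
    ON target.ID = source.ID
WHEN MATCHED THEN
    UPDATE SET Col1 = source.Col1, Col2 = source.Col2
WHEN NOT MATCHED THEN
    INSERT (Col1, Col2) VALUES (source.Col1, source.Col2);
```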
You not only need to run it in a transaction, it also needs a high isolation level. In fact the default isolation level is Read Committed, and this code needs Serializable.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION Upsert
UPDATE myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
begin
    INSERT into myTable (ID, Col1, Col2) values (@ID, @col1, @col2)
end
COMMIT TRANSACTION Upsert
Adding a @@error check and rollback might also be a good idea.
If you are not doing a merge in SQL 2008 you must change it to:
if @@rowcount = 0 and @@error = 0
otherwise, if the update fails for some reason, it will try to do an insert afterwards, because the rowcount on a failed statement is 0
Big fan of the UPSERT; it really cuts down on the code to manage. Here is another way I do it: one of the input parameters is ID; if the ID is NULL or 0, you know it's an INSERT, otherwise it's an UPDATE. This assumes the application knows whether there is an ID, so it won't work in all situations, but it will cut the executes in half when it does.
Modified Dima Malenko post:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION UPSERT
UPDATE MYTABLE
SET COL1 = @col1,
    COL2 = @col2
WHERE ID = @ID
IF @@rowcount = 0
BEGIN
    INSERT INTO MYTABLE
        (ID,
         COL1,
         COL2)
    VALUES (@ID,
            @col1,
            @col2)
END
IF @@Error > 0
BEGIN
    INSERT INTO MYERRORTABLE
        (ID,
         COL1,
         COL2)
    VALUES (@ID,
            @col1,
            @col2)
END
COMMIT TRANSACTION UPSERT
You can trap the error and send the record to a failed insert table.
I needed to do this because we take whatever data is sent via WSDL and, where possible, fix it internally.
Your logic seems sound, but you might want to consider adding some code to prevent the insert if you had passed in a specific primary key.
Otherwise, if you always do an insert when the update didn't affect any records, what happens when someone deletes the record before your "UPSERT" runs? Now the record you were trying to update doesn't exist, so a new record will be created instead. That probably isn't the behavior you were looking for.