Can @@ERROR be non-zero on a successful insert? - sql

I'm not sure how to make this happen. We're debugging an issue and we need to know whether @@ERROR can be non-zero when the insert itself succeeds. We have a stored procedure that exits if @error <> 0, and knowing the answer would help. Anyone know?
The code is below. We want to know if it's possible to get to the goto statement if the insert succeeded.
-- This happened
insert into Workflow
(SubID, ProcessID, LineID, ReadTime)
values
(@sub_id, @proc_id, @line_id, @read_time)
set @error = @@ERROR
set @insertedWorkflowId = SCOPE_IDENTITY()
if @error <> 0
begin
set @error_desc = 'insert into tw_workflow'
goto ERROR_EXIT
end
-- This didn't happen
INSERT INTO Master.WorkflowEventProcessing (WorkflowId, SubId, ReadTime, ProcessId, LineId) VALUES (@insertedWorkflowId, @sub_id, @read_time, @proc_id, @line_id)
INSERT INTO Master.ProcessLogging (ProcessCode, WorkflowId, SubId, EventTime) VALUES (10, @insertedWorkflowId, @sub_id, GETDATE())
EDIT
Maybe a better way to say what's wrong is this: the first insert happened but the last two didn't. How is that possible? Maybe the last two inserts simply failed?

If this insert succeeds then @@ROWCOUNT will be non-zero, since you're inserting literal values (rather than a select ... where, which could "successfully" insert 0 rows). You could use this to write some debug checks, or just include it as part of the routine for good.
insert into Workflow
(SubID, ProcessID, LineID, ReadTime)
values
(@sub_id, @proc_id, @line_id, @read_time)
if @@rowcount = 0 or @@error <> 0 -- Problems!
UPDATE
If a trigger fires on the insert, an error raised in the trigger with a severity of:
10 or lower: will run without a problem, @@ERROR = 0
11 to 16: insert will succeed, @@ERROR != 0
17, 18: insert will succeed, execution will halt
19 (with log): insert will succeed, execution will halt
20 and above (with log): insert will not succeed, execution will halt
I arrived at this by adding a trigger to the workflow table and testing various values for severity, so I can't readily say this would be the exact case in all environments:
alter trigger workflowtrig on workflow after insert as begin
raiserror(13032, 20, 1) with log -- with log is necessary for severity > 18
end
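For reference, here's roughly how the experiment can be re-run (a sketch based on the question's table and columns; severity 11-16 in the trigger is the interesting case, per the list above):
-- sketch: with the trigger above altered to raise severity 11-16,
-- the row arrives even though @@ERROR comes back non-zero
insert into Workflow (SubID, ProcessID, LineID, ReadTime)
values (1, 1, 1, getdate())
select @@ERROR as err                    -- non-zero when the trigger raised 11-16
select * from Workflow where SubID = 1   -- but the row is there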
Soooo, after that, we have somewhat of an answer to this question:
Can ##Error be non-Zero on a successful insert?
Yes...BUT, I'm not sure if there is another chain of events that could lead to this, and I'm not creative enough to put together the tests to prove such. Hopefully someone else knows for sure.
I know this all isn't a great answer, but it's too big for a comment, and I thought it might help!

According to the T-SQL documentation, this should not be possible. You could also wrap this in a TRY/CATCH to attempt to catch most errors.
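For example, a minimal sketch of the TRY/CATCH approach against the original insert (variable and table names as in the question):
begin try
insert into Workflow (SubID, ProcessID, LineID, ReadTime)
values (@sub_id, @proc_id, @line_id, @read_time)
set @insertedWorkflowId = SCOPE_IDENTITY()
end try
begin catch
-- ERROR_NUMBER()/ERROR_MESSAGE() are only meaningful inside the CATCH block
set @error = ERROR_NUMBER()
set @error_desc = ERROR_MESSAGE()
goto ERROR_EXIT
end catch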
You might also want to consider the possibility that SCOPE_IDENTITY() is not returning the correct value, or the value you think it is. If you have FKs on the Master.WorkflowEventProcessing and Master.ProcessLogging tables, you could get an FK error attempting to insert into those tables because the value returned from SCOPE_IDENTITY() is not correct.

Related

Deleting and then insert to target table in a transaction

I have the following situation: a stored procedure gathers data, performs the necessary joins, and inserts the results into a temp table (e.g. #Results).
Now, what I want to do is insert all the records from #Results into a table that was previously created, but I first want to remove (truncate/delete) the destination's contents and then insert the results. The catch is putting this process of cleaning the destination table and then inserting the new #Results in a transaction.
I did the following:
BEGIN TRANSACTION
DELETE FROM PracticeDB.dbo.TransTable
IF @@ERROR <> 0
ROLLBACK TRANSACTION
ELSE
BEGIN
INSERT INTO PracticeDB.dbo.TransTable
(
[R_ID]
,[LASTNAME]
,[FIRSTNAME]
,[DATASOURCE]
,[USER_STATUS]
,[Salary]
,[Neet_Stat]
)
SELECT [R_ID]
,[LASTNAME]
,[FIRSTNAME]
,[DATASOURCE]
,[USER_STATUS]
,[Salary]
,[Neet_Stat]
FROM #RESULT
Select @@TRANCOUNT TransactionCount, @@ERROR ErrorCount
IF @@ERROR <> 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
END
but I know it isn't working properly, and I'm having a hard time finding an example like this, though I don't know why, considering it seems like something common. In this case it still deletes from the target table even though the insert fails.
More than anything, some guidance would be nice on how best to approach this situation, or best practices in a similar case (what's best to use and so forth). Thank you in advance...
I'm really not seeing anything wrong with this. So it DOES delete from your TransTable, but doesn't do the insert? Are you sure #RESULT has records in it?
The only thing that I see is that you're checking @@ERROR after Select @@TRANCOUNT TransactionCount, @@ERROR ErrorCount, which means @@ERROR is going to be from your SELECT statement and not the INSERT statement (although I would always expect that to be 0).
For more info on @@ERROR, see: http://msdn.microsoft.com/en-us/library/ms188790.aspx
You should check @@ERROR after each statement.
As far as best practices go, I think Microsoft now recommends you use TRY/CATCH instead of checking @@ERROR after each statement (SQL 2005 and later). Take a look at Example B here: http://msdn.microsoft.com/en-us/library/ms175976.aspx
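For example, the delete-then-insert above could be sketched with TRY/CATCH like this (same tables as the question; a sketch, not tested against your schema):
BEGIN TRY
BEGIN TRANSACTION
DELETE FROM PracticeDB.dbo.TransTable
INSERT INTO PracticeDB.dbo.TransTable ([R_ID], [LASTNAME], [FIRSTNAME], [DATASOURCE], [USER_STATUS], [Salary], [Neet_Stat])
SELECT [R_ID], [LASTNAME], [FIRSTNAME], [DATASOURCE], [USER_STATUS], [Salary], [Neet_Stat]
FROM #RESULT
COMMIT TRANSACTION
END TRY
BEGIN CATCH
-- any failure above lands here; roll back both statements together
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
DECLARE @msg nvarchar(2048)
SET @msg = ERROR_MESSAGE()
RAISERROR(@msg, 16, 1)
END CATCH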

insert 0 rows without error

Can an insert ever execute without error but not insert anything? I am using SQL Server 2008 and wondering if I can get away with just checking @@ERROR, or is there something I am missing? For an update I check @@ERROR and @@ROWCOUNT. Having @@ROWCOUNT = 0 for an insert just seems strange to me.
edit
@Gregory I'm basically wondering how I should error-check an insert statement. Are there any strange boundary cases where an insert executes, inserts nothing, and @@ERROR is 0?
You can run an INSERT command using a select that returns an empty result set.
The statement will succeed, but no rows will be inserted.
INSERT INTO myTable (col1, col2)
SELECT col1, col2
FROM myOtherTable
WHERE 1 = 2
Are you experiencing no errors, but no insert?
Or is this just a question on why you should use @@ROWCOUNT? @@ROWCOUNT has other purposes than checking whether or not a single insert worked.
Or are you asking about @@ERROR?
Additionally, an insert could end up inserting no rows without an error if there is an INSTEAD OF trigger involved.
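A contrived sketch of that case (the trigger and table names here are invented):
-- an INSTEAD OF trigger that silently discards every insert
create trigger tr_myTable_noop on myTable
instead of insert
as
begin
-- do nothing: the rows in the virtual "inserted" table are never written
return
end

-- this now completes with @@ERROR = 0, yet no row is added
insert into myTable (col1, col2) values (1, 'x')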

Database transaction only partially committing

I've got a T-SQL stored procedure running on a Sybase ASE database server that is sometimes failing to commit all of its operations, even though it completes without exception. Here's a rough example of what it does.
BEGIN TRANSACTION
UPDATE TABLE1
SET FIELD1 = @NEW_VALUE1
WHERE KEY1 = @KEY_VALUE1
IF @@error <> 0 OR @@rowcount <> 1 BEGIN
ROLLBACK
RETURN 1
END
UPDATE TABLE2
SET FIELD2 = @NEW_VALUE2
WHERE KEY2 = @KEY_VALUE2
IF @@error <> 0 OR @@rowcount <> 1 BEGIN
ROLLBACK
RETURN 2
END
INSERT TABLE2 (FIELD2, FIELD3)
VALUES (@NEW_VALUE3a, @NEW_VALUE3b)
IF @@error <> 0 OR @@rowcount <> 1 BEGIN
ROLLBACK
RETURN 3
END
COMMIT TRANSACTION
RETURN 0
The procedure is called at least hundreds of times a day. In a small percentage of those cases (probably < 3%), only the INSERT statement commits. The proc completes and returns 0, but the two UPDATEs don't take. Originally we thought it might be that the WHERE clauses on the UPDATEs weren't matching anything, so we added the IF @@rowcount logic. But even with those checks in there, the INSERT is still happening and the procedure is still completing and returning 0.
I'm looking for ideas about what might cause this type of problem. Is there anything about the way SQL transactions work, or the way Sybase works specifically, that could be causing the COMMIT not to commit everything? Is there something about my IF blocks that could allow the UPDATE to not match anything but the procedure to continue? Any other ideas?
Is it possible that they are updating, but something is changing the values back? Try adding an update trigger on those tables and, within that trigger, insert into a log table. For rows that appear not to have been updated, look in the log: is there a row or not?
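A minimal sketch of such a logging trigger (the log table and its columns are invented for illustration):
-- assumes an audit table: TABLE1_LOG(KEY1, OLD_FIELD1, NEW_FIELD1, CHANGED_AT)
create trigger trg_table1_log on TABLE1 for update
as
begin
insert into TABLE1_LOG (KEY1, OLD_FIELD1, NEW_FIELD1, CHANGED_AT)
select d.KEY1, d.FIELD1, i.FIELD1, getdate()
from deleted d
join inserted i on i.KEY1 = d.KEY1
end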
Not knowing how you set the values for your variables, it occurs to me that if the value of @NEW_VALUE1 is the same as the previous value in FIELD1, the update would succeed and yet appear to have not changed anything, making you think the transaction had not happened.
You could also have a trigger that is affecting the update.

SQL Server 'Resume Next' Equivalent

I'm working on a project in VB.net which takes large text files containing T-SQL and executes them against a local SQL database, but I've hit a problem in regards to error handling.
I'm using the following technologies :
VB.net
Framework 3.5
SQL Express 2005
The SQL I'm trying to execute is mostly straightforward, but my app is completely unaware of the schema or the data contained within. For example:
UPDATE mytable SET mycol2='data' WHERE mycol1=1
INSERT INTO mytable (mycol1, mycol2) VALUES (1,'data')
UPDATE mytable SET mycol2='data' WHERE mycol1=2
INSERT INTO mytable (mycol1, mycol2) VALUES (1,'data')
UPDATE mytable SET mycol2='data' WHERE mycol1=3
The above is a sample of the sort of thing I'm executing, but these files will contain around 10,000 to 20,000 statements each.
My problem is that when using sqlCommand.ExecuteNonQuery(), I get an exception raised because the second INSERT statement will hit the Primary Key constraint on the table.
I need to know that this error happened and log it, but also process any subsequent statements. I've tried wrapping these statements in TRY/CATCH blocks but I can't work out a way to handle the error then continue to process the other statements.
The Query Analyser seems to behave in this way, but not when using sqlCommand.ExecuteNonQuery().
So is there a T-SQL equivalent of 'Resume Next' or some other way I can do this without introducing massive amounts of string handling on my part?
Any help greatly appreciated.
SQL Server does have a Try/Catch syntax. See:
http://msdn.microsoft.com/en-us/library/ms175976.aspx
To use this with your file, you would either have to rewrite the files themselves to wrap each line with the try/catch syntax, or else your code would have to programmatically modify the file contents to wrap each line.
There is no T-SQL equivalent of "On Error Resume Next", and thank Cthulhu for that.
Actually your batch executes to the end, since key violations do not interrupt batch execution. If you run the same SQL file from Management Studio you'll see that all the valid statements are executed and the messages panel contains an error for each key violation. The SqlClient of ADO.NET behaves much the same way, but at the end of the batch (when SqlCommand.ExecuteNonQuery returns) it parses the messages returned and throws an exception. The exception is one single SqlException, but its Errors collection contains a SqlError for each key violation that occurred.
Unfortunately there is no silver bullet. Ideally the SQL files should not cause errors. You can choose to iterate through the SqlErrors of the exception and decide, on an individual basis, whether each error was serious or can be ignored, knowing that the SQL files have data-quality problems. Some errors may be serious and cannot be ignored. See Database Engine Error Severities.
Another alternative is to explicitly tell the SqlClient not to throw. If you set the FireInfoMessageEventOnUserErrors property of the connection to true, it will raise the SqlConnection.InfoMessage event instead of throwing an exception.
At the risk of slowing down the process (by making thousands of trips to the SQL server rather than one), you could handle this issue by splitting the file into multiple individual queries, each either an INSERT or an UPDATE. Then you can catch each individual error as it takes place and log it or deal with it as your business logic requires.
begin try
-- your critical commands
end try
begin catch
-- the catch block cannot be empty, so it needs at least one statement, e.g.:
select ''
end catch
One technique I use is a try/catch and, within the catch, raising an event with the exception information. The caller can then hook up an event handler to do whatever she pleases with the information (log it, collect it for reporting in the UI, whatever).
You can also include the technique (used in many .NET framework areas, e.g. Windows.Forms events and XSD validation) of passing a CancelEventArgs object as the second event argument, with a Boolean field that the event handler can set to indicate that processing should abort after all.
Another thing I urge you to do is to prepare your INSERTs and UPDATEs, then call them many times with varying arguments.
I'm not aware of a way to support resume next, but one approach would be to use a local table variable to prevent the errors in the first place, e.g.
Declare @Table table(id int, value varchar(100))
UPDATE mytable SET mycol2='data' WHERE mycol1=1
--Insert values for later usage
INSERT INTO @Table (id, value) VALUES (1,'data')
--Insert only if the data does not already exist.
INSERT INTO mytable (mycol1, mycol2)
Select id, Value from @Table t left join
mytable t2 on t.id = t2.MyCol1
where t2.MyCol1 is null and t.id = 1
EDIT
OK, I don't know that I suggest this per se, but you could achieve a sort of resume next by wrapping the try/catch in a while loop, if you set an exit condition at the end of all the steps and keep track of which step you performed last.
Declare @Table table(id int, value varchar(100))
Declare @Step int
set @Step = 0
While (1=1)
Begin
Begin Try
if @Step < 1
Begin
-- this insert deliberately fails ('s' won't convert to int), to exercise the catch
Insert into @Table (id, value) values ('s', 1)
Set @Step = @Step + 1
End
if @Step < 2
Begin
Insert into @Table values (1, 's')
Set @Step = @Step + 1
End
Break
End Try
Begin Catch
-- skip the failed step and resume the loop at the next one
Set @Step = @Step + 1
End Catch
End
Select * from @Table
Unfortunately, I don't think there's a way to force the SqlCommand to keep processing once an error has been returned.
If you're unsure whether any of the commands will cause an error (and at some performance cost), you could split the commands in the text file into individual SqlCommands, which would enable you to use a try/catch block and find the offending statements.
...this, of course, depends on the T-SQL commands in the text file each being on a separate line (or otherwise delineated).
You need to check whether the PK value already exists. Additionally, one large transaction is always faster than many little transactions; and the rows and pages don't need to be locked for as long overall that way.
-- load a temp/import table
create table #importables (mycol1 int, mycol2 varchar(50))
insert #importables (mycol1, mycol2) values (1, 'data for 1')
insert #importables (mycol1, mycol2) values (2, 'data for 2')
insert #importables (mycol1, mycol2) values (3, 'data for 3')
-- update the base table rows that are already there
update mt set MyCol2 = i.MyCol2
from #importables i (nolock)
inner join MyTable mt on mt.MyCol1 = i.MyCol1
where mt.MyCol2 <> i.MyCol2 -- no need to fire triggers and logs if nothing changed
-- insert new rows AFTER the update, so we don't update the rows we just inserted.
insert MyTable (mycol1, mycol2)
select mycol1, mycol2
from #importables i (nolock)
left join MyTable mt (nolock) on mt.MyCol1 = i.MyCol1
where mt.MyCol1 is null;
You could improve this further by opening a SqlConnection, creating a #temp table, using SqlBulkCopy to do a bulk insert to that temp table, and doing the delta from there (as opposed to my #importables in this example). As long as you use the same SqlConnection, a #temp table will remain accessible to subsequent queries on that connection, until you drop it or disconnect.

Insert Update stored proc on SQL Server

I've written a stored proc that will do an update if a record exists, otherwise it will do an insert. It looks something like this:
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
insert into myTable (Col1, Col2) values (@col1, @col2)
My logic behind writing it this way is that the update will perform an implicit select using the where clause, and if that returns 0 rows then the insert will take place.
The alternative would be to do a select and then, based on the number of rows returned, either do an update or an insert. I considered that inefficient because an update would then cause 2 selects (the first explicit select call, and the second implicit in the where of the update). If the proc were to do an insert then there'd be no difference in efficiency.
Is my logic sound here?
Is this how you would combine an insert and update into a stored proc?
Your assumption is right, this is the optimal way to do it and it's called upsert/merge.
Importance of UPSERT - from sqlservercentral.com:
For every update in the case mentioned above we are removing one additional read from the table if we use the UPSERT instead of EXISTS. Unfortunately for an Insert, both the UPSERT and IF EXISTS methods use the same number of reads on the table. Therefore the check for existence should only be done when there is a very valid reason to justify the additional I/O. The optimized way to do things is to make sure that you have as few reads as possible on the DB.
The best strategy is to attempt the update. If no rows are affected by the update then insert. In most circumstances, the row will already exist and only one I/O will be required.
Edit:
Please check out this answer and the linked blog post to learn about the problems with this pattern and how to make it work safe.
Please read the post on my blog for a good, safe pattern you can use. There are a lot of considerations, and the accepted answer on this question is far from safe.
For a quick answer try the following pattern. It will work fine on SQL 2000 and above. SQL 2005 gives you error handling which opens up other options and SQL 2008 gives you a MERGE command.
begin tran
update t with (serializable)
set hitCount = hitCount + 1
where pk = @id
if ##rowcount = 0
begin
insert t (pk, hitCount)
values (@id, 1)
end
commit tran
If used with SQL Server 2000/2005, the original code needs to be enclosed in a transaction to make sure that data remains consistent in a concurrent scenario:
BEGIN TRANSACTION Upsert
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
insert into myTable (Col1, Col2) values (@col1, @col2)
COMMIT TRANSACTION Upsert
This will incur an additional performance cost, but will ensure data integrity.
And, as already suggested, MERGE should be used where available.
MERGE is one of the new features in SQL Server 2008, by the way.
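For what it's worth, a sketch of the same upsert expressed as a MERGE (SQL Server 2008+; the HOLDLOCK hint is added because a bare MERGE is still subject to the same race as update-then-insert):
merge myTable with (holdlock) as target
using (select @ID as ID, @col1 as Col1, @col2 as Col2) as source
on target.ID = source.ID
when matched then
update set Col1 = source.Col1, Col2 = source.Col2
when not matched then
insert (Col1, Col2) values (source.Col1, source.Col2);  -- MERGE must end with a semicolon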
You not only need to run it in a transaction, it also needs a high isolation level. In fact the default isolation level is Read Committed, and this code needs Serializable.
SET transaction isolation level SERIALIZABLE
BEGIN TRANSACTION Upsert
UPDATE myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
begin
INSERT into myTable (ID, Col1, Col2) values (@ID, @col1, @col2)
end
COMMIT TRANSACTION Upsert
Maybe adding the @@ERROR check and rollback would be a good idea as well.
If you are not doing a merge in SQL 2008 you must change it to:
if @@rowcount = 0 and @@error = 0
otherwise, if the update fails for some reason, it will then try to do an insert afterwards, because the rowcount on a failed statement is 0.
Big fan of the UPSERT, it really cuts down on the code you have to manage. Here is another way I do it: one of the input parameters is ID; if the ID is NULL or 0, you know it's an INSERT, otherwise it's an UPDATE. This assumes the application knows whether there is an ID, so it won't work in all situations, but it will cut the executes in half if you do.
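A sketch of that pattern (the procedure and parameter names are invented for illustration):
create procedure SaveMyTable
@ID int = null,
@col1 varchar(50),
@col2 varchar(50)
as
begin
if @ID is null or @ID = 0
-- the app has no key yet, so this must be a new record
insert into myTable (Col1, Col2) values (@col1, @col2)
else
-- the app already holds a key, so this must be an update
update myTable set Col1 = @col1, Col2 = @col2 where ID = @ID
end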
Modified from Dima Malenko's post:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION UPSERT
UPDATE MYTABLE
SET COL1 = @col1,
COL2 = @col2
WHERE ID = @ID
IF @@rowcount = 0
BEGIN
INSERT INTO MYTABLE
(ID,
COL1,
COL2)
VALUES (@ID,
@col1,
@col2)
END
IF @@Error > 0
BEGIN
INSERT INTO MYERRORTABLE
(ID,
COL1,
COL2)
VALUES (@ID,
@col1,
@col2)
END
COMMIT TRANSACTION UPSERT
You can trap the error and send the record to a failed-insert table.
I needed to do this because we are taking whatever data is sent via WSDL and, where possible, fixing it internally.
Your logic seems sound, but you might want to consider adding some code to prevent the insert when a specific primary key has been passed in.
Otherwise, if you always do an insert when the update didn't affect any records, what happens when someone deletes the record before your "UPSERT" runs? Now the record you were trying to update doesn't exist, so it'll create a record instead. That probably isn't the behavior you were looking for. A sketch of the guard is below.
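A sketch of that guard, reusing the question's own statements (the error message text is invented):
update myTable set Col1=@col1, Col2=@col2 where ID=@ID
if @@rowcount = 0
begin
if @ID is not null
-- the caller named a specific row and it no longer exists: fail, don't insert
raiserror('Record %d no longer exists.', 16, 1, @ID)
else
insert into myTable (Col1, Col2) values (@col1, @col2)
end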