I have a database that I am working on with over 900 SP's. None of the SP's have any error handling. Is there a utility within SQL Server 2005 or 2008 that would automatically log the SP and the error into a table?
If the SPs are being called from code in a separate data layer, you could possibly add a global exception handler for that class. There is no 'global' error handling, per se, in SQL Server as far as stored procedures go. Think about code: if you had a gazillion classes and there was no inheritance of any sort, you would have to implement error handling in each class separately. SQL Server SPs have their own error handling, such as TRY...CATCH and @@ERROR - see Books Online, or http://www.codeproject.com/KB/database/ErrorHandling.aspx
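For illustration, here is a minimal per-procedure sketch of that pattern, assuming a logging table you would create yourself (ErrorLog and usp_Example are placeholder names, not objects in your database):

CREATE TABLE dbo.ErrorLog (
    ErrorLogID    int IDENTITY(1,1) PRIMARY KEY,
    ProcedureName sysname NULL,
    ErrorNumber   int NULL,
    ErrorMessage  nvarchar(4000) NULL,
    LoggedAt      datetime NOT NULL DEFAULT (GETDATE())
)
GO
CREATE PROCEDURE dbo.usp_Example
AS
BEGIN
    BEGIN TRY
        -- original body of the procedure goes here
        SELECT 1 / 0  -- placeholder statement that forces an error
    END TRY
    BEGIN CATCH
        -- Log which procedure failed and why...
        INSERT INTO dbo.ErrorLog (ProcedureName, ErrorNumber, ErrorMessage)
        VALUES (ERROR_PROCEDURE(), ERROR_NUMBER(), ERROR_MESSAGE())
        -- ...then re-raise so the caller still sees the failure.
        DECLARE @msg nvarchar(4000)
        SET @msg = ERROR_MESSAGE()
        RAISERROR (@msg, 16, 1)
    END CATCH
END

The catch is that there is nothing that applies this automatically; each of the 900 procedures has to be edited, or the wrapping has to be done by the calling code.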
The problem is on SQL Server 2008
I have run into an unusual situation. I have two separate SQL Server installations; let's call them:
Installation-1
Installation-2
On Installation-1, let's say I have a database called Database-1, and on Installation-2 I have another database called Database-2.
Database-1 has a SP called Parent_SP. Database-2 has two SPs called Child_SP_1 and Child_SP_2.
Installation-2 has been added as a linked server on Installation-1.
The Parent_SP on Database-1 calls Child_SP_1 which in turn calls Child_SP_2.
Now what happens is this: if there is an exception/error in Child_SP_1, I am able to catch it. However, if the exception is raised in Child_SP_2, it never gets caught in Parent_SP.
I have no idea why it behaves this way. Logically, any exception should eventually be raised up to the parent SP, which is not happening.
I've been trying to debug and create different scenarios but so far no luck.
Is there any reason why this should not work?
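To make the setup concrete, Parent_SP looks roughly like this (the names are the placeholders above, and the four-part linked-server call is only a sketch of how it is wired up):

CREATE PROCEDURE dbo.Parent_SP
AS
BEGIN
    BEGIN TRY
        -- Call into Database-2 on Installation-2 via the linked server.
        -- Errors raised in Child_SP_1 are caught below; errors raised one
        -- level deeper, in Child_SP_2, are the ones that never arrive here.
        EXEC [Installation-2].[Database-2].dbo.Child_SP_1
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
    END CATCH
END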
I have a SQL Server 2000 database with a stored procedure that deletes a row from a specific table, given its id. When I call the stored procedure from VB.NET, it does not delete the row, but running the same script directly on the database via SSMS, it works.
Here's my chain of events:
1. Start SQL Server Profiler to watch all calls to the database. I have it set up to track when the stored procedure starts and completes, and even when SQL statements start/complete within that stored procedure.
2. Call the stored procedure via the VB.NET dll.
3. Stop the profiler trace to avoid excessive data to dig through.
4. Select from the table, and see that the row still exists.
5. View the profiler trace, which only shows RPC:Starting, SP:Starting, RPC:Completed. No inner statements are traced, which explains why the row wasn't deleted: the delete statement never fired.
6. Copy/paste the EXEC call directly from the RPC:Starting trace entry from when it was called via VB.NET into a SQL Server Management Studio query window pointed at the same database with the same credentials.
7. Start profiler again.
8. Execute the EXEC statement from step 6 in SSMS.
9. Stop profiler.
10. Select from the table, and see that the row GOT DELETED like it should.
11. View the profiler trace, which shows SP:Starting, all statements starting/completing including the DELETE statement, and SP:Completed.
Why would running it via RPC make it not execute any of the statements in the proc, but running directly acts as it should?
EDIT: Below is my VB.NET code. This is the same code we use in over 100 other places:
Dim paramRowID As New SqlParameter("@RowID", sRowID)
Microsoft.ApplicationBlocks.Data.SqlHelper.ExecuteNonQuery(oConn, "spDeleteRow", paramRowID)
See SqlHelper source here.
EDIT: I hate myself right now. :) SQL threw an exception "nvarchar is incompatible with image" about another parameter that I was passing NULL to. SSMS didn't worry about the type, but VB.NET did since I didn't explicitly tell it that it was of type image. Once I defined that param, it worked. I wish profiler would have told me there was an error though.
Any help would be appreciated,
Greg
That would be because SSMS does not make an RPC call but a batch call. In fact there is no way to make an RPC call from SSMS, since you cannot declare a parameter, which is what differentiates an RPC call from a batch call in TDS:
2.2.1.3 SQL Batch To send a SQL statement or a batch of SQL statements, the SQL batch, represented by a Unicode string, is copied into the data section of a TDS packet and then sent to the database server that supports SQL. A SQL batch may span more than one TDS packet. See section 2.2.6.6 for additional detail
2.2.1.5 Remote Procedure Call To execute a remote procedure call (RPC) on the server, the client sends an RPC message data stream to the server. This is a binary stream that contains the RPC name or numeric identifier, options, and parameters. RPCs MUST be in a separate TDS message and not intermixed with SQL statements. There can be several RPCs in one message. See section 2.2.6.5 for additional details.
So monitor instead for the SQL:BatchCompleted event and you'll see your SSMS statement(s).
Does the user the application is using to connect to SQL Server have permission to execute stored procedures? That is the first thing I would verify.
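If you want to rule that out quickly, something along these lines works (spDeleteRow is the proc from the question; app_user is a placeholder for whatever account the application connects as):

-- List existing permissions on the procedure.
EXEC sp_helprotect 'spDeleteRow'

-- Grant execute to the application's user if it is missing.
GRANT EXECUTE ON dbo.spDeleteRow TO app_user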
I have an issue where I call a stored procedure from a linked server and it times out. However I have no good way of catching this. Though it occurs rarely I am wondering if there is any way to catch this particular warning:
OLE DB provider "SQLNCLI10" for linked server "serverName" returned message "Query timeout expired".
Unfortunately, warnings aren't caught by try/catch, and MS has an open issue acknowledging that this should be an error: http://connect.microsoft.com/SQLServer/feedback/details/337043/no-error-raised-when-a-remote-procedure-times-out
I don't want to increase the timeout property, and I know I can do something like:
Declare @ret int
select @ret = 4417
Exec @ret = Server.DB.dbo.RemoteSP
If @ret is null afterwards, it means the call failed, but that does not tell me exactly what the cause was. Is there any way to essentially catch that warning? What are the best practices for error handling in remote procedure calls?
As of 2019 there is still no way to properly catch SQL Server remote timeout errors.
It applies both to remote SP calls and constructs like execute ('select 1') at REMOTESQLSERVER.
As per comment from N.Nelu:
The Microsoft docs state, under "Errors Unaffected by a TRY...CATCH Construct":
TRY...CATCH constructs do not trap the following conditions:
- Warnings or informational messages that have a severity of 10 or lower.
- Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has a severity of 20 or higher and the database connection is not disrupted, TRY...CATCH will handle the error.
- Attentions, such as client-interrupt requests or broken client connections.
- When the session is ended by a system administrator by using the KILL statement.
The Connect link you provided is dead, but you can still vote to fix this here. See also this excellent article on SQL error handling, under 4.3 Query Timeout on Linked Servers.
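Until Microsoft changes this, about the best available workaround is the return-value trick from the question, combined with raising a local error yourself so that TRY...CATCH (and the caller) at least see something. A rough sketch, using the server/procedure names from the question as placeholders:

DECLARE @ret int
SELECT @ret = 4417   -- sentinel value the remote call should overwrite

BEGIN TRY
    EXEC @ret = [Server].[DB].dbo.RemoteSP

    -- If the remote batch was cut off by a timeout, @ret typically comes back
    -- NULL (or is left at the sentinel), so turn that into a real, catchable error.
    IF @ret IS NULL OR @ret = 4417
        RAISERROR ('Remote call to RemoteSP did not complete (possible query timeout).', 16, 1)
END TRY
BEGIN CATCH
    -- Log and/or re-raise as appropriate.
    DECLARE @msg nvarchar(4000)
    SET @msg = ERROR_MESSAGE()
    RAISERROR (@msg, 16, 1)
END CATCH

It still won't tell you that the cause was specifically a timeout, only that the remote call did not finish.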
I am running a stored procedure in SQL Server 2008 inside a try/catch. The stored procedure and the stored procs it calls raise a few errors but in the try/catch you only get the last error from the stored procedure that you are running.
Is there a way/trick to somehow catch ALL the errors generated by child stored proc calls while running a particular stored procedure? (Assume that you have no access to any of the stored procedures, so you can't modify where they write the error, i.e. you can't just change all the stored procedures to stop raising errors and instead write them to some table that you then read from in your catch.)
Here is a good resource for how to deal with error handling in SQL Server.
http://www.sqlservercentral.com/articles/Development/anerrorhandlingtemplatefor2005/2295/
However, some of the methods require that you have the ability to change the code in order to capture the errors. There is really no way of getting around this. You can't just ignore the error, keep processing, and then come around later to deal with the error. In most, if not all, languages, exceptions have to be dealt with at the time the exception was raised. T-SQL is no different.
I personally use a stored procedure to log any error whenever it occurs. Here is what I use:
CREATE PROCEDURE [dbo].[Error_Handler]
    @returnMessage bit = 'False'
WITH EXEC AS CALLER
AS
BEGIN
    INSERT INTO Errors (Number, Severity, State, [Procedure], Line, [Message])
    VALUES (
        ERROR_NUMBER(),
        ERROR_SEVERITY(),
        ERROR_STATE(),
        ISNULL(ERROR_PROCEDURE(), 'Ad-Hoc Query'),
        ISNULL(ERROR_LINE(), 0),
        ERROR_MESSAGE())

    IF (@returnMessage = 'True')
    BEGIN
        SELECT Number, Severity, State, [Procedure], Line, [Message]
        FROM Errors
        WHERE ErrorID = SCOPE_IDENTITY()
    END
END
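Called from a CATCH block it looks something like this (the failing statement is just a stand-in):

BEGIN TRY
    -- Work that might fail; division by zero here simply forces an error.
    SELECT 1 / 0
END TRY
BEGIN CATCH
    -- Log the error and, because @returnMessage is 'True', return the logged row.
    EXEC dbo.Error_Handler @returnMessage = 'True'
END CATCH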
If you have stored procs that are raising more than one error, they need to be replaced no matter what. You probably have data integrity errors in your database. That is a critical, "everything needs to stop right now until this is fixed" kind of issue.

If you can't replace them and they were incorrectly written to allow processing to continue when an error was reached, then I know of no way to find the errors. Errors are not recorded unless you tell them to be recorded.

If the stored procs belong to a product you bought from another vendor and that's why you can't change them, your best bet is to change to a vendor that actually understands how to program database code, because there is no salvaging a product written that badly.
You wouldn't have Java or C# methods raising error after error. Why do you expect SQL to allow this? An exception is an exception.
If the DB Engine is throwing errors then you have problems.
What I've done before is to separate the testing/checking code from the writes: find out what is wrong first and throw one exception. If there are no errors, do your writes.
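Roughly, that means something like this (the table and column names are made up):

DECLARE @CustomerID int
SET @CustomerID = 42

-- Do all the checks up front and fail once, with a clear message.
IF NOT EXISTS (SELECT 1 FROM dbo.Customers WHERE CustomerID = @CustomerID)
    RAISERROR ('Customer %d does not exist.', 16, 1, @CustomerID)
ELSE
BEGIN
    -- Everything checked out: now do the writes.
    UPDATE dbo.Customers SET IsActive = 1 WHERE CustomerID = @CustomerID
END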
Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures".
What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set.
The execution of the Natural proc from SQL works fine when we're just selecting it; however, when we insert the result into a table variable SQL seems to be starting a distributed transaction that in turn seems to be wreaking havoc with our connections.
Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior?
Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?
Check out SET REMOTE_PROC_TRANSACTIONS OFF which should disable it.
Or sp_serveroption to configure the linked server generally, not per batch.
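Both in one place, roughly (MAINFRAME_LINK is a placeholder for your linked server name; the sp_serveroption call uses the same 'remote proc transaction promotion' option shown in the other answer below):

-- Per connection/batch: stop remote proc calls from promoting to a distributed transaction.
SET REMOTE_PROC_TRANSACTIONS OFF

-- Or configure it once at the linked-server level.
EXEC sp_serveroption @server = N'MAINFRAME_LINK',
    @optname = N'remote proc transaction promotion',
    @optvalue = N'false'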
Because you are writing on the MS SQL side, you start a transaction. By default it escalates to a distributed transaction whether it needs to or not, even though the table variable does not participate in the transaction.
I've had similar issues before where the MS SQL side behaves differently based on whether MS SQL writes, whether it's inside a stored proc, and other factors. The most reliable way I found was to use dynamic SQL calls to my Sybase linked server...
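If you go the dynamic-SQL route, one shape it can take is EXECUTE ... AT, which runs the statement on the linked server itself (SYBASE_LINK and the query are placeholders, and this assumes RPC Out is enabled on the linked server):

-- Run the statement remotely instead of through the local query processor.
EXEC ('SELECT AccountNumber, Balance FROM dbo.Accounts WHERE IsActive = 1') AT SYBASE_LINK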
The following code sets the "Enable Promotion of Distributed Transactions" option to false for a linked server:
USE [master]
GO
EXEC master.dbo.sp_serveroption @server=N'REMOTE_SERVER', @optname=N'remote proc transaction promotion', @optvalue=N'false'
GO
This will allow you to insert the results of a linked server stored procedure call into a table variable.
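That is, something along these lines (the linked server, procedure, and column list are only placeholders for whatever your Natural proc returns):

-- The table variable's shape must match the remote proc's result set.
DECLARE @Results TABLE (
    AccountNumber varchar(20),
    Balance decimal(18, 2)
)

-- With 'remote proc transaction promotion' set to false, this insert no longer
-- tries to start a distributed (DTC) transaction.
INSERT INTO @Results (AccountNumber, Balance)
EXEC [REMOTE_SERVER].[RemoteDb].dbo.GetAccountBalances

-- Now massage the data and join it with native SQL data locally.
SELECT r.AccountNumber, r.Balance
FROM @Results AS r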
I'm not sure about DTC, but DTSX (Integration Services) may be useful for moving the data. However, if you can simply query the data, you may want to look at adding a linked server for direct access. You could then just write a simple query to populate your table based on a select from the linked server's table.
That's true. As you might guess, the Natural procedures we want to call do lookups and calculations that we'd like to keep at that level if possible.