SQL Server: verbose error messages?

Is there a configuration option for MS SQL Server that enables more verbose error messages?
Specific example: I would like to see the actual field values of the record that violates a constraint during an insert, to help track down a bug in stored procedures which I haven't been able to reproduce.

I don't believe there is any such option. There are trace flags that give more information about deadlocks, but I've never heard of one that gives more information on a constraint violation.
If you control the application that is causing the crash, then extend its error handling (as Jenn suggested) to include parameter values etc. Once you have the parameter values you can get a copy of the live setup onto a non-live server and start debugging the issue.
For more options, can any of the users affected reliably reproduce the issue? If they can then you might be able to run a profiler trace to capture the actual statements / parameter values being sent to the database. Of course, if you can figure out the steps to reproduce the issue then you can probably use more traditional debugging methods...
You don't say what the constraint is; I'm assuming it is a fairly complex one. If so, could it be broken down into several constraints so you get more of a hint about the problem with the data?
You could also re-write the constraint as a trigger, which could then include more information in the error that it raises - although this would obviously need testing before being deployed to a production server!
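As a rough sketch of the trigger idea, assuming a hypothetical dbo.Orders table and a made-up business rule (none of these names come from the question):

-- Hypothetical example: the table, columns and rule are invented for illustration.
CREATE TRIGGER trg_Orders_CheckTotal
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1 FROM inserted WHERE Quantity * UnitPrice > 10000)
    BEGIN
        DECLARE @msg VARCHAR(400);
        -- build an error message that includes the offending values
        SELECT TOP 1
            @msg = 'Order total too large: OrderID=' + CAST(OrderID AS VARCHAR(20))
                 + ', Quantity=' + CAST(Quantity AS VARCHAR(20))
                 + ', UnitPrice=' + CAST(UnitPrice AS VARCHAR(20))
        FROM inserted
        WHERE Quantity * UnitPrice > 10000;

        RAISERROR(@msg, 16, 1);
        ROLLBACK TRANSACTION;
    END
END

Unlike a plain constraint violation, the error text above can carry the actual field values that caused the failure.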
Personally, I would go with changing the error handling of the application. It is probably the less risky change.
PS The application that I helped write, and now spend my time supporting, logs quite a lot of data when an unhandled exception occurs. If it is during a save then our data access layer attaches the complete list of all commands that were being run as part of the save transaction including parameter values. This has proved to be invaluable on many occasions, including some when tracking down constraint violations.

In a stored proc, what I do to get better information about errors in a complex SP is take advantage of the fact that table variables are not affected by a rollback. So I put the information I want to use for troubleshooting into table variables at the time it is created, and then if I hit the CATCH block and roll back, after the rollback I insert the data from the table variables into an exception table along with some metadata like the datetime.
With some thought you can design an exception table that will capture what you need from just about any proc: for instance, you could concatenate all the input variables into one field, you could record the step number that failed (of course then you have to assign step numbers to a variable), or you could log every step along the way so that the last one logged is the one it failed on. Believe me, when you are troubleshooting a 100-line SP, this can come in handy. If I have dynamic SQL in the proc, I can log the SQL variable that contains the dynamic code that was run.
The beauty of this is that now you don't have to try to reproduce the error: you know what the input parameters were and any other information you find useful. Yes, it can take a bit of time to set up, but once you have done it, it is relatively easy to get in the habit of putting it into any complex proc that you will want to log errors on.
You might also want to set up a non-generalized one if you want to capture specific data values from a select used for an insert, or the result set of a select that would tell you what would have been updated or deleted. Then you would have that only if the proc failed. This is a little more work than the general exception table but may be needed in some complex cases.
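A minimal sketch of the pattern (the exception table, proc and parameter names here are invented for illustration):

-- General-purpose exception table
CREATE TABLE dbo.ProcExceptionLog (
    LogID        INT IDENTITY(1,1) PRIMARY KEY,
    ProcName     SYSNAME,
    StepNumber   INT,
    InputParams  VARCHAR(4000),
    ErrorMessage VARCHAR(4000),
    LoggedAt     DATETIME DEFAULT GETDATE()
);
GO
CREATE PROCEDURE dbo.usp_DoComplexWork
    @CustomerID INT,
    @OrderDate  DATETIME
AS
BEGIN
    -- table variable: keeps its rows through a rollback, unlike a real table
    DECLARE @log TABLE (StepNumber INT, InputParams VARCHAR(4000));
    DECLARE @step INT;

    BEGIN TRY
        BEGIN TRANSACTION;

        SET @step = 1;
        INSERT INTO @log (StepNumber, InputParams)
        VALUES (@step, 'CustomerID=' + CAST(@CustomerID AS VARCHAR(20))
                     + ';OrderDate=' + CONVERT(VARCHAR(30), @OrderDate, 121));

        -- ...the real work goes here, bumping @step and logging after each step...

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;   -- @log survives the rollback

        INSERT INTO dbo.ProcExceptionLog (ProcName, StepNumber, InputParams, ErrorMessage)
        SELECT OBJECT_NAME(@@PROCID), StepNumber, InputParams, ERROR_MESSAGE()
        FROM @log;
    END CATCH
END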

Related

Where do you put SQL RAISERROR code?

I'm trying to set up a custom error message to pass to MS Access (from SQL Server) when a user enters a duplicate key (instead of system message 2627). I've read up on sp_addmessage and RAISERROR and TRY/CATCH blocks which all make perfect sense. But nowhere I've looked does it seem to say where you put the RAISERROR code (and TRY/CATCH block) so it will actually pass back to the application. So, where does the code go?
Don't think about it in terms of users entering duplicate keys. Instead, think in terms of users just entering keys, some of which turn out to be duplicates when you try to insert them. It's a subtle difference, but it helps you here, because it means you think in terms of having your code available for all new table inserts instead of just a specific type of insert.
When a user enters a key, an INSERT sql statement runs. If the key is a duplicate and you have the constraints defined on the table to prevent that, the INSERT statement fails. If your Access application has you writing custom SQL, you can wrap this in a TRY/CATCH, and put the RAISERROR in the CATCH block. If your Access application is such that you never see any SQL, you may be stuck, and have to put up with the built-in behavior.
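For instance, if the inserts go through a stored procedure, the TRY/CATCH and RAISERROR live in that procedure and Access simply calls it. A sketch, with a made-up dbo.Customers table:

CREATE PROCEDURE dbo.usp_InsertCustomer
    @CustomerCode VARCHAR(20),
    @CustomerName VARCHAR(100)
AS
BEGIN
    BEGIN TRY
        INSERT INTO dbo.Customers (CustomerCode, CustomerName)
        VALUES (@CustomerCode, @CustomerName);
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 2627   -- unique/primary key violation
        BEGIN
            -- this is the message Access will see instead of 2627
            RAISERROR('A customer with code %s already exists.', 16, 1, @CustomerCode);
        END
        ELSE
        BEGIN
            DECLARE @msg NVARCHAR(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);   -- re-raise anything unexpected
        END
    END CATCH
END

Access then executes the procedure instead of inserting into the table directly, so the custom message is what comes back to the application.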

How can I tell what the parameter values are for a problem stored procedure?

I have a stored procedure that causes blocking on my SQL Server database. Whenever it blocks for more than X seconds we get notified with what query is being run, and it looks similar to the following.
CREATE PROC [dbo].[sp_problemprocedure] (
    @orderid INT
)
AS
--procedure code
How can I tell what the value is for @orderid? I'd like to know the value because this procedure will run 100+ times a day but only causes blocking a handful of times, and if we can find some sort of pattern between the order IDs maybe I'd be able to track down the problem.
The procedure is being called from a .NET application if that helps.
Have you tried printing it from inside the procedure?
http://msdn.microsoft.com/en-us/library/ms176047.aspx
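For example (a sketch; the real procedure body is omitted, and PRINT output is only visible in tools like SSMS, so for a .NET caller you would more likely write the value to a table):

ALTER PROC [dbo].[sp_problemprocedure] (
    @orderid INT
)
AS
BEGIN
    PRINT 'sp_problemprocedure called with @orderid = '
        + ISNULL(CAST(@orderid AS VARCHAR(20)), 'NULL');
    --procedure code
END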
If it's being called from a .NET application you could easily log the parameter being passed from the .NET app, but if you don't have access to that code, you can also use SQL Server Profiler. Filters can be set on the command type (i.e. procs only) as well as the database that is being hit, otherwise you will be overwhelmed with all the information a trace can produce.
Link: Using SQL Server Profiler
1) Rename the procedure
2) Create a logging table
3) Create a new procedure (same signature/params) which first logs the params and a starting timestamp, calls the original, and then logs the end timestamp after the call finishes
4) Create a synonym for this new proc with the name of the original
Now you have a log for all calls made by whatever app...
You can disable/enable the logging at any time by simply redefining the synonym to point to the logging wrapper or to the original...
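Sketch of those four steps, using the procedure from the question (the _original/_logged suffixes and the ProcCallLog table are made-up names):

-- 1) rename the original
EXEC sp_rename 'dbo.sp_problemprocedure', 'sp_problemprocedure_original';
GO
-- 2) a logging table
CREATE TABLE dbo.ProcCallLog (
    LogID     INT IDENTITY(1,1) PRIMARY KEY,
    OrderID   INT,
    StartedAt DATETIME,
    EndedAt   DATETIME
);
GO
-- 3) a wrapper with the same signature that logs, calls the original, then logs again
CREATE PROC dbo.sp_problemprocedure_logged (
    @orderid INT
)
AS
BEGIN
    DECLARE @logid INT;

    INSERT INTO dbo.ProcCallLog (OrderID, StartedAt)
    VALUES (@orderid, GETDATE());
    SET @logid = SCOPE_IDENTITY();

    EXEC dbo.sp_problemprocedure_original @orderid;

    UPDATE dbo.ProcCallLog SET EndedAt = GETDATE() WHERE LogID = @logid;
END
GO
-- 4) a synonym with the original name, so callers don't change
CREATE SYNONYM dbo.sp_problemprocedure FOR dbo.sp_problemprocedure_logged;

Calls that never finish show up as rows with a StartedAt but no EndedAt, along with the @orderid that was passed.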
The easiest way would be to run a profiler trace. You'll want to capture calls to the stored procedure.
Really though, that is only going to tell you part of the story. Personally I would start with the code. Try to batch big updates into smaller batches. Try to avoid long-running explicit transactions if they're not necessary. Look at your triggers (if any) and cascading foreign keys and make sure those are efficient.
The easiest way is to do the following:
1) In .NET, grab the date-time just before running the procedure
2) In .NET, grab the date-time after the procedure is complete
3) In .NET, do some date-time math, and if it is "slow", write to a file (log) those start and end date-times, user info, all the parameters, etc.

How to troubleshoot a stored procedure?

What is the best way to troubleshoot a stored procedure in SQL Server? I mean, where do you start, etc.?
Test each SELECT statement (if any) outside of your stored procedure to see whether it returns the expected results;
Make INSERT and UPDATE statements as simple as possible;
Try to test inserts and updates outside of your SP so that you can check they give the expected results;
Use the debugger provided with SSMS Express 2008.
Visual Studio 2008 / 2010 has a debug facility. Simply connect to your SQL Server instance in 'Server Explorer' and browse to your stored procedure.
Visual Studio 'Test Edition' also can generate Unit Tests around your stored procedures.
Troubleshooting a complex stored proc is far more than just determining if you can get it to run or not and finding the step which won't run. What is most critical is whether it actually returns the correct results or performs the correct actions.
There are two kinds of stored procs that need extensive ability to troubleshoot. First is the proc which creates dynamic SQL. I never create one of these without an input parameter of @debug. When this parameter is set, I have the proc print the SQL statement as it would have run, rather than run it. Almost every time, this leads you right to the problem, as you can then see the syntax error in the generated SQL code. You can also run this SQL code to see if it returns the records you expect.
Now, with complex procs that have many steps that affect data, I always use a @test input parameter. There are two things I do with the @test parameter: first, I make it roll back the actions so that a mistake in development won't mess up the data. Second, I have it display the data before it rolls back, to see what the results would have been. (These actually appear in the reverse order in the proc; I just think of them in this order.)
Now I can see what would have gone into the table or been deleted from the tables without affecting the data permanently. Sometimes, I might start with a select of the data as it was before any actions and then compare it to a select run afterwards.
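A rough sketch of the @debug / @test idea (the table, the update and the parameter defaults are placeholders):

CREATE PROCEDURE dbo.usp_ComplexUpdate
    @CustomerID INT,
    @debug BIT = 0,   -- print the dynamic SQL instead of running it
    @test  BIT = 0    -- show the results, then roll everything back
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);
    -- in real code you would normally use sp_executesql with parameters;
    -- concatenation is shown here only to keep the PRINT idea simple
    SET @sql = N'UPDATE dbo.Orders SET Status = ''Archived'' '
             + N'WHERE CustomerID = ' + CAST(@CustomerID AS NVARCHAR(20));

    IF @debug = 1
    BEGIN
        PRINT @sql;              -- inspect the generated SQL without touching data
        RETURN;
    END

    BEGIN TRANSACTION;

    EXEC (@sql);
    -- ...more steps...

    IF @test = 1
    BEGIN
        SELECT * FROM dbo.Orders
        WHERE CustomerID = @CustomerID;   -- what the results would have been
        ROLLBACK TRANSACTION;             -- leave the data untouched
        RETURN;
    END

    COMMIT TRANSACTION;
END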
Finally, I often want to log the actions of a complex proc and see exactly what steps happened. I don't want those logs to get rolled back if the proc hits an error, so I set up a table variable for the logging information I want at the start of the proc. After each step (or after an error, depending on what I want to log), I insert into this table variable. After the rollback or commit statement, I select the results of the table variable or use those results to log to a permanent logging table. This can be especially nice if you are using dynamic SQL, because you can log the SQL that was run, and then when something strange fails on prod you have a record of which statement was run when it failed. You do this in a table variable because table variables are not affected by a rollback.
In SSMS, you can simply start by opening the proc and clicking on the check-mark button (Parse) next to the Execute button on the toolbar. It reports any errors it finds.
If there are no errors there and your stored procedure is harmless to run (you're not inserting into tables, just creating a temp table for example), then comment out the CREATE PROCEDURE x (or ALTER PROCEDURE x) line, declare all the parameters by copying that part, and then define them with valid values. Then run it to see what happens.
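For a hypothetical procedure with @CustomerID and @FromDate parameters, that scratch script would look something like this:

-- CREATE PROCEDURE dbo.usp_GetOrders   -- header commented out (made-up proc)
--     @CustomerID INT,
--     @FromDate   DATETIME
-- AS
DECLARE @CustomerID INT, @FromDate DATETIME;   -- declare the parameters instead
SET @CustomerID = 42;                          -- values that reproduce the issue
SET @FromDate   = '20240101';

-- ...the procedure body follows unchanged and can now be run a piece at a time...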
Maybe this is simple, but it's a place to start.

Handle error in SQL Trigger without failing transaction?

I've a feeling this might not be possible, but here goes...
I've got a table that has an insert trigger on it. When data is inserted into this table the trigger fires and parses a long varbinary column. This trigger performs some operations on the binary data and writes several entries into a second table.
What I have recently discovered is that sometimes the binary data is not "correct" (i.e. it does not conform to the spec it is supposed to - I have NO control over this whatsoever) and this can cause casting errors etc.
My initial reaction was to wrap things in TRY/CATCH blocks, but it appears this is not a solution either, as the execution of the CATCH means the transaction is doomed and I get a "Transaction doomed in trigger" error.
What is imperative is that the data still gets written to the initial table. I don't care if the data gets written to the second table or not.
I'm not sure if I can accomplish this or not, and would gratefully receive any advice.
What you could do is commit the transaction inside the trigger and then perform those casts.
I don't know if that's a possible solution to your problem though.
Another option would be to create a function IsYourBinaryValueOK which would check the column value. However, the check would have to be done with LIKE so as not to cause an error.
It doesn't sound like this code should run in an insert trigger since it is conceptually two different transactions. You would probably be better off with asynchronous processing such as service broker, a background nanny task that looks for 'not done' work, etc. You could also handle it by using a sproc to do the insert in one transaction and then having it call the do-other-work code afterwards.
If you absolutely have to do it in the trigger then you basically need an autonomous transaction. For some ideas see this link (the techniques apply to sql 2005 as well).
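A very rough sketch of the "validate before you cast" idea from the earlier suggestion. The payload layout, the tables and the header/length checks are all invented here; the real checks depend on the spec your binary data is supposed to follow:

CREATE TRIGGER trg_SourceTable_Parse
ON dbo.SourceTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.ParsedTable (SourceID, FirstValue)
    SELECT i.SourceID,
           CAST(SUBSTRING(i.Payload, 5, 4) AS INT)      -- the parsing/casting step
    FROM inserted AS i
    WHERE DATALENGTH(i.Payload) >= 8                    -- only rows that look valid
      AND SUBSTRING(i.Payload, 1, 2) = 0x4D5A;          -- expected header bytes

    -- rows that fail the checks are simply skipped, so the original insert
    -- into dbo.SourceTable still succeeds and the transaction is never doomed
END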

MS SQL Server 2005 - Stored Procedure "Spontaneously Breaks"

A client has reported repeated instances of very strange behaviour when executing a stored procedure.
They have code which runs off a cached transposition of a volatile dataset. A stored proc was written to reprocess the dataset on demand if:
1. The dataset had changed since the last reprocessing
2. The dataset has been unchanged for 5 minutes
(The second condition stops massive repeated recalculation during times of change.)
This worked fine for a couple of weeks, the SP was taking 1-2 seconds to complete the re-processing, and it only did it when required. Then...
The SP suddenly "stopped working" (it just kept running and never returned)
We changed the SP in a subtle way and it worked again
A few days later it stopped working again
Someone then said "we've seen this before, just recompile the SP"
With no change to the code we recompiled the SP, and it worked
A few days later it stopped working again
This has now repeated many, many times. The SP suddenly "stops working", never returning and the client times out. (We tried running it through management studio and cancelled the query after 15 minutes.)
Yet every time we recompile the SP, it suddenly works again.
I haven't yet tried WITH RECOMPILE on the appropriate EXEC statements, but I don't particularly want to do that anyway. It gets called hundreds of times an hour and normally does nothing (it only reprocesses the data a few times a day). If possible I want to avoid the overhead of recompiling what is a relatively complicated SP just to avoid something which "shouldn't" happen...
Has anyone experienced this before?
Does anyone have any suggestions on how to overcome it?
Cheers,
Dems.
EDIT:
The pseudo-code would be as follows:
read "a" from table_x
read "b" from table_x
If (a < b) return
BEGIN TRANSACTION
DELETE table_y
INSERT INTO table_y <3 selects unioned together>
UPDATE table_x
COMMIT TRANSACTION
The selects are "not pretty", but when executed in-line they execute in no time. Including when the SP refuses to complete. And the profiler shows it is the INSERT at which the SP "stalls"
There are no parameters to the SP, and sp_lock shows nothing blocking the process.
This is the footprint of parameter-sniffing. Yes, first step is to try RECOMPILE, though it doesn't always work the way that you want it to on 2005.
Update:
I would try statement-level RECOMPILE on the INSERT anyway, as this might be a statistics problem (oh yeah, check that automatic statistics updating is on).
If this does not seem to fit parameter sniffing, then compare the actual query plan from when it works correctly and from when it is running forever (use the estimated plan if you cannot get the actual, though actual is better). You are looking to see if the plan changes or not.
I totally agree with the parameter sniffing diagnosis. If you have input parameters to the SP which are varying (or even if they aren't varying) - be sure to mask them with a local variable and use the local variable in the SP.
You can also use the WITH RECOMPILE if the set is changing but the query plan is no longer any good.
In SQL Server 2008, you can use the OPTIMIZE FOR UNKNOWN feature.
Also, if your process involves populating a table and then using that table in another operation, I recommend breaking the process up into separate SPs and calling them individually WITH RECOMPILE. I think the plans generated at the outset of the process can sometimes be very poor (so poor as not to complete) when you populate a table and then use the results of that table to carry out an operation, because at the time of the initial plan the table was a lot different than it is after the initial insert.
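A sketch of the local-variable masking and (on 2008+) the OPTIMIZE FOR UNKNOWN hint - the procedure and table names are made up:

CREATE PROCEDURE dbo.usp_GetOrdersForCustomer
    @CustomerID INT
AS
BEGIN
    -- 1) mask the parameter with a local variable so the plan is built from
    --    average statistics rather than the first sniffed value
    DECLARE @LocalCustomerID INT;
    SET @LocalCustomerID = @CustomerID;

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID;

    -- 2) on SQL Server 2008 and later, a query hint does much the same thing
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR UNKNOWN);
END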
As others have said, something about the way the data or the source table statistics are changing is causing the cached query plan to go stale.
WITH RECOMPILE will probably be the quickest fix - use SET STATISTICS TIME ON to find out what the recompilation cost actually is before dismissing it out of hand.
If that's still not an acceptable solution, the best option is probably to try to refactor the insert statement.
You don't say whether you're using UNION or UNION ALL in your insert statement. I've seen INSERT INTO with UNION produce some bizarre query plans, particularly on pre-SP2 versions of SQL 2005.
Raj's suggestion of dropping and recreating the target table with SELECT INTO is one way to go.
You could also try selecting each of the three source queries into its own temporary table, then UNION those temp tables together in the insert.
Alternatively, you could try a combination of these suggestions - put the results of the union into a temporary table with SELECT INTO, then insert from that into the target table.
I've seen all of these approaches resolve performance problems in similar scenarios; testing will reveal which gives the best results with the data you have.
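As a sketch of the temp-table variant (column names and source tables are placeholders for the three unioned selects; UNION ALL is shown, use UNION if you need duplicate removal):

SELECT col1, col2 INTO #src1 FROM dbo.source_a;
SELECT col1, col2 INTO #src2 FROM dbo.source_b;
SELECT col1, col2 INTO #src3 FROM dbo.source_c;

DELETE table_y;

INSERT INTO table_y (col1, col2)
SELECT col1, col2 FROM #src1
UNION ALL
SELECT col1, col2 FROM #src2
UNION ALL
SELECT col1, col2 FROM #src3;

DROP TABLE #src1;
DROP TABLE #src2;
DROP TABLE #src3;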
Obviously changing the stored procedure (by recompiling) changes the circumstances that led to the lock.
Try to log the progress of your SP as described here or here.
I would agree with the answer given above in a comment: this sounds like an unclosed transaction, particularly if you are still able to run the select statement from Query Analyzer.
Sounds very much like there is an open transaction with a pending delete for table_y and the insert can't happen at this point.
When your SP locks up, can you perform an insert into table_y?
Do you have an index maintenance job?
Are your statistics up to date? One way to tell is examine the estimated and actual query plans for large variations.
As others have said, this sounds very likely to be an uncommitted transaction.
My best guess:
You'll want to make sure that table_y can be deleted completely and quickly.
If there are other stored procedures or external pieces of code that ever hold transactions on this table, you may be waiting forever. (They may error out and never close the transaction)
Another note: try using TRUNCATE if possible. It uses fewer resources than a DELETE with no WHERE clause:
truncate table table_y
Also, once an error happens within your OWN transaction, it will cause all following calls (every 5 minutes apparently) to "hang", unless you handle your error:
begin try
    begin tran
    -- do normal stuff
    commit
end try
begin catch
    if @@trancount > 0
        rollback
end catch
The very first error is what will give you information about the actual error. Seeing it hang in your own subsequent tests is just a secondary effect.
If you are doing these steps:
DELETE table_y
INSERT INTO table_y <3 selects unioned together>
You might want to try this instead
DROP TABLE table_y
SELECT INTO table_y <3 selects unioned together>
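In real syntax the INTO clause sits in the first SELECT of the union, something like this (columns and source tables are placeholders):

DROP TABLE table_y;

SELECT col1, col2
INTO table_y
FROM dbo.source_a
UNION ALL
SELECT col1, col2 FROM dbo.source_b
UNION ALL
SELECT col1, col2 FROM dbo.source_c;

Bear in mind that dropping and recreating table_y this way means any indexes or permissions on it would need to be re-created afterwards.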