SQL Stored Procedure: Poor performance until text is changed

We have run into a strange issue recently. We have a lot of stored procedures on a SQL Azure database, and it appears that a number of them keep choosing poor plans to execute the query. We update stats on the databases nightly. When we saw a stored procedure timing out over and over, we tried executing sp_recompile, passing in the procedure name; it did get recompiled, but that didn't solve the bad-plan problem. We tried actually rerunning the ALTER PROCEDURE statement (without changing the actual contents of the procedure) and that did nothing. Then, on a whim, we changed the text of the procedure by adding a newline at the end of it, right before the END statement. When we do this and run the ALTER PROCEDURE statement, the procedure suddenly kicks into gear and runs fast again.
I assume altering the text of the procedure "changes" the procedure more than sp_recompile does and forces a "harder" recompilation, and thus a better query plan is chosen? The problem is, while this solves the issue, there is no way to maintain a system of over 100 production databases where this seems to randomly happen. Has anyone else seen this type of issue and does anyone know what the underlying problem may be? How does one maintain a system like this?
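For concreteness, here is a minimal sketch of the mitigation described above, plus a more surgical plan eviction (dbo.MyProc is a placeholder name, and whether plan-handle eviction is available depends on the SQL Azure version):

-- Mark the procedure so a fresh plan is compiled on its next execution
EXEC sp_recompile N'dbo.MyProc';

-- A more surgical option: evict just this procedure's cached plan
DECLARE @plan_handle varbinary(64);
SELECT @plan_handle = plan_handle
FROM sys.dm_exec_procedure_stats
WHERE object_id = OBJECT_ID(N'dbo.MyProc');
IF @plan_handle IS NOT NULL
    DBCC FREEPROCCACHE (@plan_handle);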

Related

SQL Server : unexpected performance issue between in-line and stored procedure execution

I have run into an enigma of sorts while researching a performance issue with a specific stored procedure. I did not create that stored procedure, and the logic is fairly ugly with nested selects in join statements, etc...
However, when I copy the logic of the stored procedure directly into a new query window, add the parameters at the top and execute it, this runs in under 400 milliseconds. Yet, when I call the stored procedure and execute it with the exact same parameter values, it takes 23 seconds to run!
This makes absolutely no sense at all to me!
Are there some server-level settings that I should check which could potentially be impacting this?
Thanks
Recompile your stored procedure.
The Microsoft documentation says
When a procedure is compiled for the first time or recompiled, the procedure's query plan is optimized for the current state of the database and its objects. If a database undergoes significant changes to its data or structure, recompiling a procedure updates and optimizes the procedure's query plan for those changes. This can improve the procedure's processing performance.
and
Another reason to force a procedure to recompile is to counteract the "parameter sniffing" behavior of procedure compilation. When SQL Server executes procedures, any parameter values that are used by the procedure when it compiles are included as part of generating the query plan. If these values represent the typical ones with which the procedure is subsequently called, then the procedure benefits from the query plan every time that it compiles and executes. If parameter values on the procedure are frequently atypical, forcing a recompile of the procedure and a new plan based on different parameter values can improve performance.
If you run the SP's queries yourself (in SSMS, maybe), they get compiled and run with a fresh plan, which is why the inline version can be fast while the procedure is slow.
How to recompile your SP? (See the doc page linked above; a sketch of these options follows the list below.)
You can rerun its definition to recompile it once. That may be enough if the procedure was first defined long ago in the life of your database and your tables have grown a lot.
You can put CREATE PROCEDURE ... WITH RECOMPILE AS into its definition so it will recompile every time you run it.
You can EXEC procedure WITH RECOMPILE; when you run it.
You can restart your SQL Server. (This can be a nuisance; booting the server magically makes bad things stop happening and nobody's quite sure why.)
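For reference, a minimal sketch of those options against a placeholder procedure (dbo.MyProc and its body are made up):

-- A placeholder procedure; WITH RECOMPILE in the definition forces a
-- fresh plan on every execution
CREATE PROCEDURE dbo.MyProc
    @param1 int
WITH RECOMPILE
AS
    SELECT @param1 AS echoed;
GO

-- Flag the proc once; a fresh plan is built on its next run
EXEC sp_recompile N'dbo.MyProc';
GO

-- Or recompile for this one call only
EXEC dbo.MyProc @param1 = 42 WITH RECOMPILE;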
Recompiling takes some time. But it takes far less time than a complex query with a bad execution plan would.
So... I ended up turning the nested selects in the joins into table variables, and now the sproc executes in 60 ms, while the in-line SQL takes 250 ms.
However, I still do not understand why the sproc was performing so much slower than the in-line SQL version with the original nested SQL logic.
I mean, both were using the exact same SQL logic, so why was the sproc taking 23 seconds while the in-line version took 400 ms?
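For illustration, a hedged sketch of the kind of rewrite described, assuming made-up Orders and Payments tables:

-- Before (schematic): nested select inside the join
-- SELECT o.OrderID, t.Total
-- FROM dbo.Orders o
-- JOIN (SELECT CustomerID, SUM(Amount) AS Total
--       FROM dbo.Payments GROUP BY CustomerID) t
--   ON t.CustomerID = o.CustomerID;

-- After: materialize the subquery into a table variable first
DECLARE @Totals TABLE (CustomerID int PRIMARY KEY, Total money);

INSERT INTO @Totals (CustomerID, Total)
SELECT CustomerID, SUM(Amount)
FROM dbo.Payments
GROUP BY CustomerID;

SELECT o.OrderID, t.Total
FROM dbo.Orders o
JOIN @Totals t ON t.CustomerID = o.CustomerID;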

Performance problem with 8 Nested Stored Procedures

I have a performance problem
I need to run a stored procedure from .NET 1.1. This stored procedure calls 8 stored procedures. Each one of them processes information to build a comparison between old and new information and then affects the physical table in the database.
The problem shows up when I try to run it directly from SSMS. The server starts choking, getting so slow that it is almost impossible to work, and I think the infrastructure people have to restart the service directly on the server.
I'm working in a development environment, so it is not much of a problem there, but I can't push this to the production environment.
I've been thinking of using the nested procedures only for comparison purposes so they never touch physical data: retrieve their results into temporary tables in the principal procedure, then open my try-catch and begin-end transaction blocks and apply the changes to the database in the principal procedure using the information in the temp tables.
My principal stored procedure looks as follows. Is this the best way I can do this?
create proc spTest
as
/*Some processes here, temporary tables, etc...*/
begin try
begin distributed transaction
exec sp_nested1
exec sp_nested2
exec sp_nested3
exec sp_nested4
exec sp_nested5
exec sp_nested6
exec sp_nested7
exec sp_nested8
/*more processes here, updates, deletes, extra inserts, etc...*/
commit transaction
end try
begin catch
rollback transaction
DECLARE @ERROR VARCHAR(3000)
SELECT @ERROR = CONVERT(VARCHAR(3000),ERROR_MESSAGE())
RAISERROR(@ERROR,16,32)
RETURN
end catch
The basic structure of each nested stored proc is similar but doesn't call any other proc, only each one has their own try and catch blocks.
Any help will be really appreciated... The version I'm using is SQL Server 2005.
Thank you all in advance....
First, when things are slow, there is likely a problem in what you wrote. The first place to look is the execution plan of each stored proc. Do you have table scans?
Have you run each one individually and seen how fast each one is? This would help you define whether the problem is the 8 procs or something else. You appear to have a lot of steps involved in this, the procs may or may not even be the problem.
Are you processing data row by row with a cursor, a while loop, a scalar user-defined function, or a correlated subquery? This can affect speed greatly. Do you have the correct indexing? Are your query statements sargable (see the quick sketch below)? I see you have a distributed transaction; are you sure the user running the proc has the correct rights on the other servers, and that those servers exist and are running? Are you running out of room in tempdb? Do you need to run this in batches rather than trying to update millions of records across multiple servers?
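A quick sketch of sargability, with a made-up Orders table:

-- Non-sargable: the function wrapped around the column blocks an index seek
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2010;

-- Sargable rewrite: leave the column bare so a seek on an OrderDate index is possible
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20100101' AND OrderDate < '20110101';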
Without seeing this mess, it is hard to determine what might be causing it to slow.
But I will share how I work with long, complex procs. First, they all have a test variable that I use to roll back the transactions at the end until I'm sure I'm getting the right actions happening. I also return the results of what I have inserted before doing the rollback (see the sketch below). Now this initially isn't going to help the speed problem, but set it up anyway, because if you can't figure out the problem from the execution plan, then what you probably want to do is comment out everything but the first step and run the proc in test mode (and roll back), then keep adding steps until you see the one it is getting stuck on. Of course it may be more than one.
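A minimal sketch of that test-variable pattern (proc and table names are placeholders):

CREATE PROCEDURE dbo.MyComplexProc
    @test bit = 1        -- 1 = dry run: show the results, then roll everything back
AS
BEGIN
    BEGIN TRANSACTION;

    -- ...the real steps go here; one placeholder step shown...
    UPDATE dbo.SomeTable SET SomeCol = SomeCol;

    -- Return what would have been committed
    SELECT * FROM dbo.SomeTable;

    IF @test = 1
        ROLLBACK TRANSACTION;
    ELSE
        COMMIT TRANSACTION;
END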

How to troubleshoot a stored procedure?

What is the best way to troubleshoot a stored procedure in SQL Server? I mean, where do you start, etc.?
Test each SELECT statement (if any) outside of your stored procedure to see whether it returns the expected results;
Make INSERT and UPDATE statements as simple as possible;
Try to test inserts and updates outside of your SP so that you can check they give the expected results;
Use the debugger provided with SSMS Express 2008.
Visual Studio 2008 / 2010 has a debug facility. Simply connect to your SQL Server instance in 'Server Explorer' and browse to your stored procedure.
Visual Studio 'Test Edition' also can generate Unit Tests around your stored procedures.
Troubleshooting a complex stored proc is far more than just determining whether you can get it to run or not and finding the step which won't run. What is most critical is whether it actually returns the correct results or performs the correct actions.
There are two kinds of stored procs that need extensive abilities to troubleshoot. First is the proc which creates dynamic SQL. I never create one of these without an input parameter of @debug. When this parameter is set, I have the proc print the SQL statement as it would have run, and not run it. Almost every time, this leads you right to the problem, as you can then see the syntax error in the generated SQL code. You can also run this SQL code to see if it is returning the records you expect.
Now with complex procs that have many steps that affect data, I always use an @test input parameter. There are two things I do with the @test parameter: first, I make it roll back the actions so that a mistake in development won't mess up the data; second, I have it display the data before it rolls back to see what the results would have been. (These actually appear in the reverse order in the proc; I just think of them in this order.)
Now I can see what would have gone into the table or been deleted from the tables without affecting the data permanently. Sometimes I might start with a select of the data as it was before any actions and then compare it to a select run afterwards.
Finally, I often want to log the actions of a complex proc and see exactly what steps happened. I don't want those logs to get rolled back if the proc hits an error, so I set up a table variable for the logging information I want at the start of the proc. After each step (or after an error, depending on what I want to log), I insert into this table variable. After the rollback or commit statement, I select the results of the table variable, or use those results to log to a permanent logging table. This can be especially nice if you are using dynamic SQL, because you can log the SQL that was run; then, when something strange fails on prod, you have a record of which statement was run when it failed. You do this in a table variable because those do not go out of scope in a rollback. A sketch combining these patterns follows.
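A minimal sketch combining @debug, @test, and the table-variable log (all object names are placeholders; the real dynamic SQL would be built up from parameters):

CREATE PROCEDURE dbo.MyDynamicProc
    @debug bit = 0,      -- 1 = print the generated SQL instead of running it
    @test bit = 0        -- 1 = do the work, show the log, then roll the work back
AS
BEGIN
    -- Table variable for logging: its rows are NOT undone by a rollback
    DECLARE @log TABLE (step varchar(100), logged_at datetime DEFAULT GETDATE());
    DECLARE @sql nvarchar(max);

    SET @sql = N'UPDATE dbo.SomeTable SET SomeCol = SomeCol;';

    IF @debug = 1
        PRINT @sql;                      -- inspect the generated statement
    ELSE
    BEGIN
        BEGIN TRANSACTION;
        EXEC sp_executesql @sql;
        INSERT INTO @log (step) VALUES ('ran dynamic statement');

        IF @test = 1
            ROLLBACK TRANSACTION;        -- dry run: the data change is undone...
        ELSE
            COMMIT TRANSACTION;
    END

    SELECT step, logged_at FROM @log;    -- ...but the log rows survive
END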
In SSMS, you can simply start by opening the proc., and clicking on the check mark button (Parse) next to the Execute button on the menu bar. It reports any errors it finds.
If there are no errors there and your stored procedure is harmless to run (you're not inserting into tables, just creating a temp table, for example), then comment out the CREATE PROCEDURE x (or ALTER PROCEDURE x) and declare all the parameters by copying that part, then define them with valid values. Then run it to see what happens.
Maybe this is simple, but it's a place to start.

Why does the SqlServer optimizer get so confused with parameters?

I know this has something to do with parameter sniffing, but I'm just perplexed at how something like the following example is even possible with a piece of technology that does so many complex things well.
Many of us have run into stored procedures that intermittently run several orders of magnitude slower than usual, and then if you copy out the SQL from the procedure and use the same parameter values in a separate query window, it runs as fast as usual.
I just fixed a procedure like that by converting this:
alter procedure p_MyProc
(
@param1 int
) as -- do a complex query with @param1
to this:
alter procedure p_MyProc
(
@param1 int
)
as
declare @param1Copy int;
set @param1Copy = @param1;
-- Do the query using @param1Copy
It went from running in over a minute back down to under one second, like it usually runs. This behavior seems totally random. For 9 out of 10 @param1 inputs, the query is fast, regardless of how much data it ends up needing to crunch, or how big the result set is. But for that 1 out of 10, it just gets lost. And the fix is to replace an int with the same int in the query?
It makes no sense.
[Edit]
@gbn linked to this question, which details a similar problem:
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
I hesitate to cry "Bug!" because that's so often a cop-out, but this really does seem like a bug to me. When I run the two versions of my stored procedure with the same input, I see identical query plans. The only difference is that the original takes more than a minute to run, and the version with the goofy parameter copying runs instantly.
The 1 in 10 gives the wrong plan that is cached.
RECOMPILE adds an overhead; masking allows each parameter to be evaluated on its own merits (very simply).
By wrong plan, what if the 1 in 10 generates a scan on index 1 but the other 9 produce a seek on index 2? E.g., the 1 in 10 matches, say, 50% of the rows?
Edit: other questions
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
Stored Procedure failing on a specific user
Edit 2:
Recompile does not work because the parameters are sniffed at compile time.
From other links (pasted in):
This article explains...
...parameter values are sniffed during compilation or recompilation...
Finally (edit 3):
Parameter sniffing was probably a good idea at the time and probably works well mostly. We use masking across the board for any parameter that will end up in a WHERE clause.
We don't strictly need to, because we know that only a few procs (more complex ones, e.g. reports, or ones with many parameters) could cause issues, but we use it for consistency.
And because of the fact that it will come back and bite us when the users complain and it turns out we should have used masking...
It's probably caused by the fact that SQL Server compiles stored procedures and caches execution plans for them and the cached execution plan is probably unsuitable for this new set of parameters. You can try WITH RECOMPILE option to see if it's the cause.
EXECUTE MyProcedure [parameters] WITH RECOMPILE
WITH RECOMPILE option will force SQL Server to ignore the cached plan.
I have had this problem repeatedly on moving my code from a test server to production, on two different builds of SQL Server 2005. I think there are some big problems with the parameter sniffing in some builds of SQL Server 2005. I never had this problem on the dev server, or on two local Developer Edition boxes. I've never seen it be such a big problem on SQL Server 2000 or any version going back to 6.5 either.
The cases where I found it, the only workaround was to use parameter masking, and I'm still hoping the DBAs will patch up the production server to SP3 so it will maybe go away. Things which did not work:
using the WITH RECOMPILE hint on EXEC or in the SP itself.
dropping and recreating the SP
using sp_recompile
Note that in the case I was working on, the data was not changing since an earlier invocation - I had simply scripted the code onto the production box which already had data loaded. All the invocations came with no changes to the data since before the SPs existed.
Oh, and if SQL Server can't handle this without masking, they need to add a parameter modifier NOSNIFF or something. What happens if you mask all your parameters, so you have @Something_parm and @Something_var, and someone changes the code to use the wrong one, and all of a sudden you have a sniffing problem again? Plus you are polluting the namespace within the SP. All these SPs I am "fixing" drive me nuts because I know they are going to be a maintenance nightmare for the less experienced staff I will be handing this project off to one day.
Could you check in SQL Profiler how many reads and how much execution time it takes when it is quick and when it is slow? It could be related to the number of rows fetched depending on the parameter value. It doesn't sound like a cached-plan issue.
I know this is a 2 year old thread, but it might help someone down the line.
Once you analyze the query execution plans and know what the difference is between the two plans (query by itself and query executing in the stored procedure with a flawed plan), you can modify the query within the stored procedure with a query hint to resolve the issue. This works in a scenario where the query is using the incorrect index when executed in the stored procedure. You would add the following after the table in the appropriate location of your procedure:
SELECT col1, col2, col3
FROM YourTableHere WITH (INDEX (PK_YourIndexHere))
This will force the query plan to use the correct index which should resolve the issue. This does not answer why it happens but it does provide a means to resolve the issue without worrying about copying the parameters to avoid parameter sniffing.
As indicated, it may be a compilation issue. Does this issue still occur if you revert the procedure? One thing you can try, if this occurs again, to force a recompilation is to use:
sp_recompile [ @objname = ] 'object'
Right from BOL in regard to the @objname parameter:
Is the qualified or unqualified name of a stored procedure, trigger, table, or view in the current database. object is nvarchar(776), with no default. If object is the name of a stored procedure or trigger, the stored procedure or trigger will be recompiled the next time that it is run. If object is the name of a table or view, all the stored procedures that reference the table or view will be recompiled the next time they are run.
If you drop and recreate the procedure you could cause clients to fail if they try and execute the procedure. You will also need to reapply security settings.
Is there any chance that the parameter value being provided is sometimes not int?
Is every query reference to the parameter comparing it with int values, without functions and without casting?
Can you increase the specificity of any expressions using the parameter to make the use of multifield indexes more likely?
It is a problem with plan caching, and it isn't always related to parameters, as it was in your scenario.
(Parameter Sniffing problems occur when a proc is called with unusual parameters the FIRST time it runs, and so the cached plan works great for those odd values, but lousy for most other times the proc is called.)
We had a similar situation when the app team deleted all old records from a highly-used log table on a production server. Removing records improves performance, right? Nope, performance immediately tanked.
Turns out that a frequently-used stored proc was recompiled right when the table was nearly empty, and it cached an extremely poor execution plan ("hey, there's only 50 records here, might as well do a Table Scan!"). Would have happened no matter what the initial parameters.
Our fix was to force a recompile with sp_recompile.
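Per the BOL quote earlier in the thread, sp_recompile against the table itself does the same job (the table name here is made up):

EXEC sp_recompile N'dbo.AppLog';  -- every proc referencing dbo.AppLog recompiles on its next run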

Stored Procedure Timing out.. Drop, then Create and it's up again?

I have a web service that calls a stored procedure in an MS SQL 2005 DB. My web service was timing out on a call to one of the stored procedures I have (this has been in production for a couple of months with no timeouts), so I tried running the query in Query Analyzer, which also timed out. I decided to drop and recreate the stored procedure with no changes to the code, and it started performing again.
Questions:
Would this typically be an error in the TSQL of my Stored Procedure?
-Or-
Has anyone seen this and found that it is caused by some problem with the compilation of the Stored Procedure?
Also, of course, any other insights on this are welcome as well.
Similar:
SQL poor stored procedure execution plan performance - parameter sniffing
Parameter Sniffing (or Spoofing) in SQL Server
Have you been updating your statistics on the database? This sounds like the original SP was using an out-of-date query plan. sp_recompile might have helped rather than dropping/recreating it.
There are a few things you can do to fix/diagnose this.
1) Update your statistics on a regular/daily basis. SQL generates query plans (think: optimizations) based on your statistics. If they get "stale", your stored procedure might not perform as well as it used to, especially as your database changes and grows.
2) Look at your stored procedure. Are you using temp tables? Do those temp tables have indexes on them? Most of the time you can find the culprit by looking at the stored procedure (or the tables it uses).
3) Analyze your procedure while it is "hanging": take a look at your query plan. Are there any missing indexes that would help keep your procedure's query plan from going nuts? Look for things like table scans and your other most expensive queries. (A missing-index DMV sketch follows below.)
It is like finding a name in a phone book, sure reading every name is quick if your phone book only consists of 20 or 30 names. Try doing that with a million names, it is not so fast.
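If it helps, one way to check what the optimizer itself has flagged as missing indexes is the standard missing-index DMVs (available from SQL Server 2005 on; the results reset when the server restarts):

SELECT TOP 10
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s
    ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact DESC;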
This happened to me after moving a few stored procs from development into production. It didn't happen right away; it happened after the production data grew over a couple of months. We had been using functions to create columns, and in some cases there were several function calls for each row. When the data grew, so did the function call time. The original approach was good in a testing environment but failed under a heavy load. Check if there are any function calls in the proc.
I think the table the SP is trying to use is locked by some process. Use "exec sp_who" and "exec sp_lock" to find out what is going on with your tables.
If it was working quickly, is (with the passage of several months) no longer working quickly, and the code has not been changed, then it seems likely that the underlying data has changed.
My first guess would be data growth--so much data has been added over the past few months (or past few hours, ya never know) that the query is now bogging down.
Alternatively, as CodeByMoonlight implies, the data may have changed so much over time that the original query plan built for the procedure is no longer good (though this assumes that the query plan has not cleared and recompiled at all over a long period of time).
Similarly, index/database statistics may also be out of date. Do you have AUTO_UPDATE_STATISTICS on or off for the database?
Again, this might only help if nothing but the data has changed over time.
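A quick way to check and, if needed, enable the setting (YourDb is a placeholder):

-- Returns 1 if auto-update statistics is on for the database
SELECT DATABASEPROPERTYEX(N'YourDb', 'IsAutoUpdateStatistics') AS auto_update_stats;

-- Enable it if it is off
ALTER DATABASE YourDb SET AUTO_UPDATE_STATISTICS ON;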
Parameter sniffing.
Answered a day or 3 ago: "strange SQL server report performance problem related with update statistics"