Troubleshooting stored procedure execution time - sql

I have a stored procedure. One day it took 2 minutes to run; the next day it took almost 10 minutes.
There is NO change in the code.
So, what are some general troubleshooting tips for checking the stored procedure and its execution time?

It could be due to stored procedure parameter sniffing.
I recommend the brentozar article on parameter sniffing for your understanding.

Since no code has changed, your database or SQL Server may have resource issues.
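If resource pressure is suspected, a quick place to look is the live DMVs while the procedure is running slowly. A minimal sketch; the columns chosen are just a reasonable starting point, not an exhaustive diagnostic:

```sql
-- Sketch: see what currently-running requests are waiting on.
SELECT r.session_id,
       r.status,
       r.wait_type,              -- e.g. PAGEIOLATCH_*, LCK_M_*, CXPACKET
       r.wait_time,
       r.blocking_session_id,    -- non-zero means another session is blocking this one
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
```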

Related

SQL Server: unexpected performance issue between in-line and stored procedure execution

I have run into an enigma of sorts while researching a performance issue with a specific stored procedure. I did not create that stored procedure, and the logic is fairly ugly with nested selects in join statements, etc...
However, when I copy the logic of the stored procedure directly into a new query window, add the parameters at the top and execute it, this runs in under 400 milliseconds. Yet, when I call the stored procedure and execute it with the exact same parameter values, it takes 23 seconds to run!
This makes absolutely no sense at all to me!
Are there some server-level settings that I should check which could potentially be impacting this?
Thanks
Recompile your stored procedure.
The Microsoft documentation says
When a procedure is compiled for the first time or recompiled, the procedure's query plan is optimized for the current state of the database and its objects. If a database undergoes significant changes to its data or structure, recompiling a procedure updates and optimizes the procedure's query plan for those changes. This can improve the procedure's processing performance.
and
Another reason to force a procedure to recompile is to counteract the "parameter sniffing" behavior of procedure compilation. When SQL Server executes procedures, any parameter values that are used by the procedure when it compiles are included as part of generating the query plan. If these values represent the typical ones with which the procedure is subsequently called, then the procedure benefits from the query plan every time that it compiles and executes. If parameter values on the procedure are frequently atypical, forcing a recompile of the procedure and a new plan based on different parameter values can improve performance.
If you run the SP's queries yourself (in SSMS maybe) they get compiled and run.
How to recompile your SP? (See the doc page linked above.)
You can rerun its definition to recompile it once. That may be enough if the procedure was first defined long ago in the life of your database and your tables have grown a lot.
You can put CREATE PROCEDURE ... WITH RECOMPILE AS into its definition so it will recompile every time you run it.
You can EXEC procedure WITH RECOMPILE; when you run it.
You can restart your SQL Server. (This can be a nuisance; booting the server magically makes bad things stop happening and nobody's quite sure why.)
Recompiling takes some time. But it takes far less time than a complex query with a bad execution plan would.
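The recompile options listed above can be sketched in T-SQL. `dbo.MyProc` and `@SomeParam` are placeholder names, and `CREATE OR ALTER` requires SQL Server 2016 SP1 or later:

```sql
-- One-off: mark the procedure so its plan is rebuilt on the next execution.
EXEC sp_recompile N'dbo.MyProc';

-- Per-call: request a fresh plan for this execution only.
EXEC dbo.MyProc @SomeParam = 42 WITH RECOMPILE;

-- Permanent: recompile on every run (trades plan reuse for plan accuracy).
CREATE OR ALTER PROCEDURE dbo.MyProc
    @SomeParam int
WITH RECOMPILE
AS
BEGIN
    SELECT @SomeParam AS Result;  -- body unchanged in a real fix
END;
```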
So... I ended up turning the nested selects on the joins into table variables, and now the sproc executes in 60 ms, while the in-line SQL takes 250 ms.
However, I still do not understand why the sproc was performing so much slower than the in-line SQL version with the original nested SQL logic.
I mean, both were using the exact same SQL logic, so why was the sproc taking 23 seconds while the in-line version took 400 ms?

For long stored procedures, how to quickly identify the most time-consuming part?

Sometimes we need to deal with long stored procedures to make them run faster. What's the best way to quickly identify which part of the code is the slowest? For me, I just add some PRINT statements in the stored procedure and run it; then I can find which part is slow. Are there any alternative methods?
For me it's almost the same as you: just insert the start time and end time of each part of the procedure into a log table and then check the records. PRINT only helps you check a single run; a log table lets you see whether the procedure develops problems over time.
Execute the procedure with the execution plan enabled. This will help you identify which part of the procedure is taking the most time, and it will also suggest indexes you may need to add.
Before executing your script in SQL Server Management Studio, select "Include Actual Execution Plan" (or press Ctrl+M) and then run the script / procedure call.
In the Execution Plan window (next to the Results tab) you can see and analyse it in detail.
Use SQL Profiler to connect and observe each statement and its timing.
Use the events starting with SP: to observe, but be aware that Profiler can have its own impact on performance.
https://dba.stackexchange.com/questions/29284/how-to-profile-stored-procedures
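As a lighter-weight alternative to Profiler, an Extended Events session can capture per-statement durations inside procedures. A sketch, assuming SQL Server 2012 or later; the session name, file name, and 100 ms threshold are arbitrary choices:

```sql
-- Sketch: capture statements inside procedures that take longer than 100 ms.
CREATE EVENT SESSION [sp_timing] ON SERVER
ADD EVENT sqlserver.sp_statement_completed (
    ACTION (sqlserver.sql_text)
    WHERE duration > 100000)                 -- duration is in microseconds
ADD TARGET package0.event_file (SET filename = N'sp_timing.xel');

ALTER EVENT SESSION [sp_timing] ON SERVER STATE = START;
-- ...run the slow procedure, then inspect the .xel file...
ALTER EVENT SESSION [sp_timing] ON SERVER STATE = STOP;
DROP EVENT SESSION [sp_timing] ON SERVER;
```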
Concur with Raffaello. Specifically:
--initialise
DELETE FROM DB..Perf_Log;
DECLARE @lastTime datetime;
SET @lastTime = GETDATE();
/* do some work */
--add this block after each big block of functionality that you want to test
INSERT INTO DB..Perf_Log VALUES ('did some stuff 1', DATEDIFF(MILLISECOND, @lastTime, GETDATE()));
SET @lastTime = GETDATE();
This way you can see what's causing the trouble instantly, even if the stored proc takes ages to run. It's useful even if the stored proc hits a snag, because you can see what the last successful thing was. Good luck.

Stored Procedure taking time in execution

I have been breaking my head over this issue for a long time. I have a stored procedure in MS SQL, and when I execute it with all its parameters supplied in a SQL query it takes a long time, but when I run the query contained in the SP directly it executes in no time. This is also affecting my application's performance, as we use stored procedures to fetch data from the DB server.
Please help.
Regards,
Vikram
Looks like parameter sniffing.
Here is a nice explanation: I Smell a Parameter!
Basically, SQL Server has cached a query execution plan for the parameters the procedure was first run with, so the plan is not optimal for the new values you are passing. When you run the query directly, the plan is generated at that moment, which is why it's fast.
You can mark the procedure for recompilation manually using sp_recompile, or use the WITH RECOMPILE option in its definition so it is compiled on every run.
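A minimal sketch of both options; `dbo.GetOrders`, `dbo.Orders`, and `@CustomerId` are placeholder names:

```sql
-- Mark the whole procedure for recompilation on its next execution.
EXEC sp_recompile N'dbo.GetOrders';

-- Or, inside the procedure body, recompile only the statement that is
-- sensitive to the parameter value:
SELECT *
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);
```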

SQL - Calling "reader.NextResult" causes a long delay

Our application executes a long, hairy stored procedure with multiple result sets. The users are experiencing long wait times for this query, so I set out to determine what is causing the delay.
I put a Stopwatch on executing and reading the data, and it takes 6-7 seconds each time. I timed the execution of the stored procedure, expecting that this would be taking all the time. It wasn't - it took 30ms or so.
So I put timers around each of the ~20 result sets. Each "block" took very little time ( < 10ms) except for one in the middle of the processing, which took 5-6 seconds. Upon further research, I discovered it was the "reader.NextResult()" call that took all the time. This long delay happens in the same spot each time.
If I just exec the stored procedure, it seems to run real snappy, so it doesn't APPEAR to be a problem with the query - but I don't know...
How do I interpret this? Is SQL shipping me the result sets as it gets them, and is the result set in question likely to be a problem area in my SQL query? Or is something else possibly causing the delay?
EDIT:
Thanks for the answer and the comments - I am using SQL Server and .NET
What I was most curious about was WHY my delay happens on the "NextResult()" call. Being new to SQL development, I assumed that a delay due to a long stored procedure execution would show up in my application while waiting for the "ExecuteReader()" call to return. It now seems that SQL will start returning data BEFORE the query is complete, and if there is a delay it will delay on the NextResult() call.
I started out thinking my delay was in the stored procedure. When the ExecuteReader() call came back quickly, I thought my delay was in my code's handling of the reader. When the delay ended up being on the NextResult() call, I was confused. I am now back to reviewing the stored procedure.
Thanks to those of you who took the time to review my problem and offered your help.
When you execute a stored proc from a .Net command, the results will start streaming as soon as SQL has them ready.
This means that you may start seeing results in your .Net app before the entire stored proc has been executed.
Your bottleneck is probably in the stored procedure. Run a SQL Server trace and trace all the statements running inside the stored procedure (get the durations). You will be able to track down the exact statement in the proc that is slow, and you will also be able to pick up the params being passed to the proc, so you can test this in Query Analyzer and look at the plan.
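If you just want a quick manual check in SSMS rather than a full trace, SET STATISTICS TIME/IO reports CPU time, elapsed time, and reads per statement; the procedure name and parameter below are placeholders:

```sql
SET STATISTICS TIME ON;   -- per-statement CPU and elapsed time
SET STATISTICS IO ON;     -- per-table logical/physical reads
EXEC dbo.MyLongProc @Param = 1;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- The Messages tab in SSMS then shows timings for each statement in the proc.
```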
Another point that is missing from the question seems to be the amount of data you are moving, though unlikely, it may be that you have some really large chunks of data (like blobs) that are being sent and the time is being spent on the wire. You really need to expand the question a bit to help with the diagnosis.
The answer will be dependent on what RDBMS you are using.
If its SQL Server and .NET then from my experience:
Check for other open transactions on the same connection used to invoke the sproc. They may hold row locks on a table one of your selects is executing against. You can try adding "MultipleActiveResultSets=false" to your SQL Server connection string and see if you get an improvement, or, more likely, an exception (and you can hunt down the problem from that exception). This can also be an effect of an unreset connection returned to the connection pool (something I've run into since I started using MARS).
You may need to specify the NOLOCK (or READUNCOMMITTED, same thing) table hint in your SELECT query if dirty reads are acceptable.
SELECT * FROM [table] WITH (NOLOCK)

Stored Procedure Timing out.. Drop, then Create and it's up again?

I have a web-service that calls a stored procedure in a MS SQL 2005 DB. My web-service was timing out on a call to one of my stored procedures (this has been in production for a couple of months with no timeouts), so I tried running the query in Query Analyzer, which also timed out. I decided to drop and recreate the stored procedure with no changes to the code, and it started performing again.
Questions:
Would this typically be an error in the TSQL of my Stored Procedure?
-Or-
Has anyone seen this and found that it is caused by some problem with the compilation of the Stored Procedure?
Also, of course, any other insights on this are welcome as well.
Similar:
SQL poor stored procedure execution plan performance - parameter sniffing
Parameter Sniffing (or Spoofing) in SQL Server
Have you been updating your statistics on the database? This sounds like the original SP was using an out-of-date query plan. sp_recompile might have helped rather than dropping/recreating it.
There are a few things you can do to fix/diagnose this.
1) Update your statistics on a regular/daily basis. SQL generates query plans (think optimizes) based on your statistics. If they get "stale", your stored procedure might not perform as well as it used to (especially as your database changes/grows).
2) Look at your stored procedure. Are you using temp tables? Do those temp tables have indexes on them? Most of the time you can find the culprit by looking at the stored procedure (or the tables it uses).
3) Analyze your procedure while it is "hanging" and take a look at its query plan. Are there any missing indexes that would help keep your procedure's query plan from going nuts? (Look for things like table scans and your other most expensive operations.)
It is like finding a name in a phone book, sure reading every name is quick if your phone book only consists of 20 or 30 names. Try doing that with a million names, it is not so fast.
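A sketch of steps 1 and 3, with `dbo.MyTable` as a placeholder name; sp_updatestats refreshes statistics database-wide, and the missing-index DMV surfaces candidates the optimizer noticed while compiling plans:

```sql
-- Step 1: refresh statistics for one table, or the whole database.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
EXEC sp_updatestats;

-- Step 3: indexes the optimizer wished it had.
SELECT TOP (10) *
FROM sys.dm_db_missing_index_details;
```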
This happened to me after moving a few stored procs from development into production. It didn't happen right away; it happened after the production data grew over a couple of months. We had been using functions to create columns, in some cases with several function calls per row. When the data grew, so did the function call time. The original approach was fine in a testing environment but failed under a heavy load. Check whether there are any function calls in the proc.
I think the table the SP is trying to use is locked by some process. Use "exec sp_who" and "exec sp_lock" to find out what is going on with your tables.
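For reference, the same check can be done with sp_who2, or on newer versions with the lock DMV:

```sql
EXEC sp_who2;                      -- sessions and blocking (the "BlkBy" column)
SELECT * FROM sys.dm_tran_locks;   -- current locks (modern alternative to sp_lock)
```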
If it was working quickly, is (with the passage of several months) no longer working quickly, and the code has not been changed, then it seems likely that the underlying data has changed.
My first guess would be data growth--so much data has been added over the past few months (or past few hours, ya never know) that the query is now bogging down.
Alternatively, as CodeByMoonlight implies, the data may have changed so much over time that the original query plan built for the procedure is no longer good (though this assumes that the query plan has not cleared and recompiled at all over a long period of time).
Similarly, index/database statistics may also be out of date. Do you have AUTO_UPDATE_STATISTICS on or off for the database?
Again, this might only help if nothing but the data has changed over time.
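Checking and enabling the setting, with `MyDatabase` as a placeholder name:

```sql
-- Returns 1 when the database auto-updates statistics.
SELECT DATABASEPROPERTYEX(N'MyDatabase', 'IsAutoUpdateStatistics') AS AutoUpdateStats;

-- Enable it if it is off.
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;
```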
Parameter sniffing.
Answered a day or 3 ago: "strange SQL server report performance problem related with update statistics"