Total execution time in Client Statistics in SQL Server Management Studio - sql

I'm working on optimizing a fairly complex stored procedure, and I'm wondering whether the way I'm tracking improvements is a good one.
I run DBCC FREEPROCCACHE and have Include Client Statistics enabled in SQL Server Management Studio.
I look at Total execution time on the Client Statistics tab to determine if my changes are making my stored procedure faster.
Is this a good way of measuring improvements in a stored procedure, or should I be looking at other areas?

One way to see how long a query took to execute is the elapsed time shown in the status bar at the bottom of the query window after it completes (in the original example, 3 seconds).
If you want to see the performance of a query, turn on client statistics and the execution plan. Client Statistics is enabled from the Query menu in SSMS (Include Client Statistics), which adds a Client Statistics tab to the results. The actual execution plan is enabled from the same menu (Include Actual Execution Plan), which adds an Execution plan tab.
You can also try using
SET STATISTICS TIME ON
SET STATISTICS IO ON
They will show you the time and I/O required by each statement. Don't forget to turn them off when you're done (SET STATISTICS TIME OFF; SET STATISTICS IO OFF).
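As a sketch, these switches typically wrap the statement under test (dbo.MyProc and its parameter are placeholder names, not from the question):

```sql
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- The statement being tuned; dbo.MyProc / @SomeParam are hypothetical
EXEC dbo.MyProc @SomeParam = 1;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

Parse/compile and execution times, plus logical and physical reads per table, then appear on the Messages tab.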
Make sure that every time you test a new query you clear the caches, so the previous run doesn't affect the new test. To clear them, execute this code:
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS; -- Clears clean pages from the buffer pool (data cache)
GO
DBCC FREEPROCCACHE; --Clears execution plan cache
GO

Related

Comparing SQL query performance

I'm redesigning some of my database tables. I have 2 keys in the same table that can be used to query data, and I'd like to compare the difference in performance between them. Querying with the newer key is slower, so I'd like a method I can run after making schema changes to reassess query performance.
I know about execution plans in MS SQL Server and SET STATISTICS IO, TIME ON. However, I'd like a very simple absolute elapsed time that gives me realistic results. Since each query takes about 4 s, I have to run the same query multiple times consecutively in a loop. I currently have:
USE [MyDb]
CHECKPOINT
DBCC FREESYSTEMCACHE('ALL')
<query>
If I ran the above in a loop via PowerShell's sqlcmd, would clearing the cache be enough to remove the effects of running the same query just before the current run?
I ended up using the following snippet:
CHECKPOINT
DBCC FREESYSTEMCACHE('ALL')
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
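If all you need is a simple absolute elapsed time per run, one sketch is to bracket the query with SYSDATETIME() calls after clearing the caches (the procedure name is a placeholder for the query under test):

```sql
-- Clear caches so earlier runs don't skew the measurement
CHECKPOINT;
DBCC FREESYSTEMCACHE('ALL');
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;

DECLARE @start datetime2 = SYSDATETIME();

EXEC dbo.MyQueryUnderTest;  -- hypothetical: the query being compared

SELECT DATEDIFF(millisecond, @start, SYSDATETIME()) AS elapsed_ms;
```

Running this several times and averaging the elapsed_ms values gives a rough wall-clock comparison between the two keys.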

Query Plan Recompiled suddenly and degrades performance

Scenario: We have a simple select query
DECLARE @P int;

SELECT TOP (1) USERID
FROM table
WHERE non_clusteredindex_column = @P
ORDER BY PK_column DESC
It had been executing in about 0.12 seconds for the past year. But yesterday, just after midnight, it suddenly started consuming all my CPU and taking 150 seconds to execute. I checked sp_who2 and found no deadlocks; nothing except this one query was consuming CPU. I decided to reboot the server to get rid of any parameter-sniffing issue or stale connections, and I took a SQL Profiler trace for one minute before restarting, for future root-cause analysis. After the reboot, everything was back to normal. Surprised, I started comparing the execution plan from the trace to the current execution plan of the SAME query, and found they are different.
The execution plan from before the problematic night is the same as the execution plan after the reboot (doing perfect index seeks).
But the execution plan captured in the Profiler trace during the problematic night is doing a full index scan, which consumes all the CPU and takes 150 seconds.
Questions:
It seems the plan was suddenly recompiled, or the query started using a new execution plan (full scan) after midnight; after I rebooted, it went back to the old, good execution plan (index seek).
Q1. What made SQL Server use a new execution plan all of a sudden?
Q2. What made SQL Server go back to the old, good execution plan after the reboot?
Q3. Could this be parameter sniffing, since I am passing a parameter? Technically it shouldn't be, as the parameter column is well organized with evenly distributed data.
It sounds like you have a parameter sniffing issue. I can't see your data, but we often found these crop up even in simple query scenarios: either many rows match the parameter value and the plan flips to a scan when it shouldn't, or there is some other quirk in the data, such as a mostly-unique column that ends up with a 0 in a large portion of the table, throwing everything for a loop. If the query runs slowly from code but a test execution of the procedure from SSMS is fast, that's a pretty big red flag that something along these lines is your issue.
You are correct that a SQL Server restart flushes the entire plan cache, and you can also flush all plans manually, but you absolutely do not want to use that method to fix the plan of a single procedure. A quick fix is to execute EXEC sp_recompile 'dbo.procname'; to flush just that procedure's execution plan and have a new one built. Redoing all your plans, especially in a busy database, can cause significant performance problems for other procedures, and a restart of course means downtime. This only fixes the problem temporarily when it crops up, though; if you have identified a parameter causing issues, consider adding an OPTION (OPTIMIZE FOR UNKNOWN) hint, which is specifically designed for identified parameter-sniffing issues. Also make sure good index maintenance happens regularly in your environment, in case stale statistics rather than the engine are causing the bad plans.
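A sketch of both options (the procedure, table, and column names are placeholders, not from the question):

```sql
-- Flush the cached plan for one procedure only, instead of the whole cache
EXEC sp_recompile 'dbo.GetTopUser';  -- hypothetical procedure name

-- Or neutralize sniffing inside the query itself: the optimizer uses
-- average density instead of the sniffed parameter value
SELECT TOP (1) USERID
FROM   dbo.Users                      -- hypothetical table
WHERE  non_clusteredindex_column = @P
ORDER  BY PK_column DESC
OPTION (OPTIMIZE FOR UNKNOWN);
```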
In your case, you can do the following:
- Activate the Query Store option in your database settings (set Operation Mode to On).
- This will start capturing the query plan for each request.
- You can then track the queries that consume the most resources.
- Finally, you can force a known-good execution plan to be used for the query.
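On SQL Server 2016 and later, those steps look roughly like this (the database name and the query/plan ids are placeholders you would read from the Query Store views or the SSMS reports):

```sql
-- Turn Query Store on and start capturing plans
ALTER DATABASE MyDb SET QUERY_STORE = ON;
ALTER DATABASE MyDb SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Find the most expensive queries captured so far
SELECT TOP (10) qt.query_sql_text, rs.avg_duration, q.query_id, p.plan_id
FROM sys.query_store_query_text    AS qt
JOIN sys.query_store_query         AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan          AS p  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;

-- Force a known-good plan for one query (ids taken from the result above)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```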

SQL Server 2005 How Do You Clear Out a Query Execution Plan

Hello fellow programmers. I have a SQL Server 2005 query that is taking a long time to process the first time through. After the first run the query works much faster. It goes from one minute to one second.
I know that SQL Server is caching an execution plan (is that the right term?). What I want to do is clear out this execution plan so that I can replicate the issue better while tuning the query.
Does anyone know if this is possible, and how to do it?
Thanks in Advance
If you want to clear it, then:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
If you just want to force a query recompilation each time, then add a query hint to the end of the query:
OPTION (RECOMPILE)
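For example (the table and parameter names are placeholders):

```sql
-- Compile a fresh plan for this statement on every execution,
-- without touching the rest of the plan cache
SELECT u.USERID
FROM   dbo.Users AS u          -- hypothetical table
WHERE  u.SomeColumn = @p       -- hypothetical parameter
OPTION (RECOMPILE);
```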
This is what I run whenever I want to clear them:
DBCC freeproccache
DBCC dropcleanbuffers
go
If I'm performance testing a query, I usually just paste that at the top so every run starts with a clear cache.
Be warned: sometimes queries run faster the second time because the server has already done the disk I/O and still has the tables in RAM. Consider looking at the I/O costs in the execution plan.
DBCC FREEPROCCACHE

How to show the execution time before query result will display

How can I show the execution time before the query result is displayed?
If you're calling the query from code, you can use a stopwatch to time the query. Start it before execution and stop it immediately after.
Do you mean get your program to display a "Time remaining until query completes" counter, or a progress bar, like when you delete a lot of files in the Windows Explorer?
That is not generally possible. Many queries cannot be estimated "in advance" without doing a significant amount of work, so that the estimated completion time wouldn't be available until the query was almost finished anyway.
A simple linear search through a table would be a simple case where this was possible, but adding other constraints or using indexes would cause headaches.
(Even the example from Windows of deleting a large directory is fraught with problems - they have to scan the whole directory to count the files before they start deleting them, just so they can show you the progress bar; which is why I tend to clobber large directories from the command line to save time).
Run the query asynchronously.
With ADO, it's something like this.
For ADO.NET, refer to this.
Then, display the timer until you get a Complete event.
How to show the execution time before query result will display
You can use sys.dm_exec_requests, but its percent_complete column is populated only for the handful of operations listed below. It obviously can't estimate progress for normal DML/SELECT queries.
sys.dm_exec_requests (Transact-SQL)
ALTER INDEX REORGANIZE
AUTO_SHRINK option with ALTER DATABASE
BACKUP DATABASE
CREATE INDEX
DBCC CHECKDB
DBCC CHECKFILEGROUP
DBCC CHECKTABLE
DBCC INDEXDEFRAG
DBCC SHRINKDATABASE
DBCC SHRINKFILE
KILL (Transact-SQL)
RESTORE DATABASE,
UPDATE STATISTICS.
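For those operations, a query along these lines shows the progress (the columns are from the documented view; the WHERE filter is just a convenience):

```sql
-- Progress of long-running operations; percent_complete is only
-- populated for the commands listed above
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 1000.0 AS est_seconds_remaining
FROM   sys.dm_exec_requests
WHERE  percent_complete > 0;
```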
From Java application code, you can time the call yourself:
long inicioBusquedaLong = Calendar.getInstance().getTimeInMillis(); // start time in ms
this.setListaTramite(this.buscarCriterio());                        // run the search
long finBusquedaLong = Calendar.getInstance().getTimeInMillis();    // end time in ms
this.tiempoBusqueda = finBusquedaLong - inicioBusquedaLong;         // elapsed ms

Different Execution Plan for the same Stored Procedure

We have a query that is taking around 5 sec on our production system, but on our mirror system (as identical as possible to production) and dev systems it takes under 1 second.
We have checked the query plans and we can see that they differ; from these plans we can also see why one takes longer than the other. The data, schema, and servers are similar and the stored procedures identical.
We know how to fix it by rearranging the joins and adding hints; however, at the moment it would be easier if we didn't have to make any changes to the sproc (paperwork). We have also tried sp_recompile.
What could cause the difference between the two query plans?
System: SQL 2005 SP2 Enterprise on Win2k3 Enterprise
Update: Thanks for your responses, it turns out that it was statistics. See summary below.
Your statistics are most likely out of date. If your data is the same, recompute the statistics on both servers and recompile. You should then see identical query plans.
Also, double-check that your indexes are identical.
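A sketch of recomputing statistics and recompiling (the object names are placeholders):

```sql
-- Refresh statistics for one table with a full scan...
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;  -- hypothetical table

-- ...or refresh statistics database-wide
EXEC sp_updatestats;

-- Then throw away the cached plan so it's rebuilt from the new stats
EXEC sp_recompile 'dbo.MyProc';  -- hypothetical procedure
```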
Most likely statistics.
Some thoughts:
Do you do maintenance on your non-prod systems (e.g. rebuild indexes, which also rebuilds statistics)?
If so, do you use the same fillfactor and statistics sample ratio?
Do you restore the database regularly onto test so it's 100% like production?
Are the data and data size between your mirror and production as close to the same as possible?
You say you know why one query takes longer than the other; can you post some more details?
Execution plans can be different in such cases because of the data in the tables and/or the statistics. Even in cases where auto update statistics is turned on, the statistics can get out of date (especially in very large tables)
You may find that the optimizer has estimated a table is not that large and opted for a table scan or something like that.
Provided there is no WITH RECOMPILE option on your proc, the execution plan will get cached after the first execution.
Here is a trivial example on how you can get the wrong query plan cached:
create proc spTest
    @id int
as
select * from sysobjects where @id is null or id = @id
go
exec spTest null
-- As expected, it's a clustered index scan
go
exec spTest 1
-- Oh no, it's still a clustered index scan (the plan from the first call was cached)
Try running your SQL in Query Analyzer on the production server, outside of the stored proc, to determine whether your statistics are out of date or indexes are mysteriously missing from production.
Tying in to the first answer, the problem may lie with SQL Server's parameter sniffing feature. The optimizer uses the first parameter value that triggered compilation to build the execution plan. Usually this is good, but if that value is atypical, it can contribute to a bad plan. This would also explain the difference between production and testing.
Turning off parameter sniffing would require modifying the sproc, which I understand is undesirable. However, after running sp_recompile, pass in parameters you'd consider "normal", and the plan should be recompiled based on them.
I think the parameter sniffing behavior is different between 2005 and 2008 so this may not work.
The solution was to recompute the statistics. I overlooked that because we usually have scheduled tasks to do all of this, but for some reason the admins didn't set one up on this server. Doh!
To summarize all the posts:
Check the setup is the same
Indexes
Table sizes
Restore Database
Execution Plan Caching
If the query runs the same outside the SProc, it's not the Execution Plan
sp_recompile if it is different
Parameter sniffing
Recompute Statistics