Comparing SQL query performance

I'm redesigning some of my database tables. I have 2 keys in the same table that can be used to query data and I'd like to compare the difference in performance between them. Querying with the newer key is slower so I'd like to have a method which I can run after making schema changes to re-assess query performance.
I know about Execution plans in MS SQL Server and SET STATISTICS IO, TIME ON. However, I'd like to have a very simple absolute time taken which gives me realistic results. Considering each query takes about 4s, I have to run the same query multiple times consecutively in a loop. I currently have:
USE [MyDb]
CHECKPOINT
DBCC FREESYSTEMCACHE('ALL')
<query>
If I ran the above in a loop via PowerShell and sqlcmd, would clearing the cache be enough to remove the effects of having run the same query just before the current run?

I ended up using the following snippet:
CHECKPOINT
DBCC FREESYSTEMCACHE('ALL')
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
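On top of the cache-clearing snippet above, a simple way to get an absolute elapsed time per run is to wrap the query in a timing loop. A minimal sketch, with an assumed run count of 5 and a placeholder where your query goes:

```sql
DECLARE @i INT = 1, @start DATETIME2(3), @elapsed_ms INT;

WHILE @i <= 5
BEGIN
    -- Start each run from a cold cache
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;
    DBCC FREEPROCCACHE;

    SET @start = SYSDATETIME();

    -- <query> goes here

    SET @elapsed_ms = DATEDIFF(MILLISECOND, @start, SYSDATETIME());
    PRINT CONCAT('Run ', @i, ': ', @elapsed_ms, ' ms');
    SET @i += 1;
END
```

Only run something like this on a dev/test server; dropping clean buffers on production slows down every other query on the instance.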

Related

Total execution time on Client Statistics for SQL Server Management Studio

I'm working on optimizing a fairly complex stored procedure. I'm just wondering if what I'm doing to track the improvements is a good way of doing it.
I run DBCC FREEPROCCACHE and have Include Client Statistics turned on in SQL Server Management Studio.
I look at Total execution time on the Client Statistics tab to determine whether my changes are making the stored procedure faster.
Is this a good way of measuring improvements in a stored procedure? Or should I be looking at other areas?
One way to see how long a query took is the elapsed-time counter in the status bar at the bottom of the SSMS query window; in the example here it showed 3 seconds.
If you want to see the performance of a query, turn on Client Statistics and the execution plan.
To turn on Client Statistics, use Query > Include Client Statistics; the results appear on a Client Statistics tab next to the query results.
To turn on the execution plan, use Query > Include Actual Execution Plan; the plan appears on its own tab after the query runs.
You can also try using
SET STATISTICS TIME ON
SET STATISTICS IO ON
They will show you the time and I/O required by each and every statement. Don't forget to turn them off when you're done (SET STATISTICS TIME OFF, SET STATISTICS IO OFF).
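The pattern above looks like this in practice; the SELECT is just a stand-in for the statement you are measuring:

```sql
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- Placeholder query; substitute your own
SELECT COUNT(*) FROM sys.objects;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

The parse/compile and execution times, plus the logical and physical read counts per table, appear on the Messages tab rather than the Results tab.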
Make sure every time you test a new query you clear the query cache so that the old query doesn’t affect your new test. To clear the query cache, execute this code:
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS; --Clears query cache
GO
DBCC FREEPROCCACHE; --Clears execution plan cache
GO

Option Recompile makes query fast - good or bad?

I have two SQL queries with about 2-3 INNER JOINs each. I need to do an INTERSECT between them.
The problem is that individually the queries run fast, but after intersecting them the whole thing takes about 4 seconds.
Now, if I put an OPTION (RECOMPILE) at the end of the whole query, it works great again, returning almost instantly.
I understand that OPTION (RECOMPILE) forces a rebuild of the execution plan, so I'm confused: is my earlier query taking 4 seconds better, or the one with the recompile hint that takes almost 0 seconds?
Rather than answer the question you asked, here's what you should do:
Update your statistics:
EXEC sp_updatestats
If that doesn't work, rebuild indexes.
If that doesn't work, look at OPTIMIZE FOR
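As a sketch of that last suggestion, OPTIMIZE FOR pins the plan to a chosen parameter value; the table, column, and value below are hypothetical:

```sql
DECLARE @customerId INT = 1;

-- Compile the plan as if @customerId were 42, regardless of the runtime value
SELECT *
FROM dbo.Customers
WHERE CustomerId = @customerId
OPTION (OPTIMIZE FOR (@customerId = 42));
```

OPTIMIZE FOR UNKNOWN is the related variant that tells the optimizer to use average statistics instead of any specific sniffed value.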
When WITH RECOMPILE is specified, SQL Server does not cache a plan for the stored procedure; the procedure is recompiled each time it is executed.
Whenever a stored procedure is run in SQL Server for the first time, it is optimized and a query plan is compiled and cached in SQL Server's memory. Each subsequent run uses the cached plan, eliminating the need to optimize and compile the procedure every time. So if you need to run the same stored procedure 1,000 times a day, a lot of time and hardware resources can be saved and SQL Server doesn't have to work as hard.
Generally you should not use this option, because it throws away most of the plan-caching advantage you get by using stored procedures instead of ad-hoc SQL.
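If you want to verify whether a plan is actually being cached and reused, the plan-cache DMVs can show it; in this sketch, usecounts increments on each reuse:

```sql
-- List cached stored-procedure plans and how often each has been reused
SELECT p.usecounts, p.objtype, t.text
FROM sys.dm_exec_cached_plans AS p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) AS t
WHERE p.objtype = 'Proc'
ORDER BY p.usecounts DESC;
```

A procedure created WITH RECOMPILE will show no reuse here, which is exactly the trade-off described above.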

SP taking 15 minutes, but the same query when executed returns results in 1-2 minutes

So basically I have this relatively long stored procedure. The basic execution flow is that it SELECTs INTO some temp tables declared with the # sign, then runs a cursor through these tables to generate a 'running total' into a third temp table which is created using CREATE. This resulting temp table is then joined with other tables in the DB to generate the result after some grouping etc.
The problem is, this SP had been running fine until now, returning results in 1-2 minutes. Now, suddenly, it's taking 12-15 minutes. If I extract the query from the SP and execute it in Management Studio, manually setting the same parameters, it returns results in 1-2 minutes, but the SP takes very long.
Any idea what could be happening? I tried to generate the actual execution plans of both the query and the SP, but they couldn't be generated because of the cursor. Any idea why the SP takes so long while the query doesn't?
This is the footprint of parameter sniffing. See here for another discussion about it: SQL poor stored procedure execution plan performance - parameter sniffing
There are several possible fixes, including adding WITH RECOMPILE to your stored procedure which works about half the time.
The recommended fix for most situations (though it depends on the structure of your query and sproc) is to NOT use your parameters directly in your queries, but rather store them into local variables and then use those variables in your queries.
It's due to parameter sniffing. First, declare a temporary variable, set the incoming parameter value into it, and use the temp variable throughout the procedure. Here is an example:
ALTER PROCEDURE [dbo].[Sp_GetAllCustomerRecords]
    @customerId INT
AS
BEGIN
    DECLARE @customerIdTemp INT;
    SET @customerIdTemp = @customerId;

    SELECT *
    FROM Customers e
    WHERE e.CustomerId = @customerIdTemp;
END
Try this approach.
Try recompiling the sproc to ditch any stored query plan
exec sp_recompile 'YourSproc'
Then run your sproc, taking care to use sensible parameters.
Also compare the actual execution plans between the two methods of executing the query.
It might also be worth recomputing any statistics.
I'd also look into parameter sniffing. It could be that the proc needs to handle the parameters slightly differently.
I usually start troubleshooting issues like that with timestamped PRINT statements, e.g. PRINT CONVERT(varchar(23), GETDATE(), 121) + ' - step 1'. This helps me narrow down what's taking the most time. You can compare against a run from Query Analyzer and narrow down where the problem is.
I would guess it could possible be down to caching. If you run the stored procedure twice is it faster the second time?
To investigate further, you could run both the stored procedure and the query version from Management Studio with the show-query-plan option turned on, then compare which area takes longer in the stored procedure than it does when run as a query.
Alternatively, you could post the stored procedure here for people to suggest optimizations.
For a start it doesn't sound like the SQL is going to perform too well anyway based on the use of a number of temp tables (could be held in memory, or persisted to tempdb - whatever SQL Server decides is best), and the use of cursors.
My suggestion would be to see if you can rewrite the sproc as a set-based query instead of a cursor-approach which will give better performance and be a lot easier to tune and optimise. Obviously I don't know exactly what your sproc does, to give an indication as to how easy/viable this is for you.
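As an illustration of the set-based suggestion: on SQL Server 2012 and later, a running total can usually replace the cursor entirely with a window function. The table and column names here are hypothetical:

```sql
-- Running total per account, ordered by date, without a cursor
SELECT AccountId,
       TxnDate,
       Amount,
       SUM(Amount) OVER (PARTITION BY AccountId
                         ORDER BY TxnDate
                         ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM dbo.Transactions;
```

Besides being faster, this form produces a normal execution plan, so the tuning tools that choked on the cursor become usable again.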
As to why the SP is taking longer than the query - difficult to say. Is there the same load on the system when you try each approach? If you run the query itself when there's a light load, it will be better than when you run the SP during a heavy load.
Also, to ensure the query truly is quicker than the SP, you need to rule out data/execution plan caching which makes a query faster for subsequent runs. You can clear the cache out using:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
But only do this on a dev/test db server, not on production.
Then run the query, record the stats (e.g. from profiler). Clear the cache again. Run the SP and compare stats.
1) When you run the query for the first time it may take more time. Also, if you are using a correlated subquery with hard-coded values, it is executed only once; when the value is not hard-coded but derived from the procedure's input parameter, it can take more time.
2) In rare cases it can be due to network traffic, in which case the execution time won't even be consistent for the same input data.
I too faced a problem where we had to create some temp tables, manipulate them to calculate some values based on rules, and finally insert the calculated values into a third table. Put in a single SP, all of this was taking around 20-25 minutes. To optimize it, we broke the SP into three different SPs, and the total time came down to around 6-8 minutes. Just identify the steps involved in the whole process and how to break them up into different SPs; this approach can reduce the overall time taken by the entire process.
This is because of parameter sniffing. But how can you confirm it?
Whenever we want to optimize an SP we look at the execution plan. But in your case you will see a good plan from SSMS, because the SP is only slow when it is called from code.
For every SP and function, SQL Server can end up caching two plans because of the ARITHABORT option: one for SSMS and a second for external clients (ADO.NET).
ARITHABORT is ON by default in SSMS but OFF for ADO.NET connections. So if you want to check the exact query plan your SP uses when it is called from code, match the client's setting in SSMS and execute your SP; you will see that the SP also takes 12-13 minutes from SSMS:
SET ARITHABORT OFF
EXEC YourSpName
SET ARITHABORT ON
To solve this problem you just need to get the bad cached plan replaced.
There are a couple of ways to do that:
1. Update table statistics.
2. Recompile the SP.
3. SET ARITHABORT ON inside the SP so it always uses the plan created for SSMS (this option is not recommended).
For more options please refer to this awesome article -
http://www.sommarskog.se/query-plan-mysteries.html
I would suggest the issue is related to the type of temp table (the # prefix). This temp table holds the data for that database session. When you run it through your app the temp table is deleted and recreated.
You might find when running in SSMS it keeps the session data and updates the table instead of creating it.
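If that's the case, a guard like this at the top of the batch (the table name and columns are hypothetical) makes SSMS runs behave like fresh app sessions:

```sql
-- Drop the session-scoped temp table if a previous run left it behind
IF OBJECT_ID('tempdb..#RunningTotals') IS NOT NULL
    DROP TABLE #RunningTotals;

CREATE TABLE #RunningTotals (Id INT, Total DECIMAL(18, 2));
```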
Hope that helps :)

SQL Server 2005 How Do You Clear Out a Query Execution Plan

Hello fellow programmers. I have a SQL Server 2005 query that takes a long time to process the first time through. After the first run the query works much faster; it goes from one minute to one second.
I know that SQL Server is caching an execution plan (is that the right term?). What I want to do is clear out this execution plan so that I can replicate the issue better. I'm trying to tune the query.
Does anyone know if this is possible, and how to do it?
Thanks in advance
If you want to clear it then:
DBCC FreeProcCache
DBCC DropCleanbuffers
If you just want to force a query recompilation each time, then add a query hint to the end of the query:
OPTION (RECOMPILE)
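For example, on a hypothetical query (the hint always goes at the very end of the statement):

```sql
DECLARE @customerId INT = 42;

-- Plan is compiled fresh on every execution and never cached
SELECT o.OrderId, o.Total
FROM dbo.Orders AS o
WHERE o.CustomerId = @customerId
OPTION (RECOMPILE);
```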
This is what I run whenever I want to clear them:
DBCC freeproccache
DBCC dropcleanbuffers
go
If I'm performance testing a query I usually just paste that at the top of the query, so every time it runs it's running with a clear cache.
Be warned: sometimes queries run faster the second time because the server has already done the disk I/O and still has the tables in RAM. Consider looking at I/O costs in the execution plan.
DBCC FREEPROCCACHE

How to show the execution time before query result will display

If you're calling the query from code, you can use a stopwatch to time the query. Start it before execution and stop it immediately after.
Do you mean get your program to display a "Time remaining until query completes" counter, or a progress bar, like when you delete a lot of files in the Windows Explorer?
That is not generally possible. Many queries cannot be estimated "in advance" without doing a significant amount of work, so that the estimated completion time wouldn't be available until the query was almost finished anyway.
A simple linear search through a table would be a simple case where this was possible, but adding other constraints or using indexes would cause headaches.
(Even the example from Windows of deleting a large directory is fraught with problems - they have to scan the whole directory to count the files before they start deleting them, just so they can show you the progress bar; which is why I tend to clobber large directories from the command line to save time).
Run the query asynchronously.
With ADO, it's something like this.
For ADO.NET, refer to this.
Then, display the timer until you get a Complete event.
You can use sys.dm_exec_requests, but it reports progress only for the handful of operations listed below. It can't estimate normal DML or SELECT queries.
sys.dm_exec_requests (Transact-SQL)
ALTER INDEX REORGANIZE
AUTO_SHRINK option with ALTER DATABASE
BACKUP DATABASE
CREATE INDEX
DBCC CHECKDB
DBCC CHECKFILEGROUP
DBCC CHECKTABLE
DBCC INDEXDEFRAG
DBCC SHRINKDATABASE
DBCC SHRINKFILE
KILL (Transact-SQL)
RESTORE DATABASE,
UPDATE STATISTICS.
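A sketch of polling that DMV for progress while one of the supported operations (say, a BACKUP DATABASE) is running:

```sql
-- percent_complete and estimated_completion_time (in ms) are populated
-- only for the operations listed above
SELECT r.session_id,
       r.command,
       r.percent_complete,
       r.estimated_completion_time / 1000 AS est_seconds_remaining
FROM sys.dm_exec_requests AS r
WHERE r.percent_complete > 0;
```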
// Java example: measure elapsed query time in milliseconds
long inicioBusquedaLong = Calendar.getInstance().getTimeInMillis(); // start time
this.setListaTramite(this.buscarCriterio());                        // run the search/query
long finBusquedaLong = Calendar.getInstance().getTimeInMillis();    // end time
this.tiempoBusqueda = finBusquedaLong - inicioBusquedaLong;         // elapsed ms