SQL Server 2005: How Do You Clear Out a Query Execution Plan?

Hello fellow programmers. I have a SQL Server 2005 query that takes a long time to process the first time through. After the first run the query is much faster: it goes from one minute to one second.
I know that SQL Server is caching an execution plan (is that the right term?). What I want to do is clear out this execution plan so that I can reproduce the issue more reliably while I tune the query.
Does anyone know if this is possible and how to do it?
Thanks in advance.

If you want to clear the caches, run:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
If you just want to force a recompilation of the plan each time, add a query hint to the end of the query:
OPTION (RECOMPILE)
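For example, appended to an ad hoc query (a minimal sketch; dbo.Orders is just a placeholder table):
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o               -- placeholder table
WHERE o.CustomerID = 1234
OPTION (RECOMPILE);                -- compiles a fresh plan on every execution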

This is what I run whenever I want to clear them:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
GO
If I'm performance testing a query I usually just paste that at the top of the script, so every run starts with a cold cache.
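A test script along those lines might look like this (the SELECT is only a stand-in for the query being tuned):
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
GO
SELECT TOP (100) * FROM dbo.Orders ORDER BY OrderDate DESC;   -- stand-in for the query under test
GO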

Be warned: sometimes queries run faster the second time simply because the server has already done the disk I/O and still has the pages in RAM. Consider looking at the I/O costs in the execution plan.
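To see whether a run was served from disk or from memory, SET STATISTICS IO reports physical versus logical reads; a minimal sketch (dbo.Orders is a placeholder):
SET STATISTICS IO ON;
SELECT COUNT(*) FROM dbo.Orders;   -- placeholder query
SET STATISTICS IO OFF;
-- The Messages tab lists logical reads (pages served from cache) and
-- physical reads (pages that had to come from disk) for each table touched.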

DBCC FREEPROCCACHE

Related

Comparing SQL query performance

I'm redesigning some of my database tables. I have 2 keys in the same table that can be used to query data and I'd like to compare the difference in performance between them. Querying with the newer key is slower so I'd like to have a method which I can run after making schema changes to re-assess query performance.
I know about execution plans in MS SQL Server and SET STATISTICS IO, TIME ON. However, I'd like a very simple absolute elapsed time that gives me realistic results. Since each query takes about 4 seconds, I have to run the same query multiple times consecutively in a loop. I currently have:
USE [MyDb]
CHECKPOINT
DBCC FREESYSTEMCACHE('ALL')
<query>
If I ran the above in a loop via PowerShell and sqlcmd, would clearing the cache be enough to remove the effects of running the same query just before the current run?
I ended up using the following snippet:
CHECKPOINT
DBCC FREESYSTEMCACHE('ALL')
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
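If you also want a single elapsed-time figure per run, one option (a sketch; replace the SELECT with your own query) is to bracket it with timestamps:
DECLARE @start datetime;
SET @start = GETDATE();
SELECT COUNT(*) FROM dbo.MyTable;                        -- placeholder for the query under test
SELECT DATEDIFF(ms, @start, GETDATE()) AS elapsed_ms;    -- wall-clock time in milliseconds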

Total execution time on Client Statistics for SQL Server Management Studio

I'm working on optimizing a fairly complex stored procedure. I'm just wondering if what I'm doing to track the improvements is a good way of doing it.
I run DBCC FREEPROCCACHE and I have Include Client Statistics turned on in SQL Server Management Studio.
I look at Total execution time on the Client Statistics tab to determine if my changes are making my stored procedure faster.
Is this a good way of measuring improvements in a stored procedure? Or should I be looking at other areas?
One way to see how long a query took to execute is the elapsed-time counter in the status bar of the query window in SSMS (the example shown took 3 seconds).
If you want to see the performance of a query, turn on Client Statistics and the execution plan; each adds its own results tab. Both are toggled from the Query menu in SSMS: Include Client Statistics and Include Actual Execution Plan.
You can also try using
SET STATISTICS TIME ON
SET STATISTICS IO ON
They will show you the time and I/O required by each statement. Don't forget to turn them off when you're done (SET STATISTICS TIME OFF, SET STATISTICS IO OFF).
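For example, wrapped around a call to the stored procedure being tuned (the procedure name is a placeholder):
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
EXEC dbo.usp_MyProcedure;          -- placeholder for the procedure under test
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- CPU time, elapsed time, and per-table read counts appear on the Messages tab.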
Make sure that every time you test a new query you clear the caches, so that data and plans left over from the previous run don't affect the new test. To clear them, execute this code:
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS; -- clears the data (buffer) cache
GO
DBCC FREEPROCCACHE;    -- clears the execution plan cache
GO

Can you force Linq2SQL to NOT use sp_executesql?

So I write a LINQ query and it takes 16 seconds to run. I decide to see what the query plan is, so I pull the SQL out of the LINQ to SQL profiler output, and on its own the query takes only 2 seconds to run. Sigh.
After spending most of the day poking at things and finally getting around to using SQL Server Profiler, I see that Linq2SQL is using sp_executesql to run the query. I understand that it's supposed to improve performance because the execution plan is more likely to be reused... but it seems to have chosen a horrible execution plan to use.
The weirder part is that it only gets slow if I join a specific table, and I have no idea why that specific table is causing a problem.
EDIT Just to clarify the actual issue here:
It actually comes down to two different queries. One is, essentially,
SELECT col1, col2, ... FROM table1, table2 WHERE table1.val IN (1234, 2343, 2435)
The other is
EXEC sp_executesql N'SELECT col1, col2, ... FROM table1, table2 WHERE table1.val IN (@p0, @p1, @p2)',
N'@p0 int,@p1 int,@p2 int',
@p0=1234, @p1=2343, @p2=2435
Your problem doesn't stem from the use of sp_executesql, and so circumventing it (which you can't) will not solve your problems. I suggest you read Erland Sommarskog's excellent article:
Slow in the Application, Fast in SSMS?
Understanding Performance Mysteries
This will give you a deep understanding of why you're getting a performance difference, how to diagnose and consistently reproduce it, and finally, how to solve it.
If the exact same query is fast from one application or server, but slow from another, it's usually all about execution plans. An execution plan is the blueprint the server uses to run the query. The plan is supposed to be created once, and then reused for all queries which differ only in parameter values.
Different execution plans can lead to wildly different performance; a factor of 100 is not at all unusual. As a first step, check whether the execution plans are different. The Profiler event Performance -> Showplan XML logs the plan.
If the plans are different, one possible cause is the session options, such as ANSI_NULLS:
SET ANSI_NULLS
Another possibility is a different login (the blueprint contains security information, so each security context has its own set of cached execution plans).
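To check whether two connections are getting separate cached plans because of different SET options, you can inspect the plan attributes; a sketch (the LIKE filter is just a placeholder for your query text):
SELECT cp.plan_handle,
       pa.value AS set_options,          -- different values here mean different session settings
       cp.usecounts,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%table1%';           -- placeholder filter for the query of interest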
The easiest way to clear the plan cache is to restart the SQL Server service. There's also an advanced command to clear the entire query plan cache:
DBCC FREEPROCCACHE
P.S. If you have a stored procedure that performs differently depending on parameter values, it's worth checking out parameter sniffing. But since you're copying the exact same procedure from the profiler, I assume the parameters are identical for both the slow and the fast invocations.
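If it does turn out to be parameter sniffing, one common workaround (sketched here with placeholder names) is to copy the parameter into a local variable, so the optimizer estimates from statistics rather than from the sniffed value:
CREATE PROCEDURE dbo.usp_GetOrders        -- hypothetical procedure
    @CustomerID int
AS
BEGIN
    DECLARE @LocalCustomerID int;
    SET @LocalCustomerID = @CustomerID;   -- breaks the link between the sniffed value and the plan
    SELECT OrderID, OrderDate
    FROM dbo.Orders                       -- placeholder table
    WHERE CustomerID = @LocalCustomerID;
END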
To answer your question....
NO, you can't...

Option Recompile makes query fast - good or bad?

I have two SQL queries with about 2-3 INNER JOINs each. I need to do an INTERSECT between them.
The problem is that individually the queries run fast, but the INTERSECT of the two takes about 4 seconds in total.
Now, if I put OPTION (RECOMPILE) at the end of the whole query, it runs great again, returning almost instantly.
I understand that OPTION (RECOMPILE) forces a rebuild of the execution plan, so I'm confused about which is better: my earlier query that takes 4 seconds, or the one with the recompile that takes close to 0 seconds.
Rather than answer the question you asked, here's what you should do:
Update your statistics:
EXEC sp_updatestats
If that doesn't work, rebuild indexes.
If that doesn't work, look at OPTIMIZE FOR
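A rough sketch of those last two steps (table, column, and parameter names are placeholders):
-- Rebuild all indexes on a table involved in the query
ALTER INDEX ALL ON dbo.Orders REBUILD;
-- Pin the plan to a representative parameter value instead of recompiling every time
DECLARE @CustomerID int;
SET @CustomerID = 1234;
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID = 1234));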
When WITH RECOMPILE is specified, SQL Server does not cache a plan for the stored procedure;
the stored procedure is recompiled each time it is executed.
Whenever a stored procedure is run in SQL Server for the first time, it is optimized and a query plan is compiled and cached in SQL Server's memory. Each time the same stored procedure is run after it has been cached, it reuses that query plan, eliminating the need for the stored procedure to be optimized and compiled every time it is run. So if you need to run the same stored procedure 1,000 times a day, a lot of time and hardware resources can be saved, and SQL Server doesn't have to work as hard.
Generally you should not use this option, because it gives up most of the advantage you get from replacing ad hoc SQL queries with stored procedures.
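For reference, the option is declared on the procedure itself; a sketch with placeholder names:
CREATE PROCEDURE dbo.usp_IntersectReport   -- hypothetical procedure
WITH RECOMPILE                             -- no plan is cached; recompiled on every execution
AS
BEGIN
    SELECT OrderID FROM dbo.Orders         -- placeholder queries
    INTERSECT
    SELECT OrderID FROM dbo.ArchivedOrders;
END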

How to show the execution time before the query result is displayed

If you're calling the query from code, you can use a stopwatch to time the query. Start it before execution and stop it immediately after.
Do you mean getting your program to display a "time remaining until query completes" counter, or a progress bar, like when you delete a lot of files in Windows Explorer?
That is not generally possible. Many queries cannot be estimated "in advance" without doing a significant amount of work, so the estimated completion time wouldn't be available until the query was almost finished anyway.
A simple linear scan through a table would be one case where this is possible, but adding other constraints or using indexes would cause headaches.
(Even the Windows example of deleting a large directory is fraught with problems: Explorer has to scan the whole directory to count the files before it starts deleting them, just so it can show you the progress bar, which is why I tend to clobber large directories from the command line to save time.)
Run the query asynchronously; both classic ADO and ADO.NET support asynchronous execution (for example, SqlCommand.BeginExecuteReader in ADO.NET). Then display a running timer until the completion event fires.
How to show the execution time before the query result is displayed?
You can use sys.dm_exec_requests, but it reports progress only for the handful of operations listed below (see the sketch after the list); it cannot estimate progress for ordinary DML/SELECT queries.
sys.dm_exec_requests (Transact-SQL)
ALTER INDEX REORGANIZE
AUTO_SHRINK option with ALTER DATABASE
BACKUP DATABASE
CREATE INDEX
DBCC CHECKDB
DBCC CHECKFILEGROUP
DBCC CHECKTABLE
DBCC INDEXDEFRAG
DBCC SHRINKDATABASE
DBCC SHRINKFILE
KILL (Transact-SQL)
RESTORE DATABASE
UPDATE STATISTICS
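A sketch of how you might poll the progress of one of those operations (the filter is just an example):
SELECT r.session_id,
       r.command,
       r.percent_complete,                              -- reported only for the operations listed above
       r.estimated_completion_time / 1000 AS est_seconds_remaining
FROM sys.dm_exec_requests AS r
WHERE r.percent_complete > 0;                           -- e.g. a running BACKUP DATABASE or DBCC CHECKDB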
// Time the search from application code: capture a timestamp before and after the call
long inicioBusquedaLong = Calendar.getInstance().getTimeInMillis();   // start of the search
this.setListaTramite(this.buscarCriterio());                          // run the search/query
long finBusquedaLong = Calendar.getInstance().getTimeInMillis();      // end of the search
this.tiempoBusqueda = finBusquedaLong - inicioBusquedaLong;           // elapsed time in milliseconds