SQL Server 2000: How can I tell how many plans a stored procedure has cached?

Sometimes when diagnosing issues with our SQL Server 2000 database, it would be helpful to know whether a stored procedure is using a bad plan, or is having trouble coming up with a good plan, at the time I'm running into problems. I'm wondering if there is a query or command I can run to tell me how many execution plans are currently cached for a particular stored procedure.

You can query the cache in a number of different ways, either looking at its contents, or looking at some related statistics.
A couple of commands to help you along your way:
SELECT * FROM syscacheobjects -- shows the contents of the procedure
-- cache for all databases
DBCC PROCCACHE -- shows some general cache statistics
DBCC CACHESTATS -- shows the usage statistics for the cache, things like hit ratio
If you need to clear the cache for just one database, you can use:
DBCC FLUSHPROCINDB (@dbid) -- takes the database id (an int), not the database name.
-- You can get the id from sysdatabases or the DB_ID() function.
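If you want the count for a single procedure on SQL Server 2000, a minimal sketch against syscacheobjects (run it from the procedure's database; 'dbo.MyProc' is a placeholder for your procedure name):
SELECT COUNT(*) AS cached_plans
FROM master.dbo.syscacheobjects
WHERE objtype = 'Proc'
  AND dbid = DB_ID()
  AND objid = OBJECT_ID('dbo.MyProc')
-- add: AND cacheobjtype = 'Compiled Plan' if you only want compiled plans
-- and not the per-connection executable plans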
Edit: everything above this line is for SQL Server 2000, which is what the question asked about. However, for anyone visiting who's using SQL Server 2005, the arrangement is slightly different:
select * from sys.dm_exec_cached_plans -- shows the basic cache stuff
A useful query for showing plans in 2005:
SELECT p.cacheobjtype, p.objtype, p.usecounts, p.refcounts, t.[text]
from sys.dm_exec_cached_plans p
join sys.dm_exec_query_stats s on p.plan_handle = s.plan_handle
cross apply sys.dm_exec_sql_text(s.sql_handle) t
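And to answer the original "how many plans for one procedure" question on 2005, a small sketch (again, 'dbo.MyProc' is a placeholder):
SELECT COUNT(*) AS cached_plans
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
WHERE p.objtype = 'Proc'
  AND t.dbid = DB_ID()
  AND t.objectid = OBJECT_ID('dbo.MyProc')   -- placeholder name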

Related

Why does one T-SQL function have different performance on the same server, using the same tables, when called from different databases?

I have the following situation: one server running SQL Server 2017 with several databases (DB1, DB2, DB3), each with a different purpose and different tables. I have created a function in DB1 called DBFTN1 that uses tables located only in DB1.
When I run this function in SSMS connected to DB1, it takes about 4 seconds and returns around 4K records. For reference, the command I am using is similar to
SELECT * FROM dbo.DBFTN1('20220801');
When I run the same function while connected to DB2
SELECT * FROM DB1.dbo.DBFTN1('20220801');
the result is similar: around 4 seconds and the same records. But when I run the same function from DB3 the performance is very slow: the first records appear after around 25 seconds and it takes around 15 minutes to retrieve the 4K records. The command used is similar to
SELECT * FROM DB1.dbo.DBFTN1('20220801');
What is happening, and how can I get similar performance from any of the databases?
It is important to mention that nothing else is running on the server when I run the test.
I have tried running the following statements against DB3, but the result is still the same:
EXEC sp_updatestats
EXEC sp_MSforeachtable @command1 = 'DBCC DBREINDEX (''?'')';
CHECKPOINT
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
Thanks in advance for your help.
Without the complete code of your UDF it is difficult to answer such a question...
Please post the complete code and the full execution plans. There is a tool in SSMS to compare two execution plans.
I suspect that you have some configuration differences between your databases. There are two levels of configuration that can modify an execution plan:
the database parameters
the scoped database parameters
Just have a look at these values with some queries:
SELECT * FROM sys.databases;
SELECT 'DB1' AS DBNAME, * FROM DB1.sys.database_scoped_configurations
UNION ALL
SELECT 'DB2' AS DBNAME, * FROM DB2.sys.database_scoped_configurations
UNION ALL
SELECT 'DB3' AS DBNAME, * FROM DB3.sys.database_scoped_configurations;

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE not removing cache?

I have an Azure SQL Database (PaaS) and would like to clear the cache to test some stored procedures, so I found some scripts on the Internet:
-- run this script against a user database, not master
-- count number of plans currently in cache
select count(*) from sys.dm_exec_cached_plans;
-- Executing this statement will clear the procedure cache in the current database, which means that all queries will have to recompile.
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
-- count number of plans in cache now, after they were cleared from cache
select count(*) from sys.dm_exec_cached_plans;
-- list available plans
select * from sys.dm_exec_cached_plans;
select * from sys.dm_exec_procedure_stats
However, the number of cached plans stays around 600-800, so somehow it is not dropping.
I didn't get any error (no permission denied etc.), so why isn't this command clearing the cache?
I haven't had time to debug through the code to be 100% sure, but based on my understanding of the system it is likely that merely bumping the database schema version (which happens on any alter database command) will invalidate the entries in the cache on next use. Since the procedure cache is instance wide, any attempt to clear the entries associated with the database would need to walk all entries one-by-one instead of merely freeing the whole cache.
So, you can think of this as invalidating the whole cache but lazily removing entries from the cache as they are recompiled or if the memory is reclaimed by other parts of the system through later actions.
Conor Cunningham
Architect, SQL
I contacted Microsoft support and now understand it.
Try running the following T-SQL against the AdventureWorksLT sample database.
-- create test procedure
create or alter procedure pTest1
as begin
select * from salesLT.product where ProductCategoryID =27
end
go
-- exec the procedure once.
exec pTest1
-- check for cached plan for this specific procedure - cached plan exists
select * from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_sql_text(p.plan_handle) st
where p.objtype = 'proc' and st.objectid = OBJECT_ID('pTest1')
-- clear plan cache
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
-- check for cached plan for this specific procedure - not exists anymore
select * from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_sql_text(p.plan_handle) st
where p.objtype = 'proc' and st.objectid = OBJECT_ID('pTest1')
-- cleanup
drop procedure pTest1
select 'cleanup complete'
With this sample we can confirm that the plan cache is cleared for the database. However, sys.dm_exec_cached_plans is server-wide and also returns results from other databases (including internal system databases), whose cached plans are not cleared by the CLEAR PROCEDURE_CACHE command.
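To check the count at the database level rather than with the server-wide total, a small sketch that counts only plans whose batch text resolves to the current database (ad hoc batches can report a NULL dbid and are not counted here):
-- Count cached plans that belong to the current database
SELECT COUNT(*) AS plans_for_this_db
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) st
WHERE st.dbid = DB_ID();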

Parameter Sniffing causing slowdown for text-based query, how to remove execution plan?

I have a SQL query whose exact text is generated in C# and passed through ADO.NET as a text-based SqlCommand.
The query looks something like this:
SELECT TOP (@n)
    a.ID,
    a.Event_Type_ID as EventType,
    a.Date_Created,
    a.Meta_Data
FROM net.Activity a
LEFT JOIN net.vu_Network_Activity na WITH (NOEXPAND)
    ON na.Member_ID = @memberId AND na.Activity_ID = a.ID
LEFT JOIN net.Member_Activity_Xref ma
    ON ma.Member_ID = @memberId AND ma.Activity_ID = a.ID
WHERE
    a.ID < @LatestId
    AND (
        (Event_Type_ID IN (1,2,3))
        OR
        (
            (na.Activity_ID IS NOT NULL OR ma.Activity_ID IS NOT NULL)
            AND Event_Type_ID IN (4,5,6)
        )
    )
ORDER BY a.ID DESC
This query has been working well for quite some time. It takes advantage of some indexes we have on these tables.
In any event, all of a sudden this query started running really slow, but ran almost instantaneously in SSMS.
Eventually, after reading several resources, I was able to verify that the slowdown we were getting was from poor parameter sniffing.
By copying all of the parameters to local variables, I was able to work around the problem. The thing is, this just feels all kinds of wrong to me.
I'm assuming that what happened is that the statistics on one of these tables were updated, and then, by some crappy luck, the very first time this query was recompiled it was called with parameter values that caused the execution plan to differ?
I was able to track down the query in Activity Monitor, and the execution plan that resulted in the query running in ~13 seconds was: (execution plan screenshot omitted)
Running it in SSMS results in the following execution plan (and only takes ~100ms): (execution plan screenshot omitted)
So what is the question?
I guess my question is this: How can I fix this problem, without copying the parameters to local variables, which could lead to a large number of cached execution plans?
Quote from the linked comment / Jes Borland:
You can use local variables in stored procedures to “avoid” parameter sniffing. Understand, though, that this can lead to many plans stored in the cache. That can have its own performance implications. There isn’t a one-size-fits-all solution to the problem!
My thinking is that if there is some way for me to manually remove the current execution plan from the plan cache, that might just be good enough... but everything I have found online only shows me how to do this for an actual named stored procedure.
This is a text-based SqlCommand coming from C#, so how do I find the cached execution plan, with the sniffed parameter values, and remove it?
Note: the somewhat obvious solution of "just create a proper stored procedure" is difficult to do because this query can get generated in a number of different ways... and would require a somewhat unpleasant refactor.
If you want to remove a specific plan from the cache then it is really a two step process: first obtain the plan handle for that specific plan; and then use DBCC FREEPROCCACHE to remove that plan from the cache.
To get the plan handle, you need to look in the execution plan cache. The T-SQL below is an example of how you could search for the plan and get the handle (you may need to play with the filter clause a bit to hone in on your particular plan):
SELECT top (10)
qs.last_execution_time,
qs.creation_time,
cp.objtype,
SUBSTRING(qt.[text], qs.statement_start_offset/2, (
CASE
WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.[text])) * 2
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2 + 1
) AS query_text,
qt.text as full_query_text,
tp.query_plan,
qs.sql_handle,
qs.plan_handle
FROM
sys.dm_exec_query_stats qs
LEFT JOIN sys.dm_exec_cached_plans cp ON cp.plan_handle=qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text (qs.[sql_handle]) AS qt
OUTER APPLY sys.dm_exec_query_plan(qs.plan_handle) tp
WHERE qt.text like '%vu_Network_Activity%'
Once you have the plan handle, call DBCC FREEPROCCACHE as below:
DBCC FREEPROCCACHE(<plan_handle>)
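If you want to do both steps in one script, here is a minimal sketch (the LIKE filter is only an example and should be tightened so it matches your generated SQL; note that it would otherwise also match the search query itself):
-- Grab the handle of the most recently executed matching plan and evict just that plan
DECLARE @plan_handle varbinary(64);
SELECT TOP (1) @plan_handle = qs.plan_handle
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) qt
WHERE qt.[text] LIKE '%vu_Network_Activity%'
  AND qt.[text] NOT LIKE '%dm_exec_query_stats%'  -- exclude this search query itself
ORDER BY qs.last_execution_time DESC;
IF @plan_handle IS NOT NULL
    DBCC FREEPROCCACHE (@plan_handle);            -- removes only that one plan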
There are many ways to delete/invalidate a query plan:
DBCC FREEPROCCACHE(plan_handle)
or
EXEC sp_recompile 'net.Activity'
or
adding OPTION (RECOMPILE) query hint at the end of your query
or
using the optimize for ad hoc workloads server setting
or
updating statistics
If you have a crappy product from a crappy vendor and cannot change the query, the best way to handle parameter sniffing is to create your own plan guide using EXEC sp_create_plan_guide.
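A minimal sketch of such a plan guide (the guide name, statement text, parameter list and hint are all illustrative; for @type = N'SQL' the @stmt text must match the parameterized batch exactly as the application submits it):
EXEC sp_create_plan_guide
    @name = N'PG_ActivityFeed',            -- hypothetical guide name
    @stmt = N'SELECT TOP (@n) a.ID FROM net.Activity a WHERE a.ID < @LatestId ORDER BY a.ID DESC',
    @type = N'SQL',
    @module_or_batch = NULL,               -- NULL means @stmt is the whole batch
    @params = N'@n int, @LatestId int',
    @hints = N'OPTION (OPTIMIZE FOR (@LatestId UNKNOWN))';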

Optimizing the SQL query using SQL Profiler

I got the following numbers from SQL Profiler when my proc is called:
CPU - 78, Reads - 3508, Writes - 0, Duration - 81
Is the above data OK for 70 concurrent users on my website? My proc is called on every page the user visits, and the performance monitor on my server shows the anonymous user hits keep increasing once I enable the proc call.
Please suggest what I should look at and where. My proc is given below:
ALTER PROCEDURE [dbo].[GETDataFromLinkInfo]
-- Add the parameters for the stored procedure here
(@PageID INT)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
SELECT DISTINCT [PUBLICATION_ID] AS n,
[URL] AS u
FROM [LINK_INFO] WITH(NOLOCK)
WHERE Page_ID = @PageID
AND Component_Template_Priority > 0
AND PUBLICATION_ID NOT IN( 232, 481 )
ORDER BY URL
FOR XML RAW ('p'), ROOT ('ps');
RETURN
END
Execute the proc 1000 times and measure how long it takes. Elapsed time is the most important metric for judging performance; don't measure reads and writes.
Regarding my advice not to look at reads/writes: reads can be either cached or non-cached, which is a huge difference. I always look at the execution plan to see why a query is behaving the way it is and what to do about it. The read/write metrics tell you neither whether you need to act nor what to do.
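A rough harness for that measurement might look like the sketch below (the @PageID value is a placeholder; use a realistic one, and consider SSMS's "discard results after execution" option so rendering 1000 result sets doesn't skew the timing):
DECLARE @i int, @start datetime
SET @i = 0
SET @start = GETDATE()
WHILE @i < 1000
BEGIN
    EXEC dbo.GETDataFromLinkInfo @PageID = 1   -- placeholder parameter value
    SET @i = @i + 1
END
SELECT DATEDIFF(ms, @start, GETDATE()) AS total_elapsed_ms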
As Phil suggests, analyzing the query with the Database Engine Tuning Advisor is probably your best bet. You'd have to provide quite a bit more info to get any precise answers here; people might suggest one or more indexes, but an index's efficacy depends on the selectivity of the columns in it. Adding an unhelpful index could actually hurt performance in other ways.
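For illustration only (whether this helps depends entirely on the selectivity discussed above), a covering index for the query in the question might look like:
-- Hypothetical index; weigh the read benefit against the extra write cost before creating it
CREATE NONCLUSTERED INDEX IX_LINK_INFO_PageID_Priority
ON dbo.LINK_INFO (Page_ID, Component_Template_Priority, PUBLICATION_ID)
INCLUDE ([URL]);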

SP taking 15 minutes, but the same query when executed returns results in 1-2 minutes

So basically I have this relatively long stored procedure. The basic execution flow is that it SELECTs data INTO some temp tables declared with the # sign, then runs a cursor through those tables to generate a 'running total' into a third temp table, which is created using CREATE TABLE. This resulting temp table is then joined with other tables in the DB to generate the result after some grouping etc.
The problem is, this SP had been running fine until now, returning results in 1-2 minutes. Now, suddenly, it's taking 12-15 minutes. If I extract the query from the SP and execute it in Management Studio, manually setting the same parameters, it returns results in 1-2 minutes, but the SP takes very long.
Any idea what could be happening? I tried to generate the actual execution plans of both the query and the SP, but they couldn't be generated because of the cursor. Any idea why the SP takes so long while the query doesn't?
This is the footprint of parameter sniffing. See here for another discussion about it: SQL poor stored procedure execution plan performance - parameter sniffing
There are several possible fixes, including adding WITH RECOMPILE to your stored procedure, which works about half the time.
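For reference, a minimal sketch of the WITH RECOMPILE variant (the procedure and parameter names are placeholders, and the body stands in for your existing code):
ALTER PROCEDURE dbo.YourProc
    @SomeParam int
WITH RECOMPILE   -- compile a fresh plan on every execution; nothing is cached
AS
BEGIN
    SELECT 1 AS placeholder   -- existing procedure body goes here
END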
The recommended fix for most situations (though it depends on the structure of your query and sproc) is to NOT use your parameters directly in your queries, but rather store them into local variables and then use those variables in your queries.
It's due to parameter sniffing. First declare a local variable, set the incoming parameter's value into that local variable, and then use the local variable throughout the procedure. Here is an example:
ALTER PROCEDURE [dbo].[Sp_GetAllCustomerRecords]
    @customerId INT
AS
DECLARE @customerIdTemp INT
SET @customerIdTemp = @customerId
BEGIN
    SELECT *
    FROM Customers e
    WHERE CustomerId = @customerIdTemp
END
try this approach
Try recompiling the sproc to ditch any stored query plan
exec sp_recompile 'YourSproc'
Then run your sproc, taking care to use sensible parameters.
Also compare the actual execution plans between the two methods of executing the query.
It might also be worth updating statistics.
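For example, a quick statistics refresh might look like this (the table name is purely illustrative; FULLSCAN is optional but gives the optimizer the most accurate picture):
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN  -- a table used by the sproc
-- or refresh everything in the database:
EXEC sp_updatestats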
I'd also look into parameter sniffing. Could be the proc needs to handle the parameters slightly differently.
I usually start troubleshooting issues like that by sprinkling timestamped PRINT statements through the procedure, e.g. PRINT CONVERT(varchar(23), GETDATE(), 121) + ' - step 1'. This helps me narrow down what's taking the most time. You can then compare against running it from Query Analyzer and narrow down where the problem is.
I would guess it could possibly be down to caching. If you run the stored procedure twice, is it faster the second time?
To investigate further, you could run both the stored procedure and the query version from Management Studio with the execution plan option turned on, then compare which area is taking longer in the stored procedure than when it is run as a query.
Alternatively, you could post the stored procedure here for people to suggest optimizations.
For a start it doesn't sound like the SQL is going to perform too well anyway based on the use of a number of temp tables (could be held in memory, or persisted to tempdb - whatever SQL Server decides is best), and the use of cursors.
My suggestion would be to see if you can rewrite the sproc as a set-based query instead of a cursor-approach which will give better performance and be a lot easier to tune and optimise. Obviously I don't know exactly what your sproc does, to give an indication as to how easy/viable this is for you.
As to why the SP is taking longer than the query - difficult to say. Is there the same load on the system when you try each approach? If you run the query itself when there's a light load, it will be better than when you run the SP during a heavy load.
Also, to ensure the query truly is quicker than the SP, you need to rule out data/execution plan caching which makes a query faster for subsequent runs. You can clear the cache out using:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
But only do this on a dev/test db server, not on production.
Then run the query, record the stats (e.g. from profiler). Clear the cache again. Run the SP and compare stats.
1) When you run the query for the first time it may take more time. Another point: if you are using a correlated subquery and you hardcode the values, it is evaluated only once; when you don't hardcode it, run it through the procedure, and derive the value from the input parameter, it might take more time.
2) In rare cases it can be due to network traffic, in which case you won't see consistent execution times even for the same input data.
I too faced a problem where we had to create some temp tables, manipulate them to calculate some values based on rules, and finally insert the calculated values into a third table. All of this in a single SP was taking around 20-25 minutes. To optimize it further we broke the SP into 3 different SPs, and the total time came down to around 6-8 minutes. Just identify the steps involved in the whole process and how to break them up into different SPs. Using this approach, the overall time taken by the entire process should come down.
This is because of parameter sniffing. But how can you confirm it?
Whenever we try to optimize an SP we look at the execution plan. But in your case you will see an optimized plan from SSMS, because it only takes more time when it is called through code.
SQL Server caches separate plans for the same SP or function when the connection's SET options differ, and ARITHABORT is the usual culprit: one plan for SSMS and another for external clients (ADO.NET).
ARITHABORT is ON by default in SSMS but OFF for most client libraries. So if you want to see the exact query plan your SP uses when it is called from code,
just flip the option in SSMS and execute your SP; you will see that the SP then takes 12-13 minutes from SSMS as well:
SET ARITHABORT OFF
EXEC YourSpName
SET ARITHABORT ON
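To see for yourself which SET options a cached plan was compiled with, you can inspect the plan's attributes; a small sketch (the procedure name is a placeholder):
-- set_options is a bitmask of the SET options in effect at compile time (including ARITHABORT)
SELECT p.usecounts, pa.attribute, pa.value
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_plan_attributes(p.plan_handle) pa
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) st
WHERE pa.attribute = 'set_options'
  AND st.objectid = OBJECT_ID('dbo.YourSpName');   -- placeholder name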
To solve this problem you need to invalidate the stale cached plan.
There are a couple of ways to do that:
1. Update the table statistics.
2. Recompile the SP (EXEC sp_recompile).
3. Set ARITHABORT ON on the application connection so it always gets the same plan as SSMS (this option is not recommended; it only masks the underlying parameter-sniffing issue).
For more options please refer to this awesome article -
http://www.sommarskog.se/query-plan-mysteries.html
I would suggest the issue is related to the type of temp table (the # prefix). A # temp table holds its data for that database session. When you run the procedure through your app, the temp table is dropped and recreated each time.
You might find that when running in SSMS, the session is kept alive and the table is updated instead of recreated.
Hope that helps :)