I have an Azure PaaS database and would like to clear the cache to test some stored procedures. So I found some scripts on the Internet:
-- run this script against a user database, not master
-- count number of plans currently in cache
select count(*) from sys.dm_exec_cached_plans;
-- Executing this statement will clear the procedure cache in the current database, which means that all queries will have to recompile.
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
-- count number of plans in cache now, after they were cleared from cache
select count(*) from sys.dm_exec_cached_plans;
-- list available plans
select * from sys.dm_exec_cached_plans;
select * from sys.dm_exec_procedure_stats
However, the plan count is always around 600-800, so somehow the cache is not being dropped.
I didn't get any error (no permission denied etc.), so why is this command not clearing the cache?
I haven't had time to debug through the code to be 100% sure, but based on my understanding of the system it is likely that merely bumping the database schema version (which happens on any alter database command) will invalidate the entries in the cache on next use. Since the procedure cache is instance wide, any attempt to clear the entries associated with the database would need to walk all entries one-by-one instead of merely freeing the whole cache.
So, you can think of this as invalidating the whole cache but lazily removing entries from the cache as they are recompiled or if the memory is reclaimed by other parts of the system through later actions.
Conor Cunningham
Architect, SQL
I contacted Microsoft support and now understand it.
Try running the following T-SQL against the AdventureWorks database.
-- create test procedure
create or alter procedure pTest1
as begin
select * from salesLT.product where ProductCategoryID =27
end
go
-- exec the procedure once.
exec pTest1
-- check for cached plan for this specific procedure - cached plan exists
select * from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_sql_text(p.plan_handle) st
where p.objtype = 'proc' and st.objectid = OBJECT_ID('pTest1')
-- clear plan cache
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
-- check for cached plan for this specific procedure - not exists anymore
select * from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_sql_text(p.plan_handle) st
where p.objtype = 'proc' and st.objectid = OBJECT_ID('pTest1')
-- cleanup
drop procedure pTest1
select 'cleanup complete'
With this sample we can confirm that the plan cache is cleared for the database. However, sys.dm_exec_cached_plans is server-wide, so it also returns plans from other databases (including internal system databases) whose caches were not cleared by the CLEAR PROCEDURE_CACHE command.
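If you want to restrict the plan counts in the first script to the current database only, filtering on the dbid plan attribute should work; this is a sketch, not something confirmed with Microsoft support:
-- count only the plans that belong to the current database
select count(*)
from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_plan_attributes(p.plan_handle) a
where a.attribute = 'dbid'
and a.value = DB_ID();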
Related
I have a session created in my VB.NET code that runs some SQL queries; there are some local temp tables like #T1, #T2, ...
Execution process has some steps and I need to know which data changes in my local tables in each step.
Currently I use this to view the data in my code:
select * into ##T1 from #T1
I can't use sp_getbindtoken because there is no active transaction, and I cannot use DBCC because I don't have permission.
I can query the sys.dm_exec_sessions view, and therefore I have the active session_id.
I also have the connection index of the active SQL connection.
Is there any way to connect to an active session and access its local temp tables?
Or is there any way to get the data of #T1, #T2, ...?
EDIT1:
In response to the comment by @SeanLange:
I have some temp tables, as I said, and in the steps mentioned before I do some calculations on them. To trace these calculations I need to know what happens in each step, so I want to execute a simple SELECT statement against these temp tables. What I wanted to do was connect, from an external project called Tracer, to the active session created by my source code, and perform SELECT statements while my source is running, tracing the data created in that session.
You can't do it, sorry (at least not without sa privileges).
Run your queries from within a stored procedure and add code to log whatever you need to a table, then query the log table as needed.
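A minimal sketch of that approach, with hypothetical table and step names:
-- permanent log table, created once
create table dbo.TraceLog (
    LogTime datetime not null default getdate(),
    StepName varchar(100) not null,
    Info nvarchar(4000) null
);
-- inside the procedure, after each step of interest:
insert into dbo.TraceLog (StepName, Info)
select 'after step 1', 'rows in #T1: ' + cast(count(*) as varchar(10)) from #T1;
-- or snapshot the whole temp table into a permanent table for later inspection:
select * into dbo.T1_AfterStep1 from #T1;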
Execution process has some steps and I need to know which data changes in my local tables in each step.
If you have permission, you can create a trigger to do the logging for you.
I made a silly mistake at work once on one of our in-house test databases. I was updating a record I had just added because I made a typo, but it resulted in many records being updated: in the WHERE clause I used the foreign key instead of the unique id of the particular record I had just added.
One of our senior developers told me to do a SELECT first to test which rows an update will affect before actually running it. Besides this, is there a way to execute a query and see the results, but not have it commit to the db until I tell it to do so? Next time I might not be so lucky. It's a good job only senior developers can do live updates!
It seems to me that you just need to get into the habit of opening a transaction:
BEGIN TRANSACTION;
UPDATE [TABLENAME]
SET [Col1] = 'something', [Col2] = '..'
OUTPUT DELETED.*, INSERTED.* -- So you can see what your update did
WHERE ....;
ROLLBACK;
Then you just run it again after seeing the results, changing ROLLBACK to COMMIT, and you are done!
If you are using Microsoft SQL Server Management Studio, you can go to Tools > Options... > Query Execution > ANSI > SET IMPLICIT_TRANSACTIONS, and SSMS will open the transaction automatically for you. Just don't forget to commit when you must, and be aware that you may be blocking other connections until you commit / rollback or close the connection.
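The same behavior can also be switched on per session in T-SQL, without touching the SSMS options; a small sketch with a hypothetical table name:
SET IMPLICIT_TRANSACTIONS ON;
UPDATE dbo.SomeTable   -- hypothetical table; this statement implicitly opens a transaction
SET Col1 = 'something'
WHERE Id = 42;
SELECT * FROM dbo.SomeTable WHERE Id = 42;  -- inspect the result
ROLLBACK;  -- or COMMIT once you are happy with it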
First, assume you will make a mistake when updating a db, so never do it unless you know how to recover; if you don't, don't run the code until you do.
The most important idea is that it is a dev database: expect it to be messed up, so make sure you have a quick way to reload it.
Doing a SELECT first is always a good idea to see which rows are affected.
However, for a quicker way back to a good state of the database, which I would do anyway:
For a simple update etc.
Use transactions.
Do a BEGIN TRANSACTION, then do all the updates etc., and then SELECT to check the data.
The database will not be affected, as far as others can see, until you do a final COMMIT, which you only do when you are sure all is correct, or a ROLLBACK to get back to the state you were in at the beginning.
If you must test in a production database and you have the requisite permissions, then write your queries to create and use temporary tables that are similar in name to the production tables and whose schema, other than index names, is identical. Index names are unique across a database, at least on Informix.
Then run your queries and look at the data.
Other than that, IMHO you need a development database, and perhaps even a development server with a development instance. That's paranoid advice, but you'd have to be very careful, even if you were allowed -- MS SQL Server lingo here -- a second instance on the same server.
I can reload our test database at will, and that's why we have a test system. Our production system contains citizens' tax payments and other information that cannot be harmed, "or else".
For our production data changes, we always ensure that we use BEGIN TRAN and ROLLBACK TRAN, and that all statements have an OUTPUT clause. This way we can run the script first (usually against a copy of the PRODUCTION db) and see what is affected before changing the ROLLBACK TRAN to COMMIT TRAN.
Have you considered EXPLAIN (this is PostgreSQL)?
If there is a mistake in the command, it will report it as with usual commands.
But if there are no mistakes it will not run the command, it will just explain it.
Example of a "passed" test:
testdb=# explain select * from sometable ;
QUERY PLAN
------------------------------------------------------------
Seq Scan on sometable (cost=0.00..12.60 rows=260 width=278)
(1 row)
Example of a "failed" test:
testdb=# explain select * from sometaaable ;
ERROR: relation "sometaaable" does not exist
LINE 1: explain select * from sometaaable ;
It also works with INSERT, UPDATE, and DELETE (i.e. the "dangerous" ones).
I'm using SQL Server 2008 R2.
I have a view; let's call it view1. This view is complex and slow. It cannot be made into an indexed view because it uses left joins and various other trickery. As such, we created a stored procedure which basically:
obtains an exclusive lock
selects * into computed_view1_tmp from view1; (slow)
creates indexes on the above computed table (slow)
renames computed_view1 to computed_view1_todelete; and does the same for its indexes (assumed fast)
renames computed_view1_tmp to computed_view1; and does the same for its indexes (assumed fast)
drops the table computed_view1_todelete (slow)
releases the lock.
We run this procedure when we know we're changing the data in our web application. We then have other views, such as view2 using computed_view1 instead of view1.
Once in a while, we get:
Invalid object name 'dbo.computed_view1'. Could not use view or
function 'dbo.view2' because of binding errors.
I assume this is because we're trying to access dbo.computed_view1 at the same time as it's being renamed. I assume this is a very short window, but the frequency with which I'm seeing this error in my logs makes me wonder if something else might be at play. I'm getting the error many times per day on a site with about a dozen users active throughout the day.
In development, this procedure takes about five seconds given the amount of data in the view. Renaming is instantaneous. In production, it must be taking longer but I don't understand why. I once saw the procedure fail to obtain the exclusive lock within 90 seconds.
Any thoughts on how to fix or a better solution?
Edit: Extra notes on my locking - maybe I'm not doing this right:
BEGIN TRANSACTION
DECLARE @result int
-- note: @LockTimeout is in milliseconds, so 90 here means 90 ms, not 90 seconds
EXEC @result = sp_getapplock @Resource = 'lock_computed_view1', @LockMode = 'Exclusive', @LockTimeout = 90
IF @result NOT IN ( 0, 1 ) -- Only successful return codes
BEGIN
PRINT @result
RAISERROR ( 'Lock failed to acquire...', 16, 1 )
END
ELSE
BEGIN
-- rest of the magic
END
EXEC @result = sp_releaseapplock @Resource = 'lock_computed_view1'
COMMIT TRANSACTION
If your locking and transaction scope are right, I would expect other transactions to wait and never see the view missing. This might be a SQL Server idiosyncrasy that I don't know about.
It is often possible to do without dynamic DDL. Here are two ways to do it:
TRUNCATE the computed table and insert into it. This takes an exclusive lock automatically. No need to rename. All of this is atomic and supports rollback.
Use a staging table with the same schema. Work on that; so far, no service interruption at all. Then SWITCH the staging table with the production table (ALTER TABLE ... SWITCH). This is quick and atomic, and it does not require Enterprise Edition.
With these approaches the problem is solved by just not renaming.
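A rough sketch of the staging-table approach, assuming two extra tables, computed_view1_staging and computed_view1_old (hypothetical names), whose schemas and indexes are identical to computed_view1 and which start out empty:
-- rebuild the staging copy; the slow work happens while production still serves reads
truncate table dbo.computed_view1_staging;
insert into dbo.computed_view1_staging select * from dbo.view1;
begin transaction;
-- both switches are metadata-only operations, so the window is tiny
alter table dbo.computed_view1 switch to dbo.computed_view1_old;
alter table dbo.computed_view1_staging switch to dbo.computed_view1;
commit transaction;
truncate table dbo.computed_view1_old; -- leave it empty for the next run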
Sometimes when diagnosing issues with our SQL Server 2000 database, it might be helpful to know that a stored procedure is using a bad plan, or is having trouble coming up with a good plan, at the time I'm running into problems. I'm wondering if there is a query or command I can run to tell me how many execution plans are currently cached for a particular stored procedure.
You can query the cache in a number of different ways, either looking at its contents, or looking at some related statistics.
A couple of commands to help you along your way:
SELECT * FROM syscacheobjects -- shows the contents of the procedure
-- cache for all databases
DBCC PROCCACHE -- shows some general cache statistics
DBCC CACHESTATS -- shows the usage statistics for the cache, things like hit ratio
If you need to clear the cache for just one database, you can use:
DECLARE @dbid int
SET @dbid = DB_ID() -- that's an int, not the name of it; the int you'd get
                    -- from sysdatabases or the DB_ID() function
DBCC FLUSHPROCINDB (@dbid)
Edit: the above is for SQL Server 2000, which is what the question asked about. However, for anyone visiting who's using SQL Server 2005, it's a slightly different arrangement:
select * from sys.dm_exec_cached_plans -- shows the basic cache stuff
A useful query for showing plans in 2005:
SELECT p.cacheobjtype, p.objtype, p.usecounts, p.refcounts, st.text
from sys.dm_exec_cached_plans p
join sys.dm_exec_query_stats s on p.plan_handle = s.plan_handle
cross apply sys.dm_exec_sql_text(s.sql_handle) st
I need to add an index to a table, and I want to recompile only/all the stored procedures that make reference to this table. Is there any quick and easy way?
EDIT:
from SQL Server 2005 Books Online, Recompiling Stored Procedures:
As a database is changed by such actions as adding indexes or changing data in indexed columns, the original query plans used to access its tables should be optimized again by recompiling them. This optimization happens automatically the first time a stored procedure is run after Microsoft SQL Server 2005 is restarted. It also occurs if an underlying table used by the stored procedure changes. But if a new index is added from which the stored procedure might benefit, optimization does not happen until the next time the stored procedure is run after Microsoft SQL Server is restarted. In this situation, it can be useful to force the stored procedure to recompile the next time it executes
Another reason to force a stored procedure to recompile is to counteract, when necessary, the "parameter sniffing" behavior of stored procedure compilation. When SQL Server executes stored procedures, any parameter values used by the procedure when it compiles are included as part of generating the query plan. If these values represent the typical ones with which the procedure is called subsequently, then the stored procedure benefits from the query plan each time it compiles and executes. If not, performance may suffer
You can execute sp_recompile and supply the name of the table you've just indexed. All procs that depend on that table will be flushed from the stored proc cache and will be "compiled" the next time they are executed.
See this from the MSDN docs:
sp_recompile (Transact-SQL)
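For example (the table name is hypothetical):
-- flags every proc and trigger that references the table for recompilation on next use
EXEC sp_recompile N'dbo.YourTable';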
They are generally recompiled automatically. I guess I don't know if this is guaranteed, but it is what I have observed: if you change the objects referenced by the sproc (e.g. add an index), then it recompiles. To see it in action:
create table mytable (i int identity)
insert mytable default values
go 100
create proc sp1 as select * from mytable where i = 17
go
exec sp1
If you look at the plan for this execution, it shows a table scan as expected.
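One way to look at the plan, for instance, is SET STATISTICS PROFILE, which returns the executed plan as an extra result set:
set statistics profile on
exec sp1
set statistics profile off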
create index mytablei on mytable(i)
exec sp1
The plan has changed to an index seek.
EDIT: OK, I came up with a query that appears to work. This gives you all sproc names that have a reference to a given table in the plan cache. You can concatenate the sproc name with the sp_recompile syntax to generate a bunch of sp_recompile statements you can then execute (see the snippet after the query).
;WITH XMLNAMESPACES (default 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
,TableRefs (SProcName, ReferencedTableName) as
(
select
object_name(qp.objectid) as SProcName,
objNodes.objNode.value('@Database', 'sysname') + '.' + objNodes.objNode.value('@Schema', 'sysname') + '.' + objNodes.objNode.value('@Table', 'sysname') as ReferencedTableName
from sys.dm_exec_cached_plans cp
outer apply sys.dm_exec_sql_text(cp.plan_handle) st
outer apply sys.dm_exec_query_plan(cp.plan_handle) as qp
outer apply qp.query_plan.nodes('//Object[@Table]') as objNodes(objNode)
where cp.cacheobjtype = 'Compiled Plan'
and cp.objtype = 'Proc'
)
select
*
from TableRefs
where SProcName is not null
and isnull(ReferencedTableName,'') = '[db].[schema].[table]'
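To generate the sp_recompile statements directly, the final SELECT above can be swapped for something like this (a sketch; the three-part table name stays a placeholder):
select distinct 'exec sp_recompile N''' + SProcName + ''';'
from TableRefs
where SProcName is not null
and isnull(ReferencedTableName,'') = '[db].[schema].[table]'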
I believe that the stored procedures that would potentially benefit from the presence of the index in question will automatically have a new query plan generated, provided the auto generate statistics option has been enabled.
See the section entitled Recompiling Execution Plans for details of what eventualities cause an automatic recompilation.
http://technet.microsoft.com/en-us/library/ms181055(SQL.90).aspx