Does "With recompile" recompile all the queries in stored procedure? - sql

So we have quite a large database and a stored procedure which searches through a large number of documents. Depending on the context, it either fetches millions of documents or just one hundred.
The point is, the procedure takes 30 seconds whether it fetches millions of documents or just a hundred, which is surreal. If I add OPTION (RECOMPILE) after each of the five queries, it takes a second for 100 documents and (as expected) 30 seconds for millions of documents.
I've tried creating the procedure with the WITH RECOMPILE option, but it seems that it doesn't recompile the queries inside it.
Is this correct? Does WITH RECOMPILE on a stored procedure recompile the inner queries, or just the execution plan for the entire SP? How can I get this behaviour without repeating OPTION (RECOMPILE) after each query?

Does WITH RECOMPILE on a stored procedure recompile the inner queries, or just the execution plan for the entire SP?
Inner queries or just the execution plan - I am not sure what that distinction means. WITH RECOMPILE at the stored procedure level will cause recompilation every time the proc is executed, and the plan is not saved to the cache.
How can I do this without repeating OPTION (RECOMPILE) after each query?
create proc usp_test
with Recompile
as
Begin
--code
End
Some More Details:
WITH RECOMPILE will compile a new plan for the entire stored proc every time it is run.
Suppose you have the proc below:
create proc usp_test
as
Begin
select * from t1
select * from t2
End
Adding RECOMPILE at the top of the stored proc will cause SQL Server to recompile all the statements in the stored proc.
Instead of recompiling the whole proc, if you know for sure which statement is causing the issue, you can add OPTION (RECOMPILE) to just that statement, like below:
create proc usp_test
as
Begin
select * from t1 option(recompile)
select * from t2
End
Doing this, you avoid unnecessary recompilation of the other statements.

Both OPTION (RECOMPILE) and WITH RECOMPILE will give you execution plans based on the parameter values used on each execution, but only OPTION (RECOMPILE) allows the "parameter embedding optimization", where the parameters are replaced with literal constants while the query is being parsed. This can allow the optimizer to make further simplifications to the query.
I would find out which query is actually causing the performance issue and use OPTION(RECOMPILE) on that.
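To illustrate the parameter embedding point, here is a minimal sketch (the table, column, and variable names are hypothetical):
DECLARE @FolderId int = NULL;   -- NULL meaning "all folders"
SELECT *
FROM dbo.Documents
WHERE (FolderId = @FolderId OR @FolderId IS NULL)
OPTION (RECOMPILE);
-- With the hint, @FolderId is embedded as the literal NULL at compile time, so the
-- optimizer can simplify the WHERE clause away entirely for this call. WITH RECOMPILE
-- on the procedure also recompiles the plan, but without that embedding step.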

Related

SQL Server : unexpected performance issue between in-line and stored procedure execution

I have run into an enigma of sorts while researching a performance issue with a specific stored procedure. I did not create that stored procedure, and the logic is fairly ugly with nested selects in join statements, etc...
However, when I copy the logic of the stored procedure directly into a new query window, add the parameters at the top and execute it, this runs in under 400 milliseconds. Yet, when I call the stored procedure and execute it with the exact same parameter values, it takes 23 seconds to run!
This makes absolutely no sense at all to me!
Are there some server-level settings that I should check which could potentially be impacting this?
Thanks
Recompile your stored procedure.
The Microsoft documentation says
When a procedure is compiled for the first time or recompiled, the procedure's query plan is optimized for the current state of the database and its objects. If a database undergoes significant changes to its data or structure, recompiling a procedure updates and optimizes the procedure's query plan for those changes. This can improve the procedure's processing performance.
and
Another reason to force a procedure to recompile is to counteract the "parameter sniffing" behavior of procedure compilation. When SQL Server executes procedures, any parameter values that are used by the procedure when it compiles are included as part of generating the query plan. If these values represent the typical ones with which the procedure is subsequently called, then the procedure benefits from the query plan every time that it compiles and executes. If parameter values on the procedure are frequently atypical, forcing a recompile of the procedure and a new plan based on different parameter values can improve performance.
If you run the SP's queries yourself (in SSMS maybe) they get compiled and run.
How to recompile your SP? (See the doc page linked above.)
You can rerun its definition to recompile it once. That may be enough if the procedure was first defined long ago in the life of your database and your tables have grown a lot.
You can put CREATE PROCEDURE ... WITH RECOMPILE AS into its definition so it will recompile every time you run it.
You can EXEC procedure WITH RECOMPILE; when you run it.
You can restart your SQL Server. (This can be a nuisance; booting the server magically makes bad things stop happening and nobody's quite sure why.)
Recompiling takes some time. But it takes far less time than a complex query with a bad execution plan would.
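For reference, a minimal sketch of the second and third options, plus sp_recompile from the linked doc page (the procedure, parameter, and table names are hypothetical):
-- Bake it into the definition: no plan is cached, and the proc recompiles on every call.
ALTER PROCEDURE dbo.MyProc
    @Id int
WITH RECOMPILE
AS
BEGIN
    SELECT * FROM dbo.MyTable WHERE Id = @Id;
END
GO
-- Recompile for this one execution only.
EXEC dbo.MyProc @Id = 42 WITH RECOMPILE;
-- Mark the proc so its cached plan is discarded and rebuilt on the next call.
EXEC sp_recompile N'dbo.MyProc';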
So... I ended up turning the nested selects on the joins into table variables, and now the sproc executes in 60 milliseconds, while the in-line SQL takes 250 ms.
However, I still do not understand why the sproc was performing so much slower than the in-line SQL version with the original nested SQL logic.
I mean, both were using the exact same SQL logic, so why was the sproc taking 23 seconds while the in-line version took 400 ms?

Index health and statistics updates after 1000 updates

I have a stored procedure which gets data from 5 tables. The tables receive approximately 1000 inserts and 1000 updates per hour. After the inserts and updates, the stored procedure runs into a timeout.
When I rebuild one of the indexes of a table referenced in the stored procedure, it starts working normally again, but it breaks down again after the next 1000 records are updated.
What should I do?
OK, I think you are mistaken here when you say rebuilding the index is what solves the problem.
I think it is actually that rebuilding the index invalidates the cached execution plan, and the next execution after the rebuild forces SQL Server to recompile the execution plan.
Normally SQL Server would use a cached execution plan for a stored procedure, but there are some factors that can cause SQL Server to recompile the plan even if there is a cached plan in the procedure cache. Rebuilding, or making any change to, an index that is used in the execution of a stored procedure will result in recompilation of the plan.
Since you are inserting 1000 rows every hour, you also want to keep your statistics updated; I would run a nightly job to update statistics.
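For example, a nightly job step can be as simple as this (dbo.MyTable stands in for one of your five tables; the name is hypothetical):
-- Refresh statistics on the most heavily modified table with a full scan...
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
-- ...or refresh statistics for every table in the database.
EXEC sp_updatestats;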
But for your stored procedure, use the WITH RECOMPILE option in the procedure's definition, or use this option when executing the stored procedure, and I think it will solve the issue.
To add this option in the SP's definition:
ALTER PROCEDURE myProc
WITH RECOMPILE
AS.......
Or, to add this option when executing your stored procedure, you can do it as follows:
EXECUTE myProc WITH RECOMPILE
Or you can use the system stored procedure sp_recompile to force SQL Server to compile a new execution plan even if there is one in the cache:
EXECUTE sp_recompile N'dbo.MyProc';
GO
EXECUTE dbo.MyProc;
GO

Do prepared SQL statements in a stored procedure make the performance better?

I know there have already been lots of questions about stored procedures vs. prepared SQL statements, but I want to find out something different: whether prepared statements inside a procedure contribute to the performance of that stored procedure, i.e. make it better.
I have this question because I was told the following points when reading introductions to these two techniques.
A stored procedure will store and compile your series of statements in the db, which will reduce the overhead of transferring & compiling.
Prepared statements will be compiled and cached in the db for multiple accesses, which leads to less overhead.
I am puzzled by these terms 'compile', 'store', and 'overhead' - they are a little abstract to me.
I use prepared statements to avoid re-parsing when a statement will be called frequently.
However, should I use prepared statements (to cache & compile) inside a procedure? Since my procedure has already been stored and compiled in the DB, preparing something inside it seems meaningless (compile what was already compiled?).
edit with sample code:
Create or Replace procedure MY_PROCEDURE
Begin
-- totally meaningless here?
declare sqlStmt varchar(300);
declare stmt statement;
set sqlStmt = 'update MY_TABLE set NY_COLUMN=? where NY_COLUMN=?';
prepare stmt from sqlStmt;
execute stmt using 2, 1;
execute stmt using 4, 3;
..............
END
Is the above one better than the one below, since it only parses the statement once? Or are they the same, because statements in a procedure will already have been pre-compiled?
Create or Replace procedure MY_PROCEDURE
Begin
update MY_TABLE set NY_COLUMN=2 where NY_COLUMN=1;
update MY_TABLE set NY_COLUMN=4 where NY_COLUMN=3;
..............
END
When you first run a stored procedure the database engine parses the procedure and works out the optimal query plan to use when executing it - it then stores this query plan so that every time you run the procedure it doesn't have to recalculate it.
You can see this yourself in Management Studio. If you CREATE or ALTER the stored procedure in question, then open a new query and use:
SET STATISTICS TIME ON
In that same query window run the stored procedure. In the messages tab of the result the first message will be something like:
SQL Server parse and compile time:
CPU time = 1038 ms, elapsed time = 1058 ms.
This is the overhead, execute the query again and you will see that the parse and compile time is now 0.
When you prepare a statement in code you get to take advantage of the same benefit. If your query is 'SELECT * FROM table WHERE column = ' + $var, each time you run it SQL Server has to parse it and calculate an execution plan. If you use a prepared statement, SELECT * FROM table WHERE column = ?, SQL Server will calculate the optimal execution plan the first time you run the prepared statement, and from then on it can reuse that plan just as with a stored procedure. The same goes if the statement you are executing is 'EXEC dbo.myProc @var = ' + $var; SQL Server would still have to parse this statement each time, so a prepared statement should still be used.
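To make the contrast concrete, here is a small T-SQL sketch (dbo.Orders and CustomerId are hypothetical names); the first form produces a different statement for every value and must be parsed and compiled each time, while the second is parameterized and its plan can be reused:
-- Ad hoc string building: a new plan for every distinct customer id.
DECLARE @sql nvarchar(200) =
    N'SELECT * FROM dbo.Orders WHERE CustomerId = ' + CAST(42 AS nvarchar(10));
EXEC (@sql);
-- Parameterized (prepared) form: one plan, reused for any customer id.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId',
    N'@CustomerId int',
    @CustomerId = 42;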
You do not need to prepare statements that you write inside stored procedures because they are already compiled as shown above - they are prepared statements in themselves.
One thing you should be aware of when using stored procedures and prepared statements is parameter sniffing.
SQL Server calculates and stores the execution plan for the first parameter values used; if you happen to execute the stored procedure with some unusual values on the first run, the stored execution plan may be completely suboptimal for the sorts of values you typically use.
If you find you can execute a stored procedure from Management Studio and it takes say 2 seconds to execute, but performing the same action in your application takes 20 seconds, it's probably as a result of parameter sniffing.
In DB2 actually the opposite may be true. Statements in an SQL routine are prepared when the routine is compiled. Dynamic SQL statements, as in your example, are prepared during the routine run time.
As a consequence, the preparation of dynamic statements will take into account the most current table and index statistics and other compilation environment settings, such as isolation level, while static statements will use the statistics that were in effect during the routine compilation or the latest bind.
If you want stable execution plans, use static SQL. If your statistics change frequently, you may want to use dynamic SQL (or make sure you rebind your routines' packages accordingly).
The same logic applies to Oracle PL/SQL routines, although the way to recompile static SQL differs -- you'll need to invalidate the corresponding routines.

Option Recompile makes query fast - good or bad?

I have two SQL queries with about 2-3 INNER JOINS each. I need to do an INTERSECT between them.
The problem is that individually the queries run fast, but after intersecting they take about 4 seconds in total.
Now, if I put an OPTION (RECOMPILE) at the end of the whole query, it works great again, returning almost instantly!
I understand that OPTION (RECOMPILE) forces a rebuild of the execution plan, so I am now confused whether my earlier query taking 4 seconds is better, or the one with the recompile hint that takes close to 0 seconds is better.
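For reference, the shape of the statement being described is roughly this (all table names are made up); the hint sits after the final SELECT and applies to the whole INTERSECT statement:
SELECT d.DocumentId
FROM dbo.Documents AS d
INNER JOIN dbo.Folders AS f ON f.FolderId = d.FolderId
INNER JOIN dbo.Authors AS a ON a.AuthorId = d.AuthorId
INTERSECT
SELECT t.DocumentId
FROM dbo.DocumentTags AS t
INNER JOIN dbo.Tags AS g ON g.TagId = t.TagId
OPTION (RECOMPILE);   -- recompiles the plan for this one statement on every execution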
Rather than answer the question you asked, here's what you should do:
Update your statistics:
EXEC sp_updatestats
If that doesn't work, rebuild indexes.
If that doesn't work, look at OPTIMIZE FOR; a sketch of that hint follows below.
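A minimal sketch of the OPTIMIZE FOR hint (the variable, table, and values are hypothetical):
DECLARE @CategoryId int = 7;
SELECT *
FROM dbo.Documents
WHERE CategoryId = @CategoryId
OPTION (OPTIMIZE FOR (@CategoryId = 1));   -- always compile the plan as if the value were 1
-- Alternatively, optimize for the statistical average rather than any specific value:
-- OPTION (OPTIMIZE FOR (@CategoryId UNKNOWN));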
When WITH RECOMPILE is specified, SQL Server does not cache a plan for the stored procedure; the stored procedure is recompiled each time it is executed.
Whenever a stored procedure is run in SQL Server for the first time, it is optimized and a query plan is compiled and cached in SQL Server's memory. Each time the same stored procedure is run after it is cached, it will use the same query plan, eliminating the need for the stored procedure to be optimized and compiled every time it is run. So if you need to run the same stored procedure 1,000 times a day, a lot of time and hardware resources can be saved, and SQL Server doesn't have to work as hard.
So in general you should not use this option, because by using it you lose most of that plan-reuse advantage, which is one of the main benefits of replacing ad hoc SQL queries with stored procedures.
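If you want to see how much plan reuse you would be giving up, here is a sketch (assuming SQL Server 2008 or later, where sys.dm_exec_procedure_stats is available):
-- Shows how often each cached procedure plan in the current database has been reused.
SELECT  OBJECT_NAME(ps.object_id) AS procedure_name,
        ps.cached_time,
        ps.execution_count
FROM    sys.dm_exec_procedure_stats AS ps
WHERE   ps.database_id = DB_ID()
ORDER BY ps.execution_count DESC;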

add SQL Server index but how to recompile only affected stored procedures?

I need to add an index to a table, and I want to recompile only/all the stored procedures that make reference to this table. Is there any quick and easy way?
EDIT:
from SQL Server 2005 Books Online, Recompiling Stored Procedures:
As a database is changed by such actions as adding indexes or changing data in indexed columns, the original query plans used to access its tables should be optimized again by recompiling them. This optimization happens automatically the first time a stored procedure is run after Microsoft SQL Server 2005 is restarted. It also occurs if an underlying table used by the stored procedure changes. But if a new index is added from which the stored procedure might benefit, optimization does not happen until the next time the stored procedure is run after Microsoft SQL Server is restarted. In this situation, it can be useful to force the stored procedure to recompile the next time it executes
Another reason to force a stored procedure to recompile is to counteract, when necessary, the "parameter sniffing" behavior of stored procedure compilation. When SQL Server executes stored procedures, any parameter values used by the procedure when it compiles are included as part of generating the query plan. If these values represent the typical ones with which the procedure is called subsequently, then the stored procedure benefits from the query plan each time it compiles and executes. If not, performance may suffer
You can execute sp_recompile and supply the table name you've just indexed. All procs that depend on that table will be flushed from the stored proc cache and will be recompiled the next time they are executed.
See this from the msdn docs:
sp_recompile (Transact-SQL)
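For example (the table name is hypothetical), after adding the index you could run:
-- Marks every stored procedure and trigger that references this table for
-- recompilation the next time each one is executed.
EXEC sp_recompile N'dbo.MyIndexedTable';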
They are generally recompiled automatically. I guess I don't know if this is guaranteed, but it has been what I have observed - if you change (e.g. add an index) the objects referenced by the sproc then it recompiles.
create table mytable (i int identity)
go
insert mytable default values
go 100
create proc sp1 as select * from mytable where i = 17
go
exec sp1
If you look at the plan for this execution, it shows a table scan as expected.
create index mytablei on mytable(i)
exec sp1
The plan has changed to an index seek.
EDIT: OK, I came up with a query that appears to work - it gives you all sproc names that have a reference to a given table in the plan cache. You can concatenate each sproc name with the sp_recompile syntax to generate a set of sp_recompile statements you can then execute.
;WITH XMLNAMESPACES (default 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
,TableRefs (SProcName, ReferencedTableName) as
(
select
object_name(qp.objectid) as SProcName,
objNodes.objNode.value('@Database', 'sysname') + '.' + objNodes.objNode.value('@Schema', 'sysname') + '.' + objNodes.objNode.value('@Table', 'sysname') as ReferencedTableName
from sys.dm_exec_cached_plans cp
outer apply sys.dm_exec_sql_text(cp.plan_handle) st
outer apply sys.dm_exec_query_plan(cp.plan_handle) as qp
outer apply qp.query_plan.nodes('//Object[@Table]') as objNodes(objNode)
where cp.cacheobjtype = 'Compiled Plan'
and cp.objtype = 'Proc'
)
select
*
from TableRefs
where SProcName is not null
and isnull(ReferencedTableName,'') = '[db].[schema].[table]'
I believe that the stored procedures that would potentially benefit from the presence of the index in question will automatically have a new query plan generated, provided the auto generate statistics option has been enabled.
See the section entitled Recompiling Execution Plans for details of what eventualities cause an automatic recompilation.
http://technet.microsoft.com/en-us/library/ms181055(SQL.90).aspx
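If you want to confirm that the automatic statistics options are enabled for your database, here is a quick sketch:
-- Database-level auto statistics settings for the current database.
SELECT  name,
        is_auto_create_stats_on,
        is_auto_update_stats_on
FROM    sys.databases
WHERE   name = DB_NAME();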