My stored procedure performs a lot of insert/update queries and calls a few nested stored procedures. I want to switch the execution plan off for those statements, and then switch it back on so I can profile only the queries I am interested in.
For example:
-- switch off execution plan here
INSERT INTO FOO ...
UPDATE FOO ...
EXEC usp_FOO
-- switch execution plan back on here, then start collecting performance stats
SELECT * FROM FOO
SQL Server Management Studio has an "Include Actual Execution Plan" option for performance tracing and debugging, but if the procedure runs more than about 100 queries, the execution plan output exceeds its limit and stops working. So I believe there should be some switch like SET EXECUTION PLAN OFF, or something along those lines.
I recommend reading Capture execution plans with SQL Server 2005 Profiler. Using profiler, you can generate execution plans for every single query that is run in your stored procedure without worrying about the output limitations of SQL Server Management Studio.
You can use the RECOMPILE query hint (which tells SQL Server to ignore the cached plan) on individual INSERT, UPDATE, DELETE and MERGE statements -- definition and samples here: http://msdn.microsoft.com/en-us/library/ms181714.aspx
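As a sketch of what that hint looks like in practice (the table, column, and parameter names here are made up for illustration):

```sql
-- Ignore any cached plan and compile a fresh one for this statement only.
-- dbo.FOO, bar and @id are hypothetical names; substitute your own objects.
UPDATE dbo.FOO
SET    bar = bar + 1
WHERE  id = @id
OPTION (RECOMPILE);
```

The hint applies only to the statement it is attached to, so the rest of the procedure keeps using its cached plan.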
Related
I have a stored procedure that runs every 5 minutes as a job on SQL Server. About 80% of the time the job finds nothing to process, which is expected, but when it does have data to process it is a very long-running process.
The code, simplified, looks like this:
IF EXISTS (SELECT TOP 1 col1 FROM tbl1 WHERE processed = '0')
BEGIN
    -- HUGE PROCESS with multiple selects, joins and updates
END
How will the execution plan evaluate this SP? Is this one of the rare cases where using WITH RECOMPILE is the best option?
If you use the Include Actual Execution Plan option in SQL Server Management Studio, you will see that when the IF expression evaluates to false, the operators in its body are not included in the execution plan.
So there is no need to worry - the SQL engine will use the correct execution plan and will not touch the data.
The recompile option can be helpful in particular queries but I believe you can skip it for now.
In SQL Server 2008, if a stored procedure is created before its indexes are created, will the stored procedure use those indexes after they have been created?
The short answer is yes, it will. Stored procedures can even exist before the tables that they use exist.
A longer answer means you need to know about execution plans and the plan cache that SQL Server keeps. When a procedure is run, the plan for it (which can include the indexes to use) is cached and kept for a period of time. So it's possible that the index will get used immediately or after the current execution plan has expired from the cache.
Take a look at Execution plan basics for more info.
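If you don't want to wait for the cached plan to age out of the cache, you can invalidate it explicitly so the next execution compiles a fresh plan that can consider the new index. A minimal sketch (the procedure name here is hypothetical):

```sql
-- Mark the procedure for recompilation; its cached plan is discarded
-- and a new one is built on the next execution.
EXEC sp_recompile N'dbo.MyProc';
```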
I'm working on optimizing a fairly complex stored procedure. I'm just wondering whether what I'm doing to track the improvements is a good way of doing it.
I run DBCC FREEPROCCACHE, and I have Include Client Statistics turned on in SQL Server Management Studio.
I look at Total execution time on the Client Statistics tab to determine whether my changes are making my stored procedure faster.
Is this a good way of measuring improvements in a stored procedure? Or should I be looking at other areas?
One way to see how long a query took to execute is the elapsed-time counter in the status bar at the bottom of the SSMS query window; in the example shown it read 3 seconds.
If you want to see the performance of each individual query, turn on Client Statistics and the Actual Execution Plan. Client Statistics is enabled from the Query menu (Include Client Statistics), and the execution plan from Query > Include Actual Execution Plan (Ctrl+M). Each adds an extra tab of results alongside the query output.
You can also try using:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
They will show you the time and I/O required by each and every statement. Don't forget to turn them off when you're done (SET STATISTICS TIME OFF; SET STATISTICS IO OFF;).
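A minimal sketch of how these options wrap a statement under test (the table name is hypothetical):

```sql
-- Measure CPU/elapsed time and logical reads for one statement.
-- dbo.FOO is a made-up table name; substitute your own.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT COUNT(*) FROM dbo.FOO;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- The Messages tab will then show lines along these lines:
--   Table 'FOO'. Scan count 1, logical reads ...
--   SQL Server Execution Times: CPU time = ... ms, elapsed time = ... ms.
```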
Make sure that every time you test a new query you clear the caches, so that data and plans left over from the previous run don't skew your new test. To clear them, execute this code (on a test server only - both DBCC commands affect every session on the instance):
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS; -- clears the data (buffer) cache
GO
DBCC FREEPROCCACHE; -- clears the execution plan cache
GO
I know there have already been lots of questions about stored procedures vs. prepared SQL statements, but I want to find out something different: do prepared statements inside a procedure contribute to the performance of that stored procedure, i.e. make it better?
I have this question because I was told the following points while reading introductions to these two techniques:
Stored procedures store and compile your series of statements in the db, which reduces the overhead of transferring and compiling.
Prepared statements are compiled and cached in the db for multiple accesses, which leads to less overhead.
I am puzzled by these terms 'compile', 'store', and 'overhead' - they're a little abstract.
I use prepared statements to avoid re-parsing when a statement will be called frequently.
However, should I use prepared statements (to cache & compile) inside a procedure? Since my procedure will already have been stored and compiled in the DB, preparing something inside it seems meaningless (compile what was already compiled?).
Edit, with sample code:
CREATE OR REPLACE PROCEDURE MY_PROCEDURE
BEGIN
    -- totally meaningless here?
    DECLARE sqlStmt VARCHAR(300);
    DECLARE stmt STATEMENT;
    SET sqlStmt = 'UPDATE MY_TABLE SET NY_COLUMN = ? WHERE NY_COLUMN = ?';
    PREPARE stmt FROM sqlStmt;
    EXECUTE stmt USING 2, 1;
    EXECUTE stmt USING 4, 3;
    ..............
END
Is the above one better than the one below, since it only parses the statement once? Or are they the same, because statements in a procedure will have been pre-compiled?
CREATE OR REPLACE PROCEDURE MY_PROCEDURE
BEGIN
    UPDATE MY_TABLE SET NY_COLUMN = 2 WHERE NY_COLUMN = 1;
    UPDATE MY_TABLE SET NY_COLUMN = 4 WHERE NY_COLUMN = 3;
    ..............
END
When you first run a stored procedure, the database engine parses it and works out the optimal query plan to use when executing it; it then stores that plan so it doesn't have to be recalculated every time you run the procedure.
You can see this yourself in Management Studio. If you CREATE or ALTER the stored procedure in question, then open a new query and use:
SET STATISTICS TIME ON
In that same query window run the stored procedure. In the messages tab of the result the first message will be something like:
SQL Server parse and compile time:
CPU time = 1038 ms, elapsed time = 1058 ms.
This is the overhead. Execute the query again and you will see that the parse and compile time is now 0.
When you prepare a statement in code you get the same benefit. If your query is 'SELECT * FROM table WHERE col = ' + $var, then each time you run it SQL Server has to parse it and calculate an execution plan. If you use a prepared statement, SELECT * FROM table WHERE col = ?, SQL Server calculates the optimal execution plan the first time you run it, and from then on it can reuse that plan just as with a stored procedure. The same goes if the statement you are executing is 'EXEC dbo.myProc @var = ' + $var: SQL Server would still have to parse this statement each time, so a prepared statement should be used there too.
You do not need to prepare statements that you write inside stored procedures because they are already compiled as shown above - they are prepared statements in themselves.
One thing you should be aware of when using stored procedures and prepared statements is parameter sniffing.
SQL Server calculates and stores the optimal execution plan for the first parameter values used; if you happen to execute the stored procedure with some unusual values on the first run, the stored plan may be completely suboptimal for the values you typically use.
If you find you can execute a stored procedure from Management Studio and it takes, say, 2 seconds, but performing the same action from your application takes 20 seconds, it's probably a result of parameter sniffing.
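Two common mitigations can be sketched as query hints (the table and parameter names below are hypothetical):

```sql
-- 1. Recompile on every execution, so the plan always fits the
--    current parameter values (costs a compile each time):
SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- 2. Build the plan for an "average" value from the statistics
--    instead of the sniffed one (SQL Server 2008 and later):
SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);
```

Which one is appropriate depends on how skewed your data is and how often the statement runs.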
In DB2 actually the opposite may be true. Statements in an SQL routine are prepared when the routine is compiled. Dynamic SQL statements, as in your example, are prepared during the routine run time.
As a consequence, the preparation of dynamic statements will take into account the most current table and index statistics and other compilation environment settings, such as isolation level, while static statements will use the statistics that were in effect during the routine compilation or the latest bind.
If you want stable execution plans, use static SQL. If your statistics change frequently, you may want to use dynamic SQL (or make sure you rebind your routines' packages accordingly).
The same logic applies to Oracle PL/SQL routines, although the way to recompile static SQL differs -- you'll need to invalidate the corresponding routines.
I have been breaking my head over this issue for a long time. I have a stored procedure in MS SQL, and when I execute it with all the parameters supplied in a SQL query it takes a long time, but when I run the query contained in the SP directly it executes in no time. This is also affecting my application's performance, as we use stored procedures to fetch data from the DB server.
Please help.
Regards,
Vikram
Looks like parameter sniffing.
Here is a nice explanation: I Smell a Parameter!
Basically, SQL Server has cached the query execution plan for the parameter values the procedure was first run with, so that plan may not be optimal for the new values you are passing. When you run the query directly, the plan is generated at that moment, which is why it's fast.
You can mark the procedure for recompilation manually using sp_recompile, or use the WITH RECOMPILE option in its definition so it is compiled on every run.
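Both options can be sketched as follows (the procedure, table, and parameter names are hypothetical; CREATE OR ALTER requires SQL Server 2016 or later, use ALTER PROCEDURE on older versions):

```sql
-- Option 1: invalidate the cached plan once; the next execution
-- compiles a fresh one:
EXEC sp_recompile N'dbo.usp_GetData';

-- Option 2: recompile on every execution:
CREATE OR ALTER PROCEDURE dbo.usp_GetData
    @CustomerId INT
WITH RECOMPILE
AS
BEGIN
    SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId;
END
```

Option 1 is a one-off fix; option 2 trades a compile on every call for a plan that always matches the current parameter values.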