I have some procedures that are not optimized. I want to analyse whether these procedures get optimised or not, since there is not enough bulk data to check against.
I want to put a SQL query into each procedure that will insert into one table the exact time taken by the procedure's query to execute. I will create a table into which my procedures will insert their query execution times.
Is there any way I can do this?
If you want to integrate it into your procedure, set a variable that stores the execution start time at the beginning, and subtract it from the current time at the end of the procedure.
/* Start of your procedure */
DECLARE @startTime datetime = GETDATE()
DECLARE @duration varchar(8)
/* Your procedure */
...
/* End of your procedure */
SET @duration = CONVERT(VARCHAR(8), GETDATE() - @startTime, 108)
INSERT INTO statisticTable
VALUES ('procedureName', @duration)
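For reference, a minimal sketch of the logging table the INSERT above assumes (the name statisticTable comes from the snippet; the column definitions are illustrative):
CREATE TABLE statisticTable (
    procedureName varchar(128) NOT NULL,
    duration      varchar(8)   NOT NULL  -- hh:mm:ss, as produced by style 108
);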
Did you try SQL Profiler?
In Trace Properties you can define the Start Time and End Time of stored procedures. It is a very powerful tool.
You can check the stats mentioned above using Dynamic Management Views.
If you are using an older version of SQL Server, use SQL Server Profiler.
Why would you want to do that? There are plenty of tools for this: DMVs, traces, Extended Events, etc. You will add overhead to your proc if you put that kind of code in it. For procedures the best option is the sys.dm_exec_procedure_stats DMV, but it is available in SQL Server 2008 and above only; use sys.dm_exec_query_stats for older versions. These will give you much better stats about elapsed time, logical IOs, CPU time, physical IOs, and so on.
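A sketch of such a query against sys.dm_exec_procedure_stats (the time columns are in microseconds; the TOP clause and ordering are just one way to slice it):
SELECT TOP (20)
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.execution_count,
    ps.total_elapsed_time / ps.execution_count AS avg_elapsed_us,
    ps.total_worker_time / ps.execution_count AS avg_cpu_us,
    ps.total_logical_reads / ps.execution_count AS avg_logical_reads,
    ps.total_physical_reads / ps.execution_count AS avg_physical_reads
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY avg_elapsed_us DESC;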
Should we end stored procedures with a GO statement? If so, what are the advantages of using GO?
CREATE PROCEDURE uspGetAddress @City nvarchar(30)
AS
SELECT *
FROM AdventureWorks.Person.Address
WHERE City = @City
GO
The statement GO, per the documentation:
Signals the end of a batch of Transact-SQL statements to the SQL Server utilities.
...
GO is not a Transact-SQL statement; it is a command recognized by the sqlcmd and osql
utilities and SQL Server Management Studio Code editor.
SQL Server utilities interpret GO as a signal that they should send the current batch
of Transact-SQL statements to an instance of SQL Server. The current batch of statements
is composed of all statements entered since the last GO, or since the start of the
ad-hoc session or script if this is the first GO.
A Transact-SQL statement cannot occupy the same line as a GO command. However, the line
can contain comments.
Users must follow the rules for batches. For example, any execution of a stored procedure
after the first statement in a batch must include the EXECUTE keyword. The scope of
local (user-defined) variables is limited to a batch, and cannot be referenced after a
GO command.
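For instance, this script fails on the last line, because the variable does not survive the batch boundary that GO introduces:
DECLARE @x int;
SET @x = 1;
GO  -- ends the batch; @x goes out of scope here
PRINT @x;  -- error: Must declare the scalar variable "@x".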
A stored procedure definition, per the documentation for CREATE PROCEDURE, comes with restrictions. It must be the first (and only) statement in the batch:
The CREATE PROCEDURE statement cannot be combined with other Transact-SQL statements in
a single batch.
That means the body of a stored procedure ends with the batch. Adding GO in your source file is good practice, especially since it's common to do things prior to and following the creation of a stored procedure. You'll often see source files that look something like this:
if (object_id('dbo.foobar') is not null ) drop procedure dbo.foobar
GO
-- dbo.foobar --------------------------------------------
--
-- This stored procedure does amazing and wonderful things
----------------------------------------------------------
create procedure dbo.foobar
as
...
{a sequence of amazing and wonderful SQL statements}
...
return 0
GO
grant execute on dbo.foobar to some_schema
GO
And the batch separator GO is adjustable in SQL Server Management Studio's options. If you'd like to use something like jump instead of go, you can (bearing in mind that you're almost certainly going to give yourself grief in doing so).
No, you should end your procedure with RETURN.
CREATE PROCEDURE uspGetAddress @City nvarchar(30)
AS
SELECT *
FROM AdventureWorks.Person.Address
WHERE City = @City
RETURN
GO is really meant to separate commands in a SQL script.
Just wanted to point out that without a GO at the end of your stored procedure, any T-SQL after the supposed end of the procedure body will still be included in the body of the proc.
For example
CREATE PROCEDURE Foo
AS
BEGIN
    SELECT * FROM dbo.Bar;
END
DROP TABLE dbo.Bar;
In this example, running EXEC dbo.Foo will end up dropping the table even though it is after the END. To avoid that, you need to place a GO after the END.
I prefer to surround the body of the stored procedure with begin and end statements:
CREATE PROCEDURE uspGetAddress (
    @City nvarchar(30)
) AS
BEGIN
    SELECT *
    FROM AdventureWorks.Person.Address
    WHERE City = @City;
END;
GO is not a T-SQL command. It is understood by the tools that run scripts. As the documentation describes:
GO is not a Transact-SQL statement; it is a command recognized by the
sqlcmd and osql utilities and SQL Server Management Studio Code
editor.
SQL Server utilities interpret GO as a signal that they should send
the current batch of Transact-SQL statements to an instance of SQL
Server. The current batch of statements is composed of all statements
entered since the last GO, or since the start of the ad hoc session or
script if this is the first GO.
By the way, in your case, a user-defined table function might be more appropriate than a stored procedure.
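A sketch of what that might look like for the example above, written as an inline table-valued function (the function name dbo.ufnGetAddress is made up):
CREATE FUNCTION dbo.ufnGetAddress (@City nvarchar(30))
RETURNS TABLE
AS
RETURN (
    SELECT *
    FROM AdventureWorks.Person.Address
    WHERE City = @City
);
GO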
I know there have already been lots of questions about stored procedures vs. prepared SQL statements, but I want to find out something different: whether prepared statements inside a procedure contribute to the performance of that stored procedure, i.e. make it better.
I had this question because I was told the following points when reading introductions to these two techniques.
Stored procedures will store and compile your series of statements in
the db, which will reduce the overhead of transferring & compiling.
Prepared statements will be compiled and cached in the db for multiple
accesses, which leads to less overhead.
I am puzzled by these terms 'compile', 'store', and 'overhead' - they are a little abstract.
I use prepared statements to avoid re-parsing when a statement will be called frequently.
However, should I use prepared statements (to cache & compile) inside a procedure? Since my procedure will already have been stored and compiled in the DB, preparing something inside it seems meaningless. (Compile what was already compiled?)
Edit, with sample code:
CREATE OR REPLACE PROCEDURE MY_PROCEDURE
BEGIN
    -- totally meaningless here?
    DECLARE sqlStmt VARCHAR(300);
    DECLARE stmt STATEMENT;
    SET sqlStmt = 'update MY_TABLE set NY_COLUMN=? where NY_COLUMN=?';
    PREPARE stmt FROM sqlStmt;
    EXECUTE stmt USING 2, 1;
    EXECUTE stmt USING 4, 3;
    ..............
END
Is the above one better than the one below, since it only parses the statement once? Or are they the same, because statements in a procedure will have been pre-compiled?
CREATE OR REPLACE PROCEDURE MY_PROCEDURE
BEGIN
    update MY_TABLE set NY_COLUMN=2 where NY_COLUMN=1;
    update MY_TABLE set NY_COLUMN=4 where NY_COLUMN=3;
    ..............
END
When you first run a stored procedure, the database engine parses it and works out the optimal query plan to use when executing it; it then stores that plan so that it doesn't have to be recalculated on every subsequent run.
You can see this yourself in Management Studio. CREATE or ALTER the stored procedure in question, then open a new query and use:
SET STATISTICS TIME ON
In that same query window, run the stored procedure. In the Messages tab of the results, the first message will be something like:
SQL Server parse and compile time:
CPU time = 1038 ms, elapsed time = 1058 ms.
This is the overhead; execute the query again and you will see that the parse and compile time is now 0.
When you prepare a statement in code you get to take advantage of the same benefit. If your query is 'SELECT * FROM table WHERE col = ' + $var, each time you run it SQL Server has to parse it and calculate an execution plan. If you use a prepared statement, SELECT * FROM table WHERE col = ?, SQL Server calculates the optimal execution plan the first time the prepared statement runs, and from then on it can reuse that plan just as with a stored procedure. The same goes if the statement you are executing is 'EXEC dbo.myProc @var = ' + $var: SQL Server would still have to parse this statement each time, so a prepared statement should still be used.
You do not need to prepare statements that you write inside stored procedures because they are already compiled as shown above - they are prepared statements in themselves.
One thing you should be aware of when using stored procedures and prepared statements is parameter sniffing.
SQL Server calculates and stores the optimal execution plan for the first parameter values used. If you happen to execute the stored procedure with some unusual values on the first run, the stored plan may be completely suboptimal for the sorts of values you typically use.
If you find you can execute a stored procedure from Management Studio and it takes, say, 2 seconds, but performing the same action from your application takes 20 seconds, it's probably a result of parameter sniffing.
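One common mitigation, sketched below under illustrative table and parameter names, is to request a fresh plan per execution with OPTION (RECOMPILE), trading compile time for a plan suited to the current values:
SELECT *
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);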
In DB2, the opposite may actually be true. Statements in an SQL routine are prepared when the routine is compiled; dynamic SQL statements, as in your example, are prepared at routine run time.
As a consequence, the preparation of dynamic statements will take into account the most current table and index statistics and other compilation environment settings, such as isolation level, while static statements will use the statistics that were in effect during the routine compilation or the latest bind.
If you want stable execution plans, use static SQL. If your statistics change frequently, you may want to use dynamic SQL (or make sure you rebind your routines' packages accordingly).
The same logic applies to Oracle PL/SQL routines, although the way to recompile static SQL differs -- you'll need to invalidate the corresponding routines.
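For example, on DB2 for LUW a routine's package can be rebound with the built-in SYSPROC.REBIND_ROUTINE_PACKAGE procedure (a sketch; the routine name is taken from the question's example):
-- 'P' = procedure; rebinding lets its static SQL pick up current statistics
CALL SYSPROC.REBIND_ROUTINE_PACKAGE('P', 'MY_PROCEDURE', 'ANY');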
I want to get the transaction execution time in SQL Server 2008 R2. I want to get this time programmatically and save it in a table. How can I do it?
Save the time before and after the query and calculate the difference, something like this:
DECLARE @start_time DATETIME, @end_time DATETIME
SET @start_time = CURRENT_TIMESTAMP
-- query goes here
SET @end_time = CURRENT_TIMESTAMP
SELECT DATEDIFF(ms, @start_time, @end_time)
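Since you want to save the result in a table, here is a minimal sketch of doing that instead of just selecting the difference (the table name and columns are illustrative):
CREATE TABLE dbo.QueryTimings (
    description varchar(100),
    duration_ms int,
    measured_at datetime DEFAULT GETDATE()
)

INSERT INTO dbo.QueryTimings (description, duration_ms)
VALUES ('my transaction', DATEDIFF(ms, @start_time, @end_time))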
Try executing "SET STATISTICS TIME ON;" on your connection before executing your transaction.
JohnC is right, and that is probably what you want.
But depending on your exact needs, there are other options. For one, SSMS shows the elapsed time, at least to the second, in the lower right corner when you execute a query.
And of course you can use GETDATE() to get the time from SQL Server immediately before and immediately after the execution and find the difference.
For repeated testing of large numbers of queries, you could also build a test harness in some other language, like Python with a library like timeit.
I have a stored procedure that takes a lot of time to execute. We want it to execute during the night, but it has to check for the current time every now and then, and at a given time it has to stop executing. How do I do that? Please provide me with the code I can use in my stored procedure. We are using Microsoft SQL Server 2005.
You can get the current date:
SELECT GETDATE()
Stop executing:
IF GETDATE() > @date
RETURN -- Exits the procedure
Where @date is the date/time at which you want to stop executing.
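Since the procedure only gets a chance to look at the clock between statements, the check usually belongs inside the processing loop. A minimal sketch, assuming the work is done in batches (dbo.WorkQueue and the cutoff value are illustrative):
DECLARE @stopAt datetime
SET @stopAt = '20240115 06:00'  -- the time at which to stop

WHILE EXISTS (SELECT 1 FROM dbo.WorkQueue WHERE Processed = 0)
BEGIN
    IF GETDATE() > @stopAt
        RETURN  -- past the cutoff: exit cleanly between batches

    -- process one batch of work here
END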
Create a Maintenance Plan and specify a start time for it. Create an "Execute T-SQL Statement Task" in this plan and specify an execution timeout for it (in seconds).
Why would you want to exit a proc that has not finished? Wouldn't you be leaving things in a bad state as far as data integrity goes, or rolling back all the work you just did?
Wouldn't it be a better solution to try to improve the performance of the proc?
I have a really large stored procedure which calls other stored procedures and puts the results into temp tables.
I am debugging in SQL 2008 Management Studio and can use the Watch window to inspect local parameters, but how can I query a temp table whilst debugging?
If it's not possible, is there an alternative approach? I have read about using table variables instead; would it be possible to query these? If so, how would I do this?
Use global temporary tables, i.e. ones with a double hash.
insert into ##temp select ...
While debugging, you can pause the SP at some point, and in another query window, the ## table is available for querying.
select * from ##temp
Single-hash tables (#tmp) are session specific and are only visible from within the session that created them.
I built a procedure which will display the contents of a temp table from another database connection (which is not possible with normal queries).
Note that it uses DBCC PAGE & the default trace to access the data, so only use it for debugging purposes.
An alternative would be to use a variable in your stored proc that allows for debugging on the fly.
I use a variable called @debug_Out (BIT).
It works something like this:
ALTER PROCEDURE [dbo].[usp_someProc]
    @some_Var VARCHAR(15) = 'AUTO',
    @debug_Out BIT = 0
AS
BEGIN
    IF @debug_Out = 1
    BEGIN
        PRINT('THIS IS MY TABLE');
        SELECT * FROM dbo.myTable;
    END
    ...
END
The great thing about doing this is that when your code launches the stored procedure, the default is to show none of these debug sections. When you want to debug, you just pass in the debug variable:
EXEC usp_someProc @debug_Out = 1
Simply don't drop the temp table or close the transaction, e.g.:
select * into #temp from myTable
select * from #temp