SQL Server stored procedure performance testing

I have a task to test stored procedure performance in SQL Server. My goal is to report the average and standard deviation of the stored procedure's execution time to the stakeholders. Realistic data input is a must here :)
My question: as I was trying to stage the test realistically, I created a simple script that is supposed to measure the time it takes to execute a stored procedure:
DECLARE @ValidCharacters varchar(30),
        @DataLength tinyint, @LocalPart smallint, @DomainPart smallint
SET @ValidCharacters = 'abcdefghijklmnopqrstuvwxy'
SET @DataLength = DATALENGTH(@ValidCharacters) - 1

CREATE TABLE #LocalTempTable (EmailID int PRIMARY KEY IDENTITY(1,1), email varchar(30));
CREATE TABLE #LocalTempTableTimesOfInserting (TimesOfInsertingID int PRIMARY KEY IDENTITY(1,1), TimesOfInserting int);

-- datetime2 variables so SYSDATETIME() precision is not lost in the DATEDIFF
DECLARE @counter int, @boundary int, @email varchar(25), @start datetime2, @end datetime2
SET @counter = 0
SET @boundary = 25

WHILE (@counter < @boundary)
BEGIN
    DBCC FREEPROCCACHE;
    DBCC DROPCLEANBUFFERS;

    -- build a pseudo-random e-mail address from the character pool
    SET @email = SUBSTRING(@ValidCharacters, ABS(CHECKSUM(NEWID())) % @DataLength + 1, ABS(CHECKSUM(NEWID())) % @DataLength + 1) +
        '@' + SUBSTRING(@ValidCharacters, ABS(CHECKSUM(NEWID())) % @DataLength + 1, ABS(CHECKSUM(NEWID())) % @DataLength + 1) + '.com'

    SET @start = SYSDATETIME()
    INSERT INTO #LocalTempTable VALUES (@email);
    SET @end = SYSDATETIME()

    INSERT INTO #LocalTempTableTimesOfInserting
    VALUES (DATEDIFF(ns, @start, @end));

    SELECT DATEDIFF(ms, @start, @end)

    SET @counter = @counter + 1;
END
As you can see, I'm doing a micro-benchmark on an insert and recording the results to a local temporary table (my idea was to export it later to Excel, do my calculations there, and share them with colleagues) :)
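By the way, I know I could also compute the average and standard deviation directly in T-SQL instead of Excel; a rough sketch against the timing table above would look something like this, but Excel makes it easier to share the raw numbers:
-- Summary statistics over the collected timings (AVG on int truncates, hence the cast)
SELECT COUNT(*) AS Runs,
       AVG(CAST(TimesOfInserting AS float)) AS AvgNs,
       STDEV(TimesOfInserting) AS StDevNs,
       MIN(TimesOfInserting) AS MinNs,
       MAX(TimesOfInserting) AS MaxNs
FROM #LocalTempTableTimesOfInserting;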
My questions:
Do you have any advice on how to improve the performance test? The real stored procedure is much heavier than the one in the example (I have read many posts and tried tools like SQLQueryStress, but I'm really interested in doing the test this way, mainly because of the interesting questions it raises);
Why do I get useless results in this case (measuring in nanoseconds)? Maybe, as the operations become this simple and fast, such fluctuating, unstable results are to be expected? Or maybe it is SQL Server returning a cached result (even though I'm running the DBCC commands; how do I turn caching off then)? Another explanation could be threading and parallel execution (different threads executing the time functions in parallel, which would explain the zeros).

To answer your second question: like you said, it's probably the result of query execution plan caching.
Execution Plan Caching and Reuse
When you run the sp manually and it executes quickly, try executing another query and then running your sp test again. If it runs more slowly the second time around, the fast times were probably due to caching.
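If you want to confirm whether a plan really is being cached and reused, one rough sketch is to query the plan-cache DMVs and look at the use count (adjust the LIKE filter to match your own statement):
-- usecounts > 1 means the cached plan was reused rather than recompiled
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%LocalTempTable%' -- placeholder filter; will also match this query itself
ORDER BY cp.usecounts DESC;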

Related

SQL query performance, unexpected results - SQL Server

I wrote a simple loop in SQL Server to measure the performance of my SQL query. The timings I get from that code are as expected about 95% of the time, very similar to each other, but sometimes a run can be 3-4 times slower than normal. There is no other process accessing the database at the same time.
declare @tTOTAL int = 0
declare @i integer = 0
declare @itrs integer = 100

while @i < @itrs
begin
    CHECKPOINT; DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE;

    declare @t0 datetime2 = GETDATE()

    -- Here goes the query to measure its performance
    SELECT COUNT(*) FROM [Comments] AS [c] WHERE [c].[Score] > 100

    declare @t1 datetime2 = GETDATE()

    set @tTotal = DATEDIFF(MILLISECOND, @t0, @t1)
    select @tTotal as TimeT

    set @i = @i + 1
end
If that is not the right way to measure query performance, can you recommend a tool or a simple way to take measurements like this? (I need the result of every single measurement, not only averages ...)
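One simple alternative worth mentioning (a sketch, not a specific tool recommendation) is to let SQL Server report the per-execution timings itself; each run then prints its own CPU and elapsed time in the Messages tab:
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM [Comments] AS [c] WHERE [c].[Score] > 100;
SET STATISTICS TIME OFF;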

Adapt SQL query in a while loop

I have been spending a fair amount of time researching a way to adapt an SQL query while in a loop, in order to bring back data from multiple tables.
The one method I came across that makes this possible is executing the query from a string ('loadstring'), so you can adapt the query each time the loop runs (as explained via this link: https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-executesql-transact-sql ).
To be more specific, I am attempting to run a rather large query which spans multiple databases - however, each database has a branch identifier, such as A, B, C, D, E etc. Each time I execute the query I am using joins to reach all the databases I need from A. To make this work as written, I would have to copy and paste the entire 500-line query over 5 times to cover every branch.
The method using loadstring would end up being similar to this:
DECLARE @process varchar(max) = 'select * from Vis_' + Branch[i] + '_Quotes'; EXEC(@process)
Is there a better method to adapt your query while it is running?
Here is one example of how this might be used. It's not clear if this fits your requirements, but it appears that dynamic SQL is new to you so I've supplied an example that includes both looping and passing in parameters safely. This is untested, but hopefully should get you on the right track.
This assumes you have an existing table of branches with the corresponding branch codes (ideal, as then the script doesn't need updating when adding/disabling/removing a branch). If you don't, then you could always create a table variable and insert branches at the top of your script:
declare @sql nvarchar(max) = '',
    @BranchCode nvarchar(10) = '',
    @param1 int,
    @param2 nvarchar(10);

while 1=1 begin
    set @BranchCode =
        (select top 1 Code from Branch where Active = 1 and Code > @BranchCode order by Code)
    if @BranchCode is null break;

    set @sql = @sql + 'select * from Vis_' + @BranchCode + '_Quotes
        where col1 = @param1 and col2 like @param2
        ' -- notice extra linebreak (or space) added to separate each query
end

exec sp_executesql @sql,
    N'@param1 int, @param2 nvarchar(10), ...', -- parameter definitions
    @param1, @param2, ... -- any additional parameters you need to safely pass in
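As a side note: if you don't have a Branch table as assumed above, a table variable declared at the top of the script can stand in for it. A rough sketch (the codes shown are just placeholders); the loop would then select from @Branch instead of Branch:
-- ad-hoc list of branch codes when no Branch table exists
declare @Branch table (Code nvarchar(10) primary key, Active bit not null default 1);
insert into @Branch (Code) values (N'A'), (N'B'), (N'C'), (N'D'), (N'E');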

Can you save and insert SQL Query Messages in a table?

I have a feeling this is an extremely newbie question, but it's hard to find the answer, as anything to do with logging points me to SQL errors and issues, or else to answers about querying the entire log and sifting through it.
When I insert data into an existing table via T-SQL, how can I save or reference the query message for that specific statement? That way I can take the message and insert the result into a log table that records how many rows were inserted, maybe the duration it took, etc.
I'm using SQL Server 2008 R2 and these SQL statements are stored procedures inserting data and updating data. I want to ensure every step of the process is logged and inserted into a specific log table with details about that step of the process.
Thanks for your help on this (I'm assuming) newbie question. I'm still learning MSSQL.
DECLARE @dt DATETIME2(7), @duration INT, @rowcount INT;
SET @dt = SYSDATETIME();
INSERT dbo.foo(bar) VALUES('x');
SELECT @rowcount = @@ROWCOUNT, @duration = DATEDIFF(MICROSECOND, @dt, SYSDATETIME());
INSERT dbo.LoggingTable(duration, row_count) SELECT @duration, @rowcount;
In 2005 or lower, you can't get quite that precise, e.g.
DECLARE @dt DATETIME, ...
SET @dt = GETDATE();
...
... , @duration = DATEDIFF(MILLISECOND, @dt, GETDATE());
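To cover the "every step of the process" requirement, the same pattern can be repeated per statement, tagging each row with a step name. A sketch, assuming a hypothetical dbo.ProcessLog table (adjust to your own schema):
-- hypothetical log table:
-- CREATE TABLE dbo.ProcessLog (step_name varchar(100), row_count int, duration_ms int, logged_at datetime2 DEFAULT SYSDATETIME());
DECLARE @t0 DATETIME2(7), @rows INT;

SET @t0 = SYSDATETIME();
UPDATE dbo.foo SET bar = 'y' WHERE bar = 'x';  -- one step of the procedure
SELECT @rows = @@ROWCOUNT;

INSERT dbo.ProcessLog (step_name, row_count, duration_ms)
VALUES ('update foo', @rows, DATEDIFF(MILLISECOND, @t0, SYSDATETIME()));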

Different Parameter Value Results In Slow Query

I have a sproc in SQL Server 2008. It basically builds a string and then runs the query using EXEC():
SELECT * FROM [dbo].[StaffRequestExtInfo] WITH(NOLOCK, READUNCOMMITTED)
WHERE [NoteDt] < @EndDt
AND [NoteTypeCode] = @RequestTypeO
AND ([FNoteDt] >= @StartDt AND [FNoteDt] <= @EndDt)
AND [FStaffID] = @StaffID
AND [FNoteTypeCode] <> @RequestTypeC
ORDER BY [LocName] ASC, [NoteID] ASC, [CNoteDt] ASC
All but @RequestTypeO and @RequestTypeC are passed in as sproc parameters; those two are built from a parameter into local variables. Normally, the query runs in under one second. However, for one particular value of @StaffID, the execution plan is different and about 30x slower. In either case, the amount of data returned is generally the same, but execution time goes way up.
I tried to recompile the sproc. I also tried to "copy" @StaffID into a local @LocalStaffID. Neither approach made any difference.
Any ideas?
UPDATE: Tried to drop specific plans using:
DECLARE @ph VARBINARY(64), @pt VARCHAR(128), @sql VARCHAR(1024)
DECLARE cur CURSOR FAST_FORWARD FOR
    SELECT p.plan_handle
    FROM sys.dm_exec_cached_plans p
    CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
    WHERE t.text LIKE N'%cms_selectStaffRequests%'
OPEN cur
FETCH NEXT FROM cur INTO @ph
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @pt = master.dbo.fn_varbintohexstr(@ph)
    PRINT 'DBCC FREEPROCCACHE(' + @pt + ')'
    SET @sql = 'DBCC FREEPROCCACHE(' + @pt + ')'
    EXEC(@sql)
    FETCH NEXT FROM cur INTO @ph
END
CLOSE cur
DEALLOCATE cur
Either the wrong plans were dropped, or the same plans ended up being recreated, but it had no effect.
Check the distribution/frequency/cardinality of the values in column FStaffID, and review your indexes. It may be that you have one staff member doing 50% of the work (probably the DBA :) and that may change how the optimizer chooses which indexes to use and how the data is read.
Alternatively, the execution plan generated by the dynamic code may be being saved and re-used, resulting in a poorly performing query (like HLGEM says). I'm not up on the details, but SQL 2008 has more ways to confuse you while doing this than its predecessors.
Doing an UPDATE STATISTICS ... WITH FULLSCAN on the main base table in the query resulted in the "slow" value not being associated with a slow plan.
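If the slow plan comes back, two commonly suggested mitigations for this kind of parameter-sensitive plan are worth trying; a sketch only (the hint goes on the statement inside the sproc's dynamic SQL, and the remaining predicates are omitted here for brevity):
-- Option 1: recompile on every execution, so each @StaffID gets its own plan
SELECT * FROM [dbo].[StaffRequestExtInfo]
WHERE [FStaffID] = @StaffID
OPTION (RECOMPILE);

-- Option 2: optimize for an "average" value instead of the sniffed one (SQL Server 2008+)
SELECT * FROM [dbo].[StaffRequestExtInfo]
WHERE [FStaffID] = @StaffID
OPTION (OPTIMIZE FOR UNKNOWN);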

SQL Batched Delete

I have a table in SQL Server 2005 which has approx 4 billion rows in it. I need to delete approximately 2 billion of these rows. If I try and do it in a single transaction, the transaction log fills up and it fails. I don't have any extra space to make the transaction log bigger. I assume the best way forward is to batch up the delete statements (in batches of ~ 10,000?).
I can probably do this using a cursor, but is there a standard/easy/clever way of doing this?
P.S. This table does not have an identity column as a PK. The PK is made up of an integer foreign key and a date.
You can 'nibble' the deletes, which also means that you don't put a massive load on the database. If your t-log backups run every 10 minutes, then you should be OK to run this once or twice over the same interval. You can schedule it as a SQL Agent job.
try something like this:
DECLARE @count int
SET @count = 10000

DELETE FROM table1
WHERE table1id IN (
    SELECT TOP (@count) table1id
    FROM table1
    WHERE x = 'y'
)
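To run this unattended (for example from a SQL Agent job step), the nibble can be wrapped in a loop that stops when nothing is left to delete; a sketch, keeping the same placeholder table and column names:
DECLARE @count int
SET @count = 10000

WHILE 1 = 1
BEGIN
    DELETE FROM table1
    WHERE table1id IN (
        SELECT TOP (@count) table1id
        FROM table1
        WHERE x = 'y'
    );

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete

    WAITFOR DELAY '00:00:05';  -- brief pause so other work and log backups can get through
END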
What distinguishes the rows you want to delete from those you want to keep? Will this work for you:
while exists (select 1 from your_table where <your_condition>)
delete top(10000) from your_table
where <your_condition>
In addition to putting this in a batch with a statement to truncate the log, you also might want to try these tricks:
Add criteria that matches the first column in your clustered index in addition to your other criteria
Drop any indexes from the table and then put them back after the delete is done if that's possible and won't interfere with anything else going on in the DB, but KEEP the clustered index
For the first point above, for example, if your PK is clustered then find a range which approximately matches the number of rows you want to delete in each batch and use that:
DECLARE @max_id INT, @start_id INT, @end_id INT, @interval INT
SELECT @start_id = MIN(id), @max_id = MAX(id) FROM My_Table
SET @interval = 100000 -- You need to determine the right number here
SET @end_id = @start_id + @interval

WHILE (@start_id <= @max_id)
BEGIN
    DELETE FROM My_Table WHERE id BETWEEN @start_id AND @end_id AND <your criteria>
    SET @start_id = @end_id + 1
    SET @end_id = @end_id + @interval
END
Sounds like this is a one-off operation (I hope so, for your sake) and you don't need to be able to restore to a point partway through the batched delete. If that's the case, why not just switch to the SIMPLE recovery model before running, and back to FULL when you're done?
This way the transaction log won't grow as much. This might not be ideal in most situations, but I don't see anything wrong here (assuming, as above, that you don't need to restore to a state in between your deletes).
You can do this in your script with something like:
ALTER DATABASE myDB SET RECOVERY SIMPLE
-- ... run the batched delete ...
ALTER DATABASE myDB SET RECOVERY FULL
Alternatively, you can set up a job to shrink the transaction log at a given interval while your delete is running. This is kinda bad, but I reckon it would do the trick.
Well, if you were using SQL Server partitioning, say based on the date column, you could simply have switched out the partitions that are no longer required. A consideration for a future implementation, perhaps.
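For reference, the idea looks roughly like this (a sketch only; it assumes the table is already partitioned on the date column and that a staging table with an identical structure exists on the same filegroup - all names are placeholders):
-- metadata-only removal of an old partition
ALTER TABLE My_Table SWITCH PARTITION 1 TO My_Table_Staging;
TRUNCATE TABLE My_Table_Staging;  -- minimally logged, so the transaction log stays small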
I think the best option may be as you say, to delete the data in smaller batches, rather than in one hit, so as to avoid any potential blocking issues.
You could also consider the following method:
Copy the data to keep into a temporary table
Truncate the original table to purge all data
Move everything from the temporary table back into the original table
Your indexes would also be rebuilt as the data was added back to the original table.
I would do something similar to the temp table suggestions, but I'd SELECT the rows you want to keep INTO a new permanent table, drop the original table and then rename the new one. This should have a relatively low tran log impact. Obviously remember to recreate any indexes that are required on the new table after you've renamed it.
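A sketch of that approach (table, constraint and column names are placeholders, and it assumes you can take the table offline briefly):
-- SELECT ... INTO is minimally logged under the SIMPLE or BULK_LOGGED recovery model
SELECT *
INTO dbo.BigTable_keep
FROM dbo.BigTable
WHERE <rows you want to keep>;

DROP TABLE dbo.BigTable;
EXEC sp_rename 'dbo.BigTable_keep', 'BigTable';

-- recreate the PK / indexes afterwards, e.g.:
ALTER TABLE dbo.BigTable ADD CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (SomeFK, SomeDate);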
Just my two p'enneth.
Here is my example:
-- configure script
-- Script limits - size of the id range deleted per transaction (default 10,000)
-- and time to allow the script to run (in seconds, default 2 hours)
--
DECLARE @MAX INT
DECLARE @MAXT INT
--
-- $MAX, $MAXT, $TABLE and $WHERE are placeholders substituted by a shell script
-- before the batch runs; $TABLE and $WHERE are substituted directly into the
-- statements below, since a table name cannot be held in a T-SQL variable.
--
SET @MAX = $MAX
SET @MAXT = $MAXT

-- step 1 - Main loop
DECLARE @continue INT
-- rows deleted in one transaction
DECLARE @deleted INT
-- rows deleted in total by the script
DECLARE @total INT
SET @total = 0
DECLARE @max_id INT, @start_id INT, @end_id INT, @interval INT
SET @interval = @MAX
SELECT @start_id = MIN(id), @max_id = MAX(id) FROM $TABLE
SET @end_id = @start_id + @interval
-- timing
DECLARE @start DATETIME
DECLARE @now DATETIME
DECLARE @timee INT
SET @start = GETDATE()
--
SET @continue = 1
-- EntryID persists the restart point between runs of the script
IF OBJECT_ID (N'EntryID', 'U') IS NULL
BEGIN
    CREATE TABLE EntryID (startid INT)
    INSERT INTO EntryID(startid) VALUES(@start_id)
END
ELSE
BEGIN
    SELECT @start_id = startid FROM EntryID
END
WHILE (@continue = 1 AND @start_id <= @max_id)
BEGIN
    PRINT 'Start issued: ' + CONVERT(varchar(19), GETDATE(), 120)
    BEGIN TRANSACTION
        DELETE
        FROM $TABLE
        WHERE id BETWEEN @start_id AND @end_id AND $WHERE
        SET @deleted = @@ROWCOUNT
        UPDATE EntryID SET EntryID.startid = @end_id + 1
    COMMIT
    PRINT 'Deleted issued: ' + STR(@deleted) + ' records. ' + CONVERT(varchar(19), GETDATE(), 120)
    SET @total = @total + @deleted
    SET @start_id = @end_id + 1
    SET @end_id = @end_id + @interval
    IF @end_id > @max_id
        SET @end_id = @max_id
    SET @now = GETDATE()
    SET @timee = DATEDIFF(second, @start, @now)
    IF @timee > @MAXT
    BEGIN
        PRINT 'Time limit exceeded for the script, exiting'
        SET @continue = 0
    END
    -- ELSE
    -- BEGIN
    --     SELECT @total 'Removed now', @timee 'Total time, seconds'
    -- END
END
SELECT @total 'Removed records', @timee 'Total time sec', @start_id 'Next id', @max_id 'Max id', @continue 'COMPLETED?'
SELECT * FROM EntryID next_start_id
GO
The short answer is, you can't delete 2 billion rows without incurring some kind of major database downtime.
Your best option may be to copy the data to a temp table and truncate the original table, but this will fill your tempDB and would use no less logging than deleting the data.
You will need to delete rows in batches, truncating the transaction log each time before it fills up. The answer provided by Stanislav Kniazev could be modified to do this by increasing the batch size and adding a call to truncate the log file.
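On SQL Server 2005, that call to truncate the log file inside the loop would typically look something like this (a sketch with placeholder database and log-file names; note that WITH TRUNCATE_ONLY is deprecated and was removed in SQL Server 2008, where you would take a normal log backup or use the SIMPLE recovery model instead):
BACKUP LOG MyDB WITH TRUNCATE_ONLY;
DBCC SHRINKFILE (MyDB_log, 1);  -- optional: shrink the log file back down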
I agree with the people who want you to loop over a smaller set of records; this will be faster than trying to do the whole operation in one step. You may need to experiment with the number of records to include in each loop. About 2,000 at a time seems to be the sweet spot in most of the tables I do large deletes from, although a few need smaller amounts, like 500. It depends on the number of foreign keys, the size of the record, triggers, etc., so it really will take some experimenting to find what you need. It also depends on how heavy the use of the table is. A heavily accessed table will need each iteration of the loop to run for a shorter amount of time. If you can run during off hours, or better yet in single-user mode, then you can delete more records in one loop.
If you don't think you can do this in one night during off hours, it might be best to design the loop with a counter and only do a set number of iterations each night until it is done.
Further, if you use an implicit transaction rather than an explicit one, you can kill the loop query at any time and the records already deleted will stay deleted, except those in the current round of the loop. Much faster than trying to roll back half a million records because you've brought the system to a halt.
It is usually a good idea to back up the database immediately before undertaking an operation of this nature.