I am trying to diagnose slow application performance on a client site.
A log file on the client machine gives me the execution time for each query as measured from the application side. It appears that many bare-bones simple queries to the remote DB are taking an exorbitant amount of time to complete. For example,
SELECT CONVERT(varchar, GETDATE(), 121)
This query is repeatedly taking over 5 seconds to execute as timed from the application. Other queries almost as simple (inserting one recordset into one table) are taking over a minute to complete. On my test system (with a copy of the client's database) I do not experience any of these problems.
I would suspect a slow network, except that the problem reliably disappears after running a report from Crystal Reports. Then after 1-2 hours the application slows down again.
For the sake of isolating the problem further, I would like to retrieve/log the execution time on the server side. I am trying to figure out what the best way of doing this is. I could use a variable to obtain the execution time for a single query, but I don't have the option of modifying every single query in my application.
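For a single query, that would look something like the sketch below (using the same trivial query from above), but wrapping thousands of call sites this way isn't practical:

DECLARE @start datetime2(7) = SYSDATETIME();

SELECT CONVERT(varchar, GETDATE(), 121);   -- the query being timed

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;   -- value I would log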
sys.dm_exec_query_stats looked very promising for retrieving execution times for previous queries, but the millisecond values it reports for last_elapsed_time seem far too high.
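What I had in mind was something along these lines (note that, per the documentation, last_elapsed_time is reported in microseconds rather than milliseconds, so I may simply be misreading the values):

SELECT TOP 20
    qs.last_execution_time,
    qs.last_elapsed_time,          -- microseconds, per the docs
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.last_execution_time DESC;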
Can anyone help me figure out how to obtain timing for my queries?
Here is one way:
set statistics time on
SELECT CONVERT(varchar, GETDATE(), 121)
set statistics time off
And it will report the time spent on the query as:

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 0 ms, elapsed time = 0 ms.
Related
I have a query in SQL Server 2019 that does a SELECT on the primary key fields of a table. This table has about 6 million rows of data in it. I want to know exactly how fast my query is, down to the microsecond (or at least to the nearest 100 microseconds). My query is faster than a millisecond, but all I can find in SQL Server are query measurements accurate to the millisecond.
What I've tried so far:
SET STATISTICS TIME ON
This only shows milliseconds
Wrapping my query like so:
DECLARE @Start DATETIME2(7), @End DATETIME2(7);

SELECT @Start = SYSDATETIME();
SELECT TOP 1 b.COL_NAME FROM BLAH b WHERE b.[key] = 0x1234;
SELECT @End = SYSDATETIME();

SELECT DATEDIFF(MICROSECOND, @Start, @End)
This shows that no time has elapsed at all. But that isn't accurate, because if I add WAITFOR DELAY '00:00:00.001', which should add a measurable millisecond of delay, it still shows 0 for the DATEDIFF. Only if I wait for 2 milliseconds does it show up in the DATEDIFF.
Looking up the execution plan and getting the total_worker_time from the sys.dm_exec_query_stats DMV.
Here I see 600 microseconds; however, the Microsoft docs seem to indicate that this number cannot be trusted:
total_worker_time ... Total amount of CPU time, reported in microseconds (but only accurate to milliseconds)
I've run out of ideas and could use some help. Does anyone know how I can accurately measure my query in microseconds? Would extended events help here? Is there another performance monitoring tool I could use? Thank you.
This is too long for a comment.
In general, you don't look for performance measurements measured in microseconds. There is just too much variation, based on what else is happening in the database, on the server, and in the network.
Instead, you set up a loop and run the query thousands -- or even millions -- of times and then average over the executions. There are further nuances, such as clearing caches if you want to be sure that the query is using cold caches.
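A minimal sketch of that approach, reusing the table and column names from the question (BLAH, COL_NAME, key) and assigning the result to a variable so client rendering doesn't skew the numbers (the variable's type is an assumption; adjust to match COL_NAME):

-- Optionally run DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE first for cold-cache numbers (non-production only)
DECLARE @i int = 0, @iterations int = 10000;
DECLARE @dummy varchar(100);
DECLARE @start datetime2(7) = SYSDATETIME();

WHILE @i < @iterations
BEGIN
    SELECT @dummy = b.COL_NAME FROM BLAH b WHERE b.[key] = 0x1234;
    SET @i += 1;
END;

-- average microseconds per execution (includes a small amount of loop overhead)
SELECT DATEDIFF(MICROSECOND, @start, SYSDATETIME()) / @iterations AS avg_microseconds;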
I've just started using the SentryOne Plan Explorer to help tune my SQL Server queries, and have a question I can't seem to find an answer for: what is Duration?
I would think it's the total time it took for the query to run. However, every query I am testing takes much longer in real time than what ends up showing under Duration.
Below is a screenshot of what I'm seeing. Watching the query run takes over 2 minutes, but the final duration ends up being 0.770?
Thanks for any insight!
This is the answer provided by SentryOne:
While a query is running, we show clock time on the status bar. However, at the end, we sum up the total duration, in milliseconds, as reported by the trace rows we collected. We subtract duration from any trace rows that are discarded (e.g. events that don't generate plans, like WAITFOR).
I'm using SQL Server 2012 and SET STATISTICS TIME ON to measure the CPU time for my SQL statements. I use this because I only want the time the database needs to execute the statement.
When returning a lot of data from a SELECT, I noticed the CPU time going up quite high: with TOP 2000 the query needs about 400 ms of CPU time, but without the TOP it needs about 10,000 ms.
What I'm not sure about is:
Is it possible that the CPU time I get back includes something like the time needed to display the millions of returned rows in SQL Server Management Studio? That would be a pretty bad situation.
Update:
The time I want to measure is the execution time on the SQL Server side, without the time SSMS needs to display the rows. There are several time statistics displayed under Client Statistics, but after searching for a long time it's really hard to find good references explaining what they are. Any suggestions?
Idea: elapsed time (SQL Server execution time) - client processing time (Client Statistics)
Maybe this is an option?
In a multi-threaded world, CPU time is increasingly less helpful for simple tuning. Execution time is worth looking at.
To see if execution time (elapsed time) spent on displaying results is included, you could SELECT TOP 2000 * INTO #temp to compare execution times.
Update:
My quick tests suggest the overhead of creating and inserting into a #temp table outweighs that of displaying results (at 5,000 rows). When I go to 50,000 results the SELECT INTO runs more quickly. The count at which the two become equivalent depends on how many and what type of fields are returned. I tested with:
SET STATISTICS TIME ON
SELECT TOP 50000 NEWID()
FROM master..spt_values v1, master..spt_values v2
WHERE v1.number > 100
SET STATISTICS TIME OFF
-- CPU time = 32 ms, elapsed time = 121 ms.
SET STATISTICS TIME ON
SELECT TOP 50000 NEWID() col1
INTO #test
FROM master..spt_values v1, master..spt_values v2
WHERE v1.number > 100
SET STATISTICS TIME OFF
-- CPU time = 15 ms, elapsed time = 87 ms.
CPU time in SET STATISTICS TIME ON only counts the time that SQL Server needs to execute the query. It doesn't include any time the client takes to render the results. It also excludes any time SQL Server spends waiting for buffers to clear. In short, it really is pretty independent of the client.
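A quick way to see the difference for yourself is a statement that waits but does no work; a minimal sketch:

SET STATISTICS TIME ON;
WAITFOR DELAY '00:00:02';   -- does nothing but wait
SET STATISTICS TIME OFF;

-- Reports roughly: CPU time = 0 ms, elapsed time = 2000 ms.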
I wrote an application that performs around 40 queries and then does some processing on the results of each query. (Right now I'm using Qt 3.2.2 in Visual C++ 6.0 on Windows XP with SQL Server 2005, but that's not required.) The paradigm is to create a QSqlQuery object with the query (this causes the query to be performed) and then:

while (query.next()) { operate(query.value(0)); }

By profiling I find that the query.next() call is taking up half the time of the program, which seems excessive as it's just fetching a row of data (6 or 7 fields).
This performance is unacceptable, and I'm looking for a way to improve this. I'm open to changing anything -- switching my compiler, switching languages, switching the paradigm I use to get data from the database. How can I speed this up?
Here's the query:
select rtrim(Portfolio.securityid), rtrim(type), rtrim(coordinate), rtrim(value)
from MarketData
inner join portfolio
on cast(MarketData.securityid as varchar(36))=portfolio.securityid
where Portfolioname=?
and type in
('bond_profit', 'bondoption_profit', 'equity_profit', 'equityoption_profit')
and marketdate=?
order by Portfolio.securityid, type, coordinate
CPU usage is around 40% while the program is running, so I suspect it's spending the majority of its time waiting for the .next() call to return with more data.
Performing the same query in SSMS returns 4.5 million rows in about 5 minutes, but the total time spent waiting on .next() during the run of the program is 30 minutes.
I am trying to speed up a long-running query that I have (takes about 10 minutes to run...). In order to track down what part of the query is costing me the most time, I included the Actual Execution Plan when I ran it and found a particular section that was taking up 55% (screenshot below).
(screenshot: http://img109.imageshack.us/img109/9571/53218794.png)
This didn't quite seem right to me, so I added Print '1' and Print '2' before and after this trouble section. When I run the query for a mere 17 seconds and then cancel it, the 1 and 2 both print out, which I'm assuming means it gets through that section in the first 17 seconds.
(screenshot: http://img297.imageshack.us/img297/4739/66797633.png)
Am I doing something wrong here or is my Execution plan misleading me?
Metrics from perfmon will also help figure out what's going wrong... you could be running into some serious IO issues with the drive your tempDB is residing on. Additionally, run a trace and look at the CPU & IO of the actual run.
Good perfmon metrics to look at are disk queue length (avg & writes).
If you don't have access to perfmon or don't want to trace things, use "SET STATISTICS IO ON" at the beginning of your query and allow it to complete... don't stop it. Just because an execution plan says a section is taking over half the cost doesn't mean it will run for half of the query time... it could be much more (or less).
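For example (the SELECT here is only a stand-in for the real batch):

SET STATISTICS IO ON;
SELECT COUNT(*) FROM master..spt_values;   -- stand-in query
SET STATISTICS IO OFF;

-- The Messages output then shows logical, physical and read-ahead reads per table for each statement.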
It says Query 10: Query cost (relative to the batch): 55%. Are you 100% positive that it is the 10th statement in the batch that you surrounded with Print statements? Could the INSERT ... INTO #mpProgramSet2 execute multiple times, sometimes in under 17 seconds and other times taking 5 minutes, depending on how much data was selected/inserted?
As a side note, you should run with SET STATISTICS TIME ON rather than prints; this will give you the exact compile time and execution time of each statement in the batch.
I wouldn't trust that printing the '1' and '2' proves anything about what has executed and what has not. I do the same thing, but I just wouldn't rely on it as proof. You could print the @@ROWCOUNT from that first insert query - that would indicate for sure that the insert has occurred.
Although the plan says that query may take 55% of the cost, it may not be 55% of the execution time, especially if the query results are cached.
Another advantage of printing the @@ROWCOUNT is to compare the actual number of rows to the estimated rows (51K). If they differ by a lot, then you might investigate the statistics for your indexes.
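Something like this (the temp table and the source rows below are placeholders, not your actual schema):

CREATE TABLE #mpProgramSet2 (id int);

PRINT '1';

INSERT INTO #mpProgramSet2 (id)
SELECT number FROM master..spt_values WHERE type = 'P';

PRINT 'Rows inserted: ' + CAST(@@ROWCOUNT AS varchar(20));   -- actual row count from the insert
PRINT '2';

DROP TABLE #mpProgramSet2;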
We would need the full query to understand what's going on, but I would probably start with setting MAXDOP to 1 in order to limit the number of processors it's running on.
Note that sometimes queries need to be limited to only 1 processor due to locks etc.
Further, you might try adding NOLOCK hints to any of your SELECTs that can get away with dirty reads.
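For reference, both suggestions look roughly like this in a query (table and column names here are invented):

SELECT t.SomeColumn
FROM dbo.SomeTable AS t WITH (NOLOCK)   -- dirty reads; only where uncommitted data is acceptable
WHERE t.OtherColumn = 1
OPTION (MAXDOP 1);                      -- limit this statement to a single CPU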