SQL Server query degrades over time from 0 to 60 msec

I need some help explaining this behavior in SQL Server 2008. I have a simple C++ program that runs a query in a loop. Here is the pseudocode of the program.
myTempid = 0;
while (1) {
    execQuery(select a.id
              from a, b, c, d
              where a.id = b.id
                and b.id = c.id
                and c.id = d.id
                and d.id = myTempid)
}
Here are some facts:
a, b, and c are empty tables.
d has about 5500 rows.
The query starts out taking 0 msec (I can see this from the Profiler), but after X iterations it jumps to about 60 msec and stays there. X varies; sometimes it's 100, sometimes 200. The weird thing is that once it makes the jump from 0 to 60 msec, it just stays there no matter what myTempid is.
To me it sounds like SQL Server is somehow 'de-caching' the query plan. Does this make sense to anyone?
Thanks!

The results from SQL Profiler can be tricky to interpret.
The time shown for a command includes the time for the result set to be delivered to the client. To see this, create a SELECT statement that returns at least a million rows. Run these tests in SQL Server Management Studio (SSMS) with SQL Profiler tracing the results.
First run: send the SQL results to a temporary table (this should take a second or so). Second run: send the SQL results to the Results window (this should take a few seconds). Note the run time shown in SSMS, then note the time reported by SQL Profiler.
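For example (a hedged sketch; BigTable is just a placeholder for any table or view that returns a million or so rows):

-- Run 1: results go into a temp table, so almost nothing is sent to the client.
SELECT * INTO #t FROM BigTable;

-- Run 2: results go to the SSMS Results window; the Profiler duration now also
-- covers the time SSMS spends reading and rendering the rows.
SELECT * FROM BigTable;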
You should see that the time SSMS takes to read the record set, format the results, and display them in the Results window increases the duration that is reported for the query.
The point of all that: when you run the query from your application, at that level of precision (60 ms) you cannot tell from the reported duration alone whether the slowdown is coming from the database, the network, or the application.
You should create a test script and run the query in SSMS to see whether the query time still degrades when your application is not part of the loop.
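A minimal sketch of such a test script, assuming the table and column names from the question (the real schema isn't shown):

-- Runs the same join repeatedly so Profiler can trace the durations with the
-- application, driver, and network out of the picture.
DECLARE @i int = 0;
WHILE @i < 1000
BEGIN
    SELECT a.id
    FROM a
        JOIN b ON a.id = b.id
        JOIN c ON b.id = c.id
        JOIN d ON c.id = d.id
    WHERE d.id = @i;

    SET @i = @i + 1;
END;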
SQL Profiler 2008 records duration in microseconds but only displays it in milliseconds, so rounding is an issue. Save the trace as a trace table and look at the Duration column to see the microseconds. If the problem is within SQL Server, you may see the duration increasing over time.
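For example, if the trace was saved to a table named MyTrace (a placeholder name), something like this shows the raw values:

-- Duration in a saved trace table is stored in microseconds, even though the
-- Profiler UI displays milliseconds.
SELECT TextData, StartTime, Duration
FROM MyTrace
ORDER BY StartTime;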
You can also have Profiler capture the execution plan. Look at the execution plan before and after the duration increases and see whether the plan is changing.

Related

How can you tell how many MBs a SQL query is returning?

I'm running a SQL query that is taking at least 7 seconds to return. I'm wondering if there is a way to determine how many MBs that query is returning. I'm trying to figure out how much time is spent on the actual query vs. how much time is spent transferring the results from the server.
It is a simple SQL query, something like:
select * from Table where this = 'that'
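One hedged way to separate the two (not from the original thread, and the table and column names below are just the placeholders from the question) is to time the same predicate once with the rows kept server-side and once with the full result set returned:

SET STATISTICS TIME ON;

-- Query only: rows are counted on the server, so almost nothing crosses the wire
-- (the COUNT may get a cheaper plan, so treat this as a rough comparison).
SELECT COUNT(*) FROM [Table] WHERE this = 'that';

-- Query plus transfer: the whole result set is sent to the client.
SELECT * FROM [Table] WHERE this = 'that';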

If I query the V$SQL view, does it include SQL that is still running and not yet finished?

I'm running the query below in an Oracle DB.
SELECT program_id, program_line#, sql_text
FROM V$SQL VS,
     ALL_USERS AU
WHERE (executions >= 1)
  AND (parsing_user_id != 0)
  AND (AU.user_id(+) = VS.parsing_user_id)
  AND UPPER(AU.USERNAME) IN (UPPER('CARGO'))
ORDER BY last_active_time DESC;
I just wanted to ask whether the result returned by this query includes SQL statements that are still running, or queries that have timed out or been cancelled by the user.
Yes, V$SQL shows information about queries that are still running. From Oracle's documentation:
V$SQL lists statistics on shared SQL areas without the GROUP BY clause and contains one row for each child of the original SQL text entered. Statistics displayed in V$SQL are normally updated at the end of query execution. However, for long running queries, they are updated every 5 seconds. This makes it easy to see the impact of long running SQL statements while they are still in progress.
As for the second part of your question, the answer is: it depends. The length of time a query stays in the cache (where V$SQL gathers its information) depends on the size of your cache and the number of distinct queries running at any given time. If the same queries are frequently run against the database (i.e. they stay cached), old queries will remain in the V$SQL view longer than in a database where many distinct queries are being executed (assuming everything else is the same). Distinct queries that aren't already in the cache are added to the library cache, pushing older or timed-out queries out. If you want to influence how long queries stay in the cache, you will have to configure the size of the shared pool. I would recommend reading up on the library cache at https://docs.oracle.com/database/121/TGDBA/tune_shared_pool.htm#TGDBA560
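A minimal sketch of what that looks like, assuming you size the shared pool manually (with automatic memory management Oracle adjusts it for you) and have the ALTER SYSTEM privilege; 512M is an arbitrary example value:

-- Check the current setting.
SELECT name, value FROM v$parameter WHERE name = 'shared_pool_size';

-- Resize it (example value only; pick a size appropriate for your instance).
ALTER SYSTEM SET shared_pool_size = 512M SCOPE = BOTH;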

Duplicated execution plan for a query on Azure SQL

I have a query that I run using SQL Server Management Studio. Usually a single execution plan is created for the query, but sometimes I find duplicated execution plans for a single query on Azure SQL, like below.
When I open the query from this plan, I see the query text duplicated, as if the same query had been copied and pasted into the same batch. It is the same in Query 1 and Query 2. See below.
Does anyone know why this happens and how to avoid this behavior? How is that even possible?
P.S. The query's execution time increased from 2 seconds to 20 seconds and more.
P.P.S. The warning in Query 2:
It could be that the queries were run with different settings. I notice that one has a warning and the other doesn't.
Reference:
https://blogs.msdn.microsoft.com/psssql/2014/04/03/i-think-i-am-getting-duplicate-query-plan-entries-in-sql-servers-procedure-cache/
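If different SET options are the cause, a hedged way to confirm it is to compare the set_options plan attribute of the cached entries; sys.dm_exec_query_stats, sys.dm_exec_sql_text, and sys.dm_exec_plan_attributes are standard DMVs, and the LIKE filter is just a placeholder for your query text:

SELECT qs.plan_handle,
       st.text,
       pa.value AS set_options   -- differs between the duplicate entries if SET options differ
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%your query text here%';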

High 'total_worker_time' for stored proc using OPENQUERY in SQL Server 2005

[Cross posted from the Database Administrators site, in the hope that it may gain better traction here. I'll update either site as appropriate.]
I have a stored procedure in SQL Server 2005 (SP2) which contains a single query like the following (simplified for clarity)
SELECT * FROM OPENQUERY(MYODBC, 'SELECT * FROM MyTable WHERE Option = ''Y'' ')
OPTION (MAXDOP 1)
When this proc is run (and run only once), I can see the plan appear in sys.dm_exec_query_stats with a high total_worker_time value (e.g. 34762.196 ms). This is close to the elapsed time. However, in SQL Server Management Studio the statistics show a much lower CPU time, as I'd expect (e.g. 6828 ms). The query takes some time to return because of the slowness of the server it is talking to, but it doesn't return many rows.
I'm aware of the issue that parallel queries in SQL Server 2005 can report odd CPU times, which is why I've tried to turn off any parallelism with the query hint (though I really don't think there was any in this case).
I don't know how to account for the fact that the two ways of looking at CPU usage can differ, nor which of the two is accurate (I have other reasons for thinking the CPU usage may be the higher number, but it's tricky to measure). Can anyone give me a steer?
UPDATE: I was assuming the problem was with OPENQUERY, so I looked at times for a long-running query which doesn't use OPENQUERY. In this case the statistics (obtained by setting STATISTICS TIME ON) reported the CPU time as 3315 ms, whereas the DMV gave it as 0.511 ms. The total elapsed times agreed in each case.
total_worker_time in sys.dm_exec_query_stats is cumulative: it is the total CPU (worker) time, in microseconds, for all executions of the currently compiled version of the query. See execution_count for the number of executions this represents.
See last_worker_time, min_worker_time, or max_worker_time for the timing of individual executions.
reference: http://msdn.microsoft.com/en-us/library/ms189741.aspx
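A minimal sketch of reading the per-execution numbers rather than the cumulative total (total_worker_time and last_worker_time are reported in microseconds):

SELECT st.text,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_worker_time_us,
       qs.last_worker_time,
       qs.max_worker_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;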

Begin and monitor progress on long running SQL queries via ajax

Is it possible to start a query that will take a significant amount of time and monitor progress from the UI via Ajax?
I considered starting the process as a "run once" job scheduled to run immediately. I could store the results in a temporary table for quick retrieval once it's complete. I could also log the run time of the report and average that out to guesstimate the running time for the progress bar.
I'm using Microsoft SQL Server 2005 at the moment, but I'm willing to move to another DBMS such as SQL Server 2008, MySQL, etc. if necessary.
One idea, if the long-running job populates another table:
Use a second database connection to monitor how many of the source rows have been processed, and show a simple "x rows processed" message every few seconds:
SELECT COUNT(*) FROM TargetTable WITH (NOLOCK)
If you have a source table too:
SELECT COUNT(*) FROM SourceTable WITH (NOLOCK)
...then you can show "x of y rows processed".
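A hedged sketch of combining the two counts into a single "x of y" progress row (SourceTable and TargetTable are the placeholder names used above):

-- NOLOCK keeps the monitoring query from blocking the long-running load,
-- so the counts are approximate while the job is still writing.
SELECT (SELECT COUNT(*) FROM TargetTable WITH (NOLOCK)) AS rows_processed,
       (SELECT COUNT(*) FROM SourceTable WITH (NOLOCK)) AS rows_total;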
Basically, you have to use a 2nd connection to monitor the first. However, you also need something to measure...