I'm running a SQL query that is taking at least 7 seconds to return. I'm wondering if there is a way to determine how many MB that query is returning. I'm trying to figure out how much time is spent doing the actual query vs. how much time is spent transferring the results from the server.
It is a simple SQL query, something like:
select * from Table where this = 'that'
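One rough way to separate the two, assuming SQL Server (the answer below uses SQL Profiler) and reusing the placeholder table/column names from the query above, is to time the query with the results kept on the server and to estimate the payload size with DATALENGTH:

SET STATISTICS TIME ON;

-- Run 1: keep the results on the server, so the elapsed time excludes
-- most of the cost of streaming rows to the client.
SELECT *
INTO #discard
FROM [Table]
WHERE this = 'that';

-- Rough size of the result set in MB (list the columns the query actually returns).
SELECT SUM(DATALENGTH(this) + DATALENGTH(other_column)) / 1048576.0 AS approx_mb
FROM [Table]
WHERE this = 'that';

Comparing the timed run against the duration you see from the client gives a rough split between execution time and transfer time.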
I have a few users who sometimes execute highly inefficient DAX queries against an on-premises SSAS Tabular database.
Is there a server-side setting to limit the duration of queries so that the server cancels long-running queries automatically?
Try changing the ServerTimeout server property. It is specified in seconds and can be changed from the Analysis Services server properties in SSMS (or in msmdsrv.ini).
I have a strange problem with a Postgres 10 database.
After restoring a table with around 2 million records, I ran a query to measure its execution time, since it had been acting slow on another server.
Shortly after the restore the query was executing in about 1.5 seconds.
After around one hour the same query was executing in 30-40 seconds.
The query is nothing fancy:
SELECT f1,f2,f3 FROM table WHERE f4=false
The execution plan is the same as before.
No writes have been done on the table, and the server wasn't under load from other tasks.
How is this possible? How can I investigate the cause of the problem?
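One way to start digging, sketched here with the column names from the query above (the table is called my_table below, since "table" itself is a reserved word), is to compare the timed plan with its buffer usage and to check whether autovacuum has touched the table since the restore:

-- Actual plan plus timing and buffer hits/reads, to see whether the
-- slow runs are reading from disk rather than from cache.
EXPLAIN (ANALYZE, BUFFERS)
SELECT f1, f2, f3 FROM my_table WHERE f4 = false;

-- When the table was last (auto)vacuumed/analyzed and how many dead
-- tuples it carries.
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'my_table';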
I'm using a PostgreSQL server and there's something I need to know about query execution on it. I have two different clients connected to the server simultaneously. Now, the clients are executing some queries at the same time. Is it true that each client's query will be executed in a separate thread?
Or can the server split the query execution into multiple threads for optimization's sake?
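(For context, PostgreSQL uses one backend process per client connection rather than threads, and since version 9.6 a single query can additionally be spread across parallel worker processes. A quick way to observe the per-connection backends, as a sketch:)

-- Run from each client connection: each one reports a different
-- backend process ID.
SELECT pg_backend_pid();

-- Lists the active backends and what each one is currently running.
SELECT pid, state, query FROM pg_stat_activity;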
I need some help explaining this behavior in SQL Server 2008. I have a simple C++ program that is running a query in a loop. Here is the pseudocode of the program.
myTempID = 0;
while (1) {
    execQuery(select a.id
              from a, b, c, d
              where a.id = b.id
                and b.id = c.id
                and c.id = d.id
                and d.id = myTempID);
    myTempID++;   // the ID changes on each iteration
}
Here are some facts:
a, b, and c are empty tables
d has about 5500 rows
The query starts out taking '0 msec' (I can see this from the Profiler), but then after X iterations it jumps to about 60 msec and stays there. X varies; sometimes it's 100, sometimes 200. The weird thing is that once it makes the jump from 0 to 60 msec, it stays there no matter what myTempID is.
To me it sounds like SQL Server is somehow 'de-caching' the query plan? Does this make sense to anyone?
Thanks!
The results from SQL Profiler can be tricky to interpret.
The time shown for a command includes the time for the result set to be delivered to the client. To see this, create a SELECT statement that returns at least a million rows. Run these tests in SQL Server Management Studio (SSMS) and run SQL Profiler to trace the results.
First run, send the SQL results to a temporary table (should take a second or so). Second run, send the SQL results to the Results window (should take a few seconds). Note the run time shown in SSMS, then note the time reported by SQL Profiler.
You should see that the time SSMS takes to read the result set, format the results, and display them in the Results window increases the duration that is reported for the query.
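As a sketch of that test (the million rows here come from cross-joining sys.all_columns with itself; any large table works):

-- Run 1: results stay on the server, so Profiler's duration mostly
-- reflects query execution.
SELECT TOP (1000000) a.name, a.object_id
INTO #discard
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

-- Run 2: results are streamed to the SSMS grid, so the reported
-- duration also includes delivering and rendering a million rows.
SELECT TOP (1000000) a.name, a.object_id
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;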
The point of all that: when you are running the query from your application, at that level of precision (60 msec), you cannot tell from the reported duration alone whether the slowdown is coming from the database, the network, or the application.
You should create a test script and run the query in SSMS and see if the query time degrades when your application is not part of the loop.
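A minimal test script along those lines (table and column names are taken from the pseudocode above; the loop just varies the ID the way the application does):

DECLARE @myTempID int = 0;

WHILE @myTempID < 500
BEGIN
    -- Same query the application runs, executed repeatedly from SSMS
    -- so there is no application or driver in the path.
    SELECT a.id
    FROM a
    JOIN b ON a.id = b.id
    JOIN c ON b.id = c.id
    JOIN d ON c.id = d.id
    WHERE d.id = @myTempID;

    SET @myTempID += 1;
END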
SQL Profiler 2008 records duration in microseconds, but only displays it in milliseconds; so rounding is an issue. Save the trace as a Trace Table and look at results in the Duration column to see the microseconds. If the problem is within SQL Server, you may see the duration increasing over time.
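For example, once the trace has been saved to a table (dbo.MyTrace below is a placeholder for whatever name you chose when saving):

-- Duration is stored in microseconds in the trace table; check whether
-- it trends upward over the course of the run.
SELECT StartTime, Duration, TextData
FROM dbo.MyTrace
ORDER BY StartTime;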
You can also have Profiler return the execution plan. Look at the plan before and after the duration increases and see whether it is changing.