I am trying to run a simple SELECT statement, and each time I run it, it takes a different amount of time to complete.
The first run takes 0 seconds, the second takes 3 seconds, and the third takes 10 seconds. If I keep running it, the timings start over at 0, 3, 10 and the cycle repeats.
Why is this happening? There seems to be some kind of logic behind it.
This is causing the service that uses the database to time out. This query is run by a specific piece of software thousands of times.
SQL Query:
SELECT * FROM CONTACT_CONTACT WITH (NOLOCK) WHERE MKEY ='XXXXXXXXXXXX'
I am using SQL Server 2012. The database contains 369 tables, and the table CONTACT_CONTACT contains 62,497 records.
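One hedged way to narrow this down is to run the same statement directly in SSMS with timing and I/O statistics enabled and see whether the 0/3/10-second pattern reproduces there; the MKEY value below is the same placeholder as in the query above.

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- Same statement the software runs; compare CPU time, elapsed time,
-- and logical vs. physical reads across several executions.
SELECT * FROM CONTACT_CONTACT WITH (NOLOCK) WHERE MKEY = 'XXXXXXXXXXXX';

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

If the elapsed time varies in SSMS as well, the variance is happening inside SQL Server rather than in the calling service or the network.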
Related
I'm running a SQL query that is taking at least 7 seconds to return. I'm wondering if there is a way to determine how many MB that query is returning. I'm trying to figure out how much time is spent on the actual query versus how much time is spent transferring the results from the server.
It is a simple SQL query, something like:
select * from Table where this = 'that'
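Two hedged starting points, reusing the placeholder table and column names from the query above: estimate the size of the result set with DATALENGTH, and use SET STATISTICS TIME to separate server execution cost from time spent feeding rows to the client.

-- Rough size of the result set in MB; col1..col3 are placeholders for the
-- columns the query actually returns.
SELECT SUM(ISNULL(DATALENGTH(col1), 0)
         + ISNULL(DATALENGTH(col2), 0)
         + ISNULL(DATALENGTH(col3), 0)) / 1024.0 / 1024.0 AS result_mb
FROM [Table]
WHERE this = 'that';

-- CPU time is pure server work; elapsed time also includes waiting for the
-- client to consume the rows, which hints at transfer overhead.
SET STATISTICS TIME ON;
SELECT * FROM [Table] WHERE this = 'that';
SET STATISTICS TIME OFF;

SSMS also has an "Include Client Statistics" option that reports bytes received from the server, which gives a similar size figure without listing every column.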
I have a strange problem with a Postgres 10 database.
After restoring a table with around 2 million records, I tried to run a query to measure its execution time, since it had been acting slow on another server.
Shortly after the restore, the query was executing in about 1.5 seconds.
After around one hour, the same query was executing in 30 to 40 seconds.
The query is nothing fancy:
SELECT f1,f2,f3 FROM table WHERE f4=false
The execution plan is the same as before.
No writes have been done on the table, and the server wasn't under load from other tasks.
How is this possible? How can I investigate the cause of the problem?
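A couple of hedged starting points for the investigation, using the placeholder table and column names from the query above: compare the actual run with EXPLAIN (ANALYZE, BUFFERS), and check whether autovacuum or autoanalyze has touched the table since the restore.

-- Shows actual time and how much of the data came from shared buffers vs. disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT f1, f2, f3 FROM table WHERE f4 = false;

-- Vacuum/analyze activity and tuple counts for the restored table.
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'table';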
I have a single Universe query with 4-5 filters that takes almost 5 minutes to run using the Webi rich client. When I copy the SQL code and run it from SQL Server Management Studio (SSMS), it takes 10 seconds. I have created only the data query and don't have any reports or variables. When I run the query using the Webi HTML client, it also runs in 10 seconds.
Both SSMS and Webi return 110,000 rows. If I stop the Webi query after about 20 seconds, it has only returned 5,000 rows, so it's not finishing and then getting hung up.
If I replace the Webi Universe query with a stored procedure (FHSQL) using the same SQL code, it takes 80 seconds. There are query filters in place. Without the WHERE clauses, SSMS takes 65 seconds to return 990,000 rows.
              Filtered    All_Records
# of Rows:    110,000     990,000
--------------------------------------------
SQL (SSMS):   10 sec      65 sec
Webi HTML:    10 sec
Stored Proc:  80 sec
Rich client:  270 sec
Only the rich client is slow, and much slower than would be expected.
This is mostly because of non-tuned Array Fetch Size and Array Bind Size. (You can find them in the universe parameters.) The easiest way to go about it would be:
Identify 2-3 reports that retrieve a considerable number of rows.
Record their execution times (you can probably use scheduling).
Increase the parameters, mainly the Array Fetch Size, in steps of 50.
Check the execution times again.
Based on the performance gain or loss, fine-tune the parameters.
I recently experienced this issue again after making changes to the PRM configuration files:
C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\dataAccess\connectionServer\odbc\extensions\export
I was having date conversion errors when running my query and fixed them by setting the date format in the configuration files. The errors went away, but the query started to take 9 minutes instead of 1 minute.
I corrected the configuration file, and the query refreshed in 1 minute once again.
So, incorrect changes to the PRM / date configuration files can cause Webi to do unnecessary date conversions and really slow down the query response times.
This information is in addition to the tips provided by Vimal above.
I am facing an issue with my SQL Server 2008 R2 instance. Previously it executed everything fine, but for the last 2 days it has not been responding even to small SELECT queries. I didn't update or change anything, but it now has this problem and I can't find the cause.
I have a table that contains 36,581 records.
When I write a simple SELECT query for that table:
SELECT * FROM [TABLE NAME]
It shows the first 152 records and after that does not return any more rows, but it keeps running for what seems like an infinite amount of time; I have seen the elapsed time reach around 30 minutes with no records shown in the result beyond the 152 that appeared at first.
Try running DBCC CHECKDB on your database, like:
DBCC CHECKDB('#databasename')
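For example (the database name below is a placeholder; NO_INFOMSGS just suppresses the informational output so any errors stand out):

-- 'MyDatabase' is a placeholder for your actual database name.
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;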
I need some help explaining this behavior in SQL Server 2008. I have a simple C++ program that is running a query in a loop. Here is the pseudocode of the program:
myTempID = 0;
while (1) {
    execQuery(select a.id
              from a, b, c, d
              where a.id = b.id
                and b.id = c.id
                and c.id = d.id
                and d.id = myTempID)
}
Here are some facts:
a, b, and c are empty tables
d has about 5,500 rows
The query starts out taking 0 msec (I can see this in the Profiler), but after X iterations it jumps to about 60 msec and stays there. X varies; sometimes it's 100, sometimes 200. The weird thing is that once it makes the jump from 0 to 60 msec, it stays there no matter what myTempID is.
To me it sounds like SQL Server is somehow 'de-caching' the query plan. Does this make sense to anyone?
Thanks!
The results from SQL Profiler can be tricky to interpret.
The time shown for a command includes the time for the result set to be delivered to the client. To see this, create a SELECT statement that returns at least a million rows. Run these tests in SQL Server Management Studio (SSMS) and run SQL Profiler to trace the results.
For the first run, send the SQL results to a temporary table (this should take a second or so). For the second run, send the SQL results to the Results window (this should take a few seconds). Note the run time shown in SSMS, then note the time reported by SQL Profiler.
You should see that the time SSMS takes to read the result set, format the results, and display them in the Results window increases the duration reported for the query.
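A minimal sketch of that comparison; dbo.BigTable is a placeholder for any table that returns a large result set.

-- Run 1: rows go into a temp table, so SSMS does not have to render them.
SELECT *
INTO #discard
FROM dbo.BigTable;

-- Run 2: the same rows go to the SSMS Results window.
SELECT *
FROM dbo.BigTable;

DROP TABLE #discard;

Compare the durations SQL Profiler reports for the two statements with the elapsed times SSMS shows; the gap is the client-side cost of reading and displaying the rows.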
After all that, my point is that when you are running the query from your application, at that level of precision (60 ms) you cannot tell from the reported duration alone whether the slowdown is coming from the database, the network, or the application.
You should create a test script and run the query in SSMS and see if the query time degrades when your application is not part of the loop.
SQL Profiler 2008 records duration in microseconds but only displays it in milliseconds, so rounding is an issue. Save the trace as a trace table and look at the Duration column to see the microseconds. If the problem is within SQL Server, you may see the duration increasing over time.
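For example, assuming the trace was saved to a table named dbo.MyTrace (a hypothetical name):

-- Duration in a saved trace table is stored in microseconds, so you can see
-- small increases that the Profiler grid rounds away.
SELECT TextData, StartTime, Duration
FROM dbo.MyTrace
WHERE TextData LIKE '%a.id%'   -- narrow to the statement from the loop
ORDER BY StartTime;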
You can also have the Profiler return the execution plan. You can look at the execution plan before and after the duration increases and see if the execution plan is changing.
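As an alternative to capturing the plan in Profiler, here is a hedged sketch using the SQL Server 2008 plan-cache DMVs to see whether the statement's cached plan is being replaced:

-- If creation_time changes while the loop is running, the plan has been
-- recompiled or evicted and re-cached.
SELECT qs.creation_time,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%a.id = b.id%';   -- narrows to the query from the loop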