I have a single Universe query with 4-5 filters that takes almost 5 minutes to run in the Webi Rich Client. When I copy the SQL and run it from SQL Server Management Studio (SSMS), it takes 10 seconds. I have created only the data query; there are no reports or variables. When I run the query in Webi HTML, it also finishes in 10 seconds.
Both SSMS and Webi return 110,000 rows. If I stop the Webi query after about 20 seconds, it has only returned 5,000 rows, so it is not finishing and then hanging afterwards; it is slow throughout the fetch.
If I replace the Webi Universe query with a stored procedure (FHSQL) using the same SQL, it takes 80 seconds. The timings above are with the query filters in place; without the WHERE clauses, SSMS takes 65 seconds to return 990,000 rows:
              Filtered    All_Records
# of Rows:    110,000     990,000
--------------------------------------------
SQL (SSMS):   10 sec      65 sec
Webi HTML:    10 sec      -
Stored Proc:  80 sec      -
Rich client:  270 sec     -
Only the rich client is slow, and by much more than would be expected.
This is most likely due to untuned Array Fetch Size and Array Bind Size parameters (you can find them in the universe parameters). The easiest way to tune them:
Identify 2-3 reports that retrieve a considerable number of rows.
Record their execution times (you can probably use scheduling for this).
Increment the parameters, mainly the Array Fetch Size, in steps of 50.
Check the execution times again.
Fine-tune the parameters based on the performance gain or loss.
I recently experienced this issue again after making changes to the PRM configuration files:
C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\dataAccess\connectionServer\odbc\extensions\export
I was getting date conversion errors when running my query and fixed them by setting the date format in the configuration files. The errors went away, but the query started taking 9 minutes instead of 1 minute.
After I corrected the configuration file, the query refreshed in 1 minute once again.
So incorrect changes to the PRM / date configuration files can cause Webi to perform unnecessary date conversions and seriously slow down query response times.
This information is in addition to the tips provided by Vimal above.
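As a hedged illustration of why stray date conversions hurt (this is plain T-SQL, not Webi-generated SQL, and the table and column names are hypothetical): a predicate that converts the column on every row defeats any index on it, while a direct comparison does not.

-- Hypothetical table: dbo.Sales(SaleDate datetime, Amount money), indexed on SaleDate.
-- Slow: converting the column for every row forces a scan.
SELECT Amount
FROM dbo.Sales
WHERE CONVERT(varchar(10), SaleDate, 120) = '2015-06-01';

-- Fast: comparing the column directly lets the index on SaleDate be used.
SELECT Amount
FROM dbo.Sales
WHERE SaleDate >= '2015-06-01' AND SaleDate < '2015-06-02';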
Related
I have a strange problem with a Postgres 10 database.
After restoring a table with around 2 million records, I tried to run a query to measure its execution time, since it had been acting slow on another server.
Shortly after the restore, the query executed in about 1.5 seconds.
After around one hour, the same query was taking 30-40 seconds.
The query is nothing fancy:
SELECT f1,f2,f3 FROM table WHERE f4=false
The execution plan is the same as before.
No writes have been done on the table, and the server was not under load from other tasks.
How is this possible? How can I investigate the cause of the problem?
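A hedged starting point for that investigation, using only standard PostgreSQL tooling ("table" below is the placeholder name from the question):

-- Re-run the query with instrumentation; BUFFERS shows how much I/O it performs.
EXPLAIN (ANALYZE, BUFFERS)
SELECT f1, f2, f3 FROM table WHERE f4 = false;

-- Check whether vacuum/analyze has run since the restore and how many dead tuples exist.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum, last_analyze
FROM pg_stat_user_tables
WHERE relname = 'table';

Comparing the ANALYZE output from a fast run and a slow run shows whether the extra time goes into reading more buffers or somewhere else entirely.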
I am trying to run a simple SELECT statement, and each time I run it, it takes a different amount of time to complete.
The first run takes 0 seconds, the second takes 3 seconds, and the third takes 10 seconds. If I keep running it, the times cycle through 0, 3, 10 again and so on.
Why is this happening? It seems there is some kind of logic behind it.
This is causing the service that uses the database to time out. The query is run thousands of times by a specific piece of software.
SQL Query:
SELECT * FROM CONTACT_CONTACT WITH (NOLOCK) WHERE MKEY ='XXXXXXXXXXXX'
I am using SQL Server 2012. The database contains 369 tables, and the table CONTACT_CONTACT contains 62,497 records.
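A hedged way to see where those varying seconds go, using standard SQL Server diagnostics (no assumption about the cause):

-- Show elapsed/CPU time and logical reads for each execution in SSMS.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT * FROM CONTACT_CONTACT WITH (NOLOCK) WHERE MKEY = 'XXXXXXXXXXXX';

-- List the table's indexes; without an index on MKEY, every run is a full scan.
EXEC sp_helpindex 'CONTACT_CONTACT';

If the logical reads stay constant while elapsed time varies, the time is going to waits or the client rather than to the query itself.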
We have a situation where a query run through SSIS packages is slow during the weekend run, whereas it completes in 3 minutes during weekday runs. There is a considerable increase in the record count on weekends, but even with that it should run in at most 15 minutes.
We have found a temporary solution, but it is a manual effort that has to be performed for every weekend run.
The temporary solution is to run the SQL task's source query in SSMS before the package is triggered from the job.
The manually run query also takes a long time to execute, but we abort it; just starting it puts an execution plan into the plan cache of that database server.
After that, when the query runs from the package, it executes in 3 minutes regardless of the number of records.
Kindly let us know if a permanent fix can be done for this.
Thanks,
SANDY
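A hedged sketch of two candidate permanent fixes, assuming the slowdown really is a stale cached plan compiled against weekday row counts (the table, columns, and filter below are hypothetical):

-- Candidate 1: add a T-SQL job step before the SSIS step that refreshes
-- statistics, so the weekend compilation sees current row counts.
UPDATE STATISTICS dbo.WeekendSource;  -- hypothetical table name

-- Candidate 2: make the source query compile a fresh plan on every run
-- instead of reusing a plan built for weekday data volumes.
DECLARE @start date = '2016-01-02';   -- hypothetical parameter
SELECT col1, col2                     -- hypothetical columns
FROM dbo.WeekendSource
WHERE load_date >= @start
OPTION (RECOMPILE);

Either would replace the manual SSMS warm-up with something the Agent job does itself.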
I need some help explaining this behavior in SQL Server 2008. I have a simple C++ program that runs a query in a loop. Here is the pseudocode of the program:
myTempID = 0;
while (1) {
    execQuery(SELECT a.id
              FROM a, b, c, d
              WHERE a.id = b.id
                AND b.id = c.id
                AND c.id = d.id
                AND d.id = myTempID)
}
Here are some facts:
a, b, c are empty tables
d has about 5,500 rows
The query starts out taking 0 ms (I can see this in the Profiler), but after X iterations it jumps to about 60 ms and stays there. X varies; sometimes it's 100, sometimes 200. The weird thing is that once it makes the jump from 0 to 60 ms, it stays there no matter what myTempID is.
To me it sounds like SQL Server is somehow 'de-caching' the query plan? Does this make sense to anyone?
Thanks!
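A hedged way to test the 'de-caching' theory with standard DMVs (nothing below is specific to this schema beyond the query-text filter):

-- Find the cached entry for the looped statement and note when it was compiled.
SELECT st.text,
       qs.creation_time,   -- changes if the plan was evicted and recompiled
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%from a,b,c,d%';  -- match the text as the application sends it

If creation_time resets around the moment the latency jumps, a recompile happened; if not, the plan is stable and the extra 60 ms is coming from somewhere else.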
The results from SQL Profiler can be tricky to interpret.
The time shown for a command includes the time for the result set to be delivered to the client. To see this, create a SELECT statement that returns at least a million rows. Run the tests in SQL Server Management Studio and run SQL Profiler to trace the results.
First run, send the SQL results to a temporary table (this should take a second or so). Second run, send the SQL results to the Results window (this should take a few seconds). Note the run time shown in SSMS, then note the time reported by SQL Profiler.
You should see that the time SSMS takes to read the result set, format the results, and display them in the Results window increases the duration reported for the query.
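A hedged sketch of that experiment (dbo.BigTable is a hypothetical table with a million-plus rows):

-- Run 1: results land in a temp table, so client delivery time is minimal.
SELECT *
INTO #scratch
FROM dbo.BigTable;

-- Run 2: the same rows stream to the SSMS Results window; Profiler now reports
-- a much larger duration for an identical query, because delivery is included.
SELECT *
FROM dbo.BigTable;

DROP TABLE #scratch;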
After all that, my point is: when you run the query from your application, at this level of precision (60 ms) you cannot tell from the reported duration alone whether the slowdown comes from the database, the network, or the application.
You should create a test script and run the query in SSMS to see whether the query time degrades when your application is not part of the loop.
SQL Profiler 2008 records duration in microseconds but only displays it in milliseconds, so rounding is an issue. Save the trace as a trace table and look at the Duration column to see the microseconds. If the problem is within SQL Server, you may see the duration increasing over time.
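A hedged sketch of reading those microsecond durations straight from a saved trace file (the file path and the query-text filter are hypothetical):

-- Load the trace file; Duration here is in microseconds (SQL Server 2005 and later).
SELECT TextData, StartTime, Duration
FROM fn_trace_gettable(N'C:\traces\loop_query.trc', DEFAULT)
WHERE TextData LIKE '%from a,b,c,d%'
ORDER BY StartTime;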
You can also have Profiler capture the execution plan. Look at the plan from before and after the duration increases and see whether it is changing.
The query has been canceled because the estimated cost of this query (1660) exceeds the configured threshold of 1500. Contact the system administrator.
I am getting the above error on a live system while running one of the stored procedure threads, where a parameter contains an XML variable.
I have checked the configured value of QUERY_GOVERNOR_COST_LIMIT; it is set to 1500.
To get around the problem I added SET QUERY_GOVERNOR_COST_LIMIT 0 to the stored procedures, and that works fine.
When I run the stored procedures directly on the back end, with or without the SET QUERY_GOVERNOR_COST_LIMIT 0 statement, they run fine and complete within 0 seconds.
But through the .NET application it causes a problem and raises the error.
So why does it give the error from the application and not from the SQL query window?
And since the query runs within 0 seconds, how can it trigger an error that should only apply when the execution time would exceed 15 seconds (as configured by QUERY_GOVERNOR_COST_LIMIT 1500)?
Please share your ideas for the analysis and a solution.
It could be because SET ARITHABORT is OFF from .NET.
It could also be a conversion problem. Look at your execution plan: do you see any conversions? How are you executing this from .NET, and are you using the correct data types?
Usually this happens because of different default ANSI settings for SSMS and .NET; they can produce different execution plans.
The first thing to check is the execution plans from both sources.
You can do this with SQL Profiler's Showplan XML event.
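A hedged sketch for comparing those session settings directly, using standard SQL Server views:

-- In SSMS: list this session's active SET options (ARITHABORT, ANSI_NULLS, etc.).
DBCC USEROPTIONS;

-- For other connections, such as the .NET application's, compare the same options.
SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

Any option that differs between the SSMS session and the application session is a candidate for the different plans.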
QUERY_GOVERNOR_COST_LIMIT is a connection-level runtime setting, so it needs to be set when the connection is made. When you test in an SSMS query window, you set it in the Query Options property window (right-click inside the query window, Query Options, Advanced, ...).
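A hedged sketch of what connection-level means in practice (the procedure name is hypothetical):

-- Sent as the first batch after the application connects, this lifts the
-- governor limit for this connection only; other connections keep 1500.
SET QUERY_GOVERNOR_COST_LIMIT 0;
EXEC dbo.usp_LoadContacts;  -- hypothetical procedure that previously hit the limit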
You also mention that the query executes in 0 seconds, so why does it error out from .NET with a setting of 15 seconds? Because the setting works on the estimated query execution cost, not the actual one. So the right question is: why has SQL Server estimated the execution cost at more than 15 seconds? And there is no single answer to that one.
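To see the estimate the governor checks, one hedged option is the estimated plan output, whose top row carries the plan's total cost (dbo.usp_LoadContacts is again a hypothetical stand-in for the failing procedure):

-- Returns estimated plan rows without executing; TotalSubtreeCost on the top
-- row is the figure compared against the governor's threshold.
SET SHOWPLAN_ALL ON;
GO
EXEC dbo.usp_LoadContacts;
GO
SET SHOWPLAN_ALL OFF;
GO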
Though I would like to know the user workflow or situation where you actually need this setting. The estimated cost is often very different from the actual cost, so unless the dev/DBA knows exactly what they are doing and what the execution will look like... it looks like I don't understand the practical usage of this setting.