I have a SQL query that uses a row offset to perform pagination, and it takes around 10 minutes to return 32 records; without the row offset, the same query returns 500+ records within a second.
I want to understand what could have led to this issue. Could anyone help?
Thank you!
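For reference, "row offset" pagination on SQL Server typically means the OFFSET ... FETCH clause; a minimal sketch of the kind of query in question (the table name, ordering column, and offset are assumptions; 32 is the page size from the question):

SELECT *
FROM dbo.Users
ORDER BY Id
OFFSET 500 ROWS FETCH NEXT 32 ROWS ONLY;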
My user database (App) and the system database (tempdb) were using different compatibility levels; as soon as I set my database's compatibility level to match the system database's, it worked like a charm.
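For anyone hitting the same thing, a minimal sketch of checking and aligning the levels on SQL Server, assuming the user database is named App (the level you set should be whatever tempdb reports):

SELECT name, compatibility_level
FROM sys.databases
WHERE name IN ('App', 'tempdb');

ALTER DATABASE App SET COMPATIBILITY_LEVEL = 150;  -- match the level tempdb reported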
I am using TypeORM for SQL Server in my application. When I pass a native query like connection.query("select * from user where id = 1"), the performance is really good, less than 0.5 seconds.
If we use the findOne or QueryBuilder method, it takes around 5 seconds to get a response.
On further debugging, we found that passing the value directly into the query like this,
return getConnection("vehicle").createQueryBuilder()
    .select("vehicle")
    .from(Vehicle, "vehicle")
    .where("vehicle.id = '" + id + "'")
    .getOne();
is faster than
return getConnection("vehicle").createQueryBuilder()
    .select("vehicle")
    .from(Vehicle, "vehicle")
    .where("vehicle.id = :id", { id: id })
    .getOne();
Is there any optimization we can do to fix the issue with the parameterized query?
I don't know TypeORM, but the difference seems clear to me. In one case you query the database for the whole table and filter it locally; in the other you send the filter to the database, and it filters the data before sending it back to the client.
Depending on the size of the table, this has a big impact. Consider picking one record out of 10 million: just transferring the data to the local machine is 10 million times slower.
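As a sketch of the two shapes of work this answer describes, using the table from the question:

-- Filter sent to the database: only the matching row crosses the network
SELECT * FROM vehicle WHERE id = 1;

-- No filter sent: every row crosses the network and is filtered client-side
SELECT * FROM vehicle;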
Looking around, with QSqlTableModel, the way to get all rows out of a table is
select();
while (canFetchMore()) {
    fetchMore();
}
The first select() seems fine, but fetchMore() seems to grab one row at a time. I'm hammering the SQL server, and a fetch of about 350 rows takes up to a couple of seconds, not to mention wasting a bunch of CPU.
The database is SQL Server. Is there no better way?
EDIT: After some digging and help from a DBA, I can confirm that I get different behavior out of two different databases. Unfortunately, they are the same version of SQL Server, and both are accessed via Microsoft's ODBC Driver on Linux. Under the hood, one will select 256 rows up front, and each iteration of fetchMore() will select another 256 rows. For the other, select() and fetchMore() only get one row at a time, which is causing all kinds of problems.
My solution, unfortunately, is to use a QSqlQuery against the QSqlDatabase instead.
When I type the query in SQL Developer, it returns data in less than a second. When I do the same in Oracle APEX, it takes much more time, over 5 seconds. I went into the DEBUG section to see what's wrong, and it returned this:
-IR binding: "APXWS_MAX_ROW_CNT" value="1000000"
I figured out that it fetches up to 1,000,000 rows, and that's why it is slower. But I don't know how to fix it so that I get approximately the same time as in SQL Developer.
"Leave the Maximum Row Count property null, so classic reports won't fetch all the way to this number and interactive reports won't introduce the analytic function count(*) over ().
Don't use a Pagination Type with a Z, so classic reports won't fetch all rows and interactive reports again won't introduce count(*) over ()."
source: http://rwijk.blogspot.ca/2016/11/performance-aspects-of-apex-reports.html
(I saved it in the Wayback Machine too in case the link goes away: http://web.archive.org/web/20170706183715/http://rwijk.blogspot.ca/2016/11/performance-aspects-of-apex-reports.html)
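For context, the analytic count the quote refers to turns the report query into something shaped roughly like this (a sketch, not the exact SQL APEX generates), which forces the database to visit every row just to compute the total:

SELECT t.*, COUNT(*) OVER () AS total_row_count
FROM (
    -- your report query goes here
    SELECT ...
) t;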
Putting limits on Maximum Row Count and Maximum Rows per Page can help you mitigate the load time.
You will never get the same performance as in SQL Developer from a web page, APEX or not.
I am a beginner with Oracle DB.
I want to know the execution time for a query. This query returns around 20,000 records.
In SQL Developer, it shows only 50 rows, tweakable to a maximum of 500; and using F5, up to 5,000.
I would have done this by making changes in the application, but redeployment is not possible as it is running in production.
So I am limited to using only SQL Developer. I am not sure how to get the number of seconds spent executing the query.
Any ideas on this would help me.
Thank you.
Regards,
JE
If you scroll down past the 50 rows initially returned, it fetches more. When I want all of them, I just click on the first of the 50, then press Ctrl+End to scroll all the way to the bottom.
This updates the display of the time that was used (just above the results it will say something like "All Rows Fetched: 20000 in 3.606 seconds"), giving you an accurate time for the complete query.
If your statement is part of an already deployed application and you have rights to access the view V$SQLAREA, you can check the number of EXECUTIONS and the CPU_TIME. You can search for the statement using SQL_TEXT:
SELECT CPU_TIME, EXECUTIONS
FROM V$SQLAREA
WHERE UPPER (SQL_TEXT) LIKE 'SELECT ... FROM ... %';
This is the most precise way to determine the actual run time. The view V$SESSION_LONGOPS might also be interesting for you.
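If you can read it, a query along these lines shows progress and elapsed time for operations the database considers long-running (a sketch; add filters on SID or OPNAME as needed):

SELECT opname, target, sofar, totalwork, elapsed_seconds, time_remaining
FROM v$session_longops
WHERE sofar < totalwork;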
If you don't have access to those views, you could also use a cursor loop to run through all the records, e.g.
CREATE OR REPLACE PROCEDURE speedtest AS
  l_count NUMBER;  -- renamed from "count" so it doesn't shadow the COUNT function
  l_start TIMESTAMP;
  CURSOR c_cursor IS
    SELECT ...;  -- your query here
BEGIN
  l_start := SYSTIMESTAMP;  -- start time stamp
  l_count := 0;
  FOR rec IN c_cursor
  LOOP
    l_count := l_count + 1;
  END LOOP;
  -- end time stamp: report row count and elapsed time
  DBMS_OUTPUT.put_line('Fetched ' || l_count || ' rows in ' || (SYSTIMESTAMP - l_start));
END;
Depending on the architecture, this might be more or less accurate, because the data might still need to be transmitted to the system your SQL is running on.
You can change those limits, but you'll spend some time on the data transfer between the DB and the client, and possibly on the display; that in turn is affected by the number of rows pulled by each fetch. Those things affect your application as well, though, so looking at the raw execution time might not tell you the whole story anyway.
To change the worksheet (F5) limit, go to Tools > Preferences > Database > Worksheet and increase the 'Max rows to print in a script' value (and maybe 'Max lines in Script output'). To change the fetch size, go to the Database > Advanced panel in the preferences; maybe set it to match your application's value.
This isn't perfect, but if you don't want to see the actual data and just want the time the query takes to run in the DB, you can wrap it so that it returns a single row:
select count(*) from (
    <your original query>
);
It will normally execute the entire original query and then count the results, which won't add anything significant to the time. (It's feasible the optimizer might rewrite the query internally, I suppose, but I think that's unlikely, and you could use hints to avoid it if needed.)
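If you want to guard against that rewrite, one option is Oracle's NO_MERGE hint, which tells the optimizer not to merge the inline view into the outer query (a sketch; q is just an alias for the wrapped query):

select /*+ NO_MERGE(q) */ count(*) from (
    <your original query>
) q;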
Is there a way to prevent screen output for a query run with --destination_table?
I want to move data sets through the workflow, but not necessarily see all the rows.
bug on job_73d3dffab7974d9db360f5c31a3a9fa7
This is a known issue; we'll fix it in the next version of bq. To work around it, you can add --max_rows=0. This only changes the number of rows that get sent back, not the number of rows the query produces (you can use LIMIT N in the query for that).
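For example, an invocation along these lines (the dataset and table names here are placeholders):

# --max_rows=0 suppresses the row printout; the destination table still gets all rows
bq query --destination_table=mydataset.mytable --max_rows=0 'SELECT * FROM mydataset.source_table'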