Looking around, with QSqlTableModel, the way to get all rows out of a table is
model.select();
while (model.canFetchMore()) {
    model.fetchMore();
}
The first select() seems fine, but fetchMore() seems to grab one row at a time. I'm hammering the SQL server, and a fetch of about 350 rows takes up to a couple of seconds, not to mention wasting a bunch of CPU.
The database is SQL Server. Is there no better way?
EDIT: After some digging and help from a DBA, I can confirm that I get different behavior out of two different databases. Unfortunately, they are the same version of SQL Server, and both are using the ODBC driver on Linux (written by Microsoft). Under the covers, one will select 256 rows on select(), and each iteration of fetchMore() will select another 256 rows. For the other, select() and fetchMore() only get one row at a time, which is causing all kinds of problems.
My solution, unfortunately, is to pass a QSqlQuery to the QSqlDatabase instead.
I have a SQL query where I am using a row offset to perform pagination, and it is taking around 10 minutes to return 32 records; without the row offset the same query returns 500+ records within a second.
I want to understand what could have led to this issue. Could anyone help?
Thank you!
My user database (app) and system database (tempdb) were using different compatibility levels; as soon as I set the user database's compatibility level to match the system database's, it worked like a charm.
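If it helps anyone, this is roughly how the levels can be inspected and aligned; the database name and level below are placeholders, not from the original system:

-- Check the compatibility level of the user database and of tempdb (names are placeholders)
SELECT name, compatibility_level
FROM sys.databases
WHERE name IN ('MyAppDb', 'tempdb');

-- Align the user database with the system database, e.g. SQL Server 2016 = level 130
ALTER DATABASE MyAppDb SET COMPATIBILITY_LEVEL = 130;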
I have a situation where I need to execute a patch script over a million rows of data. The current execution time does not meet expectations even for a small subset (18,000 rows), which takes around 4 hours (test data, before deploying to live).
The patch script selects the million rows of data in a loop and updates them according to the specification. I am just wondering how long it could take for a million rows, given that it takes around 4 hours for just 18,000 rows.
To overcome this problem I decided to create a temp table holding the entire result of the select statement and run the patch process against the temp table, where the process should be a bit faster than the current select-and-update approach.
Are there any other ways I can handle this situation? Any suggestions for solving this are welcome.
(Due to company policy I am unable to post the PL/SQL script here.)
Since it seems no one here can answer my question, I will post my own answer.
In Oracle there is Parallel Execution, which allows spreading the processing of a single SQL statement across multiple threads.
By using this method I brought my long-running query down from 4 hours to about 6 minutes.
For more information:
https://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm
http://www.oracle.com/technetwork/articles/database-performance/geist-parallel-execution-1-1872400.html
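As a rough sketch of what this looks like (the table name, column names, predicate, and degree of parallelism below are made up for illustration, since I cannot post the real script):

-- Allow the UPDATE itself, not just the query, to run in parallel
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(t, 8) */ big_patch_table t
SET    t.some_column = t.some_column * 2     -- placeholder for the real patch logic
WHERE  t.needs_patch = 'Y';

COMMIT;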
I have the following script:
SELECT
DEPT.F03 AS F03, DEPT.F238 AS F238, SDP.F04 AS F04, SDP.F1022 AS F1022,
CAT.F17 AS F17, CAT.F1023 AS F1023, CAT.F1946 AS F1946
FROM
DEPT_TAB DEPT
LEFT OUTER JOIN
SDP_TAB SDP ON SDP.F03 = DEPT.F03,
CAT_TAB CAT
ORDER BY
DEPT.F03
The tables are huge. When I execute the script in SQL Server directly it takes around 4 minutes, but when I run it in the third-party program (SMS LOC, based on Delphi) it gives me the error
<msg> out of memory</msg> <sql> the code </sql>
Is there any way I can lighten the script so it can be executed? Or has anyone had the same problem and solved it somehow?
I remember having had to resort to the ROBUST PLAN query hint once on a query where the query-optimizer kind of lost track and tried to work it out in a way that the hardware couldn't handle.
=> http://technet.microsoft.com/en-us/library/ms181714.aspx
But I'm not sure I understand why it would work for one 'technology' and not another.
Then again, the error message might not be from SQL but rather from the 3rd-party program that gathers the output and does so in a 'less than ideal' way.
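For reference, the hint goes in an OPTION clause at the very end of the statement; a hedged illustration using the query from the question (column list abbreviated):

SELECT DEPT.F03, DEPT.F238, SDP.F04      -- ...remaining columns as in the original query
FROM DEPT_TAB DEPT
LEFT OUTER JOIN SDP_TAB SDP ON SDP.F03 = DEPT.F03,
CAT_TAB CAT
ORDER BY DEPT.F03
OPTION (ROBUST PLAN);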
Consider adding paging to the user edit screen and the underlying data call. The point is that you don't need to see all the rows at one time, but they are available to the user upon request.
This will alleviate much of your performance problem.
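Something along these lines, assuming SQL Server 2012 or later (the page size and page number variables are placeholders; the column list is abbreviated):

DECLARE @PageNumber INT = 1, @PageSize INT = 200;

SELECT DEPT.F03, DEPT.F238, SDP.F04      -- ...remaining columns as in the original query
FROM DEPT_TAB DEPT
LEFT OUTER JOIN SDP_TAB SDP ON SDP.F03 = DEPT.F03,
CAT_TAB CAT
ORDER BY DEPT.F03
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;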
I had a project where I had to add over 7 million individual lines of T-SQL code via batch (I couldn't figure out how to programmatically leverage the new SEQUENCE command). The problem was that there was a limited amount of memory available on my VM (I was allocated the maximum amount of memory for this VM). Because of the large number of lines of T-SQL code, I first had to test how many lines it could take before the server crashed. For whatever reason, SQL Server (2012) doesn't release the memory it uses for large batch jobs such as mine (we're talking around 12 GB of memory), so I had to reboot the server every million or so lines. This is what you may have to do if resources are limited for your project.
I have an INSERT query in Oracle 10g that is getting stuck on a "SQL*Net message from dblink" event. It looks like:
INSERT INTO my_table (A, B, C, ...)
SELECT A, B, C, ... FROM link_table@other_system;
I do not see any locks on my_table besides the one from the INSERT I'm trying to do. The SELECT query on link_table@other_system completes without any trouble when run on its own. I only get this issue when I try to do the INSERT.
Does anyone know what could be going on here?
UPDATE
The SELECT returns 4857 rows in ~1.5 mins when run alone. The INSERT was running over an hour with this wait message before I decided to kill it.
UPDATE
I found an error in my methods. I was using a date range to limit the results. The date range I used when testing the SELECT alone was before the last OraStats run on the link_table, but the date range I used when testing the INSERT was after the last OraStats run on the link_table. So that misled me into believing there was a problem with the INSERT. Not very scientific of me; my mistake.
SQL*Net message from dblink generally means that your local system is waiting on the network to transfer the data across the network. It's a very normal wait event for this sort of query.
How many rows does the SELECT statement return? How much data (in MB/ GB) does that represent?
When you say that it "completes without any trouble on its own", are you actually fetching all the data? If you're using something like TOAD or SQL Developer, the GUI will generally fetch the first N rows and return to you. That can be very quick but it doesn't imply that the database is done executing the query-- it may take much more time to finish producing all the rows your query is going to return. It's pretty common for people to measure the time required to fetch the first N rows rather than the time to fetch the last row-- your INSERT statement, obviously, can't return until all the rows have been fetched from the remote table.
Are you using a /*+ driving_site(link_table) */ hint to make Oracle perform the joins on the remote server?
If so, that hint will not work with DML, as explained by Jonathan Lewis on this page.
This may be a rare case where running the query just as a SELECT uses a very different plan than running the query as part of an INSERT. (You will definitely want to learn how to generate explain plans in your environment. Most tools have a button to do this.)
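If it helps, one common way to see the plan the INSERT will actually use is along these lines (the column list is abbreviated as in the question):

EXPLAIN PLAN FOR
INSERT INTO my_table (A, B, C)
SELECT A, B, C FROM link_table@other_system;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);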
As Andras Gabor recommended in the link, you may want to use PL/SQL BULK COLLECT to improve performance. This may be a rare case where PL/SQL will work faster than SQL.
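A very rough sketch of what that might look like here; it assumes the remote table's columns line up with my_table, and the batch size of 1,000 is arbitrary:

DECLARE
  TYPE t_rows IS TABLE OF my_table%ROWTYPE;
  l_rows t_rows;
  CURSOR c IS
    SELECT * FROM link_table@other_system;        -- assumes columns match my_table
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;  -- pull the remote rows in batches
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO my_table VALUES l_rows(i);      -- array insert into the local table
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/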
I have an sqlite3 table that tells when I gain/lose points in a game. Sample/query result:
SELECT time,p2 FROM events WHERE p1='barrycarter' AND action='points'
ORDER BY time;
1280622305|-22
1280625580|-9
1280627919|20
1280688964|21
1280694395|-11
1280698006|28
1280705461|-14
1280706788|-13
[etc]
I now want my running point total. Given that I start w/ 1000 points,
here's one way to do it.
SELECT DISTINCT(time), (SELECT
1000+SUM(p2) FROM events e WHERE p1='barrycarter' AND action='points'
AND e.time <= e2.time) AS points FROM events e2 WHERE p1='barrycarter'
AND action='points' ORDER BY time
but this is highly inefficient. What's a better way to write this?
MySQL has @variables so you can do things like:
SELECT time, @tot := @tot + points ...
but I'm using sqlite3 and the above isn't ANSI standard SQL anyway.
More info on the db if anyone needs it: http://ccgames.db.94y.info/
EDIT: Thanks for the answers! My dilemma: I let anyone run any
single SELECT query on "http://ccgames.db.94y.info/". I want to give
them useful access to my data, but not to the point of allowing
scripting or allowing multiple queries with state. So I need a single
SQL query that can do accumulation. See also:
Existing solution to share database data usefully but safely?
SQLite is meant to be a small embedded database. Given that definition, it is not unreasonable to find many limitations with it. The task at hand is not solvable using SQLite alone, or it will be terribly slow as you have found. The query you have written is a triangular cross join that will not scale, or rather, will scale badly.
The most efficient way to tackle the problem is through the program that is making use of SQLite, e.g. if you were using Web SQL in HTML5, you can easily accumulate in JavaScript.
There is a discussion about this problem in the sqlite mailing list.
Your two options are:
1. Iterate through all the rows with a cursor and calculate the running sum on the client.
2. Store sums instead of, or as well as, storing points. (If you only store sums you can get the points by doing sum(n) - sum(n-1), which is fast.) A sketch of this follows.
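A minimal sketch of option 2, assuming a new column is added to hold the running balance (the column name is made up, and the starting total of 1000 comes from the question):

ALTER TABLE events ADD COLUMN balance INTEGER;   -- running total after each event

-- Reading the running total is then a plain ordered scan
SELECT time, balance
FROM events
WHERE p1='barrycarter' AND action='points'
ORDER BY time;

-- The per-event delta can still be recovered from consecutive balances
SELECT time,
       balance - COALESCE((SELECT prev.balance
                           FROM events prev
                           WHERE prev.p1 = events.p1
                             AND prev.action = 'points'
                             AND prev.time < events.time
                           ORDER BY prev.time DESC
                           LIMIT 1), 1000) AS points
FROM events
WHERE p1='barrycarter' AND action='points'
ORDER BY time;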