Very slow performance with COUNT(*) on a subquery - sql

I need to know the total rows returned by a query in order to fill the pagination text on a web page.
I'm doing pagination on the SQL side to improve performance.
Using the query below, I get 6560 records in 15 seconds, which is slow for my needs:
1.
SELECT COUNT(*)
FROM dbo.vw_Lista_Pedidos_Backoffice_ix vlpo WITH (NOLOCK)
WHERE dataCriacaoPedido >= DATEADD(month, -6, GETDATE())
Using this query, I get the same result in 1 second:
2.
SELECT COUNT(*) FROM
(SELECT *, ROW_NUMBER() over (order by pedidoid desc) as RowNumber
FROM dbo.vw_Lista_Pedidos_Backoffice_ix vlpo WITH (NOLOCK)
WHERE dataCriacaoPedido >= DATEADD(month, -6, GETDATE())
) records
WHERE RowNumber BETWEEN 1 AND 6560
If I change query 2. and set the upper limit of RowNumber to a number greater than 6560 (the result of COUNT(*)), the query again takes 15 seconds to run!
So, my questions are:
- why does query 2. take so much less time, even though the limit on RowNumber doesn't actually exclude any rows of the subquery?
- is there any way I can use query 2. to my advantage to get the total rows?
Thanks all :)

This isn't going to fully answer your question, because the real answer lies in the view definition and optimizing that. This is intended to answer questions about behavior.
COUNT(*) is slower because it has to generate all the rows in the view and then count them. The counting isn't the issue; the generation is.
ROW_NUMBER() OVER (ORDER BY pedidoid DESC) is fast because an index exists on pedidoid. SQL Server uses the index for ROW_NUMBER(), and, just as important, it can access the data in the view through the same index. That speeds up the query.
As for why there is a magic number at 6,561: that I don't know. It has to do with the vagaries of the SQL Server optimizer and your configuration. One possibility involves the WHERE clause:
WHERE dataCriacaoPedido >= DATEADD(month, -6, getdate())
My guess is that there are 6,560 matches to the condition, but SQL Server has to scan the whole table to find them. With the RowNumber limit at 6,560 it can stop as soon as the last match has been numbered; with a higher limit, the engine does not know that it is done, so it keeps searching for rows. As I say, though, this is speculation that would explain the behavior.
To really fix the query, you need to understand how the view works.
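As for the second question: one possibility, hedged because only your execution plan can confirm it keeps the fast behaviour of query 2., is to reuse that same ROW_NUMBER() subquery and read off the largest row number instead of counting:
-- A sketch only: relies on the optimizer using the pedidoid index just as it does for query 2.
-- Note: MAX returns NULL when there are no matching rows, unlike COUNT(*) which returns 0.
SELECT MAX(RowNumber) AS TotalRows
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY pedidoid DESC) AS RowNumber
    FROM dbo.vw_Lista_Pedidos_Backoffice_ix vlpo WITH (NOLOCK)
    WHERE dataCriacaoPedido >= DATEADD(month, -6, GETDATE())
) records;
If the plan for this reverts to the same work as the plain COUNT(*), it buys you nothing, which is why fixing the view remains the real answer.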

Related

Splitting large table into 2 dataframes via JDBC connection in RStudio

Through R I connect to a remotely held database. The issue I have is that my hardware isn't great and the dataset contains tens of millions of rows, with about 10 columns per table. When I run the code below, at the df step, I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the tables into two parts, then filtering/analysing/combining them as needed going forward. I think that because I use the conn JDBC connection I have to use SQL syntax to make it work. With SQL, I start with the code below:
df <- querySql(conn,"SELECT TOP 5000000 FROM Table1")
And where I get stuck is how to create a second dataframe with the remaining n - 5000000 rows, ending at the final row retrieved from Table1.
I'm open to suggestions, but I think there are two potential answers to this question. The first is to work within querySql and get it done in SQL. The second is to use an R function other than querySql (no idea what this would look like). I'm limited to R due to my work environment.
The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs impose an order on their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records returned by a query (let's call these the "returned records") below the number of records that could be returned by that query (the "selected records"), and you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() window function.
Importantly, you will still have to specify id columns by which to order the partition. This is unavoidable under any conceivable solution; there always must be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 0 and rn <= 5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 5000000 and rn <= 10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 10000000 and rn <= 15000000;
...
Obviously, this solution is more complicated and verbose than the previous one. And the previous solution might allow for performance optimizations not possible under the more manual approach of partitioning and filtering. Hence I would recommend the previous solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
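For illustration, a minimal sketch of that OFFSET/FETCH form (id1 and id2 stand in for whatever columns fully order Table1; adjust to your schema):
-- Second page of 5,000,000 rows; ORDER BY is mandatory for OFFSET/FETCH in SQL Server
SELECT *
FROM Table1
ORDER BY id1, id2
OFFSET 5000000 ROWS
FETCH NEXT 5000000 ROWS ONLY;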

SQL Server pagination query - performance considerations

I am working with SQL and am not very familiar with the performance aspects. I am forming the query dynamically using C#, with pagination in mind.
On every pagination click I fetch 10 records, and my sample query is like the one below:
Select *
from (Select ROW_NUMBER() OVER (ORDER BY TestId)[RowNumber],TestId...........) as paging
Where RowNumber BETWEEN 10 AND 20
where TestId is the primary key.
This works perfectly. I posted only the syntax because the data is confidential. It executes in, say, 6 seconds.
If the user clicks the last page, I form the query below:
Select *
from (Select ROW_NUMBER() OVER (ORDER BY TestId)[RowNumber],TestId...........) as paging
Where RowNumber BETWEEN 30000 AND 30010
The above query takes 40 seconds.
What is the core thing I am missing?
Each time I fetch 10 records, yet there is a huge difference in time.
Thanks
There's no way around this problem, I'm afraid. With every method you have to somehow calculate the numbers for every row, and you either precalculate them in some temp table / indexed view, or let SQL Server do it on the fly (your current solution).
If you want to boost the performance of the current query, add an index on TestId (even though it's already the PK) with included columns (you must include all columns that will be returned):
create index idxI__testid on <yourtable> (TestId) include (<column1>,<column2>)
But this only makes sense if you return only a few of the columns.
1) TestId needs to be indexed. Use INCLUDE (columns to return) when creating the index, as suggested above.
2) Try using SELECT TOP. For example (a fuller sketch follows the snippet below):
Select * from (Select TOP 20 ROW_NUMBER() OVER (ORDER BY TestId)[RowNumber],TestId...........)
as paging
Where RowNumber BETWEEN 10 AND 20
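To make suggestion 2) concrete, here is a sketch against a hypothetical table dbo.TestResults(TestId, TestName, Score); the TOP on the inner query lets the engine stop after the rows the page actually needs, and the inner ORDER BY keeps that TOP deterministic:
SELECT *
FROM (
    SELECT TOP 20 ROW_NUMBER() OVER (ORDER BY TestId) AS RowNumber,
           TestId, TestName, Score
    FROM dbo.TestResults
    ORDER BY TestId
) AS paging
WHERE RowNumber BETWEEN 10 AND 20;
Note that this trick only helps for early pages; for the last page the TOP value has to be as large as the whole result, so the 40-second case is not improved.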

Using LIMIT in a SQLite SQL statement in combination with an ORDER BY clause

Will the following two SQL statements always produce the same result set?
1. SELECT * FROM MyTable where Status='0' order by StartTime asc limit 10
2. SELECT * FROM (SELECT * FROM MyTable where Status='0' order by StartTime asc) limit 10
Yes, but ordering subqueries is probably a bad habit to get into. You could feasibly add a further ORDER BY outside the subquery in your second example, e.g.
SELECT *
FROM (SELECT *
FROM Test
ORDER BY ID ASC
) AS A
ORDER BY ID DESC
LIMIT 10;
SQLite still performs the ORDER BY on the inner query before sorting the rows again in the outer query. A needless waste of resources.
I've done an SQL Fiddle to demonstrate so you can view the execution plans for each.
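If you'd rather check locally than on SQL Fiddle, SQLite's EXPLAIN QUERY PLAN shows whether a sort step appears in each case, e.g.:
EXPLAIN QUERY PLAN
SELECT * FROM MyTable WHERE Status='0' ORDER BY StartTime ASC LIMIT 10;

EXPLAIN QUERY PLAN
SELECT * FROM (SELECT * FROM MyTable WHERE Status='0' ORDER BY StartTime ASC) LIMIT 10;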
No. First, because the StartTime column may not have a UNIQUE constraint. So even the first query may not always produce the same result - compared with itself!
Second, even if there are never two rows with the same StartTime, the answer is still negative.
The first statement will always order on StartTime and produce the first 10 rows. The second query may produce the same result set but only with a primitive optimizer that doesn't understand that the ORDER BY in the subquery is redundant. And only if the execution plan includes this ordering phase.
The SQLite query optimizer may (at the moment) not be very bright and do just that (no idea really, we'll have to check the source code of SQLite*). So, it may appear that the two queries are producing identical results all the time. Still, it's not a good idea to count on it. You never know what changes will be made in a future version of SQLite.
I think it's not good practice to use LIMIT without ORDER BY, in any DBMS. It may work now, but you never know how long these queries will be used by the application. And you may not be around when SQLite is upgraded or the DBMS is changed.
(*) @Gareth's link provides the execution plan, which suggests that the current SQLite code is dumb enough to execute the redundant ordering.

Processing a large table - how do I select the records page by page?

I need to run a process over all the records in a table. The table could be very big, so I'd rather process the records page by page. I need to remember the records that have already been processed so they are not included in my second SELECT result.
Like this:
For the first run,
[SELECT 100 records FROM MyTable]
For the second run,
[SELECT another 100 records FROM MyTable]
and so on...
I hope you get the picture. My question is: how do I write such a SELECT statement?
I'm using Oracle, by the way, but it would be nice if this could run on any other DB too.
I also don't want to use a stored procedure.
Thank you very much!
Any solution you come up with to break the table into smaller chunks will end up taking more time than just processing everything in one go, unless the table is partitioned and you can process exactly one partition at a time.
If a full table scan takes 1 minute, it will take you 10 minutes to break up the table into 10 pieces. If the table rows are physically ordered by the values of an indexed column that you can use, this changes a bit due to the clustering factor, but it will still take longer than just processing everything in one go.
This all depends on how long it takes to process one row from the table, of course. You could choose to reduce the load on the server by processing chunks of data, but from a performance perspective, you cannot beat a full table scan.
You are most likely going to want to take advantage of Oracle's stopkey optimization, so you don't end up with a full table scan when you don't want one. There are a couple of ways to do this. The first way is a little longer to write, but lets Oracle automatically figure out the number of rows involved:
select *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
    where rownum <= 200
)
where rn >= 101;
You could also achieve the same thing with the FIRST_ROWS hint:
select /*+ FIRST_ROWS(200) */ *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
)
where rn between 101 and 200;
I much prefer the rownum method, so you don't have to keep changing the value in the hint (which would need to represent the end value and not the number of rows actually returned to the page to be accurate). You can set up the start and end values as bind variables that way, so you avoid hard parsing.
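For instance, a sketch of the first form with hypothetical bind variables :first_row and :last_row (table, filter, and order-by columns are placeholders, as above):
select *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
    where rownum <= :last_row   -- stopkey: stop fetching once the page end is reached
)
where rn >= :first_row;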
For more details, you can check out this post

How can I speed up row_number in Oracle?

I have a SQL query that looks something like this:
SELECT * FROM(
SELECT
...,
row_number() OVER(ORDER BY ID) rn
FROM
...
) WHERE rn between :start and :end
Essentially, it's the ORDER BY part that's slowing things down. If I were to remove it, the EXPLAIN cost goes down by orders of magnitude (over 1000x). I've tried this:
SELECT
...
FROM
...
WHERE
rownum between :start and :end
But this doesn't give correct results. Is there any easy way to speed this up? Or will I have to spend some more time with the EXPLAIN tool?
ROW_NUMBER is quite inefficient in Oracle.
See the article in my blog for performance details:
Oracle: ROW_NUMBER vs ROWNUM
For your specific query, I'd recommend you to replace it with ROWNUM and make sure that the index is used:
SELECT *
FROM (
SELECT /*+ INDEX_ASC(t index_on_column) NOPARALLEL_INDEX(t index_on_column) */
t.*, ROWNUM AS rn
FROM table t
ORDER BY
column
)
WHERE rn >= :start
AND rownum <= :end - :start + 1
This query will use COUNT STOPKEY.
Also, either make sure your column is not nullable, or add a WHERE column IS NOT NULL condition.
Otherwise the index cannot be used to retrieve all values.
Note that you cannot use ROWNUM BETWEEN :start and :end without a subquery.
ROWNUM is always assigned last and checked last; that's why ROWNUM values always come in order, without gaps.
If you use ROWNUM BETWEEN 10 AND 20, the first row that satisfies all other conditions becomes a candidate for returning, is temporarily assigned ROWNUM = 1, and fails the test of ROWNUM BETWEEN 10 AND 20.
Then the next row becomes a candidate, is assigned ROWNUM = 1 and fails, etc., so in the end no rows are returned at all.
This is worked around by assigning ROWNUM in a subquery.
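To illustrate with a hypothetical employees table:
-- Returns no rows: each candidate row is assigned ROWNUM = 1 and immediately fails the test
select * from employees where rownum between 10 and 20;

-- Workaround: assign ROWNUM in a subquery, then filter on the alias
select *
from (select e.*, rownum rn from employees e)
where rn between 10 and 20;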
Looks like a pagination query to me.
From this ASKTOM article (about 90% down the page):
You need to order by something unique for these pagination queries, so that ROW_NUMBER is assigned deterministically to the rows each and every time.
Also, your queries are nowhere near the same, so I'm not sure what the benefit of comparing the costs of one to the other is.
Is your ORDER BY column indexed? If not that's a good place to start.
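If the ID column you order by is not actually unique in what the inner query selects from, one hedged option is to append a unique tie-breaker to the window ordering so ROW_NUMBER is assigned deterministically (table and column names here are only illustrative):
select *
from (
    select t.*,
           row_number() over (order by created_date, id) rn   -- id breaks ties between equal dates
    from orders t
)
where rn between :start and :end;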
Part of the problem is how big the 'start' to 'end' span is and where those rows 'live'.
Say you have a million rows in the table and you want rows 567,890 to 567,900: then you are going to have to live with the fact that the database needs to go through the entire table, sort pretty much all of it by id, and work out which rows fall into that range.
In short, that's a lot of work, which is why the optimizer gives it a high cost.
It is also not something an index can help with much. An index would give the order, but at best, that gives you somewhere to start and then you keep reading on until you get to the 567,900th entry.
If you are showing your end user 10 items at a time, it may be worth actually grabbing the top 100 from the DB, then having the app break that 100 into ten chunks.
Spend more time with the EXPLAIN PLAN tool. If you see a TABLE SCAN you need to change your query.
Your query makes little sense to me. Querying over a ROWID seems like asking for trouble. There's no relational info in that query. Is it the real query that you're having trouble with or an example that you made up to illustrate your problem?