SQL performance: WHERE vs WHERE(ROW_NUMBER)

I want to get the n-th through m-th records in a table. Which is the better choice of the two solutions below?
Solution 1:
SELECT * FROM Table WHERE ID >= n AND ID <= m
Solution 2:
SELECT *
FROM (SELECT *,
             ROW_NUMBER() OVER (ORDER BY ID) AS row
      FROM Table) a
WHERE row >= n AND row <= m

As others have already pointed out, the two queries return different results; comparing them is comparing apples to oranges.
But the underlying question remains: which is faster, keyset-driven paging or row-number-driven paging?
Keyset Paging
Keyset-driven paging relies on remembering the top and bottom keys of the last displayed page, and requesting the next or previous set of rows relative to that keyset:
Next page:
select top (<pagesize>) ...
from <table>
where key > #last_key_on_current_page
order by key;
Previous page:
select top (<pagesize>) ...
from <table>
where key < #first_key_on_current_page
order by key desc;
This approach has two main advantages over the ROW_NUMBER approach, or over the equivalent LIMIT approach of MySQL:
- it is correct: unlike the row-number-based approach, it correctly handles new and deleted entries. The last row of page 4 does not show up as the first row of page 5 just because row 23 on page 2 was deleted in the meantime, nor do rows mysteriously vanish between pages. These anomalies are common with the row_number-based approach; the keyset-based solution does a much better job of avoiding them.
- it is fast: all operations can be solved with a fast row positioning followed by a range scan in the desired direction.
However, this approach is difficult to implement, hard for the average programmer to understand, and not well supported by tools.
Row Number Driven
This is the common approach introduced with Linq queries:
select ...
from (
    select ..., row_number() over (...) as rn
    from table) t
where rn between #firstRow and #lastRow;
(or a similar query using TOP)
This approach is easy to implement and is supported by tools (specifically by the Linq .Skip and .Take operators). But this approach is guaranteed to scan the index in order to count the rows: it usually works very fast for page 1 and gradually slows down as one moves to higher and higher page numbers.
As a bonus, with this solution it is very easy to change the sort order (simply change the OVER clause).
Overall, given the ease of the ROW_NUMBER() based solutions, the support they have from Linq, and the simplicity of using arbitrary sort orders, the ROW_NUMBER() based solutions are adequate for moderate data sets. For large and very large data sets, though, ROW_NUMBER() can run into serious performance issues.
One other thing to consider is that there is often a definite access pattern. Often the first few pages are hot and pages after 10 are basically never viewed (e.g. the most recent posts). In this case, the penalty ROW_NUMBER() incurs for visiting bottom pages (pages for which a large number of rows have to be counted to reach the starting row) may well be ignored.
And finally, keyset pagination is great for dictionary navigation, which ROW_NUMBER() cannot accommodate easily. Dictionary navigation is where, instead of using page numbers, users can jump to certain anchors, like alphabet letters. A typical example is a Rolodex-like contacts sidebar: you click on M and you navigate to the first customer whose name starts with M.
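A minimal sketch of such an anchor jump with keyset paging, reusing the top-N pattern from above (Customers, LastName and CustomerID are hypothetical names):
-- jump straight to the first customer whose name starts with 'M'
select top (20) CustomerID, LastName
from Customers
where LastName >= 'M'
order by LastName, CustomerID;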

The 2nd query is your best choice. It takes into account the fact that you could have holes in your ID column. I'd rewrite it as a CTE instead of a subquery, though...
;WITH MyCTE AS
(
    SELECT *,
           ROW_NUMBER() OVER (ORDER BY ID) AS row
    FROM Table
)
SELECT *
FROM MyCTE
WHERE row >= #start
  AND row <= #end

They are different queries.
Assuming ID is a surrogate key, it may have gaps. ROW_NUMBER will be contiguous.
If you can guarantee you have no gaps in the data, then the 1st one, because I'd hope it's indexed. The 2nd one is more "correct", though.
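A tiny illustration of the difference, assuming a hypothetical table T whose IDs are 1, 2, 5, 6 (so there is a gap at 3 and 4):
-- Solution 1 filters on the ID values themselves: returns only the row with ID 2
SELECT * FROM T WHERE ID >= 2 AND ID <= 4;

-- Solution 2 filters on row positions: returns the rows with IDs 2, 5 and 6
SELECT *
FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS row FROM T) a
WHERE row >= 2 AND row <= 4;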

Related

Very slow performance when Count(*) on subquery with

I need to know the total number of rows returned by a query, to fill in the pagination text on a web page.
I'm doing pagination on the SQL side to improve performance.
Using the query below, I get a count of 6560 records in 15 seconds, which is too slow for my needs:
1.
SELECT COUNT(*)
FROM dbo.vw_Lista_Pedidos_Backoffice_ix vlpo WITH (NOLOCK)
WHERE dataCriacaoPedido>=DATEADD(month,-6,getdate())
Using this query, I get the same result in 1 second:
2.
SELECT COUNT(*)
FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY pedidoid DESC) AS RowNumber
      FROM dbo.vw_Lista_Pedidos_Backoffice_ix vlpo WITH (NOLOCK)
      WHERE dataCriacaoPedido >= DATEADD(month, -6, GETDATE())
     ) records
WHERE RowNumber BETWEEN 1 AND 6560
If I change query 2 and set the upper limit of RowNumber to a number greater than 6560 (the result of COUNT(*)), the query again takes 15 seconds to run!
So, my questions are:
- Why does query 2 take so much less time, even though the limit on RowNumber doesn't actually exclude any of the rows from the subquery?
- Is there any way I can use query 2 to my advantage to get the total row count?
Thanks all :)
This isn't going to fully answer your question, because the real answer lies in the view definition and optimizing that. This is intended to answer questions about behavior.
The reason why COUNT(*) is slower is because it has to generate all the rows in the view, and then count them. The counting isn't the issue. The generation is.
The reason why ROW_NUMBER() over (order by pedidoid desc) is fast is because an index exists on pedidoid. SQL Server uses the index for ROW_NUMBER(). And, just as important, it can access the data in the view using the same index. So, that speeds the query.
As for why the magic number sits at 6,561 - that I don't know. It has to do with the vagaries of the SQL Server optimizer and your configuration. One possibility has to do with the WHERE clause:
WHERE dataCriacaoPedido >= DATEADD(month, -6, getdate())
My guess is that there are exactly 6,560 matches for the condition. With the row-number limit at 6,560, the engine can stop scanning as soon as it has found that many matching rows; with a higher limit it does not know it is done, so it keeps scanning to the end of the table. As I said, though, this is speculation that would explain the behavior.
To really fix the query, you need to understand how the view works.
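That said, if an upper bound on the count is acceptable, a TOP-limited subquery lets the engine stop early in exactly the way described above. A sketch reusing the view from the question (the 10,000 cap is an arbitrary assumption):
SELECT COUNT(*) AS cappedCount
FROM (SELECT TOP (10000) pedidoid
      FROM dbo.vw_Lista_Pedidos_Backoffice_ix WITH (NOLOCK)
      WHERE dataCriacaoPedido >= DATEADD(month, -6, GETDATE())
     ) t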

SQL pagination based on last record retrieved

I need to implement pagination which is semi-resilient to the data changing between page requests. Standard pagination relies on SQL's LIMIT and OFFSET; however, the offset can become inaccurate as new data points are created or their ranking shifts in the sort.
One idea is to hold onto the last data point returned from the API and fetch the elements that follow it. I don't really know SQL (we're using Postgres), but this is my (certainly flawed) attempt at doing something like that. I am trying to store the position of the last element as 'rownum' and then use it in the following query.
WITH rownum AS (
SELECT *, ROW_NUMBER() OVER (ORDER BY rank ASC, id) AS rownum
WHERE id = #{after_id}
FROM items )
SELECT * FROM items
OFFSET rownum
ORDER BY rank ASC, id
LIMIT #{pagination_limit}
I can see some issues with this, like if the last item changes significantly in rank. If anyone can think of another way to do this, that would be great. But I would like to confine it to a single DB query if possible since this is the applications most frequently hit API.
Your whole syntax doesn't quite work: OFFSET comes after ORDER BY, FROM comes before WHERE, etc.
This simpler query would do what I think your code is supposed to do:
SELECT *
FROM items
WHERE (rank, id) > (SELECT (rank, id)
                    FROM items
                    WHERE id = #{after_id})
ORDER BY rank, id
LIMIT #{pagination_limit};
Comparing the composite type (rank, id) guarantees identical sort order.
Make sure you have two indexes (see the sketch after this list):
- A multicolumn index on (rank, id).
- Another one on just (id) - you probably have a PK constraint on the column doing that already. (A multicolumn index with leading id would do the job as well.)
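A minimal sketch of those indexes (the index names are placeholders; the second statement is redundant if id is already the primary key):
CREATE INDEX items_rank_id_idx ON items (rank, id);
CREATE UNIQUE INDEX items_id_idx ON items (id);  -- usually covered by the PK already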
More about indexes:
Is a composite index also good for queries on the first field?
If rank is not volatile, it would be more efficient to pass it as an additional parameter instead of retrieving it dynamically - but the volatility of rank seems to be the point of your deliberations ...
I now think the best way to solve this problem is to store the datetime of the original query and filter out results created after that moment on subsequent queries, thus keeping the offset mostly correct. Perhaps a persistent snapshot of the database could be used to ensure the data stays in the same state it was in when the original query was made.
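A rough sketch of that datetime filter in Postgres (it assumes items has a created_at column and that #{query_started_at} was captured on the first request - both assumptions of mine):
SELECT *
FROM items
WHERE created_at <= #{query_started_at}  -- hide rows created after the first page load
ORDER BY rank, id
LIMIT #{pagination_limit}
OFFSET #{offset};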

SQL Server pagination Query - Performance Consideration

I'm working with SQL and am not very strong on the performance aspects. I'm building the query dynamically in C#, with pagination in mind.
Every time the user clicks to another page I fetch 10 records, with a sample query like the one below:
SELECT *
FROM (SELECT ROW_NUMBER() OVER (ORDER BY TestId) AS [RowNumber], TestId...........) AS paging
WHERE RowNumber BETWEEN 10 AND 20
where TestId is the primary key.
This works perfectly. (I posted just the shape of the query, since the real one is confidential.) It executes in, say, 6 seconds.
If the user clicks the last page, I build the query below:
SELECT *
FROM (SELECT ROW_NUMBER() OVER (ORDER BY TestId) AS [RowNumber], TestId...........) AS paging
WHERE RowNumber BETWEEN 30000 AND 30010
The above query takes 40 seconds.
What is the core thing I am missing? Each time I fetch only 10 records, but there is a huge difference in time.
Thanks
There's no way around this problem, I'm afraid. With every method you have to somehow calculate the numbers for every row, and you either precalculate them in some temp table / indexed view, or let SQL Server do it on the fly (your current solution).
If you want to boost the performance of the current query, add an index on TestId (even though it's already the PK) with included columns (you must include all columns that will be returned):
CREATE INDEX idxI__testid ON <yourtable> (TestId) INCLUDE (<column1>, <column2>)
But this only makes sense if you return just a few of the columns.
1) TestId needs to be indexed. Use INCLUDE (columns to return) when creating the index, as suggested above.
2) Try to use SELECT TOP. For example:
SELECT *
FROM (SELECT TOP 20 ROW_NUMBER() OVER (ORDER BY TestId) AS [RowNumber], TestId...........) AS paging
WHERE RowNumber BETWEEN 10 AND 20
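For the last-page case specifically, a common trick is to scan the index backwards and flip the rows back afterwards, so the engine never has to number 30,000 rows. A sketch, assuming a hypothetical table TestData with columns TestId and Name:
SELECT t.TestId, t.Name
FROM (SELECT TOP (10) TestId, Name   -- walk the index from the end
      FROM TestData
      ORDER BY TestId DESC) AS t
ORDER BY t.TestId;                   -- restore display order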

Fast Way To Estimate Rows By Criteria

I have seen a few posts detailing fast ways to "estimate" the number of rows in a given SQL table without using COUNT(*). However, none of them seem to really solve the problem if you need to estimate the number of rows which satisfy given criteria. I am trying to estimate the number of rows which satisfy given criteria, but the information for these criteria is scattered around two or three tables. Of course a SELECT COUNT(*) with the NOLOCK hint and a few joins will do, and I can afford under- or over-estimating the total. The problem is that this kind of query will be running every 5-10 minutes or so, and since I don't need the actual number (only an estimate), I would like to trade off accuracy for speed.
The solution, if any, may be "SQL Server"-specific. In fact, it must be compatible with SQL Server 2005. Any hints?
There is no easy way to do this. You can get an estimate for the total number of rows in a table, e.g. from system catalog views.
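For instance, something along these lines reads the table-level estimate from metadata (a sketch; dbo.YourTable is a placeholder, and sys.partitions is available from SQL Server 2005 on):
SELECT SUM(p.rows) AS approx_row_count
FROM sys.partitions p
WHERE p.object_id = OBJECT_ID('dbo.YourTable')
  AND p.index_id IN (0, 1)  -- heap or clustered index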
But there's no way to do this for a given set of criteria in a WHERE clause - either you would have to keep counts for each set of criteria and values, or you'd have to use black magic to find it out. The only place SQL Server keeps something that goes in that direction is the statistics it maintains on the indexes. Those contain information about what kinds of values occur how frequently in an index - but I quite honestly have no idea if (and how) you could leverage the information in those statistics in your own queries.
If you really must know the number of rows matching a certain criteria, you need to do a count of some sort - either a SELECT COUNT(*) FROM dbo.YourTable WHERE (yourcriteria) or something else.
Something else could be something like this:
- wrap your SELECT statement in a CTE (Common Table Expression)
- define a ROW_NUMBER() in that CTE, ordering your data by some column (or set of columns)
- add a second ROW_NUMBER() to that CTE that orders your data by the same column (or columns), but in the opposite direction (DESC vs. ASC)
Something like this:
;WITH YourDataCTE AS
(
    SELECT (list of columns you need),
           ROW_NUMBER() OVER (ORDER BY <your column>) AS 'RowNum',
           ROW_NUMBER() OVER (ORDER BY <your column> DESC) AS 'RowNum2'
    FROM dbo.YourTable
    WHERE <your conditions here>
)
SELECT *
FROM YourDataCTE
Doing this, you get the following effect:
- the first row in your result set contains your usual data columns
- the first ROW_NUMBER() contains the value 1
- the second ROW_NUMBER() contains the total number of rows that match the criteria
It's surprisingly good at dealing with small to mid-size result sets - I haven't yet tried how it holds up with really large result sets, but it might be something to investigate and see if it works.
Possible solutions:
- If the matching count is small in comparison to the total number of rows in the table, then adding indexes that cover the WHERE condition will help, and the query will be very fast.
- If the result count is close to the total number of rows in the table, indexes will not help much. You could implement a trigger that maintains a 'conditional count table': whenever a row matching the condition is added you increment the value, and when such a row is deleted you decrement it. You then query this small summary table (see the sketch below).
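A minimal sketch of such a trigger (dbo.Orders, dbo.ConditionalCount and the Status = 'Open' condition are all hypothetical):
CREATE TRIGGER trg_Orders_ConditionalCount
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- adjust the stored count by (matching rows added) minus (matching rows removed);
    -- on UPDATE, inserted holds the new values and deleted the old ones
    UPDATE dbo.ConditionalCount
    SET cnt = cnt
            + (SELECT COUNT(*) FROM inserted WHERE Status = 'Open')
            - (SELECT COUNT(*) FROM deleted  WHERE Status = 'Open');
END;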

SQLite3 (or general SQL) retrieve nth row of a query result

Quick question on SQLite3 (it may as well be general SQL).
How can one retrieve the n-th row of a query result?
row_id (or whichever index) won't work in my case, given that the tables contain a numeric column and, depending on the data, the query needs the rows unsorted or sorted by ascending/descending criteria.
But I may need to quickly retrieve, say, rows 2 & 5 of the results.
So, other than stepping through the results with sqlite3_step() == SQLITE_ROW and a counter, right now I have no idea how to proceed with this.
And I don't like that solution very much because of performance concerns.
So, if anyone can drop a hint that'd be highly appreciated.
Regards
david
Add LIMIT 1 and OFFSET <n> to the query.
Example: SELECT * FROM users LIMIT 1 OFFSET 5132;
(OFFSET counts skipped rows, so this returns the 5,133rd row.)
The general approach is that, if you want only the nth row of m rows, use an appropriate where condition to only get that row.
If you need to get to a row and can't because no where criteria can get you there, your database has a serious design issue. It fails the first normal form, which states that "There's no top-to-bottom ordering to the rows."
But I may need to quickly retrieve, say, rows 2 & 5 of the results.
In the scenario where you need non-contiguous rows, you could use ROW_NUMBER():
WITH cte AS (
    SELECT *, ROW_NUMBER() OVER () AS rn  -- OVER (ORDER BY ...) if a specific order is required
    FROM t
)
SELECT *
FROM cte
WHERE rn IN (2, 5);  -- row numbers