SQL Server pagination query - performance considerations

I am working with SQL and am not very strong on the performance aspects. I build the query dynamically in C# for pagination: on every pagination click I fetch 10 records, with a sample query like the one below.
Select *
from (Select ROW_NUMBER() OVER (ORDER BY TestId)[RowNumber],TestId...........) as paging
Where RowNumber BETWEEN 10 AND 20
where TestId is the primary key.
This works perfectly; I have posted just the syntax, since the data is confidential. It executes in, say, 6 seconds.
If the user clicks the last page, I build the query below:
Select *
from (Select ROW_NUMBER() OVER (ORDER BY TestId)[RowNumber],TestId...........) as paging
Where RowNumber BETWEEN 30000 AND 30010
The above query takes 40 seconds.
What is the core thing I am missing? Each time I fetch only 10 records, yet there is a huge difference in time.
Thanks

There's no way around this problem, I'm afraid. With every method you have to somehow calculate the row numbers for every row, and you either precalculate them in some temp table / indexed view, or let SQL Server do this on the fly (your current solution).
If you want to boost the performance of the current query, add an index on TestId (even though it's already the PK) with included columns (you must include all columns that will be returned):
create index idxI__testid on <yourtable> (TestId) include (<column1>,<column2>)
But this only makes sense if you want to return just a few of the columns.
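For reference, a quick sketch of the precalculation route mentioned above (dbo.Test, Name and Status are hypothetical stand-ins for the confidential schema):

-- Number every row once, then page against the indexed temp table
SELECT ROW_NUMBER() OVER (ORDER BY TestId) AS RowNumber, TestId, Name, Status
INTO #paging
FROM dbo.Test;

CREATE UNIQUE CLUSTERED INDEX IX_paging ON #paging (RowNumber);

-- Any page is now a cheap range seek on RowNumber
SELECT * FROM #paging WHERE RowNumber BETWEEN 30000 AND 30010;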

1) TestId needs to be indexed. Use INCLUDE (columns to return) when creating the index, as suggested.
2) Try using SELECT TOP, for example:
Select *
from (Select TOP 20 ROW_NUMBER() OVER (ORDER BY TestId) [RowNumber], TestId...........) as paging
Where RowNumber BETWEEN 10 AND 20
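As a minimal sketch of that idea (again with hypothetical dbo.Test and Name standing in for the confidential schema), TOP with an ORDER BY limits the subquery to the rows up to the requested page, so SQL Server never numbers the whole table:

-- Sketch only: dbo.Test and Name are hypothetical stand-ins.
-- TOP (20) with ORDER BY materializes only the first 20 keys.
SELECT *
FROM (SELECT TOP (20)
             ROW_NUMBER() OVER (ORDER BY TestId) AS RowNumber,
             TestId, Name
      FROM dbo.Test
      ORDER BY TestId) AS paging
WHERE RowNumber BETWEEN 10 AND 20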

Related

Very slow performance when COUNT(*) on subquery with ROW_NUMBER

I need to know the total rows returned by a query to fill the pagination text in a web page.
I'm doing pagination on the SQL side to improve performance.
Using the query below, I get 6560 records in 15 seconds, which is too slow for my needs:
1.
SELECT COUNT(*)
FROM dbo.vw_Lista_Pedidos_Backoffice_ix vlpo WITH (NOLOCK)
WHERE dataCriacaoPedido>=DATEADD(month,-6,getdate())
Using this query, I get the same result in 1 second:
2.
SELECT COUNT(*) FROM
(SELECT *, ROW_NUMBER() over (order by pedidoid desc) as RowNumber
FROM dbo.vw_Lista_Pedidos_Backoffice_ix vlpo WITH (NOLOCK)
WHERE
dataCriacaoPedido>=DATEADD(month,-6,getdate())
) records
WHERE RowNumber BETWEEN 1 AND 6560
If I change the above query (2.) and set the upper limit of RowNumber to a number greater than 6560 (the result of COUNT(*)), the query again takes 15 seconds to run!
So, my questions are:
- why does query 2. take so much less time, even though the limit on RowNumber doesn't actually exclude any of the rows in the subquery?
- is there any way I can use query 2. to my advantage to get the total rows?
Ty all :)
This isn't going to fully answer your question, because the real answer lies in the view definition and optimizing that. This is intended to answer questions about behavior.
The reason why COUNT(*) is slower is because it has to generate all the rows in the view, and then count them. The counting isn't the issue. The generation is.
The reason why ROW_NUMBER() over (order by pedidoid desc) is fast is because an index exists on pedidoid. SQL Server uses the index for ROW_NUMBER(). And, just as important, it can access the data in the view using the same index. So, that speeds the query.
As for why there is a magic number at 6,561 - well, that I don't know. It has to do with the vagaries of the SQL Server optimizer and your configuration. One possibility has to do with the WHERE clause:
WHERE dataCriacaoPedido >= DATEADD(month, -6, getdate())
My guess is that there are 6,560 matches to the condition, but SQL Server has to scan the whole table to find them. With the limit at 6,560 the engine can stop as soon as the last match is numbered; with a higher limit it does not know that it is done, so it keeps scanning for more rows. As I say, though, this is speculation that happens to explain the behavior.
To really fix the query, you need to understand how the view works.
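One option worth exploring in the meantime (a different technique, not from the answer above): if the goal is simply to return one page plus the total row count in a single pass, a COUNT(*) OVER () window aggregate (SQL Server 2005 and later) can do it, though it still has to generate every row of the view. A sketch reusing the question's view and filter, with illustrative page bounds:

-- Combines paging and the total count in one scan of the view
SELECT *
FROM (SELECT *,
             ROW_NUMBER() OVER (ORDER BY pedidoid DESC) AS RowNumber,
             COUNT(*) OVER () AS TotalRows
      FROM dbo.vw_Lista_Pedidos_Backoffice_ix
      WHERE dataCriacaoPedido >= DATEADD(month, -6, GETDATE())
     ) records
WHERE RowNumber BETWEEN 1 AND 20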

Numbering rows in a view

I am connecting to an SQL database using a PLC, and need to return a list of values. Unfortunately, the PLC has limited memory, and can only retrieve approximately 5,000 values at any one time, however the database may contain up to 10,000 values.
As such I need a way of retrieving these values in 2 operations. Unfortunately the PLC is limited in the queries it can perform (only SELECT and WHERE clauses), so I cannot use LIMIT or TOP or anything like that.
Is there a way in which I can create a view, and auto number every field in that view? I could then query all records < 5,000, followed by a second query of < 10,000 etc?
Unfortunately it seems that views do not support the identity column, so this would need to be done manually.
Does anyone have any suggestions? My only realistic option at the moment seems to be to create 2 views, one with the first 5,000 and one with the next 5,000...
I am using SQL Server 2000 if that makes a difference...
There are 2 solutions. The easiest is to modify your SQL table and add an IDENTITY column. If that is not a possibility, then you'll have to do something like the query below. For 10,000 rows it shouldn't be too slow, but as the table grows it will perform worse and worse.
SELECT o.Col1, o.Col2,
       (SELECT COUNT(i.Col1)
        FROM yourtable i
        WHERE i.Col1 <= o.Col1) AS RowID
FROM yourtable o
While the code provided by Derek does what I asked - i.e. it numbers each row in the view - the performance is really poor: approximately 20 seconds to number 100 rows. As such it is not a workable solution. An alternative is to number the first 5,000 records with a 1, and the next 5,000 with a 2. This can be done with 3 simple queries, and is far quicker to execute.
The code to do so is as follows:
SELECT TOP(5000) BCode, SAPCode, 1 as GroupNo FROM dbo.DB
UNION
SELECT TOP (10000) BCode, SAPCode, 2 as GroupNo FROM dbo.DB p
WHERE ID NOT IN (SELECT TOP(5000) ID FROM dbo.DB)
Although, as pointed out by Andriy M, you should also specify an explicit sort, to ensure that you don't miss any records.
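For reference, a sketch of the same query with the explicit sort added (ID is assumed to be the table's key):

-- An explicit ORDER BY on the key makes both groups deterministic;
-- UNION ALL is safe here because the NOT IN keeps the halves disjoint.
SELECT * FROM (SELECT TOP 5000 BCode, SAPCode, 1 AS GroupNo
               FROM dbo.DB ORDER BY ID) AS g1
UNION ALL
SELECT * FROM (SELECT TOP 5000 BCode, SAPCode, 2 AS GroupNo
               FROM dbo.DB
               WHERE ID NOT IN (SELECT TOP 5000 ID FROM dbo.DB ORDER BY ID)
               ORDER BY ID) AS g2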
One possibility might be to use a function with a table variable, such as:
CREATE FUNCTION dbo.OrderedBCodeData()
RETURNS @Data TABLE (RowNumber int IDENTITY(1,1), BCode int, SAPCode int)
AS
BEGIN
    INSERT INTO @Data (BCode, SAPCode)
    SELECT BCode, SAPCode FROM dbo.DB ORDER BY BCode
    RETURN
END
And select from this function like so:
SELECT * FROM dbo.OrderedBCodeData() WHERE RowNumber BETWEEN 5000 AND 10000
I haven't ever used this in production; in fact it was just a quick idea this morning, but it might be worth exploring as a neater alternative.

Processing a large table - how do I select the records page by page?

I need to run a process on all the records in a table. The table could be very big, so I would rather process the records page by page. I need to remember the records that have already been processed, so they are not included in my second SELECT result.
Like this:
For first run,
[SELECT 100 records FROM MyTable]
For second run,
[SELECT another 100 records FROM MyTable]
and so on..
I hope you get the picture. My question is: how do I write such a select statement?
I'm using Oracle, by the way, but it would be nice if it could run on any other db too.
I also don't want to use a stored procedure.
Thank you very much!
Any solution you come up with to break the table into smaller chunks will end up taking more time than just processing everything in one go - unless the table is partitioned and you can process exactly one partition at a time.
If a full table scan takes 1 minute, it will take you 10 minutes to break up the table into 10 pieces. If the table rows are physically ordered by the values of an indexed column that you can use, this will change a bit due to the clustering factor, but it will still take longer than just processing it in one go.
This all depends on how long it takes to process one row from the table, of course. You could choose to reduce the load on the server by processing chunks of data, but from a performance perspective, you cannot beat a full table scan.
You are most likely going to want to take advantage of Oracle's stopkey optimization, so you don't end up with a full table scan when you don't want one. There are a couple of ways to do this. The first way is a little longer to write, but lets Oracle automatically figure out the number of rows involved:
select *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
    where rownum <= 200
)
where rn >= 101;
You could also achieve the same thing with the FIRST_ROWS hint:
select /*+ FIRST_ROWS(200) */ *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
) v2
where rn between 101 and 200;
I much prefer the rownum method, so you don't have to keep changing the value in the hint (which, to be accurate, would need to represent the end row and not the number of rows actually returned to the page). That way you can set up the start and end values as bind variables and avoid hard parsing.
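For instance, with bind variables (the names :first_row and :last_row are assumed), the rownum version becomes:

select *
from (
    select rownum rn, v1.*
    from (
        select *
        from table t
        where filter_columns = 'where clause'
        order by columns_to_order_by
    ) v1
    where rownum <= :last_row  -- stopkey: Oracle stops fetching here
)
where rn >= :first_row;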
For more details, you can check out this post

SQL performance: WHERE vs WHERE(ROW_NUMBER)

I want to get the n-th to m-th records in a table. What's the best choice of the 2 solutions below?
Solution 1:
SELECT * FROM Table WHERE ID >= n AND ID <= m
Solution 2:
SELECT * FROM
(SELECT *,
ROW_NUMBER() OVER (ORDER BY ID) AS row
FROM Table
)a
WHERE row >= n AND row <= m
As others have already pointed out, the two queries return different results, so comparing them is comparing apples to oranges.
But the underlying question remains: which is faster, keyset driven paging or row-number driven paging?
Keyset Paging
Keyset driven paging relies on remembering the top and bottom keys of the last displayed page, and requesting the next or previous set of rows, based on the top/last keyset:
Next page:
select top (<pagesize>) ...
from <table>
where key > @last_key_on_current_page
order by key;
Previous page:
select top (<pagesize>) ...
from <table>
where key < @first_key_on_current_page
order by key desc;
This approach has two main advantages over the ROW_NUMBER approach, or over the equivalent LIMIT approach of MySQL:
- it is correct: unlike the row number based approach, it correctly handles new entries and deleted entries. The last row of Page 4 does not show up as the first row of Page 5 just because row 23 on Page 2 was deleted in the meantime. Nor do rows mysteriously vanish between pages. These anomalies are common with the row_number based approach, but the keyset based solution does a much better job of avoiding them.
- it is fast: all operations can be solved with a fast row positioning followed by a range scan in the desired direction
However, this approach is difficult to implement, hard for the average programmer to understand, and not supported by tools.
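As a concrete T-SQL sketch (dbo.Orders, OrderId, CustomerName and OrderDate are hypothetical; the application remembers the bottom key of the page it just displayed):

-- @last_key_on_current_page comes from the previously displayed page
DECLARE @last_key_on_current_page int = 230;
SELECT TOP (10) OrderId, CustomerName, OrderDate
FROM dbo.Orders
WHERE OrderId > @last_key_on_current_page
ORDER BY OrderId;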
Row Number Driven
This is the common approach introduced with Linq queries:
select ...
from (
    select ..., row_number() over (...) as rn
    from table) as t
where rn between @firstRow and @lastRow;
(or a similar query using TOP)
This approach is easy to implement and is supported by tools (specifically by Linq's .Skip and .Take operators). But it is guaranteed to scan the index in order to count the rows. It usually works very fast for page 1 and gradually slows down as one goes to higher and higher page numbers.
As a bonus, this solution makes it very easy to change the sort order (simply change the OVER clause).
Overall, given the ease of the ROW_NUMBER() based solutions, the support they have from Linq, and the simplicity of using arbitrary orders, for moderate data sets the ROW_NUMBER based solutions are adequate. For large and very large data sets, though, ROW_NUMBER() can cause serious performance issues.
One other thing to consider is that there is often a definite pattern of access: often the first few pages are hot and pages after 10 are basically never viewed (e.g. most recent posts). In this case, the penalty that ROW_NUMBER() incurs for visiting bottom pages (pages for which a large number of rows have to be counted to get to the starting result row) may well be ignored.
And finally, keyset pagination is great for dictionary navigation, which ROW_NUMBER() cannot accommodate easily. Dictionary navigation is where, instead of using a page number, users can navigate to certain anchors, like alphabet letters. A typical example is a Rolodex-like contacts sidebar: you click on M and you navigate to the first customer name that starts with M.
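A sketch of such an anchor jump (dbo.Contacts and its columns are hypothetical; assume an index on ContactName):

-- The user clicked 'M'; seek straight to the first matching name
SELECT TOP (10) ContactName, Phone
FROM dbo.Contacts
WHERE ContactName >= 'M'
ORDER BY ContactName;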
The 2nd solution is your best choice. It takes into account the fact that you could have holes in your ID column. I'd rewrite it as a CTE, though, instead of a subquery...
;WITH MyCTE AS
(SELECT *,
ROW_NUMBER() OVER (ORDER BY ID) AS row
FROM Table)
SELECT *
FROM MyCTE
WHERE row >= @start
AND row <= @end
They are different queries.
Assuming ID is a surrogate key, it may have gaps. ROW_NUMBER will be contiguous.
If you can guarantee you have no gaps in the data, then go with the 1st one, because I'd hope the ID is indexed. The 2nd one is more "correct", though.

Is there efficient SQL to query a portion of a large table

The typical way of selecting data is:
select * from my_table
But what if the table contains 10 million records and you only want records 300,010 to 300,020?
Is there a way to create a SQL statement on Microsoft SQL that only gets 10 records at once?
E.g.
select * from my_table from records 300,010 to 300,020
This would be way more efficient than retrieving 10 million records across the network, storing them in the IIS server and then counting to the records you want.
SELECT * FROM my_table is just the tip of the iceberg. Assuming you're talking a table with an identity field for the primary key, you can just say:
SELECT * FROM my_table WHERE ID >= 300010 AND ID <= 300020
You should also know that selecting * is considered poor practice in many circles. They want you to specify the exact column list.
Try looking at info about pagination. Here's a short summary of it for SQL Server.
Absolutely. On MySQL and PostgreSQL (the two databases I've used), the syntax would be
SELECT [columns] FROM table LIMIT 10 OFFSET 300010;
On MS SQL, it's something like SELECT TOP 10 ...; I don't know the syntax for offsetting the record list.
Note that you never want to use SELECT *; it's a maintenance nightmare if anything ever changes. This query, though, is going to be incredibly slow since your database will have to scan through and throw away the first 300,010 records to get to the 10 you want. It'll also be unpredictable, since you haven't told the database which order you want the records in.
This is the core of SQL: tell it which 10 records you want, identified by a key in a specific range, and the database will do its best to grab and return those records with minimal work. Look up any tutorial on SQL for more information on how it works.
When working with large tables, it is often a good idea to make use of the partitioning techniques available in SQL Server.
The rules of your partition function typically dictate that only a range of data can reside within a given partition. You could split your partitions by date range or ID, for example.
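For illustration, a partition function splitting on an ID range might be declared like this (the boundary values are hypothetical):

-- Rows land in partitions by ID range; RANGE RIGHT puts each
-- boundary value into the partition on its right.
CREATE PARTITION FUNCTION pf_MyTable_ID (int)
AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000);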
In order to select from a particular partition you would use a query similar to the following.
SELECT <Column Name1>, <Column Name2>, ...
FROM <Table Name>
WHERE $PARTITION.<Partition Function Name>(<Column Name>) = <Partition Number>
Take a look at the following white paper for more detailed information on partitioning in SQL Server 2005.
http://msdn.microsoft.com/en-us/library/ms345146.aspx
I hope this helps however please feel free to pose further questions.
Cheers, John
I use wrapper queries: I select the core query and then just isolate the row numbers that I wish to take from it - this allows the SQL server to do all the heavy lifting inside the CORE query and pass out only the small portion of the table that I have requested. All you need to do is pass the [start_row_variable] and the [end_row_variable] into the SQL query.
NOTE: the ORDER clause [sql_order_clause] is specified OUTSIDE the core query.
w1 and w2 are temporary result sets created by the SQL server as the wrapper tables.
SELECT
w1.*
FROM(
SELECT w2.*,
ROW_NUMBER() OVER ([sql_order_clause]) AS ROW
FROM (
/* CORE QUERY START */
SELECT [columns]
FROM [table_name]
WHERE [sql_string]
/* CORE QUERY END */
) AS w2
) AS w1
WHERE ROW BETWEEN [start_row_variable] AND [end_row_variable]
This method has hugely optimized my database systems. It works very well.
IMPORTANT: be sure to always explicitly specify only the exact columns you wish to retrieve in the core query, as fetching unnecessary data in these core queries can cost you serious overhead.
Use TOP to select only a limited amount of rows, like:
SELECT TOP 10 * FROM my_table WHERE ID >= 300010
Add an ORDER BY if you want the results in a particular order.
To be efficient there has to be an index on the ID column.
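For example (the index name is assumed; skip this if ID is already the clustered primary key):

-- A plain index on ID lets the WHERE ID >= ... filter seek instead of scan
CREATE INDEX IX_my_table_ID ON my_table (ID);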