How to do efficient pagination with a total count - SQL

We have a web application that helps organize biological experiments (users describe an experiment and upload experiment data). On the main page, we show the first 10 experiments and, below them, Previous Next 1 2 3 .. 30.
It bugs me how to make the total count and pagination efficient. Currently:
select count(id) from experiments; -- not very efficient on large datasets
But how does this scale with large tables of more than 200,000 records? I tried importing random experiments into the table, and it still performs quite OK (0.6 s for 300,000 experiments).
The other alternative I thought about is to add an additional table, statistics (columns tableName and recordsCount). After each insert into the experiments table I would increment recordsCount in statistics (this means inserting into one table and updating the other, inside a SQL transaction of course). The reverse applies to delete statements (recordsCount--).
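For illustration, a minimal sketch of that statistics idea might look like this (table and column names are only illustrative, and the exact transaction syntax depends on the database behind the application):
-- Hypothetical counter table: one row per counted table.
CREATE TABLE statistics (
    table_name    VARCHAR(64) PRIMARY KEY,
    records_count BIGINT NOT NULL DEFAULT 0
);
-- Keep the counter in step with inserts, inside a single transaction.
BEGIN;
INSERT INTO experiments (name) VALUES ('exp-001');
UPDATE statistics SET records_count = records_count + 1 WHERE table_name = 'experiments';
COMMIT;
-- The total for the pager is then a one-row lookup instead of a full count.
SELECT records_count FROM statistics WHERE table_name = 'experiments';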
For pagination, the most efficient way is to do where id > last_id, since SQL can use the index. Is there any better way?
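For example, a keyset-style page query (assuming MySQL-style LIMIT, since the exact database isn't stated) might be:
-- :last_id is the id of the last row on the previous page
select * from experiments where id > :last_id order by id asc limit 10;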
If the results are filtered, e.g. select * from experiments where name like 'name%', the table-statistics option fails. We then need to get the total count as: select count(id) from experiments where name like 'name%'.
The application was developed using Laravel 3, in case it makes any difference.
I would like pagination that always performs the same: the number of records must not affect pagination performance or the cost of getting the total count.

You could use a stored procedure like the one below, which pages with ROW_NUMBER() and returns the total count through an OUTPUT parameter:
CREATE PROCEDURE [GetUsers]
(
@Inactive Bit = NULL,
@Name Nvarchar(500),
@Culture VarChar(5) = NULL,
@SortExpression VarChar(50),
@StartRowIndex Int,
@MaxRowIndex Int,
@Count INT OUTPUT
)
AS
BEGIN
SELECT ROW_NUMBER()
OVER
(
ORDER BY
CASE WHEN @SortExpression = 'Name' THEN [User].[Name] END,
CASE WHEN @SortExpression = 'Name DESC' THEN [User].[Name] END DESC
) AS RowIndex, [User].*
INTO #tmpTable
FROM [User] WITH (NOLOCK)
WHERE (@Inactive IS NULL OR [User].[Inactive] = @Inactive)
AND (@Culture IS NULL OR [User].[DefaultCulture] = @Culture)
AND [User].Name LIKE '%' + @Name + '%'
SELECT *
FROM #tmpTable
WHERE #tmpTable.RowIndex > @StartRowIndex
AND #tmpTable.RowIndex < (@StartRowIndex + @MaxRowIndex + 1)
SELECT @Count = COUNT(*) FROM #tmpTable
IF OBJECT_ID('tempdb..#tmpTable') IS NOT NULL DROP TABLE #tmpTable;
END
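A call could then look something like this (the parameter values are only an example):
DECLARE @Total INT;
EXEC [GetUsers]
    @Inactive = 0,
    @Name = N'smith',
    @SortExpression = 'Name',
    @StartRowIndex = 0,
    @MaxRowIndex = 10,
    @Count = @Total OUTPUT;
SELECT @Total AS TotalRows; -- total matching rows, independent of the page returned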

Related

SQL: Delete Rows from Dynamic list of tables where ID is null

I'm a SQL novice, and usually figure things out via Google and SO, but I can't wrap my head around the SQL required for this.
My question is similar to Delete sql rows where IDs do not have a match from another table, but in my case I have a middle table that I have to query, so here's the scenario:
We have this INSTANCES table that basically lists all the occurrences of files sent to the database, but we have to join with CROSS_REF so our reporting application knows which table to query for the report, and we now have orphaned INSTANCES rows I want to clean out. Each DETAIL table contains different fields from the others.
I want to delete all records from INSTANCES for which there is no record for that Instance ID in any DETAIL table. The DETAIL tables get regularly cleaned of old files, but the INSTANCES records weren't cleaned up, so we have a lot of INSTANCES records that don't have any associated DETAIL data. The thing is, I have to select the Table Name from CROSS_REF to know which DETAIL_x table to look up the Instance ID in.
In the below example then, since DETAIL_1 doesn't have a record with Instance ID = 1001, I want to delete the 1001 record from INSTANCES.
INSTANCES
Instance ID | Detail ID
1000        | 123
1001        | 123
1002        | 234

CROSS_REF
Detail ID | Table Name
123       | DETAIL_1
124       | DETAIL_2
125       | DETAIL_3

DETAIL_1
Instance ID
1000
1000
2999
Storing table names or column names in a database is almost always a sign of bad database design. You may want to change this and thereby get rid of the problem.
However, when knowing the possible table names, the task is not too difficult.
delete i
from instances i
where not exists
(
select null
from cross_ref cr
left join detail_1 d1 on d1.instance_id = i.instance_id and cr.table_name = 'DETAIL_1'
left join detail_2 d2 on d2.instance_id = i.instance_id and cr.table_name = 'DETAIL_2'
left join detail_3 d3 on d3.instance_id = i.instance_id and cr.table_name = 'DETAIL_3'
where cr.detail_id = i.detail_id
and
(
d1.instance_id is not null or
d2.instance_id is not null or
d3.instance_id is not null
)
);
(You can replace is not null by = i.instance_id, if you find that more readable. In that case you could even remove these criteria from the ON clauses.)
Much thanks to @DougCoats; this is what I ended up with (@Doug, if you want to update your answer, I'll mark yours correct).
DECLARE @Count INT, @Sql VARCHAR(MAX), @Max INT;
SET @Count = (SELECT MIN(DetailID) FROM CROSS_REF)
SET @Max = (SELECT MAX(DetailID) FROM CROSS_REF)
WHILE @Count <= @Max
BEGIN
IF (SELECT COUNT(*) FROM CROSS_REF WHERE DetailID = @Count) <> 0
BEGIN
SET @Sql = 'DELETE i
FROM Instances i
WHERE NOT EXISTS
(
SELECT InstanceID
FROM ' + (SELECT TableName FROM Cross_Ref WHERE DetailID = @Count) + ' d
WHERE d.InstanceId = i.InstanceID
AND i.detailID = ' + CAST(@Count AS VARCHAR) + '
)
AND i.detailID = ' + CAST(@Count AS VARCHAR)
EXEC(@Sql);
END
SET @Count = @Count + 1
END
This answer assumes the DetailID values in CROSS_REF are sequential; if they are not, the IF check simply skips the gaps so the dynamic statement is never built against a missing object.
However, this should give you an idea. It could probably also be written as a more set-based approach (a rough variation is sketched after the code below), but my answer is meant to demonstrate dynamic SQL use. Be careful when using dynamic SQL, though.
DECLARE @Count INT, @Sql VARCHAR(MAX), @Max INT;
SET @Count = (SELECT MIN(DetailID) FROM CROSS_REF)
SET @Max = (SELECT MAX(DetailID) FROM CROSS_REF)
WHILE @Count <= @Max
BEGIN
IF (SELECT COUNT(*) FROM CROSS_REF WHERE DetailID = @Count) <> 0
BEGIN
SET @Sql = 'DELETE i
FROM Instances i
WHERE NOT EXISTS
(
SELECT InstanceID
FROM ' + (SELECT TableName FROM Cross_Ref WHERE DetailID = @Count) + ' d
WHERE d.InstanceId = i.InstanceID
AND i.detailID = ' + CAST(@Count AS VARCHAR) + '
)
AND i.detailID = ' + CAST(@Count AS VARCHAR)
EXEC(@Sql);
END
SET @Count = @Count + 1
END
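To illustrate the set-based remark, here is a rough variation (assuming SQL Server 2017+ for STRING_AGG; older versions can build the same string with FOR XML PATH) that generates one DELETE per CROSS_REF row and runs the whole batch at once:
DECLARE @Sql NVARCHAR(MAX);
SELECT @Sql = STRING_AGG(CAST(
    'DELETE i FROM Instances i
     WHERE i.DetailID = ' + CAST(cr.DetailID AS VARCHAR(20)) + '
       AND NOT EXISTS (SELECT 1 FROM ' + QUOTENAME(cr.TableName) + ' d
                       WHERE d.InstanceID = i.InstanceID);'
    AS NVARCHAR(MAX)), CHAR(10))
FROM CROSS_REF cr;
EXEC (@Sql);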

How to set the total number of rows before an OFFSET occurs in a stored procedure

I've created a stored procedure that filters and paginates for a DataTable.
Problem: I need to set an OUTPUT variable for @TotalRecord (the number of rows found) before the OFFSET occurs; otherwise it sets @TotalRecord to @RecordPerPage.
I've messed around with CTEs and also simply tried this:
SELECT *, @TotalRecord = COUNT(1)
FROM dbo
But that doesn't work either.
Here is my stored procedure, with most of the stuff pulled out:
ALTER PROCEDURE [dbo].[SearchErrorReports]
@FundNumber varchar(50) = null,
@ProfitSelected bit = 0,
@SortColumnName varchar(30) = null,
@SortDirection varchar(10) = null,
@StartIndex int = 0,
@RecordPerPage int = null,
@TotalRecord INT = 0 OUTPUT --NEED TO SET THIS BEFORE OFFSET!
AS
BEGIN
SET NOCOUNT ON;
SELECT *
FROM
(SELECT *
FROM dbo.View
WHERE (@ProfitSelected = 1 AND Profit = 1)) AS ERP
WHERE
((@FundNumber IS NULL OR @FundNumber = '')
OR (ERP.FundNumber LIKE '%' + @FundNumber + '%'))
ORDER BY
CASE
WHEN @SortColumnName = 'FundNumber' AND @SortDirection = 'asc'
THEN ERP.FundNumber
END ASC,
CASE
WHEN @SortColumnName = 'FundNumber' AND @SortDirection = 'desc'
THEN ERP.FundNumber
END DESC
OFFSET @StartIndex ROWS
FETCH NEXT @RecordPerPage ROWS ONLY
END
Thank you in advance!
You could try something like this:
create a CTE that gets the data you want to return
include a COUNT(*) OVER() in there to get the total count of rows
return just a subset (based on your OFFSET .. FETCH NEXT) from the CTE
So your code would look something along those lines:
-- CTE definition - call it whatever you like
WITH BaseData AS
(
SELECT
-- select all the relevant columns you need
p.ProductID,
p.ProductName,
-- using COUNT(*) OVER() returns the total count over all rows
TotalCount = COUNT(*) OVER()
FROM
dbo.Products p
)
-- now select from the CTE - using OFFSET/FETCH NEXT, get only those rows you
-- want - but the "TotalCount" column still contains the total count - before
-- the OFFSET/FETCH
SELECT *
FROM BaseData
ORDER BY ProductID
OFFSET 20 ROWS FETCH NEXT 15 ROWS ONLY
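If the total also has to come back through the @TotalRecord OUTPUT parameter (rather than only as a column in the result set), one option, sketched against the same hypothetical dbo.Products table, is to stage the requested page in a temp table first:
DECLARE @TotalRecord INT;
CREATE TABLE #Page (ProductID INT, ProductName NVARCHAR(200), TotalCount INT);
INSERT INTO #Page (ProductID, ProductName, TotalCount)
SELECT p.ProductID, p.ProductName, COUNT(*) OVER()
FROM dbo.Products AS p
ORDER BY p.ProductID
OFFSET 20 ROWS FETCH NEXT 15 ROWS ONLY;
-- TotalCount is identical on every staged row; MAX() is just a convenient way to read it.
-- It comes back NULL if the requested page lies beyond the end of the data.
SELECT @TotalRecord = MAX(TotalCount) FROM #Page;
SELECT ProductID, ProductName FROM #Page ORDER BY ProductID;
DROP TABLE #Page;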
As a habit, I prefer non-null parameters before possibly-null ones. I did not reference those in my response below, and limited the working example to just the two inputs you are most concerned with.
I believe there are cleaner ways to apply your local variables to filter the query results without having to perform an OFFSET. You could write the results to a temp table, or to a permanent working table that cleans itself up, and use the IDs that aren't returned as a way to set pages. Smoother, with less fuss.
However, I understand that isn't always feasible, and I get frustrated myself with people who try to solve your use case for you without attempting to answer the question. Quite often there are multiple ways to tackle any issue. Your job is to decide which one is best in your scenario. Our job is to help you figure out the script.
With that said, here's a potential solution using dynamic SQL.
I'm a huge believer in dynamic SQL, and use it extensively for user based table control and ease of ETL mapping control.
use TestCatalog;
set nocount on;
--Builds a temp table, just for test purposes
drop table if exists ##TestOffset;
create table ##TestOffset
(
Id int identity(1,1)
, RandomNumber decimal (10,7)
);
--Inserts 1000 random numbers between 0 and 100
while (select count(*) from ##TestOffset) < 1000
begin
insert into ##TestOffset
(RandomNumber)
values
(RAND()*100)
end;
set nocount off;
go
create procedure dbo.TestOffsetProc
@StartIndex int = null --I'll reference this like a page number below
, @RecordsPerPage int = null
as
begin
declare @MaxRows int = 30; --your front end will probably manage this, but don't trust it. I personally would store this on a table against each display so it can also be returned dynamically with less manual intrusion to this procedure.
declare @FirstRow int;
--Quick entry to ensure your record count returned doesn't exceed the max allowed.
if @RecordsPerPage is null or @RecordsPerPage > @MaxRows
begin
set @RecordsPerPage = @MaxRows
end;
--Same here, making sure not to return NULL to your dynamic statement. If null is returned from any variable, the entire statement will become null.
if @StartIndex is null
begin
set @StartIndex = 0
end;
set @FirstRow = @StartIndex * @RecordsPerPage
declare @Sql nvarchar(2000) = 'select
tos.*
from ##TestOffset as tos
order by tos.RandomNumber desc
offset ' + convert(nvarchar, @FirstRow) + ' rows
fetch next ' + convert(nvarchar, @RecordsPerPage) + ' rows only'
exec (@Sql);
end
go
exec dbo.TestOffsetProc;
drop table ##TestOffset;
drop procedure dbo.TestOffsetProc;

SQL Server recent rows

I'm sure this is easy but I have googled a lot and searched.
Ok, I have a table WITHOUT dates etc with 100000000000000000 records.
I want to see the latest entries, i.e.
Select top 200 *
from table
BUT I want to see the latest entries. Is there a row identifier that I could use in a table?
i.e.
select top 200 *
from table
order by rowidentifer Desc
Thanks
Is there a row.identifier that i could use in a table ie set top 200 * from table order by row.identifer Desc
As already stated in the comments, there is not. The best way is to have an identity, a timestamp, or some other way of identifying the record. Here is an alternative way using EXCEPT to get what you need, but the execution plan isn't the best... Play around with it and change as needed.
--testing purposes...
DECLARE @tbl TABLE(FirstName VARCHAR(50))
DECLARE @count INT = 0
WHILE (@count <= 12000)
BEGIN
INSERT INTO @tbl(FirstName)
SELECT
'RuPaul ' + CAST(@count AS VARCHAR(5))
SET @count += 1
END
--adjust how many records you would like, example is 200
SELECT *
FROM @tbl
EXCEPT(SELECT TOP (SELECT COUNT(*) - 200 FROM @tbl) * FROM @tbl)
--faster than above
DECLARE @tblCount AS INT = (SELECT COUNT(*) FROM @tbl) - 200
SELECT *
FROM @tbl
EXCEPT(SELECT TOP (@tblCount) * FROM @tbl)
On another note, you could create another Table Variable that has an ID and other columns, then you could insert the records you would need. Then you can perform other operations against the table, for example OrderBy etc...
What you could do
ALTER TABLE TABLENAME
ADD ID INT IDENTITY
This will add another column, ID, to the table and automatically populate it. Then you have an identifier you can use...
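Once the column exists, the most recent 200 rows can then be fetched with something like the query below. Note that the IDENTITY values the ALTER assigns to rows that already exist are not guaranteed to reflect their original insert order; only rows inserted after the change are reliably ordered.
SELECT TOP (200) *
FROM TABLENAME
ORDER BY ID DESC;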
Nope, in short, there is none, if you don't have a column dedicated as one (i.e. an IDENTITY, a SEQUENCE, or something similar). If you did, then you could get an ordered result back.

SQL Server 2008 paging methods?

I have to work with a potentially large list of records, and I've been Googling for ways to avoid selecting the whole list; instead I want to let users select a page (say from 1 to 10) and display the records accordingly.
Say, for 1000 records I would have 100 pages of 10 records each; the most recent 10 records are displayed first, and then if the user clicks on page 5, it shows records 41 to 50.
Is it a good idea to add a row number to each record then query based on row number? Is there a better way of achieving the paging result without too much overhead?
So far those methods as described here look the most promising:
http://developer.berlios.de/docman/display_doc.php?docid=739&group_id=2899
http://www.codeproject.com/KB/aspnet/PagingLarge.aspx
The following T-SQL stored procedure is a very efficient implementation of paging. The SQL optimiser can find the first ID very fast. Combine this with the use of ROWCOUNT, and you have an approach that is both CPU-efficient and read-efficient. For a table with a large number of rows, it certainly beats any approach that I've seen using a temporary table or table variable.
NB: I'm using a sequential identity column in this example, but the code works on any column suitable for page sorting. Also, sequence breaks in the column being used don't affect the result as the code selects a number of rows rather than a column value.
EDIT: If you're sorting on a column with potentially non-unique values (eg LastName), then add a second column to the Order By clause to make the sort values unique again.
CREATE PROCEDURE dbo.PagingTest
(
@PageNumber int,
@PageSize int
)
AS
DECLARE @FirstId int, @FirstRow int
SET @FirstRow = ( (@PageNumber - 1) * @PageSize ) + 1
SET ROWCOUNT @FirstRow
-- Add check here to ensure that @FirstRow is not
-- greater than the number of rows in the table.
SELECT @FirstId = [Id]
FROM dbo.TestTable
ORDER BY [Id]
SET ROWCOUNT @PageSize
SELECT *
FROM dbo.TestTable
WHERE [Id] >= #FirstId
ORDER BY [Id]
SET ROWCOUNT 0
GO
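For example, assuming the dbo.TestTable referenced in the procedure body, page 3 with 10 rows per page would be requested as:
EXEC dbo.PagingTest @PageNumber = 3, @PageSize = 10;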
If you use a CTE with two ROW_NUMBER() columns (one sorted ascending, one descending), you get row numbers for paging as well as the total record count, since the two row numbers on any row add up to the total plus one.
create procedure get_pages(@page_number int, @page_length int)
as
set nocount on;
with cte as
(
select
Row_Number() over (order by sort_column desc) as row_num
,Row_Number() over (order by sort_column) as inverse_row_num
,id as cte_id
From my_table
)
Select
row_num + inverse_row_num - 1 as total_rows
,*
from cte inner join my_table
on cte_id = my_table.id
where row_num between
(@page_number * @page_length) + 1
and (@page_number + 1) * @page_length
order by row_num
Using OFFSET
Others have explained how the ROW_NUMBER() OVER() ranking function can be used to implement paging. It's worth mentioning that SQL Server 2012 finally included support for the SQL standard OFFSET .. FETCH clause:
SELECT first_name, last_name, score
FROM players
ORDER BY score DESC
OFFSET 40 ROWS FETCH NEXT 10 ROWS ONLY
If you're using SQL Server 2012 and backwards-compatibility is not an issue, you should probably prefer this clause as it will be executed more optimally by SQL Server in corner cases.
Using the SEEK Method
There is an entirely different, much faster way to perform paging in SQL. This is often called the "seek method" as described in this blog post here.
SELECT TOP 10 first_name, last_name, score
FROM players
WHERE (score < @previousScore)
OR (score = @previousScore AND player_id < @previousPlayerId)
ORDER BY score DESC, player_id DESC
The @previousScore and @previousPlayerId values are the respective values of the last record from the previous page. This allows you to fetch the "next" page. If the ORDER BY direction is ASC, simply use > instead.
With the above method, you cannot immediately jump to page 4 without having first fetched the previous 40 records. But often, you do not want to jump that far anyway. Instead, you get a much faster query that might be able to fetch data in constant time, depending on your indexing. Plus, your pages remain "stable", no matter if the underlying data changes (e.g. on page 1, while you're on page 4).
This is the best way to implement paging when lazy loading more data in web applications, for instance.
Note, the "seek method" is also called keyset paging.
Try something like this:
declare @page int = 2
declare @size int = 10
declare @lower int = (@page - 1) * @size
declare @upper int = @page * @size
select * from (
select
ROW_NUMBER() over (order by some_column) lfd,
* from your_table
) as t
where lfd > @lower and lfd <= @upper
order by some_column
Here's an updated version of @RoadWarrior's code, using TOP. Performance is identical, and extremely fast. Make sure you have an index on TestTable.ID
CREATE PROC dbo.PagingTest
@SkipRows int,
@GetRows int
AS
DECLARE @FirstId int
SELECT TOP (@SkipRows)
@FirstId = [Id]
FROM dbo.TestTable
ORDER BY [Id]
SELECT TOP (@GetRows) *
FROM dbo.TestTable
WHERE [Id] >= @FirstId
ORDER BY [Id]
GO
Try this
Declare @RowStart int, @RowEnd int;
SET @RowStart = 4;
SET @RowEnd = 7;
With MessageEntities As
(
Select ROW_NUMBER() Over (Order By [MESSAGE_ID]) As Row, [MESSAGE_ID]
From [TBL_NAFETHAH_MESSAGES]
)
Select m0.MESSAGE_ID, m0.MESSAGE_SENDER_NAME,
m0.MESSAGE_SUBJECT, m0.MESSAGE_TEXT
From MessageEntities M
Inner Join [TBL_NAFETHAH_MESSAGES] m0 on M.MESSAGE_ID = m0.MESSAGE_ID
Where M.Row Between #RowStart AND #RowEnd
Order By M.Row Asc
GO
Why not use the recommended solution:
SELECT VALUE product FROM
AdventureWorksEntities.Products AS product
order by product.ListPrice SKIP @skip LIMIT @limit

Delete old records and keep the 10 latest in SQL Compact

I'm using a SQL Compact database (.sdf) with MS SQL Server 2008.
In the table 'Job', each id has multiple jobs.
A system regularly adds jobs to the table.
I would like to keep the 10 latest records for each id, ordered by their 'datecompleted',
and delete the rest of the records.
How can I construct my query? I failed when using a #temp table and a cursor.
Well, it is fast approaching Christmas, so here is my gift to you: an example script that demonstrates what I believe you are trying to achieve. No, I don't have a big white fluffy beard ;-)
CREATE TABLE TestJobSetTable
(
ID INT IDENTITY(1,1) not null PRIMARY KEY,
JobID INT not null,
DateCompleted DATETIME not null
);
--Create some test data
DECLARE @iX INT;
SET @iX = 0
WHILE(@iX < 15)
BEGIN
INSERT INTO TestJobSetTable(JobID,DateCompleted) VALUES(1,getDate())
INSERT INTO TestJobSetTable(JobID,DateCompleted) VALUES(34,getDate())
SET @iX = @iX + 1;
WAITFOR DELAY '00:00:0:01'
END
--Create some more test data, for when there may be job groups with less than 10 records.
SET @iX = 0
WHILE(@iX < 6)
BEGIN
INSERT INTO TestJobSetTable(JobID,DateCompleted) VALUES(23,getDate())
SET @iX = @iX + 1;
WAITFOR DELAY '00:00:0:01'
END
--Review the data set
SELECT * FROM TestJobSetTable;
--Apply the deletion to the remainder of the data set.
WITH TenMostRecentCompletedJobs AS
(
SELECT ID, JobID, DateCompleted
FROM TestJobSetTable A
WHERE ID in
(
SELECT TOP 10 ID
FROM TestJobSetTable
WHERE JobID = A.JobID
ORDER BY DateCompleted DESC
)
)
--SELECT * FROM TenMostRecentCompletedJobs ORDER BY JobID,DateCompleted desc;
DELETE FROM TestJobSetTable
WHERE ID NOT IN(SELECT ID FROM TenMostRecentCompletedJobs)
--Now only data of interest remains
SELECT * FROM TestJobSetTable
DROP TABLE TestJobSetTable;
How about something like:
DELETE FROM Job
WHERE id NOT IN (
SELECT TOP 10 id
FROM Job
ORDER BY datecompleted DESC)
This assumes you're using version 3.5, because nested SELECTs are only available in that version or higher.
I did not read the question correctly. I suspect something more along the lines of a CTE will solve the problem, using similar logic. You want to build a query that identifies the records you want to keep, as your starting point.
Using CTE on SQL Server Compact 3.5
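As a rough sketch of that CTE idea, run against the TestJobSetTable from the answer above (full SQL Server syntax; SQL Server Compact 3.5 cannot delete through a CTE, so there you would first have to materialise the IDs you want to keep):
WITH Ranked AS
(
    SELECT ID,
           ROW_NUMBER() OVER (PARTITION BY JobID ORDER BY DateCompleted DESC) AS rn
    FROM TestJobSetTable
)
DELETE FROM Ranked
WHERE rn > 10; -- everything beyond the 10 newest rows per JobID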