Custom pagination in DataTable - SQL

I have a web application in which I get data from my database and show it in a DataTable. I am facing an issue because the data I am fetching has too many rows (200,000). So when I run a query like select * from table_name;
my application gets stuck.
Is there a way to handle this problem with JavaScript?
I tried pagination, but I cannot figure out how I would do that, since DataTables creates pagination only for data that has already been rendered.
Is there a way to run my query with pagination at the backend?

I came across the same problem when working with MongoDB and AngularJS, and I used server-side paging. Since you have a huge number of records, you can try the same approach.
Assume you are displaying 25 records per page.
Backend:
Get the total count of the records using a COUNT query.
Use select * from table_name LIMIT 25 OFFSET ${req.query.pageNumber*25} to query a limited set of records based on the page number.
Frontend:
Instead of using DataTables, display the data in a plain HTML table.
Define buttons for the next page and the previous page.
Define a global variable in the controller/JS file for pageNumber.
Increment pageNumber by 1 when the next-page button is clicked and decrement it by 1 when the previous-page button is pressed.
Use the result of the COUNT query to put an upper limit on the pageNumber variable (if there are 200 records, the limit will be 200/25 = 8 pages).
So basically select * from table_name LIMIT 25 OFFSET ${req.query.pageNumber*25} limits each response to 25 records. When req.query.pageNumber=1, it skips the first 25 records and sends the next 25. Similarly, when req.query.pageNumber=2, it skips the first 2*25 records and sends records 51-75.
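
Here is a minimal sketch of that backend in Node, assuming Express and the mysql2 library; the /records route, the pool settings, and the id ordering column are placeholder assumptions, not from the original question:

// Minimal server-side paging sketch (assumptions: Express + mysql2, page size 25).
const express = require("express");
const mysql = require("mysql2/promise");

const app = express();
const pool = mysql.createPool({ host: "localhost", user: "app", database: "mydb" });
const PAGE_SIZE = 25;

app.get("/records", async (req, res) => {
  // pageNumber is 0-based: page 0 returns rows 1-25, page 1 returns rows 26-50, ...
  const pageNumber = Math.max(0, parseInt(req.query.pageNumber, 10) || 0);
  const offset = pageNumber * PAGE_SIZE;

  // Total count so the client can compute the last page.
  const [[{ total }]] = await pool.query("SELECT COUNT(*) AS total FROM table_name");

  // Only ever fetch one page; values are passed as bind parameters.
  const [rows] = await pool.query(
    "SELECT * FROM table_name ORDER BY id LIMIT ? OFFSET ?",
    [PAGE_SIZE, offset]
  );

  res.json({ total, pageCount: Math.ceil(total / PAGE_SIZE), rows });
});

app.listen(3000);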

There are two ways to handle this.
First way - handling paging on the client side:
Get all the data from the database and apply custom paging in the browser.
Second way - handling paging on the server side:
On every page change, call the database and fetch only the records for that page, according to the page size.
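
If you are using the jQuery DataTables plugin, as the question suggests, it ships with a server-side processing mode that implements the second way: with serverSide: true it sends draw, start, and length parameters to your endpoint and expects the record counts back in the response. A minimal sketch, assuming a hypothetical /api/records endpoint that speaks the DataTables protocol:

// DataTables server-side processing sketch (assumes jQuery + DataTables are loaded
// and that /api/records implements the DataTables request/response protocol).
$(document).ready(function () {
  $("#myTable").DataTable({
    serverSide: true,     // delegate paging/sorting/filtering to the server
    processing: true,     // show a "processing" indicator between requests
    pageLength: 25,
    ajax: "/api/records", // must return { draw, recordsTotal, recordsFiltered, data }
    columns: [
      { data: "id" },
      { data: "name" }
    ]
  });
});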

You can use the LIMIT and OFFSET clauses for pagination in MySQL. I understand that 200,000 rows at a time makes performance slow. You mention that you want to use JS for this, so to be clear: JavaScript on the frontend alone is not going to help you, because the full result set would still have to reach the browser. But since you mention that you have a web application, if that application runs on Node (as the server), then I can suggest an approach that can help you a lot.
Use two variables, named var_pageNo and var_limit, and build the raw MySQL query as
select * from <tbl_name> LIMIT var_limit OFFSET (var_pageNo * var_limit);
computing the offset value (var_pageNo * var_limit) in application code before the query runs, since MySQL does not accept an expression in the OFFSET clause. Replace the variables with your desired values. This will make performance faster, since it fetches only the data for your specified limit.
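
As a concrete sketch of computing the offset in application code, assuming the mysql2 library and a hypothetical users table (the connection settings are placeholders):

// Compute the offset in JS and pass both values as bind parameters (mysql2 assumed).
const mysql = require("mysql2/promise");

async function fetchPage(varPageNo, varLimit) {
  const conn = await mysql.createConnection({ host: "localhost", user: "app", database: "mydb" });
  const offset = varPageNo * varLimit; // MySQL needs a concrete value here, not an expression
  const [rows] = await conn.query(
    "SELECT * FROM users LIMIT ? OFFSET ?",
    [varLimit, offset]
  );
  await conn.end();
  return rows;
}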
Hope this helps.

Related

RavenDB paging via cursor

Paging in RavenDB is done via skip+take. This is the default implementation, and I'm happy with it most of the time. However, for frequently changing data I want paging via a cursor. The cursor/after parameter specifies which record was displayed last and where the list should continue on the next page.
This should also work for data that can be dynamically sorted, so the sorting parameter is not fixed.
GitHub does it this way on the "stars" page, for example: https://github.com/[username]?after=Y3Vyc29&tab=stars
Any ideas how to achieve this in RavenDB?
There is no cursor pagination in RavenDB.
But you can use the '@last-modified' metadata to continuously iterate over frequently changing data:
from Orders as o
where o.'@metadata'.'@last-modified' > "2018-08-28T12:11"
order by o.'@metadata'.'@last-modified'
select {
    A: o["@metadata"]["@last-modified"]
}
You can also use Subscriptions.
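
As a rough sketch of that continuation pattern from the Node.js client, assuming the ravendb npm package, a database named Northwind, and a page size of 25 (all placeholders): the last-modified value of the final row of one page becomes the lower bound for the next.

// Cursor-style continuation over '@last-modified' (assumptions: ravendb npm client,
// database "Northwind", page size 25; pass an old timestamp such as
// "1970-01-01T00:00:00" for the first page).
const { DocumentStore } = require("ravendb");

const store = new DocumentStore("http://localhost:8080", "Northwind");
store.initialize();

async function nextPage(lastModifiedCursor) {
  const session = store.openSession();
  const rows = await session.advanced
    .rawQuery(
      "from Orders as o " +
      "where o.'@metadata'.'@last-modified' > $cursor " +
      "order by o.'@metadata'.'@last-modified' " +
      "limit 25"
    )
    .addParameter("cursor", lastModifiedCursor)
    .all();
  // The last row's timestamp becomes the cursor for the following page.
  // Note: identical timestamps at a page boundary can skip or repeat rows.
  const last = rows[rows.length - 1];
  const cursor = last ? session.advanced.getMetadataFor(last)["@last-modified"] : null;
  return { rows, cursor };
}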

Keyset pagination to jump to a specific page

Is keyset pagination on the front end only for next and previous? From what I have learned about it, I can just keep the boundary value N and use it for previous and next.
Let's say this is the query for page one:
SELECT * FROM nameTable ORDER BY id ASC LIMIT 10
and we save the last id as N.
Then for the next page: SELECT * FROM nameTable WHERE id > N ORDER BY id ASC LIMIT 10
and for the previous page, just use WHERE id < N?
What if the client wants to jump to page 10, or back 3 pages?
Can you tell me how to do that, and is it possible using keyset pagination?
Using keyset pagination you cannot jump to a given page.
You can only go to the first, last, previous, and next pages.
As explained by Laurenz, you can still skip a number of "pages" ahead from your current position, but I am not really sure what the use case for that would be.
The main objective of keyset pagination is to avoid OFFSET/skip-limit for large sets of data; if you want to jump to an exact page, you must use the OFFSET/skip keywords.
Normally, next and previous functionality combined with a good search gives a good enough user experience :)
If you want to get to the previous page, remember the lower bound for id as well as the upper bound.
To scroll 3 pages ahead, use LIMIT 10 OFFSET 20 instead of LIMIT 10. To jump to page X, calculate the difference between X and the current page and multiply that difference by the number of rows per page.
It's all pretty straightforward.
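
Here is a small sketch of those queries wired together, assuming MySQL via the mysql2 library and the nameTable from the question; note that the previous-page query reads in descending order and the rows are reversed back in code:

// Keyset pagination helpers (assumptions: a mysql2 pool, table nameTable, page size 10).
async function nextPage(pool, lastSeenId) {
  const [rows] = await pool.query(
    "SELECT * FROM nameTable WHERE id > ? ORDER BY id ASC LIMIT 10",
    [lastSeenId]
  );
  return rows;
}

async function prevPage(pool, firstSeenId) {
  // Walk backwards from the first id of the current page, then restore ascending order.
  const [rows] = await pool.query(
    "SELECT * FROM nameTable WHERE id < ? ORDER BY id DESC LIMIT 10",
    [firstSeenId]
  );
  return rows.reverse();
}

async function jumpAhead(pool, lastSeenId, pages) {
  // Hybrid: keyset anchor plus a small OFFSET that skips (pages - 1) full pages.
  const [rows] = await pool.query(
    "SELECT * FROM nameTable WHERE id > ? ORDER BY id ASC LIMIT 10 OFFSET ?",
    [lastSeenId, (pages - 1) * 10]
  );
  return rows;
}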

Select database table from arbitrary row number in RFC call

I can successfully handle selections from SAP tables using an RFC function module. The issue I'm facing is understanding the best practice for selecting data starting from an arbitrary row.
Example: the first RFC call fetches 1,000 records from KNA1 (I will log in a custom transparent table how many records have been retrieved so far).
A new RFC call should take the next 1,000 rows, starting from row 1000 up to row 2000. Is there an elegant way to deal with this situation?
Using a cursor is not possible, since between two consecutive calls of the same RFC the cursor value would be reset.
Otherwise I would have to select everything every time and pick out the requested rows by looping over the full data set, which consumes a lot of time.
Thanks for any suggestions!
Use OFFSET in the SELECT:
SELECT * FROM kna1
  WHERE ...
  ORDER BY ...
  INTO TABLE @DATA(lt_result)
  UP TO 1000 ROWS
  OFFSET @lv_offset.
If lv_offset contains 2000, for example, it will return rows 2001-3000 in that ordering (the OFFSET addition is available in newer ABAP releases).
According to the online help, you have to use ORDER BY in the SELECT; otherwise the paging would not be deterministic between calls.
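
On the caller's side, each RFC call then just passes the next offset. A hedged sketch from Node.js using SAP's node-rfc client; the function module Z_GET_KNA1_PAGE and its IV_OFFSET/IV_LIMIT/ET_KNA1 parameters are hypothetical stand-ins for a custom RFC wrapping the SELECT above:

// Paged RFC calls (assumptions: node-rfc client, a custom RFC Z_GET_KNA1_PAGE
// wrapping the OFFSET SELECT, with IV_OFFSET/IV_LIMIT importing parameters).
const { Client } = require("node-rfc");

const client = new Client({ dest: "MME" }); // destination from sapnwrfc.ini

async function fetchAllPages() {
  await client.open();
  let offset = 0;
  for (;;) {
    const result = await client.call("Z_GET_KNA1_PAGE", {
      IV_OFFSET: offset,
      IV_LIMIT: 1000,
    });
    if (result.ET_KNA1.length === 0) break; // no more rows
    // ... process result.ET_KNA1 ...
    offset += 1000; // the next call starts where this one ended
  }
  await client.close();
}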

Pulling a 33,000-record recordset took LESS execution time than using Count() in the SQL. How is that possible?

Thanks in advance for putting up with me.
Pulling a 33,000-record recordset from the database took LESS execution time than using Count() in the SQL and just grabbing 20 rows.
How is that possible?
A bit more detail:
Before, we were grabbing the entire recordset yet only displaying 20 rows of it on a page at a time for pagination. That was cringeworthy and wasteful, so I redesigned the page to grab only 20 rows at a time and to simply use an index variable to grab the next page, and so on.
All well and good, but that lacked a record count, which our people needed.
So after the record query, I added (what I thought would be) a quick query on just the index of the table, using the SQL Count() function.
A side-by-side comparison of the original page and my new page indicates my new page takes roughly 10% longer to execute than the original! I was flabbergasted; I thought for sure it would be lightning fast, way faster than the original.
Any thoughts on why and what I might do to remedy that?
Is it because the script has to run two queries, regardless of the data retrieved?
Update:
Here is the SQL.
(Table names and field names are fictionalized in this post for security, but the structure is the same as the real page).
The main recordset select query contains:
SELECT TOP 21
    roster_id, roster_pplid, roster_pplemailid, roster_emailid, roster_firstname,
    roster_lastname, roster_since, roster_pplsubscrid, roster_firstppldone, roster_pmtcurrent,
    roster_emailverified, roster_active, roster_selfcanceled, roster_deactreason
FROM roster
WHERE roster_siteid = 22
  AND roster_isdeleted = false
ORDER BY roster_id DESC
The record count query contains:
SELECT COUNT(roster_id)
FROM roster
WHERE roster_siteid = 22
  AND roster_isdeleted = false
The first query runs, then the second. The second is always built dynamically with the same matching WHERE filter.
I think I know why it is slower: I'm using GetRows to grab the recordset in the new page, and I was not using that in the old page. That seems to be the slowdown. But I have to use it, since otherwise I cannot stop at the 21st record.
Nick.McDermaid: The SQL shown is selecting the TOP 21 rows; that is how it grabs just 20 rows (number 21 is just there to populate the index for the "Next" page link).

Query result doesn't have the entire result at first

I'm using Oracle 11g R2. The result of this query:
SELECT u.object_name,u.object_type,t.owner,DBMS_METADATA.GET_DDL(object_type, object_name)
FROM user_objects u
inner join all_tables t
on u.object_name = t.table_name;
just shows me the first 50 rows; I need to scroll down in the query result tab to get the other results, and the query looks like it is still working while I scroll.
How can I fix it?
For SQL Developer, you can change the fetch size here:
Tools->Preferences->Database->Advanced
The first option is "Sql Array Fetch Size (Max 500)". The default is 50.
I'm not quite sure what the problem is or what a fix would look like.
The client application that you are using to run the query decides how many rows to fetch before displaying the data to you and whether to continue fetching data or to wait for you to request more rows. You don't say which client application you are using so it is hard to tell you whether or how to configure your particular client to behave differently. If you are using SQL Developer, there are settings that control how many rows to fetch so you can adjust the default from 50. Other GUIs likely have similar settings.
Alternatively, you could use a client application like SQL*Plus, whose default behavior is to fetch all the rows without trying to page through the results for a human.
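
To illustrate that the fetch size is a client setting rather than part of the query, here is a sketch using the node-oracledb driver (the connection settings are placeholders); its fetchArraySize option plays the same role as SQL Developer's array fetch size:

// The fetch size is a client-side setting, not part of the query (node-oracledb assumed).
const oracledb = require("oracledb");

async function runQuery() {
  const conn = await oracledb.getConnection({
    user: "scott", password: "tiger", connectString: "localhost/XEPDB1",
  });
  const result = await conn.execute(
    "SELECT u.object_name, u.object_type, t.owner FROM user_objects u " +
    "INNER JOIN all_tables t ON u.object_name = t.table_name",
    [],
    { fetchArraySize: 500 } // rows fetched per round trip; the default is 100
  );
  console.log(result.rows.length); // execute() buffers all rows unless a resultSet is used
  await conn.close();
}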