I want to use DataTables to show data to a user.
I read the documentation about "Server-side processing", but
I don't know PHP, so I can't follow what is going on in the example script.
How does the client-side code send the data to the server-side script?
And how does the server-side script know how many records it should return?
Please refer to the Server-side processing chapter in the DataTables manual. The length parameter determines how many records are requested, and the start parameter determines the index of the first record (zero-based).
Below is an excerpt from the manual:
start
Paging first record indicator. This is the start point in the current
data set (0 index based - i.e. 0 is the first record).
length
Number of records that the table can display in the current draw. It
is expected that the number of records returned will be equal to this
number, unless the server has fewer records to return. Note that this
can be -1 to indicate that all records should be returned (although
that negates any benefits of server-side processing!)
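To make this concrete, here is a minimal client-side sketch, assuming jQuery and DataTables are loaded and that /server_side_endpoint is a placeholder URL for whatever server-side script you write (it does not have to be PHP). With serverSide: true, DataTables itself sends the start and length parameters with every request:

// Minimal server-side processing setup. On every page change, search or
// sort, DataTables sends draw, start, length, search[value], order[...]
// etc. as request parameters to the URL below.
$(document).ready(function () {
  $('#example').DataTable({
    processing: true,
    serverSide: true,
    pageLength: 25,                     // sent to the server as "length"
    ajax: {
      url: '/server_side_endpoint',     // placeholder; any language can serve it
      type: 'GET'
    },
    columns: [
      { data: 'first_name' },           // column names are illustrative only
      { data: 'last_name' },
      { data: 'position' }
    ]
  });
});

The script behind that URL reads start and length, fetches just that slice of data, and replies with JSON containing draw, recordsTotal, recordsFiltered and data, as described in the same manual chapter.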
Good day everyone,
We are trying to use Google's Apigee Integration service to retrieve all the rows in a BigQuery table that have a certain value in a field.
This operation is quite easy to do, but problems arise when the result has more than 200 rows.
The problem is that when the integration connects to BigQuery, no listEntitiesPageToken value is returned, and no listEntitiesNextPageToken value either,
so I can't figure out how to navigate through the result pages.
Has anyone had the same problem? What do you suggest?
In the tutorial: "https://cloud.google.com/apigee/docs/api-platform/integration/connectors-task#configure-the-connectors-task" is write : "For example, if you are expecting 1000 records in your result set, you can set the listEntitiesPageSize to 100. So when the Connectors task runs for the first time, it returns the first 100 records, the next 100 records in the second run and so on."
And there is a tip: "Use the listEntitiesPageSize parameter in conjunction with the listEntitiesPageToken parameter to navigate through the pages."
I used the tutorial to understand how to use the loop task, and I understood that I should create a "sub-integration" which is called by a "main integration" for each element present in a list/array.
But what can I do, since these tokens are empty?
In Eloqua, can you send out an email to a contact list but version the "hero" image headline for each segment using dynamic content blocks?
And then can you do the reverse, have the main image remain the same, and dynamically populate products below that they've purchased in the past?
For scenario 1, yes, that is possible out of the box.
Scenario 2, however, is a bit more complicated and would generally require a third-party tool to provide this type of dynamic code generation based on a lookup table (in this case a line-item inventory or purchase history). Because a contact could have zero or more products (commonly stored as individual records in a CDO), you would generally need to aggregate or count the number of related records, then generate your HTML table and formatting around those record values, and be contextually aware of whether you are on the first or last record (to open and close the table).
Dynamic content does not have mathematical functions and would not be able to count those related records; this is something usually provided by a B2C system like SFMC using AMPscript, or dynamically generated through custom code and sent through a transactional SMTP service. You could stack multiple dynamic content blocks on top of each other, but your biggest limitation becomes the field merge, which only lets you select a record based on earliest/latest creation date or last modified date. That is not suitable if you have more than two records. A third-party service that provides a cloud content module for your email is your best bet.
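Purely as an illustration of what that custom code (outside Eloqua itself) ends up doing, here is a small sketch; the purchases array and its fields are made-up stand-ins for the related CDO records looked up for one contact:

// Illustrative only: builds the per-contact product table described above,
// opening the table before the first record and closing it after the last.
function buildProductTable(purchases) {
  if (purchases.length === 0) {
    return '<p>No previous purchases on record.</p>';
  }
  var rows = purchases
    .map(function (p) {
      return '<tr><td>' + p.productName + '</td><td>' + p.price + '</td></tr>';
    })
    .join('');
  return '<table><tr><th>Product</th><th>Price</th></tr>' + rows + '</table>';
}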
I have a web application in which I get data from my database and show it in a DataTable. I am facing an issue because the data I am fetching has too many rows (200,000). So when I run a query like select * from table_name;
my application gets stuck.
Is there a way to handle this problem with JavaScript?
I tried pagination, but I cannot figure out how I would do that, as DataTables creates pagination only for already-rendered data.
Is there a way I can run my query with pagination at the backend?
I came across the same problem when working with MongoDB and AngularJS, and I used server-side paging. Since you have a huge number of records, you can try the same approach.
Assume you are displaying 25 records per page.
Backend:
Get the total count of the records using a COUNT query.
Use select * from table_name LIMIT 25 OFFSET ${req.query.pageNumber*25} to query a limited set of records based on the page number (as sketched below).
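A minimal sketch of that backend, assuming Node with Express and the mysql2 package; the table name, connection settings and route are placeholders:

// Express route for server-side paging: only 25 rows per request leave the DB.
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({
  host: 'localhost', user: 'app', password: 'secret', database: 'mydb'
});

app.get('/records', async (req, res) => {
  const pageSize = 25;
  const pageNumber = parseInt(req.query.pageNumber, 10) || 0; // zero-based

  // Total count, so the frontend can cap its pageNumber variable.
  const [[{ total }]] = await pool.query('SELECT COUNT(*) AS total FROM table_name');

  // Fetch just the 25 rows for the requested page.
  const [rows] = await pool.query(
    'SELECT * FROM table_name LIMIT ? OFFSET ?',
    [pageSize, pageNumber * pageSize]
  );

  res.json({ total, pageSize, pageNumber, rows });
});

app.listen(3000);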
Frontend:
Instead of using DataTables, display the data in an HTML table itself.
Define buttons for the next page and the previous page.
Define a global variable in the controller/JS file for pageNumber.
Increment pageNumber by 1 when the next-page button is clicked and
decrement it by 1 when the previous-page button is pressed.
Use the result of the COUNT query to put an upper limit on the pageNumber
variable (if there are 200 records, the limit will be 200/25 = 8 pages); see the sketch below.
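A minimal browser-side sketch of those bullets, assuming the /records endpoint from the backend sketch above and a page containing a table with id "myTable" (with a tbody) plus buttons with ids "next" and "prev" (all names are placeholders):

// Global pageNumber, next/prev buttons, and an upper limit from the COUNT result.
let pageNumber = 0;   // zero-based current page
let lastPage = 0;     // filled in from the COUNT result

async function loadPage() {
  const res = await fetch('/records?pageNumber=' + pageNumber);
  const data = await res.json();
  lastPage = Math.ceil(data.total / data.pageSize) - 1;

  const tbody = document.querySelector('#myTable tbody');
  tbody.innerHTML = data.rows
    .map(row => '<tr>' + Object.values(row).map(v => '<td>' + v + '</td>').join('') + '</tr>')
    .join('');
}

document.querySelector('#next').addEventListener('click', () => {
  if (pageNumber < lastPage) { pageNumber++; loadPage(); }
});
document.querySelector('#prev').addEventListener('click', () => {
  if (pageNumber > 0) { pageNumber--; loadPage(); }
});

loadPage();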
So basically select * from table_name LIMIT 25 OFFSET ${req.query.pageNumber*25} limits the number of records to 25. With a zero-based page number, when req.query.pageNumber = 1 it skips the first 25 records and sends the next 25 (records 26-50); similarly, when req.query.pageNumber = 2 it skips the first 2*25 records and sends records 51-75.
There are two ways to handle this.
First way - handling paging on the client side
Get all the data from the database and apply custom paging in the browser.
Second way - handling paging on the server side
Each time, call the database and fetch only the records for the requested page, according to the page size.
You can use the LIMIT and OFFSET clauses for pagination in MySQL. I understand that fetching 200,000 (2 lakh) rows at a time slows performance. You mention that you have to use JS for this; to be clear, JS on the frontend alone is not going to help you. But since you have a web application, if that application runs on Node (as the server), then I can suggest an approach that can help you a lot.
Use two variables, named var_pageNo and var_limit. Now use a raw MySQL query such as
select * from <tbl_name> LIMIT var_limit OFFSET (var_pageNo * var_limit);
Write your code around this query, replacing the variables with the desired values (compute the offset in application code, since MySQL does not evaluate expressions like var_pageNo * var_limit inside LIMIT/OFFSET). This will make your queries faster and will fetch only the data within your specified limit.
Hope this is helpful.
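A short sketch of how that looks in Node, assuming the mysql2 package; tbl_name is a placeholder and the variable names mirror the ones above:

// Compute the offset in code; pass plain integers as bound parameters.
const mysql = require('mysql2/promise');

async function fetchPage(pool, varPageNo, varLimit) {
  const offset = varPageNo * varLimit;
  const [rows] = await pool.query(
    'SELECT * FROM tbl_name LIMIT ? OFFSET ?',   // tbl_name is a placeholder
    [varLimit, offset]
  );
  return rows;
}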
I'm writing an interface to query paginated data from an HBase table. I query the paginated data by some conditions, but it's very slow. My rowkey looks like this: 12345678:yyyy-mm-dd, i.e. 8 random digits plus a date. I tried using Redis to cache all the rowkeys and paginate there, but then it's difficult to query the data by the other conditions.
I also considered designing a secondary index in HBase and discussed it with colleagues; they think a secondary index is hard to maintain.
So, who can give me some ideas?
First thing: AFAIK, a random-number + date rowkey pattern may lead to hotspotting if you scale to large data.
Regarding pagination:
I'd suggest Solr + HBase (if you are using Cloudera, that's Cloudera Search). It gives good performance (proven in our case) when querying 100 records per page, and we populated an AngularJS dashboard with a web service call.
Also, most importantly, you can move back and forth between pages without any issues.
To achieve this, you need to create Solr collections (from the HBase data) and can use the SolrJ API.
HBase alone with the Scan API doesn't work well for quick queries.
Apart from that, please see my answer to "How to achieve pagination in HBase?", which goes into more detail on the implementation.
An HBase-only solution could be Hindex (a coprocessor-based solution); the linked page explains it, and the Hindex architecture, in more detail.
In HBase, to achieve good read performance you want your data retrieved by a small number of gets (requests for a single row) or a small scan (a request over a range of rows). HBase stores your data sorted by key, so the most important idea is to come up with a row key that allows this.
Your key seems to contain only a random integer and a date, so I assume that your queries are about pagination over records marked with a timestamp.
The first idea is that in a typical pagination scenario you access just one page at a time and navigate from page 1 to page 2 to page 3, and so on. Given that you want to paginate over all records for the date 2015-08-16, you could retrieve the first page with a scan of 50 rows using the start key '\0:2015-08-16' (which is smaller than any row key in 2015-08-16). After retrieving the first page you have its last key, say '12345:2015-08-16'. You can use it (or '12346:2015-08-16') as the start key of another 50-row scan to retrieve page 2, and so on. With this approach each page is fetched quickly, as a single scan with a predefined number of returned rows. You can pass the last row key of a page as a parameter to your paging API, or just put the last row key in Redis so the next paging API call will find it there.
All this works perfectly well until some user comes in and clicks directly on page 100, or jumps to page 5 while on page 2. In such a scenario you can use a similar scan that skips nSkippedPages * 50 rows. This will not be as fast as sequential access, but it is not the usual usage pattern. You can then use Redis to cache the last row key of each page result in a structure like pageNumber -> rowKey. If the next user then comes and clicks on page 100, they will see the same performance as in the usual click-page-1, click-page-2, click-page-3 scenario.
To make things faster for the user who clicks on page 99 for the first time, you could write a separate daemon which retrieves every 50th row and puts the results into Redis as a page index. Launch it every 10-15 minutes, and accept that your page index has at most 10-15 minutes of stale data.
You can also design a separate API which preloads the row keys for a bulk of N pages (say about 100 pages; it could be async, i.e. it doesn't wait for the actual preload to complete). It just runs a scan with a KeyOnlyFilter and 50*N results, then selects the start row key for each page. So it accepts a row key and populates the Redis row-key cache for N pages. Then, when a user lands on the first page, you fetch the first 100 pages' row keys for them, so that when they click on some page link shown on the page, that page's start row key is already available. With the right preload bulk size you can approach your required latency.
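A rough JavaScript sketch of that "last row key as cursor" bookkeeping. scanRows(startKey, limit) is a hypothetical stand-in for whatever HBase access layer you use (Thrift/REST gateway, a small proxy service, etc.), and the in-memory Map stands in for the Redis pageNumber -> rowKey cache:

// Hypothetical helper: scans `limit` rows starting at `startKey` (inclusive)
// and resolves to an array of { rowKey, ... } objects.
async function scanRows(startKey, limit) {
  throw new Error('replace with your real HBase access layer');
}

const PAGE_SIZE = 50;
const pageStartKeys = new Map();          // stands in for the Redis page-index cache
pageStartKeys.set(1, '\0:2015-08-16');    // smaller than any real key for that date

async function getPage(pageNumber) {
  if (pageStartKeys.has(pageNumber)) {
    // Fast path: a single bounded scan from the cached start key.
    const rows = await scanRows(pageStartKeys.get(pageNumber), PAGE_SIZE);
    cacheNextStart(pageNumber, rows);
    return rows;
  }
  // Slow path (user jumped straight to e.g. page 100): walk forward from the
  // nearest cached page, caching each page's start key along the way.
  let known = pageNumber;
  while (!pageStartKeys.has(known)) known--;
  let rows;
  for (let p = known; p <= pageNumber; p++) {
    rows = await scanRows(pageStartKeys.get(p), PAGE_SIZE);
    cacheNextStart(p, rows);
  }
  return rows;
}

function cacheNextStart(pageNumber, rows) {
  if (rows.length === PAGE_SIZE) {
    // The next page starts just after the last key of this page.
    pageStartKeys.set(pageNumber + 1, rows[rows.length - 1].rowKey + '\0');
  }
}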
Limit could be implemented using Scan.setMaxResults() or using PageFilter.
"skip nPages * 50 rows" and especially "output every 50th row" functionality seems to be trickier e.g. for latter you may end-up performing full scan which retrieves the keys or writing map-reduce to do it and for first it is not clear how to do it without sending rows over network since request can be distributed across several regions.
If you are looking for secondary indexes that are maintained in HBase there are several open source options (Splice Machine, Lilly, etc.). You can do index lookups in a few milliseconds.
I am trying to get information about files in a folder using https://apis.live.net/v5.0/folderid/files?
This particular folder of mine has around 5200 files, so I am getting a read timeout when I make the above-mentioned request. Is there any restriction on the number of files for which I can make the request?
Note: I am able to successfully retrieve the file information from the folder if I restrict the file count, e.g. to 500 with https://apis.live.net/v5.0/folderid/files?limit=500
In general it's good to page queries that could potentially return a large number of results. You could try using the limit query parameter in combination with the offset query parameter to read sets of the children at a time and see if that works better for you.
I'll quote in the relevant information from the documentation for ease of reference:
Specify the first item to get by setting the offset parameter in the preceding code to the index of the first item that you want to get. For example, to get two items starting with the third item, use FOLDER_ID/files?limit=2&offset=3.
Note In the JavaScript Object Notation (JSON)-formatted object that's returned, you can look in the paging object for the previous and next structures, if they apply, to get the offset and limit parameter values of the previous and next entries, if they exist.
You may also want to consider switching to the new API, which has its own paging model (using next links).
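For the v5.0 endpoint discussed above, a small sketch of reading the folder in pages rather than in one huge request; FOLDER_ID and the access token are placeholders you must supply, and the loop simply follows the paging.next URL mentioned in the quoted documentation until no more pages remain:

// Walks a large folder page by page using limit/offset-based paging.
async function listAllFiles(folderId, accessToken) {
  const files = [];
  let url = 'https://apis.live.net/v5.0/' + folderId +
            '/files?limit=500&access_token=' + accessToken;

  while (url) {
    const response = await fetch(url);
    const page = await response.json();
    files.push(...page.data);

    // When more items remain, the response's paging object carries a
    // prebuilt URL for the next chunk (per the documentation quoted above).
    url = page.paging && page.paging.next ? page.paging.next : null;
  }
  return files;
}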