DataTables: If I decide to use lazy loading, would search show incorrect data?

I have a large amount of text information which I'll be loading in a table.
I want the user to search through it, using datatables search.
I wasn't doing lazy loading earlier, but I'm thinking of using it now.
However, if I lazy load and the user searches for data, he/she won't be able to see everything, since the data isn't completely loaded.
Am I guessing this correctly, or does DataTables work around this somehow?

There are two processing modes, client-side and server-side; see Processing modes for more information.
Client-side processing - the full data set is loaded up-front and searching/filtering/pagination is done in the browser.
Server-side processing - an Ajax request is made for every table redraw, with only the data required for each display returned. Searching/filtering/pagination is performed on the server.
There is also the Scroller extension, a virtual rendering plug-in for DataTables that allows large datasets to be drawn on screen very quickly.
Along with server-side processing, it can be used to lazy load a large amount of data. When the user searches, a request is made to the server to search the whole dataset, and only the subset needed for the current display is returned.
See Server-side processing for more information on request and response in server-side processing mode.
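As a rough illustration of server-side processing, here is a minimal sketch of an endpoint that answers DataTables' requests. The ASP.NET Core backend and the in-memory `_rows` array are assumptions made purely for a concrete example (the question doesn't say what the server is); the request parameters (draw, start, length, search[value]) and the response shape (draw, recordsTotal, recordsFiltered, data) follow the standard server-side processing protocol.

```csharp
// Minimal sketch of a DataTables server-side processing endpoint.
// The hypothetical _rows array stands in for a real data source.
using System.Linq;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class TableDataController : ControllerBase
{
    private static readonly string[] _rows =
        Enumerable.Range(1, 10_000).Select(i => $"row {i}").ToArray();

    [HttpGet]
    public IActionResult Get(int draw, int start, int length,
                             [FromQuery(Name = "search[value]")] string search = "")
    {
        // Search runs against the whole dataset on the server...
        var filtered = string.IsNullOrEmpty(search)
            ? _rows
            : _rows.Where(r => r.Contains(search)).ToArray();

        // ...but only the slice for the current page is sent back.
        return Ok(new
        {
            draw,
            recordsTotal = _rows.Length,
            recordsFiltered = filtered.Length,
            data = filtered.Skip(start).Take(length)
        });
    }
}
```

This is why search results stay complete under lazy loading: the browser only ever holds one page, but every search is evaluated against the full dataset on the server.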

Related

ASP.NET Core Controller Task timing out

I have an ASP.NET Core 3.1 website that imports data. Prior to importing, I want to run some validation on the import. So the user selects the file to import and clicks 'Validate', then either gets some validation error messages so they can fix the import file, or is allowed to import.
The problem I am running into is the length of time these validation and import processes take. If the file is small, everything works as expected. If the file is larger (over 1,000 records), the validation and/or import may take several minutes. On my local machine, or a server on my network, this works fine. On my actual public-facing website, I am getting:
503 first byte timeout
So, I need some strategies for getting around this. Turning up the timeout seems like a rabbit hole. It looks like BackgroundService/IHostedService is probably the best way to go, but I can't seem to find an example of how to do this in the way I would like (a rough sketch of this flow follows the list below):
Call "Validate" via AJAX
Turn on a loader
Perform validation
Turn off loader
Display either success or list of errors to user
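A rough sketch of that flow, assuming the queue-and-poll pattern with a BackgroundService (all names are hypothetical; persistence, error handling, and passing the uploaded file along are omitted for brevity):

```csharp
// Sketch: the AJAX call enqueues a job and returns immediately, so the
// request never hits the proxy timeout; the page polls for the result.
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Hosting;

public class ValidationQueue
{
    public ConcurrentQueue<Guid> Pending { get; } = new ConcurrentQueue<Guid>();
    public ConcurrentDictionary<Guid, string> Results { get; } =
        new ConcurrentDictionary<Guid, string>();
}

public class ValidationWorker : BackgroundService
{
    private readonly ValidationQueue _queue;
    public ValidationWorker(ValidationQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            if (_queue.Pending.TryDequeue(out var jobId))
            {
                // Run the slow validation here, then record the outcome.
                _queue.Results[jobId] = "ok"; // or a serialized error list
            }
            await Task.Delay(500, stoppingToken);
        }
    }
}

[ApiController]
[Route("api/validate")]
public class ValidateController : ControllerBase
{
    private readonly ValidationQueue _queue;
    public ValidateController(ValidationQueue queue) => _queue = queue;

    // Called via AJAX when the user clicks 'Validate'; turn the loader on.
    [HttpPost]
    public IActionResult Start()
    {
        var jobId = Guid.NewGuid();
        _queue.Pending.Enqueue(jobId);
        return Ok(new { jobId });
    }

    // Polled while the loader is shown; turn the loader off and show
    // success or the error list once done is true.
    [HttpGet("{jobId}")]
    public IActionResult Status(Guid jobId) =>
        _queue.Results.TryGetValue(jobId, out var result)
            ? Ok(new { done = true, result })
            : Ok(new { done = false });
}
```

ValidationQueue would be registered as a singleton and ValidationWorker via AddHostedService in ConfigureServices.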
UPDATE:
Validation -
Ajax call to controller
Controller calls Business Logic code
a. Check file extension
b. Check file size
c. Read in .csv with CsvHelper
d. Check that all required columns are present
e. Check that required columns contain valid data - length, no whitespace, valid zip code, valid phone, etc.
f. Check for internal duplicates
g. If append (as opposed to overwrite), check for duplicates in the database - this is the slow step
So, would a better solution be to speed up the validation process? Is BackgroundService overkill?
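On speeding up step (g): one common approach, assuming the duplicates are matched on a single key column (the names below are hypothetical), is to fetch the existing keys once and compare in memory rather than issuing one database query per CSV row:

```csharp
// Sketch: batch the duplicate check. existingKeys would come from one
// query (e.g. db.Customers.Select(c => c.Email)) instead of per-row lookups.
using System.Collections.Generic;
using System.Linq;

public static class DuplicateCheck
{
    public static List<string> FindDuplicates(
        IEnumerable<string> importedKeys, IEnumerable<string> existingKeys)
    {
        // One pass to build the set, then O(1) membership tests.
        var existing = new HashSet<string>(existingKeys);
        return importedKeys.Where(existing.Contains).ToList();
    }
}
```

If the table is too large to pull all keys, querying in chunks (e.g. a WHERE ... IN (...) per batch of imported rows) is a middle ground.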

Pulling summary report for monitoring using reporting task in NiFi

I'm working on a piece of the project where a report needs to be generated with all the flow details (memory used, number of records processed, processes that ran successfully or failed, etc.). Most of the details are present on the Summary tab, but the requirement is to have separate reports.
Can anyone help me with a solution/steps/examples/screenshots/videos?
Thanks much.
Every underlying behavior of the UX/UI that Apache NiFi provides is also accessible through an API (in fact, the UI calls the API to perform each of these tasks). So you can invoke the GET /system-diagnostics API to return that information in JSON form, and then parse this data and present it in whatever form you like.
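For example, here is a minimal sketch of calling that endpoint, assuming an unsecured NiFi at localhost:8080 (add authentication for a secured instance, and verify the exact field names against your NiFi version's response):

```csharp
// Fetch NiFi system diagnostics as JSON and print a couple of fields.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class NiFiDiagnostics
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var json = await client.GetStringAsync(
            "http://localhost:8080/nifi-api/system-diagnostics");

        using var doc = JsonDocument.Parse(json);
        var snapshot = doc.RootElement
            .GetProperty("systemDiagnostics")
            .GetProperty("aggregateSnapshot");

        // Pull out a few values for the report; the full document covers
        // heap, flow file storage, content storage, processor load, etc.
        Console.WriteLine($"Heap used: {snapshot.GetProperty("usedHeap").GetString()}");
        Console.WriteLine($"Processors: {snapshot.GetProperty("availableProcessors").GetInt32()}");
    }
}
```

The same pattern applies to the other status endpoints the UI uses; watching the browser's network tab while viewing the Summary tab shows exactly which calls to replicate.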

MVC4: passing data between paging action and views

I have an action which invokes a service (not a database) to get some data for display, and I want to do paging on this data. However, every time a second page is clicked, it invokes this action and of course invokes the service again, even though the first request already generated the whole data set, including what the second page needs. I just want to invoke the service once, get all the data, and avoid invoking the service again when paging. How can I deal with that? Hope someone could give me a hint.
There are several ways to address this. If it's practical and the amount of data is limited, it's OK to return the entire data set in the first request.
If that's your case I would consider returning a pure JSON object when you load the page initially. You can then deserialize this into a JS object variable on the web page that you can perform your paging operations against. This is an example of client side paging where all the data exists client side.
Another approach is to do Ajax based paging where you request the data for the next page as needed. I would still recommend returning JSON in this scenario as well.
The two approaches differ in that the first one returns all the data upfront, whereas the second one only returns what you need to render any given page.
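Here is a minimal sketch of both options in an MVC 4 controller, with a hypothetical SomeService standing in for the external service call:

```csharp
// Sketch: approach 1 returns everything once; approach 2 returns one page
// per Ajax request. SomeService is a stand-in for the real service.
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class SomeService
{
    public IList<string> GetAll()
    {
        return Enumerable.Range(1, 100).Select(i => "row " + i).ToList();
    }
}

public class ReportController : Controller
{
    private readonly SomeService _service = new SomeService();

    // Approach 1: the page stores this JSON in a JS variable and
    // pages through it entirely client-side - one service call total.
    public JsonResult AllData()
    {
        return Json(_service.GetAll(), JsonRequestBehavior.AllowGet);
    }

    // Approach 2: Ajax paging - each request returns just one page.
    public JsonResult Page(int page = 1, int pageSize = 20)
    {
        var rows = _service.GetAll()
                           .Skip((page - 1) * pageSize)
                           .Take(pageSize)
                           .ToList();
        return Json(rows, JsonRequestBehavior.AllowGet);
    }
}
```

Since the asker wants to avoid repeated service calls, approach 1 fits best, provided the full result set is small enough to send to the browser.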

Yii: Does renderPartial cache the content of view files?

I need to render and display instances of different models (comments, polls, etc.) on the same page. These instances are sorted by date, so there may be a couple of comments, then a poll, then more comments, then another poll, and so on.
So I am calling renderPartial in a loop. I am afraid this could be slow, since each renderPartial needs to read a file from disk.
So my question is: does renderPartial cache the content of the file somewhere in memory during one HTTP request, so that calling renderPartial multiple times will not touch the disk every time?
Yii will not cache anything until you instruct it to do so. You can cache the whole page, or just a fragment of it; it's up to you. Follow Yii's caching tutorial to find out how to achieve that.
I'm not a sysadmin or anything, but I'm pretty sure the OS will cache the file, so you don't need to worry about that.
Here is one alternative: you can prepare a set of all the data (as an array, for example, or some custom structure/object), then change your view code to loop through it. This way you can produce the output in a single renderPartial call.

WCF Paged Results & Data Export

I've walked into a project that is using a WCF service for the data tier. Currently, when data is needed for a grid, all rows are returned and the results are bound to a grid and the dataset is stuffed into a session variable for paging/sorting/rebinding. We've already hit a max message size problem, so I'm thinking it's time to convert from fetch and cache to fetch only the current page.
Face value this seems easy enough, but there's a small catch. The user is allowed to export the entire result set at any point. This means that for grid viewing purposes fetching the current page is fine, but when they want to do an export, I still need to make a call for all data.
This puts me back into the max message size issue. What is the recommended approach for this type of setup?
We are currently using the wsHttpBinding...
Thanks for any assistance.
I think the recommended approach for large files is to use WCF streaming. I'm not sure of the exact details of your scenario, but you could take a look at this as a starting point:
http://msdn.microsoft.com/en-us/library/ms789010.aspx
I would probably do something like this in your case (a sketch of the contract follows these two points):
create a service with a "paged" GetData() method - where you specify the page index and the page size as additional parameters. This should give you a nice clean interface for "regular" use, and that should not hit the maxMessageSize limits
create a second service (or method) that would send all data - ideally, you could bundle that up into a ZIP file or something on the server, before sending it. If that ZIP file is still too large, you might want to check out WCF streaming for handling large files, as Andy already pointed out
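A sketch of what that contract could look like (the names here are hypothetical; note that wsHttpBinding does not support streaming, so the export operation would sit on a separate endpoint using something like basicHttpBinding with transferMode="Streamed"):

```csharp
// Sketch: paged fetch for the grid plus a streamed full export.
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical DTO for one grid row.
[DataContract]
public class GridRow
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface IGridDataService
{
    // Only one page crosses the wire per call, so each message
    // stays comfortably under the size limit.
    [OperationContract]
    IList<GridRow> GetData(int pageIndex, int pageSize);

    // Full export: write the result set (e.g. a ZIP) server-side and
    // return it as a stream from a streaming-enabled endpoint.
    [OperationContract]
    Stream ExportAll();
}
```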
The maxMessageSize limit is in place for a good reason: to avoid denial-of-service attacks where a WCF service would just get flooded with large messages and thus brought to its knees. If you can, always keep that in mind and don't just jack up maxMessageSize to 2 GB - it might come back to bite you :-)