dgrid's OnDemandGrid keeps sending requests to the server

I have an OnDemandGrid set up to display product data (called parts) for a project I am working on. I currently have only two entries in the product database.
My OnDemandGrid is set up with only the basic options: store and columns. I am hoping it will be a virtual-scrolling grid. The store is a JsonRest store wrapped with Cache.
What happens when I open the page and start up the grid is that the grid keeps sending requests to the server for data continuously, approximately 2 requests per second.
I also notice that, for a grid with only two rows, it has a scrollbar on the right. When I try to use this scrollbar to scroll, the grid seems to flicker and reset itself, many times.
I suspect the virtual-scroll feature is doing something funky, somehow not acknowledging that there are only two entries. Can someone help me out with this? I am willing to provide more details should that be necessary.
Here is my code by the way:
require(["dgrid/OnDemandGrid", "dojo/store/Memory", "dojo/store/Cache"], function(OnDemandGrid, Memory, Cache){
var partsCache = new Memory();
App.Store.parts = new Cache(partsMaster, partsCache);
var grid = new OnDemandGrid({
store: App.Store.parts,
columns: {
name:'Part Name',
part_no:'Part Number'
},
}, "grid");
grid.startup();
})
partsMaster is a JsonRest store defined earlier in the code (global at the moment, while I take the grid for a spin). I've done some tests to fairly confidently rule out JsonRest itself as the issue.
Here is a screenshot of the grid as it currently appears (note the existence of the scrollbar):
Any help is appreciated!
EDIT: attached is a screenshot of the first request's response headers from Chrome:

Based on the screenshot it looks like your response is not including the Content-Range header, which is what dojo/store/JsonRest uses to inform itself of the total number of results in the set. While I'm not sure that alone will cause your infinite-querying problem, it will definitely cause a problem.
The Content-Range header should look like e.g. Content-Range: items 0-24/500 (assuming 500 is the total number of items in the result set).
See http://dojotoolkit.org/reference-guide/1.9/quickstart/rest.html for more information on how JsonRest expects services to behave.
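To illustrate, here is a minimal Express sketch of the contract JsonRest expects; the /parts/ route and the in-memory array are placeholders, not assumptions about your actual service:

var express = require("express");
var app = express();
var parts = [{id: 1, name: "Widget", part_no: "W-1"},
             {id: 2, name: "Gadget", part_no: "G-2"}];

app.get("/parts/", function(req, res){
    // dojo/store/JsonRest requests pages with a header like "Range: items=0-24"
    var m = /items=?(\d+)-(\d+)/.exec(req.headers.range || "");
    var start = m ? +m[1] : 0;
    var end = m ? +m[2] : parts.length - 1;
    var page = parts.slice(start, end + 1);
    // The total after the "/" is what lets the store (and the grid) know when to stop
    res.set("Content-Range",
            "items " + start + "-" + (start + page.length - 1) + "/" + parts.length);
    res.json(page);
});

app.listen(3000);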
If this doesn't completely solve the problem, I'd also be curious to verify that the response body is indeed yielding the correct subset of results.
Edit: based on an interaction I had on a dgrid issue today, the problem could be that your service is actually returning the incorrect number of results for the query. See these comments on #691.

Related

DOM not refreshing with Selenium (IE, Chrome and VBA)

I'm using Selenium Basic to collect data from a website and store it in a database. The page I'm scraping is dynamic and loads more information as you scroll. I've been able to address most of this by using implicit/explicit waits, etc.
I am capturing all the IDs necessary to create the click action, which opens another JavaScript popup where I collect more information. However, even though I've been able to get these new IDs as the page loads by scrolling, when the app uses one of those new IDs to click, I get an error saying the element cannot be found. This is preventing me from opening the JavaScript windows for these newly loaded rows.
When I go to collect this new data, the elements don't exist even though I was able to get the IDs for them.
When I look at the DOM in the browser and page source, all of it is there, so I don't believe it's an issue of letting the browser load.
I've tried utilizing the wait methods (implicit/explicit)...I've even put hard 60-second waits into the routine. No matter what I do, the routine bombs out after the first 10 rows because it can't find the elements for the data it found after scrolling. I've also tried this in Chrome.
Unfortunately, the website needs to be private, so I can't provide the full code. The issue that's happening comes here:
driver.FindElementByXPath("//*[contains(text(),'" & DBA!ParseID & "')]").Click
The error I get is "Element not found for XPath //*[contains(text(),'ID12345')]".
ParseID is the ID found from parsing elements within the body tag. So I am able to collect all the IDs after loading all the data, but when it goes to click using the above code, it only works for the first 10 rows. Everything loaded after that will not work (even though it has been loaded in the browser for quite some time).
What I should get is, say, 20 IDs that produce 20 clicks on JavaScript pop-ups to gather more information. Instead, I get all 20 IDs but can only click the first 10, even though I've loaded the entire page.
This issue hasn't been resolved the way I initially expected, but I've accomplished what I needed through a different, more efficient approach.
First, when I researched this further by removing certain IDs from my loop, I noticed that this didn't really have much to do with data updating in the DOM or browser, but rather with the ID itself not being found, for a (still) unknown reason. It actually seems very arbitrary why it bombs out. The ID matches the ID in the DOM, but when the string is passed into the XPath, it can't be found in the DOM. I'm not sure why this would occur unless the string is somehow breaking when passed, but I'll just let that one remain mysterious until someone smarter comes along!
What I did to accomplish what I needed was to loop through the actual class N times and pull the elements I needed from within those classes. Rather than using the ID above as a unique identifier, I used the count of class web elements as the identifier. This worked with 90% less code.
Thank you all!

Force GET_ONE request when navigating to Show page

As I believe is common in many APIs, ours returns a subset of fields for a record when it is part of a List request, and more details when it is a single-record request to its Show endpoint.
It seems that react-admin attempts to avoid doing a second request when loading a Show page (possibly re-using the record data from List?), which results in missing data. Refreshing the page fixes this, but I'm wondering if there is a setting that will force a GET_ONE request on every Show page load.
There is no setting for that. However, this should be achievable with a custom saga which listens for the LOCATION_CHANGE action (from react-router-redux) and dispatches a refreshView action (from react-admin) when the new location pathname ends with /show.
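A minimal sketch of such a saga, assuming react-admin 2.x with its redux-saga and react-router-redux dependencies:

import { LOCATION_CHANGE } from "react-router-redux";
import { takeEvery, put } from "redux-saga/effects";
import { refreshView } from "react-admin";

// Dispatch refreshView whenever the route changes to a Show page
function* handleLocationChange(action) {
    if (action.payload.pathname.endsWith("/show")) {
        yield put(refreshView());
    }
}

export default function* refreshOnShowSaga() {
    yield takeEvery(LOCATION_CHANGE, handleLocationChange);
}

You would then register the saga through the Admin component's customSagas prop.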
Edit: however, this is very strange. We only use the data we already got from the list for optimistic display, but we still issue a GET_ONE request when navigating to a show page from the list. Do you have a CodeSandbox showing your issue?

ag-Grid server-side sorting with pagination

I'm really confused about how to implement server-side sorting with ag-Grid while also having pagination and infinite scrolling. I have the following use cases specified for our application's ag-Grid implementation:
5 page sizes (20, 50, 100, 200, All)
All = grid height of 300 rows and infinite scrolling
Each specific page size means each API call retrieves a number of items equal to the page size. So, for example, with a page size of 50, each page's API call retrieves 50 items.
The above means that navigating to a new page of the grid triggers a new API call to retrieve data.
Based on all of this, I also need to implement server-side sorting. What I need to do is the following:
User clicks a column header
Header click triggers a function that calls our API with a sort parameter and returns results
Grid refreshes with new (sorted) results
From what I've read so far, the two primary requirements are that the sorting and enableServerSideSorting options are set to true. However, I'm not sure what to do after that. Is it possible to have ag-Grid's getRows callback call my API each time, instead of looking only at cached results?
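From the docs, it looks like getRows receives the current sort state via params.sortModel, so I imagine a datasource along these lines could forward everything to the server (the endpoint and response shape here are made up):

var dataSource = {
    getRows: function(params) {
        // params.sortModel looks like [{colId: "name", sort: "asc"}]
        var query = "startRow=" + params.startRow +
                    "&endRow=" + params.endRow +
                    "&sort=" + encodeURIComponent(JSON.stringify(params.sortModel));
        fetch("/api/items?" + query)
            .then(function (response) { return response.json(); })
            .then(function (data) {
                // The total row count lets the grid size its scrollbar correctly
                params.successCallback(data.rows, data.totalCount);
            })
            .catch(function () { params.failCallback(); });
    }
};
// registered via gridOptions.datasource with rowModelType: "infinite"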
I'm just looking for the conventional way to handle a situation like this. Any help is appreciated, and thank you in advance.

Inconsistent decoding of image URL via 'URL.createObjectURL()'

I have a WinJS app where the user can choose, via a dropdown menu, to view one of several images.
During the dropdown's onchange event, I'm using:
URL.createObjectURL(file, { oneTimeOnly: true })
to generate a URL which I set as an HTML <img> element's src property.
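In outline, the handler looks something like this (the element IDs and the selectedFiles array are simplified placeholders for my actual lookup code):

var imagePicker = document.getElementById("imagePicker");
imagePicker.onchange = function () {
    // placeholder: however the chosen StorageFile is actually looked up
    var file = selectedFiles[imagePicker.selectedIndex];
    // oneTimeOnly revokes the blob URL automatically after its first use
    var url = URL.createObjectURL(file, { oneTimeOnly: true });
    document.getElementById("preview").src = url;
};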
Whilst this works 'some' of the time, the behaviour is very inconsistent; a high proportion of the time, I get the error:
DOM7009: Unable to decode image at URL: 'blob:7743DB87-59F8-4750-B3A2-3505518CA7CB'.
Obviously the blob URL varies.
This doesn't seem related to the actual images: during testing I'm using only three images, all of which load at times, yet all three fail to load at other times.
Am I using the wrong approach here, or could something else be the problem? Any thoughts greatly appreciated.

Problem with Dojo Tree

I have implemented a Dojo tree. It works fine until a certain depth of sub-trees/sub-nodes. After fetching 250-300 nodes, it gives the error message: "A script on this page is causing Internet Explorer to run slowly. If it continues to run, your computer may become unresponsive." What is the problem here?
In your case it seems that the data being loaded is causing too much JavaScript execution, which crosses the browser's script-execution threshold.
I have faced this type of issue when a DojoX Grid loaded more than 500 records; the way to work around it is to load only the relevant data (one page at a time) on the client. On scrolling the grid, you can fetch the next page.
In your case, you can defer loading sub-trees and leaves until the user clicks to expand a node. There may be other data stores in Dojo that provide such behaviour.
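For example, in newer Dojo versions (1.8+), dijit/Tree with dijit/tree/ObjectStoreModel over a JsonRest store will only fetch children when a node is expanded. A rough sketch, where the /nodes/ endpoint, the hasChildren flag, and the "treeNode" DOM id are assumptions:

require([
    "dojo/store/JsonRest", "dijit/tree/ObjectStoreModel", "dijit/Tree", "dojo/domReady!"
], function(JsonRest, ObjectStoreModel, Tree){
    var store = new JsonRest({
        target: "/nodes/",
        getChildren: function(object){
            // one request per expanded node, so the full tree is never built at once
            return this.query({ parent: object.id });
        }
    });
    var model = new ObjectStoreModel({
        store: store,
        query: { id: "root" },
        mayHaveChildren: function(item){ return item.hasChildren; }
    });
    new Tree({ model: model }, "treeNode").startup();
});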