I have a CListView with pagination showing all results. Everything works fine.
I have a search widget on the page that renders a partial view to replace the existing list with search results.
The first page of search results loads, looks great, and even shows the correct number of results in the pagination, but if I try to go to one of the next pages, items from the original unfiltered list are loaded.
Does anyone know what I have to do in order to fix this? Do the search results need their own full view rendered?
Thank you in advance.
I was mistaken.
The pagination was not linking back to the previous result set; it was loading everything, which the previous unfiltered request also does, which is why I was mistaken.
I thought that the data provider in the controller kept track of the result set, but you have to resend the same criteria on every request. I was sending the criteria on the first request but not on subsequent ones, so in the absence of any constraints it loaded everything. I simply put the search parameters in the session and retrieved them on each request so they could be applied every time, and that fixed the problem.
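For anyone running into the same thing, here is a minimal sketch of the approach, assuming a Yii 1.x controller action and an illustrative Post model (the names 'q', 'searchTerm', and 'title' are placeholders, not from my actual code):

public function actionSearch()
{
    if (isset($_GET['q'])) {
        // First request: remember the criteria so later paging requests can reuse them
        Yii::app()->session['searchTerm'] = $_GET['q'];
    }
    $term = Yii::app()->session['searchTerm'];

    $criteria = new CDbCriteria();
    if ($term !== null) {
        // Reapply the stored constraint on every request, including page changes
        $criteria->compare('title', $term, true);
    }

    $dataProvider = new CActiveDataProvider('Post', array(
        'criteria' => $criteria,
    ));

    $this->render('search', array('dataProvider' => $dataProvider));
}

One thing to watch: the stored term should be reset when a fresh, unfiltered list is requested, or the old constraint will stick around.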
In short, there is no bug, just me being new to working with Yii.
I'm using Selenium Basic to collect data from a website and store it in a database. The page I'm scraping is dynamic and loads more information as you scroll. I've been able to address most of this by using implicit/explicit waits, etc.
I am capturing all the IDs necessary to create the click action, which opens up another javascript popup for me to collect information there. However, even though I've been able to get these new IDs when the page loads by scrolling, when the app uses that new ID to click, I'm getting an error saying the element cannot be found. This is preventing me from opening up the javascript windows for these newly loaded rows.
When I go to collect this new data, the elements don't exist even though I was able to get the IDs for them.
When I look at the DOM in the browser and page source, all of it is there, so I don't believe it's an issue of letting the browser load.
I've tried utilizing the wait methods (implicit/explicit)... I've even put hard 60-second waits into the routine. No matter what I do, the routine bombs out after the first 10 rows because it can't find the elements for the data it found after scrolling. I've also tried this in Chrome.
Unfortunately, the website needs to be private, so I can't provide the full code. The issue that's happening comes here:
' Click the element whose text contains the ID parsed earlier
driver.FindElementByXPath("//*[contains(text(),'" & DBA!ParseID & "')]").Click
The error I get is: Element not found for XPath("//*[contains(text(),'ID12345')]")
ParseID is the ID found from parsing elements within the body tag. So, I am able to collect all the IDs after loading all the data, but when it goes to click using the code above, it only works for the initial 10 rows. Everything loaded after that will not work (even though it has been loaded in the browser for quite some time).
What I should be getting is, say, 20 IDs that can drive 20 clicks to javascript pop-ups to get more information. Instead, I do get all 20 IDs but can only click on the first 10, even though the entire page has loaded.
This issue hasn't been resolved the way I initially expected, but I've accomplished what I needed in a different and more efficient way.
First, when I researched this further by removing certain IDs from my loop, I noticed that it didn't really have much to do with data updating in the DOM or browser, but rather with the ID itself not being found, for a (still) unknown reason. It actually seems very arbitrary why it bombs out. The ID matches the ID in the DOM, but when the string is passed into the XPath, it can't be found in the DOM. I'm not sure why this would occur unless the string somehow breaks when being passed, but I'll just let that one remain mysterious until someone smarter comes along!
What I did to accomplish what I needed is loop through the elements of the relevant class N times and pull what I needed from within each one. Rather than using the ID above as a unique identifier, I used the index into the collection of class web elements as the identifier, roughly as sketched below. This worked with 90% less code.
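A rough sketch of the final shape of the loop (Selenium Basic syntax from memory; "result-row" is a placeholder class name, and the popup handling is elided):

Dim rows As WebElements
Dim i As Integer

' Collect every row element of the target class after the page has fully loaded
Set rows = driver.FindElementsByClass("result-row")

For i = 1 To rows.Count
    ' Use the position in the collection instead of the parsed ID
    rows(i).Click
    ' ... collect the data from the popup here, then close it ...
Next i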
Thank you all!
As I believe is common in many APIs, ours returns a subset of fields for a record when it is part of a List request, and more details when it is a single-record request to its Show endpoint.
It seems that react-admin attempts to avoid doing a second request when loading a Show page (possibly re-using the record data from List?), which results in missing data. Refreshing the page fixes this, but I'm wondering if there is a setting that will force a GET_ONE request on every Show page load.
There is no setting for that. However, this should be achievable with a custom saga which listens to the LOCATION_CHANGE action (from react-router-redux) and dispatches a refreshView action (from react-admin) when the new location pathname ends with /show.
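An untested sketch of such a saga (assuming react-admin v2, where routing goes through react-router-redux):

import { LOCATION_CHANGE } from 'react-router-redux';
import { put, takeEvery } from 'redux-saga/effects';
import { refreshView } from 'react-admin';

function* refreshOnShow({ payload }) {
    // Force a fresh fetch whenever we land on a /show route
    if (payload.pathname.endsWith('/show')) {
        yield put(refreshView());
    }
}

export default function* showRefreshSaga() {
    yield takeEvery(LOCATION_CHANGE, refreshOnShow);
}

You would register it through the customSagas prop of the <Admin> component.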
Edit: however this is very strange. We only use the data we already got from the list for optimistic display but we still request with a GET_ONE when navigating to a show page from the list. Do you have a codesandbox showing your issue?
I realize that there are several other posts similar to this one, but I have tried every solution found within them and have not had any luck. Basically, I have a ColdFusion-generated web page that consists of a jQuery DataTable. I want to export the contents of the DataTable to PDF via cfdocument; however, on certain tables where the content exceeds the height of the page, the PDF renders my header area and then the rest of the page is blank. The table then shows up on page 2. Has anyone had trouble similar to this?
I've tried setting the page size, the margins, @import rules for the CSS, standard links for the CSS, inline CSS, and nothing seems to fix it.
Any help is greatly appreciated! Any suggestions are welcome too!
What ended up fixing this problem for me is that I wrote a simple javascript function that takes two parameters: the id of the requested table and the max rows per page. My function grabs the entire table and breaks it into separate tables, using the max-rows value as the number of rows per table. I then output the JS variable as the content of the cfdocument tag, and there are no longer incorrect page breaks.
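The original code isn't shown here, but the function would look roughly like this (a reconstruction; treat the names as placeholders):

// Split a table into several tables of at most maxRows body rows each,
// so cfdocument never has to break a single table across a page boundary.
function splitTable(tableId, maxRows) {
    var source = document.getElementById(tableId);
    var rows = source.tBodies[0].rows;
    var head = source.tHead ? source.tHead.outerHTML : '';
    var html = '';
    for (var i = 0; i < rows.length; i += maxRows) {
        // Repeat the header for each chunk, then copy the next maxRows rows
        html += '<table>' + head + '<tbody>';
        for (var j = i; j < Math.min(i + maxRows, rows.length); j++) {
            html += rows[j].outerHTML;
        }
        html += '</tbody></table>';
    }
    return html;
}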
I hope that in the future the cfdocument tag will be more proficient at knowing when it's reached the end of a page and break the content appropriately, but for now this works.
I am using Troy Goode's PagedList extension on my MVC page. It works fine, rendering the partial that contains the given page of records.
Now I have to implement filtering of the search results. I am using an AJAX form to fetch a partial containing the results matching the given search criteria. This works well when there is less than one page of results. If the filtered results run beyond one page, there is an issue: when a paging link is clicked, the filtering information is gone, and the next page of unfiltered records is returned.
What mechanism can I use to pass the filtering information along when something has been entered to filter the search results and a paging link is clicked?
My PagedList library takes a Func to generate the URL when you call @Html.PagedListPager(...):
page => Url.Action("Index", new { page = page })
You simply need to customize the parameters you're passing into Url.Action (a standard ASP.NET MVC method; docs can be found on MSDN).
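For example, assuming the filter value is exposed on the view model (SearchTerm, SearchViewModel, and repository below are illustrative names, not from the question), the page links can carry it along:

page => Url.Action("Index", new { page = page, searchTerm = Model.SearchTerm })

On the controller side, the filter is then reapplied on every request, including those coming from paging links:

public ActionResult Index(string searchTerm, int? page)
{
    // Rerun the filtered query each time; PagedList only pages,
    // it does not remember criteria between requests.
    var results = repository.Search(searchTerm)
                            .ToPagedList(page ?? 1, 20);
    return View(new SearchViewModel { SearchTerm = searchTerm, Results = results });
}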
Our application is an RCP application that needs to display tables of several thousand items. For this reason, we're using SWT.VIRTUAL in our TableViewer. That works pretty well, except for selection.
We're having the following issue:
Our TableViewer supports sorting and filtering. When we use a virtual TableViewer, changing the sort order does not preserve the currently selected item but only the currently selected row index. This leads to another item being selected.
E.g., if item 'A', present at the 5th row, is selected by the user and a sort is performed, then after sorting, whatever item is now at the 5th row gets selected instead of item 'A'.
Using a non-virtual TableViewer, everything works fine.
We tried to go into debug and found out that the cache from the AbstractTableViewer.VirtualManager class seems to be up to date with the model.
Forcing the cache to be used in AbstractTableViewer.virtualSetSelectionToWidget() could be one possible approach.
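In the meantime, a cruder sketch we could try at the call site (assuming the sort is triggered by our own code and the VirtualManager cache really is up to date; viewer and comparator stand in for our actual objects):

import org.eclipse.jface.viewers.IStructuredSelection;
import org.eclipse.jface.viewers.StructuredSelection;
import org.eclipse.jface.viewers.TableViewer;
import org.eclipse.jface.viewers.ViewerComparator;

static void sortPreservingSelection(TableViewer viewer, ViewerComparator comparator) {
    // Capture the selected model elements (not their row indices) before sorting
    IStructuredSelection selection = (IStructuredSelection) viewer.getSelection();
    viewer.setComparator(comparator); // triggers the refresh with the new order
    // Re-apply the selection by element so the same items stay selected
    viewer.setSelection(new StructuredSelection(selection.toList()), true);
}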
We have tried to implement a solution suggested in https://bugs.eclipse.org/bugs/show_bug.cgi?id=338696. However it didn't work.
Please suggest some pointers or an alternative workaround.
Thanks for the answers.
As a workaround for working with huge tables, I would suggest taking a look at the NatTable project (http://www.eclipse.org/nattable/). It supports everything you need: sorting, filtering, tree-structured elements, lazy loading, etc. We successfully use it in our project, where it is necessary to display hundreds of thousands of elements as a tree with around 160 columns. It also has some pretty cool styling features, which can make your table more user-friendly and interactive. Hope this helps.