I have implemented a Dojo tree, and it works fine up to a certain depth of sub-trees/sub-nodes. After fetching 250-300 nodes it gives the error message: "A script on this page is causing Internet Explorer to run slowly. If it continues to run, your computer may become unresponsive." What is the problem here?
In your case it seems that the data being loaded is causing too much JavaScript execution, which crosses the browser's long-running-script threshold.
I have faced this type of issue when our DojoX Grid used to load more than 500 records, and the way to work around it is to load only the relevant data (one page at a time) on the client. As the grid scrolls, you can fetch the next page.
In your case you can defer loading sub-trees and leaves until the user clicks the expand control on a node. There may be other data stores in Dojo that provide such behavior.
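For illustration, here is a minimal sketch of a lazily loaded dijit Tree (assuming Dojo 1.8+; the /nodes/ service URL, the parent query parameter, and the hasChildren flag are placeholders for whatever your own store exposes). Children are only fetched when a node is expanded:

require(["dojo/store/JsonRest", "dijit/tree/ObjectStoreModel", "dijit/Tree", "dojo/domReady!"],
    function(JsonRest, ObjectStoreModel, Tree) {

        // Each getChildren() call issues a request like /nodes/?parent=<id>,
        // so only the expanded branch is fetched instead of the whole tree.
        var store = new JsonRest({
            target: "/nodes/",
            getChildren: function(object, onComplete, onError) {
                this.query({parent: object.id}).then(onComplete, onError);
            }
        });

        var model = new ObjectStoreModel({
            store: store,
            query: {id: "root"},
            mayHaveChildren: function(item) { return item.hasChildren; }
        });

        new Tree({
            model: model,
            autoExpand: false   // nodes stay collapsed until the user expands them
        }, "treeNode").startup();
    });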
A Lighthouse audit suggests that I lazy load a chat beacon that is implemented via Google Tag Manager. Because this tag is quite big, I have already delayed it by adding a custom event that fires 1.5 seconds after the Window Loaded event, and on that custom event I fire the chat beacon. Should I still lazy load this tag? The report says I could save around 3 seconds of page load time. If yes, how could I lazy load a tag in GTM, and is it even possible? Below I have pasted what the tag looks like. Thank you for any suggestions.
<script type="text/javascript">!function(e,t,n){function a(){var e=t.getElementsByTagName("script")[0],n=t.createElement("script");n.type="text/javascript",n.async=!0,n.src="https://beacon-v2.helpscout.net",e.parentNode.insertBefore(n,e)}if(e.Beacon=n=function(t,n,a){e.Beacon.readyQueue.push({method:t,options:n,data:a})},n.readyQueue=[],"complete"===t.readyState)return a();e.attachEvent?e.attachEvent("onload",a):e.addEventListener("load",a,!1)}(window,document,window.Beacon||function(){});</script>
<script type="text/javascript">window.Beacon('init', 'XXXXXXXXX-XXXX-XXXXXXXXX-XXXXXXX')</script>
And one more question. This widget is fixed to the page, which means that if we scroll down it always stays in the same place. In this situation, do I understand lazy loading correctly as loading the beacon only after the chat icon is clicked?
Yes, in this case lazy loading would be loading on click.
However, people rarely do it. Why? Because async loading (n.async=!0) normally doesn't delay anything from the user's perspective in any significant way.
And Lighthouse is not the final arbiter of truth. It's just a not-very-good way to generically suggest page-speed improvements.
If you want to measure it properly, stop using Lighthouse and start using page-speed profiling in your browser. Measure the user experience with the chat as it is now, then disable it completely in Tag Manager and reload the page. See if you can find those claimed 3 seconds. Repeat both experiments a few times to compensate for random speed fluctuations. I doubt you will be able to measure any real difference.
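If you do decide to gate the beacon behind a click anyway, here is a minimal sketch (the chat-button id is a placeholder; in GTM the same idea would be a Custom HTML tag fired by a click trigger). It reuses the stub/queue mechanism from the embed you pasted, so calls made before the loader script arrives are replayed once it is ready:

<button id="chat-button">Chat with us</button>
<script>
  var beaconLoaded = false;
  document.getElementById('chat-button').addEventListener('click', function () {
    if (!beaconLoaded) {
      beaconLoaded = true;
      // Recreate the stub and queue from the original embed snippet.
      window.Beacon = function (method, options, data) {
        window.Beacon.readyQueue.push({ method: method, options: options, data: data });
      };
      window.Beacon.readyQueue = [];

      // Inject the Help Scout loader only now, on first click.
      var s = document.createElement('script');
      s.async = true;
      s.src = 'https://beacon-v2.helpscout.net';
      document.head.appendChild(s);

      window.Beacon('init', 'XXXXXXXXX-XXXX-XXXXXXXXX-XXXXXXX');
    }
    window.Beacon('open'); // show the widget the user just asked for
  });
</script>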
I'm using Selenium Basic to collect data from a website and store it in a database. The page I'm scraping is dynamic and loads more information as you scroll. I've been able to address most of this by using implicit/explicit waits, etc.
I am capturing all the IDs necessary to create the click action, which opens up another javascript popup for me to collect information there. However, even though I've been able to get these new IDs when the page loads by scrolling, when the app uses that new ID to click, I'm getting an error saying the element cannot be found. This is preventing me from opening up the javascript windows for these newly loaded rows.
When I go to collect this new data, the elements don't exist even though I was able to get the IDs for them.
When I look at the DOM in the browser and page source, all of it is there, so I don't believe it's an issue of letting the browser load.
I've tried utilizing the wait methods (implicit/explicit)... I've even put hard 60-second waits throughout the routine. No matter what I do, the routine bombs out after the first 10 rows because it can't find the elements for the data it found after scrolling. I've also tried this using Chrome.
Unfortunately, the website needs to be private, so I can't provide the full code. The issue that's happening comes here:
driver.FindElementByXPath("//*[contains(text(),'" & DBA!ParseID & "')]").Click
The error I get is "Element not found for XPath("//*[contains(text(),'ID12345"')]
ParseID is the ID found from parsing elements within the body tag. So, I am able to collect all the IDs after loading all the data, but when it goes to click using the above code, it only works for the initial 10 rows. Everything loaded after that will not work (even though they've been loaded in the Browser for quite some time).
What I should be getting is, say, 20 IDs which can create 20 clicks to JavaScript pop-ups to get more information. However, while I am getting 20 IDs, I am only able to click on the first 10, even though I've loaded the entire page.
This issue hasn't been resolved the way I initially expected, but I've accomplished what I needed through a different and more efficient way.
First, when I researched this further by removing certain IDs in my loop, I noticed that this really didn't have much to do with data updating in the DOM or browser, but rather with the ID itself not being found, for a (still) unknown reason. It actually seems very arbitrary why it's bombing out. The ID matches the ID in the DOM, but when the string is moved into the XPath, it can't be found in the DOM. I'm not sure why this would occur unless the string is somehow breaking when being passed, but I'll just let that one remain mysterious until someone smarter comes along!
What I did to accomplish what I needed was loop through the actual class N times and pull the elements I needed from within those classes. Rather than use the ID above as a unique identifier, I used the count of class web elements as the identifier. This worked with 90% less code.
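For reference, here is roughly what that class-count loop looks like, sketched with the JavaScript WebDriver bindings rather than Selenium Basic (the result-row and popup class names and the URL are placeholders for the private site's real structure):

const { Builder, By, until } = require('selenium-webdriver');

(async function scrapeRows() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/private-report');   // placeholder URL

    // Count the row elements by class instead of relying on parsed IDs.
    const rowCount = (await driver.findElements(By.className('result-row'))).length;

    for (let i = 0; i < rowCount; i++) {
      // Re-locate by index on each pass so a re-rendered row doesn't go stale.
      const row = (await driver.findElements(By.className('result-row')))[i];
      await row.click();   // opens the popup for this row

      // Wait for the popup and pull whatever fields are needed
      // (closing the popup between rows is omitted in this sketch).
      const popup = await driver.wait(until.elementLocated(By.className('popup')), 10000);
      console.log(i, await popup.getText());
    }
  } finally {
    await driver.quit();
  }
})();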
Thank you all!
I have a form submission using HTML helpers and a p tag that displays a simple count.
On page load, the count in the p tag is obviously 0; it is derived from a list in the model by calling @Model.Individual.Count. After the form is submitted, I expect the Individual list to have a Count of 1. This is indeed the case when I'm debugging it through Visual Studio: I can see the model being updated, I hit the breakpoints in the View, and @Model.Individual.Count has a value of 1. But then when the page loads in the browser, the value in the p tag still says 0.
I have no idea what is going on since the debug value says 1 but it displays 0 in the browser...
It appears this was due to page life-cycle events and the points at which certain scripts were being loaded during those events. Additionally, since I have desktop and mobile scripts running in the same .cshtml file (loaded or not depending on the screen size), conflicts were occurring.
Overall, the problem was that scripts were not being fired at the correct times during the page life-cycle.
NOTE: New to this forum (UX/User Experience), so please let me know if this would be better in a different category. I searched Stack Exchange for "pinterest" and this forum seemed to have the most results. Thanks!
Hi guys. I'm writing a jQuery gist to grab the links of all the images pinned to a given board on Pinterest. However, I keep running into the problem of having to scroll repeatedly because not all the results are displayed on the same page. With the trendy "infinite scroll" or "lazy load" feature, one has to keep scrolling to the bottom without actually knowing how close they are, because how many items appear on screen seems to depend on the browser window's zoom percentage and size. I've been searching for hours to no avail.
Searches I've already done keep returning non-productive results
The results I get when searching for
"Pinterest how to disable lazy loading" and "Pinterest how to disable infinite scroll"
keep returning the opposite of what I am looking for -- incorrect results for my purposes are anything like:
"How to add infinite scroll to my website",
"20 Useful Pinterest Tools",
or anything to do with adding infinite scroll.
The Problem: Infinite Scroll/Lazy Loading makes it hard for me to use browser plugins like jquerify (Chrome) and FireQuery (Firefox)
The issue for me is that I want to be able to view all my pins on a given board at once. Then I can use jQuery to manipulate all the images on the page. Currently, infinite scroll makes it hard to keep track of where I'm at. I've tried a few things already, but it's late at night and it's hard to remember everything. The important find was that in the page source, Pinterest is using a "lazy" function. Here is what I found:
P.lazy = {
    onImageLoad: function(a) {
        var b = LOADED_CLASS;                                      // class marking the image as loaded
        P.overlap.isOverlappingViewport(a) && (b += FADE_CLASS);   // also add the fade class when the image overlaps the viewport
        a.className += b
    }
};
This is just starting to be a deeper rabbit hole. I've checked for plugins to "remove", "disable", or "bypass" lazy loading, but haven't found any ... only those for adding it in.
Thanks in advance for your kind assistance and Cheers.
Pinterest loads cards via Ajax. When you scroll to the bottom of a page, browser javascript fires an Ajax call to load the next page full of cards.
This means it's not really possible to "disable" the infinite scrolling feature.
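Conceptually, the page is doing something like this (a simplified sketch, not Pinterest's actual code; the endpoint is the GET request mentioned under (b) below):

$(window).on('scroll', function () {
  // Near the bottom of the document? Ask the server for the next page of cards.
  if ($(window).scrollTop() + $(window).height() >= $(document).height() - 200) {
    $.get('https://www.pinterest.com/resource/UserHomefeedResource/get/', function (data) {
      // ...append the newly returned cards to the grid...
    });
  }
});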
A few possible approaches:
1. Depending on how you're instantiating the browser, you might try setting or spoofing the window dimensions to a very large height. Pinterest may detect that height and attempt to load a window's worth of images, which may be enough to cover the feed you're trying to scrape.
2. If #1 is not practical for you, you can use javascript/jquery to keep scrolling the browser down until it has finished loading all the images. There are several ways to do this, since you are injecting javascript into the browser session.
(a) You can do this the "dumb" way with a loop that sets a timeout (setTimeout), scrolls to the bottom (scrollTo()), and keeps going until the window stops scrolling; that makes a kludgey auto-detect for the bottom of the page load (a rough sketch appears after this list).
(b) a more sophisticated approach would be to implement a listener for pinterest's ajax load function, (see the code, but it's a GET request to URL https://www.pinterest.com/resource/UserHomefeedResource/get/). An ajaxComplete() jQuery handler may help you detect the completion of a page load request so you can scrape the new images loaded.
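A rough sketch of approach 2(a), injected into the page (the 500 ms interval and the idle threshold are arbitrary; approach 2(b) would replace the height check with an ajaxComplete() handler):

var lastHeight = 0;
var stalled = 0;

var scroller = setInterval(function () {
  window.scrollTo(0, document.body.scrollHeight);      // jump to the bottom to trigger the next Ajax page

  if (document.body.scrollHeight === lastHeight) {
    stalled++;                                          // height stopped growing: maybe done loading
  } else {
    stalled = 0;
    lastHeight = document.body.scrollHeight;
  }

  if (stalled > 5) {                                    // ~2.5 s with no new content: assume we hit the end
    clearInterval(scroller);
    var links = $('img').map(function () { return this.src; }).get();
    console.log(links);                                 // image URLs for everything now in the DOM
  }
}, 500);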
Hope that helps
I'm testing (using Selenium) a site containing a slickgrid.
To find the correct field to enter a value, I have to apply a filter, and then double click the field to enter the data.
The problem is that after applying the filter, nine times out of ten Selenium ends up with an exception that the element is no longer attached to the DOM, or is no longer present in the cache. One time out of ten it doesn't fail on this point.
I've tried about every bit of advice I can find on this issue, but none of it has been of sufficient help. Waiting and looping until the element is present, visible, etc. doesn't work.
So: is there a way to have Selenium locate an element in a slickgrid after the page has changed because of a filter action?
Thanks!