DOM not refreshing with Selenium (IE, Chrome and VBA)

I'm using Selenium Basic to collect data from a website and store it in a database. The page I'm scraping is dynamic and loads more information as you scroll. I've been able to address most of this by using implicit/explicit waits, etc.
I capture all the IDs necessary to create the click action, which opens another JavaScript popup from which I collect information. However, even though I can get these new IDs once the page has loaded more rows by scrolling, when the routine uses one of those new IDs to click, I get an error saying the element cannot be found. This prevents me from opening the popups for the newly loaded rows.
When I go to collect this new data, the elements don't exist, even though I was able to get their IDs.
When I look at the DOM in the browser and at the page source, all of it is there, so I don't believe it's an issue of letting the browser load.
I've tried the wait methods (implicit/explicit), and I've even put hard 60-second waits into the routine. No matter what I do, the routine bombs out after the first 10 rows because it can't find the elements for the data it found after scrolling. I've also tried this using Chrome.
Unfortunately, the website needs to stay private, so I can't provide the full code. The issue happens here:
driver.FindElementByXPath("//*[contains(text(),'" & DBA!ParseID & "')]").Click
The error I get is "Element not found for XPath("//*[contains(text(),'ID12345"')]
ParseID is the ID found by parsing elements within the body tag. So I am able to collect all the IDs after loading all the data, but when the routine goes to click using the code above, it only works for the initial 10 rows. Everything loaded after that will not work (even though it has been loaded in the browser for quite some time).
What I should be getting is, say, 20 IDs that can drive 20 clicks to JavaScript pop-ups for more information. I do get all 20 IDs, but only the first 10 are clickable, even though the entire page has loaded.
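For reference, the failing pattern looks roughly like the sketch below, rewritten with Node's selenium-webdriver (the original is VBA Selenium Basic); the XPath mirrors the line above, and the 10-second timeout is arbitrary:

const { By, until } = require('selenium-webdriver');

// Wait for an element whose text contains the parsed ID, then click it
// to open that row's popup. In my case this works only for the first 10 rows.
async function clickRowById(driver, parseId) {
  const locator = By.xpath("//*[contains(text(),'" + parseId + "')]");
  const el = await driver.wait(until.elementLocated(locator), 10000);
  await el.click();
}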

This issue wasn't resolved the way I initially expected, but I've accomplished what I needed in a different and more efficient way.
First, when I researched this further by removing certain IDs from my loop, I noticed that it didn't really have much to do with the DOM or browser updating, but with the ID itself not being found, for a reason that is still unknown. It seems fairly arbitrary which IDs fail: the ID matches the ID in the DOM, but when the string is inserted into the XPath, the element can't be found. I'm not sure why this would occur unless the string is somehow breaking when passed, but I'll let that one remain mysterious until someone smarter comes along!
What I did to accomplish what I needed was loop through the elements of the relevant class N times and pull what I needed from within each one. Rather than using the ID above as a unique identifier, I used the index into the collection of class web elements as the identifier. This worked, with 90% less code.
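A minimal sketch of that class-index approach, again with Node's selenium-webdriver rather than the original VBA Selenium Basic (the logic carries over); the class name 'result-row' and the URL are assumptions:

const { Builder, By } = require('selenium-webdriver');

async function scrapeByClassIndex() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/private-site'); // placeholder URL
    // Count the rows once, then re-locate by index on each pass so a
    // stale handle from a previous click is never reused.
    const count = (await driver.findElements(By.className('result-row'))).length;
    for (let i = 0; i < count; i++) {
      const rows = await driver.findElements(By.className('result-row'));
      await rows[i].click(); // opens this row's popup
      // ...collect the popup data here, then close the popup...
    }
  } finally {
    await driver.quit();
  }
}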
Thank you all!

Related

Auto login to website using script or bookmark

I've been trying to figure this out using various methods. I want to create a script/bookmark or some type of quick action that opens a browser tab or window with a specific URL and automatically logs me in with my credentials. I'm not all that concerned about security for this at the moment.
At first I figured I'd use a JavaScript bookmark to do this, but nothing I found in my research worked. Next I tried to write a bash script, but I couldn't figure out how to send the credentials in via the terminal. Most recently, I literally copied the source code of a site, created a local file, and tried to hack together something where I could prefill the form data with credentials and use JS to submit the form. I've gotten close with this, but for some reason, when I use the JS submit function, it errors out and says that the username and password are invalid; when I turn off the submit function and manually click "log in" on my local HTML page, it works as expected. I want this to be a one-click process, so the idea of using onload/submit or something to that effect is really important to me.
The site I'm testing with has a Rails backend, and my next attempt might be to use a POST request to do what I'm thinking of, but that's currently beyond my knowledge of the subject.
Anyone answering: I do not want to use a password manager to accomplish this.
My requirement is that I will either be able to (a) run a script or (b) use a one-click option to do this per website. Ideally I'd be able to set this up in a programmatic way for multiple sites, but I'd be happy with one at the moment.
I know similar questions have been answered before, but I haven't been able to use the information from those posts (the ones I've seen, anyway) to figure out a good way to do this.
Create a bookmark for the current page you have opened.
Edit the bookmark.
Change the value of the URL to something like this:
javascript:(function(){CODE_GOES_HERE_FROM_BELOW})();
Find the fields for the username and password on the page.
Example given for Hotmail:
var inputs = document.getElementsByTagName('input');
for (var i = 0; i < inputs.length; i++) {
  if (inputs[i].name === 'passwd') {          // Hotmail's password field
    inputs[i].value = 'YOUR_PASSWORD';
  } else if (inputs[i].name === 'loginfmt') { // Hotmail's username field
    inputs[i].value = 'YOUR_USERNAME';
  }
}
document.getElementsByTagName('form')[0].submit();
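Assembled into a single bookmarklet URL (the same code, wrapped as in the step above), it would look like this:
javascript:(function(){var inputs=document.getElementsByTagName('input');for(var i=0;i<inputs.length;i++){if(inputs[i].name==='passwd'){inputs[i].value='YOUR_PASSWORD'}else if(inputs[i].name==='loginfmt'){inputs[i].value='YOUR_USERNAME'}}document.getElementsByTagName('form')[0].submit();})();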
Alternatively, try out CasperJS.
The proposed solution didn't work for me, and rather than spend a ton of time installing a testing framework that I'll never use for anything else, I decided to try to do this another way.
First, I found out that the reason my JS wasn't working before is that the site did not allow a JS submit to be done, or at least that's what it seemed like when I got this error: "Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience"
The JavaScript I was using was in fact working, just not submitting. I used the following code to fill the fields (locating elements by class name, since there was no name or ID):
document.getElementsByClassName('username')[0].setAttribute('value', 'user');
document.getElementsByClassName('password')[0].setAttribute('value', 'password');
As I mentioned, the problem came when I tried to submit the form from JavaScript: document.getElementsByClassName('loginForm')[0].submit();
That's when the above error cropped up. I can't really say for sure whether this is the root cause, but the page does submit, and I get an invalid username/password error when it does.
I haven't figured out a great way around this yet, but my short-term, "hacky" solution was to use AppleScript to send a Return keystroke to the browser to submit the form. I'd ideally like to figure out how to get the submission working from JavaScript, but I'm not sure how to get around the error.
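One sketch worth trying, under the assumption that the failure comes from form.submit() skipping the page's own submit handlers (form.submit() never fires the 'submit' event, while requestSubmit() or clicking the real button does); the button selector here is a guess:

document.getElementsByClassName('username')[0].value = 'user';
document.getElementsByClassName('password')[0].value = 'password';
var form = document.getElementsByClassName('loginForm')[0];
if (form.requestSubmit) {
  form.requestSubmit(); // fires the site's submit handlers, unlike submit()
} else {
  // Fall back to clicking the page's own login button, if one exists.
  form.querySelector('button[type="submit"], input[type="submit"]').click();
}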

What is the proper way to test a mandatory field in Selenium

I am testing my web application using Selenium. All the form validation in the web application is done by HTML5 and some JS (for the Safari browser). I want to know the proper way to test the form validation.
Currently I am using one approach: before filling in a mandatory field, I click on the submit button. If the page is not refreshed, I assume the form validation is working correctly. But I think there should be a better approach; I am unable to find a proper solution. Any suggestion will be highly appreciated.
I also went through this link, but it is not working for me, because it is not enough to have the required attribute (e.g., the required attribute does not work in older Safari browsers).
Rather than checking whether the page refreshed, you should instead expect that it did not, and assert that a specific error message, field highlighting, or similar feedback has been applied to the current page. When in an error state, the input fields are probably given an extra class, or an error div/span is added to the DOM; try checking for that sort of thing.
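A minimal sketch of that idea with Node's selenium-webdriver; the URL and the '.field-error' selector are assumptions about the app under test:

const { Builder, By, until } = require('selenium-webdriver');

async function testMandatoryField() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/signup'); // placeholder URL
    // Submit with the mandatory field left empty.
    await driver.findElement(By.css('button[type="submit"]')).click();
    // Assert that validation feedback appears, rather than checking for a refresh.
    const error = await driver.wait(until.elementLocated(By.css('.field-error')), 5000);
    console.assert(await error.isDisplayed(), 'validation message not shown');
    // Native HTML5 validation can be asserted directly as well:
    const valid = await driver.executeScript(
      'return document.querySelector("input[required]").checkValidity();');
    console.assert(valid === false, 'required input unexpectedly reported valid');
  } finally {
    await driver.quit();
  }
}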

Persistent DOM and cache problems using Selenium on SlickGrid

I'm testing (using Selenium) a site containing a SlickGrid.
To find the correct field to enter a value into, I have to apply a filter and then double-click the field to enter the data.
The problem is that, after applying the filter, nine times out of ten Selenium ends up with an exception that the element is no longer attached to the DOM or is no longer present in the cache. One time in ten it doesn't fail at this point.
I've tried about every bit of advice I can find on this issue, but none of it has been of sufficient help. Waiting and looping until the element is present, visible, etc. doesn't work.
So: is there a way to have Selenium locate an element in a SlickGrid after the page has changed because of a filter action?
Thanks!
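A sketch of one common workaround, assuming the exception is the usual stale-element problem caused by the grid re-rendering after the filter: re-locate the element and retry rather than reusing the old handle (shown with Node's selenium-webdriver; the '.slick-cell' locator in the usage comment is an example):

const { By, until, error } = require('selenium-webdriver');

// Usage: await doubleClickWhenStable(driver, By.css('.slick-cell'));
async function doubleClickWhenStable(driver, locator, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    try {
      // Re-locate on every attempt; the old handle dies when the grid redraws.
      const cell = await driver.wait(until.elementLocated(locator), 5000);
      await driver.actions().doubleClick(cell).perform();
      return;
    } catch (e) {
      if (!(e instanceof error.StaleElementReferenceError)) throw e;
      // The grid re-rendered between locating and clicking; try again.
    }
  }
  throw new Error('element kept going stale after ' + attempts + ' attempts');
}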

In QTP, how to identify whether a web element is visible in the currently visible browser window

On my full-screen browser page, the header is visible but the footer is not visible in the current window. To see the footer, we need to page down N times, since the intermediate content is populated dynamically as we page down. So my problem is knowing how many times I need to page down to see the footer. Adding to this question: is it possible to know whether a web element is below the currently visible browser area?
If you are using QTP to identify and operate on the objects, you need not scroll down. Make sure that you are using strong locator properties (htmlId, ObjectId, etc.) to identify the element, and your code will work just fine. QTP works on the HTML source of the web page, so it is immaterial whether or not the element you want to work on is visible. I am assuming there are no AJAX components here; with AJAX, you need to employ a different strategy.
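For the second part of the question, a small browser-side sketch (which could be injected via the browser's console or an execute-script facility, if your version supports one) can report whether an element is below the fold; the '#footer' selector is an assumption:

function isBelowViewport(el) {
  // An element whose top edge starts at or past the viewport's height
  // is below the currently visible area.
  var rect = el.getBoundingClientRect();
  return rect.top >= (window.innerHeight || document.documentElement.clientHeight);
}
var footer = document.querySelector('#footer'); // hypothetical selector
if (footer && isBelowViewport(footer)) {
  footer.scrollIntoView(); // bring it into view instead of counting page-downs
}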

Problem with Dojo Tree

I have implemented a Dojo tree, and it works fine up to a certain number of levels of sub-trees/sub-nodes. After fetching 250-300 nodes, it gives the error message: "A script on this page is causing Internet Explorer to run slowly. If it continues to run, your computer may become unresponsive." What is the problem here?
In your case, it seems that the data being loaded is causing too much JavaScript execution to happen, which crosses the browser's JavaScript execution threshold.
I have faced this type of issue when a DojoX Grid loaded more than 500 records, and the way to work around it is to load only the relevant data (one page at a time) on the client. On scrolling the grid, you fetch the next page.
In your case, you can defer loading sub-trees and leaves until the user clicks to expand a node. There may be other data stores in Dojo that provide such behavior; a framework-agnostic sketch of the idea follows.
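This sketch fetches a node's children only the first time it is expanded; the endpoint and the render function are placeholders, not Dojo APIs:

var loaded = {};

function onExpandNode(nodeId) {
  if (loaded[nodeId]) return; // this subtree was already fetched
  loaded[nodeId] = true;
  fetchChildren(nodeId, function (children) {
    renderChildren(nodeId, children); // add only this level to the tree
  });
}

function fetchChildren(nodeId, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/tree/children?parent=' + encodeURIComponent(nodeId)); // hypothetical endpoint
  xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
  xhr.send();
}

function renderChildren(nodeId, children) {
  // Placeholder: hand the children to whatever widget draws the tree.
  console.log('render', children.length, 'children under node', nodeId);
}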