Persistent DOM and cache problems using Selenium on SlickGrid - selenium

I'm testing (using Selenium) a site containing a SlickGrid.
To find the correct field to enter a value into, I have to apply a filter and then double-click the field to enter the data.
The problem is that, after applying the filter, nine times out of ten Selenium ends up with an exception saying the element is no longer attached to the DOM or is no longer present in the cache. One run in ten doesn't fail at this point.
I've tried about every bit of advice I can find on this issue, but none of it has helped. Waiting and looping until the element is present, visible, etc. doesn't work.
So: is there a way to have Selenium locate an element in a slickgrid after the page has changed because of a filter action?
Thanks!
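A common workaround here is to re-locate the element and retry whenever it goes stale, since the filter re-renders the grid and invalidates old references. A minimal Python sketch (the locator is a placeholder for the filtered grid cell):

from selenium.common.exceptions import StaleElementReferenceException
from selenium.webdriver.common.action_chains import ActionChains

def double_click_when_stable(driver, locate, attempts=10):
    # Re-find the element on every attempt so a stale reference is simply discarded.
    last_error = None
    for _ in range(attempts):
        try:
            element = locate(driver)  # fresh lookup each time
            ActionChains(driver).double_click(element).perform()
            return
        except StaleElementReferenceException as e:
            last_error = e  # the grid re-rendered between lookup and click; retry
    raise last_error

# Usage with a hypothetical locator for the filtered SlickGrid cell:
# double_click_when_stable(driver, lambda d: d.find_element_by_css_selector(".slick-cell.selected"))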

Related

Selenium - when exactly the webdriver instance gets updated?

I'm using Selenium to automate a task on a very dynamic website with Python.
Because of this, certain HTML elements of the currently loaded page may or may not be present at the moment my code requests them.
How exactly does the WebDriver instance get updated and receive new data from the web page?
Is it constantly connected, receiving changes to the HTML code instantly?
Or does it download a first version of the page when driver.get() is called, and then update it whenever a function such as .find_element_by_class_name() is called?
Q. Is it constantly connected, receiving changes to the HTML code instantly?
Ans. For each Selenium command, an HTTP request is sent to the browser driver; an HTTP response is returned, and the command is executed based on that response.
Now, say you do:
driver.get()
This is a Selenium command.
It fires an HTTP request asking the browser to load the URL provided. Based on the response (200 OK or anything else), you either see the web page load or get an error message.
All Selenium commands are executed in sequence the same way.
We need locators to find web elements in the UI.
Once we have them, we can pass them to the
driver.find_element_by_***
methods.
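As a concrete illustration (a minimal Python sketch; the URL and locator are placeholders), each call below is a separate command sent to the browser driver:

from selenium import webdriver

driver = webdriver.Chrome()                       # one command: start a session
driver.get("https://example.com")                 # one command: navigate, blocks until the page loads
heading = driver.find_element_by_tag_name("h1")   # one command: locate an element right now
print(heading.text)                               # one command: read the element's current text
driver.quit()                                     # one command: end the session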
Things to note here. You need to learn/understand:
Implicit Waits.
Explicit Waits.
Fluent Waits.
Implicit Waits:
By waiting implicitly, WebDriver polls the DOM for a certain duration when trying to find any element. This is useful when certain elements on the webpage are not available immediately and need some time to load.
In practice, whenever you use driver.find_element, it honours the implicit wait if you have defined one.
If you haven't defined one, the default value is 0.
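For example (a minimal Python sketch; the ten-second value and the class name are arbitrary placeholders):

from selenium import webdriver

driver = webdriver.Chrome()
driver.implicitly_wait(10)  # poll the DOM for up to 10 s on every find_element call
driver.get("https://example.com")
# If the element is not present immediately, WebDriver keeps polling until it
# appears or the 10-second budget runs out, then raises NoSuchElementException.
element = driver.find_element_by_class_name("content")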
Explicit Waits:
They allow your code to halt program execution, or freeze the thread, until the condition you pass resolves. The condition is evaluated at a certain frequency until the timeout of the wait has elapsed. This means that as long as the condition returns a falsy value, it will keep trying and waiting.
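A minimal Python sketch (the locator and timeout are placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")
# Block for up to 10 s until the condition returns something truthy,
# re-checking it at the default 0.5 s polling interval.
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, "content"))
)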
FluentWait
A FluentWait instance defines the maximum amount of time to wait for a condition, as well as the frequency with which to check it.
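FluentWait is a Java class; in Python the same knobs live on WebDriverWait itself. A sketch with arbitrary values, reusing a driver from the snippets above:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import NoSuchElementException

wait = WebDriverWait(
    driver,                # assumes an existing driver instance
    timeout=30,            # maximum time to wait for the condition
    poll_frequency=2,      # re-check every 2 s instead of the default 0.5 s
    ignored_exceptions=[NoSuchElementException],  # keep polling through these
)
element = wait.until(lambda d: d.find_element_by_id("content"))  # placeholder id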
Update:
PS: Check in the dev tools (Google Chrome) whether your locator has a unique entry in the HTML DOM.
Steps to check:
Press F12 in Chrome -> go to the Elements section -> press CTRL + F -> paste the XPath and check that your desired element is highlighted as the 1/1 matching node.
Locators (by priority from top):
ID
name
classname
linkText
partialLinkText
tagName
css selector
xpath
The web page is loaded by driver.get().
But the driver doesn't "know" what elements exist there. It just opens and loads the web page.
To access any element, check its presence, etc., you need to do that per specific element, using commands like .find_element_by_class_name() with a specific element locator.

DOM not refreshing with Selenium (IE, Chrome and VBA)

I'm using Selenium Basic to collect data from a website and store this into a database. The page I'm scraping is dynamic and loads more information as you scroll. I've been able to address most of this by using the implicit/ explicit waits, etc.
I am capturing all the IDs necessary to create the click action, which opens another JavaScript popup where I collect more information. However, even though I've been able to get these new IDs as the page loads by scrolling, when the app uses one of those new IDs to click, I get an error saying the element cannot be found. This prevents me from opening the JavaScript windows for the newly loaded rows.
When I go to collect this new data, the elements don't exist even though I was able to get the IDs for them.
When I look at the DOM in the browser and page source, all of it is there, so I don't believe it's an issue of letting the browser load.
I've tried the wait methods (implicit/explicit), and I've even put hard 60-second waits into the routine. No matter what I do, the routine bombs out after the first 10 rows because it can't find the elements for the data it found after scrolling. I've also tried this using Chrome.
Unfortunately, the website needs to be private, so I can't provide the full code. The issue that's happening comes here:
driver.FindElementByXPath("//*[contains(text(),'" & DBA!ParseID & "')]").Click
The error I get is: Element not found for XPath("//*[contains(text(),'ID12345')]")
ParseID is the ID found from parsing elements within the body tag. So, I am able to collect all the IDs after loading all the data, but when it goes to click using the above code, it only works for the initial 10 rows. Everything loaded after that will not work (even though they've been loaded in the Browser for quite some time).
What I should be getting is, say 20 IDs which can create 20 clicks to javascript pop-ups to get more information. However, I am getting 20 IDs but the ability to only click on the first 10, even though I've loaded the entire page.
This issue hasn't been resolved the way I initially expected, but I've accomplished what I needed in a different, more efficient way.
First, when I researched this further by removing certain IDs from my loop, I noticed that it really didn't have much to do with data updating in the DOM or browser, but rather with the ID itself not being found, for a (still) unknown reason. It actually seems very arbitrary why it bombs out. The ID matches the ID in the DOM, but when the string is passed into the XPath, it can't be found in the DOM. I'm not sure why this would occur unless the string somehow breaks when being passed, but I'll let that remain mysterious until someone smarter comes along!
What I did to accomplish what I needed was to loop through the actual class N times and pull the elements I needed from within those classes. Rather than using the ID above as a unique identifier, I used the index into the collection of web elements with that class as the identifier. This worked with 90% less code.
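In Python terms (a hedged sketch, since the original VBA is private; the class name is a placeholder), the pattern looks like this:

# Count the rows once, then re-find the collection on every iteration
# so freshly loaded rows are picked up.
count = len(driver.find_elements_by_class_name("result-row"))  # placeholder class
for i in range(count):
    # Index into a fresh lookup rather than holding on to possibly stale references.
    row = driver.find_elements_by_class_name("result-row")[i]
    row.click()
    # ... collect data from the popup, close it, and continue ...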
Thank you all!

What is the proper way to test mandatory field in selenium

I am testing my web application using Selenium. All the form validation in the web application is done by HTML5 and some JS(for safari browser). I want to know what is the proper way to test the form validation.
Currently I am using one approach: before filling in a mandatory field, I click the submit button. If the page is not refreshed, I assume the form validation is working correctly. But I think there should be a better approach. I am unable to find a proper solution. Any suggestion will be highly appreciated.
I have also gone through this link. But it does not work for me, because it is not enough to have the required attribute (e.g. the required attribute does not work in older Safari browsers).
Rather than checking whether the page is refreshed, you should instead expect that it is not, and that a certain error message or field highlighting or something similar is applied to the current page. When in an error state, the input fields probably get an extra class, or an error div/span might be added to the DOM; try checking for that sort of thing.
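For the HTML5 side specifically, browsers also expose the native validation state on the element itself, so you can assert on it directly. A hedged Python sketch (the field and submit locators are placeholders):

field = driver.find_element_by_name("email")            # placeholder locator
form_url = driver.current_url
driver.find_element_by_css_selector("[type=submit]").click()

# An invalid required field blocks submission and sets a native validation message.
assert driver.current_url == form_url                   # no navigation happened
is_valid = driver.execute_script("return arguments[0].checkValidity();", field)
message = field.get_attribute("validationMessage")      # e.g. "Please fill out this field."
assert not is_valid and message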

In QTP how to identify if a web element is visible on the current visible browser window

On my full-screen browser page the header is visible but the footer is not visible in the current window. To see the footer you need to page down N times, as the intermediate content is populated dynamically as you page down. So my problem is knowing how many times I need to page down to see the footer. In addition to this question: is it possible to know whether a web element is below the currently visible browser area?
If you are using QTP to identify and operate on the objects, you need not scroll down. Make sure that you are using strong locator properties (htmlId, ObjectId, etc.) to identify the element, and your code will work just fine. QTP works on the HTML source of the web page, so it is immaterial whether the element you want to work on is visible. I am assuming there are no AJAX components here; with AJAX, you need to employ a different strategy.
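As an aside, if you ever need to answer the second question (is the element below the visible area?) in a Selenium-based setup, the browser can tell you directly. A sketch in Python (the locator is a placeholder):

footer = driver.find_element_by_tag_name("footer")  # placeholder locator
below_viewport = driver.execute_script(
    "const r = arguments[0].getBoundingClientRect(); "
    "return r.top >= window.innerHeight;",
    footer,
)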

Selenium not able to load dynamic options in a dropdown

Options for a dropdown on a webpage I am testing depend on the values supplied in earlier textboxes and selects (e.g. based on the currency and amount specified, the product dropdown shows the appropriate values; with no values supplied, the dropdown is blank).
Now, although I have provided the values for currency and amount, the product dropdown is still blank. It is not fetching the filtered values based on the data supplied earlier. I am using Selenium Server (2.24.1), writing scripts in Java in Eclipse with TestNG, and testing on IE8.
When inspected, the dropdown is no different from the others; only its options change based on the values of other elements on the page. The web application is developed in Java (Wicket framework).
The Selenium code:
selenium.select(ownerBranch, "label=4521 - Branch one");
selenium.select(currency, "label=SEK - Swedish kronor");
Thread.sleep(sleep);
selenium.type(amountSantioned,"100000");
Thread.sleep(sleep);
selenium.click(chooseLoanTermBymatDate);
Thread.sleep(sleep);
timeNow=Calendar.getInstance();
timeNow.add(Calendar.DATE, +360);
selenium.type(maturityDate, dateformat.format(timeNow.getTime()));
Thread.sleep(sleep);
selenium.type(amountSantioned,"100000");
Thread.sleep(sleep);
selenium.select(serviceDelChannel, "label=BackOffice");
Thread.sleep(sleep);
selenium.select(product, "label=");
Thread.sleep(sleep*2);
selenium.select(product,"label=LN7292 - Consumer loan for Year2026");
Thread.sleep(sleep);
I'm not going to try to reproduce the issue (if you can point me to a publicly visible site with similar behaviour, I might test it), so I'm only taking a guess here:
Since Selenium RC is written in pure JavaScript and "only" fires change events when selecting values from drop-downs, Wicket is probably waiting for something else or relying on a completely different mechanism.
Things you can try:
Use Selenium WebDriver. Selenium RC has been deprecated for over a year now, because it had serious technical limitations (you might have just bumped into one) that are now solved by WebDriver. Also, you won't ever have to use Thread.sleep() again (although I'm almost sure it could mostly be got rid of even here). This solution is the most work, but it is almost guaranteed to work well, because WebDriver behaves like a real user. See the sketch after this list.
Call selenium.fireEvent() on all the input elements you're interacting with. Useful events might be focus, blur, maybe even click in between them.
Call selenium.keyPressNative(String.valueOf(KeyEvent.VK_ENTER)) (which presses Enter natively) after every change of a dropdown. If the changed dropdown is not focused before this, you might need to focus() it beforehand.
The painful way that simulates the user's behaviour as closely as possible instead of using JS methods: instead of using select(), try to focus() a dropdown element, then select one of its options by pressing the Down arrow repeatedly, then Enter.
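To make the first suggestion concrete, here is a minimal WebDriver sketch of the dependent-dropdown wait (shown in Python for brevity; the Java API is analogous, and the element IDs are placeholders while the label comes from the question):

from selenium.webdriver.support.ui import Select, WebDriverWait

driver.find_element_by_id("amountSanctioned").send_keys("100000")  # placeholder id

# Wait until the product dropdown actually has options, instead of Thread.sleep().
WebDriverWait(driver, 10).until(
    lambda d: len(Select(d.find_element_by_id("product")).options) > 1  # more than the blank option
)
Select(driver.find_element_by_id("product")).select_by_visible_text(
    "LN7292 - Consumer loan for Year2026"
)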