Stale Web Element - selenium

Any suggestions on how to get around stale element exceptions when it seems one shouldn't be raised? Fair enough: with any of the many JavaScript libraries in use, an update may remove the corresponding DOM nodes, and a later FindElement against the disconnected node will get you a stale element.
The problem is that for the pages I'm testing, I know the content is static once displayed. The same code also works 100% with Firefox, but both the Chrome and Edge WebDrivers (if not the same, really close?) raise unexpected StaleElementReferenceExceptions.
My code gets a node that is the parent of a subtree holding the desired information. The parent node is located with Driver.FindElement(), and subsequent XPath queries are relative to that parent node.
For example given this DOM tree:
<node id='Closest node By ID'>
<span>
<div>text i want</div>
<div>ignore this</div>
<div>text to get</div>
</span>
<span>
<!-- same pattern ... -->
</span>
</node>
var parentNode = WebDriver.FindElement(By.Id("ID"));
var txt1 = parentNode.FindElement(By.XPath("./span/div[1]"));
var txt2 = parentNode.FindElement(By.XPath("./span/div[3]"));
The problem is that at some point one of these XPath queries will raise a StaleElementReferenceException.
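When the page really is static, a pragmatic workaround is to re-run the whole lookup (parent included) whenever a stale-element exception is raised. A minimal sketch of that retry pattern in Python; the same idea ports directly to the C# in the question. The exception type is passed in so the helper itself carries no Selenium dependency:

```python
import time

def retry_on_stale(action, stale_exceptions, attempts=3, delay=0.5):
    """Run action(); if it raises one of stale_exceptions, wait and retry.

    action should re-find its elements from the driver on each call, so a
    retry works with fresh element references rather than the stale ones.
    """
    for attempt in range(attempts):
        try:
            return action()
        except stale_exceptions:
            if attempt == attempts - 1:
                raise  # out of retries: surface the original exception
            time.sleep(delay)
```

In real Selenium code, stale_exceptions would be selenium.common.exceptions.StaleElementReferenceException, and action would redo both the Driver.FindElement for the parent and the relative XPath lookups.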
I have a working repro I'd be glad to share if someone could help look into this; digging into the WebDriver itself may be beyond me. Ping me and I will gladly provide a complete C# repro and whatever other details may be needed.
Ah, and for vitals:
Chrome and ChromeDriver versions are matched up.
Google Chrome is up to date; it was actually 92 when I checked, I updated to 93, re-ran, and saw the same behavior.
Version 93.0.4577.63 (Official Build) (64-bit)
C# .Net framework 4.8
Win 10 "Version 21H1 (OS Build 19043.1202)"
Thanks in advance.

Related

Finding xpath of shadow dom elements with robot framework

I'm writing automated UI tests using Robot Framework with Python. I'm testing a Polymer application that uses the shadow DOM. Robot's standard libraries don't seem to be able to locate elements in the shadow DOM using XPath.
I've been sorting through Selenium's repo/Stack Overflow/the internet, and Polymer's as well, and haven't found a solution that covers using Robot with Python (the Selenium workaround isn't that graceful either). I was hoping someone had run into this issue with this specific framework and might be willing to share a solution (or confirm that there isn't one).
Test Case
Wait Until Page Contains Element xpath=(//html/body/some-root-shadow-dom-element/input)
This of course results in an error because the shadow DOM is not part of the main document's DOM tree.
Open the browser console and type this into the console:
dom:document.querySelector("apply-widget").shadowRoot.querySelectorAll("div")
Make sure to hit Enter afterwards so it runs.
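From Robot Framework, one workaround (my assumption, not part of the original answer) is to run the same querySelector/shadowRoot chain through SeleniumLibrary's Execute Javascript keyword. A small Python helper that builds such a script string might look like this; the selector names are illustrative:

```python
def shadow_query_js(host_selector, inner_selector):
    """Build a JS snippet that pierces one level of shadow DOM.

    host_selector: CSS selector for the element hosting the shadow root.
    inner_selector: CSS selector evaluated inside that shadow root.
    """
    return (
        "return document.querySelector('%s')"
        ".shadowRoot.querySelector('%s');" % (host_selector, inner_selector)
    )
```

The returned string could then be passed to Execute Javascript (or driver.execute_script in plain Selenium) to get back a usable element handle instead of locating through XPath.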

Selenium “Element is not clickable at point” error in Firefox but working in Chrome

In Selenium I am trying to locate an element, but I am getting the error below:
org.openqa.selenium.WebDriverException: Element is not clickable at point (1009.25, 448.183349609375). Other element would receive the click: <rect data-sdf-index="7" height="390" width="420" class="aw-relations-noeditable-area"></rect> (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 12 milliseconds
I get this error in Firefox, but it works successfully in the Chrome browser.
Does anyone have a solution for it?
I already tried the help from this post: Selenium "Element is not clickable at point" error in Firefox, but was not able to get a result.
I have written the code below:
public void createPortOnSelectedNode( String nodeName ) {
    ISingleLocator m_nodeContainer = m_nodePage.getNodeContainer();
    WebElement node = m_nodePage.getNode( m_nodeContainer, nodeName ).getElement();
    Actions action = new Actions(DefaultDriver.getWebDriver());
    action.moveToElement(node, 40, 0);
    action.click();
    action.perform();
}
Hi, the above error comes up in a scenario where your WebDriver script performs the action, but the element you want to operate on is not properly loaded in the DOM, i.e. its position is not yet fixed in the DOM tree. (Also note that Selenium is still able to find the element, because WebDriver only checks for the presence of the element in the DOM, not for a stable position.)
So how do you overcome this issue? Give the DOM time to properly position its elements. That can be achieved by:
1. Instead of performing the operation directly on the target area, do some extra/harmless activity with WebDriver first, which gives the DOM time to position all of its elements.
2. Apply Thread.sleep().
3. If you are running your test in a smaller window, set the window size to maximum; that will also help.
I have not included any code because the link you referred to in the question contains ample work on that; I decided instead to help everybody understand why this error occurs. Thanks, hope this helps.
Have you tried clicking directly using JavaScript? In Python I use
driver.execute_script("arguments[0].click();", elt)
In Java it would look like executeScript instead.
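The two approaches (a native click, falling back to a JavaScript click) can be combined in one helper. A sketch in Python under my own assumptions: the fallback fires only when the native click raises, and which exception types count as "not clickable" is left to the caller:

```python
def click_with_js_fallback(driver, element, not_clickable_exceptions=(Exception,)):
    """Try a native click first; fall back to a JavaScript click."""
    try:
        element.click()  # honours visibility/overlap checks
    except not_clickable_exceptions:
        # A JS click bypasses the "other element would receive the click" check
        driver.execute_script("arguments[0].click();", element)
```

In real code not_clickable_exceptions would typically be Selenium's ElementClickInterceptedException or WebDriverException rather than the broad default shown here.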

Selenium Script Fails even though XPath, Firebug show the correct element

I have a question. When I "inspect" this particular element, take the "exact" XPath, copy it into my Selenium script, and run the script, it fails to identify the element.
Any idea how to fix this?
To repeat: I copied the exact XPath and tried inspecting the element. All correct. Still :(
Thanks,
S.K
If Selenium fails to find an element you know is present, the problem is commonly synchronization: Selenium tries to access the element too fast, before it appears on the page (and when you inspect the element even a second later, you can see it, since it has been rendered by then). Try to WAIT for that very element before doing anything else. Examples of such waits can be found here.
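The waiting advice above can be sketched as a small polling loop. In real code you would use Selenium's WebDriverWait with expected_conditions instead, but this Python sketch shows the idea; the timeout and poll values are illustrative:

```python
import time

def wait_for(condition, timeout=10.0, poll=0.25):
    """Poll condition() until it returns a truthy value or timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)
```

With Selenium, this would be called as, e.g., wait_for(lambda: driver.find_elements(By.XPATH, some_xpath)) so the element is re-queried from the live DOM on every poll.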

Webdriver unable to find element

I was unable to find the element (with the id below) with Selenium, even though it's visible in the HTML source page after successfully clicking (using Selenium) the 'Search' button on the previous page, which has the following URL:
String url="https://sjobs.brassring.com/1033/ASP/TG/cim_advsearch.asp?partnerid=25314&siteid=5290";
driver.get(url);
if(driver.findElements(By.id("submit1")).size() != 0)
driver.findElement(By.id("submit1")).click(); // clicking on 'Search' button
if(driver.findElements(By.id("ctl00_MainContent_GridFormatter_YUIGrid")).size() != 0)
System.out.println("FOUND!");
String pageSource= driver.getPageSource();
"FOUND!" was never printed, nor did pageSource contain the element with the above id. I am using Selenium 2.3.3 and testing with the latest versions of the IE, Chrome, and Firefox WebDrivers. Could someone please help? Thank you.
About a third from the bottom of the target page are the following lines (the third line is the location of the id):
<div id="ctl00_MainContent_GridFormatter_datatable" class="datatable">
<div id="THeadersDiv" style="display:none;">
<table id="ctl00_MainContent_GridFormatter_YUIGrid" class="basicGrid" border="0"> <!-- this is the element in question -->
I think I got it. I believe the driver cannot find the element because there are two elements with identical IDs (which is terrible web code). I looked at the rest of the code, and it looks like the two elements also share the same class and are the only two elements with that class.
Therefore, I believe that By.className("basicGrid") should work (note that className takes the bare class name, without the CSS . prefix).
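When two elements share an ID, locating by class and then indexing into the matches is one way to disambiguate. A sketch of my own (not from the answer above) that builds an indexed XPath for a class, since XPath can select the n-th match directly:

```python
def nth_with_class(class_name, n=1):
    """XPath selecting the n-th element (1-based) whose class attribute
    contains class_name as a whole word."""
    return (
        "(//*[contains(concat(' ', normalize-space(@class), ' '), ' %s ')])[%d]"
        % (class_name, n)
    )
```

The concat/normalize-space dance makes the match whole-word, so a class like "basicGridWide" would not match "basicGrid".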

Selenium use a Tree class for expanding/clicking a node in a tree made with RichFaces

In Selenium RC I need to expand/click a node in a tree made with RichFaces. I have written a TreeUtil class, but at this point I am not sure how to click/expand a node (which I retrieve with this XPath: "//div[@id='foo:classTree']/div/div/table["+nodeLevel+"]/tbody/tr/td/div/a") using only a nodeNumber and a nodeLevel.
Does anybody have any idea?
Your question isn't very clear to me: are those click commands (with the XPath) not working because they result in "element not found" errors, or because the click simply isn't causing the behavior you expect?
If it's an element-not-found issue, I suggest you use Firebug's $x function in the console to refine your XPath. You can run this function call in the Firebug console to see what the XPath truly evaluates to:
$x("//div[@id=\"foo:classTree\"]/div/div/table[XXX]/tbody/tr/td/a")
where XXX is some index. This is by far the best way to figure out the right XPath.
If the problem is that the click just isn't causing the tree to change, try switching from click() to fireEvent("//xpath", "click") and see if that helps.
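Since the node XPath is parameterized by nodeLevel, it may also help to build it in one place and reuse the same string for both the $x check and the click command. A Python sketch; the id foo:classTree is taken from the question, everything else mirrors the XPath shown there:

```python
def tree_node_xpath(node_level):
    """XPath for the expand/click anchor of the tree node at node_level
    (1-based, matching XPath's indexing)."""
    return (
        "//div[@id='foo:classTree']/div/div/table[%d]/tbody/tr/td/div/a"
        % node_level
    )
```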
In my case clickAt() helped.