Does Selenium evaluate the DOM or only visible elements?

I would like to understand how Selenium evaluates the page.
I have a set of tests that check elements on the page, written with NUnit, Selenium and PhantomJS as the driver.
Page.Visit();
Page.FindElement(By.Id("testid")).Text.Should().NotBeNull(); // PASS
Page.FindElement(By.Id("testid")).Text.Should().NotBeEmpty(); // does NOT PASS
The second assertion does NOT pass if the browser size is set to be very small:
driver.Manage().Window.Size = new Size(10,10);
Based on this test, it is confusing how PhantomJS evaluates the page. I always thought that it checks the DOM, but it seems that for an element's TEXT it evaluates based on visibility!

Although this surprised me too when I first discovered it, Selenium only returns the text that is actually rendered as visible in the browser's viewport, even though the element itself is still found in the DOM. For this reason, you will want to ensure at the start of your tests that your browser viewport is large enough to accommodate the content of your application.
Typically this can be done by maximizing the browser window. If you are using Windows, triggering the F11 key via Selenium should work.
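For what it's worth, here is a minimal sketch of setting up the viewport, written with the Java bindings (the C# bindings used in the question expose the same calls via driver.Manage().Window; the 1280x1024 size is just an illustrative value):
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ViewportSetupExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();

        // Either maximize the window so the whole page is rendered in the viewport...
        driver.manage().window().maximize();

        // ...or force a size known to be large enough for the application under test.
        driver.manage().window().setSize(new Dimension(1280, 1024));

        driver.quit();
    }
}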

Related

window.stop() execution through selenium is not working

I am trying to read elements from the page using Selenium, but it seems the page loads indefinitely. The element I want to read is visible on the page (I tried XPath and was able to fetch the element). I tried the code below to execute a JavaScript command to stop the page load, but for some reason it times out.
driver.executeScript("window.stop();");
As @pcalkins suggested, I used the PageLoadStrategy "none" in the Chrome capabilities and it really did work. Below is the Chrome capability I set:
"pageLoadStrategy": "none"

Finding xpath of shadow dom elements with robot framework

I'm writing automated UI tests using Robot Framework with Python. I'm testing a Polymer application that uses the shadow DOM. Robot's standard libraries don't seem to be able to locate elements in the shadow DOM using XPath.
I've been sorting through selenium's repo/overflow/internets and polymer's repo/overflow/internets and haven't found the intersection of a solution for using Robot with Python (the Selenium workaround isn't that graceful either). I was hoping someone who has run into this issue with this specific framework might be willing to share a solution (or note that there isn't one).
Test Case
Wait Until Page Contains Element xpath=(//html/body/some-root-shadow-dom-element/input)
This of course results in an error because the shadow dom is not put into the main document of the DOM tree.
Open the browser console and type this query:
dom:document.querySelector("apply-widget").shadowRoot.querySelectorAll("div")
Make sure to hit Enter afterwards so it is run.
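If the same query needs to run from test code rather than the console, one option is to evaluate it through the JavaScript executor (from Robot Framework this would go through the Execute Javascript keyword). A rough sketch in the Java bindings; the apply-widget tag and div selector are copied from the console example above, the URL is a placeholder, and whether the shadow root can be reached this way depends on the application:
import java.util.List;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class ShadowDomQueryExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/polymer-app"); // hypothetical URL

        // Run the same querySelector chain used in the console and hand the
        // matched nodes back to the test as WebElements.
        @SuppressWarnings("unchecked")
        List<WebElement> divs = (List<WebElement>) ((JavascriptExecutor) driver).executeScript(
                "return Array.from(document.querySelector('apply-widget')"
                        + ".shadowRoot.querySelectorAll('div'));");

        System.out.println("Found " + divs.size() + " divs inside the shadow root");
        driver.quit();
    }
}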

There is no frame, but there is still an error that the element is not found

Below is my code. The line
driver.findElement(By.xpath("//*[@id=\"quote_password\"]")).sendKeys("password");
throws an exception that the element is not found.
@Test
public void mytest()
{
System.setProperty("webdriver.chrome.driver","Drivers/chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().window().maximize();
driver.get("http://billing.scriptinglogic.net/index.php/sessions/login");
driver.findElement(By.xpath("//*[#id='email']")).sendKeys("email");
driver.findElement(By.xpath("//*[#id='password']")).sendKeys("password");
driver.findElement(By.xpath("/html/body/div/div/form/input")).click();
driver.findElement(By.xpath("//*[text()='Quotes']")).click();
driver.findElement(By.xpath("//*[text()='Create Quote']")).click();
driver.findElement(By.xpath("//*[#id=\"quote_password\"]")).sendKeys("password");
}
Quick and dirty solution:
WebDriverWait wait = new WebDriverWait(driver, 15, 100);
driver.get("http://billing.scriptinglogic.net/index.php/sessions/login");
driver.findElement(By.id("email")).sendKeys("<EMAIL>");
driver.findElement(By.id("password")).sendKeys("<PASSWORD>");
driver.findElement(By.name("btn_login")).click();
wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[text()='Quotes']"))).click();
wait.until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector(".create-quote"))).click();
wait.until(ExpectedConditions.elementToBeClickable(By.id("quote_password"))).sendKeys("password");
Updated based on the credentials supplied in the comment below. I've tweaked the selectors to use ID, name and class where applicable; there is no need for complex XPath locators here.
The explicit waits are required because of the way the site works, and I've added JeffC's suggestion of using an elementToBeClickable expected condition for the input element (I don't think it's really required in this instance, though, since the site never seems to disable the input field, so a visibilityOfElementLocated expected condition is just as good).
This solution is working for me in Chrome and Firefox in standard mode and Firefox in headless mode. It's not working in headless mode in Chrome because the screen size is smaller and when the screen width goes below 1000px the header changes and the text "Quotes" is never displayed. Below 767px the header is completely removed and you get a side menu. This means that the flow of the script needs to change slightly based on resolution.
I would suggest asking your developer to add an ID to the menu items; it will make it easier to locate them and use the site in its various states. The quick and dirty solution to this problem is to ensure the browser is a certain size while the test runs, which you can do by setting the size in the first line of your script:
driver.manage().window().setSize(new Dimension(1024, 768));
When you do this it passes in Firefox and Chrome in standard and headless mode.
Note: the elements clicked after an explicit wait are anchor elements, so there is no point waiting for them to be clickable; that condition will always return true.

Getting unexpected results using selenium waitForNotVisible

I have some Selenium-based tests for a feature where an item is deselected on the page, which causes that element to be removed from the page. Because it is Ajax-based, I do a click for the deselect action and then wait for the element to no longer be on the page before moving on. The basic flow is:
click(targetElement)
if(isElementPresent(targetElement)){
waitForNotVisible(targetElement)
}
...
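Spelled out against the Selenium 1 (RC) Java client, that flow looks roughly like the sketch below; the host, locator string and timeout are placeholders, not the actual test code:
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class DeselectFlowSketch {
    public static void main(String[] args) throws InterruptedException {
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://example.com/");
        selenium.start();

        String targetElement = "css=.selected-item"; // hypothetical locator
        selenium.click(targetElement);

        // Poll until the element is either gone from the DOM or no longer visible,
        // rather than relying on a single waitForNotVisible call.
        long deadline = System.currentTimeMillis() + 30000;
        while (System.currentTimeMillis() < deadline
                && selenium.isElementPresent(targetElement)
                && selenium.isVisible(targetElement)) {
            Thread.sleep(250);
        }

        selenium.stop();
    }
}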
This seems to work 100% of the time when run against a local Selenium server instance, but when run against the Selenium Grid I have set up, it always times out on waitForNotVisible (in both cases, the conditional is always met).
Originally when this was failing, I didn't have the conditional and I thought that would clear it up, but it didn't. Maybe my expectations for waitForNotVisible are not correct, but I wonder why this would be working locally and not against the grid. All of my other tests seem to work fine via both methods.
And yes, I am using Selenium 1; moving to Selenium 2/WebDriver is not an option in the short term, so please don't suggest WebDriver as a solution. At the moment I'm most interested in understanding why this would fail as-is.

Alternative to visibility_of_element_located in Selenium 2.0, which checks the presence of an element in the DOM?

I have a problem in my framework: instead of using static sleeps, I try to wait for the visibility of an element. The thing is that visibility of element checks the presence of the element in the DOM, which returns true, but in my system the page is not fully loaded yet. What happens is that as soon as I get true when checking the visibility of the element, I set values. These values get reset when the actual page gets fully loaded.
My question is: what can I use instead of static sleeps to wait for the actual page (not only the DOM) to get fully loaded, as visibility of element is not working for me?
P.S. I'm using Selenium webdriver with python 2.7
/Adam
The expected_conditions.visibility_of_element_located(locator) method will check for both - the presence of the element in the DOM, and its visibility (element is displayed with height and width greater than zero).
Ideally, the driver.get(url) method should automatically wait for the full page to be loaded before moving on to the next line. However, this might not behave as expected if the web application being tested uses Ajax calls/actions (the page has loaded, but the Ajax actions are still in progress). In such a scenario, we can use something like the code below to wait for stability before performing actions on the desired web elements.
# imports needed to make the snippet self-contained
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.ui import WebDriverWait

# create the firefox driver
driver = webdriver.Firefox()
# navigate to the app url
driver.get('http://www.google.com')
# keep a watch on jQuery 'active' attribute
WebDriverWait(driver, 10).until(lambda s: s.execute_script("return jQuery.active == 0"))
# page should be stable enough now, and we can perform desired actions
elem = WebDriverWait(driver, 10).until(expected_conditions.visibility_of_element_located((By.ID, 'id')))
elem.send_keys('some text')
Hope this helps..
Try ExpectedConditions.elementToBeClickable.
See: https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/support/ui/ExpectedConditions.html#elementToBeClickable-org.openqa.selenium.By-
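In the Java bindings that suggestion looks roughly like the sketch below (the URL, locator and timeout are placeholders, and the constructor style assumes a Selenium 3.x client; the Python equivalent is expected_conditions.element_to_be_clickable):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ClickableWaitExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com"); // hypothetical URL

        // elementToBeClickable waits until the element is both visible and enabled,
        // which is a stronger condition than merely being present in the DOM.
        WebDriverWait wait = new WebDriverWait(driver, 10);
        WebElement field = wait.until(ExpectedConditions.elementToBeClickable(By.id("some-input"))); // hypothetical id
        field.sendKeys("some text");

        driver.quit();
    }
}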