Using the Selenium Chrome WebDriver I am trying to load a page, but I keep getting a Selenium timeout error like:

timeout from renderer : 3000

This appears to be the default timeout that Selenium waits for the page to load. I am using Selenium with Groovy to drive Chrome. Everything else works; only this timeout error causes issues sometimes.

Does anyone know what the default page load timeout is for the Chrome Selenium WebDriver? And can I change that timeout? If yes, how?

I am currently using Selenium ChromeDriver v2.9 with Chrome v27, in Groovy.
I have been trying Chrome, Firefox & PhantomJS using Python.
I have been trawling loads of webpages trying to find answers for using a headless web driver WITH a working page load timeout.
There is a lot of confusion between the various answers.
I'll post my findings on this page since it comes up high in a search like 'selenium chrome timeout'.
I can now get chromedriver.exe to work WITH a timeout.
The Selenium wrapper seems broken, but the following works for my requirements. Instead of calling:
chrome_driver.set_page_load_timeout(my_timeout_secs)
chrome_driver.get( ...... )
use (note that Command needs to be imported):

from selenium.webdriver.remote.command import Command

chrome_driver.execute(Command.SET_TIMEOUTS, {
    'ms': float(my_timeout_secs * 1000),
    'type': 'page load'})
chrome_driver.get( ...... )
This bypasses the selenium wrapper and sets the timeout with Chrome.
Then just use a standard Selenium try/except. The page load will time out, and control passes to your handler once Chrome raises a TimeoutException (from selenium.common.exceptions):

from selenium.common.exceptions import TimeoutException

try:
    chrome_driver.execute(Command.SET_TIMEOUTS, {
        'ms': float(my_timeout_secs * 1000),
        'type': 'page load'})
    chrome_driver.get( ...... )
except TimeoutException as e:
    # do whatever you need to here
    pass
Hope that helps anybody needing headless driver with timeouts.
Thanks,
LJHW
Have you seen the documentation?
An implicit wait is to tell WebDriver to poll the DOM for a certain amount of time when trying to find an element or elements if they are not immediately available. The default setting is 0.
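For example, a minimal sketch in Python (assuming a local chromedriver; note that an implicit wait governs element lookups, not page loads, so it will not change a page load timeout):

from selenium import webdriver

driver = webdriver.Chrome()
driver.implicitly_wait(10)  # poll the DOM for up to 10 seconds on each find_element call
driver.get("https://example.com")  # placeholder URL
element = driver.find_element_by_id("content")  # "content" is a hypothetical id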
I'm currently running Selenium with Specflow.
One of my tests clicks on a button which triggers the download of a pdf file.
That file is automatically opened in a new tab where the test then grabs the url and downloads the referenced file directly to the selenium project.
This whole process works perfectly when the Chrome driver runs normally, but it fails in a headless browser with the following error:

The HTTP request to the remote WebDriver server for URL http://localhost:59658/session/c72cd9679ae5f713a6c857b80c3515e4/url timed out after 60 seconds. -> The request was aborted: The operation has timed out.
This error occurs when attempting to run driver.Url
driver.Url calls work elsewhere in the code. It only fails after the headless browser switches tabs. (Yes, I am switching windows using the driver)
For reference, I cannot get this url without clicking the button on the first page and switching tabs as the url is auto-generated after the button is clicked.
I believe you are just using the "--headless" argument. For better performance you should set a screen size too: sometimes, due to an inappropriate screen size, the driver cannot detect the elements you are looking for. Try the following code, or just add the one line setting the window size:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options  # this import was missing

chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--window-size=1920x1080")
driver = webdriver.Chrome(chrome_options=chrome_options)
Don't forget to add whatever other arguments you need when creating the driver.
We've started getting random timeouts, but we cannot figure out the reason. The tests run on remote machines on Amazon using Selenium Grid. Here is how it goes:

the browser is opened,
then a page starts loading, but cannot load fully within 120 seconds,
then a timeout exception is thrown.

If I run the same tests locally, everything is OK.
The error is an ordinary timeout exception, thrown when a page is not loaded completely within the period set in driver.manage().timeouts().pageLoadTimeout(). The problem is that a page of the site cannot load completely within that time. But as soon as the pageLoadTimeout() period expires and, consequently, Selenium's hold on the browser ends, the page loads at once. The issue cannot be reproduced manually on the same remote machines.

We've tried different versions of Selenium Standalone, ChromeDriver, and the Selenium driver. The browser is Google Chrome 63. I would be happy to hear any suggestions about possible reasons.
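For reference, the timeout in question is set like this; a minimal sketch using the Python client (driver.manage().timeouts().pageLoadTimeout() in Java is the equivalent call, and the URL is a placeholder):

from selenium import webdriver
from selenium.common.exceptions import TimeoutException

driver = webdriver.Chrome()
driver.set_page_load_timeout(120)  # the same 120-second limit described above
try:
    driver.get("https://example.com")  # placeholder URL
except TimeoutException:
    print("the page did not finish loading within 120 seconds")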
When Selenium loads a webpage/url, by default it follows a pageLoadStrategy of normal. To stop Selenium from waiting for the full page load, we can configure the pageLoadStrategy, which supports three values:
normal (full page load)
eager (interactive)
none
Code Sample:

Java

DesiredCapabilities capabilities = DesiredCapabilities.chrome();
capabilities.setCapability("pageLoadStrategy", "none");

Python

caps = DesiredCapabilities.CHROME.copy()
caps["pageLoadStrategy"] = "none"
Here you can find detailed discussions of this for both the Java and Python clients.
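For illustration, a minimal sketch in Python (assuming a client version of that era which accepts desired_capabilities, and a placeholder URL):

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

caps = DesiredCapabilities.CHROME.copy()
caps["pageLoadStrategy"] = "none"  # hand control back without waiting for the load event
driver = webdriver.Chrome(desired_capabilities=caps)
driver.get("https://example.com")  # returns almost immediately with strategy "none"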
I have a script that logs into a site and then takes a screenshot. It uses Chrome 59 on MacOS in headless mode.
I have two problems, that I think are related. One problem is that my script takes minutes when it should take seconds. The second is that Chrome icon lingers in my Dock and never closes.
I think these problems are caused by a couple of elements on the site I am checking that don't load: \images\grey_semi.png and https://www.google-analytics.com/analytics.js. I think this holds up Selenium and prevents it from closing as instructed with driver.close().
What can I do?
script:
import os
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(executable_path=os.path.abspath("chromedriver"), chrome_options=chrome_options)
driver.get("https://url.com/")
username = driver.find_element_by_name("L1")
username.clear()
username.send_keys("name")
password = driver.find_element_by_name("P1")
password.clear()
password.send_keys("pass")
driver.find_element_by_id("login").click()
driver.get("https://url.com/Overview.aspx")
driver.get_screenshot_as_file('main-page.png')
driver.close()
I don't see any waits in your code. As you know, web apps use AJAX techniques to load dynamic data: when a page is loaded by the browser, the elements within that page may load at different time intervals.

Depending on the implementation, it is possible that the load event is affected by google-analytics.com/analytics.js, since a web page is considered completely loaded only after all content (including images, script files, CSS files, etc.) has loaded. By default your UI Selenium tests use a fresh instance of the browser, so it shouldn't cache analytics.js. One more thing to check is whether Google Analytics is placed so that it isn't loaded until the page has loaded, or runs async. It used to go before the </body> tag, but I believe it's now supposed to be the last <script> in the <head> tag. You can find more details on the page load impact of Google Analytics here; they claim that if done right, the load times are so small that it's not even worth worrying about.

My best guess is that the issue is with how Google Analytics is used.
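As a minimal sketch of adding such a wait to the script above (only the WebDriverWait lines are new; the "L1" field name comes from the original script):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the login field instead of assuming it is already there
username = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.NAME, "L1")))
username.clear()
username.send_keys("name")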
About your second problem
Chrome icon lingers in my Dock and never closes
In case you see errors in the browser console, try the quit() method: it closes the browser and shuts down the ChromeDriver executable that was started alongside it. Keep in mind that close() only closes the browser window; the driver instance remains dangling. Another thing to check is that you are actually using the latest versions of both the ChromeDriver executable and the Chrome browser.
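A minimal sketch of the difference, reusing the shape of the script above:

try:
    driver.get("https://url.com/Overview.aspx")
    driver.get_screenshot_as_file('main-page.png')
finally:
    driver.quit()  # ends the session and shuts down the chromedriver process
    # driver.close() would only close the window, leaving chromedriver running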
UPDATE:
If waits do NOT affect your execution time, this means Selenium waits for the page to finish loading and only then looks for the elements you've specified. The only real option I can think of is to set a page load timeout and catch the resulting exception, like so (the URL is the one from your script; note that the timeout must be set before the get() call, which is what actually raises the exception):

from selenium.common.exceptions import TimeoutException

driver.set_page_load_timeout(seconds)
try:
    driver.get("https://url.com/Overview.aspx")
except TimeoutException:
    # the load event never fired in time; put code with explicit waits here
    pass
I solved this with the following:
driver.find_element_by_link_text("Job Board").click()
driver.find_element_by_link_text("Available Deployments").click()
I am trying to scrape a website that contains images using a headless Selenium.
Initially, the website populates 50 images. If you scroll down more and more images are loaded.
Windows 7 x64
python 2.7
recent install of selenium
[1] Non-Headless
Navigating to the website with selenium as follows:
from selenium import webdriver
browser = webdriver.Firefox()
browser.get(url)
browser.execute_script('window.scrollBy(0, 10000)')
browser.page_source
This works (if anyone has a better suggestion please let me know).
I can continue to scrollBy() until I reach the end and then pull the source page.
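A sketch of that loop, continuing from the snippet above (the 2-second pause is an assumption about how quickly the images arrive):

import time

last_height = browser.execute_script("return document.body.scrollHeight")
while True:
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight)")
    time.sleep(2)  # give the newly requested images time to load
    new_height = browser.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # nothing new was loaded; we have reached the end
    last_height = new_height
html = browser.page_source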
[2] Headless with HTMLUNIT
from selenium import webdriver
driver = webdriver.Remote(desired_capabilities=webdriver.DesiredCapabilities.HTMLUNIT)
driver.get(url)
I cannot use scrollBy() in this headless environment.
Any suggestions on how to scrape this kind of page?
Thanks
One option is to study the JavaScript to see how it calculates what to load next. Then implement that logic in your scraping client instead. Once you have done that, you can use faster scraping tools like Perl's WWW::Mechanize.
You need to enable JavaScript explicitly when using the HtmlUnit Driver:
driver.setJavascriptEnabled(true);
According to the docs (http://code.google.com/p/selenium/wiki/HtmlUnitDriver), it should emulate IE's JavaScript handling by default.
When I tried the same method, I got error messages saying that Selenium crashed while connecting to Java to simulate JavaScript.
When I moved the script into the execute_script method, the code worked well.
I guess the communication between Selenium and the Java server side was not configured properly.
Enabling JavaScript with HTMLUNITWITHJS is possible and quick ;)
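In the Python client this looks roughly like the following (assuming a Selenium server is listening on the default port, and a placeholder URL):

from selenium import webdriver

driver = webdriver.Remote(
    command_executor='http://127.0.0.1:4444/wd/hub',
    desired_capabilities=webdriver.DesiredCapabilities.HTMLUNITWITHJS)
driver.get("https://example.com")  # placeholder URL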
I am using the Chrome driver for my Selenium test case, and it works fine. There is a performance issue in my project, so I want to migrate the test case from ChromeDriver to HtmlUnitDriver. When I try this, just by swapping in HtmlUnitDriver as the driver, the Selenium test case stops working.

After experimenting with this driver, I concluded that HtmlUnitDriver is not loading the entire page. I say this because HtmlUnitDriver can find some div ids that appear near the beginning of the page, but other divs are not found by this driver; I get NoSuchElementException for those div ids.

Please help me resolve this problem in my project.
Aren't the elements you are looking for created by JavaScript/AJAX calls? You might need to enable JavaScript support in HtmlUnitDriver first.
But beware: even if it works, it may behave differently from what you see in real browsers.
Otherwise, are you using implicit/explicit waits for your searches? Even with JS enabled, it sometimes takes a while before all asynchronous requests are handled.
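Putting both suggestions together, a hedged sketch using the Python client against a remote HtmlUnit session with JavaScript enabled (the div id is hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Remote(
    desired_capabilities=webdriver.DesiredCapabilities.HTMLUNITWITHJS)
driver.get("https://example.com")  # placeholder URL
# wait for the asynchronous requests to populate the later divs
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "late-loading-div")))  # hypothetical id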