I'm currently running Selenium with SpecFlow.
One of my tests clicks a button which triggers the download of a PDF file.
That file is automatically opened in a new tab, where the test then grabs the URL and downloads the referenced file directly into the Selenium project.
This whole process works perfectly when ChromeDriver is run normally, but it fails on a headless browser with the following error:
The HTTP request to the remote WebDriver server for URL http://localhost:59658/session/c72cd9679ae5f713a6c857b80c3515e4/url timed out
after 60 seconds. -> The request was aborted: The operation has timed out.
This error occurs when attempting to read driver.Url.
driver.Url calls work elsewhere in the code; it only fails after the headless browser switches tabs. (Yes, I am switching windows using the driver.)
For reference, I cannot get this url without clicking the button on the first page and switching tabs as the url is auto-generated after the button is clicked.
I believe you are passing only the "--headless" argument. For better reliability you should set a screen size too; sometimes, due to an inappropriate screen size, headless Chrome cannot find the elements you are looking for. Try this code, or just add the one line that sets the size:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--window-size=1920,1080")
driver = webdriver.Chrome(options=chrome_options)
Don't forget to add any other arguments you need to chrome_options before creating the driver.
Related
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
driver = webdriver.Chrome(options=chrome_options)
driver.get(my_website)
The above code opens my website but only after loading a tab with the title data:, for a second or two. How do I hide it?
Your code opening the website through google-chrome after briefly showing a tab with the title data:, for a second or two is perfectly fine.
The main concern is that your program shouldn't get stuck with data:, in the URL bar. In case that situation happens, the simplest solution would be to:
Check if the URL is properly formatted, i.e. that the protocol is mentioned along with the actual URL, as follows:
http://localhost:3000
Ensure you are using compatible versions of the binaries, in terms of the Google Chrome browser and ChromeDriver.
References
You can find a couple of relevant discussions in:
How to work with a specific version of ChromeDriver while Chrome Browser gets updated automatically through Python selenium
Selenium for ChromeDriver and Chrome Browser and the log message “Only local connections are allowed”
WebDriverException: Message: Service /usr/lib/chromium-browser/chromedriver unexpectedly exited on Raspberry-Pi with ChromeDriver and Selenium
RemoteDisconnected(“Remote end closed connection without” http.client.RemoteDisconnected: Remote end closed connection without response
Running on Windows 7 Enterprise, 64-Bit OS, with Chrome Version 65.0.3325.181 (Official Build) (64-bit) and ChromeDriver version 2.37.
The same code runs flawlessly in Firefox. I am using webdriver to fill out a page (say page 1) that will generate an XML link. When I click the link from page 1 it will open the XML page in a new window. My code switches to the window and calls "getCurrentUrl()". Once it hits this snippet of code, it hangs for several minutes and returns:
[1523382059.135][SEVERE]: Timed out receiving message from renderer:
300.000 [1523382059.138][SEVERE]: Timed out receiving message from renderer: -0.002.
However, if I manually refresh the page, it will return the URL and finish executing. I have tried telling Selenium to send Ctrl+F5, as well as the refresh methods, and even telling it to call get(getCurrentUrl()).
Could this be an issue with proxies, or maybe an issue with pulling the page, since it is just raw XML?
Thanks for the time and help.
I have recently had issues with Selenium and the Chrome driver myself. Even though it's not ideal, have you tried using an implicit or explicit wait? Thread.sleep() has worked for me in the past when all else failed.
I have a script that logs into a site and then takes a screenshot. It uses Chrome 59 on MacOS in headless mode.
I have two problems that I think are related. One problem is that my script takes minutes when it should take seconds. The second is that the Chrome icon lingers in my Dock and never closes.
I think these problems are caused by a couple of elements on the site I am checking that don't load: \images\grey_semi.png and https://www.google-analytics.com/analytics.js. I think this holds up Selenium and prevents it from closing as instructed with driver.close().
What can I do?
script:
import os
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(executable_path=os.path.abspath("chromedriver"), chrome_options=chrome_options)
driver.get("https://url.com/")
username = driver.find_element_by_name("L1")
username.clear()
username.send_keys("name")
password = driver.find_element_by_name("P1")
password.clear()
password.send_keys("pass")
driver.find_element_by_id("login").click()
driver.get("https://url.com/Overview.aspx")
driver.get_screenshot_as_file('main-page.png')
driver.close()
I don't see any waits in your code. As you know, web apps use AJAX techniques to load dynamic data, so when a page is loaded by the browser, the elements within that page may load at different time intervals.

Depending on the implementation, it is possible that the load event is affected by google-analytics.com/analytics.js, since a web page is considered completely loaded only after all content (including images, script files, CSS files, etc.) has loaded. By default your UI Selenium tests use a fresh instance of the browser, so it shouldn't cache analytics.js.

One more thing to check is whether Google Analytics is placed in a specific spot so that it isn't loaded until the page has loaded, or runs async. It used to go before the </body> tag, but I believe it's now supposed to be the last <script> in the <head> tag. You can find more details on the page-load impact of Google Analytics here; they claim that if done right, the load times are so small that it's not even worth worrying about.

My best guess is that the issue is with how Google Analytics is used.
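As a sketch of the async placement mentioned above (the exact snippet Google recommends has changed over the years, so treat this as illustrative):

```html
<head>
  <title>Page</title>
  <!-- async: the browser fetches analytics.js without blocking the page's
       load event, so a slow analytics response can't hold up Selenium -->
  <script async src="https://www.google-analytics.com/analytics.js"></script>
</head>
```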
About your second problem
Chrome icon lingers in my Dock and never closes
In case you see errors in the browser console, try using the quit() method: it closes the browser and shuts down the ChromeDriver executable that was started when the session began. Keep in mind that close() only closes the browser window, while the driver instance remains dangling. Another thing to check is that you are actually using the latest versions of both the ChromeDriver executable and the Chrome browser.
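A sketch of the quit-in-finally pattern described above, so the ChromeDriver process is shut down even when the screenshot step throws (the function and file names are illustrative):

```python
def screenshot_and_shutdown(driver, url, path="main-page.png"):
    """Navigate, take a screenshot, then always release the browser.

    quit() closes all windows AND stops the chromedriver process;
    close() would only close the current window and leave the driver
    (and the Dock icon) dangling.
    """
    try:
        driver.get(url)
        driver.get_screenshot_as_file(path)
    finally:
        driver.quit()
```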
UPDATE:
If waits do NOT affect your execution time, this means that Selenium waits for the page to finish loading and then looks for the elements you've specified. The only real option that I can think of is to specify a page load timeout. Note that the TimeoutException is raised by the navigation call (get), not by set_page_load_timeout itself:
from selenium.common.exceptions import TimeoutException

driver.set_page_load_timeout(seconds)
try:
    driver.get("https://url.com/Overview.aspx")
except TimeoutException:
    # the page took longer than `seconds`; carry on with explicit waits here
    pass
I solved this with the following:
driver.find_element_by_link_text("Job Board").click()
driver.find_element_by_link_text("Available Deployments").click()
Using the Selenium Chrome WebDriver I am trying to load a page,
but I am getting a Selenium timeout error like:
timeout from renderer: 3000
It seems to be the default timeout that Selenium waits for until the page gets loaded.
I am using groovy selenium to work with chrome.
Everything is fine. Only Timeout error causing issue sometimes.
Does someone have any idea what the default timeout for page loading is in the Chrome Selenium WebDriver?
And can I change that timeout?
If yes, how?
I am currently using Selenium ChromeDriver v2.9 with Chrome v27, in Groovy.
I have been trying Chrome, Firefox & PhantomJS using Python.
I have been trawling loads of webpages trying to find answers for using a headless web driver WITH a working load page timeout.
There is a lot of confusion between the various answers.
I'll post my findings on this page since it comes up high in a search like 'selenium chrome timeout'.
I can now get chromedriver.exe to work WITH a timeout.
The selenium wrapper seems broken but the following works as per my requirements. Instead of calling:
chrome_driver.set_page_load_timeout(my_timeout_secs)
chrome_driver.get( ...... )
use:
from selenium.webdriver.remote.command import Command

chrome_driver.execute(Command.SET_TIMEOUTS, {
    'ms': float(my_timeout_secs * 1000),
    'type': 'page load'})
chrome_driver.get( ...... )
This bypasses the selenium wrapper and sets the timeout with Chrome.
Then just use a standard Selenium catch: the page will time out, and your code runs after Chrome raises a TimeoutException (from selenium.common.exceptions).
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.remote.command import Command

try:
    chrome_driver.execute(Command.SET_TIMEOUTS, {
        'ms': float(my_timeout_secs * 1000),
        'type': 'page load'})
    chrome_driver.get(url)
except TimeoutException:
    # do whatever you need to here
    pass
Hope that helps anybody needing headless driver with timeouts.
Thanks,
LJHW
Have you seen the documentation?
An implicit wait is to tell WebDriver to poll the DOM for a certain amount of time when trying to find an element or elements if they are not immediately available. The default setting is 0.
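In code, the implicit wait is a one-time setting on the driver; 10 seconds below is an arbitrary example and the helper is just a wrapper for illustration:

```python
def configure_implicit_wait(driver, seconds=10):
    """Set a global implicit wait: every subsequent find_element* call
    will poll the DOM up to `seconds` before giving up (the default is 0,
    i.e. fail immediately if the element isn't present)."""
    driver.implicitly_wait(seconds)
```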
I need to build a Python scraper to scrape data from a website where content is only displayed after a user clicks a link bound to a JavaScript onclick function, without the page reloading. I've looked into Selenium in order to do this and played around with it a bit, and it seems Selenium opens a new Firefox browser every time I instantiate a driver:
>>> driver = webdriver.Firefox()
Is this open browser required, or is there a way to get rid of it? I'm asking because the scraper is potentially part of a web app, and I'm afraid if multiple users start using it, I will have a bunch of browser windows open on my server.
Yes, selenium automates web browsers.
You can add this at the bottom of your python code to make sure the browser is closed at the end:
driver.quit()