I have been hitting this issue with Selenium Grid 2 since 2.41.0 or earlier; currently I am using 2.44.0. My setup is a node (running on Windows 7) with maxSession set to 16, and a hub running on Linux. Once the setup is done, I can see from grid/console that there are 16 icons available, which is expected. But when I kick off tests that require more than 6 browser instances (in this case, Chrome), only 6 icons in grid/console are grayed out, and a message says there are "requests waiting for a slot to be free": https://selenium.googlecode.com/issues/attachment?aid=63970009000&name=Screen+Shot+2014-11-13+at+12.10.46+PM.png&token=ABZ6GAd1E0jC2GEYFnemYyFfc8n9RA9uYQ%3A1416429465010&inline=1.
And in the log I found:
WebDriverException: Message: Error forwarding the new session Request timed out waiting for a node to become available.
Does anyone know how to resolve this? Many thanks in advance.
There are configuration options for max instances of each browser type as well as the overall max number of instances.
Could your configuration be limiting the number of Chromes to 6?
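If so, registering the node with explicit limits should free up the remaining slots. This is a sketch of a Grid 2 node registration command; the hub address is a placeholder, and the jar name assumes the 2.44.0 release you mentioned:
$ java -jar selenium-server-standalone-2.44.0.jar -role node \
    -hub http://<hub-host>:4444/grid/register \
    -browser "browserName=chrome,maxInstances=16" \
    -maxSession 16
Here -browser sets the per-browser-type cap (maxInstances), while -maxSession caps the total number of concurrent sessions on the node.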
Related
When I start the test, the browser opens but does not load the URL.
After 10-15 seconds it stops loading (see screenshot).
I updated IntelliJ and updated Playwright to version 1.28.1 (this didn't help).
It happens in 7 out of 10 trials.
Any idea why this behavior suddenly started?
Many thanks!
There are three probable causes of this error when hitting the URL:
Your browser version could be outdated and blocked by the site, so try updating the Playwright browsers using the command 'npx playwright install'.
Check whether the site is being blocked by a firewall or proxy setting; if so, get it whitelisted.
As the failure also looks like a crash due to timeout, check whether you have decent internet speed; the sketch below can help separate a slow network from a blocked request.
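A minimal reproduction with a raised navigation timeout, shown here with Playwright's Python bindings for brevity (https://example.com is just a placeholder, substitute the site you are testing):
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    # Raise the navigation timeout above the 30 s default to rule out a slow connection
    page.goto("https://example.com", timeout=60000, wait_until="domcontentloaded")
    print(page.title())
    browser.close()
If this loads reliably while your test still fails, the problem is more likely in the test's configuration than in the network.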
We recently upgraded our Windows 10 test environment to ChromeDriver v87.0.4280.20 and Chrome v87.0.4280.66 (Official Build) (64-bit), and after the upgrade even the minimal program below produces this ERROR log:
[9848:10684:1201/013233.169:ERROR:device_event_log_impl.cc(211)] [01:32:33.170] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)
Minimum Code Block:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get('https://www.google.com/')
Console Output:
DevTools listening on ws://127.0.0.1:64170/devtools/browser/2fb4bb93-79ab-4131-9e4a-3b65c08dbffb
[9848:10684:1201/013233.169:ERROR:device_event_log_impl.cc(211)] [01:32:33.170] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)
[9848:10684:1201/013233.172:ERROR:device_event_log_impl.cc(211)] [01:32:33.173] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)
Is anyone else facing the same issue? Was there any change in ChromeDriver/Chrome v87 with respect to ChromeDriver/Chrome v86?
Any clues will be helpful.
However, these log messages can be suppressed from appearing on the console through an easy hack, i.e. by adding an argument through add_experimental_option() as follows:
options.add_experimental_option('excludeSwitches', ['enable-logging'])
Code Block:
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument("start-maximized")
# to suppress the error messages/logs
options.add_experimental_option('excludeSwitches', ['enable-logging'])
driver = webdriver.Chrome(options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get('https://www.google.com/')
My apologies for the log spam. If you aren't having issues connecting to a device with WebUSB you can ignore these warnings. They are triggered by Chrome attempting to read properties of USB devices that are currently suspended.
After going through quite a few discussions, documentation, and Chromium issues, here are the details related to the surfacing of the log message:
[9848:10684:1201/013233.169:ERROR:device_event_log_impl.cc(211)] [01:32:33.170] USB: usb_device_handle_win.cc:1020 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)
Details
It all started with the reporting of chromium issue Remove WebUSB's dependency on libusb on Windows as:
For Linux (probably Mac as well), both WebUSB notification and communication work correctly (after allowing user access to the device in udev rules).
For Windows, it seems that libusb only works with a non-standard WinUsb driver (https://github.com/libusb/libusb/issues/255).
When the hardware is inserted and the VID/PID is unknown to the system, Windows 10 correctly loads its CDC driver for the CDC part and the WinUSB driver (version 10) for the WebUSB part (no red flags). However, it seems that Chrome never finds the device until I manually force an older WinUSB driver (version 6, probably modified as well) on the interface.
The solution was implemented in a step-wise manner as follows:
Start supporting some transfers in the new Windows USB backend
Fix bulk/interrupt transfers in the new Windows USB backend
[usb] Read BOS descriptors from the hub driver on Windows
[usb] Collect all composite devices paths during enumeration on Windows
[usb] Remove out parameters in UsbServiceWin helper functions
[usb] Support composite devices in the new Windows backend
[usb] Detect USB functions as Windows enumerates them
[usb] Support composite devices with multiple functions
[usb] Hold interface requests until Windows enumerates functions
[usb] Add direction parameter to ClearHalt
[usb] Count references to a WINUSB_INTERFACE_HANDLE
[usb] Implement blocking operations in the Windows backend
These changes ensured that the new backend was ready to be tested and was available through Chrome Canary and the Chrome dev channel, which you can access manually through:
chrome://flags#enable-new-usb-backend
More change requests were submitted as follows:
[usb] Mark calls to SetupDiGetDeviceProperty as potentially blocking: According to hang reports this function performs an RPC call which may take some time to complete. Mark calls with a base::ScopedBlockingCall so that the thread pool knows this task may be busy for a while.
variations: Enable NewUsbBackend in field trial testing config: This flag was experimental as beta-channel uses this change configuration as the default for testing.
As the experimental launch of the new backend appeared to be stable, this configuration was finally enabled by default, so that the change rolled out to all users of Chrome 87 through usb: Enable new Windows USB backend by default. Revision / Commit
The idea was that once this configuration had been the default for a few milestones, the Chromium team would start removing the Windows-specific code from the old backend and remove the flag.
Road Ahead
The Chromium team has already merged the revision/commit to Extend new-usb-backend flag expiration within Chrome v90, which will be available soon.
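Until then, if the log spam is disruptive you can experiment with falling back to the old backend from your test code. This is only a sketch: it assumes the NewUsbBackend feature name from the field-trial change above is the one gating the new backend, so treat it as an experiment rather than a confirmed workaround:
from selenium import webdriver

options = webdriver.ChromeOptions()
# Assumption: disabling the NewUsbBackend feature reverts Chrome to the old USB backend
options.add_argument("--disable-features=NewUsbBackend")
driver = webdriver.Chrome(options=options)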
Update
As per #ReillyGrant's [Committer, WebDriver for Google Chrome] comment:
..." it would be good to reduce the log level for these messages so they don't appear on the console by default but we haven't landed code to do that yet"...
References
You can find a couple of relevant detailed discussions in:
Failed to read descriptor from node connection: A device attached to the system is not functioning error using ChromeDriver Selenium on Windows OS
Failed to read descriptor from node connection: A device attached to the system is not functioning error using ChromeDriver Chrome through Selenium
I encountered this problem yesterday, and fixed it by installing all available Windows updates.
https://support.microsoft.com/en-us/windows/what-to-try-if-your-touchscreen-doesn-t-work-f159b366-b3ef-99ad-24a4-31a4c62ab46d
A partial solution that worked for me
I was getting this error too, and it was stopping my program from running.
I unplugged all my USB devices and ran the program: no error.
I plugged the devices back in and ran the program again: I still get the error, but it no longer stops the program from completing.
Note: For WebdriverIO on Windows 10, this suppresses the error messages for me:
"goog:chromeOptions": { "excludeSwitches": ["enable-logging"] }
My Selenium WebDriver + Python scripts work fine, but raise a ConnectionAbortedError if there is too much time between WebDriver commands.
The following minimum working example gives an error:
from selenium import webdriver
from selenium.webdriver import Firefox
import time
browser = webdriver.Firefox()
browser.get('http://www.google.com/')
searchfield = browser.find_element_by_id("lst-ib")
time.sleep(5)
browser.close() # -> ConnectionAbortedError
while without the 5 second sleep, there is no error:
browser = webdriver.Firefox()
browser.get('http://www.google.com/')
searchfield = browser.find_element_by_id("lst-ib")
browser.close()
No one else seems to have had this issue... Is it normal that the connection is lost after several seconds? Or am I doing something wrong?
I have tried to use implicitly_wait and set_script_timeout, but increasing these timeouts did not solve the problem.
ConnectionAbortedError
You are seeing ConnectionAbortedError because the default timeout for the keep-alive connection is 5s.
As per #andreastt's comment in the discussion Keep-Alive connection to geckodriver 0.21.0 dropped after 5s of inactivity without re-connection using Selenium Python client, GeckoDriver v0.21.0 switched to using HTTP/1.1 Keep-Alive connections.
Your observation of ConnectionAbortedError when there is a 5 second sleep between WebDriver commands is confirmed by #whimboo's comment, where he mentions:
It looks like the default timeout for the keep-alive connection is 5s. Sadly once this time passed-by the connection is not correctly re-instantiated.
Here is the reference to Why 5s?
The GeckoDriver team could bump this timeout up to a saner value, but on the other side the client also has to create a new connection in case of failures. The expectation was that the client would check whether the connection is still alive before using it: when the connection eventually gets closed by the server after five seconds of inactivity, the client needs to make a new connection. Still, it makes sense to bump up the Keep-Alive timeout duration, as it is common practice for automation testers to wait five seconds for an operation to complete, or to sleep a thread for five seconds while waiting for an element to be present/visible.
But again, raising the Keep-Alive connection timeout to a higher value will not resolve the underlying issue, as there are bugs in WebDriver clients' handling of HTTP connections, mentioned in Support keep alive connections.
Further #AutomatedTester mentioned:
This issue is not because we are not connected at the time of doing a request. If that was the issue we would be getting a httplib.HTTPConnection exception being thrown. Instead a BadStatusLine is thrown when we do a connection, close it, and try to parse the response.
Now this could be the python stdlib bug, httplib bug or selenium bug, which will need investigating.
#andreastt adds:
The HTTPD's Keep-Alive timeout value is orthogonal to this issue. It is a known issue that urllib in the Python 2.7 standard library, which the Selenium Python client uses, does not conform to HTTP/1.1. Increasing the server timeout would mitigate this, but not resolve the underlying problem, which is that the HTTP library in Python has a defect.
The issue appears to be fixed in more recent Python versions. When we investigated this, we also found that various HTTP libraries built on top of urllib, such as requests, work around the issue using various mechanisms (like special-casing the BadStatusLine exception and re-connecting).
The Selenium team was working on a patch for the Python client to replace urllib with something that does not exhibit the same defect with Keep-Alive connections. This work can be tracked in Urllib3.
Meanwhile, the GeckoDriver team is working on extending the server-side timeout to something more reasonable. As I said, this would help mitigate the issue but not fundamentally fix it. In any case, it is true that five seconds is probably too low to get real benefit from persistent HTTP connections, and increasing it to something like 60 seconds would give better performance.
You can track the work on increasing the server Keep-Alive timeout in the discussion Increase Keep-Alive connection drop timeout.
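Until those fixes land, a client-side guard can paper over the dropped connection. This is only a sketch: call_with_retry is a hypothetical helper, not part of Selenium's API, and retrying is only safe for commands that are idempotent:
from selenium import webdriver
import time

def call_with_retry(action):
    # Hypothetical helper: retry once if the idle keep-alive connection was dropped
    try:
        return action()
    except ConnectionAbortedError:
        # The retry opens a fresh connection, so a single retry often succeeds
        return action()

browser = webdriver.Firefox()
browser.get('http://www.google.com/')
time.sleep(5)  # longer than the 5s keep-alive timeout
call_with_retry(lambda: browser.close())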
Solution
Upgrading your Test Environment to use Selenium v3.14.0 may solve your issue.
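For example (assuming pip and package index access on your machine), you can upgrade and verify the client like this:
$ pip install -U "selenium>=3.14.0"
$ python -c "import selenium; print(selenium.__version__)"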
We've started getting random timeouts but cannot work out the reason. The tests run on remote machines on Amazon using Selenium Grid. Here is what happens:
the browser is opened,
then a page starts loading, but cannot load fully within 120 seconds,
then a timeout exception is thrown.
If I run the same tests locally, everything is OK.
The error is an ordinary timeout exception that is thrown if a page is not loaded completely within the period set in driver.manage().timeouts().pageLoadTimeout(). The problem is that a page of the site cannot be loaded completely within that time. But when the period set in driver.manage().timeouts().pageLoadTimeout() expires and, consequently, Selenium's possession of the browser ends, the page loads at once. The issue cannot be reproduced manually on the same remote machines. We've tried different versions of Selenium standalone, ChromeDriver, and the Selenium driver. The browser is Google Chrome 63. We would be happy to hear any suggestions about the reasons.
When Selenium loads a webpage/URL, by default it follows a pageLoadStrategy of normal. To stop Selenium from waiting for the full page load, we can configure pageLoadStrategy, which supports 3 different values as follows:
normal (full page load)
eager (interactive)
none
Code Sample:
Java
capabilities.setCapability("pageLoadStrategy", "none");
Python
caps["pageLoadStrategy"] = "none"
Here you can find the detailed discussions through Java and Python clients.
My current setup is 5 nodes with 10 Firefox browsers each, all connected to a hub.
I am running into a problem where I exhaust the 10 Firefox browsers on each node, so any new Selenium runs get queued up at the hub and run only when a Firefox browser on some node becomes available.
What I want to do is query the Selenium Grid 2 hub for the number of free/idle/available browsers before actually running my tests on that particular grid setup. Based on the result, I would redirect the tests to another grid setup (on another machine), or perhaps not run them at all.
Of course I can add more nodes, or increase the number of browsers each node can handle. But I am looking for an answer that lets me query the grid and then decide what action to take, rather than muscling my way through by brute force (a bigger server to handle more browser sessions).
I also sense that this may be a feature not implemented by Selenium Grid 2, so I was wondering how others have got around this problem.
The /wd/hub/sessions endpoint provides session information for each Selenium node in a Selenium Grid. You can get the session information of a node like this (assuming your Selenium node listens on port 5555):
$ curl http://<selenium-node>:5555/wd/hub/sessions
You will get a JSON object response like this:
{"value":[],"sessionId":null,"status":0,"hCode":1542413295,"class":"org.openqa.selenium.remote.Response"}
Then you can count the active sessions from the "value" array in each Selenium node's response; subtracting that from the node's capacity tells you how many slots are left.
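A small polling sketch built on that endpoint; the node addresses and per-node capacity below are assumptions for illustration:
import json
import urllib.request

NODES = ["http://node1:5555", "http://node2:5555"]  # hypothetical node addresses
MAX_SESSIONS_PER_NODE = 10  # matches the 10-browser-per-node setup above

def active_sessions(node_url):
    # Count the entries in the "value" array of /wd/hub/sessions
    with urllib.request.urlopen(node_url + "/wd/hub/sessions") as resp:
        payload = json.load(resp)
    return len(payload.get("value") or [])

free = sum(MAX_SESSIONS_PER_NODE - active_sessions(n) for n in NODES)
print("Free browser slots across the grid:", free)
Based on that count you can decide whether to dispatch tests to this grid or redirect them elsewhere.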