selenium.common.exceptions.InvalidSelectorException using Selenium in Python 3.7

I want to use Selenium to automate the process of opening a specific website, logging in, and searching for particular articles. I could do a few of the steps, but I am facing an error at the 'sign in' step.
from selenium import webdriver
from selenium.webdriver.common.by import By
base = 'https://www.wsj.com'
url = 'https://www.wsj.com/search/term.html?KEYWORDS=cybersecurity&min-date=2018/04/01&max-date=2019/03/31&isAdvanced=true&daysback=90d&andor=AND&sort=date-desc&source=wsjarticle,wsjpro&page=1'
browser = webdriver.Safari(executable_path='/usr/bin/safaridriver')
browser.get(url)
browser.find_element_by_id('editions-select').click()
browser.find_element_by_id('na,us').click()
browser.find_element(By.XPATH, '//button[#type="button"],[contain(.,"Sign In")]').click()
browser.find_element_by_id('username').send_keys('**#&^&#$##$')
browser.find_element_by_id('password').send_keys('###$%%**')
browser.find_element_by_id('basic-login').click()
browser.find_element_by_id('masthead-container').click()
browser.find_element_by_id('searchInput').send_keys('cybersecurity')
browser.find_element_by_name('ADVANCED SEARCH').click()
browser.find_element_by_id('dp1560924131783').send_keys('2018/04/01')
browser.find_element_by_id('dp1560924131784').send_keys('2019/03/31')
browser.find_element_by_id('wsjblogs').click()
browser.find_element_by_id('wsjvideo').click()
browser.find_element_by_id('interactivemedia').click()
browser.find_element_by_id('sitesearch').click()
The code works up to this line:
browser.find_element_by_id('na,us').click()
But after that it is showing error in this line:
browser.find_element(By.XPATH, '//button[#type="button"],[contain(.,"Sign In")]').click()
The error message says:
selenium.common.exceptions.InvalidSelectorException: Message:
What is wrong in my code?

This error message...
selenium.common.exceptions.InvalidSelectorException
...implies that the XPath expression was not a valid one.
However, it seems you were close. You need to replace:
'//button[#type="button"],[contain(.,"Sign In")]'
and join the two conditions with the and operator as follows:
"//button[@type='button' and contains(.,'Sign In')]"

Related

How do I resolve the NoSuchElementException error in Selenium?

I was following an online tutorial on scraping the Glassdoor website via Selenium.
My code does not get past this statement:
try:
    driver.find_element_by_class_name("selected").click()
    print('x out worked')
except ElementClickInterceptedException:
    print('x out failed')
    pass
time.sleep(.1)
try:
    driver.find_element_by_css_selector('[alt="Close"]').click()
    print(' x out worked')
except NoSuchElementException:
    print(' x out failed (next page or missing)')
    pass
The error I receive is:
NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".selected"}
A couple of things that I tried:
driver.maximize_window()
driver.implicitly_wait(20)
Hard to tell without seeing the HTML, but try reversing your quotes (single/double)...
driver.find_element_by_css_selector("[alt='Close']").click()
Or...
driver.find_element_by_xpath("//*[#alt='Close']")
If that doesn't work, add the relevant HTML section to your post.
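If the element exists but renders late, an explicit wait is usually more reliable than implicitly_wait; a minimal sketch, assuming the locators from your post:
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    # Wait up to 10 seconds for the close button to appear before clicking
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "[alt='Close']"))).click()
    print('x out worked')
except TimeoutException:
    print('x out failed (next page or missing)')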

How to address urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=58408): Max retries exceeded with url

I am trying to scrape a few pages of a website with Selenium and use the results, but when I run the function twice, the error
[WinError 10061] No connection could be made because the target machine actively refused it
appears for the 2nd function call.
Here's my approach:
import os
import re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup as soup

opts = webdriver.ChromeOptions()
opts.binary_location = os.environ.get('GOOGLE_CHROME_BIN', None)
opts.add_argument("--headless")
opts.add_argument("--disable-dev-shm-usage")
opts.add_argument("--no-sandbox")
browser = webdriver.Chrome(executable_path="CHROME_DRIVER PATH", options=opts)

lst = []

def search(st):
    for i in range(1, 3):
        url = "https://gogoanime.so/anime-list.html?page=" + str(i)
        browser.get(url)
        req = browser.page_source
        sou = soup(req, "html.parser")
        title = sou.find('ul', class_="listing")
        title = title.find_all("li")
        for j in range(len(title)):
            lst.append(title[j].getText().lower()[1:])
    browser.quit()
    print(len(lst))

search("a")
search("a")
OUTPUT
272
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=58408): Max retries exceeded with url: /session/4b3cb270d1b5b867257dcb1cee49b368/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001D5B378FA60>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
This error message...
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=58408): Max retries exceeded with url: /session/4b3cb270d1b5b867257dcb1cee49b368/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001D5B378FA60>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
...implies that the driver failed to establish a new connection, raising MaxRetryError, as no connection could be made.
A couple of things:
First and foremost, as per the discussion in max-retries-exceeded exceptions are confusing, the traceback is somewhat misleading: Requests wraps the exception for the user's convenience, and the original exception is part of the message displayed.
Requests never retries (it sets retries=0 for urllib3's HTTPConnectionPool), so the error would have been far more canonical without the MaxRetryError and HTTPConnectionPool keywords. An ideal traceback would have been:
ConnectionError(<class 'socket.error'>: [Errno 1111] Connection refused)
Root Cause and Solution
Once you have initiated the WebDriver and web client session, within def search(st) you invoke get() to access a URL, and in subsequent lines you also invoke browser.quit(), which calls the /shutdown endpoint, after which the WebDriver and web-client instances are destroyed completely, closing all pages/tabs/windows. Hence no more connection exists.
You can find a couple of relevant detailed discussion in:
PhantomJS web driver stays in memory
Selenium: How to stop geckodriver process impacting PC memory, without calling driver.quit()?
In such a situation, on the next invocation (the second search("a") call), when browser.get() is invoked there are no active connections; hence you see the error.
So a simple solution would be to remove the line browser.quit() and invoke browser.get(url) within the same browsing context.
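If you want the cleanup to stay explicit, one option is to keep browser.quit() out of the function entirely and run it once in a finally block, so it executes exactly once however the scraping ends (a sketch, not part of the original code):
try:
    search("a")
    search("a")
finally:
    # Quit the browser exactly once, after all scraping is done
    browser.quit()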
Conclusion
Once you upgrade to Selenium 3.14.1 you will be able to set the timeout, see canonical tracebacks, and take the required action.
References
You can find a relevant detailed discussion in:
MaxRetryError: HTTPConnectionPool: Max retries exceeded (Caused by ProtocolError('Connection aborted.', error(111, 'Connection refused')))
tl;dr
A couple of relevant discussions:
Adding max_retries as an argument
Removed the bundled charade and urllib3.
Third party libraries committed verbatim
Problem
The driver was asked to crawl the URL after being quit.
Make sure that you're not quitting the driver before getting the content.
Solution
Regarding your code: when executing search("a"), the driver retrieves the URL, returns the content, and after that it closes.
When search() runs another time, the driver no longer exists, so it cannot proceed with the URL.
You need to remove the browser.quit() from the function and add it at the end of the script.
lst = []

def search(st):
    for i in range(1, 3):
        url = "https://gogoanime.so/anime-list.html?page=" + str(i)
        browser.get(url)
        req = browser.page_source
        sou = soup(req, "html.parser")
        title = sou.find('ul', class_="listing")
        title = title.find_all("li")
        for j in range(len(title)):
            lst.append(title[j].getText().lower()[1:])
    print(len(lst))

search("a")
search("a")
browser.quit()
I faced the same issue in Robot Framework.
MaxRetryError: HTTPConnectionPool(host='options=add_argument("--ignore-certificate-errors")', port=80): Max retries exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001ABA3190F10>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed')).
This issue was fixed once I updated all the libraries to their latest versions in PyCharm and selected Intellibot#SeleniumLibrary.patched.

selenium.WebDriverException: Returned value cannot be converted to WebElement

I am seeing the error below when trying to find an element; I want to fill in text after finding the web element.
I am able to find the XPath through the Chrome console, yet somehow I am getting this issue. There are posts on this issue, but most of them relate to Appium, which is irrelevant to me.
util.driver.switchTo().defaultContent();
util.driver.switchTo().frame(0);
util.driver.findElement(By.xpath("//label[text()='Reason for Escalation']/following-sibling::div/input"));
Error message:
org.openqa.selenium.WebDriverException: Returned value cannot be converted to WebElement: {error=no such element, message=no such element: Unable to locate element: {"method":"xpath","selector":"//label[text()='Reason for Escalation']/following-sibling::div/input"}
Driver info: driver.version: RemoteWebDriver
at org.openqa.selenium.remote.RemoteWebDriver.findElement(RemoteWebDriver.java:324)
at org.openqa.selenium.remote.RemoteWebDriver.findElementByXPath(RemoteWebDriver.java:419)
at org.openqa.selenium.By$ByXPath.findElement(By.java:353)
at org.openqa.selenium.remote.RemoteWebDriver.findElement(RemoteWebDriver.java:309)
Caused by: java.lang.ClassCastException: com.google.common.collect.Maps$TransformedEntriesMap cannot be cast to org.openqa.selenium.WebElement
at org.openqa.selenium.remote.RemoteWebDriver.findElement(RemoteWebDriver.java:322)
at org.openqa.selenium.remote.RemoteWebDriver.findElementByXPath(RemoteWebDriver.java:419)
at org.openqa.selenium.By$ByXPath.findElement(By.java:353)
at org.openqa.selenium.remote.RemoteWebDriver.findElement(RemoteWebDriver.java:309)
There are 3 iframes on the page; the elements I am accessing are in the first one.
I would try to invoke WebDriverWait on the iframe before switching to it.
// wait for iframe to exist, then switch to it
WebDriverWait wait = new WebDriverWait(util.driver, 10);
wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt(By.xpath("//iframe[contains(@name, 'vfFrameId')]")));
// wait for element to exist
element = wait.until(ExpectedConditions.presenceOfElementLocated(By.xpath("//label[text()='Reason for Escalation']/following-sibling::div/input")));

Selenium close_fds

Hello, I am trying to run the Python script below:
from selenium import webdriver
driver=webdriver.Chrome('C:\\Users\\Julian\\Downloads\\chromedriver_win32\\chromedriver.exe')
driver.get('https://www.youtube.com')
Upon running it, I am taken to a file named service.py and pointed to the following:
:Exceptions:
 - WebDriverException : Raised either when it can't start the service
   or when it can't connect to the service
"""
try:
    cmd = [self.path]
    cmd.extend(self.command_line_args())
    self.process = subprocess.Popen(cmd, env=self.env,d
                                    close_fds=(platform.system() != 'Windows'),
                                    stdout=self.log_file,
                                    stderr=self.log_file,
                                    stdin=PIPE)
The particular line highlighted is:
close_fds=(platform.system() != 'Windows'),
Can someone point me in the direction of what I am supposed to do or change? Any help is greatly appreciated!
I just had to remove the d. Must have typed it by accident.
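For reference, the corrected call in service.py should read (the snippet above, minus the stray d):
self.process = subprocess.Popen(cmd, env=self.env,
                                close_fds=(platform.system() != 'Windows'),
                                stdout=self.log_file,
                                stderr=self.log_file,
                                stdin=PIPE)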

TypeError: Failed to execute 'createNSResolver' on 'Document': parameter 1 is not of type 'Node'

I'm using Cucumber with Watir WebDriver and the Chrome browser.
When I execute my tests, sometimes there is an error like this:
"Selenium::WebDriver::Error::InvalidSelectorError: invalid selector: Unable to locate an element with the xpath expression //a[contains(., 'Joao Moreira')] because of the following error:
TypeError: Failed to execute 'createNSResolver' on 'Document': parameter 1 is not of type 'Node'.
(Session info: chrome=43.0.2357.81)
(Driver info: chromedriver=2.9.248315,platform=Windows NT 6.3 x86_64)"
I tried to get an answer through Google but with no success.
Pretty sure this is this issue: https://code.google.com/p/selenium/issues/detail?id=8600
It is fixed as of Selenium 2.46.0; I haven't seen the error since upgrading.
Add a line to handle the exception thrown; it seems the error halts the test. This has nothing to do with the locator or the iframe. Try wrapping your method in a rescue clause:
begin
  {your method}
rescue Selenium::WebDriver::Error::InvalidSelectorError
end