Selenium error 'Message: no such element: Unable to locate element' - selenium

I get this error when trying to get the price of the product:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element
But the thing is that I am searching by the XPath copied from the Inspect HTML panel. So how come it doesn't find it?
Here is the code:
main_page = 'https://www.takealot.com/500-classic-family-computer-tv-game-ejc/PLID90448265'
PATH = r'C:\Program Files (x86)\chromedriver.exe'
driver = webdriver.Chrome(PATH)
driver.get(main_page)
time.sleep(10)
active_price = driver.find_element(By.XPATH, '//*[@id="shopfront-app"]/div[3]/div[1]/div[2]/aside/div[1]/div[1]/div[1]/span').text
print(active_price)
I found another way to get the price, but I am still interested in why Selenium can't find it by XPath:
price = driver.find_element(By.CLASS_NAME, 'buybox-module_price_2YUFa').text
active_price = ''
after_discount = ''
count = 0
for char in price:
    if char == 'R':
        count += 1
    if count == 2:
        after_discount += char
    else:
        active_price += char
print(active_price)
print(after_discount)
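As an aside, the character-counting loop above can be replaced by a regex, assuming the combined buybox text concatenates the two prices (the exact string below is a hypothetical example, not taken from the live page):

```python
import re

# Assumed shape of the combined price text: active price followed by
# the struck-through original price. The exact string is hypothetical.
price = "R 345R 545"

# Grab each "R <digits>" chunk instead of counting 'R' characters.
parts = re.findall(r"R\s*\d+", price)
active_price = parts[0]
after_discount = parts[1] if len(parts) > 1 else ""

print(active_price)    # R 345
print(after_discount)  # R 545
```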

To extract the text R 345, ideally you need to induce WebDriverWait for visibility_of_element_located(), and you can use either of the following locator strategies:
Using CSS_SELECTOR and text attribute:
driver.get("https://www.takealot.com/500-classic-family-computer-tv-game-ejc/PLID90448265")
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "span[data-ref='buybox-price-main']"))).text)
Using XPATH and get_attribute("innerHTML"):
driver.get("https://www.takealot.com/500-classic-family-computer-tv-game-ejc/PLID90448265")
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//span[@data-ref='buybox-price-main']"))).get_attribute("innerHTML"))
Console Output:
R 345
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
You can find a relevant discussion in How to retrieve the text of a WebElement using Selenium - Python
References
Link to useful documentation:
get_attribute() method: gets the given attribute or property of the element.
text attribute: returns the text of the element.
Difference between text and innerHTML using Selenium

You are trying to get the price from buybox-module_price_2YUFa, but the price is actually inside the child span. Use the following XPath to get it:
//span[@data-ref='buybox-price-main']
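The parent/child distinction can be illustrated offline with a minimal stand-in for the buybox markup, parsed with the standard library rather than a live browser (the snippet is an assumed simplification of the page's structure):

```python
import xml.etree.ElementTree as ET

# Assumed, simplified buybox structure: the wrapper div carries the
# class, but the visible price text sits in the child span.
snippet = """
<div class="buybox-module_price_2YUFa">
  <span data-ref="buybox-price-main">R 345</span>
</div>
"""

root = ET.fromstring(snippet)
span = root.find(".//span[@data-ref='buybox-price-main']")
print(span.text)  # R 345
```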


Selenium: 'function' object has no attribute 'click'

I am running into this issue when trying to click a button using selenium. The button html reads as below:
<button class="Component-button-0-2-65 Component-button-d1-0-2-68">and 5 more</button>
My code is here:
button = EC.element_to_be_clickable((By.XPATH, './/button[contains(@class,"Component-button-d")]'))
if button:
    print("TRUE")
button.click()
My output is:
TRUE
Traceback (most recent call last):
File "", line 47, in <module>
button.click()
AttributeError: 'function' object has no attribute 'click'
I am stumped as to why 'button' element is found by selenium (print(True) statement is executed) but then the click() method returns an attribute error.
This is the page I am scraping data from: https://religiondatabase.org/browse/regions
I am able to extract all the information I need on the page, so the code leading up to the click is working.
I was expecting the item to be clickable. I'm not sure how to troubleshoot the attribute error ('function' object has no attribute 'click'), because if I paste the XPath into the webpage's inspector, it highlights the correct element.
Your code is incorrect. You can find below an example of how you can use Expected Conditions (along with WebDriverWait):
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
## [...define your driver here, other imports etc]
wait = WebDriverWait(driver, 25)
##[open a url, etc]
element = wait.until(EC.presence_of_element_located((By.XPATH, '//div[#class="input-group"]')))
element.click()
Selenium documentation can be found here.
The element search is always done through the driver object, which is the WebDriver instance in its core form:
driver.find_element(By.XPATH, "element_xpath")
But when you apply WebDriverWait, the wait configurations are applied on the driver object.
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "element_xpath")))
So even during WebDriverWait the driver object needs to be present along with the expected_conditions and the locator strategy.
This use case
This line of code:
button = EC.element_to_be_clickable((By.XPATH, './/button[contains(@class,"Component-button-d")]'))
assigns button a condition callable, not a WebElement. On probing it is truthy, so TRUE is printed, but the click can't be performed because button is not of WebElement type.
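The pitfall can be reproduced without Selenium at all: a plain function object is always truthy, yet it has no click attribute (the names below are illustrative stand-ins, not Selenium API):

```python
# Stand-in for what EC.element_to_be_clickable(...) hands back: a
# condition callable that still needs to be passed to WebDriverWait.
def some_condition(driver):
    return driver

button = some_condition            # a function object, not a WebElement

print(bool(button))                # True  -> the `if button:` check passes
print(hasattr(button, "click"))    # False -> button.click() raises AttributeError
```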
Solution
Given the HTML:
<button class="Component-button-0-2-65 Component-button-d1-0-2-68">and 5 more</button>
To click on the element you need to induce WebDriverWait for the element_to_be_clickable() and you can use either of the following locator strategies:
Using XPATH with classname:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(@class,'Component-button-d')]"))).click()
Using XPATH with classname and innerText:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[contains(@class,'Component-button-d') and text()='and 5 more']"))).click()
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

How To Scrape This Field

I want the following field, the "514" id. (Id is located in the first row of this webpage)
I tried using xpath with class name and then get attribute, but that prints blank.
Here is a screenshot of the tag in question
import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.abstractsonline.com/pp8/#!/10517/sessions/#timeSlot=Apr08/1')
page_source = driver.page_source
element = driver.find_elements_by_xpath('.//li[@class="result clearfix"]')
for el in element:
    id = el.find_element_by_class_name('name').get_attribute("data-id")
    print(id)
You can do the lookup with a single find:
by CSS - .result.clearfix .name
by XPath - .//*[@class='result clearfix']//*[@class='name']
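The nesting those selectors rely on can be sketched against a static snippet (the markup below is an assumed reduction of one result row, parsed with the standard library instead of a browser):

```python
import xml.etree.ElementTree as ET

# Assumed reduction of one result row from the page.
snippet = """
<ul>
  <li class="result clearfix">
    <h1 class="name" data-id="514">Some session title</h1>
  </li>
</ul>
"""

root = ET.fromstring(snippet)
for li in root.findall(".//li[@class='result clearfix']"):
    name = li.find(".//*[@class='name']")
    print(name.get("data-id"))  # 514
```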

Selenium can't find by element

I am trying to scroll down by element class. I need to scroll tweet by tweet on twitter.com.
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get('http://twitter.com/elonmusk')
sleep(5)
while True:
    html = driver.find_element_by_class_name('css-901oao r-1fmj7o5 r-1qd0xha r-a023e6 r-16dba41 r-rjixqe r-bcqeeo r-bnwqim r-qvutc0')
    html.send_keys(Keys.END)
I have error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .css-901oao r-1fmj7o5 r-1qd0xha r-a023e6 r-16dba41 r-rjixqe r-bcqeeo r-bnwqim r-qvutc0
That class name has spaces in it, and find_element_by_class_name does not work with spaces. Try the below XPath instead:
//div[@data-testid='tweet']
and you can write like this in code :
counter = 1
while True:
    html = driver.find_element_by_xpath(f"(//div[@data-testid='tweet'])[{counter}]")
    counter = counter + 1
    html.send_keys(Keys.END)
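For clarity, the locators that loop generates are plain indexed XPath strings; the construction can be checked without a browser:

```python
# Each iteration targets the next tweet by its 1-based XPath position.
for counter in range(1, 4):
    xpath = f"(//div[@data-testid='tweet'])[{counter}]"
    print(xpath)
# (//div[@data-testid='tweet'])[1]
# (//div[@data-testid='tweet'])[2]
# (//div[@data-testid='tweet'])[3]
```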

login to page using selenium

Attached below is the login page screenshot. I am unable to figure out how to find class name of username or what can be the id for putting username. Basically i am trying to login here.
I have tried
driver.find_elements_by_xpath("//*[contains(text(), 'Username')]").sendKeys("thakneh")
but the error i am getting is:
'list' object has no attribute 'sendKeys'.
Also, driver.find_elements_by_xpath("//*[contains(text(), 'Username')]") part returns empty list.
'list' object has no attribute 'sendKeys'
This is because you are using find_elements, which returns a list in the Selenium Python bindings.
You cannot perform send_keys on a list in Selenium Python; it has to be a single web element.
Try find_element instead (this will return a web element, not a list of web elements), and note the Python spelling send_keys:
driver.find_element_by_xpath("//*[contains(text(), 'Username')]").send_keys("thakneh")
Update 1 :
driver.get("Your URL")
driver.maximize_window()
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[text()='Username']/../following-sibling::input"))).send_keys('username')
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[text()='Password']/../following-sibling::input"))).send_keys('password')
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[text()='Sign In']/.."))).click()
Imports :
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

Extract the breadcrumbs of a website using selenium

I need to extract the breadcrumbs of this website: https://www.woolworths.com.au/Shop/Browse/drinks/cordials-juices-iced-teas/iced-teas
I tried to inspect the element and copy its XPath, but it doesn't extract anything:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('https://www.woolworths.com.au/Shop/Browse/drinks/cordials-juices-iced-teas/iced-teas')
driver.find_elements_by_xpath('//*[@id="center-panel"]/div/wow-tile-list-with-content/ng-transclude/wow-browse-tile-list/wow-tile-list/div/div[1]/div[1]/wow-breadcrumbs/div/ul/li[4]/span/span')
driver.find_element_by_css_selector('#center-panel > div > wow-tile-list-with-content > ng-transclude > wow-browse-tile-list > wow-tile-list > div > div.tileList > div.tileList-headerContainer > wow-breadcrumbs > div > ul > li:nth-child(4) > span > span')
How can I proceed?
To print the breadcrumbs of the website https://www.woolworths.com.au/Shop/Browse/drinks/cordials-juices-iced-teas/iced-teas, you have to induce WebDriverWait for the desired visibility_of_element_located(), and you can use either of the following locator strategies:
Using CSS_SELECTOR and get_attribute() method:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "ul.breadcrumbs-linkList li:nth-child(4) span span"))).get_attribute("innerHTML"))
Using XPATH and text property:
print(WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//ul[@class='breadcrumbs-linkList']//following-sibling::li[4]//span//span"))).text)
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Outro
As per the documentation:
get_attribute() method: gets the given attribute or property of the element.
text attribute: returns the text of the element.
Difference between text and innerHTML using Selenium
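The nth-item lookup behind both locators can be checked against a static stand-in for the breadcrumb list (the markup below is assumed; only the nesting matters here):

```python
import xml.etree.ElementTree as ET

# Assumed breadcrumb structure; only the li nesting and order matter.
snippet = """
<ul class="breadcrumbs-linkList">
  <li><span><span>Home</span></span></li>
  <li><span><span>Drinks</span></span></li>
  <li><span><span>Cordials Juices Iced Teas</span></span></li>
  <li><span><span>Iced Teas</span></span></li>
</ul>
"""

root = ET.fromstring(snippet)
# li[4] is a 1-based positional predicate: the fourth breadcrumb item.
fourth = root.find("li[4]/span/span")
print(fourth.text)  # Iced Teas
```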
The page you are trying to scrape is written in Angular, meaning that most of the DOM elements are loaded dynamically by JavaScript/AJAX code and are not yet present when driver.get() returns.
You should use wait-until functions to locate such elements.
Here is the working example using the XPATH you provided:
driver.get('https://www.woolworths.com.au/Shop/Browse/drinks/cordials-juices-iced-teas/iced-teas')
try:
    element = WebDriverWait(driver, 1).until(
        EC.presence_of_element_located((By.XPATH, '//*[@id="center-panel"]/div/wow-tile-list-with-content/ng-transclude/wow-browse-tile-list/wow-tile-list/div/div[1]/div[1]/wow-breadcrumbs/div/ul/li[4]/span/span'))
    )
    print(element.text)  # this outputs Iced Teas
except TimeoutException:
    print("Timeout")
The one below works for my validation:
//*[span='first text' and span='Search results for "second text"']