BeautifulSoup: AttributeError: 'str' object has no attribute 'find_element_by_xpath' - beautifulsoup

I get an error when I run this function. I cannot figure out what I did wrong. I even tried removing ".text" and I still get the same error.
def get_detail_data(bs):
    title = bs.find_element_by_xpath('//*[@id="itemTitle"]').text
    print(title)
PyCharm error:
title = bs.find_element_by_xpath('//*[@id="itemTitle"]').text
AttributeError: 'str' object has no attribute 'find_element_by_xpath'
Entire code: https://pastebin.com/rsTmDgBD
Thanks.

You have imported Selenium but haven't used it. find_element_by_xpath is not a BeautifulSoup method but a Selenium one. You probably need to start with something like:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get(url)
title = driver.find_element_by_xpath('//*[@id="itemTitle"]').text
print(title)
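For completeness: if bs had actually been a BeautifulSoup object built from the page HTML (the traceback shows it is a plain string here), the title could be read with BeautifulSoup's own API without switching to Selenium. A minimal sketch with stand-in markup:

```python
from bs4 import BeautifulSoup

# Stand-in markup for the real page source, which is fetched elsewhere.
html = '<h1 id="itemTitle">Vintage Camera</h1>'
bs = BeautifulSoup(html, 'html.parser')

def get_detail_data(bs):
    # find(id=...) is BeautifulSoup's own way to locate an element by id
    title = bs.find(id='itemTitle').text
    print(title)

get_detail_data(bs)  # Vintage Camera
```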

Related

'NoneType' object is not callable error while attempting to click on an element returned with BeautifulSoup using Selenium

I want to click, using Selenium's click() method, on an element targeted with BeautifulSoup, but this error shows up:
'NoneType' object is not callable
Example of my code:
from selenium import webdriver
from bs4 import BeautifulSoup
tabs = deals_tabs.find_all('div',{'class':'FilterSort__filter___36MvO'})
tabs.pop(0)
for tab in tabs:
    category = tab.text
    tab.click()
find_all()
find_all() finds all the matches and returns them as a list of BeautifulSoup Tag objects, with each Tag representing one match.
So tabs is a list of Tag objects, and so is each tab.
But click() is a Selenium WebElement method, not a Tag method. On a Tag, the lookup tab.click searches for a child tag named click and returns None, so calling tab.click() raises 'NoneType' object is not callable.
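This failure can be reproduced without a browser. A sketch with made-up markup, showing that an unknown attribute on a Tag comes back as None:

```python
from bs4 import BeautifulSoup

html = '<div class="FilterSort__filter___36MvO">Electronics</div>'
soup = BeautifulSoup(html, 'html.parser')
tab = soup.find('div')

# Attribute access on a Tag looks for a child tag named "click";
# there is none, so the result is None rather than a bound method.
print(tab.click)   # None
print(tab.text)    # Electronics
```

Calling that None, as tab.click() does, is exactly what produces 'NoneType' object is not callable.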

Selenium, groovy, can't perform any click(), sendKeys() or similar functions

I am not sure what I'm missing from my code. But I am trying to run a basic Groovy script, where I'm finding an element from the page, and clicking on it.
My code works up to the point where I add .click() or .sendKeys().
A few things to note are I'm running selenium on ReadyAPI. I have followed all the instructions from their help page to make sure I have the right drivers in the right folders.
My code is as follows:
import java.util.ArrayList;
import org.openqa.selenium.*
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.WebElement
import org.openqa.selenium.chrome.ChromeDriver
def PATH_TO_CHROMEDRIVER = context.expand( '${PATH_TO_CHROMEDRIVER}' );
System.setProperty("webdriver.chrome.driver", PATH_TO_CHROMEDRIVER);
def WebDriver driver = new ChromeDriver();
driver.get("https://www.rakuten.com/");
WebElement loginButtonId = driver.findElementsByXPath("//*[@name='email_address']");
loginButtonId.click();
driver.close();
return
The error message I get is the following:
org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object '[]' with class 'java.util.ArrayList' to class 'org.openqa.selenium.WebElement' due to: groovy.lang.GroovyRuntimeException: Could not find matching constructor for: org.openqa.selenium.WebElement() error at line: 12
I appreciate it if anyone could help here.
Thanks,
Your mistake is in this line:
WebElement loginButtonId = driver.findElementsByXPath("//*[@name='email_address']");
findElements... returns a list and should be used when you need to find several elements.
Since you want a single element, use findElement... instead:
WebElement loginButtonId = driver.findElementByXPath("//*[@name='email_address']");
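The same singular/plural distinction exists on the Python side in both Selenium and BeautifulSoup; BeautifulSoup makes it easy to see without a browser, since the plural form always returns a list even when only one element matches:

```python
from bs4 import BeautifulSoup

html = '<input name="email_address" type="text"/>'
soup = BeautifulSoup(html, 'html.parser')

one = soup.find('input')       # a single element, or None if nothing matches
many = soup.find_all('input')  # always a list-like ResultSet, even for one match

print(type(one).__name__)      # Tag
print(isinstance(many, list))  # True
```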

How to get the "display: none" html from selenium

I'm trying to get some content using Selenium, but I can't get the content of the "display: none" part. I tried reading the innerHTML attribute, but it still doesn't work as expected.
Hope if you could share some knowledge.
[Here is the html][1]
[1]: https://i.stack.imgur.com/LdDL4.png
# -*- coding: utf-8 -*-
from selenium import webdriver
import time
from bs4 import BeautifulSoup
import re
from pyvirtualdisplay import Display
from lxml import etree
driver = webdriver.PhantomJS()
driver.get('http://flights.ctrip.com/')
driver.maximize_window()
time.sleep(1)
element_time = driver.find_element_by_id('DepartDate1TextBox')
element_time.clear()
element_time.send_keys(u'2017-10-22')
element_arr = driver.find_element_by_id('ArriveCity1TextBox')
element_arr.clear()
element_arr.send_keys(u'北京')
element_depart = driver.find_element_by_id('DepartCity1TextBox')
element_depart.clear()
element_depart.send_keys(u'南京')
driver.find_element_by_id('search_btn').click()
time.sleep(1)
print(driver.current_url)
driver.find_element_by_id('btnReSearch').click()
print(driver.current_url)
overlay = driver.find_element_by_id("mask_loading")
print(driver.execute_script("return arguments[0].getAttribute('style')", overlay))
driver.quit()
To retrieve the "display" value you can use getCssValue(), since display lives in the style rather than in a standalone HTML attribute:
String my_display = driver.findElement(By.id("mask_loading")).getCssValue("display");
System.out.println("Display property is set to : " + my_display);
If the element's style attribute has the value display:none, it is a hidden element. Selenium basically doesn't interact with hidden elements, so you have to go through Selenium's JavascriptExecutor to interact with it. You can get the style value as shown below.
WebElement overlay=driver.findElement(By.id("mask_loading"));
JavascriptExecutor je = (JavascriptExecutor )driver;
String style = (String) je.executeScript("return arguments[0].getAttribute('style');", overlay);
System.out.println("style value of the element is "+style);
It prints the value "z-index: 12;display: none;"
or if you want to get the innerHTML,
String innerHTML = (String) je.executeScript("return arguments[0].innerHTML;", overlay);
In Python,
overlay=driver.find_element_by_id("mask_loading")
style = driver.execute_script("return arguments[0].getAttribute('style')", overlay)
or
innerHTML=driver.execute_script("return arguments[0].innerHTML;", overlay)
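A hidden element is still present in the page source, so another option in the spirit of this thread is to hand driver.page_source to BeautifulSoup and read the attribute there. A sketch with an inline string standing in for the real page source:

```python
from bs4 import BeautifulSoup

# Stand-in for driver.page_source; the style value mirrors the one above.
page_source = '<div id="mask_loading" style="z-index: 12;display: none;">loading</div>'

soup = BeautifulSoup(page_source, 'html.parser')
overlay = soup.find(id='mask_loading')

print(overlay['style'])           # z-index: 12;display: none;
print(overlay.decode_contents())  # loading  (the element's inner HTML)
```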

How can i use a function in find_all() in BeautifulSoup

I'm using bs4 for my project. Now I get something like:
<tr flag='t'><td flag='f'></td></tr>
I already know I could use a function in find_all(), so I use:
def myrule(tag):
    return tag['flag'] == 'f' and tag.parent['flag'] == 't'
soup.find_all(myrule)
Then I get an error like:
KeyError: 'flag'
Can anyone help me with this? Why doesn't it work?
Thanks.
You are checking every tag in your soup object for an attribute named flag. If the tag currently being tested doesn't have that attribute, tag['flag'] raises a KeyError and the program stops.
You should verify that the tag has the attribute before checking its value. Like this:
from bs4 import BeautifulSoup
example = """<tr flag='t'><td flag='f'></td></tr>"""
soup = BeautifulSoup(example, "lxml")
def myrule(tag):
    return "flag" in tag.attrs and tag['flag'] == 'f' and tag.parent['flag'] == 't'
print(soup.find_all(myrule))
Outputs:
[<td flag="f"></td>]
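For this particular rule a function is not strictly required: find_all() also accepts an attrs dictionary, and the parent check can be applied afterwards. An equivalent sketch:

```python
from bs4 import BeautifulSoup

example = "<tr flag='t'><td flag='f'></td></tr>"
soup = BeautifulSoup(example, "html.parser")

# Match tags whose flag attribute is 'f', then keep only those
# whose parent carries flag='t'.
matches = [tag for tag in soup.find_all(attrs={"flag": "f"})
           if tag.parent.get("flag") == "t"]
print(matches)  # [<td flag="f"></td>]
```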

Find an element with xpath misses a required positional argument

I want to click on a radio-button in my html-file.
The following code:
from selenium.webdriver.remote import webdriver
element = webdriver.WebDriver.find_element_by_xpath("//input[@type='radio' and @name='AlarmMode']")
element.click()
gives me the error:
TypeError: find_element_by_xpath() missing 1 required positional argument: 'xpath'
Which argument is missing?
You need to first instantiate a WebDriver object and then call find_element_by_xpath() on it. Calling it on the class itself passes your XPath string as self, which is why the xpath argument is reported missing:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get(url)
element = driver.find_element_by_xpath("//input[@type='radio' and @name='AlarmMode']")
element.click()
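The underlying mechanics can be shown with a plain Python class, no Selenium involved: calling an instance method on the class itself consumes the first positional argument as self, so the real parameter is reported missing. The class below is purely illustrative:

```python
class FakeDriver:
    def find_element_by_xpath(self, xpath):
        return f"looking up {xpath}"

# Called on the class, the XPath string is swallowed as `self`,
# so Python complains that `xpath` is missing:
try:
    FakeDriver.find_element_by_xpath("//input[@type='radio']")
except TypeError as err:
    print(err)

# Called on an instance, the same call works:
print(FakeDriver().find_element_by_xpath("//input[@type='radio']"))
```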