Why is this BeautifulSoup result []?

I want to get the text in the span. I have checked it, but I don't see the problem:
from bs4 import BeautifulSoup
import urllib.request
import socket
searchurl = "http://suchen.mobile.de/auto/search.html?scopeId=C&isSearchRequest=true&sortOption.sortBy=price.consumerGrossEuro"
f = urllib.request.urlopen(searchurl)
html = f.read()
soup = BeautifulSoup(html)
print(soup.findAll('span',attrs={'class': 'b'}))
The result was [], why?

Looking at the site in question, your search turns up an empty list because there are no spans with a class value of b. BeautifulSoup does not apply CSS the way a browser would; it only matches class attributes literally present in the HTML. In addition, your urllib request looks incorrect. Looking at the site, I think you want to grab all the spans with a class of label, though it's hard to tell when the site isn't in my native language. Here's how you would go about it:
from bs4 import BeautifulSoup
import urllib2 # Note urllib2
searchurl = "http://suchen.mobile.de/auto/search.html?scopeId=C&isSearchRequest=true&sortOption.sortBy=price.consumerGrossEuro"
f = urllib2.urlopen(searchurl) # Note no need for request
html = f.read()
soup = BeautifulSoup(html)
for s in soup.findAll('span', attrs={"class": "label"}):
    print s.text
This gives for the url listed:
Farbe:
Kraftstoffverbr. komb.:
Kraftstoffverbr. innerorts:
Kraftstoffverbr. außerorts:
CO²-Emissionen komb.:
Zugr.-lgd. Treibstoffart:
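Since the question itself is on Python 3 (urllib.request), here is a minimal Python 3 sketch of the same point, using an inline snippet to stand in for the page, since the live markup may have changed since this answer was written:

```python
from bs4 import BeautifulSoup

# Inline stand-in for the search page's markup: the labels carry
# class="label", not class="b", which is why searching for class "b"
# returned an empty list.
html = """
<span class="label">Farbe:</span><span class="value">Schwarz</span>
<span class="label">Kraftstoffverbr. komb.:</span><span class="value">5,1 l/100km</span>
"""
soup = BeautifulSoup(html, "html.parser")

for s in soup.find_all("span", attrs={"class": "label"}):
    print(s.text)
```

Running this prints `Farbe:` and `Kraftstoffverbr. komb.:`, while `soup.find_all("span", attrs={"class": "b"})` still returns `[]`.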

How to scrape a specific itemprop from a web page with XPath and Selenium?

I'm trying to use Python (Selenium, BeautifulSoup, and XPath) to scrape a span with an itemprop equal to "description", but every time I run the code, the "try" fails and it prints out the "except" error.
I do see the element in the code when I inspect elements on the page.
Line that isn't getting the desired response:
quick_overview = soup.find_element_by_xpath("//span[contains(@itemprop, 'description')]")
Personally, I think you should just keep working with selenium:
quick_overview = driver.find_element_by_xpath("//span[contains(@itemprop, 'description')]")
and add .text to the end to get the text content.
To parse this with soup you would likely need a wait condition from selenium first anyway, so there is no real point. However, should you decide to integrate bs4, you need to change your function to work with the actual HTML from driver.page_source, parse that, and switch to select_one to grab your item. Then make sure you return from the function and assign the result to a new soup object.
from bs4 import BeautifulSoup
from selenium import webdriver  # links w/ browser and carries out actions
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

PATH = r"C:\Program Files (x86)\chromedriver_win32\chromedriver.exe"  # raw string so the backslashes aren't escapes
baseurl = "http://www.waytekwire.com"
skus_to_find_test = ['WL16-8', 'WG18-12']
driver = webdriver.Chrome(PATH)
driver.get(baseurl)

def use_driver_current_html(driver):
    soup = BeautifulSoup(driver.page_source, 'lxml')
    return soup

for sku in skus_to_find_test:  # iterate the list itself, not the characters of the first SKU
    search_bar = driver.find_element_by_id('themeSearchText')
    search_bar.send_keys(sku)
    search_bar.send_keys(Keys.RETURN)
    try:
        # f-string so the current sku is actually substituted into the XPath
        product_url = driver.find_elements_by_xpath(f"//div[contains(@class, 'itemDescription')]//h3//a[contains(text(), '{sku}')]")[0]
        product_url.click()
        WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//span[contains(@itemprop, 'description')]")))
        soup = use_driver_current_html(driver)
        try:
            quick_overview = soup.select_one("span[itemprop=description]").text
            print(quick_overview)
        except:
            print('No Quick Overview Found.')
    except:
        print('Product not found.')

How do I avoid the 'NavigableString' error with BeautifulSoup and get to the text of href?

This is what I have:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
url = "http://python.beispiel.programmierenlernen.io/index.php"
doc = requests.get(url).content
soup = BeautifulSoup(doc, "html.parser")
for i in soup.find("div", {"class": "navigation"}):
    print(i)
Currently the print output of "i" is:
<a class="btn btn-primary" href="index.php?page=2">Zur nächsten Seite!</a>
I want to print out the href link "index.php?page=2".
When I try to use BeautifulSoups "find", "select" or "attrs" method on "i" I get an error. For instance with
print(i.attrs["href"])
I get:
AttributeError: 'NavigableString' object has no attribute 'attrs'
How do I avoid the 'NavigableString' error with BeautifulSoup and get the text of href?
The issue seems to be for i in soup.find. If you're looking for only one element, there's no need to iterate that element, and if you're looking for multiple elements, find_all instead of find would probably match the intent.
More concretely, here are the two approaches. Beyond what's been mentioned above, note that i is a div that contains the desired a as a child, so we need an extra step to reach it (this could be more direct with an xpath).
import requests
from bs4 import BeautifulSoup
url = "http://python.beispiel.programmierenlernen.io/index.php"
doc = requests.get(url).content
soup = BeautifulSoup(doc, "html.parser")
for i in soup.find_all("div", {"class": "navigation"}):
    print(i.find("a", href=True)["href"])

print(soup.find("div", {"class": "navigation"})
      .find("a", href=True)["href"])
Output:
index.php?page=2
index.php?page=2
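As an aside on the question's title: the NavigableString comes from iterating the div's children, which include the whitespace text nodes between tags, not just the anchor. If you do want to iterate, a type check filters them out. A self-contained sketch with inline HTML:

```python
from bs4 import BeautifulSoup
from bs4.element import Tag

html = ('<div class="navigation">\n'
        '  <a class="btn" href="index.php?page=2">Zur nächsten Seite!</a>\n'
        '</div>')
soup = BeautifulSoup(html, "html.parser")

# Iterating the div yields NavigableStrings (the "\n  " text nodes) as
# well as the <a> Tag; only Tags have .attrs, hence the AttributeError.
for child in soup.find("div", {"class": "navigation"}):
    if isinstance(child, Tag):
        print(child["href"])  # prints index.php?page=2
```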

Web-scraping code using selenium and beautifulsoup not working properly

I wrote this Python code to scrape the Sydney Morning Herald newspaper. It first clicks all the "Show more" buttons and then scrapes all the articles. The Selenium part works correctly, but I think there is a problem in the scraping part: after scraping the desired fields (date, title, and content) for a few articles (5-6), it only gives the date and title, with no content.
import time
import csv
import requests
from bs4 import BeautifulSoup
from bs4.element import Tag
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
base = 'https://www.smh.com.au'
browser = webdriver.Safari(executable_path='/usr/bin/safaridriver')
wait = WebDriverWait(browser, 10)
browser.get('https://www.smh.com.au/search?text=cybersecurity')
while True:
    try:
        time.sleep(2)
        show_more = wait.until(EC.element_to_be_clickable((By.CLASS_NAME, '_3we9i')))
        show_more.click()
    except Exception as e:
        print(e)
        break

soup = BeautifulSoup(browser.page_source, 'lxml')
anchors = soup.find_all('a', {'tabindex': '-1'})

for anchor in anchors:
    browser.get(base + anchor['href'])
    sub_soup = BeautifulSoup(browser.page_source, 'html.parser')
    dateTag = sub_soup.find('time', {'class': '_2_zR-'})
    titleTag = sub_soup.find('h1', {'itemprop': 'headline'})
    contentTag = sub_soup.find_all('div', {'class': '_1665V undefined'})
    date = None
    title = None
    content = None
    if isinstance(dateTag, Tag):
        date = dateTag.get_text().strip()
    if isinstance(titleTag, Tag):
        title = titleTag.get_text().strip()
    if isinstance(contentTag, list):
        content = []
        for c in contentTag:
            content.append(c.get_text().strip())
        content = ' '.join(content)
    print(f'{date}\n {title}\n {content}\n')
    time.sleep(3)

browser.close()
Why does this code stop giving the content after a few articles? I don't understand it.
Thank you.
It's because you've hit the site's paywall: "You've reached your monthly free access limit" is the message displayed on the webpage after a few pages, so there is no article content left to scrape.
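One way to handle this in the loop is to stop as soon as the paywall text shows up in the page source. This is only a sketch; the exact wording of SMH's banner is an assumption taken from the message quoted above:

```python
# Assumed paywall message (from the banner quoted above); adjust it if
# the site words the limit notice differently.
LIMIT_TEXT = "monthly free access limit"

def hit_paywall(page_source: str) -> bool:
    """Return True once the paywall banner appears in the page HTML."""
    return LIMIT_TEXT in page_source.lower()

# In the article loop, bail out before parsing an empty article:
#   if hit_paywall(browser.page_source):
#       break
```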

scraping an iframe with beautifulsoup and python

I am trying to scrape the following page:
https://www.dukascopy.com/swiss/english/marketwatch/sentiment/
More exactly, the numbers in the chart; for example, the number 74,19 % in the green bar next to the AUD/USD text. I have inspected the elements and found that the tag for this number is span, but the following code does not return this or any other number in the chart:
import requests
from bs4 import BeautifulSoup
r=requests.get('https://www.dukascopy.com/swiss/english/marketwatch/sentiment/')
soup = BeautifulSoup(r.content, "html.parser")
data = soup('span')
print(data)
If you incorporate selenium with BeautifulSoup, you get all of selenium's abilities for scraping iframes, which a plain requests call never loads (the chart lives inside a separately loaded iframe document). Try this:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys

# bond_iframe should hold the URL from the iframe's src attribute,
# taken from the page's <iframe> tag
browser = webdriver.Firefox()
browser.get(bond_iframe)
bond_source = browser.page_source
browser.quit()
soup = BeautifulSoup(bond_source, "html.parser")
for div in soup.findAll('div', attrs={'class': 'qs-note-panel'}):
    print(div)
The for loop iterates over whichever div tags you are searching for; change the class to match the element you want.

Passing results from mechanize to BeautifulSoup

I get an error when I try to mix mechanize and BeautifulSoup in the following code:
from BeautifulSoup import BeautifulSoup
import urllib2
import re
import mechanize
br=mechanize.Browser()
br.set_handle_robots(True)
br.open('http://tel.search.ch/')
br.select_form(nr=0)
br.form["was"] = "siemens"
br.submit()
content = br.response
soup = BeautifulSoup(content)
for a in soup.findAll('a', href=True):
    if re.findall('title', a['href']):
        print "URL:", a['href']
br.close()
The code from the beginning up to br.submit() works fine with mechanize, and so does the for loop with BeautifulSoup. But I don't know how to pass the result of br.submit() into BeautifulSoup. The two lines:
content = br.response
soup = BeautifulSoup(content)
are apparently wrong. I get an error for soup = BeautifulSoup(content):
TypeError: expected string or buffer
Can anyone help?
Try changing
content = br.response
to
content = br.response().read()
This way content holds the HTML of the response, which can be passed to BeautifulSoup.
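The underlying mistake is that br.response without parentheses is the bound method object itself, not HTML, which is why bs4 raises TypeError: expected string or buffer. A toy stand-in shows the difference in isolation (FakeBrowser is hypothetical, purely to illustrate the bug without mechanize):

```python
class FakeBrowser:
    """Toy stand-in for mechanize.Browser, only to illustrate the bug."""
    def response(self):
        return self  # mechanize returns a file-like response object

    def read(self):
        return "<html><a href='/title/1'>x</a></html>"

br = FakeBrowser()
content_wrong = br.response           # a bound method object, not HTML
content_right = br.response().read()  # call it, then read the body

print(callable(content_wrong))   # → True: it's a method, not markup
print(content_right[:6])         # → <html>
```

Passing content_wrong to a parser fails for the same reason as in the question; content_right is the string BeautifulSoup expects.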