Find specific data-availability class - beautifulsoup

I have the following webpage source:
<li class="available" data-availability="homeDelivery">
<i class="icon-tick"></i> FREE delivery
</li>
I want to print "Free delivery" to the screen if data-availability == homeDelivery.
I tried the code below, but I get no match.
result = soup.find_all("option", {"data-availability": 'homeDelivery'})
print(result)
Any ideas? Thank you!

You should be looking for the <li> tag, not option.
Try this:
from bs4 import BeautifulSoup
sample = """<li class="available" data-availability="homeDelivery">
<i class="icon-tick"></i> FREE delivery
</li>"""
result = BeautifulSoup(sample, "html.parser").find_all("li", {"data-availability": 'homeDelivery'})
print([i.getText(strip=True) for i in result])
Output:
['FREE delivery']
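If you want the fixed string "Free delivery" rather than the tag's own text, a minimal sketch (reusing the sample markup above) could simply check whether a match exists and print the message:
from bs4 import BeautifulSoup

sample = """<li class="available" data-availability="homeDelivery">
<i class="icon-tick"></i> FREE delivery
</li>"""

soup = BeautifulSoup(sample, "html.parser")
# find() returns the first matching <li>, or None when nothing matches
if soup.find("li", {"data-availability": "homeDelivery"}) is not None:
    print("Free delivery")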


How to extract value of all classes in beautiful Soup

I have a HTML file with a structure like this:
<p id="01">... EU legislation and the <em>monetary power</em> of the
<span class="institution" Wikidata="Q8901" name="European Central Bank">ECB</span>.</p>
<p id="02"><span class="person" Wikidata="Q563217">Guido Carli</span>, Governor of the
<span class="institution" Wikidata="Q806176">Bank of Italy</span> ...</p>
I need to have a Python dict like this:
{'institution': ['Q8901', 'Q806176'], 'person': ['Q563217']}
So I need to get the value of the class attribute of all span tags, along with their text. How can I do this with bs4?
Select your elements and iterate over the ResultSet, appending the values to your dict. To extract the value of an attribute, use .get(). Because class gives you a list, pick yours by index.
Example
from bs4 import BeautifulSoup
html = '''
<p id="01">... EU legislation and the <em>monetary power</em> of the
<span class="institution" Wikidata="Q8901" name="European Central Bank">ECB</span>.</p>
<p id="02"><span class="person" Wikidata="Q563217">Guido Carli</span>, Governor of the
<span class="institution" Wikidata="Q806176">Bank of Italy</span> ...</p>
'''
soup = BeautifulSoup(html, 'html.parser')
d = {
    'institution': [],
    'person': []
}
for e in soup.select('span[wikidata]'):
    d[e.get('class')[0]].append(e.get('wikidata'))

print(d)
Output
{'institution': ['Q8901', 'Q806176'], 'person': ['Q563217']}
This is the way I solved my problem, thanks to @HedgeHog.
from bs4 import BeautifulSoup
from collections import defaultdict
def capture_info(soup: BeautifulSoup) -> defaultdict:
    info = defaultdict(list)
    for i in soup.select('span[Wikidata]'):
        info[i.get('class')[0]].append(i.get('wikidata'))
    return info
html = '''
<p id="01">... EU legislation and the <em>monetary power</em> of the
<span class="institution" Wikidata="Q8901" name="European Central Bank">ECB</span>.</p>
<p id="02"><span class="person" Wikidata="Q563217">Guido Carli</span>, Governor of the
<span class="institution" Wikidata="Q806176">Bank of Italy</span> ...</p>
'''
soup = BeautifulSoup(html, 'html.parser')
info = capture_info(soup)
The output is:
{'institution': ['Q8901', 'Q806176'], 'person': ['Q563217']}
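Since the question also asks for the tag text along with the ID, a small variation on the approach above (a sketch that stores (ID, text) tuples instead of bare IDs) could look like this:
from bs4 import BeautifulSoup
from collections import defaultdict

html = '''
<p id="01">... EU legislation and the <em>monetary power</em> of the
<span class="institution" Wikidata="Q8901" name="European Central Bank">ECB</span>.</p>
<p id="02"><span class="person" Wikidata="Q563217">Guido Carli</span>, Governor of the
<span class="institution" Wikidata="Q806176">Bank of Italy</span> ...</p>
'''

soup = BeautifulSoup(html, 'html.parser')
info = defaultdict(list)
for span in soup.select('span[wikidata]'):
    # keep (Wikidata ID, tag text) pairs instead of the ID alone
    info[span.get('class')[0]].append((span.get('wikidata'), span.get_text(strip=True)))

print(dict(info))
# {'institution': [('Q8901', 'ECB'), ('Q806176', 'Bank of Italy')], 'person': [('Q563217', 'Guido Carli')]}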

Data scraping by selenium p tag

I searched a lot on the internet and couldn't find an example similar to the one below. I'm trying to pull text from a web page. The first p tag has no location line, while the second one does. When pulling data, I can only pull the contents of the p tag that has the location row; I cannot pull the contents of the other p tag. How can I pull the data inside both the first and the second p tag?
HTML codes of Page Source:
<div class=" col-md-8">
<p>
<i class='fa fa-home main-color'></i> ORHAN MAH.İBRAHİM CAD. NO:35
<br>
<i class='fa fa-phone main-color'></i>
<a class="gri" href="tel:0508-2920344">0508-2920344 </a>
<br />
<i class='fa fa-clock-o main-color'></i>
<span class="red">19.01.2022</span>
</p>
<p>
<i class='fa fa-home main-color'></i> HAZAN MAH.ÖKTEM CAD. NO:13/B
<br>
<i class='fa fa-phone main-color'></i>
<a class="gri" href="tel:0584 837 23 70">0584 837 23 70 </a>
<br>
<i class="fa fa-map-marker main-color"></i>
<a class="gri" href="https://www.google.com/maps?q=35.554433,25.887766" target="_blank">Haritada</a>
<br />
<i class='fa fa-clock-o main-color'></i>
<span class="red">20.01.2022</span>
</p>
</div>
Here is the selenium code I used to pull the data from the HTML source above:
item = browser.find_elements_by_class_name("col-md-10")
urls = browser.find_elements_by_xpath("//div[@class=' col-md-10']/p/a[2]")
for i in zip(item, urls):
    try:
        address = i[0].find_element_by_css_selector("p").text.split("\n")[:2]
    except:
        address = None
    try:
        phone = i[0].find_element_by_xpath("//a[@class='gri'][1]").text
    except:
        phone = None
    print(address)
    print(phone)
    try:
        url = i[1].get_attribute('href').replace("https://www.google.com/maps?q=","")
    except:
        url = None
    try:
        date = i[0].find_element_by_xpath("//span[@class='red'][1]").text
    except:
        date = None
    print(url)
    print(date)
Use the XPath //div[@class=' col-md-8']/p. This will return both p tags.
Then you can perform string operations as per your requirement and process each p tag's data in a for loop.
The first p tag block has no location section; the second p tag block does. For the first p tag I want to print None instead of the location. When I try to pull with zip_longest, the location still fails to pull.
#1.p tag block
ORHAN MAH.İBRAHİM CAD. NO:35
0508-2920344
19.01.2022
#2.p tag block
HAZAN MAH.ÖKTEM CAD. NO:13/B
0584 837 23 70
Haritada
20.01.2022
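Following the suggestion above, a sketch of the loop could look like the one below. It keeps the question's Selenium 3 style methods and the class name from the posted HTML (both are assumptions; adjust the selectors to the real page) and treats the map link as optional, so missing fields come out as None:
blocks = browser.find_elements_by_xpath("//div[contains(@class,'col-md-8')]/p")
for block in blocks:
    # split the visible text of this <p> into non-empty lines
    lines = [line.strip() for line in block.text.split("\n") if line.strip()]
    address = lines[0] if len(lines) > 0 else None
    phone = lines[1] if len(lines) > 1 else None
    date = lines[-1] if lines else None
    # the map link only exists in some blocks, so search inside this <p> only
    links = block.find_elements_by_xpath(".//a[starts-with(@href,'https://www.google.com/maps')]")
    url = links[0].get_attribute("href").replace("https://www.google.com/maps?q=", "") if links else None
    print(address, phone, url, date)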

How can I get a value from an attribute inside a tag

I have a soup object like:
<a class="love-action js-add-to-favorites" data-id="415953" data-price="715.00" href="#">
</a>
I did
soup = BeautifulSoup(src, 'lxml')  # pass the page source into the soup
price = soup.find(class_='col-5 col-sm-4 col-lg-7 mob-position detail-top-actions').find('a', class_='love-action js-add-to-favorites')
print(price)
I'd like to get only: 715.00
How can I fix this?
You can access the attributes of a tag by treating it like a dictionary, so simply get the value of the data-price attribute with:
price['data-price']
Example based on your question
soup = BeautifulSoup(src, 'lxml')  # pass the page source into the soup
price = soup.find(class_='col-5 col-sm-4 col-lg-7 mob-position detail-top-actions').find('a', class_='love-action js-add-to-favorites')
print(price['data-price'])
Output
715.00
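The attribute value is a string; if you need it as a number for further calculations, convert it, for example:
price_value = float(price['data-price'])  # '715.00' -> 715.0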

Scrapy CSV export shows the same data in all rows

I'm trying to scrape the following html code:
<ul class="results-list" id="search-results">
<li>
<h3 class="name">First John</h3>
<div class="details">
email
<span class="phone">999999999</span>
</div>
</li>
<li>
<h3 class="name">Second John</h3>
<div class="details">
email
<span class="phone">999999999</span>
</div>
</li>
</ul>
When I run my spider, I get 2 rows containing the same information. I have name, email, and phone columns and, for example, in the name column both rows contain:
First John,Second John.
My Scrapy code is the following:
people = response.xpath('//ul[@class="results-list"]/li')
for person in people:
    item = SpiderItem()
    item['Name'] = person.xpath(
        '//h3/text()').extract()
    item['Email'] = person.xpath(
        '//div[@class="details"]/a/@href').extract()
    item['Phone'] = person.xpath(
        '//div[@class="details"]/span[@class="phone"]/text()').extract()
    yield item
However when I run scrapy crawl MySpider -o output.csv I get the same information in all rows.
You are using absolute paths in your XPath expressions, so each one searches the whole document instead of just the current li. Change them to relative ones:
for person in people:
    item = SpiderItem()
    item['Name'] = person.xpath(
        './/h3/text()').extract_first()
    item['Email'] = person.xpath(
        './/div[@class="details"]/a/@href').extract_first()
    item['Phone'] = person.xpath(
        './/div[@class="details"]/span[@class="phone"]/text()').extract_first()
    yield item
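If you want to see the difference without running the spider, a small sketch with Scrapy's Selector on a trimmed copy of the markup (the sample below is only for illustration) shows how the leading ./ scopes the query to each li:
from scrapy.selector import Selector

html = '''<ul class="results-list">
<li><h3 class="name">First John</h3></li>
<li><h3 class="name">Second John</h3></li>
</ul>'''

sel = Selector(text=html)
for person in sel.xpath('//ul[@class="results-list"]/li'):
    print(person.xpath('//h3/text()').extract())   # absolute: both names, every iteration
    print(person.xpath('.//h3/text()').extract())  # relative: only this <li>'s name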

BeautifulSoup Nested class selector

I am using BeautifulSoup for a project. Here is my HTML structure
<div class="container">
<div class="fruits">
<div class="apple">
<p>John</p>
<p>Sam</p>
<p>Bailey</p>
<p>Jack</p>
<ul>
<li>Sour</li>
<li>Sweet</li>
<li>Salty</li>
</ul>
<span>Fruits are good</span>
</div>
<div class="mango">
<p>Randy</p>
<p>James</p>
</div>
</div>
<div class="apple">
<p>Bill</p>
<p>Sean</p>
</div>
</div>
Now I want to grab the text in the div with class 'apple' which falls under class 'fruits'.
This is what I have tried so far ....
for node in soup.find_all("div", class_="apple"):
It's returning ...
Bill
Sean
But I want it to return only ...
John
Sam
Bailey
Jack
Sour
Sweet
Salty
Fruits are good
Please note that I DO NOT know the exact structure of the elements inside div class="apple". There can be any type of HTML element inside that class, so the selector has to be flexible enough.
Here is the full code, where I need to add this BeautifulSoup code ...
class MySpider(CrawlSpider):
    name = 'dknnews'
    start_urls = ['http://www.example.com/uat-area/scrapy/all-news-listing/_recache']
    allowed_domains = ['example.com']

    def parse(self, response):
        hxs = Selector(response)
        soup = BeautifulSoup(response.body, 'lxml')
        #soup = BeautifulSoup(content.decode('utf-8','ignore'))
        nf = NewsFields()
        ptype = soup.find_all(attrs={"name":"dknpagetype"})
        ptitle = soup.find_all(attrs={"name":"dknpagetitle"})
        pturl = soup.find_all(attrs={"name":"dknpageurl"})
        ptdate = soup.find_all(attrs={"name":"dknpagedate"})
        ptdesc = soup.find_all(attrs={"name":"dknpagedescription"})
        for node in soup.find_all("div", class_="apple"):  # THIS IS WHERE I NEED TO ADD THE BS CODE
            ptbody = ''.join(node.find_all(text=True))
            ptbody = ' '.join(ptbody.split())
            nf['pagetype'] = ptype[0]['content'].encode('ascii', 'ignore')
            nf['pagetitle'] = ptitle[0]['content'].encode('ascii', 'ignore')
            nf['pageurl'] = pturl[0]['content'].encode('ascii', 'ignore')
            nf['pagedate'] = ptdate[0]['content'].encode('ascii', 'ignore')
            nf['pagedescription'] = ptdesc[0]['content'].encode('ascii', 'ignore')
            nf['bodytext'] = ptbody.encode('ascii', 'ignore')
            yield nf
        for url in hxs.xpath('//ul[@class="scrapy"]/li/a/@href').extract():
            yield Request(url, callback=self.parse)
I am not sure how to use nested selectors with BeautifulSoup find_all?
Any help is greatly appreciated.
Thanks
soup.select('.fruits .apple p')
Use a CSS selector; it's an easy way to express the nested classes.
soup.find(class_='fruits').find(class_="apple").find_all('p')
Or, you can chain find() calls to get to the p tags step by step.
EDIT:
[s for div in soup.select('.fruits .apple') for s in div.stripped_strings]
Use the stripped_strings generator to get all the strings under the div tag; it also gets rid of the \n characters in the results.
Output:
['John', 'Sam', 'Bailey', 'Jack', 'Sour', 'Sweet', 'Salty', 'Fruits are good']
Full code:
from bs4 import BeautifulSoup
source_code = """<div class="container">
<div class="fruits">
<div class="apple">
<p>John</p>
<p>Sam</p>
<p>Bailey</p>
<p>Jack</p>
<ul>
<li>Sour</li>
<li>Sweet</li>
<li>Salty</li>
</ul>
<span>Fruits are good</span>
</div>
<div class="mango">
<p>Randy</p>
<p>James</p>
</div>
</div>
<div class="apple">
<p>Bill</p>
<p>Sean</p>
</div>
</div>
"""
soup = BeautifulSoup(source_code, 'lxml')
[s for div in soup.select('.fruits .apple') for s in div.stripped_strings]
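If the goal is the single bodytext string used in the spider above, the stripped strings can simply be joined (a sketch; the exact whitespace handling is an assumption):
texts = [s for div in soup.select('.fruits .apple') for s in div.stripped_strings]
ptbody = ' '.join(texts)
print(ptbody)  # John Sam Bailey Jack Sour Sweet Salty Fruits are good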