Beautiful Soup - Extracting info

I'm having some issues trying to extract info from this HTML excerpt.
So far I'm using this code to extract the HTML shown below.
#//////////////////////////////
from bs4 import BeautifulSoup

with open('soup.html', 'r') as f:
    soup = BeautifulSoup(f, 'html.parser')

base = soup.find_all('script', type="application/ld+json")
print(base)
#//////////////////////////////
How do I pull out the URL and the name for each entry?
This is what I get:
[<script type="application/ld+json">
{"#context":"http://schema.org","#type":"Organization","name":"Redfin","logo":"https://ssl.cdn-redfin.com/static-images/images/redfin-logo-transparent-bg-260x66.png","url":"https://www.redfin.com"}
</script>,
<script type="application/ld+json">
[{"#context":"http://schema.org","name":"7316 Green St, New Orleans, LA 70118","url":"/LA/New-Orleans/7316-Green-St-70118/home/79443425","address":{"#type":"PostalAddress","streetAddress":"7316 Green St","addressLocality":"New Orleans","addressRegion":"LA","postalCode":"70118","addressCountry":"US"},"numberOfRooms":"6","#type":"SingleFamilyResidence"},{"#context":"http://schema.org","#type":"Product","name":"7316 Green St, New Orleans, LA 70118","offers":{"#type":"Offer","price":"624900","priceCurrency":"USD"},"url":"/LA/New-Orleans/7316-Green-St-70118/home/79443425"}]
</script>,
<script type="application/ld+json">
[{"#context":"http://schema.org","name":"257 Cherokee St #2, New Orleans, LA 70118","url":"/LA/New-Orleans/257-Cherokee-St-70118/unit-2/home/144766248","address":{"#type":"PostalAddress","streetAddress":"257 Cherokee St #2","addressLocality":"New Orleans","addressRegion":"LA","postalCode":"70118","addressCountry":"US"},"numberOfRooms":"2","#type":"SingleFamilyResidence"},{"#context":"http://schema.org","#type":"Product","name":"257 Cherokee St #2, New Orleans, LA 70118","offers":{"#type":"Offer","price":"129500","priceCurrency":"USD"},"url":"/LA/New-Orleans/257-Cherokee-St-70118/unit-2/home/144766248"}]
</script>, <script type="application/ld+json">

What you show as the result is a list of script tags whose contents are JSON (either a single object or a list of objects); you need to iterate over it and pull out the values you want.

Use the json module to parse each tag's contents, and then you can look up values by key name.
You will need to add:
import json
Then you can do:
#//////////////////////////////
import json
from bs4 import BeautifulSoup

with open('soup.html', 'r') as f:
    soup = BeautifulSoup(f, 'html.parser')

base = soup.find_all('script', type="application/ld+json")
for each in base:
    jsonData = json.loads(each.text)
    # some of these script tags hold a single object, others hold a list of objects
    items = jsonData if isinstance(jsonData, list) else [jsonData]
    for item in items:
        name = item.get('name')
        url = item.get('url')
        print('Name: %s\nURL: %s\n' % (name, url))
#//////////////////////////////
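With the sample HTML shown above, the first few lines of output should look something like:
Name: Redfin
URL: https://www.redfin.com

Name: 7316 Green St, New Orleans, LA 70118
URL: /LA/New-Orleans/7316-Green-St-70118/home/79443425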

Related

How can I print only numbers inside a tag

I have a soup object like:
<div class="list-card__select">
<div class="list-card__item-size">
Size:
75 м² </div>
I did:
soup = BeautifulSoup(text, 'lxml')
number = item.find(class_='list-card__item-size').text
print(number)
Result: 'Size: 75 м²'
How can I get just '75'?
You can do this:
soup = BeautifulSoup(html, "html.parser")
data = soup.find_all("span", {"class": "comments"})
numbers = [d.text for d in data]
Provided that the pattern is always identical, a simple split() can be used (splitting on any whitespace, so stray newlines in the tag don't matter):
item.find(class_='list-card__item-size').text.split()[1]
Alternatives are a regex, or inspecting other elements, the JavaScript, or an API that holds this information directly.
If the number is always positive, we can also use the re package.
import re
string = "Size: 75 м²"
print(re.findall(r'\d+', string)[0])
Output: 75

Scraping a web page with a "more" button with BeautifulSoup

I'm trying to scrape information from this website: "http://vlg.film/"
I'm not only interested in the first 15 titles, but in all of them. When clicking on the 'Show More' button a couple of times, the extra titles show up in the "inspect element" window, but the URL stays the same, i.e. "https://vlg.film/". Does anyone have any bright ideas? I am fairly new to this. Thanks
import requests as re
from bs4 import BeautifulSoup as bs

url = "https://vlg.film/"
page = re.get(url)
soup = bs(page.content, 'html.parser')
wrap = soup.find_all('div', class_="column column--20 column--main")
for det in wrap:
    link = det.a['href']
    print(link)
Looks like you can simply add the pagination to the URL. The trick is knowing when you've reached the end. Playing around with it, it appears that once you reach the end, it repeats the first page. So all you need to do is keep appending the links to a list, and when a link starts to repeat, stop.
import requests as re
from bs4 import BeautifulSoup as bs

next_page = True
page_num = 1
links = []

while next_page == True:
    url = "https://vlg.film/"
    payload = {'PAGEN_1': '%s' % page_num}
    page = re.get(url, params=payload)
    soup = bs(page.content, 'html.parser')
    wrap = soup.find_all('div', class_="column column--20 column--main")
    for det in wrap:
        link = det.a['href']
        # once a link repeats, we have looped back to the first page, so stop
        if link in links:
            next_page = False
            break
        links.append(link)
    page_num += 1

for link in links:
    print(link)
Output:
/films/ainbo/
/films/boss-level/
/films/i-care-a-lot/
/films/fear-of-rain/
/films/extinct/
/films/reckoning/
/films/marksman/
/films/breaking-news-in-yuba-county/
/films/promising-young-woman/
/films/knuckledust/
/films/rifkins-festival/
/films/petit-pays/
/films/life-as-it-should-be/
/films/human-voice/
/films/come-away/
/films/jiu-jitsu/
/films/comeback-trail/
/films/cagefighter/
/films/kolskaya/
/films/golden-voices/
/films/bad-hair/
/films/dragon-rider/
/films/lucky/
/films/zalozhnik/
/films/findind-steve-mcqueen/
/films/black-water-abyss/
/films/bigfoot-family/
/films/alone/
/films/marionette/
/films/after-we-collided/
/films/copperfield/
/films/her-blue-sky/
/films/secret-garden/
/films/hour-of-lead/
/films/eve/
/films/happier-times-grump/
/films/palm-springs/
/films/unhinged/
/films/mermaid-in-paris/
/films/lassie/
/films/sunlit-night/
/films/hello-world/
/films/blood-machines/
/films/samsam/
/films/search-and-destroy/
/films/play/
/films/mortal/
/films/debt-collector-2/
/films/chosen-ones/
/films/inheritance/
/films/tailgate/
/films/silent-voice/
/films/roads-not-taken/
/films/jim-marshall/
/films/goya-murders/
/films/SUFD/
/films/pinocchio/
/films/swallow/
/films/come-as-you-are/
/films/kelly-gang/
/films/corpus-christi/
/films/gentlemen/
/films/vic-the-viking/
/films/perfect-nanny/
/films/farmageddon/
/films/close-to-the-horizon/
/films/disturbing-the-peace/
/films/trauma-center/
/films/benjamin/
/films/COURIER/
/films/aeronauts/
/films/la-belle-epoque/
/films/arctic-dogs/
/films/paradise-hills/
/films/ditya-pogody/
/films/selma-v-gorode-prizrakov/
/films/rainy-day-in-ny/
/films/ty-umeesh-khranit-sekrety/
/films/after-the-wedding/
/films/the-room/
/films/kuda-ty-propala-bernadett/
/films/uglydolls/
/films/smert-i-zhizn-dzhona-f-donovana/
/films/sinyaya-bezdna-2/
/films/just-a-gigolo/
/films/i-am-mother/
/films/city-hunter/
/films/lets-dance/
/films/five-feet-apart/
/films/after/
/films/100-things/
/films/greta/
/films/CORGI/
/films/destroyer/
/films/vice/
/films/ayka/
/films/van-gogh/
/films/serenity/
This is a pretty simple website to extract data from. Create a list of page URLs for however many pages you want to extract, then use a for loop to iterate over the pages and extract the data.
import requests as re
from bs4 import BeautifulSoup as bs

urls = ["http://vlg.film/ajax/index_films.php?PAGEN_1={}".format(x) for x in range(1, 11)]

for url in urls:
    page = re.get(url)
    soup = bs(page.content, 'html.parser')
    wrap = soup.find_all('div', class_="column column--20 column--main")
    print(url)
    for det in wrap:
        link = det.a['href']
        print(link)

Splinter - Element is not clickable because another element <p> obscures it

I am trying to get some thumbnail pictures from a website (from their src attributes), as well as click on a link, so I can later get the big picture.
For that I'm using Splinter with BeautifulSoup.
This is the HTML for the first element I need to get:
In order to do that I have the following code:
from splinter import Browser
from bs4 import BeautifulSoup

executable_path = {"executable_path": "/path/to/geckodriver"}
browser = Browser("firefox", **executable_path, headless=False)

def get_player_images():
    url = 'https://www.premierleague.com/players'
    # Initiate a splinter instance of the URL
    browser.visit(url)
    browser.find_by_tag('div[class="table playerIndex"]')
    soup = BeautifulSoup(browser.html, 'html.parser')
    for el in soup:
        td = el.findAll('td')
        for each_td in td:
            link = each_td.find('a', href=True)
            if link:
                print(link['href'])
            image = each_td.find('img')
            if image:
                print(image['src'])

# run
get_player_images()
But I'm running into 2 issues after the browser opens:
I'm only getting actual src values for the first two players. After that, the photos are missing, and I don't understand why.
/players/19970/Max-Aarons/overview
https://resources.premierleague.com/premierleague/photos/players/40x40/p232980.png
/players/13279/Abdul-Rahman-Baba/overview
https://resources.premierleague.com/premierleague/photos/players/40x40/p118335.png
/players/13286/Tammy-Abraham/overview
//platform-static-files.s3.amazonaws.com/premierleague/photos/players/40x40/Photo-Missing.png
/players/3512/Adam-Smith/overview
//platform-static-files.s3.amazonaws.com/premierleague/photos/players/40x40/Photo-Missing.png
/players/10905/Che-Adams/overview
....
Also, if I try to click on the href link with:
if link:
    browser.click_link_by_partial_href(link['href'])
I get the error:
selenium.common.exceptions.ElementClickInterceptedException: Message: Element <a class="playerName" href="/players/19970/Max-Aarons/overview"> is not clickable at point (244,600) because another element <p> obscures it
What am I doing wrong? I'm running into a lot of trouble with Selenium.
The player data is loaded dynamically via JavaScript. You can use the requests module to obtain the info from the underlying API.
For example:
import json
import requests

url = 'https://footballapi.pulselive.com/football/players?pageSize=30&compSeasons=274&altIds=true&page={page}&type=player&id=-1&compSeasonId=274'
img_url = 'https://resources.premierleague.com/premierleague/photos/players/250x250/{player_id}.png'
headers = {'Origin': 'https://www.premierleague.com'}

for page in range(1, 10):  # <--- increase this to the desired number of pages
    data = requests.get(url.format(page=page), headers=headers).json()

    # uncomment this to print all data:
    # print(json.dumps(data, indent=4))

    for player in data['content']:
        print('{:<50} {}'.format(player['name']['display'], img_url.format(player_id=player['altIds']['opta'])))
Prints:
Ethan Ampadu https://resources.premierleague.com/premierleague/photos/players/250x250/p199598.png
Joseph Anang https://resources.premierleague.com/premierleague/photos/players/250x250/p447879.png
Florin Andone https://resources.premierleague.com/premierleague/photos/players/250x250/p93284.png
André Gomes https://resources.premierleague.com/premierleague/photos/players/250x250/p120250.png
Andreas Pereira https://resources.premierleague.com/premierleague/photos/players/250x250/p156689.png
Angeliño https://resources.premierleague.com/premierleague/photos/players/250x250/p145235.png
Faustino Anjorin https://resources.premierleague.com/premierleague/photos/players/250x250/p223332.png
Michail Antonio https://resources.premierleague.com/premierleague/photos/players/250x250/p57531.png
Cameron Archer https://resources.premierleague.com/premierleague/photos/players/250x250/p433979.png
Archie Davies https://resources.premierleague.com/premierleague/photos/players/250x250/p215061.png
Stuart Armstrong https://resources.premierleague.com/premierleague/photos/players/250x250/p91047.png
Marko Arnautovic https://resources.premierleague.com/premierleague/photos/players/250x250/p41464.png
Kepa Arrizabalaga https://resources.premierleague.com/premierleague/photos/players/250x250/p109745.png
Harry Arter https://resources.premierleague.com/premierleague/photos/players/250x250/p48615.png
Daniel Arzani https://resources.premierleague.com/premierleague/photos/players/250x250/p200797.png
... and so on.
Note: to get smaller thumbnails, change 250x250 in the image URLs to 40x40
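For instance, reusing the img_url template from the snippet above, one way (just an illustrative one-liner, not from the original answer) is a simple string replacement:
thumb_url = img_url.replace('250x250', '40x40')  # same {player_id} placeholder, 40x40 size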

Webscrape ISBN info from a Brazilian website

I'm trying to get some tags with Beautiful Soup, to generate a BibTeX entry with this data.
The Brazilian ISBN site, when accessed from a browser, shows the information about that ISBN. But when I tried to use urlopen and requests, it gave me an HTTPError code 500. This also happened in the browser, and was only resolved by closing the tab and opening the same link in another tab.
The website asks for a captcha. I think the first search needs to answer the captcha, and after that just changing the ISBN in the URL will work.
After this, when you hit 'link+isbn' it shows the information about the book. I'm trying to use this 'link+isbn' to scrape with Beautiful Soup.
Link that works: http://www.isbn.bn.br/website/consulta/cadastro/isbn/9788521208037 -- (do a first search at 'www.isbn. ... /cadastro' first, because of the captcha)
I tried some code, and now I'm just trying to get the HTML of the website without the error 500.
import sys
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

BRbase = 'http://www.isbn.bn.br/website/consulta/cadastro/isbn/'
Lista_ISBN = ['9788542209402',
              '9788542206937',
              '9788521208037']

for isbn in Lista_ISBN:
    page = BRbase + isbn
    url = Request(page, headers={'User-Agent': 'Mozilla/5.0'})
    html = urlopen(url).read()
    # code to Beautiful Soup
    try:
        # code to Beautiful Soup and generate BibTeX
        print(page)
        print(html)
    except:
        print('ISBN {} não encontrado'.format(isbn))
        sys.exit(1)
import requests
from bs4 import BeautifulSoup

headers = {"Cookie": 'JSESSIONID=60F8CDFBD408299B40C7E7C2459DC624'}
isbn = ['9788542209402', '9788542206937', '9788521208037']

for item in isbn:
    print(f"{'*'*20}Extracting ISBN# {item}{'*'*20}")
    r = requests.get(
        f"http://www.isbn.bn.br/website/consulta/cadastro/isbn/{item}", headers=headers)
    soup = BeautifulSoup(r.text, 'html.parser')
    for item in soup.findAll('strong')[2:10]:
        print(item.parent.get_text(strip=True))
Output:
********************Extracting ISBN# 9788542209402********************
ISBN978-85-422-0940-2
TítuloSPQR
Edição1
Ano Edição2017
Tipo de SuportePapel
Páginas448
Editor(a)Planeta
ParticipaçõesMary Beard ( Autor)Luiz Gil Reyes (Tradutor)
********************Extracting ISBN# 9788542206937********************
ISBN978-85-422-0693-7
TítuloEm nome de Roma
Edição1
Ano Edição2016
Tipo de SuportePapel
Páginas560
Editor(a)Planeta
ParticipaçõesAdrian Goldsworthy ( Autor)Claudio Blanc (Tradutor)
********************Extracting ISBN# 9788521208037********************
ISBN978-85-212-0803-7
TítuloCurso de física básica: ótica, relatividade e física quântica
Edição2
Ano Edição2014
Tipo de SuportePapel
Páginas0
Editor(a)Blucher
ParticipaçõesH. Moysés Nussenzveig ( Autor)
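If the hard-coded JSESSIONID above expires, a variation on the same idea (a rough sketch, not verified against the site) is to copy a fresh cookie value from your browser after solving the captcha once and send it through a requests.Session, so every ISBN lookup reuses that session:
import requests
from bs4 import BeautifulSoup

session = requests.Session()
# hypothetical value: paste the JSESSIONID your browser received after passing the captcha
session.cookies.set('JSESSIONID', 'YOUR-SESSION-ID-HERE', domain='www.isbn.bn.br')

for isbn in ['9788542209402', '9788542206937', '9788521208037']:
    r = session.get(f"http://www.isbn.bn.br/website/consulta/cadastro/isbn/{isbn}")
    soup = BeautifulSoup(r.text, 'html.parser')
    for tag in soup.findAll('strong')[2:10]:
        print(tag.parent.get_text(strip=True))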

Is it possible to use bs4 soup object with lxml?

I am trying to use both BS4 and lxml. Instead of parsing the HTML page twice, is there any way to use the soup object with lxml, or vice versa?
self.soup = BeautifulSoup(open(path), "html.parser")
I tried using this object with lxml like this:
doc = html.fromstring(self.soup)
but it throws: TypeError: expected string or bytes-like object
Is there any way to get this kind of usage?
I don't think there is a way without going through a string object.
from bs4 import BeautifulSoup
import lxml.html
html = """
<html><body>
<div>
<p>test</p>
</div>
</body></html>
"""
soup = BeautifulSoup(html, 'html.parser')
# Soup to lxml.html
doc = lxml.html.fromstring(soup.prettify())
print (type(doc))
print (lxml.html.tostring(doc))
#lxml.html to soup
soup = BeautifulSoup(lxml.html.tostring(doc), 'html.parser')
print (type(soup))
print (soup.prettify())
Outputs:
<class 'lxml.html.HtmlElement'>
b'<html>\n <body>\n <div>\n <p>\n test\n </p>\n </div>\n </body>\n</html>'
<class 'bs4.BeautifulSoup'>
<html>
<body>
<div>
<p>
test
</p>
</div>
</body>
</html>
Updated in response to comment:
You can use lxml.etree to iterate through the doc object:
# Soup to lxml.etree
from lxml import etree

doc = etree.fromstring(soup.prettify())
it = doc.getiterator()
for element in it:
    print("%s - %s" % (element.tag, element.text.strip()))