Splinter - Element is not clickable because another element <p> obscures it - selenium

I am trying to get some thumbnail pictures from a website (from their src attributes), as well as click on a link so that I can later get the big picture.
For that I'm using Splinter with BeautifulSoup.
This is the HTML for the first element I need to get:
In order to do that I have the following code:
from splinter import Browser
from bs4 import BeautifulSoup

executable_path = {"executable_path": "/path/to/geckodriver"}
browser = Browser("firefox", **executable_path, headless=False)
def get_player_images():
    url = 'https://www.premierleague.com/players'
    # Initiate a splinter instance of the URL
    browser.visit(url)
    browser.find_by_tag('div[class="table playerIndex"]')
    soup = BeautifulSoup(browser.html, 'html.parser')
    for el in soup:
        td = el.findAll('td')
        for each_td in td:
            link = each_td.find('a', href=True)
            if link:
                print(link['href'])
            image = each_td.find('img')
            if image:
                print(image['src'])

# run
get_player_images()
But I'm running into two issues after the browser opens:
I'm only getting actual src values for the first two players; after that, the photos are missing, and I don't understand why.
/players/19970/Max-Aarons/overview
https://resources.premierleague.com/premierleague/photos/players/40x40/p232980.png
/players/13279/Abdul-Rahman-Baba/overview
https://resources.premierleague.com/premierleague/photos/players/40x40/p118335.png
/players/13286/Tammy-Abraham/overview
//platform-static-files.s3.amazonaws.com/premierleague/photos/players/40x40/Photo-Missing.png
/players/3512/Adam-Smith/overview
//platform-static-files.s3.amazonaws.com/premierleague/photos/players/40x40/Photo-Missing.png
/players/10905/Che-Adams/overview
....
Also, if I try to click on the href link with:
if link:
    browser.click_link_by_partial_href(link['href'])
I get the error:
selenium.common.exceptions.ElementClickInterceptedException: Message: Element <a class="playerName" href="/players/19970/Max-Aarons/overview"> is not clickable at point (244,600) because another element <p> obscures it
What am I doing wrong? I'm running into a lot of trouble with Selenium.

The player data is loaded dynamically via JavaScript. You can use the requests module to obtain the info.
For example:
import re
import json
import requests
from bs4 import BeautifulSoup

url = 'https://footballapi.pulselive.com/football/players?pageSize=30&compSeasons=274&altIds=true&page={page}&type=player&id=-1&compSeasonId=274'
img_url = 'https://resources.premierleague.com/premierleague/photos/players/250x250/{player_id}.png'
headers = {'Origin': 'https://www.premierleague.com'}

for page in range(1, 10):  # <--- increase this to the desired number of pages
    data = requests.get(url.format(page=page), headers=headers).json()

    # uncomment this to print all data:
    # print(json.dumps(data, indent=4))

    for player in data['content']:
        print('{:<50} {}'.format(player['name']['display'], img_url.format(player_id=player['altIds']['opta'])))
Prints:
Ethan Ampadu https://resources.premierleague.com/premierleague/photos/players/250x250/p199598.png
Joseph Anang https://resources.premierleague.com/premierleague/photos/players/250x250/p447879.png
Florin Andone https://resources.premierleague.com/premierleague/photos/players/250x250/p93284.png
André Gomes https://resources.premierleague.com/premierleague/photos/players/250x250/p120250.png
Andreas Pereira https://resources.premierleague.com/premierleague/photos/players/250x250/p156689.png
Angeliño https://resources.premierleague.com/premierleague/photos/players/250x250/p145235.png
Faustino Anjorin https://resources.premierleague.com/premierleague/photos/players/250x250/p223332.png
Michail Antonio https://resources.premierleague.com/premierleague/photos/players/250x250/p57531.png
Cameron Archer https://resources.premierleague.com/premierleague/photos/players/250x250/p433979.png
Archie Davies https://resources.premierleague.com/premierleague/photos/players/250x250/p215061.png
Stuart Armstrong https://resources.premierleague.com/premierleague/photos/players/250x250/p91047.png
Marko Arnautovic https://resources.premierleague.com/premierleague/photos/players/250x250/p41464.png
Kepa Arrizabalaga https://resources.premierleague.com/premierleague/photos/players/250x250/p109745.png
Harry Arter https://resources.premierleague.com/premierleague/photos/players/250x250/p48615.png
Daniel Arzani https://resources.premierleague.com/premierleague/photos/players/250x250/p200797.png
... and so on.
Note: to get smaller thumbnails, change 250x250 in the image URLs to 40x40
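As a follow-up that is not part of the original answer: once you have those image URLs, you could save the full-size photos to disk with the same requests module. A minimal sketch, assuming the 250x250 URLs resolve for every player and simply naming each file after the last URL segment:
import os
import requests

def download_image(url, folder='player_photos'):
    # Saves a photo under its original file name, e.g. p232980.png, and returns the path.
    os.makedirs(folder, exist_ok=True)
    filename = os.path.join(folder, url.rsplit('/', 1)[-1])
    resp = requests.get(url)
    if resp.ok:
        with open(filename, 'wb') as f:
            f.write(resp.content)
    return filename

# Example usage with the img_url template from the code above:
# download_image(img_url.format(player_id='p232980'))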

Related

When I am using soup.find('p')['class'] it says Literal['class'] cannot be assigned to the type 'SupportsIndex | slice'

import requests
from bs4 import BeautifulSoup, NavigableString, Tag
url = "https://www.codewithharry.com"
r = requests.get(url)
htmlContent = r.content
soup = BeautifulSoup(htmlContent, 'html.parser')
print(soup.find('p')['class'])
This code showed me a warning in VS Code even though the output it produced was correct, so you can just write it as shown below:
print(soup.find('p')['class'])  # type: ignore
Sometimes VS Code shows the warning in the terminal as well, so you can use this comment to suppress the error.
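As an alternative to silencing the warning, one could also satisfy the type checker by narrowing the result of find() before indexing it. This is a sketch of that idea, not something from the original answer; the isinstance check is my assumption about what the checker wants:
import requests
from bs4 import BeautifulSoup, Tag

url = "https://www.codewithharry.com"
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

p = soup.find('p')
if isinstance(p, Tag):       # find() may return a Tag, a NavigableString, or None
    print(p.get('class'))    # .get() returns None instead of raising if 'class' is absent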

Can I use BeautifulSoup to find elements hidden by other wrapped elements?

I would like to extract the text of the author affiliations on this page using Beautiful Soup.
I know of a workaround using Selenium to simply click on the 'show more' link and scan the page again. I'm not sure what kind of elements these are (hidden?), as they only appear in the inspector after clicking the button.
Is there a way to extract this info using just Beautiful Soup, or do I need Selenium or something equivalent to reveal the elements in the HTML code?
from bs4 import BeautifulSoup
import requests

url = 'https://www.sciencedirect.com/science/article/abs/pii/S0920379621007596'
r = requests.get(url)
sp = BeautifulSoup(r.content, 'html.parser')

author_data = sp.find('div', id='author-group')
affiliations = author_data.find('dl', class_='affiliation').text
print(affiliations)
That info is within a script tag, though you need to map the letters for affiliations to the actual affiliations. The code below extracts the JavaScript object housing the info you want and handles it with the json library.
There is then a series of steps to dynamically determine which indices hold the info of interest and then use a constructed mapping of the letters to affiliations to assign the correct affiliation to each author.
The author first and last names are also dynamically ascertained and joined together with a space.
The intention was to avoid hardcoding indices which might change over time.
import re
import json
import requests

r = requests.get('https://www.sciencedirect.com/science/article/abs/pii/S0920379621007596',
                 headers={'User-Agent': 'Mozilla/5.0'})
data = json.loads(re.search(r'(\{"abstracts".*})', r.text).group(1))

base = [i for i in data['authors']['content']
        if i.get('#name') == 'author-group'][0]['$$']

affiliation_data = [i for i in base if i['#name'] == 'affiliation']
author_data = [i for i in base if i['#name'] == 'author']

name_info = [i['_'] for author in author_data for i in author['$$']
             if i['#name'] in ['given-name', 'surname']]

affiliations = dict(zip([j['_'] for i in affiliation_data for j in i['$$'] if j['#name'] == 'label'],
                        [j['_'] for i in affiliation_data for j in i['$$'] if isinstance(j, dict) and '_' in j and j['_'][0].isupper()]))
# print(affiliations)

author_affiliations = dict(zip([' '.join([i[0], i[1]]) for i in zip(name_info[0::2], name_info[1::2])],
                               [affiliations[j['_']] for author in author_data for i in author['$$'] if i['#name'] == 'cross-ref' for j in i['$$'] if j['_'] != '⁎']))
print(author_affiliations)
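A small usage note, based only on how the dictionaries above are built: author_affiliations ends up as a plain dict mapping 'First Last' names to affiliation strings, so it can be iterated like any other dict, e.g.:
for author, affiliation in author_affiliations.items():
    print(f'{author}: {affiliation}')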

web scrape does not find the correct tags

I am trying to extract the text of this page: https://www.londonstockexchange.com/news-article/ESNT/date-for-fy-2020-results-announcement/14850033 using bs4 and pandas
I start with:
import requests
from bs4 import BeautifulSoup

url = "https://www.londonstockexchange.com/news-article/ESNT/date-for-fy-2020-results-announcement/14850033"
src = requests.get(url).content
soup = BeautifulSoup(src, 'xml')
and see that the text I am interested in is wrapped in p tags,
but when I run soup.find_all('p'), the only return I get is the closing paragraph.
How can I extract the paragraph text within? What am I missing?
These are the paragraphs I am trying to extract:
I also tried with Selenium, using:
import os
from selenium import webdriver
from bs4 import BeautifulSoup

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--headless")
chrome_driver = os.getcwd() + "\\chromedriver.exe"
driver = webdriver.Chrome(options=chrome_options, executable_path=chrome_driver)
driver.get(url)
page = driver.page_source
page_soup = BeautifulSoup(page, 'xml')
div = page_soup.find_all('p')
[a.text for a in div]
I figured it out.
The body of the site comes from a <script> tag that holds JSON, but with a funky encoding.
That <script> tag has an id of "ng-lseg-state", which means this is Angular's custom HTML encoding.
You can target the <script> tag with BeautifulSoup and parse it with the json module.
Then, however, you need to deal with Angular's encoding. One way, a bit crude though, is to chain a bunch of .replace() methods.
Here's how:
import json
import requests
from bs4 import BeautifulSoup

url = "https://www.londonstockexchange.com/news-article/ESNT/date-for-fy-2020-results-announcement/14850033"
script = BeautifulSoup(requests.get(url).text, "lxml").find("script", {"id": "ng-lseg-state"})
article = json.loads(script.string.replace("&q;", '"'))

main_key = "G.{{api_endpoint}}/api/v1/pages?parameters=newsId%3D14850033&a;path=news-article"
article_body = article[main_key]["body"]["components"][1]["content"]["newsArticle"]["value"]

decoded_body = (
    article_body
    .replace('&l;', '<')
    .replace('&g;', '>')
    .replace('&q;', '"')
)

print(BeautifulSoup(decoded_body, "lxml").find_all("p")[22].getText())
This outputs:
Essentra plc is a FTSE 250 company and a leading global provider of essential components and solutions.&a;#160; Organised into three global divisions, Essentra focuses on the light manufacture and distribution of high volume, enabling components which serve customers in a wide variety of end-markets and geographies.
However, as I've said, this is not the best approach, as I'm not entirely sure how to deal with a bunch of other characters, namely:
&a;#160;
&a;amp;
&s;
just to name a few. But I've already asked about this.
EDIT:
Here's fully working code based on the answer to my question, mentioned above.
import html
import json
import requests
from bs4 import BeautifulSoup


def unescape(decoded_html):
    char_mapping = {
        '&a;': '&',
        '&q;': '"',
        '&s;': '\'',
        '&l;': '<',
        '&g;': '>',
    }
    for key, value in char_mapping.items():
        decoded_html = decoded_html.replace(key, value)
    return html.unescape(decoded_html)


url = "https://www.londonstockexchange.com/news-article/ESNT/date-for-fy-2020-results-announcement/14850033"
script = BeautifulSoup(requests.get(url).text, "lxml").find("script", {"id": "ng-lseg-state"})
payload = json.loads(unescape(script.string))

main_key = "G.{{api_endpoint}}/api/v1/pages?parameters=newsId%3D14850033&path=news-article"
article_body = payload[main_key]["body"]["components"][1]["content"]["newsArticle"]["value"]

print(BeautifulSoup(article_body, "lxml").find_all("p")[22].getText())
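As a small follow-up that goes beyond the original answer: if you want the whole article rather than a single paragraph, you could join the text of every <p> tag in the decoded body, e.g.:
paragraphs = BeautifulSoup(article_body, "lxml").find_all("p")
full_text = "\n".join(p.getText(strip=True) for p in paragraphs if p.getText(strip=True))
print(full_text)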

Scraping a web page with a "more" button...with beautifulsoup

I'm trying to scrape information from this website: "http://vlg.film/"
I'm not only interested in the first 15 titles, but in all of them. When clicking on the 'Show More' button a couple of times, the extra titles show up in the "inspect element" window, but the URL stays the same, i.e. "https://vlg.film/". Does anyone have any bright ideas? I am fairly new to this. Thanks.
import requests as re
from bs4 import BeautifulSoup as bs

url = "https://vlg.film/"
page = re.get(url)
soup = bs(page.content, 'html.parser')

wrap = soup.find_all('div', class_="column column--20 column--main")
for det in wrap:
    link = det.a['href']
    print(link)
Looks like you can simply add the pagination to the URL. The trick is knowing when you have reached the end. Playing around with it, it appears that once you reach the end, it repeats the first page. So all you need to do is keep appending the links to a list, and when you start to see a repeated link, stop.
import requests as re
from bs4 import BeautifulSoup as bs

next_page = True
page_num = 1
links = []

while next_page == True:
    url = "https://vlg.film/"
    payload = {'PAGEN_1': '%s' % page_num}
    page = re.get(url, params=payload)
    soup = bs(page.content, 'html.parser')
    wrap = soup.find_all('div', class_="column column--20 column--main")
    for det in wrap:
        link = det.a['href']
        if link in links:
            next_page = False
            break
        links.append(link)
    page_num += 1

for link in links:
    print(link)
Output:
/films/ainbo/
/films/boss-level/
/films/i-care-a-lot/
/films/fear-of-rain/
/films/extinct/
/films/reckoning/
/films/marksman/
/films/breaking-news-in-yuba-county/
/films/promising-young-woman/
/films/knuckledust/
/films/rifkins-festival/
/films/petit-pays/
/films/life-as-it-should-be/
/films/human-voice/
/films/come-away/
/films/jiu-jitsu/
/films/comeback-trail/
/films/cagefighter/
/films/kolskaya/
/films/golden-voices/
/films/bad-hair/
/films/dragon-rider/
/films/lucky/
/films/zalozhnik/
/films/findind-steve-mcqueen/
/films/black-water-abyss/
/films/bigfoot-family/
/films/alone/
/films/marionette/
/films/after-we-collided/
/films/copperfield/
/films/her-blue-sky/
/films/secret-garden/
/films/hour-of-lead/
/films/eve/
/films/happier-times-grump/
/films/palm-springs/
/films/unhinged/
/films/mermaid-in-paris/
/films/lassie/
/films/sunlit-night/
/films/hello-world/
/films/blood-machines/
/films/samsam/
/films/search-and-destroy/
/films/play/
/films/mortal/
/films/debt-collector-2/
/films/chosen-ones/
/films/inheritance/
/films/tailgate/
/films/silent-voice/
/films/roads-not-taken/
/films/jim-marshall/
/films/goya-murders/
/films/SUFD/
/films/pinocchio/
/films/swallow/
/films/come-as-you-are/
/films/kelly-gang/
/films/corpus-christi/
/films/gentlemen/
/films/vic-the-viking/
/films/perfect-nanny/
/films/farmageddon/
/films/close-to-the-horizon/
/films/disturbing-the-peace/
/films/trauma-center/
/films/benjamin/
/films/COURIER/
/films/aeronauts/
/films/la-belle-epoque/
/films/arctic-dogs/
/films/paradise-hills/
/films/ditya-pogody/
/films/selma-v-gorode-prizrakov/
/films/rainy-day-in-ny/
/films/ty-umeesh-khranit-sekrety/
/films/after-the-wedding/
/films/the-room/
/films/kuda-ty-propala-bernadett/
/films/uglydolls/
/films/smert-i-zhizn-dzhona-f-donovana/
/films/sinyaya-bezdna-2/
/films/just-a-gigolo/
/films/i-am-mother/
/films/city-hunter/
/films/lets-dance/
/films/five-feet-apart/
/films/after/
/films/100-things/
/films/greta/
/films/CORGI/
/films/destroyer/
/films/vice/
/films/ayka/
/films/van-gogh/
/films/serenity/
This is a pretty simple website to extract data from. Create a list of page URLs for however many pages you want to extract, then use a for loop to iterate over the pages and extract the data.
import requests as re
from bs4 import BeautifulSoup as bs

urls = ["http://vlg.film/ajax/index_films.php?PAGEN_1={}".format(x) for x in range(1, 11)]

for url in urls:
    page = re.get(url)
    soup = bs(page.content, 'html.parser')
    wrap = soup.find_all('div', class_="column column--20 column--main")
    print(url)
    for det in wrap:
        link = det.a['href']
        print(link)
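If you do not want to hardcode the page range, one possible variation (a sketch combining the AJAX endpoint above with the repeat-detection idea from the first answer, not something either answer shows) is to keep requesting pages until nothing new comes back:
import requests
from bs4 import BeautifulSoup

links = []
page_num = 1
while True:
    url = "http://vlg.film/ajax/index_films.php?PAGEN_1={}".format(page_num)
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    wrap = soup.find_all('div', class_="column column--20 column--main")
    new_links = [det.a['href'] for det in wrap if det.a]
    # Stop when a page is empty or only repeats links we already collected.
    if not new_links or set(new_links) <= set(links):
        break
    links.extend(new_links)
    page_num += 1

print(len(links), "links collected")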

Why doesn't the text() function of XPath return any data with scrapy-selenium?

I am trying to scrape a website with scrapy-selenium. I am facing two problems:
1. When I apply the XPath in the Chrome developer tools I find all the elements, but after executing the code it returns only one Selector object.
2. The text() function of the XPath expression returns None.
This is the URL I am trying to scrape: http://www.atab.org.bd/Member/Dhaka_Zone
Here is a screenshot of inspector tool:
Here is my code:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import Selector
from scrapy_selenium import SeleniumRequest
from selenium.webdriver.common.keys import Keys
class AtabDSpider(scrapy.Spider):
name = 'atab_d'
def start_requests(self):
yield SeleniumRequest(
url = "https://www.atab.org.bd/Member/Dhaka_Zone",
#url = "https://www.bit2lead.com",
#wait_time = 15,
wait_time = 3,
callback = self.parse
)
def parse(self, response):
companies = response.xpath("//ul[#class='row']/li")
print("Numbers Of Iterable Item: " + str(len(companies)))
for company in companies:
yield {
"company": company.xpath(".//div[#class='card']/div[1]/div/a/h3[#data-bind='text: NameOfOrganization']/text()").get()
#also tried
#"company": company.xpath(".//div[#class='card']/div[1]/div/a/h3/text()").get()
}
Here is a screenshot of my terminal:
And this is the URL I was practicing on before ( https://www.algoslab.com ). That worked well, although it's simple enough.
Why don't you try the following directly, to get everything in one go in the blink of an eye:
import requests

link = 'http://123.253.36.205:8051/API/Member/GetMembersList?searchString=&zone=0&blacklisted=false'

r = requests.get(link)
for item in r.json():
    _name = item['NameOfOrganization']
    phone = item['Phone']
    print(_name, phone)
The output looks like this (it should produce 3160 lines of results):
Aqib Travels & Tours Ltd. +88-029101468, 58151369
4S Tours & Travels Ltd 8954750
5M Logistics And Tours Ltd +880 2 48810030
The XPath you want could be simplified to //h3[@data-bind='text: NameOfOrganization'] to select the element and then read its text.
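For example, inside the question's parse callback the simplified selector might look like the sketch below (keep in mind the data is rendered by JavaScript, so the page still has to come through SeleniumRequest, or you can use the API shown above):
def parse(self, response):
    # Select every organisation-name heading directly and read its text node.
    names = response.xpath("//h3[@data-bind='text: NameOfOrganization']/text()").getall()
    for name in names:
        yield {"company": name.strip()}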