Unable to parse element with Selenium

I am trying to parse the date element ("3 February 2022") on the following webpage. However, I am unable to find it, even when using Selenium to load the page. Any suggestions as to what I am doing wrong? I am currently trying the following code:
from bs4 import BeautifulSoup
import time
import re
from selenium import webdriver
url = "http://www.londonstockexchange.com/news-article/SAIN/net-asset-value-s/15316710"
driver = webdriver.Chrome()
driver.get(url)
time.sleep(5)  # give the JavaScript a few seconds to render
soup = str(BeautifulSoup(driver.page_source, 'html.parser'))
date = re.findall(r"[0-9]{1,2}\s[A-Z][a-z]+\s[0-9]{4}", soup)
print(f'Taking {date[-1]} out of the possible dates: {date}')
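If a fixed sleep turns out to be too short, one alternative (a minimal sketch, not verified against this particular page) is to wait explicitly until the date pattern actually shows up in the rendered source before parsing it:
import re
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
DATE_RE = re.compile(r"[0-9]{1,2}\s[A-Z][a-z]+\s[0-9]{4}")
driver = webdriver.Chrome()
driver.get("http://www.londonstockexchange.com/news-article/SAIN/net-asset-value-s/15316710")
# Wait up to 20 seconds until the rendered source contains a date,
# instead of hoping a fixed sleep is long enough.
WebDriverWait(driver, 20).until(lambda d: DATE_RE.search(d.page_source))
dates = DATE_RE.findall(driver.page_source)
print(dates)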

Related

How to scrape company names from inc5000?

I am trying to scrape all company names from the Inc. 5000 site ("https://www.inc.com/inc5000/2021"). The problem is that the company names are rendered with JavaScript. I have tried both Selenium and requests_html to render the site, but when I fetch the page source I still get the raw JavaScript. This is what I tried. I am new to web scraping, so it is possible that I am making some foolish mistake. Please guide me.
Here is my code.
...
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
options = Options()
options.headless = True
driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
driver.get("https://www.inc.com/inc5000/2021")
data=driver.page_source
print(data)
...
You could give the page some time to render, or use Selenium's waits:
...
import time
driver.get('https://www.inc.com/inc5000/2021')
time.sleep(5)
data = driver.page_source
soup = BeautifulSoup(data, 'html.parser')
for e in soup.select('.company'):
    print(e.text)
...
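With an explicit wait instead of a fixed sleep, the same idea looks roughly like this (a sketch that assumes the names keep the company class used above):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver.get('https://www.inc.com/inc5000/2021')
# Block until at least one element with the "company" class is present (max 15 s).
WebDriverWait(driver, 15).until(
    EC.presence_of_all_elements_located((By.CLASS_NAME, 'company'))
)
soup = BeautifulSoup(driver.page_source, 'html.parser')
for e in soup.select('.company'):
    print(e.text)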
Why do you need BeautifulSoup at all? You could just use Selenium:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("https://www.inc.com/inc5000/2021")
companies = [e.text for e in driver.find_elements(By.CLASS_NAME, "company")]
This will only give you the elements in the viewport. You need to improve on that by scrolling.
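A rough sketch of that (not verified against the current page) is to keep scrolling until the document height stops growing and then collect the elements:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get("https://www.inc.com/inc5000/2021")
# Keep scrolling to the bottom until the document height stops growing.
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give newly loaded rows time to render
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height
companies = [e.text for e in driver.find_elements(By.CLASS_NAME, "company")]
print(len(companies))
Note that this assumes rows stay in the DOM once loaded; if the list is virtualized you would have to collect the texts inside the loop instead.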

Can't find data while scraping site using BeautifulSoup or Selenium

I am trying to scrape a site for the link to the newest factsheet. I've tried using Selenium and BeautifulSoup, but each time I am unable to find the link with either tool. For instance, when checking the output from BeautifulSoup, I get nothing back for that part of the page. Any suggestions?
Link to the scraped site
Using Selenium:
#BIOG
from selenium.webdriver.common.by import By
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.headless = True
driver = webdriver.Chrome(options=options)
driver.get('https://www.biotechgt.com/performance/monthly-factsheets')
html = driver.page_source
driver.find_elements(By.XPATH, '/html/body/div/main/section/div/div/div/div/div[2]/div/div[1]/div[2]/div/table/tbody[1]/tr[2]/td/a')
To get all download links from the page, you can use the following example:
import requests
from bs4 import BeautifulSoup
url = "https://www.biotechgt.com/performance/monthly-factsheets"
soup = BeautifulSoup(
    requests.get(url, cookies={"dp-disclaimer": "APPROVED"}).content,
    "html.parser",
)
for a in soup.select("a.gtm-downloads:has(.btn-download)"):
    print(a["href"])
Prints:
https://www.biotechgt.com/download_file/force/191/209
https://www.biotechgt.com/download_file/force/187/209
https://www.biotechgt.com/download_file/force/185/209
https://www.biotechgt.com/download_file/force/184/209
...
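If you also want to save the files, a small follow-up sketch could reuse the same cookie and write each one to disk (the factsheet_*.pdf naming scheme here is just made up for illustration, and whether the cookie is required for the download itself is an assumption):
for a in soup.select("a.gtm-downloads:has(.btn-download)"):
    href = a["href"]
    pdf = requests.get(href, cookies={"dp-disclaimer": "APPROVED"})
    # e.g. ".../download_file/force/191/209" -> "factsheet_191_209.pdf"
    name = "factsheet_" + "_".join(href.rstrip("/").split("/")[-2:]) + ".pdf"
    with open(name, "wb") as f:
        f.write(pdf.content)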
You have the page source
html = driver.page_source
but you are not passing it to BeautifulSoup at all, so change that:
soup = BeautifulSoup(driver.page_source, "lxml")
As far as Selenium is concerned, you can use the CSS selector below:
a[href^='https://www.biotechgt.com/download']
In code:
ele = driver.find_element(By.CSS_SELECTOR, "a[href^='https://www.biotechgt.com/download']")
Then you can call ele.click() or do anything else with the web element.
Update 1:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver import ActionChains
driver.maximize_window()
driver.implicitly_wait(30)
driver.get("https://www.biotechgt.com/performance/monthly-factsheets")
wait = WebDriverWait(driver, 10)
# Dismiss the cookie/disclaimer banners before touching the table.
wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()=' Allow all cookies ']"))).click()
driver.execute_script("var scrollingElement = (document.scrollingElement || document.body);scrollingElement.scrollTop = scrollingElement.scrollHeight;")
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Accept']"))).click()
ActionChains(driver).move_to_element(wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "a[href^='https://www.biotechgt.com/download']")))).perform()
for link in driver.find_elements(By.CSS_SELECTOR, "a[href^='https://www.biotechgt.com/download']"):
    print(link.get_attribute('href'))

Building a web scraper using Selenium

I'm building a web scraper using Selenium. I was able to read the data from the table on the first and second pages; however, I cannot read the data on the following pages. Can anybody help me?
Below is the code I am using.
NoSuchElementException: Message: no such element: Unable to locate element:
{"method":"xpath","selector":"//table[1]/tbody[1]/tr[@class='painel' and 1]/td[2]/a[1 and @href='javascript:pesquisar(2);']"}
(Session info: headless chrome=86.0.4240.75)
import time
import requests
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver import ActionChains
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
import os
import json
url = 'https://www.desaparecidos.pr.gov.br/desaparecidos/desaparecidos.do?action=iniciarProcesso&m=false'
option = Options()
driver = webdriver.Chrome('chromedriver', options=option)
driver.get(url)
time.sleep(5)
lista = driver.find_element_by_xpath('//*[@id="list_tabela"]/tbody')
lista_text = lista.text
print(lista_text)
driver.implicitly_wait(5)
driver.find_element_by_xpath("//table[1]/tbody[1]/tr[@class='painel' and 1]/td[2]/a[1 and @href='javascript:pesquisar(2);']").click()
time.sleep(5)
lista = driver.find_element_by_xpath('//*[@id="list_tabela"]/tbody')
lista_text = lista.text
print(lista_text)
driver.implicitly_wait(10)
driver.find_element_by_xpath("//table[1]/tbody[1]/tr[@class='painel' and 1]/td[2]/a[3 and @href='javascript:pesquisar(3);']").click()
time.sleep(10)
lista = driver.find_element_by_xpath('//*[@id="list_tabela"]/tbody')
lista_text = lista.text
print(lista_text)
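Hard-coding one XPath per page is fragile. One possible approach (a rough sketch, assuming the pagination links keep the javascript:pesquisar(n) pattern from the code above) is to loop over the pages and click each link by its href until no more pages exist:
from selenium.common.exceptions import NoSuchElementException
page = 2
while True:
    lista = driver.find_element_by_xpath('//*[@id="list_tabela"]/tbody')
    print(lista.text)
    try:
        # Find the link for the next page by its href rather than its position in the table.
        next_link = driver.find_element_by_xpath(f'//a[@href="javascript:pesquisar({page});"]')
    except NoSuchElementException:
        break  # no link for the next page -> we are done
    next_link.click()
    time.sleep(5)
    page += 1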

Selenium scraping JS loaded pages

I'm trying to scrape some of the JS-loaded data from https://surviv.io/stats/player787, such as the total number of kills. Could someone tell me how I can scrape the JS-loaded data with Selenium? Thanks.
EDIT: Here is some of the code
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('https://surviv.io/stats/player787')
b = browser.find_element_by_tag_name('tr')
The 'tr' element that contains the data I want is not grabbed by Selenium.
To get the count of kills, induce WebDriverWait with visibility_of_all_elements_located():
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('https://surviv.io/stats/player787')
allkills = WebDriverWait(browser,20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='card-mode-stat-name' and text()='KILLS']/following-sibling::div[1]")))
for item in allkills:
    print(item.text)
The reason it's not finding it is that the page isn't fully rendered. You can add a wait with Selenium so it will not move on until the specified element has rendered first.
Also, if it's in a <table> tag, let pandas do the parsing for you: it uses BeautifulSoup under the hood to pull out the <table>, <th>, <tr>, and <td> tags and returns them as a list of DataFrames once you give it the rendered HTML source:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
import pandas as pd
browser = webdriver.Chrome('C:/chromedriver_win32/chromedriver.exe')
browser.get('https://surviv.io/stats/player787')
delay = 3 # seconds
WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.CLASS_NAME, 'player-stats-overview')))
df = pd.read_html(browser.page_source)[0]
print (df.loc[0,'Kills'])
browser.close()
Output:
18884
print (df)
Wins Kills Games K/G
0 638 18884 8896 2.1
You could avoid the overhead of a browser and simply mimic the POST request the page makes.
import requests
headers = {'content-type': 'application/json; charset=UTF-8'}
data = {"slug":"player787","interval":"all","mapIdFilter":"-1"}
r = requests.post('https://surviv.io/api/user_stats', headers=headers, json=data)
data = r.json()
desired_stats = ['wins', 'kills', 'games', 'kpg']
for stat in desired_stats:
    print(stat, ': ', data[stat])
For OP: you can see the payload in the browser's network tab by clicking on the appropriate XHR request for the URL used in my answer (you need to scroll down to see the payload info).
To scrape the values 652, 19152, 8926, 2.1, etc. from JS-loaded pages, you have to induce WebDriverWait for visibility_of_all_elements_located(), and you can use either of the following locator strategies:
Using CSS_SELECTOR:
driver.get('https://surviv.io/stats/player787')
print([my_elem.get_attribute("innerHTML") for my_elem in WebDriverWait(driver, 5).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "table.player-stats-overview td")))])
Using XPATH:
driver.get('https://surviv.io/stats/player787')
print([my_elem.get_attribute("innerHTML") for my_elem in WebDriverWait(driver, 5).until(EC.visibility_of_all_elements_located((By.XPATH, "//table[@class='player-stats-overview']//td")))])
Console Output:
['652', '19152', '8926', '2.1']
Note: you have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

Not able to get hidden contents of a website

I am trying to scrape a website with the help of BeautifulSoup. I am not able to get the contents of the website, even though they show up in the source code when I inspect the site.
import requests
from bs4 import BeautifulSoup
url1 = 'https://recruiting.ultipro.com/usg1006/JobBoard/dfc53730-57d1-3460-336f-ddafabd108f3/?q=&o=postedDateDesc'
response1 = requests.get(url1)
print(response1.text[:500])
html_soup1 = BeautifulSoup(response1.text, 'html.parser')
type(html_soup1)
all_info1 = html_soup1.find("div", {"data-bind": "foreach: opportunities"})
all_info1
all_automation1 = all_info1.find_all("div",{"data-automation":"opportunity"})
all_automation1
In the source code there are "job-title", "location", "description", and other details, but I am not able to see the same details in the HTML contents returned by requests.
You should try something like this, or anything similar, to fetch the titles from that page:
import time
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()
driver.get('https://recruiting.ultipro.com/usg1006/JobBoard/dfc53730-57d1-3460-336f-ddafabd108f3/?q=&o=postedDateDesc')
time.sleep(3)  # let the browser load its content
soup = BeautifulSoup(driver.page_source,'lxml')
for item in soup.select("h3 .opportunity-link"):
    print(item.text)
driver.quit()