How to scrape a data table that has multiple pages using Selenium?

I'm extracting NBA stats from my Yahoo fantasy account. Below is the code I wrote in a Jupyter notebook using Selenium. Each page shows 25 players, and there are 720 players in total. I wrote a for loop that scrapes players in increments of 25 instead of one by one.
for k in range(0, 725, 25):
    Players = driver.find_elements_by_xpath('//tbody/tr/td[2]/div/div/div/div/a')
    Team_Position = driver.find_elements_by_xpath('//span[@class="Fz-xxs"]')
    Games_Played = driver.find_elements_by_xpath('//tbody/tr/td[7]/div')
    Minutes_Played = driver.find_elements_by_xpath('//tbody/tr/td[11]/div')
    FGM_A = driver.find_elements_by_xpath('//tbody/tr/td[12]/div')
    FTM_A = driver.find_elements_by_xpath('//tbody/tr/td[14]/div')
    Three_Points = driver.find_elements_by_xpath('//tbody/tr/td[16]/div')
    PTS = driver.find_elements_by_xpath('//tbody/tr/td[17]/div')
    REB = driver.find_elements_by_xpath('//tbody/tr/td[18]/div')
    AST = driver.find_elements_by_xpath('//tbody/tr/td[19]/div')
    ST = driver.find_elements_by_xpath('//tbody/tr/td[20]/div')
    BLK = driver.find_elements_by_xpath('//tbody/tr/td[21]/div')
    TO = driver.find_elements_by_xpath('//tbody/tr/td[22]/div')
    NBA_Stats = []
    for i in range(len(Players)):
        players_stats = {'Name': Players[i].text,
                         'Position': Team_Position[i].text,
                         'GP': Games_Played[i].text,
                         'MP': Minutes_Played[i].text,
                         'FGM/A': FGM_A[i].text,
                         'FTM/A': FTM_A[i].text,
                         '3PTS': Three_Points[i].text,
                         'PTS': PTS[i].text,
                         'REB': REB[i].text,
                         'AST': AST[i].text,
                         'ST': ST[i].text,
                         'BLK': BLK[i].text,
                         'TO': TO[i].text}
    driver.get('https://basketball.fantasysports.yahoo.com/nba/28951/players?status=ALL&pos=P&cut_type=33&stat1=S_AS_2021&myteam=0&sort=AR&sdir=1&count=' + str(k))
The browser goes page by page, and after it's done I print out the results, but only 1 player is scraped. What did I do wrong?

It's hard to see what the issue is without looking at the original page (can you provide a URL?). However, looking at this:
next = driver.find_element_by_xpath('//a[@id="yui_3_18_1_1_1636840807382_2187"]')
"1636840807382" looks like a JavaScript timestamp, so I would guess that the id you've hardcoded there is dynamically generated, and the element "yui_3_18_1_1_1636840807382_2187" no longer exists by the time you look for it.

Related

Selenium web scraping on Betway

I am trying to web-scrape this website: https://va.betway.com/sports/category/basketball/usa/nba?tab=matches. I am unable to get this element (the image of the game stats).
This is my code snippet:
from selenium import webdriver
from selenium.webdriver import FirefoxOptions
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.common.by import By

website_hr = 'https://va.betway.com/sports/category/basketball/usa/nba?tab=matches'

s = Service("./drivers/geckodriver")
options = FirefoxOptions()
options.headless = True
browser = webdriver.Firefox(service=s, options=options)
browser.get(website_hr)
print('Title: %s' % browser.title)
player_prop0 = browser.find_elements(by='id', value='root')
player_prop2 = browser.find_elements(by=By.CLASS_NAME, value='sc-eZMymg ktcjyc')
I get no value from player_prop2 and player_prop0.
How can I get the data on this page? Thank you
I tried using the id and the class name to get the game lines for the NBA games.
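For what it's worth, one direction to try (assuming the page renders its content with JavaScript after the initial load, which the empty results suggest): By.CLASS_NAME expects a single class name, so the compound 'sc-eZMymg ktcjyc' is better expressed as a CSS selector, combined with an explicit wait. This is only a sketch; the selector simply reuses the two class names from the question, and the 30-second timeout is arbitrary:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until at least one element carrying both classes is present, then grab them all
wait = WebDriverWait(browser, 30)
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, '.sc-eZMymg.ktcjyc')))
player_prop2 = browser.find_elements(By.CSS_SELECTOR, '.sc-eZMymg.ktcjyc')
print(len(player_prop2))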

Can Beautiful Soup be used to find elements hidden by other wrapping elements?

I would like to extract the text of the author affiliations on this page using Beautiful Soup.
I know of a workaround using Selenium to simply click on the 'show more' link and scan the page again. I'm not sure what kind of elements these are (hidden?), as they only appear in the inspector after clicking the button.
Is there a way to extract this info using just Beautiful Soup, or do I need Selenium or something equivalent to reveal the elements in the HTML?
from bs4 import BeautifulSoup
import requests

url = 'https://www.sciencedirect.com/science/article/abs/pii/S0920379621007596'
r = requests.get(url)
sp = BeautifulSoup(r.content, 'html.parser')
author_data = sp.find('div', id='author-group')
affiliations = author_data.find('dl', class_='affiliation').text
print(affiliations)
That info is within a script tag, though you need to map the affiliation letters to the actual affiliations. The code below extracts the JavaScript object housing the info you want and handles it with the json library.
There is then a series of steps to dynamically determine which indices hold the info of interest, after which a constructed mapping of letters to affiliations is used to assign the correct affiliation to each author.
The author first and last names are also dynamically ascertained and joined together with a space.
The intention was to avoid hardcoding indices which might change over time.
import re
import json
import requests

r = requests.get('https://www.sciencedirect.com/science/article/abs/pii/S0920379621007596',
                 headers={'User-Agent': 'Mozilla/5.0'})

# grab the JavaScript object embedded in the page and parse it as JSON
data = json.loads(re.search(r'(\{"abstracts".*})', r.text).group(1))

base = [i for i in data['authors']['content']
        if i.get('#name') == 'author-group'][0]['$$']
affiliation_data = [i for i in base if i['#name'] == 'affiliation']
author_data = [i for i in base if i['#name'] == 'author']

# author given names and surnames, in document order
name_info = [i['_'] for author in author_data for i in author['$$']
             if i['#name'] in ['given-name', 'surname']]

# map the affiliation labels (letters) to the affiliation text
affiliations = dict(zip(
    [j['_'] for i in affiliation_data for j in i['$$'] if j['#name'] == 'label'],
    [j['_'] for i in affiliation_data for j in i['$$']
     if isinstance(j, dict) and '_' in j and j['_'][0].isupper()]))
# print(affiliations)

# join given name + surname and pair each author with their affiliation
author_affiliations = dict(zip(
    [' '.join([i[0], i[1]]) for i in zip(name_info[0::2], name_info[1::2])],
    [affiliations[j['_']] for author in author_data for i in author['$$']
     if i['#name'] == 'cross-ref' for j in i['$$'] if j['_'] != '⁎']))
print(author_affiliations)

Scrapy only shows the first result of each page

I need to scrape the items on the first page, then follow the next button to the second page and scrape it, and so on.
This is my code, but it only scrapes the first item on each page; if there are 20 pages, it visits every page but scrapes only the first item.
Could anyone please help me?
Thank you.
Apologies for my English.
class CcceSpider(CrawlSpider):
    name = 'ccce'
    item_count = 0
    allowed_domain = ['www.example.com']
    start_urls = ['https://www.example.com./afiliados value=&categoria=444&letter=']

    rules = {
        # rules for each item
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//li[@class="pager-next"]/a')),
             callback='parse_item', follow=True),
    }

    def parse_item(self, response):
        ml_item = CcceItem()
        # product info
        ml_item['nombre'] = response.xpath('normalize-space(//div[@class="news-col2"]/h2/text())').extract()
        ml_item['url'] = response.xpath('normalize-space(//div[@class="website"]/a/text())').extract()
        ml_item['correo'] = response.xpath('normalize-space(//div[@class="email"]/a/text())').extract()
        ml_item['descripcion'] = response.xpath('normalize-space(//div[@class="news-col4"]/text())').extract()
        self.item_count += 1
        if self.item_count > 5:
            # insert_table(ml_item)
            raise CloseSpider('item_exceeded')
        yield ml_item
As you haven't given a working target URL, I'm guessing a bit here, but most probably this is the problem:
parse_item should be a parse_page (and act accordingly).
Scrapy downloads a full page which has, according to your description, multiple items, and then passes it as a response object to your parse method.
It's your parse method's responsibility to process the whole page by iterating over the items displayed on it and yielding multiple scraped items accordingly.
The Scrapy documentation has several good examples of this, one is here: https://doc.scrapy.org/en/latest/topics/selectors.html#working-with-relative-xpaths
Basically your code structure in def parse_XYZ should look like this:
def parse_page(self, response):
    items_on_page = response.xpath('//...')
    for sel_item in items_on_page:
        ml_item = CcceItem()
        # product info
        ml_item['nombre'] = # ...
        # ...
        yield ml_item
Insert the right xpaths for getting all items on the page and adjust your item xpaths and you're ready to go.
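As a rough illustration of the relative-XPath idea, using the field names and XPaths from the question; the item-container XPath '//div[@class="item"]' is only a placeholder, since the real page structure isn't shown:
def parse_page(self, response):
    # '//div[@class="item"]' is a placeholder: use the XPath of whatever
    # element wraps a single item on the real page
    for sel_item in response.xpath('//div[@class="item"]'):
        ml_item = CcceItem()
        # note the leading './/': these XPaths are relative to sel_item
        ml_item['nombre'] = sel_item.xpath('normalize-space(.//div[@class="news-col2"]/h2/text())').extract()
        ml_item['url'] = sel_item.xpath('normalize-space(.//div[@class="website"]/a/text())').extract()
        ml_item['correo'] = sel_item.xpath('normalize-space(.//div[@class="email"]/a/text())').extract()
        ml_item['descripcion'] = sel_item.xpath('normalize-space(.//div[@class="news-col4"]/text())').extract()
        yield ml_item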

Retrieve all videos added to all playlists by YouTube user

I've hit a wall with the way I would like to use the YouTube data API. I have a user account that is trying to act as an 'aggregator', by adding videos from various other channels into one of about 15 playlists, based on categories. My problem is, I can't get all these videos into a single feed, because they belong to various YouTube users. I'd like to get them all into a single list, so I could sort that master list by most recent and most popular, to populate different views in my web app.
How can I get a list of all the videos that a user has added to any of their playlists?
YouTube must track this kind of stuff, because if you go into the "Feed" section of any user's page at http://www.youtube.com/ it gives you a stream of activity that includes videos added to playlists.
To be clear, I don't want to fetch a list of videos uploaded by just this user, so http://gdata.../<user>/uploads won't work. Since there are a number of different playlists, http://gdata.../<user>/playlists won't work either, because I would need to make about 15 requests each time I wanted to check for new videos.
There seems to be no way to retrieve a list of all videos that a user has added to all of their playlists. Can somebody think of a way to do this that I might have overlooked?
Something like this works for retrieving YouTube links from a playlist. It still needs improvements.
import urllib2
import xml.etree.ElementTree as et
import re
import os

more = 1
id_playlist = raw_input("Enter youtube playlist id: ")
number_of_iteration = input("How much video links: ")
number = number_of_iteration / 50
number2 = number_of_iteration % 50
if (number2 != 0):
    number3 = number + 1
else:
    number3 = number
start_index = 1

while more <= number3:
    # reading youtube playlist page
    if (more != 1):
        start_index += 50
    str_start_index = str(start_index)
    req = urllib2.Request('http://gdata.youtube.com/feeds/api/playlists/' + id_playlist +
                          '?v=2&&start-index=' + str_start_index + '&max-results=50')
    response = urllib2.urlopen(req)
    the_page = response.read()

    # writing page in .xml
    dat = open("web_content.xml", "w")
    dat.write(the_page)
    dat.close()

    # searching page for links
    tree = et.parse('web_content.xml')
    all_links = tree.findall('*/{http://www.w3.org/2005/Atom}link[@rel="alternate"]')

    # writing links + attributes to .txt
    if (more == 1):
        till_links = 50
    else:
        till_links = start_index + 50
    str_till_links = str(till_links)
    dat2 = open("links-" + str_start_index + "to" + str_till_links + ".txt", "w")
    for links in all_links:
        str1 = (str(links.attrib) + "\n")
        dat2.write(str1)
    dat2.close()

    # getting only links
    f = open("links-" + str_start_index + "to" + str_till_links + ".txt", "r")
    link_all = f.read()
    new_string = link_all.replace("{'href': '", "")
    new_string2 = new_string.replace("', 'type': 'text/html', 'rel': 'alternate'}", "")
    f.close()

    # writing links to .txt
    f = open("links-" + str_start_index + "to" + str_till_links + ".txt", "w")
    f.write(new_string2)
    f.close()
    more += 1

os.remove('web_content.xml')
print "Finished!"

How to get an outline view in the Sublime Text editor?

How do I get an outline view in the Sublime Text editor for Windows?
The minimap is helpful, but I miss a traditional outline (a clickable list of all the functions in my code, in the order they appear, for quick navigation and orientation).
Maybe there is a plugin, add-on or similar? It would also be nice if you could briefly name which steps are necessary to make it work.
There is a duplicate of this question on the sublime text forums.
Hit CTRL+R, or CMD+R for Mac, for the function list. This works in Sublime Text 1.3 or above.
A plugin named Outline is available in package control, try it!
https://packagecontrol.io/packages/Outline
Note: it does not work in multi rows/columns mode.
For multiple rows/columns work use this fork:
https://github.com/vlad-wonderkidstudio/SublimeOutline
I use the fold all action. It will minimize everything to the declaration, I can see all the methods/functions, and then expand the one I'm interested in.
I briefly looked at the Sublime Text 3 API, and view.find_by_selector(selector) seems to return a list of regions.
So I guess a plugin that displays the outline/structure of your file is possible; a sketch of the idea follows below.
Note: the function name display plugin could be used as inspiration for extracting class/method names, or ClassHierarchy for extracting the outline structure.
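A minimal sketch of such a plugin, assuming the Sublime Text 3 Python API; the command name ShowOutlineCommand and the scope selector "entity.name.function" are my choices for illustration, not something taken from the answers above:
import sublime
import sublime_plugin

class ShowOutlineCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        view = self.view
        # find_by_selector returns the regions whose syntax scope matches the selector
        regions = view.find_by_selector("entity.name.function")
        names = [view.substr(r) for r in regions]

        def on_select(index):
            # index is -1 if the quick panel was cancelled
            if index >= 0:
                view.sel().clear()
                view.sel().add(regions[index])
                view.show_at_center(regions[index])

        # show the clickable list of function names
        view.window().show_quick_panel(names, on_select)
Saved as a .py file under Packages/User, it can then be bound to a key or menu entry as the "show_outline" command.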
If you want to be able to print out or save the outline, the Ctrl/Cmd+R approach is not very useful.
One can do a simple Find All on the regex ^[^\n]*function[^{]+{ or some variant of it to suit the language and situation you are working in.
Once you do the Find All, you can copy and paste the result into a new document, and depending on the number of functions it should not take long to tidy up.
The answer is far from perfect, particularly for cases where comments contain the word function (or its equivalent), but I do think it's helpful.
With a very quick edit, this is the result I got on what I'm working on now:
PathMaker.prototype.start = PathMaker.prototype.initiate = function(point){};
PathMaker.prototype.path = function(thePath){};
PathMaker.prototype.add = function(point){};
PathMaker.prototype.addPath = function(path){};
PathMaker.prototype.go = function(distance, angle){};
PathMaker.prototype.goE = function(distance, angle){};
PathMaker.prototype.turn = function(angle, distance){};
PathMaker.prototype.continue = function(distance, a){};
PathMaker.prototype.curve = function(angle, radiusX, radiusY){};
PathMaker.prototype.up = PathMaker.prototype.north = function(distance){};
PathMaker.prototype.down = PathMaker.prototype.south = function(distance){};
PathMaker.prototype.east = function(distance){};
PathMaker.prototype.west = function(distance){};
PathMaker.prototype.getAngle = function(point){};
PathMaker.prototype.toBezierPoints = function(PathMakerPoints, toSource){};
PathMaker.prototype.extremities = function(points){};
PathMaker.prototype.bounds = function(path){};
PathMaker.prototype.tangent = function(t, points){};
PathMaker.prototype.roundErrors = function(n, acurracy){};
PathMaker.prototype.bezierTangent = function(path, t){};
PathMaker.prototype.splitBezier = function(points, t){};
PathMaker.prototype.arc = function(start, end){};
PathMaker.prototype.getKappa = function(angle, start){};
PathMaker.prototype.circle = function(radius, start, end, x, y, reverse){};
PathMaker.prototype.ellipse = function(radiusX, radiusY, start, end, x, y , reverse/*, anchorPoint, reverse*/ ){};
PathMaker.prototype.rotateArc = function(path /*array*/ , angle){};
PathMaker.prototype.rotatePoint = function(point, origin, r){};
PathMaker.prototype.roundErrors = function(n, acurracy){};
PathMaker.prototype.rotate = function(path /*object or array*/ , R){};
PathMaker.prototype.moveTo = function(path /*object or array*/ , x, y){};
PathMaker.prototype.scale = function(path, x, y /* number X scale i.e. 1.2 for 120% */ ){};
PathMaker.prototype.reverse = function(path){};
PathMaker.prototype.pathItemPath = function(pathItem, toSource){};
PathMaker.prototype.merge = function(path){};
PathMaker.prototype.draw = function(item, properties){};