Requests response object: how to check whether the page has loaded completely (dynamic content)? - BeautifulSoup

I am doing the following. After creating a session I do a simple GET to a page. The problem is that this page is full of dynamic parts, so it takes between 10 and 30 seconds to fully generate the HTML I am interested in. I process the HTML with BeautifulSoup.
If I process the response object too quickly, I don't get the data I want. I have used "sleep" to pause for some time, but I think there should be a better way to check for a complete page load. I cannot rely on the 200 status code, because the dynamic parts inside the main page are still loading.
My code:
import requests
import time

s = requests.session()
r = s.get('URL')
time.sleep(20)
... code to process response object...
I have tried to do it more "elegantly" by checking for a certain tag with a BeautifulSoup search, but it doesn't seem to work.
My code:
from bs4 import BeautifulSoup

title_found = False
while not title_found:
    soupje = BeautifulSoup(r.text, 'html.parser')
    title_found_in_html_full = soupje.find(id='titleView!1Title')
    if title_found_in_html_full is not None:
        title_found_in_html = title_found_in_html_full.get('id')
        if title_found_in_html == 'titleView!1Title':
            title_found = True
Is it true that the response object changes over time as the page is loading?
Any suggestions? Thanks.
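For what it's worth, here is a minimal sketch of that polling idea that re-issues the GET on each pass instead of re-parsing the same text; the 60-second limit and 2-second pause are assumptions, and the element id is taken from the snippet above. Note that r.text is a fixed snapshot of the single response that requests received, so parsing it again and again in a loop will never yield new content:

import time

import requests
from bs4 import BeautifulSoup

s = requests.session()
deadline = time.time() + 60                # assumed upper limit on waiting
element = None
while time.time() < deadline:
    r = s.get('URL')                       # each GET returns a fresh snapshot
    soup = BeautifulSoup(r.text, 'html.parser')
    element = soup.find(id='titleView!1Title')
    if element is not None:
        break                              # the tag we are waiting for has appeared
    time.sleep(2)                          # assumed pause between attempts

If the dynamic parts are produced by client-side JavaScript rather than by a slow server response, polling with requests will never surface them, and a JavaScript-capable tool such as Selenium or Splash (as discussed in the related answers below) is needed.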

Related

Splash return embedded response

I am looking to return an embedded response from a website. This website makes it very difficult to reach this embedded response without JavaScript, so I am hoping to use Splash. I am not interested in returning the rendered HTML, but rather one embedded response. Below is a screenshot of the exact response that I am looking to get back from Splash.
This response returns a JSON object for the site to render. I would like the raw JSON returned from this response; how do I do this in Lua?
Turns out this is a bit tricky. The following is the kludge I have found to do this:
Splash call with a Lua script, called from Scrapy:
scrpitBusinessUnits = """
function main(splash, args)
    splash.request_body_enabled = true
    splash.response_body_enabled = true

    assert(splash:go(args.url))
    assert(splash:wait(18))

    splash:runjs('document.getElementById("RESP_INQA_WK_BUSINESS_UNIT$prompt").click();')
    assert(splash:wait(20))

    return {
        har = splash:har(),
    }
end
"""
yield SplashRequest(
    url=self.start_urls[0],
    callback=self.parse,
    endpoint='execute',
    magic_response=True,
    meta={'handle_httpstatus_all': True},
    args={'lua_source': scrpitBusinessUnits, 'timeout': 90, 'images': 0},
)
This script works by returning the HAR file of the whole page load; it is key to set splash.request_body_enabled = true and splash.response_body_enabled = true to get the actual response content into the HAR file.
The HAR file is just a glorified JSON object with a different name... so:
def parse(self, response):
    harData = json.loads(response.text)
    responseData = harData['har']['log']['entries']
    ...
    # Splash appears to base64-encode large content fields,
    # so you may have to decode the field to load it properly
    bisData = base64.b64decode(bisData['content']['text'])
From there you can search the JSON object for the exact embedded response.
I really don't think this is a very efficient method, but it works.
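For illustration, a rough sketch of how that search over the HAR entries might look. The URL substring used to pick out the embedded response is an assumption based on the element id clicked in the Lua script above, not something taken from the original spider:

import base64
import json

def parse(self, response):
    harData = json.loads(response.text)
    entries = harData['har']['log']['entries']

    for entry in entries:
        # keep only the entry whose request URL matches the endpoint we care about
        if 'RESP_INQA_WK_BUSINESS_UNIT' in entry['request']['url']:
            content = entry['response']['content']
            text = content['text']
            # the HAR format flags base64-encoded bodies with encoding == 'base64'
            if content.get('encoding') == 'base64':
                text = base64.b64decode(text)
            yield {'embedded_response': json.loads(text)}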

How to simulate pressing buttons to keep on scraping more elements with Scrapy

On this page (https://www.realestate.com.kh/buy/), I managed to grab a list of ads, and individually parse their content with this code:
import scrapy


class scrapingThings(scrapy.Spider):
    name = 'scrapingThings'
    # allowed_domains = ['https://www.realestate.com.kh/buy/']
    start_urls = ['https://www.realestate.com.kh/buy/']

    def parse(self, response):
        ads = response.xpath('//*[@class="featured css-ineky e1jqslr40"]//a/@href')
        c = 0
        for url in ads:
            c += 1
            absolute_url = response.urljoin(url.extract())
            self.item = {}
            self.item['url'] = absolute_url
            yield scrapy.Request(absolute_url, callback=self.parse_ad, meta={'item': self.item})

    def parse_ad(self, response):
        # Extract things
        yield {
            # Yield things
        }
However, I'd like to automate the switching from one page to another so I can grab all of the available ads (not only those on the first page, but on all pages) by, I guess, simulating presses of the 1, 2, 3, 4, ..., 50 buttons shown in this screen capture:
Is this even possible with Scrapy? If so, how can one achieve this?
Yes, it's possible. Let me show you two ways of doing it.
You can have your spider select the buttons, get their href values, build a full URL and yield it as a new request.
Here is an example:
def parse(self, response):
    ....
    href = response.xpath('//div[@class="desktop-buttons"]/a[@class="css-owq2hj"]/following-sibling::a[1]/@href').get()
    req_url = response.urljoin(href)
    yield Request(url=req_url, callback=self.parse_ad)
The selector in the example will always return the href of the next page's button: it returns only one value, the href of the page after the one you are currently on (so if you are on page 2, it returns the href of page 3).
On this page the href is a relative URL, so we need to use the response.urljoin() method to build a full URL; it uses the response's URL as the base.
We yield a new request, and the response will be parsed in the callback function you specified.
This requires your callback function to always yield the request for the next page, so it's a recursive solution; a minimal sketch of that pattern follows.
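A minimal sketch of that recursive pattern, assuming (as in the question) that parse() handles the listing pages and parse_ad() handles the individual ads:

def parse(self, response):
    # ... yield requests for the ads on the current listing page ...

    # follow the "next page" button, if there is one
    href = response.xpath('//div[@class="desktop-buttons"]/a[@class="css-owq2hj"]'
                          '/following-sibling::a[1]/@href').get()
    if href:
        yield scrapy.Request(url=response.urljoin(href), callback=self.parse)

The recursion stops naturally on the last page, where the selector finds no following sibling and .get() returns None.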
A simpler approach would be to just observe the pattern of the hrefs and yield all the requests manually. Each button has an href of "/buy/?page={nr}", where {nr} is the page number, so we can change this nr value ourselves and yield all the requests at once.
def parse(self, response):
    ....
    nr_pages = response.xpath('//div[@class="desktop-buttons"]/a[@class="css-1en2dru"]//text()').getall()
    last_page_nr = int(nr_pages[-1])
    for nr in range(2, last_page_nr + 1):
        req_url = f'/buy/?page={nr}'
        yield Request(url=response.urljoin(req_url), callback=self.parse_ad)
nr_pages returns the numbers shown on all the page buttons.
last_page_nr takes the last of those numbers (which is the last available page).
We loop over the range from 2 to last_page_nr (50 in this case) and in each iteration request the page that corresponds to that number.
This way you can make all the requests in your parse method and parse the responses in parse_ad without recursive calls.
Finally, I suggest you take a look at the Scrapy tutorial; it covers several common scraping scenarios.

Access data image URL when the data URL is only obtained upon rendering

I would like to automatically get images that the browser holds as data after the page renders, using their corresponding data URLs.
For example:
You can go to the webpage: https://en.wikipedia.org/wiki/Truck
Using the Web Inspector in Firefox, pick the first thumbnail image on the right.
Now, in the Inspector tab, right-click the img tag, go to Copy and press "Image Data-URL".
Open a new tab, paste and press Enter to see the image from the data URL.
Notice that the data URL is not available in the page source. On the website I want to scrape, the images are rendered after passing through a PHP script. The server returns a 404 response if the images are requested directly via the src attribute.
I believe it should be possible to list the data URLs of the images rendered by the website and download them, but I was unable to find a way to do it.
I normally scrape using the Selenium webdriver with Firefox, coded in Python, but any solution would be welcome.
I managed to work out a solution using the Chrome webdriver with CORS disabled, as with Firefox I could not find a CLI argument to disable it.
The solution executes some JavaScript to redraw the image on a new canvas element and then uses the toDataURL method to get the data URL. To save the image, I convert the base64 data to binary data and write it out as a PNG.
This apparently solved the issue in my use case.
Code to get the first truck image:
from binascii import a2b_base64

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--disable-web-security")
chrome_options.add_argument("--disable-site-isolation-trials")

driver = webdriver.Chrome(options=chrome_options)
driver.get("https://en.wikipedia.org/wiki/Truck")

img = driver.find_element_by_xpath("/html/body/div[3]/div[3]"
                                   "/div[5]/div[1]/div[4]/div"
                                   "/a/img")

img_base64 = driver.execute_script(
    """
    const img = arguments[0];
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');

    canvas.width = img.width;
    canvas.height = img.height;
    ctx.drawImage(img, 0, 0);

    data_url = canvas.toDataURL('image/png');
    return data_url
    """,
    img)

binary_data = a2b_base64(img_base64.split(',')[1])
with open('image.png', 'wb') as save_img:
    save_img.write(binary_data)
Also, I found that the data URL you get with the procedure described in my question is generated by the Firefox Web Inspector on request, so it is not possible to get a list of data URLs (that are not within the page source) as I first thought.
BeautifulSoup is a good library for this kind of problem. When the data you want is present in the HTML that the server returns, BeautifulSoup with requests is usually faster than Selenium: this task takes around 10 seconds with BeautifulSoup, whereas Selenium would take approximately 15-20 seconds, so BeautifulSoup is the better choice here. Here is how you do it with BeautifulSoup:
from bs4 import BeautifulSoup
import requests
import time

st = time.time()

src = requests.get('https://en.wikipedia.org/wiki/Truck').text
soup = BeautifulSoup(src, 'html.parser')
divs = soup.find_all('div', class_="thumbinner")

count = 1
for x in divs:
    # srcset holds "url1 1.5x, url2 2x"; take the 2x URL
    url = x.a.img['srcset']
    url = url.split('1.5x,')[-1]
    url = url.split('2x')[0]
    url = "https:" + url
    url = url.replace(" ", "")

    path = f"D:\\Truck_Img_{count}.png"
    response = requests.get(url)
    file = open(path, "wb")
    file.write(response.content)
    file.close()
    count += 1

print(f"Execution Time = {time.time()-st} seconds")
Output:
Execution Time = 9.65831208229065 seconds
29 Images. Here is the first image:
Hope this helps!

How to make Scrapy wait for a request result before continuing to the next line

In my spider, I have some code like this:
next_page_url = response.follow(
    url=self.start_urls[0][:-1] + str(page_number + 1),
    callback=self.next_page
)

if next_page_url:
next_page looks like this:
def next_page(self, response):
    next_page_count = len(<xpath I use>)
    if next_page_count > 0:
        return True
    else:
        return False
I need next_page_url to be set before I can continue with the next segment of code.
This code essentially checks whether the current page is the last page, for some file-writing purposes.
The answer I ended up going with:
Instead of checking whether the next page exists and then continuing with the current request, I made the request to the next page, checked whether I got a response, and if I didn't, I treated the previous page as the final page.
I did this by using the meta keyword argument of Scrapy's Request (via response.follow()) to pass the current page's tracking data into the new request.
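A minimal sketch of that idea; the spider name, start URL and row selector are assumptions, while the URL construction mirrors the snippet in the question:

import scrapy

class PagedSpider(scrapy.Spider):
    name = 'paged'                                          # assumed name
    start_urls = ['https://example.com/results?page=1']     # assumed URL pattern

    def parse(self, response):
        page_number = response.meta.get('page_number', 1)
        rows = response.xpath('//div[@class="result"]')     # assumed selector

        if not rows:
            # no results here: the previous page was the final page
            self.logger.info('Last page was %d', page_number - 1)
            return

        # ... write the rows from this page to file ...

        # request the next page, carrying the tracking data along in meta
        yield response.follow(
            url=self.start_urls[0][:-1] + str(page_number + 1),
            callback=self.parse,
            meta={'page_number': page_number + 1},
        )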

Scrapy Running Results

Just getting started with Scrapy, I'm hoping for a nudge in the right direction.
I want to scrape data from here:
https://www.sportstats.ca/display-results.xhtml?raceid=29360
This is what I have so far:
import scrapy
import re


class BlogSpider(scrapy.Spider):
    name = 'sportstats'
    start_urls = ['https://www.sportstats.ca/display-results.xhtml?raceid=29360']

    def parse(self, response):
        headings = []
        results = []

        tables = response.xpath('//table')
        headings = list(tables[0].xpath('thead/tr/th/span/span/text()').extract())
        rows = tables[0].xpath('tbody/tr[contains(@class, "ui-widget-content ui-datatable")]')
        for row in rows:
            result = []
            tds = row.xpath('td')
            for td in enumerate(tds):
                if headings[td[0]].lower() == 'comp.':
                    content = None
                elif headings[td[0]].lower() == 'view':
                    content = None
                elif headings[td[0]].lower() == 'name':
                    content = td[1].xpath('span/a/text()').extract()[0]
                else:
                    try:
                        content = td[1].xpath('span/text()').extract()[0]
                    except:
                        content = None
                result.append(content)
            results.append(result)
        for result in results:
            print(result)
Now I need to move on to the next page, which I can do in a browser by clicking the "right arrow" at the bottom, which I believe is the following li:
<li><a id="mainForm:j_idt369" href="#" class="ui-commandlink ui-widget fa fa-angle-right" onclick="PrimeFaces.ab({s:"mainForm:j_idt369",p:"mainForm",u:"mainForm:result_table mainForm:pageNav mainForm:eventAthleteDetailsDialog",onco:function(xhr,status,args){hideDetails('athlete-popup');showDetails('event-popup');scrollToTopOfElement('mainForm\\:result_table');;}});return false;"></a>
How can I get scrapy to follow that?
If you open the URL in a browser with JavaScript disabled, you won't be able to move to the next page. As you can see inside the li tag, there is some JavaScript that has to be executed in order to get the next page.
To get around this, the first option is usually to try to identify the request generated by the JavaScript. In your case it should be easy: just analyze the JavaScript code and replicate the request with Python in your spider. If you can do that, you can send the same request from Scrapy. If you can't, the next option is usually to use some package with JavaScript/browser emulation, something like ScrapyJS or Scrapy + Selenium.
You're going to need to perform a callback. Generate the URL from the XPath of the 'next page' button, so url = response.xpath(<xpath to next_page_button>), and then when you're finished scraping that page, do yield scrapy.Request(url, callback=self.parse_next_page). Finally, you create a new function called def parse_next_page(self, response):.
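Sketched out, the pattern that answer describes might look like the following; the XPath is a placeholder, and as noted above, on this particular site the link is driven by JavaScript, so this only works where the button carries a real href:

def parse(self, response):
    # ... scrape the rows on the current page ...

    # placeholder XPath for the "next page" link
    next_href = response.xpath('//li/a[contains(@class, "fa-angle-right")]/@href').get()
    if next_href:
        yield scrapy.Request(response.urljoin(next_href), callback=self.parse_next_page)

def parse_next_page(self, response):
    # parse the next page here (or simply reuse self.parse as the callback)
    ...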
A final note: if the content happens to be rendered by JavaScript (and you can't scrape it even though you're sure you're using the correct XPath), check out my repo on using Splash with Scrapy: https://github.com/Liamhanninen/Scrape