Splash return embedded response - scrapy

I am looking to return an embedded response from a website. This website makes it very difficult to reach the embedded response without JavaScript, so I am hoping to use Splash. I am not interested in returning the rendered HTML, but rather one embedded response. Below is a screenshot of the exact response that I am looking to get back from Splash.
This response returns a JSON object that the site then renders; I would like the raw JSON from that response. How do I do this in Lua?

Turns out this is a bit tricky. The following is the kludge I have found to do this:
Splash call with a Lua script, called from Scrapy:
scrpitBusinessUnits = """
function main(splash, args)
    splash.request_body_enabled = true
    splash.response_body_enabled = true

    assert(splash:go(args.url))
    assert(splash:wait(18))

    splash:runjs('document.getElementById("RESP_INQA_WK_BUSINESS_UNIT$prompt").click();')
    assert(splash:wait(20))

    return {
        har = splash:har(),
    }
end
"""
yield SplashRequest(
    url=self.start_urls[0],
    callback=self.parse,
    endpoint='execute',
    magic_response=True,
    meta={'handle_httpstatus_all': True},
    args={'lua_source': scrpitBusinessUnits, 'timeout': 90, 'images': 0},
)
This script works by returning the HAR file of the whole page load; it is key to set splash.request_body_enabled = true and splash.response_body_enabled = true, otherwise the HAR file will not contain the actual response content.
The HAR file is just a glorified JSON object with a different name... so:
import base64
import json

def parse(self, response):
    harData = json.loads(response.text)
    responseData = harData['har']['log']['entries']
    ...
    # Splash appears to base64-encode large content fields,
    # so you may have to decode the field to load it properly
    # (bisData is one of the entries found in the elided search above)
    bisData = base64.b64decode(bisData['content']['text'])
From there you can search the JSON object for the exact embedded response.
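For example, a rough sketch of that search (the 'BUSINESS_UNIT' URL filter is only a placeholder for whatever identifies your target request; the 'encoding' check handles the base64-encoded bodies mentioned above):

import base64
import json

def parse(self, response):
    harData = json.loads(response.text)
    for entry in harData['har']['log']['entries']:
        # placeholder filter: match whatever identifies the target request URL
        if 'BUSINESS_UNIT' not in entry['request']['url']:
            continue
        content = entry['response']['content']
        body = content.get('text', '')
        # HAR marks base64-encoded bodies with an "encoding" field
        if content.get('encoding') == 'base64':
            body = base64.b64decode(body)
        yield {'embedded': json.loads(body)}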
I really don't think this is a very efficient method, but it works.

Related

How to simulate pressing buttons to keep on scraping more elements with Scrapy

On this page (https://www.realestate.com.kh/buy/), I managed to grab a list of ads, and individually parse their content with this code:
import scrapy

class scrapingThings(scrapy.Spider):
    name = 'scrapingThings'
    # allowed_domains = ['https://www.realestate.com.kh/buy/']
    start_urls = ['https://www.realestate.com.kh/buy/']

    def parse(self, response):
        ads = response.xpath('//*[@class="featured css-ineky e1jqslr40"]//a/@href')
        c = 0
        for url in ads:
            c += 1
            absolute_url = response.urljoin(url.extract())
            self.item = {}
            self.item['url'] = absolute_url
            yield scrapy.Request(absolute_url, callback=self.parse_ad, meta={'item': self.item})

    def parse_ad(self, response):
        # Extract things
        yield {
            # Yield things
        }
However, I'd like to automate the switching from one page to another so I can grab all of the ads available (not only on the first page, but on all pages), by, I guess, simulating presses of the 1, 2, 3, 4, ..., 50 buttons shown on this screen capture:
Is this even possible with Scrapy? If so, how can one achieve this?
Yes it's possible. Let me show you two ways of doing it:
You can have your spider select the buttons, get their @href values, build a full URL and yield it as a new request.
Here is an example:
def parse(self, response):
    ....
    href = response.xpath('//div[@class="desktop-buttons"]/a[@class="css-owq2hj"]/following-sibling::a[1]/@href').get()
    req_url = response.urljoin(href)
    yield Request(url=req_url, callback=self.parse_ad)
The selector in the example always returns the @href of the next page's button (it returns only one value: the @href of the button immediately after the current page's highlighted button).
On this page the href is a relative URL, so we need the response.urljoin() method to build a full URL, using the response's URL as the base.
We yield a new request, and its response is parsed in the callback function you set.
This requires your callback function to always yield the request for the next page, so it's a recursive solution, as sketched below.
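A rough sketch of that recursive pattern (the spider name is illustrative and the ad extraction is elided):

import scrapy

class RealestatePagesSpider(scrapy.Spider):   # illustrative spider, not the asker's exact one
    name = 'realestate_pages'
    start_urls = ['https://www.realestate.com.kh/buy/']

    def parse(self, response):
        # ... extract the ads on the current listing page here ...

        # follow the "next page" button and parse that page the same way
        href = response.xpath(
            '//div[@class="desktop-buttons"]/a[@class="css-owq2hj"]'
            '/following-sibling::a[1]/@href'
        ).get()
        if href:  # the last page has no next button
            yield scrapy.Request(response.urljoin(href), callback=self.parse)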
A simpler approach would be to just observe the pattern of the hrefs and yield all the requests manually. Each button has an href of "/buy/?page={nr}", where {nr} is the page number, so we can simply vary this nr value and yield all the requests at once.
def parse(self, response):
    ....
    nr_pages = response.xpath('//div[@class="desktop-buttons"]/a[@class="css-1en2dru"]//text()').getall()
    last_page_nr = int(nr_pages[-1])
    for nr in range(2, last_page_nr + 1):
        req_url = f'/buy/?page={nr}'
        yield Request(url=response.urljoin(req_url), callback=self.parse_ad)
nr_pages gets the text of all the numbered buttons.
last_page_nr takes the last number (which is the last available page).
We loop over the range from 2 to last_page_nr (50 in this case) and in each iteration request the page that corresponds to that number.
This way you can make all the requests in your parse method and parse the responses in parse_ad, without recursive calls.
Finally, I suggest you take a look at the Scrapy tutorial; it covers several common scraping scenarios.

Having difficulty converting requests.models.Response to scrapy.selector.unified.Selector

This code
import requests
url = 'https://docs.scrapy.org/en/latest/_static/selectors-sample1.html'
response = requests.get(url)
gets me a requests.models.Response instance, from which I can extract data with scrapy
from scrapy import Selector
sel = Selector(response=response)
sel.xpath('//div')
A post gives a great way to access a website. Here is just part of it.
response = requests.get('https://www.zhihu.com/api/v4/columns/wangzhenotes/items', headers=headers)
print(response.json())
With that approach, I got the content from that site.
However, the same code cannot extract data from the Response instance
sel = Selector(response=response)
len(sel.xpath('//div'))
I just got 0. How do I fix this?
The result of this request
response = requests.get('https://www.zhihu.com/api/v4/columns/wangzhenotes/items', headers=headers)
is a JSON object, and of course it does not contain any div elements.
To get the required information you have to parse that JSON:
response = requests.get('https://www.zhihu.com/api/v4/columns/wangzhenotes/items', headers=headers)
data = response.json()['data']
Then you need to loop through the data list and take the fields you need, for example:
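(A minimal sketch; the headers dict and the 'title' key are only placeholders, so inspect the JSON to see the real field names.)

import requests

headers = {'User-Agent': 'Mozilla/5.0'}  # stand-in for the headers from the linked post
response = requests.get(
    'https://www.zhihu.com/api/v4/columns/wangzhenotes/items',
    headers=headers,
)
for item in response.json().get('data', []):
    # 'title' is only an illustrative key; check the actual JSON structure
    print(item.get('title'))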
Again, if you want to use Scrapy, you can make requests to the URL https://www.zhihu.com/api/v4/columns/wangzhenotes/items
and then read the response as JSON in the parse method:
j_obj = json.loads(response.body_as_unicode())
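Putting that together, a minimal spider sketch might look like this (the spider name and the yielded fields are illustrative; on recent Scrapy versions response.text replaces the deprecated body_as_unicode()):

import json
import scrapy

class ZhihuItemsSpider(scrapy.Spider):   # illustrative name
    name = 'zhihu_items'
    start_urls = ['https://www.zhihu.com/api/v4/columns/wangzhenotes/items']

    def parse(self, response):
        # response.text is the modern equivalent of body_as_unicode()
        j_obj = json.loads(response.text)
        for item in j_obj.get('data', []):
            # yield whichever fields you actually need
            yield {'raw': item}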

Requests response object: how to check page loaded completely (dynamic content)?

I am doing the following. After creating a session I do a simple GET to a page. The problem is, this page is full of dynamic parts, so it takes between 10 and 30 seconds to fully generate the HTML I am interested in. I process the HTML with BeautifulSoup.
If I process the response object too quickly, I don't get the data I want. I have used "sleep" to pause for some time, but I think there should be a better way to check for a complete page load. I cannot depend on a 200 status code, because the dynamic parts inside the main page are still loading.
My code:
import time
import requests

s = requests.session()
r = s.get('URL')
time.sleep(20)
# ... code to process the response object ...
I have tried to do it more "elegantly" to check for a certain tag through BeautifulSoup search, but doesn't seem to work.
My code:
title_found = False
while title_found == False:
    soupje = BeautifulSoup(r.text, 'html.parser')
    title_found_in_html_full = soupje.find(id='titleView!1Title')
    if title_found_in_html_full is not None:
        title_found_in_html = title_found_in_html_full.get('id')
        if title_found_in_html == 'titleView!1Title':
            title_found = True
Is it true that the response object changes over time as the page is loading?
Any suggestions? Thanks

How to make scrapy wait for request result before continuing to next line

In my spider, I have some code like this:
next_page_url = response.follow(
    url=self.start_urls[0][:-1] + str(page_number + 1),
    callback=self.next_page
)
if next_page_url:
next_page looks like this:
def next_page(self, response):
    next_page_count = len(<xpath I use>)
    if next_page_count > 0:
        return True
    else:
        return False
I need next_page_url to be set before I can continue the next segment of code.
This code essentially checks whether the current page is the last page, for some file-writing purposes.
The answer I ended up going with:
Instead of checking if the next page exists and then continuing on the current request, I made the request to the next page, checked whether I got a response, and if I didn't, I treated the previous page as the final page.
I did this by using the meta keyword of Scrapy's Request (via response.follow()) to pass the current page's necessary tracking data into the new request.
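Roughly, the pattern looks like this (the spider name, URL and selector are placeholders, not my actual code):

import scrapy

class PagedSpider(scrapy.Spider):   # illustrative spider
    name = 'paged'
    start_urls = ['https://example.com/list?page=1']

    def parse(self, response):
        page_number = response.meta.get('page_number', 1)
        # ... write/extract the current page's data here ...
        yield response.follow(
            self.start_urls[0][:-1] + str(page_number + 1),
            callback=self.next_page,
            # carry the current page's tracking data into the new request
            meta={'page_number': page_number + 1},
        )

    def next_page(self, response):
        # placeholder selector: whatever marks a non-empty page on the real site
        if response.xpath('//div[@class="result"]'):
            # the next page has content, keep crawling from it
            yield from self.parse(response)
        else:
            # an empty page means the previous page was the final one
            self.logger.info('Last page was %s', response.meta['page_number'] - 1)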

How to use Python Scrapy for multiple URLs

my question is similar to this post:
How to use scrapy for Amazon.com links after "Next" Button?
I want my crawler to traverse all the "Next" links. I've searched a lot, but most people either focus on how to parse the URL or simply put all the URLs in the initial URL list.
So far, I am able to visit the first page and parse the next page's link. But I don't know how to visit that page using the same crawler (spider). I tried to append the new URL to my URL list; it does get appended (I checked the length), but later it doesn't visit the link. I have no idea why...
Note that in my case, I only know the first page's URL. The second page's URL can only be obtained after visiting the first page. Likewise, the (i+1)'th page's URL is hidden in the i'th page.
In the parse function, I can parse and print the correct next-page URL. I just don't know how to visit it.
Please help me. Thank you!
import scrapy
from bs4 import BeautifulSoup

class RedditSpider(scrapy.Spider):
    name = "test2"
    allowed_domains = ["http://www.reddit.com"]
    urls = ["https://www.reddit.com/r/LifeProTips/search?q=timestamp%3A1427232122..1437773560&sort=new&restrict_sr=on&syntax=cloudsearch"]

    def start_requests(self):
        for url in self.urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 0.5}
                }
            })

    def parse(self, response):
        page = response.url[-10:]
        print(page)
        filename = 'reddit-%s.html' % page

        # parse html for next link
        soup = BeautifulSoup(response.body, 'html.parser')
        mydivs = soup.findAll("a", {"rel": "nofollow next"})
        link = mydivs[0]['href']
        print(link)
        self.urls.append(link)

        with open(filename, 'wb') as f:
            f.write(response.body)
Update
Thanks to Kaushik's answer, I figured out how to make it work. Though I still don't know why my original idea of appending new URLs doesn't work...
The updated code is as follows:
import scrapy
from bs4 import BeautifulSoup

class RedditSpider(scrapy.Spider):
    name = "test2"
    urls = ["https://www.reddit.com/r/LifeProTips/search?q=timestamp%3A1427232122..1437773560&sort=new&restrict_sr=on&syntax=cloudsearch"]

    def start_requests(self):
        for url in self.urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 0.5}
                }
            })

    def parse(self, response):
        page = response.url[-10:]
        print(page)
        filename = 'reddit-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)

        # parse html for next link
        soup = BeautifulSoup(response.body, 'html.parser')
        mydivs = soup.findAll("a", {"rel": "nofollow next"})
        if len(mydivs) != 0:
            link = mydivs[0]['href']
            print(link)
            # yield response.follow(link, callback=self.parse)
            yield scrapy.Request(link, callback=self.parse)
What you require is explained very well in the Scrapy docs. I don't think you need any explanation beyond that, but I suggest going through it once for a better understanding.
A brief explanation first, though:
To follow a link to the next page, Scrapy provides several methods. The most basic one is to use a Request object.
Request object :
class scrapy.http.Request(url[, callback,
method='GET', headers, body, cookies, meta, encoding='utf-8',
priority=0, dont_filter=False, errback, flags])
>>> yield scrapy.Request(url, callback=self.next_parse)
url (string) – the URL of this request
callback (callable) – the function that will be called with the response of this request (once it's downloaded) as its first parameter.
For convenience, though, Scrapy has an inbuilt shortcut for creating Request objects: response.follow, where the url can be an absolute or a relative path.
follow(url, callback=None, method='GET', headers=None, body=None,
cookies=None, meta=None, encoding=None, priority=0, dont_filter=False,
errback=None)
>>> yield response.follow(url, callback=self.next_parse)
If you have to move to the next link by passing values to a form or any other type of input field, you can use FormRequest objects. The FormRequest class extends the base Request with functionality for dealing with HTML forms. It uses lxml.html forms to pre-populate form fields with form data from Response objects.
The FormRequest.from_response signature:
from_response(response[, formname=None,
formid=None, formnumber=0, formdata=None, formxpath=None,
formcss=None, clickdata=None, dont_click=False, ...])
If you want to simulate an HTML form POST in your spider and send a couple of key-value fields, you can return a FormRequest object (from your spider) like this:
return [FormRequest(url="http://www.example.com/post/action",
                    formdata={'name': 'John Doe', 'age': '27'},
                    callback=self.after_post)]
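If the form is already present in the response, FormRequest.from_response can pre-fill its fields for you; a minimal sketch (the field names and callback are illustrative):

import scrapy

def parse(self, response):
    # from_response reads the <form> already on the page and pre-fills its
    # hidden fields; the field names below are purely illustrative
    yield scrapy.FormRequest.from_response(
        response,
        formdata={'username': 'john', 'password': 'secret'},
        callback=self.after_login,
    )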
Note: If a Request doesn't specify a callback, the spider's parse() method will be used. If exceptions are raised during processing, errback is called instead.
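For completeness, a small sketch of attaching an errback to a Request (these are spider methods, and the handler names are illustrative):

import scrapy

def start_requests(self):
    yield scrapy.Request(
        'http://www.example.com/post/action',
        callback=self.after_post,
        errback=self.handle_error,   # called if the request fails (DNS error, timeout, ...)
    )

def handle_error(self, failure):
    # failure is a Twisted Failure wrapping the original exception
    self.logger.error(repr(failure))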