How can deltafetch & splash be used together in Scrapy (python)

I am trying to build a scraper using Scrapy. I plan to use deltafetch to enable incremental refresh, but I also need to parse JavaScript-based pages, which is why I need Splash as well.
In the settings.py file, we need to add
SPIDER_MIDDLEWARES = {'scrapylib.deltafetch.DeltaFetch': 100,}
to enable deltafetch, whereas for Splash we need to add
SPIDER_MIDDLEWARES = {'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,}
I wanted to know how both of them would work together, given that both are registered as spider middlewares.
Is there some way in which I could use both of them?

For other answers see here and here. Essentially you can use the request meta parameter to manually set the deltafetch_key for the requests you are making. In this way you can request the same page with Splash even after you've successfully scraped items from that page with Scrapy and vice versa. Hope that helps!
import scrapy
from scrapy_splash import SplashRequest  # the same meta key also works for SplashRequest
from scrapy.utils.request import request_fingerprint

# ... your spider code here ...
# set deltafetch_key explicitly so DeltaFetch tracks this page
yield scrapy.Request(url, meta={'deltafetch_key': request_fingerprint(response.request)})
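As a rough sketch of the settings side (not tested here): both middlewares can be listed in the same SPIDER_MIDDLEWARES dict with different order values. The order numbers below are only illustrative, and the DeltaFetch path follows the one used in the question; scrapy-splash additionally needs its downloader middlewares, SPLASH_URL and dupefilter settings as described in its README.
# settings.py -- a sketch combining both spider middlewares
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    'scrapylib.deltafetch.DeltaFetch': 200,  # order value only controls middleware ordering
}
DELTAFETCH_ENABLED = True  # switches the deltafetch middleware on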

Related

BeautifulSoup not able to extract the src attribute

I want to automate downloading images from imgflip.
import requests
from bs4 import BeautifulSoup as bs

url = "https://imgflip.com/memegenerator/Drake-Hotline-Bling"
page = requests.get(url)
parsed = bs(page.content, 'html.parser')
res = parsed.find_all('img', class_="mm-img shadow")
print(res)
When I inspect the page, I see the src attribute for the image, but the response I get does not contain it. I have also tried setting src=True, but that doesn't work either. Thank you for helping.
On the other hand, with a dynamic website the server might not send back any HTML at all. Instead, you'll receive JavaScript code as a response. This will look completely different from what you saw when you inspected the page with your browser's developer tools.
Source: Real Python.
In my case as well, the website sent back JavaScript code. The solution is to use requests-html or Selenium.
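For example, a minimal Selenium sketch (assuming Selenium 4 and a local chromedriver; the CSS selector is the one from the question) that renders the page in a real browser and then reads the src attribute:
from selenium import webdriver
from selenium.webdriver.common.by import By

url = "https://imgflip.com/memegenerator/Drake-Hotline-Bling"

driver = webdriver.Chrome()  # assumes a chromedriver is available on PATH
try:
    driver.get(url)
    # the same element the BeautifulSoup code was targeting
    images = driver.find_elements(By.CSS_SELECTOR, "img.mm-img.shadow")
    for img in images:
        print(img.get_attribute("src"))
finally:
    driver.quit()
requests-html works along the same lines: session.get(url) followed by r.html.render() executes the page's JavaScript before you query the DOM.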

XHR request pulls a lot of HTML content, how can I scrape it/crawl it?

So, I'm trying to scrape a website with infinite scrolling.
I'm following this tutorial on scraping infinite scrolling web pages: https://blog.scrapinghub.com/2016/06/22/scrapy-tips-from-the-pros-june-2016
But the example given looks pretty easy: it's an orderly JSON object with the data you want.
I want to scrape this https://www.bahiablancapropiedades.com/buscar#/terrenos/venta/bahia-blanca/todos-los-barrios/rango-min=50.000,rango-max=350.000
The XHR response for each page is weird; it looks like corrupted HTML code.
This is how the Network tab looks
I'm not sure how to navigate the items inside "view". I want the spider to enter each item and crawl some information for every one.
In the past I've successfully done this with normal pagination and rules guided by XPaths.
This is the XHR URL: https://www.bahiablancapropiedades.com/buscar/resultados/0
While scrolling, each request loads 8 records. So get the total record count via XPath and divide it by 8; that gives you the number of XHR requests to make. I ran into the same issue, and the logic below resolved it:
# pagination_count: total number of results, read from the page via XPath
pages = int(pagination_count) // 8
for page_number in range(pages + 1):
    url = 'https://www.bahiablancapropiedades.com/buscar/resultados/{}'.format(page_number)
    # pass this url to your Scrapy parsing function, e.g. yield scrapy.Request(url, callback=self.parse)
It is not corrupted HTML; it is escaped so that it doesn't break the JSON. Some websites return plain JSON data and others, like this one, return the actual HTML to be inserted into the page.
To get the elements you need to get the HTML out of the JSON response and create your own parsel Selector (this is the same as when you use response.css(...)).
You can try the following in scrapy shell to get all the links in one of the "next" pages:
scrapy shell https://www.bahiablancapropiedades.com/buscar/resultados/3
import json
import parsel
json_data = json.loads(response.text)
sel = parsel.Selector(json_data['view']) # view contains the HTML
sel.css('a::attr(href)').getall()
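As a follow-up sketch (untested, with hypothetical spider and callback names), the same idea inside a spider callback, following each extracted link:
import json
import parsel
import scrapy

class BahiaSpider(scrapy.Spider):
    name = 'bahia'  # hypothetical spider name
    start_urls = ['https://www.bahiablancapropiedades.com/buscar/resultados/0']

    def parse(self, response):
        data = json.loads(response.text)
        sel = parsel.Selector(text=data['view'])  # 'view' holds the escaped HTML
        for href in sel.css('a::attr(href)').getall():
            yield response.follow(href, callback=self.parse_item)

    def parse_item(self, response):
        # extract whatever fields you need from each property page
        pass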

Scrapy Link Extractor Rules

I have a spider set up using link extractor rules. The spider crawls and scrapes the items that I expect, but it only follows the 'Next' pagination button to the 3rd page, where the spider then finishes without any errors. There are a total of 50 pages to crawl via the 'Next' pagination. Here is my code:
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule, CrawlSpider

class MySpider(CrawlSpider):
    name = 'my_spider'
    start_urls = [some_base_url]

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//div[@data-test="productGridContainer"]//a[contains(@data-test, "product-title")]'), callback='parse_item'),
        Rule(LinkExtractor(restrict_xpaths='//div[@data-test="productGridContainer"]//a[contains(@data-test, "next")]'), follow=True),
    )

    def parse_item(self, response):
        # inspect_response(response, self)
        # ... items are being scraped here ...
        return scraped_info
It feels like I may be missing a setting or something as the code functions as expected for the first 3 iterations. My settings file does not override the DEPTH_LIMIT default of 0. Any help is greatly appreciated, thank you.
EDIT 1
It appears it may not have anything to do with my code: if I start from a different product page, I can get up to 8 pages scraped before the spider exits. I'm not sure whether it's the site I am crawling or how to troubleshoot this.
EDIT 2
Troubleshooting it some more, it appears that my 'Next' link disappears from the web page. When I start on the first page, the pagination element to go to the next page is present. When I view the response for the next page, there are no products and no next link element, so the spider thinks it is done. I have tried enabling cookies to see if the site requires a cookie in order to paginate. That doesn't have any effect. Could it be a timing thing?
EDIT 3
I have adjusted the download delay and concurrent request values to see if that makes a difference. I get the same results whether I pull the page of data in 1 second or 30 minutes. I am assuming 30 minutes is slow enough as I can manually do it faster than that.
EDIT 4
Tried to enable cookies along with the cookie middleware in debug mode to see if that would make a difference. The cookies are fetched and sent with the requests but I get the same behavior when trying to navigate pages.
To check whether the site denies too many requests in a short time, you can add the following code to your spider (e.g. before your rules statement) and play around with the value. See also the excellent documentation.
custom_settings = {
    'DOWNLOAD_DELAY': 0.4,
}
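If tuning a fixed delay does not help, a possible variation (a sketch, not part of the original answer) is to let Scrapy's built-in AutoThrottle extension adjust the delay based on server response times:
# a sketch using Scrapy's AutoThrottle extension and a lower concurrency
custom_settings = {
    'AUTOTHROTTLE_ENABLED': True,
    'AUTOTHROTTLE_START_DELAY': 0.5,
    'AUTOTHROTTLE_MAX_DELAY': 10.0,
    'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
}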

Response has nothing in it

I have been following the Scrapy tutorial, trying to create a very simple web scraper for warframe.market. I have about a year of coding experience from school, but no Python experience. I simply want to get the price of an item from the website. I used the following to scrape the page:
scrapy shell "https://warframe.market/items/hydroid_prime_set"
then I inspected the web page to find the individual elements that I am trying to scrape. I used this command to try to view the results I wanted:
response.css("div.order-row.d-flex.col-12").extract()
This did not work, so I used view(response) to see what I had scraped, and my cmd just waits endlessly at this point.
Is HTTPS stopping me from scraping? Am I selecting the wrong css in my response? Is the webpage too big? Could someone please show me where I went wrong?
Thanks
The response isn't empty, but the page is rendered using JavaScript (you can verify this by inspecting response.body). For example, try this in the shell:
import json

data = json.loads(response.css('#application-state::text').extract_first())
for order in data.get('payload', {}).get('orders', []):
    print('"{}" price: {}'.format(order.get('user', {}).get('ingame_name'),
                                  order.get('platinum')))

How to pass header information from Scrapy directly to Splash

I would like to log in on Splash using the header information obtained from login authentication with Scrapy.
I have two questions regarding this.
Question 1:
Is it possible to pass the header information obtained by logging in with Scrapy directly to Splash?
Question 2:
Is there a way for Splash to recognize that it does not need to perform the login authentication again?
This is my base code.
import scrapy
from scrapy_splash import SplashRequest

def test(self, response):
    self.logger.info("headers:-------" + str(response.headers))
    return SplashRequest(
        "url",
        self.test2,
        endpoint='execute',
        args={
            'lua_source': """Lua script""",
            'wait': 5,
        },
    )
Thank you for reading my question.
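One possible approach (a sketch only, not a verified answer): scrapy-splash forwards whatever you put in args to the Lua script as splash.args, and Splash's splash:go accepts a headers table, so the headers seen by Scrapy could be converted to plain strings and replayed in the Splash request. The script and callback names below are hypothetical.
import scrapy
from scrapy_splash import SplashRequest

# hypothetical Lua script: replays the forwarded headers when loading the page
LUA_WITH_HEADERS = """
function main(splash)
    splash:go{splash.args.url, headers=splash.args.headers}
    splash:wait(splash.args.wait)
    return splash:html()
end
"""

def test(self, response):
    # convert Scrapy's bytes-based headers into a plain dict of strings
    header_dict = {
        key.decode('utf-8'): response.headers.getlist(key)[0].decode('utf-8')
        for key in response.headers
    }
    return SplashRequest(
        "url",
        self.test2,
        endpoint='execute',
        args={
            'lua_source': LUA_WITH_HEADERS,
            'headers': header_dict,
            'wait': 5,
        },
    )
For question 2: if the login session is cookie-based rather than header-based, scrapy-splash's SplashCookiesMiddleware (part of the standard setup in its README) may be worth looking at, since it is designed to keep cookies in sync between Scrapy and Splash so the login does not have to be repeated.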