scrapy.Request not going through - scrapy

The crawling process seems to ignore (or simply not execute) the line yield scrapy.Request(property_file, callback=self.parse_property). The first scrapy.Request in start_requests goes through and is executed properly, but the one in parse_navpage does not, as seen here:
import scrapy

class SmartproxySpider(scrapy.Spider):
    name = "scrape_zoopla"
    allowed_domains = ['zoopla.co.uk']

    def start_requests(self):
        # Read source from file
        navpage_file = "file:///C:/Users/user/PycharmProjects/ScrapeZoopla/ScrapeZoopla/ScrapeZoopla/spiders/html_source/navpage/NavPage_1.html"
        yield scrapy.Request(navpage_file, callback=self.parse_navpage)

    def parse_navpage(self, response):
        listings = response.xpath("//div[starts-with(@data-testid, 'search-result_listing_')]")
        for listing in listings:
            listing_url = listing.xpath(
                "//a[@data-testid='listing-details-link']/@href").getall()  # List of property urls
            break
        print(listing_url)  # Works
        property_file = "file:///C:/Users/user/PycharmProjects/ScrapeZoopla/ScrapeZoopla/ScrapeZoopla/spiders/html_source/properties/Property_1.html"
        print("BEFORE YIELD")
        yield scrapy.Request(property_file, callback=self.parse_property)  # Not going through
        print("AFTER YIELD")

    def parse_property(self, response):
        print("PARSE PROPERTY")
        print(response.url)
        print("PARSE PROPERTY AFTER URL")
Running scrapy crawl scrape_zoopla in the command returns:
2022-09-10 20:38:24 [scrapy.core.engine] DEBUG: Crawled (200) <GET file:///C:/Users/user/PycharmProjects/ScrapeZoopla/ScrapeZoopla/ScrapeZoopla/spiders/html_source/navpage/NavPage_1.html> (referer: None)
BEFORE YIELD
AFTER YIELD
2022-09-10 20:38:24 [scrapy.core.engine] INFO: Closing spider (finished)
Both scrapy.Requests point to local files, and only the first one worked. The files exist and display the pages correctly; if one of them did not, the crawler would raise a "No such file or directory" error and likely stop. Here the crawler seems to skip right past the second request without performing it and without raising any error. What is going wrong here?

This is a total shot in the dark, but you could try sending both requests from your start_requests method. I honestly don't see why this would work, but it might be worth a shot.
import scrapy

class SmartproxySpider(scrapy.Spider):
    name = "scrape_zoopla"
    allowed_domains = ['zoopla.co.uk']

    def start_requests(self):
        # Read source from file
        navpage_file = "file:///C:/Users/user/PycharmProjects/ScrapeZoopla/ScrapeZoopla/ScrapeZoopla/spiders/html_source/navpage/NavPage_1.html"
        property_file = "file:///C:/Users/user/PycharmProjects/ScrapeZoopla/ScrapeZoopla/ScrapeZoopla/spiders/html_source/properties/Property_1.html"
        yield scrapy.Request(navpage_file, callback=self.parse_navpage)
        yield scrapy.Request(property_file, callback=self.parse_property)

    def parse_navpage(self, response):
        listings = response.xpath("//div[starts-with(@data-testid, 'search-result_listing_')]")
        for listing in listings:
            listing_url = listing.xpath(
                "//a[@data-testid='listing-details-link']/@href").getall()  # List of property urls
            break
        print(listing_url)  # Works

    def parse_property(self, response):
        print("PARSE PROPERTY")
        print(response.url)
        print("PARSE PROPERTY AFTER URL")
Update
It just dawned on me why this is happening: you have the allowed_domains attribute set, but the request you are making is for a file on your local filesystem, which naturally is never going to match the allowed domain.
Scrapy assumes that all of the initial URLs yielded from start_requests are permitted and does not verify them, but requests yielded from subsequent parse methods are checked against the allowed_domains attribute and filtered by the offsite middleware.
Just remove that line from the top of your spider class and your original structure should work fine.
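As a minimal sketch of the two ways to apply that fix (assuming the rest of the spider stays exactly as posted): either drop allowed_domains entirely, or keep it and mark the local-file request with dont_filter=True so the offsite check does not discard it.
import scrapy


class SmartproxySpider(scrapy.Spider):
    name = "scrape_zoopla"
    # allowed_domains = ['zoopla.co.uk']  # removed: a file:// URL can never match it

    def start_requests(self):
        navpage_file = "file:///C:/Users/user/PycharmProjects/ScrapeZoopla/ScrapeZoopla/ScrapeZoopla/spiders/html_source/navpage/NavPage_1.html"
        yield scrapy.Request(navpage_file, callback=self.parse_navpage)

    def parse_navpage(self, response):
        property_file = "file:///C:/Users/user/PycharmProjects/ScrapeZoopla/ScrapeZoopla/ScrapeZoopla/spiders/html_source/properties/Property_1.html"
        # Alternative: keep allowed_domains for the real crawl and instead bypass
        # the offsite (and duplicate) filtering for this one request.
        yield scrapy.Request(property_file, callback=self.parse_property, dont_filter=True)

    def parse_property(self, response):
        print(response.url)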

How to callback on 301 redirect without crawling in scrapy?

I am scraping a search result page where in some cases a 301 redirect is triggered. In that case I do not want to crawl that page, but I need to call a different callback function, passing the redirect URL string to it.
I believe it should be possible to do this via the rules, but I could not figure out how:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class GetbidSpider(CrawlSpider):
    handle_httpstatus_list = [301]

    rules = (
        Rule(
            LinkExtractor(
                allow=[r'^https://www\.testrule*$'],
            ),
            follow=False,
            callback='parse_item'
        ),
    )

    def parse_item(self, response):
        self.logger.info('Parsing %s', response.url)
        print(response.status)
        print(response.headers[b'Location'])
The log file only shows:
DEBUG: Crawled (301) <GET https:...
But the parsing info never gets printed, which indicates that the function is never entered.
How can I get my callback to run for these 301 responses?
I really can't understand why my suggestions don't work for you. This is tested code:
import scrapy


class RedirectSpider(scrapy.Spider):
    name = 'redirect_spider'

    def start_requests(self):
        yield scrapy.Request(
            url='https://www.moneycontrol.com/india/stockpricequote/pesticidesagrochemicals/piindustries/PII',
            meta={'handle_httpstatus_list': [301]},
            callback=self.parse,
        )

    def parse(self, response):
        print(response.status)
        print(response.headers[b'Location'])
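If you want to keep the CrawlSpider/Rule structure from the question, the same meta key can be attached to every extracted request through the rule's process_request hook. The following is only a sketch, assuming a recent Scrapy (2.0+) where process_request receives (request, response); keep_301 and the start URL are illustrative names, not tested against your target site.
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


def keep_301(request, response):
    # Ask the redirect middleware to hand 301 responses to the callback
    # instead of following them.
    request.meta['handle_httpstatus_list'] = [301]
    return request


class GetbidSpider(CrawlSpider):
    name = 'getbid'
    start_urls = ['https://www.example.com/']  # placeholder

    rules = (
        Rule(
            LinkExtractor(allow=[r'^https://www\.testrule*$']),
            follow=False,
            callback='parse_item',
            process_request=keep_301,
        ),
    )

    def parse_item(self, response):
        self.logger.info('Parsing %s', response.url)
        if response.status == 301:
            # The redirect target is in the Location header.
            self.logger.info('Redirect target: %s', response.headers.get(b'Location'))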

Looping through pages of Web Page's Request URL with Scrapy

I'm looking to adapt this tutorial (https://medium.com/better-programming/a-gentle-introduction-to-using-scrapy-to-crawl-airbnb-listings-58c6cf9f9808) to scrape this site of tiny home listings: https://tinyhouselistings.com/.
The tutorial uses the request URL to get a very complete and clean JSON file, but does so for the first page only. It seems that looping through the 121 pages of my tinyhouselistings request URL should be pretty straightforward, but I have not been able to get anything to work. The tutorial does not loop through the pages of the request URL, but rather uses Scrapy Splash, run within a Docker container, to get all the listings. I am willing to try that, but I just feel like it should be possible to loop through this request URL.
This outputs only the first page of the tinyhouselistings request URL for my project:
import scrapy


class TinyhouselistingsSpider(scrapy.Spider):
    name = 'tinyhouselistings'
    allowed_domains = ['tinyhouselistings.com']
    start_urls = ['http://www.tinyhouselistings.com']

    def start_requests(self):
        url = 'https://thl-prod.global.ssl.fastly.net/api/v1/listings/search?area_min=0&measurement_unit=feet&page=1'
        yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        _file = "tiny_listings.json"
        with open(_file, 'wb') as f:
            f.write(response.body)
I've tried this:
class TinyhouselistingsSpider(scrapy.Spider):
    name = 'tinyhouselistings'
    allowed_domains = ['tinyhouselistings.com']
    start_urls = ['']

    def start_requests(self):
        url = 'https://thl-prod.global.ssl.fastly.net/api/v1/listings/search?area_min=0&measurement_unit=feet&page='
        for page in range(1, 121):
            self.start_urls.append(url + str(page))
            yield scrapy.Request(url=start_urls, callback=self.parse)
But I'm not sure how to then pass start_urls to parse so that the responses get written to the JSON file at the end of the script.
Any help would be much appreciated!
Remove allowed_domains = ['tinyhouselistings.com'], because the URL thl-prod.global.ssl.fastly.net will otherwise be filtered out by Scrapy.
Since you are using the start_requests method, you do not need start_urls; you only need one of the two.
import json

import scrapy


class TinyhouselistingsSpider(scrapy.Spider):
    name = 'tinyhouselistings'
    listings_url = 'https://thl-prod.global.ssl.fastly.net/api/v1/listings/search?area_min=0&measurement_unit=feet&page={}'

    def start_requests(self):
        page = 1
        yield scrapy.Request(url=self.listings_url.format(page),
                             meta={"page": page},
                             callback=self.parse)

    def parse(self, response):
        resp = json.loads(response.body)
        for ad in resp["listings"]:
            yield ad
        page = int(response.meta['page']) + 1
        if page < int(resp['meta']['pagination']['page_count']):
            yield scrapy.Request(url=self.listings_url.format(page),
                                 meta={"page": page},
                                 callback=self.parse)
From the terminal, run the spider as follows to save the scraped data to a JSON file:
scrapy crawl tinyhouselistings -o output_file.json
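As an aside, and only as a sketch assuming a reasonably recent Scrapy (2.1+), the export destination can also be configured on the spider itself through the FEEDS setting instead of passing -o on the command line:
class TinyhouselistingsSpider(scrapy.Spider):
    name = 'tinyhouselistings'
    # Write every yielded item to tiny_listings.json automatically.
    custom_settings = {
        "FEEDS": {
            "tiny_listings.json": {"format": "json"},
        },
    }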

store the filtered requests from scrapy dupefilter

I am getting this in the stats after scrapy crawl test:
'dupefilter/filtered': 288
How can I store the filtered requests in a .txt file (or any other type) so I can view them later on?
To achieve this you need to do two things:
Set the DUPEFILTER_DEBUG setting to True - it will add all filtered requests to the log.
Set LOG_FILE to save the log to a txt file.
One possible way to do this is by setting the custom_settings spider attribute:
....
class SomeSpider(scrapy.Spider):
    ....
    custom_settings = {
        "DUPEFILTER_DEBUG": True,
        "LOG_FILE": "log.txt",
    }
    ....
    def parse(self, response):
        ....
You will have log lines like this:
2019-12-21 20:34:07 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET http://quotes.toscrape.com/page/4/> (referer: http://quotes.toscrape.com/page/3/)
UPDATE
To save only dupefilter logs:
....
from logging import FileHandler

class SomeSpider(scrapy.Spider):
    ....
    custom_settings = {
        "DUPEFILTER_DEBUG": True,
        # "LOG_FILE": "log.txt",  # optional
    }
    ....
    def start_requests(self):
        # Adding a file handler to the dupefilter logger:
        dupefilter_log_filename = "df_log.txt"
        self.crawler.engine.slot.scheduler.df.logger.addHandler(
            FileHandler(dupefilter_log_filename, delay=False, encoding="utf-8"))

    def parse(self, response):
        ....
Additional info:
Scrapy logging documentation
Python logging module documentation
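If you would rather have the filtered request URLs in a plain .txt file instead of digging them out of the log, another option is to subclass the default RFPDupeFilter and override its log() hook, which Scrapy calls for every filtered duplicate request. This is only a sketch; the module path, class name and output file name are assumptions, not part of the original answer.
# myproject/dupefilters.py  (assumed module name)
from scrapy.dupefilters import RFPDupeFilter


class FileLoggingDupeFilter(RFPDupeFilter):
    def log(self, request, spider):
        # Append every filtered (duplicate) request URL to a text file,
        # then keep the default logging/stats behaviour.
        with open("filtered_requests.txt", "a", encoding="utf-8") as f:
            f.write(request.url + "\n")
        super().log(request, spider)
Point Scrapy at it with DUPEFILTER_CLASS = "myproject.dupefilters.FileLoggingDupeFilter" in settings.py (or in custom_settings).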

Web Crawler not printing pages correctly

Good morning!
I've developed a very simple spider with Scrapy just to get used to FormRequest. I'm trying to send a request to this page: https://www.caramigo.eu/ which should lead me to a page like this one: https://www.caramigo.eu/be/fr/recherche?address=Belgique%2C+Li%C3%A8ge&date_debut=16-03-2019&date_fin=17-03-2019. The issue is that my spider does not retrieve the page correctly (the car images and info do not appear at all) and therefore I can't collect any data from it. Here is my spider:
import scrapy


class CarSpider(scrapy.Spider):
    name = "caramigo"

    def start_requests(self):
        urls = [
            'https://www.caramigo.eu/'
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.search_line)

    def search_line(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'address': 'Belgique, Liège', 'date_debut': '16-03-2019', 'date_fin': '17-03-2019'},
            callback=self.parse
        )

    def parse(self, response):
        filename = 'caramigo.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
Sorry if the syntax is not correct, I'm pretty new to coding.
Thank you in advance!
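A common first debugging step in cases like this (only a sketch, not specific to this site) is to open the response the spider actually received in a browser: if the car listings are missing there as well, the results are most likely loaded by JavaScript after the form is submitted, which plain Scrapy does not execute, and a rendering solution such as Scrapy Splash might be needed.
import scrapy
from scrapy.utils.response import open_in_browser


class CarSpider(scrapy.Spider):
    # ... start_requests and search_line exactly as in the question ...

    def parse(self, response):
        # Opens the HTML the spider downloaded in a local browser so it can be
        # compared with what the site shows when the form is submitted by hand.
        open_in_browser(response)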

Can I use a scrapy POST request without a callback?

I need to update the location on a site that uses a radio button. This can be done with a simple POST request. The problem is that the output of this request is
window.location='http://store.intcomex.com/en-XCL/Products/Categories?r=True';
Since this is not a valid URL, Scrapy redirects to PageNotFound and closes the spider.
2017-09-17 09:57:59 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://store.intcomex.com/en-XCL/ServiceClient/PageNotFound> from <POST https://store.intcomex.com/en-XCL//User/SetNewLocation>
Here is my code:
def after_login(self, response):
    # inspect_response(response, self)
    url = "https://store.intcomex.com/en-XCL//User/SetNewLocation"
    data = {"id": "xclf1"}
    yield scrapy.FormRequest(url, formdata=data, callback=self.location)
    # inspect_response(response, self)

def location(self, response):
    yield scrapy.Request(url='http://store.intcomex.com/en-XCL/Products/Categories?r=True', callback=self.categories)
The question is: how can I redirect Scrapy to a valid URL after executing the POST request that changes the location? Is there some argument that indicates the target URL, or can I execute it without a callback and yield the correct URL on the next line?
Thanks.
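One approach that might work here, purely as a sketch and assuming the POST only needs to be sent for its side effect of switching the location: tell the redirect middleware not to follow the bogus 302 by setting dont_redirect in the request meta (plus handle_httpstatus_list so the 302 response is not dropped), and then yield the known categories URL from the callback. A callback is still required, since Scrapy has no fire-and-forget request, but it can simply ignore the response body.
def after_login(self, response):
    url = "https://store.intcomex.com/en-XCL//User/SetNewLocation"
    data = {"id": "xclf1"}
    # dont_redirect stops the 302 to PageNotFound from being followed;
    # the raw 302 response is handed to the callback instead.
    yield scrapy.FormRequest(url, formdata=data,
                             meta={"dont_redirect": True, "handle_httpstatus_list": [302]},
                             callback=self.location)

def location(self, response):
    # Ignore the response body and go straight to the page we actually want.
    yield scrapy.Request(url='http://store.intcomex.com/en-XCL/Products/Categories?r=True',
                         callback=self.categories)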