Scrapy - Is spider_idle called when a DOWNLOAD_DELAY is specified?

I'm writing a spider to scrape data about cars from the carsharing website https://fr.be.getaround.com/. The objective is to divide my spider into two parts. First, it scrapes data for available cars and keeps unavailable cars aside. Second, once all information about available cars has been scraped, i.e. at the end of the process, the spider scrapes additional information for the unavailable cars. For this second part, I've added a spider_idle method to my spider, so it should be called once no available cars remain in the waiting list. However, I've also set a DOWNLOAD_DELAY of 5 seconds and AutoThrottle is enabled. Will spider_idle be called during the waiting time between requests (within those 5 seconds)?

No.
The spider_idle signal is only sent when there are no further requests left to process. It is not sent just because no request happens to be in progress while the next one waits for the download delay to pass.
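For reference, a minimal sketch of the two-phase pattern described in the question. The self.unavailable_urls list and the parse_unavailable callback are placeholders, and the exact engine.crawl() signature varies between Scrapy versions (older ones take the spider as a second argument):

import scrapy
from scrapy import signals
from scrapy.exceptions import DontCloseSpider

class CarsSpider(scrapy.Spider):
    name = "cars"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_idle, signal=signals.spider_idle)
        return spider

    def spider_idle(self, spider):
        # Fires only when the scheduler and downloader are both empty,
        # not during the DOWNLOAD_DELAY pauses between requests.
        if self.unavailable_urls:
            for url in self.unavailable_urls:
                self.crawler.engine.crawl(
                    scrapy.Request(url, callback=self.parse_unavailable), spider)
            self.unavailable_urls = []
            raise DontCloseSpider  # keep the spider open for the new requests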

Related

Yielding more requests if scraper was idle more than 20s

I would like to yield more requests at the end of a CrawlSpider that uses Rules.
I noticed I was not able to feed more requests by doing this in the spider_closed method:
self.crawler.engine.crawl(r, self)
I noticed that this technique works in the spider_idle method, but I would like to wait to be sure that the crawl has finished before feeding more requests.
I set the setting CLOSESPIDER_TIMEOUT = 30
What would be the code to wait for 20 idle seconds before triggering the process of feeding more requests?
Is there a better way?
If it is really important that the previous crawling has completely finished before the new crawling starts, consider running either two separate spiders or the same spider twice in a row with different arguments that determine which URLs it crawls. See Run Scrapy from a script.
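A hedged sketch of that approach, based on the "Run Scrapy from a script" docs; MySpider and its phase argument are placeholders for whatever determines which URLs each run crawls:

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings
# from myproject.spiders import MySpider  # placeholder spider

configure_logging()
runner = CrawlerRunner(get_project_settings())

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MySpider, phase="first")   # finishes completely
    yield runner.crawl(MySpider, phase="second")  # before this one starts
    reactor.stop()

crawl()
reactor.run()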
If you don't really need the previous crawling to finish, and you simply have URLs that should have a higher priority than other URLs for some reason, consider using request priorities instead. See the priority parameter of the Request class constructor.
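For illustration, bumping the priority simply makes the scheduler hand those requests out ahead of the default-priority ones (parse_extra is a placeholder callback):

# inside a spider callback; default priority is 0
yield scrapy.Request(url, callback=self.parse_extra, priority=10)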

Are scrapy CONCURRENT_REQUESTS per spider or per machine?

Newbie designing his architecture question here:
My Goal
I want to keep track of multiple twitter profiles over time.
What I want to build:
A SpiderMother class that interfaces with some Database (holding CrawlJobs) to spawn and manage many small Spiders, each crawling 1 user-page on twitter at an irregular interval (the jobs will be added to the database according to some algorithm).
They get spawned as subprocesses by SpiderMother and, depending on the success of the crawl, the database job gets removed. Is this a good architecture?
Problem I see:
Let's say I spawn 100 spiders and my CONCURRENT_REQUESTS limit is 10: will twitter.com be hit by all 100 spiders immediately, or do they line up and go one after the other?
Most Scrapy settings and runtime configuration are scoped to the currently open spider, and the default Scrapy downloader also works per spider, so if you fire up 100 processes you will indeed see all 100 spiders hitting the site at the same time. You have several options to enforce per-domain concurrency globally, and none of them is particularly hassle-free:
Run just one spider per domain and feed it through Redis (check out scrapy-redis). Alternatively, don't spawn more than one spider at a time.
Keep a fixed pool of spiders, or limit the number of spiders you spawn from your orchestrator, and set the concurrency settings to the desired concurrency divided by the number of spiders (see the settings sketch below).
Override the Scrapy downloader class behaviour to store its values externally (in Redis, for example).
Personally, I would probably go with the first, and if I hit the performance limits of a single process, scale to the second.
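To illustrate the second option, the split is just arithmetic on the standard settings; the values below assume a fixed pool of 5 processes targeting one domain with roughly 10 concurrent requests against it in total:

# settings.py for each spider in the pool (illustrative values)
CONCURRENT_REQUESTS = 2             # 5 processes x 2 = ~10 requests in flight
CONCURRENT_REQUESTS_PER_DOMAIN = 2  # all spiders hit the same domain here
DOWNLOAD_DELAY = 0.5                # optional extra politeness per process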

Scrapy distributed connection count

Let's say I had a couple of servers, each running multiple Scrapy spider instances at once. Each spider is limited to 4 concurrent requests with CONCURRENT_REQUESTS = 4. For concreteness, let's say there are 10 spider instances at once, so I never expect more than 40 requests at once.
If I need to know at any given time how many concurrent requests are active across all 10 spiders, I might think of storing that integer on a central redis server under some "connection_count" key.
My idea was then to write some downloader middleware that schematically looks like this:
import redis

class countMW(object):
    def __init__(self):
        self.r = redis.Redis()  # assumes a local redis instance

    def process_request(self, request, spider):
        self.r.incr("connection_count")  # Increment the redis key

    def process_response(self, request, response, spider):
        self.r.decr("connection_count")  # Decrement the redis key
        return response

    def process_exception(self, request, exception, spider):
        self.r.decr("connection_count")  # Decrement the redis key
However, with this approach the connection count under the central key can exceed 40. I even see counts > 4 for a single running spider (when the network is under load), and even when the Redis store is replaced by simply storing the count as an attribute on the spider instance itself, to rule out lag in updating the remote Redis key as the cause.
My theory is that even though request concurrency per spider is capped at 4, Scrapy still creates and queues more than 4 requests in the meantime, and those extra requests call process_request, incrementing the count, long before they are fetched.
Firstly, is this theory correct? Secondly, if it is, is there a way to increment the Redis count only when a true fetch is occurring (when the request becomes active), and decrement it similarly?
In my opinion it is better to customize the scheduler, as that fits the Scrapy architecture better and gives you full control over the request-emitting process:
Scheduler
The Scheduler receives requests from the engine and enqueues them for feeding them later (also to the engine) when the engine requests them.
https://doc.scrapy.org/en/latest/topics/architecture.html?highlight=scheduler#component-scheduler
For example you can find some inspiration ideas about how to customize scheduler here: https://github.com/rolando/scrapy-redis
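A rough sketch of that idea, assuming the counter should only move when the engine actually pulls a request from the queue; ActiveCountScheduler and the module path in the settings comment are hypothetical, and the decrement would still have to live in the middleware's process_response / process_exception:

import redis
from scrapy.core.scheduler import Scheduler

class ActiveCountScheduler(Scheduler):
    def open(self, spider):
        self.redis_conn = redis.Redis()  # assumes a local redis instance
        return super().open(spider)

    def next_request(self):
        # Count a request only once the engine pulls it out of the queue
        # for downloading, not when it is merely enqueued.
        request = super().next_request()
        if request is not None:
            self.redis_conn.incr("connection_count")
        return request

# settings.py
# SCHEDULER = "myproject.schedulers.ActiveCountScheduler"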
Your theory is partially correct. Usually requests are made much faster than they can be fulfilled, and the engine will hand not some, but ALL of these requests to the scheduler. But these queued requests are not processed, and thus do not call process_request, until they are fetched.
There is a slight lag between when the scheduler releases a request and when the downloader begins to fetch it, and this allows the scenario you observe, where more than CONCURRENT_REQUESTS requests appear active at the same time. Since Scrapy processes requests asynchronously, this bit of sloppy double dipping is baked in; so, how to deal with it? I'm sure you don't want to run synchronously.
So the question becomes: what is the motivation behind this? Are you just curious about the inner workings of Scrapy? Or do you have some ISP bandwidth cost limitations to deal with, for example? Because we have to define what we really mean by concurrency here.
When does a request become "active"?
When the scheduler releases it?
When the downloader begins to fetch it?
When the underlying Twisted deferred is created?
When the first TCP packet is sent?
When the first TCP packet is received?
Perhaps you could plug in your own custom scheduler for finer-grained control, and take inspiration from Downloader.fetch.

How to know scrapy-redis finish

When I use scrapy-redis, it keeps the spider from closing (DontCloseSpider).
How can I know when the scrapy crawl is finished?
crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed) is not working.
Interesting.
I see this comment:
# Max idle time to prevent the spider from being closed when distributed crawling.
# This only works if queue class is SpiderQueue or SpiderStack,
# and may also block the same time when your spider start at the first time (because the queue is empty).
SCHEDULER_IDLE_BEFORE_CLOSE = 10
If you follow the setup instructions properly and it doesn't work, I guess you would at least have to provide some data that allows others to reproduce your setup, e.g. your settings.py or any interesting spiders/pipelines you have.
The spider_closed signal should indeed fire, just quite a few seconds after the queue runs out of URLs. If the queue is not empty, the spider won't close - obviously.
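For completeness, a minimal sketch of hooking spider_closed the way Scrapy expects, through an extension's from_crawler; the class name and the module path in the settings comment are placeholders:

from scrapy import signals

class ClosedSignalExtension:
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider, reason):
        # reason is "finished" on a normal close
        spider.logger.info("Spider %s closed: %s", spider.name, reason)

# settings.py
# EXTENSIONS = {"myproject.extensions.ClosedSignalExtension": 500}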

Scrapy patterns for large number of requests

I need to scrape a large site; there are about ten categories and thousands (I don't really know how many) of articles in each category. The simplest approach would be to create a spider for each category and yield a request for every article link for further extraction.
What I'm thinking of instead is making top-level spiders that extract article URLs from the categories into a queue. The second-level (article) spiders then each receive a fixed number of URLs (say 100) from the queue, and when a spider finishes, another one is started. This way a) we can control the number of spiders, which is a constant, say 20, b) we have the option of counting the number of articles in advance, and c) each spider has limited memory usage. Something similar worked fine in a previous project.
Does this make sense, or can you just fire as many requests from one spider as possible and it will work fine?
You can fire as many requests from one spider as possible.
This is because Scrapy doesn't process all requests at once; they are all just queued.
You can change the number of requests processed in parallel in the settings with CONCURRENT_REQUESTS, which could indeed cause memory problems if it is too high (say 100). Remember that a Scrapy job is set to 512 MB of memory by default.
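As a sketch, the knobs that matter for one big spider are these (values are illustrative; JOBDIR is optional, but it keeps the request queue on disk instead of in memory):

# settings.py
CONCURRENT_REQUESTS = 16            # how many requests are in flight at once
CONCURRENT_REQUESTS_PER_DOMAIN = 8  # cap per domain when it is all one site
JOBDIR = "crawls/articles"          # persistent, disk-based request queue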