Are scrapy CONCURRENT_REQUESTS per spider or per machine?

Newbie designing his architecture question here:
My Goal
I want to keep track of multiple twitter profiles over time.
What I want to build:
A SpiderMother class that interfaces with some database (holding CrawlJobs) to spawn and manage many small spiders, each crawling one user page on Twitter at an irregular interval (the jobs will be added to the database according to some algorithm).
The spiders get spawned as subprocesses by SpiderMother, and depending on the success of the crawl, the database job gets removed. Is this a good architecture?
Problem I see:
Let's say I spawn 100 spiders and my CONCURRENT_REQUESTS limit is 10. Will twitter.com be hit by all 100 spiders at once, or do they line up and go one after the other?

Most Scrapy settings and runtime configuration are isolated to the currently open spider during the run. The default Scrapy request downloader also acts per spider, so if you fire up 100 processes you will indeed see 100 simultaneous requests. You have several options to enforce per-domain concurrency globally, and none of them is particularly hassle-free:
Run just one spider per domain and feed it through redis (check out scrapy-redis). Alternatively, don't spawn more than one spider at a time.
Keep a fixed pool of spiders, or limit the number of spiders your orchestrator spawns, and set the concurrency settings to "desired_concurrency divided by number of spiders" (see the sketch after this list).
Override the Scrapy downloader class behavior to store its values externally (in redis, for example).
Personally I would probably go with the first option and, if I hit the performance limits of a single process, scale to the second.
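A hedged sketch of the second option from the orchestrator's side, assuming the spiders are launched as subprocesses: each process gets a cap of desired_concurrency divided by the pool size. The spider name profile_spider and the user argument are placeholders, not names from an actual project.

import subprocess

TOTAL_CONCURRENCY = 10   # desired total concurrency against twitter.com
NUM_SPIDERS = 5          # fixed pool size managed by SpiderMother
PER_SPIDER = max(1, TOTAL_CONCURRENCY // NUM_SPIDERS)

def spawn(user):
    # Override the concurrency settings on the command line for this process only.
    return subprocess.Popen([
        'scrapy', 'crawl', 'profile_spider',
        '-a', f'user={user}',
        '-s', f'CONCURRENT_REQUESTS={PER_SPIDER}',
        '-s', f'CONCURRENT_REQUESTS_PER_DOMAIN={PER_SPIDER}',
    ])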

Related

Yielding more requests if scraper was idle more than 20s

I would like to yield more requests at the end of a CrawlSpider that uses Rules.
I noticed I was not able to feed more requests by doing this in the spider_closed method:
self.crawler.engine.crawl(r, self)
I noticed that this technique works in the spider_idle method, but I would like to wait to be sure that the crawl is finished before feeding more requests.
I set the setting CLOSESPIDER_TIMEOUT = 30
What would be the code to wait 20 seconds idle before triggering the process of feeding more requests?
Is there a better way?
If it is really important that the previous crawling has completely finished before the new crawling starts, consider running either two separate spiders or the same spider twice in a row with different arguments that determine which URLs it crawls. See Run Scrapy from a script; a sketch follows below.
If you don’t really need for the previous crawling to finish, and you simply have URLs that should have a higher priority than other URLs for some reason, consider using request priorities instead. See the priority parameter of the Request class constructor.
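A minimal sketch of the first suggestion, following the "Run Scrapy from a script" pattern: the second crawl starts only after the first has completely finished. The spider name my_crawl_spider and the url_group argument are hypothetical placeholders.

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

configure_logging()
runner = CrawlerRunner(get_project_settings())

@defer.inlineCallbacks
def crawl():
    # The first pass finishes completely before the second one starts.
    yield runner.crawl('my_crawl_spider', url_group='first')
    yield runner.crawl('my_crawl_spider', url_group='second')
    reactor.stop()

crawl()
reactor.run()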

Scrapy distributed connection count

Let's say I had a couple of servers, each running multiple Scrapy spider instances at once. Each spider is limited to 4 concurrent requests with CONCURRENT_REQUESTS = 4. For concreteness, let's say there are 10 spider instances running at once, so I never expect more than 40 requests at a time.
If I need to know at any given time how many concurrent requests are active across all 10 spiders, I might think of storing that integer on a central redis server under some "connection_count" key.
My idea was then to write some downloader middleware that schematically looks like this:
class countMW(object):

    def process_request(self, request, spider):
        # Increment the redis key
        pass

    def process_response(self, request, response, spider):
        # Decrement the redis key
        return response

    def process_exception(self, request, exception, spider):
        # Decrement the redis key
        pass
However, with this approach the connection count under the central key can exceed 40. I even see more than 4 for a single spider when the network is under load, and even for a single spider when the redis store is replaced by storing the count as an attribute on the spider instance itself, which rules out lag in remote redis updates as the cause.
My theory for why this doesn't work is that even though the request concurrency per spider is capped at 4, Scrapy still creates and queues more than 4 requests in the meantime, and those extra requests call process_request, incrementing the count long before they are fetched.
Firstly, is this theory correct? Secondly, if it is, is there a way I could increment the redis count only when a true fetch is occurring (when the request becomes active), and decrement it similarly?
In my opinion it is better to customize the scheduler, as it fits the Scrapy architecture better and gives you full control over the request-emitting process:
Scheduler
The Scheduler receives requests from the engine and enqueues them for feeding them later (also to the engine) when the engine requests them.
https://doc.scrapy.org/en/latest/topics/architecture.html?highlight=scheduler#component-scheduler
For example, you can find some ideas about how to customize the scheduler here: https://github.com/rolando/scrapy-redis
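For instance, scrapy-redis already ships a redis-backed scheduler that can be enabled from the settings; a minimal sketch (the redis URL is a placeholder for your own server):

# settings.py
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True              # keep the request queue in redis between runs
REDIS_URL = "redis://localhost:6379"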
Your theory is partially correct. Usually requests are made much faster than they are fulfilled, and the engine will give not some but ALL of these requests to the scheduler. But these queued requests are not processed, and thus will not call process_request, until they are fetched.
There is a slight lag between when the scheduler releases a request and when the downloader begins to fetch it, and this allows for the scenario you observe, where more than CONCURRENT_REQUESTS requests appear active at the same time. Since Scrapy processes requests asynchronously, this bit of sloppy double dipping is baked in. So how do you deal with it? I'm sure you don't want to run synchronously.
So the question becomes: what is the motivation behind this? Are you just curious about the inner workings of Scrapy? Or do you have some ISP bandwidth cost limitations to deal with, for example? Because we have to define what we really mean by concurrency here.
When does a request become "active"?
When the scheduler releases it?
When the downloader begins to fetch it?
When the underlying Twisted deferred is created?
When the first TCP packet is sent?
When the first TCP packet is received?
Perhaps you could add your own scheduler middleware for finer-grained control, and perhaps take inspiration from Downloader.fetch.
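One possible way to count only true fetches, taking the Downloader.fetch hint literally, is to wrap the default HTTP download handler so the counter is only touched when the downloader actually starts a request. This is a hedged sketch, not a drop-in solution; the redis connection details, the connection_count key, and the myproject.handlers module path are assumptions.

import redis
from scrapy.core.downloader.handlers.http import HTTPDownloadHandler

class CountingDownloadHandler(HTTPDownloadHandler):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.redis = redis.Redis(host='localhost', port=6379)  # assumed redis location

    def download_request(self, request, spider):
        # Increment just before the real fetch starts.
        self.redis.incr('connection_count')
        d = super().download_request(request, spider)

        def done(result):
            # Decrement when the fetch finishes, whether it succeeded or failed.
            self.redis.decr('connection_count')
            return result

        return d.addBoth(done)

Then point Scrapy at it in settings.py:

DOWNLOAD_HANDLERS = {
    'http': 'myproject.handlers.CountingDownloadHandler',
    'https': 'myproject.handlers.CountingDownloadHandler',
}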

Apache JMeter - find out how many requests are possible

I am searching for a way to test how many requests my web server can handle before the load time exceeds 5 seconds. Is it possible to manage this with Apache JMeter?
Server: SLES OS running a WordPress blog (Apache web server, MySQL)
Best regards
Andy
It is. Example action plan:
Record anticipated test scenario using HTTP(S) Test Script Recorder or JMeter Chrome Extension
Perform correlation (handle dynamic values) and parametrization (if required)
Add virtual users. It is recommended to configure users to arrive gradually, like starting with 1 and adding 1 each second. You can use Ultimate Thread Group which provides an easy visual way of defining ramp-up and ramp-down.
Add Duration Assertion with the value of 5000 ms so it will fail the request if it takes > 5 seconds
Use the Active Threads Over Time and Response Times Over Time listeners in combination to determine the maximum number of users that can be served while keeping response times under 5 seconds.

Scrapy patterns for large number of requests

I need to scrape a large site: there are about ten categories and thousands (I don't really know how many) of articles in each category. The simplest approach would be to create a spider for each category and yield a request for every article link for further extraction.
What I'm thinking of is to make a top-level spider which would extract article URLs from the categories into a queue. The second-level (article) spiders would then each receive a constant number of URLs (say 100) from the queue, and when a spider finishes another one is started. This way a) we can control the number of spiders, which is a constant, say 20, b) we have the option of counting the number of articles in advance, and c) each spider has limited memory usage. Something similar worked fine in a previous project.
Does this make sense, or can you just fire as many requests from one spider as possible and it will work fine?
You can fire as many requests from one spider as you like.
This is because Scrapy doesn't process all requests at once; they are all just queued.
You can change the number of requests processed at a time in the settings with CONCURRENT_REQUESTS, which could indeed cause memory problems if it is too high (say 100). Remember that a Scrapy job is limited to 512 MB of memory by default.
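A minimal sketch of the single-spider approach, assuming hypothetical example.com URL patterns and CSS selectors: every discovered article request is simply queued, and Scrapy fetches them as concurrency slots free up.

import scrapy

class ArticlesSpider(scrapy.Spider):
    name = 'articles'
    start_urls = ['https://example.com/categories']
    custom_settings = {
        # All discovered requests are queued; only this many are in flight at once.
        'CONCURRENT_REQUESTS': 16,
    }

    def parse(self, response):
        # Queue every category page.
        for href in response.css('a.category::attr(href)').getall():
            yield response.follow(href, callback=self.parse_category)

    def parse_category(self, response):
        # Queue every article link; they are fetched as slots become free.
        for href in response.css('a.article::attr(href)').getall():
            yield response.follow(href, callback=self.parse_article)

    def parse_article(self, response):
        yield {'url': response.url, 'title': response.css('h1::text').get()}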

Is it possible to access the reactor from a Scrapy spider?

I'm looking at ways of implementing crawl delays inside Scrapy spiders. I was wondering if it is possible to access the reactor's callLater method from within a spider. That would make it quite easy to parse a page after n seconds.
You can actually set a delay quite easily by setting DOWNLOAD_DELAY in the settings file.
DOWNLOAD_DELAY
Default: 0
The amount of time (in secs) that the downloader should wait before downloading consecutive pages from the same spider. This can be used to throttle the crawling speed to avoid hitting servers too hard. Decimal numbers are supported. Example:
DOWNLOAD_DELAY = 0.25    # 250 ms of delay
This setting is also affected by the RANDOMIZE_DOWNLOAD_DELAY setting (which is enabled by default). By default, Scrapy doesn't wait a fixed amount of time between requests, but uses a random interval between 0.5 and 1.5 * DOWNLOAD_DELAY.
You can also change this setting per spider.
See also Scrapy's Docs - DOWNLOAD_DELAY
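For the per-spider case, a short sketch, assuming a placeholder example.com start URL: set the download_delay attribute on the spider class, which overrides DOWNLOAD_DELAY for that spider only.

import scrapy

class DelayedSpider(scrapy.Spider):
    name = 'delayed'
    download_delay = 2.0  # roughly two seconds between requests to the same site
    start_urls = ['https://example.com']

    def parse(self, response):
        self.logger.info('Fetched %s', response.url)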