For the past month I've been using Scrapy for a web crawling project I've begun.
This project involves pulling down the full document content of all web pages in a single domain name that are reachable from the home page. Writing this using Scrapy was quite easy, but it simply runs too slowly. In 2-3 days I can only pull down 100,000 pages.
I'm starting to think my initial suspicion was right: Scrapy simply isn't meant for this type of crawl.
I've started looking into Nutch and Methabot in the hope of better performance. The only data I need to store during the crawl is the full content of each web page and, preferably, all the links on the page (but even that can be done in post-processing).
I'm looking for a crawler that is fast and employs many parallel requests.
This may be the fault of the server rather than Scrapy. The server may not be as fast as you'd like, or it (or the webmaster) may detect crawling and limit the speed for that connection/cookie.
Do you use a proxy? That may slow down crawling too.
This may also be Scrapy being deliberately cautious: if you crawl too aggressively you may get banned from the server. For my hand-written C++ crawler I artificially set a limit of 1 request per second, and even that is enough for a single thread (1 req * 60 secs * 60 minutes * 24 hours = 86,400 req/day). If you are interested you can email whalebot.helmsman {AT} gmail.com.
Scrapy lets you configure the number of concurrent requests and the delay between requests in its settings.
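For example, a minimal settings.py sketch of the throttle-related options (the values here are illustrative, not recommendations):

# settings.py -- concurrency and politeness knobs (illustrative values)
CONCURRENT_REQUESTS = 32              # total concurrent requests across all domains
CONCURRENT_REQUESTS_PER_DOMAIN = 16   # cap per domain so a single site isn't hammered
DOWNLOAD_DELAY = 0.25                 # seconds to wait between requests to the same site

# Optionally let Scrapy adapt the delay to how the server responds
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1.0
AUTOTHROTTLE_MAX_DELAY = 10.0

Raising concurrency only helps if the bottleneck really is on your side; if the server is throttling you, it just gets you banned faster.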
Do you know where the bottleneck is? As whalebot.helmsman pointed out, the limit may not be in Scrapy itself, but on the server you're crawling.
You should start by finding out whether the bottleneck is the network or CPU.
I have configured Nutch 2.3.1 to crawl some news websites. Since the websites' homepages change every day or so, I want to handle the homepage differently: for the homepage, only the main category links should be crawled instead of the text, since the text will change after a while (I have observed Google doing something similar).
For the rest of the pages it's working fine (crawling text etc.).
At the moment Nutch doesn't offer any special treatment for homepages; a homepage is just one more URL to crawl. If you want this behaviour you'll probably need to customise some portions of Nutch.
If you're collecting a fixed set of URLs (which you usually put in the seed file) you can attach some metadata to those URLs and apply a different strategy to them, for instance setting a really high score and a short fetch interval (https://github.com/apache/nutch/blob/release-2.3.1/src/java/org/apache/nutch/crawl/InjectorJob.java#L56-L59).
Since the generator job sorts URLs by score, this should work as long as all other URLs have a score lower than the value you use for the seed URLs. Keep in mind that this will cause Nutch to crawl these URLs every time a new cycle starts (since the seed URLs will be at the top all the time).
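For illustration, a seed-file entry carrying that metadata might look roughly like the line below, with tab-separated key=value pairs after the URL; the score and interval values are just placeholders, and nutch.score / nutch.fetchInterval are the metadata keys read by the InjectorJob linked above:

http://news.example.com/    nutch.score=100    nutch.fetchInterval=3600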
If you discover new homepages during your normal crawl cycle, then it is trickier, because Nutch doesn't have any way of detecting whether a given URL is a homepage. In that case you'll need to check whether the current URL is a homepage and, if it is, modify its score/fetch interval to ensure that it ends up among the top-ranking URLs.
This workaround could potentially cause some issues: Nutch could end up always crawling only the homepages and never the rest of the URLs, which is not what you want.
You could also write your own generator; that way you have more control and don't rely on the score and fetch interval alone.
Full disclosure: although I've used a similar approach in the past, we ended up switching this system to StormCrawler (we were building a news search engine) because we needed more control over when pages were fetched (the batch nature of Nutch is not a great fit for this use case), plus some other business cases that needed a more near-real-time (NRT) approach.
I am new to crawling and would like to know whether it's possible to use Scrapy to crawl a site, like CNBC.com, incrementally. For example, if today I crawl all pages of the site, then from tomorrow on I only want to collect newly posted pages, to avoid recrawling all the old pages.
Thank you for any info. or input on this.
Yes you can, and it's actually quite easy. Every news website has a few very important index pages, like the homepage and the category pages (e.g. politics, entertainment, etc.). There is no article that doesn't pass through these pages for at least a few minutes. Scan those pages every minute or so and save just the links. Then do a diff with what you already have in your database and, a few times a day, issue a crawl to scrape all the missing links. Very standard practice.
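A minimal sketch of that diff step, assuming you keep already-seen article URLs in a small SQLite table; the index-page URLs, the extract_links helper and the seen.db file are all hypothetical placeholders:

# diff_new_links.py -- poll index pages, diff against URLs we already know about
import sqlite3
import requests
from lxml import html

INDEX_PAGES = [
    "https://www.example.com/",            # placeholder homepage
    "https://www.example.com/politics/",   # placeholder category page
]

def extract_links(page_url):
    # Grab all anchor hrefs from an index page; real code would filter to article URLs
    tree = html.fromstring(requests.get(page_url, timeout=10).content)
    return set(tree.xpath("//a/@href"))

def new_links(db_path="seen.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS seen (url TEXT PRIMARY KEY)")
    found = set()
    for page in INDEX_PAGES:
        found |= extract_links(page)
    seen = {row[0] for row in con.execute("SELECT url FROM seen")}
    fresh = found - seen
    con.executemany("INSERT OR IGNORE INTO seen (url) VALUES (?)", [(u,) for u in fresh])
    con.commit()
    return fresh  # hand these to a Scrapy crawl a few times a day

Run this every minute or so from a scheduler; the returned set is exactly the "missing links" to feed into the real crawl.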
Please try the Scrapy plugin scrapy-deltafetch, which would make your life easier.
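If I recall its documentation correctly, enabling it is roughly a matter of registering the spider middleware in settings.py:

# settings.py -- enable scrapy-deltafetch so already-scraped requests are skipped
SPIDER_MIDDLEWARES = {
    'scrapy_deltafetch.DeltaFetch': 100,
}
DELTAFETCH_ENABLED = True

It keeps a small key/value store of requests that already produced items, so later runs only fetch new pages.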
Short answer: no.
Longer answer: what you could do is write the article id or the article URL to a file, and during scraping match the id or URL against the records in the file.
Remember to load the file only once and assign it to a variable; don't reload it on every iteration while scraping.
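A rough sketch of that idea inside a spider, assuming the file simply holds one already-scraped URL per line (the file name, spider name and selectors are hypothetical):

# incremental_news.py -- load seen URLs once at start, skip them while crawling
import scrapy

class NewsSpider(scrapy.Spider):
    name = "news"
    start_urls = ["https://www.example.com/"]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        try:
            with open("scraped_urls.txt") as f:
                self.seen = set(line.strip() for line in f)  # loaded once, not per item
        except FileNotFoundError:
            self.seen = set()

    def parse(self, response):
        for href in response.css("a::attr(href)").getall():
            url = response.urljoin(href)
            if url in self.seen:
                continue  # already scraped in a previous run
            yield scrapy.Request(url, callback=self.parse_article)

    def parse_article(self, response):
        yield {"url": response.url, "title": response.css("title::text").get()}

Appending newly scraped URLs back to the file (e.g. from a pipeline) keeps the next run incremental.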
I'd like to hear the differences between three different approaches to using Scrapy to crawl 1000 sites.
For example, say I want to scrape 1000 photo sites. They almost all have the same structure: one kind of photo list page and another kind of large photo page, but the HTML of those list and photo description pages won't be identical across sites.
Another example: I want to scrape 1000 WordPress blogs, but only the blogs' articles.
The first is exploring all 1000 sites using one Scrapy project.
The second is having all 1000 sites under the same Scrapy project, with all items in items.py and each site having its own spider.
The third is similar to the second, but with one spider for all the sites instead of separating them.
What are the differences, and which do you think is the right approach? Is there any other, better approach I've missed?
I had 90 sites to pull from, so creating one crawler per site wasn't a great option. The idea was to be able to run in parallel. I also split the work so that similar page formats were handled in one place.
So I ended up with 2 crawlers:
Crawler 1 - URL Extractor. This extracts all detail-page URLs from the top-level listing pages into a file (or files).
Crawler 2 - Fetch Details.
This reads from the URL file and extracts the item details.
This allowed me to fetch the URLs first and estimate the number of threads I might need for the second crawler.
Since each crawler worked on a specific page format, there were quite a few functions I could reuse.
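A minimal sketch of that two-stage pattern in Scrapy (spider names, selectors and the file name are hypothetical):

# Crawler 1: collect detail-page URLs from listing pages into a file
import scrapy

class UrlExtractorSpider(scrapy.Spider):
    name = "url_extractor"
    start_urls = ["https://www.example.com/listing"]

    def parse(self, response):
        with open("detail_urls.txt", "a") as f:
            for href in response.css("a.detail::attr(href)").getall():
                f.write(response.urljoin(href) + "\n")

# Crawler 2: read the URL file and scrape the item details
class DetailSpider(scrapy.Spider):
    name = "detail_fetcher"

    def start_requests(self):
        with open("detail_urls.txt") as f:
            for url in f:
                yield scrapy.Request(url.strip(), callback=self.parse_detail)

    def parse_detail(self, response):
        yield {"url": response.url, "title": response.css("title::text").get()}

In practice the first spider could just as well yield items and use Scrapy's feed exports (scrapy crawl url_extractor -o detail_urls.csv) instead of writing the file by hand.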
My website has about 500,000 pages. I made a sitemap.xml and listed all pages in it (I know about the limit of 50,000 links per file, so I have 10 sitemaps). I submitted the sitemaps in Webmaster Tools and everything seems OK (no errors, and I can see the submitted and indexed links). However I have a problem with spidering frequency: Googlebot spiders the same page 4 times per day, even though in sitemap.xml I say the page changes yearly.
This is an example:
<url>
<loc>http://www.domain.com/destitution</loc>
<lastmod>2015-01-01T16:59:23+02:00</lastmod>
<changefreq>yearly</changefreq>
<priority>0.1</priority>
</url>
1) So how can I tell Googlebot not to spider so frequently, since it overloads my server?
2) The website has several pages like http://www.domain.com/destitution1, http://www.domain.com/destitution2 ... and I set the canonical URL to http://www.domain.com/destitution. Might that be the reason for the repeated spidering?
You can report this to the Google crawling team; see here:
In general, specific Googlebot crawling-problems like this are best handled through Webmaster Tools directly. I'd go through the Site Settings for your main domain, Crawl Rate, and then use the "Report a problem with Googlebot" form there. The submissions through this form go to our Googlebot team, who can work out what (or if anything) needs to be changed on our side. They generally won't be able to reply, and won't be able to process anything other than crawling issues, but they sure know Googlebot and can help tweak what it does.
https://www.seroundtable.com/google-crawl-report-problem-19894.html
The crawling will slow down progressively. Bots are likely revisiting your pages because there are internal links between your pages.
In general, canonicals tend to reduce crawling rates, but at the beginning Google's bots need to crawl both the source and the target page. You will see the benefit later.
Google's bots don't necessarily take the lastmod and changefreq information into account, but if they establish that the content is not modified, they will come back less often. It is a matter of time; every URL has a scheduler for revisits.
Bots adapt to the capacity of the server (see the crawling summary I maintain for more details). If it becomes an issue, you can temporarily slow bots down by returning HTTP error code 500 to them; they will stop and come back later.
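As a rough illustration of that last point (not a recommendation; the framework and the user-agent check here are just placeholder assumptions), a web app could temporarily answer crawler requests with a 500:

# hedged sketch: temporarily answer known crawlers with HTTP 500 to back them off
from flask import Flask, abort, request

app = Flask(__name__)
EMERGENCY_BRAKE = True  # flip this on only while the server is overloaded

@app.before_request
def slow_down_bots():
    ua = request.headers.get("User-Agent", "")
    if EMERGENCY_BRAKE and "Googlebot" in ua:
        abort(500)  # the bot stops and retries later at a lower rate

@app.route("/")
def index():
    return "normal page content"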
I don't believe there is a crawling issue with your site. What you see is normal behavior. When several sitemaps are submitted at once, the crawling rates can be temporarily raised.
I started with Scrapy some days ago and learned about scraping particular sites, i.e. the dmoz.org example; so far it's fine and I like it. As I want to learn about search engine development, I aim to build a crawler (plus storage, indexer, etc.) for a large number of websites of any "color" and content.
So far I have also tried depth-first-order and breadth-first-order crawling.
At the moment I use just one Rule; I set some paths and some domains to skip:
rules = (
    Rule(SgmlLinkExtractor(deny=path_deny_base, deny_domains=deny_domains),
         callback='save_page', follow=True),
)
I have one pipeline, a MySQL store for the url, body and headers of the downloaded pages, done via a PageItem with these fields.
My questions for now are:
Is it fine to use an Item for simply storing pages?
How can the spider check whether a page has already been crawled (in the last six months, say)? Is that built in somehow?
Is there something like a blacklist for useless domains, i.e. placeholder domains, link farms, etc.?
There are many other issues, like storage, but I'll stop here with just one more general search-engine question:
Is there a way to obtain crawl-result data from other professional crawlers? Of course it would have to be shipped on hard disks, otherwise the data volume would be about the same as if I crawled it myself (compression aside).
I will try to answer only two of your questions:
Is it fine to use an Item for simply storing pages?
AFAIK, Scrapy doesn't care what you put into an Item's fields. Only your pipeline will be dealing with them.
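A minimal sketch of such an item and a storage pipeline; the field names (url, body, headers) come from the question, while the JSON-lines file stands in for the MySQL store as an illustrative placeholder:

# items.py -- a PageItem holding the raw page, as described in the question
import scrapy

class PageItem(scrapy.Item):
    url = scrapy.Field()
    body = scrapy.Field()
    headers = scrapy.Field()

# pipelines.py -- the pipeline is the only place these fields are interpreted
import json

class PageStoragePipeline:
    def open_spider(self, spider):
        self.out = open("pages.jsonl", "a")  # placeholder for the MySQL store

    def close_spider(self, spider):
        self.out.close()

    def process_item(self, item, spider):
        self.out.write(json.dumps(dict(item), default=str) + "\n")
        return item

The pipeline still needs to be registered under ITEM_PIPELINES in settings.py.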
How can the spider check whether a page has already been crawled (in the last six months, say)? Is that built in somehow?
Scrapy has a duplicates filter, but it only filters duplicates within the current run. You have to prevent Scrapy yourself from re-crawling sites you crawled six months ago.
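One way to do that manually is a small downloader middleware that consults your own store of previously crawled URLs and drops recent ones; the SQLite table and the six-month window below are assumptions taken from the question:

# middlewares.py -- sketch: skip URLs crawled within the last six months
import sqlite3
import time
from scrapy.exceptions import IgnoreRequest

SIX_MONTHS = 182 * 24 * 3600  # seconds

class RecentlyCrawledMiddleware:
    def __init__(self):
        self.db = sqlite3.connect("crawled.db")
        self.db.execute("CREATE TABLE IF NOT EXISTS crawled (url TEXT PRIMARY KEY, ts REAL)")

    def process_request(self, request, spider):
        row = self.db.execute("SELECT ts FROM crawled WHERE url = ?", (request.url,)).fetchone()
        if row and time.time() - row[0] < SIX_MONTHS:
            raise IgnoreRequest("seen %s recently" % request.url)
        return None  # let the request proceed

    def process_response(self, request, response, spider):
        self.db.execute("INSERT OR REPLACE INTO crawled (url, ts) VALUES (?, ?)",
                        (request.url, time.time()))
        self.db.commit()
        return response

You would also need to enable it under DOWNLOADER_MIDDLEWARES in settings.py.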
As for questions 3 and 4 - I didn't understand them.