How to save Scrapy Broad Crawl Results?

Scrapy has a built-in way of persisting results in AWS S3 using the FEEDS setting, but for a broad crawl over different domains this would create a single file where the results from all domains are saved.
How could I save the results of each domain in its own separate file?
I wasn't able to find any reference to this in the documentation.

In the FEED_URI setting, you can add placeholders that are filled in from spider attributes (plus built-ins such as %(time)s).
For example, the domain name can be included in the file name by exposing it as a spider attribute named domain and referencing it like this:
FEED_URI = 's3://my-bucket/%(domain)s/%(time)s.json'
This solution would only work if the spider were run one time per domain, but since you haven't explicitly said so, I would assume a single run crawls multiple domains.
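For the one-domain-per-run case, the substitution can be illustrated without running Scrapy: FEED_URI placeholders are resolved with printf-style formatting from spider attributes plus built-ins like %(time)s. The concrete values below are made up for illustration, not produced by Scrapy:

```python
# Roughly what Scrapy does when it resolves FEED_URI: printf-style
# substitution from a dict of spider attributes plus built-ins such as
# %(time)s. The values here are illustrative assumptions.
FEED_URI = 's3://my-bucket/%(domain)s/%(time)s.json'

params = {"domain": "toscrape.com", "time": "2023-01-01T00-00-00"}
print(FEED_URI % params)
# s3://my-bucket/toscrape.com/2023-01-01T00-00-00.json
```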
If you know all domains beforehand, you can generate the value of the FEEDS setting programmatically and use item filtering.
# Assumes that items have a domain field and that all target domains are
# defined in an ALL_DOMAINS variable.
class DomainFilter:
    def __init__(self, feed_options):
        self.domain = feed_options["domain"]

    def accepts(self, item):
        return item["domain"] == self.domain

ALL_DOMAINS = ["toscrape.com", ...]
FEEDS = {
    f"s3://mybucket/{domain}.jsonl": {
        "format": "jsonlines",
        "item_filter": DomainFilter,
        "domain": domain,  # read back in DomainFilter.__init__ via feed_options
    }
    for domain in ALL_DOMAINS
}

Related

Scrapy + extract only text + carriage returns in output file

I am new to Scrapy and trying to extract content from web page, but getting lots of extra characters in the output. See image attached.
How can I update my code to get rid of the characters? I need to extract only the href from the web page.
My code:
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = [
        'http://quotes.toscrape.com/page/1/'
    ]
    rules = ()

    def create_dirs(dir):
        if not os.path.exists(dir):
            os.makedirs(dir)
        else:
            shutil.rmtree(dir)  # removes all the subdirectories!
            os.makedirs(dir)

    def __init__(self, name=None, **kwargs):
        super(AttractionSpider, self).__init__(name, **kwargs)
        self.items_buffer = {}
        self.base_url = "http://quotes.toscrape.com/page/1/"
        from scrapy.conf import settings
        settings.overrides['DOWNLOAD_TIMEOUT'] = 360

    def write_to_file(file_name, content_list):
        with open(file_name, 'wb') as fp:
            pickle.dump(content_list, fp)

    def parse(self, response):
        print("Start scrapping webcontent....")
        try:
            str = ""
            hxs = Selector(response)
            links = hxs.xpath('//li//@href').extract()
            with open('test1_href', 'wb') as fp:
                pickle.dump(links, fp)
            if not links:
                return
                log.msg("No Data to scrap")
            for link in links:
                v_url = ''.join(link.extract())
                if not v_url:
                    continue
                else:
                    _url = self.base_url + v_url
        except Exception as e:
            log.msg("Parsing failed for URL {%s}" % format(response.request.url))
            raise

    def parse_details(self, response):
        print("Start scrapping Detailed Info....")
        try:
            hxs = Selector(response)
            yield l_venue
        except Exception as e:
            log.msg("Parsing failed for URL {%s}" % format(response.request.url))
            raise
def parse_details(self, response):
print ("Start scrapping Detailed Info....")
try:
hxs = Selector(response)
yield l_venue
except Exception as e:
log.msg("Parsing failed for URL {%s}"%format(response.request.url))
raise
Now I must say... you obviously have some experience with Python programming, congrats, and you're clearly doing the official Scrapy docs tutorial, which is great, but for the life of me I have no idea, given the code snippet you have provided, exactly what you're trying to accomplish. But that's ok, here's a couple of things:
You are using a Scrapy crawl spider. When using a CrawlSpider, the rules set the following (pagination, if you will) as well as pointing a callback to the function that should run when the appropriate regular expression in a rule matches a page, to then initialize the extraction or itemization. It is absolutely crucial to understand that you cannot use a CrawlSpider without setting the rules, and, equally important, that with a CrawlSpider you cannot define your own parse function, because parse is already a native built-in function within the CrawlSpider itself. Do go ahead and read the docs, or just create a CrawlSpider and see how it never calls your parse.
Your code
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = [
        'http://quotes.toscrape.com/page/1/'
    ]
    rules = ()  # big no no! s3 rul3s
How it should look:
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = ['http://quotes.toscrape.com']  # this would be considered a base url

    # regex is our bf, know him well; basically all pages that follow
    # this pattern ... page/.* (meaning all of them, no exception)
    rules = (
        Rule(LinkExtractor(allow=r'/page/.*'), callback='parse_item', follow=True),
    )
Number two: going back to what I mentioned about the parse function and the Scrapy crawl spider, you should use parse_item instead. I assume that you at least looked over the official docs, but to sum it up: the reason parse cannot be used is that the crawl spider already uses parse within its own logic, so by defining parse in a CrawlSpider you're overriding a native function it relies on, which can cause all sorts of bugs and issues.
That's pretty straightforward; I don't think I have to show you a snippet, but feel free to go to the official docs and, on the right side where it says "Spiders", scroll down until you hit "CrawlSpider"; it gives some notes with a caution...
To my next point: your initial parse does not have a callback that hands off from parse to parse_details, which leads me to believe that when you perform the crawl you don't go past the first page. Aside from that, you're using the os module to write something out, but you're not actually writing anything useful, so I'm confused about why you are using the write function here instead of read.
I mean, I myself have on many occasions used an external text file or CSV file that includes multiple URLs so I don't have to hard-code them in there, but you're clearly writing out, or trying to write to, a file, which you said was a pipeline? Now I'm even more confused! The point is that I hope you're well aware that if you are trying to create a file or export of your extracted items, there are options to export to already pre-built formats, namely CSV, JSON, and XML. As you said in your response to my comment, if you are indeed using a pipeline, an item, and an item exporter in turn, you can create your own export format as you wish, but if it's only the response URL that you need, why go through all that hassle?
My parting words would be: it would serve you well to go over Scrapy's official docs tutorial again, ad nauseam, stressing the importance of also using settings.py as well as items.py.
# -*- coding: utf-8 -*-
import scrapy
import os
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from quotes.items import QuotesItem

class QcrawlSpider(CrawlSpider):
    name = 'qCrawl'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    rules = (
        Rule(LinkExtractor(allow=r'page/.*'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        rurl = response.url
        item = QuotesItem()
        item['quote'] = response.css('span.text::text').extract()
        item['author'] = response.css('small.author::text').extract()
        item['rUrl'] = rurl
        yield item

        with open(os.path.abspath('') + '_' + "urllisr_" + '.txt', 'a') as a:
            a.write(''.join([rurl, '\n']))
Of course, items.py would be filled out appropriately with the fields you see in the spider, but by including the response URL as an item field I can both write it out with the default Scrapy exporters (CSV etc.) and create my own.
In this case it's a simple text file, but one can get pretty crafty; for example, using the os module the same way, I have created m3u playlists from video hosting sites, and you can get fancy with a custom CSV item exporter. But even beyond that, using a custom pipeline you can write out a custom format for your CSVs or whatever it is that you wish.
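As a sketch of that last point, the file writing done inline in parse_item could live in a small pipeline instead, which is the more idiomatic place for it. The class name and output filename below are my own illustration, not from the original code; it would be enabled via ITEM_PIPELINES in settings.py:

```python
# Minimal pipeline sketch: writes each item's response URL to a plain
# text file. Class name and filename are illustrative assumptions.
class UrlListPipeline:
    def open_spider(self, spider):
        self.out = open("urllist.txt", "w")

    def process_item(self, item, spider):
        self.out.write(item["rUrl"] + "\n")
        return item  # pass the item on to any later pipelines

    def close_spider(self, spider):
        self.out.close()
```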

how to save all items for each url

I add a url list like this. When it goes to the pipeline, it seems all items from the url list are passed to process_item.
How can I separate items according to their specific url? For example, save items from one url to one file.
class MySpider(scrapy.Spider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/1.html',
        'http://www.example.com/2.html',
        'http://www.example.com/3.html',
    ]
Add a ref_url field to the item before yielding it, then check for it in the pipeline. There are other solutions, but this is the most direct one, I guess.
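A minimal sketch of that idea, with illustrative names (the ref_url field and PerUrlPipeline class are my own, not from the question). The spider would set item["ref_url"] = response.url before yielding; the pipeline then keeps one output file per URL:

```python
import json
import re

# Sketch: routes items into one JSON-lines file per ref_url value.
# All names here are illustrative assumptions.
class PerUrlPipeline:
    def open_spider(self, spider):
        self.files = {}

    def process_item(self, item, spider):
        ref = item["ref_url"]
        if ref not in self.files:
            # derive a filesystem-safe name from the URL
            fname = re.sub(r"\W+", "_", ref) + ".jl"
            self.files[ref] = open(fname, "w")
        self.files[ref].write(json.dumps(dict(item)) + "\n")
        return item

    def close_spider(self, spider):
        for f in self.files.values():
            f.close()
```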

Scrapy - target specified URLs only

Am using Scrapy to browse and collect data, but am finding that the spider is crawling lots of unwanted pages. What I'd prefer the spider to do is just begin from a set of defined pages and then parse the content on those pages and then finish. I've tried to implement a rule like the below but it's still crawling a whole series of other pages as well. Any suggestions on how to approach this?
rules = (
    Rule(SgmlLinkExtractor(), callback='parse_adlinks', follow=False),
)
Thanks!
Your extractor is extracting every link because it doesn't have any rule arguments set.
If you take a look at the official documentation, you'll notice that scrapy LinkExtractors have lots of parameters that you can set to customize what your linkextractors extract.
Example:
rules = (
    # only specific domain links
    Rule(LxmlLinkExtractor(allow_domains=['scrapy.org', 'blog.scrapy.org']), <..>),
    # only links that match a specific regex
    Rule(LxmlLinkExtractor(allow=r'.+?/page\d+\.html'), <..>),
    # don't crawl specific file extensions
    Rule(LxmlLinkExtractor(deny_extensions=['pdf', 'html']), <..>),
)
You can also set allowed domains for your spider if you don't want it to wander off somewhere:
class MySpider(scrapy.Spider):
    allowed_domains = ['scrapy.org']
    # will only crawl pages from this domain ^

Looping on Scrapy doesn't work properly

I'm trying to write a small web crawler with Scrapy.
I wrote a crawler that grabs the URLs of certain links on a certain page, and wrote the links to a csv file. I then wrote another crawler that loops on those links, and downloads some information from the pages directed to from these links.
The loop on the links:
cr = csv.reader(open("linksToCrawl.csv", "rb"))
start_urls = []
for row in cr:
    start_urls.append("http://www.zap.co.il/rate" + ''.join(row[0])[1:len(''.join(row[0]))])
If, for example, the URL of the page I'm retrieving information from is:
http://www.zap.co.il/ratemodel.aspx?modelid=835959
then more information can (sometimes) be retrieved from following pages, like:
http://www.zap.co.il/ratemodel.aspx?modelid=835959&pageinfo=2
("&pageinfo=2" was added).
Therefore, my rules are:
rules = (
    Rule(SgmlLinkExtractor(allow=("&pageinfo=\d",),
                           restrict_xpaths=('//a[@class="NumBtn"]',)),
         callback="parse_items", follow=True),
)
It seemed to be working fine. However, it seems that the crawler is only retrieving information from the pages with the extended URLs (with the "&pageinfo=\d"), and not from the ones without them. How can I fix that?
Thank you!
You can override parse_start_url() method in CrawlSpider:
class MySpider(CrawlSpider):
    def parse_items(self, response):
        # put your code here
        ...

    parse_start_url = parse_items
Your rule only allows urls containing "&pageinfo=\d". In effect, only the pages with a matching url will be processed. You need to change the allow parameter for the urls without pageinfo to be processed as well.
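One way to widen the allow parameter can be sketched outside Scrapy; the pattern below is my assumption based on the two URLs shown in the question, and matches both the bare model page and its "&pageinfo=N" continuations:

```python
import re

# A widened allow pattern matching both URL forms from the question.
# The pattern is an assumption, not tested against the real site.
allow_pattern = r"ratemodel\.aspx\?modelid=\d+(&pageinfo=\d+)?$"

for url in ("http://www.zap.co.il/ratemodel.aspx?modelid=835959",
            "http://www.zap.co.il/ratemodel.aspx?modelid=835959&pageinfo=2"):
    print(bool(re.search(allow_pattern, url)))
# True for both URLs
```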

How can I scrape other specific pages on a forum with Scrapy?

I have a Scrapy Crawler that crawls some guides from a forum.
The forum that I'm trying to crawl the data from has a number of pages.
The problem is that I cannot extract the links that I want to because there aren't specific classes or ids to select.
The url structure is like this one: http://www.guides.com/forums/forumdisplay.php?f=108&order=desc&page=1
Obviously I can change the number after desc&page=1 to 2, 3, 4 and so on, but I would like to know the best way to do this.
How can I accomplish that?
PS: This is the spider code
http://dpaste.com/hold/794794/
I can't seem to open the forum URL (always redirects me to another website), so here's a best effort suggestion:
If there are links to the other pages on the thread page, you can create a crawler rule to explicitly follow these links. Use a CrawlSpider for that:
class GuideSpider(CrawlSpider):
    name = "Guide"
    allowed_domains = ['www.guides.com']
    start_urls = [
        "http://www.guides.com/forums/forumdisplay.php?f=108&order=desc&page=1",
    ]
    rules = [
        Rule(SgmlLinkExtractor(allow=("forumdisplay.php.*f=108.*page=",)),
             callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        # Your code
        ...
The spider should automatically deduplicate requests, i.e. it won't follow the same URL twice even if two pages link to it. If there are very similar URLs on the page with only one or two query arguments different (say, order=asc), you can specify deny=(...) in the Rule constructor to filter them out.
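Since deny=(...) also takes regexes, the filtering effect can be sketched in plain Python; the URLs and pattern below are illustrative, mirroring the order=asc example above:

```python
import re

# Emulates what a deny=("order=asc",) argument would filter out of the
# extracted links. URLs are illustrative.
deny_pattern = r"order=asc"
urls = [
    "http://www.guides.com/forums/forumdisplay.php?f=108&order=desc&page=2",
    "http://www.guides.com/forums/forumdisplay.php?f=108&order=asc&page=2",
]
kept = [u for u in urls if not re.search(deny_pattern, u)]
print(kept)
# only the order=desc URL survives
```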