I'm working on a scraping program for an art museum.
I'm new to the Scrapy framework and intermediate in Python at best.
I need to download images from the website and name them with the corresponding value from the parsed data.
I've been going through the Scrapy documentation and Google searches but no luck so far; I'm stuck at the pipeline.
I know I could fix the file names after running Scrapy with a wrapper program, but that seems counterproductive and sloppy.
Each item yielded from the spider looks like this:
{'Artist': 'SomeArtist',
...
'Image Url': 'https://www.nationalgallery.org.uk/media/33219/n-1171-00-000049-hd.jpg',
'Inventory number': 'NG1171'}
I need to name each image by its 'Inventory number'.
I managed to make a custom pipeline, but no luck making it work the way I want to.
The closest I got was this, but it failed miserably by assigning the same self.file_name value to many images:
class DownloadPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # The only point, that I've found, for accessing item dict before downloading
        self.file_name = item['Inventory number']
        yield Request(item["Image Url"])

    def file_path(self, request, response=None, info=None):
        return f"Images/{self.file_name}.jpg"
Something like this would be great:
class DownloadPipeline(ImagesPipeline):
    def file_path(self, request, item, response=None, info=None):
        file_name = item['Inventory number']
        return f"Images/{file_name}.jpg"
Is there any way to make that work?
When you yield the request in get_media_requests you can pass arbitrary data in the meta param, so you can access it through request.meta in your file_path method:
from scrapy import Request
from scrapy.pipelines.images import ImagesPipeline

class DownloadPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        yield Request(
            url=item["Image Url"],
            meta={'inventory_number': item.get('Inventory number')}
        )

    def file_path(self, request, response=None, info=None):
        file_name = request.meta.get('inventory_number')
        return f"Images/{file_name}.jpg"
Read more about Request.meta in the Scrapy documentation.
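As a side note: if I remember correctly, Scrapy 2.4 added an item keyword argument to the media pipelines' file_path method, so on a recent Scrapy version something very close to your desired pipeline should work. An untested sketch:

from scrapy import Request
from scrapy.pipelines.images import ImagesPipeline

class DownloadPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        yield Request(item["Image Url"])

    # Scrapy >= 2.4 (assumption: you can upgrade) passes the item in here directly
    def file_path(self, request, response=None, info=None, *, item=None):
        return f"Images/{item['Inventory number']}.jpg"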
I wrote a small example spider to illustrate my problem:
import json

import scrapy

class ListEntrySpider(scrapy.Spider):
    start_urls = ['https://example.com/lists']

    def parse(self, response):
        for i in json.loads(response.text)['ids']:
            yield scrapy.Request(f'https://example.com/list/{i}', callback=self.parse_lists)

    def parse_lists(self, response):
        for entry in json.loads(response.text)['list']:
            yield ListEntryItem(**entry)
I need to have all the items that result from these multiple requests (all ListEntryItems) in an array inside the spider, so that I can dispatch further requests that depend on all items.
My first idea was to chain the requests and pass the remaining IDs and the already extracted items in the request's meta attribute until the last request is reached:
import json
from typing import List

import scrapy

class ListEntrySpider(scrapy.Spider):
    start_urls = ['https://example.com/lists']

    def parse(self, response):
        ids = json.loads(response.text)['ids']
        yield self._create_request(ids, [])

    def parse_lists(self, response):
        items = response.meta['items']
        items.extend(self._extract_lists(response))
        yield self._create_request(response.meta['ids'], items)

    def finish(self, response):
        items = response.meta['items']
        items.extend(self._extract_lists(response))
        # ... dispatch the requests that depend on all items here ...

    def _extract_lists(self, response):
        for entry in json.loads(response.text)['list']:
            yield ListEntryItem(**entry)

    def _create_request(self, ids: list, items: List[ListEntryItem]):
        i = ids.pop(0)
        return scrapy.Request(
            f'https://example.com/list/{i}',
            meta={'ids': ids, 'items': items},
            callback=self.parse_lists if ids else self.finish
        )
As you can see, my solution looks very complex. I'm looking for something more readable and less complex.
There are different approaches for this. One is chaining, as you do. Problems occur if one of the requests in the middle of the chain is dropped for any reason: you have to be really careful about that and handle all possible errors / ignored requests.
Another approach is to use a separate spider for all "grouped" requests.
You can start those spiders programmatically and pass a bucket (e.g. a dict) as a spider attribute. Within your pipeline you add the items from each request to this bucket. From "outside" you listen to the spider_closed signal and get this bucket, which then contains all your items.
Look here for how to start a spider programmatically via a crawler runner:
https://docs.scrapy.org/en/latest/topics/practices.html#running-multiple-spiders-in-the-same-process
pass a bucket to your spider when calling crawl() of your crawler runner
crawler_runner_object.crawl(YourSpider, bucket=dict())
and catch the spider_closed signal:
from scrapy import signals
from scrapy.signalmanager import dispatcher

def on_spider_closed(spider):
    bucket = spider.bucket

dispatcher.connect(on_spider_closed, signal=signals.spider_closed)
This approach might seem even more complicated than chaining your requests, but it actually takes a lot of complexity out of the problem, because within your spider you can make your requests without having to care much about all the other requests.
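A rough sketch of how the pieces could fit together (the pipeline name, the bucket layout and the reactor handling are my assumptions, not something from your code):

from scrapy import signals
from scrapy.crawler import CrawlerRunner
from scrapy.signalmanager import dispatcher
from twisted.internet import reactor

class BucketPipeline:
    # collects every yielded item into the dict passed to crawl() as `bucket`
    def process_item(self, item, spider):
        spider.bucket.setdefault('items', []).append(item)
        return item

def on_spider_closed(spider):
    # at this point the bucket holds the items from all of the spider's requests
    all_items = spider.bucket.get('items', [])
    # ... dispatch whatever depends on all items here ...

dispatcher.connect(on_spider_closed, signal=signals.spider_closed)

runner = CrawlerRunner(settings={
    'ITEM_PIPELINES': {'__main__.BucketPipeline': 100},
})
d = runner.crawl(ListEntrySpider, bucket=dict())
d.addBoth(lambda _: reactor.stop())
reactor.run()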
With Scrapy we can get response.url and response.request.url, but how do we know which parent URL a given response.url / response.request.url was extracted from?
Thank you,
Ken
You can use Request.meta to keep track of such information.
When you yield your request, include response.url in the meta:
yield response.follow(link, …, meta={'source_url': response.url})
Then read it in your parsing method:
source_url = response.meta['source_url']
That is the most straightforward way to do this, and you can use this method to keep track of original URLs even across different parsing methods, if you wish.
Otherwise, you might want to look into taking advantage of the redirect_urls meta key, which keeps track of redirect jumps.
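For illustration, here is a minimal spider using that pattern (the spider name and the selector are made up):

import scrapy

class SourceTrackingSpider(scrapy.Spider):
    name = 'source_tracking'
    start_urls = ['https://example.com/']

    def parse(self, response):
        for link in response.css('a::attr(href)').getall():
            # remember which page this link was extracted from
            yield response.follow(link, callback=self.parse_detail,
                                  meta={'source_url': response.url})

    def parse_detail(self, response):
        yield {
            'url': response.url,
            'parent_url': response.meta['source_url'],
        }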
I am new to Scrapy and trying to extract content from a web page, but I'm getting lots of extra characters in the output (see the image attached).
How can I update my code to get rid of the characters? I need to extract only the href values from the web page.
My code:
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = [
        'http://quotes.toscrape.com/page/1/'
    ]
    rules = ()

    def create_dirs(dir):
        if not os.path.exists(dir):
            os.makedirs(dir)
        else:
            shutil.rmtree(dir)  # removes all the subdirectories!
            os.makedirs(dir)

    def __init__(self, name=None, **kwargs):
        super(AttractionSpider, self).__init__(name, **kwargs)
        self.items_buffer = {}
        self.base_url = "http://quotes.toscrape.com/page/1/"
        from scrapy.conf import settings
        settings.overrides['DOWNLOAD_TIMEOUT'] = 360

    def write_to_file(file_name, content_list):
        with open(file_name, 'wb') as fp:
            pickle.dump(content_list, fp)

    def parse(self, response):
        print("Start scrapping webcontent....")
        try:
            str = ""
            hxs = Selector(response)
            links = hxs.xpath('//li//@href').extract()
            with open('test1_href', 'wb') as fp:
                pickle.dump(links, fp)
            if not links:
                return
                log.msg("No Data to scrap")
            for link in links:
                v_url = ''.join(link.extract())
                if not v_url:
                    continue
                else:
                    _url = self.base_url + v_url
        except Exception as e:
            log.msg("Parsing failed for URL {%s}" % format(response.request.url))
            raise

    def parse_details(self, response):
        print("Start scrapping Detailed Info....")
        try:
            hxs = Selector(response)
            yield l_venue
        except Exception as e:
            log.msg("Parsing failed for URL {%s}" % format(response.request.url))
            raise
Now I must say... obviously you have some experience with Python programming, congrats, and you're obviously doing the official Scrapy docs tutorial, which is great, but for the life of me I have no idea, given the code snippet you have provided, exactly what you're trying to accomplish. But that's ok, here's a couple of things:
You are using a Scrapy CrawlSpider. When using a CrawlSpider, the rules set the link following (pagination, if you will) as well as pointing, via a callback, to the function that should run when the appropriate regular expression matches a page, to then initialize the extraction or itemization. It is absolutely crucial to understand that you cannot use a CrawlSpider without setting the rules and, equally important, that when using a CrawlSpider you cannot use the parse method, because parse is already a native built-in part of the CrawlSpider itself. Go ahead and read the docs, or just create a CrawlSpider and see how it breaks when you define parse.
Your code
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = [
        'http://quotes.toscrape.com/page/1/'
    ]
    rules = ()  # big no no! s3 rul3s
How it should look:
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = ['http://quotes.toscrape.com']  # this would be considered a base url

    # regex is our best friend, know him well; basically this matches all pages
    # that follow the pattern page/.* (meaning all following pages, no exception)
    rules = (
        Rule(LinkExtractor(allow=r'/page/.*'), callback='parse_item', follow=True),
    )
Number two: going back to what I mentioned about using the parse method with a Scrapy CrawlSpider — you should use "parse_item". I assume that you at least looked over the official docs, but to sum it up, the reason parse cannot be used is that the CrawlSpider already uses parse within its own logic, so by defining parse in a CrawlSpider you're overriding a native method it relies on, which can cause all sorts of bugs and issues.
That's pretty straightforward; I don't think I have to show you a snippet, but feel free to go to the official docs, and on the right side where it says "Spiders" scroll down until you hit "CrawlSpider"; it gives some notes with a caution...
To my next point: coming from your initial parse you do not have a callback that goes from parse to parse_details, which leads me to believe that when you perform the crawl you don't go past the first page. Aside from that, you're trying to create a text file (or you're using the os module to write something out, but you're not actually writing anything), so I'm super confused as to why you are using a write function instead of read.
I mean, I myself have on many occasions used an external text file or CSV file that contains multiple URLs so I don't have to hard-code them, but you're clearly writing, or trying to write, to a file, which you said was a pipeline? Now I'm even more confused! But the point is that I hope you're well aware that if you are trying to create a file or export of your extracted items, there are options to export to three already pre-built formats, namely CSV, JSON and XML. And as you said in your response to my comment, if you're indeed using a pipeline and an item exporter in turn, you can create your own export format as you wish, but if it's only the response URL that you need, why go through all that hassle?
My parting words would be: it would serve you well to go over Scrapy's official docs tutorial again, ad nauseam, stressing the importance of also using settings.py as well as items.py.
# -*- coding: utf-8 -*-
import os

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from quotes.items import QuotesItem

class QcrawlSpider(CrawlSpider):
    name = 'qCrawl'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    rules = (
        Rule(LinkExtractor(allow=r'page/.*'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        rurl = response.url
        item = QuotesItem()
        item['quote'] = response.css('span.text::text').extract()
        item['author'] = response.css('small.author::text').extract()
        item['rUrl'] = rurl
        yield item

        with open(os.path.abspath('') + '_' + "urllist_" + '.txt', 'a') as a:
            a.write(''.join([rurl, '\n']))
Of course, items.py would be filled out appropriately with the fields you see in the spider, but by including the response URL both as an item field and in the file write, I can both export with the default Scrapy methods (CSV etc.) and create my own.
In this case it's a simple text file, but one can get pretty crafty; for example, using the os module the same way I have created m3u playlists from video hosting sites, and you can get fancy with a custom CSV item exporter. Even beyond that, using a custom pipeline you can write out a custom format for your CSVs or whatever it is that you wish.
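For completeness, my guess at the matching items.py (the field names are taken from the spider above, so treat this as a sketch):

import scrapy

class QuotesItem(scrapy.Item):
    quote = scrapy.Field()
    author = scrapy.Field()
    rUrl = scrapy.Field()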
Would the correct method with a Scrapy spider for automatically entering a zip code value "27517" into the entry box on this website (Locations of Junkyards) be to use a FormRequest?
Here is what I have right now:
import scrapy
from scrapy.http import FormRequest
from scrapy.item import Item, Field
from scrapy.http import FormRequest
from scrapy.spider import BaseSpider

class LkqSpider(scrapy.Spider):
    name = "lkq"
    allowed_domains = ["http://www.lkqcorp.com/en-us/locationResults/"]
    start_urls = ['http://www.lkqcorp.com/en-us/locationResults/']

    def start_requests(self):
        return [FormRequest("http://www.lkqcorp.com/en-us/locationResults/",
                            formdata={'dnnVariable': '27517'},
                            callback=self.parse)]

    def parsel(self):
        print self.status
It doesn't do anything when run, though. Is FormRequest mainly for completing login fields? What would be the best way to get to THIS page (which comes up after the search for the zip 27517 and is where I would start scraping my desired information with a Scrapy spider)?
This isn't really a case for FormRequest, as FormRequest is essentially just a name for a POST request in Scrapy; of course it helps you fill a form, but a form submission is also normally a POST request.
You need a debugging console (I prefer Firebug for Firefox) to check which requests are being made. It looks like it is a GET request and quite simple to replicate; the URL would be something like this, where you'll have to change the number after /fullcrit/ to the desired zip code. You also need the lat and lng arguments; for those you could use the Google Maps API (check this answer for an example of how to get them), but to summarise, just make that Request and read the location argument.
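Just to illustrate the idea of replicating the GET request — the endpoint, parameter names and coordinates below are placeholders, not the real URL that was linked above:

import scrapy
from urllib.parse import urlencode

class LkqSpider(scrapy.Spider):
    name = 'lkq'

    def start_requests(self):
        # placeholder values: the real URL pattern and the lat/lng for the zip
        # code come from the debugging console / geocoding step described above
        params = {'zip': '27517', 'lat': '35.91', 'lng': '-79.05'}
        yield scrapy.Request(
            'https://www.lkqcorp.com/en-us/locationResults/?' + urlencode(params),
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info('got %s with status %s', response.url, response.status)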
Apologies if this is a scrapy noob question but I have spent ages looking for the answer to this:
I want to store the raw data from each & every URL I crawl in my local filesystem as a separate file (ie response.body -> /files/page123.html) - ideally with the filename being a hash of the URL. This is so I can do further processing of the HTML (ie further parsing, indexing in Solr/ElasticSearch etc).
I've read the docs and I'm not sure if there's a built-in way of doing this. Since the pages are being downloaded by the framework by default, it doesn't seem to make sense to write custom pipelines, etc.
As paul t said, the HttpCache middleware might work for you, but I'd advise writing your own custom pipeline.
Scrapy has built-in ways of exporting data to files, but they're for JSON, XML and CSV, not raw HTML (see the short feed-export sketch at the end of this answer). Don't worry though, it's not too hard!
Provided your items.py looks something like:
from scrapy.item import Item, Field

class Listing(Item):
    url = Field()
    html = Field()
and you've been saving your scraped data to those items in your spider like so:
item['url'] = response.url
item['html'] = response.body
your pipelines.py would just be:
import hashlib

class HtmlFilePipeline(object):
    def process_item(self, item, spider):
        # choose whatever hashing func works for you; encode first so it also works on Python 3
        file_name = hashlib.sha224(item['url'].encode('utf-8')).hexdigest()
        with open('files/%s.html' % file_name, 'w+b') as f:
            f.write(item['html'])
        return item
Hope that helps. Oh, and don't forget to put a files/ directory in your project root and add this to your settings.py:
ITEM_PIPELINES = {
    'myproject.pipeline.HtmlFilePipeline': 300,
}
source: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
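And for contrast, the built-in feed exports mentioned at the top of this answer are just a settings entry; a sketch, assuming a reasonably recent Scrapy version that supports the FEEDS setting (output paths are examples):

# settings.py -- built-in feed export, only useful if JSON/CSV/XML output is enough
FEEDS = {
    'output/items.json': {'format': 'json'},
    'output/items.csv': {'format': 'csv'},
}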