Scrapy spider not following pagination - scrapy

I am using the code from this link (https://github.com/eloyz/reddit/blob/master/reddit/spiders/pic.py), but somehow I am unable to visit the paginated pages.
I am using Scrapy 1.3.0.

You don't have any mechanism for processing the next page; all you do is gather images.
Here is what you should be doing. I wrote some selectors but didn't test them:
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy import Request
import urlparse

class xxx_spider(Spider):
    name = "xxx"
    allowed_domains = ["xxx.com"]

    def start_requests(self):
        url = 'first page url'
        yield Request(url=url, callback=self.parse, meta={"page": 1})

    def parse(self, response):
        page = response.meta["page"] + 1
        html = Selector(response)
        pics = html.css('div.thing')
        for selector in pics:
            item = PicItem()
            item['image_urls'] = selector.xpath('a/@href').extract()
            item['title'] = selector.xpath('div/p/a/text()').extract()
            item['url'] = selector.xpath('a/@href').extract()
            yield item

        next_link = html.css("span.next-button a::attr(href)").extract_first()
        if next_link is not None:
            yield Request(url=next_link, callback=self.parse, meta={"page": page})
Similar to what you did, but after gathering the images I check for a next-page link; if it exists, I yield another request for it.
Mehmet

Related

how to call one spider from another spider on scrapy

I have two spiders, and I want one to call the other with the scraped information, which is not links that I could follow. Is there any way of doing this, calling one spider from another?
To illustrate better the problem: the url of the "one" page is the form /one/{item_name}, where {item_name} is the information I can get from the page /other/
...
<li class="item">item1</li>
<li class="item">someItem</li>
<li class="item">anotherItem</li>
...
Then I have the spider OneSpider that scrapes /one/{item_name}, and OtherSpider that scrapes /other/ and retrieves the item names, as shown below:
class OneSpider(Spider):
    name = 'one'

    def __init__(self, item_name, *args, **kwargs):
        super(OneSpider, self).__init__(*args, **kwargs)
        self.start_urls = [ f'/one/{item_name}' ]

    def parse(self, response):
        ...

class OtherSpider(Spider):
    name = 'other'
    start_urls = [ '/other/' ]

    def parse(self, response):
        itemNames = response.css('li.item::text').getall()
        # TODO:
        # for each item name
        # scrape /one/{item_name}
        # with the OneSpider
I already checked these two questions: How to call particular Scrapy spiders from another Python script, and scrapy python call spider from spider, and several other questions where the main solution is creating another method inside the class and passing it as the callback of new requests. But I don't think that is applicable when these new requests would have customized URLs.
Scrapy doesn't have the ability to call one spider from another spider (see the related issue in the Scrapy GitHub repo).
However, you can merge the logic from your two spiders into a single spider class:
import scrapy

class OtherSpider(scrapy.Spider):
    name = 'other'
    start_urls = [ '/other/' ]

    def parse(self, response):
        itemNames = response.css('li.item::text').getall()
        for item_name in itemNames:
            yield scrapy.Request(
                url = f'/one/{item_name}',
                callback = self.parse_item
            )

    def parse_item(self, response):
        # parse method from your OneSpider class
        ...

How to link the items.py and my spider file?

I'm new to scrapy and trying to scrape a page which has several links that I want to follow, scraping the content from those pages as well; each of those pages has another link that I want to scrape.
I tried this path in the shell and it worked, but I don't know what I am missing here. I want to be able to crawl through two pages by following the links.
I tried reading through tutorials but I don't really understand what I am missing here.
This is my items.py file.
import scrapy

# item class included here
class ScriptsItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
And here is my scripts.py file.
import scrapy
import ScriptsItem

class ScriptsSpider(scrapy.Spider):
    name = 'scripts'
    allowed_domains = ['https://www.imsdb.com/TV/Futurama.html']
    start_urls = ['http://https://www.imsdb.com/TV/Futurama.html/']
    BASE_URL = 'https://www.imsdb.com/TV/Futurama.html'

    def parse(self, response):
        links = response.xpath('//table//td//p//a//@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        item = ScriptsItem()
        item["link"] = response.url
        item["attr"] = "".join(response.xpath("//table[@class = 'script-details']//tr[2]//td[2]//a//text()").extract())
        return item
Replace
    import ScriptsItem
with
    from your_project_name.items import ScriptsItem
where your_project_name is the name of your project.
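A side note on the question's parse method: building absolute_url with BASE_URL + link breaks when the page returns root-relative hrefs. The standard library's urljoin resolves them the way a browser would; a small sketch (the script path is an illustrative example, not a real link from the site):

```python
from urllib.parse import urljoin  # on Python 2: from urlparse import urljoin

base = 'https://www.imsdb.com/TV/Futurama.html'
href = '/scripts/example-script.html'  # an illustrative root-relative link

# Plain concatenation keeps the /TV/Futurama.html part and produces a bad path:
print(base + href)          # https://www.imsdb.com/TV/Futurama.html/scripts/example-script.html
# urljoin resolves the href against the base the way a browser would:
print(urljoin(base, href))  # https://www.imsdb.com/scripts/example-script.html
```

In Scrapy 1.4+, response.urljoin(link) or response.follow(link) does this resolution for you.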

Extract text from table with Scrapy

I'm trying to extract job titles inside a table on this page: http://www.chalmers.se/en/about-chalmers/Working-at-Chalmers/Vacancies/Pages/default.aspx
This is the code, but it always returns empty results. Any idea how to fix this?
import os
from scrapy.spiders import CrawlSpider
from scrapy.selector import Selector

class mySpider(CrawlSpider):
    name = "myspider"
    allowed_domains = ["www.chalmers.se"]
    start_urls = [
        "http://www.chalmers.se/en/about-chalmers/Working-at-Chalmers/Vacancies/Pages/default.aspx",
    ]

    def parse(self, response):
        sel = response.selector
        # try to extract text from a tag inside <td>
        for tr in sel.css("table#jobsTable>tbody>tr"):
            my_title = tr.xpath('td[@class="jobitem"]/a/text()').extract()
            print '================', my_title
I also tried giving an absolute HTML path, like below, but still got an empty title:
my_title = response.xpath('/html/body/div/div[1]/div/div[11]/div/table/tbody/tr[1]/td[2]/a/text()').extract()
The website loads the jobs table from another source (via an AJAX call).
So you just need to start from a different URL:
start_urls = ['https://web103.reachmee.com/ext/I003/304/main?site=5&validator=a72aeedd63ec10de71e46f8d91d0d57c&lang=UK&ref=&ihelper=http://www.chalmers.se/en/about-chalmers/Working-at-Chalmers/Vacancies/Pages/default.aspx']

Automate page scroll to down in Splash and Scrapy

I am crawling a site which uses lazy loading for product images.
For this reason I included scrapy-splash so that the JavaScript can be rendered, and with Splash I can provide a wait argument. Previously I thought that it was because of timing that the raw scrapy.Request was returning a placeholder image instead of the originals.
I've tried a wait argument of 29.0 secs too, but my crawler still hardly gets 10 items (it should bring back 280 items based on my calculations). I have an item pipeline which checks whether the image field is empty in the item, in which case I raise DropItem.
I am not sure, but I also noticed that it's not only a wait problem. It looks like the images get loaded as I scroll down.
What I am looking for is a way to automate scroll-to-bottom behaviour within my requests.
Here is my code
Spider
def parse(self, response):
    categories = response.css('div.navigation-top-links a.uppercase::attr(href)').extract()
    for category in categories:
        link = urlparse.urljoin(self.start_urls[0], category)
        yield SplashRequest(link, callback=self.parse_products_listing, endpoint='render.html',
                            args={'wait': 0.5})
Pipeline
class ScraperPipeline(object):
    def process_item(self, item, spider):
        if not item['images']:
            raise DropItem
        return item
Settings
IMAGES_STORE = '/scraper/images'
SPLASH_URL = 'http://172.22.0.2:8050'
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
ITEM_PIPELINES = {
    'scraper.pipelines.ScraperPipeline': 300,
    'scrapy.pipelines.images.ImagesPipeline': 1
}
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
    # 'custom_middlewares.middleware.ProxyMiddleware': 210,
}
If you are set on using Splash, this answer should give you some guidance: https://stackoverflow.com/a/40366442/7926936
You could also use Selenium in a DownloaderMiddleware. This is an example I have from a Twitter scraper that gets the first 200 tweets of a page:
from selenium import webdriver
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait

class SeleniumMiddleware(object):
    def __init__(self):
        self.driver = webdriver.PhantomJS()

    def process_request(self, request, spider):
        self.driver.get(request.url)
        tweets = self.driver.find_elements_by_xpath("//li[@data-item-type='tweet']")
        while len(tweets) < 200:
            try:
                self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
                WebDriverWait(self.driver, 10).until(
                    lambda driver: new_posts(driver, len(tweets)))
                tweets = self.driver.find_elements_by_xpath("//li[@data-item-type='tweet']")
            except TimeoutException:
                break
        body = self.driver.page_source
        return HtmlResponse(self.driver.current_url, body=body, encoding='utf-8', request=request)

def new_posts(driver, min_len):
    return len(driver.find_elements_by_xpath("//li[@data-item-type='tweet']")) > min_len
In the while loop I wait on each iteration for new tweets, until 200 tweets are loaded in the page, with a maximum wait of 10 seconds per scroll.
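For this middleware to take effect, it would also have to be registered in the project settings; the module path below is an assumption that depends on your project layout:

```python
# settings.py -- 'myproject.middlewares' is a placeholder for your actual module path
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.SeleniumMiddleware': 543,
}
```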

Scrapy HtmlXPathSelector

Just trying out scrapy and trying to get a basic spider working. I know this is just probably something I'm missing but I've tried everything I can think of.
The error I get is:
line 11, in JustASpider
sites = hxs.select('//title/text()')
NameError: name 'hxs' is not defined
My code is very basic at the moment, but I still can't seem to find where I'm going wrong. Thanks for any help!
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class JustASpider(BaseSpider):
    name = "google.com"
    start_urls = ["http://www.google.com/search?hl=en&q=search"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//title/text()')
        for site in sites:
            print site.extract()

SPIDER = JustASpider()
The code looks like it is from quite an old version of Scrapy. I recommend using this instead:
from scrapy.spider import Spider
from scrapy.selector import Selector

class JustASpider(Spider):
    name = "googlespider"
    allowed_domains = ["google.com"]
    start_urls = ["http://www.google.com/search?hl=en&q=search"]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//title/text()').extract()
        print sites
        #for site in sites: (I don't know why you want to loop to extract the text in the title element)
        #    print site.extract()
Hope it helps, and here is a good example to follow.
I removed the SPIDER call at the end and removed the for loop. There was only one title tag (as one would expect) and it seems that was throwing off the loop. The code I have working is as follows:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class JustASpider(BaseSpider):
    name = "google.com"
    start_urls = ["http://www.google.com/search?hl=en&q=search"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//title/text()')
        final = titles.extract()
I had a similar problem, NameError: name 'hxs' is not defined, and it was related to spaces and tabs: my IDE used spaces instead of tabs. You should check that your indentation is consistent.
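The tab/space failure is easy to reproduce: Python 3 refuses to compile a block whose indentation mixes the two. A small self-contained check (the compiled snippet is made up for illustration):

```python
# A parse method where one line is indented with spaces and the next with a tab:
src = (
    "def parse(self, response):\n"
    "    hxs = HtmlXPathSelector(response)\n"
    "\tsites = hxs.select('//title/text()')\n"  # tab-indented line
)
try:
    compile(src, "<spider>", "exec")
    mixed_indent_rejected = False
except TabError:
    mixed_indent_rejected = True
print(mixed_indent_rejected)  # True on Python 3
```

On Python 2, which treats a tab as eight spaces, the same mix can instead shift a line into the wrong block and surface later as a NameError like the one above.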
Code looks correct.
In the latest versions of Scrapy, HtmlXPathSelector is deprecated. Use Selector:
    hxs = Selector(response)
    sites = hxs.xpath('//title/text()')
Be sure you are running the code you are showing us.
Try deleting *.pyc files in your project.
This works for me:
Save the file as test.py
Use the command scrapy runspider <filename.py>
For example:
scrapy runspider test.py
You should change
    from scrapy.selector import HtmlXPathSelector
to
    from scrapy.selector import Selector
and use hxs = Selector(response) instead.
I use Scrapy with BeautifulSoup 4. For me, Soup is easy to read and understand. This is an option if you don't have to use HtmlXPathSelector. Hope this helps!
import scrapy
from bs4 import BeautifulSoup
import Item

def parse(self, response):
    soup = BeautifulSoup(response.body, 'html.parser')
    print 'Current url: %s' % response.url
    item = Item()
    for link in soup.find_all('a'):
        if link.get('href') is not None:
            url = response.urljoin(link.get('href'))
            item['url'] = url
            yield scrapy.Request(url, callback=self.parse)
    yield item
This is just a demo, but it works. It needs to be customized, of course.
#!/usr/bin/env python
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li')
        for site in sites:
            title = site.select('a/text()').extract()
            link = site.select('a/@href').extract()
            desc = site.select('text()').extract()
            print title, link, desc