Scrapy, No Errors, Spider closes after crawling

for restaurant in response.xpath('//div[@class="listing"]'):
    restaurantItem = RestaurantItem()
    restaurantItem['name'] = response.css(".title::text").extract()
    yield restaurantItem
    next_page = response.css(".next > a::attr('href')")
    if next_page:
        url = response.urljoin(next_page[0].extract())
        yield scrapy.Request(url, self.parse)
I fixed all the errors it was giving me, and now I get no errors at all. The spider just closes after crawling the start_url; the for loop never gets executed.

When you try to find an element this way:
response.xpath('//div[@class="listing"]')
you are telling Scrapy to find a div that has literally only "listing" as its class:
<div class="listing"></div>
But no such element exists anywhere in the DOM. What the page actually contains is something like this:
<div class="listing someOtherClass"></div>
To select that element you have to say that the class attribute contains "listing" but may also contain more, like this:
response.xpath('//div[contains(@class, "listing")]')
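
For context, here is a minimal sketch of how the whole spider could look with that fix applied. This is an illustration rather than the asker's actual project: the start URL is a placeholder, a plain dict stands in for RestaurantItem, and the .title and .next selectors are assumed from the question.

import scrapy


class RestaurantSpider(scrapy.Spider):
    name = "restaurants"
    start_urls = ["https://example.com/restaurants"]  # placeholder start URL

    def parse(self, response):
        # contains() matches divs whose class attribute includes "listing",
        # even when other classes are present on the same element.
        for restaurant in response.xpath('//div[contains(@class, "listing")]'):
            # Query relative to the current listing so each item gets its own
            # title instead of every title on the page.
            yield {"name": restaurant.css(".title::text").get()}

        next_page = response.css(".next > a::attr(href)").get()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)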

Scrapy crawl web with many duplicated element class name

I'm new to Scrapy and trying to crawl a page, but the HTML consists of many divs with duplicated class names, e.g.
<section class= "pi-item pi-smart-group pi-border-color">
<section class="pi-smart-group-head">
<h3 class = "pi-smart-data-label pi-data-label pi-secondary-font pi-item-spacing">
</section>
<section class= "pi-smart-group-body">
<div class="pi-smart-data-value pi-data-value pi-font pi-item-spacing">
</div>
</section>
</section>
My problem is that this structure repeats for many other elements, so when I use response.css I get multiple elements I didn't want.
(Basically I want to crawl the Pokémon information, e.g. "Types", "Species" and "Ability" of each Pokémon, from https://pokemon.fandom.com/wiki/Bulbasaur. I have already collected the URLs for all Pokémon, but I'm stuck on getting the information from each Pokémon page.)
I have tried this Scrapy project for you and got the results. The issue I see is that you have used CSS selectors. You can scrape with those, but it is far more effective to use XPath selectors here: you have more versatility to select the specific tags you want. Here is the code I wrote for you. Bear in mind, this is just something I did quickly to get your results. It works, and I wrote it this way so it is easy for you to understand, since you are new to Scrapy. Please let me know if this is helpful.
import scrapy


class PokemonSpiderSpider(scrapy.Spider):
    name = 'pokemon_spider'
    start_urls = ['https://pokemon.fandom.com/wiki/Bulbasaur']

    def parse(self, response):
        pokemon_type = response.xpath("(//div[@class='pi-data-value pi-font'])[1]/a/@title")
        pokemon_species = response.xpath('//div[@data-source="species"]//div/text()')
        pokemon_abilities = response.xpath('//div[@data-source="ability"]/div/a/text()')
        yield {
            'pokemon type': pokemon_type.extract(),
            'pokemon species': pokemon_species.extract(),
            'pokemon abilities': pokemon_abilities.extract()
        }
You can also use XPath expressions that anchor on the label text:
abilities = response.xpath('//h3[a[.="Abilities"]]/following-sibling::div[1]/a/text()').getall()
species = response.xpath('//h3[a[.="Species"]]/following-sibling::div[1]/text()').get()
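
As a rough usage sketch, those label-anchored expressions could sit inside a spider like this (the page structure is assumed to still match the snippets above):

import scrapy


class PokemonLabelSpider(scrapy.Spider):
    name = "pokemon_labels"
    start_urls = ["https://pokemon.fandom.com/wiki/Bulbasaur"]

    def parse(self, response):
        # Anchor on the visible label text ("Abilities", "Species") instead of
        # the repeated pi-* utility classes, then step to the sibling value cell.
        yield {
            "abilities": response.xpath(
                '//h3[a[.="Abilities"]]/following-sibling::div[1]/a/text()'
            ).getall(),
            "species": response.xpath(
                '//h3[a[.="Species"]]/following-sibling::div[1]/text()'
            ).get(),
        }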

Requests response object: how to check page loaded completely (dynamic content)?

I am doing the following: after creating a session, I do a simple GET to a page. The problem is that this page is full of dynamic parts, so it takes between 10 and 30 seconds to fully generate the HTML I am interested in. I process the HTML with BeautifulSoup.
If I process the response object too quickly, I don't get the data I want. I have used "sleep" to pause for some time, but I think there should be a better way to check for a complete page load. I cannot depend on the 200 status code, because the dynamic parts inside the main page are still loading.
My code:
s = requests.session()
r = s.get('URL')
time.sleep(20)
... code to process response object...
I have tried to do it more "elegantly" by checking for a certain tag with a BeautifulSoup search, but it doesn't seem to work.
My code:
title_found = False
while title_found == False:
    soupje = BeautifulSoup(r.text, 'html.parser')
    title_found_in_html_full = soupje.find(id='titleView!1Title')
    if title_found_in_html_full is not None:
        title_found_in_html = title_found_in_html_full.get('id')
        if title_found_in_html == 'titleView!1Title':
            title_found = True
Is it true the response object changes over time as the page is loading?
Any suggestions? Thanks
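
One thing worth noting: requests never executes JavaScript, so re-parsing the same response object will never reveal more data; the loop above parses r.text repeatedly without ever fetching the page again. A minimal sketch of a polling approach that re-issues the GET each time and bounds the wait (the URL and element id are placeholders from the question) might look like this; if the element is rendered client-side by JavaScript, it will still time out and a browser-automation tool such as Selenium would be needed:

import time

import requests
from bs4 import BeautifulSoup

session = requests.session()
element = None
deadline = time.time() + 60  # give up after 60 seconds

while element is None and time.time() < deadline:
    # Re-fetch the page on every attempt; a response object that has already
    # been received never changes, no matter how long you wait.
    r = session.get("URL")  # placeholder URL from the question
    soup = BeautifulSoup(r.text, "html.parser")
    element = soup.find(id="titleView!1Title")
    if element is None:
        time.sleep(2)  # short pause between attempts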

How to make scrapy wait for request result before continuing to next line

In my spider, I have some code like this:
next_page_url = response.follow(
    url=self.start_urls[0][:-1] + str(page_number + 1),
    callback=self.next_page
)
if next_page_url:
next_page looks like this:
def next_page(self, response):
    next_page_count = len(<xpath I use>)
    if next_page_count > 0:
        return True
    else:
        return False
I need next_page_url to be set before I can continue with the next segment of code.
This code essentially checks whether the current page is the last page, for some file-writing purposes.
The answer I ended up going with:
Instead of checking whether the next page exists and then continuing on the current request, I make the request for the next page and check whether I get a response; if I don't, I treat the previous page as the final page.
I did this by using the meta keyword of Scrapy's Request (via response.follow()) to pass the current page's tracking data into the new request.
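
A rough sketch of that pattern, assuming the URL scheme from the question and that a missing next page produces an HTTP error (if the site returns an empty 200 page instead, the same check would move into the callback):

import scrapy


class PagedSpider(scrapy.Spider):
    name = "paged"
    start_urls = ["https://example.com/results/1"]  # hypothetical paged URL

    def parse(self, response):
        page_number = response.meta.get("page_number", 1)

        # ... process and write out the current page's data here ...

        # Request the next page instead of trying to "peek" at it from this
        # callback, and carry tracking data along in meta.
        yield response.follow(
            url=self.start_urls[0][:-1] + str(page_number + 1),
            callback=self.parse,
            meta={"page_number": page_number + 1, "previous_page": page_number},
            errback=self.no_next_page,
        )

    def no_next_page(self, failure):
        # The next page could not be fetched, so the page that issued this
        # request was the final one.
        last_page = failure.request.meta["previous_page"]
        self.logger.info("Page %s was the final page", last_page)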

Scrapy Running Results

Just getting started with Scrapy, I'm hoping for a nudge in the right direction.
I want to scrape data from here:
https://www.sportstats.ca/display-results.xhtml?raceid=29360
This is what I have so far:
import scrapy
import re


class BlogSpider(scrapy.Spider):
    name = 'sportstats'
    start_urls = ['https://www.sportstats.ca/display-results.xhtml?raceid=29360']

    def parse(self, response):
        headings = []
        results = []
        tables = response.xpath('//table')
        headings = list(tables[0].xpath('thead/tr/th/span/span/text()').extract())
        rows = tables[0].xpath('tbody/tr[contains(@class, "ui-widget-content ui-datatable")]')
        for row in rows:
            result = []
            tds = row.xpath('td')
            for td in enumerate(tds):
                if headings[td[0]].lower() == 'comp.':
                    content = None
                elif headings[td[0]].lower() == 'view':
                    content = None
                elif headings[td[0]].lower() == 'name':
                    content = td[1].xpath('span/a/text()').extract()[0]
                else:
                    try:
                        content = td[1].xpath('span/text()').extract()[0]
                    except:
                        content = None
                result.append(content)
            results.append(result)
        for result in results:
            print(result)
Now I need to move on to the next page, which I can do in a browser by clicking the "right arrow" at the bottom, which I believe is the following li:
<li><a id="mainForm:j_idt369" href="#" class="ui-commandlink ui-widget fa fa-angle-right" onclick="PrimeFaces.ab({s:"mainForm:j_idt369",p:"mainForm",u:"mainForm:result_table mainForm:pageNav mainForm:eventAthleteDetailsDialog",onco:function(xhr,status,args){hideDetails('athlete-popup');showDetails('event-popup');scrollToTopOfElement('mainForm\\:result_table');;}});return false;"></a>
How can I get scrapy to follow that?
If you open the URL in a browser with JavaScript disabled, you won't be able to move to the next page. As you can see inside the li tag, some JavaScript has to be executed in order to get the next page.
To get around this, the first option is usually to try to identify the request generated by that JavaScript. In your case it should be easy: just analyze the JavaScript code and replicate it with Python in your spider. If you can do that, you can send the same request from Scrapy. If you can't, the next option is usually to use a package with JavaScript/browser emulation, something like ScrapyJS or Scrapy + Selenium.
You're going to need to perform a callback. Generate the URL from the XPath of the 'next page' button, i.e. url = response.xpath(<xpath to next_page_button>), and then, when you're finished scraping the current page, do yield scrapy.Request(url, callback=self.parse_next_page). Finally, create a new function, def parse_next_page(self, response):.
A final note: if the link happens to be generated by JavaScript (and you can't scrape it even when you're sure you're using the correct XPath), check out my repo on using Splash with Scrapy: https://github.com/Liamhanninen/Scrape
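
To make that callback pattern concrete, here is a hedged sketch. The XPath for the "next" link is hypothetical, and on this particular site the link is generated by PrimeFaces/JavaScript, so plain Scrapy may never see a usable href; treat this as the general shape of the approach rather than a working solution for sportstats.ca.

import scrapy


class SportstatsPagedSpider(scrapy.Spider):
    name = "sportstats_paged"
    start_urls = ["https://www.sportstats.ca/display-results.xhtml?raceid=29360"]

    def parse(self, response):
        # ... scrape the current results table here, as in the question ...

        # Hypothetical selector for the right-arrow link shown in the question.
        next_href = response.xpath(
            '//a[contains(@class, "fa-angle-right")]/@href'
        ).get()
        if next_href and next_href != "#":
            yield scrapy.Request(
                response.urljoin(next_href), callback=self.parse_next_page
            )

    def parse_next_page(self, response):
        # Reuse the same table-parsing logic for subsequent pages.
        return self.parse(response)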

Using Scrapy to scrape data after form submit

I'm trying to scrape content from a listing detail page that can only be viewed by clicking the 'view' button, which triggers a form submit. I am new to both Python and Scrapy.
Example markup
<li>
  <h3>Abc Widgets</h3>
  <form action="/viewlisting?id=123" method="post">
    <input type="image" src="/images/view.png" value="submit">
  </form>
</li>
My solution in Scrapy is to extract the form actions and then use Request to fetch each page, with a callback to parse it for the desired content. However, I have hit a few issues:
I'm getting the following error: "request url must be str or unicode"
Secondly, when I hardcode a URL to overcome the above issue, it seems my parsing function is returning what looks like a list.
Here is my code, with redactions of the real URLs:
from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy.http import Request
from wfi2.items import Wfi2Item


class ProfileSpider(Spider):
    name = "profiles"
    allowed_domains = ["wfi.com.au"]
    start_urls = ["http://example.com/wps/wcm/connect/internet/wfi/Contact+Us/Find+Your+Local+Office/findYourLocalOffice.jsp?state=WA",
                  "http://example.com/wps/wcm/connect/internet/wfi/Contact+Us/Find+Your+Local+Office/findYourLocalOffice.jsp?state=VIC",
                  "http://example.com/wps/wcm/connect/internet/wfi/Contact+Us/Find+Your+Local+Office/findYourLocalOffice.jsp?state=QLD",
                  "http://example.com/wps/wcm/connect/internet/wfi/Contact+Us/Find+Your+Local+Office/findYourLocalOffice.jsp?state=NSW",
                  "http://example.com/wps/wcm/connect/internet/wfi/Contact+Us/Find+Your+Local+Office/findYourLocalOffice.jsp?state=TAS",
                  "http://example.com/wps/wcm/connect/internet/wfi/Contact+Us/Find+Your+Local+Office/findYourLocalOffice.jsp?state=NT"]

    def parse(self, response):
        hxs = Selector(response)
        forms = hxs.xpath('//*[@id="area-managers"]//*/form')
        for form in forms:
            action = form.xpath('@action').extract()
            print "ACTION: ", action
            request = Request(url=action, callback=self.parse_profile)
            yield request

    def parse_profile(self, response):
        hxs = Selector(response)
        profile = hxs.xpath('//*[@class="contentContainer"]/*/text()')
        print "PROFILE", profile
I'm getting the following error "request url must be str or unicode"
Please have a look at the Scrapy documentation for extract(). It says: "Serialize and return the matched nodes as a list of unicode strings."
The first element of that list is probably what you want, so you could do something like:
request = Request(url=response.urljoin(action[0]), callback=self.parse_profile)
Secondly, when I hardcode a URL to overcome the above issue, it seems my parsing function is returning what looks like a list.
According to the documentation, xpath() returns a SelectorList. Add extract() to the xpath call and you'll get a list of the text tokens. Eventually you'll want to clean up and join the elements of that list before further processing.
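
Putting both points together, here is a sketch of how parse_profile from the question could be adapted inside the same spider (the contentContainer class comes from the question; joining on whitespace is just one way to clean up the list):

    def parse_profile(self, response):
        # extract() turns the SelectorList into a list of unicode strings;
        # strip and join them into a single block of profile text.
        texts = response.xpath('//*[@class="contentContainer"]/*/text()').extract()
        profile = " ".join(t.strip() for t in texts if t.strip())
        yield {"profile": profile}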