BeautifulSoup not getting entirety of extracted class

I am trying to extract data from craigslist using BeautifulSoup. As a preliminary test, I wrote the following:
import urllib2
from bs4 import BeautifulSoup, NavigableString
link = 'http://boston.craigslist.org/search/jjj/index100.html'
print link
soup = BeautifulSoup(urllib2.urlopen(link).read())
print soup
x=soup.body.find("div",class_="content")
print x
Upon printing soup, I can see the entire webpage. However, upon trying to find something more specific such as the class called "content", it prints None. I know that the class exists in the page source as I looked on my own browser, but for some reason, it is not finding it in the BeautifulSoup parsing. Any ideas?
Edit:
I also added in the following to see what would happen:
print soup.body.article
When I do so, it prints out some information between the article tags, but not all. Is it possible that when I am using the find function, it is somehow skipping some information? I'm really not sure why this is happening when it prints the whole thing for the general soup, but not when I try to find particulars within it.

The find method on the BeautifulSoup instance (your soup variable) is not the same as the find method on a Tag (your soup.body).
This:
soup.body.find("div",class_="content")
is only searching through the direct children of the body tag.
If you call find on the BeautifulSoup instance, it does what you want and searches the whole document:
soup.find("div",class_="content")

Related

How can I list the URL of the page the data was scraped from with Scrapy?

I'm a real beginner but I've been searching high and low and can't seem to find a solution. I'm working on building some spiders but I can't figure out how to identify what URL my scraped data comes from.
My spider is extremely basic right now, I'm trying to learn as I go.
I've tried a few lines I found on Stack Overflow, but the only thing I could get working was a print statement (something like "URL: " + response.request.url, though I tried a bunch of variations) inside the parse method; I can't get anything working in the yield.
I could add other identifiers to the output, but ideally I'd like the URL for the project I'm working towards.
import scrapy

class FanaticsSpider(scrapy.Spider):
    name = 'fanatics'
    start_urls = ['https://www.fanaticsoutlet.com/nfl/new-england-patriots/new-england-patriots-majestic-showtime-logo-cool-base-t-shirt-navy/o-9172+t-70152507+p-1483408147+z-8-1114341320',
                  'https://www.fanaticsoutlet.com/nfl/new-england-patriots/new-england-patriots-nfl-pro-line-mantra-t-shirt-navy/o-2427+t-69598185+p-57711304142+z-9-2975969489',]

    def parse(self, response):
        yield {
            'sale-price': response.xpath('//span[@data-talos="pdpProductPrice"]/span[@class="sale-price"]/text()').re('[$]\d+\.\d+'),
            #'sale-price': response.xpath('//span[@data-talos="pdpProductPrice"]/span[@class="sale-price"]/text()').get(),
            'regular-price': response.xpath('//span[@data-talos="pdpProductPrice"]/span[@class="regular-price strike-through"]/text()').re('[$]\d+\.\d+'),
            #'regular-price': response.xpath('//span[@data-talos="pdpProductPrice"]/span[@class="regular-price strike-through"]/text()').get(),
        }
Any help is much appreciated. I haven't begun to learn anything about pipelines yet, so I'm not sure whether that might hold a solution.
You can simply add the url in the yield like this:
yield {
    ...,
    'url': response.url,
    ...
}
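Applied to the spider from the question, the parse method might look like the sketch below (the XPaths and .re() patterns are the ones from the question; only the 'url' key is new):

def parse(self, response):
    yield {
        'sale-price': response.xpath(
            '//span[@data-talos="pdpProductPrice"]/span[@class="sale-price"]/text()'
        ).re(r'[$]\d+\.\d+'),
        'regular-price': response.xpath(
            '//span[@data-talos="pdpProductPrice"]/span[@class="regular-price strike-through"]/text()'
        ).re(r'[$]\d+\.\d+'),
        # the page this item was scraped from
        'url': response.url,
    }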

Scrapy output json

I'm struggling to get Scrapy to output only "hits" to a JSON file. I'm new at this, so even a link I should review would help (I've spent a fair amount of time googling around and am still struggling), though code correction tips are more welcome :).
I'm working off the Scrapy tutorial (https://doc.scrapy.org/en/latest/intro/overview.html), where the original code outputs a long list of field names and values like "field: output", with both blanks and found items. I'd like to include only links that are found, and write them to a file without the field name.
With the following code, if I run "scrapy crawl quotes2 -o quotes.json > output.json", it works, but quotes.json is always blank (the same happens if I just run "scrapy crawl quotes2 -o quotes.json").
In this case, as an experiment, I only want to return the URL if the string "Jane" is in the URL (e.g., /author/Jane-Austen):
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes2"
    start_urls = [
        'http://quotes.toscrape.com/tag/humor/',
    ]

    def parse(self, response):
        for quote in response.css('a'):
            for i in quote.css('a[href*=Jane]::attr(href)').extract():
                if i is not None:
                    print(i)
I've tried "yield" and items options, but am not up to speed enough to make them work. My longer term ambition to go to sites without having to understand the html tree (which may in and of itself be the wrong approach) and look for URLs with specific text in the URL string.
Thoughts? Am guessing this is not too hard, but is beyond me.
This is happening because you are only printing the items; you have to tell Scrapy explicitly to yield them.
Before that, though, I don't see why you are looping through the anchor nodes. Instead, you should loop over the quotes using CSS or XPath selectors, extract the author links inside each quote, and finally check whether the URL contains a specific string (Jane in your case).
for quote in response.css('.quote'):
    jane_url = quote.xpath('.//a[contains(@href, "Jane")]/@href').extract_first()
    if jane_url is not None:
        yield {
            'url': jane_url
        }
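For context, this loop replaces the print loop inside the spider's parse method. Once the matching URLs are yielded rather than printed, running "scrapy crawl quotes2 -o quotes.json" should write only the hits to the file, since nothing is yielded for quotes without a matching author link.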

Link extractor is not able to get the paths beyond a certain path

I need a bit of help and your guidance with Scrapy.
My start URL is: http://lighting.philips.co.uk/prof/
I have pasted my code below; it is able to get the links/paths up to the URL below, but it doesn't go beyond that. I need to reach each product's page listed under that path. On the "productsinfamily" page the specific products are listed (perhaps within JavaScript), and my crawler is not able to reach those individual product pages.
http://www.lighting.philips.co.uk/prof/led-lamps-and-tubes/led-lamps/corepro-ledbulb/productsinfamily/
Below is the code for the Crawl spider-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ProductSearchSpider(CrawlSpider):
    name = "product_search"
    allowed_domains = ["lighting.philips.co.uk"]
    start_urls = ['http://lighting.philips.co.uk/prof/']

    rules = (
        Rule(LinkExtractor(allow=(r'^https?://www.lighting.philips.co.uk/prof/led-lamps-and-tubes/.*', )),
             callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        yield {'URL': response.url}
You are right that the links are defined in javascript.
If you take a look at the html source, on line 3790 you can see a variable named d75products created. This is later used to populate a template and display the products.
The way I'd approach this would be to extract this data from the source and use the json module to load it. Once you have the data, you can do with it whatever you want.
Another way would be to use something (e.g. a browser) to execute the javascript, and then parse the resulting html. I do think that's unnecessary and overcomplicated though.
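A rough sketch of that first approach, under the assumption that d75products is assigned a JSON array literal in the page source (the spider name and the regex here are illustrative, not tested against the live page):

import json
import re

import scrapy

class PhilipsProductsSpider(scrapy.Spider):
    # hypothetical spider for illustration
    name = "philips_products"
    start_urls = ['http://www.lighting.philips.co.uk/prof/led-lamps-and-tubes/led-lamps/corepro-ledbulb/productsinfamily/']

    def parse(self, response):
        # pull the javascript variable out of the raw html source
        match = re.search(r'var\s+d75products\s*=\s*(\[.*?\]);', response.text, re.DOTALL)
        if match:
            # load the embedded data with the json module and yield each product
            for product in json.loads(match.group(1)):
                yield {'product': product}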

Enter string into search field with Scrapy Spider; loading the generated URL

Would the correct method, with a Scrapy spider, for automatically entering a zip code value "27517" into the entry box on this website (Locations of Junkyards) be to use a FormRequest?
Here is what I have right now:
import scrapy
from scrapy.http import FormRequest
from scrapy.item import Item, Field
from scrapy.spider import BaseSpider

class LkqSpider(scrapy.Spider):
    name = "lkq"
    allowed_domains = ["www.lkqcorp.com"]
    start_urls = ['http://www.lkqcorp.com/en-us/locationResults/']

    def start_requests(self):
        return [FormRequest("http://www.lkqcorp.com/en-us/locationResults/",
                            formdata={'dnnVariable': '27517'},
                            callback=self.parse)]

    def parse(self, response):
        print response.status
It doesn't do anything when run, though. Is FormRequest mainly for completing login fields? What would be the best way to get to THIS page (which comes up after searching for the zip 27517, and is where I would start scraping my desired information with a Scrapy spider)?
This isn't really a FormRequest case: FormRequest is only a name for a POST request in Scrapy. Of course it helps you fill a form, but a form is also normally just a POST request.
You need a debugging console (I prefer Firebug for Firefox) to check which requests are being made, and it looks like this one is a GET request that is quite simple to replicate. The URL would be something like this, where you'll have to change the number after /fullcrit/ to the desired zip code. You also need the lat and lng arguments; for those you could use the Google Maps API. Check this answer for an example of how to get them, but to summarise, just make that request and read the location argument.
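A very rough sketch of that GET approach is below. The URL template, parameter names, and coordinate values are placeholders (the real endpoint was linked in the original answer and should be taken from the browser's debugging console), and the lat/lng values would come from a geocoding lookup such as the Google Maps API:

import scrapy

class LkqLocationsSpider(scrapy.Spider):
    # hypothetical spider name, for illustration only
    name = "lkq_locations"

    def start_requests(self):
        zip_code = '27517'
        lat, lng = 'LAT_FROM_GEOCODER', 'LNG_FROM_GEOCODER'  # fill in from a geocoding API
        # placeholder URL template -- substitute the real GET url observed in the console
        url = 'http://www.lkqcorp.com/en-us/locationResults/fullcrit/%s?lat=%s&lng=%s' % (zip_code, lat, lng)
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        print response.status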

Is it possible to use beautiful soup to extract multiple types of items?

I've been looking at the documentation and it doesn't cover this issue. I'm trying to extract all text and all links, but not separately. I want them interleaved to preserve context, so that I end up with an interleaved list of text and links. Is this even possible with BeautifulSoup?
Yes, this is definitely possible.
import urllib2
import BeautifulSoup

request = urllib2.Request("http://www.example.com")
response = urllib2.urlopen(request)
soup = BeautifulSoup.BeautifulSoup(response)
for a in soup.findAll('a'):
    print a
Breaking this code snippet down: you make a request for a website (in this case example.com) and parse the response with BeautifulSoup. Your requirement was to find all links and text while keeping the context. The output of the above code will look like this:
<img src="/_img/iana-logo-pageheader.png" alt="Homepage" />
Domains
Numbers
Protocols
About IANA
RFC 2606
About
Presentations
Performance
Reports
Domains
Root Zone
.INT
.ARPA
IDN Repository
Protocols
Number Resources
Abuse Information
Internet Corporation for Assigned Names and Numbers
iana@iana.org
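If you want text and links genuinely interleaved in document order (as the question asks), one way is to walk the parse tree. Here is a minimal sketch with bs4, assuming the same example.com page (this is an addition, not part of the original answer):

import urllib2
from bs4 import BeautifulSoup, NavigableString, Tag

soup = BeautifulSoup(urllib2.urlopen("http://www.example.com").read())

items = []
# walk every node in document order so text and links stay interleaved
for node in soup.body.descendants:
    if isinstance(node, Tag) and node.name == 'a':
        items.append(('link', node.get('href')))
    elif isinstance(node, NavigableString) and node.parent.name != 'a':
        text = node.strip()
        if text:
            items.append(('text', text))

print items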