How to solve issue exporting DataFrame to CSV? - pandas

I am extracting some text (HTML) from a txt file. There are about 9 rows that need to be extracted and split up into 4 columns (Field_01 to Field_04), which seems to work well in the VS terminal. However, when I export the data to a CSV, 2 issues arise. The first is that the data that should be in the 2nd column is split between the 2nd, 3rd and 4th columns, while the data that should be in the 3rd and 4th columns is pushed to a 5th and 6th. The second issue is that, in the terminal, I get all 9 rows, but only one row is exported to the CSV file.
Here is my code...
import pandas as pd
from bs4 import BeautifulSoup
import schedule
import time
#import urllib.parse
#import requests

baseurl = 'https://hippeas.com'

data = open("/run/user/759001103/gvfs/smb-share:server=192.168.0.150,share=indexserver/code.txt", "r")
info = data.readlines()
#print(info)

for items in info:
    if items.startswith(" <img src="):
        reduce_imgurl = items.split('//')[-1]
    if items.startswith(" <h3 class="):
        reduce_name = items[39:-6]
    if items.startswith(" <a href="):
        reduce_link = items[11:-32]
    if items.startswith(" <span class="):
        reduce_price = items[55:-8]
        #print(reduce_imgurl, reduce_name, baseurl + reduce_link, reduce_price)
        dataset = {'Field_01':[reduce_imgurl],'Field_02':[reduce_name],'Field_03':[baseurl + reduce_link],'Field_04':[reduce_price]}
        #print(dataset)
        df = pd.DataFrame(dataset, columns=('Field_01','Field_02','Field_03','Field_04'))
        print(df)
        df.to_csv(r'/run/user/759001103/gvfs/smb-share:server=192.168.0.150,share=indexserver/Testcode.csv', index = False)
Here is what it shows in the terminal vs the results...
[screenshot: terminal output vs. CSV result]

The question should focus on only one issue, so this answer only addresses the actual export issue. Also, since the content is HTML it really should be parsed with BeautifulSoup, which is tagged but not used here at all.
The problem is that the CSV file is overwritten over and over again inside the for-loop. One solution would be to decouple the extracting part from the writing part:
...
dataset = []

for items in info:
    if items.startswith(" <img src="):
        reduce_imgurl = items.split('//')[-1]
    if items.startswith(" <h3 class="):
        reduce_name = items[39:-6]
    if items.startswith(" <a href="):
        reduce_link = items[11:-32]
    if items.startswith(" <span class="):
        reduce_price = items[55:-8]
        dataset.append({'Field_01': reduce_imgurl, 'Field_02': reduce_name, 'Field_03': baseurl + reduce_link, 'Field_04': reduce_price})

pd.DataFrame(dataset).to_csv(r'PATH_TO_FILE', index=False)
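As an aside, if the file really is HTML, parsing it with BeautifulSoup is usually more robust than slicing lines by character position. A minimal sketch of that idea, assuming each product sits in a container element with an img, an h3, a link and a price span (the selectors and the shortened paths are illustrative, not taken from the original file):
import pandas as pd
from bs4 import BeautifulSoup

baseurl = 'https://hippeas.com'

with open("code.txt", "r") as f:  # illustrative path
    soup = BeautifulSoup(f.read(), "html.parser")

rows = []
# The selectors below are assumptions about the markup; adjust them to the real structure.
for product in soup.select("div.product"):
    rows.append({
        'Field_01': product.img['src'].split('//')[-1],
        'Field_02': product.h3.get_text(strip=True),
        'Field_03': baseurl + product.a['href'],
        'Field_04': product.select_one('span.price').get_text(strip=True),
    })

pd.DataFrame(rows).to_csv("Testcode.csv", index=False)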

Related

Find link in text and replace with "a" tag

I have partially valid HTML and I need to create hyperlinks, like:
Superotto: risorse audiovisive per superare i pregiudizi e celebrare
l’otto marzo, in “Indire Informa”, 5 marzo 2021,
https://www.indire.it/2021/03/05/superotto-risorse-audiovisive-per-superare-i-pregiudizi-e-celebrare-lotto-marzo/;
Sezione Superotto in
https://piccolescuole.indire.it/iniziative/la-scuola-allo-schermo/#superotto.
Has to become:
Superotto: risorse audiovisive per superare i pregiudizi e celebrare
l’otto marzo, in “Indire Informa”, 5 marzo 2021, < a
href="https://www.indire.it/2021/03/05/superotto-risorse-audiovisive-per-superare-i-pregiudizi-e-celebrare-lotto-marzo/" >https://www.indire.it/2021/03/05/superotto-risorse-audiovisive-per-superare-i-pregiudizi-e-celebrare-lotto-marzo/< /a >;
Sezione Superotto in < a
href="https://piccolescuole.indire.it/iniziative/la-scuola-allo-schermo/#superotto">https://piccolescuole.indire.it/iniziative/la-scuola-allo-schermo/#superotto< /a >.
BeautifulSoup does not seem to find the http links well, so I used this regex with plain Python findall, but I cannot substitute or compose the text. Right now I have:
links = re.findall(r"(http|ftp|https:\/\/)([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,#?^=%&:\/~+#-]*[\w#?^=%&\/~+#-])", str(soup))
link_to_replace = []
for l in links:
    link = ''.join(l)
    if link in soup.find("body").text:
        good_link = ""+link+""
        fixed_text = soup.replace(link, good_link)
        soup.replace_with(fixed_text)
I tried multiple solutions in the last two lines (this is just one), none worked.
Perhaps as follows, where I first identify the relevant anchor elements and strip out any attributes other than the href, then substitute each href link in the text with the anchor's HTML:
import re
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('https://rivista.clionet.it/vol5/giorgi-zoppi-la-ricerca-indire-tra-uso-didattico-del-patrimonio-storico-culturale-e-promozione-delle-buone-pratiche/')
soup = bs(r.text, 'lxml')
item = soup.select_one('p:has(a[id="ft-note-16"])')
text = item.text
for tag in item.select('a:not([id])'):
    href = tag['href']
    tag.attrs = {'href': href}
    text = re.sub(href, str(tag), text)

text = re.sub(item.a.text, '', text).strip()
print(text)
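If the goal is simply to wrap bare URLs found in plain text in anchor tags, a small re.sub with a replacement function may be enough. A minimal sketch under that assumption (the pattern and the sample string are illustrative, not taken from the answer above):
import re

# Illustrative sample; in practice this would be the extracted paragraph text.
text = "Sezione Superotto in https://piccolescuole.indire.it/iniziative/la-scuola-allo-schermo/#superotto."

# Assumption: a URL runs until whitespace and should not keep trailing punctuation.
url_pattern = re.compile(r"https?://[^\s;,]+[^\s;,.]")

def linkify(match):
    url = match.group(0)
    return f'<a href="{url}">{url}</a>'

print(url_pattern.sub(linkify, text))
# wraps the URL in an <a href="...">...</a> tag, leaving the trailing period outside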

Use URLs from List to save zip file

Trying to use urllib.request to read a list of URLs from a shapefile, then download the zips from all those URLs. So far I got my list of a certain number of URLs, but I am unable to pass all of them through. The error is "expected string or bytes-like object", meaning there's probably an issue with the URL. As a side note, I also need to download them and name them by their file name/#. Need help!! Code below.
import arcpy
import urllib.request
import os

os.chdir('C:\\ProgInGIS\\FinalExam\\Final')
lidar_shp = 'C:\\ProgInGIS\\FinalExam\\Final\\lidar-2013.shp'
zip_file_download = 'C:\\ProgInGIS\\FinalExam\\Final\\file1.zip'

data = []
with arcpy.da.SearchCursor(lidar_shp, "*") as cursor:
    for row in cursor:
        data.append(row)
data.sort(key=lambda tup: tup[2])

i = 0
with arcpy.da.UpdateCursor(lidar_shp, "*") as cursor:
    for row in cursor:
        row = data[i]
        i += 1
        cursor.updateRow(row)

counter = 0
url_list = []
with arcpy.da.UpdateCursor(lidar_shp, ['geotiff_ur']) as cursor:
    for row in cursor:
        url_list.append(row)
        counter += 1
        if counter == 18:
            break

for item in url_list:
    print(item)
    urllib.request.urlretrieve(item)
I understand your question this way: you want to download a zip file for each record in a shapefile, from a URL defined in a certain field.
It's easier to use the requests package, which is also recommended in the urllib.request documentation:
The Requests package is recommended for a higher-level HTTP client interface.
Here is an example:
import arcpy, arcpy.da
import shutil
import requests

SHAPEFILE = "your_shapefile.shp"

with arcpy.da.SearchCursor(SHAPEFILE, ["name", "url"]) as cursor:
    for name, url in cursor:
        response = requests.get(url, stream=True)
        if response.status_code == 200:
            with open(f"{name}.zip", "wb") as file:
                response.raw.decode_content = True
                shutil.copyfileobj(response.raw, file)
There is another example on GIS StackExchange:
https://gis.stackexchange.com/a/392463/21355
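If the zips should instead be named after the file name that appears in the URL itself (the second part of the question), the name can be derived from the URL path. A small sketch of that, assuming the URL comes from the 'geotiff_ur' field mentioned in the question (the helper name and the chunked download are illustrative):
import os
from urllib.parse import urlparse

import requests

def download_zip(url, out_dir="."):
    # Derive the local file name from the last path segment of the URL.
    filename = os.path.basename(urlparse(url).path) or "download.zip"
    response = requests.get(url, stream=True)
    response.raise_for_status()
    out_path = os.path.join(out_dir, filename)
    with open(out_path, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
    return out_path

# Usage inside the cursor loop, e.g.:
# with arcpy.da.SearchCursor(lidar_shp, ['geotiff_ur']) as cursor:
#     for (url,) in cursor:
#         download_zip(url)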

web scrape does not find the correct tags

I am trying to extract the text of this page: https://www.londonstockexchange.com/news-article/ESNT/date-for-fy-2020-results-announcement/14850033 using bs4 and pandas
I start with:
src=requests.get(url).content
soup = BeautifulSoup(src,'xml')
and see that the text I am interested in is wrapped in p tags,
but when I run soup.find_all('p'), the only return I get is the closing paragraph.
How can I extract the paragraph text within? What am I missing?
These are the paragraphs I am trying to extract:
I also tried with Selenium, using:
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--headless")
chrome_driver = os.getcwd() + "\\chromedriver.exe"
driver = webdriver.Chrome(options = chrome_options, executable_path = chrome_driver)
driver.get(url)
page = driver.page_source
page_soup = BeautifulSoup(page,'xml')
div=page_soup.find_all('p')
[a.text for a in div]
I figured it out.
The body of the site comes from a <script> tag that holds a JSON but with a funky encoding.
That said tag has an id of "ng-lseg-state", which means this is Angular's custom HTML encoding.
You can target the <script> tag with BeautifulSoup and parse it with the json module.
Then, however, you need to deal with Angular's encoding. One way, a bit crude though, is to chain a bunch of .replace() methods.
Here's how:
import json
import requests
from bs4 import BeautifulSoup
url = "https://www.londonstockexchange.com/news-article/ESNT/date-for-fy-2020-results-announcement/14850033"
script = BeautifulSoup(requests.get(url).text, "lxml").find("script", {"id": "ng-lseg-state"})
article = json.loads(script.string.replace("&q;", '"'))
main_key = "G.{{api_endpoint}}/api/v1/pages?parameters=newsId%3D14850033&a;path=news-article"
article_body = article[main_key]["body"]["components"][1]["content"]["newsArticle"]["value"]
decoded_body = (
    article_body
    .replace('&l;', '<')
    .replace('&g;', '>')
    .replace('&q;', '"')
)
print(BeautifulSoup(decoded_body, "lxml").find_all("p")[22].getText())
This outputs:
Essentra plc is a FTSE 250 company and a leading global provider of essential components and solutions.&a;#160; Organised into three global divisions, Essentra focuses on the light manufacture and distribution of high volume, enabling components which serve customers in a wide variety of end-markets and geographies.
However, as I've said, this is not the best approach, as I'm not entirely sure how to deal with a bunch of other characters, namely:
&a;#160;
&a;amp;
&s;
just to name a few. But I've already asked about this.
EDIT:
Here's a fully working code based on the answer to my question, mentioned above.
import html
import json
import requests
from bs4 import BeautifulSoup
def unescape(decoded_html):
    char_mapping = {
        '&a;': '&',
        '&q;': '"',
        '&s;': '\'',
        '&l;': '<',
        '&g;': '>',
    }
    for key, value in char_mapping.items():
        decoded_html = decoded_html.replace(key, value)
    return html.unescape(decoded_html)
url = "https://www.londonstockexchange.com/news-article/ESNT/date-for-fy-2020-results-announcement/14850033"
script = BeautifulSoup(requests.get(url).text, "lxml").find("script", {"id": "ng-lseg-state"})
payload = json.loads(unescape(script.string))
main_key = "G.{{api_endpoint}}/api/v1/pages?parameters=newsId%3D14850033&path=news-article"
article_body = payload[main_key]["body"]["components"][1]["content"]["newsArticle"]["value"]
print(BeautifulSoup(article_body, "lxml").find_all("p")[22].getText())

beautifulsoup: find elements after certain element, not necessarily siblings or children

Example html:
<div>
<p>p1</p>
<p>p2</p>
<p>p3<span id="target">starting from here</span></p>
<p>p4</p>
</div>
<div>
<p>p5</p>
<p>p6</p>
</div>
<p>p7</p>
I want to search for <p>s but only if its position is after span#target.
It should return p4, p5, p6 and p7 in the above example.
I tried to get all <p>s first and then filter, but I don't know how to judge whether an element is after span#target or not.
You can do this by using the find_all_next function in BeautifulSoup.
from bs4 import BeautifulSoup
doc = # Read the HTML here
# Parse the HTML
soup = BeautifulSoup(doc, 'html.parser')
# Select the first element you want to use as the reference
span = soup.select("span#target")[0]
# Find all elements after the `span` element that have the tag - p
print(span.find_all_next("p"))
The above snippet will result in
[<p>p4</p>, <p>p5</p>, <p>p6</p>, <p>p7</p>]
Edit: As per the OP's request below to compare positions:
If you want to compare the positions of 2 elements, you'll have to rely on sourceline and sourcepos provided by the html.parser and html5lib parsing options.
First off, store the sourceline and/or sourcepos of your reference element in a variable.
span_srcline = span.sourceline
span_srcpos = span.sourcepos
(you don't actually have to store them though, you can just do span.sourcepos directly as long as you have the span stored)
Now iterate through the result of find_all_next and compare the values-
for tag in span.find_all_next("p"):
    print(f'line diff: {tag.sourceline - span_srcline}, pos diff: {tag.sourcepos - span_srcpos}, tag: {tag}')
You're most likely interested in line numbers though, as the sourcepos denotes the position on a line.
However, sourceline and sourcepos mean slightly different things for each parser. Check the docs for that info
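For completeness, here is a minimal sketch of the OP's original "get all <p>s, then filter" idea using those attributes, assuming the html.parser backend (document order is compared via (sourceline, sourcepos) tuples):
from bs4 import BeautifulSoup

html_doc = """
<div>
<p>p1</p>
<p>p2</p>
<p>p3<span id="target">starting from here</span></p>
<p>p4</p>
</div>
<div>
<p>p5</p>
<p>p6</p>
</div>
<p>p7</p>
"""

soup = BeautifulSoup(html_doc, "html.parser")
target = soup.select_one("span#target")

# html.parser records where each tag starts; compare (line, column) for document order.
after_target = [
    p for p in soup.find_all("p")
    if (p.sourceline, p.sourcepos) > (target.sourceline, target.sourcepos)
]
print(after_target)
# [<p>p4</p>, <p>p5</p>, <p>p6</p>, <p>p7</p>]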
Try this
html_doc = """
<div>
<p>p1</p>
<p>p2</p>
<p>p3<span id="target">starting from here</span></p>
<p>p4</p>
</div>
<div>
<p>p5</p>
<p>p6</p>
</div>
<p>p7</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.find(id="target").findNext('p').contents[0])
Result
p4
try
span = soup.select("span > #target > p")

Problem Replacing <br> Tags with Newline Using bs4

Problem: I cannot replace <br> tags with a newline character using Beautiful Soup 4.
Code: My program (the relevant portion of it) currently looks like
for br in board.select('br'):
    br.replace_with('\n')
but I have also tried board.find_all() in place of board.select().
Results: When I use br.replace_with('\n'), all <br> tags are replaced with the string literal \n. For example, <p>Hello<br>world</p> would end up becoming Hello\nworld. Using br.replace_with(\n) causes the error
File "<ipython-input-27-cdfade950fdf>", line 10
br.replace_with(\n)
^
SyntaxError: unexpected character after line continuation character
Other Information: I am using a Jupyter Notebook, if that is of any relevance. Here is my full program, as there may be some issue elsewhere that I have overlooked.
import requests
from bs4 import BeautifulSoup
import pandas as pd
page = requests.get("https://boards.4chan.org/g/")
soup = BeautifulSoup(page.content, 'html.parser')
board = soup.find('div', class_='board')
for br in board.select('br'):
    br.replace_with('\n')
message = [obj.get_text() for obj in board.select('.opContainer .postMessage')]
image = [obj['href'] for obj in board.select('.opContainer .fileThumb')]
pid = [obj.get_text() for obj in board.select('.opContainer .postInfo .postNum a[title="Reply to this post"]')]
time = [obj.get_text() for obj in board.select('.opContainer .postInfo .dateTime')]
for x in range(len(image)):
    image[x] = "https:" + image[x]
post = pd.DataFrame({
    "ID": pid,
    "Time": time,
    "Image": image,
    "Message": message,
})
post
pd.options.display.max_rows
pd.set_option('display.max_colwidth', -1)
display(post)
Any advice would be appreciated. Thank you for reading.
Just tried it and it works for me; my bs4 version is 4.8.0 and I am using Python 3.5.3.
Example:
from bs4 import BeautifulSoup
soup = BeautifulSoup('hello<br>world')
for br in soup('br'):
    br.replace_with('\n')
# <br> was replaced with \n successfully
assert str(soup) == '<html><body><p>hello\nworld</p></body></html>'
# get_text() also works as expected
assert soup.get_text() == 'hello\nworld'
# it is a \n not a \\n
assert soup.get_text() != 'hello\\nworld'
I am not used to working with Jupyter Notebook, but it seems that your problem is that whatever you are using to visualize the data is showing you the string representation instead of actually printing the string.
Hope this helps.
Regards,
adb
Instead of replacing after converting to soup, try replacing the <br> tags before converting. Like,
soup = BeautifulSoup(str(page.content).replace('<br>', '\n'), 'html.parser')
Hope this helps! Cheers!
P.S.: I did not find any logical reason why this does not work after converting to soup.
After experimenting with variations of
page = requests.get("https://boards.4chan.org/g/")
str_page = page.content.decode()
str_split = '\n<'.join(str_page.split('<'))
str_split = '>\n'.join(str_split.split('>'))
str_split = str_split.replace('\n', '')
str_split = str_split.replace('<br>', ' ')
soup = BeautifulSoup(str_split.encode(), 'html.parser')
for the better part of two hours, I have determined that the pandas DataFrame prints the newline character as a string literal. Everything else indicates that the program is working as intended, so I assume this has been the problem all along.
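A quick way to see this is that the DataFrame repr escapes embedded newlines to keep the table layout, while printing a cell value directly shows the real line break. A minimal sketch:
import pandas as pd

df = pd.DataFrame({"Message": ["Hello\nworld"]})

# The tabular repr shows the escaped form: Hello\nworld
print(df)

# Printing the cell itself shows the actual newline:
print(df["Message"][0])
# Hello
# world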
For some reason a direct replace with a newline does not work with bs4; you first have to replace with some other unique character (preferably a character sequence) and then replace that sequence in the text with a newline.
Try this:
for br in soup.find_all('br'):
    br.replace_with('+++')

text = soup.get_text().replace('+++', '\n')