I ran into trouble downloading and reading CSV files provided by the US Department of Education's National Center for Education Statistics. Below is code that should run for folks who might be interested in helping me troubleshoot.
import requests, zipfile, io
import pandas as pd
# First example shows that the code can work. Works fine on years 2005
# and earlier.
url = 'https://nces.ed.gov/ipeds/datacenter/data/HD2005_Data_Stata.zip'
r_zip_file_2005 = requests.get(url, stream=True)
z_zip_file_2005 = zipfile.ZipFile(io.BytesIO(r_zip_file_2005.content))
z_zip_file_2005.extractall('.')
csv_2005_df = pd.read_csv('hd2005_data_stata.csv')
# Second example shows that something changed in the CSV files after
# 2005 (or seems to have changed).
url = 'https://nces.ed.gov/ipeds/datacenter/data/HD2006_Data_Stata.zip'
r_zip_file_2006 = requests.get(url, stream=True)
z_zip_file_2006 = zipfile.ZipFile(io.BytesIO(r_zip_file_2006.content))
z_zip_file_2006.extractall('.')
csv_2006_df = pd.read_csv('hd2006_data_stata.csv')
For 2006 Python raises:
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-26-b26a150e37ee> in <module>()
----> 1 csv_2006_df = pd.read_csv('hd2006_data_stata.csv')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 18: invalid start byte
Any tips on how to overcome this?
Only took 7 months... I figured out my answer. It wasn't rocket science.
csv_2006_df = pd.read_csv('hd2006_data_stata.csv', encoding='ISO-8859-1')
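For anyone facing the same guessing game with other IPEDS years, here is a minimal sketch that sniffs the encoding first instead of trying candidates by hand. It assumes the third-party chardet package is installed (pip install chardet); the 100 KB sample size is an arbitrary choice.
import chardet
import pandas as pd
# Sample the first ~100 KB of raw bytes and let chardet guess the encoding
with open('hd2006_data_stata.csv', 'rb') as f:
    guess = chardet.detect(f.read(100_000))
print(guess)  # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73}
csv_2006_df = pd.read_csv('hd2006_data_stata.csv', encoding=guess['encoding'])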
I have this following snippet on Colab:
import pickle

def getMap7Data():
    filename = "/content/maze_7.pkl"
    infile = open(filename, "rb")
    maze = pickle.load(infile)
    infile.close()
    return maze
The maze_7.pkl file contains an object called maze, produced by the person who previously worked on the project.
I can load the pickle locally and see its attributes. It's a very long list, and I don't know its exact structure. I'm going to use 10-15 attributes for now, but I might need more in the future.
Google Colab gives the following error:
Traceback (most recent call last):
File "/content/loadData.py", line 27, in getMap7Data
maze = pickle.load(infile)
ModuleNotFoundError: No module named 'maze'
Is there a way to load this pickle?
It's similar to the situation described here, and that question remains unanswered as well.
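One workaround that I believe should help here: pickle fails because it cannot import the maze module the object was defined in, so you can register a stub module under that name before unpickling. The class name Maze below is an assumption; match it to whatever name the pickle actually references (the full traceback or the raw bytes of the .pkl will show it).
import pickle
import sys
import types

# Register a stand-in 'maze' module so pickle can resolve the class reference
stub = types.ModuleType("maze")

class Maze:  # assumed class name; confirm against the pickle's contents
    pass

stub.Maze = Maze
sys.modules["maze"] = stub

with open("/content/maze_7.pkl", "rb") as infile:
    maze = pickle.load(infile)  # attributes are restored onto the stub class
This works for ordinarily pickled instances, whose attribute dict is restored onto whatever class the name resolves to; if the original class used a custom __reduce__ or __setstate__, you would need the real code.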
I'm trying to do a Twitter sentiment analysis, and my dataset is a couple of .csv.gzip files.
This is what I did to convert them all to one dataframe.
(I'm using Google Colab, in case that has anything to do with the error, the filenames, or something else.)
apr_files = [file[9:] for file in csv_collection if re.search(r"04+", file)]
apr_files
Output:
['0428_UkraineCombinedTweetsDeduped.csv.gzip',
'0430_UkraineCombinedTweetsDeduped.csv.gzip',
'0401_UkraineCombinedTweetsDeduped.csv.gzip']
temp_list = []
for file in apr_files:
    print(f"Reading in {file}")
    # unzip and read in the csv file as a dataframe
    temp = pd.read_csv(file, compression="gzip", header=0, index_col=0)
    # append dataframe to temp list
    temp_list.append(temp)
Error:
Reading in 0428_UkraineCombinedTweetsDeduped.csv.gzip
Reading in 0430_UkraineCombinedTweetsDeduped.csv.gzip
/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py:2882: DtypeWarning: Columns (15) have mixed types.Specify dtype option on import or set low_memory=False.
exec(code_obj, self.user_global_ns, self.user_ns)
Reading in 0401_UkraineCombinedTweetsDeduped.csv.gzip
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-26-5cba3ca01b1e> in <module>()
3 print(f"Reading in {file}")
4 # unzip and read in the csv file as a dataframe
----> 5 tmp_df = pd.read_csv(file, compression="gzip", header=0, index_col=0)
6 # append dataframe to temp list
7 tmp_df_list.append(tmp_df)
/usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb8 in position 8048: invalid start byte
I assumed that this error might be because the tweets contain special characters (emoji, non-English characters, etc.).
I just switched to Jupyter Notebook, and it worked fine there.
As of now, I don't know what the issue with Google Colab was, though.
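If anyone hits this on Colab again, a defensive sketch that falls back to Latin-1 when UTF-8 decoding fails may help; Latin-1 maps all 256 byte values, so the second read cannot raise a decode error (though a few characters may come out wrong if the true encoding differs):
import pandas as pd

temp_list = []
for file in apr_files:
    print(f"Reading in {file}")
    try:
        temp = pd.read_csv(file, compression="gzip", header=0, index_col=0)
    except UnicodeDecodeError:
        # Latin-1 accepts every byte, so this read cannot fail on decoding
        temp = pd.read_csv(file, compression="gzip", header=0, index_col=0,
                           encoding="latin-1")
    temp_list.append(temp)
df = pd.concat(temp_list)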
I am trying the following:
After downloading http://eric.clst.org/assets/wiki/uploads/Stuff/gz_2010_us_050_00_20m.json
In [2]: import geopandas
In [3]: geopandas.read_file('./gz_2010_us_050_00_20m.json')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-83a1d4a0fc1f> in <module>
----> 1 geopandas.read_file('./gz_2010_us_050_00_20m.json')
~/miniconda3/envs/ml3/lib/python3.6/site-packages/geopandas/io/file.py in read_file(filename, **kwargs)
24 else:
25 f_filt = f
---> 26 gdf = GeoDataFrame.from_features(f_filt, crs=crs)
27
28 # re-order with column order from metadata, with geometry last
~/miniconda3/envs/ml3/lib/python3.6/site-packages/geopandas/geodataframe.py in from_features(cls, features, crs)
207
208 rows = []
--> 209 for f in features_lst:
210 if hasattr(f, "__geo_interface__"):
211 f = f.__geo_interface__
fiona/ogrext.pyx in fiona.ogrext.Iterator.__next__()
fiona/ogrext.pyx in fiona.ogrext.FeatureBuilder.build()
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
On the page http://eric.clst.org/tech/usgeojson/, with 4 GeoJSON files under the 20m column, the above file corresponds to the US Counties row and is the only one of the 4 that cannot be read. The error message isn't very informative; what could the reason be?
If your error message looks anything like "Polygons and MultiPolygons should follow the right-hand rule", it means the coordinate order in those geo objects is wrong: exterior rings should run counter-clockwise and any holes clockwise.
Here's an online tool to "fix" your objects, with a short explanation:
https://mapster.me/right-hand-rule-geojson-fixer/
Possibly an answer for people arriving at this page: I received the same error, and it was thrown due to encoding issues.
Try re-encoding the initial file as UTF-8, or try opening the file with whatever encoding you think it actually uses. This fixed my error.
More info here
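To make that concrete, here is a minimal sketch of re-encoding the file to UTF-8 before handing it to geopandas. The ISO-8859-1 source encoding is an assumption; substitute whatever encoding the file actually uses.
import geopandas

# Assumed source encoding; swap in the real one if known
with open('gz_2010_us_050_00_20m.json', encoding='ISO-8859-1') as src:
    text = src.read()
with open('gz_2010_us_050_00_20m_utf8.json', 'w', encoding='utf-8') as dst:
    dst.write(text)

gdf = geopandas.read_file('gz_2010_us_050_00_20m_utf8.json')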
I'm fairly new to Python, and I promise I looked around for a while before coming here. I'm trying to make a stock reader where someone can type in whichever stock they want and it shows the data for it. So far everything is going well, but I am having trouble with the user input. Here is my code:
from alpha_vantage.timeseries import TimeSeries
import matplotlib.pyplot as plt
pwd = input('Enter Ticker Symbol Here: ')
ts = TimeSeries(key='HQL2R9KNYW99K4BT', output_format='pandas')
data, meta_data = ts.get_intraday(symbol='TSLA', interval='1min', outputsize='full')  # But instead of 'TSLA' I want it to be user input.
data['4. close'].plot()
plt.title('Intraday Times Series for the MSFT stock (1 min)')
plt.show()
The error I am getting is:
Traceback (most recent call last):
File "C:/Users/abakh/PycharmProjects/stock1/Stock1.py", line 7, in <module>
data, meta_data = ts.get_intraday(symbol=' + pwd + ', interval='1min', outputsize='full')
File "C:\Users\abakh\PycharmProjects\stock1\venv\lib\site-packages\alpha_vantage\alphavantage.py", line 178, in _format_wrapper
data = call_response[data_key]
KeyError: 'Time Series (1min)'
Never mind, guys. While waiting for a response I was messing around and actually found a way to do it! Instead of making a separate input, I added the input() call on the get_intraday line itself: data, meta_data = ts.get_intraday(symbol=input('Put here: '), interval='1min', outputsize='full')
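For later readers, here is a sketch of the same fix using the pwd variable that was already defined above, which also keeps the plot title in sync with whatever ticker was entered:
from alpha_vantage.timeseries import TimeSeries
import matplotlib.pyplot as plt

pwd = input('Enter Ticker Symbol Here: ')
ts = TimeSeries(key='HQL2R9KNYW99K4BT', output_format='pandas')
# Pass the user's ticker straight through to the API call
data, meta_data = ts.get_intraday(symbol=pwd, interval='1min', outputsize='full')
data['4. close'].plot()
plt.title(f'Intraday Time Series for the {pwd} stock (1 min)')
plt.show()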
I'm trying to cluster some data with Python and SciPy, but the following code does not work, for a reason I do not understand:
from scipy.sparse import *
matrix = dok_matrix((en, en), int)
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        for auth2 in authors:
            if auth1 == auth2: continue
            id1 = e2id[auth1]
            id2 = e2id[auth2]
            matrix[id1, id2] += 1
from scipy.cluster.vq import vq, kmeans2, whiten
result = kmeans2(matrix, 30)
print result
It says:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans2(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 683, in kmeans2
clusters = init(data, k)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 576, in _krandinit
return init_rankn(data)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 563, in init_rankn
mu = np.mean(data, 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2374, in mean
return mean(axis, dtype, out)
TypeError: mean() takes at most 2 arguments (4 given)
When I use kmeans instead of kmeans2, I get the following error:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 507, in kmeans
guess = take(obs, randint(0, No, k), 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 103, in take
return take(indices, axis, out, mode)
TypeError: take() takes at most 3 arguments (5 given)
I think I have these problems because I'm using sparse matrices, but my matrices are too big to fit in memory otherwise. Is there a way to use the standard clustering algorithms from SciPy with sparse matrices, or do I have to re-implement them myself?
I created a new version of my code to work in a vector space:
el = len(experts)
pl = len(pubs)
print el, pl
from scipy.sparse import *
P = dok_matrix((pl, el), int)
p_id = 0
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        if len(auth1) < 2: continue
        id1 = e2id[auth1]
        P[p_id, id1] = 1
    p_id += 1  # move to the next publication's row
from scipy.cluster.vq import kmeans, kmeans2, whiten
result = kmeans2(P, 30)
print result
But I'm still getting the error:
TypeError: mean() takes at most 2 arguments (4 given)
What am I doing wrong?
K-means cannot be run on distance matrices.
It needs a vector space to compute means in; that is why it is called k-means. If you want to use a distance matrix, you need to look into purely distance-based algorithms such as DBSCAN and OPTICS (both on Wikipedia).
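As a rough sketch of that distance-based route with scikit-learn (the eps and min_samples values are placeholders to tune, and distances stands for a precomputed distance matrix):
from sklearn.cluster import DBSCAN

# With metric="precomputed", entries absent from a sparse distance matrix
# are treated as distances larger than eps, so the matrix can stay sparse.
labels = DBSCAN(eps=0.5, min_samples=5, metric="precomputed").fit_predict(distances)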
May I suggest "Affinity Propagation" from scikit-learn? In the work I've been doing with it, I find that it has generally been able to find the 'naturally' occurring clusters within my data set. The input to the algorithm is an affinity matrix, or similarity matrix, built from any arbitrary similarity measure.
I don't have a good handle on the kind of data you have on hand, so I can't speak to the exact suitability of this method to your data set, but it may be worth a try, perhaps?
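A minimal sketch of that, assuming you can afford a dense n-by-n similarity matrix S (for instance, the co-occurrence counts from the question, densified):
from sklearn.cluster import AffinityPropagation

# S: dense (n_samples, n_samples) similarity matrix
ap = AffinityPropagation(affinity="precomputed")
labels = ap.fit_predict(S)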
Alternatively, if you're looking to cluster graphs, I'd take a look at NetworkX. That might be a useful tool for you. The reason I suggest this is that it looks like the data you're working with is a network of authors. With NetworkX, you can put in an adjacency matrix and find out which authors are clustered together.
For further elaboration on this, you can see a question I asked earlier for inspiration here.
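A hedged sketch of the NetworkX route, reusing the sparse co-occurrence matrix built in the question; from_scipy_sparse_array is the NetworkX 3.x name (older releases call it from_scipy_sparse_matrix):
import networkx as nx

# Build an undirected co-authorship graph from the sparse co-occurrence matrix
G = nx.from_scipy_sparse_array(matrix.tocsr())
# Connected components give a crude first cut at author clusters
clusters = list(nx.connected_components(G))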