pandas value_counts() with IntEnum raises RecursionError

I wrote the following code to illustrate my problem. I'm using Python 3.6 with pandas==0.25.3.
import pandas as pd
from enum import Enum, IntEnum

class BookType(Enum):
    DRAMA = 5
    ROMAN = 3

class AuthorType(IntEnum):
    UNKNOWN = 0
    GROUP = 1
    MAN = 2

def print_num_type(df: pd.DataFrame, col_name: str, enum_type: Enum) -> None:
    counts = df[col_name].value_counts()
    val = counts[enum_type]
    print('value counts:', counts)
    print(f'Found "{val}" of type {enum_type}')

d = {'title': ['Charly Morry', 'James', 'Watson', 'Marry L.'],
     'isbn': [21412412, 334764712, 12471021, 124141111],
     'book_type': [BookType.DRAMA, BookType.ROMAN, BookType.ROMAN, BookType.ROMAN],
     'author_type': [AuthorType.UNKNOWN, AuthorType.UNKNOWN, AuthorType.MAN, AuthorType.UNKNOWN]}
df = pd.DataFrame(data=d)
df.set_index(['title', 'isbn'], inplace=True)
df['book_type'] = df['book_type'].astype('category')
df['author_type'] = df['author_type'].astype('category')
print(df)
print(df.dtypes)
print_num_type(df, 'book_type', BookType.DRAMA)
print_num_type(df, 'author_type', AuthorType.UNKNOWN)
My pandas.DataFrame consists of two columns (book_type and author_type) of type categorical.
Furthermore, book_type holds members of BookType, a class inheriting from Enum, while author_type holds members of AuthorType, which inherits from IntEnum. When calling print_num_type(df, 'book_type', BookType.DRAMA) everything works as expected and the number of books of this type is printed, whereas print_num_type(df, 'author_type', AuthorType.UNKNOWN) raises the error:
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\_weakrefset.py", line 72, in __contains__
wr = ref(item)
RecursionError: maximum recursion depth exceeded while calling a Python object
Exception ignored in: 'pandas._libs.lib.c_is_list_like'
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\_weakrefset.py", line 72, in __contains__
wr = ref(item)
RecursionError: maximum recursion depth exceeded while calling a Python object
What am I doing wrong here?
Is there a workaround for this error? I can't change the IntEnum base of AuthorType, since it comes from another library.
Thanks in advance!

See answer here
The main idea is that since df[col_name].value_counts() (counts in your function) is itself a pandas Series, it's best to use .iat or .iloc when indexing into it; see the .iat docs.
I think the easiest solution is to just use (x==0).sum(), or in your syntax:
val = (df[col_name]==enum_type).sum()
I put a minimal working example in the comments under your question so you can reproduce the problem/fix easily with the "x" notation.
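For example, here's a sketch of print_num_type rewritten this way (a hypothetical variant, reusing the DataFrame from your question; the element-wise comparison avoids indexing the Series by the enum member entirely):

import pandas as pd
from enum import Enum

def print_num_type(df: pd.DataFrame, col_name: str, enum_type: Enum) -> None:
    # Compare the column to the enum member and count matches, instead of
    # indexing value_counts() with the member itself.
    val = (df[col_name] == enum_type).sum()
    print(f'Found "{val}" of type {enum_type}')

This works for both the Enum and the IntEnum column, since == never indexes by the member.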

What version of Pandas are you using? After reproducing the error, I found that upgrading Pandas (now on pandas-1.4.2) fixes it, and value_counts()[0] works as expected.
Run pip install --upgrade pandas

Related

"TypeError: 'dict' object is not callable" in Python pandas DataFrame

I imported pandas as pd.
>>>import pandas as pd
Assigned Series variables 'city_names' and 'population'
>>> city_names=pd.Series(['San Fransisco','San Jose','Sacromento'])
>>> population=pd.Series([8964,91598,34892])
Created DataFrame
>>> pd.DataFrame=({'City Name': city_names, 'Population':population})
While assigning DataFrame to the variable 'cities',
>>> cities=pd.DataFrame({'City Name ' : city_names, 'Population' : population})
I am getting the following error :
Traceback (most recent call last):
File "<pyshell#17>", line 1, in <module>
cities=pd.DataFrame({'City Name ' : city_names, 'Population' : population})
TypeError: 'dict' object is not callable
I'm also aware that a dict is a dictionary of (key, value) pairs! Please let me know why this error occurs. I looked into other questions as well but couldn't find help.
Your code pd.DataFrame=({'City Name': city_names, 'Population':population}) replaced the pd.DataFrame class with a dictionary.
pd.DataFrame is a class defined by pandas. A class is "callable", meaning that you can call it with parentheses, like pd.DataFrame(). However, in Python = is an assignment operator, and pd.DataFrame = {...} bound a dictionary to the name where the DataFrame class constructor used to be. Every later call to pd.DataFrame(...) then fails, because a dict is not callable. Don't assign to pd.DataFrame; it will break every subsequent use.
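For reference, a minimal sketch of the corrected session (restart the interpreter first, so the real pd.DataFrame class is back in place):

import pandas as pd

city_names = pd.Series(['San Fransisco', 'San Jose', 'Sacromento'])
population = pd.Series([8964, 91598, 34892])

# Call the class; never assign to pd.DataFrame.
cities = pd.DataFrame({'City Name': city_names, 'Population': population})
print(cities)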

Generate random strings in pandas

I would like to create a list of one million keys drawn from 200 different random string values:
import random
import pandas as pd

N = 1000000
uniques_keys = [pd.core.common.rands(3) for i in range(200)]
keys = [random.choice(uniques_keys) for i in range(N)]
However, I get the following error
In [250]:import pandas as pd
In [251]:pd.core.common.rands(3)
Traceback (most recent call last):
File "<ipython-input-251-31d12e0a07e7>", line 1, in <module>
pd.core.common.rands(3)
AttributeError: module 'pandas.core.common' has no attribute 'rands'
I use pandas version 0.18.0.
You can use:
In [14]: pd.util.testing.rands_array?
Signature: pd.util.testing.rands_array(nchars, size, dtype='O')
Docstring: Generate an array of byte strings.
Demo:
In [15]: N = 1000000
In [16]: s_arr = pd.util.testing.rands_array(10, N)
In [17]: s_arr
Out[17]: array(['L6d2GwhHdT', '5oki5T8VYm', 'XKUblAUFyL', ..., 'BE5AdCa62a', 'X3zDFKj6iy', 'iwASB9xZV3'], dtype=object)
In [18]: len(s_arr)
Out[18]: 1000000
UPDATE: from 2020-04-21
In newer Pandas versions you might see the following deprecation warning:
FutureWarning: pandas.util.testing is deprecated. Use the functions in
the public API at pandas.testing instead.
in this case import this function as follows:
from pandas._testing import rands_array
There are several solutions:
First solution:
The function rands appears to be in pandas.util.testing now:
pd.util.testing.rands(3)
Second solution:
Go straight for the underlying numpy implementation (as found in the pandas source code):
import string
import numpy as np

RANDS_CHARS = np.array(list(string.ascii_letters + string.digits),
                       dtype=(np.str_, 1))
nchars = 3
''.join(np.random.choice(RANDS_CHARS, nchars))
Third solution:
Call numpy.random.bytes (check that it fulfils your requirements).
Fourth solution:
See this question for other suggestions.
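Putting it together, a self-contained sketch for newer NumPy and Python 3 (public numpy API only, no pandas testing helpers) that builds the one-million-key list:

import string
import numpy as np

rng = np.random.default_rng()
chars = np.array(list(string.ascii_letters + string.digits))

# 200 random 3-character strings ...
unique_keys = [''.join(rng.choice(chars, 3)) for _ in range(200)]

# ... and one million keys sampled from them
keys = rng.choice(unique_keys, size=1_000_000).tolist()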

TypeError on read_csv, working in pandas .7, error in .8.0rc2, possible dependency error?

I am attempting to execute the following within python:
from pandas import *
tickdata = read_csv('/home/user/sourcefile.csv',index_col=0,parse_dates='TRUE')
The csv files has rows that look like:
2011/11/23 23:56:00.554389,1165.2500
2011/11/23 23:56:02.310943,1165.5000
2011/11/23 23:56:05.564009,1165.2500
On pandas .7, this executes fine. On pandas .8.0rc2, I get the error below. Because I have .7 and .8 installed on two different systems, I have not ruled out a dependency or python version difference. Any ideas on how to get this to execute under .8 are appreciated.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pandas-0.8.0rc2-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 225, in read_csv
return _read(TextParser, filepath_or_buffer, kwds)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.8.0rc2-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 192, in _read
return parser.get_chunk()
File "/usr/local/lib/python2.7/dist-packages/pandas-0.8.0rc2-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 728, in get_chunk
index = self._agg_index(index)
File "/usr/local/lib/python2.7/dist-packages/pandas-0.8.0rc2-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 846, in _agg_index
if try_parse_dates and self._should_parse_dates(self.index_col):
File "/usr/local/lib/python2.7/dist-packages/pandas-0.8.0rc2-py2.7-linux-x86_64.egg/pandas/io/parsers.py", line 874, in _should_parse_dates
return i in to_parse or name in to_parse
TypeError: 'in <string>' requires string as left operand, not int
I fixed the parser bug shown in the stack trace you pasted. However, I'm wondering whether your date column is actually named "TRUE", or did you mean to pass a boolean? I haven't dug through pandas history, but in 0.8 we support much more complex date-parsing behavior as part of the time series API, so here we're interpreting the string value as a column name.
I've reported the bug on GitHub (best place for bug reports):
https://github.com/pydata/pandas/issues/1544
Should have a resolution today or tomorrow.
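In the meantime, if the intent was simply "parse the index column as dates", the fix is presumably to pass a real boolean instead of the string 'TRUE'; a sketch:

import pandas as pd

# parse_dates=True (a boolean) asks read_csv to parse the index column as
# dates; the string 'TRUE' would be interpreted as a column name in 0.8+.
tickdata = pd.read_csv('/home/user/sourcefile.csv', index_col=0, parse_dates=True)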

Clustering of sparse matrix in python and scipy

I'm trying to cluster some data with python and scipy, but the following code does not work, for reasons I do not understand:
from scipy.sparse import *

matrix = dok_matrix((en, en), int)
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        for auth2 in authors:
            if auth1 == auth2: continue
            id1 = e2id[auth1]
            id2 = e2id[auth2]
            matrix[id1, id2] += 1

from scipy.cluster.vq import vq, kmeans2, whiten
result = kmeans2(matrix, 30)
print result
It says:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans2(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 683, in kmeans2
clusters = init(data, k)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 576, in _krandinit
return init_rankn(data)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 563, in init_rankn
mu = np.mean(data, 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2374, in mean
return mean(axis, dtype, out)
TypeError: mean() takes at most 2 arguments (4 given)
When I use kmeans instead of kmeans2 I get the following error:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 507, in kmeans
guess = take(obs, randint(0, No, k), 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 103, in take
return take(indices, axis, out, mode)
TypeError: take() takes at most 3 arguments (5 given)
I think I'm having these problems because I'm using sparse matrices, but my matrices are too big to fit in memory otherwise. Is there a way to use the standard clustering algorithms from scipy with sparse matrices? Or do I have to re-implement them myself?
I created a new version of my code to work in a vector space:
el = len(experts)
pl = len(pubs)
print el, pl

from scipy.sparse import *

P = dok_matrix((pl, el), int)
p_id = 0
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        if len(auth1) < 2: continue
        id1 = e2id[auth1]
        P[p_id, id1] = 1
    p_id += 1  # advance to the next publication's row

from scipy.cluster.vq import kmeans, kmeans2, whiten
result = kmeans2(P, 30)
print result
But I'm still getting the error:
TypeError: mean() takes at most 2 arguments (4 given)
What am I doing wrong?
K-means cannot be run on distance matrices.
It needs a vector space to compute means in, that is why it is called k-means. If you want to use a distance matrix, you need to look into purely distance based algorithms such as DBSCAN and OPTICS (both on Wikipedia).
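As a rough sketch of the distance-based route (using scikit-learn's DBSCAN, assuming matrix is the dok_matrix of co-authorship counts from the question; the similarity-to-distance conversion below is one hypothetical choice, not the only one):

import numpy as np
from sklearn.cluster import DBSCAN

# Densify the co-authorship counts and turn similarities into distances
# (more shared publications = smaller distance). Hypothetical conversion.
S = matrix.toarray().astype(float)
D = S.max() - S
np.fill_diagonal(D, 0)  # each point is at distance 0 from itself

# metric='precomputed' makes DBSCAN treat the input as pairwise distances;
# eps needs tuning for the data at hand.
labels = DBSCAN(eps=1.0, min_samples=5, metric='precomputed').fit_predict(D)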
May I suggest "Affinity Propagation" from scikit-learn? In the work I've been doing with it, I find that it has generally been able to find the 'naturally' occurring clusters within my data set. The input to the algorithm is an affinity matrix, or similarity matrix, built from any arbitrary similarity measure.
I don't have a good handle on the kind of data you have on hand, so I can't speak to the exact suitability of this method to your data set, but it may be worth a try, perhaps?
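A minimal sketch of that idea, again assuming the densified co-authorship counts from the question can serve as a crude similarity matrix:

from sklearn.cluster import AffinityPropagation

# affinity='precomputed' tells the estimator that the input is already a
# similarity matrix (an assumption here, not a recommendation).
S = matrix.toarray().astype(float)
labels = AffinityPropagation(affinity='precomputed').fit_predict(S)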
Alternatively, if you're looking to cluster graphs, I'd take a look at NetworkX. That might be a useful tool for you. The reason I suggest this is that it looks like the data you're working with is a network of authors. Hence, with NetworkX, you can put in an adjacency matrix and find out which authors are clustered together.
For a further elaboration on this, you can see a question that I had asked earlier for inspiration here.

Is lxml buggy in handling of attributes as dictionaries via attrib?

I am trying to migrate some code from using ElementTree to using lxml.etree and have encountered an error early on:
>>> import lxml.etree as ET
>>> main = ET.Element("main")
>>> another = ET.Element("another", foo="bar")
>>> main.attrib.update(another.attrib)
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
main.attrib.update(another.attrib)
File "lxml.etree.pyx", line 2153, in lxml.etree._Attrib.update
(src/lxml/lxml.etree.c:46972)
ValueError: too many values to unpack (expected 2)
But I am able to update using the following:
>>> main.attrib.update({'foo': 'bar'})
Is this a bug in lxml (version 2.3) or am I just missing something obvious?
I'm getting the same error; I don't think it's only a 2.3 issue.
Workaround:
main.attrib.update(dict(another.attrib))
# or more efficient if it has many attributes:
main.attrib.update(another.attrib.iteritems())
UPDATE
lxml.etree._Attrib.update accepts a dict or an iterable of (key, value) pairs (see the source). Although _Attrib has a dict-like interface, it is not a dict instance.
In [3]: type(another.attrib)
Out[3]: lxml.etree._Attrib
In [4]: isinstance(another.attrib, dict)
Out[4]: False
So update tries to iterate the argument and unpack its items as key, value. Maybe it's done for performance; only the lxml author knows.
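To make the failure mode concrete, here's a small sketch (assuming _Attrib iterates like a dict, i.e. over key names):

import lxml.etree as ET

another = ET.Element("another", foo="bar")

# Iterating an _Attrib, like iterating a dict, yields key names only:
for key in another.attrib:
    print(key)  # prints 'foo'

# update() effectively runs `for key, value in arg` on a non-dict argument,
# so each key string is unpacked character by character; 'foo' has three
# characters, hence "too many values to unpack (expected 2)".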
Ways to change it in lxml:
Subclass dict.
Check for hasattr(sequence_or_dict, 'items').
I'm not familiar with Cython and don't know what is better.