Is lxml buggy in handling of attributes as dictionaries via attrib? - lxml

I am trying to migrate some code from using ElementTree to using lxml.etree and have encountered an error early on:
>>> import lxml.etree as ET
>>> main = ET.Element("main")
>>> another = ET.Element("another", foo="bar")
>>> main.attrib.update(another.attrib)
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
main.attrib.update(another.attrib)
File "lxml.etree.pyx", line 2153, in lxml.etree._Attrib.update
(src/lxml/lxml.etree.c:46972)
ValueError: too many values to unpack (expected 2)
But I am able to update using the following:
>>> main.attrib.update({'foo': 'bar'})
Is this a bug in lxml (version 2.3) or am I just missing something obvious?

I'm getting the same error; I don't think it's only a 2.3 issue.
Workaround:
main.attrib.update(dict(another.attrib))
# or, more efficient if it has many attributes (Python 2; on Python 3 use another.attrib.items()):
main.attrib.update(another.attrib.iteritems())
UPDATE
lxml.etree._Attrib.update accepts a dict or an iterable (source). Although _Attrib has a dict-like interface, it is not a dict instance.
In [3]: type(another.attrib)
Out[3]: lxml.etree._Attrib
In [4]: isinstance(another.attrib, dict)
Out[4]: False
So update treats another.attrib as a sequence of (key, value) pairs, but iterating an _Attrib yields bare keys, hence the unpacking error. Maybe it's done for performance; only the lxml author knows.
Ways it could be changed in lxml:
Subclass dict.
Check for hasattr(sequence_or_dict, 'items') (see the sketch below).
I'm not familiar with Cython and don't know which is better.
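For illustration, here is a minimal pure-Python sketch of the hasattr approach (update_attrib is a hypothetical helper, not part of lxml's API):
import lxml.etree as ET

def update_attrib(attrib, other):
    # Accept a dict, an _Attrib, or any iterable of (key, value) pairs.
    items = other.items() if hasattr(other, 'items') else other
    for key, value in items:
        attrib[key] = value

main = ET.Element("main")
another = ET.Element("another", foo="bar")
update_attrib(main.attrib, another.attrib)  # works where main.attrib.update(another.attrib) fails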

Related

pandas value_counts() with IntEnum raises RecursionError

I wrote the following code to illustrate my problem. I'm using Python 3.6 with pandas==0.25.3.
import pandas as pd
from enum import Enum, IntEnum
class BookType(Enum):
    DRAMA = 5
    ROMAN = 3

class AuthorType(IntEnum):
    UNKNOWN = 0
    GROUP = 1
    MAN = 2

def print_num_type(df: pd.DataFrame, col_name: str, enum_type: Enum) -> int:
    counts = df[col_name].value_counts()
    val = counts[enum_type]
    print('value counts:', counts)
    print(f'Found "{val}" of type {enum_type}')

d = {'title': ['Charly Morry', 'James', 'Watson', 'Marry L.'],
     'isbn': [21412412, 334764712, 12471021, 124141111],
     'book_type': [BookType.DRAMA, BookType.ROMAN, BookType.ROMAN, BookType.ROMAN],
     'author_type': [AuthorType.UNKNOWN, AuthorType.UNKNOWN, AuthorType.MAN, AuthorType.UNKNOWN]}
df = pd.DataFrame(data=d)
df.set_index(['title', 'isbn'], inplace=True)
df['book_type'] = df['book_type'].astype('category')
df['author_type'] = df['author_type'].astype('category')
print(df)
print(df.dtypes)
print_num_type(df, 'book_type', BookType.DRAMA)
print_num_type(df, 'author_type', AuthorType.UNKNOWN)
My pandas.DataFrame consists of two columns (book_type and author_type) of type categorical.
Furthermore, book_type is a class inheriting from Enum and author_type from IntEnum. When calling print_num_type(df, 'book_type', BookType.DRAMA) everything works as expected and the number of books of this type is printed, whereas print_num_type(df, 'author_type', AuthorType.UNKNOWN) raises the error:
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\_weakrefset.py", line 72, in __contains__
wr = ref(item)
RecursionError: maximum recursion depth exceeded while calling a Python object
Exception ignored in: 'pandas._libs.lib.c_is_list_like'
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\_weakrefset.py", line 72, in __contains__
wr = ref(item)
RecursionError: maximum recursion depth exceeded while calling a Python object
What am I doing wrong here?
Is there a workaround to fix this error? I can't change the IntEnum type of AuthorType, since it's provided by another library.
Thanks in advance!
See answer here
The main idea is that since x.value_counts() (counts in your function) is itself a pandas Series, it's best to use .iat or .iloc when indexing into it; see the iat docs.
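For example, a minimal self-contained sketch of that positional lookup (the stand-in Series and the position 0 are assumptions; the position depends on the value_counts ordering):
import pandas as pd

counts = pd.Series([3, 1], index=['UNKNOWN', 'MAN'])  # stand-in for value_counts() output
val = counts.iat[0]  # positional access avoids passing the IntEnum as a label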
I think the easiest solution is to just use (x==0).sum(), or in your syntax:
val = (df[col_name]==enum_type).sum()
I put a minimal working example in the comments under your question so you can reproduce the problem/fix easily with the "x" notation.
What version of Pandas are you using? I realized after reproducing the error that upgrading Pandas (now on pandas-1.4.2) fixes the error, and value_counts()[0] worked as expected.
Run pip install --upgrade pandas.

Generate random strings in pandas

I would like to create a list of one million keys drawn from 200 different values:
N = 1000000
uniques_keys = [pd.core.common.rands(3) for i in range(200)]
keys = [random.choice(uniques_keys) for i in range(N)]
However, I get the following error
In [250]:import pandas as pd
In [251]:pd.core.common.rands(3)
Traceback (most recent call last):
File "<ipython-input-251-31d12e0a07e7>", line 1, in <module>
pd.core.common.rands(3)
AttributeError: module 'pandas.core.common' has no attribute 'rands'
I use pandas version 0.18.0.
You can use:
In [14]: pd.util.testing.rands_array?
Signature: pd.util.testing.rands_array(nchars, size, dtype='O')
Docstring: Generate an array of byte strings.
Demo:
In [15]: N = 1000000
In [16]: s_arr = pd.util.testing.rands_array(10, N)
In [17]: s_arr
Out[17]: array(['L6d2GwhHdT', '5oki5T8VYm', 'XKUblAUFyL', ..., 'BE5AdCa62a', 'X3zDFKj6iy', 'iwASB9xZV3'], dtype=object)
In [18]: len(s_arr)
Out[18]: 1000000
UPDATE (2020-04-21):
In newer Pandas versions you might see the following deprecation warning:
FutureWarning: pandas.util.testing is deprecated. Use the functions in
the public API at pandas.testing instead.
In this case, import the function as follows:
from pandas._testing import rands_array
There are several solutions:
First solution:
The function rands appears to be in pandas.util.testing now:
pd.util.testing.rands(3)
Second solution:
Go straight for the underlying numpy implementation (as found in the pandas source code):
import string
import numpy as np

RANDS_CHARS = np.array(list(string.ascii_letters + string.digits),
                       dtype=(np.str_, 1))
nchars = 3
''.join(np.random.choice(RANDS_CHARS, nchars))
Third solution:
Call numpy.random.bytes (check that it fulfils your requirements).
Fourth solution:
See this question for other suggestions.
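Tying things back to the original task, here is a minimal sketch (building on the second solution; the names are illustrative) that generates the 200 unique 3-character keys and then the one million draws:
import string
import numpy as np

RANDS_CHARS = np.array(list(string.ascii_letters + string.digits),
                       dtype=(np.str_, 1))
unique_keys = [''.join(np.random.choice(RANDS_CHARS, 3)) for _ in range(200)]
keys = np.random.choice(unique_keys, size=1000000)  # one million draws from the 200 keys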

Use of metadata with MultiIndex column DataFrame

I have written some software that processes data for analysis and plotting. For each type of data, the data frames are produced in a module dedicated to that type.
Depending on the structure of the data, the data frame columns can be normal or MultiIndex.
I pass the data frames to a plotting function that produces plots of the columns that are numeric.
I would like to be able to "attach" a string to each of the "plottable" columns to be used as its plot label. This string will not be the same as the name of the column.
I don't seem to be able to figure out a good way to do this purely with a pandas DataFrame, and so far I don't have any other solution either.
I have seen posts about metadata, but I don't completely understand whether this functionality is supported. At least I can't get it to work, and using frames with MultiIndex columns seems to complicate things further.
If it is not supported, is it still on the todo list?
From my reading I get the impression it has worked differently in different versions of pandas, and may even depend on whether Python 2 or 3 is used.
I would like to know if there is a convenient way to accomplish what I require with pandas data frames. Is using _metadata for this advisable? If so, how?
I have looked around quite a bit, but the MultiIndex concern in particular seems not to be addressed anywhere.
This one seems to indicate that metadata should be supported, but is it for data frames? I need Series inside a DataFrame.
Adding meta-information/metadata to pandas DataFrame
This one seems to be a similar question, but I tried its solution and it did not help me.
Propagate pandas series metadata through joins
Here is some experimentation I have done based on my understanding of the _metadata functionality. It seems to indicate that _metadata did not make any difference and that the attribute did not persist across a copy. It also shows that using MultiIndex is an even more "unsupported" case.
Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> from numpy.random import randn # To get values for the test frames
>>> import platform # To print python version
>>> # A function to set labels of the columns
>>> def labelSetter(aDF) :
... DFtmp = aDF.copy() # Just to ensure it is a different dataframe
... for column in DFtmp.columns :
... DFtmp[column].myLab='This is '+column.__str__()
... DFtmp[column].notMyLab='This should not persist'
... return DFtmp
...
>>>
>>> print 'Pandas version: {}'.format(pd.version.version)
Pandas version: 0.15.2
>>>
>>> pd.Series._metadata.append('myLab');print pd.Series._metadata # now _metadata contains 'myLab'
['name', 'myLab']
>>>
>>> # Make dataframes normal columns and MultiIndex
>>> dfS=pd.DataFrame(randn(2, 6),columns=['a1','a2','a3','b1','b2','c1']);print dfS
a1 a2 a3 b1 b2 c1
0 -0.934869 -0.310979 0.362635 -0.994605 -0.880114 -1.663265
1 0.205341 -1.642080 -0.732969 -0.080109 -0.082483 -0.208360
>>>
>>> dfMI=pd.DataFrame(randn(2, 6),columns=[['a','a','a','b','b','c'],['a1','a2','a3','b1','b2','c1']]);print dfMI
a b c
a1 a2 a3 b1 b2 c1
0 -0.578399 0.478925 1.047342 -0.087225 1.905074 0.146105
1 0.640575 0.153328 -1.117847 1.043026 0.671220 -0.218550
>>>
>>> # Run the labelSetter function on the data frames
>>> dfSWlab=labelSetter(dfS)
>>> dfMIWlab=labelSetter(dfMI)
>>>
>>> print dfSWlab['a2'].myLab
This is a2
>>> # This worked
>>>
>>> print dfSWlab['a2'].notMyLab
This should not persist
>>> # 'notMyLab' has not been appended to _metadata but the label still persists.
>>>
>>> dfSWlabCopy=dfSWlab.copy() # make a copy to see if myLab persists.
>>>
>>> dfSWlabCopy['a2'].myLab
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 1942, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'myLab'
>>> # 'myLab' was appended to _metadata but still did not persist the copy
>>>
>>> print dfMIWlab['a']['a2'].myLab
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 1942, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'myLab'
>>> # For the MultiIndex data frame the 'myLab' is not accessible
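For what it's worth, a workaround that sidesteps _metadata entirely is to keep the labels in a plain dict keyed by column; MultiIndex column keys are just tuples, so they work too. A minimal sketch (the labels dict is my own construct, not pandas functionality):
import pandas as pd
from numpy.random import randn

dfMI = pd.DataFrame(randn(2, 6),
                    columns=[['a', 'a', 'a', 'b', 'b', 'c'],
                             ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']])
# Side table of labels, kept outside the frame so it survives copies.
labels = {col: 'This is {}'.format(col) for col in dfMI.columns}
print(labels[('a', 'a2')])  # -> This is ('a', 'a2')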

'numpy.ndarray' object is not callable error

Hi, I am getting the following error:
'numpy.ndarray' object is not callable
when performing the calculation in the following manner:
rolling_means = pd.rolling_mean(prices, 20, min_periods=20)
rolling_std = pd.rolling_std(prices, 20)
#print rolling_means.head(20)
upper_band = rolling_means + (rolling_std)* 2
lower_band = rolling_means - (rolling_std)* 2
I am not sure how to resolve this; can someone point me in the right direction?
The error TypeError: 'numpy.ndarray' object is not callable means that you tried to call a numpy array as a function. We can reproduce the error like so in the repl:
In [16]: import numpy as np
In [17]: np.array([1,2,3])()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/user/<ipython-input-17-1abf8f3c8162> in <module>()
----> 1 np.array([1,2,3])()
TypeError: 'numpy.ndarray' object is not callable
If we are to assume that the error is indeed coming from the snippet of code that you posted (something that you should check), then you must have reassigned either pd.rolling_mean or pd.rolling_std to a numpy array earlier in your code.
What I mean is something like this:
In [1]: import numpy as np
In [2]: import pandas as pd
In [3]: pd.rolling_mean(np.array([1,2,3]), 20, min_periods=5) # Works
Out[3]: array([ nan, nan, nan])
In [4]: pd.rolling_mean = np.array([1,2,3])
In [5]: pd.rolling_mean(np.array([1,2,3]), 20, min_periods=5) # Doesn't work anymore...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/user/<ipython-input-5-f528129299b9> in <module>()
----> 1 pd.rolling_mean(np.array([1,2,3]), 20, min_periods=5) # Doesn't work anymore...
TypeError: 'numpy.ndarray' object is not callable
So, basically you need to search the rest of your codebase for pd.rolling_mean = ... and/or pd.rolling_std = ... to see where you may have overwritten them.
Also, if you'd like, you can put in reload(pd) (importlib.reload(pd) on Python 3) just before your snippet, which should make it run by restoring pd to what you originally imported, but I still highly recommend that you try to find where you may have reassigned the given functions.
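Separately, note that pd.rolling_mean and pd.rolling_std were deprecated in pandas 0.18 in favor of the rolling() method and removed in later versions; the method form below is the modern equivalent and sidesteps module-level reassignment entirely. A minimal sketch, with a stand-in Series for prices:
import pandas as pd

prices = pd.Series(range(100), dtype=float)  # stand-in for the question's data
rolling_means = prices.rolling(20, min_periods=20).mean()
rolling_std = prices.rolling(20).std()
upper_band = rolling_means + rolling_std * 2
lower_band = rolling_means - rolling_std * 2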
For everyone hitting this problem in 2021: sometimes you get this error when you create a numpy variable with the same name as one of your functions. Instead of calling the function, Python tries to call the numpy array as a function, and you get this error. Just rename the numpy variable; a minimal reproduction follows.
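A minimal reproduction of that shadowing mistake (all names hypothetical):
import numpy as np

def scale(values):
    return values * 2

scale = np.array([1, 2, 3])  # oops: shadows the function above
scale(np.array([4, 5, 6]))   # TypeError: 'numpy.ndarray' object is not callable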
I ran into the same problem and solved it. The issue was that my function parameters and variables shared the same name. Try giving them different names.

floats in NumPy structured array and native string formatting with .format()

Can anyone tell me why this NumPy record is having trouble with Python's new-style string formatting? All floats in the record choke on "{:f}".format(record).
Thanks for your help!
In [334]: type(tmp)
Out[334]: numpy.core.records.record
In [335]: tmp
Out[335]: ('XYZZ', 2001123, -23.823917388916016)
In [336]: tmp.dtype
Out[336]: dtype([('sta', '|S6'), ('ondate', '<i8'), ('lat', '<f4')])
# Some formatting works fine
In [337]: '{0.sta:6.6s} {0.ondate:8d}'.format(tmp)
Out[337]: 'XYZZ 2001123'
# Any float has trouble
In [338]: '{0.sta:6.6s} {0.ondate:8d} {0.lat:11.6f}'.format(tmp)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/Users/jkmacc/python/pisces/<ipython-input-338-e5f6bcc4f60f> in <module>()
----> 1 '{0.sta:6.6s} {0.ondate:8d} {0.lat:11.6f}'.format(tmp)
ValueError: Unknown format code 'f' for object of type 'str'
This question was answered on the NumPy user mailing list under "floats coerced to string with "{:f}".format() ?":
It seems that np.int64/32 and np.str inherit their respective native Python __format__(), but np.float32/64 doesn't get __builtin__.float.__format__(). That's not intuitive, but I see now why this works:
In [8]: '{:6.6s} {:8d} {:11.6f}'.format(tmp.sta, tmp.ondate, float(tmp.lat))
Out[8]: 'XYZZ 2001123 -23.820000'
Thanks!
-Jon
EDIT:
np.float32/np.int32 inherit from the native Python types on a 32-bit system, and likewise np.float64/np.int64 on a 64-bit system. A mismatch will produce the same problem as in the original post.
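An alternative to wrapping with float() is NumPy's scalar .item() method, which converts any NumPy scalar to its native Python equivalent before formatting; a minimal sketch with a stand-in record:
import numpy as np

rec = np.rec.array([('XYZZ', 2001123, -23.82)],
                   dtype=[('sta', 'S6'), ('ondate', '<i8'), ('lat', '<f4')])[0]
print('{:8d} {:11.6f}'.format(rec.ondate.item(), rec.lat.item()))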