I have written some software that processes data for analysis and plotting. For each type of data, the data frames are produced in a module dedicated to that type.
Depending on the structure of the data, the data frame columns can be either normal or MultiIndex.
I will pass the data frames to a plotting function that produces plots of the numeric columns.
I would like to be able to "attach" a string to each of the plottable columns, to be used as its plot label. This string will not be the same as the name of the column.
I have not been able to figure out a good way to do this purely with a pandas DataFrame, and so far I don't have any other solution either.
I have seen posts about metadata, but I don't completely understand whether this functionality is supported or not. At least I can't get it to work, and frames with MultiIndex columns seem to complicate things further.
If it is not supported, is it still on the todo list?
From my reading I get the impression it has worked differently in different versions of pandas, and even depends on whether Python 2 or 3 is used.
I would like to know whether there is a convenient way to accomplish what I need with pandas data frames. Is using _metadata for this advisable, and if so, how?
I have looked around quite a bit, but the MultiIndex concern in particular does not seem to be addressed anywhere.
This one seems to indicate that metadata should be supported, but is that for data frames? I need it on the Series within a DataFrame.
Adding meta-information/metadata to pandas DataFrame
This one seems to be a similar question, but I tried the solution and it did not help me.
Propagate pandas series metadata through joins
Here is some experimentation I have done based on my understanding of the _metadata functionality. It seems to indicate that _metadata did not make any difference and that the attribute did not persist through a copy. It also shows that using MultiIndex is an even more "unsupported" case.
Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> from numpy.random import randn # To get values for the test frames
>>> import platform # To print python version
>>> # A function to set labels of the columns
>>> def labelSetter(aDF) :
...     DFtmp = aDF.copy() # Just to ensure it is a different dataframe
...     for column in DFtmp.columns :
...         DFtmp[column].myLab='This is '+column.__str__()
...         DFtmp[column].notMyLab='This should not persist'
...     return DFtmp
...
>>>
>>> print 'Pandas version: {}'.format(pd.version.version)
Pandas version: 0.15.2
>>>
>>> pd.Series._metadata.append('myLab');print pd.Series._metadata # now _metadata contains 'myLab'
['name', 'myLab']
>>>
>>> # Make dataframes normal columns and MultiIndex
>>> dfS=pd.DataFrame(randn(2, 6),columns=['a1','a2','a3','b1','b2','c1']);print dfS
         a1        a2        a3        b1        b2        c1
0 -0.934869 -0.310979  0.362635 -0.994605 -0.880114 -1.663265
1  0.205341 -1.642080 -0.732969 -0.080109 -0.082483 -0.208360
>>>
>>> dfMI=pd.DataFrame(randn(2, 6),columns=[['a','a','a','b','b','c'],['a1','a2','a3','b1','b2','c1']]);print dfMI
          a                             b                   c
         a1        a2        a3        b1        b2        c1
0 -0.578399  0.478925  1.047342 -0.087225  1.905074  0.146105
1  0.640575  0.153328 -1.117847  1.043026  0.671220 -0.218550
>>>
>>> # Run the labelSetter function on the data frames
>>> dfSWlab=labelSetter(dfS)
>>> dfMIWlab=labelSetter(dfMI)
>>>
>>> print dfSWlab['a2'].myLab
This is a2
>>> # This worked
>>>
>>> print dfSWlab['a2'].notMyLab
This should not persist
>>> # 'notMyLab' has not been appended to _metadata but the label still persists.
>>>
>>> dfSWlabCopy=dfSWlab.copy() # make a copy to see if myLab persists.
>>>
>>> dfSWlabCopy['a2'].myLab
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 1942, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'myLab'
>>> # 'myLab' was appended to _metadata but still did not persist through the copy
>>>
>>> print dfMIWlab['a']['a2'].myLab
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 1942, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'myLab'
>>> # For the MultiIndex data frame the 'myLab' is not accessible
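The only fallback I can think of is to keep the labels outside the frame, in a plain dict keyed by whatever df.columns yields (strings for normal columns, tuples for MultiIndex columns), and pass that dict alongside the frame to the plotting function, though I would prefer something attached to the frame itself. A minimal sketch of what I mean (labelsFor and plotNumeric are made-up names, not part of my real code):
import pandas as pd
from numpy.random import randn

dfS = pd.DataFrame(randn(2, 3), columns=['a1', 'a2', 'a3'])
dfMI = pd.DataFrame(randn(2, 3), columns=[['a', 'a', 'b'], ['a1', 'a2', 'b1']])

def labelsFor(aDF):
    # Keep the labels beside the frame; tuple keys cover MultiIndex columns.
    return {col: 'This is {}'.format(col) for col in aDF.columns}

def plotNumeric(aDF, labels):
    # Stand-in for the real plotting routine: look up the label per column.
    for col in aDF.columns:
        print('{}: {}'.format(col, labels[col]))

plotNumeric(dfS, labelsFor(dfS))
plotNumeric(dfMI, labelsFor(dfMI))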
Here is some code to elaborate on my problem. I'm using Python 3.6 with pandas==0.25.3.
import pandas as pd
from enum import Enum, IntEnum
class BookType(Enum):
    DRAMA = 5
    ROMAN = 3

class AuthorType(IntEnum):
    UNKNOWN = 0
    GROUP = 1
    MAN = 2

def print_num_type(df: pd.DataFrame, col_name: str, enum_type: Enum) -> int:
    counts = df[col_name].value_counts()
    val = counts[enum_type]
    print('value counts:', counts)
    print(f'Found "{val}" of type {enum_type}')

d = {'title': ['Charly Morry', 'James', 'Watson', 'Marry L.'],
     'isbn': [21412412, 334764712, 12471021, 124141111],
     'book_type': [BookType.DRAMA, BookType.ROMAN, BookType.ROMAN, BookType.ROMAN],
     'author_type': [AuthorType.UNKNOWN, AuthorType.UNKNOWN, AuthorType.MAN, AuthorType.UNKNOWN]}
df = pd.DataFrame(data=d)
df.set_index(['title', 'isbn'], inplace=True)
df['book_type'] = df['book_type'].astype('category')
df['author_type'] = df['author_type'].astype('category')
print(df)
print(df.dtypes)
print_num_type(df, 'book_type', BookType.DRAMA)
print_num_type(df, 'author_type', AuthorType.UNKNOWN)
My pandas.DataFrame has two categorical columns (book_type and author_type). book_type holds members of a class inheriting from Enum, while author_type holds members of a class inheriting from IntEnum. When calling print_num_type(df, 'book_type', BookType.DRAMA) everything works as expected and the number of books of this type is printed, whereas print_num_type(df, 'author_type', AuthorType.UNKNOWN) raises the error:
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\_weakrefset.py", line 72, in __contains__
wr = ref(item)
RecursionError: maximum recursion depth exceeded while calling a Python object
Exception ignored in: 'pandas._libs.lib.c_is_list_like'
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\_weakrefset.py", line 72, in __contains__
wr = ref(item)
RecursionError: maximum recursion depth exceeded while calling a Python object
What am I doing wrong here?
Is there a workaround to fix this error? I can't change the IntEnum base of AuthorType, since it's provided by another library.
Thanks in advance!
See answer here
The main idea is that since x.value_counts() (counts in your function) is itself a pandas Series, it's best to use .iat or .iloc when indexing into it; see, e.g., the iat docs.
I think the easiest solution is to just use (x==0).sum(), or in your syntax:
val = (df[col_name]==enum_type).sum()
I put a minimal working example in the comments under your question so you can reproduce the problem/fix easily with the "x" notation.
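For reference, a minimal sketch of how print_num_type could look with that change (the value counts printing is dropped, otherwise the signature is unchanged; this is just the comparison-and-sum variant from above):
import pandas as pd
from enum import Enum

def print_num_type(df: pd.DataFrame, col_name: str, enum_type: Enum) -> int:
    # Compare against the enum member and sum the boolean matches, so the
    # value_counts() Series is never indexed with an IntEnum member.
    val = (df[col_name] == enum_type).sum()
    print(f'Found "{val}" of type {enum_type}')
    return val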
What version of Pandas are you using? I realized after reproducing the error that upgrading Pandas (now on pandas-1.4.2) fixes it, and value_counts()[0] works as expected.
Run pip install --upgrade pandas
I am trying to read a Parquet file into a Pandas dataframe. Using the APIs below (or even the pd.read_parquet() wrapper), I am hit by ValueError: buffer source array is read-only.
Having searched around online, it seems to be related to Cython not supporting read-only buffers, but I couldn't find any solution for how to address the problem.
How can I read a Parquet file into a Pandas dataframe when the API throws ValueError: buffer source array is read-only?
In [1]: import pandas as pd
...: import numpy as np
...: import pyarrow as pa
...: import pyarrow.parquet as pq
In [2]: table = pq.read_table('Parquet/Journal.parquet', columns=['SOURCE_CODE','YEAR','MONTH','AMOUNT'])
In [3]: df = table.to_pandas()
In [4]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 85326489 entries, 0 to 85326488
Data columns (total 4 columns):
AMOUNT float64
SOURCE_CODE category
YEAR category
MONTH category
dtypes: category(3), float64(1)
memory usage: 895.1 MB
In [5]: df.groupby(['SOURCE_CODE','YEAR','MONTH'])['AMOUNT'].sum()
This is a bug in the latest release of pandas (0.23.x) and will be solved in pandas 0.24+. The issue was already reported by other users: https://github.com/pandas-dev/pandas/issues/23276 and is fixed through the following pull request: https://github.com/pandas-dev/pandas/pull/21688
For the proper fix, you need to wait for a new pandas release or manually install the git master. As a workaround you might be able to fix this by adding a dummy float column via df['__dummy__'] = np.nan. This will force pandas' BlockManager to reorder the float columns and should turn AMOUNT into a writable column.
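A minimal sketch of that workaround applied to the snippet above (the '__dummy__' column name is arbitrary, and the column can be dropped again after the aggregation):
import numpy as np
import pyarrow.parquet as pq

table = pq.read_table('Parquet/Journal.parquet',
                      columns=['SOURCE_CODE', 'YEAR', 'MONTH', 'AMOUNT'])
df = table.to_pandas()

# A dummy float column makes pandas' BlockManager rebuild the float block,
# which should leave AMOUNT backed by a writable buffer.
df['__dummy__'] = np.nan

result = df.groupby(['SOURCE_CODE', 'YEAR', 'MONTH'])['AMOUNT'].sum()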
I resolved this by adding a single line before applying groupby:
df = df.copy() #add this line
df.groupby(['SOURCE_CODE','YEAR','MONTH'])['AMOUNT'].sum()
I would like to create a list of one million keys with 200 different values:
import random
import pandas as pd

N = 1000000
uniques_keys = [pd.core.common.rands(3) for i in range(200)]
keys = [random.choice(uniques_keys) for i in range(N)]
However, I get the following error
In [250]:import pandas as pd
In [251]:pd.core.common.rands(3)
Traceback (most recent call last):
File "<ipython-input-251-31d12e0a07e7>", line 1, in <module>
pd.core.common.rands(3)
AttributeError: module 'pandas.core.common' has no attribute 'rands'
I use pandas version 0.18.0.
You can use:
In [14]: pd.util.testing.rands_array?
Signature: pd.util.testing.rands_array(nchars, size, dtype='O')
Docstring: Generate an array of byte strings.
Demo:
In [15]: N = 1000000
In [16]: s_arr = pd.util.testing.rands_array(10, N)
In [17]: s_arr
Out[17]: array(['L6d2GwhHdT', '5oki5T8VYm', 'XKUblAUFyL', ..., 'BE5AdCa62a', 'X3zDFKj6iy', 'iwASB9xZV3'], dtype=object)
In [18]: len(s_arr)
Out[18]: 1000000
UPDATE: from 2020-04-21
In newer Pandas versions you might see the following deprecation warning:
FutureWarning: pandas.util.testing is deprecated. Use the functions in
the public API at pandas.testing instead.
In that case, import the function as follows:
from pandas._testing import rands_array
There are several solutions:
First solution:
The function rands appears to be in pandas.util.testing now:
pd.util.testing.rands(3)
Second solution:
Go straight for the underlying numpy implementation (as found in the pandas source code):
import string
import numpy as np

RANDS_CHARS = np.array(list(string.ascii_letters + string.digits),
                       dtype=(np.str_, 1))
nchars = 3
''.join(np.random.choice(RANDS_CHARS, nchars))
Third solution:
Call numpy.random.bytes (check that it fulfils your requirements).
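For example, a minimal sketch of that idea (assuming that hex-encoding the raw bytes is an acceptable substitute for the alphanumeric keys rands produced):
import numpy as np

raw = np.random.bytes(3)   # three random bytes, e.g. b'\x9f\x02\xd1'
key = raw.hex()            # a 6-character printable key, e.g. '9f02d1'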
Fourth solution:
See this question for other suggestions.
Here is my code snippet:
>>> import numpy as np
>>> import pandas as pd
>>> pd.__version__
'0.12.0'
>>> np.__version__
'1.6.2'
>>> pd.DataFrame(None,columns=['x'])['x'].values + np.min([0.5])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'numpy.ndarray' and 'numpy.float64'
I'm computing the sum of an empty array and a scalar. Notice that if I construct the array with np.array or with pd.Series it works fine (the result is an empty array). The same is true if I replace np.min([0.5]) (which returns an np.float64) with a plain float: 0.5.
I find this very strange! Of course I can easily patch my program, but I think this is a bug in pandas, isn't it?
Addendum: is it correct to initialize pd.DataFrame with None to get an empty DataFrame with the given columns?
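For what it's worth, here is a minimal sketch of the variants described above (dtypes made explicit, since newer pandas would otherwise default the empty Series to object):
import numpy as np
import pandas as pd

# The column of an all-None DataFrame comes back as an empty object-dtype array:
empty_obj = pd.DataFrame(None, columns=['x'])['x'].values

# Variants reported to work (each yields an empty array, no error):
np.array([], dtype=float) + np.min([0.5])           # empty float64 array
pd.Series([], dtype=float).values + np.min([0.5])   # empty float64 array
empty_obj + 0.5                                      # a plain Python float also works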
I am trying to migrate some code from using ElementTree to using lxml.etree and have encountered an error early on:
>>> import lxml.etree as ET
>>> main = ET.Element("main")
>>> another = ET.Element("another", foo="bar")
>>> main.attrib.update(another.attrib)
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
main.attrib.update(another.attrib)
File "lxml.etree.pyx", line 2153, in lxml.etree._Attrib.update
(src/lxml/lxml.etree.c:46972)
ValueError: too many values to unpack (expected 2)
But I am able to update using the following:
>>> main.attrib.update({'foo': 'bar'})
Is this a bug in lxml (version 2.3) or am I just missing something obvious?
I'm getting the same error; I don't think it's a 2.3-only issue.
Workaround:
main.attrib.update(dict(another.attrib))
# or, more efficient if it has many attributes:
main.attrib.update(another.attrib.iteritems())  # in Python 3, use another.attrib.items()
UPDATE
lxml.etree._Attrib.update accepts a dict or an iterable of key/value pairs (source). Although _Attrib has a dict-like interface, it is not a dict instance.
In [3]: type(another.attrib)
Out[3]: lxml.etree._Attrib
In [4]: isinstance(another.attrib, dict)
Out[4]: False
So update tries to iterate the argument as key/value pairs. Maybe it's done for performance; only the lxml author knows.
Ways to change it in lxml:
Subclass dict.
Check for hasattr(sequence_or_dict, 'items').
I'm not familiar with Cython and don't know what is better.