Selecting columns from a numpy recarray

I have an object of type numpy.core.records.recarray. I want to use it effectively as a pandas DataFrame. More precisely, I want to use a subset of its columns to obtain a new recarray, the same way you would do pandas_dataframe[[selected_columns]].
What's the easiest way to achieve this?

Without using pandas, you can select a subset of the fields of a structured array (recarray). For example:
In [338]: dt=np.dtype('i,f,i,f')
In [340]: A=np.ones((3,),dtype=dt)
In [341]: A[:]=(1,2,3,4)
In [342]: A
Out[342]:
array([(1, 2.0, 3, 4.0), (1, 2.0, 3, 4.0), (1, 2.0, 3, 4.0)],
dtype=[('f0', '<i4'), ('f1', '<f4'), ('f2', '<i4'), ('f3', '<f4')])
Take a copy of a subset of the fields:
In [343]: B=A[['f1','f3']].copy()
In [344]: B
Out[344]:
array([(2.0, 4.0), (2.0, 4.0), (2.0, 4.0)],
dtype=[('f1', '<f4'), ('f3', '<f4')])
The copy can be modified independently of A:
In [346]: B['f3']=[.1,.2,.3]
In [347]: B
Out[347]:
array([(2.0, 0.10000000149011612), (2.0, 0.20000000298023224),
(2.0, 0.30000001192092896)],
dtype=[('f1', '<f4'), ('f3', '<f4')])
In [348]: A
Out[348]:
array([(1, 2.0, 3, 4.0), (1, 2.0, 3, 4.0), (1, 2.0, 3, 4.0)],
dtype=[('f0', '<i4'), ('f1', '<f4'), ('f2', '<i4'), ('f3', '<f4')])
Multi-field indexing of structured arrays is not highly developed. A[['f0','f1']] is enough for viewing, but it will warn or give an error if you try to modify that subset. That's why I used copy when making B.
There's a set of functions in numpy.lib.recfunctions that facilitate adding and removing fields from recarrays. I'll have to look up the access pattern, but mostly they construct a new dtype and an empty array, and then copy fields by name.
import numpy.lib.recfunctions as rf
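For example, drop_fields and append_fields from that module build a new dtype and copy the data field by field. A minimal sketch (the variable and field names here are just for illustration, not from the original post):

import numpy as np
import numpy.lib.recfunctions as rf

dt = np.dtype('i,f,i,f')
A = np.ones((3,), dtype=dt)
A[:] = (1, 2, 3, 4)

# keep only two fields: a new dtype is built and the data copied by name
B = rf.drop_fields(A, ['f0', 'f2'])

# add a field; usemask=False keeps the result a plain structured array
C = rf.append_fields(A, 'f4', data=[10., 20., 30.], usemask=False)
print(B.dtype, C.dtype)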
Update
With newer numpy versions (1.16+), multi-field indexing has changed:
In [17]: B=A[['f1','f3']]
In [18]: B
Out[18]:
array([(2., 4.), (2., 4.), (2., 4.)],
dtype={'names':['f1','f3'], 'formats':['<f4','<f4'], 'offsets':[4,12], 'itemsize':16})
This B is a true view, referencing the same data buffer as A. The offsets let it skip over the missing fields. Those gaps can be removed with repack_fields (see the next answer).
But when putting this into a DataFrame, it doesn't look like we need to do that:
In [19]: df = pd.DataFrame(A)
In [21]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 4 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   f0      3 non-null      int32
 1   f1      3 non-null      float32
 2   f2      3 non-null      int32
 3   f3      3 non-null      float32
dtypes: float32(2), int32(2)
memory usage: 176.0 bytes
In [22]: df = pd.DataFrame(B)
In [24]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 2 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   f1      3 non-null      float32
 1   f3      3 non-null      float32
dtypes: float32(2)
memory usage: 152.0 bytes
The frame created from B is smaller.
Sometimes when making a dataframe from an array, the array itself is used as the frame's memory. Changing values in the source array will change the values in the frame. But with structured arrays, pandas makes a copy of the data, with a different memory layout.
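A quick way to convince yourself of that copy (a small, self-contained sketch): mutate the source array after building the frame and check that the frame does not change.

import numpy as np
import pandas as pd

A = np.ones(3, dtype='i,f,i,f')
A[:] = (1, 2, 3, 4)

df = pd.DataFrame(A)
A['f1'] = 99                  # modify the source array after building the frame
print(df['f1'].to_numpy())    # still [2., 2., 2.] -- pandas copied the structured data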
Columns of matching dtype are grouped into a common NumericBlock:
In [42]: pd.DataFrame(A)._data
Out[42]:
BlockManager
Items: Index(['f0', 'f1', 'f2', 'f3'], dtype='object')
Axis 1: RangeIndex(start=0, stop=3, step=1)
NumericBlock: slice(1, 5, 2), 2 x 3, dtype: float32
NumericBlock: slice(0, 4, 2), 2 x 3, dtype: int32
In [43]: pd.DataFrame(B)._data
Out[43]:
BlockManager
Items: Index(['f1', 'f3'], dtype='object')
Axis 1: RangeIndex(start=0, stop=3, step=1)
NumericBlock: slice(0, 2, 1), 2 x 3, dtype: float32

In addition to @hpaulj's answer, you'll want to repack the copy; otherwise the copied subset will have the same memory footprint as the original.
import numpy as np
# note that you have to import this library explicitly
import numpy.lib.recfunctions
# B has a subset of "columns" but uses the same amount of memory as A
B = A[['f1','f3']].copy()
# C has a smaller memory footprint
C = np.lib.recfunctions.repack_fields(B)
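A quick way to see the effect is to compare the itemsizes (a sketch; the exact numbers assume numpy >= 1.16, where the copy of a multi-field selection keeps the padded itemsize):

import numpy as np
import numpy.lib.recfunctions as rf

A = np.ones(3, dtype='i,f,i,f')
B = A[['f1', 'f3']].copy()         # keeps the padded 16-byte itemsize
C = rf.repack_fields(B)            # packs the two float32 fields into 8 bytes

print(B.dtype.itemsize, B.nbytes)  # 16 48
print(C.dtype.itemsize, C.nbytes)  # 8 24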

Related

Creating a Pandas DataFrame from a NumPy masked array?

I am trying to create a Pandas DataFrame from a NumPy masked array, which I understand is a supported operation. This is an example of the source array:
from numpy import ma

a = ma.array([(1, 2.2), (42, 5.5)],
             dtype=[('a', int), ('b', float)],
             mask=[(True, False), (False, True)])
which outputs as:
masked_array(data=[(--, 2.2), (42, --)],
             mask=[( True, False), (False, True)],
             fill_value=(999999, 1.e+20),
             dtype=[('a', '<i8'), ('b', '<f8')])
Attempting to create a DataFrame with pd.DataFrame(a) returns:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-40-a4c5236a3cd4> in <module>
----> 1 pd.DataFrame(a)
/usr/local/anaconda/lib/python3.8/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
636 # a masked array
637 else:
--> 638 data = sanitize_masked_array(data)
639 mgr = ndarray_to_mgr(
640 data,
/usr/local/anaconda/lib/python3.8/site-packages/pandas/core/construction.py in sanitize_masked_array(data)
452 """
453 mask = ma.getmaskarray(data)
--> 454 if mask.any():
455 data, fill_value = maybe_upcast(data, copy=True)
456 data.soften_mask() # set hardmask False if it was True
/usr/local/anaconda/lib/python3.8/site-packages/numpy/core/_methods.py in _any(a, axis, dtype, out, keepdims, where)
54 # Parsing keyword arguments is currently fairly slow, so avoid it for now
55 if where is True:
---> 56 return umr_any(a, axis, dtype, out, keepdims)
57 return umr_any(a, axis, dtype, out, keepdims, where=where)
58
TypeError: cannot perform reduce with flexible type
Is this operation indeed supported? Currently using Pandas 1.3.3 and NumPy 1.20.3.
Update
Is this supported?
According to the Pandas documentation here:
Alternatively, you may pass a numpy.MaskedArray as the data argument to the DataFrame constructor, and its masked entries will be considered missing.
The code above was me asking "What will I get?" when passing a NumPy masked array to pandas, and that documented behaviour was the result I was hoping for. Above was the simplest example I could come up with.
I do expect each Series/column in Pandas to be of a single type.
Update 2
Anyone interested in this should probably see this Pandas GitHub issue; it's noted there that Pandas has "deprecated support for MaskedRecords".
If the array has a simple dtype, the dataframe creation works (as documented):
In [320]: a = np.ma.array([(1, 2.2), (42, 5.5)],
...: mask=[(True,False),(False,True)])
In [321]: a
Out[321]:
masked_array(
  data=[[--, 2.2],
        [42.0, --]],
  mask=[[ True, False],
        [False, True]],
  fill_value=1e+20)
In [322]: import pandas as pd
In [323]: pd.DataFrame(a)
Out[323]:
0 1
0 NaN 2.2
1 42.0 NaN
This a has shape (2, 2), and the result has 2 rows and 2 columns.
With the compound dtype, the shape is 1d:
In [326]: a = np.ma.array([(1, 2.2), (42, 5.5)],
...: dtype=[('a',int),('b',float)],
...: mask=[(True,False),(False,True)])
In [327]: a.shape
Out[327]: (2,)
The error is the result of a test on the mask. flexible type refers to your compound dtype:
In [330]: a.mask.any()
Traceback (most recent call last):
File "<ipython-input-330-8dc32ee3f59d>", line 1, in <module>
a.mask.any()
File "/usr/local/lib/python3.8/dist-packages/numpy/core/_methods.py", line 57, in _any
return umr_any(a, axis, dtype, out, keepdims)
TypeError: cannot perform reduce with flexible type
The documented pandas feature clearly does not apply to structured arrays. Without studying the pandas code I can't say exactly what it's trying to do at this point, but it's clear the code was not written with structured arrays in mind.
The non-masked part does work, with the desired column dtypes:
In [332]: pd.DataFrame(a.data)
Out[332]:
a b
0 1 2.2
1 42 5.5
Using the default fill:
In [344]: a.filled()
Out[344]:
array([(999999, 2.2e+00), ( 42, 1.0e+20)],
dtype=[('a', '<i8'), ('b', '<f8')])
In [345]: pd.DataFrame(a.filled())
Out[345]:
a b
0 999999 2.200000e+00
1 42 1.000000e+20
I'd have to look more at the ma docs/code to see if it's possible to apply a different fill to the two fields. Filling with nan doesn't work for the int field; numpy doesn't have pandas' nullable integer. I haven't worked enough with that pandas feature to know whether the resulting dtype stays int or is changed to object.
Anyway, you are pushing the bounds of both np.ma and pandas with this task.
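For the record, on the pandas side the masked integer field can go into a nullable Int64 column without falling back to object dtype. A sketch built field by field (assumes the same a as above):

import numpy as np
import numpy.ma as ma
import pandas as pd

a = ma.array([(1, 2.2), (42, 5.5)],
             dtype=[('a', int), ('b', float)],
             mask=[(True, False), (False, True)])

df = pd.DataFrame({
    # nullable integer column: masked entries become <NA>, dtype stays Int64
    'a': pd.array([pd.NA if m else int(v) for v, m in zip(a['a'].data, a['a'].mask)],
                  dtype='Int64'),
    # float column: masked entries become NaN
    'b': a['b'].astype(float).filled(np.nan),
})
print(df.dtypes)   # a: Int64, b: float64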
edit
The default fill_value is a tuple, one for each field:
In [350]: a.fill_value
Out[350]: (999999, 1.e+20)
So we can fill the fields differently, and make a frame from that:
In [351]: a.filled((-1, np.nan))
Out[351]: array([(-1, 2.2), (42, nan)], dtype=[('a', '<i8'), ('b', '<f8')])
In [352]: pd.DataFrame(a.filled((-1, np.nan)))
Out[352]:
a b
0 -1 2.2
1 42 NaN
Looks like I can make a structured array with a pandas dtype, and its associated fill_value:
In [363]: a = np.ma.array([(1, 2.2), (42, 5.5)],
...: dtype=[('a',pd.Int64Dtype),('b',float)],
...: mask=[(True,False),(False,True)],
     ...:     fill_value=(pd.NA,np.nan))
In [364]: a
Out[364]:
masked_array(data=[(--, 2.2), (42, --)],
mask=[( True, False), (False, True)],
fill_value=(<NA>, nan),
dtype=[('a', 'O'), ('b', '<f8')])
In [366]: pd.DataFrame(a.filled())
Out[366]:
a b
0 <NA> 2.2
1 42 NaN
The question is what would you expect to get? It would be ambiguous for pandas to convert your data.
If you want to get the original data:
>>> pd.DataFrame(a.data)
a b
0 1 2.2
1 42 5.5
If you want to consider masked values invalid:
>>> pd.DataFrame(a.filled(np.nan))
BUT, for this to work, all the fields of the masked array should be float.
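If the fields are not all float to begin with, one workaround (a sketch, not from the original answer) is to build the frame field by field, casting each field to float so NaN can mark the masked entries:

import numpy as np
import numpy.ma as ma
import pandas as pd

a = ma.array([(1, 2.2), (42, 5.5)],
             dtype=[('a', int), ('b', float)],
             mask=[(True, False), (False, True)])

# cast each field to float and fill its masked entries with NaN
df = pd.DataFrame({name: a[name].astype(float).filled(np.nan)
                   for name in a.dtype.names})
print(df)
#       a    b
# 0   NaN  2.2
# 1  42.0  NaN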

dtype is ignored when using multilevel columns

When using pd.read_csv with multi-level columns (read with header=[0, 1]), pandas seems to ignore the dtype= keyword.
Is there a way to make pandas use the passed types?
I am reading large data sets from CSV and therefore try to read the data already in the correct format to save CPU and memory.
I tried passing a dict to dtype, with tuples as well as strings as keys. It seems that dtype expects strings. At least I observed that if I pass the level-0 keys, the types are assigned, but unfortunately that would mean that all columns with the same level-0 label get the same type. In the example below, columns (A, int16) and (A, int32) would get type object, and (B, float32) and (B, int16) would get float32.
import pandas as pd
df = pd.DataFrame({
    ('A', 'int16'): pd.Series([1, 2, 3, 4], dtype='int16'),
    ('A', 'int32'): pd.Series([132, 232, 332, 432], dtype='int32'),
    ('B', 'float32'): pd.Series([1.01, 1.02, 1.03, 1.04], dtype='float32'),
    ('B', 'int16'): pd.Series([21, 22, 23, 24], dtype='int16')})
print(df)
df.to_csv('test_df.csv')
print(df.dtypes)
# full column name tuples with level 0/1 labels don't work
df_new = pd.read_csv(
    'test_df.csv',
    header=list(range(2)),
    dtype={
        ('A', 'int16'): 'int16',
        ('A', 'int32'): 'int32'
    })
print(df_new.dtypes)
# using the level 0 labels for dtype= seems to work
df_new2 = pd.read_csv(
    'test_df.csv',
    header=list(range(2)),
    dtype={
        'A': 'object',
        'B': 'float32'
    })
print(df_new2.dtypes)
I'd expect the second dtypes printout (print(df_new.dtypes)) to show the same column types as the first print(df.dtypes), but it does not seem to use the dtype= argument at all and infers the types, resulting in much more memory-intensive types.
Was I missing something?
Thank you in advance Jottbe
This is a bug that is also present in the current version of pandas. I filed a bug report here.
But there is a workaround for the current version as well. It works perfectly if the engine is switched to python:
df_new = pd.read_csv(
    'test_df.csv',
    header=list(range(2)),
    engine='python',
    dtype={
        ('A', 'int16'): 'int16',
        ('A', 'int32'): 'int32'
    })
print(df_new.dtypes)
print(df_new.dtypes)
The output is:
Unnamed: 0_level_0  Unnamed: 0_level_1      int64
A                   int16                   int16
                    int32                   int32
B                   float32               float64
                    int16                   int64
So the "A-columns" are typed as specified in dtypes.

One hot encoding categorical features - Sparse form only

I have a dataframe that has int and categorical features. The categorical features are of 2 types: numbers and strings.
I was able to one-hot encode the columns that were int and the categorical columns that were numbers. I get an error when I try to one-hot encode the categorical columns that are strings:
ValueError: could not convert string to float: '13367cc6'
Since the dataframe is huge with high cardinality, I only want to convert it to a sparse form. I would prefer a solution that uses from sklearn.preprocessing import OneHotEncoder, since I am familiar with it.
I checked other questions too, but none of them addresses what I am asking.
data = [[623, 'dog', 4], [123, 'cat', 2],[623, 'cat', 1], [111, 'lion', 6]]
The above dataframe contains 4 rows and 3 columns
Column names - ['animal_id', 'animal_name', 'number']
Assume that animal_id and animal_name are stored in pandas as category and number as int64 dtype.
Assuming you have the following DF:
In [124]: df
Out[124]:
animal_id animal_name number
0 623 dog 4
1 123 cat 2
2 623 cat 1
3 111 lion 6
In [125]: df.dtypes
Out[125]:
animal_id int64
animal_name category
number int64
dtype: object
First, save the animal_name column (if you need it in the future):
In [126]: animal_name = df['animal_name']
Convert the animal_name column to a categorical (memory-saving) numeric column:
In [127]: df['animal_name'] = df['animal_name'].cat.codes.astype('category')
In [128]: df
Out[128]:
animal_id animal_name number
0 623 1 4
1 123 0 2
2 623 0 1
3 111 2 6
In [129]: df.dtypes
Out[129]:
animal_id int64
animal_name category
number int64
dtype: object
Now OneHotEncoder should work:
In [130]: enc = OneHotEncoder()
In [131]: enc.fit(df)
Out[131]:
OneHotEncoder(categorical_features='all', dtype=<class 'numpy.float64'>,
handle_unknown='error', n_values='auto', sparse=True)
In [132]: X = enc.fit(df)
In [134]: X.n_values_
Out[134]: array([624, 3, 7])
In [135]: enc.feature_indices_
Out[135]: array([ 0, 624, 627, 634], dtype=int32)
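Note that the output above is from an older scikit-learn. Newer versions of OneHotEncoder accept string categories directly and return a scipy sparse matrix by default, so the cat.codes step may not be needed. A sketch (exact parameter names depend on your sklearn version):

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame([[623, 'dog', 4], [123, 'cat', 2],
                   [623, 'cat', 1], [111, 'lion', 6]],
                  columns=['animal_id', 'animal_name', 'number'])

# strings are handled directly; the result is sparse by default
enc = OneHotEncoder(handle_unknown='ignore')
X = enc.fit_transform(df[['animal_id', 'animal_name']])

print(X.shape)       # (4, 6): 3 unique ids + 3 unique names
print(type(X))       # a scipy.sparse matrix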
FYI, there are other powerful encoding schemes which do not add a large number of columns the way one-hot encoding does (in fact, they do not add any columns at all). Some of them are count encoding and target encoding. For more details, see my answer here and my ipynb here.
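As a rough illustration of what count encoding means (a small self-contained sketch, not the linked notebook): each category is replaced by how often it occurs in the column, so no new columns are added.

import pandas as pd

df = pd.DataFrame({'animal_name': ['dog', 'cat', 'cat', 'lion']})

# count encoding: map each category to its frequency in the column
counts = df['animal_name'].value_counts()
df['animal_name_ce'] = df['animal_name'].map(counts)
print(df)
#   animal_name  animal_name_ce
# 0         dog               1
# 1         cat               2
# 2         cat               2
# 3        lion               1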

pandas MultiIndex resulting index structure on using xs vs loc between 0.15.2 & 0.18.0

The index structure of the result of slicing a subset of data using .xs and .loc on a DataFrame with a MultiIndex seems to have changed between v0.15.2 and v0.18.0.
Please refer to the code snippet and the output obtained in an IPython notebook using different versions of pandas.
import pandas as pd
print 'pandas-version: ', pd.__version__
import numpy as np
l1 = ['A', 'B', 'C', 'D']
l2 = sorted(['foo','bar','baz'])
nrows = len(l1) * len(l2)
s = pd.DataFrame(np.random.random(nrows * 2).reshape(nrows, 2),
                 index=pd.MultiIndex.from_product([l1, l2],
                                                  names=['one', 'two']))
# print s.index
l_all = slice(None)
# get all records matching 'foo' in level=1 using .loc
sub_loc = s.loc[(l_all, 'foo'),:]
print '.loc[(slice(None), "foo")] result:\n', sub_loc,
print '\n.loc result-index:\n', sub_loc.index
# get all records matching 'foo' in level=1 using .xs()
sub_xs = s.xs('foo', level=1)
print '\n.xs(\'foo\', level=1) result:\n', sub_xs,
print '\n .xs result index:\n', sub_xs.index
0.15.2 output
#######################
pandas-version: 0.15.2
.loc[(slice(None), "foo")] result:
0 1
one two
A foo 0.464551 0.372409
B foo 0.782062 0.268917
C foo 0.779423 0.787554
D foo 0.481901 0.232887
.loc result-index:
one two
A foo
B foo
C foo
D foo
.xs('foo', level=1) result:
0 1
one
A 0.464551 0.372409
B 0.782062 0.268917
C 0.779423 0.787554
D 0.481901 0.232887
.xs result index:
Index([u'A', u'B', u'C', u'D'], dtype='object')
0.18.0 output
##########################
pandas-version: 0.18.0
.loc[(slice(None), "foo")] result:
0 1
one two
A foo 0.723213 0.532838
B foo 0.736941 0.401252
C foo 0.217131 0.044254
D foo 0.712824 0.411026
.loc result-index:
MultiIndex(levels=[[u'A', u'B', u'C', u'D'], [u'bar', u'baz', u'foo']],
labels=[[0, 1, 2, 3], [2, 2, 2, 2]],
names=[u'one', u'two'])
.xs('foo', level=1) result:
0 1
one
A 0.723213 0.532838
B 0.736941 0.401252
C 0.217131 0.044254
D 0.712824 0.411026
.xs result index:
Index([u'A', u'B', u'C', u'D'], dtype='object', name=u'one')
Calling sub_loc.index seems to return the same MultiIndex structure as the original DataFrame object (inconsistent with v0.15.2), but sub_xs.index seems to be consistent with the earlier version.
Note: I'm using [Python 2.7.11 |Anaconda 1.8.0 (64-bit)| (default, Feb 16 2016, 09:58:36) [MSC v.1500 64 bit (AMD64)]]
Sorry, forget my other answer, the bug I filed is totally unrelated.
The right answer is: the "index structure" has not changed between the two versions. The only thing that changed is the way the index is represented when you print it.
In both cases you have a MultiIndex, with exactly the same levels and values. You are presumably puzzled by the fact that in 0.18.0 it seems to contain "baz" and "bar". But a MultiIndex can have level values it does not actually use: as in this example, it contained them when it was created, and unused level values are not eliminated when the rows using them are dropped. sub_loc.index in 0.15.2 also has "baz" and "bar" inside its levels, except that the way it is represented when you print it doesn't reveal this.
And by the way, whether a MultiIndex which has been filtered still contains such "obsolete" labels or not is an implementation detail which you typically should not care about. In other words,
MultiIndex(levels=[[u'A', u'B', u'C', u'D'], [u'bar', u'baz', u'foo']],
labels=[[0, 1, 2, 3], [2, 2, 2, 2]],
names=[u'one', u'two'])
and
MultiIndex(levels=[[u'A', u'B', u'C', u'D'], [u'foo']],
labels=[[0, 1, 2, 3], [0, 0, 0, 0]],
names=[u'one', u'two'])
are for practical purposes exactly the same index, in the sense of "having the same values in the same positions", and hence behaving identically when used for assignments between Series, DataFrames...
(As is probably clear by now, it is the labels component of the MultiIndex that determines which level values are actually used, and in which positions.)
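If the leftover level values bother you (e.g. for display, or for comparing indexes), newer pandas (0.20+) has MultiIndex.remove_unused_levels. A sketch:

import numpy as np
import pandas as pd

l1 = ['A', 'B', 'C', 'D']
l2 = ['bar', 'baz', 'foo']
s = pd.DataFrame(np.random.random((len(l1) * len(l2), 2)),
                 index=pd.MultiIndex.from_product([l1, l2], names=['one', 'two']))

sub_loc = s.loc[(slice(None), 'foo'), :]
print(sub_loc.index.levels[1])                          # still ['bar', 'baz', 'foo']
print(sub_loc.index.remove_unused_levels().levels[1])   # just ['foo']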
I think it is indeed a bug, which shows up also in simpler settings:
https://github.com/pydata/pandas/issues/12827
EDIT: well, probably not, since the example I made in the bug behaves the same in 0.14.1.

getting a default value from pandas dataframe when a key is not present

I have a DataFrame with a multi-index where each key is a tuple of two. Currently the order of the values in the key matters: df[(k1,k2)] is not the same as df[(k2,k1)]. Also, sometimes (k1,k2) exists in the dataframe while (k2,k1) does not.
I'm trying to average the values of a certain column for those two entries. Currently, I'm doing this:
if (k1,k2) in df.index.values and not (k2,k1) in df.index.values:
    x = df[(k1,k2)]
if (k2,k1) in df.index.values and not (k1,k2) in df.index.values:
    x = df[(k2,k1)]
if (k2,k1) in df.index.values and (k1,k2) in df.index.values:
    x = (df[(k2,k1)] + df[(k1,k2)])/2
This is quite ugly... I'm looking for something like the get method with a default value that we have on a dictionary. Is there something like this in pandas?
.ix index access and the mean function handle this for you. Fetch the two tuples with df.ix and apply mean to the result: non-existing keys come back as NaN values, and mean ignores NaN values by default:
In [102]: df
Out[102]:
   (26, 22)  (10, 48)  (48, 42)  (48, 10)  (42, 48)
a       311       NaN       724       879        42
In [103]: df.ix[:,[(10, 48), (48, 10)]].mean(axis=1)
Out[103]:
a 879
dtype: float64
In [104]: df.ix[:,[(42, 48), (48, 42)]].mean(axis=1)
Out[104]:
a 383
dtype: float64
In [105]: df.ix[:,[(26, 22), (22, 26)]].mean(axis=1)
Out[105]:
a 311
dtype: float64