dtype is ignored when using multilevel columns - pandas

When using pandas.read_csv with multi-level columns (read with header=[0, 1]), pandas seems to ignore the dtype= keyword.
Is there a way to make pandas use the passed types?
I am reading large data sets from CSV and therefore try to read the data already in the correct format to save CPU and memory.
I tried passing a dict to dtype= with tuples as keys as well as plain strings. It seems that dtype expects string keys. At least I observed that if I pass the level-0 labels, the types are assigned, but unfortunately that would mean that all columns with the same level-0 label get the same type. In the example below, columns (A, int16) and (A, int32) would get type object, and (B, float32) and (B, int16) would get float32.
import pandas as pd

df = pd.DataFrame({
    ('A', 'int16'): pd.Series([1, 2, 3, 4], dtype='int16'),
    ('A', 'int32'): pd.Series([132, 232, 332, 432], dtype='int32'),
    ('B', 'float32'): pd.Series([1.01, 1.02, 1.03, 1.04], dtype='float32'),
    ('B', 'int16'): pd.Series([21, 22, 23, 24], dtype='int16')})
print(df)
df.to_csv('test_df.csv')
print(df.dtypes)
# full column name tuples with level 0/1 labels don't work
df_new = pd.read_csv(
    'test_df.csv',
    header=list(range(2)),
    dtype={
        ('A', 'int16'): 'int16',
        ('A', 'int32'): 'int32'
    })
print(df_new.dtypes)
# using the level 0 labels for dtype= seems to work
df_new2 = pd.read_csv(
    'test_df.csv',
    header=list(range(2)),
    dtype={
        'A': 'object',
        'B': 'float32'
    })
print(df_new2.dtypes)
I'd expect print(df_new.dtypes) to output the same column types as the first print(df.dtypes), but it doesn't seem to use the dtype= argument at all and infers the types, resulting in much more memory-intensive types.
Am I missing something?
Thank you in advance, Jottbe

This is a bug that is also present in the current version of pandas. I filed a bug report here.
But there is a workaround for the current version as well: it works perfectly if the engine is switched to python:
df_new = pd.read_csv(
    'test_df.csv',
    header=list(range(2)),
    engine='python',
    dtype={
        ('A', 'int16'): 'int16',
        ('A', 'int32'): 'int32'
    })
print(df_new.dtypes)
The output is:
Unnamed: 0_level_0  Unnamed: 0_level_1      int64
A                   int16                   int16
                    int32                   int32
B                   float32                 float64
                    int16                   int64
So the "A-columns" are typed as specified in dtypes.

Related

Creating a Pandas DataFrame from a NumPy masked array?

I am trying to create a Pandas DataFrame from a NumPy masked array, which I understand is a supported operation. This is an example of the source array:
import numpy.ma as ma

a = ma.array([(1, 2.2), (42, 5.5)],
             dtype=[('a', int), ('b', float)],
             mask=[(True, False), (False, True)])
which outputs as:
masked_array(data=[(--, 2.2), (42, --)],
             mask=[( True, False), (False, True)],
             fill_value=(999999, 1.e+20),
             dtype=[('a', '<i8'), ('b', '<f8')])
Attempting to create a DataFrame with pd.DataFrame(a) returns:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-40-a4c5236a3cd4> in <module>
----> 1 pd.DataFrame(a)

/usr/local/anaconda/lib/python3.8/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
    636             # a masked array
    637             else:
--> 638                 data = sanitize_masked_array(data)
    639             mgr = ndarray_to_mgr(
    640                 data,

/usr/local/anaconda/lib/python3.8/site-packages/pandas/core/construction.py in sanitize_masked_array(data)
    452     """
    453     mask = ma.getmaskarray(data)
--> 454     if mask.any():
    455         data, fill_value = maybe_upcast(data, copy=True)
    456         data.soften_mask()  # set hardmask False if it was True

/usr/local/anaconda/lib/python3.8/site-packages/numpy/core/_methods.py in _any(a, axis, dtype, out, keepdims, where)
     54     # Parsing keyword arguments is currently fairly slow, so avoid it for now
     55     if where is True:
---> 56         return umr_any(a, axis, dtype, out, keepdims)
     57     return umr_any(a, axis, dtype, out, keepdims, where=where)
     58

TypeError: cannot perform reduce with flexible type
Is this operation indeed supported? Currently using Pandas 1.3.3 and NumPy 1.20.3.
Update
Is this supported?
According to the Pandas documentation here:
Alternatively, you may pass a numpy.MaskedArray as the data argument to the DataFrame constructor, and its masked entries will be considered missing.
The code above was my way of asking the question "What will I get?" if I passed a NumPy masked array to Pandas, but that was the result I was hoping for. It is the simplest example I could come up with.
I do expect each Series/column in Pandas to be of a single type.
Update 2
Anyone interested in this should probably see this Pandas GitHub issue; it's noted there that Pandas has "deprecated support for MaskedRecords".
If the array has a simple dtype, the dataframe creation works (as documented):
In [320]: a = np.ma.array([(1, 2.2), (42, 5.5)],
     ...:                 mask=[(True, False), (False, True)])
In [321]: a
Out[321]:
masked_array(
  data=[[--, 2.2],
        [42.0, --]],
  mask=[[ True, False],
        [False, True]],
  fill_value=1e+20)
In [322]: import pandas as pd
In [323]: pd.DataFrame(a)
Out[323]:
      0    1
0   NaN  2.2
1  42.0  NaN
This a has shape (2, 2), and the result has 2 rows and 2 columns.
With the compound dtype, the shape is 1d:
In [326]: a = np.ma.array([(1, 2.2), (42, 5.5)],
     ...:                 dtype=[('a', int), ('b', float)],
     ...:                 mask=[(True, False), (False, True)])
In [327]: a.shape
Out[327]: (2,)
The error is the result of a test on the mask. flexible type refers to your compound dtype:
In [330]: a.mask.any()
Traceback (most recent call last):
  File "<ipython-input-330-8dc32ee3f59d>", line 1, in <module>
    a.mask.any()
  File "/usr/local/lib/python3.8/dist-packages/numpy/core/_methods.py", line 57, in _any
    return umr_any(a, axis, dtype, out, keepdims)
TypeError: cannot perform reduce with flexible type
The documented pandas feature clearly does not apply to structured arrays. Without studying the pandas code I can't say exactly what it's trying to do at this point, but it's clear the code was not written with structured arrays in mind.
The non-masked part does work, with the desired column dtypes:
In [332]: pd.DataFrame(a.data)
Out[332]:
    a    b
0   1  2.2
1  42  5.5
Using the default fill:
In [344]: a.filled()
Out[344]:
array([(999999, 2.2e+00), (    42, 1.0e+20)],
      dtype=[('a', '<i8'), ('b', '<f8')])
In [345]: pd.DataFrame(a.filled())
Out[345]:
        a             b
0  999999  2.200000e+00
1      42  1.000000e+20
I'd have to look more at the ma docs/code to see if it's possible to apply a different fill to the two fields. Filling with nan doesn't work for the int field; numpy doesn't have an equivalent of pandas' nullable integer NA. I haven't worked enough with that pandas feature to know whether the resulting dtype is still int or is changed to object.
Anyway, you are pushing the bounds of both np.ma and pandas with this task.
edit
The default fill_value is a tuple, one for each field:
In [350]: a.fill_value
Out[350]: (999999, 1.e+20)
So we can fill the fields differently, and make a frame from that:
In [351]: a.filled((-1, np.nan))
Out[351]: array([(-1, 2.2), (42, nan)], dtype=[('a', '<i8'), ('b', '<f8')])
In [352]: pd.DataFrame(a.filled((-1, np.nan)))
Out[352]:
    a    b
0  -1  2.2
1  42  NaN
Looks like I can make a structured array with a pandas dtype, and its associated fill_value:
In [363]: a = np.ma.array([(1, 2.2), (42, 5.5)],
     ...:                 dtype=[('a', pd.Int64Dtype), ('b', float)],
     ...:                 mask=[(True, False), (False, True)],
     ...:                 fill_value=(pd.NA, np.nan))
In [364]: a
Out[364]:
masked_array(data=[(--, 2.2), (42, --)],
             mask=[( True, False), (False, True)],
             fill_value=(<NA>, nan),
             dtype=[('a', 'O'), ('b', '<f8')])
In [366]: pd.DataFrame(a.filled())
Out[366]:
      a    b
0  <NA>  2.2
1    42  NaN
The question is what would you expect to get? It would be ambiguous for pandas to convert your data.
If you want to get the original data:
>>> pd.DataFrame(a.data)
    a    b
0   1  2.2
1  42  5.5
If you want to consider masked values invalid:
>>> pd.DataFrame(a.filled(np.nan))
BUT, for this you should have all fields of type float in the masked array.
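If you do need the masked entries to come through as missing values per field, one approach (a sketch under the assumption that integer fields should become pandas' nullable Int64) is to build the frame column by column from the fields and their masks:

import numpy as np
import numpy.ma as ma
import pandas as pd

a = ma.array([(1, 2.2), (42, 5.5)],
             dtype=[('a', int), ('b', float)],
             mask=[(True, False), (False, True)])

# One column per field: fill with a placeholder, then re-apply the field's
# mask, so integer fields end up as nullable Int64 and floats keep NaN.
def field_series(arr, name):
    field = arr[name]
    if field.dtype.kind in 'iu':
        s = pd.Series(field.filled(0), dtype='Int64')
    else:
        s = pd.Series(field.filled(np.nan))
    return s.mask(ma.getmaskarray(field))

df = pd.DataFrame({name: field_series(a, name) for name in a.dtype.names})
print(df.dtypes)  # a: Int64, b: float64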

Sklearn ColumnTransformer + Pipeline = TypeError

I am trying to use pipelines and column transformers from sklearn properly, but I always end up with an error. I reproduced it in the following example.
# Data to reproduce the error
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

X = pd.DataFrame([[1, 2,   3, 1  ],
                  [1, '?', 2, 0  ],
                  [4, 5,   6, '?']],
                 columns=['A', 'B', 'C', 'D'])

# SimpleImputer to replace the '?' values with the mode
impute = SimpleImputer(missing_values='?', strategy='most_frequent')

# Simple one-hot encoder
ohe = OneHotEncoder(handle_unknown='ignore', sparse=False)

col_transfo = ColumnTransformer(transformers=[
    ('missing_vals', impute, ['B', 'D']),
    ('one_hot', ohe, ['A', 'B'])],
    remainder='passthrough'
)
Then calling the transformer as follows:
col_transfo.fit_transform(X)
Returns the following error:
TypeError: Encoders require their input to be uniformly strings or numbers. Got ['int', 'str']
ColumnTransformer applies its transformers in parallel, not in sequence. So the OneHotEncoder sees the un-imputed column B and balks at the mixed types.
In your case, it's probably fine to just impute on all the columns, and then encode A, B:
from sklearn.pipeline import Pipeline

encoder = ColumnTransformer(transformers=[
    ('one_hot', ohe, ['A', 'B'])],
    remainder='passthrough'
)

preproc = Pipeline(steps=[
    ('impute', impute),
    ('encode', encoder),
    # optionally, just throw the model here...
])
If it's important that future missing values in A or C cause errors, then similarly wrap impute into its own ColumnTransformer, as sketched below.
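A minimal sketch of that variant, assuming the impute, ohe and X objects defined in the question (the names imputer_ct, encoder_ct and preproc_strict are mine):

from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

# Impute only B and D; A and C pass through untouched, so unexpected
# missing values there will still surface as errors downstream.
imputer_ct = ColumnTransformer(transformers=[
    ('missing_vals', impute, ['B', 'D'])],
    remainder='passthrough'
)

# After the first ColumnTransformer the data is a plain array in the order
# B, D, A, C, so the encoder now selects columns by position, not by name.
encoder_ct = ColumnTransformer(transformers=[
    ('one_hot', ohe, [2, 0])],
    remainder='passthrough'
)

preproc_strict = Pipeline(steps=[
    ('impute', imputer_ct),
    ('encode', encoder_ct),
])
preproc_strict.fit_transform(X)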
See also Apply multiple preprocessing steps to a column in sklearn pipeline
It's giving you an error because OneHotEncoder accepts just one type of data; in your case, it's a mixture of numbers and objects. To overcome this issue you can split the pipeline between the imputer and the OneHotEncoder and use the astype method on the imputer's output. Something like:
ohe.fit_transform(impute.fit_transform(X[['A', 'B']]).astype(float))
The error is not coming from the ColumnTransformer but from the OneHotEncoder object:
col_transfo = ColumnTransformer(transformers=[
    ('missing_vals', impute, ['B', 'D'])],
    remainder='passthrough'
)

col_transfo.fit_transform(X)
array([[2, 1, 1, 3],
       [2, 0, 1, 2],
       [5, 0, 4, 6]], dtype=object)

ohe.fit_transform(X)
TypeError: argument must be a string or number
OneHotEncoder is throwing this error because it gets a mix of value types (int + string) to encode in the same column; you need to cast the columns to a single type, e.g. string, in order to apply it.
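A minimal sketch of that cast, assuming the X and ohe objects defined in the question:

# Cast both columns to strings so each column is uniformly typed;
# the encoder then no longer complains about mixed int/str values.
ohe.fit_transform(X[['A', 'B']].astype(str))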

Parquet issue with infering schema on int column containing Null

I am reading an S3 key and converting it into Parquet using pandas. Before converting to Parquet, I am type-casting it so that pyarrow can infer the schema correctly.
The snippet looks something like this:
import io

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# s3 is an already-configured boto3 S3 client
df = pd.read_csv(io.BytesIO(s3.get_object(Bucket=s3_bucket, Key=s3_key)['Body'].read()),
                 sep='\t', error_bad_lines=False, warn_bad_lines=True)
df['col_name'] = df['col_name'].astype('int')

table = pa.Table.from_pandas(df)
buf = pa.BufferOutputStream()
pq.write_table(table, buf, compression='snappy')
So far so good.
The problem is that when the int column has a null value, pandas of course reads it as object. Is there any way to typecast it to 'int'? One way could be to fillna(0) (or 99999) first and then do the typecasting. That works, but Null and 0 (or 99999) have different meanings in that column.
So, any idea how to typecast it to int? Or anything else I can do to modify the code above to handle this situation?
From the pandas documentation:
Because NaN is a float, a column of integers with even one missing value is cast to floating-point dtype
Since version 0.24 there are nullable integer types which are capable of holding missing values. Typecast to dtype="Int64".
You can find more information at
https://pandas.pydata.org/pandas-docs/stable/user_guide/integer_na.html
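A minimal sketch of that cast (assuming a reasonably recent pandas and pyarrow, which can write the nullable Int64 extension type as an int64 Parquet column with nulls):

import pandas as pd
import pyarrow as pa

df = pd.DataFrame({'col_name': [1, 2, None, 4]})

# Plain astype('int') would fail on the missing value;
# the nullable Int64 dtype keeps it as <NA>.
df['col_name'] = df['col_name'].astype('Int64')

table = pa.Table.from_pandas(df)
print(table.schema)  # col_name: int64, with the null preserved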
EDIT: The workaround proposed in Arrow is:
import pandas as pd
import pyarrow as pa


def from_pandas(df):
    """Cast Int64 to object before 'serializing'"""
    for col in df:
        if isinstance(df[col].dtype, pd.Int64Dtype):
            df[col] = df[col].astype('object')
    return pa.Table.from_pandas(df)


def to_pandas(tbl):
    """After 'deserializing', recover the correct int type"""
    df = tbl.to_pandas(integer_object_nulls=True)
    for col in df:
        if (pa.types.is_integer(tbl.schema.field_by_name(col).type) and
                pd.api.types.is_object_dtype(df[col].dtype)):
            df[col] = df[col].astype('Int64')
    return df
df = pd.Series([0, 1, None, 2, 822215679726100500], dtype='Int64', name='x').to_frame()
# df = pd.Series([0, 1, 3, 2, 822215679726100500], dtype='Int64', name='x').to_frame()
# df = pd.Series([0, 1, 3, 2, 15], dtype='Int64', name='x').to_frame()
# df = pd.Series([0, 1, 3, 2, 15], dtype='int16', name='x').to_frame()
df2 = to_pandas(from_pandas(df))
df2.dtypes
All credit to Thomas Buhrmann.

Selecting columns from numpy recarray

I have an object of type numpy.core.records.recarray. I want to use it effectively as a pandas dataframe. More precisely, I want to use a subset of its columns in order to obtain a new recarray, the same way you would do pandas_dataframe[[selected_columns]].
What's the easiest way to achieve this?
Without using pandas you can select a subset of the fields of a structured array (recarray). For example:
In [338]: dt = np.dtype('i,f,i,f')
In [340]: A = np.ones((3,), dtype=dt)
In [341]: A[:] = (1, 2, 3, 4)
In [342]: A
Out[342]:
array([(1, 2.0, 3, 4.0), (1, 2.0, 3, 4.0), (1, 2.0, 3, 4.0)],
      dtype=[('f0', '<i4'), ('f1', '<f4'), ('f2', '<i4'), ('f3', '<f4')])
a subset of the fields.
In [343]: B = A[['f1','f3']].copy()
In [344]: B
Out[344]:
array([(2.0, 4.0), (2.0, 4.0), (2.0, 4.0)],
      dtype=[('f1', '<f4'), ('f3', '<f4')])
that can be modified independently of A:
In [346]: B['f3'] = [.1, .2, .3]
In [347]: B
Out[347]:
array([(2.0, 0.10000000149011612), (2.0, 0.20000000298023224),
       (2.0, 0.30000001192092896)],
      dtype=[('f1', '<f4'), ('f3', '<f4')])
In [348]: A
Out[348]:
array([(1, 2.0, 3, 4.0), (1, 2.0, 3, 4.0), (1, 2.0, 3, 4.0)],
      dtype=[('f0', '<i4'), ('f1', '<f4'), ('f2', '<i4'), ('f3', '<f4')])
The structured subset of fields is not highly developed. A[['f0','f1']] is enough for viewing, but it will warn or give an error if you try to modify that subset. That's why I used copy with B.
There's a set of functions that facilitate adding and removing fields from recarrays. I'll have to look up the access pattern, but mostly they construct a new dtype and an empty array, and then copy the fields over by name.
import numpy.lib.recfunctions as rf
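For illustration, a small sketch of those helpers (assuming the array A from above; the particular helpers shown, drop_fields and append_fields, are my choice, not necessarily the ones referred to above):

import numpy as np
import numpy.lib.recfunctions as rf

A = np.ones((3,), dtype='i,f,i,f')
A[:] = (1, 2, 3, 4)

# Drop fields by name; returns a new, packed structured array.
B = rf.drop_fields(A, ['f0', 'f2'])

# Append a new field computed from existing ones.
C = rf.append_fields(A, 'f4', A['f1'] + A['f3'], usemask=False)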
update
With newer numpy versions, multi-field indexing has changed:
In [17]: B = A[['f1','f3']]
In [18]: B
Out[18]:
array([(2., 4.), (2., 4.), (2., 4.)],
      dtype={'names':['f1','f3'], 'formats':['<f4','<f4'], 'offsets':[4,12], 'itemsize':16})
This B is a true view, referencing the same data buffer as A. The offsets let it skip the missing fields. Those fields can be removed with repack_fields, as just documented.
But when putting this into a dataframe, it doesn't look like we need to do that.
In [19]: df = pd.DataFrame(A)
In [21]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 4 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   f0      3 non-null      int32
 1   f1      3 non-null      float32
 2   f2      3 non-null      int32
 3   f3      3 non-null      float32
dtypes: float32(2), int32(2)
memory usage: 176.0 bytes
In [22]: df = pd.DataFrame(B)
In [24]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 2 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   f1      3 non-null      float32
 1   f3      3 non-null      float32
dtypes: float32(2)
memory usage: 152.0 bytes
The frame created from B is smaller.
Sometimes when making a dataframe from an array, the array itself is used as the frame's memory. Changing values in the source array will change the values in the frame. But with structured arrays, pandas makes a copy of the data, with a different memory layout.
Columns of matching dtype are grouped into a common NumericBlock:
In [42]: pd.DataFrame(A)._data
Out[42]:
BlockManager
Items: Index(['f0', 'f1', 'f2', 'f3'], dtype='object')
Axis 1: RangeIndex(start=0, stop=3, step=1)
NumericBlock: slice(1, 5, 2), 2 x 3, dtype: float32
NumericBlock: slice(0, 4, 2), 2 x 3, dtype: int32
In [43]: pd.DataFrame(B)._data
Out[43]:
BlockManager
Items: Index(['f1', 'f3'], dtype='object')
Axis 1: RangeIndex(start=0, stop=3, step=1)
NumericBlock: slice(0, 2, 1), 2 x 3, dtype: float32
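A quick check of the copy behaviour described above (a sketch, assuming the A from the earlier In/Out session):

import numpy as np
import pandas as pd

A = np.ones((3,), dtype='i,f,i,f')
A[:] = (1, 2, 3, 4)
df = pd.DataFrame(A)

# Mutating the structured array does not touch the frame, because pandas
# copied the data into its own block layout.
A['f1'] = 99.0
print(df['f1'].tolist())  # still [2.0, 2.0, 2.0]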
In addition to @hpaulj's answer: you'll want to repack the copy, otherwise the copied subset will have the same memory footprint as the original.
import numpy as np
# note that you have to import this submodule explicitly
import numpy.lib.recfunctions

# B has a subset of "columns" but uses the same amount of memory as A
B = A[['f1', 'f3']].copy()

# C has a smaller memory footprint
C = np.lib.recfunctions.repack_fields(B)

How to generate pandas DataFrame column of Categorical from string column?

I can convert a pandas string column to Categorical, but when I try to insert it as a new DataFrame column it seems to get converted right back to Series of str:
train['LocationNFactor'] = pd.Categorical.from_array(train['LocationNormalized'])
>>> type(pd.Categorical.from_array(train['LocationNormalized']))
<class 'pandas.core.categorical.Categorical'>
# however it got converted back to...
>>> type(train['LocationNFactor'][2])
<type 'str'>
>>> train['LocationNFactor'][2]
'Hampshire'
Guessing this is because Categorical doesn't map to any numpy dtype; so do I have to convert it to some int type, and thus lose the factor labels<->levels association?
What's the most elegant workaround to store the levels<->labels association and retain the ability to convert back? (just store as a dict like here, and manually convert when needed?)
I think Categorical is still not a first-class datatype for DataFrame, unlike R.
(Using pandas 0.10.1, numpy 1.6.2, python 2.7.3 - the latest macports versions of everything).
The only workaround for pandas pre-0.15 I found is as follows:
The column must be converted to a Categorical for the classifier, but numpy will immediately coerce the levels back to int, losing the factor information. So store the factor in a global variable outside the dataframe:
train_LocationNFactor = pd.Categorical.from_array(train['LocationNormalized']) # default order: alphabetical
train['LocationNFactor'] = train_LocationNFactor.labels # insert in dataframe
[UPDATE: pandas 0.15+ added decent support for Categorical]
The labels<->levels mapping is stored in the index object.
To convert an integer array to string array: index[integer_array]
To convert a string array to integer array: index.get_indexer(string_array)
Here is an example:
In [56]:
c = pd.Categorical.from_array(['a', 'b', 'c', 'd', 'e'])
idx = c.levels
In [57]:
idx[[1,2,1,2,3]]
Out[57]:
Index([b, c, b, c, d], dtype=object)
In [58]:
idx.get_indexer(["a","c","d","e","a"])
Out[58]:
array([0, 2, 3, 4, 0])
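As noted in the update above, in pandas 0.15+ 'category' became a first-class dtype, so the modern equivalent is simply (a sketch with a made-up column, since the original train data isn't shown):

import pandas as pd

train = pd.DataFrame({'LocationNormalized': ['Hampshire', 'London', 'Hampshire']})

# The labels<->codes mapping travels with the column itself.
train['LocationNFactor'] = train['LocationNormalized'].astype('category')

print(train['LocationNFactor'].cat.codes)       # integer codes
print(train['LocationNFactor'].cat.categories)  # string labels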