OneHotEncoder gives ValueError: Input contains NaN, even though my DataFrame doesn't contain any NaN according to df.isna() - pandas

I am working on the Titanic dataset, and when I try to apply one-hot encoding to a column called 'Embarked', which has three possible values ('S', 'Q' and 'C'), it raises
ValueError: Input contains NaN
I checked the contents of the column in two ways: first with a for-loop over value_counts, and second by writing the entire table to a CSV:
for col in X.columns:
    print(col)
    print(X[col].value_counts(dropna=False))
X.isna().to_csv("xisna.csv")
print("notna================== :", X.notna().shape)
X.dropna(axis=0, how='any', inplace=True)
print("X.shape ", X.shape)
return pd.DataFrame(X)
This yielded:
Embarked
S    518
C    139
Q     55
Name: Embarked, dtype: int64
I checked the contents of the CSV and, reading through the 700-plus entries, I did not find a single True value.
The pipeline, which fails at the ("cat", OneHotEncoder(), ...) step:
cat_attribs = ["Sex", "Embarked"]
special_attribs = {'drop_attribs': ["Name", "Cabin", "Ticket", "PassengerId"], 'k': [3]}
full_pipeline = ColumnTransformer([
    ("fill", fill_pipeline, list(strat_train_set)),
    ("emb_cat", OneHotEncoder(), ['Sex']),
    ("cat", OneHotEncoder(), ['Embarked']),
])
So where exactly is the NaN value that I am missing?

I figured it out: a ColumnTransformer applies each of its transformers to the original input and concatenates the results; it does not pass the output of one transformer along to the next transformer in line.
So any transformations done in fill_pipeline are never seen by the OneHotEncoder, which is still working on the untransformed dataset.
So I had to put the one-hot encoding into fill_pipeline instead of listing it as a separate step in the ColumnTransformer.
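A minimal sketch of that idea: chain the imputation and the encoding inside a single Pipeline branch, so the encoder sees the already-filled values. The exact contents of fill_pipeline aren't shown here, so the SimpleImputer below is only a stand-in for whatever the fill step actually does:
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer

# Steps inside a Pipeline run sequentially, unlike ColumnTransformer
# branches, which all receive the original input in parallel.
cat_pipeline = Pipeline([
    ("fill", SimpleImputer(strategy="most_frequent")),  # stand-in for the real fill step
    ("onehot", OneHotEncoder()),
])

full_pipeline = ColumnTransformer([
    ("cat", cat_pipeline, ["Sex", "Embarked"]),
    # numeric columns would get their own branch here
])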

Related

Pandas (with def and np.where): error with values in a dataframe row conditioned on another dataframe row

I have dataframes A of shape XxY with values and dataframes B of shape ZxY to be filled with statistics calculated from A.
As an example:
import numpy as np
import pandas as pd

A = pd.DataFrame(np.array(range(9)).reshape((3, 3)))
B = pd.DataFrame(np.array(range(6)).reshape((2, 3)))
Now I need to fill row 1 of B with the column-wise quantile(0.5) of A, wherever row 0 of B is > 1 (and np.nan elsewhere). I need to use a function of this kind:
def mydef(df0, df1):
    df1.loc[1] = np.where(df1.loc[0] > 1,
                          df0.quantile(0.5),
                          np.nan)

mydef(A, B)
Now B is:
     0    1    2
0  0.0  1.0  2.0
1  NaN  NaN  5.0
It works perfectly for these mock dataframes and all my real ones apart from one.
For that one this error is raised:
ValueError: cannot set using a list-like indexer with a different length than the value
When I run the same code without calling a function, it doesn't raise any error.
Since I need to use a function, any suggestion?
I found the error. I erroneously had the same label twice in the index. Essentially my dataframe B was something like:
B = pd.DataFrame(np.array(range(9)).reshape((3,3)), index=[0,0,1])
so that calling the def:
def mydef(df0, df1):
    df1.loc[1] = np.where(df1.loc[0] > 1,
                          df0.quantile(0.5),
                          np.nan)
would cause the shapes of the condition and the if-false arguments of np.where to stop matching, I guess: with the duplicated label, df1.loc[0] selects two rows instead of one.
Still not sure why the same code worked outside the def.
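For illustration, here is a minimal reproduction of how the duplicated label changes what .loc returns (this B is a hypothetical stand-in, as above):
import numpy as np
import pandas as pd

B = pd.DataFrame(np.arange(9).reshape((3, 3)), index=[0, 0, 1])

# With the label 0 duplicated, .loc[0] selects a 2x3 DataFrame
# instead of a single row, so downstream shapes stop lining up.
print(B.loc[0].shape)  # (2, 3)
print(B.loc[1].shape)  # (3,)  -- a single row, returned as a Series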

Python CountVectorizer for Pandas DataFrame

I have got a pandas dataframe which looks like the following:
df.head()
                                categorized.Hashtags
0  icietmaintenant supyoga standuppaddleportugal ...
1  instapaysage bretagne labellebretagne bretagne...
2  bretagne lescrepescestlavie quimper bzh labret...
3  bretagne mer paysdiroise magnifique phare plou...
4  bateaux baiededouarnenez voiliers vieuxgreemen...
Now, instead of using the pandas get_dummies() command, I would like to use CountVectorizer to create the same output, because get_dummies takes too much time.
from sklearn.feature_extraction.text import CountVectorizer

df_x = df["categorized.Hashtags"]
vect = CountVectorizer(min_df=0., max_df=1.0)
X = vect.fit_transform(df_x)
count_vect_df = pd.DataFrame(X.todense(), columns=vect.get_feature_names())
When I output the resulting data frame count_vect_df, it contains a lot of columns that are empty or contain only zero values. How can I avoid this?
Cheers,
Andi
From scikit-learn CountVectorizer docs:
Convert a collection of text documents to a matrix of token counts
This implementation produces a sparse representation of the counts
using scipy.sparse.csr_matrix.
CountVectorizer returns a sparse matrix in which most values are zero; each non-zero value is the number of times a specific term appears in a particular document.
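One way to avoid the near-empty columns up front, assuming rare hashtags are what you want to drop, is to raise min_df so that terms appearing in very few documents never become columns at all. A small sketch with made-up documents (note: recent scikit-learn versions use get_feature_names_out; older ones use get_feature_names):
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

docs = ["bretagne mer phare", "bretagne quimper", "supyoga paddle"]

# min_df=2 keeps only terms that occur in at least 2 documents,
# pruning the mostly-zero columns before the DataFrame is built.
vect = CountVectorizer(min_df=2)
X = vect.fit_transform(docs)
count_vect_df = pd.DataFrame(X.toarray(), columns=vect.get_feature_names_out())
print(count_vect_df)  # only 'bretagne' survives min_df=2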

Indexing lists in a Pandas dataframe column based on variable length

I've got a column in a Pandas dataframe comprised of variable-length lists and I'm trying to find an efficient way of extracting elements conditional on list length. Consider this minimal reproducible example:
t = pd.DataFrame({'a': [['1234', 'abc', '444'],
                        ['5678'],
                        ['2468', 'def']]})
Say I want to extract the 2nd element (where present) into a new column, and use NaN otherwise. I was able to do it in a very inefficient way:
_ = []
for index, row in t.iterrows():
    if len(row['a']) > 1:
        _.append(row['a'][1])
    else:
        _.append(np.nan)
t['element_two'] = _
And I made an attempt with np.where(), but I'm not specifying the if-true argument correctly:
np.where(t['a'].str.len() > 1, lambda x: x['a'][1], np.nan)
Corrections and tips to other solutions would be greatly appreciated! I'm coming from R where I take vectorization for granted.
I'm on pandas 0.25.3 and numpy 1.18.1.
Use the str accessor (it also works positionally on lists, returning NaN when the index is out of range):
n = 2
t['second'] = t['a'].str[n-1]
print(t)
                  a second
0  [1234, abc, 444]    abc
1            [5678]    NaN
2       [2468, def]    def
While not incredibly efficient, apply is at least clean:
t['a'].apply(lambda lst: lst[1] if len(lst) > 1 else np.nan)
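If speed matters, a plain list comprehension over the column is usually at least as fast as apply here; a sketch using the same t as above:
t['element_two'] = [lst[1] if len(lst) > 1 else np.nan for lst in t['a']]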

Why do sum and lambda sum differ in transform?

For the dataframe:
df = pd.DataFrame({
    'key1': [1, 1, 1, 2, 3, np.nan],
    'key2': ['one', 'two', 'one', 'three', 'two', 'one'],
    'data1': [1, 2, 3, 3, 4, 5]
})
The following transform using the sum function does not produce an error:
df.groupby(['key1'])['key1'].transform(sum)
However, this transform, also using the sum function, produces an error:
df.groupby(['key1'])['key1'].transform(lambda x : sum(x))
ValueError: Length mismatch: Expected axis has 5 elements, new values have 6 elements
Why?
This is probably a bug, but the reason the two behave differently is easily explained by the fact that pandas internally overrides the builtin sum, min, and max functions: when you pass any of these to pandas, they are replaced by their numpy equivalents.
Now, your grouper has NaNs, and NaN keys are automatically excluded, as the docs mention. With any of the builtin pandas aggregation functions, this case appears to be handled by automatically re-inserting NaNs in the output, as you see with the first statement; the output is the same if you run df.groupby(['key1'])['key1'].transform('sum'). However, when you pass a lambda, as in the second statement, for whatever reason this automatic replacement of the missing outputs with NaN is not done.
A possible workaround is to group on the strings:
df.groupby(df.key1.astype(str))['key1'].transform(lambda x : sum(x))
0    3.0
1    3.0
2    3.0
3    2.0
4    3.0
5    NaN
Name: key1, dtype: float64
This way, the NaNs are not dropped, and you get rid of the length mismatch.
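On newer pandas (1.1+), which postdates this question, another option is to keep the NaN keys as their own group with dropna=False, so the transform output keeps the same length as the input:
# dropna=False keeps NaN as its own group; the builtin sum of [nan] is nan,
# so row 5 still comes out as NaN and the lengths match.
df.groupby('key1', dropna=False)['key1'].transform(lambda x: sum(x))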

Pandas fill cells in a column with NaN values, derive the value from other cells in the row

I have a dataframe:
   a  b    c
0  1  2    3
1  1  1    1
2  3  7  NaN
3  2  3    5
...
I want to fill the NaN values in column "c" in place (updating the values) using a machine learning algorithm.
I don't know how to do it in place. Sample code:
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression

df = pd.DataFrame([range(3), [1, 5, np.nan], [2, 2, np.nan], [4, 5, 9], [2, 5, 7]],
                  columns=['a', 'b', 'c'])
x = []
y = []
for row in df.iterrows():
    index, data = row
    if not pd.isnull(data['c']):
        x.append(data[['a', 'b']].tolist())
        y.append(data['c'])
model = LinearRegression()
model.fit(x, y)
# this line returns a modified copy; it does not update df in place
df[~df.c.notnull()].assign(c=lambda x: model.predict(x[['a', 'b']]))
But this gives me a copy of the dataframe. The only option I have left is a for loop, but I don't want to do that. I think there should be a more pythonic way of doing it with pandas. Can someone please help? Or is there any other way of doing this?
You'll have to do something like:
df.loc[pd.isnull(df['c']), 'c'] = _result of model_
This modifies the dataframe df directly.
This way you first filter the dataframe to keep the slice you want to modify (pd.isnull(df['c'])), then from that slice you select the column you want to modify ('c').
The right-hand side of the assignment expects an array / list / series with the same number of rows as the filtered dataframe (in the example table above, one row).
You may have to adjust depending on what your model returns exactly.
EDIT
You probably need to do something like this:
pred = model.predict(df[['a', 'b']])
df['pred'] = pred
df.loc[pd.isnull(df['c']), 'c'] = df.loc[pd.isnull(df['c']), 'pred']
Note that a significant part of the issue comes from the way you are using scikit-learn in your example: you need to pass the whole dataset to the model when you predict.
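Putting the pieces together, a minimal end-to-end sketch (using the df from the question; the boolean mask replaces both the iterrows loop and the throwaway pred column):
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame([range(3), [1, 5, np.nan], [2, 2, np.nan], [4, 5, 9], [2, 5, 7]],
                  columns=['a', 'b', 'c'])

mask = df['c'].isnull()

# Fit on the complete rows only, then predict and fill just the missing ones.
model = LinearRegression()
model.fit(df.loc[~mask, ['a', 'b']], df.loc[~mask, 'c'])
df.loc[mask, 'c'] = model.predict(df.loc[mask, ['a', 'b']])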
The simplest way is to transpose first, then forward-fill / backward-fill at your convenience:
df.T.ffill().bfill().T