How to select NaN values in pandas in a specific range

I have a dataframe like this:
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [5, 6, np.nan, np.nan, np.nan, 4, np.nan, np.nan, np.nan, np.nan, 7, 8, 8, np.nan, 5, np.nan]})
df:
col1
0 5.0
1 6.0
2 NaN
3 NaN
4 NaN
5 4.0
6 NaN
7 NaN
8 NaN
9 NaN
10 7.0
11 8.0
12 8.0
13 NaN
14 5.0
15 NaN
These NaN values should be replaced in the following way. The first selection looks like this:
2 NaN
3 NaN
4 NaN
5 4.0
6 NaN
7 NaN
8 NaN
9 NaN
Then these NaN values should be replaced with the only value in that selection, 4.
The second selection is:
13 NaN
14 5.0
15 NaN
and these NaN values should be replaced with 5.
With isnull() you can select the NaN values in a DataFrame, but how can you filter/select these specific ranges in pandas?

Solution for when missing values surround one non-missing value: create unique groups and replace within groups by forward and back filling:
#test missing values
s = df['col1'].isna()
#create unique groups of consecutive values
v = s.ne(s.shift()).cumsum()
#keep only length-1 groups (a lone value surrounded by NaN) plus the missing rows
mask = v.map(v.value_counts()).eq(1) | s
#groups for replacement
g = mask.ne(mask.shift()).cumsum()
df['col2'] = df.groupby(g)['col1'].apply(lambda x: x.ffill().bfill())
print(df)
col1 col2
0 5.0 5.0
1 6.0 6.0
2 NaN 4.0
3 NaN 4.0
4 NaN 4.0
5 4.0 4.0
6 NaN 4.0
7 NaN 4.0
8 NaN 4.0
9 NaN 4.0
10 7.0 7.0
11 8.0 8.0
12 8.0 8.0
13 NaN 5.0
14 5.0 5.0
15 NaN 5.0
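To make the steps easier to follow, here is a quick sketch that prints the intermediate Series side by side (the names match the answer above):
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [5, 6, np.nan, np.nan, np.nan, 4, np.nan, np.nan,
                            np.nan, np.nan, 7, 8, 8, np.nan, 5, np.nan]})
s = df['col1'].isna()                     # True where col1 is missing
v = s.ne(s.shift()).cumsum()              # run ids: 1,1,2,2,2,3,4,4,4,4,5,5,5,6,7,8
mask = v.map(v.value_counts()).eq(1) | s  # length-1 runs or missing rows
g = mask.ne(mask.shift()).cumsum()        # replacement groups: rows 2-9 and rows 13-15
print(pd.concat([df['col1'], s, v, mask, g], axis=1,
                keys=['col1', 's', 'v', 'mask', 'g']))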

Related

Prevent pandas interpolate from extrapolating

I am trying to interpolate some data containing NaN's. I would like to fill 1-3 consecutive NaN's, but I cannot figure out how to do so with pd.interpolate().
import numpy as np
import pandas as pd

data_chunk = np.array([np.nan, np.nan, np.nan, 4, 5, np.nan, np.nan, np.nan, np.nan, 10, np.nan, np.nan, np.nan, 14])
data_chunk = pd.DataFrame(data_chunk)[0]
print(data_chunk)
print(data_chunk.interpolate(method='linear', limit_direction='both', limit=3, limit_area='inside'))
Original data:
0 NaN
1 NaN
2 NaN
3 4.0
4 5.0
5 NaN
6 NaN
7 NaN
8 NaN
9 10.0
10 NaN
11 NaN
12 NaN
13 14.0
Attempt at interpolating:
0 NaN
1 NaN
2 NaN
3 4.0
4 5.0
5 6.0
6 7.0
7 8.0
8 9.0
9 10.0
10 11.0
11 12.0
12 13.0
13 14.0
Expected result:
0 NaN
1 NaN
2 NaN
3 4.0
4 5.0
5 NaN
6 NaN
7 NaN
8 NaN
9 10.0
10 11.0
11 12.0
12 13.0
13 14.0
Any help would be appreciated :)
Create a boolean mask to see which NA-groups have fewer than 4 consecutive NA's.
mask = (data_chunk.notnull() != data_chunk.shift().notnull()).cumsum().reset_index().groupby(0).transform('count') < 4
Select the interpolated values where the mask is True and otherwise keep the original values (interpolated holds the result of the interpolate call above).
interpolated = data_chunk.interpolate(method='linear', limit_direction='both', limit=3, limit_area='inside')
pd.concat([interpolated[mask.values[:, 0] == True], data_chunk[mask.values[:, 0] == False]]).sort_index()
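Put together as a runnable sketch (the value_counts-based mask below is a shorter stand-in for the reset_index/groupby/transform chain above; treat the equivalence as an assumption to verify on your own data):
import numpy as np
import pandas as pd

data_chunk = pd.Series([np.nan, np.nan, np.nan, 4, 5, np.nan, np.nan, np.nan,
                        np.nan, 10, np.nan, np.nan, np.nan, 14])

interpolated = data_chunk.interpolate(method='linear', limit_direction='both',
                                      limit=3, limit_area='inside')

# label runs of consecutive equal null/not-null status, then keep the
# interpolated values only where the run is shorter than 4
runs = (data_chunk.notnull() != data_chunk.shift().notnull()).cumsum()
mask = runs.map(runs.value_counts()) < 4
print(interpolated.where(mask, data_chunk))  # rows 5-8 stay NaN, rows 10-12 become 11.0-13.0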

How to keep True and None values using pandas?

I have one DataFrame:
import pandas as pd
data = {'a': [1, 2, 3, None, 4, None, 2, 4, 5, None], 'b': [6, 6, 6, 'NaN', 4, 'NaN', 11, 11, 11, 'NaN']}
df = pd.DataFrame(data)
condition = (df['a']>2) | (df['a'] == None)
print(df[condition])
a b
0 1.0 6
1 2.0 6
2 3.0 6
3 NaN NaN
4 4.0 4
5 NaN NaN
6 2.0 11
7 4.0 11
8 5.0 11
9 NaN NaN
Here, I have to keep the rows where the condition is True, and I also want to keep the rows where the value is None.
Expected output is :
a b
2 3.0 6
3 NaN NaN
4 4.0 4
5 NaN NaN
7 4.0 11
8 5.0 11
9 NaN NaN
Thanks in Advance
You can use another | (or) condition. (Note: see @ALlolz's comment; you shouldn't compare a Series with np.nan.)
condition = (df['a'] > 2) | (df['a'].isna())
df[condition]
a b
2 3.0 6
3 NaN NaN
4 4.0 4
5 NaN NaN
7 4.0 11
8 5.0 11
9 NaN NaN
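As background on why the original attempt fails: NaN never compares equal to anything (not even itself), so an elementwise equality test cannot find missing values. A minimal sketch:
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan])
print(np.nan == np.nan)        # False - NaN is not equal even to itself
print((s == np.nan).tolist())  # [False, False] - equality never matches the NaN
print(s.isna().tolist())       # [False, True] - the reliable missing-value test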

How to perform a rolling window on a pandas DataFrame, whereby each row contains NaN values that should not be replaced?

I have the following dataframe:
import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 1, 2, 4, np.nan, np.nan, np.nan, 1],
                   [0, 1, 2, np.nan, np.nan, np.nan, np.nan, 1],
                   [0, 2, 2, np.nan, 2, np.nan, 1, 1]])
With output:
0 1 2 3 4 5 6 7
0 0 1 2 4 NaN NaN NaN 1
1 0 1 2 NaN NaN NaN NaN 1
2 0 2 2 NaN 2 NaN 1 1
with dtypes:
df.dtypes
0 int64
1 int64
2 int64
3 float64
4 float64
5 float64
6 float64
7 int64
Then the following rolling summation is applied:
df.rolling(window=7, min_periods=1, axis='columns').sum()
And the output is as follows:
0 1 2 3 4 5 6 7
0 0.0 1.0 3.0 4.0 4.0 4.0 4.0 4.0
1 0.0 1.0 3.0 NaN NaN NaN NaN 4.0
2 0.0 2.0 4.0 NaN 2.0 2.0 3.0 5.0
I notice that the rolling window stops and starts again whenever the dtype of the next column is different.
However, I have a dataframe in which all columns are of the same object type:
df = df.astype('object')
Applying the same rolling sum to it gives:
0 1 2 3 4 5 6 7
0 0.0 1.0 3.0 7.0 7.0 7.0 7.0 8.0
1 0.0 1.0 3.0 3.0 3.0 3.0 3.0 4.0
2 0.0 2.0 4.0 4.0 6.0 6.0 7.0 8.0
My desired output however, stops and starts again after a nan value appears. This would look like:
0 1 2 3 4 5 6 7
0 0.0 1.0 3.0 7.0 NaN NaN NaN 8.0
1 0.0 1.0 3.0 NaN NaN NaN NaN 4.0
2 0.0 2.0 4.0 NaN 6.0 NaN 7.0 8.0
I figured there must be a way to leave the NaN values out of the rolling sum without having them filled in by it.
Anything would help!
A workaround is:
Find where the NaN values are located:
nan = df.isnull()
Apply the rolling window:
df = df.rolling(window=7, min_periods=1, axis='columns').sum()
Only keep the positions that were not NaN originally:
df[~nan]
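Put together as a runnable sketch. Note that axis='columns' for rolling is deprecated in recent pandas, so the transpose spelling is used below; transposing also puts each row into a single float dtype, which sidesteps the dtype-block behaviour noted in the question:
import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 1, 2, 4, np.nan, np.nan, np.nan, 1],
                   [0, 1, 2, np.nan, np.nan, np.nan, np.nan, 1],
                   [0, 2, 2, np.nan, 2, np.nan, 1, 1]])

nan = df.isnull()                                        # remember the NaN positions
rolled = df.T.rolling(window=7, min_periods=1).sum().T  # row-wise rolling sum
print(rolled[~nan])                                     # put the NaNs back where they were
With min_periods=1 the NaNs simply do not contribute to the sums, and the final mask restores them, which reproduces the desired output above.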

Is there a way to merge pandas dataframes on row and column index?

I want to merge two pandas data frames that share the same index as well as some columns. pd.merge creates duplicate columns, but I would like to merge on both axes at the same time.
I tried pd.merge and pd.concat but did not get the right result.
My try: df3 = pd.merge(df1, df2, left_index=True, right_index=True, how='left')
df1
Var#1 Var#2 Var#3 Var#4 Var#5 Var#6 Var#7
ID
323 7 6 8 7.0 2.0 2.0 10.0
324 2 1 5 3.0 4.0 2.0 1.0
675 9 8 1 NaN NaN NaN NaN
676 3 7 2 NaN NaN NaN NaN
df2
Var#6 Var#7 Var#8 Var#9
ID
675 1 9 2 8
676 3 2 0 7
ideally I would get:
df3
Var#1 Var#2 Var#3 Var#4 Var#5 Var#6 Var#7 Var#8 Var#9
ID
323 7 6 8 7.0 2.0 2.0 10.0 NaN NaN
324 2 1 5 3.0 4.0 2.0 1.0 NaN NaN
675 9 8 1 NaN NaN 1 9 2 8
676 3 7 2 NaN NaN 3 2 0 7
IIUC, use df.combine_first():
df3 = df1.combine_first(df2)
print(df3)
Var#1 Var#2 Var#3 Var#4 Var#5 Var#6 Var#7 Var#8 Var#9
ID
323 7 6 8 7.0 2.0 2.0 10.0 NaN NaN
324 2 1 5 3.0 4.0 2.0 1.0 NaN NaN
675 9 8 1 NaN NaN 1.0 9.0 2.0 8.0
676 3 7 2 NaN NaN 3.0 2.0 0.0 7.0
You can concat and group the data:
pd.concat([df1, df2], axis=1).groupby(level=0, axis=1).first()
Var#1 Var#2 Var#3 Var#4 Var#5 Var#6 Var#7 Var#8 Var#9
ID
323 7.0 6.0 8.0 7.0 2.0 2.0 10.0 NaN NaN
324 2.0 1.0 5.0 3.0 4.0 2.0 1.0 NaN NaN
675 9.0 8.0 1.0 NaN NaN 1.0 9.0 2.0 8.0
676 3.0 7.0 2.0 NaN NaN 3.0 2.0 0.0 7.0
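For reproducibility, here is a sketch that rebuilds the two frames from the tables above so both answers can be tried:
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Var#1': [7, 2, 9, 3], 'Var#2': [6, 1, 8, 7], 'Var#3': [8, 5, 1, 2],
                    'Var#4': [7.0, 3.0, np.nan, np.nan], 'Var#5': [2.0, 4.0, np.nan, np.nan],
                    'Var#6': [2.0, 2.0, np.nan, np.nan], 'Var#7': [10.0, 1.0, np.nan, np.nan]},
                   index=pd.Index([323, 324, 675, 676], name='ID'))
df2 = pd.DataFrame({'Var#6': [1, 3], 'Var#7': [9, 2], 'Var#8': [2, 0], 'Var#9': [8, 7]},
                   index=pd.Index([675, 676], name='ID'))
print(df1.combine_first(df2))  # matches the first answer's output
Note that combine_first keeps df1's integer columns intact, while the concat/groupby route upcasts everything to float, as the two outputs above show.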

(pandas) Why does .bfill().ffill() act differently than ffill().bfill() on groups?

I think I'm missing something basic conceptually, but I'm not able to find the answer in the docs.
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame({'a': [1, 1, 2, 2, 3, 3], 'b': [5, np.nan, 6, np.nan, np.nan, np.nan]})
>>> df
a b
0 1 5.0
1 1 NaN
2 2 6.0
3 2 NaN
4 3 NaN
5 3 NaN
Using ffill() and then bfill():
>>> df.groupby('a')['b'].ffill().bfill()
0 5.0
1 5.0
2 6.0
3 6.0
4 NaN
5 NaN
Using bfill() and then ffill():
>>> df.groupby('a')['b'].bfill().ffill()
0 5.0
1 5.0
2 6.0
3 6.0
4 6.0
5 6.0
Doesn't the second way break the groupings? Will the first way always make sure that the values are filled in only with other values in that group?
I think you need:
print(df.groupby('a')['b'].apply(lambda x: x.ffill().bfill()))
0 5.0
1 5.0
2 6.0
3 6.0
4 NaN
5 NaN
Name: b, dtype: float64
print(df.groupby('a')['b'].apply(lambda x: x.bfill().ffill()))
0 5.0
1 5.0
2 6.0
3 6.0
4 NaN
5 NaN
Name: b, dtype: float64
because in your sample only the first ffill or bfill is DataFrameGroupBy.ffill or DataFrameGroupBy.bfill; the second operates on the output Series. So it breaks the groups, because a Series has no groups.
print(df.groupby('a')['b'].ffill())
0 5.0
1 5.0
2 6.0
3 6.0
4 NaN
5 NaN
Name: b, dtype: float64
print(df.groupby('a')['b'].bfill())
0 5.0
1 NaN
2 6.0
3 NaN
4 NaN
5 NaN
Name: b, dtype: float64
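If you prefer to avoid apply, one option is to group again between the two fills so that the second fill also runs per group (a sketch; the regrouping by df['a'] is my addition, not part of the answer above):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2, 2, 3, 3], 'b': [5, np.nan, 6, np.nan, np.nan, np.nan]})
# each fill happens inside its own groupby, so group 3 stays all-NaN
filled = df.groupby('a')['b'].ffill().groupby(df['a']).bfill()
print(filled)  # 5.0, 5.0, 6.0, 6.0, NaN, NaN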