Is it possible to do pandas groupby transform rolling mean?

Is it possible for pandas to do something like:
df.groupby("A").transform(pd.rolling_mean,10)

You can do this without transform or apply:
df = pd.DataFrame({'grp':['A']*5+['B']*5,'data':[1,2,3,4,5,2,4,6,8,10]})
df.groupby('grp')['data'].rolling(2, min_periods=1).mean()
Output:
grp
A  0    1.0
   1    1.5
   2    2.5
   3    3.5
   4    4.5
B  5    2.0
   6    3.0
   7    5.0
   8    7.0
   9    9.0
Name: data, dtype: float64
Update per comment:
df = pd.DataFrame({'grp':['A']*5+['B']*5,'data':[1,2,3,4,5,2,4,6,8,10]},
index=[*'ABCDEFGHIJ'])
df['avg_2'] = df.groupby('grp')['data'].rolling(2, min_periods=1).mean()\
.reset_index(level=0, drop=True)
Output:
grp data avg_2
A A 1 1.0
B A 2 1.5
C A 3 2.5
D A 4 3.5
E A 5 4.5
F B 2 2.0
G B 4 3.0
H B 6 5.0
I B 8 7.0
J B 10 9.0
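For completeness, the transform spelling from the question works on modern pandas too: the long-deprecated pd.rolling_mean is gone, so the rolling call moves into a lambda. A minimal sketch, using the same window of 2 as the examples above:
import pandas as pd

df = pd.DataFrame({'grp': ['A']*5 + ['B']*5,
                   'data': [1, 2, 3, 4, 5, 2, 4, 6, 8, 10]})
# transform returns a result aligned to the original index,
# so no reset_index is needed before assigning back
df['avg_2'] = df.groupby('grp')['data'].transform(
    lambda s: s.rolling(2, min_periods=1).mean())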

Related

Dynamic sum of one column based on NA values of another column in Pandas

I've got an ordered dataframe, df. It's grouped by 'ID' and ordered by 'order':
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'ID': ['A','A','A','A','A','A','A','A','A','A','A','A','A','A','B','B','B','B','B','B','B','B'],
     'order': [1,3,4,6,7,9,11,12,13,14,15,16,19,25,8,10,15,17,20,25,29,31],
     'col1': [1,2,np.nan,1,2,3,4,5,np.nan,np.nan,6,7,8,9,np.nan,np.nan,np.nan,10,11,12,np.nan,13],
     'col2': [1,5,6,np.nan,1,2,3,np.nan,2,3,np.nan,np.nan,3,1,5,np.nan,np.nan,np.nan,2,3,np.nan,np.nan],
    }
)
In each ID group, I need to sum col1 over each run of rows where col2 is NA; the sum also includes the col1 value of the first following row where col2 is present:
I prefer a vectorised solution to make it fast, but it could be difficult.
I need to use this in a groupby (as col1_dynamic_sum should be computed per ID).
What I have done so far is define a function that counts the number of previous consecutive NAs for each row:
def count_prev_consec_na(input_col):
    """
    Takes a Series (column) and returns, for each row, the number of
    consecutive missing values in the immediately preceding rows.
    """
    try:
        a1 = input_col.isna() + 0   # 1 where missing
        a2 = ~input_col.isna() + 0  # 1 where not missing
        b1 = a1.shift().fillna(0)   # 1 where the previous row is missing
        d = a1.cumsum()             # running count of missing values
        e = b1 * a2                 # 1 only where an NA run has just ended
        f = d * e
        g = f.replace(0, np.nan)
        h = g.ffill()
        h = h.fillna(0)
        i = h.shift()
        result = h - i              # length of the NA run that just ended
        result = result.fillna(0)
        return result
    except Exception as e:
        print(e)  # Exception objects have no .message attribute in Python 3
        return None
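To illustrate what the helper computes, here is a toy example of my own (not from the post): after a run of two NAs, the first non-NA row gets the run length.
s = pd.Series([1.0, np.nan, np.nan, 5.0, 2.0])
print(count_prev_consec_na(s).tolist())  # [0.0, 0.0, 0.0, 2.0, 0.0]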
I think one solution is to use this to get a dynamic number of rows that need to be rolled back for the sum:
df['roll_back_count'] = df.groupby(['ID'], as_index = False).col2.transform(count_prev_consec_na)
ID order col1 col2 roll_back_count
A 1 1.0 1.0 0.0
A 3 2.0 5.0 0.0
A 4 NaN 6.0 0.0
A 6 1.0 NaN 0.0
A 7 2.0 1.0 1.0 ## I want to sum col1 of order 6 and 7 and remove order 6 row
A 9 3.0 2.0 0.0
A 11 4.0 3.0 0.0
A 12 5.0 NaN 0.0
A 13 NaN 2.0 1.0 ## I want to sum col1 of order 12 and 13 and remove order 12 row
A 14 NaN 3.0 0.0
A 15 6.0 NaN 0.0
A 16 7.0 NaN 0.0
A 19 8.0 3.0 2.0 ## I want to sum col1 of order 15,16,19 and remove order 15 and 16 rows
A 25 9.0 1.0 0.0
B 8 NaN 5.0 0.0
B 10 NaN NaN 0.0
B 15 NaN NaN 0.0
B 17 10.0 NaN 0.0
B 20 11.0 2.0 3.0 ## I want to sum col1 of order 10,15,17,20 and remove order 10,15,17 rows
B 25 12.0 3.0 0.0
B 29 NaN NaN 0.0
B 31 13.0 NaN 0.0
this is my desired output:
desired_output:
ID order col1_dynamic_sum col2
A 1 1.0 1
A 3 2.0 5
A 4 NaN 6
A 7 3.0 1
A 9 3.0 2
A 11 4.0 3
A 13 5.0 2
A 14 NaN 3
A 19 21.0 3
A 25 9.0 1
B 8 NaN 5
B 20 21.0 2
B 25 12.0 3
note: the sums should ignore NAs
Again, I prefer a vectorised solution, but it might not be possible due to the rolling effect.
Gah, I think I found a solution that doesn't involve rolling at all!
I created a new grouping ID based on the NA values of col2: each row gets the index of the next row where col2 is present (backfilled over the NA rows). I can then use this grouping ID to aggregate!
def create_na_group(rollback_col):
    a = ~rollback_col.isna() + 0  # 1 where col2 is present, 0 where NA
    b = a.replace(0, np.nan)
    c = rollback_col.index
    d = c * b                     # index value on present rows, NaN on NA rows
    d = d.bfill()                 # NA rows inherit the index of the next present row
    return d

df['na_group'] = df.groupby(['ID'], as_index=False).col2.transform(create_na_group)
df = df.loc[~df.na_group.isna()]
desired_output = df.groupby(['ID', 'na_group'], as_index=False).agg(
    order=('order', 'last'),
    col1_dyn_sum=('col1', 'sum'),
    col2=('col2', 'sum'),
)
I just have to find a way to make sure NaNs don't become 0, like in rows 2, 7 and 10.
ID na_group order col1_dyn_sum col2
0 A 0.0 1 1.0 1.0
1 A 1.0 3 2.0 5.0
2 A 2.0 4 0.0 6.0
3 A 4.0 7 3.0 1.0
4 A 5.0 9 3.0 2.0
5 A 6.0 11 4.0 3.0
6 A 8.0 13 5.0 2.0
7 A 9.0 14 0.0 3.0
8 A 12.0 19 21.0 3.0
9 A 13.0 25 9.0 1.0
10 B 14.0 8 0.0 5.0
11 B 18.0 20 21.0 2.0
12 B 19.0 25 12.0 3.0
I'll just create two separate sum columns with lambda x: x.sum(skipna=False) and lambda x: x.sum(skipna=True); then, if the skipna=True sum is 0 and the skipna=False sum is NA, I'll leave the final sum as NA, otherwise I use the skipna=True sum as the final desired output.
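A minimal sketch of that combination step, reusing the na_group frame from above (the sum_skipna/sum_strict names are mine, and multiple lambdas per column need pandas >= 1.0):
agg = df.groupby(['ID', 'na_group'], as_index=False).agg(
    order=('order', 'last'),
    sum_skipna=('col1', lambda x: x.sum(skipna=True)),
    sum_strict=('col1', lambda x: x.sum(skipna=False)),
    col2=('col2', 'sum'),
)
# keep NaN where the group contained no col1 values at all,
# otherwise use the NA-ignoring sum
agg['col1_dynamic_sum'] = agg['sum_skipna'].where(
    ~(agg['sum_strict'].isna() & agg['sum_skipna'].eq(0)))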

Pandas how to group rows by a dictionary of {row : group}

I have a dataframe with n rows:
1 2 3
3 4 1
5 3 2
9 8 2
7 2 6
0 0 0
4 4 4
8 4 1
...
and a dictionary whose keys are row indices and whose values are group labels:
d = {0 : 0 , 1: 0, 2 : 0, 3 : 1, 4 : 1, 5: 2, 6: 2}
I want to group by the keys and then apply mean on the groups.
So I will get:
3 3 2 #This is the mean of rows 0,1,2 from the original df, as d[0]=d[1]=d[2]=0
8 5 4
2 2 2
8 4 1
What is the best way to do so?
Simply use the dictionary in the groupby; it will replace the index values with the dictionary values, matching on the keys:
df.groupby(d).mean()
output:
a b c
0.0 3.0 3.0 2.0
1.0 8.0 5.0 4.0
2.0 2.0 2.0 2.0
If you also want to keep the rows whose index is missing from the dictionary, use dropna=False in the groupby. They will be listed in the NaN group:
df.groupby(d, dropna=False).mean()
output:
a b c
0.0 3.0 3.0 2.0
1.0 8.0 5.0 4.0
2.0 2.0 2.0 2.0
NaN 8.0 4.0 1.0
And for a range index instead of the dictionary keys, pass as_index=False:
df.groupby(d, dropna=False, as_index=False).mean()
output:
a b c
0 3.0 3.0 2.0
1 8.0 5.0 4.0
2 2.0 2.0 2.0
3 8.0 4.0 1.0
used input:
a b c
0 1 2 3
1 3 4 1
2 5 3 2
3 9 8 2
4 7 2 6
5 0 0 0
6 4 4 4
7 8 4 1
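For reference, a snippet that rebuilds this input (the a/b/c column names are as displayed above):
import pandas as pd

df = pd.DataFrame([[1, 2, 3], [3, 4, 1], [5, 3, 2], [9, 8, 2],
                   [7, 2, 6], [0, 0, 0], [4, 4, 4], [8, 4, 1]],
                  columns=['a', 'b', 'c'])
d = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
print(df.groupby(d).mean())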

Multiply dataframe column by dictionary when match

I have a dataframe
Brand Value
0 A 2
1 B 5
2 C 6
3 D 1
4 E 4
5 F 6
and a dictionary
dic = {'C': 10}
I want to multiply value by the dictionary value, when there is a match on brand and the key.
So the output is
Brand Value
0 A 2
1 B 5
2 C 60
3 D 1
4 E 4
5 F 6
Use:
df['Value'] = df['Value'].mul(df['Brand'].map(dic)).fillna(df['Value'])
# print(df)
Brand Value
0 A 2.0
1 B 5.0
2 C 60.0
3 D 1.0
4 E 4.0
5 F 6.0
You can do a map with fillna:
df['Value'] *= df['Brand'].map(dic).fillna(1)
Output:
Brand Value
0 A 2.0
1 B 5.0
2 C 60.0
3 D 1.0
4 E 4.0
5 F 6.0
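Both versions leave Value as float, because map yields NaN for brands that are missing from the dictionary. If the integer dtype matters, a sketch (assuming the products are all whole numbers, so the cast is lossless):
df['Value'] = df['Value'].mul(df['Brand'].map(dic).fillna(1)).astype(int)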

Pandas: using SimpleImputer transforms dataframe into a series?

I've got a dataframe with some NaNs. I'd like to fill them with the column mean values. It all works, but after applying the code below the dataframe seems to have been changed to a series: all values suddenly have lots of places after the decimal point, and the column names of the original dataframe have been lost and replaced with 0, 1, 2. I know I can recreate/reset all of this, but is it possible to use SimpleImputer without changing the underlying structure/type of the data?
from sklearn.impute import SimpleImputer
import numpy as np

impute = SimpleImputer(missing_values=np.nan, strategy='mean')
impute.fit(dfn)
dfn_mean = impute.transform(dfn)
I think you can use a pandas-only solution with DataFrame.fillna and mean; non-numeric columns are simply not filled (on recent pandas, pass numeric_only=True to mean):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': list('abcdef'),
    'B': [4, 5, 4, 5, 5, 4],
    'C': [7, 8, 9, 4, np.nan, 3],
    'D': [1, 3, 5, 7, 1, 0],
    'E': [5, 3, 6, 9, np.nan, 4],
    'F': list('aaabbb')
})
df = df.fillna(df.mean(numeric_only=True))
print (df)
A B C D E F
0 a 4 7.0 1 5.0 a
1 b 5 8.0 3 3.0 a
2 c 4 9.0 5 6.0 a
3 d 5 4.0 7 9.0 b
4 e 5 6.2 1 5.4 b
5 f 4 3.0 0 4.0 b
Your solution can be changed to process only the float columns, selected with DataFrame.select_dtypes:
from sklearn.impute import SimpleImputer
impute = SimpleImputer(missing_values=np.nan,strategy='mean')
c = df.select_dtypes(np.floating).columns
df[c] = impute.fit_transform(df[c])
print (df)
A B C D E F
0 a 4 7.0 1 5.0 a
1 b 5 8.0 3 3.0 a
2 c 4 9.0 5 6.0 a
3 d 5 4.0 7 9.0 b
4 e 5 6.2 1 5.4 b
5 f 4 3.0 0 4.0 b
Or only the numeric columns, but then the integer columns are converted to floats:
from sklearn.impute import SimpleImputer
impute = SimpleImputer(missing_values=np.nan,strategy='mean')
c = df.select_dtypes(np.number).columns
df[c] = impute.fit_transform(df[c])
print (df)
A B C D E F
0 a 4.0 7.0 1.0 5.0 a
1 b 5.0 8.0 3.0 3.0 a
2 c 4.0 9.0 5.0 6.0 a
3 d 5.0 4.0 7.0 9.0 b
4 e 5.0 6.2 1.0 5.4 b
5 f 4.0 3.0 0.0 4.0 b
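If you would rather run SimpleImputer over the whole numeric frame anyway, you can also wrap the returned NumPy array back into a DataFrame yourself; a minimal sketch, reusing dfn from the question:
dfn_mean = pd.DataFrame(impute.fit_transform(dfn),
                        columns=dfn.columns, index=dfn.index)
# scikit-learn >= 1.2 can do this for you via
# impute.set_output(transform='pandas')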

mode returns Exception: Must produce aggregated value

for this dataframe
values ii
0 3.0 4
1 0.0 1
2 3.0 8
3 2.0 5
4 2.0 1
5 3.0 5
6 2.0 4
7 1.0 8
8 0.0 5
9 1.0 1
This line returns "Must produce aggregated value":
bii2=df.groupby(['ii'])['values'].agg(pd.Series.mode)
While this line works
bii3=df.groupby('ii')['values'].agg(lambda x: pd.Series.mode(x)[0])
Could you explain why that is?
The problem is that mode sometimes returns 2 or more values; check the solution with GroupBy.apply:
bii2=df.groupby(['ii'])['values'].apply(pd.Series.mode)
print (bii2)
ii
1  0    0.0
   1    1.0
   2    2.0
4  0    2.0
   1    3.0
5  0    0.0
   1    2.0
   2    3.0
8  0    1.0
   1    3.0
Name: values, dtype: float64
And pandas agg needs a scalar output, so it raises the error. If you select the first value, it works nicely:
bii3=df.groupby('ii')['values'].agg(lambda x: pd.Series.mode(x).iat[0])
print (bii3)
ii
1 0.0
4 2.0
5 0.0
8 1.0
Name: values, dtype: float64
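If you want one row per group but still keep all of the modes, one option (my own sketch) is to aggregate them into a list; agg accepts this because a list counts as a single value:
bii4 = df.groupby('ii')['values'].agg(lambda x: x.mode().tolist())
print (bii4)
# ii
# 1    [0.0, 1.0, 2.0]
# 4         [2.0, 3.0]
# 5    [0.0, 2.0, 3.0]
# 8         [1.0, 3.0]
# Name: values, dtype: object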