Create a new pandas DataFrame Column with a groupby - pandas

I have a dataframe and I'd like to group by a column value and then do a calculation to create a new column. Below is the setup data:
import pandas as pd
df = pd.DataFrame({
    'Red': [1,2,3,4,5,6,7,8,9,10],
    'Groups': ['A','B','A','A','B','C','B','C','B','C'],
    'Blue': [10,20,30,40,50,60,70,80,90,100]
})
df.groupby('Groups').apply(print)
What I want to do is create a 'Total' column in the original dataframe. If a row is the first record of its group, 'Total' gets a zero; otherwise 'Total' gets the 'Blue' value at the current index minus the 'Red' value at the previous index within the group.
I tried to do this in a function below but it does not work.
def funct(group):
    count = 0
    lst = []
    for info in group:
        if count == 0:
            lst.append(0)
            count += 1
        else:
            num = group.iloc[count]['Blue'] - group.iloc[count-1]['Red']
            lst.append(num)
            count += 1
    group['Total'] = lst
    return group
df = df.join(df.groupby('Groups').apply(funct))
The code works for the first group but then errors out (iterating over a DataFrame yields its column names, not its rows, so lst ends up with one entry per column; group A happens to have as many rows as there are columns, but group B does not).
The desired outcome is:
df_final = pd.DataFrame({
    'Red': [1,2,3,4,5,6,7,8,9,10],
    'Groups': ['A','B','A','A','B','C','B','C','B','C'],
    'Blue': [10,20,30,40,50,60,70,80,90,100],
    'Total': [0,0,29,37,48,0,65,74,83,92]
})
df_final
df_final.groupby('Groups').apply(print)
Thank you for the help!

For each group, calculate the difference between Blue and shifted Red (Red at previous index):
df['Total'] = (df.groupby('Groups')
                 .apply(lambda g: g.Blue - g.Red.shift().fillna(g.Blue))
                 .reset_index(level=0, drop=True))
df
   Red Groups  Blue  Total
0    1      A    10    0.0
1    2      B    20    0.0
2    3      A    30   29.0
3    4      A    40   37.0
4    5      B    50   48.0
5    6      C    60    0.0
6    7      B    70   65.0
7    8      C    80   74.0
8    9      B    90   83.0
9   10      C   100   92.0
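The apply version yields floats because shift introduces NaNs before the fillna. If the integer Total from df_final is preferred, the column can be cast back afterwards, since no NaNs remain; a one-line sketch:
df['Total'] = df['Total'].astype(int)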
Or, as @anky has commented, you can avoid apply by shifting the Red column first:
df['Total'] = (df.Blue - df.Red.groupby(df.Groups).shift()).fillna(0, downcast='infer')
df
   Red Groups  Blue  Total
0    1      A    10      0
1    2      B    20      0
2    3      A    30     29
3    4      A    40     37
4    5      B    50     48
5    6      C    60      0
6    7      B    70     65
7    8      C    80     74
8    9      B    90     83
9   10      C   100     92
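As a quick sanity check, either solution should reproduce the Total column from the desired df_final; a minimal sketch:
# verify the computed column against the values given in the question
assert df['Total'].tolist() == [0, 0, 29, 37, 48, 0, 65, 74, 83, 92]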

Related

Keep the second entry in a dataframe

Below is an example dataset and the desired output.
ID  number
 1      50
 1      49
 1      48
 2      47
 2      40
 2      31
 3      60
 3      51
 3      42
Example output:
 1      49
 2      40
 3      51
I want to keep the second entry for every group in my dataset. I have already grouped them by ID, but now I want, for each ID, to keep the second entry and afterwards remove all remaining duplicates of that ID.
Use GroupBy.nth with 1 for the second row, because Python counts from 0:
df1 = df.groupby('ID', as_index=False).nth(1)
print (df1)
   ID  number
1   1      49
4   2      40
7   3      51
Another solution uses GroupBy.cumcount as a counter and filters by boolean indexing:
df1 = df[df.groupby('ID').cumcount() == 1]
Details:
print (df.groupby('ID').cumcount())
0    0
1    1
2    2
3    0
4    1
5    2
6    0
7    1
8    2
dtype: int64
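The same mask generalizes to any k-th row per group, not just the second; a small sketch (k counted from 0):
k = 1  # 0-based position, so 1 selects the second entry of each group
df_kth = df[df.groupby('ID').cumcount() == k]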
EDIT: Solution for the second maximal value - first sort, then take the second row (values have to be unique per group):
df = (df.sort_values(['ID','number'], ascending=[True, False])
        .groupby('ID', as_index=False)
        .nth(1))
print (df)
   ID  number
1   1      49
4   2      40
7   3      51
If you want the second maximal value when duplicates exist, add DataFrame.drop_duplicates first:
print (df)
   ID  number
0   1      50   <- first max
1   1      50   <- first max
2   1      48   <- second max
3   2      47
4   2      40
5   2      31
6   3      60
7   3      51
8   3      42
df3 = (df.drop_duplicates(['ID','number'])
         .sort_values(['ID','number'], ascending=[True, False])
         .groupby('ID', as_index=False)
         .nth(1))
print (df3)
   ID  number
2   1      48
4   2      40
7   3      51
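An alternative that copes with duplicated maxima without dropping them first is GroupBy.rank with method='dense' (not from the original answer, but standard pandas): ties share a rank, so rank 2 is the second-largest distinct value.
# dense rank: 50, 50, 48 rank as 1, 1, 2 within ID 1
mask = df.groupby('ID')['number'].rank(method='dense', ascending=False) == 2
print(df[mask])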
If that is the case, we can use duplicated + drop_duplicates:
df = df[df.duplicated('ID')].drop_duplicates('ID')
   ID  number
1   1      49
4   2      40
7   3      51
A flexible solution with cumcount:
df[df.groupby('ID').cumcount() == 1].copy()
   ID  number
1   1      49
4   2      40
7   3      51
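One caveat shared by all of these approaches: a group with a single row has no second entry and silently disappears from the result; a quick demonstration on hypothetical data:
demo = pd.DataFrame({'ID': [1, 1, 2], 'number': [50, 49, 47]})
print(demo[demo.groupby('ID').cumcount() == 1])  # ID 2 is dropped entirely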

Winsorize within groups of dataframe

I have a dataframe like this:
df = pd.DataFrame([[1, 2],
                   [1, 4],
                   [1, 5],
                   [2, 65],
                   [2, 34],
                   [2, 23],
                   [2, 45]], columns=['label', 'score'])
Is there an efficient way to create a column score_winsor that winsorises the score column within the groups at the 1% level?
I tried this with no success (min and max here are the Python built-ins, so calling them with a Series and a scalar raises an ambiguous truth value error):
df['score_winsor'] = df.groupby('label')['score'].transform(lambda x: max(x.quantile(.01), min(x, x.quantile(.99))))
You could use scipy's implementation of winsorize:
from scipy.stats.mstats import winsorize

df["score_winsor"] = df.groupby('label')['score'].transform(lambda x: winsorize(x, limits=[0.01, 0.01]))
Output
>>> df
   label  score  score_winsor
0      1      2             2
1      1      4             4
2      1      5             5
3      2     65            65
4      2     34            34
5      2     23            23
6      2     45            45
This works:
import numpy as np

df['score_winsor'] = df.groupby('label')['score'].transform(lambda x: np.maximum(x.quantile(.01), np.minimum(x, x.quantile(.99))))
Output
print(df.to_string())
   label  score  score_winsor
0      1      2          2.04
1      1      4          4.00
2      1      5          4.98
3      2     65         64.40
4      2     34         34.00
5      2     23         23.33
6      2     45         45.00
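Note the difference between the two answers: scipy's winsorize masks whole data points, so at the 1% level on groups of three or four rows it changes nothing, while the quantile approach interpolates. pandas' own Series.clip expresses the same quantile capping more directly; a sketch under the same setup:
df['score_winsor'] = df.groupby('label')['score'].transform(
    lambda x: x.clip(lower=x.quantile(.01), upper=x.quantile(.99)))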

Update dataframe column with values from another dataframe by index

I have two DataFrames.
One of them contains: item id, name, quantity and price.
Another: item id, name and quantity.
The problem is to update the names and quantities in the first DataFrame, taking the information from the second DataFrame by item id. Also, the two DataFrames do not share all item ids, so I need to take into account only those rows from the second DataFrame whose ids are present in the first one.
DataFrame 1
In [1]: df1
Out[1]:
   id name  quantity  price
0  10    X        10     15
1  11    Y        30     20
2  12    Z        20     15
3  13    X        15     10
4  14    X        12     15
DataFrame 2
In [2]: df2
Out[2]:
   id name  quantity
0  10    A         3
1  12    B         3
2  13    C         6
I've tried to use apply to iterate through rows and modify column values by condition, like this:
def modify(row):
    row['name'] = df2[df2['id'] == row['id']]['name'].get_values()[0]
    row['quantity'] = df2[df2['id'] == row['id']]['quantity'].get_values()[0]

df1.apply(modify, axis=1)
But it has no effect: apply does not write the mutated rows back (and modify returns nothing), so DataFrame 1 is still the same.
I am expecting something like this first:
In [1]: df1
Out[1]:
   id name  quantity  price
0  10    A         3     15
1  11    Y        30     20
2  12    B         3     15
3  13    C         6     10
4  14    X        12     15
After that I want to drop the rows, which were not modified to get:
In [1]: df1
Out[1]:
   id name  quantity  price
0  10    A         3     15
1  12    B         3     15
2  13    C         6     10
Using update
df1 = df1.set_index('id')
df1.update(df2.set_index('id'))
df1 = df1.reset_index()
Out[740]:
   id name  quantity  price
0  10    A       3.0     15
1  11    Y      30.0     20
2  12    B       3.0     15
3  13    C       6.0     10
4  14    X      12.0     15
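To also get the second expected frame (only the rows that were updated), one option is to filter df1 by the ids present in df2; a sketch:
# keep only rows whose id exists in df2, then rebuild the index
df1 = df1[df1['id'].isin(df2['id'])].reset_index(drop=True)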
new = df1.merge(df2, on='id')
new.drop(['name_x', 'quantity_x'], inplace=True, axis=1)
new.columns = ['id', 'price', 'name', 'quantity']
Output
   id  price name  quantity
0  10     15    A         3
1  12     15    B         3
2  13     10    C         6
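A slightly tidier variant of the same merge, not from the original answer: selecting only the columns to keep from df1 before merging avoids the suffixed duplicates and the manual rename.
new = df1[['id', 'price']].merge(df2, on='id')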

Pandas Dataframe aggregate different groups of columns

I have a dataframe
df = pd.DataFrame({
    'a': [5, 5, 3, 5, 2, 9, 1, 3],
    'b': [6, 6, 9, 6, 2, 5, 3, 8],
    'c': [3, 6, 5, 8, 9, 4, 5, 8],
    'd': [2, 6, 3, 2, 6, 8, 2, 6],
    'group': ['g1'] * 5 + ['g2'] * 3
})
# left col in the printout is the index
>>
   a  b  c  d group
0  5  6  3  2    g1
1  5  6  6  6    g1
2  3  9  5  3    g1
3  5  6  8  2    g1
4  2  2  9  6    g1
5  9  5  4  8    g2
6  1  3  5  2    g2
7  3  8  8  6    g2
I want to groupby "group" column and then do a few different operations:
• For column "a" I want to get the min and max value
• For the rest I want to sum them
min_max_col = ['a']
sum_cols = ['b','c','d']
Is there a simple way to do this?
The result should look something like this:
>>
    min  max  sum_b  sum_c  sum_d
g1    2    5     29     31     19
g2    1    9     16     17     16
Use agg
import numpy as np

df = df.groupby('group').agg({'a': [np.min, np.max], 'b': np.sum, 'c': np.sum, 'd': np.sum})
df.columns = ['min', 'max', 'sum_b', 'sum_c', 'sum_d']
df = df.reset_index()
  group  min  max  sum_b  sum_c  sum_d
0    g1    2    5     29     31     19
1    g2    1    9     16     17     16
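On pandas 0.25 or newer, named aggregation builds the renamed columns in a single step; a sketch under that version assumption, applied to the original df (before the reassignment above):
out = df.groupby('group').agg(
    min=('a', 'min'),
    max=('a', 'max'),
    sum_b=('b', 'sum'),
    sum_c=('c', 'sum'),
    sum_d=('d', 'sum'),
).reset_index()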
This is different because it leverages pandas' internally referenced sum, min, and max functions by passing their names as strings. It is my opinion that we should leverage those as much as possible.
f = dict(
    a=['min', 'max'],
    b='sum',
    c='sum',
    d='sum'
)
df.groupby('group').agg(f)
        a      b   c   d
      min max sum sum sum
group
g1      2   5  29  31  19
g2      1   9  16  17  16
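If flat column names are needed afterwards, the MultiIndex produced by this form can be joined; a sketch:
res = df.groupby('group').agg(f)
res.columns = ['_'.join(col) for col in res.columns]  # a_min, a_max, b_sum, c_sum, d_sum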

Python Pandas: How to take categorical average of a column?

For a given dataframe as follows:
1  a   10
2  a   20
3  a   30
4  b   10
5  b  100
where column 1 is the index, column 2 is some categorical value, and column 3 is a number, I want the categorical mean over column 2, which should look something like this:
a  20
b  55
The value for a is calculated as
(10+20+30)/3 = 20
The value for b is calculated as
(10+100)/2 = 55
I think you can use groupby with mean and reset_index:
print(df)
   a  b    c
0  1  a   10
1  2  a   20
2  3  a   30
3  4  b   10
4  5  b  100
df1 = df.groupby('b')['c'].mean().reset_index()
print(df1)
   b   c
0  a  20
1  b  55
print(df1.c.max())
55
print(df1.c.min())
20
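If the per-category mean is needed on every row rather than as a collapsed table, transform broadcasts it back; a sketch:
# broadcast each category's mean of c back onto its original rows
df['c_mean'] = df.groupby('b')['c'].transform('mean')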