pandas dataframe group by create a new column

I have a dataframe with the below format:
name date
Anne 2018/07/04
Anne 2018/07/06
Bob 2015/10/01
Bob 2015/10/10
Bob 2015/11/11
Anne 2018/07/05
... ...
I would like to add a column with the relative number of days elapsed since that person's earliest date.
For each row:
relative_day = (person's date) - (minimum of person's dates)
The desired output is:
name date relative_day
Anne 2018/07/04 0
Anne 2018/07/06 2
Bob 2015/10/01 0
Bob 2015/10/10 9
Bob 2015/11/11 41
Anne 2018/07/05 1
I tried grouping by name first and then writing a for loop over each group to add the column, but it raises the warning:
A value is trying to be set on a copy of a slice from a DataFrame.
Here is the code I have tried so far:
df['relative_day'] = None
person_groups = df.groupby('name')
for person_name, person_dates in person_groups:
    person_dates['relative_day'] = person_dates['date'].min()
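(For context, the warning appears because person_dates is a copy of a slice of df, so the assignment never reaches the original frame. A minimal loop-based sketch that writes the result back through df.loc, assuming df holds the name and date columns shown above, could look like this, although the answers below avoid the loop entirely.)
import pandas as pd

# Write each group's result back into the original frame via .loc,
# which avoids the "copy of a slice" warning.
df['date'] = pd.to_datetime(df['date'])
for person_name, person_dates in df.groupby('name'):
    df.loc[person_dates.index, 'relative_day'] = (
        person_dates['date'] - person_dates['date'].min()
    ).dt.days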

Set the name as the index, group on the name, then subtract the group minimum to get your relative dates (the result is a Timedelta column; chain .dt.days if you want plain integers).
result = df.astype({"date": np.datetime64}).set_index("name")
result.assign(relative_day=result['date'] - result.groupby("name")['date'].transform("min"))
date relative_day
name
Anne 2018-07-04 0 days
Anne 2018-07-06 2 days
Bob 2015-10-01 0 days
Bob 2015-10-10 9 days
Bob 2015-11-11 41 days
Anne 2018-07-05 1 days

Let us try
df.date=pd.to_datetime(df.date)
df['new'] = (df.date - df.groupby('name').date.transform('min')).dt.days
df
name date new
0 Anne 2018-07-04 0
1 Anne 2018-07-06 2
2 Bob 2015-10-01 0
3 Bob 2015-10-10 9
4 Bob 2015-11-11 41
5 Anne 2018-07-05 1

@sammywemmy has a good solution. I want to show another possible way.
import pandas as pd
# read dataset
df = pd.read_csv('data.csv')
# change column data type
df['date'] = pd.to_datetime(df['date'], format='%Y/%m/%d')
# group by name
df_group = df.groupby('name')
# get minimum date value
df_group_min = df_group['date'].min()
# create minimum date column by name
df['min'] = df.apply(lambda r: df_group_min[r['name']], axis=1)
# calculate relative day
df['relative_day'] = (df['date'] - df['min']).dt.days
# remove minimum column
df.drop('min', axis=1, inplace=True)
print(df)
Output
name date relative_day
0 Anne 2018-07-04 0
1 Anne 2018-07-06 2
2 Bob 2015-10-01 0
3 Bob 2015-10-10 9
4 Bob 2015-11-11 41
5 Anne 2018-07-05 1
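As a side note, the row-wise apply that builds the min column above can also be written as a vectorized map, which is usually faster on large frames; a small sketch using the same names as the snippet above:
# Equivalent to the apply-based lookup: map each name to its group minimum.
df['min'] = df['name'].map(df_group_min)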

Related

Pandas: Drop duplicates that appear within a time interval pandas

We have a dataframe containing 'ID' and 'DAY' columns, which show when a specific customer made a complaint. We need to drop duplicates from the 'ID' column, but only if the duplicates happened at most 30 days apart. Please see the example below:
Current Dataset:
ID DAY
0 1 22.03.2020
1 1 18.04.2020
2 2 10.05.2020
3 2 13.01.2020
4 3 30.03.2020
5 3 31.03.2020
6 3 24.02.2021
Goal:
ID DAY
0 1 22.03.2020
1 2 10.05.2020
2 2 13.01.2020
3 3 30.03.2020
4 3 24.02.2021
Any suggestions? I have tried groupby and then creating a loop to calculate the difference between each combination, but because the dataframe has millions of rows this would take forever...
You can compute the difference between successive dates per group and use it to form a mask to remove days that are less than 30 days apart:
df['DAY'] = pd.to_datetime(df['DAY'], dayfirst=True)
mask = (df
        .sort_values(by=['ID', 'DAY'])
        .groupby('ID')['DAY']
        .diff().lt('30d')
        .sort_index()
        )
df[~mask]
NB: a potential drawback of this approach is that if the customer makes a new complaint within the 30 days, that complaint resets the threshold for the next one (a loop-based sketch that instead measures from the last kept complaint is shown after the resample output below).
output:
ID DAY
0 1 2020-03-22
2 2 2020-05-10
3 2 2020-01-13
4 3 2020-03-30
6 3 2021-02-24
Thus another approach might be to resample the data per group into 30-day bins and keep the first complaint in each bin:
(df
 .groupby('ID')
 .resample('30d', on='DAY').first()
 .dropna()
 .convert_dtypes()
 .reset_index(drop=True)
)
output:
ID DAY
0 1 2020-03-22
1 2 2020-01-13
2 2 2020-05-10
3 3 2020-03-30
4 3 2021-02-24
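If each kept complaint should instead open a fresh 30-day window measured from the last kept row (the behaviour the note above says the diff mask does not give), a small per-group loop is one option; this is a sketch, not part of the original answer:
import pandas as pd

df['DAY'] = pd.to_datetime(df['DAY'], dayfirst=True)

keep_idx = []
for _, group in df.sort_values('DAY').groupby('ID'):
    last_kept = None
    for idx, day in group['DAY'].items():
        # Keep a complaint only if it is at least 30 days after the last kept one.
        if last_kept is None or (day - last_kept) >= pd.Timedelta(days=30):
            keep_idx.append(idx)
            last_kept = day

out = df.loc[sorted(keep_idx)]
print(out)
On this sample the result is the same set of rows as the diff-based mask; the two only differ when complaints arrive in a dense sequence.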
You can try grouping by the ID column and diffing the DAY column within each group:
df['DAY'] = pd.to_datetime(df['DAY'], dayfirst=True)
from datetime import timedelta
m = timedelta(days=30)
out = df.groupby('ID').apply(lambda group: group[~group['DAY'].diff().abs().le(m)]).reset_index(drop=True)
print(out)
ID DAY
0 1 2020-03-22
1 2 2020-05-10
2 2 2020-01-13
3 3 2020-03-30
4 3 2021-02-24
To convert back to the original date format, you can use dt.strftime:
out['DAY'] = out['DAY'].dt.strftime('%d.%m.%Y')
print(out)
ID DAY
0 1 22.03.2020
1 2 10.05.2020
2 2 13.01.2020
3 3 30.03.2020
4 3 24.02.2021

Compare two data frames for different values in a column

I have two dataframes. Please tell me how I can compare them by operator name and, if the name matches, add the count and time values to the first dataframe.
In [2]: df1
Out[2]:
Name count time
0 Bob 123 4:12:10
1 Alice 99 1:01:12
2 Sergei 78 0:18:01
85 rows x 3 columns

In [3]: df2
Out[3]:
Name count time
0 Rick 9 0:13:00
1 Jone 7 0:24:21
2 Bob 10 0:15:13
105 rows x 3 columns
I want to get:
In [5]: df1
Out[5]:
Name count time
0 Bob 133 4:27:23
1 Alice 99 1:01:12
2 Sergei 78 0:18:01
85 rows x 3 columns
Use set_index and add them together, then update back. Adding the two frames aligned on Name gives NaN for names that appear in only one frame, and update skips NaN, so only matching names are changed.
df1 = df1.set_index('Name')
df1.update(df1 + df2.set_index('Name'))
df1 = df1.reset_index()
Out[759]:
Name count time
0 Bob 133.0 04:27:23
1 Alice 99.0 01:01:12
2 Sergei 78.0 00:18:01
Note: I assume the time columns in both df1 and df2 are already timedeltas. If they are strings, you need to convert them before running the commands above, as follows:
df1.time = pd.to_timedelta(df1.time)
df2.time = pd.to_timedelta(df2.time)
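For completeness, a self-contained sketch of the same approach, built only from the sample rows shown above:
import pandas as pd

df1 = pd.DataFrame({'Name': ['Bob', 'Alice', 'Sergei'],
                    'count': [123, 99, 78],
                    'time': ['4:12:10', '1:01:12', '0:18:01']})
df2 = pd.DataFrame({'Name': ['Rick', 'Jone', 'Bob'],
                    'count': [9, 7, 10],
                    'time': ['0:13:00', '0:24:21', '0:15:13']})

# Convert the string times to timedeltas so they can be added.
df1['time'] = pd.to_timedelta(df1['time'])
df2['time'] = pd.to_timedelta(df2['time'])

df1 = df1.set_index('Name')
# Addition aligns on Name; names missing from either frame become NaN/NaT,
# which update() ignores, so only Bob's row changes here.
df1.update(df1 + df2.set_index('Name'))
df1 = df1.reset_index()
print(df1)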

Selecting all the previous 6 months data records from occurrence of a particular value in a column in pandas

I want to select all the previous 6 months records for a customer whenever a particular transaction is done by the customer.
Data looks like:
Cust_ID Transaction_Date Amount Description
1 08/01/2017 12 Moved
1 03/01/2017 15 X
1 01/01/2017 8 Y
2 10/01/2018 6 Moved
2 02/01/2018 12 Z
Here, I want to look for the Description "Moved" and then select all transactions from the preceding 6 months for every Cust_ID.
Output should look like:
Cust_ID Transaction_Date Amount Description
1 08/01/2017 12 Moved
1 03/01/2017 15 X
2 10/01/2018 6 Moved
I want to do this in python. Please help.
The idea is to create a Series of the Moved datetimes shifted back by a 6-month offset, then keep only rows whose dates are greater than these offsets (matched back per group):
EDIT: Get all datetimes for each Moved value:
df['Transaction_Date'] = pd.to_datetime(df['Transaction_Date'])
df = df.sort_values(['Cust_ID','Transaction_Date'])
df['g'] = df['Description'].iloc[::-1].eq('Moved').cumsum()
s = (df[df['Description'].eq('Moved')]
     .set_index(['Cust_ID','g'])['Transaction_Date'] - pd.DateOffset(months=6))
mask = df.join(s.rename('a'), on=['Cust_ID','g'])['a'] < df['Transaction_Date']
df1 = df[mask].drop('g', axis=1)
EDIT1: Use only the earliest Moved datetime per customer; any later Moved rows within a group are removed:
print (df)
Cust_ID Transaction_Date Amount Description
0 1 10/01/2017 12 X
1 1 01/23/2017 15 Moved
2 1 03/01/2017 8 Y
3 1 08/08/2017 12 Moved
4 2 10/01/2018 6 Moved
5 2 02/01/2018 12 Z
#convert to datetimes
df['Transaction_Date'] = pd.to_datetime(df['Transaction_Date'])
#mask to select the Moved rows
mask = df['Description'].eq('Moved')
#filter and sort those rows
df1 = df[mask].sort_values(['Cust_ID','Transaction_Date'])
print (df1)
Cust_ID Transaction_Date Amount Description
1 1 2017-01-23 15 Moved
3 1 2017-08-08 12 Moved
4 2 2018-10-01 6 Moved
#flag duplicate Cust_ID rows in df1 (all but the first, i.e. earliest, Moved)
mask = df1.duplicated('Cust_ID')
#create Series for map
s = df1[~mask].set_index('Cust_ID')['Transaction_Date'] - pd.DateOffset(months=6)
print (s)
Cust_ID
1 2016-07-23
2 2018-04-01
Name: Transaction_Date, dtype: datetime64[ns]
#create mask to filter out the other Moved rows (keep only the first per group)
m2 = ~mask.reindex(df.index, fill_value=False)
df1 = df[(df['Cust_ID'].map(s) < df['Transaction_Date']) & m2]
print (df1)
Cust_ID Transaction_Date Amount Description
0 1 2017-10-01 12 X
1 1 2017-01-23 15 Moved
2 1 2017-03-01 8 Y
4 2 2018-10-01 6 Moved
EDIT2:
#flag all but the last Moved row per Cust_ID in df1
mask = df1.duplicated('Cust_ID', keep='last')
#create Series for map
s = df1[~mask].set_index('Cust_ID')['Transaction_Date']
print (s)
Cust_ID
1 2017-08-08
2 2018-10-01
Name: Transaction_Date, dtype: datetime64[ns]
m2 = ~mask.reindex(df.index, fill_value=False)
#keep rows between the (last) Moved date and the following 6 months
df3 = df[df['Transaction_Date'].between(df['Cust_ID'].map(s), df['Cust_ID'].map(s + pd.DateOffset(months=6))) & m2]
print (df3)
Cust_ID Transaction_Date Amount Description
3 1 2017-08-08 12 Moved
0 1 2017-10-01 12 X
4 2 2018-10-01 6 Moved
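For the simple case in the question (a single Moved row per customer), the same result can be sketched more compactly with map and between; this is an illustration, not part of the original answer:
import pandas as pd

df['Transaction_Date'] = pd.to_datetime(df['Transaction_Date'])

# Date of the 'Moved' transaction per customer (the latest one, if there are several).
moved = (df.loc[df['Description'].eq('Moved')]
           .groupby('Cust_ID')['Transaction_Date'].max())

end = df['Cust_ID'].map(moved)
start = end - pd.DateOffset(months=6)

# Keep each customer's transactions from the 6 months up to and including 'Moved'.
out = df[df['Transaction_Date'].between(start, end)]
print(out)
On the question's sample data this reproduces the expected output.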

How to remove duplicate entries using the latest time in Pandas

Here is the snippet:
from datetime import datetime
import pandas as pd

test = pd.DataFrame({'uid': [1, 1, 2, 2, 3, 3],
                     'start_time': [datetime(2017, 7, 20), datetime(2017, 6, 20),
                                    datetime(2017, 5, 20), datetime(2017, 4, 20),
                                    datetime(2017, 3, 20), datetime(2017, 2, 20)],
                     'amount': [10, 11, 12, 13, 14, 15]})
Output:
amount start_time uid
0 10 2017-07-20 1
1 11 2017-06-20 1
2 12 2017-05-20 2
3 13 2017-04-20 2
4 14 2017-03-20 3
5 15 2017-02-20 3
Desired Output:
amount start_time uid
0 10 2017-07-20 1
2 12 2017-05-20 2
4 14 2017-03-20 3
I want to group by uid and find the row with the latest start_time. Basically, I want to remove duplicate uids by keeping only the row with the latest start_time.
I tried test.groupby(['uid'])['start_time'].max() but it doesn't work as it only returns back the uid and start_time column. I need the amount column as well.
Update: Thanks to @jezrael & @EdChum, you guys always help me out on this forum, thank you so much!
I tested both solutions in terms of execution time on a dataset of 1136 rows and 30 columns:
Method A: test.sort_values('start_time', ascending=False).drop_duplicates('uid')
Total execution time: 3.21 ms
Method B: test.loc[test.groupby('uid')['start_time'].idxmax()]
Total execution time: 65.1 ms
I guess groupby requires more time to compute.
Use idxmax to return the index of the latest time and use this to index the original df:
In[35]:
test.loc[test.groupby('uid')['start_time'].idxmax()]
Out[35]:
amount start_time uid
0 10 2017-07-20 1
2 12 2017-05-20 2
4 14 2017-03-20 3
Use sort_values on the start_time column together with drop_duplicates by uid:
df = test.sort_values('start_time', ascending=False).drop_duplicates('uid')
print (df)
amount start_time uid
0 10 2017-07-20 1
2 12 2017-05-20 2
4 14 2017-03-20 3
If you need the output ordered by uid:
print (test.sort_values('start_time', ascending=False)
           .drop_duplicates('uid')
           .sort_values('uid'))
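As a footnote, the groupby().max() idea from the question can also be salvaged with transform, which broadcasts each uid's maximum back onto every row; a sketch, not one of the answers above:
# Keep the rows whose start_time equals the per-uid maximum.
latest = test.groupby('uid')['start_time'].transform('max')
print(test[test['start_time'].eq(latest)])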

Pandas group by cumsum keep columns

I have spent a few hours now trying to do a "cumulative group by sum" on a pandas dataframe. I have looked at all the stackoverflow answers and surprisingly none of them can solve my (very elementary) problem:
I have a dataframe:
df1
Out[8]:
Name Date Amount
0 Jack 2016-01-31 10
1 Jack 2016-02-29 5
2 Jack 2016-02-29 8
3 Jill 2016-01-31 10
4 Jill 2016-02-29 5
I am trying to
group by ['Name','Date'] and
cumsum 'Amount'.
That is it.
So the desired output is:
df1
Out[10]:
Name Date Cumsum
0 Jack 2016-01-31 10
1 Jack 2016-02-29 23
2 Jill 2016-01-31 10
3 Jill 2016-02-29 15
EDIT: I am simplifying the question. With the current answers I still can't get the correct "running" cumsum. Look closely, I want to see the cumulative sum "10, 23, 10, 15". In words, I want to see, at every consecutive date, the total cumulative sum for a person. NB: If there are two entries on one date for the same person, I want to sum those and then add them to the running cumsum and only then print the sum.
You need to assign the output to a new column and then remove the Amount column with drop:
df1['Cumsum'] = df1.groupby(by=['Name','Date'])['Amount'].cumsum()
df1 = df1.drop('Amount', axis=1)
print (df1)
Name Date Cumsum
0 Jack 2016-01-31 10
1 Jack 2016-02-29 5
2 Jack 2016-02-29 13
3 Jill 2016-01-31 10
4 Jill 2016-02-29 5
Another solution with assign:
df1 = (df1.assign(Cumsum=df1.groupby(by=['Name','Date'])['Amount'].cumsum())
          .drop('Amount', axis=1))
print (df1)
Name Date Cumsum
0 Jack 2016-01-31 10
1 Jack 2016-02-29 5
2 Jack 2016-02-29 13
3 Jill 2016-01-31 10
4 Jill 2016-02-29 5
EDIT by comment:
First group by the Name and Date columns and aggregate with sum, then group by the Name level and take the cumulative sum:
df = (df1.groupby(by=['Name','Date'])['Amount'].sum()
         .groupby(level='Name').cumsum().reset_index(name='Cumsum'))
print (df)
Name Date Cumsum
0 Jack 2016-01-31 10
1 Jack 2016-02-29 23
2 Jill 2016-01-31 10
3 Jill 2016-02-29 15
Set the index first, then groupby.
df.set_index(['Name', 'Date']).groupby(level=[0, 1]).Amount.cumsum().reset_index()
After the OP changed their question, this is now the correct answer.
df1.groupby(
    ['Name', 'Date']
).Amount.sum().groupby(
    level='Name'
).cumsum()
This is the same answer provided by jezrael.
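Since the question's title asks to keep columns, one more sketch (not from the answers above) merges the running total back onto the original rows so nothing is dropped:
# Running total per (Name, Date), merged back so the original rows and columns are kept.
totals = (df1.groupby(['Name', 'Date'], as_index=False)['Amount'].sum()
             .assign(Cumsum=lambda d: d.groupby('Name')['Amount'].cumsum())
             .drop('Amount', axis=1))
out = df1.merge(totals, on=['Name', 'Date'], how='left')
print(out)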