Pandas groupby and rolling window - pandas

I'm trying to calculate the sum of one field over a specific period of time, after a grouping function is applied.
My dataset looks like this:
Date Company Country Sold
01.01.2020 A BE 1
02.01.2020 A BE 0
03.01.2020 A BE 1
03.01.2020 A BE 1
04.01.2020 A BE 1
05.01.2020 B DE 1
06.01.2020 B DE 0
I would like to add a new column to each row that calculates the sum of Sold (per group "Company, Country") for the last 7 days, not including the current day:
Date Company Country Sold LastWeek_Count
01.01.2020 A BE 1 0
02.01.2020 A BE 0 1
03.01.2020 A BE 1 1
03.01.2020 A BE 1 1
04.01.2020 A BE 1 3
05.01.2020 B DE 1 0
06.01.2020 B DE 0 1
I tried the following, but it also includes the current date, and it gives different values for the same date, e.g. 03.01.2020:
df['LastWeek_Count'] = df.groupby(['Company', 'Country']).rolling(7, on='Date')['Sold'].sum().reset_index()
Is there a built-in function in pandas that I can use to perform these calculations?

You can use a .rolling window of 8 and then subtract the sum for the current Date (for each grouped row) to effectively get the previous 7 days. For this sample data we should also pass min_periods=1 (otherwise you will get NaN values; for your actual dataset you will need to decide what to do with windows shorter than 8).
Then, from the .rolling window of 8, do another .groupby of the relevant columns, but this time also include Date, and take the max of the newly created LastWeek_Count column. You need the max because you have multiple records per day, so taking the max gives the total aggregated amount per Date.
Then, create a series of the grouped sum per Date. In the final step, subtract that per-date sum from the rolling window max; this is a workaround for getting the sum of the previous 7 days, as no offset parameter is used with .rolling here:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df['LastWeek_Count'] = df.groupby(['Company', 'Country']).rolling(8, min_periods=1, on='Date')['Sold'].sum().reset_index()['Sold']
df['LastWeek_Count'] = df.groupby(['Company', 'Country', 'Date'])['LastWeek_Count'].transform('max')
s = df.groupby(['Company', 'Country', 'Date'])['Sold'].transform('sum')
df['LastWeek_Count'] = (df['LastWeek_Count']-s).astype(int)
Out[17]:
Date Company Country Sold LastWeek_Count
0 2020-01-01 A BE 1 0
1 2020-01-02 A BE 0 1
2 2020-01-03 A BE 1 1
3 2020-01-03 A BE 1 1
4 2020-01-04 A BE 1 3
5 2020-01-05 B DE 1 0
6 2020-01-06 B DE 0 1
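Side note: newer pandas versions accept a closed argument on time-based rolling windows, which expresses "the previous 7 days, excluding today" directly and avoids the subtract workaround. A minimal sketch, assuming Date has already been parsed to datetime (as above) and the frame is sorted by date within each group:
rolled = (df.groupby(['Company', 'Country'])
            .rolling('7D', on='Date', closed='left')['Sold']
            .sum())
# drop the group-key index levels so the result aligns with df's own index;
# the first row of each group has an empty window, hence the fillna(0)
df['LastWeek_Count'] = (rolled.droplevel(['Company', 'Country'])
                              .fillna(0).astype(int))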

One way would be to first consolidate the Sold value of each group (['Date', 'Company', 'Country']) onto a single row using a temporary DF.
After that, apply your .groupby with .rolling over an interval of 8 rows.
After calculating the sum, subtract the Sold value of each row from it and add the resulting column to the original DF with .merge:
#convert Date column to datetime
df['Date'] = pd.to_datetime(df['Date'], format='%d.%m.%Y')
#create a temporary DataFrame
df2 = df.groupby(['Date', 'Company', 'Country'])['Sold'].sum().reset_index()
#calc the last-week count
df2['LastWeek_Count'] = (df2.groupby(['Company', 'Country'])
                            .rolling(8, min_periods=1, on='Date')['Sold']
                            .sum().reset_index(drop=True)
                         )
#subtract the value of 'lastweek' from the current 'Sold'
df2['LastWeek_Count'] = df2['LastWeek_Count'] - df2['Sold']
#add the new column to the original DF
df.merge(df2.drop(columns=['Sold']), on=['Date', 'Company', 'Country'])
#output:
Date Company Country Sold LastWeek_Count
0 2020-01-01 A BE 1 0.0
1 2020-01-02 A BE 0 1.0
2 2020-01-03 A BE 1 1.0
3 2020-01-03 A BE 1 1.0
4 2020-01-04 A BE 1 3.0
5 2020-01-05 B DE 1 0.0
6 2020-01-06 B DE 0 1.0
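One small follow-up: .merge returns a new frame rather than modifying df in place, and the rolling sum comes back as float (hence the 0.0 values above). If you want to keep the result and get integer counts, a short sketch:
df = df.merge(df2.drop(columns=['Sold']), on=['Date', 'Company', 'Country'])
# cast the float counts back to int (safe here because min_periods=1 leaves no NaN)
df['LastWeek_Count'] = df['LastWeek_Count'].astype(int)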

Related

How to produce monthly count when given a date range in pandas?

I have a dataframe that records users, a label, and the start and end date of them being labelled as such, e.g.
user label start_date end_date
1    x     2018-01-01 2018-10-01
2    x     2019-05-10 2020-01-01
3    y     2019-04-01 2022-04-20
1    b     2018-10-01 2020-05-08
etc.
where each row is for a given user and a label; a user appears multiple times for different labels.
I want to get a count of users for every month for each label, such as this:
date    count_label_x count_label_y count_label_b count_label_
2018-01 10            0             20            5
2018-02 2             5             15            3
2018-03 20            6             8             3
etc.
where, for instance, for the first entry of the previous table, that user should be counted once for every month between their start and end date. Since I only have a few labels, I can filter the labels one by one and produce one output for each label; the problem boils down to: how do I check and count users given an interval?
Thanks
You can use date_range combined with to_period to generate the active months, then pivot_table with aggfunc='nunique' to aggregate the unique users (if you want to count duplicated users, use aggfunc='count'):
out = (df
       .assign(period=[pd.date_range(a, b, freq='M').to_period('M')
                       for a, b in zip(df['start_date'], df['end_date'])])
       .explode('period')
       .pivot_table(index='period', columns='label', values='user',
                    aggfunc='nunique', fill_value=0)
       )
output:
label b x y
period
2018-01 0 1 0
2018-02 0 1 0
2018-03 0 1 0
2018-04 0 1 0
2018-05 0 1 0
...
2021-12 0 0 1
2022-01 0 0 1
2022-02 0 0 1
2022-03 0 0 1
Handling NaT: if start and end fall within the same month, date_range with freq='M' yields an empty range, so explode produces NaT. If you still want to count those rows:
out = (df
       .assign(period=[pd.date_range(a, b, freq='M').to_period('M')
                       for a, b in zip(df['start_date'], df['end_date'])])
       .explode('period')
       .assign(period=lambda d: d['period'].fillna(d['start_date'].dt.to_period('M')))
       .pivot_table(index='period', columns='label', values='user',
                    aggfunc='nunique', fill_value=0)
       )
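If you also want the count_label_* headers shown in the desired output, the pivoted columns can be renamed afterwards; a small sketch using the out frame from above:
# rename the pivoted label columns to the count_label_* style and
# turn the period index back into an ordinary date column
out = (out.add_prefix('count_label_')
          .reset_index()
          .rename(columns={'period': 'date'}))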

Pandas: Drop duplicates that appear within a time interval pandas

We have a dataframe containing an 'ID' and 'DAY' columns, which shows when a specific customer made a complaint. We need to drop duplicates from the 'ID' column, but only if the duplicates happened 30 days apart, tops. Please see the example below:
Current Dataset:
ID DAY
0 1 22.03.2020
1 1 18.04.2020
2 2 10.05.2020
3 2 13.01.2020
4 3 30.03.2020
5 3 31.03.2020
6 3 24.02.2021
Goal:
ID DAY
0 1 22.03.2020
1 2 10.05.2020
2 2 13.01.2020
3 3 30.03.2020
4 3 24.02.2021
Any suggestions? I have tried groupby and then creating a loop to calculate the difference between each combination, but because the dataframe has millions of rows this would take forever...
You can compute the difference between successive dates per group and use it to form a mask to remove days that are less than 30 days apart:
df['DAY'] = pd.to_datetime(df['DAY'], dayfirst=True)
mask = (df
        .sort_values(by=['ID', 'DAY'])
        .groupby('ID')['DAY']
        .diff().lt('30d')
        .sort_index()
        )
df[~mask]
N.B. A potential drawback of this approach is that if the customer makes a new complaint within the 30 days, that complaint restarts the threshold for the next one.
output:
ID DAY
0 1 2020-03-22
2 2 2020-05-10
3 2 2020-01-13
4 3 2020-03-30
6 3 2021-02-24
Thus another approach might be to resample the data per group to 30 days:
(df
 .groupby('ID')
 .resample('30d', on='DAY').first()
 .dropna()
 .convert_dtypes()
 .reset_index(drop=True)
)
output:
ID DAY
0 1 2020-03-22
1 2 2020-01-13
2 2 2020-05-10
3 3 2020-03-30
4 3 2021-02-24
You can try grouping by the ID column and taking the diff of the DAY column within each group:
df['DAY'] = pd.to_datetime(df['DAY'], dayfirst=True)
from datetime import timedelta
m = timedelta(days=30)
out = df.groupby('ID').apply(lambda group: group[~group['DAY'].diff().abs().le(m)]).reset_index(drop=True)
print(out)
ID DAY
0 1 2020-03-22
1 2 2020-05-10
2 2 2020-01-13
3 3 2020-03-30
4 3 2021-02-24
To convert back to the original date format, you can use dt.strftime:
out['DAY'] = out['DAY'].dt.strftime('%d.%m.%Y')
print(out)
ID DAY
0 1 22.03.2020
1 2 10.05.2020
2 2 13.01.2020
3 3 30.03.2020
4 3 24.02.2021
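Note that the sample's ID 2 rows are not in chronological order, which is why the .abs() is needed above. A variant of the same idea that sorts first (a sketch, with m as defined above):
out = (df.sort_values(['ID', 'DAY'])
         .groupby('ID')
         .apply(lambda g: g[~g['DAY'].diff().le(m)])
         .reset_index(drop=True))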

Is there a way to group columns with multiple conditions using pandas?

My dataframe is similar to:
transaction date cash
0 1 2020-01-01 72
1 2 2020-01-03 100
2 2 2020-01-05 -75
3 3 2020-01-05 82
I want the output to group by transaction and to sum the cash for each transaction (if there are two amounts) BUT ALSO to return the later date. So for transaction 2 the end result would show transaction, date, cash as: 2, 1/5/2020, 25...
Not sure how to make tables to help the visuals in my question yet, sorry; please let me know if there are any questions.
Use groupby + agg. Check the docs examples.
output = df.groupby('transaction').agg({'date': 'max', 'cash': 'sum'})
This solution assumes that the date column is encoded as proper datetime instances. If this is not currently the case, try df['date'] = pd.to_datetime(df['date']) before doing the following.
>>> df
transaction date cash
0 1 2020-01-01 72
1 2 2020-01-03 100
2 2 2020-01-05 -75
3 3 2020-01-05 82
>>> transactions = df.groupby('transaction')
>>> pd.concat((transactions['cash'].sum(), transactions['date'].max()), axis=1)
cash date
transaction
1 72 2020-01-01
2 25 2020-01-05
3 82 2020-01-05
transactions['date'].max() picks the date furthest into the future of those with the same transaction ID.
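If you'd rather have transaction back as a regular column (matching the shape of the question's table), named aggregation plus reset_index does it; a short sketch:
# aggregate per transaction and flatten back to ordinary columns
result = (df.groupby('transaction')
            .agg(date=('date', 'max'), cash=('cash', 'sum'))
            .reset_index())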

Create an incremental count from a cumulative count by date segmented by another series in a Pandas data frame

I have cumulative data (the series 'cumulative_count') in a data frame ('df1') that is segmented by the series 'state', and I want to create a new series in the data frame that shows the incremental count by 'state'.
So:
df1 = pd.DataFrame({'date': ['2020-01-03','2020-01-03','2020-01-03','2020-01-04','2020-01-04','2020-01-04','2020-01-05','2020-01-05','2020-01-05'],'state': ['NJ','NY','CT','NJ','NY','CT','NJ','NY','CT'], 'cumulative_count': [1,3,5,3,6,7,19,15,20]})
...is transformed to have the new series ('incremental_count') added, where the incremental count is calculated by date but also segmented by state, with the result being...
df2 = pd.DataFrame({'date': ['2020-01-03','2020-01-03','2020-01-03','2020-01-04','2020-01-04','2020-01-04','2020-01-05','2020-01-05','2020-01-05'],'state': ['NJ','NY','CT','NJ','NY','CT','NJ','NY','CT'], 'cumulative_count': [1,3,5,3,6,7,19,15,20],'incremental_count': [1,3,5,2,3,2,16,9,13]})
Any recommendations on how to do this would be greatly appreciated. Thanks!
Since your DataFrame is already sorted by 'date', you are looking to take the diff within each state group. Then fillna to get the correct value for the first date within each state.
df1['incremental_count'] = (df1.groupby('state')['cumulative_count'].diff()
                               .fillna(df1['cumulative_count'], downcast='infer'))
date state cumulative_count incremental_count
0 2020-01-03 NJ 1 1
1 2020-01-03 NY 3 3
2 2020-01-03 CT 5 5
3 2020-01-04 NJ 3 2
4 2020-01-04 NY 6 3
5 2020-01-04 CT 7 2
6 2020-01-05 NJ 19 16
7 2020-01-05 NY 15 9
8 2020-01-05 CT 20 13
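As a quick sanity check (a sketch on the df1 from above): re-accumulating the new column per state should reproduce cumulative_count exactly:
# cumulative sum of the increments per state must equal the original column
check = df1.groupby('state')['incremental_count'].cumsum()
assert check.equals(df1['cumulative_count'])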

Pandas Lambda Function Format Month and Day

I have a DF "ltyc" that looks like this:
month day wind_speed
0 1 1 11.263604
1 1 2 11.971495
2 1 3 11.989080
3 1 4 12.558736
4 1 5 11.850899
And I apply a lambda function:
ltyc['date'] = pd.to_datetime(ltyc["month"], format='%m').apply(lambda dt: dt.replace(year=2020))
To get it to look like this:
month day wind_speed date
0 1 1 11.263604 2020-01-01
1 1 2 11.971495 2020-01-01
2 1 3 11.989080 2020-01-01
3 1 4 12.558736 2020-01-01
4 1 5 11.850899 2020-01-01
Except I need it to look like this, so that the days change as well; I cannot figure out how to format the lambda statement to do this instead:
month day wind_speed date
0 1 1 11.263604 2020-01-01
1 1 2 11.971495 2020-01-02
2 1 3 11.989080 2020-01-03
3 1 4 12.558736 2020-01-04
4 1 5 11.850899 2020-01-05
I have tried this:
ltyc['date'] = pd.to_datetime(ltyc["month"], format='%m%d').apply(lambda dt: dt.replace(year=2020))
and I get this error:
ValueError: time data '1' does not match format '%m%d' (match)
Thank you for the help; I'm trying to figure out lambda functions.
Create a series with value 2020 and name year. Concat it with the ['month', 'day'] columns and pass the result to pd.to_datetime. As long as you pass a dataframe with columns named year, month, and day, pd.to_datetime will convert it to the appropriate datetime series.
#Allolz suggestion:
ltyc['date'] = pd.to_datetime(ltyc[['day', 'month']].assign(year=2020))
Out[367]:
month day wind_speed date
0 1 1 11.263604 2020-01-01
1 1 2 11.971495 2020-01-02
2 1 3 11.989080 2020-01-03
3 1 4 12.558736 2020-01-04
4 1 5 11.850899 2020-01-05
Or you may use reindex to create the sub-dataframe to pass to pd.to_datetime:
ltyc['date'] = pd.to_datetime(ltyc.reindex(['year', 'month', 'day'],
                                           axis=1, fill_value=2020))
Original:
s = pd.Series([2020]*len(ltyc), name='year')
ltyc['date'] = pd.to_datetime(pd.concat([s, ltyc[['month','day']]], axis=1))
This is similar to a previous answer, but does not persist the 'helper' column with the year. In brief, we pass a data frame with three columns (year, month, day) to the to_datetime() function.
ltyc['date'] = pd.to_datetime(ltyc
                              .assign(year=2020)
                              .filter(['year', 'month', 'day'])
                              )
You could also use your original method by concatenating month and day together with .astype(str) and adding %d to the format. The problem with your lambda attempt was that it only considered month; this is how you would consider month and day:
ltyc['date'] = (pd.to_datetime(ltyc["month"].astype(str) + '-' + ltyc["day"].astype(str),
                               format='%m-%d')
                .apply(lambda dt: dt.replace(year=2020)))
output:
month day wind_speed date
0 1 1 11.263604 2020-01-01
1 1 2 11.971495 2020-01-02
2 1 3 11.989080 2020-01-03
3 1 4 12.558736 2020-01-04
4 1 5 11.850899 2020-01-05
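For context on why the .apply(... replace(year=2020)) step is needed in the first place: parsing with a format that lacks %Y leaves the year at the strptime default of 1900. A quick illustration:
import pandas as pd

# with no year in the format, pandas falls back to the default year 1900
pd.to_datetime('1-5', format='%m-%d')
# Timestamp('1900-01-05 00:00:00')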