Calculate the variance for each element in the sample separately - pandas

I have a DataFrame with the name of the attraction, the date, and the ride sum.
import pandas as pd
attr = pd.DataFrame(
{'rides':['circuit','circuit',
'roller coaster', 'roller coaster',
'car', 'car', 'car',
'train', 'train'],
'date':['2019-06-22', '2019-06-23',
'2019-06-29', '2019-07-06',
'2019-09-01', '2019-09-07', '2019-09-08',
'2019-09-14', '2019-09-15'],
'ride_sum':[663, 483,
858, 602,
326, 2, 86,
70, 134]})
rides date ride_sum
0 circuit 2019-06-22 663
1 circuit 2019-06-23 483
2 roller coaster 2019-06-29 858
3 roller coaster 2019-07-06 602
4 car 2019-09-01 326
5 car 2019-09-07 2
6 car 2019-09-08 86
7 train 2019-09-14 70
8 train 2019-09-15 134
I can calculate this manually, but my dataframe has more than 1000 rows and more than 30 different rides.
In the example, it looks like this
print(attr.loc[attr['rides'] == 'circuit']['ride_sum'].var(),
attr.loc[attr['rides'] == 'roller coaster']['ride_sum'].var(),
attr.loc[attr['rides'] == 'car']['ride_sum'].var(),
attr.loc[attr['rides'] == 'train']['ride_sum'].var())
16200.0 32768.0 28272.0 2048.0
I want to get a dataframe with the variance for each ride that looks like this
rides var
0 circuit 16200.0
1 roller coaster 32768.0
2 car 28272.0
3 train 2048.0

Try groupby together with var() like this:
attr.groupby("rides").var().reset_index()
rides ride_sum
0 car 28272
1 circuit 16200
2 roller coaster 32768
3 train 2048
(reset_index() is not necessarily required)
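If you also want the column to be named var, as in the desired output, here is a minimal sketch (the reset_index(name=...) part is my addition, not from the answer above):
attr.groupby("rides")["ride_sum"].var().reset_index(name="var")
            rides      var
0             car  28272.0
1         circuit  16200.0
2  roller coaster  32768.0
3           train   2048.0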

Do this:
attr.groupby(attr.rides).agg(["var"]).reset_index()
EDIT:
For kurtosis, there is no aggregate. You need to do this:
attr.groupby(attr.rides).apply(pd.DataFrame.kurt).reset_index()
With your example, there are fewer than four values per group, so it'll return NaN.
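A Series-level variant of the same idea, as a sketch (pandas' kurtosis estimator needs at least four observations per group, which is why the toy example gives NaN):
# Sketch: per-group kurtosis applied to the Series directly
attr.groupby("rides")["ride_sum"].apply(pd.Series.kurt)
# All NaN here, because every group has fewer than four rows.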

Use the pandas unique function to get the unique rides and a for loop to compute the variance of each one.
Example:
unique_rides = attr['rides'].unique()
for ride in unique_rides:
    print(attr.loc[attr['rides'] == ride]['ride_sum'].var())
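Building on that loop, a sketch that collects the per-ride variances into the requested DataFrame instead of only printing them (the variable names are my own):
# Sketch: gather one row per ride into a DataFrame
rows = [{'rides': ride, 'var': attr.loc[attr['rides'] == ride, 'ride_sum'].var()}
        for ride in attr['rides'].unique()]
variances = pd.DataFrame(rows)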
Thank you

Pandas equivalent of partition by [duplicate]

Trying to create a new column from a groupby calculation. In the code below, I get the correct calculated values for each date (see group below), but when I try to create a new column (df['Data4']) with it, I get NaN. So I am trying to create a new column in the dataframe with the sum of Data3 for all dates and apply that to each date row. For example, 2015-05-08 appears in 2 rows (the total is 50 + 5 = 55), and in this new column I would like to have 55 in both of those rows.
import pandas as pd
import numpy as np
from pandas import DataFrame
df = pd.DataFrame({
'Date' : ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'],
'Sym' : ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'],
'Data2': [11, 8, 10, 15, 110, 60, 100, 40],
'Data3': [5, 8, 6, 1, 50, 100, 60, 120]
})
group = df['Data3'].groupby(df['Date']).sum()
df['Data4'] = group
You want to use transform. This returns a Series with its index aligned to the df, so you can then add it as a new column:
In [74]:
df = pd.DataFrame({'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'], 'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'], 'Data2': [11, 8, 10, 15, 110, 60, 100, 40],'Data3': [5, 8, 6, 1, 50, 100, 60, 120]})
df['Data4'] = df['Data3'].groupby(df['Date']).transform('sum')
df
Out[74]:
Data2 Data3 Date Sym Data4
0 11 5 2015-05-08 aapl 55
1 8 8 2015-05-07 aapl 108
2 10 6 2015-05-06 aapl 66
3 15 1 2015-05-05 aapl 121
4 110 50 2015-05-08 aaww 55
5 60 100 2015-05-07 aaww 108
6 100 60 2015-05-06 aaww 66
7 40 120 2015-05-05 aaww 121
How do I create a new column with Groupby().Sum()?
There are two ways - one straightforward and the other slightly more interesting.
Everybody's Favorite: GroupBy.transform() with 'sum'
@Ed Chum's answer can be simplified a bit. Call DataFrame.groupby rather than Series.groupby. This results in simpler syntax.
# The setup.
df[['Date', 'Data3']]
Date Data3
0 2015-05-08 5
1 2015-05-07 8
2 2015-05-06 6
3 2015-05-05 1
4 2015-05-08 50
5 2015-05-07 100
6 2015-05-06 60
7 2015-05-05 120
df.groupby('Date')['Data3'].transform('sum')
0 55
1 108
2 66
3 121
4 55
5 108
6 66
7 121
Name: Data3, dtype: int64
It's a tad faster,
df2 = pd.concat([df] * 12345)
%timeit df2['Data3'].groupby(df2['Date']).transform('sum')
%timeit df2.groupby('Date')['Data3'].transform('sum')
10.4 ms ± 367 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
8.58 ms ± 559 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Unconventional, but Worth your Consideration: GroupBy.sum() + Series.map()
I stumbled upon an interesting idiosyncrasy in the API. From what I can tell, you can reproduce this on any major version over 0.20 (I tested this on 0.23 and 0.24). It seems like you can consistently shave off a few milliseconds of the time taken by transform if you instead use a direct function of GroupBy and broadcast it using map:
df.Date.map(df.groupby('Date')['Data3'].sum())
0 55
1 108
2 66
3 121
4 55
5 108
6 66
7 121
Name: Date, dtype: int64
Compare with
df.groupby('Date')['Data3'].transform('sum')
0 55
1 108
2 66
3 121
4 55
5 108
6 66
7 121
Name: Data3, dtype: int64
My tests show that map is a bit faster if you can afford to use the direct GroupBy function (such as mean, min, max, first, etc.). It is more or less faster for most general situations up to around ~200 thousand records. After that, the performance really depends on the data.
[Benchmark plots not shown. Left: v0.23, right: v0.24.]
A nice alternative to know, and better if you have smaller frames with smaller numbers of groups... but I would recommend transform as a first choice. Thought this was worth sharing anyway.
Benchmarking code, for reference:
import numpy as np
import pandas as pd
import perfplot

perfplot.show(
    setup=lambda n: pd.DataFrame({'A': np.random.choice(n//10, n), 'B': np.ones(n)}),
    kernels=[
        lambda df: df.groupby('A')['B'].transform('sum'),
        lambda df: df.A.map(df.groupby('A')['B'].sum()),
    ],
    labels=['GroupBy.transform', 'GroupBy.sum + map'],
    n_range=[2**k for k in range(5, 20)],
    xlabel='N',
    logy=True,
    logx=True
)
In general, I suggest using the more powerful apply, with which you can write your queries in single expressions even for more complicated uses, such as defining a new column whose values are defined as operations on groups, and that can also have different values within the same group!
This is more general than the simple case of defining a column with the same value for every group (like sum in this question, which varies by group but is the same within the same group).
Simple case (new column with same value within a group, different across groups):
# I'm assuming the name of your dataframe is something long, like
# `my_data_frame`, to show the power of being able to write your
# data processing in a single expression without multiple statements and
# multiple references to your long name, which is the normal style
# that the pandas API naturally makes you adopt, but which make the
# code often verbose, sparse, and a pain to generalize or refactor
my_data_frame = pd.DataFrame({
'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'],
'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'],
'Data2': [11, 8, 10, 15, 110, 60, 100, 40],
'Data3': [5, 8, 6, 1, 50, 100, 60, 120]})
(my_data_frame
# create groups by 'Date'
.groupby(['Date'])
# for every small Group DataFrame `gdf` with the same 'Date', do:
# assign a new column 'Data4' to it, with the value being
# the sum of 'Data3' for the small dataframe `gdf`
.apply(lambda gdf: gdf.assign(Data4=lambda gdf: gdf['Data3'].sum()))
# after groupby operations, the variable(s) you grouped by on
# are set as indices. In this case, 'Date' was set as an additional
# level for the (multi)index. But it is still also present as a
# column. Thus, we drop it from the index:
.droplevel(0)
)
### OR
# We don't even need to define a variable for our dataframe.
# We can chain everything in one expression
(pd
.DataFrame({
'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'],
'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'],
'Data2': [11, 8, 10, 15, 110, 60, 100, 40],
'Data3': [5, 8, 6, 1, 50, 100, 60, 120]})
.groupby(['Date'])
.apply(lambda gdf: gdf.assign(Data4=lambda gdf: gdf['Data3'].sum()))
.droplevel(0)
)
Out:
         Date   Sym  Data2  Data3  Data4
3  2015-05-05  aapl     15      1    121
7  2015-05-05  aaww     40    120    121
2  2015-05-06  aapl     10      6     66
6  2015-05-06  aaww    100     60     66
1  2015-05-07  aapl      8      8    108
5  2015-05-07  aaww     60    100    108
0  2015-05-08  aapl     11      5     55
4  2015-05-08  aaww    110     50     55
(Why is the Python expression wrapped in parentheses? So that we don't need to sprinkle our code with backslashes all over the place, and we can put comments within our expression code to describe every step.)
What is powerful about this? It's that it is harnessing the full power of the "split-apply-combine paradigm". It is allowing you to think in terms of "splitting your dataframe into blocks" and "running arbitrary operations on those blocks" without reducing/aggregating, i.e., without reducing the number of rows. (And without writing explicit, verbose loops and resorting to expensive joins or concatenations to glue the results back.)
Let's consider a more complex example. One in which you have multiple time series of data in your dataframe. You have a column that represents a kind of product, a column that has timestamps, and a column that contains the number of items sold for that product at some time of the year. You would like to group by product and obtain a new column, that contains the cumulative total for the items that are sold for each category. We want a column that, within every "block" with the same product, is still a time series, and is monotonically increasing (only within a block).
How can we do this? With groupby + apply!
(pd
.DataFrame({
'Date': ['2021-03-11','2021-03-12','2021-03-13','2021-03-11','2021-03-12','2021-03-13'],
'Product': ['shirt','shirt','shirt','shoes','shoes','shoes'],
'ItemsSold': [300, 400, 234, 80, 10, 120],
})
.groupby(['Product'])
.apply(lambda gdf: (gdf
# sort by date within a group
.sort_values('Date')
# create new column
.assign(CumulativeItemsSold=lambda df: df['ItemsSold'].cumsum())))
.droplevel(0)
)
Out:
         Date Product  ItemsSold  CumulativeItemsSold
0  2021-03-11   shirt        300                  300
1  2021-03-12   shirt        400                  700
2  2021-03-13   shirt        234                  934
3  2021-03-11   shoes         80                   80
4  2021-03-12   shoes         10                   90
5  2021-03-13   shoes        120                  210
Another advantage of this method? It works even if we have to group by multiple fields! For example, if we had a 'Color' field for our products, and we wanted the cumulative series grouped by (Product, Color), we can:
(pd
.DataFrame({
'Date': ['2021-03-11','2021-03-12','2021-03-13','2021-03-11','2021-03-12','2021-03-13',
'2021-03-11','2021-03-12','2021-03-13','2021-03-11','2021-03-12','2021-03-13'],
'Product': ['shirt','shirt','shirt','shoes','shoes','shoes',
'shirt','shirt','shirt','shoes','shoes','shoes'],
'Color': ['yellow','yellow','yellow','yellow','yellow','yellow',
'blue','blue','blue','blue','blue','blue'], # new!
'ItemsSold': [300, 400, 234, 80, 10, 120,
123, 84, 923, 0, 220, 94],
})
.groupby(['Product', 'Color']) # We group by 2 fields now
.apply(lambda gdf: (gdf
.sort_values('Date')
.assign(CumulativeItemsSold=lambda df: df['ItemsSold'].cumsum())))
.droplevel([0,1]) # We drop 2 levels now
)
Out:
          Date Product   Color  ItemsSold  CumulativeItemsSold
6   2021-03-11   shirt    blue        123                  123
7   2021-03-12   shirt    blue         84                  207
8   2021-03-13   shirt    blue        923                 1130
0   2021-03-11   shirt  yellow        300                  300
1   2021-03-12   shirt  yellow        400                  700
2   2021-03-13   shirt  yellow        234                  934
9   2021-03-11   shoes    blue          0                    0
10  2021-03-12   shoes    blue        220                  220
11  2021-03-13   shoes    blue         94                  314
3   2021-03-11   shoes  yellow         80                   80
4   2021-03-12   shoes  yellow         10                   90
5   2021-03-13   shoes  yellow        120                  210
(This possibility of easily extending to grouping over multiple fields is the reason why I like to put the arguments of groupby always in a list, even if it's a single name, like 'Product' in the previous example.)
And you can do all of this concisely in a single expression. (Sure, if Python's lambdas were a bit nicer to look at, it would look even nicer.)
Why did I go over a general case? Because this is one of the first SO questions that pops up when googling for things like "pandas new column groupby".
Additional thoughts on the API for this kind of operation
Adding columns based on arbitrary computations made on groups is much like the nice idiom of defining new columns using aggregations over Windows in SparkSQL.
For example, you can think of this (it's Scala code, but the equivalent in PySpark looks practically the same):
val byDepName = Window.partitionBy('depName)
empsalary.withColumn("avg", avg('salary) over byDepName)
as something like (using pandas in the way we have seen above):
empsalary = pd.DataFrame(...some dataframe...)
(empsalary
# our `Window.partitionBy('depName)`
.groupby(['depName'])
# our 'withColumn("avg", avg('salary) over byDepName)
.apply(lambda gdf: gdf.assign(avg=lambda df: df['salary'].mean()))
.droplevel(0)
)
(Notice how much more concise and nicer the Spark example is. The pandas equivalent looks a bit clunky. The pandas API doesn't make writing these kinds of "fluent" operations easy.)
This idiom in turn comes from SQL's window functions, which the PostgreSQL documentation gives a very nice definition of: (emphasis mine)
A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. But unlike regular aggregate functions, use of a window function does not cause rows to become grouped into a single output row — the rows retain their separate identities. Behind the scenes, the window function is able to access more than just the current row of the query result.
And gives a beautiful SQL one-liner example: (ranking within groups)
SELECT depname, empno, salary, rank() OVER (PARTITION BY depname ORDER BY salary DESC) FROM empsalary;
  depname  | empno | salary | rank
-----------+-------+--------+------
 develop   |     8 |   6000 |    1
 develop   |    10 |   5200 |    2
 develop   |    11 |   5200 |    2
 develop   |     9 |   4500 |    4
 develop   |     7 |   4200 |    5
 personnel |     2 |   3900 |    1
 personnel |     5 |   3500 |    2
 sales     |     1 |   5000 |    1
 sales     |     4 |   4800 |    2
 sales     |     3 |   4800 |    2
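For what it's worth, a rough pandas sketch of the same per-group ranking (my own illustration, assuming an empsalary DataFrame with depname and salary columns):
# rank salaries within each department, highest salary first;
# method='min' mimics SQL's rank() (ties share a rank, the next rank is skipped)
empsalary['rank'] = (empsalary
    .groupby('depname')['salary']
    .rank(method='min', ascending=False)
    .astype(int))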
Last thing: you might also be interested in pandas' pipe, which is similar to apply but works a bit differently and gives the internal operations a bigger scope to work on. See here for more
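As a minimal sketch of the pipe style (my own illustration with the df from this question, not code from the linked page):
# pipe hands the whole DataFrame to the callable, so the grouped sum
# can be attached inside one chained expression
df.pipe(lambda d: d.assign(Data4=d.groupby('Date')['Data3'].transform('sum')))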
df = pd.DataFrame({
'Date' : ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'],
'Sym' : ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'],
'Data2': [11, 8, 10, 15, 110, 60, 100, 40],
'Data3': [5, 8, 6, 1, 50, 100, 60, 120]
})
print(pd.pivot_table(data=df,index='Date',columns='Sym', aggfunc={'Data2':'sum','Data3':'sum'}))
output
Data2 Data3
Sym aapl aaww aapl aaww
Date
2015-05-05 15 40 1 120
2015-05-06 10 100 6 60
2015-05-07 8 60 8 100
2015-05-08 11 110 5 50

SUM in dataframe of rows that have the same date and ADD new column

My code starts this way: it takes data from HERE, and I want to extract all the rows that contain "fascia_anagrafica" equal to "20-29". In Italian, "fascia_anagrafica" means "age range". That was relatively simple, as you see below, and I dropped some unimportant columns.
import pandas as pd
import json
import numpy
import sympy
from numpy import arange,exp
from scipy.optimize import curve_fit
from matplotlib import pyplot
import math
import decimal
df = pd.read_csv('https://raw.githubusercontent.com/italia/covid19-opendata-vaccini/master/dati/somministrazioni-vaccini-latest.csv')
df = df[df["fascia_anagrafica"] == "20-29"]
df01=df.drop(columns= ["fornitore","area","sesso_maschile","sesso_femminile","seconda_dose","pregressa_infezione","dose_aggiuntiva","codice_NUTS1","codice_NUTS2","codice_regione_ISTAT","nome_area"])
Now the dataframe looks like this: [image]
As you see, for every date there is the "20-29" age range, and in every row you find the value "prima_dose", which stands for "first_dose".
Now the problem:
If you take into consideration the date "2020-12-27", you will notice that it is repeated about 20 times (with 20 different values), since in Italy there are 21 regions; the same applies to the other dates. Unfortunately there are not always 21, because some regions didn't enter any values on some days, so the dataframe is NOT periodic.
I want to add a column to the dataframe that, for every row, contains the sum of the values that have the same date. An example here:
Date          prima_dose    sum_column
2020-8-9      1             13    <---- this is 1+3+4+5 for 2020-8-9
2020-8-9      3             13
2020-8-9      4             13
2020-8-9      5             13
2020-8-10     2             8     <---- this is 2+5+1 for 2020-8-10
2020-8-10     5             8
2020-8-10     1             8
thanks!
If you just want to sum all the values of 'prima_dose' for each date and get the result in a new dataframe, you could use groupby.sum():
result = df01.groupby('data_somministrazione')['prima_dose'].sum().reset_index()
Prints:
>>> result
data_somministrazione prima_dose
0 2020-12-27 700
1 2020-12-28 171
2 2020-12-29 87
3 2020-12-30 486
4 2020-12-31 2425
.. ... ...
289 2021-10-12 11583
290 2021-10-13 12532
291 2021-10-14 15347
292 2021-10-15 13689
293 2021-10-16 9293
[294 rows x 2 columns]
Note that this produces a new dataframe with a different structure: a unique row per date.
If you want to add a new column to your existing dataframe without altering its structure, you should use groupby.transform():
df01['prima_dose_per_date'] = df01.groupby('data_somministrazione')['prima_dose'].transform('sum')
Prints:
>>> df01
data_somministrazione fascia_anagrafica prima_dose prima_dose_per_date
0 2020-12-27 20-29 2 700
7 2020-12-27 20-29 9 700
12 2020-12-27 20-29 60 700
17 2020-12-27 20-29 59 700
23 2020-12-27 20-29 139 700
... ... ... ...
138475 2021-10-16 20-29 533 9293
138484 2021-10-16 20-29 112 9293
138493 2021-10-16 20-29 0 9293
138502 2021-10-16 20-29 529 9293
138515 2021-10-16 20-29 0 9293
[15595 rows x 4 columns]
This will keep the current structure of your dataframe and return a new column with the sum of prima_dose per each date.

Pandas way to separate a DataFrame based on previous groupby() explorations without losing the non-grouped columns

I tried to translate the problem with my real data into the example data presented in my question. Maybe I just have a simple technical problem. Or maybe my whole approach and workflow is not the best?
The objective
There are persons (column name) who have eaten different fruits on different days. And there is some more data (columns foo and bar) I do not want to lose.
I want to separate/split the original data without losing the additional data (in foo and bar).
The condition for separating is the number of unique fruits eaten on the specific days.
That is the initial data
>>> df
name day fruit foo bar
0 Tim 1 Apple 708 20
1 Tim 1 Apple 135 743
2 Tim 2 Apple 228 562
3 Anna 1 Banana 495 924
4 Anna 1 Strawberry 236 542
5 Bob 1 Strawberry 420 894
6 Bob 2 Apple 27 192
7 Bob 2 Kiwi 671 145
The separated interim result should look like this two DataFrame's:
>>> two
name day fruit foo bar
0 Anna 1 Banana 495 924
1 Anna 1 Strawberry 236 542
2 Bob 2 Apple 27 192
3 Bob 2 Kiwi 671 145
>>> non_two
name day fruit foo bar
0 Tim 1 Apple 708 20
1 Tim 1 Apple 135 743
2 Tim 2 Apple 228 562
3 Bob 1 Strawberry 420 894
Example explanation in words: Tim ate just apples on days 1 and 2. It does not matter how many apples; it just matters that it is one unique fruit.
What I have done so far
I did some groupby() magic to find out who ate two (or fewer/more than two) unique fruits, and on which days.
import pandas as pd
import random as rd
data = {'name': ['Tim', 'Tim', 'Tim', 'Anna', 'Anna', 'Bob', 'Bob', 'Bob'],
'day': [1, 1, 2, 1, 1, 1, 2, 2],
'fruit': ['Apple', 'Apple', 'Apple', 'Banana', 'Strawberry',
'Strawberry', 'Apple', 'Kiwi'],
'foo': rd.sample(range(1000), 8),
'bar': rd.sample(range(1000), 8)
}
# That is the primary DataFrame
df = pd.DataFrame(data)
# Explore the data
a = df[['name', 'day', 'fruit']].groupby(['name', 'day', 'fruit']).count().reset_index()
b = a.groupby(['name', 'day']).count()
# People who ate 2 fruits on specific days
two = b[(b.fruit == 2)].reset_index()
print(two)
# People who ate fewer or more than 2 fruits on specific days
non_two = b[(b.fruit != 2)].reset_index()
print(non_two)
Here is my roadblocker
With the dataframes two and non_two I have the information I want. Now I want to separate the initial dataframe based on that information. I think name and day are the columns I should use to select and separate rows in the initial dataframe.
# filter mask
mymask = (df.name == two.name) & (df.day == two.day)
df_two = df[mymask]
df_non_two = df[~mymask]
But this does not work. The first line raises ValueError: Can only compare identically-labeled Series objects.
Use DataFrameGroupBy.nunique with GroupBy.transform, which makes it possible to filter the original DataFrame:
mymask = df.groupby(['name', 'day'])['fruit'].transform('nunique').eq(2)
df_two = df[mymask]
df_non_two = df[~mymask]
print (df_two)
name day fruit foo bar
3 Anna 1 Banana 335 62
4 Anna 1 Strawberry 286 694
6 Bob 2 Apple 822 738
7 Bob 2 Kiwi 793 449
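An alternative sketch (not from the answer above) using GroupBy.filter, which keeps or drops whole (name, day) groups at once:
# keep the groups with exactly two unique fruits / everything else
df_two = df.groupby(['name', 'day']).filter(lambda g: g['fruit'].nunique() == 2)
df_non_two = df.groupby(['name', 'day']).filter(lambda g: g['fruit'].nunique() != 2)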

Capturing Pandas aggregation in to Lists

I have a column of data which has a datetime and another column which has a numeric field (Length), and I am able to aggregate as below, where I am grouping by datetime and getting the min/mean/max of all Lengths.
Code:
df.groupby(['DateTime']).agg({'Length': ['min', 'mean', 'max']})
Output:
Length
min mean max
DateTime
2020-11-24 14:30:00 118 1172.712000 1505
2020-11-24 14:30:01 118 1246.719495 1508
2020-11-24 14:30:02 115 1062.351156 1508
I need a simple way to capture this output in a set of lists, something like this:
outputdatelist=[2020-11-24 14:30:00, 2020-11-24 14:30:01,...]
outputlen_min=[118, 118, 115]
Similarly for mean, max.
Is there a way to do it?
Let's say the input df is like below:
DateTime Length
0 2018-01-01 100
1 2018-02-01 100
2 2018-03-01 100
3 2018-04-01 100
4 2018-05-01 100
Try the code:
df1 = df.groupby(['DateTime']).agg({'Length': ['min', 'mean', 'max']}).reset_index()
outputdatelist = df1['DateTime'].tolist()
outputlen_min = df1['Length']['min'].tolist()
Prints:
print(outputdatelist)
['2018-01-01', '2018-02-01', '2018-03-01', '2018-04-01', '2018-05-01']
print(outputlen_min)
[100, 100, 100, 100, 100]
Similarly for the mean and max columns.
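For completeness, the same .tolist() pattern for the other two aggregates (the variable names are my own):
outputlen_mean = df1['Length']['mean'].tolist()
outputlen_max = df1['Length']['max'].tolist()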

Is there any way to fill in the missing rows for sale records in a sequential calendar

Hi everyone,
Here is my question related to the pandas package: how to fill in the rows which are missing in a sequential calendar.
Background:
The table is a sample of my dataset with sales records. As you know, some products sell poorly. Therefore, some records are absent for "Category-A & product-seed" in 201003-201005. Hence, it is hard for me to calculate the "sequential growth rate %" for each group in category-product.
Initially, I wanted to use "groupby + apply" to dig out which periods are missing for each group, so that I could then recover them and use "pct_change". But it doesn't work, and I don't know where the root cause is...
If you know how to do that, could you share your opinion with us? Appreciated!
Dataset: [image]
Calendar: [image]
Result: [image]
Some additional information:
My calendar is a string consisting of "month/quarter/semi-annual/annual" values instead of a date-time format, for example 2010Q1 or 2019H1. So I hope there is a way to fill in the missing rows according to my specific calendar.
In other words, the first step would be to work out which rows are absent from my specific calendar, and the second step would be to insert the missing rows with the category and product information. Thanks.
Depending on what you have in your data, this can be achieved efficiently in several ways. I will point out two.
First the data:
df = pd.DataFrame(
{'Month': [201001, 201002, 201006, 201007, 201008, 201001, 201002, 201007, 201008],
'Category': ['A'] * 9,
'Product': ['seed'] * 5 + ['flower'] * 4,
'Sales': [200, 332, 799, 122, 994, 799, 122, 994, 100]}
).set_index(['Month', 'Category', 'Product'])
Reshape df
This will work only if ALL POSSIBLE DATES appear at least once in the df.
df = df.unstack(['Category', 'Product']).fillna(0).stack(['Category', 'Product'])
print(df.reset_index())
Output
Month Category Product Sales
0 201001 A flower 799.0
1 201001 A seed 200.0
2 201002 A flower 122.0
3 201002 A seed 332.0
4 201006 A flower 0.0
5 201006 A seed 799.0
6 201007 A flower 994.0
7 201007 A seed 122.0
8 201008 A flower 100.0
9 201008 A seed 994.0
As you can see, this sample data does not include months 3-5
Reindex
If we build a new Index with all possible combinations of date/product pandas will add the missing rows with df.reindex()
import numpy as np

months = np.arange(201001, 201008, dtype=int)  # note: np.int is deprecated, plain int works
cats = ['A']
products = ['seed', 'flower']
df = df.reindex(
    index=pd.MultiIndex.from_product(
        [months, cats, products],
        names=df.index.names),
    fill_value=0)
print(df.reset_index())
Output
Month Category Product Sales
0 201001 A seed 200
1 201001 A flower 799
2 201002 A seed 332
3 201002 A flower 122
4 201003 A seed 0
5 201003 A flower 0
6 201004 A seed 0
7 201004 A flower 0
8 201005 A seed 0
9 201005 A flower 0
10 201006 A seed 799
11 201006 A flower 0
12 201007 A seed 122
13 201007 A flower 994
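With the gaps filled in, here is a sketch of the sequential growth rate the question was ultimately after (my own addition; the zero-filled months will produce inf or -100 values that you may want to handle separately):
# month-over-month growth rate (%) within each Category/Product group
df['GrowthRatePct'] = (df
    .groupby(['Category', 'Product'])['Sales']
    .pct_change()   # first month of each group is NaN
    .mul(100))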