How to apply *multiple* functions to pandas groupby apply? - pandas

I have a dataframe which shall be grouped, and then several functions shall be applied to each group. Normally, I would do this with groupby().agg() (cf. Apply multiple functions to multiple groupby columns), but the functions I'm interested in do not take a single column as input but multiple columns.
I learned that, when I have one function that has multiple columns as input, I need apply (cf. Pandas DataFrame aggregate function using multiple columns).
But what do I need, when I have multiple functions that have multiple columns as input?
import pandas as pd
df = pd.DataFrame({'x':[2, 3, -10, -10], 'y':[10, 13, 20, 30], 'id':['a', 'a', 'b', 'b']})
def mindist(data): # of course these functions are more complicated in reality
    return min(data['y'] - data['x'])
def maxdist(data):
    return max(data['y'] - data['x'])
I would expect something like df.groupby('id').apply([mindist, maxdist])
    min  max
id
a     8   10
b    30   40
(achieved with pd.DataFrame({'mindist': df.groupby('id').apply(mindist), 'maxdist': df.groupby('id').apply(maxdist)}) - which obviously isn't very handy if I have a dozen functions to apply to the grouped dataframe). Initially I thought this OP had the same question, but he seems to be fine with aggregate, meaning his functions take only one column as input.

For this specific issue, how about grouping after taking the difference? Note that it has to be y - x to match mindist and maxdist:
(df['y']-df['x']).groupby(df['id']).agg(['min','max'])
More generically, you could probably do something like
df.groupby('id').apply(lambda x:pd.Series({'min':mindist(x),'max':maxdist(x)}))
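For the sample frame above, this generic pattern returns one row per group, with the Series keys as columns:
    min  max
id
a     8   10
b    30   40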

IIUC you want to use several functions within the same group. In this case you should return a pd.Series. In the following toy example I want to
sum columns A and B then calculate the mean
sum columns C and D then calculate the std
import pandas as pd
df = pd.util.testing.makeDataFrame().head(10)
df["key"] = ["key1"] * 5 + ["key2"] * 5
def fun(x):
    m = (x["A"]+x["B"]).mean()
    s = (x["C"]+x["D"]).std()
    return pd.Series({"meanAB": m, "stdCD": s})
df.groupby("key").apply(fun)
Update
Which in your case becomes:
import pandas as pd
df = pd.DataFrame({'x':[2, 3, -10, -10],
                   'y':[10, 13, 20, 30],
                   'id':['a', 'a', 'b', 'b']})
def mindist(data): # of course these functions are more complicated in reality
    return min(data['y'] - data['x'])
def maxdist(data):
    return max(data['y'] - data['x'])
def fun(data):
    return pd.Series({"maxdist": maxdist(data),
                      "mindist": mindist(data)})
df.groupby('id').apply(fun)
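which returns:
    maxdist  mindist
id
a        10        8
b        40       30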

Related

Filter dataframe based on condition before groupby

Suppose I have a dataframe like this
Create sample dataframe:
import pandas as pd
import numpy as np
data = {
    'gender': np.random.choice(['m', 'f'], size=100),
    'vaccinated': np.random.choice([0, 1], size=100),
    'got sick': np.random.choice([0, 1], size=100)
}
df = pd.DataFrame(data)
and I want to see, by gender, what proportion of vaccinated people got sick.
I've tried something like this:
df.groupby('gender').agg(lambda group: sum(group['vaccinated']==1 & group['sick']==1)
                                       /sum(group['sick']==1))
but this doesn't work because agg works on the series level. Same applies for transform. apply doesn't work either, but I'm not as clear why or how apply functions on groupby objects.
Any ideas how to accomplish this with a single line of code?
You could first filter for the vaccinated people and then group by gender and calculate the proportion of people that got sick:
df[df.vaccinated == 1].groupby("gender").agg({"got sick":"mean"})
Output:
        got sick
gender
f       0.548387
m       0.535714
In this case the proportion is calculated from the random sample data created above.
The docs for GroupBy.apply state that the function is applied "group-wise". This means that the function is called on each group separately as a data frame.
That is, df.groupby(c).apply(f) is conceptually equivalent to:
results = {}
for val in df[c].unique():
    group = df.loc[df[c] == val]
    result = f(group)
    results[val] = result
pd.concat(results)
We can use this understanding to apply your custom aggregation function, using a top-level def just to make the code easier to read:
def calc_vax_sick_frac(group):
    vaccinated = group['vaccinated'] == 1
    sick = group['got sick'] == 1  # the sample data uses the column name 'got sick'
    return (vaccinated & sick).sum() / sick.sum()

(
    df
    .groupby('gender')
    .apply(calc_vax_sick_frac)
)
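The result of the apply is a pd.Series indexed by gender. If you prefer a flat dataframe, one option (a sketch; the column name vax_sick_frac is just a choice) is to name the series and reset the index:
(
    df
    .groupby('gender')
    .apply(calc_vax_sick_frac)
    .rename('vax_sick_frac')
    .reset_index()
)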

How to apply a function on a column of a pandas dataframe? [duplicate]

I have a pandas dataframe with two columns. I need to change the values of the first column without affecting the second one and get back the whole dataframe with just first column values changed. How can I do that using apply() in pandas?
Given a sample dataframe df as:
   a  b
0  1  2
1  2  3
2  3  4
3  4  5
what you want is:
df['a'] = df['a'].apply(lambda x: x + 1)
that returns:
   a  b
0  2  2
1  3  3
2  4  4
3  5  5
For a single column, it is better to use map(), like this:
df = pd.DataFrame([{'a': 15, 'b': 15, 'c': 5}, {'a': 20, 'b': 10, 'c': 7}, {'a': 25, 'b': 30, 'c': 9}])

    a   b  c
0  15  15  5
1  20  10  7
2  25  30  9

df['a'] = df['a'].map(lambda a: a / 2.)

      a   b  c
0   7.5  15  5
1  10.0  10  7
2  12.5  30  9
Given the following dataframe df and the function complex_function,
import pandas as pd
def complex_function(x, y=0):
    if x > 5 and x > y:
        return 1
    else:
        return 2
df = pd.DataFrame(data={'col1': [1, 4, 6, 2, 7], 'col2': [6, 7, 1, 2, 8]})
   col1  col2
0     1     6
1     4     7
2     6     1
3     2     2
4     7     8
there are several solutions to use apply() on only one column. In the following I will explain them in detail.
I. Simple solution
The straightforward solution is the one from @Fabio Lamanna:
df['col1'] = df['col1'].apply(complex_function)
Output:
   col1  col2
0     2     6
1     2     7
2     1     1
3     2     2
4     1     8
Only the first column is modified; the second column is unchanged. The solution is beautiful. It is just one line of code and it reads almost like English: "Take 'col1' and apply the function complex_function to it."
However, if you need data from another column, e.g. 'col2', it won't work. If you want to pass the values of 'col2' to variable y of the complex_function, you need something else.
II. Solution using the whole dataframe
Alternatively, you could use the whole dataframe as described in this SO post or this one:
df['col1'] = df.apply(lambda x: complex_function(x['col1']), axis=1)
or if you prefer (like me) a solution without a lambda function:
def apply_complex_function(x):
    return complex_function(x['col1'])

df['col1'] = df.apply(apply_complex_function, axis=1)
There is a lot going on in this solution that needs to be explained. The apply() function works on pd.Series and pd.DataFrame. But you cannot use df['col1'] = df.apply(complex_function).loc[:, 'col1'], because it would throw a ValueError.
Hence, you need to give the information which column to use. To complicate things, the apply() function only accepts callables. To solve this, you need to define a (lambda) function with the column x['col1'] as argument; i.e. we wrap the column information in another function.
Unfortunately, the default value of the axis parameter is zero (axis=0), which means it will try executing column-wise and not row-wise. This wasn't a problem in the first solution, because we gave apply() a pd.Series. But now the input is a dataframe and we must be explicit (axis=1). (I marvel how often I forget this.)
Whether you prefer the version with the lambda function or without is subjective. In my opinion the line of code is complicated enough to read even without a lambda function thrown in. You only need the (lambda) function as a wrapper. It is just boilerplate code. A reader should not be bothered with it.
Now, you can modify this solution easily to take the second column into account:
def apply_complex_function(x):
    return complex_function(x['col1'], x['col2'])
df['col1'] = df.apply(apply_complex_function, axis=1)
Output:
   col1  col2
0     2     6
1     2     7
2     1     1
3     2     2
4     2     8
At index 4 the value has changed from 1 to 2, because the first condition 7 > 5 is true but the second condition 7 > 8 is false.
Note that you only needed to change the first line of code (i.e. the function) and not the second line.
Side note
Never put the column information into your function.
def bad_idea(x):
    return x['col1'] ** 2
By doing this, you make a general function dependent on a column name! This is a bad idea, because the next time you want to use this function, you cannot. Worse: Maybe you rename a column in a different dataframe just to make it work with your existing function. (Been there, done that. It is a slippery slope!)
III. Alternative solutions without using apply()
Although the OP specifically asked for a solution with apply(), alternative solutions were suggested. For example, the answer of @George Petrov suggested to use map(); the answer of @Thibaut Dubernet proposed assign().
I fully agree that apply() is seldom the best solution, because apply() is not vectorized. It is an element-wise operation with expensive function calling and overhead from pd.Series.
One reason to use apply() is that you want to use an existing function and performance is not an issue. Or your function is so complex that no vectorized version exists.
Another reason to use apply() is in combination with groupby(). Please note that DataFrame.apply() and GroupBy.apply() are different functions.
So it does make sense to consider some alternatives:
map() only works on pd.Series, but accepts dict and pd.Series as input. Using map() with a function is almost interchangeable with using apply(). It can be faster than apply(). See this SO post for more details.
df['col1'] = df['col1'].map(complex_function)
applymap() is almost identical for dataframes. It does not support pd.Series and it will always return a dataframe. However, it can be faster. The documentation states: "In the current implementation applymap calls func twice on the first column/row to decide whether it can take a fast or slow code path.". But if performance really counts you should seek an alternative route.
df['col1'] = df.applymap(complex_function).loc[:, 'col1']
assign() is not a feasible replacement for apply(). It has a similar behaviour in only the most basic use cases. It does not work with the complex_function. You still need apply() as you can see in the example below. The main use case for assign() is method chaining, because it gives back the dataframe without changing the original dataframe.
df['col1'] = df.assign(col1=df.col1.apply(complex_function))
Annex: How to speed up apply()?
I only mention it here because it was suggested by other answers, e.g. @durjoy. The list is not exhaustive:
Do not use apply(). This is no joke. For most numeric operations, a vectorized method exists in pandas. If/else blocks can often be refactored with a combination of boolean indexing and .loc. My example complex_function could be refactored in this way (see the sketch after this list).
Refactor to Cython. If you have a complex equation and the parameters of the equation are in your dataframe, this might be a good idea. Check out the official pandas user guide for more information.
Use the raw=True parameter. Theoretically, this should improve the performance of apply() if you are just applying a NumPy reduction function, because the overhead of pd.Series is removed. Of course, your function has to accept an ndarray, so you have to refactor it to work on NumPy arrays. By doing this, you can get a huge performance boost.
Use 3rd party packages. The first thing you should try is Numba. I do not know swifter, mentioned by @durjoy; and probably many other packages are worth mentioning here.
Try/Fail/Repeat. As mentioned above, map() and applymap() can be faster - depending on the use case. Just time the different versions and choose the fastest. This approach is the most tedious one with the least performance increase.
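For illustration, here is how the example complex_function from above could be refactored with boolean indexing and .loc (a sketch; numpy.where would work just as well):
import numpy as np
import pandas as pd

df = pd.DataFrame(data={'col1': [1, 4, 6, 2, 7], 'col2': [6, 7, 1, 2, 8]})

# complex_function(x, y) returns 1 if x > 5 and x > y, else 2
mask = (df['col1'] > 5) & (df['col1'] > df['col2'])
df['col1'] = 2            # else-branch value for every row
df.loc[mask, 'col1'] = 1  # overwrite where the condition holds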
You don't need a function at all. You can work on a whole column directly.
Example data:
>>> df = pd.DataFrame({'a': [100, 1000], 'b': [200, 2000], 'c': [300, 3000]})
>>> df
      a     b     c
0   100   200   300
1  1000  2000  3000
Half all the values in column a:
>>> df.a = df.a / 2
>>> df
       a     b     c
0   50.0   200   300
1  500.0  2000  3000
Although the given responses are correct, they modify the initial data frame, which is not always desirable (and, given the OP asked for examples "using apply", it might be they wanted a version that returns a new data frame, as apply does).
This is possible using assign: it is valid to assign to existing columns, as the documentation states (emphasis is mine):
Assign new columns to a DataFrame.
Returns a new object with all original columns in addition to new ones. Existing columns that are re-assigned will be overwritten.
In short:
In [1]: import pandas as pd

In [2]: df = pd.DataFrame([{'a': 15, 'b': 15, 'c': 5}, {'a': 20, 'b': 10, 'c': 7}, {'a': 25, 'b': 30, 'c': 9}])

In [3]: df.assign(a=lambda df: df.a / 2)
Out[3]:
      a   b  c
0   7.5  15  5
1  10.0  10  7
2  12.5  30  9

In [4]: df
Out[4]:
    a   b  c
0  15  15  5
1  20  10  7
2  25  30  9
Note that the function will be passed the whole dataframe, not only the column you want to modify, so you will need to make sure you select the right column in your lambda.
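Because assign() returns a new dataframe, it slots naturally into method chains. A minimal sketch (the query() step is only there to illustrate chaining):
result = (
    df
    .assign(a=lambda d: d.a / 2)
    .query('a > 8')
)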
If you are really concerned about the execution speed of your apply function and you have a huge dataset to work on, you could use swifter for faster execution. Here is an example of swifter on a pandas dataframe:
import pandas as pd
import swifter

def fnc(m):
    return m*3+4

df = pd.DataFrame({"m": [1,2,3,4,5,6], "c": [1,1,1,1,1,1], "x":[5,3,6,2,6,1]})

# apply a self-created function to a single column in pandas
df["y"] = df.m.swifter.apply(fnc)
This enables all your CPU cores to compute the result, so it will be much faster than a normal apply. Try it and let me know if it is useful for you.
Let me try a complex computation using datetime and considering nulls or empty spaces. I am subtracting 30 years from a datetime column, using apply() with a lambda, and converting the datetime format. The clause if x != '' else x takes care of all empty spaces and nulls.
import datetime

df['Date'] = df['Date'].fillna('')
df['Date'] = df['Date'].apply(lambda x: ((datetime.datetime.strptime(str(x), '%m/%d/%Y')
                                          - datetime.timedelta(days=30*365)).strftime('%Y%m%d'))
                              if x != '' else x)
Make a copy of your dataframe first if you need to modify a column
Many answers here suggest modifying some column and assign the new values to the old column. It is common to get the SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. warning. This happens when your dataframe was created from another dataframe but is not a proper copy.
To silence this warning, make a copy and assign back.
df = df.copy()
df['a'] = df['a'].apply('add', other=1)
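(The string form df['a'].apply('add', other=1) is just an illustration; the same operation can be written directly as df['a'] + 1 or df['a'].add(1).)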
apply() only needs the name of the function
You can invoke a function by simply passing its name to apply() (no need for lambda). If your function needs additional arguments, you can pass them either as keyword arguments or pass the positional arguments as args=. For example, suppose you have file paths in your dataframe and you need to read files in these paths.
def read_data(path, sep=',', usecols=[0]):
return pd.read_csv(path, sep=sep, usecols=usecols)
df = pd.DataFrame({'paths': ['../x/yz.txt', '../u/vw.txt']})
df['paths'].apply(read_data) # you don't need lambda
df['paths'].apply(read_data, args=(',', [0, 1])) # pass the positional arguments to `args=`
df['paths'].apply(read_data, sep=',', usecols=[0, 1]) # pass as keyword arguments
Don't apply a function, call the appropriate method directly
It's almost never ideal to apply a custom function on a column via apply(). Because apply() is syntactic sugar for a Python loop with pandas overhead, it's often slower than calling the same function in a list comprehension, never mind calling optimized pandas methods. Almost all numeric operators can be applied directly on the column, and there are corresponding methods for all of them.
# add 1 to every element in column `a`
df['a'] += 1
# for every row, subtract column `a` value from column `b` value
df['c'] = df['b'] - df['a']
If you want to apply a function that has if-else blocks, then you should probably be using numpy.where() or numpy.select() instead. It is much, much faster. If you have anything larger than 10k rows of data, you'll notice the difference right away.
For example, if you have a custom function similar to func() below, then instead of applying it on the column, you could operate directly on the columns and return values using numpy.select().
def func(row):
    if row == 'a':
        return 1
    elif row == 'b':
        return 2
    else:
        return -999
# instead of applying a `func` to each row of a column, use `numpy.select` as below
import numpy as np
conditions = [df['col'] == 'a', df['col'] == 'b']
choices = [1, 2]
df['new'] = np.select(conditions, choices, default=-999)
As you can see, numpy.select() has very minimal syntax difference from an if-else ladder; you only need to separate conditions and choices into separate lists. For other options, check out this answer.
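For a single condition, numpy.where() is even more compact; a sketch of the two-way case:
import numpy as np
# 1 where 'col' equals 'a', otherwise -999
df['new'] = np.where(df['col'] == 'a', 1, -999)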

Pandas specify resulting column names when using agg without group by

I am trying to write a function that would summarize my pandas dataframe. This function should be able to do the summarization by group as well as without grouping (depending on whether I specify the group by parameter).
In order to be able to perform arbitrary aggregation functions, I pass a dictionary as an argument to pandas agg. The dictionary specifies which aggregations should be performed - and ideally it would also specify the names of the resulting columns.
Unfortunately, while I can easily name all columns when using df.groupby([...]).agg({...}), I can't specify the column names in the same way while aggregating the whole dataframe (i.e. df.agg({...})).
Example
Let's have a dataframe like this:
df = pd.DataFrame(
    {
        'my_grouping_col': ['A', 'A', 'B', 'B', 'C', 'C'],
        'my_value_col_1': [1, 2, 3, 4, 5, 6],
        'my_value_col_2': [7, 8, 9, 10, 11, 12]
    }
)
If I want to aggregate it by group while using some custom functions, I could do:
df.groupby(['my_grouping_col']).agg(
    {'my_value_col_1': ['min', 'max', ('Q1', lambda x: x.quantile(0.25))],
     'my_value_col_2': ['min', 'max', ('Q3', lambda x: x.quantile(0.75))]
    }
)
Therefore I can use a tuple to specify the aggregation function as well as its resulting column name.
Now I would like to be able to use the same syntax even without doing groupby:
df.agg(
    {'my_value_col_1': ['min', 'max', ('Q1', lambda x: x.quantile(0.25))],
     'my_value_col_2': ['min', 'max', ('Q3', lambda x: x.quantile(0.75))]
    }
)
But this gives me: AttributeError: 'Q1' is not a valid function for 'Series' object.
I can imagine two workarounds that I would rather not use:
Option 1:
Add a column with the same value in all rows and then group by it. Afterwards, reset the index with drop=True to remove this column.
df['my_temporary_grouping_column'] = 1
df.groupby(['my_temporary_grouping_column']).agg(
    {'my_value_col_1': ['min', 'max', ('Q1', lambda x: x.quantile(0.25))],
     'my_value_col_2': ['min', 'max', ('Q3', lambda x: x.quantile(0.75))]
    }
).reset_index(drop=True)
Which gives the desired results:
  my_value_col_1            my_value_col_2
             min max    Q1             min max     Q3
0              1   6  2.25               7  12  10.75
Option 2
aggregate without groupby as:
df.agg(
    {'my_value_col_1': ['min', 'max', lambda x: x.quantile(0.25)],
     'my_value_col_2': ['min', 'max', lambda x: x.quantile(0.75)]
    }
)
And rename ... But I think the second option is really impractical.
Can I somehow change the dictionary specifying the details of the aggregation (including specifying column names) so that it works in both cases - with groupby and without? If not, what would be a good way to specify the column names in agg without groupby? Note that I do not wish to use anything like agg(Q3=lambda x: x.quantile(0.75)). I would like to use the code within a function which takes the details of the aggregations as a parameter, such as:
def summarise(data, columns, group_cols, functions):
    # assuming `functions` maps each column name to a list of aggregation specs
    tdict = {col: functions[col] for col in columns}
    if group_cols is not None:
        out = data.groupby(group_cols).agg(tdict)
    else:
        data['temporary_group_col'] = 1
        out = data.groupby(['temporary_group_col']).agg(tdict).reset_index(drop=True)
    return out
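A hypothetical call (the structure of functions - a dict mapping column names to lists of aggregation specs - is my assumption, not from the original post):
summarise(
    df,
    columns=['my_value_col_1', 'my_value_col_2'],
    group_cols=['my_grouping_col'],
    functions={'my_value_col_1': ['min', 'max', ('Q1', lambda x: x.quantile(0.25))],
               'my_value_col_2': ['min', 'max', ('Q3', lambda x: x.quantile(0.75))]}
)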

pandas groupby returns multiindex with two more aggregates

When grouping by a single column and using as_index=False, the behavior in pandas is as expected. However, when I use .agg, as_index no longer appears to behave as expected. In short, it doesn't appear to matter.
# imports
import pandas as pd
import numpy as np
# set the seed
np.random.seed(834)
df = pd.DataFrame(np.random.rand(10, 1), columns=['a'])
df['letter'] = np.random.choice(['a','b'], size=10)
summary = df.groupby('letter', as_index=False).agg([np.count_nonzero, np.mean])
summary
returns:
                    a
        count_nonzero      mean
letter
a                 6.0  0.539313
b                 4.0  0.456702
I would have expected the index to be 0, 1, with letter as a column in the dataframe.
In summary, I want to be able to group by one or more columns, summarize a single column with multiple aggregates, and return a dataframe that has neither the group-by columns as the index nor a MultiIndex in the columns.
The comment from #Trenton did the trick.
summary = df.groupby('letter')['a'].agg([np.count_nonzero, np.mean]).reset_index()
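For the seeded data above, this gives a flat dataframe:
  letter  count_nonzero      mean
0      a            6.0  0.539313
1      b            4.0  0.456702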

pandas groupby keeping other columns

This question is similar to this one, but in my case I need to apply a function that returns a Series rather than a single value for each group — that question is about aggregating with sum, but I need to use rank (so the difference is like that between agg and transform).
I have data on firms over time. This generates some dummy data that looks like my use case:
import numpy as np
import pandas as pd
dates = pd.date_range('1926', '2020', freq='M')
ndates = len(dates)
nfirms = 5000
cols = list('ABCDE')
df = pd.DataFrame(np.random.randn(nfirms*ndates, len(cols)),
                  index=np.tile(dates, nfirms),
                  columns=cols)
df.insert(0, 'id', np.repeat(np.arange(nfirms), ndates))
I need to calculate ranks of column E within each date (the index), but keeping column id.
If I just use groupby and .rank I get this:
df.groupby(level=0)['E'].rank()
1926-01-31 3226.0
1926-02-28 1042.0
1926-03-31 1611.0
1926-04-30 2591.0
1926-05-31 30.0
...
2019-08-31 1973.0
2019-09-30 227.0
2019-10-31 4381.0
2019-11-30 1654.0
2019-12-31 1572.0
Name: E, Length: 5640000, dtype: float64
This has the same dimension as df but I'm not sure it's safe to merge on the index — I really need to join on the id column also. Can I assume that the order remains the same?
If the order in the output is the same as in the input, I think I can do this:
df['ranks'] = df.groupby(level=0)['E'].rank()
But something about this seems strange, and I assume there is a way to include additional columns in the groupby output.
(I'm also not clear if calling .rank() is equivalent to .transform('rank').)