Replace the rows in each group with the first row's value (pandas groupby)

Here is a dataframe:
df = pd.DataFrame({'A': ['foo', 'foo', 'bar', 'bar', 'bar'],
                   'B': ['1', '2', '2', '4', '1']})
Below is how I want it to look:
A B
0 foo 1
1 foo 1
2 bar 2
3 bar 2
4 bar 2
And here is how I have tried and failed:
groups = df.groupby(['A'])
groups.apply(lambda g: g[g['B'] == g['B'].first()]).reset_index(drop=True)

You can do:
df['B'] = df.groupby('A')['B'].transform('first')
or, if the data is already sorted by A as shown:
df['B'] = df['B'].mask(df['A'].duplicated()).ffill()
Output:
A B
0 foo 1
1 foo 1
2 bar 2
3 bar 2
4 bar 2
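As a side note, the mask/ffill variant works because duplicated marks every repeated A value after its first occurrence. A minimal sketch of the intermediate steps (my illustration, using the same df as above):
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'foo', 'bar', 'bar', 'bar'],
                   'B': ['1', '2', '2', '4', '1']})
# duplicated() flags every repeated value of A after its first occurrence
print(df['A'].duplicated().tolist())  # [False, True, False, True, True]
# mask() blanks B on those rows, then ffill() copies each group's first value down
print(df['B'].mask(df['A'].duplicated()).ffill().tolist())  # ['1', '1', '2', '2', '2']
Note this relies on all rows of each A value being contiguous; transform('first') has no such requirement.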

Use drop_duplicates + repeat (the counts must be aligned to the order of the deduplicated rows, hence the value_counts()[s['A']] lookup):
s = df.drop_duplicates('A')
s = s.reindex(s.index.repeat(df['A'].value_counts()[s['A']]))
Output:
A B
0 foo 1
0 foo 1
2 bar 2
2 bar 2
2 bar 2
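If you would rather not align the counts by hand, a merge-based sketch (my addition, not part of the original answer) gives the same values with a clean index:
s = df.drop_duplicates('A')
out = df[['A']].merge(s, on='A', how='left')
print(out)
# A B
# 0 foo 1
# 1 foo 1
# 2 bar 2
# 3 bar 2
# 4 bar 2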

Related

Allocate values between two pandas dataframes

Consider two dataframes:
>>> import pandas as pd
>>> df1 = pd.DataFrame({"category": ["foo", "foo", "bar", "bar", "bar"], "quantity": [1, 2, 1, 2, 3]})
>>> print(df1)
category quantity
0 foo 1
1 foo 2
2 bar 1
3 bar 2
4 bar 3
>>> df2 = pd.DataFrame({
...     "category": ["foo", "foo", "foo", "foo", "bar", "bar", "bar", "bar", "bar", "bar"],
...     "item": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
... })
>>> print(df2)
category item
0 foo A
1 foo B
2 foo C
3 foo D
4 bar E
5 bar F
6 bar G
7 bar H
8 bar I
9 bar J
How can I create a new column in df1 (as a new dataframe, df3) that joins on the category column and allocates the items from df2 in order, so that each row receives as many items as its quantity? That is, create something like:
>>> df3 = pd.DataFrame({
...     "category": ["foo", "foo", "bar", "bar", "bar"],
...     "quantity": [1, 2, 1, 2, 3],
...     "item": ["A", "B,C", "E", "F,G", "H,I,J"]
... })
category quantity item
0 foo 1 A
1 foo 2 B,C
2 bar 1 E
3 bar 2 F,G
4 bar 3 H,I,J
You can create a helper DataFrame by repeating each df1 row quantity times with Index.repeat and DataFrame.loc, converting the index to a column so the original row labels are not lost. Then add a helper column g to both DataFrames with GroupBy.cumcount; it numbers the repeated demand rows and the items identically within each category, so merging on ['g', 'category'] pairs each unit of quantity with exactly one item. Finally, DataFrame.merge and an aggregating join rebuild one row per original df1 row:
df11 = (df1.loc[df1.index.repeat(df1['quantity'])]
           .reset_index()
           .assign(g=lambda x: x.groupby('category').cumcount()))
df22 = df2.assign(g=df2.groupby('category').cumcount())
df = (df11.merge(df22, on=['g', 'category'], how='left')
          .groupby(['index', 'category', 'quantity'])['item']
          .agg(lambda x: ','.join(x.dropna()))
          .droplevel(0)
          .reset_index())
print(df)
category quantity item
0 foo 1 A
1 foo 2 B,C
2 bar 1 E
3 bar 2 F,G
4 bar 3 H,I,J
You can use an iterator with itertools.islice:
from itertools import islice

# aggregate the items of each category as an iterator
s = df2.groupby('category')['item'].agg(iter)
# for each category, consume as many items as needed and join them
df1['item'] = (df1.groupby('category', group_keys=False)['quantity']
                  .apply(lambda g: g.map(lambda x: ','.join(list(islice(s[g.name], x))))))
Output:
category quantity item
0 foo 1 A
1 foo 2 B,C
2 bar 1 E
3 bar 2 F,G
4 bar 3 H,I,J
Note that if you don't have enough items, this will just use what is available.
Example using df2 truncated after F as input:
category quantity item
0 foo 1 A
1 foo 2 B,C
2 bar 1 E
3 bar 2 F
4 bar 3
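The trick here is that agg(iter) stores one stateful iterator per category, so successive rows of the same category keep consuming where the previous row left off. A minimal standalone illustration (my sketch, independent of the dataframes above):
from itertools import islice

it = iter(['A', 'B', 'C', 'D'])
print(','.join(islice(it, 1)))  # A    - the first row takes one item
print(','.join(islice(it, 2)))  # B,C  - the next row takes the following two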
Another option is to compute, per group, the start and end offsets of each row's slice of items from the cumulative sum of quantity, and slice df2 directly:
def function1(dd: pd.DataFrame):
    col2 = dd.quantity.cumsum()         # end offset of each row's slice
    col1 = col2.shift(fill_value=0)     # start offset of each row's slice
    return dd.assign(col1=col1, col2=col2).apply(
        lambda ss: ",".join(
            df2.loc[df2.category == ss.category, "item"].iloc[ss.col1:ss.col2].tolist()
        ), axis=1)

df1.assign(item=df1.groupby('category').apply(function1).droplevel(0))
Output:
category quantity item
0 foo 1 A
1 foo 2 B,C
2 bar 1 E
3 bar 2 F,G
4 bar 3 H,I,J
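The cumsum/shift pairing is the heart of that answer; a minimal illustration on the bar group alone (my sketch, with the item list written out by hand):
import pandas as pd

q = pd.Series([1, 2, 3])            # quantities of the bar rows
end = q.cumsum()                    # 1, 3, 6
start = end.shift(fill_value=0)     # 0, 1, 3
items = ['E', 'F', 'G', 'H', 'I', 'J']
print([','.join(items[s:e]) for s, e in zip(start, end)])
# ['E', 'F,G', 'H,I,J']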

Build a decision column by ANDing multiple columns in pandas

I have a pandas data frame which is shown below:
>>> x = [[1,2,3,4,5],[1,2,4,4,3],[2,4,5,6,7]]
>>> columns = ['a','b','c','d','e']
>>> df = pd.DataFrame(data = x, columns = columns)
>>> df
a b c d e
0 1 2 3 4 5
1 1 2 4 4 3
2 2 4 5 6 7
I have an array of objects (conditions) as shown below:
[
    {
        'header': 'a',
        'condition': '==',
        'values': [1]
    },
    {
        'header': 'b',
        'condition': '==',
        'values': [2]
    },
    ...
]
and an assignHeader which is:
assignHeader = 'decision'
Now I want to build up all the conditions from the conditions array by looping through it, for example something like this:
pConditions = []
for eachCondition in conditions:
    header = eachCondition['header']
    values = eachCondition['values']
    if eachCondition['condition'] == "==":
        pConditions.append(df[header].isin(values))
    else:
        pConditions.append(~df[header].isin(values))
df[assignHeader] = and(pConditions)  # this is the part I can't get right: `and` is a keyword, not a function
I was thinking of using the all operator in pandas but I am unable to work out the right syntax. The list can grow large and is dynamic, so I want to keep this generic nested approach and check for equality. Does anyone know a way to do this?
Final output:
conditions = [df['a'] == 1, df['b'] == 2]
>>> df['decision'] = (df['a'] == 1) & (df['b'] == 2)
>>> df
a b c d e decision
0 1 2 3 4 5 True
1 1 2 4 4 3 True
2 2 4 5 6 7 False
Here the conditions array is variable, and I want a function which takes df, newheadername and conditions as input and returns the output shown below:
>>> df
a b c d e decision
0 1 2 3 4 5 True
1 1 2 4 4 3 True
2 2 4 5 6 7 False
where newheadername = 'decision'
I was able to solve the problem using the code shown below. I am not sure whether this is a particularly fast way of getting things done, but I would love your input if you have anything specific to point out.
def andMerging(conditions, mergeHeader, df):
    if len(conditions) != 0:
        df[mergeHeader] = pd.concat(conditions, axis=1).all(axis=1)
    return df
where conditions is a list of boolean pd.Series. The conditions are built as shown below:
def prepareForConditionMerging(conditionsArray, df):
    conditions = []
    for prop in conditionsArray:
        condition = prop['condition']
        values = prop['values']
        header = prop['header']
        if type(values) == str:
            values = [values]
        if condition == "==":
            conditions.append(df[header].isin(values))
        else:
            conditions.append(~df[header].isin(values))
    # More operators (greater than, less than, etc.) can be added here.
    return conditions
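Tying the two helpers together on the question's data (my usage sketch; the helper names are from the answer above):
conditionsArray = [
    {'header': 'a', 'condition': '==', 'values': [1]},
    {'header': 'b', 'condition': '==', 'values': [2]},
]
df = andMerging(prepareForConditionMerging(conditionsArray, df), 'decision', df)
print(df)
#    a  b  c  d  e  decision
# 0  1  2  3  4  5      True
# 1  1  2  4  4  3      True
# 2  2  4  5  6  7     False
As an aside, np.logical_and.reduce(conditions) should give the same boolean column without the concat, if numpy is already imported.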

Pandas add a summary column that counts values that are not empty strings

I have a table that looks like this:
  A       B     C
1 foo
2 foobar  blah
3
I want to count up the non-empty columns from A, B and C to get a summary column like this:
  A       B     C    sum
1 foo                1
2 foobar  blah       2
3                    0
Here is how I'm trying to do it:
import pandas as pd
df = {'A': ["foo", "foobar", ""],
      'B': ["", "blah", ""],
      'C': ["", "", ""]}
df = pd.DataFrame(df)
print(df)
df['sum'] = df[['A', 'B', 'C']].notnull().sum(axis=1)
df['sum'] = (df[['A', 'B', 'C']] != "").sum(axis=1)
These last two lines are different ways to get what I want but they aren't working. Any suggestions?
df['sum'] = (df[['A', 'B', 'C']] != "").sum(axis=1)
Worked. Thanks for the assistance.
This one-liner worked for me :)
import numpy as np
df["sum"] = df.replace("", np.nan).T.count().reset_index().iloc[:, 1]
(count(axis=1) on the replaced frame is a simpler equivalent of transposing and counting.)

How to replace pd.NamedAgg with code compliant with pandas 0.24.2?

Hello, I am obliged to downgrade pandas to version 0.24.2. As a result, pd.NamedAgg is no longer recognized (it was only added in 0.25).
import pandas as pd
import numpy as np
agg_cols = ['A', 'B', 'C']
agg_df = df.groupby(agg_cols).agg(
    max_foo=pd.NamedAgg(column='Foo', aggfunc=np.max),
    min_foo=pd.NamedAgg(column='Foo', aggfunc=np.min)
).reset_index()
Can you please help me change my code so it works with version 0.24.2? Thank you.
agg_df = df.groupby(agg_cols)['Foo'].agg(
    [('max_foo', np.max), ('min_foo', np.min)]
).reset_index()
Sample:
df = pd.DataFrame({
    'A': list('a') * 6,
    'B': [4, 5, 4, 5, 5, 4],
    'C': [7] * 6,
    'Foo': [1, 3, 5, 7, 1, 0],
    'E': [5, 3, 6, 9, 2, 4],
    'F': list('aaabbb')
})
agg_cols = ['A', 'B', 'C']
agg_df = df.groupby(agg_cols).agg(
    max_foo=pd.NamedAgg(column='Foo', aggfunc=np.max),
    min_foo=pd.NamedAgg(column='Foo', aggfunc=np.min)
).reset_index()
print(agg_df)
A B C max_foo min_foo
0 a 4 7 5 0
1 a 5 7 7 1
Because only the single column Foo is processed, select Foo after groupby and pass tuples of new column names and aggregate functions:
agg_df = df.groupby(agg_cols)['Foo'].agg(
    [('max_foo', np.max), ('min_foo', np.min)]
).reset_index()
print(agg_df)
A B C max_foo min_foo
0 a 4 7 5 0
1 a 5 7 7 1
Another idea is to pass a dictionary of lists of aggregate functions and then flatten the resulting MultiIndex columns:
agg_df = df.groupby(agg_cols).agg({'Foo': ['max', 'min']})
agg_df.columns = [f'{b}_{a}' for a, b in agg_df.columns]
agg_df = agg_df.reset_index()
print(agg_df)
A B C max_foo min_foo
0 a 4 7 5 0
1 a 5 7 7 1
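The dictionary form also scales to several source columns, which the tuple form cannot (it requires selecting a single column first). A sketch using the E column from the sample df (my extension of the answer):
agg_df = df.groupby(agg_cols).agg({'Foo': ['max', 'min'], 'E': ['sum']})
agg_df.columns = [f'{b}_{a}' for a, b in agg_df.columns]
agg_df = agg_df.reset_index()
print(agg_df)
#    A  B  C  max_foo  min_foo  sum_E
# 0  a  4  7        5        0     15
# 1  a  5  7        7        1     14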

How to get all of the matching pandas DataFrame rows in which a grouping in one column includes a match in another column?

I have the following pandas dataframe:
>>> df = pd.DataFrame(
... {'A': ['foo', 'foo', 'bar', 'bar', 'baz', 'baz'],
... 'B': ['red', 'blue', 'yellow', 'green', 'grey', 'red']})
>>> df
A B
0 foo red
1 foo blue
2 bar yellow
3 bar green
4 baz grey
5 baz red
I want to filter for all of the rows whose value in column A also appears in a row where column B is 'red'. If I do a simple filter I get:
>>> df[df['B'] == 'red']
A B
0 foo red
5 baz red
But I want all of the rows for foo and baz, since at least one of their rows has 'red' in column B:
A B
0 foo red
1 foo blue
4 baz grey
5 baz red
You can first find all unique values of A where the condition holds:
print(df.loc[df['B'] == 'red', 'A'].unique())
['foo' 'baz']
Then build a second condition with isin and use boolean indexing:
print(df.A.isin(df.loc[df['B'] == 'red', 'A'].unique()))
0 True
1 True
2 False
3 False
4 True
5 True
Name: A, dtype: bool
print(df[df.A.isin(df.loc[df['B'] == 'red', 'A'].unique())])
A B
0 foo red
1 foo blue
4 baz grey
5 baz red
In this sample unique could be omitted, since there is only one 'red' value per group; with multiple matching rows per group, unique avoids passing duplicate values to isin.
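A more compact alternative (my sketch, using GroupBy.transform rather than the two-step approach above) keeps every group of A that contains at least one 'red':
mask = df.groupby('A')['B'].transform(lambda s: s.eq('red').any())
print(df[mask])
#      A     B
# 0  foo   red
# 1  foo  blue
# 4  baz  grey
# 5  baz   red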