pandas: aggregate array during groupby, equivalent of SQL's array_agg?

I've got this dataframe:
df1 = pd.DataFrame([
{ 'id': 1, 'spend': 60, 'store': 'Stockport' },
{ 'id': 2, 'spend': 68, 'store': 'Didsbury' },
{ 'id': 3, 'spend': 70, 'store': 'Stockport' },
{ 'id': 4, 'spend': 35, 'store': 'Didsbury' },
{ 'id': 5, 'spend': 16, 'store': 'Didsbury' },
{ 'id': 6, 'spend': 12, 'store': 'Didsbury' },
])
I've grouped it by store and got the total spend by store:
df.groupby("store").agg({'spend': 'sum'})\
.reset_index().sort_values("spend", ascending=False)
       store  spend
0   Didsbury    131
1  Stockport    130
Is there a way I can get the IDs for each store as a column in the grouped object? Like the equivalent of ARRAY_AGG in Postgres? So the desired output would be:
store spend ids
Didsbury 131 [2,4,5,6]
Stockport 130 [1,3]

We can use named aggregation, available since pandas >= 0.25.0.
Notice how we can directly name our output column "ids":
df1.groupby('store').agg(
    spend=('spend', 'sum'),
    ids=('id', list)
).reset_index()
store spend ids
0 Didsbury 131 [2, 4, 5, 6]
1 Stockport 130 [1, 3]
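If you also want the descending sort by spend from your original query, it can be chained on the end; a minimal sketch:
df1.groupby('store').agg(
    spend=('spend', 'sum'),
    ids=('id', list)
).reset_index().sort_values('spend', ascending=False)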

You can pass list as the aggregation function for the id column:
df = (df1.groupby("store").agg({'spend': 'sum', 'id': list})
         .reset_index()
         .sort_values("spend", ascending=False))
print (df)
store spend id
0 Didsbury 131 [2, 4, 5, 6]
1 Stockport 130 [1, 3]
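Any callable works in place of list here. For example, if you would rather have the ids as a single comma-separated string (closer to Postgres' string_agg than array_agg), a sketch along these lines should work:
df = (df1.groupby("store").agg({'spend': 'sum',
                                'id': lambda s: ','.join(map(str, s))})
         .reset_index()
         .sort_values("spend", ascending=False))
print (df)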

Related

RowNumber Window Query for Hiscores Ranking - Django

I'm trying to build a game hiscore view with rankings for my Django site, and I'm having some issues.
The query I have is the following:
row_number_rank = Window(
expression=RowNumber(),
partition_by=[F('score_type')],
order_by=F('score').desc()
)
hiscores = Hiscore.objects.annotate(rank=row_number_rank).values()
The query above works perfectly, and properly assigns each row a rank according to how it compares to other scores within each score type.
The result of this is the following:
{ 'id': 2, 'username': 'Bob', 'score_type': 'wins', 'score': 12, 'rank': 1 }
{ 'id': 1, 'username': 'John', 'score_type': 'wins', 'score': 5, 'rank': 2 }
{ 'id': 4, 'username': 'John', 'score_type': 'kills', 'score': 37, 'rank': 1 }
{ 'id': 3, 'username': 'John', 'score_type': 'kills', 'score': 5, 'rank': 2 }
{ 'id': 5, 'username': 'Bob', 'score_type': 'kills', 'score': 2, 'rank': 3 }
The issue comes in when I want to retrieve only a specific user's scores from the above results. If I append .filter(username='Bob'), the query is now:
row_number_rank = Window(
expression=RowNumber(),
partition_by=[F('score_type')],
order_by=F('score').desc()
)
hiscores = Hiscore.objects.annotate(rank=row_number_rank).filter(username='Bob').values()
Unexpectedly, adding this filter step has yielded the following incorrect results:
{ 'id': 2, 'username': 'Bob', 'score_type': 'wins', 'score': 12, 'rank': 1 }
{ 'id': 5, 'username': 'Bob', 'score_type': 'kills', 'score': 2, 'rank': 1 }
Randomly, the rank on the id=5 entry has decided to change to 1 instead of its correct value of 3.
Why would adding this filter step modify the values of the fields in the QuerySet, instead of just excluding the proper elements from it?
Thanks.
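This isn't actually random: the filter ends up in the SQL WHERE clause, and window functions are evaluated after WHERE, so ROW_NUMBER() is computed over Bob's rows only; within that filtered set his top kills score really is rank 1. One workaround (a sketch, not the only option) is to rank the full table first and then filter client-side in Python:
row_number_rank = Window(
    expression=RowNumber(),
    partition_by=[F('score_type')],
    order_by=F('score').desc(),
)
# Evaluate ranks over ALL rows, then filter in Python so the
# window is not narrowed by the WHERE clause.
all_ranked = Hiscore.objects.annotate(rank=row_number_rank).values()
bob_scores = [row for row in all_ranked if row['username'] == 'Bob']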

pandas row wise comparison and apply condition

This is my dataframe:
df = pd.DataFrame(
{
"name": ["bob_x", "mad", "jay_x", "bob_y", "jay_y", "joe"],
"score": [3, 5, 6, 2, 4, 1],
}
)
I want to compare the score of bob_x with bob_y and retain the row with the lowest score, and do the same for jay_x and jay_y. No change is required for mad and joe.
You can first split the names by _ and keep the first part, then groupby and keep the lowest value:
import pandas as pd
df = pd.DataFrame({"name": ["bob_x", "mad", "jay_x", "bob_y", "jay_y", "joe"],"score": [3, 5, 6, 2, 4, 1]})
df['name'] = df['name'].str.split('_').str[0]
df.groupby('name')['score'].min().reset_index()
Result:
  name  score
0  bob      2
1  jay      4
2  joe      1
3  mad      5
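If you instead need to keep the original rows (with their _x/_y suffixes), a variation on the same idea, starting from the unmodified df and using idxmin against a temporary grouping key, might look like this sketch:
df = pd.DataFrame({"name": ["bob_x", "mad", "jay_x", "bob_y", "jay_y", "joe"],
                   "score": [3, 5, 6, 2, 4, 1]})
# group on the stripped name, but keep the original rows
# whose score is the per-group minimum
key = df['name'].str.split('_').str[0]
print (df.loc[df.groupby(key)['score'].idxmin()])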

How to return a list into a dataframe based on matching index of other column

I have two dataframes: one made up of a column of numpy array lists, and the other with two columns. I am trying to match the elements in the first dataframe (df) against the index of df2 to get the two columns o1 and o2. I was wondering if I could get some input. Please note the string 'A1' in column o1 is repeated twice in df2, and as you may see in my desired output dataframe, the duplicates are removed in column o1.
import numpy as np
import pandas as pd
array_1 = np.array([[0, 2, 3], [3, 4, 6], [1, 2, 3, 6]], dtype=object)
# dataframe 1
df = pd.DataFrame({'A': array_1})
# dataframe 2
df2 = pd.DataFrame({'o1': ['A1', 'B1', 'A1', 'C1', 'D1', 'E1', 'F1'],
                    'o2': [15, 17, 18, 19, 20, 7, 8]})
# desired output
df_output = pd.DataFrame({'A': array_1,
                          'o1': [['A1', 'C1'], ['C1', 'D1', 'F1'], ['B1', 'A1', 'C1', 'F1']],
                          'o2': [[15, 18, 19], [19, 20, 8], [17, 18, 19, 8]]})
# Note: index 0 of df holds [0, 2, 3]; positions 0 and 2 of df2 share the
# same element 'A1', so the output keeps only one 'A1' after deduplicating.
I believe you can explode df['A'] and use that to extract information from df2, then finally join back to df:
s = df['A'].explode()
df_output = df.join(df2.loc[s].groupby(s.index).agg(lambda x: list(set(x))))
Output:
A o1 o2
0 [0, 2, 3] [C1, A1] [18, 19, 15]
1 [3, 4, 6] [F1, D1, C1] [8, 19, 20]
2 [1, 2, 3, 6] [F1, B1, C1, A1] [8, 17, 18, 19]
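One caveat: set() does not preserve order, which is why the lists above come out as [C1, A1] rather than the desired [A1, C1]. If order matters, deduplicating with dict.fromkeys (which keeps first-appearance order in Python 3.7+) is a possible tweak:
# deduplicate while preserving order of first appearance
df_output = df.join(df2.loc[s].groupby(s.index).agg(lambda x: list(dict.fromkeys(x))))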

Timeseries: Groupby and calculate variance

I have the following dataframe with timeseries data:
df = pd.DataFrame(columns = ['id', 'value'])
df['value'] =[9, 16, 10, 12, 11, 14]
df['id'] = [1, 1, 1, 2, 2, 2]
For each timeseries (defined by column 'id') I want to calculate the variance, to find timeseries that do not change at all or only very little.
The final dataframe should look like this:
df_end = pd.DataFrame(columns = ['id','value', 'var'])
df_end['value'] =[9, 16, 10, 12, 11, 14]
df_end['id'] = [1, 1, 1, 2, 2, 2]
df_end['var'] = [21, 21, 21, 2.3, 2.3, 2.3]
I tried:
df.groupby(df['id']).var()
which gives me the values, but I couldn't put them into the df in the right form. I am sure there is a handy function for this that I don't know about yet!
Thanks for helping out!
Use GroupBy.transform on the value column:
df['var'] = df.groupby('id')['value'].transform('var')
print (df)
id value var
0 1 9 14.333333
1 1 16 14.333333
2 1 10 14.333333
3 2 12 2.333333
4 2 11 2.333333
5 2 14 2.333333
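Note that pandas' var is the sample variance (ddof=1) by default, which is where the 14.333 for id 1 comes from. If you want the population variance instead, the ddof argument can be passed through transform (assuming a recent pandas version, which forwards keyword arguments to the aggregation):
df['var'] = df.groupby('id')['value'].transform('var', ddof=0)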

Convert list of dictionaries in a dataframe to separate dataframe

I want to convert a list of dictionaries already present in the dataset to a dataframe.
The dataset looks something like this.
[{'id': 35, 'name': 'Comedy'}]
How do I convert this list of dictionaries to a dataframe?
Thank you for your time!
I want to retrieve:
Comedy
from the list of dictionaries.
Use:
df = pd.DataFrame({'col':[[{'id': 35, 'name': 'Comedy'}],[{'id': 35, 'name': 'Western'}]]})
print (df)
col
0 [{'id': 35, 'name': 'Comedy'}]
1 [{'id': 35, 'name': 'Western'}]
df['new'] = df['col'].apply(lambda x: x[0].get('name'))
print (df)
col new
0 [{'id': 35, 'name': 'Comedy'}] Comedy
1 [{'id': 35, 'name': 'Western'}] Western
If multiple dicts per list are possible:
df = pd.DataFrame({'col': [[{'id': 35, 'name': 'Comedy'}, {'id': 4, 'name': 'Horror'}],
                           [{'id': 35, 'name': 'Western'}]]})
print (df)
col
0 [{'id': 35, 'name': 'Comedy'}, {'id': 4, 'name...
1 [{'id': 35, 'name': 'Western'}]
df['new'] = df['col'].apply(lambda x: [y.get('name') for y in x])
print (df)
col new
0 [{'id': 35, 'name': 'Comedy'}, {'id': 4, 'name... [Comedy, Horror]
1 [{'id': 35, 'name': 'Western'}] [Western]
And if you want to extract all values:
df1 = pd.concat([pd.DataFrame(x) for x in df['col']], ignore_index=True)
print (df1)
id name
0 35 Comedy
1 4 Horror
2 35 Western
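An alternative for this flatten-everything case is pd.json_normalize (top-level since pandas 1.0) applied to the exploded column; a sketch:
# one row per dict, then expand each dict into columns
df1 = pd.json_normalize(df['col'].explode().tolist())
print (df1)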