Converting some columns in groupby to multilevel in pandas

I have a groupby object that looks like the following dataframe:
import pandas as pd

df = pd.DataFrame({'user1': [2, 4, 21, 21], 'user2': [6, 13, 76, 76],
                   'param1': [0, 2, 0, 1], 'param2': ['x', 'a', 'a', 'd'],
                   'count': [1, 3, 2, 1]},
                  columns=['user1', 'user2', 'param1', 'param2', 'count'])
df = df.set_index(['user1', 'user2', 'param1', 'param2'])
which gives:
                           count
user1 user2 param1 param2
2     6     0      x           1
4     13    2      a           3
21    76    0      a           2
            1      d           1
I want to have something like this:
param1       0     1  2
param2       a  x  d  a
user1 user2
2     6      0  1  0  0
4     13     0  0  0  3
21    76     2  0  1  0

Use Series.unstack, then sort the columns with DataFrame.sort_index:
df = df['count'].unstack([2,3], fill_value=0).sort_index(axis=1)
print (df)
param1       0     1  2
param2       a  x  d  a
user1 user2
2     6      0  1  0  0
4     13     0  0  0  3
21    76     2  0  1  0
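If you start from the flat frame (before set_index), pivot_table gives the same shape in one call; this is just an equivalent sketch, not part of the original answer:
import pandas as pd

# same data as above, kept flat (no set_index)
flat = pd.DataFrame({'user1': [2, 4, 21, 21], 'user2': [6, 13, 76, 76],
                     'param1': [0, 2, 0, 1], 'param2': ['x', 'a', 'a', 'd'],
                     'count': [1, 3, 2, 1]})

# param1/param2 become a sorted MultiIndex on the columns axis
out = flat.pivot_table(index=['user1', 'user2'],
                       columns=['param1', 'param2'],
                       values='count', aggfunc='sum', fill_value=0)
print(out)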

Related

Dataframe within a Dataframe - to create new column

For the following dataframe:
import pandas as pd
df = pd.DataFrame({'list_A': [3, 3, 3, 3, 3,
                              2, 2, 2, 2, 2, 2, 2,
                              4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4]})
How can 'list_A' be manipulated to give 'list_B'?
Desired output:
    list_A  list_B
0        3       1
1        3       1
2        3       1
3        3       0
4        2       1
5        2       1
6        2       0
7        2       0
8        4       1
9        4       1
10       4       1
11       4       1
12       4       0
13       4       0
14       4       0
15       4       0
16       4       0
As you can see, if list_A holds the value 3, then the first 3 values of list_B are '1' and the rest are '0', until list_A changes value again.
Use GroupBy.cumcount:
df['list_B'] = df['list_A'].gt(df.groupby('list_A').cumcount()).astype(int)
print(df)
Output
list_A list_B
0 3 1
1 3 1
2 3 1
3 3 0
4 3 0
5 2 1
6 2 1
7 2 0
8 2 0
9 2 0
10 2 0
11 2 0
12 4 1
13 4 1
14 4 1
15 4 1
16 4 0
17 4 0
18 4 0
19 4 0
20 4 0
21 4 0
22 4 0
23 4 0
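To see why the comparison works, it helps to print the cumcount alongside the data (a small sketch using the same df):
# cumcount numbers the rows 0, 1, 2, ... within each list_A group;
# a row is flagged 1 while that counter is still below its list_A value
counter = df.groupby('list_A').cumcount()
print(pd.concat({'list_A': df['list_A'],
                 'counter': counter,
                 'list_B': df['list_A'].gt(counter).astype(int)}, axis=1))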
EDIT: if the same list_A value can appear in more than one separate run, group by consecutive blocks instead:
blocks = df['list_A'].ne(df['list_A'].shift()).cumsum()
df['list_B'] = df['list_A'].gt(df.groupby(blocks).cumcount()).astype(int)
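A quick demonstration of why the block id matters, on made-up data where the value 3 appears in two separate runs (hypothetical series, not from the question):
s = pd.Series([3, 3, 3, 3, 2, 2, 3, 3])   # 3 appears in two separate runs
blocks = s.ne(s.shift()).cumsum()          # 1,1,1,1,2,2,3,3 -> one id per run
list_B = s.gt(s.groupby(blocks).cumcount()).astype(int)
print(pd.concat({'list_A': s, 'block': blocks, 'list_B': list_B}, axis=1))
# grouping by s itself would keep counting across both runs of 3,
# so the second run would be flagged 0 instead of 1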

Using If-else to change values in Pandas

I have a pandas DataFrame consisting of three columns: ID, t, and ind1.
import pandas as pd
dat = {'ID': [1,1,1,1,2,2,2,3,3,3,3,4,4,4,5,5,6,6,6],
't': [0,1,2,3,0,1,2,0,1,2,3,0,1,2,0,1,0,1,2],
'ind1' : [1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0]
}
df = pd.DataFrame(dat, columns = ['ID', 't', 'ind1'])
print (df)
What I need to do is to create a new column (res) such that:
for every ID with ind1 == 0, res is zero;
for every ID with ind1 == 1, res = 1 on the row where t == max(t) within that ID, otherwise zero.
The anticipated output is the res column shown in the answers below.
Check with groupby and idxmax, then where with transform('all'):
df['res']=df.groupby('ID').t.transform('idxmax').where(df.groupby('ID').ind1.transform('all')).eq(df.index).astype(int)
df
Out[160]:
ID t ind1 res
0 1 0 1 0
1 1 1 1 0
2 1 2 1 0
3 1 3 1 1
4 2 0 0 0
5 2 1 0 0
6 2 2 0 0
7 3 0 0 0
8 3 1 0 0
9 3 2 0 0
10 3 3 0 0
11 4 0 1 0
12 4 1 1 0
13 4 2 1 1
14 5 0 1 0
15 5 1 1 1
16 6 0 0 0
17 6 1 0 0
18 6 2 0 0
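The one-liner is dense; split into named steps it reads like this (same logic, same df, just a sketch):
# index label of the max t within each ID, broadcast to every row of that ID
max_pos = df.groupby('ID')['t'].transform('idxmax')

# keep that label only for IDs whose ind1 is 1 on every row, otherwise NaN
max_pos = max_pos.where(df.groupby('ID')['ind1'].transform('all'))

# a row gets res = 1 exactly when its own index label equals the kept position
df['res'] = max_pos.eq(df.index).astype(int)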
This works on the assumption that the ID column is sorted:
import numpy as np

cond1 = df.ind1.eq(0)
cond2 = df.ind1.eq(1) & (df.t.eq(df.groupby("ID").t.transform("max")))
df["res"] = np.select([cond1, cond2], [0, 1], 0)
df
ID t ind1 res
0 1 0 1 0
1 1 1 1 0
2 1 2 1 0
3 1 3 1 1
4 2 0 0 0
5 2 1 0 0
6 2 2 0 0
7 3 0 0 0
8 3 1 0 0
9 3 2 0 0
10 3 3 0 0
11 4 0 1 0
12 4 1 1 0
13 4 2 1 1
14 5 0 1 0
15 5 1 1 1
16 6 0 0 0
17 6 1 0 0
18 6 2 0 0
Use groupby.apply:
df['res'] = (df.groupby('ID').apply(lambda x: x['ind1'].eq(1)&x['t'].eq(x['t'].max()))
.astype(int).reset_index(drop=True))
print(df)
ID t ind1 res
0 1 0 1 0
1 1 1 1 0
2 1 2 1 0
3 1 3 1 1
4 2 0 0 0
5 2 1 0 0
6 2 2 0 0
7 3 0 0 0
8 3 1 0 0
9 3 2 0 0
10 3 3 0 0
11 4 0 1 0
12 4 1 1 0
13 4 2 1 1
14 5 0 1 0
15 5 1 1 1
16 6 0 0 0
17 6 1 0 0
18 6 2 0 0
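For completeness, the same res column can be built without groupby.apply (and without numpy), by broadcasting the group maximum with transform; a sketch equivalent to the answers above:
# True on the row holding the largest t of its ID
is_last = df['t'].eq(df.groupby('ID')['t'].transform('max'))
df['res'] = (df['ind1'].eq(1) & is_last).astype(int)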

Pandas groupby with MultiIndex columns and different levels

I want to do a groupby on a MultiIndex dataframe, counting the occurrences for each column for every user2 in df:
>>> df
  user1 user2 count
              0     1  2
              a  x  d  a
0     2     6 0  1  0  0
1     4     6 0  0  0  3
2    21    76 2  0  1  0
3     5    18 0  0  0  0
Note that user1 and user2 are at the same level as count (side effect of merging).
Desired output:
  user2 count
        0     1  2
        a  x  d  a
0     6 0  1  0  1
1    76 1  0  0  0
3    18 0  0  0  0
I've tried
>>> df.groupby(['user2','count'])
but I get
ValueError: Grouper for 'count' not 1-dimensional
GENERATOR CODE:
df = pd.DataFrame({'user1':[2,4,21,21],'user2':[6,6,76,76],'param1':[0,2,0,1],'param2':['x','a','a','d'],'count':[1,3,2,1]}, columns=['user1','user2','param1','param2','count'])
df = df.set_index(['user1','user2','param1','param2'])
df = df.unstack([2,3]).sort_index(axis=1).reset_index()
df2 = pd.DataFrame({'user1':[2,5,21],'user2':[6,18,76]})
df2.columns = pd.MultiIndex.from_product([df2.columns, [''],['']])
final_df = df2.merge(df, on=['user1','user2'], how='outer').fillna(0)
IIUC, you want:
final_df.where(final_df>0).groupby('user2').count().drop('user1', axis=1).reset_index()
Output:
  user2 count
        0     1  2
        a  x  d  a
0     6 0  1  0  1
1    18 0  0  0  0
2    76 1  0  1  0
To avoid dropping columns, select only 'count' and change the function to sum:
final_df.where(final_df>0).groupby('user2').sum()[['count']].reset_index()
Output:
  user2 count
          0         1    2
          a    x    d    a
0     6 0.0  1.0  0.0  3.0
1    18 0.0  0.0  0.0  0.0
2    76 2.0  0.0  1.0  0.0
To also avoid dropping rows where user2 itself equals zero (the mask above would turn those into NaN), apply where to the 'count' columns only:
final_df[['count']].where(final_df[['count']]>0)\
.groupby(final_df.user2).sum().reset_index()
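Another way around the ValueError, assuming the same final_df: move the two key columns into the row index first, then the 'count' block can be grouped by the user2 level directly (a sketch, mirroring the where(...).count() logic of the first answer):
# the key columns are full tuples in the 3-level column MultiIndex
tmp = final_df.set_index([('user1', '', ''), ('user2', '', '')])
tmp.index.names = ['user1', 'user2']

# count the positive cells of the 'count' block per user2
out = tmp['count'].gt(0).groupby(level='user2').sum().reset_index()
print(out)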

Pandas: The best way to create new Frame by specific criteria

I have a DataFrame:
df = pd.DataFrame({'id':[1,1,1,1,2,2,2,3,3,3,4,4],
'sex': [0,0,0,1,0,0,0,1,1,0,1,1]})
id sex
0 1 0
1 1 0
2 1 0
3 1 1
4 2 0
5 2 0
6 2 0
7 3 1
8 3 1
9 3 0
10 4 1
11 4 1
I want to get a new DataFrame containing only the ids that have both sex values.
So I want to get something like this:
id sex
0 1 0
1 1 0
2 1 0
3 1 1
4 3 1
5 3 1
6 3 0
Using groupby and filter with the required condition:
In [2952]: df.groupby('id').filter(lambda x: set(x.sex) == set([0,1]))
Out[2952]:
id sex
0 1 0
1 1 0
2 1 0
3 1 1
7 3 1
8 3 1
9 3 0
Also,
In [2953]: df.groupby('id').filter(lambda x: all([any(x.sex == v) for v in [0,1]]))
Out[2953]:
id sex
0 1 0
1 1 0
2 1 0
3 1 1
7 3 1
8 3 1
9 3 0
Use drop_duplicates on both columns and then get the size of each id with value_counts.
Then filter the rows by boolean indexing with isin:
s = df.drop_duplicates()['id'].value_counts()
print (s)
3 2
1 2
4 1
2 1
Name: id, dtype: int64
df = df[df['id'].isin(s.index[s == 2])]
print (df)
id sex
0 1 0
1 1 0
2 1 0
3 1 1
7 3 1
8 3 1
9 3 0
One more:)
df.groupby('id').filter(lambda x: x['sex'].nunique()>1)
id sex
0 1 0
1 1 0
2 1 0
3 1 1
7 3 1
8 3 1
9 3 0
Use isin()
Something like this:
df = pd.DataFrame({'id':[1,1,1,1,2,2,2,3,3,3,4,4],
'sex': [0,0,0,1,0,0,0,1,1,0,1,1]})
male = df[df['sex'] == 0]
male = male['id']
female = df[df['sex'] == 1]
female = female['id']
df = df[(df['id'].isin(male)) & (df['id'].isin(female))]
print(df)
Output:
id sex
0 1 0
1 1 0
2 1 0
3 1 1
7 3 1
8 3 1
9 3 0
Or you can try this
m=df.groupby('id')['sex'].nunique().eq(2)
df.loc[df.id.isin(m[m].index)]
Out[112]:
id sex
0 1 0
1 1 0
2 1 0
3 1 1
7 3 1
8 3 1
9 3 0
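A filter-free variant of the nunique idea: transform broadcasts the per-id count back to the rows, so a plain boolean mask does the job (a sketch, same output as above):
# keep rows whose id has both sex values
mask = df.groupby('id')['sex'].transform('nunique').eq(2)
print(df[mask])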

Pandas: Grouping by values when a column is a list

I have a DataFrame like this one:
df = pd.DataFrame({'type':[[1,3],[1,2,3],[2,3]], 'value':[4,5,6]})
type | value
-------------
1,3 | 4
1,2,3| 5
2,3 | 6
I would like to group by the different values in the 'type' column so for example the sum of value would be:
type | sum
------------
1 | 9
2 | 11
3 | 15
Thanks for your help!
You first need to reshape the DataFrame by the type column with the DataFrame constructor, stack and reset_index. Then cast the type column to int and finally groupby with aggregating sum:
df1 = pd.DataFrame(df['type'].values.tolist(), index = df['value']) \
.stack() \
.reset_index(name='type')
df1.type = df1.type.astype(int)
print (df1)
value level_1 type
0 4 0 1
1 4 1 3
2 5 0 1
3 5 1 2
4 5 2 3
5 6 0 2
6 6 1 3
print (df1.groupby('type', as_index=False)['value'].sum())
type value
0 1 9
1 2 11
2 3 15
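On pandas 0.25 or newer, DataFrame.explode does the same reshaping in one step (a sketch with the same df):
# one row per list element, then an ordinary groupby-sum
out = df.explode('type').groupby('type', as_index=False)['value'].sum()
print(out)
#    type  value
# 0     1      9
# 1     2     11
# 2     3     15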
Another solution with join:
df1 = pd.DataFrame(df['type'].values.tolist()) \
.stack() \
.reset_index(level=1, drop=True) \
.rename('type') \
.astype(int)
print (df1)
0 1
0 3
1 1
1 2
1 3
2 2
2 3
Name: type, dtype: int32
df2 = df[['value']].join(df1)
print (df2)
value type
0 4 1
0 4 3
1 5 1
1 5 2
1 5 3
2 6 2
2 6 3
print (df2.groupby('type', as_index=False)['value'].sum())
type value
0 1 9
1 2 11
2 3 15
A version with a Series: select the first level of the index with get_level_values, convert it to a Series with to_series and aggregate sum. Finally reset_index and rename the index column to type:
df1 = pd.DataFrame(df['type'].values.tolist(), index = df['value']).stack().astype(int)
print (df1)
value
4 0 1
1 3
5 0 1
1 2
2 3
6 0 2
1 3
dtype: int32
print (df1.index.get_level_values(0)
.to_series()
.groupby(df1.values)
.sum()
.reset_index()
.rename(columns={'index':'type'}))
type value
0 1 9
1 2 11
2 3 15
Edit (from a comment) - a slightly modified version of the second solution, using DataFrame.pop:
df = pd.DataFrame({'type':[[1,3],[1,2,3],[2,3]],
'value1':[4,5,6],
'value2':[1,2,3],
'value3':[4,6,1]})
print (df)
type value1 value2 value3
0 [1, 3] 4 1 4
1 [1, 2, 3] 5 2 6
2 [2, 3] 6 3 1
df1 = pd.DataFrame(df.pop('type').values.tolist()) \
.stack() \
.reset_index(level=1, drop=True) \
.rename('type') \
.astype(int)
print (df1)
0 1
0 3
1 1
1 2
1 3
2 2
2 3
Name: type, dtype: int32
print (df.join(df1).groupby('type', as_index=False).sum())
type value1 value2 value3
0 1 9 3 10
1 2 11 5 7
2 3 15 6 11
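The multi-column edit can also be written with explode (pandas 0.25+), assuming df is re-created as just above so the type column is still present (pop removed it); a sketch for comparison:
out = (df.explode('type')
         .groupby('type', as_index=False)[['value1', 'value2', 'value3']]
         .sum())
print(out)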