difference between rows in a multi-level dataframe - pandas

I am trying to find the difference between two rows in a multi-level dataset by iterating over values within a certain class. I have been trying different techniques from tutorials, as I am still new to the power of Python/pandas.
What I am trying to do is find the difference between the scores of the teacher and each student in a given class.
dataframe:
Class,Name,Reference,stats
X,SHE,student,30
X,GHE,student,20
X,GMK,student,10
X,JKO,teacher,50
Y,HHH,student,20
Y,KLP,teacher,30
Output:
Class,teacher,student,difference
X,JKO,SHE,20
X,JKO,GHE,30
X,JKO,GMK,40
Y,KLP,HHH,10
Can anyone guide me in the right direction? Note that there can be more than one teacher in a class.
Thank you

Just split your dataset into two data frames, one for students and one for teachers, then merge them on Class.
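For a self-contained run, here is a minimal reconstruction of the question's sample frame (a sketch; values copied from the question):
import pandas as pd

df = pd.DataFrame({
    'Class':     ['X', 'X', 'X', 'X', 'Y', 'Y'],
    'Name':      ['SHE', 'GHE', 'GMK', 'JKO', 'HHH', 'KLP'],
    'Reference': ['student', 'student', 'student', 'teacher', 'student', 'teacher'],
    'stats':     [30, 20, 10, 50, 20, 30],
})
With that in place: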
students = df[df.Reference == 'student'][['Class','Name','stats']]
teachers = df[df.Reference == 'teacher'][['Class','Name','stats']]
new_df = students.merge(teachers, on='Class', suffixes=('_student','_teacher'))
new_df['difference'] = new_df.stats_teacher - new_df.stats_student
print(new_df)
Class Name_student stats_student Name_teacher stats_teacher difference
0 X SHE 30 JKO 50 20
1 X GHE 20 JKO 50 30
2 X GMK 10 JKO 50 40
3 Y HHH 20 KLP 30 10

Use:
print (df)
Class Name Reference stats
0 X SHE student 30
1 X GHE student 20
2 X GMK student 10
3 X JKO teacher 50
4 X ABC teacher 100 <- added one new row for more general data (a second teacher in class X)
5 Y HHH student 20
6 Y KLP teacher 30
df = (df.query("Reference == 'teacher'")
        .merge(df.query("Reference == 'student'"), on='Class', suffixes=('_t','_s'))
        .assign(difference=lambda x: x['stats_t'] - x['stats_s'])
        .drop(['Reference_s','Reference_t','stats_s','stats_t'], axis=1)
        .rename(columns={'Name_s':'student','Name_t':'teacher'})
      )
print (df)
Class teacher student difference
0 X JKO SHE 20
1 X JKO GHE 30
2 X JKO GMK 40
3 X ABC SHE 70
4 X ABC GHE 80
5 X ABC GMK 90
6 Y KLP HHH 10
Explanation:
Filter the DataFrame into teacher and student rows with query.
Then merge on the Class column to get all teacher-student combinations per class.
Then assign a new difference column from the subtraction.
Remove the unnecessary columns with drop.
Finally, rename the columns.

Below is a solution with many for loops, so there should be a more optimized solution than this. (I will try to improve it later.)
import pandas as pd

df = pd.read_csv("student.csv")
ref = df[df['Reference'] == 'teacher'].index.values.astype(int)
df['TeacherName'] = 'NA'
df['Difference'] = 0
for i in range(len(ref)):
    if i == 0:
        for j in range(ref[i] + 1):
            df.loc[j, 'TeacherName'] = df.loc[ref[i], 'Name']
            df.loc[j, 'Difference'] = df.loc[ref[i], 'stats'] - df.loc[j, 'stats']
    else:
        for j in range(ref[i-1] + 1, ref[i]):
            df.loc[j, 'TeacherName'] = df.loc[ref[i], 'Name']
            df.loc[j, 'Difference'] = df.loc[ref[i], 'stats'] - df.loc[j, 'stats']
df = df[~df.index.isin(ref)]
I'm collecting the index of every row where df['Reference'] == 'teacher' into an array named ref; those teacher rows are dropped from df after the loops. Note that .loc is used for the assignments to avoid chained-indexing warnings.

Related

Trying to mark first encounter in every group pandas

I have a df with 5 columns; the relevant ones, group and type, are shown in the output below.
What I'm trying to do is mark the first customer interaction after talking to a human in every specific group.
Hopefully the outcome would look like the Output further down.
What I have tried is shifting the type column to put the previous row next to type, to check whether a row is customer and the previous row is human. However, I can't figure out a grouping option to get the min index for each group for each occurrence.
This works:
k = pd.DataFrame(
    df.groupby('group')
      .apply(lambda g: (g['type'].eq('customer') & g['type'].shift(1).eq('human'))
                       .pipe(lambda x: [x.idxmax(), x[::-1].idxmax()]))
      .tolist()
)
df['First'] = ''
df['Last'] = ''
df.loc[k[0], 'First'] = 'F'
df.loc[k[1], 'Last'] = 'L'
Output:
>>> df
group type First Last
0 x bot
1 x customer
2 x bot
3 x customer
4 x human
5 x customer F
6 x human
7 x customer L
8 y bot
9 y customer
10 y bot
11 y customer
12 y human
13 y customer F
14 y human
15 y customer L
16 z bot
17 z customer
18 z bot
19 z customer
20 z human
21 z customer F
22 z human
23 z customer L
24 z customer
25 z customer
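For readability, the same logic can be unrolled into a step-by-step sketch (assuming the same df with group and type columns):
# flag 'customer' rows that directly follow a 'human' row within each group
mask = df['type'].eq('customer') & df.groupby('group')['type'].shift().eq('human')
hit_idx = df.index.to_series()[mask]          # index labels of qualifying rows
first = hit_idx.groupby(df['group']).min()    # first hit per group
last = hit_idx.groupby(df['group']).max()     # last hit per group
df['First'] = ''
df['Last'] = ''
df.loc[first, 'First'] = 'F'
df.loc[last, 'Last'] = 'L'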

Pandas column merging on condition

This is my pandas df:
Id Protein A_Egg B_Meat C_Milk Category
A 10 10 20 0 egg
B 20 10 0 10 milk
C 20 10 10 10 meat
D 25 20 10 0 egg
I wish to add the value of the matching nutrient column to the Protein column, based on "Category".
My desired output is:
Id Protein_final
A 20
B 30
C 30
D 45
Ideally, I would show how I am approaching this, but I am frankly clueless!
EDIT: Also, how to handle the case where Category is blank or does not match one of the columns? (In that case the final value should stay the same as the initial value in the Protein column.)
Use DataFrame.lookup with some preprocessing: remove everything before the _ in the column names and lowercase them, then add the looked-up values to the Protein column:
arr = df.rename(columns=lambda x: x.split('_')[-1].lower()).lookup(df.index, df['Category'])
df['Protein'] += arr
print (df)
Id Protein A_Egg B_Meat C_Milk Category
0 A 20 10 20 0 egg
1 B 30 10 0 10 milk
2 C 30 10 10 10 meat
3 D 45 20 10 0 egg
If you need only the 2 columns at the end:
df = df[['Id','Protein']]
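Note that DataFrame.lookup was deprecated in pandas 1.2 and removed in 2.0. Here is a sketch of the same row-wise lookup with plain NumPy indexing (starting again from the original frame), which also covers the EDIT case, since a blank or unmatched Category leaves Protein unchanged:
import numpy as np

renamed = df.rename(columns=lambda x: x.split('_')[-1].lower())
# column position of each row's category; get_indexer returns -1 for
# blank/unmatched categories
pos = renamed.columns.get_indexer(df['Category'])
vals = renamed.to_numpy()[np.arange(len(df)), pos]
# rows with pos == -1 add 0, i.e. keep the original Protein value
df['Protein'] += np.where(pos >= 0, vals, 0)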
You can melt the dataframe, compute Protein_final as the sum of Protein and the melted value, filter for rows where Category equals the variable column, and keep the final columns:
(
    df
    .melt(["Id", "Protein", "Category"])
    .assign(variable=lambda x: x.variable.str[2:].str.lower(),
            Protein_final=lambda x: x.Protein + x.value)
    .query("Category == variable")
    .filter(["Id", "Protein_final"])
)
Id Protein_final
0 A 20
3 D 45
6 C 30
9 B 30

How to *multiply* (for lack of a better term) two dataframes [duplicate]

The contents of this post were originally meant to be a part of
Pandas Merging 101,
but due to the nature and size of the content required to fully do
justice to this topic, it has been moved to its own QnA.
Given two simple DataFrames;
left = pd.DataFrame({'col1' : ['A', 'B', 'C'], 'col2' : [1, 2, 3]})
right = pd.DataFrame({'col1' : ['X', 'Y', 'Z'], 'col2' : [20, 30, 50]})
left
col1 col2
0 A 1
1 B 2
2 C 3
right
col1 col2
0 X 20
1 Y 30
2 Z 50
The cross product of these frames can be computed, and will look something like:
A 1 X 20
A 1 Y 30
A 1 Z 50
B 2 X 20
B 2 Y 30
B 2 Z 50
C 3 X 20
C 3 Y 30
C 3 Z 50
What is the most performant method of computing this result?
Let's start by establishing a benchmark. The easiest method for solving this is using a temporary "key" column:
pandas <= 1.1.X
def cartesian_product_basic(left, right):
    return (
        left.assign(key=1).merge(right.assign(key=1), on='key').drop('key', 1))
cartesian_product_basic(left, right)
pandas >= 1.2
left.merge(right, how="cross") # implements the technique above
col1_x col2_x col1_y col2_y
0 A 1 X 20
1 A 1 Y 30
2 A 1 Z 50
3 B 2 X 20
4 B 2 Y 30
5 B 2 Z 50
6 C 3 X 20
7 C 3 Y 30
8 C 3 Z 50
How this works is that both DataFrames are assigned a temporary "key" column with the same value (say, 1). merge then performs a many-to-many JOIN on "key".
While the many-to-many JOIN trick works for reasonably sized DataFrames, you will see relatively lower performance on larger data.
A faster implementation will require NumPy. Here are some famous NumPy implementations of 1D cartesian product. We can build on some of these performant solutions to get our desired output. My favourite, however, is #senderle's first implementation.
import numpy as np

def cartesian_product(*arrays):
    la = len(arrays)
    dtype = np.result_type(*arrays)
    arr = np.empty([len(a) for a in arrays] + [la], dtype=dtype)
    for i, a in enumerate(np.ix_(*arrays)):
        arr[..., i] = a
    return arr.reshape(-1, la)
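A quick sanity check of the helper on two small arrays (a sketch):
cartesian_product(np.array([1, 2]), np.array([3, 4]))
# array([[1, 3],
#        [1, 4],
#        [2, 3],
#        [2, 4]])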
Generalizing: CROSS JOIN on Unique or Non-Unique Indexed DataFrames
Disclaimer
These solutions are optimised for DataFrames with non-mixed scalar dtypes. If dealing with mixed dtypes, use at your own risk!
This trick will work on any kind of DataFrame. We compute the cartesian product of the DataFrames' numeric indices using the aforementioned cartesian_product, use this to reindex the DataFrames, and stack the two reindexed value arrays column-wise:
def cartesian_product_generalized(left, right):
    la, lb = len(left), len(right)
    idx = cartesian_product(np.ogrid[:la], np.ogrid[:lb])
    return pd.DataFrame(
        np.column_stack([left.values[idx[:,0]], right.values[idx[:,1]]]))
cartesian_product_generalized(left, right)
0 1 2 3
0 A 1 X 20
1 A 1 Y 30
2 A 1 Z 50
3 B 2 X 20
4 B 2 Y 30
5 B 2 Z 50
6 C 3 X 20
7 C 3 Y 30
8 C 3 Z 50
np.array_equal(cartesian_product_generalized(left, right),
cartesian_product_basic(left, right))
True
And, along similar lines,
left2 = left.copy()
left2.index = ['s1', 's2', 's1']
right2 = right.copy()
right2.index = ['x', 'y', 'y']
left2
col1 col2
s1 A 1
s2 B 2
s1 C 3
right2
col1 col2
x X 20
y Y 30
y Z 50
np.array_equal(cartesian_product_generalized(left2, right2),
               cartesian_product_basic(left2, right2))
True
This solution can generalise to multiple DataFrames. For example,
def cartesian_product_multi(*dfs):
    idx = cartesian_product(*[np.ogrid[:len(df)] for df in dfs])
    return pd.DataFrame(
        np.column_stack([df.values[idx[:,i]] for i,df in enumerate(dfs)]))
cartesian_product_multi(*[left, right, left]).head()
   0  1  2   3  4  5
0  A  1  X  20  A  1
1  A  1  X  20  B  2
2  A  1  X  20  C  3
3  A  1  Y  30  A  1
4  A  1  Y  30  B  2
Further Simplification
A simpler solution not involving #senderle's cartesian_product is possible when dealing with just two DataFrames. Using np.broadcast_arrays, we can achieve almost the same level of performance.
def cartesian_product_simplified(left, right):
    la, lb = len(left), len(right)
    ia2, ib2 = np.broadcast_arrays(*np.ogrid[:la,:lb])
    return pd.DataFrame(
        np.column_stack([left.values[ia2.ravel()], right.values[ib2.ravel()]]))
np.array_equal(cartesian_product_simplified(left, right),
               cartesian_product_basic(left, right))
True
Performance Comparison
Benchmarking these solutions on some contrived DataFrames with unique indices, we have the following (benchmark plot omitted here).
Do note that timings may vary based on your setup, data, and choice of cartesian_product helper function as applicable.
Performance Benchmarking Code
This is the timing script. All functions called here are defined above.
from timeit import timeit
import pandas as pd
import matplotlib.pyplot as plt

res = pd.DataFrame(
    index=['cartesian_product_basic', 'cartesian_product_generalized',
           'cartesian_product_multi', 'cartesian_product_simplified'],
    columns=[1, 10, 50, 100, 200, 300, 400, 500, 600, 800, 1000, 2000],
    dtype=float
)

for f in res.index:
    for c in res.columns:
        # print(f, c)
        left2 = pd.concat([left] * c, ignore_index=True)
        right2 = pd.concat([right] * c, ignore_index=True)
        stmt = '{}(left2, right2)'.format(f)
        setp = 'from __main__ import left2, right2, {}'.format(f)
        res.at[f, c] = timeit(stmt, setp, number=5)

ax = res.div(res.min()).T.plot(loglog=True)
ax.set_xlabel("N")
ax.set_ylabel("time (relative)")
plt.show()
Continue Reading
Jump to other topics in Pandas Merging 101 to continue learning:
Merging basics - basic types of joins
Index-based joins
Generalizing to multiple DataFrames
Cross join *
* you are here
Since pandas 1.2.0, merge has the option how='cross':
left.merge(right, how='cross')
Using itertools.product and recreating the rows of the dataframe:
import itertools
l = list(itertools.product(left.values.tolist(), right.values.tolist()))
pd.DataFrame(list(map(lambda x: sum(x, []), l)))
0 1 2 3
0 A 1 X 20
1 A 1 Y 30
2 A 1 Z 50
3 B 2 X 20
4 B 2 Y 30
5 B 2 Z 50
6 C 3 X 20
7 C 3 Y 30
8 C 3 Z 50
Here's an approach with a triple concat:
m = pd.concat([pd.concat([left]*len(right)).sort_index().reset_index(drop=True),
               pd.concat([right]*len(left)).reset_index(drop=True)], axis=1)
col1 col2 col1 col2
0 A 1 X 20
1 A 1 Y 30
2 A 1 Z 50
3 B 2 X 20
4 B 2 Y 30
5 B 2 Z 50
6 C 3 X 20
7 C 3 Y 30
8 C 3 Z 50
One option is with expand_grid from pyjanitor:
# pip install pyjanitor
import pandas as pd
import janitor as jn
others = {'left':left, 'right':right}
jn.expand_grid(others = others)
left right
col1 col2 col1 col2
0 A 1 X 20
1 A 1 Y 30
2 A 1 Z 50
3 B 2 X 20
4 B 2 Y 30
5 B 2 Z 50
6 C 3 X 20
7 C 3 Y 30
8 C 3 Z 50
I think the simplest way would be to add a dummy column to each data frame, do an inner merge on it, and then drop that dummy column from the resulting cartesian dataframe:
left['dummy'] = 'a'
right['dummy'] = 'a'
cartesian = left.merge(right, how='inner', on='dummy')
del cartesian['dummy']
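Note that this mutates left and right in place; a non-mutating variant of the same trick (a sketch) builds the keyed copies with assign:
cartesian = (left.assign(dummy='a')
                 .merge(right.assign(dummy='a'), how='inner', on='dummy')
                 .drop(columns='dummy'))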

Pandas groupby sort each group values and order dataframe groups based on max of each group

I have a dataset containing 3 columns. I'm trying to group the records and print each group in sorted order (based on the highest value in each group); the records within each group also have to be sorted.
The dataset looks like this:
key1,key2,val
b,y,21
c,y,25
c,z,10
b,x,20
b,z,5
c,x,17
a,x,15
a,y,18
a,z,100
df=pd.read_csv('/tmp/hello.csv')
df['max'] = df.groupby(['key1'])['val'].transform('max')
dff=df.sort_values(['max', 'val'], ascending=False).drop('max', axis=1)
I'm applying transform as it works on a per-group basis, and then sorting the values.
The above code results in my desired dataframe:
a,z,100
a,y,18
a,x,15
c,y,25
c,x,17
c,z,10
b,y,21
b,x,20
b,z,5
But the same code fails for the dataset below.
key1,key2,val
b,y,10
c,y,10
c,z,10
b,x,2
b,z,2
c,x,2
a,x,2
a,y,2
a,z,2
Below is the desired output
key1,key2,val
c,y,10
c,z,10
c,x,2
b,y,10
b,x,2
b,z,2
a,x,2
a,y,2
a,z,2
Please help me in properly grouping and sorting the dataframe for my scenario.
Add the column key1 to sort_values, because the second DataFrame has multiple maximum values of 10 across groups, so sorting by the max alone cannot distinguish the groups:
df['max'] = df.groupby(['key1'])['val'].transform('max')
dff=df.sort_values(['max','key1', 'val'], ascending=False).drop('max', axis=1)
print (dff)
key1 key2 val
8 a z 100
7 a y 18
6 a x 15
1 c y 25
5 c x 17
2 c z 10
0 b y 21
3 b x 20
4 b z 5
The same code applied to the second dataset:
df['max'] = df.groupby(['key1'])['val'].transform('max')
dff = df.sort_values(['max', 'key1', 'val'], ascending=False).drop('max', axis=1)
print (dff)
key1 key2 val
1 c y 10
2 c z 10
5 c x 2
0 b y 10
3 b x 2
4 b z 2
6 a x 2
7 a y 2
8 a z 2
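As a quick sanity check (a sketch using the dff from above), you can verify that each key1 group stays contiguous after the tie-broken sort:
# the number of contiguous runs of key1 should equal the number of groups
runs = (dff['key1'] != dff['key1'].shift()).cumsum().nunique()
assert runs == dff['key1'].nunique()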

Groupby on two columns with bins(ranges) on one of them in Pandas Dataframe

I am trying to segregate my data into buckets based on certain user attributes, and I would like to see counts in each of the buckets. For this I have imported the data into a pandas DataFrame.
I have data with user city, kids' age, and a unique user id. I would like to know the count of users who reside in city A and have kids in the age group 0-5.
Sample Data frame looks something like this:
city kids_age user_id
A 10 1
B 4 2
A 4 3
C 8 4
A 3 5
Expected Output:
city bin count
A 0-5 2
5-10 1
B 0-5 1
5-10 0
C 0-5 0
5-10 1
I tried a groupby on the two columns city and kids_age:
user_details_df_cropped_1.groupby(['city', 'kids_age']).count()
It gave me an output that looks something like this:
city kids_age user_id count
A 10 1 1
4 3 1
3 5 1
B 4 2 1
C 8 4 1
It returns the users grouped by city, but not really by kids_age bins (ranges). What am I missing here? Appreciate the help!
Use cut for binning and pass the result to DataFrame.groupby; add the missing zero-count rows with DataFrame.unstack(fill_value=0) followed by DataFrame.stack, and last convert to a DataFrame with Series.reset_index:
bins = [0,5,10]
labels = ['{}-{}'.format(i, j) for i, j in zip(bins[:-1], bins[1:])]
b = pd.cut(df['kids_age'], bins=bins, labels=labels, include_lowest=True)
df = (df.groupby(['city', b])
        .size()
        .unstack(fill_value=0)
        .stack()
        .reset_index(name='count'))
print (df)
city kids_age count
0 A 0-5 2
1 A 5-10 1
2 B 0-5 1
3 B 5-10 0
4 C 0-5 0
5 C 5-10 1
Another solution with DataFrame.reindex and MultiIndex.from_product to add the missing rows filled with 0:
bins = [0,5,10]
labels = ['{}-{}'.format(i, j) for i, j in zip(bins[:-1], bins[1:])]
b = pd.cut(df['kids_age'], bins=bins, labels=labels, include_lowest=True)
mux = pd.MultiIndex.from_product([df['city'].unique(), labels], names=['city','kids_age'])
df = (df.groupby(['city', b])
        .size()
        .reindex(mux, fill_value=0)
        .reset_index(name='count'))
print (df)
city kids_age count
0 A 0-5 2
1 A 5-10 1
2 B 0-5 1
3 B 5-10 0
4 C 0-5 0
5 C 5-10 1
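For comparison, pd.crosstab builds the same zero-filled count table in one step (a sketch assuming the original df and the binned series b from above):
out = (pd.crosstab(df['city'], b)
         .stack()
         .reset_index(name='count'))
print(out)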