How to keep index in pandas pivot table - pandas

Suppose I create a pandas pivot table:
adults_per_hh = pd.pivot_table(data, index=["hh_id"], values=["adult"], aggfunc=np.sum)
adults_per_hh.shape
(1000,1)
I want to keep hh_id as a column in addition to adult. What is the most efficient way to do this?

I think you need reset_index if you use pivot_table, because the first column becomes the index:
print (data)
   adult  hh_id
0      4      1
1      5      1
2      6      3
3      1      2
4      2      2
print (pd.pivot_table(data,index=["hh_id"],values=["adult"],aggfunc=np.sum))
       adult
hh_id
1          9
2          3
3          6
adults_per_hh = pd.pivot_table(data, index=["hh_id"], values=["adult"], aggfunc=np.sum).reset_index()
print (adults_per_hh)
   hh_id  adult
0      1      9
1      2      3
2      3      6
Another solution is to use groupby and aggregate with sum:
adults_per_hh = data.groupby("hh_id")["adult"].sum().reset_index()
print (adults_per_hh)
   hh_id  adult
0      1      9
1      2      3
2      3      6
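You can also skip the extra reset_index call entirely by telling groupby not to set the index at all; this is the variant used in the timings below:
adults_per_hh = data.groupby("hh_id", as_index=False)["adult"].sum()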
Timings:
#random dataframe
np.random.seed(100)
N = 10000000
data = pd.DataFrame(np.random.randint(50, size=(N,2)), columns=['hh_id','adult'])
#[10000000 rows x 2 columns]
print (data)
In [60]: %timeit (pd.pivot_table(data,index=["hh_id"],values=["adult"],aggfunc=np.sum).reset_index())
1 loop, best of 3: 384 ms per loop
In [61]: %timeit (data.groupby("hh_id", as_index=False)["adult"].sum())
1 loop, best of 3: 381 ms per loop
In [62]: %timeit (data.groupby("hh_id")["adult"].sum().reset_index())
1 loop, best of 3: 355 ms per loop
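As an aside, recent pandas releases (2.x) emit a FutureWarning when a raw numpy function like np.sum is passed as aggfunc and suggest the string alias instead; a minimal variant, assuming a newer pandas:
adults_per_hh = pd.pivot_table(data, index=["hh_id"], values=["adult"], aggfunc="sum").reset_index()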

Related

assign min values based on iloc and its range from another two columns

df
          A  idx1  idx2
2022/1/1  0     2     4
2022/1/2  1     1     3
2022/1/3  2     0     3
2022/1/4  3     3     4
2022/1/5  4     0     4
expected df
          A  idx1  idx2  lowest
2022/1/1  0     2     4       2
2022/1/2  1     1     3       1
2022/1/3  2     0     3       0
2022/1/4  3     3     4       3
2022/1/5  4     0     4       0
Goal
Assign a lowest column computed from column A using iloc, where the start index comes from idx1.values and the end index from idx2.values, as below:
df['lowest']=df.A.iloc[df.idx1.values:df.idx2.values].min()
But I get TypeError: cannot do positional indexing on DatetimeIndex with these indexers.
I don't want to change the original index. And since there could be millions of rows, speed matters, so a numpy-based method is welcome.
The fastest way I can think of is the following; when I timed it, it was about 10 times faster than a normal apply.
Convert column A to a numpy array first, then still use apply along axis=1 but with raw=True, which passes each row as a numpy array. See the documentation:
the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance.
arr = df['A'].to_numpy()
df['lowest'] = df.apply(lambda row: arr[row[1]:row[2]].min(), axis=1, raw=True)
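A minimal, self-contained sketch of this approach; the frame below is a reconstruction of the question's example (the data and DatetimeIndex are assumptions taken from the tables above):
import pandas as pd

# hypothetical reconstruction of the question's frame
df = pd.DataFrame(
    {'A': [0, 1, 2, 3, 4], 'idx1': [2, 1, 0, 3, 0], 'idx2': [4, 3, 3, 4, 4]},
    index=pd.date_range('2022-01-01', periods=5),
)

arr = df['A'].to_numpy()
# raw=True hands each row to the lambda as a plain ndarray: row[1] is idx1, row[2] is idx2
df['lowest'] = df.apply(lambda row: arr[row[1]:row[2]].min(), axis=1, raw=True)
print(df)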
IIUC, you can try it like this:
df['lowest'] = df.apply(lambda row: df.iloc[row['idx1']:row['idx2'], 0].min(), axis=1)
print(df)
Output:
            A  idx1  idx2  lowest
2022-01-01  0     2     4       2
2022-01-02  1     1     3       1
2022-01-03  2     0     3       0
2022-01-04  3     3     4       3
2022-01-05  4     0     4       0
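If apply is still too slow at scale, a plain comprehension over numpy arrays is another option worth trying; this is a sketch under the same assumptions as above, not a benchmarked claim:
arr = df['A'].to_numpy()
i1 = df['idx1'].to_numpy()
i2 = df['idx2'].to_numpy()
# loops in Python, but over raw numpy slices with no pandas row overhead
df['lowest'] = [arr[a:b].min() for a, b in zip(i1, i2)]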

Pandas : Get a column value where another column is the minimum in a sub-grouping [duplicate]

I'm using groupby on a pandas dataframe to drop all rows that don't have the minimum of a specific column. Something like this:
df1 = df.groupby("item", as_index=False)["diff"].min()
However, if I have more than those two columns, the other columns (e.g. otherstuff in my example) get dropped. Can I keep those columns using groupby, or am I going to have to find a different way to drop the rows?
My data looks like:
   item  diff  otherstuff
0     1     2           1
1     1     1           2
2     1     3           7
3     2    -1           0
4     2     1           3
5     2     4           9
6     2    -6           2
7     3     0           0
8     3     2           9
and should end up like:
   item  diff  otherstuff
0     1     1           2
1     2    -6           2
2     3     0           0
but what I'm getting is:
   item  diff
0     1     1
1     2    -6
2     3     0
I've been looking through the documentation and can't find anything. I tried:
df1 = df.groupby(["item", "otherstuff"], as_index=False)["diff"].min()
df1 = df.groupby("item", as_index=False)["diff"].min()["otherstuff"]
df1 = df.groupby("item", as_index=False)["otherstuff", "diff"].min()
But none of those work (I realized with the last one that the syntax is meant for aggregating after a group is created).
Method #1: use idxmin() to get the indices of the elements of minimum diff, and then select those:
>>> df.loc[df.groupby("item")["diff"].idxmin()]
   item  diff  otherstuff
1     1     1           2
6     2    -6           2
7     3     0           0

[3 rows x 3 columns]
Method #2: sort by diff, and then take the first element in each item group:
>>> df.sort_values("diff").groupby("item", as_index=False).first()
   item  diff  otherstuff
0     1     1           2
1     2    -6           2
2     3     0           0

[3 rows x 3 columns]
Note that the resulting indices are different even though the row content is the same.
You can use DataFrame.sort_values with DataFrame.drop_duplicates:
df = df.sort_values(by='diff').drop_duplicates(subset='item')
print (df)
   item  diff  otherstuff
6     2    -6           2
7     3     0           0
1     1     1           2
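If you want the original row order back afterwards, you can chain a sort_index; a small sketch of the same idea:
df = (df.sort_values(by='diff')
        .drop_duplicates(subset='item')
        .sort_index())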
If multiple minimal values per group are possible and you want all of the min rows, use boolean indexing with transform to get the minimal value per group:
print (df)
   item  diff  otherstuff
0     1     2           1
1     1     1           2   <- multiple min
2     1     1           7   <- multiple min
3     2    -1           0
4     2     1           3
5     2     4           9
6     2    -6           2
7     3     0           0
8     3     2           9
print (df.groupby("item")["diff"].transform('min'))
0    1
1    1
2    1
3   -6
4   -6
5   -6
6   -6
7    0
8    0
Name: diff, dtype: int64
df = df[df.groupby("item")["diff"].transform('min') == df['diff']]
print (df)
   item  diff  otherstuff
1     1     1           2
2     1     1           7
6     2    -6           2
7     3     0           0
The above answers work great if there is a single min (or you only want one). In my case there could be multiple mins and I wanted all rows equal to the min, which .idxmin() doesn't give you. This worked:
def filter_group(dfg, col):
    return dfg[dfg[col] == dfg[col].min()]
df = pd.DataFrame({'g': ['a'] * 6 + ['b'] * 6, 'v1': (list(range(3)) + list(range(3))) * 2, 'v2': range(12)})
df.groupby('g',group_keys=False).apply(lambda x: filter_group(x,'v1'))
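The same multi-min filtering can also be written with transform, as in the earlier answer; a sketch against this df:
df[df['v1'] == df.groupby('g')['v1'].transform('min')]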
As an aside, .filter() is also relevant to this question but didn't work for me.
I tried everyone's method and I couldn't get it to work properly. Instead I did the process step-by-step and ended up with the correct result.
df.sort_values(by='diff', inplace=True, ignore_index=True)
df.drop_duplicates(subset='item', inplace=True, ignore_index=True)
df.sort_values(by='item', inplace=True, ignore_index=True)
For a little more explanation:
Sort the rows by the value column whose minimum you want
Drop duplicates of the grouping column, keeping the first (minimum) row per group
Re-sort by the grouping column, because the data is still ordered by the minimum values
If you know that all of your "items" have more than one record, you can sort and then use duplicated; negating the mask keeps the first (minimum) row per item:
s = df.sort_values(by='diff')
s[~s.duplicated(subset='item', keep='first')]

Pandas: keep the first three rows containing a value for each unique value [duplicate]

Suppose I have pandas DataFrame like this:
df = pd.DataFrame({'id':[1,1,1,2,2,2,2,3,4], 'value':[1,2,3,1,2,3,4,1,1]})
which looks like:
   id  value
0   1      1
1   1      2
2   1      3
3   2      1
4   2      2
5   2      3
6   2      4
7   3      1
8   4      1
I want to get a new DataFrame with top 2 records for each id, like this:
   id  value
0   1      1
1   1      2
3   2      1
4   2      2
7   3      1
8   4      1
I can do it by numbering the records within each group after groupby:
dfN = df.groupby('id').apply(lambda x: x['value'].reset_index()).reset_index()
which looks like:
   id  level_1  index  value
0   1        0      0      1
1   1        1      1      2
2   1        2      2      3
3   2        0      3      1
4   2        1      4      2
5   2        2      5      3
6   2        3      6      4
7   3        0      7      1
8   4        0      8      1
then for the desired output:
dfN[dfN['level_1'] <= 1][['id', 'value']]
Output:
   id  value
0   1      1
1   1      2
3   2      1
4   2      2
7   3      1
8   4      1
But is there a more effective/elegant approach? And is there a more elegant way to number records within each group (like the SQL window function row_number())?
Did you try
df.groupby('id').head(2)
Output generated:
      id  value
id
1  0   1      1
   1   1      2
2  3   2      1
   4   2      2
3  7   3      1
4  8   4      1
(Keep in mind that you might need to order/sort before, depending on your data)
EDIT: As mentioned by the questioner, use
df.groupby('id').head(2).reset_index(drop=True)
to remove the MultiIndex and flatten the results:
   id  value
0   1      1
1   1      2
2   2      1
3   2      2
4   3      1
5   4      1
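For the row-numbering part of the question (the SQL row_number() analogue), groupby().cumcount() gives a 0-based counter per group; a minimal sketch on the same df:
rn = df.groupby('id').cumcount()   # row number within each id, starting at 0
df[rn < 2]                         # same rows as head(2)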
Since 0.14.1, you can now do nlargest and nsmallest on a groupby object:
In [23]: df.groupby('id')['value'].nlargest(2)
Out[23]:
id
1  2    3
   1    2
2  6    4
   5    3
3  7    1
4  8    1
dtype: int64
There's a slight weirdness that you get the original index in there as well, but this might be really useful depending on what your original index was.
If you're not interested in it, you can do .reset_index(level=1, drop=True) to get rid of it altogether.
(Note: From 0.17.1 you'll be able to do this on a DataFrameGroupBy too but for now it only works with Series and SeriesGroupBy.)
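Concretely, that cleanup looks like this (same data as above):
# drop the original-index level, leaving a plain per-id Series
df.groupby('id')['value'].nlargest(2).reset_index(level=1, drop=True)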
Sometimes sorting the whole dataset up front is very time consuming. We can group first and do the top-k selection within each group (topk is the number of rows to keep per group, e.g. 2):
topk = 2
g = df.groupby(['id']).apply(lambda x: x.nlargest(topk, ['value'])).reset_index(drop=True)
df.groupby('id').apply(lambda x: x.sort_values(by='value', ascending=False).head(2).reset_index(drop=True))
Here, sort_values with ascending=False behaves like nlargest, and ascending=True like nsmallest.
The value inside head is the same as the value we would give to nlargest: the number of rows to keep per group.
reset_index is optional and not necessary.
This works for duplicated values:
If you have duplicated values among the top-n values and want only unique values, you can do it like this:
import pandas as pd
ifile = "https://raw.githubusercontent.com/bhishanpdl/Shared/master/data/twitter_employee.tsv"
df = pd.read_csv(ifile,delimiter='\t')
print(df.query("department == 'Audit'")[['id','first_name','last_name','department','salary']])
    id first_name last_name department  salary
24  12   Shandler      Bing      Audit  110000
25  14      Jason       Tom      Audit  100000
26  16     Celine    Anston      Audit  100000
27  15    Michale   Jackson      Audit   70000
If we do not remove duplicates, for the Audit department we get the top 3 salaries as 110k, 100k, and 100k.
If we want non-duplicated salaries per department, we can do this:
(df.groupby('department')['salary']
   .apply(lambda ser: ser.drop_duplicates().nlargest(3))
   .droplevel(level=1)
   .sort_index()
   .reset_index()
)
This gives
   department  salary
0       Audit  110000
1       Audit  100000
2       Audit   70000
3  Management  250000
4  Management  200000
5  Management  150000
6       Sales  220000
7       Sales  200000
8       Sales  150000
To get the first N rows of each group, another way is via groupby().nth[:N]. The outcome of this call is the same as groupby().head(N). For example, for the top-2 rows for each id, call:
N = 2
df1 = df.groupby('id', as_index=False).nth[:N]
To get the largest N values of each group, I suggest two approaches.
First sort by "id" and "value" (make sure to sort "id" in ascending order and "value" in descending order by using the ascending parameter appropriately) and then call groupby().nth[].
N = 2
df1 = df.sort_values(by=['id', 'value'], ascending=[True, False])
df1 = df1.groupby('id', as_index=False).nth[:N]
Another approach is to rank the values of each group and filter using these ranks.
# for the entire rows
N = 2
msk = df.groupby('id')['value'].rank(method='first', ascending=False) <= N
df1 = df[msk]
# for specific column rows
df1 = df.loc[msk, 'value']
Both of these are much faster than the groupby().apply() and groupby().nlargest() calls suggested in the other answers here. On a sample with 100k rows and 8000 groups, a %timeit test showed them to be 24-150 times faster than those solutions.
Also, instead of slicing, you can pass a list/tuple/range to a .nth() call:
df.groupby('id', as_index=False).nth([0,1])
# doesn't even have to be consecutive
# the following returns 1st and 3rd row of each id
df.groupby('id', as_index=False).nth([0,2])

Replace values in pandas dataframe with other on condition

I have a dataframe:
id  main_value
1           10
2            3
4            1
6           10
I want to change main_value where id == 4 so that it is decremented by 2.
I know a method using .loc:
freq = 3
if freq == 3:
    df.loc[df.id == 4, ['main_value']] = df.main_value.loc[df.id == 4] - 2
But this seems very lengthy; is there a better way to do this?
I think you can use:
df.loc[df.id==4, 'main_value'] -= 2
print (df)
   id  main_value
0   1          10
1   2           3
2   4          -1
3   6          10
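An equivalent vectorized alternative, if you prefer an expression over in-place assignment, is np.where; a sketch, assuming numpy is imported as np:
# keep main_value unchanged except where id == 4
df['main_value'] = np.where(df['id'] == 4, df['main_value'] - 2, df['main_value'])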

how to drop duplicated columns data based on column name in pandas

Assume I have a table like below
   A  B  C  B
0  0  1  2  3
1  4  5  6  7
I'd like to drop the duplicated column B. I tried drop_duplicates, but it seems to work only on duplicated data, not headers.
Hope someone knows how to do this.
Use Index.duplicated with loc or iloc and boolean indexing:
print (~df.columns.duplicated())
[ True True True False]
df = df.loc[:, ~df.columns.duplicated()]
print (df)
   A  B  C
0  0  1  2
1  4  5  6
df = df.iloc[:, ~df.columns.duplicated()]
print (df)
   A  B  C
0  0  1  2
1  4  5  6
Timings:
np.random.seed(123)
cols = ['A','B','C','B']
#[1000 rows x 30 columns]
df = pd.DataFrame(np.random.randint(10, size=(1000, 30)), columns=np.random.choice(cols, 30))
print (df)
In [115]: %timeit (df.groupby(level=0, axis=1).first())
1000 loops, best of 3: 1.48 ms per loop
In [116]: %timeit (df.groupby(level=0, axis=1).mean())
1000 loops, best of 3: 1.58 ms per loop
In [117]: %timeit (df.iloc[:, ~df.columns.duplicated()])
1000 loops, best of 3: 338 µs per loop
In [118]: %timeit (df.loc[:, ~df.columns.duplicated()])
1000 loops, best of 3: 346 µs per loop
You can use groupby.
We use the axis=1 and level=0 parameters to specify that we are grouping by columns, then use the first method to grab the first column within each group of unique column names.
df.groupby(level=0, axis=1).first()
   A  B  C
0  0  1  2
1  4  5  6
We could have also used last
df.groupby(level=0, axis=1).last()
   A  B  C
0  0  3  2
1  4  7  6
Or mean
df.groupby(level=0, axis=1).mean()
   A  B  C
0  0  2  2
1  4  6  6
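Note that groupby(..., axis=1) is deprecated in recent pandas (2.x); a hedged equivalent via transpose, assuming the same df:
# group the transposed frame by its index labels (the old column names), then transpose back
df.T.groupby(level=0).first().T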