Dataframe apply set is not removing duplicate values - pandas

My dataset can sometimes include duplicates in one concatenated column like this:
   Total
0  Thriller,Satire,Thriller
1  Horror,Thriller,Horror
2  Mystery,Horror,Mystery
3  Adventure,Horror,Horror
When doing this:
df['Total'].str.split(",").apply(set)
I get
   Total
0  {Thriller, Satire}
1  {Horror, Thriller}
2  {Mystery, Horror}
3  {Adventure, Horror}
And after encoding it with
df['Total'].str.get_dummies(sep=",")
I get headers looking like this:
{'Horror {'Mystery {'Thriller ... Horror Thriller'}
Instead of
Horror Mystery Thriller
How do I get rid of the curly brackets when using a pandas DataFrame?

Method Series.str.get_dummies works nicely even with duplicates, so you can omit the deduplication step:
df['Total'] = df['Total'].str.split(",").apply(set)
And use only:
df1 = df['Total'].str.get_dummies(sep=",")
print (df1)
   Adventure  Horror  Mystery  Satire  Thriller
0          0       0        0       1         1
1          0       1        0       0         1
2          0       1        1       0         0
3          1       1        0       0         0
But if you do need to remove the duplicates first, chain Series.str.join so get_dummies receives strings again:
df1 = df['Total'].str.split(",").apply(set).str.join(',').str.get_dummies(sep=",")
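A minimal end-to-end sketch with the sample data, showing that both routes produce the same indicators (the explicit set step is only worth keeping if you also want the deduplicated Total column):

import pandas as pd

df = pd.DataFrame({'Total': ['Thriller,Satire,Thriller',
                             'Horror,Thriller,Horror',
                             'Mystery,Horror,Mystery',
                             'Adventure,Horror,Horror']})

# get_dummies emits 0/1 flags, so repeated genres collapse on their own
direct = df['Total'].str.get_dummies(sep=',')

# explicit deduplication first, then the same encoding
deduped = df['Total'].str.split(',').apply(set).str.join(',').str.get_dummies(sep=',')

print(direct.equals(deduped))  # True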

incompatible index of inserted column with frame index with group by and count

I have data that looks like this:
  CHROM    POS REF ALT  ...  is_sever_int  is_sever_str  is_sever_f encoding_str
0  chr1  14907   A   G  ...             1             1         one          one
1  chr1  14930   A   G  ...             1             1         one          one
These are the columns I'm interested in performing calculations on (example):
is_severe  snp_id  encoding
        1       1       one
        1       1       two
        0       1       one
        1       2       two
        0       2       two
        0       2       one
What I want to do is count, for each snp_id and is_severe combination, how many "one"s and "two"s are in the encoding column:
snp_id  is_severe  encoding_one  encoding_two
     1          1             1             1
     1          0             1             0
     2          1             0             1
     2          0             1             1
I tried this:
df.groupby(["snp_id","is_sever_f","encoding_str"])["encoding_str"].count()
but it gave the error:
incompatible index of inserted column with frame index
Then I tried this:
df["count"]=df.groupby(["snp_id","is_sever_f","encoding_str"],as_index=False)["encoding_str"].count()
and it returned:
Expected a 1D array, got an array with shape (2532831, 3)
How can I fix this? Thank you :)
Let's try groupby over the relevant columns, take the size of each group, then unstack the encoding level:
out = (df.groupby(['is_severe', 'snp_id', 'encoding']).size()
.unstack(fill_value=0)
.add_prefix('encoding_')
.reset_index())
print(out)
encoding  is_severe  snp_id  encoding_one  encoding_two
0                 0       1             1             0
1                 0       2             1             1
2                 1       1             1             1
3                 1       2             0             1
Try as follows:
Use pd.get_dummies to convert categorical data in column encoding into indicator variables.
Chain df.groupby and sum to collapse the multiple rows per group into one (e.g. [0,1] and [1,0] become [1,1] where df.snp_id == 2 and df.is_severe == 0).
res = pd.get_dummies(data=df, columns=['encoding'])\
.groupby(['snp_id','is_severe'], as_index=False, sort=False).sum()
print(res)
   snp_id  is_severe  encoding_one  encoding_two
0       1          1             1             1
1       1          0             1             0
2       2          1             0             1
3       2          0             1             1
If your actual df has more columns, limit the columns passed to the data parameter inside get_dummies. I.e. use:
res = (pd.get_dummies(data=df[['is_severe', 'snp_id', 'encoding']], columns=['encoding'])
       .groupby(['snp_id', 'is_severe'], as_index=False, sort=False)
       .sum())
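As an aside on the original error: assigning a plain groupby count back to the frame fails because the aggregated result is indexed by the group keys rather than by the frame's row index. transform returns one value per original row, so the assignment aligns. A minimal sketch, assuming the simplified column names from the example:

# per-row group size, aligned with df's own index
df['count'] = df.groupby(['snp_id', 'is_severe', 'encoding'])['encoding'].transform('size')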

Adding new column as a sum of the subsequent columns [duplicate]

This question already has answers here:
how do I insert a column at a specific column index in pandas?
I have this df:
id  car  truck  bus  bike
 0    1      1    0     0
 1    0      0    1     0
 2    1      1    1     1
I want to add another column count to this df but after id and before car to sum the values of the rows, like this:
id  count  car  truck  bus  bike
 0      2    1      1    0     0
 1      1    0      0    1     0
 2      4    1      1    1     1
I know how to add the column using this code:
df.loc[:,'count'] = df.sum(numeric_only=True, axis=1)
but the above code adds the new column in the last position.
How can I fix this?
There are several ways; here are two.
#1. Create the count column (summing everything except id), then reorder the columns:
df['count'] = df.drop(columns='id').sum(numeric_only=True, axis=1)
df = df[['id', 'count', 'car', 'truck', 'bus', 'bike']]
print(df)
#   id  count  car  truck  bus  bike
#0   0      2    1      1    0     0
#1   1      1    0      0    1     0
#2   2      4    1      1    1     1
#2. Insert a Series at a specific position using the insert function:
df.insert(1, "count", df.drop(columns='id').sum(numeric_only=True, axis=1))
print(df)
#   id  count  car  truck  bus  bike
#0   0      2    1      1    0     0
#1   1      1    0      0    1     0
#2   2      4    1      1    1     1
Try this slight modification of your code:
import pandas as pd

df = pd.DataFrame(data={'id': [0, 1, 2], 'car': [1, 0, 1], 'truck': [1, 0, 1],
                        'bus': [0, 1, 1], 'bike': [0, 0, 1]})
count = df.drop(columns=['id']).sum(numeric_only=True, axis=1)
df.insert(1, "count", count)
print(df)
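If the real frame has other numeric columns that should not be counted, it may be safer to sum an explicit list of the indicator columns (names taken from the example above):

df.insert(1, "count", df[['car', 'truck', 'bus', 'bike']].sum(axis=1))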

Pandas Group Columns by Value of 1 and Sort By Frequency

I have to take this dataframe:
d = {'Apple': [0,0,1,0,1,0], 'Aurora': [0,0,0,0,0,1], 'Barn': [0,1,1,0,0,0]}
df = pd.DataFrame(data=d)
   Apple  Aurora  Barn
0      0       0     0
1      0       0     1
2      1       0     1
3      0       0     0
4      1       0     0
5      0       1     0
And count the frequency of the number one in each column, and create a new dataframe that looks like this:
df = pd.DataFrame([['Apple',0.3333], ['Aurora',0.166666], ['Barn', 0.3333]], columns = ['index', 'value'])
    index     value
0   Apple  0.333300
1  Aurora  0.166666
2    Barn  0.333300
I have tried this:
df['freq'] = df.groupby(1)[1].transform('count')
But I get an error: KeyError: 1
So I'm not sure how to count the value 1 across rows and columns, and group by column names and the frequency of 1 in each column.
If I understand correctly, you could simply do this:
freq = df.mean()
Output:
>>> freq
Apple 0.333333
Aurora 0.166667
Barn 0.333333
dtype: float64
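If you want the exact two-column frame shown in the question, the same Series can be reshaped in one chain:

freq = df.mean().rename('value').rename_axis('index').reset_index()
print(freq)
#     index     value
# 0   Apple  0.333333
# 1  Aurora  0.166667
# 2    Barn  0.333333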

Pandas : Get a column value where another column is the minimum in a sub-grouping [duplicate]

I'm using groupby on a pandas dataframe to drop all rows that don't have the minimum of a specific column. Something like this:
df1 = df.groupby("item", as_index=False)["diff"].min()
However, if I have more than those two columns, the other columns (e.g. otherstuff in my example) get dropped. Can I keep those columns using groupby, or am I going to have to find a different way to drop the rows?
My data looks like:
   item  diff  otherstuff
0     1     2           1
1     1     1           2
2     1     3           7
3     2    -1           0
4     2     1           3
5     2     4           9
6     2    -6           2
7     3     0           0
8     3     2           9
and should end up like:
   item  diff  otherstuff
0     1     1           2
1     2    -6           2
2     3     0           0
but what I'm getting is:
   item  diff
0     1     1
1     2    -6
2     3     0
I've been looking through the documentation and can't find anything. I tried:
df1 = df.groupby(["item", "otherstuff"], as_index=false)["diff"].min()
df1 = df.groupby("item", as_index=false)["diff"].min()["otherstuff"]
df1 = df.groupby("item", as_index=false)["otherstuff", "diff"].min()
But none of those work (I realized with the last one that the syntax is meant for aggregating after a group is created).
Method #1: use idxmin() to get the indices of the elements of minimum diff, and then select those:
>>> df.loc[df.groupby("item")["diff"].idxmin()]
   item  diff  otherstuff
1     1     1           2
6     2    -6           2
7     3     0           0
[3 rows x 3 columns]
Method #2: sort by diff, and then take the first element in each item group:
>>> df.sort_values("diff").groupby("item", as_index=False).first()
   item  diff  otherstuff
0     1     1           2
1     2    -6           2
2     3     0           0
[3 rows x 3 columns]
Note that the resulting indices are different even though the row content is the same.
You can use DataFrame.sort_values with DataFrame.drop_duplicates:
df = df.sort_values(by='diff').drop_duplicates(subset='item')
print (df)
   item  diff  otherstuff
6     2    -6           2
7     3     0           0
1     1     1           2
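If you want the rows back in their original order afterwards, chain sort_index (or sort_values('item')) onto the same expression:

df = df.sort_values(by='diff').drop_duplicates(subset='item').sort_index()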
If multiple minimal values per group are possible and you want all of the min rows, use boolean indexing with transform to broadcast the minimal value per group:
print (df)
   item  diff  otherstuff
0     1     2           1
1     1     1           2  <- multiple min
2     1     1           7  <- multiple min
3     2    -1           0
4     2     1           3
5     2     4           9
6     2    -6           2
7     3     0           0
8     3     2           9
print (df.groupby("item")["diff"].transform('min'))
0 1
1 1
2 1
3 -6
4 -6
5 -6
6 -6
7 0
8 0
Name: diff, dtype: int64
df = df[df.groupby("item")["diff"].transform('min') == df['diff']]
print (df)
   item  diff  otherstuff
1     1     1           2
2     1     1           7
6     2    -6           2
7     3     0           0
The above answer works great if there is only one min (or you only want one). In my case there can be multiple mins and I wanted all rows equal to the min, which .idxmin() doesn't give you. This worked:
def filter_group(dfg, col):
    return dfg[dfg[col] == dfg[col].min()]
df = pd.DataFrame({'g': ['a'] * 6 + ['b'] * 6, 'v1': (list(range(3)) + list(range(3))) * 2, 'v2': range(12)})
df.groupby('g',group_keys=False).apply(lambda x: filter_group(x,'v1'))
As an aside, .filter() is also relevant to this question but didn't work for me.
I tried everyone's method and I couldn't get it to work properly. Instead I did the process step-by-step and ended up with the correct result.
df.sort_values(by='diff', inplace=True, ignore_index=True)
df.drop_duplicates(subset='item', inplace=True, ignore_index=True)
df.sort_values(by='item', inplace=True, ignore_index=True)
For a little more explanation:
Sort by the column whose minimum you want, so the smallest row of each group comes first.
Drop duplicates on the grouping column, keeping only that first (minimal) row per group.
Re-sort by the grouping column, because the data is still ordered by the minimum values.
If you know that all of your "items" have more than one record, you can also sort and then use duplicated to mask out the non-minimal rows:
df_sorted = df.sort_values(by='diff')
df_min = df_sorted[~df_sorted.duplicated(subset='item', keep='first')]

rolling sum of a column in pandas dataframe at variable intervals

I have a list of index numbers that represent index locations for a DF. list_index = [2,7,12]
I want to sum from a single column in the DF by rolling through each number in list_index and totaling the counts between the index points (and restart count at 0 at each index point). Here is a mini example.
The desired output is in the OUTPUT column: it increments every time there is another 1 in COL 1, and RESTARTS the count at 0 at the location after each number in list_index.
I was able to get it to work with a loop, but there are millions of rows in the DF and the loop takes a while to run. It seems like I need a lambda function with a sum, but I need to input start and end points in the index.
Something like lambda x: x.rolling(start_index, end_index).sum()? Can anyone help me out with this?
You can try a cumulative sum and keep only the information related to the 1 values; a rolling sum over windows of varying size is not directly possible. Note that this resets the count at every 0:
# running count of 1s seen so far
a = df['col'].eq(1).cumsum()
# at each 0, remember the running count and carry it forward;
# subtracting it restarts the per-run count after every 0
df['output'] = a - a.mask(df['col'].eq(1)).ffill().fillna(0).astype(int)
Out:
    col  output
0     0       0
1     1       1
2     1       2
3     0       0
4     1       1
5     1       2
6     1       3
7     0       0
8     0       0
9     0       0
10    0       0
11    1       1
12    1       2
13    0       0
14    0       0
15    1       1
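If the requirement is specifically to restart the running total right after each position in list_index (rather than at every 0), a vectorized alternative is to label the segments between the index points and take a cumulative sum within each one. A minimal sketch, with the column name col and the sample values taken from the output above:

import pandas as pd

df = pd.DataFrame({'col': [0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1]})
list_index = [2, 7, 12]

# mark the row right after each index point as the start of a new segment
starts = pd.Series(0, index=df.index)
starts.loc[[i + 1 for i in list_index if i + 1 < len(df)]] = 1

# cumulative segment labels, then a per-segment running sum
segment = starts.cumsum()
df['output'] = df['col'].groupby(segment).cumsum()

Unlike the answer above, this only resets at the segment boundaries, not at every 0.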