Adding missing rows with condition - indexing

I have data that looks like this:
   X  snp_id            is_severe  encoding_1  encoding_2  encoding_0
1  0  GL000191.1-37698          0           0           1           7
2  1  GL000191.1-37698          1           0           2          11
3  2  GL000191.1-37922          1           1           0          12
What I wish to do is: for every snp_id that has rows only with is_severe == 0 or only with is_severe == 1, add an extra row with the missing is_severe value and the other columns set to zero.
For example:
GL000191.1-37698 is fine because it has both is_severe values (0 and 1), but GL000191.1-37922 has only 1, so I would like to add:
   X  snp_id            is_severe  encoding_1  encoding_2  encoding_0
1  0  GL000191.1-37698          0           0           1           7
2  1  GL000191.1-37698          1           0           2          11
3  2  GL000191.1-37922          1           1           0          12
4  3  GL000191.1-37922          0           0           0           0
And if the data looked like this:
   X  snp_id            is_severe  encoding_1  encoding_2  encoding_0
1  0  GL000191.1-37698          0           0           1           7
2  1  GL000191.1-37698          1           0           2          11
3  2  GL000191.1-37922          0           1           0          12
the result would be:
   X  snp_id            is_severe  encoding_1  encoding_2  encoding_0
1  0  GL000191.1-37698          0           0           1           7
2  1  GL000191.1-37698          1           0           2          11
3  2  GL000191.1-37922          0           1           0          12
4  3  GL000191.1-37922          1           0           0           0
I read about reindexing in some other questions, but the problem is that I am supposed to do it on the snp_id column, which is a string and not an integer.
I also thought about pivoting and then filling the resulting NaNs with values, but it didn't work well:
count_pivote = count.pivot(index='snp_id', columns=["is_severe", "encoding_1", "encoding_2"], values=["encoding_1", "encoding_2"])
Is there any way to do this?
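One way (a minimal sketch, assuming the frame is named count as in the pivot attempt) is to skip pivoting and instead reindex against the full (snp_id, is_severe) grid; reindex works with string labels just as well as with integers:
import pandas as pd

# Rebuild the example frame from the question so the sketch runs on its own.
count = pd.DataFrame({
    'X': [0, 1, 2],
    'snp_id': ['GL000191.1-37698', 'GL000191.1-37698', 'GL000191.1-37922'],
    'is_severe': [0, 1, 1],
    'encoding_1': [0, 0, 1],
    'encoding_2': [1, 2, 0],
    'encoding_0': [7, 11, 12],
})

# Every (snp_id, is_severe) pair that should exist.
full = pd.MultiIndex.from_product([count['snp_id'].unique(), [0, 1]],
                                  names=['snp_id', 'is_severe'])

# Existing rows keep their values; missing pairs appear with all-zero columns.
out = (count.set_index(['snp_id', 'is_severe'])
            .reindex(full, fill_value=0)
            .reset_index())
out['X'] = range(len(out))  # regenerate the running X counter for the new rows
print(out)
Within each snp_id the rows come out in grid order (is_severe 0 then 1), so re-sort afterwards if the original ordering matters.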

Related

Using If-else to change values in Pandas

I have a pandas DataFrame consisting of three columns: ID, t, and ind1.
import pandas as pd

dat = {'ID': [1,1,1,1,2,2,2,3,3,3,3,4,4,4,5,5,6,6,6],
       't': [0,1,2,3,0,1,2,0,1,2,3,0,1,2,0,1,0,1,2],
       'ind1': [1,1,1,1,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0]}
df = pd.DataFrame(dat, columns=['ID', 't', 'ind1'])
print(df)
What I need to do is create a new column (res) such that:
for all IDs with ind1 == 0, res is zero;
for all IDs with ind1 == 1, res = 1 where t == max(t) (grouped by ID), otherwise zero.
Here is the anticipated output (the res column in the tables below).
Check with groupby and idxmax, then where with transform('all'):
df['res'] = (df.groupby('ID').t.transform('idxmax')
               .where(df.groupby('ID').ind1.transform('all'))
               .eq(df.index).astype(int))
df
Out[160]:
    ID  t  ind1  res
0    1  0     1    0
1    1  1     1    0
2    1  2     1    0
3    1  3     1    1
4    2  0     0    0
5    2  1     0    0
6    2  2     0    0
7    3  0     0    0
8    3  1     0    0
9    3  2     0    0
10   3  3     0    0
11   4  0     1    0
12   4  1     1    0
13   4  2     1    1
14   5  0     1    0
15   5  1     1    1
16   6  0     0    0
17   6  1     0    0
18   6  2     0    0
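To see what each step contributes, the chained expression can be unpacked (a sketch; the intermediate names are mine, not from the answer):
# Unpacking the one-liner above, step by step, on the df built in the question.
pos_of_max = df.groupby('ID').t.transform('idxmax')  # row label of each group's maximal t
all_ones = df.groupby('ID').ind1.transform('all')    # True only for groups where every ind1 == 1
# res is 1 exactly where this row's label equals its group's argmax and the group is all ones.
df['res'] = pos_of_max.where(all_ones).eq(df.index).astype(int)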
This works on the assumption that the ID column is sorted:
import numpy as np

cond1 = df.ind1.eq(0)
cond2 = df.ind1.eq(1) & df.t.eq(df.groupby("ID").t.transform("max"))
df["res"] = np.select([cond1, cond2], [0, 1], 0)
df
    ID  t  ind1  res
0    1  0     1    0
1    1  1     1    0
2    1  2     1    0
3    1  3     1    1
4    2  0     0    0
5    2  1     0    0
6    2  2     0    0
7    3  0     0    0
8    3  1     0    0
9    3  2     0    0
10   3  3     0    0
11   4  0     1    0
12   4  1     1    0
13   4  2     1    1
14   5  0     1    0
15   5  1     1    1
16   6  0     0    0
17   6  1     0    0
18   6  2     0    0
Use groupby.apply:
df['res'] = (df.groupby('ID')
               .apply(lambda x: x['ind1'].eq(1) & x['t'].eq(x['t'].max()))
               .astype(int).reset_index(drop=True))
print(df)
    ID  t  ind1  res
0    1  0     1    0
1    1  1     1    0
2    1  2     1    0
3    1  3     1    1
4    2  0     0    0
5    2  1     0    0
6    2  2     0    0
7    3  0     0    0
8    3  1     0    0
9    3  2     0    0
10   3  3     0    0
11   4  0     1    0
12   4  1     1    0
13   4  2     1    1
14   5  0     1    0
15   5  1     1    1
16   6  0     0    0
17   6  1     0    0
18   6  2     0    0

pandas aggregate based on continuous same rows

Suppose I have this data frame, and I want to aggregate and sum the values in column 'a' over consecutive runs of rows with the same label.
   a  label
0  1      0
1  3      0
2  5      0
3  2      1
4  2      1
5  2      1
6  3      0
7  3      0
8  4      1
The desired result will be:
   a  label
0  9      0
1  6      1
2  6      0
3  4      1
and not this:
    a  label
0  15      0
1  10      1
IIUC
s = df.groupby(df.label.diff().ne(0).cumsum()).agg({'a': 'sum', 'label': 'first'})
s
Out[280]:
       a  label
label
1      9      0
2      6      1
3      6      0
4      4      1
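The heart of this answer is the run identifier built with diff/ne/cumsum; a minimal sketch of the intermediate steps (the names are illustrative), plus a reset to get a clean 0..3 index:
# Breakdown of the run-id trick, on the df from the question.
changed = df.label.diff().ne(0)  # True at each row where the label differs from the previous row
run_id = changed.cumsum()        # running count of changes -> 1,1,1,2,2,2,3,3,4
out = (df.groupby(run_id)
         .agg({'a': 'sum', 'label': 'first'})
         .reset_index(drop=True))  # drop the run ids to match the desired output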

Pandas: The best way to create new Frame by specific criteria

I have a DataFrame:
df = pd.DataFrame({'id': [1,1,1,1,2,2,2,3,3,3,4,4],
                   'sex': [0,0,0,1,0,0,0,1,1,0,1,1]})
    id  sex
0    1    0
1    1    0
2    1    0
3    1    1
4    2    0
5    2    0
6    2    0
7    3    1
8    3    1
9    3    0
10   4    1
11   4    1
I want to get a new DataFrame containing only the ids that have both sex values.
So I want to get something like this:
   id  sex
0   1    0
1   1    0
2   1    0
3   1    1
4   3    1
5   3    1
6   3    0
Using groupby and filter with the required condition:
In [2952]: df.groupby('id').filter(lambda x: set(x.sex) == set([0, 1]))
Out[2952]:
   id  sex
0   1    0
1   1    0
2   1    0
3   1    1
7   3    1
8   3    1
9   3    0
Also,
In [2953]: df.groupby('id').filter(lambda x: all([any(x.sex == v) for v in [0, 1]]))
Out[2953]:
   id  sex
0   1    0
1   1    0
2   1    0
3   1    1
7   3    1
8   3    1
9   3    0
Use drop_duplicates on both columns, then count each id with value_counts.
Then filter all values by boolean indexing with isin:
s = df.drop_duplicates()['id'].value_counts()
print(s)
3    2
1    2
4    1
2    1
Name: id, dtype: int64

df = df[df['id'].isin(s.index[s == 2])]
print(df)
   id  sex
0   1    0
1   1    0
2   1    0
3   1    1
7   3    1
8   3    1
9   3    0
One more:)
df.groupby('id').filter(lambda x: x['sex'].nunique() > 1)
   id  sex
0   1    0
1   1    0
2   1    0
3   1    1
7   3    1
8   3    1
9   3    0
Use isin()
Something like this:
df = pd.DataFrame({'id': [1,1,1,1,2,2,2,3,3,3,4,4],
                   'sex': [0,0,0,1,0,0,0,1,1,0,1,1]})

male = df[df['sex'] == 0]['id']
female = df[df['sex'] == 1]['id']

df = df[df['id'].isin(male) & df['id'].isin(female)]
print(df)
Output:
   id  sex
0   1    0
1   1    0
2   1    0
3   1    1
7   3    1
8   3    1
9   3    0
Or you can try this:
m = df.groupby('id')['sex'].nunique().eq(2)
df.loc[df.id.isin(m[m].index)]
Out[112]:
   id  sex
0   1    0
1   1    0
2   1    0
3   1    1
7   3    1
8   3    1
9   3    0
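A transform-based variant of the nunique answers keeps the whole thing vectorized, with no Python-level filter callback (a sketch of the same idea, not from the original answers):
# Same "both sex values present" test, expressed as a boolean mask.
mask = df.groupby('id')['sex'].transform('nunique').eq(2)
print(df[mask])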

How to populate columns depending on a found value?

I have a pandas DataFrame with customer IDs and columns related to months (1, 2, 3, ...).
I have a column with the number of months since the last purchase.
I am using the following to populate the relevant month columns:
dt.loc[dt.month == 1, '1'] = 1
dt.loc[dt.month == 2, '2'] = 1
dt.loc[dt.month == 3, '3'] = 1
etc.
How can I populate the columns in a better way to avoid creating 12 statements?
Use pd.get_dummies:
pd.get_dummies(dt.month)
Consider the DataFrame dt:
import numpy as np
import pandas as pd

dt = pd.DataFrame(dict(
    month=np.random.randint(1, 13, 10),
    a=range(10)
))
   a  month
0  0      8
1  1      3
2  2      8
3  3     11
4  4      3
5  5      4
6  6      1
7  7      5
8  8      3
9  9     11
Add columns like this:
dt.join(pd.get_dummies(dt.month))
   a  month  1  3  4  5  8  11
0  0      8  0  0  0  0  1   0
1  1      3  0  1  0  0  0   0
2  2      8  0  0  0  0  1   0
3  3     11  0  0  0  0  0   1
4  4      3  0  1  0  0  0   0
5  5      4  0  0  1  0  0   0
6  6      1  1  0  0  0  0   0
7  7      5  0  0  0  1  0   0
8  8      3  0  1  0  0  0   0
9  9     11  0  0  0  0  0   1
If you want the column names to be strings:
dt.join(pd.get_dummies(dt.month).rename(columns='month {}'.format))
   a  month  month 1  month 3  month 4  month 5  month 8  month 11
0  0      8        0        0        0        0        1         0
1  1      3        0        1        0        0        0         0
2  2      8        0        0        0        0        1         0
3  3     11        0        0        0        0        0         1
4  4      3        0        1        0        0        0         0
5  5      4        0        0        1        0        0         0
6  6      1        1        0        0        0        0         0
7  7      5        0        0        0        1        0         0
8  8      3        0        1        0        0        0         0
9  9     11        0        0        0        0        0         1
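One caveat worth noting: pd.get_dummies only creates columns for the month values that actually occur in the data (1, 3, 4, 5, 8 and 11 above). If you always want all 12 month columns, reindexing the dummy frame fills in the missing ones with zeros (a small sketch):
# Guarantee columns 1..12 even for months absent from the data.
dummies = pd.get_dummies(dt.month).reindex(columns=range(1, 13), fill_value=0)
dt = dt.join(dummies)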

Matplotlib pcolor not plotting correctly

I am trying to create a heat map from a DataFrame (df) of IDs (rows) and Positions (columns) at which a motif is possible. If the motif is present, the value in the table is 1, and 0 if it is not. For example:
Position  1  2  3  4  5  6  7  8  9  10  ...
ID
A         0  1  0  0  0  1  0  0  0   1
B         1  0  1  0  1  0  0  1  0   0
C         0  0  0  1  0  0  1  0  1   0
D         1  0  1  0  0  0  1  0  1   0
I then multiply the transpose of this matrix by the matrix itself to count how many times the motifs present co-occur with motifs at other positions, using the code:
df.T.dot(df)
to obtain the DataFrame:
POS   1  2  3  4  5  6  7  8  9  10
1     2  0  2  0  1  0  1  1  1   0
2     0  1  0  0  0  1  0  0  0   1
3     2  0  2  0  1  0  1  1  1   0
4     0  0  0  1  0  0  1  0  1   0
5     1  0  1  0  1  0  0  1  0   0
6     0  1  0  0  0  1  0  0  0   1
7     1  0  1  1  0  0  2  0  2   0
8     1  0  1  0  1  0  0  1  0   0
9     1  0  1  1  0  0  2  0  2   0
10    0  1  0  0  0  1  0  0  0   1
...
which is symmetric about the diagonal. However, when I try to create the heat map using
pylab.pcolor(df)
it gives me an asymmetrical map that does not seem to represent the dotted matrix. (I don't have enough reputation to post an image.)
Does anyone know why this might be occurring? Thanks
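Without seeing the image it is hard to say definitively, but two things are worth checking (a hedged sketch, using a stand-in frame since the real df is not given as code): that the product really is numerically symmetric, and that pcolor draws row 0 at the bottom of the axes, so the plot appears vertically flipped relative to the printed DataFrame:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Stand-in for the question's 0/1 motif table (rows = IDs, columns = positions).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 2, size=(4, 10)),
                  index=list('ABCD'), columns=range(1, 11))

co = df.T.dot(df)                            # co-occurrence matrix; symmetric by construction
assert np.allclose(co.values, co.values.T)   # sanity-check the symmetry numerically

plt.pcolor(co.values)
plt.gca().invert_yaxis()  # put row 0 at the top so the plot reads like the printed frame
plt.colorbar()
plt.show()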