Convert values of an entire column in a pandas dataframe

I have the following dataframe:
chr start_position end_position gene_name
0 Chr Position Ref Gene_Name
1 chr22 24128945 G nan
2 chr19 45867080 G ERCC2
3 chr3 52436341 C BAP1
4 chr7 151875065 G KMT2C
5 chr19 1206633 CGGGT STK11
and I'd like to convert the entire 'end_position' column to contain the values of 'start_position' + len('end_position'). The result should be:
chr start_position end_position gene_name
0 Chr Position Ref Gene_Name
1 chr22 24128945 24128946 nan
2 chr19 45867080 45867081 ERCC2
3 chr3 52436341 52436342 BAP1
4 chr7 151875065 151875066 KMT2C
5 chr19 1206633 1206638 STK11
I have written the below script:
patient_vcf_to_df.apply(pd.to_numeric, errors='ignore')
patient_vcf_to_df['end_position'] = patient_vcf_to_df['end_position'].map(lambda x: patient_vcf_to_df['start_position'] + len(x))
but I got the error:
TypeError: must be str, not int
Does anyone know how I can fix the problem?
Thanks a lot!

First, I'd read your CSV so that row 0 becomes the header (column names):
df = pd.read_csv(filename, header=1)
to get the following DF:
Chr Position Ref Gene_Name
0 chr22 24128945 G NaN
1 chr19 45867080 G ERCC2
2 chr3 52436341 C BAP1
3 chr7 151875065 G KMT2C
4 chr19 1206633 CGGGT STK11
as a positive side-effect:
In [99]: df.dtypes
Out[99]:
chr object
position int64 # <--- NOTE
ref object
gene_name object
dtype: object
if you want to lower-case your columns:
In [97]: df.columns = df.columns.str.lower()
In [98]: df
Out[98]:
chr position ref gene_name
0 chr22 24128945 G NaN
1 chr19 45867080 G ERCC2
2 chr3 52436341 C BAP1
3 chr7 151875065 G KMT2C
4 chr19 1206633 CGGGT STK11
to make sure that the position column is of a numeric dtype:
df['position'] = pd.to_numeric(df['position'], errors='coerce')
and then:
In [101]: df['end_position'] = df['position'] + df['ref'].str.len()
In [102]: df
Out[102]:
chr position ref gene_name end_position
0 chr22 24128945 G NaN 24128946
1 chr19 45867080 G ERCC2 45867081
2 chr3 52436341 C BAP1 52436342
3 chr7 151875065 G KMT2C 151875066
4 chr19 1206633 CGGGT STK11 1206638
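
Putting it all together, a minimal end-to-end sketch (assuming the data lives in a CSV file; 'variants.csv' below is a hypothetical filename):
import pandas as pd

# read the file so that the second physical line becomes the header
df = pd.read_csv('variants.csv', header=1)

# normalize the column names and make sure 'position' is numeric
df.columns = df.columns.str.lower()
df['position'] = pd.to_numeric(df['position'], errors='coerce')

# end position = start position + length of the reference allele
df['end_position'] = df['position'] + df['ref'].str.len()
print(df)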

Related

How to merge two files together matching exactly by 2 columns?

I have file 1 with 5778 lines and 15 columns.
Sample from output_armlympho.txt:
NUMBER CHROM POS ID REF ALT A1 TEST OBS_CT BETA SE L95 U95 T_STAT P
42484 1 18052645 rs260514:18052645:G:A G A G ADD 1597 0.0147047 0.0656528 -0.113972 0.143382 0.223977 0.822804
42485 1 18054638 rs35592535:18054638:GC:G GC G G ADD 1597 0.0138673 0.0269643 -0.0389816 0.0667163 0.514286 0.607124
42486 7 18054785 rs1572792:18054785:G:A G A A ADD 1597 -0.0126002 0.0256229 -0.0628202
I have another file with 25958247 lines and 16 columns.
Sample from file1:
column1 column2 column3 column4 column5 column6 column7 column8 column9 column10 column11 column12 column13 column14 column15 column16
1 chr1_10000044_A_T_b38 ENS 171773 29 30 0.02 0.33 0.144 0.14 chr1 10000044 A T chr1 10060102
2 chr7_10000044_A_T_b38 ENS -58627 29 30 0.024 0.26 0.16 0.15 chr7 10000044 A T chr7 18054785
4 chr1_10000044_A_T_b38 ENS 89708 29 30 0.0 0.03 -0.0 0.038 chr1 10000044 A T chr1 18054638
5 chr1_10000044_A_T_b38 ENS -472482 29 30 0.02 0.16 0.11 0.07 chr1 10000044 A T chr1 18052645
I want to merge these files together so that the second and third columns from file 1 (CHROM, POS) exactly match the 15th and 16th columns from file 2 (column15, column16). However, a problem is that in column15 the format is chr[number], e.g. chr1, while in file 1 it is just 1. So I need a way to match 1 to chr1 or 7 to chr7, and then match via position. There may also be repeated lines in file 2, i.e. repeated values that are the same in column15 and column16. The two files are not ordered the same way.
Expected output: (outputs all the columns from file 1 and 2).
column1 column2 column3 column4 column5 column6 column7 column8 column9 column10 column11 column12 column13 column14 column15 column16 NUMBER CHROM POS ID REF ALT A1 TEST OBS_CT BETA SE L95 U95 T_STAT P
2 chr7_10000044_A_T_b38 ENS -58627 29 30 0.024 0.26 0.16 0.15 chr7 10000044 A T chr7 18054785 42486 7 18054785 rs1572792:18054785:G:A G A A ADD 1597 -0.0126002 0.0256229 -0.0628202
4 chr1_10000044_A_T_b38 ENS 89708 29 30 0.0 0.03 -0.0 0.038 chr1 10000044 A T chr1 18054638 42485 1 18054638 rs35592535:18054638:GC:G GC G G ADD 1597 0.0138673 0.0269643 -0.0389816 0.0667163 0.514286 0.607124
5 chr1_10000044_A_T_b38 ENS -472482 29 30 0.02 0.16 0.11 0.07 chr1 10000044 A T chr1 18052645 42484 1 18052645 rs260514:18052645:G:A G A G ADD 1597 0.0147047 0.0656528 -0.113972 0.143382 0.223977 0.822804
Current attempt:
awk 'NR==FNR {Tmp[$3] = $16 FS $4; next} ($16 in Tmp) {print $0 FS Tmp[$16]}' output_armlympho.txt file1 > test
Assumptions:
within the file output_armlympho.txt, the combination of the 2nd and 3rd columns is unique
One awk idea:
awk '
# header lines: stash the header of the first file, then print the combined header
FNR==1 { if (header) print $0, header; else header=$0; next }
# first file (output_armlympho.txt): index each line by "chr"$2 (CHROM) and $3 (POS)
FNR==NR { lines["chr" $2,$3]=$0; next }
# second file: print lines whose ($15,$16) pair matches a stored key
($15,$16) in lines { print $0, lines[$15,$16] }
' output_armlympho.txt file1
This generates:
column1 column2 column3 column4 column5 column6 column7 column8 column9 column10 column11 column12 column13 column14 column15 column16 NUMBER CHROM POS ID REF ALT A1 TEST OBS_CT BETA SE L95 U95 T_STAT P
2 chr7_10000044_A_T_b38 ENS -58627 29 30 0.024 0.26 0.16 0.15 chr7 10000044 A T chr7 18054785 42486 7 18054785 rs1572792:18054785:G:A G A A ADD 1597 -0.0126002 0.0256229 -0.0628202
4 chr1_10000044_A_T_b38 ENS 89708 29 30 0.0 0.03 -0.0 0.038 chr1 10000044 A T chr1 18054638 42485 1 18054638 rs35592535:18054638:GC:G GC G G ADD 1597 0.0138673 0.0269643 -0.0389816 0.0667163 0.514286 0.607124
5 chr1_10000044_A_T_b38 ENS -472482 29 30 0.02 0.16 0.11 0.07 chr1 10000044 A T chr1 18052645 42484 1 18052645 rs260514:18052645:G:A G A G ADD 1597 0.0147047 0.0656528 -0.113972 0.143382 0.223977 0.822804
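
For comparison, a hedged pandas sketch of the same join (assuming both files are whitespace-delimited and fit in memory; with 25958247 lines the awk approach is likely more practical):
import pandas as pd

# read both files; filenames as in the question
stats = pd.read_csv('output_armlympho.txt', sep=r'\s+')  # NUMBER/CHROM/POS file
annot = pd.read_csv('file1', sep=r'\s+')                 # column1..column16 file

# build a 'chrN'-style key on the stats side so it matches column15
stats['chrom_key'] = 'chr' + stats['CHROM'].astype(str)

merged = annot.merge(stats,
                     left_on=['column15', 'column16'],
                     right_on=['chrom_key', 'POS'],
                     how='inner').drop(columns='chrom_key')
print(merged)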

Python pandas: Faster way than numpy.select?

I have a dataframe with two columns looking like this:
GebTyp BAK
0 RH C
1 MFH A
2 RH J
3 RH F
4 RH K
... ... ..
25046 MFH C
25047 MFH G
25048 MFH I
25049 MFH A
25050 MFH B
And another one with values for each pair of these two columns.
BAK EFH/DHH RH MFH GMH HH
0 A 231.0 222.0 265.0 186.0 156.0
1 B 271.0 222.0 204.0 186.0 156.0
2 C 214.0 186.0 222.0 197.0 167.0
3 D 242.0 183.0 236.0 201.0 171.0
4 E 184.0 155.0 188.0 196.0 143.0
5 F 198.0 179.0 162.0 158.0 121.0
6 G 134.0 145.0 138.0 134.0 104.0
7 H 159.0 118.0 143.0 103.0 73.0
8 I 120.0 110.0 119.0 97.0 87.0
9 J 91.0 89.0 86.0 75.0 69.0
10 K NaN NaN NaN NaN NaN
11 L NaN NaN NaN NaN NaN
I can assign each individual value correctly with numpy.select like this:
def GWB():
    conditions = [
        (mc["BAK"] == "A") & (mc["GebTyp"] == "EFH/DHH"),
        (mc["BAK"] == "A") & (mc["GebTyp"] == "RH"),
        (mc["BAK"] == "A") & (mc["GebTyp"] == "MFH"),
        (mc["BAK"] == "A") & (mc["GebTyp"] == "GMH"),
        (mc["BAK"] == "A") & (mc["GebTyp"] == "HH"),
    ]
    values = [231, 222, 265, 186, 156]
    df["result"] = np.select(conditions, values)

GWB()
But this would result in roughly 80 lines of code, and I would be assigning the values manually instead of taking them from the second dataframe. I was wondering if there is a faster/shorter way to do this task?
Use DataFrame.merge with DataFrame.melt:
df = df1.merge(df2.melt('BAK', value_name='result', var_name='GebTyp'),
               how='left',
               on=['BAK','GebTyp'])
print (df)
GebTyp BAK result
0 RH C 186.0
1 MFH A 265.0
2 RH J 89.0
3 RH F 179.0
4 RH K NaN
5 ... .. NaN
6 MFH C 222.0
7 MFH G 138.0
8 MFH I 119.0
9 MFH A 265.0
10 MFH B 204.0
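
To see why this works, it helps to look at what the melt step alone produces; a small sketch using just two of the lookup table's columns for brevity:
import pandas as pd

df2 = pd.DataFrame({'BAK': ['A', 'B', 'C'],
                    'RH':  [222.0, 222.0, 186.0],
                    'MFH': [265.0, 204.0, 222.0]})

# melt turns the wide lookup table into one (BAK, GebTyp, result) row per cell,
# which is exactly the long format a left merge on ['BAK', 'GebTyp'] needs
print(df2.melt('BAK', value_name='result', var_name='GebTyp'))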

apply a function to each sequence of rows in a column

I have a df like this:
xx
A 3
B 4
C 1
D 5
E 7
F 6
G 3
H 5
I 8
J 5
I would like to apply the pct_change function to column xx in chunks of 5 rows,
to generate the following output:
xx
A NaN
B 0.333333
C -0.750000
D 4.000000
E 0.400000
F NaN
G -0.500000
H 0.666667
I 0.600000
J -0.375000
How could I achieve this?
Create an np.arange of the length of the df, use integer division by 5, and pass it to groupby:
df = df.groupby(np.arange(len(df)) // 5).pct_change()
print (df)
xx
A NaN
B 0.333333
C -0.750000
D 4.000000
E 0.400000
F NaN
G -0.500000
H 0.666667
I 0.600000
J -0.375000
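
To see what the grouping key looks like, note that the integer division produces one label per block of five rows:
import numpy as np

# for a 10-row frame: five rows labelled 0, then five rows labelled 1
print(np.arange(10) // 5)  # [0 0 0 0 0 1 1 1 1 1]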

Pandas: Joining dataframes from different sources

I have the following datasets from two different sources, i.e. Oracle and MySQL:
DF1 (Oracle):
A B C
1122 8827
822 8282 6622
727 72 1183
91 5092
992 113 7281
DF2 (MySQL):
E F G
8827 6363
822 5526 9393
727 928 6671
9221 7282
992 921 7262
445 6298
I need to join these in pandas such that the result below is obtained.
Expected o/p:
A B C F G
822 8282 6622 5526 9393
727 72 1183 928 6671
992 113 7281 921 7262
1122 8827
91 5092
8827 6363
445 6298
Update_1:
As suggested, I tried the following:
import pandas as pd
data1 = [['',1122,8827],[822,8282,6622],[727,72,1183],['',91,5092],[992,113,7281]]
df1 = pd.DataFrame(data1,columns=['A','B','C'],dtype=float)
print(df1)
data2 = [['',8827,6363],[822,5526,9393],[727,928,6671],['',9221,7282],[992,921,7262],['',445,6298]]
df2 = pd.DataFrame(data2,columns=['E','F','G'],dtype=float)
print(df2)
DF11 = df1.set_index(df1['A'].fillna(df1.groupby('A').cumcount().astype(str)+'A'))
DF22 = df2.set_index(df2['E'].fillna(df2.groupby(['E']).cumcount().astype(str)+'E'))
DF11.merge(DF22, left_index=True, right_index=True, how='outer')\
.reset_index(drop=True)\
.drop('E', axis=1)
but I'm getting the following:
A B C F G
0 727 72.0 1183.0 928.0 6671.0
1 822 8282.0 6622.0 5526.0 9393.0
2 992 113.0 7281.0 921.0 7262.0
3 1122.0 8827.0 8827.0 6363.0
4 1122.0 8827.0 9221.0 7282.0
5 1122.0 8827.0 445.0 6298.0
6 91.0 5092.0 8827.0 6363.0
7 91.0 5092.0 9221.0 7282.0
8 91.0 5092.0 445.0 6298.0
Q: How to avoid the repetition of values and get the expected o/p?
Your problem is complicated by nulls in the join key. You can try some logic like this to achieve your result, or create a different key for the join that doesn't have nulls.
DF11 = DF1.set_index(DF1['A'].fillna(DF1.groupby('A').cumcount().astype(str)+'A'))
DF22 = DF2.set_index(DF2['E'].fillna(DF2.groupby(['E']).cumcount().astype(str)+'E'))
DF11.merge(DF22, left_index=True, right_index=True, how='outer')\
.reset_index(drop=True)\
.drop('E', axis=1)
Output:
A B C F G
0 NaN 1122.0 8827.0 NaN NaN
1 822.0 8282.0 6622.0 5526.0 9393.0
2 727.0 72.0 1183.0 928.0 6671.0
3 NaN 91.0 5092.0 NaN NaN
4 992.0 113.0 7281.0 921.0 7262.0
5 NaN NaN NaN 8827.0 6363.0
6 NaN NaN NaN 9221.0 7282.0
7 NaN NaN NaN 445.0 6298.0
Update: because your data has blanks ('') and not np.nan, I had to add a call in those statements to replace '' with np.nan so that fillna works correctly.
df1.set_index(df1['A'].replace('',np.nan).fillna(df1.groupby('A').cumcount().astype(str)+'A'))
Try this:
import pandas as pd
import numpy as np
data1 = [['',1122,8827],[822,8282,6622],[727,72,1183],['',91,5092],[992,113,7281]]
df1 = pd.DataFrame(data1,columns=['A','B','C'],dtype=float)
print(df1)
data2 = [['',8827,6363],[822,5526,9393],[727,928,6671],['',9221,7282],[992,921,7262],['',445,6298]]
df2 = pd.DataFrame(data2,columns=['E','F','G'],dtype=float)
print(df2)
DF11 = df1.set_index(df1['A'].replace('',np.nan).fillna(df1.groupby('A').cumcount().astype(str)+'A'))
DF22 = df2.set_index(df2['E'].replace('',np.nan).fillna(df2.groupby(['E']).cumcount().astype(str)+'E'))
DF11.merge(DF22, left_index=True, right_index=True, how='outer')\
.reset_index(drop=True)\
.drop('E', axis=1)
Question: for your desired output, did you intentionally leave out column E?
If not... I'm not sure whether the dataframes coming from different sources has any bearing on how they would be joined together.
import pandas as pd
...
frames = [DF1, DF2]
result = pd.concat(frames)
This should perform the join you want to accomplish.

How to subset a TSV file based on a pattern?

I have two files. One file is a tab-separated file containing multiple columns; the other file is a list of gene names. I have to extract from file 1 only those rows whose gene is listed in file 2.
I tried the below command, but it extracts all the rows:
awk 'NR==FNR{a[$0]=1;next} {for(i in a){if($10~i){print;break}}}' File2 file1
File1:
Input line ID Chrom Position Strand Ref. base(s) Alt. base(s) Sample ID HUGO symbol Sequence ontology Protein sequen
3 VAR113_NM-02_TUMOR_DNA chr1 11082255 + G T NM-02_TUMOR_DNA TARDBP MS K263N . PASS het 3 25
4 VAR114_NM-02_TUMOR_DNA chr1 15545868 + G T NM-02_TUMOR_DNA TMEM51 MS V131F . PASS het 3 13
6 VAR116_NM-02_TUMOR_DNA chr1 20676680 + C T NM-02_TUMOR_DNA VWA5B1 SY S970S . PASS het 4 34
7 rs149021429_NM-02_TUMOR_DNA chr1 21554495 + C A NM-02_TUMOR_DNA ECE1 SY S570S . PASS het 3
16 VAR126_NM-02_TUMOR_DNA chr1 39905109 + C T NM-02_TUMOR_DNA MACF1 SY V4069V . PASS het 4 17
21 VAR131_NM-02_TUMOR_DNA chr1 101387378 + G T NM-02_TUMOR_DNA SLC30A7 MS A275S . PASS het 4 45
24 VAR134_NM-02_TUMOR_DNA chr1 113256156 + C A NM-02_TUMOR_DNA PPM1J MS S135I . PASS het 3 9
25 rs201097299_NM-02_TUMOR_DNA chr1 145326106 + A T NM-02_TUMOR_DNA NBPF10 MS M1327L . PASS het 5
26 VAR136_NM-02_TUMOR_DNA chr1 149859281 + T C NM-02_TUMOR_DNA HIST2H2AB SY E62E . PASS het 11
27 VAR137_NM-02_TUMOR_DNA chr1 150529801 + C A NM-02_TUMOR_DNA ADAMTSL4 SY S679S . PASS het 3
28 rs376491237_NM-02_TUMOR_DNA chr1 150532649 + C A NM-02_TUMOR_DNA ADAMTSL4 SY R1068R . PASS het
34 VAR144_NM-02_TUMOR_DNA chr1 160389277 + T A NM-02_TUMOR_DNA VANGL2 SY L226L . PASS het 3 6
35 VAR145_NM-02_TUMOR_DNA chr1 161012389 + C A NM-02_TUMOR_DNA USF1 MS D44Y . PASS het 3 32
37 VAR147_NM-02_TUMOR_DNA chr1 200954042 + G T NM-02_TUMOR_DNA KIF21B MS R1250S . PASS het 3 21
41 rs191896925_NM-02_TUMOR_DNA chr1 207760805 + G T NM-02_TUMOR_DNA CR1 MS G1869W . PASS het 3
42 VAR152_NM-02_TUMOR_DNA chr1 208218427 + C A NM-02_TUMOR_DNA PLXNA2 SY G1208G . PASS het 3 13
43 VAR153_NM-02_TUMOR_DNA chr1 222715425 + A G NM-02_TUMOR_DNA HHIPL2 SY Y349Y . PASS het 10 41
44 VAR154_NM-02_TUMOR_DNA chr1 222715452 + T A NM-02_TUMOR_DNA HHIPL2 SY G340G . PASS het 5 46
45 VAR155_NM-02_TUMOR_DNA chr1 223568296 + G A NM-02_TUMOR_DNA C1orf65 SY G493G . PASS het 3 25
48 VAR158_NM-02_TUMOR_DNA chr2 8931258 + G A NM-02_TUMOR_DNA KIDINS220 MS P458L . PASS het 3 13
51 VAR161_NM-02_TUMOR_DNA chr2 37229656 + C A NM-02_TUMOR_DNA HEATR5B MS G1704C . PASS het 4 9
60 VAR170_NM-02_TUMOR_DNA chr2 84775506 + G T NM-02_TUMOR_DNA DNAH6 MS Q427H . PASS het 3 20
63 VAR173_NM-02_TUMOR_DNA chr2 86378563 + C A NM-02_TUMOR_DNA IMMT MS A420S . PASS het 6 29
64 VAR174_NM-02_TUMOR_DNA chr2 86716546 + G T NM-02_TUMOR_DNA KDM3A MS C1140F . PASS het 3 18
65 VAR175_NM-02_TUMOR_DNA chr2 96852612 + C A NM-02_TUMOR_DNA STARD7 SY L323L . PASS het 2 2
67 VAR177_NM-02_TUMOR_DNA chr2 121747740 + C A NM-02_TUMOR_DNA GLI2 MS P1417H . PASS het 2 2
71 rs199770435_NM-02_TUMOR_DNA chr2 130872871 + C T NM-02_TUMOR_DNA POTEF SY G184G . PASS het 8
72 rs199695856_NM-02_TUMOR_DNA chr2 132919171 + A G NM-02_TUMOR_DNA ANKRD30BL SY H36H . PASS het
73 rs111295191_NM-02_TUMOR_DNA chr2 132919192 + G A NM-02_TUMOR_DNA ANKRD30BL SY N29N . PASS het
76 VAR186_NM-02_TUMOR_DNA chr2 167084231 + T A NM-02_TUMOR_DNA SCN9A SY A1392A . PASS het 3 19
77 VAR187_NM-02_TUMOR_DNA chr2 168100115 + C G NM-02_TUMOR_DNA XIRP2 MS T738S . PASS het 9 49
78 VAR188_NM-02_TUMOR_DNA chr2 179343033 + G T NM-02_TUMOR_DNA FKBP7 MS A65D . PASS het 3 7
79 VAR189_NM-02_TUMOR_DNA chr2 179544108 + G C NM-02_TUMOR_DNA TTN MS P11234A . PASS het 3 17
82 VAR192_NM-02_TUMOR_DNA chr2 220074164 + G T NM-02_TUMOR_DNA ZFAND2B MS E92D . PASS het 2 2
83 VAR193_NM-02_TUMOR_DNA chr2 220420892 + C A NM-02_TUMOR_DNA OBSL1 MS G1487W . PASS het 3 9
84 rs191578275_NM-02_TUMOR_DNA chr2 233273263 + C A NM-02_TUMOR_DNA ALPPL2 MS P279Q . PASS het 3
86 VAR196_NM-02_TUMOR_DNA chr2 241815391 + G T NM-02_TUMOR_DNA AGXT SY L272L . PASS het 3 10
88 VAR198_NM-02_TUMOR_DNA chr3 9484995 + C T NM-02_TUMOR_DNA SETD5 SG R361* . PASS het 3 18
96 VAR206_NM-02_TUMOR_DNA chr3 49848502 + G T NM-02_TUMOR_DNA UBA7 MS P382H . PASS het 5 38
102 VAR212_NM-02_TUMOR_DNA chr3 58302669 + G T NM-02_TUMOR_DNA RPP14 MS L89F . PASS het 3 30
103 VAR213_NM-02_TUMOR_DNA chr3 63981750 + C A NM-02_TUMOR_DNA ATXN7 MS T751K . PASS het 3 13
104 rs146577101_NM-02_TUMOR_DNA chr3 97868656 + C T NM-02_TUMOR_DNA OR5H14 MS R143W . PASS het 4
107 rs58176285_NM-02_TUMOR_DNA chr3 123419183 + G A NM-02_TUMOR_DNA MYLK SY A1044A . PASS het 18
108 VAR218_NM-02_TUMOR_DNA chr3 123419189 + C T NM-02_TUMOR_DNA MYLK SY K1042K . PASS het 23 174
115 VAR225_NM-02_TUMOR_DNA chr3 183753779 + C A NM-02_TUMOR_DNA HTR3D MS P91T . PASS het 4 48
File2:
FBN1
HELZ
RALGPS2
DYNC1I2
NFE2L2
POSTN
INO80
I want those rows which contain these genes.
So if I am following you correctly, you just want to search $9 in file1 using the genes in file2. If I add MYLK to the list, I get:
Maybe:
awk 'NR==FNR{A[$1];next}$9 in A' file2 file1
**empty line** (since `MYLK` was found after the line break, the empty line is included)
107 rs58176285_NM-02_TUMOR_DNA chr3 123419183 + G A NM-02_TUMOR_DNA MYLK SY A1044A . PASS het 18
108 VAR218_NM-02_TUMOR_DNA chr3 123419189 + C T NM-02_TUMOR_DNA MYLK SY K1042K . PASS het 23 174
To remove the empty line from the output:
awk 'NR==FNR{A[$1];next}$9 in A' file2 file1 | awk '!/^$/'
107 rs58176285_NM-02_TUMOR_DNA chr3 123419183 + G A NM-02_TUMOR_DNA MYLK SY A1044A . PASS het 18
108 VAR218_NM-02_TUMOR_DNA chr3 123419189 + C T NM-02_TUMOR_DNA MYLK SY K1042K . PASS het 23 174
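
If you'd rather do this in pandas, here is a sketch under the assumption that File1 is tab-separated with the header shown above:
import pandas as pd

# read the gene list (one gene per line) and the tab-separated variant table
genes = set(line.strip() for line in open('File2') if line.strip())
df = pd.read_csv('File1', sep='\t')

# keep only rows whose 'HUGO symbol' column appears in the gene list
subset = df[df['HUGO symbol'].isin(genes)]
subset.to_csv('subset.tsv', sep='\t', index=False)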