How to replace values across multiple columns using awk?
I have a file with 2060 lines and a header row (column names) at the top; it looks like this:
FID IID late_telangiectasia_G1 late_atrophy_G1 late_atrophy_G2 late_nipple_retraction_G1 late_nipple_retraction_G2 late_oedema_G1 late_oedema_G2 late_induration_tumour_G1 late_induration_outside_G1 late_induration_G2 late_arm_lympho_G1 late_hyper_G1
1 470502 1 0 0 0 0 0 0 0 0 0 0 0
2 470514 0 0 0 0 0 0 0 0 0 0 0 0
3 470422 0 0 0 0 0 0 0 0 0 0 0 1
4 470510 0 0 0 0 0 1 0 1 1 1 0 1
5 470506 0 0 0 0 0 0 0 0 0 0 0 0
6 471948 0 0 0 0 0 0 0 1 0 0 0 0
7 469922 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9
8 471220 0 1 1 -9 -9 0 0 1 1 1 0 0
9 470498 0 1 0 0 0 0 0 0 0 0 0 0
10 471993 0 1 1 0 0 0 0 0 0 0 0 0
11 470414 0 1 0 0 0 0 0 0 1 0 0 0
12 470522 0 0 0 0 0 0 0 0 0 0 0 0
13 470345 0 0 0 0 0 0 0 0 0 0 0 0
14 471275 0 1 0 -9 0 0 0 1 0 0 0 0
15 471283 0 1 0 0 0 0 0 1 1 0 0 0
16 472577 0 1 0 0 0 0 0 1 0 0 0 0
17 470492 0 1 0 0 0 0 0 0 0 0 0 0
18 472889 0 0 0 -9 0 0 0 0 0 0 0 0
19 470500 0 1 0 1 0 0 0 0 1 0 0 0
20 470493 0 0 0 0 0 0 0 1 1 0 0 0
I want to replace every 0 with 1 and every 1 with 2 in columns 3 to 12, leaving the -9 values untouched.
I know for a single column the command will be:
awk '
{
    if ($3==1) $3=2
    if ($3==0) $3=1
}
1' file
For multiple columns, is there an easier way to specify a range rather than typing out every column number by hand? I tried something like this, which doesn't work:
awk '
{
    if ($3,$4,$5,$6,$7,$8,$9,$10,$11,$12==1) $3,$4,$5,$6,$7,$8,$9,$10,$11,$12=2
    if ($3,$4,$5,$6,$7,$8,$9,$10,$11,$12==0) $3,$4,$5,$6,$7,$8,$9,$10,$11,$12=1
}
1' file
Thanks in advance
You could use a loop over the field numbers and change each value by accessing it as $i:
awk '
{
for(i=3; i<=12; i++) {
if ($i==1 || $i==0) $i++
}
}1
' file | column -t
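If the column range varies, the bounds can be passed in with -v instead of being hard-coded. A small sketch of that idea (the two-line demo data below is made up for illustration):

```shell
# Generalized sketch: pass the first and last column to change via -v.
# Values of 0 or 1 are incremented; anything else (e.g. -9) is left alone.
printf '1 470502 1 0 -9\n2 470514 0 1 0\n' > demo.txt
awk -v first=3 -v last=5 '{
  for (i = first; i <= last && i <= NF; i++)
    if ($i == 0 || $i == 1) $i++
} 1' demo.txt
# -> 1 470502 2 1 -9
# -> 2 470514 1 2 1
```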
One possibility, if you want to change almost all of your fields (as in your case), is to save the ones you don't want to change and then change everything else:
$ awk 'NR>1{hd=$1 FS $2; tl=$13 FS $14; $1=$2=$13=$14=""; gsub(1,2); gsub(0,1); $0=hd $0 tl} 1' file
FID IID late_telangiectasia_G1 late_atrophy_G1 late_atrophy_G2 late_nipple_retraction_G1 late_nipple_retraction_G2 late_oedema_G1 late_oedema_G2 late_induration_tumour_G1 late_induration_outside_G1 late_induration_G2 late_arm_lympho_G1 late_hyper_G1
1 470502 2 1 1 1 1 1 1 1 1 1 0 0
2 470514 1 1 1 1 1 1 1 1 1 1 0 0
3 470422 1 1 1 1 1 1 1 1 1 1 0 1
4 470510 1 1 1 1 1 2 1 2 2 2 0 1
5 470506 1 1 1 1 1 1 1 1 1 1 0 0
6 471948 1 1 1 1 1 1 1 2 1 1 0 0
7 469922 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9
8 471220 1 2 2 -9 -9 1 1 2 2 2 0 0
9 470498 1 2 1 1 1 1 1 1 1 1 0 0
10 471993 1 2 2 1 1 1 1 1 1 1 0 0
11 470414 1 2 1 1 1 1 1 1 2 1 0 0
12 470522 1 1 1 1 1 1 1 1 1 1 0 0
13 470345 1 1 1 1 1 1 1 1 1 1 0 0
14 471275 1 2 1 -9 1 1 1 2 1 1 0 0
15 471283 1 2 1 1 1 1 1 2 2 1 0 0
16 472577 1 2 1 1 1 1 1 2 1 1 0 0
17 470492 1 2 1 1 1 1 1 1 1 1 0 0
18 472889 1 1 1 -9 1 1 1 1 1 1 0 0
19 470500 1 2 1 2 1 1 1 1 2 1 0 0
20 470493 1 1 1 1 1 1 1 2 2 1 0 0
Pipe it to column -t for alignment if you like.
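One caveat worth noting: gsub(1,2) substitutes every "1" character anywhere in the record, which is exactly why this answer blanks out the saved fields before calling it. A quick illustration (made-up input) of what an unanchored gsub would do to multi-digit values:

```shell
# gsub() does a regex substitution over the whole record, so "10"
# becomes "20" after gsub(1,2) and then "21" after gsub(0,1).
echo '10 -19 1 0' | awk '{ gsub(1, 2); gsub(0, 1); print }'
# -> 21 -29 2 1
```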
Or, using GNU awk for the 3rd arg to match() and retaining the original white space:
$ awk 'NR>1{ match($0,/((\S+\s+){2})((\S+\s+){9}\S+)(.*)/,a); gsub(1,2,a[3]); gsub(0,1,a[3]); $0=a[1] a[3] a[5] } 1' file
FID IID late_telangiectasia_G1 late_atrophy_G1 late_atrophy_G2 late_nipple_retraction_G1 late_nipple_retraction_G2 late_oedema_G1 late_oedema_G2 late_induration_tumour_G1 late_induration_outside_G1 late_induration_G2 late_arm_lympho_G1 late_hyper_G1
1 470502 2 1 1 1 1 1 1 1 1 1 0 0
2 470514 1 1 1 1 1 1 1 1 1 1 0 0
3 470422 1 1 1 1 1 1 1 1 1 1 0 1
4 470510 1 1 1 1 1 2 1 2 2 2 0 1
5 470506 1 1 1 1 1 1 1 1 1 1 0 0
6 471948 1 1 1 1 1 1 1 2 1 1 0 0
7 469922 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9
8 471220 1 2 2 -9 -9 1 1 2 2 2 0 0
9 470498 1 2 1 1 1 1 1 1 1 1 0 0
10 471993 1 2 2 1 1 1 1 1 1 1 0 0
11 470414 1 2 1 1 1 1 1 1 2 1 0 0
12 470522 1 1 1 1 1 1 1 1 1 1 0 0
13 470345 1 1 1 1 1 1 1 1 1 1 0 0
14 471275 1 2 1 -9 1 1 1 2 1 1 0 0
15 471283 1 2 1 1 1 1 1 2 2 1 0 0
16 472577 1 2 1 1 1 1 1 2 1 1 0 0
17 470492 1 2 1 1 1 1 1 1 1 1 0 0
18 472889 1 1 1 -9 1 1 1 1 1 1 0 0
19 470500 1 2 1 2 1 1 1 1 2 1 0 0
20 470493 1 1 1 1 1 1 1 2 2 1 0 0
It is hard to tell whether that file is space-delimited or tab-delimited.
Here is a Ruby solution that handles either space- or tab-delimited fields and converts the result to tab-delimited output.
Note: Ruby arrays are zero-based, so fields 1-2 are [0..1] and fields 3-12 are [2..11].
ruby -r csv -e 'options={:col_sep=>"\t", :converters=>:all, :headers=>true}
data=CSV.parse($<.read.gsub(/[[:blank:]]+/,"\t"), **options)
data.each_with_index{
|r,i| data[i]=r[0..1]+r[2..11].map{|e| (e==1 || e==0) ? e+1 : e}+r[12..]}
puts data.to_csv(**options)
' file
Prints:
FID IID late_telangiectasia_G1 late_atrophy_G1 late_atrophy_G2 late_nipple_retraction_G1 late_nipple_retraction_G2 late_oedema_G1 late_oedema_G2 late_induration_tumour_G1 late_induration_outside_G1 late_induration_G2 late_arm_lympho_G1 late_hyper_G1
1 470502 2 1 1 1 1 1 1 1 1 1 0 0
2 470514 1 1 1 1 1 1 1 1 1 1 0 0
3 470422 1 1 1 1 1 1 1 1 1 1 0 1
4 470510 1 1 1 1 1 2 1 2 2 2 0 1
5 470506 1 1 1 1 1 1 1 1 1 1 0 0
6 471948 1 1 1 1 1 1 1 2 1 1 0 0
7 469922 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9 -9
8 471220 1 2 2 -9 -9 1 1 2 2 2 0 0
9 470498 1 2 1 1 1 1 1 1 1 1 0 0
10 471993 1 2 2 1 1 1 1 1 1 1 0 0
11 470414 1 2 1 1 1 1 1 1 2 1 0 0
12 470522 1 1 1 1 1 1 1 1 1 1 0 0
13 470345 1 1 1 1 1 1 1 1 1 1 0 0
14 471275 1 2 1 -9 1 1 1 2 1 1 0 0
15 471283 1 2 1 1 1 1 1 2 2 1 0 0
16 472577 1 2 1 1 1 1 1 2 1 1 0 0
17 470492 1 2 1 1 1 1 1 1 1 1 0 0
18 472889 1 1 1 -9 1 1 1 1 1 1 0 0
19 470500 1 2 1 2 1 1 1 1 2 1 0 0
20 470493 1 1 1 1 1 1 1 2 2 1 0 0
With awk you can do:
awk -v OFS="\t" 'FNR>1{for(i=3;i<=12;i++)if ($i~"^[10]$")$i=$i+1} $1=$1' file
# same output
gawk -v RS='[[:space:]]+' '++c > 2 && c <= 12 && /^(0|1)$/ { ++$0 }
{ printf "%s", $0 RT } RT ~ /\n/ { c = 0 }' file
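To apply any of these to the real 2060-line file rather than just printing the result, a portable pattern is to write to a temp file and move it back (a sketch; GNU awk users could use -i inplace instead). The header needs no special-casing here, because the column names never compare equal to 0 or 1:

```shell
# Rewrite the file in place via a temp file; the mv only runs if awk succeeds.
awk '{ for (i = 3; i <= 12; i++) if ($i == 0 || $i == 1) $i++ } 1' file > file.tmp &&
mv file.tmp file
```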