awk to filter file using another capturing all instances - awk

In the below awk I am trying to capture all instances of KCNMA1, a line in gene (which is a one-column list of names), wherever it appears in $8 of file, which is tab-delimited.
So in the below example, all instances/lines where KCNMA1 appears in $8 would be printed to output.
There could also be multiple ;-separated names in $8, but the name, in this case KCNMA1, will be among them. The awk seems to capture 2 of the possible 4 conditions, but not all instances, as shown by the current output. Thank you :).
gene
KCNMA1
file
R_Index Chr Start End Ref Alt Func.IDP.refGene Gene.IDP.refGene GeneDetail.IDP.refGene
4629 chr10 78944590 78944590 G A intergenic NONE;KCNMA1 dist=NONE;dist=451371
4630 chr10 79396463 79396463 C T intronic KCNMA1 .
4631 chr10 79397777 79397777 C - exonic KCNMA1;X1X .
4632 chr10 81318663 81318663 C G exonic SFTPA2 .
4633 chr10 89397777 89397777 - GAA exonic NONE;X1X;KCNMA1 .
current output
R_Index Chr Start End Ref Alt Func.IDP.refGene Gene.IDP.refGene GeneDetail.IDP.refGene
1 chr10 79396463 79396463 C T intronic KCNMA1 .
2 chr10 79397777 79397777 C - exonic KCNMA1;X1X .
desired output (tab-delimited)
R_Index Chr Start End Ref Alt Func.IDP.refGene Gene.IDP.refGene GeneDetail.IDP.refGene
4629 chr10 78944590 78944590 G A intergenic NONE;KCNMA1 dist=NONE;dist=451371
4630 chr10 79396463 79396463 C T intronic KCNMA1 .
4631 chr10 79397777 79397777 C - exonic KCNMA1;X1X .
4633 chr10 89397777 89397777 - GAA exonic NONE;X1X;KCNMA1 .
awk
awk -F'\t' 'NR==FNR{a[$0];next} FNR==1{print} {x=$8; sub(/;.*/,"",x)} x in a{$1=++c; print}' gene file > out

For the single gene, just pass it as a variable:
$ awk -v gene='KCNMA1' -v d=';' 'NR==1 || d $8 d ~ d gene d' file
The counter you're using seems unnecessary, since you want to keep the original first field.
If you want to support a file-based gene list, you can use this:
$ awk -v d=';' 'NR==FNR {genes[$0]; next}
FNR==1;
{for(g in genes)
if(d $8 d ~ d g d) print}' genes file
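The `d $8 d ~ d g d` comparison is the key trick: both the field and the gene name are wrapped in the delimiter before matching, so a shorter name can never falsely match inside a longer one. A minimal sketch of just that anchoring, on made-up one-column data (so the field is $0 here rather than $8):

```shell
# Delimiter-anchored matching: ";KCNMA1;" only matches where the whole
# name sits between delimiters (or at either end of the field).
# 'KCNMA1X' and 'XKCNMA1;Y' are made-up negative cases.
printf '%s\n' 'NONE;KCNMA1' 'KCNMA1' 'KCNMA1X' 'XKCNMA1;Y' 'X;KCNMA1;Y' |
awk -v gene='KCNMA1' -v d=';' 'd $0 d ~ d gene d'
```

Only the first, second, and last lines survive the filter. Note the gene name is used as a regex here, which is fine for plain alphanumeric names.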

Related

Cut Columns and Append to Same File

I'm working with a tab-separated file on macOS. The file contains 15 columns and thousands of rows. I want to cut columns 1, 2, and 3 and then append columns 11, 12, and 13 beneath them. I was hoping to do this in a pipe so that no extra files need to be created. The only post I found used the command sponge, but I evidently don't have that on macOS, or it isn't in my bash.
The input tsv file is actually being generated within the same line of code:
arbitrary command to generate input.tsv | cut -f1-3,11-13 | <Step to cut -f4-6 and append -f1-3> | sort > out.file
Input tsv
chr1 21018 21101 A B C D E F G chr1 20752 21209
chr10 74645 74836 A B C D E F G chr10 74638 74898
chr10 75267 75545 A B C D E F G chr10 75280 75917
chr4 212478 212556 A B C D E F G chr4 212491 213285
Desired Output tsv
chr1 21018 21101
chr1 20752 21209
chr10 74638 74898
chr10 74645 74836
chr10 75280 75917
chr4 212478 212556
chr4 212491 213285
Using perl and awk:
code
perl -pe 's/chr[0-9]+/\n$&/g' file | awk '/./{print $1, $2, $3}'
 Output
chr1 21018 21101
chr1 20752 21209
chr10 74645 74836
chr10 74638 74898
chr10 75267 75545
chr10 75280 75917
chr4 212478 212556
chr4 212491 213285
Here is a short awk solution:
awk '{print $1, $2, $3, "\n" $11, $12, $13;}' input.tsv
output:
chr1 21018 21101
chr1 20752 21209
chr10 74645 74836
chr10 74638 74898
chr10 75267 75545
chr10 75280 75917
chr4 212478 212556
chr4 212491 213285
Explanation
{ # for each input line
print $1, $2, $3; # print the 1st, 2nd and 3rd fields, terminated with a newline
print $11, $12, $13; # print the 11th, 12th and 13th fields, terminated with a newline
}
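Either answer slots into the original pipeline in place of the two cut steps. A runnable sketch, with printf standing in for the real generator command and assuming the coordinates live in tab-separated columns 1-3 and 11-13 as in the sample:

```shell
# Two sample rows stand in for "arbitrary command to generate input.tsv".
# awk emits columns 1-3 and 11-13 of each row as two separate rows;
# LC_ALL=C pins the sort order so the result is locale-independent.
{
  printf 'chr1\t21018\t21101\tA\tB\tC\tD\tE\tF\tG\tchr1\t20752\t21209\n'
  printf 'chr4\t212478\t212556\tA\tB\tC\tD\tE\tF\tG\tchr4\t212491\t213285\n'
} | awk -F'\t' -v OFS='\t' '{print $1, $2, $3; print $11, $12, $13}' | LC_ALL=C sort
```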

manipulating columns in a text file in awk

I have a tab separated text file and want to do some math operation on one column and make a new tab separated text file.
This is an example of my file:
chr1 144520803 144520804 12 chr1 144520813 58
chr1 144520840 144520841 12 chr1 144520845 36
chr1 144520840 144520841 12 chr1 144520845 36
chr1 144520848 144520849 14 chr1 144520851 32
chr1 144520848 144520849 14 chr1 144520851 32
I want to change the 4th column: divide every element in the 4th column by the sum of all elements in that column, and then multiply by 1000000, as in the expected output.
expected output:
chr1 144520803 144520804 187500 chr1 144520813 58
chr1 144520840 144520841 187500 chr1 144520845 36
chr1 144520840 144520841 187500 chr1 144520845 36
chr1 144520848 144520849 218750 chr1 144520851 32
chr1 144520848 144520849 218750 chr1 144520851 32
I am trying to do that in awk using the following command, but it does not return what I want. Do you know how to fix it?
awk '{print $1 "\t" $2 "\t" $3 "\t" $4/{sum+=$4}*1000000 "\t" $5 "\t" $6 "\t" $7}' myfile.txt > new_file.txt
You need two passes: one to compute the sum, and a second to scale the field.
Something like this:
$ awk -v OFS='\t' 'NR==FNR {sum+=$4; next}
{$4*=(1000000/sum)}1' file{,} > newfile
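`file{,}` is bash brace expansion for `file file`, which is how the same file gets read twice (NR==FNR is only true during the first read). If reading the input twice is not an option, e.g. when it arrives on a pipe, a single-pass variant can buffer the lines in memory instead (a sketch, not from the original answer; sample columns trimmed for brevity):

```shell
# One pass: buffer every line and its 4th field, then rescale and print.
# Uses more memory than the two-pass version but works on a pipe.
# The five $4 values sum to 64, so 12 -> 187500 and 14 -> 218750.
printf 'chr1\t1\t2\t12\nchr1\t3\t4\t12\nchr1\t5\t6\t12\nchr1\t7\t8\t14\nchr1\t9\t10\t14\n' |
awk -F'\t' -v OFS='\t' '
  {line[NR]=$0; val[NR]=$4; sum+=$4}
  END {for (i=1; i<=NR; i++) {$0=line[i]; $4=val[i]*1000000/sum; print}}'
```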

Extracting information from lines having columns occurring more than once

I have a file :
chr1 1234 2345 EG1234:E1
chr1 2350 2673 EG1234:E2
chr1 2673 2700 EG1234:E2
chr1 2700 2780 EG1234:E2
chr2 5672 5700 EG2345:E1
chr2 5705 5890 EG2345:E2
chr2 6000 6010 EG2345:E3
chr2 6010 6020 EG2345:E3
As you can see there is a specific ID before ':', and the id after ':' may be shared by more than one row. I want an output that looks something like this:
chr1 1234 2345 EG1234:E1 (output as-is, since the id is not duplicated in the next row)
chr1 2350 2780 EG1234:E2 (for duplicates, take the 1st and 2nd columns of the first occurrence and the 3rd and 4th columns of the last occurrence)
similarly
chr2 5672 5700 EG2345:E1
chr2 5705 5890 EG2345:E2
chr2 6000 6020 EG2345:E3
I was trying to use a key to move to the next column, but I am not quite sure how to extract the column-wise values
awk '{key=$4; if (!(key in data)) c[++n]=key; data[key]=$0} END{for (i=1; i<=n; i++) print data[c[i]]}' file1
In short, I want to extract the first two columns of the first occurrence and the last two columns of the last occurrence of any rows with a duplicate 4th column
This one works, its only downside is that it doesn't preserve the record order:
($1 FS $4 in a) { # combination of $1 and $4 is the key
split(a[$1 FS $4],b) # split to get the old $2
a[$1 FS $4]=b[1] FS b[2] FS $3 FS b[4] # update $3
next
}
{
a[$1 FS $4]=$0 # new key found
}
END {
for(i in a) # print them all
print a[i]
}
Test it:
$ awk -f foo.awk foo.txt
chr1 2350 2780 EG1234:E2
chr2 5672 5700 EG2345:E1
chr2 5705 5890 EG2345:E2
chr2 6000 6020 EG2345:E3
chr1 1234 2345 EG1234:E1
One-liner:
$ awk '($1 FS $4 in a) {split(a[$1 FS $4],b); a[$1 FS $4]=b[1] FS b[2] FS $3 FS b[4]; next} {a[$1 FS $4]=$0} END {for(i in a) print a[i]}' foo.txt
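If the record order matters, the same merge can track insertion order with a counter, much like the `c[++n]` idea in the question's own attempt (a sketch, assuming whitespace-separated input as in the sample):

```shell
# Merge duplicates of the ($1, $4) key: keep the first row's $1/$2/$4,
# take $3 from the latest occurrence, and print in first-seen order.
printf '%s\n' 'chr1 1234 2345 EG1234:E1' \
              'chr1 2350 2673 EG1234:E2' \
              'chr1 2673 2700 EG1234:E2' \
              'chr1 2700 2780 EG1234:E2' |
awk '{k=$1 FS $4}
     (k in a) {split(a[k], b); a[k]=b[1] FS b[2] FS $3 FS b[4]; next}
     {a[k]=$0; order[++n]=k}
     END {for (i=1; i<=n; i++) print a[order[i]]}'
```

For the sample rows above this prints the E1 line untouched, followed by the merged E2 line spanning 2350-2780.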
Using awk, treating key1:key2 as a unique combination and using it to filter duplicates. Here $4 represents the key1:key2 from your file.
awk '!seen[$4]++' file
chr1 1234 2345 EG1234:E1
chr1 2350 2673 EG1234:E2
chr2 5672 5700 EG2345:E1
chr2 5705 5890 EG2345:E2
chr2 6000 6010 EG2345:E3
The logic is straightforward: the line identified by key1:key2 is printed only if it has not been seen already. Note this keeps the first occurrence's coordinates as-is rather than merging them with the last occurrence.

awk to count and rename symbol in field

I am trying to count a symbol (-) in $5 (Ref) and output that symbol renamed, along with its count, using awk. The input file is tab-delimited and the awk below is close, but it outputs extra data with incorrect counts, and I'm not sure how to fix it. Thank you :).
awk
awk -F'\t' 'BEGIN {printf "Category\tCount\n" } $5 ~ /-/ {printf "indel" } {a[$5]++} END { for (i in a) {printf "%s\t\t%s\n",i , a[i] }}' input
input
Index Mutation Call Start End Ref Alt Func.refGene Gene.refGene ExonicFunc.refGene Sanger
13 c.[1035-3T>C]+[1035-3T>C] 166170127 166170127 T C intronic SCN2A
16 c.[2994C>T]+[=] 166210776 166210776 C T exonic SCN2A synonymous SNV
19 c.[4914T>A]+[4914T>A] 166245230 166245230 T A exonic SCN2A synonymous SNV
20 c.[5109C>T]+[=] 166245425 166245425 C T exonic SCN2A synonymous SNV
21 c.[5139C>T]+[=] 166848646 166848646 G A exonic SCN1A synonymous SNV
22 c.3152_3153insAACCACT 166892841 166892841 - AGTGGTT exonic SCN1A frameshift insertion TP
23 c.2044-5delT 166898947 166898947 A - intronic SCN1A
25 c.1530_1531insA 166901684 166901684 - T exonic SCN1A frameshift insertion FP
current output
Category Count
indelindelindelindel 5
A 4
C 7
Ref 1
- 4
G 2
T 6
TCCT 1
desired output
Category Count
indel 2
here you go...
$ awk -F'\t' '$5=="-"{count++}
END{print "Category","Count";
print "indel",count}' file |
column -t
Category Count
indel 2
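The question's original associative-array idea also works if you want counts for every Ref category, as long as the rename happens before counting rather than via a stray printf. A sketch on a tiny made-up one-column sample (so the field is $1 here rather than $5):

```shell
# Count every Ref category, renaming "-" to "indel" before counting;
# FNR>1 skips the header row. Note for-in iteration order is
# implementation-defined, so category lines may come out in any order.
printf 'Ref\nT\n-\nC\n-\nT\n' |
awk -F'\t' 'FNR>1 {a[($1=="-") ? "indel" : $1]++}
            END {print "Category\tCount";
                 for (i in a) print i "\t" a[i]}'
```

For this sample the counts are indel 2, T 2, and C 1.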

awk to substitute value of field based on another

I am trying to use awk to substitute the value of the Classification field NF+1 with the value of the CLINSIG field NF-1 if that value is Benign. I think the awk is close but currently I get an empty file. What's wrong?
input
Chr Start End Ref Alt Func.refGene PopFreqMax CLINSIG Classification
chr1 43395635 43395635 C T exonic 0.12 Benign VUS
chr1 43396414 43396414 G A exonic 0.14 Benign VUS
chr1 172410967 172410967 G A exonic 0.66 VUS
awk
awk -v OFS='\t' '{ if ($(NF-1) == "Benign") sub($(NF+1)=$(NF-1); print $0 }' input
desired output
Chr Start End Ref Alt Func.refGene PopFreqMax CLINSIG Classification
chr1 43395635 43395635 C T exonic 0.12 Benign Benign
chr1 43396414 43396414 G A exonic 0.14 Benign Benign
chr1 172410967 172410967 G A exonic 0.66 VUS
You probably mean the Classification field NF, not NF+1:
$ awk -v OFS='\t' '$(NF-1)=="Benign" {$(NF)=$(NF-1)} {print $0 }' input
Chr Start End Ref Alt Func.refGene PopFreqMax CLINSIG Classification
chr1 43395635 43395635 C T exonic 0.12 Benign Benign
chr1 43396414 43396414 G A exonic 0.14 Benign Benign
chr1 172410967 172410967 G A exonic 0.66 VUS
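A quick way to sanity-check the one-liner, with the middle columns elided for brevity: the short last row in the input has no Classification value, so NF is one smaller there, $(NF-1) lands on PopFreqMax, the "Benign" test fails, and the row passes through untouched.

```shell
# Two abbreviated sample rows: the first has a Classification field to
# overwrite, the second (like the question's last row) is a field short.
printf 'chr1\t0.12\tBenign\tVUS\nchr1\t0.66\tVUS\n' |
awk -v OFS='\t' '$(NF-1)=="Benign" {$NF=$(NF-1)} {print $0}'
```

Only the first row is rebuilt (with VUS replaced by Benign); the second prints unchanged.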