Subtracting from values ending with specific digits? - awk

I have a .bed (.tsv) file that looks like this:
chr1 0 100000
chr1 100000 200000
chr1 200000 300000
chr1 300000 425234
I want to subtract 1 from only the values in column 3 that end in "000", using sed or awk, so that the output looks like:
chr1 0 99999
chr1 100000 199999
chr1 200000 299999
chr1 300000 425234
Embarrassingly enough, the best I've come up with is:
awk '{sub(/000$/,"999",$3); print $1,$2,$3}' oldfile > newfile
which simply substitutes the last 3 digits with 999, rather than actually subtracting.
Any help is appreciated, as always!

Awk can easily perform arithmetic, too.
awk 'BEGIN{FS=OFS="\t"} $3 ~ /000$/ {$3 -= 1}1' oldfile > newfile
This is assuming all the lines in your file always have three fields and that you want to print all the lines.
sed has no idea about even the simplest arithmetic so it's not particularly suitable for this.
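The same one-liner written out with comments, purely for readability; the logic is unchanged:
awk 'BEGIN { FS = OFS = "\t" }    # read and write tab-separated fields
$3 ~ /000$/ { $3 -= 1 }           # if column 3 ends in 000, subtract 1
1                                 # the always-true pattern 1 prints every line
' oldfile > newfile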

I would use GNU AWK for this as follows, let file.txt content be
chr1 0 100000
chr1 100000 200000
chr1 200000 300000
chr1 300000 425234
then
awk 'BEGIN{OFS="\t"}($3%1000==0){$3-=1}{print}' file.txt
output
chr1 0 99999
chr1 100000 199999
chr1 200000 299999
chr1 300000 425234
Explanation: Use tab (\t) as the output field separator (OFS). If the remainder from dividing $3 by 1000 is zero (i.e. $3 is a multiple of 1000), subtract 1 from $3; print each line.
(tested in gawk 4.2.1)

Related

awk to calculate difference between two files and output specific text based on value

I am trying to use awk to check if each $2 in file1 falls between $2 and $3 of the matching $4 line of file2. If it does, then exon is written in $5; if it does not, intron. I think the awk below will do that, but I am struggling to add a calculation so that if the difference is less than or equal to 10, then $5 is splicing. I have added an example of line 1 as well.
The 6th line is an example of the splicing, because the $2 value in file1 is 2 away from the $2 value in file2. My actual data is very large, with file2 always being several hundred thousand lines. File1 will be variable but usually ~100 lines. The files are hardcoded in this example but will be supplied by a bash for loop, which will provide the input. Thank you :).
file1 tab-delimited with whitespace after $3 and $4
chr1 17345304 17345315 SDHB
chr1 17345516 17345524 SDHB
chr1 93306242 93306261 RPL5
chr1 93307262 93307291 RPL5
chrX 153295819 153296875 MECP2
chrX 153295810 153296830 MECP2
file2 tab-delimited
chr1 17345375 17345453 SDHB_cds_0_0_chr1_17345376_r 0 -
chr1 17349102 17349225 SDHB_cds_1_0_chr1_17349103_r 0 -
chr1 17350467 17350569 SDHB_cds_2_0_chr1_17350468_r 0 -
chr1 17354243 17354360 SDHB_cds_3_0_chr1_17354244_r 0 -
chr1 17355094 17355231 SDHB_cds_4_0_chr1_17355095_r 0 -
chr1 17359554 17359640 SDHB_cds_5_0_chr1_17359555_r 0 -
chr1 17371255 17371383 SDHB_cds_6_0_chr1_17371256_r 0 -
chr1 17380442 17380514 SDHB_cds_7_0_chr1_17380443_r 0 -
chr1 93297671 93297674 RPL5_cds_0_0_chr1_93297672_f 0 +
chr1 93298945 93299015 RPL5_cds_1_0_chr1_93298946_f 0 +
chr1 93299101 93299217 RPL5_cds_2_0_chr1_93299102_f 0 +
chr1 93300335 93300470 RPL5_cds_3_0_chr1_93300336_f 0 +
chr1 93301746 93301949 RPL5_cds_4_0_chr1_93301747_f 0 +
chr1 93303012 93303190 RPL5_cds_5_0_chr1_93303013_f 0 +
chr1 93306107 93306196 RPL5_cds_6_0_chr1_93306108_f 0 +
chr1 93307322 93307422 RPL5_cds_7_0_chr1_93307323_f 0 +
chrX 153295817 153296901 MECP2_cds_0_0_chrX_153295818_r 0 -
chrX 153297657 153298008 MECP2_cds_1_0_chrX_153297658_r 0 -
chrX 153357641 153357667 MECP2_cds_2_0_chrX_153357642_r 0 -
desired output tab-delimited
chr1 17345304 17345315 SDHB intron
chr1 17345516 17345524 SDHB intron
chr1 93306242 93306261 RPL5 intron
chr1 93307262 93307291 RPL5 intron
chrX 153295819 153296875 MECP2 exon
chrX 153295810 153296800 MECP2 splicing
awk
awk '
FNR==NR{
a[$4];
min[$4]=$2;
max[$4]=$3;
next
}
{
split($4,array,"_");
print $0,(array[1] in a) && ($2>=min[array[1]] &&
$2<=max[array[1]])?"exon":"intron"
}' file1 OFS="\t" file2 > output
example of line 1
a[$4] = SDHB
min[$4] = 17345304
max[$4] = 17345315
array[1] = SDHB, 17345304 >= 17345375 && array[1] = SDHB, 17345315 <= 17345453 ---- intron
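One possible approach, given as a rough sketch rather than a tested solution: read file2 first and store every exon interval per gene (taking the gene name as everything before the first "_" in file2's $4), then classify each file1 line. It assumes whitespace-separated fields and that "splicing" means $2 of file1 lies within 10 bases of an exon boundary; the output filename is just a placeholder.
awk '
FNR==NR {                                   # first file: file2 (exon intervals)
    gene = $4; sub(/_.*/, "", gene)         # SDHB_cds_0_0_... -> SDHB
    n[gene]++
    lo[gene, n[gene]] = $2
    hi[gene, n[gene]] = $3
    next
}
{                                           # second file: file1
    type = "intron"
    for (i = 1; i <= n[$4]; i++) {
        if ($2 >= lo[$4, i] && $2 <= hi[$4, i]) { type = "exon"; break }
        d = $2 - lo[$4, i]; if (d < 0) d = -d
        if (d <= 10) type = "splicing"
        d = $2 - hi[$4, i]; if (d < 0) d = -d
        if (d <= 10) type = "splicing"
    }
    print $0, type
}' OFS="\t" file2 file1 > output
For the sample above this would label lines 1-4 intron, line 5 exon, and line 6 splicing.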

manipulating columns in a text file in awk

I have a tab-separated text file and want to do some math operation on one column and make a new tab-separated text file.
This is an example of my file:
chr1 144520803 144520804 12 chr1 144520813 58
chr1 144520840 144520841 12 chr1 144520845 36
chr1 144520840 144520841 12 chr1 144520845 36
chr1 144520848 144520849 14 chr1 144520851 32
chr1 144520848 144520849 14 chr1 144520851 32
I want to change the 4th column: I want to divide every element in the 4th column by the sum of all elements in the 4th column and then multiply by 1000000, like the expected output.
expected output:
chr1 144520803 144520804 187500 chr1 144520813 58
chr1 144520840 144520841 187500 chr1 144520845 36
chr1 144520840 144520841 187500 chr1 144520845 36
chr1 144520848 144520849 218750 chr1 144520851 32
chr1 144520848 144520849 218750 chr1 144520851 32
I am trying to do that in awk using the following command, but it does not return what I want. Do you know how to fix it?
awk '{print $1 "\t" $2 "\t" $3 "\t" $4/{sum+=$4}*1000000 "\t" $5 "\t" $6 "\t" $7}' myfile.txt > new_file.txt
You need two passes: one to compute the sum, and a second to scale the field. Something like this:
$ awk -v OFS='\t' 'NR==FNR {sum+=$4; next}
{$4*=(1000000/sum)}1' file{,} > newfile
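Here file{,} is bash brace expansion that expands to file file, so awk reads the same file twice: the first pass (NR==FNR) accumulates the column-4 sum, the second rescales. For the sample, the sum is 12+12+12+14+14 = 64, so 12/64*1000000 = 187500 and 14/64*1000000 = 218750, matching the expected output. If reading the input twice is not practical (e.g. it comes from a pipe), a buffered single-pass sketch along these lines should also work, assuming tab-separated input:
awk 'BEGIN { FS = OFS = "\t" }
{ line[NR] = $0; sum += $4 }            # buffer every line, accumulate the column-4 total
END {
    for (i = 1; i <= NR; i++) {
        n = split(line[i], f, FS)       # re-split the buffered line on tabs
        f[4] = f[4] / sum * 1000000     # scale the 4th field
        out = f[1]
        for (j = 2; j <= n; j++) out = out OFS f[j]
        print out
    }
}' myfile.txt > new_file.txt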

Awk OR conditional not working

Input: A tab-separated input file with 15 columns where column 15 is an integer.
Output: The number of lines that satisfy the conditional.
My code:
$ closest-features --closest --no-overlaps --delim '\t' --dist --ec megatrans_enhancers.sorted.bed ../../data/alu_repeats.sorted.bed | awk -v OFS='\t' '{if ($15 <= 1000 || $15 >= -1000) print $0}' | wc -l
1188
The || conditional in this case is failing to work (the total number of lines in the file is 1188, and I know for certain at least some lines do not satisfy the condition), because if I remove the OR conditional then suddenly it works:
$ closest-features --closest --no-overlaps --delim '\t' --dist --ec megatrans_enhancers.sorted.bed ../../data/alu_repeats.sorted.bed | awk -v OFS='\t' '{if ($15 <= 1000) print $0}' | wc -l
926
Not sure what I'm doing wrong. Any advice?
Example Input to Awk command:
chr1 378268 378486 chr1-798_Enhancer 17.2 + chr1 375923 376219 AluY|SINE|Alu-HOMER529 0 + E:375923 0.044 -2050
chr1 1079471 1079689 chr1-929_Enhancer 14.6 - chr1 1071271 1071563 AluSx1|SINE|Alu-HOMER1669 0 - E:1071271 0.13 -7909
chr1 1080259 1080477 chr1-830_Enhancer 16.7 - chr1 1071271 1071563 AluSx1|SINE|Alu-HOMER1669 0 - E:1071271 0.13 -8697
chr1 6611744 6611962 chr1-241_Enhancer 46.6 + chr1 6611431 6611723 AluSc|SINE|Alu-HOMER10257 0 + E:6611431 0.089 -22
chr1 6959639 6959857 chr1-58_Enhancer 100.1 - chr1 6966612 6966911 AluSx|SINE|Alu-HOMER11041 0 - E:6966612 0.137 6756
chr1 6960593 6960811 chr1-202_Enhancer 51.6 - chr1 6966612 6966911 AluSx|SINE|Alu-HOMER11041 0 - E:6966612 0.137 5802
chr1 7447888 7448106 chr1-2_Enhancer 181.9 - chr1 7449489 7449799 AluSz|SINE|Alu-HOMER11879 0 + E:7449489 0.119 1384
chr1 10752461 10752679 chr1-131_Enhancer 65.4 - chr1 10752754 10753065 AluSq2|SINE|Alu-HOMER19455 0 + E:10752754 0.106 76
chr1 12485694 12485912 chr1-353_Enhancer 36.7 + chr1 12487328 12487634 AluSx3|SINE|Alu-HOMER23581 0 + E:12487328 0.085 1417
chr1 12486469 12486687 chr1-141_Enhancer 63.6 + chr1 12487328 12487634 AluSx3|SINE|Alu-HOMER23581 0 + E:12487328 0.085 642
Use an && condition instead, because a value should be greater than or equal to -1000 and less than or equal to 1000. With ||, every line matches, since any number is either <= 1000 or >= -1000.
Your_command | awk '$15<=1000 && $15>=-1000{count++} END{print count}'
Add -F"\t" in above awk in case your Input to it is coming TAB delimited too. Also there is no need to use wc -l after awk. I have written logic for that so give the count of lines which are satisfying the condition by creating a variable named count and printing it at very last of Input_file.
Also for your provided samples output is coming as 3 which I believe is correct one.
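To see why the original || test never filters anything, here is a quick check on a few of the sample $15 values (this pipeline is just an illustration, not from the original post); every value satisfies at least one of the two comparisons:
$ printf '%s\n' -2050 -22 6756 | awk '{ print $1, ($1 <= 1000 || $1 >= -1000 ? "matched" : "not matched") }'
-2050 matched
-22 matched
6756 matched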

Extracting information from lines having columns occurring more than once

I have a file :
chr1 1234 2345 EG1234:E1
chr1 2350 2673 EG1234:E2
chr1 2673 2700 EG1234:E2
chr1 2700 2780 EG1234:E2
chr2 5672 5700 EG2345:E1
chr2 5705 5890 EG2345:E2
chr2 6000 6010 EG2345:E3
chr2 6010 6020 EG2345:E3
As you can see, there is a specific ID before ':' and an ID after ':' that may be common to more than one row. I want an output that looks something like this:
chr1 1234 2345 EG1234:E1 (output as-is, since it doesn't have a duplicate ID in the next row)
chr1 2350 2780 EG1234:E2 (for duplicates: the 1st and 2nd columns come from the 1st occurrence and the 3rd and 4th columns from the last occurrence)
similarly
chr2 5672 5700 EG2345:E1
chr2 5705 5890 EG2345:E2
chr2 6000 6020 EG2345:E3
I was trying to use a key to move to the next column, but I am not quite sure how I would extract the column-wise values
awk '{key=$4; if (!(key in data)) c[++n]=key; data[key]=$0} END{for (i=1; i<=n; i++) print data[c[i]]}' file1
In short, I want to extract the first two columns from the first occurrence and the last two columns from the last occurrence of any rows with a duplicate 4th column.
This one's only drawback is that it messes up the record order:
($1 FS $4 in a) { # combination of $1 and $4 is the key
split(a[$1 FS $4],b) # split to get the old $2
a[$1 FS $4]=b[1] FS b[2] FS $3 FS b[4] # update $3
next
}
{
a[$1 FS $4]=$0 # new key found
}
END {
for(i in a) # print them all
print a[i]
}
Test it:
$ awk -f foo.awk foo.txt
chr1 2350 2780 EG1234:E2
chr2 5672 5700 EG2345:E1
chr2 5705 5890 EG2345:E2
chr2 6000 6020 EG2345:E3
chr1 1234 2345 EG1234:E1
One-liner:
$ awk '($1 FS $4 in a) {split(a[$1 FS $4],b); a[$1 FS $4]=b[1] FS b[2] FS $3 FS b[4]; next} {a[$1 FS $4]=$0} END {for(i in a) print a[i]}' foo.txt
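If the original record order matters, one possible variant (a sketch, not part of the answer above) remembers each key the first time it is seen and prints from that list in END:
awk '
!($1 FS $4 in a) {                 # first time this chr+ID key is seen
    order[++n] = $1 FS $4          # remember keys in input order
    a[$1 FS $4] = $0
    next
}
{                                  # repeated key: keep the old $1 and $2, take the new $3
    split(a[$1 FS $4], b)
    a[$1 FS $4] = b[1] FS b[2] FS $3 FS b[4]
}
END {
    for (i = 1; i <= n; i++) print a[order[i]]
}' foo.txt
For the sample this should print the rows in their original order, with the E2 and E3 ranges merged.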
Using awk, considering key1:key2 as a unique combination and applying it to filter duplicates. Here $4 represents the key1:key2 from your file.
awk '!seen[$4]++' file
chr1 1234 2345 EG1234:E1
chr1 2350 2673 EG1234:E2
chr2 5672 5700 EG2345:E1
chr2 5705 5890 EG2345:E2
chr2 6000 6010 EG2345:E3
The logic is straightforward: the line identified by key1:key2 is printed only if it has not been seen already.

awk to count lines in column of file

I have a large file and want to use awk to count the unique entries in a specific column, $5, before the ':', but I seem to be having trouble getting the syntax correct. Thank you :).
Sample Input
chr1 955542 955763 + AGRN:exon.1 1 0
chr1 955542 955763 + AGRN:exon.1 2 0
chr1 955542 955763 + AGRN:exon.1 3 0
chr1 955542 955763 + AGRN:exon.1 4 1
chr1 955542 955763 + AGRN:exon.1 5 1
awk -F: ' NR > 1 { count += $5 } -uniq' Input
Desired output
1
$ awk -F'[ \t:]+' '{a[$5]=1;} END{for (k in a)n++; print n;}' Input
1
-F'[ \t:]+'
This tells awk to use spaces, tabs, or colons as the field separator.
a[$5]=1
As we loop through each line, this adds an entry into associative array a for each value of $5 encountered.
END{for (k in a)n++; print n;}
After we have finished reading the file, this counts the number of keys in associative array a and prints the total.
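For instance, splitting the first sample line with that separator leaves the gene name by itself in $5:
$ echo 'chr1 955542 955763 + AGRN:exon.1 1 0' | awk -F'[ \t:]+' '{print $5}'
AGRN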
The idiomatic, portable awk approach:
$ awk '{sub(/:.*/,"",$5)} !seen[$5]++{unq++} END{print unq}' file
1
The briefer but gawk-only (courtesy of length(array)) approach:
$ awk '{sub(/:.*/,"",$5); seen[$5]} END{print length(seen)}' file
1