Get the statistics in a text file using awk

I have a text file like this small example:
>chr10:101370300-101370301
A
>chr10:101370288-101370289
A
>chr10:101370289-101370290
G
>chr10:101471626-101471627
g
>chr10:101471865-101471866
g
>chr10:101471605-101471606
a
>chr10:101471606-101471607
g
>chr10:101471681-101471682
As you can see, below every line that starts with ">" I have a letter. These letters are A, G, T or C. In my results I would like to get their frequencies as percentages. Here is a small example of the expected output:
A = 28.57
G = 14.29
g = 42.85
a = 14.29
I am trying to do that in awk using:
awk 'if $1 == "G", num=+1 { a[$1]+=num/"G" }
if $1 == "G", num=+1 { a[$1]+=num/"C" }
if $1 == "G", num=+1 { a[$1]+=num/"T" }
if $1 == "G", num=+1 { a[$1]+=num/"A" }
' infile.txt > outfile.txt
But it does not return what I want. Do you know how to fix it?

Awk solution:
awk '/^[a-zA-Z]/{ a[$1]++; cnt++ }
END{ for (i in a) printf "%s = %.2f\n", i, a[i]*100/cnt }' file.txt
/^[a-zA-Z]/ - on encountering records that start with a letter [a-zA-Z]:
a[$1]++ - accumulate occurrences of each item (letter)
cnt++ - count the total number of items (letters)
The output:
A = 28.57
a = 14.29
G = 14.29
g = 42.86

Your sample contradicts your description (the lines starting with ">" have no letter on my display, so I assume it's a copy/paste translation error).
awk '{C[$NF]++;S+=0.01} END{ for( c in C ) printf( "%s = %2.2f\n", c, C[c]/S)}' infile.txt > outfile.txt
If the letters really are on lines of their own, as in the sample, add NF==1 as the first pattern of the awk code (so that the header lines are not counted).
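As a sanity check, the first one-liner can be run against the question's sample; since for (i in a) iterates in an unspecified order, the output is piped through sort here:

```shell
cat > infile.txt <<'EOF'
>chr10:101370300-101370301
A
>chr10:101370288-101370289
A
>chr10:101370289-101370290
G
>chr10:101471626-101471627
g
>chr10:101471865-101471866
g
>chr10:101471605-101471606
a
>chr10:101471606-101471607
g
>chr10:101471681-101471682
EOF

# count single-letter records and print each letter's share as a percentage
awk '/^[a-zA-Z]/{ a[$1]++; cnt++ }
END{ for (i in a) printf "%s = %.2f\n", i, a[i]*100/cnt }' infile.txt | LC_ALL=C sort
```

With 7 letters in the sample (2×A, 1×G, 3×g, 1×a), this prints A = 28.57, G = 14.29, a = 14.29, g = 42.86.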

Concatenating array elements into a one string in for loop using awk

I am working on a variant call format (VCF) file; here is what I am trying to do:
Input:
1 877803 838425 GC G
1 878077 966631 C CCACGG
Output:
1 877803 838425 C -
1 878077 966631 - CACGG
In summary, I am trying to delete the first letters of longer strings.
And here is my code:
awk 'BEGIN { OFS="\t" } /#/ {next}
{
m = split($4, a, //)
n = split($5, b, //)
x = "-"
delete y
if (m>n){
for (i = n+1; i <= m; i++) {
y = sprintf("%s", a[i])
}
print $1, $2, $3, y, x
}
else if (n>m){
for (j = m+1; i <= n; i++) {
y = sprintf("%s", b[j]) ## Problem here
}
print $1, $2, $3, x, y
}
}' input.vcf > output.vcf
But,
I am getting the following error in line 15, not even in line 9
awk: cmd. line:15: (FILENAME=input.vcf FNR=1) fatal: attempt to use array y in a scalar context
I don't know how to concatenate array elements into one string using awk.
I will be very happy if you guys help me.
Merry X-Mas!
You may try this awk:
awk -v OFS="\t" 'function trim(s) { return (length(s) == 1 ? "-" : substr(s, 2)); } {$4 = trim($4); $5 = trim($5)} 1' file
1 877803 838425 C -
1 878077 966631 - CACGG
More readable form:
awk -v OFS="\t" 'function trim(s) {
return (length(s) == 1 ? "-" : substr(s, 2))
}
{
$4 = trim($4)
$5 = trim($5)
} 1' file
You can use awk's substr function to process the 4th and 5th space-delimited fields:
awk '{ substr($4,2)==""?$4="-":$4=substr($4,2);substr($5,2)==""?$5="-":$5=substr($5,2)}1' file
If the string from position 2 onwards in field 4 is equal to "", set field 4 to "-" otherwise, set field 4 to the extract of the field from position 2 to the end of the field. Do the same with field 5. Print lines modified or not with short hand 1.
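Going back to the title question, concatenating array elements into one string in awk is just repeated string concatenation into a scalar (the OP's loop kept overwriting y instead of appending to it). A minimal sketch, with illustrative values:

```shell
awk 'BEGIN {
    n = split("C A C G G", parts, " ")  # parts[1..5]
    s = ""
    for (i = 1; i <= n; i++)
        s = s parts[i]                  # append each element to the scalar
    print s                             # CACGG
}'
```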

How to add numbers from files to computation?

I need to get the results of this formula, a column of numbers,
{x = ($1-T1)/Fi; print (x-int(x))}
from these input files. file1:
4 4
8 4
7 78
45 2
file2
0.2
3
2
1
From these files there should be 4 outputs.
$1 is the first column of file1; T1 is the first entry in the first column of file1 (the number 4), and it is always this number. Fi, where i = 1, 2, 3, 4, are the numbers from the second file. So I need a cycle over i from 1 to 4: compute the term once with F1=0.2, a second output with F2=3, a third with F3=2, and the last with F4=1. How do I express T1 and Fi this way, and how do I write the cycle?
awk 'FNR == NR { F[++n] = $1; next } FNR == 1 { T1 = $1 } { for (i = 1; i <= n; ++i) { x = ($1 - T1)/F[i]; print x - int(x) >"output" FNR} }' file2 file1
This gives more than 4 outputs. What is wrong please?
FNR == 1 { T1 = $1 } risks being run for both files; when file2 starts being read, T1 would be set to 0.2.
>"output" FNR is problematic; you should enclose the output name expression in parentheses: > ("output" FNR).
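A minimal illustration of the parenthesized form (out is an arbitrary name for illustration; an unparenthesized concatenation after > parses differently across awk implementations):

```shell
awk 'BEGIN { n = 2; print "hello" > ("out" n) }'
cat out2    # hello
```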
Here's how I'd do it:
awk '
NR==1 {t1=$1}
NR==FNR {f[NR]=$1; next}
{
fn="output"FNR
for(i in f) {
x=(f[i]-t1)/$1
print x-int(x) >fn
}
close(fn)
}
' file1 file2
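To check it, the sample files can be recreated and the program run as-is (file names follow the question; comments added here for orientation):

```shell
printf '4 4\n8 4\n7 78\n45 2\n' > file1
printf '0.2\n3\n2\n1\n'         > file2

awk '
NR==1   { t1 = $1 }           # T1: the very first number read (4)
NR==FNR { f[NR] = $1; next }  # remember column 1 of file1
{
    fn = "output" FNR         # one output file per F value
    for (i in f) {
        x = (f[i] - t1) / $1
        print x - int(x) > fn
    }
    close(fn)
}' file1 file2

wc -l output1 output2 output3 output4   # 4 lines each
```

Note that for (i in f) visits the entries in an unspecified order, so the line order within each output file may vary between awk implementations.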

awk match and find mismatch between files and output results

In the awk below I am using $5, $7 and $8 of file1 to search $3, $5 and $6 of file2. The header row is skipped, and it then outputs a new file showing which lines match and, if they do not match, which file the match is missing from. When I search for one match, use 3 fields as the key for the lookup, and do not skip the header, I get the current output. I apologize for the long post and file examples; I am just trying to include everything to help get this working. Thank you :).
file1
Index Chromosomal Position Gene Inheritance Start End Ref Alt Func.refGene
98 48719928 FBN1 AD 48719928 48719929 AT - exonic
101 48807637 FBN1 AD 48807637 48807637 C T exonic
file2
R_Index Chr Start End Ref Alt Func.IDP.refGene
36 chr15 48719928 48719929 AT - exonic
37 chr15 48719928 48719928 A G exonic
38 chr15 48807637 48807637 C T exonic
awk
awk -F'\t' '
NR == FNR {
A[$25]; A[$26]; A[$27]
next
}
{
B[$3]; B[$5]; B[$6]
}
END {
print "Match"
OFS=","
for ( k in A )
{
if ( k && k in B )
printf "%s ", k
}
print "Missing from file1"
OFS=","
for ( k in B )
{
if ( ! ( k in A ) )
printf "%s ", k
}
print "Missing from file2"
OFS=","
for ( k in A )
{
if ( ! ( k in B ) )
printf "%s ", k
}
}
' file1 file2 > list
current output
Match
Missing from file1
A C Ref 48807637 Alt Start T G - AT 48719928 Missing from file2
desired output
Match 48719928 AT -, 48807637 C T
Missing from file1 48719928 A G
Missing from file2
You misunderstand awk syntax and are confusing awk with shell. When you wrote:
A[$25] [$26] [$27]
you probably meant:
A[$25]; A[$26]; A[$27]
(and similarly for B[]) and when you wrote:
IFS=
since IFS is a shell variable, not an awk one, you maybe meant
FS=
BUT since you're doing that in the END section and not calling split(), and so not doing anything that would use FS, I don't know what you were hoping to achieve with that. Maybe you meant:
OFS=
BUT you aren't doing anything that would use OFS, and your desired output isn't comma-separated, so I don't know what you'd be hoping to achieve with that either.
If that's not enough info for you to solve your problem yourself, then reduce your example to something with 10 columns or fewer so we don't have to read a lot of irrelevant info to help you.
Program 1
This works, except the output format is different from what you request:
awk 'FNR==1 { next }
FNR == NR { file1[$5,$7,$8] = $5 " " $7 " " $8 }
FNR != NR { file2[$3,$5,$6] = $3 " " $5 " " $6 }
END { print "Match:"; for (k in file1) if (k in file2) print file1[k] # Or file2[k]
print "Missing in file1:"; for (k in file2) if (!(k in file1)) print file2[k]
print "Missing in file2:"; for (k in file1) if (!(k in file2)) print file1[k]
}' file1 file2
Output 1
Match:
48807637 C T
48719928 AT -
Missing in file1:
48719928 A G
Missing in file2:
Program 2
If you must have each set of values in a category comma-separated on a single line, then:
awk 'FNR==1 { next }
FNR == NR { file1[$5,$7,$8] = $5 " " $7 " " $8 }
FNR != NR { file2[$3,$5,$6] = $3 " " $5 " " $6 }
END {
printf "Match"
pad = " "
for (k in file1)
{
if (k in file2)
{
printf "%s%s", pad, file1[k]
pad = ", "
}
}
print ""
printf "Missing in file1"
pad = " "
for (k in file2)
{
if (!(k in file1))
{
printf "%s%s", pad, file2[k]
pad = ", "
}
}
print ""
printf "Missing in file2"
pad = " "
for (k in file1)
{
if (!(k in file2))
{
printf "%s%s", pad, file1[k]
pad = ", "
}
}
print ""
}' file1 file2
The code is a little bigger, but the format used exacerbates the difference. The change is all in the END block; the other code is unchanged. The sequences of actions in the END block no longer fit comfortably on a single line, so they're spread out for readability. You can apply a liberal smattering of semicolons and concatenate the lines to shrink the apparent size of the program if you desire.
It's tempting to try a function for the printing, but the conditions just make it too tricky to be worthwhile, I think; I'm open to persuasion otherwise, though.
Output 2
Match 48807637 C T, 48719928 AT -
Missing in file1 48719928 A G
Missing in file2
This output will be a lot harder to parse than the one shown first, so doing anything automatically with it will be tricky. While there are only 3 entries to worry about, the line length isn't an issue. If you get to 3 million entries, the lines become very long and unmanageable.
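For what it's worth, the function approach described above as tricky can be made to work by passing the two arrays plus an invert flag. This is only a sketch along the lines of Program 2, not the answerer's code (sample files recreated so the run is self-contained):

```shell
cat > file1 <<'EOF'
Index Chromosomal Position Gene Inheritance Start End Ref Alt Func.refGene
98 48719928 FBN1 AD 48719928 48719929 AT - exonic
101 48807637 FBN1 AD 48807637 48807637 C T exonic
EOF
cat > file2 <<'EOF'
R_Index Chr Start End Ref Alt Func.IDP.refGene
36 chr15 48719928 48719929 AT - exonic
37 chr15 48719928 48719928 A G exonic
38 chr15 48807637 48807637 C T exonic
EOF

awk 'FNR==1 { next }
FNR == NR { file1[$5,$7,$8] = $5 " " $7 " " $8; next }
          { file2[$3,$5,$6] = $3 " " $5 " " $6 }
END {
    report("Match",            file1, file2, 0)
    report("Missing in file1", file2, file1, 1)
    report("Missing in file2", file1, file2, 1)
}
# print label, then every value of src whose key is in other (invert=0)
# or not in other (invert=1), comma-separated on one line
function report(label, src, other, invert,    k, pad) {
    printf "%s", label
    pad = " "
    for (k in src)
        if ((k in other) != invert) {
            printf "%s%s", pad, src[k]
            pad = ", "
        }
    print ""
}' file1 file2
```

The entry order within each line still depends on awk's unspecified for (k in ...) iteration order.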

awk - moving average in blocks of ascii-file

I have a big ascii-file that looks like this:
12,3,0.12,965.814
11,3,0.22,4313.2
14,3,0.42,7586.22
17,4,0,0
11,4,0,0
15,4,0,0
13,4,0,0
17,4,0,0
11,4,0,0
18,3,0.12,2764.86
12,3,0.22,2058.3
11,3,0.42,2929.62
10,4,0,0
10,4,0,0
14,4,0,0
12,4,0,0
19,3,0.12,1920.64
20,3,0.22,1721.51
12,3,0.42,1841.55
11,4,0,0
15,4,0,0
19,4,0,0
11,4,0,0
13,4,0,0
17,3,0.12,2738.99
12,3,0.22,1719.3
18,3,0.42,3757.72
.
.
.
I want to calculate a selected moving average over three values with awk. The selection is based on the second and third columns:
A moving average should be calculated only for lines whose second column is 3.
Then I would like to calculate three moving averages, selected by the third column (which contains, for every "block", the same values in the same order).
The moving average shall be calculated over the fourth column.
I would like to output the whole line belonging to the second moving-average value, with the fourth column replaced by the result.
I know that sounds very complicated, so I will give an example of what I want to calculate and also the desired result:
(965.814+2764.86+1920.64)/3 = 1883.77
and output the result together with line 10:
18,3,0.12,1883.77
Then continue with the second, eleventh and eighteenth line...
The end result for my data example shall look like this:
18,3,0.12,1883.77
12,3,0.22,2697.67
11,3,0.42,4119.13
19,3,0.12,2474.83
20,3,0.22,1833.04
12,3,0.42,2842.96
I tried to calculate the moving average with the following code in awk, but I think I designed the script wrong because awk reports a syntax error for every "$2 == 3".
BEGIN { FS="," ; OFS = "," }
$2 == 3 {
a; b; c; d; e; f = 0
line1 = $0; a = $3; b = $4; getline
line2 = $0; c = $3; d = $4; getline
line3 = $0; e = $3; f = $4
$2 == 3 {
line11 = $0; a = $3; b += $4; getline
line22 = $0; c = $3; d += $4; getline
line33 = $0; e = $3; f += $4
$2 == 3 {
line111 = $0; a = $3; b += $4; getline
line222 = $0; c = $3; d += $4; getline
line333 = $0; e = $3; f += $4
}
}
$0 = line11; $3 = a; $4 = b/3; print
$0 = line22; $3 = c; $4 = d/3; print
$0 = line33; $3 = e; $4 = f/3
}
{print}
Can you help me understand how to correct my script (I think I have gaps in my understanding of the philosophy of awk), or should I start a completely new script because there is an easier solution out there ;-)
I also tried another idea:
BEGIN { FS="," ; OFS = "," }
i=0;
do {
i++;
a; b; c; d; e; f = 0
$2 == 3 {
line1 = $0; a = $3; b += $4; getline
line2 = $0; c = $3; d += $4; getline
line3 = $0; e = $3; f += $4
}while(i<3)
$0 = line1; $3 = a; $4 = b/3; print
$0 = line2; $3 = c; $4 = d/3; print
$0 = line3; $3 = e; $4 = f/3
}
{print}
This one also does not work; awk gives me two syntax errors (one at the "do" and the other after the "$2 == 3").
I changed and tried a lot in both scripts, and at some point they ran without errors but did not give the desired output at all, so I thought there had to be a more general problem.
I hope you can help me, that would be really nice!
Normalize your input
If you normalize your input using the right tools, the task of finding a solution becomes far easier.
My idea is to use awk to select the records where $2==3, and then use sort to group the data on the numerical value of the third column:
% echo '12,3,0.12,965.814
11,3,0.22,4313.2
14,3,0.42,7586.22
17,4,0,0
11,4,0,0
15,4,0,0
13,4,0,0
17,4,0,0
11,4,0,0
18,3,0.12,2764.86
12,3,0.22,2058.3
11,3,0.42,2929.62
10,4,0,0
10,4,0,0
14,4,0,0
12,4,0,0
19,3,0.12,1920.64
20,3,0.22,1721.51
12,3,0.42,1841.55
11,4,0,0
15,4,0,0
19,4,0,0
11,4,0,0
13,4,0,0
17,3,0.12,2738.99
12,3,0.22,1719.3
18,3,0.42,3757.72' | \
awk -F, '$2==3' | \
sort --field-separator=, --key=3,3 --numeric-sort --stable
12,3,0.12,965.814
18,3,0.12,2764.86
19,3,0.12,1920.64
17,3,0.12,2738.99
11,3,0.22,4313.2
12,3,0.22,2058.3
20,3,0.22,1721.51
12,3,0.22,1719.3
14,3,0.42,7586.22
11,3,0.42,2929.62
12,3,0.42,1841.55
18,3,0.42,3757.72
%
Reason on normalized input
As you can see, the situation is now much clearer and we can try to design an algorithm to output a 3-element running mean.
% awk -F, '$2==3' YOUR_FILE | \
sort --field-separator=, --key=3,3 --numeric-sort --stable | \
awk -F, '
$3!=prev {prev=$3
c=0
s[1]=0;s[2]=0;s[3]=0}
{old=new
new=$0
c = c+1; i = (c-1)%3+1; s[i] = $4
if(c>2)print old FS (s[1]+s[2]+s[3])/3}'
18,3,0.12,2764.86,1883.77
19,3,0.12,1920.64,2474.83
12,3,0.22,2058.3,2697.67
20,3,0.22,1721.51,1833.04
11,3,0.42,2929.62,4119.13
12,3,0.42,1841.55,2842.96
Oops, I forgot your requirement to SUBSTITUTE $4 with the running mean; I will come up with a solution, unless you're faster than me...
Edit: change the line
{old=new
to
{split(new,old,",")
and change the line
if(c>2)print old FS (s[1]+s[2]+s[3])/3}'
to
if(c>2) print old[1] FS old[2] FS old[3] FS (s[1]+s[2]+s[3])/3}'
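Applying that edit, the complete pipeline reads as follows (GNU sort long options as in the answer; YOUR_FILE recreated here with the sample data so the run is self-contained):

```shell
cat > YOUR_FILE <<'EOF'
12,3,0.12,965.814
11,3,0.22,4313.2
14,3,0.42,7586.22
17,4,0,0
11,4,0,0
15,4,0,0
13,4,0,0
17,4,0,0
11,4,0,0
18,3,0.12,2764.86
12,3,0.22,2058.3
11,3,0.42,2929.62
10,4,0,0
10,4,0,0
14,4,0,0
12,4,0,0
19,3,0.12,1920.64
20,3,0.22,1721.51
12,3,0.42,1841.55
11,4,0,0
15,4,0,0
19,4,0,0
11,4,0,0
13,4,0,0
17,3,0.12,2738.99
12,3,0.22,1719.3
18,3,0.42,3757.72
EOF

awk -F, '$2==3' YOUR_FILE | \
sort --field-separator=, --key=3,3 --numeric-sort --stable | \
awk -F, '
$3!=prev { prev=$3; c=0; s[1]=s[2]=s[3]=0 }   # new block: reset the window
{
    split(new, old, ",")          # previous record, now as fields old[1..4]
    new = $0
    c = c+1; i = (c-1)%3+1; s[i] = $4
    if (c>2)                      # window full: print the previous record,
                                  # with $4 replaced by the window mean
        print old[1] FS old[2] FS old[3] FS (s[1]+s[2]+s[3])/3
}'
```

For the sample data this prints the six expected records, grouped by the third column rather than in the original interleaved order.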

using awk or sed extract first character of each column and store it in a separate file

I have a file like below
AT AT AG AG
GC GC GG GC
I want to extract the first and last character of every column and store them in two different files.
File1:
A A A A
G G G G
File2:
T T G G
C C G C
My input file is very large. Is there a way I can do it in awk or sed?
With GNU awk for gensub():
gawk '{
print gensub(/.( |$)/,"\\1","g") > "file1"
print gensub(/(^| )./,"\\1","g") > "file2"
}' file
You can do similar in any awk with gsub() and a couple of variables.
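The portable alternative alluded to here can be sketched for any POSIX awk; this version uses substr() and a field loop with two accumulator variables instead of gsub() (output names file1/file2 as in the question):

```shell
printf 'AT AT AG AG\nGC GC GG GC\n' > file

awk '{
    h = t = ""
    for (i = 1; i <= NF; i++) {
        h = h (i > 1 ? " " : "") substr($i, 1, 1)           # first char of column
        t = t (i > 1 ? " " : "") substr($i, length($i), 1)  # last char of column
    }
    print h > "file1"
    print t > "file2"
}' file
```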
You can try this: write the following into test.awk
#!/usr/bin/awk -f
BEGIN {
# FS = "[\s]+"
outfile_head="file1"
outfile_tail="file2"
}
{
for(i = 1; i <= NF; i++) {
printf "%s ", substr($i, 1, 1) >> outfile_head            # awk substr() is 1-based
printf "%s ", substr($i, length($i), 1) >> outfile_tail
}
print "" >> outfile_head                                  # terminate each output line
print "" >> outfile_tail
}
then make it executable and run it:
chmod +x test.awk
./test.awk file
It's easy to do in two passes:
sed 's/\([^ ]\)[^ ]/\1/g' file > file1
sed 's/[^ ]\([^ ]\)/\1/g' file > file2
Doing it in one pass is a challenge...
Edit 1: Modified for your multiple line edit.
You could write a perl script and pass in the file names if you plan to edit it and share it. This loops through the file only once and does not require storing the file in memory.
File "seq.pl":
#!/usr/bin/perl
open(F1,">>$ARGV[1]");
open(F2,">>$ARGV[2]");
open(DATA,"$ARGV[0]");
while($line=<DATA>) {
$line =~ s/(\r|\n)+//g;
@pairs = split(/\s/, $line);
for $pair (@pairs) {
@bases = split(//, $pair);
print F1 $bases[0]." ";
print F2 $bases[$#bases]." ";
}
print F1 "\n";
print F2 "\n";
}
close(F1);
close(F2);
close(DATA);
Execute it like so:
perl seq.pl full.seq f1.seq f2.seq
File "full.seq":
AT AT AG AG
GC GC GG GC
AT AT GC GC
File "f1.seq":
A A A A
G G G G
A A G G
File "f2.seq":
T T G G
C C G C
T T C C