I have a text file like this small example:
chr10:102721669-102724893 3217 3218 5
chr10:102721669-102724893 3218 3219 1
chr10:102721669-102724893 3219 3220 5
chr10:102721669-102724893 421 422 1
chr10:102721669-102724893 858 859 2
chr10:102539319-102568941 13921 13922 1
chr10:102587299-102589074 1560 1561 1
chr10:102587299-102589074 1565 1566 1
chr10:102587299-102589074 1595 1596 1
chr10:102587299-102589074 944 945 1
the expected output would look like this:
chr10:102721669-102724893 3217 3218 5 CA
chr10:102721669-102724893 3218 3219 1 CA
chr10:102721669-102724893 3219 3220 5 CA
chr10:102721669-102724893 421 422 1 BA
chr10:102721669-102724893 858 859 2 BA
chr10:102539319-102568941 13921 13922 1 NON
chr10:102587299-102589074 1560 1561 1 CA
chr10:102587299-102589074 1565 1566 1 CA
chr10:102587299-102589074 1595 1596 1 CA
chr10:102587299-102589074 944 945 1 BA
The input has 4 tab-separated columns and the output has one more column with 3 different classes (CA, NON or BA).
1- if the value in the 1st column is not repeated anywhere else in the input, that line is classified as NON in the 5th column of the output.
2- if (the number just after ":" in the 1st column + the 2nd column) - (the number just after "-" in the 1st column) is smaller than -30 (meaning -31 or smaller), that line is classified as BA. For example, in the last line:
(102587299 + 944) - 102589074 = -831, so this line is classified as BA.
3- if (the number just after ":" in the 1st column + the 2nd column) - (the number just after "-" in the 1st column) is equal to or bigger than -30 (for example -30 or -29), that line is classified as CA. For example, in the 1st line:
(102721669 + 3217) - 102724893 = -7
I am trying to do that in awk:
awk -F "\t"":""-" '{if($2+$4-$3 < -30) ; print $7 = BA, if($2+$4-$3 >= -30) ; print $7 = CA}' file.txt > out.txt
but it does not return what I expect. Do you know how to fix it?
Try
$ awk 'BEGIN{FS=OFS="\t"} NR==FNR{a[$1]++; next}
{ split($1, b, /[\t:-]/);
$5 = a[$1]==1 ? "NON" : (b[2]+$2-b[3]) < -30 ? "BA" : "CA" }
1' file.txt file.txt
chr10:102721669-102724893 3217 3218 5 CA
chr10:102721669-102724893 3218 3219 1 CA
chr10:102721669-102724893 3219 3220 5 CA
chr10:102721669-102724893 421 422 1 BA
chr10:102721669-102724893 858 859 2 BA
chr10:102539319-102568941 13921 13922 1 NON
chr10:102587299-102589074 1560 1561 1 BA
chr10:102587299-102589074 1565 1566 1 BA
chr10:102587299-102589074 1595 1596 1 BA
chr10:102587299-102589074 944 945 1 BA
BEGIN{FS=OFS="\t"} sets both the input and output field separators to tab
NR==FNR{a[$1]++; next} counts how many times the first field is present in the file. The input file is passed twice, so that on the second pass we can make the decision based on the count
split($1, b, /[\t:-]/) splits the first column further; the results are saved in the b array
the rest of the code assigns the 5th field depending on the given conditions and prints the modified line
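For example, a quick throwaway check (not part of the solution) of what that split produces for one ID:
$ echo 'chr10:102721669-102724893' | awk '{split($0, b, /[\t:-]/); print b[1], b[2], b[3]}'
chr10 102721669 102724893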
Further reading: Idiomatic awk and the split function.
How would I use awk to create a new field that has $2 + a constant value?
I am planning to cycle through a list of values, but I wouldn't mind using a one-liner for each command.
PseudoCode
awk '$1 == Bob {$4 = $2 + 400}' file
Sample Data
Philip 13 2
Bob 152 8
Bob 4561 2
Bob 234 36
Bob 98 12
Rey 147 152
Rey 15 1547
Expected Output
Philip 13 2
Bob 152 8 408
Bob 4561 2 402
Bob 234 36 436
Bob 98 12 412
Rey 147 152
Rey 15 1547
Just quote Bob; also, you want to add to the third field, not the second:
$ awk '$1=="Bob" {$4=$3+400}1' file | column -t
Philip 13 2
Bob 152 8 408
Bob 4561 2 402
Bob 234 36 436
Bob 98 12 412
Rey 147 152
Rey 15 1547
Here, check if $1 is equal to Bob and, if so, reconstruct the record ($0) by appending FS and $2 + 400 to $0. Here FS is the field separator used between the 3rd and 4th fields. The 1 at the end tells awk to take the default action, which is print.
awk '$1=="Bob"{$0=$0 FS $2 + 400}1' file
Philip 13 2
Bob 152 8 552
Bob 4561 2 4961
Bob 234 36 634
Bob 98 12 498
Rey 147 152
Rey 15 1547
Or, if you want to keep the name (Bob) as a variable:
awk -vname="Bob" '$1==name{$0=$0 FS $2 + 400}1' file
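Note that both forms above append $2 + 400, which is why their output (552, 4961, ...) differs from the expected output in the question; to match the expected output, a small variant of the same idea would use the 3rd field instead:
awk -v name="Bob" '$1==name{$0=$0 FS $3 + 400}1' file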
1st solution: Could you please try the following too. I am using awk's built-in NF variable here: $NF denotes the value of the last column of the current line, and assigning to $(NF+1) creates an additional column whenever the condition that the 1st field is the string Bob is TRUE.
awk '{$(NF+1)=$1=="Bob"?400+$NF:""} 1' OFS="\t" Input_file
2nd solution: In case we don't want to create a new field and simply want to print the values as per the condition, then try the following ($1=$1 forces awk to rebuild the record with the tab OFS); this should be faster, I believe.
awk 'BEGIN{OFS="\t"}{$1=$1;print $0,$1=="Bob"?400+$NF:""}' Input_file
Output will be as follows.
Philip 13 2
Bob 152 8 408
Bob 4561 2 402
Bob 234 36 436
Bob 98 12 412
Rey 147 152
Rey 15 1547
Explanation: adding the explanation for the above code now.
awk ' ##Starting awk program here.
{
$(NF+1)=$1=="Bob"?400+$NF:"" ##Creating a new last field here whose value depends on the condition check:
##if the 1st field contains the string Bob, then it is 400 plus the last field's value, otherwise it is NULL.
}
1 ##awk works on a pattern-then-action model, so mentioning 1 makes the condition TRUE here; with NO action defined, the default action of printing the current line happens.
' OFS="\t" Input_file ##Setting OFS (the output field separator) to TAB here and mentioning the Input_file name.
I have a text file like this small example:
chr10:103909786-103910082 147 148 24 BA
chr10:103909786-103910082 149 150 11 BA
chr10:103909786-103910082 150 151 2 BA
chr10:103909786-103910082 152 153 1 BA
chr10:103909786-103910082 274 275 5 CA
chr10:103909786-103910082 288 289 15 CA
chr10:103909786-103910082 294 295 4 CA
chr10:103909786-103910082 295 296 15 CA
chr10:104573088-104576021 2925 2926 134 CA
chr10:104573088-104576021 2926 2927 10 CA
chr10:104573088-104576021 2932 2933 2 CA
chr10:104573088-104576021 58 59 1 BA
chr10:104573088-104576021 689 690 12 BA
chr10:104573088-104576021 819 820 33 BA
In this file there are 5 tab-separated columns. The first column is considered the ID; for example, in the first row the whole "chr10:103909786-103910082" is the ID.
1- in the 1st step I would like to filter out rows based on the 4th column:
if the number in the 4th column is less than 10 and the group in the 5th column of the same row is BA, that row is filtered out; likewise, if the number in the 4th column is less than 5 and the group in the 5th column is CA, that row is filtered out.
2- in the 2nd step I want to summarize each group per ID. The 1st column contains repeated values which represent the same ID, and each ID has both BA and CA in the 5th column. To get one value for CA, I add up all the values in the 4th column that belong to the same ID and are classified as CA, and I do the same for BA.
3- in the 3rd step I want the ratio CA/BA of those two sums as the final value for each ID, so in the output every ID appears only once. The expected output for the small example would look like this:
1- after filtration:
chr10:103909786-103910082 147 148 24 BA
chr10:103909786-103910082 149 150 11 BA
chr10:103909786-103910082 274 275 5 CA
chr10:103909786-103910082 288 289 15 CA
chr10:103909786-103910082 295 296 15 CA
chr10:104573088-104576021 2925 2926 134 CA
chr10:104573088-104576021 2926 2927 10 CA
chr10:104573088-104576021 689 690 12 BA
chr10:104573088-104576021 819 820 33 BA
2- after summarizing each group (CA and BA):
chr10:103909786-103910082 147 148 35 BA
chr10:103909786-103910082 274 275 35 CA
chr10:104573088-104576021 2925 2926 144 CA
chr10:104573088-104576021 819 820 45 BA
3- the final output(this ratio is made using the values in 4th column):
chr10:103909786-103910082 1
chr10:104573088-104576021 3.2
in the above lines, 1 = 35/35 and 3.2 = 144/45.
I am trying to do that in awk:
awk 'ID==$1 {
if (ID) {
print ID, a["CA"]/a["BA"]; a["CA"]=a["BA"]=0;
}
ID=$1
}
$5=="BA" && $4>=10 || $5=="CA" && $4>=5 { a[$5]+=$4 }
END{ print ID, a["CA"]/a["BA"] }' file.txt
I tried to use the code but did not succeed. This code returns one number: in fact it sums all the CA values and divides by the sum of all the BA values, but I want to do that per ID and get the ratio per ID. Do you know how to solve the problem and correct the code?
awk '$4 >= 5 && $5 == "CA" { a[$1]+=$4 }
$4 >= 10 && $5 == "BA" { b[$1]+=$4 }
END{ for ( i in a) print i,a[i]/b[i]}' file
output:
chr10:103909786-103910082 1
chr10:104573088-104576021 3.2
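Note this assumes every ID that still has CA rows after filtering also has at least one BA row; if that is not guaranteed, a slightly more defensive sketch (not needed for the sample data) skips IDs with no BA sum to avoid dividing by zero:
awk '$4 >= 5 && $5 == "CA" { a[$1]+=$4 }
     $4 >= 10 && $5 == "BA" { b[$1]+=$4 }
     END{ for (i in a) if (b[i] > 0) print i, a[i]/b[i] }' file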
I have a text file with 5 columns. If the number in the 5th column is less than the number in the 3rd column, swap the 4th and 5th columns with the 2nd and 3rd columns. If the number in the 5th column is greater than the number in the 3rd column, leave that line as it is.
1EAD A 396 B 311
1F3B A 118 B 171
1F5V A 179 B 171
1G73 A 162 C 121
1BS0 E 138 G 230
Desired output
1EAD B 311 A 396
1F3B A 118 B 171
1F5V B 171 A 179
1G73 C 121 A 162
1BS0 E 138 G 230
$ awk '{ if ($5 >= $3) print $0; else print $1"\t"$4"\t"$5"\t"$2"\t"$3; }' foo.txt
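An equivalent sketch (same condition as above) swaps the fields in place and lets the trailing 1 print every line; note that swapped lines are then rebuilt with the default output separator (a space), so add BEGIN{OFS="\t"} if tab-separated output is wanted:
$ awk '$5 < $3 { t=$2; $2=$4; $4=t; t=$3; $3=$5; $5=t } 1' foo.txt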
I have this text file format:
01 contig00041 1 878 + YP_003990830.1 metalloendopeptidase, glycoprotease family Geobacillus sp. Y4.1MC1 100.00 291 1 291 47 337 0.0 592 #line 1
01 contig00041 1241 3117 - YP_002948419.1 ABC transporter Geobacillus sp. WCH70 84.94 #line 2
37.31 624 #line 3
260 1 #line 4
321 624 #line 5
532 23 #line 6
12 644 #line 7
270 0.0 #line 8
3e-37 1046 #line 9
154 #line 10
I have to detect a line containing 8 columns (line 2), transpose the second column of the following seven lines (lines 3 - 9) to the end of the 8-column line, and finally exclude line 10. This pattern repeats throughout a large text file, but it is not frequent (30 times, in a file of 2000 lines). Is it possible to do this using awk?
The edited text file must look like the following text:
01 contig00041 1 878 + YP_003990830.1 metalloendopeptidase, glycoprotease family Geobacillus sp. Y4.1MC1 100.00 291 1 291 47 337 0.0 592 #line 1
01 contig00041 1241 3117 - YP_002948419.1 ABC transporter Geobacillus sp. WCH70 84.94 624 1 624 23 644 0.0 1046 #line 2
awk 'NF == 12 { t = $0; for (i = 1; i <= 7; ++i) { r = getline; if (r < 1) break; t = t "\t" $2; } print t; next; } NF > 12' temp.txt
Output:
01 contig00041 1 878 + YP_003990830.1 metalloendopeptidase, glycoprotease family Geobacillus sp. Y4.1MC1 100.00 291 1 291 47 337 0.0 592
01 contig00041 1241 3117 - YP_002948419.1 ABC transporter Geobacillus sp. WCH70 84.94 624 1 624 23 644 0.0 1046
It automatically prints lines having more than 12 fields.
If it detects a line having exactly 12 fields, it reads the next 7 lines with getline, concatenates their second columns to it and prints the result.
Any other line is ignored; in particular, the trailing single-field line (#line 10) matches neither pattern and is therefore dropped.
edited to only add the second column of the lines with two columns.
I think this does what you want:
awk 'NF >= 8 { a[++i] = $0 } NF == 2 { a[i] = a[i] " " $2 } END { for (j = 1; j <= i; ++j) print a[j] }' file
For lines with at least 8 columns, add a new element to the array a. If the line has 2 columns, append its second field to the current array element. Once the whole file has been processed, go through the array and print all of the lines.
Output:
01 contig00041 1 878 + YP_003990830.1 metalloendopeptidase, glycoprotease family Geobacillus sp. Y4.1MC1 100.00 291 1 291 47 337 0.0 592
01 contig00041 1241 3117 - YP_002948419.1 ABC transporter Geobacillus sp. WCH70 84.94 624 1 624 23 644 0.0 1046
I wonder whether there is a way using awk to number the lines according to a field. For example,
Input
2334 332
2334 546
2334 675
7890 222
7890 134
234 45
.
.
.
Based on the 1st field, I would have the following output
Output
1 2334 332
1 2334 546
1 2334 675
2 7890 222
2 7890 134
3 234 45
.
.
.
I would be grateful for your help.
Here's how, using the !a[$1]++ idiom: a[$1]++ returns the previous count for that value of the first field, so the block runs (and the counter c is incremented) only the first time a given $1 is seen:
awk '!a[$1]++{c++}{print c, $0}' file
1 2334 332
1 2334 546
1 2334 675
2 7890 222
2 7890 134
3 234 45
awk 'last != $1 { line = line + 1 } { last = $1; print line, $0 }'
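Note that this second one-liner increments the counter whenever the first field changes from one line to the next, so it assumes equal values of the first field are grouped together (as they are in the sample); the !a[$1]++ version above keeps one number per distinct value even if the groups were interleaved.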