If a row has a particular entry, creating a blank line in the output - awk

I have input.txt like so:
237 #
0 2 3 4 0. ABC
ABC
DEF
# 237
0 1 4 7 2 0.
0 3 8 9 1 0. GHI
XYZ
(a) If a row contains the symbol #, then, in the output, I want a newline/blank line.
(b) If a row starts with a 0 and contains 0., then the entries of that row up to, but not including, the terminating 0. should be displayed.
The following script accomplishes (b)
awk '{
for (i=1; i<NF; i++)
if($i == "0")
{arr[NR] = $i}
else
if ($i == "0.")
{break}
else
{arr[NR]=arr[NR]" "$(i)}}
($1 == "0") {print arr[NR]}
' input.txt > output.txt
so that the output is:
0 2 3 4
0 1 4 7 2
0 3 8 9 1
How can (a) be accomplished so that the output is:
// <----Starting newline
0 2 3 4
0 1 4 7 2
0 3 8 9 1

Try adding if ($0 ~ /#/) {print ""}, so:
awk '{
for (i=1; i<NF; i++)
if($i == "0")
{arr[NR] = $i}
else
if ($i == "0.")
{break}
else
{arr[NR]=arr[NR]" "$(i)}
if ($0 ~ /#/) {print ""}
}
($1 == "0") {print arr[NR]}
' input.txt > output.txt

Is this what you're trying to do?
$ awk '/#/{print ""} /^0/ && sub(/ 0\..*/,"")' file
0 2 3 4
0 1 4 7 2
0 3 8 9 1
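That condensed answer leans on a detail worth spelling out: sub() returns the number of substitutions it made, so /^0/ && sub(/ 0\..*/,"") is true exactly for lines that start with 0 and contain " 0.", and a true pattern with no action triggers awk's default action of printing the (already truncated) line. A quick sketch with the sample data fed inline:

```shell
# sub() returns how many replacements it made; a true pattern with no
# action means "print the (modified) line".
printf '237 #\n0 2 3 4 0. ABC\nABC\n# 237\n0 1 4 7 2 0.\n' |
awk '/#/{print ""} /^0/ && sub(/ 0\..*/,"")'
```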

Related

Looping through combinations of selected strings in specific columns and counting their occurrence

I have
A 34 missense fixed
A 33 synonymous fixed
B 12 synonymous var
B 34 missense fixed
B 34 UTR fixed
B 45 missense var
TRI 4 synonymous var
TRI 4 intronic var
3 3 synonymous fixed
I want to output the counts of the combinations missense && fixed, missense && var, synonymous && fixed, and synonymous && var, for each element in $1:
missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 0
TRI 0 0 0 1
3 0 0 1 0
I can do this way with 4 individual commands selecting for each combination and concatenating the outputs
awk -F'\t' '($3~/missense/ && $4~/fixed/)' file | awk -F'\t' '{count[$1"\t"$3"\t"$4]++} END {for (word in count) print word"\t"count[word]}' > out
But I would like to do this for all combinations at once. I've tried some variations of this but have not been able to make it work:
awk print a[i] -v delim=":" -v string='missense:synonymous:fixed:var' 'BEGIN {n = split(string, a, delim); for (i = 1; i <= n-2; ++i) {count[xxxx}++}} END ;for (word in count) print word"\t"count[word]}
You may use this awk with multiple arrays to hold different counts:
awk -v OFS='\t' '
{keys[$1]}
/missense fixed/ {++mf[$1]}
/missense var/ {++mv[$1]}
/synonymous fixed/ {++sf[$1]}
/synonymous var/ {++sv[$1]}
END {
print "-\tmissensefixed\tmissensevar\tsynonymousfixed\tsynonymousvar"
for (i in keys)
print i, mf[i]+0, mv[i]+0, sf[i]+0, sv[i]+0
}
' file | column -t
- missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
I have used column -t for tabular output only.
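One detail in that END block worth noting: mf[i]+0 and friends use +0 to coerce a missing array element (which evaluates as an empty string) into a printable 0. A minimal illustration of that awk behavior:

```shell
# An unset awk variable prints as the empty string; adding 0 forces
# numeric context, so it prints as 0 instead.
awk 'BEGIN { print "[" x "]", x + 0 }'
# prints: [] 0
```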
GNU awk supports arrays of arrays, so if it is your awk you can count your records with something as simple as num[$1][$3$4]++. The most complex part is the final human-friendly printing:
$ cat foo.awk
{ num[$1][$3$4]++ }
END {
printf(" missensefixed missensevar synonymousfixed synonymousvar\n");
for(r in num) printf("%3s%14d%12d%16d%14d\n", r, num[r]["missensefixed"],
num[r]["missensevar"], num[r]["synonymousfixed"], num[r]["synonymousvar"])}
$ awk -f foo.awk data.txt
missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
Using any awk in any shell on every Unix box with an assist from column to convert the tab-separated awk output to a visually tabular display if you want it:
$ cat tst.awk
BEGIN {
OFS = "\t"
numTags = split("missensefixed missensevar synonymousfixed synonymousvar",tags)
}
{
keys[$1]
cnt[$1,$3 $4]++
}
END {
for (tagNr=1; tagNr<=numTags; tagNr++) {
tag = tags[tagNr]
printf "%s%s", OFS, tag
}
print ""
for (key in keys) {
printf "%s", key
for (tagNr=1; tagNr<=numTags; tagNr++) {
tag = tags[tagNr]
val = cnt[key,tag]
printf "%s%d", OFS, val
}
print ""
}
}
$ awk -f tst.awk file
missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
$ awk -f tst.awk file | column -s$'\t' -t
missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
I'd highly recommend you always give every column a header string, though, so it doesn't make further processing of the data harder (e.g. reading it into Excel and sorting on headers). So if I were you, I'd add printf "key" (or something else that more accurately identifies that column's contents) as the first line of the END section (i.e. on a line immediately before the first for loop) so the first column gets a header too:
$ awk -f tst.awk file | column -s$'\t' -t
key missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
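Putting that suggestion together (a sketch, assuming the same tst.awk structure shown above; the only change is the printf "key" at the top of END):

```shell
awk '
BEGIN {
    OFS = "\t"
    numTags = split("missensefixed missensevar synonymousfixed synonymousvar", tags)
}
{ keys[$1]; cnt[$1, $3 $4]++ }
END {
    printf "key"                              # header for the key column
    for (tagNr = 1; tagNr <= numTags; tagNr++)
        printf "%s%s", OFS, tags[tagNr]
    print ""
    for (key in keys) {
        printf "%s", key
        for (tagNr = 1; tagNr <= numTags; tagNr++)
            printf "%s%d", OFS, cnt[key, tags[tagNr]]
        print ""
    }
}' file | column -s$'\t' -t
```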

Counter in awk if-else loop

Can you explain to me why this simple one-liner does not work? Thanks for your time.
awk 'BEGIN{i=1}{if($2 == i){print $0} else{print "0",i} i=i+1}' check
input text file with name "check":
a 1
b 2
c 3
e 5
f 6
g 7
desired output:
a 1
b 2
c 3
0 4
e 5
f 6
g 7
output received:
a 1
b 2
c 3
0 4
0 5
0 6
awk 'BEGIN{i=1}{ if($2 == i){print $0; } else{print "0",i++; print $0 } i++ }' check
increment i one more time in the else (you are inserting a new line)
print the current line in the else, too
this works only if there is a single line missing between the present lines; otherwise you need a loop printing the missing lines
Or simplified:
awk 'BEGIN{i=1}{ if($2 != i){print "0",i++; } print $0; i++ }' check
Yours is broken because:
you read the next line ("e 5"),
$2 is not equal to your counter,
you print the placeholder line and increment your counter (to 5),
you do not print the current line
you read the next line ("f 6")
goto 2
A while loop is warranted here -- that will also handle the case when you have gaps greater than a single number.
awk '
NR == 1 {prev = $2}
{
while ($2 > prev+1)
print "0", ++prev
print
prev = $2
}
' check
or, if you like impenetrable one-liners:
awk 'NR==1{p=$2}{while($2>p+1)print "0",++p;p=$2}1' check
All you need is:
awk '{while (++i<$2) print 0, i}1' file
Look:
$ cat file
a 1
b 2
c 3
e 5
f 6
g 7
k 11
n 14
$ awk '{while (++i<$2) print 0, i}1' file
a 1
b 2
c 3
0 4
e 5
f 6
g 7
0 8
0 9
0 10
k 11
0 12
0 13
n 14

Transforming multiple entries of data for the same ID into a row - awk

I have data in the following format:
ID Date X1 X2 X3
1 01/01/00 1 2 3
1 01/02/00 7 8 5
2 01/03/00 9 7 1
2 01/04/00 1 4 5
I would like to group measurements into new rows according to ID, so I end up with:
ID Date X1 X2 X3 Date X1_2 X2_2 X3_2
1 01/01/00 1 2 3 01/02/00 7 8 5
2 01/03/00 9 7 1 01/04/00 1 4 5
etc.
I have as many as 20 observations for a given ID.
So far I have tried the technique given by http://gadgetsytecnologia.com/da622c17d34e6f13e/awk-transpose-childids-column-into-row.html
The code I have tried so far is:
awk -F, OFS = '\t' 'NR >1 {a[$1] = a[$1]; a[$2] = a[$2]; a[$3] = a[$3];a[$4] = a[$4]; a[$5] = a[$5] OFS $5} END {print "ID,Date,X1,X2,X3,Date_2,X1_2, X2_2 X3_2'\t' for (ID in a) print a[$1:$5] }' file.txt
The file is a tab delimited file. I don't know how to manipulate the data, or to account for the fact that there will be more than two observations per person.
Just keep track of what was the previous first field. If it changes, print the stored line:
awk 'NR==1 {print; next} # print header
prev && $1!=prev {print prev, line; line=""} # print on different $1
{prev=$1; $1=""; line=line $0} # store data and remove $1
END {print prev, line}' file # print trailing line
If you have tab-separated fields, just add -F"\t".
Test
$ awk 'NR==1 {print; next} prev && $1!=prev {print prev, line; line=""} {prev=$1; $1=""; line=line $0} END {print prev, line}' a
ID Date X1 X2 X3
1 01/01/00 1 2 3 01/02/00 7 8 5
2 01/03/00 9 7 1 01/04/00 1 4 5
You can try this (a gnu-awk solution):
gawk '
NR == 1 {
N = NF;
MAX = NF-1;
for(i=1; i<=NF; i++){ #store columns names
names[i]=$i;
}
next;
}
{
for(i=2; i<=N; i++){
a[$1][length(a[$1])+1] = $i; #store records for each id
}
if(length(a[$1])>MAX){
MAX = length(a[$1]);
}
}
END{
firstline = names[1];
for(i=1; i<=MAX; i++){ #print first line
column = int((i-1)%(N-1))+2
count = int((i-1)/(N-1));
firstline=firstline OFS names[column];
if(count>0){
firstline=firstline"_"count
}
}
print firstline
for(id in a){ #print each record in store
line = id;
for(i=1; i<=length(a[id]); i++){
line=line OFS a[id][i];
}
print line;
}
}
' input
input
ID Date X1 X2 X3
1 01/01/00 1 2 3
1 01/02/00 7 8 5
2 01/03/00 9 7 1
2 01/04/00 1 4 5
1 01/03/00 72 28 25
you get
ID Date X1 X2 X3 Date_1 X1_1 X2_1 X3_1 Date_2 X1_2 X2_2 X3_2
1 01/01/00 1 2 3 01/02/00 7 8 5 01/03/00 72 28 25
2 01/03/00 9 7 1 01/04/00 1 4 5

extract columns with awk

I have some text files as follows
293 800 J A 0 0 162
294 801 J R - 0 0 67
295 802 J P - 0 0 56
298 805 J G S S- 0 0 22
313 820 J R T 4 S- 0 0 152
I would like to print column 4 if column 5 is empty.
desired output
>filename
ARP
I used the following code. But this code prints only the filenames.
awk '{
if (FNR == 1 ) print ">" FILENAME
if ($5 == "") {
printf $4
}
}
END { printf "\n"}' *.txt
Here's one way using GNU awk:
awk 'BEGIN { FIELDWIDTHS="5 4 2 3 3 2 7 4 3" } FNR==1 { print ">" FILENAME } $5 == " " { sub(/ $/, "", $4); printf $4 } END { printf "\n" }' file.txt
Result:
>file.txt
ARP
This is not an elegant solution by any means and it is specific to this file.
You can do something like this
cut -c1-15 yourtext | awk '$5 {print $4}'
where 15 is the number of characters including column 5.
I do strongly agree with steve's suggestion to use a better alternative for your files. Or at least put a dummy/error value instead of leaving columns blank.
awk '{if(substr($0,15,1)~/ /)printf("%s",$4);}' your_file
tested below:
> cat temp
293 800 J A 0 0 162
294 801 J R - 0 0 67
295 802 J P - 0 0 56
298 805 J G S S- 0 0 22
313 820 J R T 4 S- 0 0 152
> awk '{if(substr($0,15,1)~/ /)printf("%s",$4);}' temp
ARP>
This is a starting point assuming the variations in column numbers stay the same.
awk '$5 !="" && NF<=8 {printf $4}END{print "\n"}' data.txt
yields
ARP
you can graft on the parts to display the filename.
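For instance, grafting the filename display onto that approach could look like this (a sketch, assuming the same sample file; data.txt is a placeholder name):

```shell
# FNR==1 fires at the start of each input file, so the filename header
# is printed once per file even with multiple file arguments.
awk 'FNR == 1 { print ">" FILENAME }
     $5 != "" && NF <= 8 { printf "%s", $4 }
     END { print "" }' data.txt
```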

How to select lines in which the sum of negative numbers in the line is equal or less than -3 (with awk)?

I have a sample file like this:
probeset_id submitted_id chr snp_pos alleleA alleleB 562_201 562_202 562_203 562_204 562_205 562_206 562_207 562_208 562_209 562_210
AX-75448119 Chr1_41908741 1 41908741 T C 0 -1 0 -1 0 0 0 0 0 -1
AX-75448118 Chr1_41908545 1 41908545 T A 2 -1 2 2 2 -1 -1 2 2 0
AX-75448118 Chr1_41908545 1 41908545 T A 1 2 -1 2 2 -1 2 -1 2 0
and I want to exclude the lines that have a sum of negative numbers equal to or less than -3. I know how to calculate the sum of the negative numbers and print it, with this code:
awk 'BEGIN{sum=0} NR >=2 {for (i=7;i<=NF;i++) if ($i ~ /^-/) sum += $i; print $1,$2,$3,$4,$5,$6,sum; sum=0}' test.txt > out.txt
But I don't want to do this; I just want to calculate the sum of the negative numbers and then select the lines that have a sum less than or equal to -3.
These are the commands that I wrote:
awk 'BEGIN{sum=0} NR >=2 {for (i=7;i<=NF;i++) if ($i ~ /^-/) sum += $i; sum=0}' test.txt | awk 'sum <= -3' > out.txt
I get no errors but the out.txt file is empty!
awk 'BEGIN{sum=0} NR >=2 {for (i=7;i<=NF;i++) if ($i ~ /^-/) sum += $i; if sum >= -3 pritn R; sum=0}' test.txt | wc -l
which I get:
^ syntax error
And how can I make sure that the first line (the header) will also be in my output file?
So I would like to have this output:
probeset_id submitted_id chr snp_pos alleleA alleleB 562_201 562_202 562_203 562_204 562_205 562_206 562_207 562_208 562_209 562_210
AX-75448119 Chr1_41908741 1 41908741 T C 0 -1 0 -1 0 0 0 0 0 -1
AX-75448118 Chr1_41908545 1 41908545 T A 2 -1 2 2 2 -1 -1 2 2 0
Try this:
awk '
NR == 1 {
print
next
}
{
negsum=0
for(i=7; i<=NF; i++) {
if ($i<0) {
negsum += $i
}
}
}
negsum <= -3' test.txt > out.txt
Your first try fails because you use two different invocations of awk. These are two different programs being run, and the second knows nothing about the sum variable in the first, so it uses the default value sum = 0.
The second try just has a mis-spelling. You used pritn instead of print.
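A quick demonstration of the first point (a sketch; each awk in a pipeline is a separate process with its own variables):

```shell
# 'sum' set in the first awk does not exist in the second; the second
# awk sees only the first one's stdout as its input.
printf '5\n' | awk '{sum = $1} END {print "first:", sum}' \
             | awk '{print} END {print "second:", sum + 0}'
# prints: first: 5
#         second: 0
```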
What you described can be easier to code with proper formatting. (Not that I'd always resort to using an editor when scripting awk...)
The first condition (NR == 1) just ensures we print the first line as is.
awk '
NR == 1 { print }
NR >= 2 {
sum = 0;
for (i=7;i<=NF;i++) {
if ($i < 0)
sum += $i;
}
if (sum <= -3)
print;
}
' test.txt > out.txt
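As a quick check, running that filter on the header plus the first two sample rows (a sketch; both rows have a negative sum of -3, so both survive, and the header passes via the NR == 1 rule):

```shell
# The action block resets and accumulates negsum per line; the bare
# pattern "negsum <= -3" then prints qualifying lines (default action).
printf '%s\n' \
  'probeset_id submitted_id chr snp_pos alleleA alleleB 562_201 562_202 562_203 562_204 562_205 562_206 562_207 562_208 562_209 562_210' \
  'AX-75448119 Chr1_41908741 1 41908741 T C 0 -1 0 -1 0 0 0 0 0 -1' \
  'AX-75448118 Chr1_41908545 1 41908545 T A 2 -1 2 2 2 -1 -1 2 2 0' |
awk '
NR == 1 { print; next }
{
    negsum = 0
    for (i = 7; i <= NF; i++)
        if ($i < 0)
            negsum += $i
}
negsum <= -3'
```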