How to sum up every nth line in awk?

I want to output the sum of every N lines, for example, every 4 lines:
cat file
1
11
111
1111
2
22
222
2222
3
33
333
3333
The output should be:
6 #(1+2+3)
66 #(11+22+33)
666 #(111+222+333)
6666 #(1111+2222+3333)
How can I do this with awk?

Basically you can use the following awk command:
awk -vN=4 '{s[NR%N]+=$0}END{for(i=0;i<N;i++){print s[i]}}' input.txt
You can choose N as you like; an N=2 example follows at the end of this answer.
Output:
6666
6
66
666
But the groups come out in the wrong order: NR%N is 0 for lines N, 2N, 3N, ..., so the last group of each cycle lands in s[0] and is printed first. You can fix this by shifting the line number by -1, so that lines 1 through N map to indices 0 through N-1:
awk -vN=4 '{s[(NR-1)%N]+=$0}END{for(i=0;i<N;i++){print s[i]}}' input.txt
Output:
6
66
666
6666
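
For example, with N=2 the same 12-line input produces the sums of the odd- and even-numbered lines:
awk -v N=2 '{s[(NR-1)%N]+=$0}END{for(i=0;i<N;i++){print s[i]}}' input.txt
Output:
672
6732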

Related

count, groupby with sed, or awk

I want to perform two different sort-and-count operations on a file, based on each line's content.
I need to take the first column of a .tsv file. For every line that starts with three digits, I would like to group by those first three digits and keep only them; for everything else, just sort and count occurrences of the whole value in the first column.
Sample data:
687/878 9
890987 4
01a 55
1b 8743917
890a 34
abcdee 987
dfeqfe fkdjald
890897 34213
6878853 834
32fasd 53891
abcdee 8794371
abd 873
result:
687 2
890 3
01a 1
1b 1
32fasd 1
abd 1
dfeqfe 1
abcdee 2
I would also appreciate a solution that takes into account a sample input like
687/878 9
890987 4
01a 55
1b 8743917
890a 34
abcdee 987
dfeqfe 545
890897 34213
6878853 834
(632)fasd 53891
(88)abcdee 8794371
abd 873
So the first column may have values containing characters like (, ), #, ' and so on.
The output should have two columns: the first with the extracted values, and the second with the count of each extracted value from the source file.
Again, the preferred output format is TSV.
So I need to extract all values that start with ^\d\d\d and, for those, sort and count unique values by the first three digits; then, in a second pass, do the same for each line that does not start with three digits, but this time keep the whole column value and sort and count by that.
What I have tried: | sort | uniq -c | sort -nr for the lines that do start with ^\d\d\d, and the same for those that do not match the regex. But is there a more elegant way using either sed or awk?
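Spelled out, what I tried looks something like this (a sketch; file.tsv is a placeholder name and the file is assumed to be tab-separated):
# lines that start with three digits: keep only those digits, then count
grep -E '^[0-9]{3}' file.tsv | cut -c1-3 | sort | uniq -c | sort -nr
# all other lines: count the whole first column
grep -Ev '^[0-9]{3}' file.tsv | cut -f1 | sort | uniq -c | sort -nr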
$ cat tst.awk
BEGIN { FS=OFS="\t" }
{ cnt[/^[0-9]{3}/ ? substr($1,1,3) : $1]++ }   # key: the first three digits if the line starts with them, else the whole first field
END {
    for (key in cnt) {
        print (key !~ /^[0-9]{3}/), cnt[key], key, cnt[key]   # the two leading fields are sort helpers only
    }
}
$ awk -f tst.awk file | sort -k1,2n | cut -f3-
687 1
890 2
abcdee 1
(The two leading fields printed in the END block exist only for sorting: sort -k1,2n puts the digit groups first and orders each block by count, and cut -f3- then strips those helper fields off.)
You can try Perl
$ cat nefijaka.txt
687 878 9
890987 4
890a 34
abcdee 987
$ perl -lne ' /^(\d{3})|(\S+)/; $x=$1?$1:$2; $kv{$x}++; END { print "$_\t$kv{$_}" for (sort keys %kv) } ' nefijaka.txt
687 1
890 2
abcdee 1
$
You can pipe it to sort to get the values sorted:
$ perl -lne ' /^(\d{3})|(\S+)/; $x=$1?$1:$2; $kv{$x}++; END { print "$_\t$kv{$_}" for (sort keys %kv) } ' nefijaka.txt | sort -k2 -nr
890 2
abcdee 1
687 1
EDIT1:
$ cat nefijaka.txt2
687 878 9
890987 4
890a 34
abcdee 987
a word and then 23
$ perl -lne ' /^(\d{3})|(.+?\t)/; $x=$1?$1:$2; $x=~s/\t//g; $kv{$x}++; END { print "$_\t$kv{$_}" for (sort keys %kv) } ' nefijaka.txt2
687 1
890 2
a word and then 1
abcdee 1
$

Summing numbers in a file

I have a file which looks like this:
aaa 15
aaa 12
bbb 131
bbb 12
ccc 123
ddddd 1
ddddd 2
ddddd 3
I would like to get a sum for each unique element in the left column, like this, and also a count of how many entries were summed for each:
aaa 27 - 2
bbb 143 - 2
ccc 123 - 1
ddddd 6 - 3
How would I accomplish this with AWK or something similar?
You can do it in awk by collecting the sums into two arrays, using column 1 as the key to both arrays (then pipe to sort if desired):
awk '{sums[$1] += $2; counts[$1] += 1}
END {for (key in sums) {print key, sums[key], "-", counts[key]}}' file | sort
Output:
aaa 27 - 2
bbb 143 - 2
ccc 123 - 1
ddddd 6 - 3
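
If you are using GNU awk, you can skip the external sort and order the traversal inside awk itself; this relies on the gawk-only PROCINFO["sorted_in"] feature, so treat it as a sketch:
awk '{sums[$1] += $2; counts[$1] += 1}
     END {PROCINFO["sorted_in"] = "@ind_str_asc"   # gawk only: iterate keys in string order
          for (key in sums) {print key, sums[key], "-", counts[key]}}' file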

How to calculate the difference of each line with awk?

I want to calculate the difference between each line's value and the previous one with awk.
Can anyone help me with this?
cat x.txt
a 100
b 102
c 110
Desired awk output:
a 100
b 102 2
c 110 8
Try:
awk 'NR>1{$0=$0" "$2-v}{v=$2;print $0}' x.txt
Output:
a 100
b 102 2
c 110 8
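
The same logic written out with comments, if the one-liner is hard to read (v simply carries the previous line's second field into the next record):
awk '
NR > 1 { $0 = $0 " " ($2 - v) }   # append the difference to every line except the first
{ v = $2; print }                 # remember this line's value, then print
' x.txt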

Obtaining "consensus" results from two different files using awk

I have file1 as a result of a first operation; it has the following structure:
201 12 0.298231 8.8942
206 13 -0.079795 0.6367
101 34 0.86348 0.7456
301 15 0.215355 4.6378
303 16 0.244734 5.9895
and file2 as a result of a different operation, with the same type of structure.
File2 sample:
204 60 -0.246038 6.0535
304 83 -0.246209 6.0619
101 34 -0.456629 6.0826
211 36 -0.247003 6.1011
305 83 -0.247134 6.1075
206 46 -0.247485 6.1249
210 39 -0.248066 6.1537
107 41 -0.248201 6.1603
102 20 -0.248542 6.1773
I would like to select the field 1 and field 2 values whose field 3 value is higher than a threshold (0.8) in file1, and then, for those selected field 1 and field 2 pairs, keep only the ones whose field 3 value in file2 is higher than another threshold in absolute value (abs(x) >= 0.4).
Note that although file1 and file2 have the same structure, their field 1 and field 2 values are not the same (not the same number of lines, etc.).
Can you do this with awk?
desired output
101 34
If you combine awk with standard Unix commands you can do the following:
sort file1.txt > sorted1.txt
sort file2.txt > sorted2.txt
Sorting will allow you to use join on the first field (which I assume is unique). After the join, field 3 of file1 is $3 and field 3 of file2 is $6. Using awk you can write the following:
join sorted1.txt sorted2.txt | awk 'function abs(value) {return (value < 0 ? -value : value)} $3 >= 0.8 && abs($6) >= 0.4 {print $1 "\t" $2}'
In essence, in the awk you first define a function to handle absolute values, then you print fields 1 and 2 only for the lines that meet the criteria on $3 and $6 (formerly field 3 of file1 and file2 respectively).
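If you would rather avoid sort and join entirely, a two-pass pure-awk sketch should also work (it reads file1 first and remembers the qualifying field 1/field 2 pairs):
awk 'function abs(v) {return (v < 0 ? -v : v)}
     NR == FNR {if ($3 >= 0.8) keep[$1,$2]; next}       # first pass (file1): remember pairs above 0.8
     ($1,$2) in keep && abs($3) >= 0.4 {print $1, $2}   # second pass (file2): apply the absolute-value threshold
    ' file1 file2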
Hope this helps...

How do I add up the second column based on column 1 in awk? For example, I used the following script

zcat *.gz | awk '{print $1}' |sort| uniq -c | sed 's/^[ ]\+//g' | cut -d' ' -f1 | sort | uniq -c | sort -k1n
I get the following output:
3 648
3 655
3 671
3 673
3 683
3 717
4 18
4 29
4 31
4 34
4 652
5 12
6 24
6 33
7 13
12 10
13 9
14 8
33 7
73 6
166 5
383 4
1178 3
3945 2
26692 1
I don't want repetitions in my 1st column. Example: if my first column is 3, I should add up all the values in the second column that are associated with 3. Thank you
A solution using arrays in awk (save it as num.awk):
{
    a[$1] = a[$1] + $2            # accumulate column 2, keyed by column 1
}
END {
    for (i in a)                  # print every key with its accumulated sum
        printf("%d\t%d\n", i, a[i])
}
Pipe the output through sort -n once more to have it in ascending order:
$ awk -f num.awk numbers | sort -n
3 4047
4 764
5 12
6 57
7 13
12 10
13 9
14 8
33 7
73 6
166 5
383 4
1178 3
3945 2
26692 1
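The same thing as a one-liner, assuming the input file is called numbers as above:
awk '{a[$1] += $2} END {for (i in a) print i "\t" a[i]}' numbers | sort -n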
awk '
NF == 1 { c = $1; print $0 }                  # single-field line: remember it as the current key and print it
NF > 1  { if (c == $1) { print "\t" $2 }      # same key as the previous line: print only the value, indented
          else         { c = $1; print $0 } } # new key: remember it and print the whole line
'
can do it too, but please note that the output indentation may not be exactly what you want, as I simply used a plain tab (\t) above.
HTH