awk count unique occurrences and print other columns
I have the following piece of code:
awk '{h[$1]++}; END { for(k in h) print k, h[k]}' ${infile} >> ${outfile2}
This does part of what I want: it prints the unique values and counts how many times each unique value has occurred. Now I also want to print the 2nd and 3rd columns for each unique value. For some reason, neither of the following seems to work:
awk '{h[$1]++}; END { for(k in h) print k, $2, $3, h[k]}' ${infile} >> ${outfile2}
awk '{h[$1]++}; END { for(k in h) print k, h[$2], h[$3], h[k]}' ${infile} >> ${outfile2}
The first prints the 2nd and 3rd columns of the last record read, whereas the second prints nothing except k and h[k].
${infile} would look like:
20600 33.8318 -111.9286 -1 0.00 0
20600 33.8318 -111.9286 -1 0.00 0
30900 33.3979 -111.8140 -1 0.00 0
29400 33.9455 -113.5430 -1 0.00 0
30600 33.4461 -111.7876 -1 0.00 0
20600 33.8318 -111.9286 -1 0.00 0
30900 33.3979 -111.8140 -1 0.00 0
30600 33.4461 -111.7876 -1 0.00 0
The desired output would be:
20600, 33.8318, -111.9286, 3
30900, 33.3979, -111.8140, 2
29400, 33.9455, -113.5430, 1
30600, 33.4461, -111.7876, 2
You were close, and you can do it all in awk. But if you are going to store the count keyed on field 1 and also have fields 2 and 3 available in END for output, you also need to store fields 2 and 3 in arrays indexed by field 1 (or whatever field you are keeping the count of). For example you could do:
awk -v OFS=', ' '
{ h[$1]++; i[$1]=$2; j[$1]=$3 }
END {
for (a in h)
print a, i[a], j[a], h[a]
}
' infile
Here h[$1] holds the count of the number of times each field-1 value is seen, with the array indexed by field 1. i[$1]=$2 captures field 2 indexed by field 1, and j[$1]=$3 captures field 3 indexed by field 1.
Then within END, all that is needed is to output field 1 (a, the index of h), i[a] (field 2), j[a] (field 3), and finally h[a], the count of the number of times that value of field 1 was seen.
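A side note (a small variation, not part of the original answer): because i[$1]=$2 runs on every record, the stored values are those from the last record seen with each key. That makes no difference here, since fields 2 and 3 are identical for every occurrence of a given field 1, but if they could differ and you wanted the values from the first occurrence instead, you could guard the assignment:
awk -v OFS=', ' '
{ h[$1]++ }
!($1 in i) { i[$1] = $2; j[$1] = $3 }   # only store fields 2 & 3 the first time this key is seen
END {
    for (a in h)
        print a, i[a], j[a], h[a]
}
' infile
The rest of the program is unchanged.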
Example Use/Output
Using your example data, you can just copy/middle-mouse-paste the code at the terminal with the correct filename, e.g.
$ awk -v OFS=', ' '
> { h[$1]++; i[$1]=$2; j[$1]=$3 }
> END {
> for (a in h)
> print a, i[a], j[a], h[a]
> }
> ' infile
20600, 33.8318, -111.9286, 3
29400, 33.9455, -113.5430, 1
30600, 33.4461, -111.7876, 2
30900, 33.3979, -111.8140, 2
Which provides the output desired. If instead you want fields 1, 2 & 3 grouped into a single key, you can use string-concatenation to build the array index from all three fields and then output the index and count (note that for (i in a) still visits the keys in arbitrary hash order, so this does not preserve the order of the records; pipe through sort if you need a particular ordering), e.g.
$ awk '{a[$1", "$2", "$3]++}END{for(i in a) print i ", " a[i]}' infile
20600, 33.8318, -111.9286, 3
30600, 33.4461, -111.7876, 2
29400, 33.9455, -113.5430, 1
30900, 33.3979, -111.8140, 2
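The desired output in the question happens to list the keys in their order of first appearance, which neither for (i in a) loop guarantees. If that ordering matters, one option (a sketch along the same lines, not from the original answer) is to record each key the first time it is seen and loop over that list in END:
awk '
{ key = $1", "$2", "$3 }
!(key in a) { order[++n] = key }    # remember each key in order of first appearance
{ a[key]++ }
END { for (k = 1; k <= n; k++) print order[k]", "a[order[k]] }
' infile
With the sample data this prints the groups in the 20600, 30900, 29400, 30600 order shown in the question.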
Look things over and let me know if you have further questions.
GNU datamash is a very handy tool for working on groups of columnar data in files, and it makes this trivial to do.
Assuming your file uses tabs to separate columns, as it appears to:
$ datamash -s --output-delimiter=, -g 1,2,3 count 3 < input.tsv
20600,33.8318,-111.9286,3
29400,33.9455,-113.5430,1
30600,33.4461,-111.7876,2
30900,33.3979,-111.8140,2
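If the columns turn out to be separated by runs of spaces rather than tabs (the sample in the question looks space-separated), GNU datamash's -W option should handle that by splitting on whitespace instead, e.g.:
datamash -s -W --output-delimiter=, -g 1,2,3 count 3 < input.tsv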
Though it's not much more complicated in awk, using a multidimensional array:
$ awk 'BEGIN { OFS=SUBSEP="," }
{ group[$1,$2,$3]++ }
END { for (g in group) print g, group[g] }' input.tsv
29400,33.9455,-113.5430,1
30600,33.4461,-111.7876,2
20600,33.8318,-111.9286,3
30900,33.3979,-111.8140,2
If you want sorted output instead of hash order for this one: if you're using GNU awk, add PROCINFO["sorted_in"] = "#ind_str_asc" in the BEGIN block; otherwise, pipe the output through sort.
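For reference, a sketch of the GNU-awk-only sorted variant (the only change from the command above is the PROCINFO assignment):
$ awk 'BEGIN { OFS = SUBSEP = ","; PROCINFO["sorted_in"] = "#ind_str_asc" }
       { group[$1,$2,$3]++ }
       END { for (g in group) print g, group[g] }' input.tsv
20600,33.8318,-111.9286,3
29400,33.9455,-113.5430,1
30600,33.4461,-111.7876,2
30900,33.3979,-111.8140,2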
You can also get the same effect by pipelining a bunch of utilities (including awk and uniq):
$ sort -k1,3n input.tsv | cut -f1-3 | uniq -c | awk -v OFS=, '{ print $2, $3, $4, $1 }'
20600,33.8318,-111.9286,3
29400,33.9455,-113.5430,1
30600,33.4461,-111.7876,2
30900,33.3979,-111.8140,2
Related
Filter a file removing lines just with all 0
I need to remove rows from a file with all "0" in the different columns. Example:
seq_1 seq_2 seq_3
data_0 0 0 1
data_1 0 1 4
data_2 0 0 0
data_3 6 0 2
From the example, I need a new file with just the row for data_2, because it has only "0" numbers. I was trying grep and awk but I don't know how to filter just between columns $2:4.
$ awk 'FNR>1{for(i=2;i<=NF;i++)if($i!=0)next}1' file
Explained:
$ awk 'FNR>1 {               # process all data records
    for(i=2;i<=NF;i++)       # loop all data fields
        if($i!=0)            # once a non-0 field is found
            next             # on to the next record
}1' file                     # output the header and all-0 records
Very poorly formatted output, as the sample data is in some kind of table format which it probably is not IRL:
seq_1 seq_2 seq_3
data_2 0 0 0
With awk you can rely on the field string representation:
$ awk 'NR>1 && $2$3$4=="000"' test.txt > result.txt
Using sed, find lines matching a pattern of one or more spaces followed by a 0 (three times) and, if found, print the line:
sed -nr '/\s+0\s+0\s+0/'p file.txt > new_file.txt
Or with awk, if columns 2, 3 and 4 are each equal to 0, print the line:
awk '{if ($2=="0" && $3=="0" && $4=="0"){print $0}}' file.txt > new_file.txt
EDIT: I ran the time command on these a bunch of times and the awk version is generally faster. That could add up if you are searching a large file. Of course, your mileage may vary!
how to keep newline(s) when selecting a given column with awk
Suppose I have a file like this (disclaimer: this is not fixed, I can have more than 7 rows and more than 4 columns):
R H A 23
S E A 45
T E A 34
U   A 35
Y T A 35
O E A 353
J G B 23
I want the output to select the second column if the third column is A, but keeping the newline or whitespace character. The output should be:
HEE TE
I tried this:
awk '{if ($3=="A") print $2}' file | awk 'BEGIN{ORS = ""}{print $1}'
But this gives:
HEETE%
which has a weird % and is missing the space.
You may use this gnu-awk solution using FIELDWIDTHS:
awk 'BEGIN{ FIELDWIDTHS = "1 1 1 1 1 1 *" } $5 == "A" {s = s $3} END {print s}' file
HEE TE
awk splits each record using the width values provided in the FIELDWIDTHS variable. "1 1 1 1 1 1 *" means each of the first 6 columns will be a single character wide and the remaining text will be filled into the 7th column. Since you have a space after each value, $2, $4 and $6 will each hold a single space and $1, $3 and $5 will hold the values provided in the input.
$5 == "A" {s = s $3}: here we check whether $5 is A, and if that condition is true we keep appending the value of $3 to a variable s. In the END block we just print the variable s.
Without fixed-width parsing, awk would treat the A in the 4th row as $2.
Or, if we let spaces be part of the column values, then use:
awk 'BEGIN{ FIELDWIDTHS = "2 2 2 *" } $3 == "A " {s = s substr($2,1,1)} END {print s}' file
looking to compare against column number 3 of row number 3 using awk
Looking to compare against column number 3 of row number 3 using awk.
Input:
uniqueid 22618
remoteid remote1
established 1302
Output:
22618
Tried:
awk '{ if(established > 1000) print 22618}'
I suggest:
awk '$1=="uniqueid" {uid=$2}; $1=="established" {est=$2}; est>1000 {print uid}' file
Output:
22618
If column 1 contains uniqueid, save the value of column 2 to the variable uid. If column 1 contains established, save the value of column 2 to the variable est. If the value in est is larger than 1000, print the value in uid.
To compare against column number 3 of row number 3 using awk, you need to specify the record (NR==3) and the field ($2 probably, not $3):
$ awk 'NR==3 && $2 > 1000{ print 22618 }' file
22618
awk to Count Sum and Unique improve command
I would like to print, based on the 2nd column: the count of line items, the sum of the 3rd column, and the number of unique values in the first column. I have around 100 InputTest files, not sorted. I am using the 3 commands below to achieve the desired output and would like to know the simplest way.
InputTest*.txt
abc,xx,5,sss
abc,yy,10,sss
def,xx,15,sss
def,yy,20,sss
abc,xx,5,sss
abc,yy,10,sss
def,xx,15,sss
def,yy,20,sss
ghi,zz,10,sss
Step#1:
cat InputTest*.txt | awk -F, '{key=$2;++a[key];b[key]=b[key]+$3} END {for(i in a) print i","a[i]","b[i]}'
Op#1
xx,4,40
yy,4,60
zz,1,10
Step#2:
awk -F ',' '{print $1,$2}' InputTest*.txt | sort | uniq > Op_UniqTest2.txt
Op#2
abc xx
abc yy
def xx
def yy
ghi zz
Step#3:
awk '{print $2}' Op_UniqTest2.txt | sort | uniq -c
Op#3
2 xx
2 yy
1 zz
Desired Output:
xx,4,40,2
yy,4,60,2
zz,1,10,1
Looking for suggestions!
BEGIN { FS = OFS = "," }
{ ++lines[$2]; if (!seen[$2,$1]++) ++diff[$2]; count[$2]+=$3 }
END { for(i in lines) print i, lines[i], count[i], diff[i] }
lines tracks the number of occurrences of each value in column 2.
seen records unique combinations of the second and first columns, incrementing diff[$2] whenever a new combination is found. The ++ after seen[$2,$1] means that the condition will only be true the first time the combination is found, as the value of seen[$2,$1] is increased to 1 and !seen[$2,$1] will be false from then on.
count keeps a running total of the third column.
$ awk -f avn.awk file
xx,4,40,2
yy,4,60,2
zz,1,10,1
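Since awk accumulates these arrays across every file it is given, the same program should work over all of the (roughly 100) input files in one pass, assuming the script above is saved as avn.awk as in the example run:
$ awk -f avn.awk InputTest*.txt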
Using awk:
$ awk '
BEGIN { FS = OFS = "," }
{ keys[$2]++; sum[$2]+=$3 }
!seen[$1,$2]++ { count[$2]++ }
END {
    for(key in keys) print key, keys[key], sum[key], count[key]
}
' file
xx,4,40,2
yy,4,60,2
zz,1,10,1
Set the input and output field separators to , in the BEGIN block. We use the keys array to identify and count the keys. The sum array keeps the sum for each key. count lets us keep track of the number of unique column-1 values for each column-2 value.
Compare files with awk
I have two similar files (both with 3 columns). I'd like to check whether these two files contain the same elements (but listed in different orders). First of all, I'd like to compare only the 1st columns.
file1.txt
"aba" 0 0
"abc" 0 1
"abd" 1 1
"xxx" 0 0
file2.txt
"xyz" 0 0
"aba" 0 0
"xxx" 0 0
"abc" 1 1
How can I do it using awk? I tried to have a look around but I've only found complicated examples. What if I also want to include the other two columns in the comparison? The output should give me the number of matching elements.
To print the common elements in both files:
$ awk 'NR==FNR{a[$1];next}$1 in a{print $1}' file1 file2
"aba"
"abc"
"xxx"
Explanation: NR and FNR are awk variables that store the total number of records and the number of records in the current file respectively (the default record is a line).
NR==FNR        # Only true when in the first file
{
    a[$1]      # Build an associative array on the first column of the file
    next       # Skip all following blocks and process the next line
}
($1 in a)      # Check if the value in column one of the second file is in the array
{              # If so, print it
    print $1
}
If you want to match whole lines then use $0:
$ awk 'NR==FNR{a[$0];next}$0 in a{print $0}' file1 file2
"aba" 0 0
"xxx" 0 0
Or a specific set of columns:
$ awk 'NR==FNR{a[$1,$2,$3];next}($1,$2,$3) in a{print $1,$2,$3}' file1 file2
"aba" 0 0
"xxx" 0 0
To print the number of matching elements, here's one way using awk:
awk 'FNR==NR { a[$1]; next } $1 in a { c++ } END { print c }' file1.txt file2.txt
Results using your input:
3
If you'd like to add extra columns (for example, columns one, two and three), use a pseudo-multidimensional array:
awk 'FNR==NR { a[$1,$2,$3]; next } ($1,$2,$3) in a { c++ } END { print c }' file1.txt file2.txt
Results using your input:
2