Printing multiple lines with the same "largest" value using awk

I have a file that looks like this:
3, abc, x
2, def, y
3, ghi, z
I want to find the highest value in $1 and print all rows that contain this highest value in $1.
sort -t, -k1,1n | tail -n1
would just give one of the rows that contain 3 in $1, but I need both.
Any suggestions are appreciated (:

I’m not sure this is the nicest way to keep printing lines while they share the same value with awk, but:
awk 'NR == 1 { t = $1; print } NR > 1 { if (t != $1) { exit; } print }'
which can be combined with sort as follows:
sort -t, -k1,1nr | awk 'NR == 1 { t = $1; print } NR > 1 { if (t != $1) { exit; } print }'
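For the sample input, this should print:
3, abc, x
3, ghi, z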
There’s also this, but it does unnecessary work (it keeps scanning the rest of the sorted input instead of exiting after the first group):
sort -t, -k1,1nr | awk 'NR == 1 { t = $1 } t == $1 { print }'

Here is another approach that does not require sorting, but requires two passes over the data.
max=$(awk -F',' '{if(max < $1) max = $1} END{print max}' Input.txt)
awk -v max="$max" -F',' '$1 == max' Input.txt
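The two passes can also be combined into a single command by listing the file twice; a minimal sketch of the same idea:
awk -F',' '
    NR==FNR { if (max == "" || $1+0 > max+0) max = $1; next }  # first pass: find the max
    $1 == max                                                  # second pass: print matching rows
' Input.txt Input.txt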

In awk, only one pass over the data:
$ awk -F, '
$1>m {                  # when a new max is found
    delete a; m=$1; i=0 # reset everything
}
a[1]=="" || $1==m {     # if $1 equals the max or this is the first record
    a[++i]=$0           # store the record in a
}
END {                   # in the end
    for(j=1;j<=i;j++)
        print a[j]      # print the stored records
}
' file
3, abc, x
3, ghi, z

Related

change field value of one file based on another input file using awk

I have a sparse matrix ("matrix.csv") with 10k rows and 5 columns (the 1st column is "user"; the remaining 4 columns are called "slots" and contain 0s or 1s), like this:
user1,0,1,0,0
user2,0,1,0,1
user3,1,0,0,0
Some of the slots that contain a "0" should be changed to contain a "1".
I have another file ("slots2change.csv") that tells me which slots should be changed, like this:
user1,3
user3,2
user3,4
So for user1, I need to change slot3 to contain a "1" instead of a "0", and for user3 I should change slot2 and slot4 to contain a "1" instead of a "0", and so on.
Expected result:
user1,0,1,1,0
user2,0,1,0,1
user3,1,1,0,1
How can I achieve this using awk or sed?
Looking at this post: awk or sed change field of file based on another input file, a user proposed an answer that is valid only if "slots2change.csv" does not contain the same user in different rows, which is not the case here.
The solution proposed was:
awk 'BEGIN{FS=OFS=","}
NR==FNR{arr[$1]=$2; next}
NR!=FNR{
    for (i in arr)
        if ($1 == i) {
            F=arr[i] + 1
            $F=1
        }
    print
}
' slots2change.csv matrix.csv
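For the sample files above, that script would produce the following (user3's slot2 is never set, because arr[$1]=$2 keeps only the last slot listed for each user):
user1,0,1,1,0
user2,0,1,0,1
user3,1,0,0,1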
But that answer doesn't apply when the "slots2change.csv" file contains the same user in different rows, as is now the case.
Any ideas?
Using GNU awk for arrays of arrays:
$ cat tst.awk
BEGIN { FS=OFS="," }
NR == FNR {
    users2slots[$1][$2]
    next
}
$1 in users2slots {
    for ( slot in users2slots[$1] ) {
        $(slot+1) = 1
    }
}
{ print }
$ awk -f tst.awk slots2change.csv matrix.csv
user1,0,1,1,0
user2,0,1,0,1
user3,1,1,0,1
or using any awk:
$ cat tst.awk
BEGIN { FS=OFS="," }
NR == FNR {
    if ( !seen[$0]++ ) {
        users2slots[$1] = ($1 in users2slots ? users2slots[$1] FS : "") $2
    }
    next
}
$1 in users2slots {
    split(users2slots[$1],slots)
    for ( idx in slots ) {
        slot = slots[idx]
        $(slot+1) = 1
    }
}
{ print }
$ awk -f tst.awk slots2change.csv matrix.csv
user1,0,1,1,0
user2,0,1,0,1
user3,1,1,0,1
Using sed
while IFS="," read -r user slot; do
    sed -Ei "/^$user,/{s/(([^,]*,){$slot})[^,]*/\11/}" matrix.csv
done < slots2change.csv
(Anchoring the address as /^$user,/ avoids, say, user1 also matching user10.)
$ cat matrix.csv
user1,0,1,1,0
user2,0,1,0,1
user3,1,1,0,1
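Each pass through the loop rewrites matrix.csv in place; for the first line of slots2change.csv (user1,3), for example, the command expands to:
sed -Ei '/^user1,/{s/(([^,]*,){3})[^,]*/\11/}' matrix.csv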
If the order in which the users are printed doesn't matter, then you could do something like this:
awk '
BEGIN { FS = OFS = "," }
FNR == NR {
    fieldsCount[$1] = NF
    for (i = 1; i <= NF; i++)
        matrix[$1,i] = $i
    next
}
{ matrix[$1,$2+1] = 1 }
END {
    for ( id in fieldsCount ) {
        nf = fieldsCount[id]
        for (i = 1; i <= nf; i++)
            printf "%s%s", matrix[id,i], (i < nf ? OFS : ORS)
    }
}
' matrix.csv slots2change.csv
user1,0,1,1,0
user2,0,1,0,1
user3,1,1,0,1
This might work for you (GNU sed):
sed -E 's#(.*),(.*)#/^\1/s/,[01]/,1/\2#' fileChanges | sed -f - fileCsv
Create a sed script from the file containing the changes and apply it to the intended file. The solution above manufactures a match and substitution for each line in the changes file; this is then piped to a second invocation of sed, which applies the generated script to the csv file.
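For the sample fileChanges (the slots2change.csv data above), the first invocation generates this script, which the second invocation then applies:
/^user1/s/,[01]/,1/3
/^user3/s/,[01]/,1/2
/^user3/s/,[01]/,1/4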

combine multiple string replacement awk commands into one

I have two separate awk commands that replace the third column of specific rows of the destination_file with the first column of specific rows of the source_file. The commands work but are cumbersome, with a lot of I/O operations. Is there a way to combine the awk commands into one?
The commands I have at the moment are:
z2=$(awk 'NR==12 { print $1 }' ../source_folder/source_file)
z3=$(awk 'NR==14 { print $1 }' ../source_folder/source_file)
awk -v z2=$z2 'NR==10{$3=z2} 1' destination_file > temp && mv temp destination_file
awk -v z3=$z3 'NR==11{$3=z3} 1' destination_file > temp && mv temp destination_file
As an extension of this problem: the rows I obtain values z2, z3, z4, ... from are NR==12, 14, 16, ... in the source_file, and the rows in the destination_file that I want to replace are NR==10, 11, 12, ...; is there a way to write this as a sequence instead of listing all the z1, z2, z3, ... variables?
You could combine both of your programs into one: have a single awk program check both conditions, NR==10 and NR==11, and assign the respective values to the respective fields per condition, then save the output into the output file.
z2=$(awk 'NR==12 { print$1 }' ../source_folder/source_file)
z3=$(awk 'NR==14 { print$1 }' ../source_folder/source_file)
awk -v z2="$z2" -v z3="$z3" 'NR==10{$3=z2} NR==11{$3=z3} 1' destination_file > temp && mv temp destination_file
You don't need multiple calls to awk and shell variables for this:
awk '
NR==FNR {
    if ( FNR==12 ) { map[10] = $1 }
    else if ( FNR==14 ) { map[11] = $1; nextfile }
    next
}
FNR in map {
    $3 = map[FNR]
}
{ print }
' ../source_folder/source_file destination_file
or if your line numbers always increase by 2 in the source file and 1 in the dest file:
awk '
NR==FNR {
    if ( (12 <= FNR) && !(FNR%2) ) {
        map[10 + (c++)] = $1
    }
    next
}
FNR in map {
    $3 = map[FNR]
}
{ print }
' ../source_folder/source_file destination_file

awk totally separate duplicate and non-duplicates

If we have an input:
TargetIDs,CPD,Value,SMILES
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
95,CPD-3333333,-1,c1ccccc1N
Now we would like to separate the duplicates from the non-duplicates based on the fourth column (SMILES).
duplicate:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
non-duplicate
95,CPD-3333333,-1,c1ccccc1N
The following attempt separates the duplicates without any problem; however, the first occurrence of each duplicate is still included in the non-duplicate file.
BEGIN { FS = ","; f1="a"; f2="b" }
{
    # Keep count of the values in the fourth column
    count[$4]++;
    # Save the line the first time we encounter a unique value
    if (count[$4] == 1)
        first[$4] = $0;
    # If we encounter the value for the second time, print the
    # previously saved line
    if (count[$4] == 2)
        print first[$4] > f1;
    # From the second time onward, always print because the value is
    # duplicated
    if (count[$4] > 1)
        print > f1;
    if (count[$4] == 1)  # if (count[$4] - count[$4] == 0) <= changing to this doesn't work
        print first[$4] > f2;
}
duplicate output results from the attempt:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
non-duplicate output results from the attempt:
TargetIDs,CPD,Value,SMILES
95,CPD-3333333,-1,c1ccccc1N
95,CPD-1111111,-2,c1ccccc1
May I know if any guru might have comments/solutions? Thanks.
I would do this:
awk '
NR==FNR {count[$2] = $1; next}
FNR==1 {FS=","; next}
{
    output = (count[$NF] == 1 ? "nondup" : "dup")
    print > output
}
' <(cut -d, -f4 input | sort | uniq -c) input
The process substitution pre-processes the file, counting the occurrences of each value in the 4th column. Then you can process the file and decide whether each line is duplicated.
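For the sample input above, the pre-processed stream fed to awk would look something like this (sort order and count padding may vary by locale and uniq implementation):
      1 SMILES
      3 c1ccccc1
      1 c1ccccc1N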
All in awk: Ed Morton shows a way to collect the data in a single pass. Here's a two-pass solution that's virtually identical to my example above:
awk -F, '
NR==FNR {count[$NF]++; next}
FNR==1 {next}
{
    output = (count[$NF] == 1 ? "nondup" : "dup")
    print > output
}
' input input
Yes, the input file is given twice.
$ cat tst.awk
BEGIN { FS="," }
NR>1 {
    if (cnt[$4]++) {
        dups[$4] = nonDups[$4] dups[$4] $0 ORS
        delete nonDups[$4]
    }
    else {
        nonDups[$4] = $0 ORS
    }
}
END {
    print "Duplicates:"
    for (key in dups) {
        printf "%s", dups[key]
    }
    print "\nNon Duplicates:"
    for (key in nonDups) {
        printf "%s", nonDups[key]
    }
}
$ awk -f tst.awk file
Duplicates:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
Non Duplicates:
95,CPD-3333333,-1,c1ccccc1N
This solution only works if the duplicates are grouped together.
awk -F, '
function fout(    f, i) {
    f = (cnt > 1) ? "dups" : "nondups"
    for (i = 1; i <= cnt; ++i)
        print lines[i] > f
}
NR > 1 && $4 != lastkey { fout(); cnt = 0 }
{ lastkey = $4; lines[++cnt] = $0 }
END { fout() }
' file
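If the duplicates are not already grouped, you could group them on the key first; a sketch, where grouped.csv is a hypothetical intermediate file and the head/tail split keeps the header line in place:
{ head -n 1 file; tail -n +2 file | sort -t, -k4,4; } > grouped.csv
# then run the command above on grouped.csv instead of file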
A little late; my version in awk:
awk -F, 'NR>1{a[$0":"$4]; b[$4]++}
END{d="\n\nnondupe"; e="dupe"
    for(i in a){split(i,c,":"); b[c[2]]==1 ? d=d"\n"i : e=e"\n"i}
    print e d}' file
Another one, built similarly to glenn jackman's, but all in awk:
awk -F, 'function r(f) {while ((getline <f) > 0) a[$4]++; close(f)}
BEGIN{r(ARGV[1])}
{output = (a[$4] == 1 ? "nondup" : "dup"); print > output}' file

Extract specific data from a file with awk

I have a text file as shown below. I would like to extract the .pdb IDs and their corresponding chains. How is this possible with awk?
>4HSU:A|PDBID|CHAIN|SEQUENCE
PLGSRKCEKAGCTATCPVCFASASERCAKNGY
PKAFMADQQL
>4HSU:B|PDBID|CHAIN|SEQUENCE
PLGSPEFSERGSKSPLKRAQETE
>4HSU:C|PDBID|CHAIN|SEQUENCE
ARTMQTARKSTGGKAPRKQLATKAARKSAP
>4HT3:A|PDBID|CHAIN|SEQUENCE
MERYENLFAQLNDRREGAF
>4HT3:B|PDBID|CHAIN|SEQUENCE
MTTLLNPYFGEFGGMYVPQ
>4I0W:A|PDBID|CHAIN|SEQUENCE
MENKAKVGIDFINTIPKQILTSLIEQYSPNNGEIELVVLYGDNFLRFKNSVDVIGAKVEDLGYGFGILII
>4I0W:B|PDBID|CHAIN|SEQUENCE
AYDSNRASCIPSVWNNYNLTGEGILVGFLDT
>4I0W:D|PDBID|CHAIN|SEQUENCE
AYDSNRASCIPSVWNNYNLTGEGILVGFLLPLGDTITSGGWRIIVRKLNNYEGYFDIWLPIAEGLN
ERTRFLQPSVYNTLGIPATVEGVIS
Desired output:
4HSU A B C
4HT3 A B
4I0W A B D
kent$ awk -F'[>:|]' '/^>/{a[$2]=a[$2] OFS $3}END{for(x in a)print x,a[x]}' file
4I0W A B D
4HSU A B C
4HT3 A B
I am satisfied with my FS value: >:| like a cute face!
It looks as though you want the output in the original order, so it takes some indirection to take care of this. All of the below works in POSIX AWK as requested (or at least in gawk with LINT = 1) and has the additional feature of keeping track of what has been seen, to eliminate duplicates.
#! /usr/bin/awk -f
BEGIN {
    FS="[>:|]"
    split("", t)     # table of output
    split("", r)     # row number in the table for an ID
    split("", seen)  # keeps track of duplicates
    row=0
}
/^>/ && !($2 SUBSEP $3 in seen) {
    if ($2 in r) {
        i=r[$2]
        t[i] = t[i] OFS $3
    } else {
        r[$2] = row
        t[row++] = $2 OFS $3
    }
    seen[$2, $3] = 1
}
END {
    for (i=0; i<row; i++)
        print t[i]
}
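Saved as, say, pdbchains.awk (the file name is mine), it should print the chains in the original order:
$ awk -f pdbchains.awk file
4HSU A B C
4HT3 A B
4I0W A B D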

awk | update field number after comparing field from other file

Input File1: file1.txt
MH=919767,918975
DL=919922
HR=919891,919394,919812
KR=919999,918888
Input File2: file2.txt
aec,919922783456,a5,b3,,,asf
abc,918975583456,a1,b1,,,abf
aeci,919998546783,a2,b4,,,wsf
Output File
aec,919922783456,a5,b3,DL,,asf
abc,918975583456,a1,b1,MH,,abf
aeci,919998546783,a2,b4,NOMATCH,,wsf
Notes
I need to compare the phone number in file2.txt (2nd field, initial 6 digits only) against file1.txt (2nd field, "="-separated, comma-delimited values). If the initial 6 digits of the phone number match, the output should contain the corresponding two-letter code from file1.txt in the 5th field.
file1.txt can have a single code (for example MH) for multiple phone-number prefixes.
If you have GNU awk, try the following. Run like:
awk -f script.awk file1.txt file2.txt
Contents of script.awk:
BEGIN {
    FS="[=,]"
    OFS=","
}
FNR==NR {
    for(i=2;i<=NF;i++) {
        a[$1][$i]
    }
    next
}
{
    $5 = "NOMATCH"
    for(j in a) {
        for (k in a[j]) {
            if (substr($2,1,6) == k) {
                $5 = j
            }
        }
    }
}1
Alternatively, here's the one-liner:
awk -F "[=,]" 'FNR==NR { for(i=2;i<=NF;i++) a[$1][$i]; next } { $5 = "NOMATCH"; for(j in a) for (k in a[j]) if (substr($2,1,6) == k) $5 = j }1' OFS=, file1.txt file2.txt
Results:
aec,919922783456,a5,b3,DL,,asf
abc,918975583456,a1,b1,MH,,abf
aeci,919998546783,a2,b4,NOMATCH,,wsf
If you have an 'old' awk, try the following. Run like:
awk -f script.awk file1.txt file2.txt
Contents of script.awk:
BEGIN {
    # set the field separator to either an equals sign or a comma
    FS="[=,]"
    # set the output field separator to a comma
    OFS=","
}
# for the first file in the arguments list
FNR==NR {
    # loop through all the fields, starting at field two
    for(i=2;i<=NF;i++) {
        # add field one and each field to a pseudo-multidimensional array
        a[$1,$i]
    }
    # skip processing the rest of the code
    next
}
# for the second file in the arguments list
{
    # set the default value for field 5
    $5 = "NOMATCH"
    # loop through the array
    for(j in a) {
        # split the array keys into another array
        split(j,b,SUBSEP)
        # if the first six digits of field two equal the value stored in this array
        if (substr($2,1,6) == b[2]) {
            # assign field five
            $5 = b[1]
        }
    }
    # return true, therefore print by default
}1
Alternatively, here's the one-liner:
awk -F "[=,]" 'FNR==NR { for(i=2;i<=NF;i++) a[$1,$i]; next } { $5 = "NOMATCH"; for(j in a) { split(j,b,SUBSEP); if (substr($2,1,6) == b[2]) $5 = b[1] } }1' OFS=, file1.txt file2.txt
Results:
aec,919922783456,a5,b3,DL,,asf
abc,918975583456,a1,b1,MH,,abf
aeci,919998546783,a2,b4,NOMATCH,,wsf
Try something like:
awk '
NR==FNR{
    for(i=2; i<=NF; i++) A[$i]=$1
    next
}
{
    $5="NOMATCH"
    for(i in A) if ($2~"^" i) $5=A[i]
}
1
' FS='[=,]' file1 FS=, OFS=, file2
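For the sample files, this should produce the same results:
aec,919922783456,a5,b3,DL,,asf
abc,918975583456,a1,b1,MH,,abf
aeci,919998546783,a2,b4,NOMATCH,,wsf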