lookup value associated with list items - awk

I am trying to approximate VLOOKUP from Excel.
I have two files.
file #1 - list.txt:
green
purple
orange
file #2 - reads.txt:
blue 2
green 3
red 5
purple 6
I am trying to read in the entries of list.txt, then go to reads.txt and pull out the associated value.
The desired output would be:
green 3
purple 6
orange 0
If I write:
awk -F ' ' 'FNR == NR {keys[$1]; next} {if ($1 in keys) print $1,$2}' list.txt reads.txt
I get:
green 3
purple 6
but nothing for orange, and I need the line:
orange 0
If I write:
awk -F ' ' 'FNR == NR {keys[$1]; next} {if ($1 in keys) print $1,$2; else print $1,0}' list.txt reads.txt
I get:
blue 0
green 3
red 0
purple 6
Any ideas how to fix this?
Major newbie here, so any help is appreciated!

$ awk 'NR==FNR{map[$1]=$2; next} {print $1, map[$1]+0}' reads.txt list.txt
green 3
purple 6
orange 0
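
Here NR==FNR is true only while awk reads the first file given (reads.txt), so map[] ends up holding its key-to-value pairs; for each line of list.txt, map[$1]+0 then coerces a missing entry to 0. If you ever need to tell a genuine 0 in reads.txt apart from a missing key, a small variation on the same idea (same file names assumed):
awk 'NR==FNR{map[$1]=$2; next} {print $1, ($1 in map ? map[$1] : "missing")}' reads.txt list.txt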

1st solution (for the samples shown): you could try the following, written and tested against the samples shown.
awk '
FNR==NR{
  arr[$1]=$2
  next
}
($0 in arr){
  print $0,arr[$0]
  next
}
{
  print $0,0
}
' reads.txt list.txt
Output for the samples shown:
green 3
purple 6
orange 0
2nd solution (generic): in case reads.txt has multiple values for the same first column and you want to print all of the values whose keys are present in list.txt, try the following.
awk '
FNR==NR{
  ++arr[$1]
  val[$1 OFS arr[$1]]=$2
  next
}
($0 in arr){
  for(i=1;i<=arr[$0];i++){
    print $0,val[$0 OFS i]
  }
  next
}
{
  print $0,0
}
' reads.txt list.txt
Sample run: say reads.txt is:
cat reads.txt
blue 2
green 3
red 5
purple 6
green 15
green 120
Running the generic solution on that input gives:
green 3
green 15
green 120
purple 6
orange 0

If order doesn't matter, I'd use a left join instead of awk:
$ join -a1 -e 0 -o 0,2.2 <(sort list.txt) <(sort reads.txt)
green 3
orange 0
purple 6
(This assumes a shell, such as bash, that supports <(...) process substitution.)
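
If the order of list.txt does matter, a pure-awk sketch that remembers that order and prints everything at END (same file names assumed):
awk 'NR==FNR{order[FNR]=$1; n=FNR; next} {val[$1]=$2} END{for(i=1;i<=n;i++) print order[i], val[order[i]]+0}' list.txt reads.txt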

awk Can not Select Column with empty value

I am trying to select a column that has missing values.
Here is my input file, separated by tabs (some fields are empty):
1	2	3
4		5
	6
7	8
		9
I am trying to select the first column, so the output would look like:
1
4
7
and the length of my column would be 5 in this case.
I have tried
awk '$1!=""{print $1}' ./demo.txt
but it returns
1
4
6
7
9
Can anybody help with this? I am new to awk.
You can use cut:
$ cut -f 1 file # the default delimiter is a tab
Or with sed:
$ sed 's/[[:blank:]].*$//' file
Or awk:
$ awk '{sub(/[[:blank:]].*$/,"")}1' file
Or:
$ awk 'BEGIN{FS=OFS="\t"} {print $1}' file
All of those print the first column and all five lines (blank or not).
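
If you also need the column length the question mentions (5 here), a minimal sketch on top of the tab-delimited approach (demo.txt as above; NR doubles as the count):
awk -F'\t' '{print $1} END{print "length:", NR}' demo.txt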
Tell awk to use a tab (\t) as the input field delimiter (-F):
$ awk -F'\t' '{ print $1 }' demo.txt
1
4

7

(the third and fifth output lines are blank, matching the rows whose first field is empty)
If you want to print multiple columns while keeping the same delimiter on output, another approach uses the FS and OFS variables:
$ awk 'BEGIN { FS=OFS="\t" } { print $1,$3 }' demo.txt
1	3
4	5
	
7	
	9
(again, rows with empty fields print only the separator)
With sed something like:
sed 's/^\([^[:blank:]]*\).*/\1/' demo.txt
Using FIELDWIDTHS in GNU awk, you can do this for fixed-width data:
awk 'BEGIN {FIELDWIDTHS = "4 4 *"} {print $1}' file
1
4
7
For demo purposes:
awk 'BEGIN {FIELDWIDTHS = "4 4 *"} {print NR ":", $1}' file
1: 1
2: 4
3:
4: 7
5:
If they're all single digits in the 1st column:
echo \
'1	2	3
4		5
	6
7	8
		9' |
mawk NF=1 FS= |
gcat -n
1	1
2	4
3
4	7
5
That's literally all you need. To play it safe, use:
nawk NF=1 FS='[[:space:]]'  # overly verbose, so-called
                            # "proper" POSIX form
gawk NF=1 FS='[ \t]'        # suffices unless the input
                            # happens to contain uncommon bytes
                            # like \013 (\v) or \014 (\f)
Or a very fringe way of fudging NF:
mawk 'NF ^= FS="[ \t]"'
(The assignment FS="[ \t]" evaluates to that string, which is numerically 0, so NF ^= 0 sets NF to NF^0 = 1; the nonzero result then acts as a true pattern, printing the record rebuilt as just its first field.)

awk to compare multiple columns in 2 files

I would like to compare multiple columns from 2 files and NOT print lines matching my criteria.
An example of this would be:
file1
apple green 4
orange red 5
apple yellow 6
apple yellow 8
grape green 5
file2
apple yellow 7
grape green 10
output
apple green 4
orange red 5
apple yellow 8
I want to remove lines where $1 and $2 from file1 correspond to $1 and $2 from file2 AND when $3 from file1 is smaller than $3 from file2.
So far I can only do the first part of the job, that is, remove lines where $1 and $2 from file1 correspond to $1 and $2 from file2 (fields are separated by tabs):
awk -F '\t' 'FNR == NR {a[$1FS$2]=$1; next} !($1FS$2 in a)' file2 file1
Could you help me apply the last condition?
Many thanks in advance!
What you are after is this:
awk '(NR==FNR){a[$1,$2]=$3; next} !((($1,$2) in a) && $3 < a[$1,$2])' file2 file1
Store 3rd field value while building the array and then use it for comparison
$ awk -F '\t' 'FNR==NR{a[$1FS$2]=$3; next} !(($1FS$2 in a) && $3 < a[$1FS$2])' f2 f1
apple green 4
orange red 5
apple yellow 8
Better written as:
awk -F '\t' '{k = $1FS$2} FNR==NR{a[k]=$3; next} !((k in a) && $3 < a[k])' f2 f1
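
For what it's worth, the a[$1,$2] comma subscript in the first answer joins the keys with SUBSEP (by default "\034", a byte unlikely to occur in real data), while $1FS$2 joins them with a literal tab; both build a single composite string key. A quick way to see the equivalence:
awk 'BEGIN{a["x","y"]=1; print (("x" SUBSEP "y") in a)}'   # prints 1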

How to replace multiple empty fields with zeroes using awk

I am using the following command to replace tab-delimited empty fields with zeroes.
awk 'BEGIN { FS = OFS = "\t" } { for(i=1; i<=NF; i++) if($i ~ /^ *$/) $i = 0 }; 1'
How can I do the same if I have the following input, which is not tab-delimited and has multiple empty fields?
input
name  A1348138  A1086070  A1080879  A1070208  A821846  A1068905  A1101931
g1    5         8         1         2         1        3         1
g2    1         3         2         1         1        2
desired output
name  A1348138  A1086070  A1080879  A1070208  A821846  A1068905  A1101931
g1    5         8         1         2         1        3         1
g2    1         3         2         1         1        2         0
I'd suggest using GNU awk for FIELDWIDTHS to solve the problem you appear to be asking about and also to convert your fixed-width input to tab-separated output (or something else sensible) while you're at it:
$ cat file
1   2   3
4       6
$ gawk -v FIELDWIDTHS='4 4 4' -v OFS='\t' '{for (i=1;i<=NF;i++) {gsub(/^[[:space:]]+|[[:space:]]+$/,"",$i); $i=($i==""?0:$i)}; print}' file
1	2	3
4	0	6
$ gawk -v FIELDWIDTHS='4 4 4' -v OFS=',' '{for (i=1;i<=NF;i++) {gsub(/^[[:space:]]+|[[:space:]]+$/,"",$i); $i=($i==""?0:$i)}; print}' file
1,2,3
4,0,6
$ gawk -v FIELDWIDTHS='4 4 4' -v OFS=',' '{for (i=1;i<=NF;i++) {gsub(/^[[:space:]]+|[[:space:]]+$/,"",$i); $i="\""($i==""?0:$i)"\""}; print}' file
"1","2","3"
"4","0","6"
Take your pick of the above.
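
If your real data is whitespace-delimited with a known column count rather than fixed-width, a POSIX-awk sketch (the count of 8 columns is an assumption read off the sample; "file" is a placeholder) that zero-fills empty or missing trailing fields:
awk -v n=8 'BEGIN{OFS="\t"} {for(i=1;i<=n;i++) if($i=="") $i=0; $1=$1; print}' file
(The $1=$1 forces awk to rebuild every record with OFS. Note that with plain whitespace splitting, an empty field in the middle of a row is indistinguishable from a narrower row, which is exactly why the FIELDWIDTHS approach above exists.)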

Join two columns from different files with awk

I want to join two columns from two different files using awk. The files look like this (A, B, C, 0, 1, 2, etc. are columns):
file1:
A B C D E F
file2:
0 1 2 3 4 5
And I want to be able to select arbitrary columns in my output. For example, I want the output to be:
A C E 4 5
I've seen a million answers with awk code like the following (and very similar variants), offering no explanation:
awk 'FNR==NR{a[FNR]=$2;next};{$NF=a[FNR]};1' file2 file1
awk '
NR==FNR {A[$1,$3,$6] = $0; next}
($1 SUBSEP $2 SUBSEP $3) in A {print A[$1,$2,$3], $4}
' A.txt B.txt
But none of them seem to do what I want, and I am not able to understand them.
So, how can I achieve the desired output using awk? (and please, offer an explanation, I want to actually learn)
Note:
I know I can do this using something like
paste <(awk '{print $1}' file1) <(awk '{print $2}' file2)
As I said, I'm trying to learn and understand awk.
With GNU awk for true multi-dimensional arrays and ARGIND:
$ awk -v flds='1 1 1 3 1 5 2 5 2 6' '
BEGIN{ nf = split(flds,o) }
{ f[ARGIND][1]; split($0,f[ARGIND]) }
NR!=FNR { for (i=2; i<=nf; i+=2) printf "%s%s", f[o[i-1]][o[i]], (i<nf?OFS:ORS) }
' file1 file2
A C E 4 5
The "flds" string is just a series of <file number> <field number in that file> pairs so you can print the fields from each file in whatever order you like, e.g.:
$ awk -v flds='1 1 2 2 1 3 2 4 1 5 2 6' 'BEGIN{nf=split(flds,o)} {f[ARGIND][1]; split($0,f[ARGIND])} NR!=FNR{for (i=2; i<=nf; i+=2) printf "%s%s",f[o[i-1]][o[i]], (i<nf?OFS:ORS)}' file1 file2
A 1 C 3 E 5
$ awk -v flds='2 1 1 2 2 3 1 4 2 5' 'BEGIN{nf=split(flds,o)} {f[ARGIND][1]; split($0,f[ARGIND])} NR!=FNR{for (i=2; i<=nf; i+=2) printf "%s%s",f[o[i-1]][o[i]], (i<nf?OFS:ORS)}' file1 file2
0 B 2 D 4
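
The same program spelled out with comments (no behavior change; still gawk-only because of ARGIND and arrays of arrays):
awk -v flds='1 1 1 3 1 5 2 5 2 6' '
BEGIN { nf = split(flds,o) }             # o[1..nf] holds file#/field# pairs
{ f[ARGIND][1]; split($0,f[ARGIND]) }    # stash the current line of file ARGIND, split into fields
NR!=FNR {                                # true only for file2 lines, when both lines are stashed
    for (i=2; i<=nf; i+=2)               # walk the pairs: o[i-1] = file, o[i] = field
        printf "%s%s", f[o[i-1]][o[i]], (i<nf ? OFS : ORS)
}' file1 file2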

linux/ubuntu awk match unique values (instead of bash "sort unique grep" unique values)

My command looks like this:
cut -f 1 dummy_FILE | sort | uniq -c | awk '{print $2}' | for i in $(cat -); do grep -w $i dummy_FILE |
awk -v VAR="$i" '{distance+=$3-$2} END {print VAR, distance}'; done
cat dummy_FILE
Red 13 14
Red 39 46
Blue 45 23
Blue 34 27
Green 31 73
I want to, for every word in $1 of dummy_FILE (Red, Blue, Green), calculate the sum of the differences between $3 and $2, to get output like this:
Red 8
Blue -29
Green 42
My questions are:
Is it possible to replace cut -f 1 dummy_FILE | sort | uniq -c | awk '{print $2}'?
I am using sort | uniq -c to extract every word from the dataset - is it possible to do it with awk?
How can I avoid the useless cat in for i in $(cat -)?
grep -w $i dummy_FILE works fine, but I want to replace it with awk (should I?); if so, how can I do this?
When I try awk -v VAR="$i" '/^VAR/ '{distance+=$3-$2} END {print VAR, distance}' I get "fatal: division by zero attempted".
I got it using:
awk '{a[$1] = a[$1] + $3 - $2;} END{for (x in a) {print x" "a[x];}}' dummy_FILE
Output:
Blue -29
Green 42
Red 8
If you want to sort the output, just pipe it through sort after the awk command.
Here's one way using awk:
awk '{ a[$1]=a[$1] + $3 - $2 } END { for(i in a) print i, a[i] }' dummy
Results:
Red 8
Blue -29
Green 42
If you require sorted output, you could simply pipe into sort like arutaku suggests:
awk '{ a[$1]=a[$1] + $3 - $2 } END { for(i in a) print i, a[i] }' dummy | sort
You can, however, print into sort (within the awk statement), like this:
awk '{ a[$1]=a[$1] + $3 - $2 } END { for(i in a) print i, a[i] | "sort" }' dummy
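
As for the failing attempt in the question: the quoting there is broken (the program string is split in the middle), and even with correct quoting awk does not expand variables inside a regex literal, so /^VAR/ would match lines starting with the literal text "VAR". A sketch of the per-colour filter with the variable compared explicitly (keeping the question's $i):
awk -v VAR="$i" '$1 == VAR { distance += $3 - $2 } END { print VAR, distance+0 }' dummy_FILE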