awk: merge two data sets based on column value

I need to combine two data sets stored in variables. The merge needs to be conditional on the value of the 1st column of "$x" matching the 3rd column of "$y".
-->echo "$x"
12 hey
23 hello
34 hi
-->echo "$y"
aa bb 12
bb cc 55
ff gg 34
ss ww 23
With the following command I managed to store the first column of $x in a[] and check it against the third column of $y, but I am not getting what I expect. Can someone please help?
awk 'NR==FNR{a[$1]=$1;next} $3 in a{print $0,a[$1]}' <(echo "$x") <(echo "$y")
aa bb 12
ff gg 34
ss ww 23
Expected result:
aa bb 12 hey
ff gg 34 hi
ss ww 23 hello

Your attempt is almost right:
awk 'NR==FNR{a[$1]=$2;next} ($3 in a){print $0,a[$3]}' <(echo "$x") <(echo "$y")
Note the a[$1]=$2 and the print $0,a[$3].
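With the sample data above, this prints the expected result:
aa bb 12 hey
ff gg 34 hi
ss ww 23 hello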

join -1 1 -2 3 <(sort -k 1b,1 a.txt) <(sort -k 3b,3 b.txt) |awk '{print $3, $4, $1, $2 }'
This might be a solution for your input, stored in two text files a.txt and b.txt, using join on your two number columns.
It does not keep the original order, though; you might have to sort again if that matters.
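The same idea can be applied directly to the variables from the question, using process substitution instead of intermediate files; a sketch, assuming $x and $y hold the data shown above:
join -1 1 -2 3 <(echo "$x" | sort -k 1b,1) <(echo "$y" | sort -k 3b,3) | awk '{print $3, $4, $1, $2}'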

Related

How to update one file's column from another file's column in awk

I have two files, the first file:
1 AA
2 BB
3 CC
4 DD
and the second file
15 AA
17 BB
20 CC
25 FF
File 1 should be updated, and the expected output should look like this:
15 AA
17 BB
20 CC
4 DD
I have tried this script from another post, but it didn't work:
awk 'NR==FNR{a[$1]=$2;next}a[$1]{print $2,a[$1]}' file1 file2
$ awk 'NR==FNR{a[$2]=$1; next} $2 in a{$1=a[$2]} 1' file2 file1
15 AA
17 BB
20 CC
4 DD
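The same one-liner, spread out with comments purely as a readability sketch:
awk 'NR==FNR { a[$2] = $1; next }   # first pass (file2): map each label to its new number
     $2 in a { $1 = a[$2] }         # second pass (file1): swap in the new number if the label is known
     1                              # print every line of file1, modified or not
' file2 file1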
Here is another awk solution:
awk 'FNR==NR{f2[$2]=$0; next}
$2 in f2 {print f2[$2]; next}
1' file2 file1
Prints:
15 AA
17 BB
20 CC
4 DD

AWK program that can read a second file either from a file specified on the command line or from data received via a pipe

I have an AWK program that does a join of two files, file1 and file2. The files are joined based on a set of columns. I placed the AWK program into a bash script that I named join.sh. See below. Here is an example of how the script is executed:
./join.sh '1,2,3,4' '2,3,4,5' file1 file2
That says this: Do a join of file1 and file2, using columns (fields) 1,2,3,4 of file1 and columns (fields) 2,3,4,5 of file2.
That works great.
Now what I would like to do is to filter file2 and pipe the results to the join tool:
./fetch.sh ident file2 | ./join.sh '1,2,3,4' '2,3,4,5' file1
fetch.sh is a bash script containing an AWK program that fetches the rows in file2 with primary key ident and outputs to stdout the rows that were fetched.
Unfortunately, that pipeline is not working. I get no results.
Recap: I want the join program to be able to read the second file either from a file that I specify on the command line or from data received via a pipe. How to do that?
Here is my bash script, named join.sh
#!/bin/bash
awk -v f1cols=$1 -v f2cols=$2 '
BEGIN { FS=OFS="\t"
m=split(f1cols,f1,",")
n=split(f2cols,f2,",")
}
{ sub(/\r$/, "") }
NR == 1 { b[0] = $0 }
(NR == FNR) && (NR > 1) { idx2=$(f2[1])
for (i=2;i<=n;i++)
idx2=idx2 $(f2[i])
a[idx2] = $0
next
}
(NR != FNR) && (FNR == 1) { print $0, b[0] }
FNR > 1 { idx1=$(f1[1])
for (i=2;i<=m;i++)
idx1=idx1 $(f1[i])
for (idx1 in a)
print $0, a[idx1]
}' $3 $4
I'm not sure if this is 'correct' as you haven't provided any example input and expected output, but does using - to signify stdin work for your use-case? E.g.
cat file1
1 2 3 4
AA BB CC DD
AA EE FF GG
cat file2
1 2 3 4
AA ZZ YY XX
AA 11 22 33
./join.sh '1' '1' file1 file2
1 2 3 4 1 2 3 4
AA ZZ YY XX AA BB CC DD
AA ZZ YY XX AA EE FF GG
AA 11 22 33 AA BB CC DD
AA 11 22 33 AA EE FF GG
cat file2 | ./join.sh '1' '1' file1 -
1 2 3 4 1 2 3 4
AA ZZ YY XX AA BB CC DD
AA ZZ YY XX AA EE FF GG
AA 11 22 33 AA BB CC DD
AA 11 22 33 AA EE FF GG
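Applied to the pipeline from the question, that would presumably look like this (a sketch; fetch.sh itself is not shown here):
./fetch.sh ident file2 | ./join.sh '1,2,3,4' '2,3,4,5' file1 -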
be able to read (...) from data received via a pipe
GNU AWK does support Using getline from a Pipe; consider the following simple example:
awk 'BEGIN{cmd="seq 7";while((cmd | getline) > 0){print $1*7};close(cmd)}' emptyfile
gives output
7
14
21
28
35
42
49
Explanation: I process the output of the seq 7 command (the numbers from 1 to 7 inclusive, each on a separate line); the body of the while loop is executed for each line of that output, and the fields are set just as in normal record processing.
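A closely related form, which standard awk also supports, reads each line into a variable instead of into $0, leaving the current record and its fields untouched; a minimal sketch:
awk 'BEGIN{cmd="seq 3"; while ((cmd | getline line) > 0) print "line:", line; close(cmd)}'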

compare two file for match, print only one if duplicate matching found

I have two files, File1 and File2. File2 has some duplicate entries which I cannot remove due to the complexity of the file structure. Now, while generating File3, which should contain the lines whose 1st and 2nd columns match between File1 and File2, I want only one entry from File2 per matching pattern from File1. What's the best way to do this? I tried awk 'NR==FNR{a[$1,$2]=$0;next} ($1,$2) in a{print $0}' File1 File2 but it keeps all the matching entries from File2.
File1
ab 12
cd 24
ef 56
File2
ab 12
ab 12
ef 56
What I am getting is
File3
ab 12
ab 12
ef 56
But what I want is
File3
ab 12
ef 56
Thanks
Another way:
Input:
$ cat f1
ab 12
cd 24
ef 56
$ cat f2
ab 12
ab 12
ef 56
Output:
$ awk '{k=$1 SUBSEP $2}FNR==NR{a[k]; next}k in a && !a[k]++' f1 f2
ab 12
ef 56
For better readability, use ++a[k]==1 (considering the thread title "compare two file for match, print only one if duplicate matching found"):
$ awk '{k=$1 SUBSEP $2}FNR==NR{a[k]; next}k in a && ++a[k]==1' f1 f2
ab 12
ef 56
You need to delete the entry from a after finding a matching line.
awk 'NR==FNR {a[$0]; next} ($0 in a) {delete a[$0]; print}' File1 File2
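With File1 and File2 from the question this prints exactly the desired File3:
ab 12
ef 56
Note that this variant keys on the whole line ($0) rather than on the ($1,$2) pair, which is equivalent here because the sample files have only two columns.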

How to append a line with sed/awk after specific text

I would like to transform this input file using sed or awk:
input
1 AA
3 BB
5 CC
output
1 AA
3 BB
3 GG
5 CC
The closest syntax I found on this site is sed -i '/^BB:/ s/$/ GG/' file, but it produces 3 BB GG. What I need is similar to a vi yank, paste & regex replace.
Can this be done with sed or awk? Thanks,
Rand
With GNU sed:
sed -r 's/^([^ ]*) BB$/&\n\1 GG/' file
Output:
1 AA
3 BB
3 GG
5 CC
This might work for you (GNU sed):
sed '/BB/p;s//GG/' file
If the line contains the required string, print it first, then substitute the other string for it (the empty regex // reuses the previous pattern).
awk is a fine choice for this:
awk '{print $0} $2=="BB"{print $1,"GG"}' yourfile.txt
That will print the line {print $0}. And then if the second field in the line is equal to "BB", it will print the first field in the line (the number) and the text "GG".
Example in use:
$ printf '1 AA\n3 BB\n4 RR\n' | awk '{print $0} $2=="BB"{print $1,"GG"}'
1 AA
3 BB
3 GG
4 RR
In awk:
$ awk '1; /BB/ && $2="GG"' input
1 AA
3 BB
3 GG
5 CC
1 prints the record. If BB was found in the record just printed, its second field is replaced with GG and the modified record is printed again.

Reading from a file and writing to another using Awk

There are two tab-delimited text files. My aim is to change File 1 so that the zeros in its 2nd column are replaced with the corresponding 2nd-column values from File 2.
To visualize,
File 1:
AA 0
BB 0
CC 0
DD 0
EE 0
File 2:
AA 256
DD 142
EE 26
File 1 - Output:
AA 256
BB 0
CC 0
DD 142
EE 26
I wrote the command below, but as you can see I supply the value from the 1st row of File 2 by hand. I want to achieve this automatically. What should I do?
awk -F'\t' 'BEGIN {OFS=FS} {if($1 == "AA") $2="256";print}' test > test.tmp && mv test.tmp test
Thank you in advance.
awk 'BEGIN {FS=OFS="\t"} NR==FNR{a[$1]=$2; next} {print $1, a[$1]+0}' file2 file1
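With the sample files above, this prints the File 1 - Output shown in the question. The same one-liner spread out with comments, purely as a readability sketch:
awk 'BEGIN {FS=OFS="\t"}           # tab-delimited input and output
     NR==FNR {a[$1]=$2; next}      # first pass (file2): remember the value for each key
     {print $1, a[$1]+0}           # second pass (file1): print the remembered value, or 0 if the key is absent
' file2 file1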