Transpose a file and fill missing fields - awk

I have tried several awk and sed commands and GNU datamash to change the format of this data file and code the missing fields as "??", with no success. I have a file with a format that looks like the following:
ind_1 SNP_1 AA
ind_1 SNP_2 AB
ind_1 SNP_3 AA
ind_2 SNP_1 AA
ind_2 SNP_2 AA
ind_3 SNP_1 AB
ind_3 SNP_2 AA
ind_3 SNP_3 AB
ind_3 SNP_4 AA
desired format:
SNP_1 SNP_2 SNP_3 SNP_4
ind_1 AA AB AA ??
ind_2 AA AA ?? ??
ind_3 AB AA AB AA
I first tried GNU datamash:
datamash --no-strict transpose < input1.txt
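As an aside, datamash transpose only flips rows and columns; it does not pivot, which is why that attempt fails. datamash's crosstab operation is closer to what is wanted here. A minimal sketch, assuming a GNU datamash recent enough to have crosstab and --filler (-W makes it accept whitespace-separated input; untested, and the header cell layout may differ slightly from the desired format):
datamash -s -W --filler='??' crosstab 1,2 unique 3 < input1.txt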
Then I tried this awk:
awk '
!b[$1 FS $2]++{
a[++i]=$1 FS $2
}
{
c[$1 FS $2]=c[$1 FS $2]?c[$1 FS $2] FS $4:$4
}
END{
for(k=1;k<=i;k++){
print a[k],c[a[k]]
}}
' input1.txt

awk to the rescue!
With true multidimensional arrays it would be easier, but the following works in most awks:
awk -v OFS='\t' '{vals[$1]; cols[$2]; a[$1,$2]=$3}
END {for(j in cols) printf "%s", OFS j;
print "";
for(i in vals)
{printf "%s", i;
for(j in cols) printf "%s", OFS (((i,j) in a)?a[i,j]:"??");
print ""}}


compare and print 2 columns from 2 files in awk or perl

I have 2 files with 2 million lines each.
I need to compare a column from each of the 2 files and print lines combining fields from both files where the items are equal.
This awk code finds the matches, but it only prints the matching lines from file2, not fields from both files:
awk 'NR == FNR {a[$3]; next}$3 in a' file1.txt file2.txt
file1.txt
0001 00000001 084010800001080
0001 00000010 041140000100004
file2.txt
2451 00000009 401208008004000
2451 00000010 084010800001080
desired output:
file1[$1]-file2[$1] file1[$2]-file2[$2] $3 ( same on both files )
0001-2451 00000001-00000010 084010800001080
How can I do this in awk or perl?
Assuming your $3 values are unique within each input file as shown in your sample input/output:
$ cat tst.awk
NR==FNR {
foos[$3] = $1
bars[$3] = $2
next
}
$3 in foos {
print foos[$3] "-" $1, bars[$3] "-" $2, $3
}
$ awk -f tst.awk file1.txt file2.txt
0001-2451 00000001-00000010 084010800001080
I named the arrays foos[] and bars[] as I don't know what the first 2 columns of your input actually represent - choose a more meaningful name.
With your shown samples, please try the following awk code. Fair warning:
I haven't tested it yet with millions of lines.
awk '
FNR == NR{
arr1[$3]=$0
next
}
($3 in arr1){
split(arr1[$3],arr2)
print (arr2[1]"-"$1,arr2[2]"-"$2,$3)
delete arr2
}
' file1.txt file2.txt
Explanation: Adding detailed explanation for above.
awk ' ##Starting awk program from here.
FNR == NR{ ##checking condition which will be TRUE when first Input_file is being read.
arr1[$3]=$0 ##Creating array arr1 with $3 as index and the whole line $0 as value.
next ##next will skip all further statements from here.
}
($3 in arr1){ ##checking if $3 is present in arr1 then do following.
split(arr1[$3],arr2) ##Splitting value of arr1 into arr2.
print (arr2[1]"-"$1,arr2[2]"-"$2,$3) ##printing values as per requirement of OP.
delete arr2 ##Deleting arr2 array here.
}
' file1.txt file2.txt ##Mentioning Input_file names here.
If you have two massive files, you may want to use sort, join and awk to produce your output without having to hold the first file mostly in memory (sort spills to temporary files, so memory use stays bounded).
Based on your example, this pipe would do that:
join -1 3 -2 3 <(sort -k3 -n file1) <(sort -k3 -n file2) | awk '{printf("%s-%s %s-%s %s\n",$2,$4,$3,$5,$1)}'
Prints:
0001-2451 00000001-00000010 084010800001080
If your files are that big, you might want to avoid storing the data in memory. It's a whole lot of comparisons: 2 million lines times 2 million lines = 4 × 10^12 comparisons.
use strict;
use warnings;
use feature 'say';

my $file1 = shift;
my $file2 = shift;

open my $fh1, "<", $file1 or die "Cannot open '$file1': $!";

while (<$fh1>) {
    my @F = split;
    open my $fh2, "<", $file2 or die "Cannot open '$file2': $!";
    # for each line of file1, file2 is reopened and read again
    while (my $cmp = <$fh2>) {
        my @C = split ' ', $cmp;
        if ($F[2] eq $C[2]) {    # check string equality
            say "$F[0]-$C[0] $F[1]-$C[1] $F[2]";
        }
    }
}
With your rather limited test set, I get the following output:
0001-2451 00000001-00000010 084010800001080
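If the quadratic rescanning is too slow at 2 million lines, the same join can be done in a single pass over each file with a hash, mirroring the awk answers above. A minimal sketch of that variant (my rewrite, not the original answer's code; takes the two file names as arguments and assumes, like the awk answers, that $3 is unique within file1):
use strict;
use warnings;
use feature 'say';

my ($file1, $file2) = @ARGV;

# read file1 once, keyed on its third column
my %seen;
open my $fh1, "<", $file1 or die "Cannot open '$file1': $!";
while (<$fh1>) {
    my @F = split;
    $seen{$F[2]} = [ @F[0, 1] ];
}
close $fh1;

# stream file2 and print combined lines on a matching third column
open my $fh2, "<", $file2 or die "Cannot open '$file2': $!";
while (<$fh2>) {
    my @C = split;
    if (my $f = $seen{$C[2]}) {
        say "$f->[0]-$C[0] $f->[1]-$C[1] $C[2]";
    }
}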
Python: tested with 2,000,000 rows in each file
d = {}
with open('1.txt', 'r') as f1, open('2.txt', 'r') as f2:
    for line in f1:
        if not line: break
        c0, c1, c2 = line.split()
        d[c2] = (c0, c1)
    for line in f2:
        if not line: break
        c0, c1, c2 = line.split()
        if c2 in d:
            print("{}-{} {}-{} {}".format(d[c2][0], c0, d[c2][1], c1, c2))
$ time python3 compare.py
1001-2001 10000001-20000001 224010800001084
1042-2013 10000042-20000013 224010800001096
real 0m3.555s
user 0m3.234s
sys 0m0.321s

awk to divide fields based on match in file1

I am trying to use awk to do the steps below:
find matching fields $1 strings between file1 and file2
if the $1 strings match, then $2 in file1 is divided by $3 in file2 (that is x, which is rounded up to 3 significant figures)
x is multiplied by 100
each x is subtracted from 100 and that is the %
file1
USH2A 21
GIT1 357
PALB2 3
file2
GIT1 21 3096
USH2A 71 17718
PALB2 13 3954
awk
awk 'NR==FNR{a[$1]=$1;next;}{if ($1 in a) print $1, $2/a[$3];else print;}' file2 file1 > test
awk: cmd. line:1: (FILENAME=search FNR=2) fatal: division by zero attempted
awk 'NR==FNR{a[$1]=$1;next;}{if ($1 in a) print $1, $2/a[$3];else print;}' file1 file2 > test
awk: cmd. line:1: (FILENAME=search FNR=1) fatal: division by zero attempted
example
USH2A match is found so (21/17718)*100 = 0.11 and 100-0.11 = 99.89%
GIT1 match is found so (357/3096)*100 = 11.53 and 100-11.53 = 88.47%
PALB2 match is found so (3/3954)*100 = 0.07 and 100-0.07 = 99.93%
I am going through the code line by line and can see that I am already getting errors. Thank you :).
awk to the rescue!
$ awk 'function ceil(v) {return int(v)==v?v:int(v+1)}
NR==FNR{f1[$1]=$2; next}
$1 in f1{print $1, ceil(10000*(1-f1[$1]/$3))/100 "%"}' file1 file2
GIT1 88.47%
USH2A 99.89%
PALB2 99.93%
Note that there is no round-up (ceiling) function in awk, so I defined a ceil function for this task.
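To see what the expression computes, here is the GIT1 row worked by hand (my arithmetic, added for illustration):
1 - 357/3096 = 0.884689...
* 10000      = 8846.89...
ceil         = 8847
/ 100        = 88.47   ->  printed as 88.47%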
$ cat tst.awk
NR==FNR { a[$1]=$3; next }
$1 in a {
x = (a[$1] ? ($2*100)/a[$1] : 0)
printf "%s match is found so (%d/%d) *100 = %.2f and 100-%.2f = %.2f%%\n", $1, $2, a[$1], x, x, 100-x
}
$ awk -f tst.awk file2 file1
USH2A match is found so (21/17718) *100 = 0.12 and 100-0.12 = 99.88%
GIT1 match is found so (357/3096) *100 = 11.53 and 100-11.53 = 88.47%
PALB2 match is found so (3/3954) *100 = 0.08 and 100-0.08 = 99.92%
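Note that the two answers differ slightly (USH2A 99.89% vs 99.88%, PALB2 99.93% vs 99.92%): the first rounds the final percentage up via ceil, while printf "%.2f" rounds to the nearest hundredth, e.g. (21/17718)*100 = 0.1185... -> 0.12, and 100-0.12 = 99.88. Which you want depends on how literally "rounded up" in the question is meant.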

awk code in file comparison

I have two files which have a component name and a version number separated by a space:
cat file1
com.acc.invm:FNS_PROD 94.0.5
com.acc.invm:FNS_TEST_DCCC_Mangment 94.1.6
com.acc.invm:FNS_APIPlat_BDMap 100.0.9
com.acc.invm:SendEmail 29.6.113
com.acc.invm:SendSms 12.23.65
cat file2
com.acc.invm:FNS_PROD 95.0.5
com.acc.invm:FNS_TEST_DCCC_Mangment 94.0.6
com.acc.invm:FNS_APIPlat_BDMap 100.0.10
com.acc.invm:SendEmail 29.60.113
com.acc.invm:SendSms 133.28.65
com.acc.invm:distri_cob 110
desired output :
com.acc.invm:FNS_PROD 95.0.5
com.acc.invm:SendSms 133.28.65
The needed output is: all components from file2 with a higher version than in file1, comparing only the first version component (before the first dot).
In the desired output "com.acc.invm:FNS_PROD" appears because 95 (in file2) > 94 (in file1).
"com.acc.invm:FNS_TEST_DCCC_Mangment" does not appear because for 94.1.6 (in file1) vs 94.0.6 (in file2) the first component is the same (94=94).
I tried this awk code but no luck.
tst.awk
{ split($2,a,/\./); curr = a[1]*10000 + a[2]*100 + a[3] }
NR==FNR { prev[$1] = curr; next }
!($1 in prev) || (curr > prev[$1])
/usr/bin/nawk -f tst.awk file1 file2
Any suggestion will be welcome.
According to your statement (only the first decimal position matters), you don't need curr = a[1]*10000 + a[2]*100 + a[3]; just curr = a[1] would be fine.
As your desired output only contains lines present in both file1 and file2, ($1 in prev) && (curr > prev[$1]) is needed.
{split($2,a,/\./); curr = a[1];}
NR==FNR {prev[$1] = curr; next }
($1 in prev) && (curr > prev[$1])
DEMO
lo@ubuntu:~$ cat f1
com.acc.invm:FNS_PROD 94.0.5
com.acc.invm:FNS_TEST_DCCC_Mangment 94.1.6
com.acc.invm:FNS_APIPlat_BDMap 100.0.9
com.acc.invm:SendEmail 29.6.113
com.acc.invm:SendSms 12.23.65
lo@ubuntu:~$ cat f2
com.acc.invm:FNS_PROD 95.0.5
com.acc.invm:FNS_TEST_DCCC_Mangment 94.0.6
com.acc.invm:FNS_APIPlat_BDMap 100.0.10
com.acc.invm:SendEmail 29.60.113
com.acc.invm:SendSms 133.28.65
com.acc.invm:distri_cob 110
lo@ubuntu:~$ awk -f t.awk f1 f2
com.acc.invm:FNS_PROD 95.0.5
com.acc.invm:SendSms 133.28.65
lo@ubuntu:~$ cat t.awk
{split($2,a,/\./); curr = a[1];}
NR==FNR {prev[$1] = curr; next }
($1 in prev) && (curr > prev[$1])
awk '{ Version = $2
       sub( /[.].*/, "", Version)
       if ( FNR == NR ) Versionning[$1] = Version
       else if ( Versionning[$1] < Version ) print
     }' file1 file2
You can adapt the last if to discard products that do not exist in file1 by changing the condition to Versionning[$1] != "" && Versionning[$1] < Version.
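Putting that together, a sketch of the adapted command; the +0 is my addition to force a numeric rather than string comparison (as strings, "100" compares less than "95"):
awk '{ Version = $2
       sub( /[.].*/, "", Version)          # keep only the part before the first dot
       if ( FNR == NR ) Versionning[$1] = Version
       else if ( Versionning[$1] != "" && Versionning[$1]+0 < Version+0 ) print
     }' file1 file2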

add a new column to the file based on another file

I have two files, file1 and file2, as shown below. file1 has two columns and file2 has one column. I want to add the second column to file2 based on file1. How can I do this with awk?
file1
2WPN B
2WUS A
2X83 A
2XFG A
2XQR C
file2
2WPN_1
2WPN_2
2WPN_3
2WUS
2X83
2XFG_1
2XFG_2
2XQR
Desired Output
2WPN_1 B
2WPN_2 B
2WPN_3 B
2WUS A
2X83 A
2XFG_1 A
2XFG_2 A
2XQR C
Your help would be appreciated.
awk -v OFS='\t' 'FNR == NR { a[$1] = $2; next } { t = $1; sub(/_.*$/, "", t); print $1, a[t] }' file1 file2
Or
awk 'FNR == NR { a[$1] = $2; next } { t = $1; sub(/_.*$/, "", t); printf "%s\t%s\n", $1, a[t] }' file1 file2
Output:
2WPN_1 B
2WPN_2 B
2WPN_3 B
2WUS A
2X83 A
2XFG_1 A
2XFG_2 A
2XQR C
You may pass the output to column -t to align it uniformly with spaces instead of tabs.
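For instance, a sketch combining the first command above with column -t:
awk -v OFS='\t' 'FNR == NR { a[$1] = $2; next } { t = $1; sub(/_.*$/, "", t); print $1, a[t] }' file1 file2 | column -t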

awk improve command - Count & Sum

I would like your suggestions to improve this command and to remove unneeded work that wastes time.
I am trying to find CountOfLines and SumOf$6 grouped by $2, substr($3,4,6), substr($4,4,6), $10, $8, $6.
The gzipped input file contains around 300 million lines.
Input.gz
2067,0,09-MAY-12.04:05:14,09-MAY-12.04:05:14,21-MAR-16,600,INR,RO312,20120321_1C,K1,,32
2160,0,26-MAY-14.02:05:27,26-MAY-14.02:05:27,18-APR-18,600,INR,RO414,20140418_7,K1,,30
2160,0,26-MAY-14.02:05:27,26-MAY-14.02:05:27,18-APR-18,600,INR,RO414,20140418_7,K1,,30
2160,0,26-MAY-14.02:05:27,26-MAY-14.02:05:27,18-APR-18,600,INR,RO414,20140418_7,K1,,30
2104,5,13-JAN-13.01:01:38,,13-JAN-17,4150,INR,RO113,CD1301_RC50_B1_20130113,K2,,21
I am using the command below and it works fine.
zcat Input.gz | awk -F"," '{OFS=","; print $2,substr($3,4,6),substr($4,4,6),$10,$8,$6}' | \
awk -F"," 'BEGIN {count=0; sum=0; OFS=","} {key=$0; a[key]++;b[key]=b[key]+$6} \
END {for (i in a) print i,a[i],b[i]}' >Output.txt
Output.txt
0,MAY-14,MAY-14,K1,RO414,600,3,1800
0,MAY-12,MAY-12,K1,RO312,600,1,600
5,JAN-13,,K2,RO113,4150,1,4150
Any suggestions to improve the above command are welcome.
This seems more efficient:
zcat Input.gz | awk -F, '{key=$2","substr($3,4,6)","substr($4,4,6)","$10","$8","$6;++a[key];b[key]=b[key]+$6}END{for(i in a)print i","a[i]","b[i]}'
Output:
0,MAY-14,MAY-14,K1,RO414,600,3,1800
0,MAY-12,MAY-12,K1,RO312,600,1,600
5,JAN-13,,K2,RO113,4150,1,4150
Uncondensed form:
zcat Input.gz | awk -F, '{
key = $2 "," substr($3, 4, 6) "," substr($4, 4, 6) "," $10 "," $8 "," $6
++a[key]
b[key] = b[key] + $6
}
END {
for (i in a)
print i "," a[i] "," b[i]
}'
You can do this with one awk invocation by redefining the fields according to the first awk script, i.e. something like this:
$1 = $2
$2 = substr($3, 4, 6)
$3 = substr($4, 4, 6)
$4 = $10
$5 = $8
No need to change $6 as that is the same field. Now if you base the key on the new fields, the second script will work almost unaltered. Here is how I would write it, moving the code into a script file for better readability and maintainability:
zcat Input.gz | awk -f parse.awk
Where parse.awk contains:
BEGIN {
FS = OFS = ","
}
{
$1 = $2
$2 = substr($3, 4, 6)
$3 = substr($4, 4, 6)
$4 = $10
$5 = $8
key = $1 OFS $2 OFS $3 OFS $4 OFS $5 OFS $6
a[key]++
b[key] += $6
}
END {
for (i in a)
print i, a[i], b[i]
}
You can of course still run it as a one-liner, but it will look more cryptic:
zcat Input.gz | awk '{ key = $2 FS substr($3,4,6) FS substr($4,4,6) FS $10 FS $8 FS $6; a[key]++; b[key]+=$6 } END { for (i in a) print i,a[i],b[i] }' FS=, OFS=,
Output in both cases:
0,MAY-14,MAY-14,K1,RO414,600,3,1800
0,MAY-12,MAY-12,K1,RO312,600,1,600
5,JAN-13,,K2,RO113,4150,1,4150
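A final caveat on both versions: for (i in a) visits keys in an unspecified order, so the output rows may come out in a different order from run to run. If a deterministic order matters, one option is to pipe through sort; a sketch:
zcat Input.gz | awk -f parse.awk | sort > Output.txt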