Is there any way to read user input in an awk program?
I am trying to write a script that reads a file containing students' names and IDs.
I have to get a student's name from the user via the keyboard and return that student's results using awk.
You can collect user input using the getline function. Make sure to do this in the BEGIN block. Here's the contents of script.awk:
BEGIN {
    printf "Enter the student's name: "
    getline name < "-"
}
$2 == name {
    print
}
Here's an example file with IDs, names, and results:
1 jonathan good
2 jane bad
3 steve evil
4 mike nice
Run like:
awk -f ./script.awk file.txt
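For example, with the sample file above saved as file.txt, a session might look like this (jane is just one of the sample names):
$ awk -f ./script.awk file.txt
Enter the student's name: jane
2 jane bad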
Assuming the input file is formatted as:
name<tab>id
pairs and you want to print the line where the name in the file matches the user input, try this:
awk '
BEGIN { FS=OFS="\t"; printf "Enter name: " }
NR == FNR { name = $0; next }
$1 == name
' - file
or with GNU awk you can use nextfile so you don't have to enter control-D after your input:
awk '
BEGIN { FS=OFS="\t"; printf "Enter name: " }
NR == FNR { name = $0; nextfile }
$1 == name
' - file
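A quick sanity check with made-up tab-separated data (the file name and contents below are just for illustration) might look like:
$ printf 'jane\t2\nsteve\t3\n' > file
$ awk 'BEGIN { FS=OFS="\t"; printf "Enter name: " } NR == FNR { name = $0; nextfile } $1 == name' - file
Enter name: jane
jane    2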
Post some sample input and expected output if that's not what you're trying to do.
I've tested with this line:
awk 'BEGIN{printf "enter:";getline name<"/dev/tty"} {print $0} END{printf "[%s]", name}' < /etc/passwd
and for me it is a better and more readable solution.
I have a multi-column file and I want to extract some info from column 71.
I want to extract tags whose values can be anything, for example AC=* and AF=*, where the value can be anything.
I found a similar question and gave it a try, but it didn't work:
Extract columns with values matching a specific pattern
Column 71 looks like this:
AC=14511;AC_AFR=382;AC_AMR=1177;AC_Adj=14343;AC_EAS=5;AC_FIN=427;AC_Het=11813;AC_Hom=1265;AC_NFE=11027;AC_OTH=97;AC_SAS=1228;AF=0.137;AN=106198;AN_AFR=8190;AN_AMR=10424;AN_Adj=99264;AN_EAS=7068;AN_FIN=6414;AN_NFE=51090;AN_OTH=658;AN_SAS=15420;BaseQRankSum=1.73;ClippingRankSum=-1.460e-01;DB;DP=1268322;FS=0.000;GQ_MEAN=190.24;GQ_STDDEV=319.67;Het_AFR=358;Het_AMR=1049;Het_EAS=5;Het_FIN=399;Het_NFE=8799;Het_OTH=83;Het_SAS=1120;Hom_AFR=12;Hom_AMR=64;Hom_EAS=0;Hom_FIN=14;Hom_NFE=1114;Hom_OTH=7;Hom_SAS=54;InbreedingCoeff=0.0478;MQ=60.00;MQ0=0;MQRankSum=0.037;NCC=270;POSITIVE_TRAIN_SITE;QD=21.41;ReadPosRankSum=0.212;VQSLOD=4.79;culprit=MQ;DP_HIST=30|3209|1539|1494|30007|7938|4130|2038|1310|612|334|185|97|60|31|25|9|11|7|33,0|66|339|1048|2096|2665|2626|1832|1210|584|323|179|89|54|31|22|7|9|4|15;GQ_HIST=84|66|56|82|3299|568|617|403|250|319|436|310|28566|2937|827|834|451|186|217|12591,15|15|13|16|25|11|22|28|18|38|52|31|65|76|39|83|93|65|97|12397;CSQ=T|ENSG00000186868|ENST00000334239|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11502.1|ENSP00000334886|TAU_HUMAN|B4DSE3_HUMAN|UPI0000000C16||||2/8||ENST00000334239.8:c.134-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000570299|Transcript|intron_variant&non_coding_transcript_variant||||||rs754512|1||1|MAPT|HGNC|6893|processed_transcript||||||||||2/6||ENST00000570299.1:n.262-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000340799|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45716.1|ENSP00000340438|TAU_HUMAN||UPI000004EEE6||||3/10||ENST00000340799.5:c.221-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000262410|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11501.1|ENSP00000262410|TAU_HUMAN||UPI0000EE80B7||||4/13||ENST00000262410.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000446361|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11500.1|ENSP00000408975|TAU_HUMAN||UPI000004EEE5||||2/9||ENST00000446361.3:c.134-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000574436|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11499.1|ENSP00000460965|TAU_HUMAN||UPI000002D754||||3/10||ENST00000574436.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000571987|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11501.1|ENSP00000458742|TAU_HUMAN||UPI0000EE80B7||||3/12||ENST00000571987.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000415613|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45715.1|ENSP00000410838|TAU_HUMAN||UPI0001AE66E9||||3/13||ENST00000415613.2:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000571311|Transcript|intron_variant&NMD_transcript_variant||||||rs754512|1||1|MAPT|HGNC|6893|nonsense_mediated_decay|||ENSP00000460048||I3L2Z2_HUMAN|UPI00025A2E6E||||4/4||ENST00000571311.1:c.*176-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000535772|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS56033.1|ENSP00000443028|TAU_HUMAN|B4DSE3_HUMAN|UPI000004EEE4||||4/10||ENST00000535772.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000576518|Transcript|stop_gained|5499|7|3|K/*|Aag/Tag|rs754512|1||1|MAPT|HGNC|6893|protein_coding|||ENSP00000458621||I3L170_HUMAN&B4DSE3_HUMAN|UPI0001639A7C|||1/7|||ENST00000
576518.1:c.7A>T|ENSP00000458621.1:p.Lys3Ter|T:0.1171|||||||||15792962|||||POSITION:0.00682261208576998&ANN_ORF:-255.6993&MAX_ORF:-255.6993|PHYLOCSF_WEAK|ANC_ALLELE|LC,T|ENSG00000186868|ENST00000420682|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45716.1|ENSP00000413056|TAU_HUMAN||UPI000004EEE6||||2/9||ENST00000420682.2:c.221-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000572440|Transcript|non_coding_transcript_exon_variant&non_coding_transcript_variant|2790|||||rs754512|1||1|MAPT|HGNC|6893|retained_intron|||||||||1/1|||ENST00000572440.1:n.2790A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000351559|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11499.1|ENSP00000303214|TAU_HUMAN||UPI000002D754||||4/11||ENST00000351559.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000344290|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding|YES|CCDS45715.1|ENSP00000340820|TAU_HUMAN||UPI0001AE66E9||||4/14||ENST00000344290.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000347967|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding|||ENSP00000302706|TAU_HUMAN|B4DSE3_HUMAN|UPI0000173D91||||4/10||ENST00000347967.5:c.32-100A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000431008|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS56033.1|ENSP00000389250|TAU_HUMAN|B4DSE3_HUMAN|UPI000004EEE4||||3/9||ENST00000431008.3:c.308-94A>T||T:0.1171|||||||||15792962||||||||
The code that I tried:
awk '{
    for (i = 1; i <= NF; i++) {
        if ($i ~ /AC|AF/) {
            printf "%s %s ", $i, $(i + 1)
        }
    }
    print ""
}'
I keep getting a syntax error.
Desired output:
AC=14511;AF=0.137
Whenever you have name=value pairs, it's usually simplest to first create an array that maps names to values (n2v[] below) and then you can just access the values by their names.
$ cat file
AC=1;AC_AFR=2;AF=3 AC=4;AC_AFR=5;AF=6
$ cat tst.awk
{
    delete n2v
    split($2,tmp,/[;=]/)
    for (i=1; i in tmp; i+=2) {
        n2v[tmp[i]] = tmp[i+1]
    }
    prt("AC")
    prt("AF")
}
function prt(name) { print name, "=", n2v[name] }
$ awk -f tst.awk file
AC = 4
AF = 6
Just change $2 to $71 for your real input.
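If you want the exact semicolon-joined output shown in the question (AC=14511;AF=0.137), the same name-to-value idea can be adapted; the sketch below (untested against the full data) splits on ; first so that valueless flags like DB don't shift the name/value pairing:
{
    delete n2v
    n = split($71, tags, ";")
    for (i = 1; i <= n; i++) {
        # only keep name=value entries; flags without "=" are skipped
        if (split(tags[i], kv, "=") == 2) {
            n2v[kv[1]] = kv[2]
        }
    }
    print "AC=" n2v["AC"] ";AF=" n2v["AF"]
}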
Something like this should do it (GNU awk is needed for switch):
$ awk '{split($71,a,";");for(i in a )if(a[i]~/^AF/) print a[i]}' foo
AF=0.137
Split field $71 on the ;s, then loop through the resulting array looking for the desired match. For multiple matches, use switch:
$ awk '{
    split($71,a,";")
    for (i in a)
        switch (a[i]) {
        case /^AF=/:
            b = b a[i] OFS
            break
        case /^AC=/:
            b = b a[i] OFS
            break
        }
    sub(/.$/,"\n",b)
    printf b
}' foo
AC=14511 AF=0.137
EDIT: Now it buffers the output in a variable and prints it at the end. You can control the separator with OFS.
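For instance, a compact variant of the same idea (no switch needed, so any awk should do; untested against the full file, and the for (i in a) traversal order is not guaranteed, so the two tags could come out swapped) produces the semicolon-joined form from the question:
$ awk -v OFS=';' '{split($71,a,";"); for (i in a) if (a[i] ~ /^(AC|AF)=/) b = b a[i] OFS; sub(/.$/,"\n",b); printf b}' foo
AC=14511;AF=0.137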
I'm an absolute beginner to awk and would like some help with this.
I have this data:
FOO|BAR|1234|A|B|C|D|
FOO|BAR|1234|E|F|G|H|
FOO|BAR|5678|I|J|K|L|
FOO|BAR|5678|M|N|O|P|
FOO|BAR|5678|Q|R|S|T|
Desired output:
FOO|BAR|1234|A|B|C|D|E|F|G|H|
FOO|BAR|5678|I|J|K|L|M|N|O|P|Q|R|S|T|
Basically I have to append some fields to the lines where column 3 matches.
Appreciate any responses, thanks a lot!
Another way:
awk -F"|" '$3 in a{
a[$3]=a[$3]"|"$4"|"$5"|"$6"|"$7;
next
}
{ a[$3]=$0
}
END {
for ( i in a) {
print a[i]
}
}'
$ awk -f chain.awk < data
FOO|BAR|1234|A|B|C|D|E|F|G|H|
FOO|BAR|5678|I|J|K|L|M|N|O|P|Q|R|S|T|
$ cat chain.awk
BEGIN {FS = "|"}
$3==old {for(i = 4; i <= NF; i++) saved = saved (i>4?"|":"") $i}
$3!=old {if(old) print saved ; saved = $0 ; old = $3}
END {print saved}
$
BEGIN we set the field separator
$3==old we append the fields $4 ... $NF to the saved data, joining the fields with | except for the first one (note that there is a last, null field)
$3!=old we print the saved data (except for the first record, when old is false) and we restart the mechanism
END we still have saved data in our belly, we have to print it
If we have an input:
TargetIDs,CPD,Value,SMILES
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
95,CPD-3333333,-1,c1ccccc1N
Now we would like to separate the duplicates and non-duplicates based on the fourth column (SMILES).
duplicate:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
non-duplicate
95,CPD-3333333,-1,c1ccccc1N
The following attempt separates the duplicates without any problem. However, the first occurrence of each duplicate is still included in the non-duplicate file.
BEGIN { FS = ","; f1="a"; f2="b" }
{
    # Keep count of the fields in fourth column
    count[$4]++;
    # Save the line the first time we encounter a unique field
    if (count[$4] == 1)
        first[$4] = $0;
    # If we encounter the field for the second time, print the
    # previously saved line
    if (count[$4] == 2)
        print first[$4] > f1;
    # From the second time onward, always print because the field is
    # duplicated
    if (count[$4] > 1)
        print > f1;
    if (count[$4] == 1) # if (count[$4] - count[$4] == 0) <= changing to this doesn't work
        print first[$4] > f2;
}
Duplicate output from the attempt:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
Non-duplicate output from the attempt:
TargetIDs,CPD,Value,SMILES
95,CPD-3333333,-1,c1ccccc1N
95,CPD-1111111,-2,c1ccccc1
May I know if any guru might have comments/solutions? Thanks.
I would do this:
awk '
NR==FNR {count[$2] = $1; next}
FNR==1 {FS=","; next}
{
output = (count[$NF] == 1 ? "nondup" : "dup")
print > output
}
' <(cut -d, -f4 input | sort | uniq -c) input
The process substitution will pre-process the file and perform a count on the 4th column. Then, you can process the file and decide if that line is "duplicated".
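For the sample data above (assuming it is saved as input), the pre-processing step produces counts like this (exact ordering may vary):
$ cut -d, -f4 input | sort | uniq -c
      1 SMILES
      3 c1ccccc1
      1 c1ccccc1N
so count["c1ccccc1"] ends up as 3 and count["c1ccccc1N"] as 1, which is what the main block tests against.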
All in awk: Ed Morton shows a way to collect the data in a single pass. Here's a 2-pass solution that's virtually identical to my example above:
awk -F, '
NR==FNR {count[$NF]++; next}
FNR==1 {next}
{
output = (count[$NF] == 1 ? "nondup" : "dup")
print > output
}
' input input
Yes, the input file is given twice.
$ cat tst.awk
BEGIN{ FS="," }
NR>1 {
if (cnt[$4]++) {
dups[$4] = nonDups[$4] dups[$4] $0 ORS
delete nonDups[$4]
}
else {
nonDups[$4] = $0 ORS
}
}
END {
print "Duplicates:"
for (key in dups) {
printf "%s", dups[key]
}
print "\nNon Duplicates:"
for (key in nonDups) {
printf "%s", nonDups[key]
}
}
$ awk -f tst.awk file
Duplicates:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
Non Duplicates:
95,CPD-3333333,-1,c1ccccc1N
This solution only works if the duplicates are grouped together.
awk -F, '
function fout( f, i) {
f = (cnt > 1) ? "dups" : "nondups"
for (i = 1; i <= cnt; ++i)
print lines[i] > f
}
NR > 1 && $4 != lastkey { fout(); cnt = 0 }
{ lastkey = $4; lines[++cnt] = $0 }
END { fout() }
' file
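If the rows are not already grouped, they could be sorted on column 4 first while keeping the header line in place, for example (a sketch, assuming the data is in file):
$ { head -n 1 file; tail -n +2 file | sort -t, -k4,4; } > grouped
Then run the script above on grouped instead of file.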
A little late, but here's my version in awk:
awk -F, 'NR>1{a[$0":"$4];b[$4]++}
END{d="\n\nnondupe";e="dupe"
for(i in a){split(i,c,":");b[c[2]]==1?(d=d"\n"c[1]):(e=e"\n"c[1])} print e d}' file
Another one, built similarly to glenn jackman's but all in awk:
awk -F, 'function r(f) {while((getline <f)>0)a[$4]++;close(f)}
BEGIN{r(ARGV[1])}{output=(a[$4] == 1 ? "nondup" : "dup");print >output} ' file
I want to compare field 1 with field 2 and find duplicates.
For example, the file contains the data below:
51:40-5E:40
51:41-5E:41
51:42-51:40
51:52-5E:52
51:A0-5E:A0
51:A9-5D:B8
51:AA-5E:53
In this file, 51:40 is found in both $1 and $2, so 51:40 should be printed when running the script.
This awk one-liner might work for you:
awk -F- '$1 in a{print $1}{b[$1]}$2 in b{print $2}{a[$2]}' file
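With the sample file from the question, the duplicate is reported as soon as the third line is read, because 51:40 has already been seen as a first field:
$ awk -F- '$1 in a{print $1}{b[$1]}$2 in b{print $2}{a[$2]}' file
51:40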
You want
awk '
BEGIN {FS = "-"}
{
field1[$1]++
field2[$2]++
}
END {
for (item in field1) {
if (item in field2) {
print item
}
}
}
' filename
as a one-liner:
awk -F- '{a[$1];b[$2]} END {for (i in a) if (i in b) print i}' filename
As I understand it, in awk $1 and $2 refer to the first and second fields of the file. But can $1 and $2 be used to refer to the first and second fields of a variable? For example, if session=5 is stored in a variable, I would like $1 to refer to 'session' and $2 to '5'. Thank you.
Input File
session=123
process=90
customer=145
session=123
customer=198
process=90
CODE
awk '$1 ~ /^Session|^CustomerId/' hi|xargs -L 1 -I name '{if (!($1 SUBSEP $2 in a)) {ids[$1]++; a[$1, $2]}} END {for (id in ids) {print "Count of unique", id, " " ids[id]}}'
DETAILS
I pass the output from the first command through xargs, with each line read into the "name" variable in xargs. Now I want $1 to correspond to the first field of that xargs line, and that is my question.
Output
Count of unique sessions=2
Count of unique customer=2
If you want to limit the script to only including "session" and "customer" all you have to do is add the regex to the main script as a selector:
awk -F= '$1 ~ /^(session|customer)$/ {if (!($1 SUBSEP $2 in a)) {ids[$1]++; a[$1, $2]}} END {for (id in ids) {print "Count of unique", id, " " ids[id]}}'
If what you're looking for is a count of unique customers and sessions, then this might do:
awk -F= '
$1~/^(session|customer)$/ && !seen[$0] {
seen[$0]=1;
count[$1]++;
}
END {
printf("Count of sessions: %d\n", count["session"]);
printf("Count of customers: %d\n", count["customer"]);
}' hi
In addition to keeping a count, this keeps an associative array of lines that have contributed a count, to avoid counting lines a second time - thus making it a unique count.
Use the field separator, which can be specified inside the BEGIN code block as FS="separator", or as a command-line option to awk via -F "separator". This answer shows only the point asked about in the question; it does not address the final output.
awk -F"=" '$1 == "session" ||
$1 == "customer" { ids[$1]++ } # do whatever you need with the counters.
END { for (id in ids) {
print "Count, id "=" ids[id] }}' hi
Why don't you just try an all-awk solution? It's simpler:
awk -F "=" '$1 ~ /customer|session/ { name[$1]++ } END { for (var in name) print "Count of unique", var"="name[var] }' hi
Results:
Count of unique customer=2
Count of unique session=2
Is there some other reason you need to pipe to xargs?
HTH
Yet another alternative would be:
awk -F "=" '$1 ~ /customer|session/ {print $1}'|sort |uniq -c | awk '{print "Count of unique "$2"="$1}'
Here is the answer to the question you deleted:
This is a self-contained AWK script based on an answer of mine to one of your earlier questions:
#!/usr/bin/awk -f
/^Customer=/ {
    mc[$0, prev]++
    if (!($0 in cseen)) {
        cust[++custc] = $0
        ids["Customer"]++
    }
    cseen[$0]
}
/^Merchant=/ {
    prev = $0
    if (!($0 in mseen)) {
        merch[++merchc] = $0
        ids["Merchant"]++
    }
    mseen[$0]++
}
END {
    for (id in ids) {
        print "Count of unique", id, ids[id]
    }
    for (i = 1; i <= merchc; i++) {
        merchant = merch[i]
        print "Customers under (" merchant ") is " mseen[merchant]
        for (j = 1; j <= custc; j++) {
            customer = cust[j]
            if (customer SUBSEP merchant in mc) {
                print "(" customer ") under (" merchant ") is " mc[customer, merchant]
            }
        }
    }
}
Make it executable and run it:
$ chmod u+x customermerchant
$ ./customermerchant data.txt