From several consecutive lines that share the same first field, I need to print only one: the one whose last field contains the most elements. The last field is a set of words, and I need the line with the most elements in it. If several lines tie for the maximum, printing any one of them is fine.
Example input:
("aborrecimento",[Noun],[Masc],[Reg:Sing],[Bulk])
("aborrecimento",[Noun],[Masc],[Reg:Sing],[Device,Concrete,Count])
("aborrecimento",[Noun],[Masc],[Reg:Sing],[])
("adiamento",[Noun],[Masc],[Reg:Sing],[])
("adiamento",[Noun],[Masc],[Reg:Sing],[Count])
("adiamento",[Noun],[Masc],[Reg:Sing],[VerbNom])
Example output:
("aborrecimento",[Noun],[Masc],[Reg:Sing],[Device,Concrete,Count])
("adiamento",[Noun],[Masc],[Reg:Sing],[VerbNom])
A solution with awk would be nice, but it doesn't need to be a one-liner.
generate index file
$ cat input.txt |
sed 's/,\[/|[/g' |
awk -F'|' '
{if(!gensub(/[[\])]/, "", "g", $NF))n=0;else n=split($NF, a, /,/); print NR,$1,n}
' |
sort -k2,2 -k3,3nr |
awk '$2!=x{x=$2;print $1}' >idx.txt
content of index file
$ cat idx.txt
2
5
select lines
$ awk 'NR==FNR{idx[$0]; next}; (FNR in idx)' idx.txt input.txt
("aborrecimento",[Noun],[Masc],[Reg:Sing],[Device,Concrete,Count])
("adiamento",[Noun],[Masc],[Reg:Sing],[Count])
Note: no space in input.txt
Use [ as the field delimiter, then split the last field on ,:
awk -F '[[]' '
{split($NF, f, /,/)}
length(f) > max[$1] {line[$1] = $0; max[$1] = length(f)}
END {for (l in line) print line[l]}
' filename
Since order is important, an update:
awk -F '[[]' '
{split($NF, f, /,/)}
length(f) > max[$1] {line[$1] = $0; max[$1] = length(f); nr[$1] = NR}
END {for (l in line) printf("%d\t%s\n", nr[l], line[l])}
' filename |
sort -n |
cut -f 2-
Something like this might work:
awk 'BEGIN {FS="["}
Ff != gensub("^([^,]+).*","\\1","g",$0) { if (length(Ml) > 0) { print Ml } ; Ff = gensub("^([^,]+).*","\\1","g",$0) ; Lf = $NF ; Ml = $0 }
Ff == gensub("^([^,]+).*","\\1","g",$0) { if (length($NF) > length(Lf)) { Lf=$NF ; Ml=$0 } }
END {if (length(Ml) > 0) { print Ml } }' INPUTFILE
See here in action. BUT it's probably not the solution you want to use, as it is rather a hack. It also fails if by "longer" you mean more comma-separated elements rather than more characters in the last field (e.g. the above script happily reports [KABLAMMMMMMMMMMM!] as longer than [A,B,C]).
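If what you actually want to compare is the number of comma-separated elements rather than the string length, a sketch along the same lines (still gawk, because of gensub) would count split() results instead:
awk 'BEGIN { FS = "[" }
{
    key = gensub(/^([^,]+).*/, "\\1", "g", $0)        # first field
    n = ($NF == "])") ? 0 : split($NF, parts, ",")    # elements inside the last [...]
    if (key != prev) {                                # new group: flush the previous best
        if (best != "") print best
        prev = key; best = $0; max = n
    } else if (n > max) { best = $0; max = n }
}
END { if (best != "") print best }' INPUTFILE
On the sample input this prints the [Device,Concrete,Count] line and, for adiamento, one of the two single-element lines, which the question allows.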
This might work for you:
sort -r file | sort -t, -k1,1 -u
I have the following file, string.ext:
>Lipoprotein releasing system transmembrane protein LolC
MKWLWFAYQNVIRNRRRSLMTILIIAVGTAAILLSNGFALYTYDNLREGSALASGHVIIAHVDHFDKEEEIPMEYGLSDYEDIERHIAADDRVRMAIPRLQFSGLISNGDKSVIFMGTGVDPEGEFDIGGVLTNVLTGNTLSTHSAPDAVPEVMLAKDLAKQLHADIGGLLTLLATTADGALNALDVQVRGIFSTGVPEMDKRMLAVALPTAQELIMTDKVGTLSVYLHEIEQTDAMWAVLAEWYPNFATQPWWEQASFYFKVRALYDIIFGVMGVIILLIVFFTITNTLSMTIVERTRETGTLLALGTLPRQIMRNFALEALLIGLAGALLGMLIAGFTSITLFIAEIQMPPPPGSTEGYPLYIYFSPWLYGITSLLVVTLSIAAAFLTSRKAARKPIVEALAHV
>Phosphoserine phosphatase (EC 3.1.3.3)
MFQEHALTLAIFDLDNTLLAGDSDFLWGVFLVERGIVDGDEFERENERFYRAYQEGDLDIFEFLRFAFRPLRDNRLEDLKRWRQDFLREKIEPAILPMACELVEHHRAAGDTLLIITSTNEFVTAPIAEQLGIPNLIATVPEQLHGCYTGEAAGTPAFQAGKVKRLLDWLEETSTELAGSTFYSDSHNDIPLLEWVDHPVATDPDDRLRGYARDRGWPIISLREEIAP
I want the sequential number after "string" to be a 4-digit number (starting with 0001), separated from "string" with |, so that the output looks like this:
>string|0001|Lipoprotein_releasing_system_transmembrane_protein_LolC
MKWLWFAYQNVIRNRRRSLMTILIIAVGTAAILLSNGFALYTYDNLREGSALASGHVIIAHVDHFDKEEEIPMEYGLSDYEDIERHIAADDRVRMAIPRLQFSGLISNGDKSVIFMGTGVDPEGEFDIGGVLTNVLTGNTLSTHSAPDAVPEVMLAKDLAKQLHADIGGLLTLLATTADGALNALDVQVRGIFSTGVPEMDKRMLAVALPTAQELIMTDKVGTLSVYLHEIEQTDAMWAVLAEWYPNFATQPWWEQASFYFKVRALYDIIFGVMGVIILLIVFFTITNTLSMTIVERTRETGTLLALGTLPRQIMRNFALEALLIGLAGALLGMLIAGFTSITLFIAEIQMPPPPGSTEGYPLYIYFSPWLYGITSLLVVTLSIAAAFLTSRKAARKPIVEALAHV
>string|0002|Phosphoserine_phosphatase_(EC_3_1_3_3)
MFQEHALTLAIFDLDNTLLAGDSDFLWGVFLVERGIVDGDEFERENERFYRAYQEGDLDIFEFLRFAFRPLRDNRLEDLKRWRQDFLREKIEPAILPMACELVEHHRAAGDTLLIITSTNEFVTAPIAEQLGIPNLIATVPEQLHGCYTGEAAGTPAFQAGKVKRLLDWLEETSTELAGSTFYSDSHNDIPLLEWVDHPVATDPDDRLRGYARDRGWPIISLREEIAP
The commands I have come up with so far are as follows ($faa refers to the filename string.ext):
faa=$1
var=$(basename "$faa" .ext)                                      # filename base, e.g. "string"
awk '!/^>/ { printf "%s", $0; n = "\n" } /^>/ { print n $0; n = "" } END { printf "%s", n }' $faa >$faa.tmp    # join wrapped sequence lines
sed 's/ /_/g' $faa.tmp >$faa.tmp2                                # spaces -> underscores
awk -v var="$var" '/>/{sub(">","&"var"|");sub(/\.ext/,x)}1' $faa.tmp2 >$faa.tmp3    # insert the basename and | after >, drop any .ext
awk '/>/{sub(/\|/,++i"|")}1' $faa.tmp3 >$faa.tmp4                # replace the first | on each header with a running number and |
tr '\.' '_' <$faa.tmp4 | tr '\:' '_' | sed 's/__/_/g' >$faa.tmp5 # dots and colons -> underscores, collapse doubled ones
Edit: I also want to change the following characters to a single underscore: / . :
I'd use perl here:
perl -pe '
next unless /^>/; # only transform the "header" lines
s/[\h.]/_/g; # change dots and horizontal whitespace
substr($_,1,0) = sprintf("string|%04d|", ++$n) # insert the counter
' file
$ awk '
FNR==1 {base=FILENAME; sub(/\.[^.]+$/,"",base) }
sub(/^>/,"") { gsub(/[\/ .:]+/,"_"); $0=sprintf(">%s|%04d|%s",base,++c,$0) }
1' string.ext
>string|0001|Lipoprotein_releasing_system_transmembrane_protein_LolC
MKWLWFAYQNVIRNRRRSLMTILIIAVGTAAILLSNGFALYTYDNLREGSALASGHVIIAHVDHFDKEEEIPMEYGLSDYEDIERHIAADDRVRMAIPRLQFSGLISNGDKSVIFMGTGVDPEGEFDIGGVLTNVLTGNTLSTHSAPDAVPEVMLAKDLAKQLHADIGGLLTLLATTADGALNALDVQVRGIFSTGVPEMDKRMLAVALPTAQELIMTDKVGTLSVYLHEIEQTDAMWAVLAEWYPNFATQPWWEQASFYFKVRALYDIIFGVMGVIILLIVFFTITNTLSMTIVERTRETGTLLALGTLPRQIMRNFALEALLIGLAGALLGMLIAGFTSITLFIAEIQMPPPPGSTEGYPLYIYFSPWLYGITSLLVVTLSIAAAFLTSRKAARKPIVEALAHV
>string|0002|Phosphoserine_phosphatase_(EC_3_1_3_3)
MFQEHALTLAIFDLDNTLLAGDSDFLWGVFLVERGIVDGDEFERENERFYRAYQEGDLDIFEFLRFAFRPLRDNRLEDLKRWRQDFLREKIEPAILPMACELVEHHRAAGDTLLIITSTNEFVTAPIAEQLGIPNLIATVPEQLHGCYTGEAAGTPAFQAGKVKRLLDWLEETSTELAGSTFYSDSHNDIPLLEWVDHPVATDPDDRLRGYARDRGWPIISLREEIAP
I'm assuming from your posted sample and code that you actually want every contiguous sequence of any combination of spaces, periods, forward slashes and/or colons converted to a single underscore.
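A quick check of that assumption against one of the headers, using the same gsub character class as above:
$ echo '>Phosphoserine phosphatase (EC 3.1.3.3)' | awk '{ gsub(/[\/ .:]+/,"_"); print }'
>Phosphoserine_phosphatase_(EC_3_1_3_3)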
In awk.
$ awk '/^>/{n=sprintf("%04d",++i);sub(/^>/,">string|" n "|")}1' file
>string|0001|Lipoprotein releasing system transmembrane protein LolC
MKWLWFAYQNVIRNRRRSLMTILIIAVGTAAILLSNGFALYTYDNLREGSALASGHVIIAHVDHFDKEEEIPMEYGLSDYEDIERHIAADDRVRMAIPRLQFSGLISNGDKSVIFMGTGVDPEGEFDIGGVLTNVLTGNTLSTHSAPDAVPEVMLAKDLAKQLHADIGGLLTLLATTADGALNALDVQVRGIFSTGVPEMDKRMLAVALPTAQELIMTDKVGTLSVYLHEIEQTDAMWAVLAEWYPNFATQPWWEQASFYFKVRALYDIIFGVMGVIILLIVFFTITNTLSMTIVERTRETGTLLALGTLPRQIMRNFALEALLIGLAGALLGMLIAGFTSITLFIAEIQMPPPPGSTEGYPLYIYFSPWLYGITSLLVVTLSIAAAFLTSRKAARKPIVEALAHV
>string|0002|Phosphoserine phosphatase (EC 3.1.3.3)
MFQEHALTLAIFDLDNTLLAGDSDFLWGVFLVERGIVDGDEFERENERFYRAYQEGDLDIFEFLRFAFRPLRDNRLEDLKRWRQDFLREKIEPAILPMACELVEHHRAAGDTLLIITSTNEFVTAPIAEQLGIPNLIATVPEQLHGCYTGEAAGTPAFQAGKVKRLLDWLEETSTELAGSTFYSDSHNDIPLLEWVDHPVATDPDDRLRGYARDRGWPIISLREEIAP
Explained:
$ awk '
/^>/ { # if string starts with >
n=sprintf("%04d",++i) # iterate i from 1 and zeropad
sub(/^>/,">string|" n "|") # replace the > with stuff
}1' file # implicit output
Don't include & in string (see comments).
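The reason, as a small illustration (not taken from those comments): in the replacement text of sub() and gsub(), & stands for the whole matched text, so a literal & has to be written as \\& inside an awk string:
$ awk 'BEGIN{ s=">x"; sub(/^>/, "A&B|", s);   print s }'
A>B|x
$ awk 'BEGIN{ s=">x"; sub(/^>/, "A\\&B|", s); print s }'
A&B|x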
awk -F'[ \.]' 'BEGIN{a=1;OFS="_"}/^>/{$1=sprintf(">String|%04d",a);++a;print $0; next;}{print $0}' filename
I have a multi-column file and I want to extract some info from column 71.
I want to extract by tag, where the value can be anything; for example, I just want to extract AC=* and AF=*, whatever the values are.
I found a similar question and gave it a try, but it didn't work:
Extract columns with values matching a specific pattern
Column 71 looks like this:
AC=14511;AC_AFR=382;AC_AMR=1177;AC_Adj=14343;AC_EAS=5;AC_FIN=427;AC_Het=11813;AC_Hom=1265;AC_NFE=11027;AC_OTH=97;AC_SAS=1228;AF=0.137;AN=106198;AN_AFR=8190;AN_AMR=10424;AN_Adj=99264;AN_EAS=7068;AN_FIN=6414;AN_NFE=51090;AN_OTH=658;AN_SAS=15420;BaseQRankSum=1.73;ClippingRankSum=-1.460e-01;DB;DP=1268322;FS=0.000;GQ_MEAN=190.24;GQ_STDDEV=319.67;Het_AFR=358;Het_AMR=1049;Het_EAS=5;Het_FIN=399;Het_NFE=8799;Het_OTH=83;Het_SAS=1120;Hom_AFR=12;Hom_AMR=64;Hom_EAS=0;Hom_FIN=14;Hom_NFE=1114;Hom_OTH=7;Hom_SAS=54;InbreedingCoeff=0.0478;MQ=60.00;MQ0=0;MQRankSum=0.037;NCC=270;POSITIVE_TRAIN_SITE;QD=21.41;ReadPosRankSum=0.212;VQSLOD=4.79;culprit=MQ;DP_HIST=30|3209|1539|1494|30007|7938|4130|2038|1310|612|334|185|97|60|31|25|9|11|7|33,0|66|339|1048|2096|2665|2626|1832|1210|584|323|179|89|54|31|22|7|9|4|15;GQ_HIST=84|66|56|82|3299|568|617|403|250|319|436|310|28566|2937|827|834|451|186|217|12591,15|15|13|16|25|11|22|28|18|38|52|31|65|76|39|83|93|65|97|12397;CSQ=T|ENSG00000186868|ENST00000334239|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11502.1|ENSP00000334886|TAU_HUMAN|B4DSE3_HUMAN|UPI0000000C16||||2/8||ENST00000334239.8:c.134-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000570299|Transcript|intron_variant&non_coding_transcript_variant||||||rs754512|1||1|MAPT|HGNC|6893|processed_transcript||||||||||2/6||ENST00000570299.1:n.262-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000340799|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45716.1|ENSP00000340438|TAU_HUMAN||UPI000004EEE6||||3/10||ENST00000340799.5:c.221-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000262410|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11501.1|ENSP00000262410|TAU_HUMAN||UPI0000EE80B7||||4/13||ENST00000262410.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000446361|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11500.1|ENSP00000408975|TAU_HUMAN||UPI000004EEE5||||2/9||ENST00000446361.3:c.134-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000574436|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11499.1|ENSP00000460965|TAU_HUMAN||UPI000002D754||||3/10||ENST00000574436.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000571987|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11501.1|ENSP00000458742|TAU_HUMAN||UPI0000EE80B7||||3/12||ENST00000571987.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000415613|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45715.1|ENSP00000410838|TAU_HUMAN||UPI0001AE66E9||||3/13||ENST00000415613.2:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000571311|Transcript|intron_variant&NMD_transcript_variant||||||rs754512|1||1|MAPT|HGNC|6893|nonsense_mediated_decay|||ENSP00000460048||I3L2Z2_HUMAN|UPI00025A2E6E||||4/4||ENST00000571311.1:c.*176-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000535772|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS56033.1|ENSP00000443028|TAU_HUMAN|B4DSE3_HUMAN|UPI000004EEE4||||4/10||ENST00000535772.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000576518|Transcript|stop_gained|5499|7|3|K/*|Aag/Tag|rs754512|1||1|MAPT|HGNC|6893|protein_coding|||ENSP00000458621||I3L170_HUMAN&B4DSE3_HUMAN|UPI0001639A7C|||1/7|||ENST00000
576518.1:c.7A>T|ENSP00000458621.1:p.Lys3Ter|T:0.1171|||||||||15792962|||||POSITION:0.00682261208576998&ANN_ORF:-255.6993&MAX_ORF:-255.6993|PHYLOCSF_WEAK|ANC_ALLELE|LC,T|ENSG00000186868|ENST00000420682|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45716.1|ENSP00000413056|TAU_HUMAN||UPI000004EEE6||||2/9||ENST00000420682.2:c.221-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000572440|Transcript|non_coding_transcript_exon_variant&non_coding_transcript_variant|2790|||||rs754512|1||1|MAPT|HGNC|6893|retained_intron|||||||||1/1|||ENST00000572440.1:n.2790A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000351559|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11499.1|ENSP00000303214|TAU_HUMAN||UPI000002D754||||4/11||ENST00000351559.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000344290|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding|YES|CCDS45715.1|ENSP00000340820|TAU_HUMAN||UPI0001AE66E9||||4/14||ENST00000344290.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000347967|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding|||ENSP00000302706|TAU_HUMAN|B4DSE3_HUMAN|UPI0000173D91||||4/10||ENST00000347967.5:c.32-100A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000431008|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS56033.1|ENSP00000389250|TAU_HUMAN|B4DSE3_HUMAN|UPI000004EEE4||||3/9||ENST00000431008.3:c.308-94A>T||T:0.1171|||||||||15792962||||||||
The code that I tried:
awk '{
for (i = 1; i <= NF; i++) {
if ($i ~ /AC|AF/) {
printf "%s %s ", $i, $(i + 1)
}
}
print ""
}'
I keep getting a syntax error.
Output wanted:
AC=14511;AF=0.137
Whenever you have name=value pairs, it's usually simplest to first create an array that maps names to values (n2v[] below) and then you can just access the values by their names.
$ cat file
AC=1;AC_AFR=2;AF=3 AC=4;AC_AFR=5;AF=6
$ cat tst.awk
{
delete n2v
split($2,tmp,/[;=]/)
for (i=1; i in tmp; i+=2) {
n2v[tmp[i]] = tmp[i+1]
}
prt("AC")
prt("AF")
}
function prt(name) { print name, "=", n2v[name] }
$ awk -f tst.awk file
AC = 4
AF = 6
Just change $2 to $71 for your real input.
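For instance, a variant aimed at the exact AC=...;AF=... output asked for (just a sketch; it assumes column 71 really is the 71st whitespace-separated field of the file):
{
    delete n2v
    split($71, tmp, /[;=]/)               # column 71 holds the name=value pairs
    for (i = 1; i in tmp; i += 2) {
        n2v[tmp[i]] = tmp[i+1]
    }
    printf "AC=%s;AF=%s\n", n2v["AC"], n2v["AF"]
}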
Something like this should do it (GNU awk, because of the switch used in the second version):
$ awk '{split($71,a,";");for(i in a )if(a[i]~/^AF/) print a[i]}' foo
AF=0.137
You split field $71 on ;, then loop through the resulting array looking for the desired match. For multiple matches, use switch:
$ awk '{
split($0,a,";");
for(i in a )
switch(a[i]) {
case /^AF=/:
b=b a[i] OFS;
break;
case /^AC=/:
b=b a[i] OFS;
break
}
sub(/.$/,"\n",b);
printf b
}' foo
AC=14511 AF=0.137
EDIT: it now buffers the output in a variable and prints it at the end. You can control the separator with OFS.
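For example, to get the semicolon-separated form asked for in the question, set OFS on the command line; assuming the script above is saved as extract.awk (a hypothetical name), the invocation would look something like:
$ awk -v OFS=';' -f extract.awk foo
AC=14511;AF=0.137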
I'm an absolute beginner to awk and would like some help with this.
I have this data:
FOO|BAR|1234|A|B|C|D|
FOO|BAR|1234|E|F|G|H|
FOO|BAR|5678|I|J|K|L|
FOO|BAR|5678|M|N|O|P|
FOO|BAR|5678|Q|R|S|T|
Desired output:
FOO|BAR|1234|A|B|C|D|E|F|G|H|
FOO|BAR|5678|I|J|K|L|M|N|O|P|Q|R|S|T|
Basically I have to append some fields to the lines where column 3 matches.
Appreciate any responses, thanks a lot!
Another way:
awk -F"|" '$3 in a{
a[$3]=a[$3] $4"|"$5"|"$6"|"$7"|";
next
}
{ a[$3]=$0
}
END {
for ( i in a) {
print a[i]
}
}'
$ awk -f chain.awk < data
FOO|BAR|1234|A|B|C|D|E|F|G|H|
FOO|BAR|5678|I|J|K|L|M|N|O|P|Q|R|S|T|
$ cat chain.awk
BEGIN {FS = "|"}
$3==old {for(i = 4; i <= NF; i++) saved = saved (i>4?"|":"") $i}
$3!=old {if(old) print saved ; saved = $0 ; old = $3}
END {print saved}
$
BEGIN we set the field separator
$3==old we append the fields $4 ... $NF to the saved data, joining the fields with | except for the first one (note that there is a last, null field)
$3!=old we print the saved data (except for the first record, when old is false) and we restart the mechanism
END we still have saved data in our belly, we have to print it
If we have an input:
TargetIDs,CPD,Value,SMILES
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
95,CPD-3333333,-1,c1ccccc1N
Now we would like to separate the duplicates from the non-duplicates based on the fourth column (SMILES).
duplicate:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
non-duplicate
95,CPD-3333333,-1,c1ccccc1N
The following attempt separates the duplicates without any problem. However, the first occurrence of each duplicated value is still written to the non-duplicate file.
BEGIN { FS = ","; f1="a"; f2="b"}
{
# Keep count of the fields in fourth column
count[$4]++;
# Save the line the first time we encounter a unique field
if (count[$4] == 1)
first[$4] = $0;
# If we encounter the field for the second time, print the
# previously saved line
if (count[$4] == 2)
print first[$4] > f1 ;
# From the second time onward. always print because the field is
# duplicated
if (count[$4] > 1)
print > f1;
if (count[$4] == 1) #if (count[$4] - count[$4] == 0) <= change to this doesn't work
print first[$4] > f2;
}
Duplicate output from the attempt:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
Non-duplicate output from the attempt:
TargetIDs,CPD,Value,SMILES
95,CPD-3333333,-1,c1ccccc1N
95,CPD-1111111,-2,c1ccccc1
May I know if any guru might have comments/solutions? Thanks.
I would do this:
awk '
NR==FNR {count[$2] = $1; next}
FNR==1 {FS=","; next}
{
output = (count[$NF] == 1 ? "nondup" : "dup")
print > output
}
' <(cut -d, -f4 input | sort | uniq -c) input
The process substitution will pre-process the file and perform a count on the 4th column. Then, you can process the file and decide if that line is "duplicated".
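For the sample input above, the pre-processing step alone produces the per-SMILES counts that the awk pass then looks up (exact whitespace and ordering may vary with your locale):
$ cut -d, -f4 input | sort | uniq -c
      1 SMILES
      3 c1ccccc1
      1 c1ccccc1N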
All in awk: Ed Morton shows a way to collect the data in a single pass. Here's a two-pass solution that's virtually identical to my example above.
awk -F, '
NR==FNR {count[$NF]++; next}
FNR==1 {next}
{
output = (count[$NF] == 1 ? "nondup" : "dup")
print > output
}
' input input
Yes, the input file is given twice.
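The NR==FNR test is what separates the two passes: NR counts records across all input files while FNR restarts at 1 for each file, so they are only equal while the first copy is being read. A tiny illustration with a throwaway two-line file:
$ printf 'a\nb\n' > t.txt
$ awk '{ print NR, FNR, (NR==FNR ? "first pass" : "second pass") }' t.txt t.txt
1 1 first pass
2 2 first pass
3 1 second pass
4 2 second pass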
$ cat tst.awk
BEGIN{ FS="," }
NR>1 {
if (cnt[$4]++) {
dups[$4] = nonDups[$4] dups[$4] $0 ORS
delete nonDups[$4]
}
else {
nonDups[$4] = $0 ORS
}
}
END {
print "Duplicates:"
for (key in dups) {
printf "%s", dups[key]
}
print "\nNon Duplicates:"
for (key in nonDups) {
printf "%s", nonDups[key]
}
}
$ awk -f tst.awk file
Duplicates:
95,CPD-1111111,-2,c1ccccc1
95,CPD-2222222,-3,c1ccccc1
95,CPD-2222222,-4,c1ccccc1
Non Duplicates:
95,CPD-3333333,-1,c1ccccc1N
This solution only works if the duplicates are grouped together; see the note after the code for one way to pre-group them.
awk -F, '
function fout( f, i) {
f = (cnt > 1) ? "dups" : "nondups"
for (i = 1; i <= cnt; ++i)
print lines[i] > f
}
NR > 1 && $4 != lastkey { fout(); cnt = 0 }
{ lastkey = $4; lines[++cnt] = $0 }
END { fout() }
' file
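If the rows are not already grouped by the fourth column, one way to pre-group them while keeping the header line first (a sketch, with grouped as a hypothetical name for the intermediate file):
{ head -n 1 file; tail -n +2 file | sort -t, -k4,4; } > grouped
The script above can then be run on grouped instead of file.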
Little late
My version in awk
awk -F, 'NR>1{a[$0":"$4];b[$4]++}
END{d="\n\nnondupe";e="dupe"
for(i in a){split(i,c,":");if(b[c[2]]==1)d=d"\n"c[1];else e=e"\n"c[1]}print e d}' file
Another one, built similarly to glenn jackman's but all in awk:
awk -F, 'function r(f) {while((getline <f)>0)a[$4]++;close(f)}
BEGIN{r(ARGV[1])}{output=(a[$4] == 1 ? "nondup" : "dup");print >output} ' file
How can I use awk to join various fields, given that I don't know how many of them I have? For example, given the input string
aaa/bbb/ccc/ddd/eee
I use -F'/' as the delimiter, do some manipulation on aaa, bbb, ccc, ddd, eee (altering, removing, ...), and I want to join it back to print something like
AAA/bbb/ddd/e
Thanks
... given that I don't know how many of them I have?
Ah, but you do know how many you have. Or you will soon, if you keep reading :-)
Before giving you a record to process, awk will set the NF variable to the number of fields in that record, and you can use for loops to process them (comments aren't part of the script, I've just put them there to explain):
$ echo pax/is/a/love/god | awk -F/ '{
gsub (/god/,"dog",$5); # pax,is,a,love,dog
$4 = ""; # pax,is,a,,dog
$6 = $5; # pax,is,a,,dog,dog
$5 = "rabid"; # pax,is,a,,rabid,dog
printf $1; # output "pax"
for (i = 2; i <= NF; i++) { # output ".<field>"
if ($i != "") { # but only for non-blank fields (skip $4)
printf "."$i;
}
}
printf "\n"; # finish line
}'
pax.is.a.rabid.dog
This shows manipulation of the values, as well as insertion and deletion.
The following will show you how to process each field and do some example manipulations on them.
The only caveat of using the output field separator OFS is that "deleted" fields will still have delimiters as shown in the output below; however it makes the code much simpler if you can live with that.
awk '
BEGIN{FS=OFS="/"}
{
for(i=1;i<=NF;i++){
if($i == "aaa")
$i=toupper($i)
else if($i ~ /c/)
$i=""
else if($i ~ /^eee$/)
$i="e"
}
}1' <<<'aaa/bbb/ccc/ddd/eee'
Output
AAA/bbb//ddd/e
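If the leftover delimiters are a problem, one option (a sketch, not part of the answer above) is to rebuild the record at the end and skip the fields that were blanked out:
awk '
BEGIN{FS=OFS="/"}
{
    for(i=1;i<=NF;i++){
        if($i == "aaa")
            $i=toupper($i)
        else if($i ~ /c/)
            $i=""
        else if($i ~ /^eee$/)
            $i="e"
    }
    out = ""
    for(i=1;i<=NF;i++)
        if($i != "")                      # drop the blanked-out fields
            out = (out == "" ? $i : out OFS $i)
    print out
}' <<<'aaa/bbb/ccc/ddd/eee'
which prints AAA/bbb/ddd/e.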
This might work for you:
echo "aaa/bbb/ccc/ddd/eee" |
awk 'BEGIN{FS=OFS="/"}{sub(/../,"",$4);NF=4;print}'
aaa/bbb/ccc/d
To delete fields not at the end use a function to shuffle the values:
echo "aaa/bbb/ccc/ddd/eee" |
awk 'func d(n){for(x=n;x<=NF-1;x++){y=x+1;$x=$y}NF--};BEGIN{FS=OFS="/"}{d(2);print}'
aaa/ccc/ddd/eee
Deletes the second field.
awk -F'/' '{ # I would suggest adding the fields to an array first:
for (i=1;i<=NF;i++) { a[i]=$i }
# Now manipulate your elements in the array
# then finally print them:
output = ""
n = asorti(a, dest)                 # gawk: the surviving indices, in sorted order
for (i=1;i<=n;i++) { output = output a[dest[i]] "/" }
print gensub("/$","","g",output)
delete a                            # reset for the next record
}' INPUTFILE
Doing it this way you can delete elements as well; note that deleting an item is done with delete array[index].
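For instance, a concrete run of that idea (a sketch; gawk, because of asorti and gensub), dropping the third field:
echo 'aaa/bbb/ccc/ddd/eee' |
awk -F'/' '{
    for (i = 1; i <= NF; i++) a[i] = $i   # copy the fields into an array
    delete a[3]                           # drop "ccc"
    n = asorti(a, dest)                   # the remaining indices, in order
    out = ""
    for (i = 1; i <= n; i++) out = out a[dest[i]] "/"
    print gensub("/$", "", "g", out)
}'
aaa/bbb/ddd/eee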