I need to get all words with one vowel from a file and print the most common of them. I use awk for matching one-vowel words, but I don't know how to get the most common one.
For example, for the text "test qwerty word test The result is test", it should print "test".
Here is my awk-script:
BEGIN { IGNORECASE=1; }
{
    for (i=1; i<=NF; i++) {
        pattern = "\\<[^AEYUIO]*[AEYUIO][^AEYUIO]*\\>"   # pattern for one-vowel words
        if ($i ~ pattern) {
            print $i
        }
    }
}
It prints all the one-vowel words; now I need to get the most common of them. I tried to use sort like this: awk -f script file_with_data | sort, but I couldn't figure out which options to use. Help me, please.
$ awk '
BEGIN { IGNORECASE=1; }
{
    for (i=1; i<=NF; i++) {
        pattern = "\\<[^AEYUIO]*[AEYUIO][^AEYUIO]*\\>"   # pattern for one-vowel words
        if ($i ~ pattern) {
            if (++freq[$i] > maxf) {   # keep word counts and track the maximum
                maxw = $i
                maxf = freq[$i]
            }
        }
    }
}
END {
    print maxw
}' file
Output:
test
If several words tie for the highest count, this prints only one of them. To print all of them, and since you are already using GNU awk (IGNORECASE), you can traverse the frequency array in descending order of value:
$ awk '
BEGIN { IGNORECASE=1; }
{
    for (i=1; i<=NF; i++) {
        pattern = "\\<[^AEYUIO]*[AEYUIO][^AEYUIO]*\\>"   # pattern for one-vowel words
        if ($i ~ pattern)
            freq[$i]++                                   # word frequencies
    }
}
END {
    PROCINFO["sorted_in"] = "@val_type_desc"   # traverse the array by value, descending
    for (w in freq)                            # start from the biggest count
        if (freq[w] >= p) {                    # and print every word that shares it
            print w
            p = freq[w]
        } else
            break
}' file
Do you need to use awk?
If not, you could get it with a mix of grep and uniq -c, like:
% echo test qwerty word test The result is test | tr ' ' '\n' | grep -iE "^[^AEYUIO]*[AEYUIO][^AEYUIO]*$" | sort | uniq -c | sort
1 is
1 The
1 word
3 test
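To keep only the most frequent word from that pipeline, sort the counts numerically in reverse and strip the count column, something like this (if several words tie, it prints just one of them):
% echo test qwerty word test The result is test | tr ' ' '\n' | grep -iE "^[^AEYUIO]*[AEYUIO][^AEYUIO]*$" | sort | uniq -c | sort -rn | head -n1 | awk '{print $2}'
test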
Related
I want to merge some rows in a file so that the lines contain 22 fields separated by ~.
The input file looks like this:
200269~7414~0027001~VALTD~OM3500~963~~~~716~423~2523~Y~UN~~2423~223~~~~A~200423
2269~744~2701~VALD~3500~93~~~~76~423~223~Y~
UN~~243~223~~~~A~200123
209~7414~7001~VALD~OM30~963~~~

~76~23~2523~Y~UN~~223~223~~~~A~123
and so on.
The first line looks fine. The 2nd and 3rd lines need to be merged so that they become one line with 22 fields; the 4th, 5th and 6th lines should be merged, and so on.
Expected output:
200269~7414~0027001~VALTD~OM3500~963~~~~716~423~2523~Y~UN~~2423~223~~~~A~200423
2269~744~2701~VALD~3500~93~~~~76~423~223~Y~UN~~243~223~~~~A~200123
209~7414~7001~VALD~OM30~963~~~~76~23~2523~Y~UN~~223~223~~~~A~123
The file has 10 GB of data, but the code I wrote (using a while loop) is taking too much time to execute. How can I solve this problem using an awk or sed command?
Code Used:
IFS=$'\n'
set -f
while read line
do
    count_tild=`echo $line | grep -o '~' | wc -l`
    if [ $count_tild == 21 ]
    then
        echo $line
    else
        checkLine
    fi
done < file.txt

function checkLine
{
    current_line=$line
    read line1
    next_line=$line1
    new_line=`echo "$current_line$next_line"`
    count_tild_mod=`echo $new_line | grep -o '~' | wc -l`
    if [ $count_tild_mod == 21 ]
    then
        echo "$new_line"
    else
        line=$new_line
        checkLine
    fi
}
Using only the shell for this is slow, error-prone, and frustrating. Try Awk instead.
awk -F '~' 'NF==1 { next } # Hack; see below
NF<22 {
for(i=1; i<=NF; i++) f[++a]=$i }
a==22 {
for(i=1; i<=a; ++i) printf "%s%s", f[i], (i==22 ? "\n" : "~")
a=0 }
NF==22
END {
if(a) for(i=1; i<=a; i++) printf "%s%s", f[i], (i==a ? "\n" : "~") }' file.txt>file.new
This assumes that consecutive lines with too few fields will always add up to exactly 22 when you merge them. You might want to check this assumption (or perhaps accept this answer and ask a new question with more and better details). Or maybe just add something like
a>22 {
print FILENAME ":" FNR ": Too many fields " a >"/dev/stderr"
exit 1 }
The NF==1 block is a hack to bypass the weirdness of the completely empty line 5 in your sample.
Your attempt contained multiple errors and inefficiencies; for a start, try http://shellcheck.net/ to diagnose many of them.
$ cat tst.awk
BEGIN { FS="~" }
{
sub(/^[0-9]+\./,"")
gsub(/[[:space:]]+/,"")
$0 = prev $0
if ( NF == 22 ) {
print ++cnt "." $0
prev = ""
}
else {
prev = $0
}
}
$ awk -f tst.awk file
1.200269~7414~0027001~VALTD~OM3500~963~~~~716~423~2523~Y~UN~~2423~223~~~~A~200423
2.2269~744~2701~VALD~3500~93~~~~76~423~223~Y~UN~~243~223~~~~A~200123
3.209~7414~7001~VALD~OM30~963~~~~76~23~2523~Y~UN~~223~223~~~~A~123
The assumption above is that you never have more than 22 fields on 1 line nor do you exceed 22 in any concatenation of the contiguous lines that are each less than 22 fields, just like you show in your sample input.
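If you want the script to fail loudly when that assumption is broken instead of silently producing bad output, you could add a guard right after the $0 = prev $0 line, something along these lines (a sketch, not part of the original script):
if ( NF > 22 ) {
    printf "%s: line %d: %d fields after merging\n", FILENAME, FNR, NF > "/dev/stderr"
    exit 1
}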
You can try this awk
awk '
BEGIN {
FS=OFS="~"
}
{
while(NF<22) {
if(NF==0)
break
a=$0
getline
$0=a$0
}
if(NF!=0)
print
}
' infile
or this sed
sed -E '
:A
s/((.*~){21})([^~]*)/\1\3/
tB
N
bA
:B
s/\n//g
' infile
In the tab-delimited file below I am trying to use awk to print out the headers of the fields that contain a single . (dot). The other fields should not contain a ., and I am going to use another awk to detect their data type (either alpha or integer, which could be a decimal). The below seems close but is not working as expected. Thank you :).
file
Index HGMD Sanger Classification Pop
1 . . VUS .36
awk
awk -F'\t' '$2 && $3 ~ /./ && FNR == 1 {printf "dot detected in fields: ORS $0"}' file
Index HGMD Sanger Classification
desired output
dot detected in fields: HGMD, Sanger
Assuming you want the headers of the columns that have a single dot on any line (HGMD and Sanger here):
Index HGMD Sanger Classification Pop
1 . 2 VUS .36
1 . . VUS .36
One solution would be:
awk -F'\t' 'NR==1 {for (i=1 ; i <= NF ; i++) headers[i] = $i; }            # 1
            NR!=1 {for (i=1 ; i <= NF ; i++) if ($i == ".") dots[i] = 1}   # 2
            END { printf "Dots in fields: ";
                  for (x in headers) if (dots[x]) printf "%s ", headers[x]; # 3
                  printf "\n"
            } ' file
(1) Collect the headers from the first input line into the array headers.
(2) On the other input lines, compare each field to a single dot, and set the corresponding entry in the array dots to record any dots found.
(3) Finally, print out the headers of the columns with dots[i] set.
Output is Dots in fields: HGMD Sanger, i.e. they are only listed once.
The dot matches any character in a regex, so $3 ~ /./ in your snippet would be true if field 3 contained any character. Also, $2 && $3 ~ ... would first test field 2 for truthiness (an empty string is falsy), and then do the match on field 3.
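For instance, anchoring the pattern is what turns "the field contains any character" into "the field is exactly one dot"; a small illustration on your sample file:
awk -F'\t' '{
    if ($3 ~ /./)    print NR ": field 3 matches /./ (any character): " $3   # true for "Sanger" and "."
    if ($3 ~ /^\.$/) print NR ": field 3 is exactly a single dot: " $3       # true only for "."
}' file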
You can use awk as below:
awk 'BEGIN{FS="\t"}NR==1{for(i=1;i<=NF;i++) header[i]=$i}{for(i=1;i<=NF;i++) { if (match($i,/^\.$/)) { print header[i] } } }' file
HGMD
Sanger
The idea is to store the header information from the first line in an array keyed by field index 1..n; then, when processing the data lines, whenever a . is encountered, look up the stored header for that index and print it.
awk '
NR==1 { split($0,hdr); next }
{
for (i=1; i<=NF; i++) {
if ($i != ".") {
delete hdr[i]
}
}
}
END {
printf "dot detected in fields"
for (i in hdr) {
printf "%s %s", (c++?",":":"), hdr[i]
}
print ""
}
' file
dot detected in fields: HGMD, Sanger
I have a multi-column file and I want to extract some info from column 71.
I want to extract values by tag, where the value can be anything; for example, I just want to extract AC=* and AF=*, whatever the values are.
I found a similar question and gave it a try, but it didn't work:
Extract columns with values matching a specific pattern
Column 71 looks like this:
AC=14511;AC_AFR=382;AC_AMR=1177;AC_Adj=14343;AC_EAS=5;AC_FIN=427;AC_Het=11813;AC_Hom=1265;AC_NFE=11027;AC_OTH=97;AC_SAS=1228;AF=0.137;AN=106198;AN_AFR=8190;AN_AMR=10424;AN_Adj=99264;AN_EAS=7068;AN_FIN=6414;AN_NFE=51090;AN_OTH=658;AN_SAS=15420;BaseQRankSum=1.73;ClippingRankSum=-1.460e-01;DB;DP=1268322;FS=0.000;GQ_MEAN=190.24;GQ_STDDEV=319.67;Het_AFR=358;Het_AMR=1049;Het_EAS=5;Het_FIN=399;Het_NFE=8799;Het_OTH=83;Het_SAS=1120;Hom_AFR=12;Hom_AMR=64;Hom_EAS=0;Hom_FIN=14;Hom_NFE=1114;Hom_OTH=7;Hom_SAS=54;InbreedingCoeff=0.0478;MQ=60.00;MQ0=0;MQRankSum=0.037;NCC=270;POSITIVE_TRAIN_SITE;QD=21.41;ReadPosRankSum=0.212;VQSLOD=4.79;culprit=MQ;DP_HIST=30|3209|1539|1494|30007|7938|4130|2038|1310|612|334|185|97|60|31|25|9|11|7|33,0|66|339|1048|2096|2665|2626|1832|1210|584|323|179|89|54|31|22|7|9|4|15;GQ_HIST=84|66|56|82|3299|568|617|403|250|319|436|310|28566|2937|827|834|451|186|217|12591,15|15|13|16|25|11|22|28|18|38|52|31|65|76|39|83|93|65|97|12397;CSQ=T|ENSG00000186868|ENST00000334239|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11502.1|ENSP00000334886|TAU_HUMAN|B4DSE3_HUMAN|UPI0000000C16||||2/8||ENST00000334239.8:c.134-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000570299|Transcript|intron_variant&non_coding_transcript_variant||||||rs754512|1||1|MAPT|HGNC|6893|processed_transcript||||||||||2/6||ENST00000570299.1:n.262-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000340799|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45716.1|ENSP00000340438|TAU_HUMAN||UPI000004EEE6||||3/10||ENST00000340799.5:c.221-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000262410|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11501.1|ENSP00000262410|TAU_HUMAN||UPI0000EE80B7||||4/13||ENST00000262410.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000446361|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11500.1|ENSP00000408975|TAU_HUMAN||UPI000004EEE5||||2/9||ENST00000446361.3:c.134-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000574436|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11499.1|ENSP00000460965|TAU_HUMAN||UPI000002D754||||3/10||ENST00000574436.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000571987|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11501.1|ENSP00000458742|TAU_HUMAN||UPI0000EE80B7||||3/12||ENST00000571987.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000415613|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45715.1|ENSP00000410838|TAU_HUMAN||UPI0001AE66E9||||3/13||ENST00000415613.2:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000571311|Transcript|intron_variant&NMD_transcript_variant||||||rs754512|1||1|MAPT|HGNC|6893|nonsense_mediated_decay|||ENSP00000460048||I3L2Z2_HUMAN|UPI00025A2E6E||||4/4||ENST00000571311.1:c.*176-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000535772|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS56033.1|ENSP00000443028|TAU_HUMAN|B4DSE3_HUMAN|UPI000004EEE4||||4/10||ENST00000535772.1:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000576518|Transcript|stop_gained|5499|7|3|K/*|Aag/Tag|rs754512|1||1|MAPT|HGNC|6893|protein_coding|||ENSP00000458621||I3L170_HUMAN&B4DSE3_HUMAN|UPI0001639A7C|||1/7|||ENST00000
576518.1:c.7A>T|ENSP00000458621.1:p.Lys3Ter|T:0.1171|||||||||15792962|||||POSITION:0.00682261208576998&ANN_ORF:-255.6993&MAX_ORF:-255.6993|PHYLOCSF_WEAK|ANC_ALLELE|LC,T|ENSG00000186868|ENST00000420682|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS45716.1|ENSP00000413056|TAU_HUMAN||UPI000004EEE6||||2/9||ENST00000420682.2:c.221-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000572440|Transcript|non_coding_transcript_exon_variant&non_coding_transcript_variant|2790|||||rs754512|1||1|MAPT|HGNC|6893|retained_intron|||||||||1/1|||ENST00000572440.1:n.2790A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000351559|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS11499.1|ENSP00000303214|TAU_HUMAN||UPI000002D754||||4/11||ENST00000351559.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000344290|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding|YES|CCDS45715.1|ENSP00000340820|TAU_HUMAN||UPI0001AE66E9||||4/14||ENST00000344290.5:c.308-94A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000347967|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding|||ENSP00000302706|TAU_HUMAN|B4DSE3_HUMAN|UPI0000173D91||||4/10||ENST00000347967.5:c.32-100A>T||T:0.1171|||||||||15792962||||||||,T|ENSG00000186868|ENST00000431008|Transcript|intron_variant||||||rs754512|1||1|MAPT|HGNC|6893|protein_coding||CCDS56033.1|ENSP00000389250|TAU_HUMAN|B4DSE3_HUMAN|UPI000004EEE4||||3/9||ENST00000431008.3:c.308-94A>T||T:0.1171|||||||||15792962||||||||
The code that I tried:
awk '{
for (i = 1; i <= NF; i++) {
if ($i ~ /AC|AF/) {
printf "%s %s ", $i, $(i + 1)
}
}
print ""
}'
I keep getting a syntax error.
Desired output:
AC=14511;AF=0.137
Whenever you have name=value pairs, it's usually simplest to first create an array that maps names to values (n2v[] below) and then you can just access the values by their names.
$ cat file
AC=1;AC_AFR=2;AF=3 AC=4;AC_AFR=5;AF=6
$ cat tst.awk
{
delete n2v
split($2,tmp,/[;=]/)
for (i=1; i in tmp; i+=2) {
n2v[tmp[i]] = tmp[i+1]
}
prt("AC")
prt("AF")
}
function prt(name) { print name, "=", n2v[name] }
$ awk -f tst.awk file
AC = 4
AF = 6
Just change $2 to $71 for your real input.
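If you want the output in exactly the AC=...;AF=... form shown in the question, the same n2v idea works; here is a sketch (tst2.awk and yourfile are made-up names) that splits on ; first and then on =, so that flag-style entries without an = (such as DB) don't shift the name/value pairing:
$ cat tst2.awk
{
    n = split($71, tags, /;/)
    delete n2v
    for (i = 1; i <= n; i++) {
        split(tags[i], pair, /=/)   # pair[2] stays empty for flags like DB
        n2v[pair[1]] = pair[2]
    }
    printf "AC=%s;AF=%s\n", n2v["AC"], n2v["AF"]
}
$ awk -f tst2.awk yourfile
On the sample row this should print AC=14511;AF=0.137.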
Something like this should do it (in GNU awk, due to switch):
$ awk '{split($71,a,";");for(i in a )if(a[i]~/^AF/) print a[i]}' foo
AF=0.137
You split field $71 by ;, then loop through the resulting array looking for the desired match. For multiple matches, use switch:
$ awk '{
split($0,a,";");
for(i in a )
switch(a[i]) {
case /^AF=/:
b=b a[i] OFS;
break;
case /^AC=/:
b=b a[i] OFS;
break
}
sub(/.$/,"\n",b);
printf b
}' foo
AC=14511 AF=0.137
EDIT: Now it buffers output to a variable and prints it in the end. You can control the separator with OFS.
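If your awk doesn't have switch (it's a GNU extension), the same multi-match buffering can be done with an anchored regex; a sketch along the same lines:
awk '{
    b = ""
    n = split($0, a, ";")          # use $71 instead of $0 if the file holds the full 71-column lines
    for (i = 1; i <= n; i++)
        if (a[i] ~ /^(AC|AF)=/)    # anchored, so AC_AFR=..., AC_Adj=... are not picked up
            b = b a[i] OFS
    sub(/ $/, "", b)               # trim the trailing separator (OFS, a space by default)
    print b
}' foo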
I'm new to programming and all this stuff, so I have to use awk to reverse all the lines in a file, as well as all the words in those lines, and print them out.
"File1" to reverse:
aa bb cc
foo do as
And the output printing of the "File1" should be this:
as do foo
cc bb aa
I tried this for word reverse in each line:
for (i=NF; i>1; i--) printf("%s ",$i); printf("%s\n",$1)
but if I want to print the reversed lines I have to do this
{a[NR]=$0
}END{for(i=NR; i; i--) print a[i]}
I need to work with two files with this command in terminal:
awk -f commandFile FileToBePrinted
The problem is that I'm a beginner at all this and I don't know how to combine those two.
Thanks!
Kev's solution looks to reverse the text in each word. Your example output doesn't show that, but his key point is to use a function.
You have the code you need, you just need to rearrange it a little.
cat file1
aa bb cc
foo do as
cat commandFile
function reverse( line ) {
n=split(line, tmpLine)
for (j=n; j>0; j--) {
printf("%s ",tmpLine[j] )
}
}
# main loop
{ a[NR]=$0 }
# print reversed array
END{ for(i=NR; i>0; i--) printf( "%s\n", reverse(a[i]) ) }
Running
awk -f commandFile file1
output
as do foo
cc bb aa
There were a couple of minor changes I made. Using n=split(line, tmpLine) ... printf tmpLine[j] is a common way to parse a line of input inside a function and print it out. The $1-style field variables don't carry over when you pass in a value from an array (your a[i] value), so I changed it to split ... tmpLine[j]. I also found that the i variable from the END section was still in scope inside the function reverse (awk variables are global unless they are listed as extra function parameters), so I changed it to j to disambiguate the situation.
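As an aside, the usual way to keep those scratch variables out of the global scope is to list them as extra parameters of the function and to return the string instead of printing inside it; a sketch of that variant:
function reverse(line,    n, j, tmpLine, out) {   # everything after the gap is local
    n = split(line, tmpLine)
    for (j = n; j > 0; j--)
        out = out tmpLine[j] (j > 1 ? " " : "")
    return out
}
# main loop
{ a[NR] = $0 }
# print reversed array
END { for (i = NR; i > 0; i--) print reverse(a[i]) }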
I had to figure out a few things, so below is the debug version that I used.
If you're going to have access to gawk, then you'll do well to learn how to use the debugger that is available. If you're using awk/gawk/nawk on systems without a debugger, then this is one method for understanding what is happening in your code. If you're redirecting your programs output to a file or pipe, AND if you system supports "/dev/stderr" notation, you could print the debug lines there, i.e.
#dbg print "#dbg0:line=" line > "/dev/stderr"
Some systems have other notations for accessing stderr, so if you'll be doing this much, it is worthwhile to find out what is available.
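One portable fallback, if /dev/stderr isn't available, is to pipe the debug output through the shell, e.g.
#dbg print "#dbg0:line=" line | "cat 1>&2"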
cat commandFile.debug
function reverse( line ) {
n=split(line, tmpLine)
#dbg print "#dbg0:line=" line
#dbg print "#dbg1:n=" n "\tj=" j "\ttmpLine[j]=" tmpLine[j]
for (j=n; j>0; j--) {
#dbg print "#dbg2:n=" n "\tj=" j "\ttempLine[j]=" tmpLine[j]
printf("%s ",tmpLine[j] )
}
}
# main loop
{ a[NR]=$0 }
# print reversed array
#dbg END{ print "AT END"; for(i=NR; i>0; i--) printf( "#dbg4:i=%d\t%s\n%s\n", i, a[i] , reverse(a[i]) ) }
END{ for(i=NR; i>0; i--) printf( "%s\n", reverse(a[i]) ) }
I hope this helps.
awk '
{
x=reverse($0) (x?"\n":"") x
}
END{
print x
}
function reverse(s, p)
{
for(i=length(s);i>0;i--)
p=p substr(s,i,1)
return p
}' File1
Use tac to reverse the lines of the file, then awk to reverse the words:
tac filename | awk '{for (i=NF; i>1; i--) printf("%s ",$i); printf("%s\n",$1)}'
You could also use something like Perl that has list-reversing functions built-in:
tac filename | perl -lane 'print join(" ", reverse(@F))'
How can I use awk to join various fields, given that I don't know how many of them I have? For example, given the input string
aaa/bbb/ccc/ddd/eee
I use -F'/' as the delimiter, do some manipulation on aaa, bbb, ccc, ddd, eee (altering, removing...) and I want to join it back to print something like
AAA/bbb/ddd/e
Thanks
... given that I don't know how many of them I have?
Ah, but you do know how many you have. Or you will soon, if you keep reading :-)
Before giving you a record to process, awk will set the NF variable to the number of fields in that record, and you can use for loops to process them (comments aren't part of the script, I've just put them there to explain):
$ echo pax/is/a/love/god | awk -F/ '{
gsub (/god/,"dog",$5); # pax,is,a,love,dog
$4 = ""; # pax,is,a,,dog
$6 = $5; # pax,is,a,,dog,dog
$5 = "rabid"; # pax,is,a,,rabid,dog
printf $1; # output "pax"
for (i = 2; i <= NF; i++) { # output ".<field>"
if ($i != "") { # but only for non-blank fields (skip $4)
printf "."$i;
}
}
printf "\n"; # finish line
}'
pax.is.a.rabid.dog
This shows manipulation of the values, as well as insertion and deletion.
The following will show you how to process each field and do some example manipulations on them.
The only caveat of using the output field separator OFS is that "deleted" fields will still have delimiters as shown in the output below; however it makes the code much simpler if you can live with that.
awk '
BEGIN{FS=OFS="/"}
{
for(i=1;i<=NF;i++){
if($i == "aaa")
$i=toupper($i)
else if($i ~ /c/)
$i=""
else if($i ~ /^eee$/)
$i="e"
}
}1' <<<'aaa/bbb/ccc/ddd/eee'
Output
AAA/bbb//ddd/e
This might work for you:
echo "aaa/bbb/ccc/ddd/eee" |
awk 'BEGIN{FS=OFS="/"}{sub(/../,"",$4);NF=4;print}'
aaa/bbb/ccc/d
To delete fields not at the end use a function to shuffle the values:
echo "aaa/bbb/ccc/ddd/eee" |
awk 'func d(n){for(x=n;x<=NF-1;x++){y=x+1;$x=$y}NF--};BEGIN{FS=OFS="/"}{d(2);print}'
aaa/ccc/ddd/eee
Deletes the second field.
awk -F'/' '{ # I'd suggest adding the fields to an array, like:
             # for (i=1;i<=NF;i++) { a[i]=$i }
             # Now manipulate your elements in the array,
             # then finally print them:
             n = asorti(a, dest)                              # dest holds the sorted indices of a
             output = ""
             for (i=1;i<=n;i++) { output = output a[dest[i]] "/" }
             print gensub("/$","","g",output)
}' INPUTFILE
Doing it this way you can delete elements as well. Note that deleting an item can be done with delete array[index].
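For instance, to get the question's AAA/bbb/ddd/e output with this array approach (a sketch, still assuming GNU awk for gensub):
echo "aaa/bbb/ccc/ddd/eee" |
awk -F'/' '{
    for (i=1; i<=NF; i++) a[i] = $i
    a[1] = toupper(a[1])            # aaa -> AAA
    delete a[3]                     # drop ccc entirely
    a[5] = substr(a[5], 1, 1)       # eee -> e
    output = ""
    for (i=1; i<=NF; i++)
        if (i in a) output = output a[i] "/"
    print gensub("/$", "", "g", output)
}'
AAA/bbb/ddd/e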