AWK - Replace between two csv files and print correctly

I have 2 files:
f1.csv:
CL*VIN
AV*AZA
PS*LUG
f2.csv:
2100-12-31*1234A*Thomas*Frederuc*1931-02-20*6791237*6791238*test1*1*0*0*CL*Jame 12*13*a1*zz3*D*13*1234*Tex*F
2100-12-31*1235A*Jack*Borbin*1931-02-21*7791238*7791239*test2*1*0*0*PS*Willliams Hou*14*a2*zz4*A*13*1235*Barc*F
2100-12-31*1236A*Pierce*Matheus*1931-02-22*8791239*8791240*test3*1*1*1*AV*Magnum st*15*a3*zz5*A*13*1236*Euo*F
And I want this output:
2100-12-31*1234A*Thomas*Frederuc*1931-02-20*6791237*6791238*test1*1*0*0*VIN*Jame 12*13*a1*zz3*D*13*1234*Tex*F
2100-12-31*1235A*Jack*Borbin*1931-02-21*7791238*7791239*test2*1*0*0*LUG*Willliams Hou*14*a2*zz4*A*13*1235*Barc*F
2100-12-31*1236A*Pierce*Matheus*1931-02-22*8791239*8791240*test3*1*1*1*AZA*Magnum st*15*a3*zz5*A*13*1236*Euo*F
I have the following code:
awk -F"*" 'FNR==NR{ A[$1]=$2;next} ($12 in A){$12=A[$12];print}' OFS='*' f1.csv f2.csv
But the output is:
*Jame 12*13*a1*zz3*D*13*1234*Tex*F931-02-20*6791237*6791238*test1*1*0*0*VIN
*Willliams Hou*14*a2*zz4*A*13*1235*Barc*F791238*7791239*test2*1*0*0*LUG
*Magnum st*15*a3*zz5*A*13*1236*Euo*F-02-22*8791239*8791240*test3*1*1*1*VIN
How can I obtain my desired output?

Your code works perfectly fine here; what's your system/environment and awk version?
It seems to be something to do with carriage returns, so it's better to run this before dealing with these files:
dos2unix files
However, you can try this:
awk 'BEGIN{FS=OFS="*";RS="\r\n|\r|\n";}FNR==NR{A[$1]=$2;next}($12 in A){$12=A[$12];print}' f1.csv f2.csv
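If you'd rather not change RS, a minimal sketch (assuming the stray characters are just trailing carriage returns) strips them from every input line, in both files, before the lookup:
awk 'BEGIN{FS=OFS="*"} {sub(/\r$/,"")} FNR==NR{A[$1]=$2;next} ($12 in A){$12=A[$12];print}' f1.csv f2.csv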

Related

Awk array, replace with full length matches of keys

I want to replace strings in a target file (target.txt) by strings in a lookup table (lookup.tab), which looks as follows.
Seq_1 Name_one
Seq_2 Name_two
Seq_3 Name_three
...
Seq_10 Name_ten
Seq_11 Name_eleven
Seq_12 Name_twelve
The target.txt file is a large file with a tree structure (Nexus format). It is not arranged in columns.
Therefore I use the following command:
awk 'FNR==NR { array[$1]=$2; next } { for (i in array) gsub(i, array[i]) }1' "lookup.tab" "target.txt"
Unfortunately, this command does not match the full length of the keys from the first column, so Seq_1, Seq_10, Seq_11, Seq_12 end up as Name_one, Name_one0, Name_one1, Name_one2, etc.
How can the awk command be made more specific to correctly substitute the strings?
Please try this and see if it meets your need:
awk 'FNR==NR { le=length($1); a[le][$1]=$2; if (maxL<le) maxL=le; next } { for(le=maxL;le>0;le--) if(length(a[le])) for (i in a[le]) gsub(i, a[le][i]) }1' "lookup.tab" "target.txt"
It's based on your own attempt, but instead of replacing in the array's arbitrary hash order, it replaces using the longer keys first (note this relies on GNU awk's true multidimensional arrays).
This way, and based on your examples, it should be enough to avoid wrong substitutions.
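Alternatively, if you are on GNU awk, a sketch using its word-boundary operators avoids the partial matches altogether, since Seq_1\> cannot match inside Seq_10 (this assumes the lookup keys contain no regex metacharacters):
awk 'FNR==NR { a[$1]=$2; next } { for (i in a) gsub("\\<" i "\\>", a[i]) } 1' "lookup.tab" "target.txt"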

Removing asterisk (*) from the end of a fasta sequence in a multi-fasta file

I have a multi-fasta file containing predicted proteins from 2 ab initio tools. Every sequence contains an asterisk (*) at the end. I want to remove it from the file. My sequences are like this:
>snapgene1
SFLPSAEAIEKVLSHMSRRIIDDMKAELQQPEMRWFWP*
>snapgene2
SFLPSAEAIEKVLSHIIIIAAAAKKKPPFFDDMKAELQQPEMRWFWP*
I want the sequences like this:
>snapgene1
SFLPSAEAIEKVLSHMSRRIIDDMKAELQQPEMRWFWP
>snapgene2
SFLPSAEAIEKVLSHIIIIAAAAKKKPPFFDDMKAELQQPEMRWFWP
Can anyone help me with this? Thank you.
If the text is stored in a file "temp.txt", you can use this command:
sed -i 's/\*$//' temp.txt
In awk, if you keep your fastas in a file:
$ awk '{sub(/\*$/,"")}1' file
>snapgene1
SFLPSAEAIEKVLSHMSRRIIDDMKAELQQPEMRWFWP
>snapgene2
SFLPSAEAIEKVLSHIIIIAAAAKKKPPFFDDMKAELQQPEMRWFWP
It replaces trailing * with nothing.
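If you also want to edit the file in place with awk rather than sed, GNU awk 4.1+ has an inplace extension (a sketch, assuming gawk is available):
gawk -i inplace '{sub(/\*$/,"")} 1' file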

print from match & process several input files

If you scrutinize my questions from the past weeks, you will find I have asked questions similar to this one. I had problems asking in the requested format since I did not really know where my problems came from. E. Morton tells me not to use range expressions. Well, I do not know exactly what they are. I found many questions in this forum like mine, with working answers.
Like: "How to print following line from a match" (e.g.)
But all solutions I found stop working when I process more than one input file. I need to process many.
I use this command:
gawk -f 1.awk print*.csv > new.txt
while 1.awk contains:
BEGIN { OFS=FS=";"
pattern="row4"
}
go {print} $0 ~ pattern {go = 1}
Input file 1, print1.csv, contains:
row1;something;in;this;row;;;;;;;
row2;something;in;this;row;;;;;;;
row3;something;in;this;row;;;;;;;
row4;don't;need;to;match;the;whole;line,;
row5;something;in;this;row;;;;;;;
row6;something;in;this;row;;;;;;;
row7;something;in;this;row;;;;;;;
row8;something;in;this;row;;;;;;;
row9;something;in;this;row;;;;;;;
row10;something;in;this;row;;;;;;;
Input file 2, print2.csv, contains the same, just for illustration purposes.
The 1.awk (and several other ways I found in this forum to print from a match) works for one file. Output:
row5;something;in;this;row;;;;;;;
row6;something;in;this;row;;;;;;;
row7;something;in;this;row;;;;;;;
row8;something;in;this;row;;;;;;;
row9;something;in;this;row;;;;;;;
row10;something;in;this;row;;;;;;;
BUT it does not work when I process more input files.
Each time I process more than one input file this way, the awk commands 'to print from match' seem to be ignored.
As said, I was told not to use range expressions. I do not know how, and maybe the problem is linked to the way I input several files?
Just reset your match indicator at the beginning of each file:
$ awk 'FNR==1{p=0} p; /row4/{p=1} ' file1 file2
row5;something;in;this;row;;;;;;;
row6;something;in;this;row;;;;;;;
row7;something;in;this;row;;;;;;;
row8;something;in;this;row;;;;;;;
row9;something;in;this;row;;;;;;;
row10;something;in;this;row;;;;;;;
row5;something;in;this;row;;;;;;;
row6;something;in;this;row;;;;;;;
row7;something;in;this;row;;;;;;;
row8;something;in;this;row;;;;;;;
row9;something;in;this;row;;;;;;;
row10;something;in;this;row;;;;;;;
UPDATE
From the comments:
"Is it possible to combine your awk with: if $1=="row5" then write "row5" in $6 and delete the value "row5" in $5? In other words, to move the content "row5" in column 1, if found there, to a new column 6? I could do this with another awk but a combination into one would be nicer."
... $1=="row5"{$6=$5; $5=""} ...
or, if you want to use another field instead of $5, replace $5 with the corresponding field number.
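Folding that into the per-file reset version might look like this (a sketch under the comment's assumptions, keeping the same print-after-match behaviour):
awk 'BEGIN{FS=OFS=";"} FNR==1{p=0} p && $1=="row5"{$6=$5; $5=""} p; /row4/{p=1}' file1 file2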

extracting subtext between two characters using grep

I have a text file which has information as follows:
#Mp_chzt_1
asdjhsadhasdhdbjashdjaudashdjashdasdhasdhasdh
asdasdkasjdkaskdskadkasdkasdkjaskldasdklasdas
ahsjdasdfdfsdhghrtuztiuiuzozuoiouiouiouiouiou
asjkjieqjeroiweoriksfjksjksjkf
+
!!!#!!!!!!!!++??????????????~~~~~~~~~~~~~
BBBBBBBBBBBBMMMMMM!!!!!++LLLLLL******
#Mp_btrea_1
uokjjkzghqawsdasduihdlöklöaklöskdlkaökgzgzggz
asdasduzuqwtzeqweuvixcvdjfiisduiifuzwpqüqwoeü
kjkjiuijwiqquzwuziziqz
+
**********||||||||||||#########++++?????????
MMMMMMMMMUUUU***+++~~~~~~~~~~~~~~~~~~~~~~~~~~
#Mp_trwe_3
jhtrqhkjiqkjkqwjelasjjljiewkjkljkldjflsjljki8u
immhgwqtzopirpjgbsdkfjieipwippieoroeirkvsdjjfk
jkahdjhjhfuhjkwekksjakjeiuwiurweiurioweuroweod
poplrtm,ernmjhazqweqwjidiipfiopdifosidpfppsdif
mnasnbdhgqweqweipoipoxkajksdökalsklsaksldkasöd
asdas
+
!!!!!!!!!!!!!!!!!!#####???????????????????
I would like to extract only the region between #Mp_* and the + that comes right below the text, and export it to a txt file like the following:
#Mp_chzt_1
asdjhsadhasdhdbjashdjaudashdjashdasdhasdhasdh
asdasdkasjdkaskdskadkasdkasdkjaskldasdklasdas
ahsjdasdfdfsdhghrtuztiuiuzozuoiouiouiouiouiou
asjkjieqjeroiweoriksfjksjksjkf
#Mp_btrea_1
uokjjkzghqawsdasduihdlöklöaklöskdlkaökgzgzggz
asdasduzuqwtzeqweuvixcvdjfiisduiifuzwpqüqwoeü
kjkjiuijwiqquzwuziziqz
#Mp_trwe_3
jhtrqhkjiqkjkqwjelasjjljiewkjkljkldjflsjljki8u
immhgwqtzopirpjgbsdkfjieipwippieoroeirkvsdjjfk
jkahdjhjhfuhjkwekksjakjeiuwiurweiurioweuroweod
poplrtm,ernmjhazqweqwjidiipfiopdifosidpfppsdif
mnasnbdhgqweqweipoipoxkajksdökalsklsaksldkasöd
asdas
When I used the following code:
grep -o -P '(?<=#MP.*).*(?=+)' query.txt > output.txt
It gave me "grep: nothing to repeat".
Could anyone point out where my mistake is and how to rectify it?
Thanks in advance.
It's better to use awk for this:
awk '/^#/{f=1} /^\+/ {f=0} f' file > output.txt
Or, if you have leading spaces, match them with \s*:
awk '/^\s*#/{f=1} /^\s*\+/ {f=0} f' file > output.txt
This uses a flag f to decide whether the line should be printed or not.
When it sees a line starting with #, it activates it.
When it sees a line starting with +, it deactivates it.
Then, it evaluates the flag and prints if it is True.
With your given input it returns:
#Mp_chzt_1
asdjhsadhasdhdbjashdjaudashdjashdasdhasdhasdh
asdasdkasjdkaskdskadkasdkasdkjaskldasdklasdas
ahsjdasdfdfsdhghrtuztiuiuzozuoiouiouiouiouiou
asjkjieqjeroiweoriksfjksjksjkf
#Mp_btrea_1
uokjjkzghqawsdasduihdlöklöaklöskdlkaökgzgzggz
asdasduzuqwtzeqweuvixcvdjfiisduiifuzwpqüqwoeü
kjkjiuijwiqquzwuziziqz
#Mp_trwe_3
jhtrqhkjiqkjkqwjelasjjljiewkjkljkldjflsjljki8u
immhgwqtzopirpjgbsdkfjieipwippieoroeirkvsdjjfk
jkahdjhjhfuhjkwekksjakjeiuwiurweiurioweuroweod
poplrtm,ernmjhazqweqwjidiipfiopdifosidpfppsdif
mnasnbdhgqweqweipoipoxkajksdökalsklsaksldkasöd
asdas
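If you would rather stay with a grep/sed style tool, a range-based sed sketch (assuming only the headers start with #Mp and each block is closed by a line starting with +) would be:
sed -n '/^#Mp/,/^+/{ /^+/!p; }' query.txt > output.txt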

awk getline skipping to last line -- possible newline character issue

I'm using
while( (getline line < "filename") > 0 )
within my BEGIN statement, but this while loop only seems to read the last line of the file instead of each line. I think it may be a newline character problem, but really I don't know. Any ideas?
I'm trying to read the data in from a file other than the main input file.
The same syntax actually works for one file, but not another, and the only difference I see is that the one for which it DOES work has "^M" at the end of each line when I look at it in Vim, and the one for which it DOESN'T work doesn't have ^M. But this seems like an odd problem to be having on my (UNIX based) Mac.
I wish I understood what was going on with getline a lot better than I do.
You would have to set RS to something more permissive.
Here is an ugly hack to get things working:
RS="[\x0d\x0a\x0d]"
Now, this may require some explanation.
Different systems use different ways to mark the end of a line.
Read http://en.wikipedia.org/wiki/Carriage_return and http://en.wikipedia.org/wiki/Newline if you are interested.
Normally awk handles this gracefully, but it appears that in your environment, some files are being naughty.
Lines should end with 0x0a (LF), 0x0d (CR) or 0x0d 0x0a (CR+LF), but the endings should not be mixed.
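To check which endings a file actually uses, the file utility reports them (e.g. "ASCII text, with CRLF line terminators"), and GNU cat -A makes carriage returns visible as ^M (on BSD/macOS, cat -et is roughly equivalent):
file filename
cat -A filename | head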
So let's try an example of a mixed data stream:
$ echo -e "foo\x0d\x0abar\x0d\x0adoe\x0arar\x0azoe\x0dqwe\x0dtry" |awk 'BEGIN{while((getline r )>0){print "r=["r"]";}}'
Result:
r=[foo]
r=[bar]
r=[doe]
r=[rar]
try]oe
We can see that the last lines are lost.
Now, using the ugly hack for RS:
$ echo -e "foo\x0d\x0abar\x0d\x0adoe\x0arar\x0azoe\x0dqwe\x0dtry" |awk 'BEGIN{RS="[\x0d\x0a\x0d]";while((getline r )>0){print "r=["r"]";}}'
Result:
r=[foo]
r=[bar]
r=[doe]
r=[rar]
r=[zoe]
r=[qwe]
r=[try]
We can see every line is obtained, regardless of the 0x0d 0x0a junk :-)
Maybe you should preprocess your input file with, for example, the dos2unix utility (http://sourceforge.net/projects/dos2unix/)?
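If the only problem is trailing carriage returns (CRLF files), another option (a minimal sketch, untested against your data) is to strip them as each line is read; note this does not help with bare-CR endings, where you would still need the RS trick or dos2unix:
awk 'BEGIN{ while ((getline line < "filename") > 0) { sub(/\r$/, "", line); print "line=[" line "]" } }'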