Substitute patterns using a correspondence file - awk
I am trying to replace some words in a file with others, using sed or awk.
My input fileA has this format:
>Genus_species|NP_001006347.1|transcript-60_2900.p1:1-843
I have a second fileB with the correspondences like this:
NP_001006347.1 GeneA
XP_003643123.1 GeneB
I am trying to substitute the name in fileA to get this output:
>Genus_species|GeneA|transcript-60_2900.p1:1-843
I was thinking of using awk or sed to do something like
's/$patternA/$patternB/' inside a while read loop, but how do I indicate which patterns 1 and 2 are in fileB? I also tried the following, but it is not working:
sed "$(sed 's/^\([^ ]*\) \(.*\)$/s#\1#\2#g/' fileB)" fileA
Maybe awk can do the job more easily?
Thanks
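(For reference, the while read idea from the question would look something like this: a minimal sketch, assuming GNU sed's -i for in-place editing. It re-runs sed once per line of fileB, and the literal dots in the IDs are regex metacharacters that happen to also match themselves here.)
while read -r patternA patternB; do
    sed -i "s/$patternA/$patternB/g" fileA    # one in-place pass per mapping line
done < fileB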
It is easier to do this in awk:
awk -v OFS='|' 'NR == FNR {    # first file (fileB): build the lookup table
    map[$1] = $2
    next
}
{
    # second file (fileA): replace any field that has a mapping
    for (i=1; i<=NF; ++i)
        if ($i in map) $i = map[$i]
} 1' fileB FS='|' fileA
The FS='|' between the two file names switches the field separator after fileB has been read, so fileB is split on whitespace and fileA on |.
>Genus_species|GeneA|transcript-60_2900.p1:1-843
Written and tested with your shown samples. Assuming you have only one NP_digits.digits entry per line in your Input_fileA, you could try the following too.
awk '
FNR==NR{                      # first file: save the correspondences
  arr[$1]=$2
  next
}
match($0,/NP_[0-9]+\.[0-9]+/) && ((val=substr($0,RSTART,RLENGTH)) in arr){
  # splice the replacement in at the matched position
  $0=substr($0,1,RSTART-1) arr[val] substr($0,RSTART+RLENGTH)
}
1
' Input_fileB Input_fileA
Using awk
awk -F '[| ]' 'NR==FNR { arr[$1]=$2; next } NR!=FNR { OFS="|"; $2=arr[$2] } 1' fileB fileA
Set the field delimiter to space or |. Process fileB first (NR==FNR), creating an array called arr with the first space-delimited field as the index and the second as the value. Then, for the second file (NR!=FNR), look the second field up in the arr array and, if there is an entry, change the second field to the value from the array, printing the lines with the shorthand 1.
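For illustration, a quick run of this against the sample fileA/fileB from the question (the output is the expected line shown there):
$ awk -F '[| ]' 'NR==FNR { arr[$1]=$2; next } NR!=FNR { OFS="|"; $2=arr[$2] } 1' fileB fileA
>Genus_species|GeneA|transcript-60_2900.p1:1-843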
You are looking for the join command, which can be used like this:
join -11 -22 -t'|' <(tr ' ' '|' < fileB | sort -t'|' -k1) <(sort -t'|' -k2 fileA)
This performs a join on column 1 of fileB with column 2 of fileA. The tr is used so that fileB also uses | as the delimiter, because join requires it to be the same in both files.
Note that the output columns are not in the order you specified. You can swap them by piping the output into awk.
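For example, a minimal sketch of that swap, assuming join emits the key field first (key|GeneA|>Genus_species|transcript...) and that the last field contains no further | separators:
join -11 -22 -t'|' <(tr ' ' '|' < fileB | sort -t'|' -k1) <(sort -t'|' -k2 fileA) |
    awk -F'|' -v OFS='|' '{ print $3, $2, $4 }'    # -> >Genus_species|GeneA|transcript-60_2900.p1:1-843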
Related
selecting columns in awk discarding corresponding header
How to properly select columns in awk after some processing. My file here:
$ cat foo
A;B;C
9;6;7
8;5;4
1;2;3
I want to add a first column with line numbers and then extract some columns of the result. For the example, let's get the new first (line numbers) and third columns. This way:
awk -F';' 'FNR==1{print "linenumber;"$0;next} {print FNR-1,$1,$3}' foo
gives me this unexpected output:
linenumber;A;B;C
1 9 7
2 8 4
3 1 3
but expected is (note B is now the third column as we added linenumber as first):
linenumber;B
1;6
2;5
3;2
To get your expected output, use:
$ awk 'BEGIN { FS=OFS=";" } { print (FNR==1?"linenumber":FNR-1),$(FNR==1?3:1) }' file
Output:
linenumber;C
1;9
2;8
3;1
To add a column with the line number and extract the first and last columns, use:
$ awk 'BEGIN { FS=OFS=";" } { print (FNR==1?"linenumber":FNR-1),$1,$NF }' file
Output this time:
linenumber;A;C
1;9;7
2;8;4
3;1;3
Why do you print $0 (the complete record) in your header? And, if you want only two columns in your output, why do you print 3 (FNR-1, $1 and $3)? Finally, the reason why your output field separators are spaces instead of the expected ; is simply that... you did not specify the output field separator (OFS). You can do this with a command line variable assignment (OFS=\;), as shown in the second and third versions below, but also using the -v option (-v OFS=\;) or a BEGIN block (BEGIN {OFS=";"}), as you wish (there are differences between these 3 methods but they don't matter here).
[EDIT]: see a generic solution at the end.
If the field you want to keep is the second of the input file (the B column), try:
$ awk -F\; 'FNR==1 {print "linenumber;" $2; next} {print FNR-1 ";" $2}' foo
linenumber;B
1;6
2;5
3;2
or:
$ awk -F\; 'FNR==1 {print "linenumber",$2; next} {print FNR-1,$2}' OFS=\; foo
linenumber;B
1;6
2;5
3;2
Note that, as long as you don't want to keep the first field of the input file ($1), you could as well overwrite it with the line number:
$ awk -F\; '{$1=FNR==1?"linenumber":FNR-1; print $1,$2}' OFS=\; foo
linenumber;B
1;6
2;5
3;2
Finally, here is a more generic solution to which you can pass the list of indexes of the columns of the input file you want to print (1 and 3 in this example):
$ awk -F\; -v cols='1;3' '
BEGIN { OFS = ";"; n = split(cols, c); }
{
  printf("%s", FNR == 1 ? "linenumber" : FNR - 1);
  for(i = 1; i <= n; i++) printf("%s", OFS $(c[i]));
  printf("\n");
}' foo
linenumber;A;C
1;9;7
2;8;4
3;1;3
awk filter out CSV file content based on condition on a column
I am trying to filter out content in a CSV file based on a condition on the 2nd column.
Example:
myfile.csv:
A,2,Z
C,1,B
D,9,X
BB,3,NN
DD,8,PP
WA,10,QR
exclude.list:
2
9
8
Desired output file:
C,1,B
BB,3,NN
WA,10,QR
If I wanted to exclude 2, I could use:
awk -F',' '$2!="2" {print}' myfile.csv
I am trying to figure out how to iterate over the exclude.list file to exclude all values in the file.
1st solution (preferred): The following awk may help you.
awk 'FNR==NR{a[$1];next} !($2 in a)' exclude.list FS="," myfile.csv
2nd solution (comprehensive): Adding one more awk that changes the order in which the input files are read. The first solution is preferable; I am adding this to cover all possibilities :)
awk '
FNR==NR{
  a[$2]=$0
  if(!b[$2]++){ c[++i]=$2 }
  next
}
($1 in a){ delete a[$1] }
END{
  for(j=1;j<=i;j++){
    if(a[c[j]]){ print a[c[j]] }
  }
}
' FS="," myfile.csv FS=" " exclude.list
linux csv file concatenate columns into one column
I've been looking to do this with sed, awk, or cut. I am willing to use any other command-line program that I can pipe data through.
I have a large set of data that is comma delimited. The rows have between 14 and 20 columns. I need to recursively concatenate column 10 with column 11 per row such that every row has exactly 14 columns. In other words, this:
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p
will become:
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
I can get the first 10 columns. I can get the last N columns. I can concatenate columns. I cannot think of how to do it in one line so I can pass a stream of endless data through it and end up with exactly 14 columns per row.
Examples (by request):
How many columns are in the row?
sed 's/[^,]//g' | wc -c
Get the first 10 columns:
cut -d, -f1-10
Get the last 4 columns:
rev | cut -d, -f1-4 | rev
Concatenate columns 10 and 11, showing columns 1-10 after that:
awk -F',' 'NF { print $1","$2","$3","$4","$5","$6","$7","$8","$9","$10$11 }'
Awk solution:
awk 'BEGIN{ FS=OFS="," }
{
  diff = NF - 14
  for (i=1; i <= NF; i++)
    printf "%s%s", $i, (diff >= 1 && i >= 10 && i < (10+diff) ? "" : (i == NF ? ORS : ","))
}' file
The output:
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
With GNU awk for the 3rd arg to match() and gensub():
$ cat tst.awk
BEGIN{ FS="," }
match($0,"(([^,]+,){9})(([^,]+,){"NF-14"})(.*)",a) {
    $0 = a[1] gensub(/,/,"","g",a[3]) a[5]
}
{ print }
$ awk -f tst.awk file
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
If perl is okay - it can be used just like awk for stream processing:
$ cat ip.txt
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p
1,2,3,4,5,6,3,4,2,4,3,4,3,2,5,2,3,4
1,2,3,4,5,6,3,4,2,4,a,s,f,e,3,4,3,2,5,2,3,4
$ awk -F, '{print NF}' ip.txt
16
18
22
$ perl -F, -lane '$n = $#F - 4; print join ",", (@F[0..8], join("", @F[9..$n]), @F[$n+1..$#F])' ip.txt
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
1,2,3,4,5,6,3,4,2,43432,5,2,3,4
1,2,3,4,5,6,3,4,2,4asfe3432,5,2,3,4
-F, -lane: split on , and save the results in the @F array
$n = $#F - 4: magic number, to ensure the output ends with 14 columns; $#F gives the index of the last element of the array (won't work if the input line has fewer than 14 columns)
join helps to stitch array elements together with the specified string
@F[0..8]: array slice with the first 9 elements
@F[9..$n] and @F[$n+1..$#F]: the other slices as needed
Borrowing from Ed Morton's regex-based solution:
$ perl -F, -lape '$n=$#F-13; s/^([^,]*,){9}\K([^,]*,){$n}/$&=~tr|,||dr/e' ip.txt
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
1,2,3,4,5,6,3,4,2,43432,5,2,3,4
1,2,3,4,5,6,3,4,2,4asfe3432,5,2,3,4
$n=$#F-13: magic number
^([^,]*,){9}\K: first 9 fields
([^,]*,){$n}: fields to change
$&=~tr|,||dr: use tr to delete the commas
e: this modifier allows use of Perl code in the replacement section
This solution also has the added advantage of working even if the input has fewer than 14 fields.
You can try this GNU sed:
sed -E '
  s/,/\n/9g
  :A
  s/([^\n]*\n)(.*)(\n)(([^\n]*\n){4})/\1\2\4/
  tA
  s/\n/,/g
' infile
First variant - with awk:
awk -F, '
{
  for(i = 1; i <= NF; i++) {
    OFS = (i > 9 && i < NF - 4) ? "" : ","
    if(i == NF) OFS = "\n"
    printf "%s%s", $i, OFS
  }
}' input.txt
Second variant - with sed:
sed -r 's/,/#/10g; :l; s/#(.*)((#[^#]){4})/\1\2/; tl; s/#/,/g' input.txt
or, more straightforwardly (without a loop) and probably faster:
sed -r 's/,(.),(.),(.),(.)$/#\1#\2#\3#\4/; s/,//10g; s/#/,/g' input.txt
Testing
Input:
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u
Output:
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p
a,b,c,d,e,f,g,h,i,jklmn,o,p,q,r
a,b,c,d,e,f,g,h,i,jklmnopq,r,s,t,u
Solved a similar problem using csvtool.
Source file, copied from one of the other answers:
$ cat input.txt
a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p
1,2,3,4,5,6,3,4,2,4,3,4,3,2,5,2,3,4
1,2,3,4,5,6,3,4,2,4,a,s,f,e,3,4,3,2,5,2,3,4
Concatenating columns:
$ cat input.txt | csvtool format '%1,%2,%3,%4,%5,%6,%7,%8,%9,%10%11%12,%13,%14,%15,%16,%17,%18,%19,%20,%21,%22\n' -
a,b,c,d,e,f,g,h,i,jkl,m,n,o,p,,,,,,
1,2,3,4,5,6,3,4,2,434,3,2,5,2,3,4,,,,
1,2,3,4,5,6,3,4,2,4as,f,e,3,4,3,2,5,2,3,4
awk returning whitespace matches when comparing columns in csv
I am trying to do a file comparison in awk, but it seems to be returning all the lines instead of just the lines that match, due to whitespace matching.
awk -F "," 'NR==FNR{a[$2];next} $6 in a{print $6}' file1.csv file2.csv
How do I instruct awk not to match the whitespaces? I get something like the following:
cccs
dert
ssss
assak
This should do:
$ awk -F, 'NR==FNR && $2 {a[$2]; next} $6 in a {print $6}' file1 file2
If your data file includes spaces and numerical fields, as commented below, it is better to change the check from $2 to $2!="" && $2!~/[[:space:]]+/
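A sketch of the one-liner with that stricter check in place (wrapped in an if so that file1 lines never fall through to the second block):
$ awk -F, 'NR==FNR { if ($2!="" && $2!~/[[:space:]]+/) a[$2]; next } $6 in a { print $6 }' file1 file2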
Consider cases like $2=<space>foo<space><space>bar in file1 vs $6=foo<space>bar<space> in file2. Here's how to robustly compare $6 in file2 against $2 of file1, ignoring whitespace differences, and only printing lines that do not have empty or all-whitespace key fields:
awk -F, '
{
    key = (NR==FNR ? $2 : $6)
    gsub(/[[:space:]]+/," ",key)
    gsub(/^ | $/,"",key)
}
key=="" { next }
NR==FNR { file1[key]; next }
key in file1
' file1 file2
If you want to make the comparison case-insensitive, then add key=tolower(key) before the first gsub(). If you want to make it independent of punctuation, add gsub(/[[:punct:]]/,"",key) before the first gsub(). And so on...
The above is untested, of course, since no testable sample input/output was provided.
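For instance, a sketch of the case-insensitive variant described above - the same script with only the tolower() line added:
awk -F, '
{
    key = (NR==FNR ? $2 : $6)
    key = tolower(key)             # case-insensitive comparison, per the note above
    gsub(/[[:space:]]+/," ",key)
    gsub(/^ | $/,"",key)
}
key=="" { next }
NR==FNR { file1[key]; next }
key in file1
' file1 file2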
Is there a way to completely delete fields in awk, so that extra delimiters do not print?
Consider the following command:
$ gawk -F"\t" "BEGIN{OFS=\"\t\"}{$2=$3=\"\"; print $0}" Input.tsv
When I set $2 = $3 = "", the intended effect is to get the same effect as writing:
print $1,$4,$5...$NF
However, what actually happens is that I get two empty fields, with the extra field delimiters still printing.
Is it possible to actually delete $2 and $3?
Note: If this were on Linux in bash, the correct statement above would be the following, but Windows does not handle single quotes well in cmd.exe:
$ gawk -F'\t' 'BEGIN{OFS="\t"}{$2=$3=""; print $0}' Input.tsv
This is an oldie but a goodie. As Jonathan points out, you can't delete fields in the middle, but you can replace their contents with the contents of other fields. And you can make a reusable function to handle the deletion for you.
$ cat test.awk
# delete the given column by shifting every later field one
# position to the left, then dropping the last field
function rmcol(col, i) {
    for (i=col; i<NF; i++) {
        $i = $(i+1)
    }
    NF--
}
{ rmcol(3) }
1
$ printf 'one two three four\ntest red green blue\n' | awk -f test.awk
one two four
test red blue
You can't delete fields in the middle, but you can delete fields at the end, by decrementing NF. So you can shift all the later fields down to overwrite $2 and $3, then decrement NF by two, which erases the last two fields:
$ echo 1 2 3 4 5 6 7 | awk '{for(i=2; i<NF-1; ++i) $i=$(i+2); NF-=2; print $0}'
1 4 5 6 7
If you are just looking to remove columns, you can use cut:
$ cut -f 1,4- file.txt
To emulate cut:
$ awk -F "\t" '{ for (i=1; i<=NF; i++) if (i != 2 && i != 3) { if (i == NF) printf $i"\n"; else printf $i"\t" } }' file.txt
Similarly:
$ awk -F "\t" '{ delim=""; for (i=1; i<=NF; i++) if (i != 2 && i != 3) { printf delim $i; delim="\t" } printf "\n" }' file.txt
HTH
The only way I can think to do it in awk without using a loop is to use gsub on $0 to combine adjacent FS:
$ echo {1..10} | awk '{$2=$3=""; gsub(FS"+",FS); print}'
1 4 5 6 7 8 9 10
One way could be to remove the fields like you do and then remove the extra spaces with gsub:
$ awk 'BEGIN { FS = "\t" } { $2 = $3 = ""; gsub( /\s+/, "\t" ); print }' input-file
In addition to the answer by Suicidal Steve, I'd like to suggest one more solution, using sed instead of awk. It seems more complicated than the usage of cut, as suggested by Steve, but it can be the better solution because sed -i allows editing in place:
$ sed -i 's/\(.*,\).*,.*,\(.*\)/\1\2/' FILENAME
To remove fields 2 and 3 from a given input file (assuming a tab field separator), you can remove the fields from $0 using gensub and regenerate it as follows:
awk -F '\t' 'BEGIN{OFS="\t"}
{
  $0=gensub(/[^\t]*\t/,"",3)    # drop the 3rd field first...
  $0=gensub(/[^\t]*\t/,"",2)    # ...then the 2nd, so the indices stay valid
  print
}' Input.tsv
The method presented in the answer of ghoti has some problems:
Every assignment of $i = $(i+1) forces awk to rebuild the record $0. This implies that if you have 100 fields and you want to delete field 10, you rebuild the record 90 times.
Changing the value of NF manually is not POSIX compliant and leads to undefined behaviour (as is mentioned in the comments).
A somewhat more cumbersome, but stable and robust, way to delete a set of columns would be:
A single column:
awk -v del=3 '
  BEGIN{FS=fs;OFS=ofs}
  {
    b=""
    for(i=1;i<=NF;++i)
      if(i!=del) b=(b ? b OFS : "") $i
    $0=b
  }
  # do whatever you want to do
' file
Multiple columns:
awk -v del=3,5,7 '
  BEGIN{FS=fs;OFS=ofs; del="," del ","}
  {
    b=""
    for(i=1;i<=NF;++i)
      if(del !~ ","i",") b=(b ? b OFS : "") $i
    $0=b
  }
  # do whatever you want to do
' file
Well, if the goal is to remove the extra delimiters, then you can use tr on Linux. Example:
$ echo "1,2,,,5" | tr -s ','
1,2,5
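A sketch of that combined with the question's tab-separated file (assuming GNU tr; note that -s would also squeeze runs of tabs that were already in the data, so genuinely empty fields elsewhere get collapsed too):
$ gawk -F'\t' 'BEGIN{OFS="\t"}{$2=$3=""; print}' Input.tsv | tr -s '\t'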
$ echo one two three four five six | awk '{
    print $0        # the original record
    is3=$3          # save field 3
    $3=""           # empty it (the delimiters remain)
    print $0        # the record rebuilt with an empty $3
    print is3       # the saved field
}'
one two three four five six
one two  four five six
three