Replace value in nth line ABOVE search string using awk/sed

I have a large firewall configuration file with sections like these distributed all over:
edit 78231
set srcintf "port1"
set dstintf "any"
set srcaddr "srcaddr"
set dstaddr "vip-dnat"
set service "service"
set schedule "always"
set logtraffic all
set logtraffic-start enable
set status enable
set action accept
next
I want to replace value "port1", which is 3 lines above search string "vip-dnat".
It seems the below solution is close but I don't seem to be able to invert the search to check above the matched string. Also it does not replace the value inside the file:
Replace nth line below the searched pattern in a file
I'm able to extract the exact value using the following awk command but simply cannot figure out how to replace it within the file (sub/gsub?):
awk -v N=3 -v pattern=".*vip-dnat.*" '{i=(1+(i%N));if (buffer[i]&& $0 ~ pattern) print buffer[i]; buffer[i]=$3;}' filename
"port1"

We could use a tac + awk combination here. The variable occur holds after how many lines (counting in reverse from the "vip-dnat" match) the substitution should be performed.
tac Input_file |
awk -v occur="3" -v new_port="new_port_value" '
/\"vip-dnat\"/{
found=1
count=0
print
next
}
found && ++count==occur{
sub(/"port1"/,new_port)
found=""
}
1' |
tac
Explanation: a detailed explanation of the above.
tac Input_file | ##Print Input_file in reverse line order and pipe it into awk.
awk -v occur="3" -v new_port="new_port_value" ' ##Start awk with 2 variables: occur, how many lines after the match to perform the substitution, and new_port, the replacement value.
/\"vip-dnat\"/{ ##If the line contains "vip-dnat" then do the following.
found=1 ##Set found to 1.
count=0 ##Reset count so every matching section is handled.
print ##Print the current line.
next ##Skip the remaining statements for this line.
}
found && ++count==occur{ ##If found is set and the incremented count equals occur.
sub(/"port1"/,new_port) ##Substitute "port1" with the new_port value.
found="" ##Nullify found.
}
1' | ##1 prints the current line; output is piped to tac.
tac ##tac again restores the original line order.
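Under the stated assumptions (GNU tac available; file name and replacement port value are stand-ins), the pipeline can be exercised end to end like this:

```shell
# Build a throwaway sample file mirroring the question's section.
cat > fw.conf <<'EOF'
edit 78231
set srcintf "port1"
set dstintf "any"
set srcaddr "srcaddr"
set dstaddr "vip-dnat"
set service "service"
set schedule "always"
next
EOF

# Reverse, substitute 3 lines "after" the match (i.e. above it in the
# original order), reverse back; count=0 guards multiple sections.
tac fw.conf |
awk -v occur="3" -v new_port='"port9"' '
/\"vip-dnat\"/{ found=1; count=0; print; next }
found && ++count==occur{ sub(/"port1"/,new_port); found="" }
1' |
tac > fw.new
```

Writing to fw.new and then moving it over the original gives the in-place effect the question asks about.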

Use a Perl one-liner. In this example, it changes the line 3 lines above the matched string to set foo bar:
perl -0777 -pe 's{ (.*\n) (.*\n) ( (?:.*\n){2} .* vip-dnat ) }{${1} set foo bar\n${3}}xms' in_file
Prints:
edit 78231
set foo bar
set dstintf "any"
set srcaddr "srcaddr"
set dstaddr "vip-dnat"
set service "service"
set schedule "always"
set logtraffic all
set logtraffic-start enable
set status enable
set action accept
next
When you are satisfied with the replacement written into STDOUT, change perl to perl -i.bak to replace the file in-place.
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-p : Loop over the input one line at a time, assigning it to $_ by default. Add print $_ after each loop iteration.
-i.bak : Edit input files in-place (overwrite the input file). Before overwriting, save a backup copy of the original file by appending to its name the extension .bak.
-0777 : Slurp files whole.
(.*\n) : Any character, repeated 0 or more times, ending with a newline. Parentheses serve to capture the matched part into "match variables", numbered $1, $2, etc., from left to right according to the position of the opening parenthesis.
( (?:.*\n){2} .* vip-dnat ) : 2 lines followed by the line with the desired string vip-dnat. (?: ... ) represents non-capturing parentheses.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlre: Perl regular expressions (regexes)
perldoc perlre: Perl regular expressions (regexes): Quantifiers; Character Classes and other Special Escapes; Assertions; Capture groups
perldoc perlrequick: Perl regular expressions quick start
The regex uses these modifiers:
/x : Ignore whitespace and comments, for readability.
/m : Treat the string as multiple lines, so ^ and $ match at internal line breaks.
/s : Allow . to match a newline.

Whenever you have tag-value pairs in your data, it's best to first create an array of that mapping (tag2val[] below); then you can test and/or change and/or print the values in whatever order you like just by using their names:
$ cat tst.awk
$1 == "edit" { editId=$2; next }
editId != "" {
if ($1 == "next") {
# Here is where you test and/or set the values of whatever tags
# you like by referencing their names.
if ( tag2val[ifTag] == ifVal ) {
tag2val[thenTag] = thenVal
}
print "edit", editId
for (tagNr=1; tagNr<=numTags; tagNr++) {
tag = tags[tagNr]
val = tag2val[tag]
print " set", tag, val
}
print $1
editId = ""
numTags = 0
delete tag2val
}
else {
tag = $2
sub(/^[[:space:]]*([^[:space:]]+[[:space:]]+){2}/,"")
sub(/[[:space:]]+$/,"")
val = $0
if ( !(tag in tag2val) ) {
tags[++numTags] = tag
}
tag2val[tag] = val
}
}
$ awk -v ifTag='dstaddr' -v ifVal='"vip-dnat"' -v thenTag='srcintf' -v thenVal='"foobar"' -f tst.awk file
edit 78231
set srcintf "foobar"
set dstintf "any"
set srcaddr "srcaddr"
set dstaddr "vip-dnat"
set service "service"
set schedule "always"
set logtraffic all
set logtraffic-start enable
set status enable
set action accept
next
Note that the above approach:
Will work even if/when either of the values you want to find appears in other contexts (e.g. associated with other tags or as substrings of other values), and doesn't rely on how many lines lie between the lines you're interested in or what order they appear in within the record.
Will let you change the value of any tag based on the value of any other tag, and is easily extended to do compound comparisons, assignments, etc., and/or print the values in a different order, print a subset of them, or add new tags+values.
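A runnable sketch of the approach (sample record trimmed down; the {2} interval in the field-stripping sub is written out longhand for awks without interval-expression support):

```shell
cat > tst.awk <<'EOF'
$1 == "edit" { editId=$2; next }
editId != "" {
    if ($1 == "next") {
        if ( tag2val[ifTag] == ifVal ) {
            tag2val[thenTag] = thenVal
        }
        print "edit", editId
        for (tagNr=1; tagNr<=numTags; tagNr++) {
            print " set", tags[tagNr], tag2val[tags[tagNr]]
        }
        print $1
        editId = ""
        numTags = 0
        delete tag2val
    }
    else {
        tag = $2
        # strip the first two fields ("set <tag>") and trailing blanks
        sub(/^[[:space:]]*[^[:space:]]+[[:space:]]+[^[:space:]]+[[:space:]]+/,"")
        sub(/[[:space:]]+$/,"")
        if ( !(tag in tag2val) ) { tags[++numTags] = tag }
        tag2val[tag] = $0
    }
}
EOF

cat > file <<'EOF'
edit 78231
set srcintf "port1"
set dstaddr "vip-dnat"
next
EOF

awk -v ifTag='dstaddr' -v ifVal='"vip-dnat"' -v thenTag='srcintf' -v thenVal='"foobar"' -f tst.awk file > out.txt
```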


awk combine info from two files (fasta file header)

I know there are many similar questions, and I have read through many of them. But I still can't make my code work. Could somebody point out the problem for me please? Thanks!
(base) $ head Sample.pep2
>M00000032072 gene=G00000025773 seq_id=ChrM type=cds
MFKQNPSPGWKECPPSSDKEGTTPERLDEGREMRRGKEKAFGDREISFLLHRKRRRPRIA
YGACYLKGARFFDRGAMIAGASPRSARWPIGIAACGLCLPIRIIIKNSGSARESAGNNRK
EGVHVAAAPAPLLSQWGSRFGIY*
>M00000032073 gene=G00000025774 seq_id=ChrM type=cds
MFKQNPSPGWKECPPSSDKEGTTPERLDEGREMRRGKEKAFGDREISFLLHRKRRRPRIA
YGACYLKGARFFDRGAMIAGASPRSARWPIGIAACGLCLPIRIIIKNSGSARESAGNNRK
EGVHVAAAPAPLLSQWGSSIASMILGALAAMAQTKVKRPLAHSSIGHVGYIRTGFSCGTI
EGIQSLLIGIFIYALMTMDAFAIVSALRQTRVKYIADLGALAKTNPISAITFSITMFSYA
GIPPLAGFCSKFYLFFAALGCGAYFLAPVGVVTSVIGRWAAGRLPRISKFGGPKAVLRAP
$ head -n 3 mRNA.function
M00000032074 locus=g17091;makerName=TCONS_00021197.p2
M00000032073 Dbxref=MobiDBLite:mobidb-lite;locus=g17092;makerName=TCONS_00021198.p3
M00000032072 Dbxref=MobiDBLite:mobidb-lite;locus=g17093;makerName=TCONS_00021199.p1
I would like the output
>M00000032072 gene=G00000025773 seq_id=ChrM type=cds Dbxref=MobiDBLite:mobidb-lite;locus=g17093;makerName=TCONS_00021199.p1
MFKQNPSPGWKECPPSSDKEGTTPERLDEGREMRRGKEKAFGDREISFLLHRKRRRPRIA
YGACYLKGARFFDRGAMIAGASPRSARWPIGIAACGLCLPIRIIIKNSGSARESAGNNRK
EGVHVAAAPAPLLSQWGSRFGIY*
>M00000032073 gene=G00000025774 seq_id=ChrM type=cds Dbxref=MobiDBLite:mobidb-lite;locus=g17092;makerName=TCONS_00021198.p3
MFKQNPSPGWKECPPSSDKEGTTPERLDEGREMRRGKEKAFGDREISFLLHRKRRRPRIA
YGACYLKGARFFDRGAMIAGASPRSARWPIGIAACGLCLPIRIIIKNSGSARESAGNNRK
EGVHVAAAPAPLLSQWGSSIASMILGALAAMAQTKVKRPLAHSSIGHVGYIRTGFSCGTI
EGIQSLLIGIFIYALMTMDAFAIVSALRQTRVKYIADLGALAKTNPISAITFSITMFSYA
GIPPLAGFCSKFYLFFAALGCGAYFLAPVGVVTSVIGRWAAGRLPRISKFGGPKAVLRAP
and my command is awk 'NR==FNR{id[$1]=$2; next} /^>/ {print $0=$0,id[$1]}' mRNA.function Sample.pep2. But it doesn't do the job... I don't know where it is wrong...
Here is a Perl solution.
perl -lpe 'BEGIN { %id_to_function = map { /^(\S+)\s+(.*)/ } `cat mRNA.function`; } s{^>(\S+)(.*)}{>$1$2 $id_to_function{$1}};' sample.pep2
Before reading the fasta file, the code executes the BEGIN { ... } block. There, the file with ids and functions is read into the hash %id_to_function.
In the main body of the code, the substitution operator s{...}{...} appends to the fasta header the function for the corresponding id using the hash lookup $id_to_function{$1}.
$1 and $2 are the first and second capture groups, respectively, that were captured with parentheses in the preceding regex: ^>(\S+)(.*).
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-p : Loop over the input one line at a time, assigning it to $_ by default. Add print $_ after each loop iteration.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlrequick: Perl regular expressions quick start
The current code is close; it just needs to a) match the first field (Sample.pep2) with the corresponding array entry (if it exists) and b) make sure we print all input lines:
awk 'NR==FNR{id[$1]=$2; next} /^>/ {key=substr($1,2); if (key in id) $0=$0 OFS id[key]} 1' mRNA.function Sample.pep2
This generates:
>M00000032072 gene=G00000025773 seq_id=ChrM type=cds Dbxref=MobiDBLite:mobidb-lite;locus=g17093;makerName=TCONS_00021199.p1
MFKQNPSPGWKECPPSSDKEGTTPERLDEGREMRRGKEKAFGDREISFLLHRKRRRPRIA
YGACYLKGARFFDRGAMIAGASPRSARWPIGIAACGLCLPIRIIIKNSGSARESAGNNRK
EGVHVAAAPAPLLSQWGSRFGIY*
>M00000032073 gene=G00000025774 seq_id=ChrM type=cds Dbxref=MobiDBLite:mobidb-lite;locus=g17092;makerName=TCONS_00021198.p3
MFKQNPSPGWKECPPSSDKEGTTPERLDEGREMRRGKEKAFGDREISFLLHRKRRRPRIA
YGACYLKGARFFDRGAMIAGASPRSARWPIGIAACGLCLPIRIIIKNSGSARESAGNNRK
EGVHVAAAPAPLLSQWGSSIASMILGALAAMAQTKVKRPLAHSSIGHVGYIRTGFSCGTI
EGIQSLLIGIFIYALMTMDAFAIVSALRQTRVKYIADLGALAKTNPISAITFSITMFSYA
GIPPLAGFCSKFYLFFAALGCGAYFLAPVGVVTSVIGRWAAGRLPRISKFGGPKAVLRAP
awk '
NR==FNR {a[">"$1]=$2}
NR!=FNR && a[$1] != "" {$0=$0" "a[$1]}
NR!=FNR' mRNA.function Sample.pep2
You were on the right track with the array. In file2 you need to check that the line starts with the matching name from file1 before appending its data. That's what a[$1] != "" does.
This example assumes the first file only has two fields (no spaces in the data). If there are spaces, I can post an edit.
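Either awk variant can be checked against miniature stand-ins for the two files:

```shell
cat > mRNA.function <<'EOF'
M00000032073 Dbxref=MobiDBLite:mobidb-lite;locus=g17092;makerName=TCONS_00021198.p3
M00000032072 Dbxref=MobiDBLite:mobidb-lite;locus=g17093;makerName=TCONS_00021199.p1
EOF
cat > Sample.pep2 <<'EOF'
>M00000032072 gene=G00000025773 seq_id=ChrM type=cds
MFKQNPSPGWKECPPSSDKEGTTPERLDEGREMRRGKEKAFGDREISFLLHRKRRRPRIA
EOF

# substr($1,2) drops the leading ">" so the ids line up between files.
awk 'NR==FNR{id[$1]=$2; next} /^>/ {key=substr($1,2); if (key in id) $0=$0 OFS id[key]} 1' mRNA.function Sample.pep2 > merged.txt
```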

How to have awk print variable instead of the full matched line?

The awk script below prints a different result than expected, and I'd like to know why. How can I make it behave the way I want?
Awk script
$ cat script.awk
/^This is/ {
print "Block entered";
my_var="value";
print $my_var;
};
Input data
$ cat input-data
This is the text to match
Expected output
Block entered
value
Actual output
$ awk -f script.awk -- input-data
Block entered
This is the text to match
anubhava has already suggested a fix for the OP's code; the following is an explanation of what the OP's code is doing ...
$ is used to reference a field in the current line (e.g., $1 == 1st field), so ...
$my_var becomes $"value", but since the string "value" converts to the number 0 ...
print $my_var is effectively print $0, and ...
print $0 prints the current (input) line ...
hence the output of This is the text to match
Consider:
awk '{myvar="1"; print $myvar}' <<< "field#1 field#2 field#3"
In this scenario $myvar becomes $1 which is a valid field reference so the output generated is:
field#1
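The two behaviors side by side, as a minimal sketch:

```shell
printf 'field#1 field#2 field#3\n' > demo.txt

# A non-numeric string converts to 0, so $v is $0 (the whole line).
whole=$(awk '{v="value"; print $v}' demo.txt)

# A numeric string converts to its number, so $v is $1.
first=$(awk '{v="1"; print $v}' demo.txt)
```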

How to inplace substitute the content between 2 tags with SED (bash)?

I want to edit a file in place with sed (Oracle Linux/Bash).
The content between 2 search-tags (in form of "#"-comments) should get commented out.
Example:
Some_Values
#NORMAL_LISTENER_START
LISTENER =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)
(KEY = LISTENER)
)
)
)
#NORMAL_LISTENER_END
Other_Values
Should result in:
Some_Values
#NORMAL_LISTENER_START
# LISTENER =
# (DESCRIPTION =
# (ADDRESS = (PROTOCOL = IPC)
# (KEY = LISTENER)
# )
# )
# )
#NORMAL_LISTENER_END
Other_Values
The following command already achieves it, but it also puts a comment+blank in front of the search-tags:
sed -i "/#NORMAL_LISTENER_START/,/#NORMAL_LISTENER_END/ s/^/# /" ${my_file}
Now my research told me to exclude those search-tags like:
sed -i '/#NORMAL_LISTENER_START/,/#NORMAL_LISTENER_END/{//!p;} s/^/# /' ${my_file}
But it won't work - with the following message as a result:
sed: -e expression #1, char 56: extra characters after command
I need those SearchTags to be as they are, because I need them afterwards again.
If ed is available/acceptable.
printf '%s\n' 'g/#NORMAL_LISTENER_START/+1;/#NORMAL_LISTENER_END/-1s/^/#/' ,p Q | ed -s file.txt
Change Q to w if you're satisfied with the output, and in-place editing will occur.
Remove the ,p if you don't want to see the output.
This might work for you (GNU sed):
sed '/#NORMAL_LISTENER_START/,/#NORMAL_LISTENER_END/{//!s/^/# /}' file
Use a range delimited by two regexps and insert # before the lines between the regexps, but not on the regexp lines themselves.
Alternative:
sed '/#NORMAL_LISTENER_START/,/#NORMAL_LISTENER_END/{s/^[^#]/# &/}' file
Or if you prefer:
sed '/#NORMAL_LISTENER_START/{:a;n;/#NORMAL_LISTENER_END/!s/^/# /;ta}' file
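A quick check of the first (GNU sed) variant on a cut-down sample, assuming the marker lines as shown:

```shell
cat > listener.ora <<'EOF'
Some_Values
#NORMAL_LISTENER_START
LISTENER =
(DESCRIPTION =
#NORMAL_LISTENER_END
Other_Values
EOF

# Inside the range, // re-applies the range regexps, so the marker
# lines themselves are excluded from the substitution.
sed '/#NORMAL_LISTENER_START/,/#NORMAL_LISTENER_END/{//!s/^/# /}' listener.ora > commented.ora
```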
With your shown samples only, please try the following awk code. In brief: it looks for the marker strings to set/unset a flag, prepends # to lines while the flag is set (skipping lines that already start with #), and stops updating lines once the closing marker is found.
awk ' /Other_Values/{found=""} found{$0=$0!~/^#/?"#"$0:$0} /Some_Values/{found=1} 1' Input_file
Above will print output on terminal, once you are happy with results you could run following code to do inplace save into Input_file.
awk ' /Other_Values/{found=""} found{$0=$0!~/^#/?"#"$0:$0} /Some_Values/{found=1} 1' Input_file > temp && mv temp Input_file
Explanation: a detailed explanation of the above.
awk ' ##Start the awk program.
/Other_Values/{ found="" } ##If the line contains Other_Values, nullify found.
found { $0=$0!~/^#/?"#"$0:$0 } ##If found is set: leave the line as is if it already starts with #, otherwise prepend #.
/Some_Values/{ found=1 } ##If the line contains Some_Values, set found.
1 ##Print the current line.
' Input_file ##The input file name.

Can I delete a field in awk?

This is test.txt:
0x01,0xDF,0x93,0x65,0xF8
0x01,0xB0,0x01,0x03,0x02,0x00,0x64,0x06,0x01,0xB0
0x01,0xB2,0x00,0x76
If I run
awk -F, 'BEGIN{OFS=","}{$2="";print $0}' test.txt
the result is:
0x01,,0x93,0x65,0xF8
0x01,,0x01,0x03,0x02,0x00,0x64,0x06,0x01,0xB0
0x01,,0x00,0x76
The $2 wasn't deleted, it just became empty. When printing $0, I want the result to be:
0x01,0x93,0x65,0xF8
0x01,0x01,0x03,0x02,0x00,0x64,0x06,0x01,0xB0
0x01,0x00,0x76
All the existing solutions are good, though this is actually a tailor-made job for cut:
cut -d, -f 1,3- file
0x01,0x93,0x65,0xF8
0x01,0x01,0x03,0x02,0x00,0x64,0x06,0x01,0xB0
0x01,0x00,0x76
If you want to remove 3rd field then use:
cut -d, -f 1,2,4- file
To remove 4th field use:
cut -d, -f 1-3,5- file
I believe the simplest approach is to use the sub function to replace the first occurrence of the consecutive ,, (created after you made the 2nd field NULL) with a single ,. But this assumes that you don't have any commas inside field values.
awk 'BEGIN{FS=OFS=","}{$2="";sub(/,,/,",");print $0}' Input_file
2nd solution: or you could use the match function to find the stretch from the first comma to the next comma, then print the parts of the line before and after the matched string.
awk '
match($0,/,[^,]*,/){
print substr($0,1,RSTART-1)","substr($0,RSTART+RLENGTH)
}' Input_file
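For example, on the question's first and third rows, match() sets RSTART/RLENGTH around the stretch ",0xDF," (or ",0xB2,"), and the two substr() calls splice the line back together without it:

```shell
cat > test.txt <<'EOF'
0x01,0xDF,0x93,0x65,0xF8
0x01,0xB2,0x00,0x76
EOF

# RSTART is where ",field," begins; RLENGTH is its length.
awk 'match($0,/,[^,]*,/){
print substr($0,1,RSTART-1)","substr($0,RSTART+RLENGTH)
}' test.txt > trimmed.txt
```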
It's a bit heavy-handed, but this moves each field after field 2 down a place, and then changes NF so the unwanted field is not present:
$ awk -F, -v OFS=, '{ for (i = 2; i < NF; i++) $i = $(i+1); NF--; print }' test.txt
0x01,0x93,0x65,0xF8
0x01,0x01,0x03,0x02,0x00,0x64,0x06,0x01
0x01,0x00,0x76
$
Tested with both GNU Awk 4.1.3 and BSD Awk ("awk version 20070501" on macOS Mojave 10.14.6 — don't ask; it frustrates me too, but sometimes employers are not very good at forward thinking). Setting NF may or may not work on older versions of Awk — I was a little surprised it did work, but the surprise was a pleasant one, for a change.
If Awk is not an absolute requirement, and the input is indeed as trivial as in your example, sed might be a simpler solution.
sed 's/,[^,]*//' test.txt
This is especially elegant if you want to remove the second field. A more generic approach to remove the nth field would require a regex which matches the first n - 1 fields followed by the nth, then replaces that with just the first n - 1.
So for n = 4 you'd have
sed 's/\([^,]*,[^,]*,[^,]*,\)[^,]*,/\1/' test.txt
or more generally, if your sed dialect understands braces for specifying repetitions
sed 's/\(\([^,]*,\)\{3\}\)[^,]*,/\1/' test.txt
Some sed dialects allow you to lose all those pesky backslashes with an option like -r or -E but again, this is not universally supported or portable.
In case it's not obvious, [^,] matches a single character which is not (newline or) comma; and \1 recalls the text from first parenthesized match (back reference; \2 recalls the second, etc).
Also, this is completely unsuitable for escaped or quoted fields (though I'm not saying it can't be done). Every comma acts as a field separator, no matter what.
With GNU sed you can add a number modifier to substitute nth match of non-comma characters followed by comma:
sed -E 's/[^,]*,//2' file
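A one-line check of the numeric-flag form (GNU sed assumed):

```shell
printf '0x01,0xDF,0x93,0x65,0xF8\n' > file
# The 2 flag replaces only the 2nd match of "non-commas plus comma",
# i.e. the 2nd field together with its trailing delimiter.
sed -E 's/[^,]*,//2' file > out2.txt
```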
Using awk in a regex-free way, with the option to choose which line will be deleted:
awk '{ col = 2; n = split($0,arr,","); line = ""; for (i = 1; i <= n; i++) line = line ( i == col ? "" : ( line == "" ? "" : "," ) arr[i] ); print line }' test.txt
Step by step:
{
col = 2 # defines which column will be deleted
n = split($0,arr,",") # each line is split into an array
# n is the number of elements in the array
line = "" # this will be the new line
for (i = 1; i <= n; i++) # roaming through all elements in the array
line = line ( i == col ? "" : ( line == "" ? "" : "," ) arr[i] )
# appends a comma (except if line is still empty)
# and the current array element to the line (except when on the selected column)
print line # prints line
}
Another solution:
You can just pipe the output to another sed and squeeze the delimiters.
$ awk -F, 'BEGIN{OFS=","}{$2=""}1 ' edward.txt | sed 's/,,/,/g'
0x01,0x93,0x65,0xF8
0x01,0x01,0x03,0x02,0x00,0x64,0x06,0x01,0xB0
0x01,0x00,0x76
$
Commenting on the first solution of @RavinderSingh13 using the sub() function:
awk 'BEGIN{FS=OFS=","}{$2="";sub(/,,/,",");print $0}' Input_file
The gnu-awk manual: https://www.gnu.org/software/gawk/manual/html_node/Changing-Fields.html
"It is important to note that making an assignment to an existing field changes the value of $0 but does not change the value of NF, even when you assign the empty string to a field." (4.4 Changing the Contents of a Field)
So, following the first solution of RavinderSingh13 but without using sub() in this case, "The field is still there; it just has an empty value, delimited by the two colons" (here, by the two commas):
awk 'BEGIN {FS=OFS=","} {$2="";print $0}' file
0x01,,0x93,0x65,0xF8
0x01,,0x01,0x03,0x02,0x00,0x64,0x06,0x01,0xB0
0x01,,0x00,0x76
My solution:
awk -F, '
{
regex = "^"$1","$2
sub(regex, $1, $0);
print $0;
}'
or one line code:
awk -F, '{regex="^"$1","$2;sub(regex, $1, $0);print $0;}' test.txt
I found that OFS="," was not necessary
I would do it the following way; let file.txt content be:
0x01,0xDF,0x93,0x65,0xF8
0x01,0xB0,0x01,0x03,0x02,0x00,0x64,0x06,0x01,0xB0
0x01,0xB2,0x00,0x76
then
awk 'BEGIN{FS=",";OFS=""}{for(i=2;i<=NF;i+=1){$i="," $i};$2="";print}' file.txt
output
0x01,0x93,0x65,0xF8
0x01,0x01,0x03,0x02,0x00,0x64,0x06,0x01,0xB0
0x01,0x00,0x76
Explanation: I set OFS to nothing (an empty string), then for the 2nd and following columns I prepend , to each. Finally I set the field that is now comma-plus-value to nothing. Keep in mind this solution would need rework if you wish to remove the 1st column.
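The same idea, runnable end to end:

```shell
cat > file.txt <<'EOF'
0x01,0xDF,0x93,0x65,0xF8
0x01,0xB2,0x00,0x76
EOF

# Each field from the 2nd on carries its own leading comma, so with
# OFS="" emptying $2 removes the field and its separator together.
awk 'BEGIN{FS=",";OFS=""}{for(i=2;i<=NF;i+=1){$i="," $i};$2="";print}' file.txt > out.txt
```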

Bash code for Selecting few columns from a variable

In a file I have a list of coordinates stored (see figure, to the left).
From there I want to copy the coordinates only (red marked) and put them in another file.
I copy the correct section from the file using COORD=`grep -B${i} '&END COORD' ${cpki_file}`. Then I tried to use awk to extract the required numbers from the COORD variable. It does output all the numbers in the file but deletes the spaces between values (figure, to the right).
How can I write the red-marked section as it is?
N=200
NEndCoord=`grep -B${N} '&END COORD' ${cpki_file}|wc -l`
NCoord=`grep -B${N} '&END COORD' ${cpki_file}| grep -B200 '&COORD' |wc -l`
let i=$NEndCoord-$NCoord
COORD=`grep -B${i} '&END COORD' ${cpki_file}`
echo "$COORD" | awk '{ print $2 $3 $4 }'
echo "$COORD" | awk '{ print $2 $3 $4 }'>tmp.txt
When you start using combinations of grep, sed, awk, cut and alike, you should realize you can do it all in a single awk command. In case of the OP, this would do exactly the same:
awk '/[&]END COORD/{p=0}
p { print $2,$3,$4 }
/[&]COORD/{p=1}' file
This parses the file keeping track of a printing flag p. The flag is set if "&COORD" is found and unset if "&END COORD" is found. Printing is done only when the flag p is set. Since we don't want to print the line with "&END COORD", we have to unset the flag before we do the check for printing. The same holds for the line with "&COORD", but there we have to set the flag after the check for printing (it's a bit of a reversed logic).
The problem with the above is that it will also process the lines
UNIT angstrom
If you want to have these removed, you might want to do a check on the total columns:
awk '/[&]END COORD/{p=0}
p && (NF==4){ print $2,$3,$4 }
/[&]COORD/{p=1}' file
Or only print the lines which do not contain "UNIT" and are not empty:
awk '/[&]END COORD/{p=0}
p && (NF>0) && ($1 != "UNIT"){ print $2,$3,$4 }
/[&]COORD/{p=1}' file
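Since the actual input file isn't shown (only a figure), here is the filter on a made-up &COORD block in the same spirit:

```shell
# Hypothetical CP2K-style input; element plus x/y/z per line.
cat > coords.inp <<'EOF'
&COORD
UNIT angstrom
O 1.000 2.000 3.000
H 0.500 0.500 0.500
&END COORD
EOF

# p is the print flag: set after &COORD, cleared at &END COORD;
# NF==4 skips the UNIT line and blanks.
awk '/[&]END COORD/{p=0}
p && (NF==4){ print $2,$3,$4 }
/[&]COORD/{p=1}' coords.inp > xyz.txt
```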
sed one-liner:
sed -n '/^&COORD$/,/^UNIT/{s/.*[[:space:]]\+\(.*\)[[:space:]]\+\(.*\)[[:space:]]\+\(.*\)/\1\t\2\t\3/p}' <infile.txt >outfile.txt
Explanation:
Invocation:
sed: stream editor
-n: do not print unless explicitly requested
Commands in sed:
/^&COORD$/,/^UNIT/: Selects the group of lines after &COORD and up to UNIT.
{s/.*[[:space:]]\+\(.*\)[[:space:]]\+\(.*\)[[:space:]]\+\(.*\)/\1\t\2\t\3/p}: Process each selected line.
s/.*[[:space:]]\+\(.*\)[[:space:]]\+\(.*\)[[:space:]]\+\(.*\): Capture the space-delimited groups, except the first.
/\1\t\2\t\3/: Replace with tab-delimited values of the captured groups.
p: Explicit printout.