How to evaluate or process if statements in data? - awk

Background
I wrote a bash script that pulls simple user functions from a PostgreSQL database, uses awk to convert PL/pgSQL commands to SQL (e.g. PERFORM function() to SELECT function(), removing comments matching --.*, etc.), stores the SQL commands in a file (file.sql), and then reads and executes them against the database:
$ psql ... -f file.sql db
The functions are simple, mostly just calling other user-defined functions. But how can I "evaluate" or process an IF statement like this?:
IF $1 = 'customer1' THEN      -- EACH $1 IS THE ARGUMENT TO THE PL/pgSQL FUNCTION
    PERFORM subfunction1($1); -- THAT THIS IF STATEMENT IS IN:
ELSE                          -- SELECT function('customer1');
    PERFORM subfunction2($1); -- $1 = 'customer1'
END IF;
Tl;dr:
IFs and such are not SQL, so they should be pre-evaluated using awk. It's safe to assume that the above has already been processed into one record with the comments removed:
IF $1 = 'customer1' THEN PERFORM subfunction1($1); ELSE PERFORM subfunction2($1); END IF;
After "evaluating" above should be replaced with:
SELECT subfunction1('customer1');
if the awk program that evaluates it was called like this:
$ awk -v arg1='customer1' -f program.awk file.sql
or, if arg1 is anything else, for example customer2:
SELECT subfunction2('customer2');
Edit
expr popped into my mind first thing when I woke up:
$ awk -v arg="'customer1'" '
{
    gsub(/\$1/,arg)                                      # replace func arg with string
    n=split($0,a,"(IF|THEN|ELSE|ELSE?IF|END IF;)",seps)  # seps to get ready for SQL CASE
    if(seps[1]=="IF") {
        # here should be while for ELSEIF
        c="expr " a[2]; c|getline r; close(c)            # use expr to solve
        switch (r) {                                     # expr has 4 return values
        case "1":                                        # match
            print a[3]
            break
        case "0":                                        # no match
            print a[4]
            break
        default:                                         # (*) see below
            print r
            exit                                         # TODO
        }
    }
}' file.sql
(*) For a comparison, expr prints 1 (true) or 0 (false); its exit status can be 0, 1, 2 or 3:
$ expr 1 = 1
1
$ expr 1 = 2
0
However, if you omit the spaces around the operator, expr just echoes the expression back:
$ expr 1=1
1=1
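For reference (an aside about GNU expr itself, not something from the question): the printed result and the exit status are separate things - the exit status is 0 or 1 mirroring true/false, 2 for a syntactically invalid expression and 3 for other errors:
$ expr 1 = 1; echo "exit status: $?"
1
exit status: 0
$ expr 1 = 2; echo "exit status: $?"
0
exit status: 1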

Without writing a full language parser, if you're looking for something cheap and cheerful then this might be a decent starting point:
$ cat tst.awk
{ gsub(/\$1/,"\047"arg1"\047") }
match($0,/^IF\s+(\S+)\s+(\S+)\s+(\S+)\s+THEN\s+(\S+)\s+(\S+)\s+ELSE\s+(\S+)\s+(\S+)\s+END\s+IF/,a) {
    lhs = a[1]
    op  = a[2]
    rhs = a[3]
    trueAct  = (a[4] == "PERFORM" ? "SELECT" : a[4]) FS a[5]
    falseAct = (a[6] == "PERFORM" ? "SELECT" : a[6]) FS a[7]
    if (op == "=") {
        print (lhs == rhs ? trueAct : falseAct)
    }
}
$ awk -v arg1='customer1' -f tst.awk file
SELECT subfunction1('customer1');
$ awk -v arg1='bob' -f tst.awk file
SELECT subfunction2('bob');
The above uses GNU awk for the 3rd arg to match(). Hopefully it's easy enough to understand that you can massage as needed to handle other constructs or other variations of this construct.
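For instance, one hedged sketch of such a massage - the same pattern extended to also accept PL/pgSQL's <> (not-equal) operator (tst2.awk is just a made-up name):
$ cat tst2.awk
{ gsub(/\$1/,"\047"arg1"\047") }
match($0,/^IF\s+(\S+)\s+(\S+)\s+(\S+)\s+THEN\s+(\S+)\s+(\S+)\s+ELSE\s+(\S+)\s+(\S+)\s+END\s+IF/,a) {
    lhs = a[1]
    op  = a[2]
    rhs = a[3]
    trueAct  = (a[4] == "PERFORM" ? "SELECT" : a[4]) FS a[5]
    falseAct = (a[6] == "PERFORM" ? "SELECT" : a[6]) FS a[7]
    if      (op == "=")  print (lhs == rhs ? trueAct : falseAct)
    else if (op == "<>") print (lhs != rhs ? trueAct : falseAct)
}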

Related

How to replace all escape sequences with non-escaped equivalent with unix utilities (sed/tr/awk)

I'm processing a Wireshark config file (dfilter_buttons) for display filters and would like to print out the filter of a given name. The content of the file is like:
Sample input
"TRUE","test","sip contains \x22Hello, world\x5cx22\x22",""
And the resulting output should have the escape sequences replaced, so I can use them later in my script:
Desired output
sip contains "Hello, world\x22"
My first pass is like this:
Current parser
filter_name=test
awk -v filter_name="$filter_name" 'BEGIN {FS="\",\""} ($2 == filter_name) {print $3}' "$config_file"
And my output is this:
Current output
sip contains \x22Hello, world\x5cx22\x22
I know I can handle these exact two escape sequences by piping to sed and matching those exact sequences, but is there a generic way to substitute all escape sequences? Future filters I build may use more escape sequences than just these, and I would like to handle future scenarios.
Using gnu-awk you can do this using the 4-argument form of split together with the strtonum function:
awk -F '","' -v filt='test' '$2 == filt {n = split($3, subj, /\\x[0-9a-fA-F]{2}/, seps); for (i=1; i<n; ++i) printf "%s%c", subj[i], strtonum("0" substr(seps[i], 2)); print subj[i]}' file
sip contains "Hello, world\x22"
A more readable form:
awk -F '","' -v filt='test' '
$2 == filt {
n = split($3, subj, /\\x[0-9a-fA-F]{2}/, seps)
for (i=1; i<n; ++i)
printf "%s%c", subj[i], strtonum("0" substr(seps[i], 2))
print subj[i]
}' file
Explanation:
Using -F '","' we split input using delimiter ","
$2 == filt we filter input for $2 == "test" condition
Using /\\x[0-9a-fA-F]{2}/ as regex (that matches 2 digit hex strings) we split $3 and save split tokens into array subj and matched separators into array seps
Using substr we remove first char i.e \\ and prepend 0
Using strtonum we convert hex string to equivalent ascii number
Using %c in printf we print corresponding ascii character
Last for loop joins $3 back using subj and seps array elements
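As a quick sanity check of the core conversion (a standalone one-liner, not part of the answer above; GNU awk, since strtonum() is a gawk extension), the \x22 token from the sample input becomes a literal double quote:
$ awk 'BEGIN { sep = "\\x22"; printf "%c\n", strtonum("0" substr(sep, 2)) }'
"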
Using GNU awk for FPAT, gensub(), strtonum(), and the 3rd arg to match():
$ cat tst.awk
BEGIN { FPAT="([^,]*)|(\"[^\"]*\")"; OFS="," }
$2 == ("\"" filter_name "\"") {
gsub(/^"|"$/,"",$3)
while ( match($3,/(\\x[0-9a-fA-F]{2})(.*)/,a) ) {
printf "%s%c", substr($3,1,RSTART-1), strtonum(gensub(/./,0,1,a[1]))
$3 = a[2]
}
print $3
}
$ awk -v filter_name='test' -f tst.awk file
sip contains "Hello, world\x22"
The above assumes your escape sequences are always \x followed by exactly 2 hex digits. It isolates every \xHH string in the input, replaces \ with 0 in that string so that strtonum() can then convert the string to a number, then uses %c in the printf formatting string to convert that number to a character.
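And the gensub() step in isolation, for a single token (again a standalone gawk one-liner; the \x5c sequence from the sample becomes a literal backslash):
$ awk 'BEGIN { tok = "\\x5c"; printf "%c\n", strtonum(gensub(/./, "0", 1, tok)) }'
\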
Note that GNU awk has a debugger (see https://www.gnu.org/software/gawk/manual/gawk.html#Debugger), so if you're ever not sure what any part of a program does you can just run it in the debugger (-D) and trace it. For example, in the following I plant a breakpoint to tell awk to stop at line 1 of the script (b 1), then start running (r) and step (s) through the script, printing the value of $3 (p $3) at each line so I can see how it changes after the gsub():
$ awk -D -v filter_name='test' -f tst.awk file
gawk> b 1
Breakpoint 1 set at file `tst.awk', line 1
gawk> r
Starting program:
Stopping in BEGIN ...
Breakpoint 1, main() at `tst.awk':1
1 BEGIN { FPAT="([^,]*)|(\"[^\"]*\")"; OFS="," }
gawk> p $3
$3 = uninitialized field
gawk> s
Stopping in Rule ...
2 $2 == "\"" filter_name "\"" {
gawk> p $3
$3 = "\"sip contains \\x22Hello, world\\x5cx22\\x22\""
gawk> s
3 gsub(/^"|"$/,"",$3)
gawk> p $3
$3 = "\"sip contains \\x22Hello, world\\x5cx22\\x22\""
gawk> s
4 while ( match($3,/(\\x[0-9a-fA-F]{2})(.*)/,a) ) {
gawk> p $3
$3 = "sip contains \\x22Hello, world\\x5cx22\\x22"

Variable operator for numerical comparison in awk

Can awk use variable operators for numerical comparison? The following code works with a hard coded operator, but not with a variable operator:
awk -v o="$operator" -v c="$comparison" '$1 o c'
No, that cannot work. Awk's -v option defines actual Awk variables, and not token-level macro substitutions.
It doesn't work for the same reason that this doesn't work:
awk 'BEGIN { o = "+"; print 2 o 2 }' # hoping for 2 + 2
Awk is different from the POSIX shell and similar languages; it doesn't evaluate variables by means of textual substitution.
Since you're calling Awk from a shell command line, you can use the shell's substitution to generate the Awk syntax, thereby obtaining that effect:
awk -v c="$comparison" "\$1 $operator c"
We now need a backslash on the $1 because we switched to double quotes, inside of which $1 is now recognized by the shell itself.
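For example (borrowing the operator='>' and comparison='3' values used in the answer below):
$ operator='>'; comparison='3'
$ echo 5 | awk -v c="$comparison" "\$1 $operator c"
5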
An alternative to the approach proposed by Kaz would be to define your own mapping function which takes the two operands and the corresponding operator string o as arguments:
awk -v o="$operator" -v c="$comparison" '
function operator(arg1, arg2, op) {
if (op == "==") return arg1 == arg2
if (op == "!=") return arg1 != arg2
if (op == "<") return arg1 < arg2
if (op == ">") return arg1 > arg2
if (op == "<=") return arg1 <= arg2
if (op == ">=") return arg1 >= arg2
}
{ print operator($1,c,o) }'
This way you can also define your own operators.
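A trimmed-down run of that idea (only the > case, to keep it short); note that it prints the result of the comparison rather than the matching line:
$ echo 5 | awk -v o='>' -v c='3' '
function operator(arg1, arg2, op) {
    if (op == ">") return arg1 > arg2
}
{ print operator($1, c, o) }'
1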
No but you have a couple of options, the simplest being to let the shell expand one of the variables to become part of the awk script before awk runs on it:
$ operator='>'; comparison='3'
$ echo 5 | awk -v c="$comparison" '$1 '"$operator"' c'
5
Otherwise you can write your own eval-style function, e.g.:
$ cat tst.awk
cmp($1,o,c)
function cmp(x,y,z, cmd,line,ret) {
cmd = "awk \047BEGIN{print (" x " " y " " z ")}\047"
if ( (cmd | getline line) > 0 ) {
ret = line
}
close(cmd)
return ret
}
$ echo 5 | awk -v c="$comparison" -v o="$operator" -f tst.awk
5
See https://stackoverflow.com/a/54161251/1745001. The latter would work even if your awk program was saved in a file while the former would not. If you want to mix a library of functions with command line scripts then here's one way with GNU awk for -i:
$ cat tst.awk
function cmp(x,y,z, cmd,line,ret) {
cmd = "awk \047BEGIN{print (" x " " y " " z ")}\047"
if ( (cmd | getline line) > 0 ) {
ret = line
}
close(cmd)
return ret
}
$ awk -v c="$comparison" -v o="$operator" -i tst.awk 'cmp($1,o,c)'
5

Trap or evaluate bad regular expression string at runtime in awk script

How can I trap an error if a dynamic regular expression evaluation is bad like:
var='lazy dog'
# a fixed Regex here, but the original comes from outside the script
Regex='*.'
# try it and fail
if (var ~ Regex) foo
The goal is to manage this error, as I cannot vet the regex in advance (it comes from an external source). I'm using POSIX awk (on AIX).
Something like this?
$ echo 'foo' |
awk -v re='*.' '
BEGIN {
cmd="awk --posix \047/" re "/\047 2>&1"
cmd | getline rslt
print "rslt="rslt
close(cmd)
}
{ print "got " $0 " but re was bad" }
'
rslt=awk: cmd. line:1: error: Invalid preceding regular expression: /*./
got foo but re was bad
I use gawk, so I had to add --posix to make it not just accept that regexp as a literal * followed by any char. You'll probably have to change the awk command being called in cmd to behave sensibly for your needs with both valid and invalid regexps, but you get the idea: to do something like an eval in awk you need to have awk call itself via system() or a pipe to getline. Massage to suit...
Oh, and I don't think you can get the exit status of cmd with the above syntax, and you can't capture the output of a system() call within awk, so you may need to test the re twice: first with system() to find out if it fails, redirecting its output to /dev/null, and then on a failure run it again with getline to capture the error message.
Something like:
awk -v re='*.' '
BEGIN {
cmd="awk --posix \047/" re "/\047 2>&1"
if ( system(cmd " > /dev/null") ) {
close(cmd " > /dev/null")
cmd | getline rslt
print "rslt="rslt
close(cmd)
}
}
{ print "got " $0 " but re was bad" }
'
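Building on that, a hedged sketch of one way to put the validity check to use: run the child awk once via system() with its output discarded, remember whether the regexp compiled, and only apply it when it did (GNU awk here, hence --posix as above):
$ echo 'lazy dog' |
awk -v re='lazy' '
BEGIN {
    cmd = "awk --posix \047/" re "/\047 < /dev/null 2>/dev/null"
    reOK = (system(cmd) == 0)    # exit status 0 means the regexp compiled
}
reOK && $0 ~ re { print "matched: " $0 }
!reOK           { print "skipping bad regexp: " re }
'
matched: lazy dog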

Delete a variable in awk

I wonder if it is possible to delete a variable in awk. For an array, you can say delete a[2] and the index 2 of the array a[] will be deleted. However, for a variable I cannot find a way.
The closest I get is to say var="" or var=0.
But then, it seems that the default value of a non-existing variable is 0 or False:
$ awk 'BEGIN {if (b==0) print 5}'
5
$ awk 'BEGIN {if (!b) print 5}'
5
So I also wonder if it is possible to distinguish between a variable that is set to 0 and a variable that has not been set, because it seems not to:
$ awk 'BEGIN {a=0; if (a==b) print 5}'
5
There is no operation to unset/delete a variable. The only time a variable becomes unset again is at the end of a function call when it's an unused function argument being used as a local variable:
$ cat tst.awk
function foo( arg ) {
    if ( (arg=="") && (arg==0) ) {
        print "arg is not set"
    }
    else {
        printf "before assignment: arg=<%s>\n",arg
    }
    arg = rand()
    printf "after assignment: arg=<%s>\n",arg
    print "----"
}
BEGIN {
    foo()
    foo()
}
$ awk -f tst.awk file
arg is not set
after assignment: arg=<0.237788>
----
arg is not set
after assignment: arg=<0.291066>
----
so if you want to perform some actions A then unset the variable X and then perform actions B, you could encapsulate A and/or B in functions using X as a local var.
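A minimal sketch of that encapsulation idea (the doA()/doB() names and sketch.awk are just placeholders): the extra function argument x is local, so it is automatically unset again once the function returns:
$ cat sketch.awk
function doA(    x) {
    x = 42                         # actions A use x
    print "A: x=" x
}
function doB(    x) {
    if ( (x == "") && (x == 0) )   # x is a fresh, unset local here
        print "B: x is not set"
}
BEGIN { doA(); doB() }
$ awk -f sketch.awk
A: x=42
B: x is not set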
Note though that the default value is zero or null, not zero or false, since its type is "numeric string".
You test for an unset variable by comparing it to both null and zero:
$ awk 'BEGIN{ if ((x=="") && (x==0)) print "y" }'
y
$ awk 'BEGIN{ x=0; if ((x=="") && (x==0)) print "y" }'
$ awk 'BEGIN{ x=""; if ((x=="") && (x==0)) print "y" }'
If you NEED to have a variable you delete then you can always use a single-element array:
$ awk 'BEGIN{ if ((x[1]=="") && (x[1]==0)) print "y" }'
y
$ awk 'BEGIN{ x[1]=""; if ((x[1]=="") && (x[1]==0)) print "y" }'
$ awk 'BEGIN{ x[1]=""; delete x; if ((x[1]=="") && (x[1]==0)) print "y" }'
y
but IMHO that obfuscates your code.
What would be the use case for unsetting a variable? What would you do with it that you can't do with var="" or var=0?
An unset variable expands to "" or 0, depending on the context in which it is being evaluated.
For this reason, I would say that it's a matter of preference and depends on the usage of the variable.
Given that we use a + 0 (or the slightly controversial +a) in the END block to coerce the potentially unset variable a to a numeric type, I guess you could argue that the natural "empty" value would be "".
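For example, the same unset variable reads as 0 in a numeric context and as the empty string in a string context:
$ awk 'END { print a + 0; print "<" a ">" }' /dev/null
0
<>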
I'm not sure that there's too much to read into the cases that you've shown in the question, given the following:
$ awk 'BEGIN { if (!"") print }'
5
("" is false, unsurprisingly)
$ awk 'BEGIN { if (b == "") print 5 }'
5
(unset variable evaluates equal to "", just the same as 0)

Why does awk "not in" array work just like awk "in" array?

Here's an awk script that attempts to compute the set difference of two files based on their first column:
BEGIN{
    OFS=FS="\t"
    file = ARGV[1]
    while (getline < file)
        Contained[$1] = $1
    delete ARGV[1]
}
$1 not in Contained{
    print $0
}
Here is TestFileA:
cat
dog
frog
Here is TestFileB:
ee
cat
dog
frog
However, when I run the following command:
gawk -f Diff.awk TestFileA TestFileB
I get the output just as if the script had contained "in":
cat
dog
frog
While I am uncertain about whether "not in" is correct syntax for my intent, I'm very curious about why it behaves exactly the same way as when I wrote "in".
I cannot find any doc about element not in array.
Try !(element in array).
I guess: awk sees not as an uninitialized variable, so it evaluates to an empty string and $1 not is just string concatenation:
$1 not == $1 "" == $1
so the pattern reduces to plain $1 in Contained.
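So a minimal fix to the Diff.awk from the question is just to apply that suggestion (a sketch, otherwise unchanged apart from guarding the getline loop), after which only ee is printed:
BEGIN{
    OFS=FS="\t"
    file = ARGV[1]
    while ((getline < file) > 0)   # "> 0" so a missing file cannot loop forever
        Contained[$1] = $1
    close(file)
    delete ARGV[1]
}
!($1 in Contained){
    print $0
}
$ gawk -f Diff.awk TestFileA TestFileB
ee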
I figured this one out. The ( x in array ) expression returns a value, so to do "not in array", you have to do this:
if ( x in array == 0 )
    print "x is not in the array"
or in your example:
($1 in Contained == 0){
    print $0
}
In my solution for this problem I use the following if-else statement:
if($1 in Contained);else{print "Here goes your code for \"not in\""}
Not sure if this is anything like you were trying to do.
#! /bin/awk -f
# will read in the second arg file and make a hash of the tokens
# found in column one. Then it will read the first arg file and print any
# lines with a token in column one not matching the tokens already defined
BEGIN{
    OFS=FS="\t"
    file = ARGV[1]
    while (getline < file)
        Contained[$1] = $1
    # delete ARGV[1] # I don't know what you were thinking here
    # for(i in Contained) {print Contained[i]} # debugging, not just for sadists
    close (ARGV[1])
}
{
    if ($1 in Contained){} else { print $1 }
}
On the awk command line I use:
! ($1 in a)
where $1 is the field being tested and a is the array.
Example:
awk 'NR==FNR{a[$1];next}! ($1 in a) {print $1}' file1 file2