converting hexadecimal number into decimal number using awk script - awk

I can convert a hexadecimal number stored in a variable to a decimal number in the shell using:
$ x=08BA
$ echo $x | awk '{print strtonum( "0x" $1 )}'
2234
But when I write an awk script in a file to perform the same task, the following code gives an error.
program:
x=08BA
system("echo $x | print strtonum( "0x" $1 )");
Error message is:
sh: 1: Syntax error: "(" unexpected
sh: 1: Syntax error: "(" unexpected
Kindly provide a solution.
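For the record, the error comes from system() handing its argument to the shell: print and strtonum are awk functions, not shell commands, so sh chokes on the "(". Inside an awk program you can simply call print strtonum("0x" $1) directly (strtonum is a gawk extension). A portable sketch that works in any POSIX awk, doing the base-16 conversion by hand:

```shell
# Portable hex-to-decimal in plain awk (no gawk strtonum needed);
# with gawk you could instead write: print strtonum("0x" $1)
echo 08BA | awk '{
  n = 0
  for (i = 1; i <= length($1); i++)
    n = n * 16 + index("0123456789ABCDEF", toupper(substr($1, i, 1))) - 1
  print n
}'
# prints 2234
```

index() returns the 1-based position of each digit in the lookup string, so subtracting 1 yields its numeric value.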

Related

What is the function of a comma between arguments in awk?

For this HackerRank bash challenge (round to 3 decimal places), the following solution works well:
$ echo '5+50*3/20 + (19*2)/7' | bc -l | awk '{ printf ("%.3f \n",$1) }'
17.929
whereas the same without a comma between printf's format string and the $1 produces the following error on a bash prompt:
$ echo '5+50*3/20 + (19*2)/7' | bc -l | awk '{ printf ("%.3f \n" $1) }'
awk: cmd. line:1: (FILENAME=- FNR=1) fatal: not enough arguments to satisfy format string
`%.3f
17.92857142857142857142'
^ ran out for this one
The error message suggests that the $1 without comma is not supplied as an argument to printf, but its elision has hitherto not caused me issues (awk '{ print $0 " with appendix." }' happily prints the appended text). Understandably, searching the manual for values separated by commas is not helpful. What is the function of the comma in separating arguments in awk (aside from inserting a space between strings)? Additionally: what are the round brackets doing in the example? For what it's worth, HackerRank gives the following error:
bc -l | awk '{ printf ("%.3f \n" $1) }'
Your Output (stdout)
0.000
17.92857142857142857142
First of all, you don't even need awk to restrict a number to 3 decimal places. bc itself can do that:
bc -l <<< 'scale=3; 5+50*3/20 + (19*2)/7'
17.928
Now, about printf: its syntax should be:
printf format, item1, item2, …
But when you use it like this:
printf ("%.3f \n" $1)
you don't supply enough arguments to satisfy the %.3f format string (since "%.3f \n" and $1 are concatenated into a single string), hence you get this error:
not enough arguments to satisfy format string
Even if you put parentheses around it, that doesn't make the error go away. The (...) is optional in printf, so it can be either of these two statements:
printf "%.3f \n", $1
printf ("%.3f \n", $1)
awk does not have an explicit string concatenation operator. Two strings are concatenated by simply placing them side by side:
print "foo" "bar" # => prints "foobar"
When you omit the comma, you have essentially this:
fmt = "%.3f \n" $1 # the single string "%.3f \n17.92857…"
printf (fmt)
and there's a %.3f directive but no value given for it.
The error message suggests that the $1 without comma is not supplied as an argument to printf, but its elision has hitherto not caused me issues (awk '{ print $0 " with appendix." }' happily prints the appended text)
Yes. Both effects arise from the fact that awk concatenates adjacent strings without any explicit operator. And not only literals. See section 6.2.2 of the manual for details and examples. In the case of your print statement, that produces an effect that serves your purpose, but in the case of your printf call, it means that you are passing only one, concatenated argument to printf, which it interprets as a format string.
When you put a comma between the strings, whether in a print statement or in a printf call, you get a list of two items instead of a single, concatenated string.
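The two behaviors are easy to contrast at a prompt; a minimal sketch:

```shell
# Comma: printf receives two arguments -- the format string and a value
# for the %.3f conversion
echo 17.9285714 | awk '{ printf ("%.3f\n", $1) }'   # prints 17.929

# No comma: "%.3f \n" $1 concatenates into one string, which then serves
# as the format itself, leaving %.3f with no corresponding argument
```

The same concatenation that is harmless in print (it just builds a longer string to print) starves printf of the argument its format string demands.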

How to overcome "(FILENAME=- FNR=1) fatal: division by zero attempted" error in awk

I have a 10 column text file and I need to do some mathematical processing on it.
For example, when I issue below command
cat case.dat | awk '{print ($1-0.777472), ($1*$2*$3*$4)/($10)}'
Then I get below error
awk: cmd. line:1: (FILENAME=- FNR=1) fatal: division by zero attempted
and the script does not give any output.
That means I have some zeros in $10 and thus I am getting the error. How can I overcome this issue with a simple awk command?
Converting my comment to an answer so that the solution is easy to find for future visitors.
To avoid the fatal: division by zero error, check that $10 is non-zero before attempting to divide:
awk '{print ($1-0.777472), ($10 ? ($1*$2*$3*$4)/$10 : 0)}' case.dat
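If a literal 0 could be mistaken for a real result, printing a placeholder instead is another option. A sketch with hypothetical two-line input (the second line has a zero in $10):

```shell
# Guard the division: print "NA" when the divisor field is zero
printf '1 2 3 4 5 6 7 8 9 10\n1 2 3 4 5 6 7 8 9 0\n' |
awk '{ print ($1 - 0.777472), ($10 != 0 ? ($1*$2*$3*$4)/$10 : "NA") }'
# 0.222528 2.4
# 0.222528 NA
```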

SED extract first occurrence after 2 patterns match

I'm trying to use c-shell (I'm afraid no other option is available) and SED to solve this problem. Given this example file with a report of some tests that were failing:
============
test_085
============
- Signature code: F2B0C
- Failure reason: timeout
- Error: test has timed out
============
test_102
============
- Signature code: B4B4A
- Failure reason: syntax
- Error: Syntax error on file example.c at line 245
============
test_435
============
- Signature code: 000FC0
- Failure reason: timeout
- Error: test has timed out
I have a script that loops through all the tests that I'm running, and I check them against this report to see if each has failed, to do some statistics later on:
if (`grep -c $test_name $test_report` > 0) then
printf ",TEST FAILED" >>! $report
else
printf ",TEST PASSED" >>! $report
endif
What I would like to do is to extract the reason if $test_name is found in $test_report. For example, for test_085 I want to extract only 'timeout', for test_102 extract only 'syntax' and for test_435 'timeout'; for test_045 it won't be the case because it is not found in this report (meaning it has passed). In essence I want to extract the first occurrence after these two pattern matches: test_085, Failure reason:
To extract "Failure reason" for the specified test name - short awk approach:
awk -v t_name="test_102" '$1==t_name{ f=1 }f && /Failure reason/{ print $4; exit }' reportfile
$1==t_name{ f=1 } - on encountering a line matching the pattern (i.e., the test name t_name), set the flag f to active
f && /Failure reason/ - while iterating through the lines under the considered test name's section (while f is active), capture the line with Failure reason and print the reason, which is in the 4th field
exit - exit script execution immediately to avoid redundant processing
The output:
syntax
You can try handling RS and FS variables of awk to make the parsing easier:
$ awk -v RS='' -F'==*' '{gsub(/\n/," ")
sub(/.*Failure reason:/,"",$3)
sub(/- Error:.*/,"",$3)
printf "%s : %s\n",$2,$3}' file
output:
test_085 : timeout
test_102 : syntax
test_435 : timeout
If you don't care about the newlines, you can remove the gsub() call.
Whenever you have input whose attributes are name-to-value mappings, as yours does, the best approach is to first create an array to capture those mappings (n2v[] below) and then access the values by their names. For example:
$ cat tst.awk
BEGIN { RS=""; FS="\n" }
$2 == id {
for (i=4; i<=NF; i++) {
name = value = $i
gsub(/^- |:.*$/,"",name)
gsub(/^[^:]+: /,"",value)
n2v[name] = value
}
print n2v[attr]
}
$ awk -v id='test_085' -v attr='Failure reason' -f tst.awk file
timeout
$ awk -v id='test_085' -v attr='Error' -f tst.awk file
test has timed out
$ awk -v id='test_102' -v attr='Signature code' -f tst.awk file
B4B4A
$ awk -v id='test_102' -v attr='Error' -f tst.awk file
Syntax error on file example.c at line 245
$ awk -v id='test_102' -v attr='Failure reason' -f tst.awk file
syntax

simple awk string comparison unexpected result

In general string comparison, "A" > "a" is false.
However, I am getting unexpected result from this awk execution:
$ echo "A a"| awk '{if ($1 > $2) print "gt"; else print "leq"}'
gt
What am I missing?
Environment info:
$ uname -r -s -v -M
AIX 1 6 IBM,9110-510
$ locale
LANG=en_AU.8859-15
LC_COLLATE="en_AU.8859-15"
LC_CTYPE="en_AU.8859-15"
LC_MONETARY="en_AU.8859-15"
LC_NUMERIC="en_AU.8859-15"
LC_TIME="en_AU.8859-15"
LC_MESSAGES="en_AU.8859-15"
LC_ALL=
Diagnostics:
$ echo "A a"| awk '{print NF}'
2
Update: It produces the correct result after setting LC_ALL=POSIX (thanks JS웃). I need to investigate this further.
I am unable to reproduce this, but you can force a string comparison by concatenating an operand with the null string:
echo "A a"| awk '{if ($1"" > $2"") print "gt"; else print "leq"}'
Note: Concatenating with any one operand should suffice.
Update:
As suspected the locale settings of OP were causing the issue. After setting LC_ALL=POSIX the issue was resolved.
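The locale effect can be reproduced by pinning LC_ALL explicitly. In the C/POSIX locale, comparison is by byte value, and "A" (0x41) sorts before "a" (0x61):

```shell
# Force byte-wise collation: in the C locale "A" < "a", so $1 > $2 is false
echo "A a" | LC_ALL=C awk '{ if ($1 > $2) print "gt"; else print "leq" }'
# prints leq
```

Locales such as en_AU use dictionary-style collation, where lowercase and uppercase letters may interleave, which is why the OP saw "gt" instead.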

Why doesn't this ssh command work in ksh?

I'm tweaking a KSH script, and I'm trying to ssh into various hosts and execute a grep command on vfstab that will return a certain line. The problem is, I can't get the line below to work. I'm trying to take the line it returns and append it to a destination file. Is there a better way to do this, e.g. assigning the grep statement to a command variable? The command works fine within the script, but the nested quotations seem to bugger it. Anyway, here's the line:
ssh $user@$host "grep '/var/corefiles' $VFSTAB_LOC | awk '{print $3, $7}' " >> $DEST
This results in:
awk: syntax error near line 1
awk: illegal statement near line one
If there is a better/more correct way to do this please let me know!
You're putting the remote command in double quotes, so the $3 and $7 in the awk body are expanded by your local shell before ssh ever sends the command; awk probably sees '{print , }'. Escape the dollar signs in the awk body.
ssh $user@$host "grep '/var/corefiles' $VFSTAB_LOC | awk '{print \$3, \$7}'" >> $DEST
I tried the below and it worked for me (in ksh); not sure why it would error out in your case:
user="username";
host="somehost";
VFSTAB_LOC="result.out";
DEST="/home/username/aaa.out";
echo $DEST;
`ssh $user@$host "grep '/abc/dyf' $VFSTAB_LOC | awk '{print $3, $1}'" >> $DEST`;
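The quoting problem can be reproduced locally without ssh, using sh -c as a stand-in for the remote shell (hypothetical sample input):

```shell
# Unescaped, the calling shell would expand $3 and $7 first and awk would
# see '{print , }'. With \$ the inner shell hands awk the intact program:
sh -c "echo 'a b c d e f g h' | awk '{print \$3, \$7}'"   # prints: c g
```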