awk set variable to line after if statement match - awk

I have the following text...
BIOS Information
Manufacturer : Dell Inc.
Version : 2.5.2
Release Date : 01/28/2015
Firmware Information
Name : iDRAC7
Version : 2.21.21 (Build 12)
Firmware Information
Name : Lifecycle Controller 2
Version : 2.21.21.21
... which is piped into the following awk statement...
awk '{ if ($1" "$2 == "BIOS Information") var=$1} END { print var }'
This will output 'BIOS' in this case.
I want to look for 'BIOS Information' and then capture the third field of the line two lines down, so in this case 'var' would equal '2.5.2'. Is there a way to do this with awk?
EDIT:
I tried the following:
awk ' BEGIN {
FS="[ \t]*:[ \t]*";
}
NF==1 {
sectname=$0;
}
NF==2 && $1 == "Version" && sectname="BIOS Information" {
bios_version=$2;
}
END {
print bios_version;
}'
Which gives me '2.21.21.21' with the above text. Can this be modified to give me the first 'Version' following 'BIOS Information'?

The following script may be overkill, but it is robust in cases where you have multiple section names and/or the order of fields changes.
BEGIN {
FS="[ \t]*:[ \t]*";
}
NF==1 {
sectname=$0;
}
NF==2 && $1 == "Version" && sectname=="BIOS Information" {
bios_version=$2;
}
END {
print bios_version;
}
First, we set the input field separator so that words are not split into separate fields. Next, we check whether the current line is a section name or a key-value pair. If it is a section name, we store it in sectname. If it is a key-value pair, the key is "Version", and the current section name is "BIOS Information", we set bios_version. Note that this is essentially your attempt with one fix: sectname=="BIOS Information" (comparison) instead of sectname="BIOS Information" (assignment). The assignment is always true, so your version matched every section's Version line and ended up with the last one, 2.21.21.21.
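For example, if the script is saved as bios.awk (a file name chosen here purely for illustration), running it over the sample text prints the first Version after "BIOS Information":
awk -f bios.awk file
2.5.2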

To answer the question as asked:
awk -v RS= '
/^BIOS Information\n/ {
for(i=1;i<=NF;++i) { if ($i=="Version") { var=$(i+2); exit } }
}
END { print var }
' file
-v RS= puts awk in paragraph mode, so that each run of non-empty lines becomes a single record.
/^BIOS Information\n/ then only matches a record (paragraph) whose first line equals "BIOS Information".
Each paragraph is internally still split into fields by any run of whitespace (awk's default behavior). The for loop therefore walks the fields until it finds the literal Version, assigns the 2nd field after it to a variable (the : between key and value is parsed as a field of its own), and exits, at which point the variable's value is printed in the END block.
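If you want to see paragraph mode by itself, here is a minimal sketch (input invented for the demonstration):
printf 'a b\nc d\n\ne f\n' | awk -v RS= '{ print NR ": " $1 "-" $NF }'
1: a-d
2: e-f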
Note: A more robust and complete way to extract the version number can be found in the update below (the field-looping approach here could yield false positives and also only ever reports the first (whitespace-separated) token of the version field).
Update, based on requirements that emerged later:
To act on each paragraph's version number and create individual variables:
awk -v RS= '
# Helper function that returns the value of the specified field.
function getFieldValue(name) {
# Split the record into everything before and after "...\n<name> : "
# and the following \n; the 2nd element of the array thus created
# then contains the desired value.
split($0, tokens, "^.*\n" name "[[:blank:]]+:[[:blank:]]+|\n")
return tokens[2]
}
/^BIOS Information\n/ {
biosVer=getFieldValue("Version")
print "BIOS version = " biosVer
}
/^Firmware Information\n/ {
firmVer=getFieldValue("Version")
print "Firmware version (" getFieldValue("Name") ") = " firmVer
}
' file
With the sample input, this yields:
BIOS version = 2.5.2
Firmware version (iDRAC7) = 2.21.21 (Build 12)
Firmware version (Lifecycle Controller 2) = 2.21.21.21

Given:
$ echo "$txt"
BIOS Information
Manufacturer : Dell Inc.
Version : 2.5.2
Release Date : 01/28/2015
Firmware Information
Name : iDRAC7
Version : 2.21.21 (Build 12)
Firmware Information
Name : Lifecycle Controller 2
Version : 2.21.21.21
You can do:
$ echo "$txt" | awk '/^BIOS Information/{f=1; printf "%s", $0} /^Version/ && f{f=0; printf ":%s\n", $3}'
BIOS Information:2.5.2
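Here f acts as a flag: it is raised on the BIOS Information line (which is printed without a trailing newline), and the first following Version line prints its third field and lowers the flag, so the later Version lines are ignored.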

Grabbing value from piped file contents

Let's say I have the following file:
credentials:
[default]
key_id = AKIAGHJQTOP
secret_key = alcsjkf
[default2]
key_id = AKIADGHNKVP
secret_key = njprmls
I want to grab the value of [default] key_id. I'm trying to do it with an awk command, but I'm open to any other way if it's more efficient and easier. Instead of passing a file name to awk, I want to pass the file contents from the environment variable FILE_CONTENTS.
I tried the following:
$ export VAR=$(echo "$FILE_CONTENTS" | awk '/credentials.default.key_id/ {print $2}')
But it didn't work. Any help is appreciated.
You can use awk like this:
cat srch.awk
BEGIN { FS = " *= *" }            # split "key = value" lines on the "="
{ sub(/^[[:blank:]]+/, "") }      # strip leading indentation
/:[[:blank:]]*$/ {                # a top-level label such as "credentials:"
    sub(/:[[:blank:]]*$/, "")
    k = $1
}
/^[[:blank:]]*\[/ {               # a "[stanza]" line; qualify it with the label
    s = k "." $1
}
NF == 2 {                         # a key/value line; store it as "label.[stanza].key"
    map[s "." $1] = $2
}
key in map {                      # once the requested key is present, print and stop
    print map[key]
    exit
}
# then use it as
echo "$FILE_CONTENTS" |
awk -v key='credentials.[default].key_id' -f srch.awk
AKIAGHJQTOP
# or else
echo "$FILE_CONTENTS" |
awk -v key='credentials.[default].secret_key' -f srch.awk
alcsjkf
With your shown samples, please try the following awk code, written and tested in GNU awk.
awk -v RS='(^|\\n)credentials:\\n[[:space:]]+\\[default\\]\\n[[:space:]]+key_id = \\S+' '
RT && (num=split(RT,arr," key_id = ")){
print arr[num]
}
' Input_file
Here is the online demo for the regex used (it is slightly different from the regex in the awk code, since the escaping is done in the program rather than on the site).
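RT is GNU-awk-specific: it holds the input text that matched RS for the current record. A minimal sketch of the mechanics (sample data invented):
printf 'a1b22c' | gawk -v RS='[0-9]+' '{ printf "rec=%s RT=%s\n", $0, RT }'
rec=a RT=1
rec=b RT=22
rec=c RT=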
Assumptions:
no spaces between labels and :
no spaces between [, the stanza name, and ]
all lines with attribute/value pairs have exactly 3 space-delimited fields as shown (ie, attr = value; value has no embedded spaces)
the contents of OP's variable (FILE_CONTENTS) is an exact copy (data and format) of the sample file provided by OP
NOTE: if the input file format can differ from these assumptions, then additional code must be added to address those differences; as mentioned in comments ... writing your own parser is doable, but you need to ensure you address all possible format variations
One awk idea:
awk -v label='credentials' -v stanza='default' -v attr='key_id' '
/:/ { f1=0; if ($0 ~ label ":") f1=1 }
f1 && /[][]/ { f2=0; if (index($0, "[" stanza "]")) f2=1 }   # literal substring test; as a dynamic regex, "[default]" would be a bracket expression
f1 && f2 && /=/ { if ($1 == attr) { print $3; f1=f2=0 } }
'
This generates:
AKIAGHJQTOP
$ awk 'f{print $3; exit} /\[default]/{f=1}' <<<"$FILE_CONTENTS"
AKIAGHJQTOP
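This is the classic print-the-next-line idiom: on the [default] line itself f is still unset (the f{...} rule is evaluated before f is assigned), so the print fires on the following record, where $3 of key_id = AKIAGHJQTOP is the value, and exit stops before [default2] is ever reached.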
If that's not all you need then edit your question to provide more truly realistic sample input/output including cases where the above doesn't work.
open to any other way if it's more efficient and easier
I suggest taking a look at Python's configparser, which is part of the standard library. Let the FILE_CONTENTS environment variable hold
credentials:
[default]
key_id = AKIAGHJQTOP
secret_key = alcsjkf
[default2]
key_id = AKIADGHNKVP
secret_key = njprmls
then create file getkeyid.py with content as follows
import configparser
import os
config = configparser.ConfigParser()
config.read_string(os.environ["FILE_CONTENTS"].replace("credentials","#credentials",1))
print(config["default"]["key_id"])
and do
python3 getkeyid.py
to get output
AKIAGHJQTOP
Explanation: I retrieve the string from the environment variable and replace credentials with #credentials at most once in order to comment out that line (otherwise the parser would fail on it), then parse the result and retrieve the value corresponding to the desired key.
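If creating a separate script file is inconvenient, the same logic (the identical code, just inlined) can be run straight from the shell and prints the same AKIAGHJQTOP:
python3 -c '
import configparser, os
config = configparser.ConfigParser()
config.read_string(os.environ["FILE_CONTENTS"].replace("credentials", "#credentials", 1))
print(config["default"]["key_id"])
'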

Finding matching blocks of code and add or replace content in the second last line

I recently posted a similar question, but have a few more requirements now.
I've got Terraform files and I need to find one or more specific resource blocks within them and modify those, or add a tags block if it's missing entirely.
Sample Input - an example of how a file with blocks might look (each section could have more lines of data):
resource "aws_instance" "ec2_1" {
ami = "ami-blahblah"
tags {
Owner = "Me"
}
}
resource "aws_instance" "ec2_2" {
ami = "ami-blahblah"
}
resource "aws_security_group" "gid-lb_1" {
description = "Security group for the load balancer"
ingress {
security_groups = ["${aws_security_group.gid-lb_1.id}"]
}
tags = merge(
var.tags,
map(
Type = "ec2"
By = "Terraform"
Owner = "Me"
)
)
}
resource "aws_route" "non-default-route" {
route_table_id = "exampleid"
}
Expected Output - Given the above I'd expect the following output:
resource "aws_instance" "ec2_1" {
ami = "ami-blahblah"
tags = merge(
var.tags,
map(
Owner = "Me"
Type = "ec2"
By = "Terraform"
)
)
}
resource "aws_instance" "ec2_2" {
ami = "ami-blahblah"
tags = merge(
var.tags,
map(
Type = "ec2"
By = "Terraform"
)
)
}
resource "aws_security_group" "gid-lb_1" {
description = "Security group for the load balancer"
ingress {
security_groups = ["${aws_security_group.gid-lb_1.id}"]
}
tags = merge(
var.tags,
map(
Type = "ec2"
By = "Terraform"
Owner = "Me"
)
)
}
resource "aws_route" "non-default-route" {
route_table_id = "exampleid"
}
I need to search for a list of different resource block types, like aws_instance and aws_security_group, which start with resource, and check whether a tags entry is already set and, if it is, whether it already uses the map(...) form or just curly brackets. All other resources should not be touched.
As you can maybe see, there are resources which are not supposed to be modified. Also, there might be other blocks like modules (I didn't add an example since the code block is already quite huge).
I wrote a bash script, searching for the specific resource blocks using this awk:
awk '/^resource "/ {i++}; i=='$i' && k=='$k' {print}; /^}/ {k++}' file
with the list of tags to add kept in an array:
ARRAY=( "Type:ec2"
"By:Terraform")
Anyway, the problem here was that I didn't find a way to replace just this one block within the text file, nor a way to add the tags at the end, before the last closing }.
I got help in my previous question with some awesome awk code, which did the job perfectly fine, except that I can't filter it for the specific resource types.
I am really frustrated with the awk documentation I have found. I have been struggling with this issue for a while now and have spent many hours on it; by far the biggest help was the awk code mentioned above. I would be very thankful if someone could help me out here - maybe I was just searching for the wrong keywords to find the right documentation to add my latest use case to the given awk code myself.
Many thanks!
PS: I hope it's right to open a new question on this specific case since it's slightly different now, and that it's OK to link to the former post.
Try the following GNU awk solution:
awk 'BEGIN { RS="resource" } $0 ~ /(aws_instance)|(aws_security_group)/ && $0 !~ /map/ && $0 ~ /tags/ { tags=gensub(/(^.*)(tags {.*\n[[:space:]]+)(.*)(\n[[:space:]]+})(.*$)/,"resource\\1tags=merge(\n var.tags,\n map(\n \\3\n Type = \"ec2\"\n By = \"Terraform\"\n )\n )\n \\4",$0);print tags;next } $0 ~ /(aws_instance)|(aws_security_group)/ && $0 !~ /map/ && $0 !~ /tags/ { print "resource"$0"\n tags=merge(\n var.tags,\n map(\n Type = \"ec2\"\n By = \"Terraform\"\n )\n )\n\n}";next } { print "resource"$0 }' file
Explanation:
awk 'BEGIN {
RS="resource" # Set the record separator to "resource"
}
$0 ~ /(aws_instance)|(aws_security_group)/ && $0 !~ /map/ && $0 ~ /tags/ { # Process when resource type is aws_instance or aws_security_group, record doesn't contain map and record contains tags
tags=gensub(/(^.*)(tags {.*\n[[:space:]]+)(.*)(\n[[:space:]]+})(.*$)/,"resource\\1tags=merge(\n var.tags,\n map(\n \\3\n Type = \"ec2\"\n By = \"Terraform\"\n )\n )\n \\4",$0); # Split the record into sections and substitute the third section, surrounded by the amended tag text (we are incorporating the existing Owner = "Me" text into the new tag text); store the result in the variable tags
print tags; # Print tags
next # Skip to the next record
}
$0 ~ /(aws_instance)|(aws_security_group)/ && $0 !~ /map/ && $0 !~ /tags/ { # Process when there are the required resource, no map and no tags
print "resource"$0"\n tags=merge(\n var.tags,\n map(\n Type = \"ec2\"\n By = \"Terraform\"\n )\n )\n\n}"; # Print the amended tags text
next # Skip to the next record
}
{
print "resource"$0 # In all other cases, print the record separator "resource" plus the record ($0)
}' file
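gensub() is what makes this approach GNU-awk-only: unlike sub()/gsub(), it supports \\1-\\9 backreferences in the replacement and returns the result instead of modifying $0 in place. A stripped-down illustration of the capture-and-rewrap mechanics used above (sample string made up):
echo 'tags { Owner = "Me" }' | gawk '{ print gensub(/tags \{(.*)\}/, "map(\\1)", 1) }'
map( Owner = "Me" )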

How to find the maximum value for the field by ignoring the lines with characters using awk?

Since I am a newbie to awk, please help me with your suggestions. I tried the below commands to filter the maximum value and to ignore the first and last lines of the sample text file; they work when I try them separately.
My query:
I need to ignore the last line and the first few lines of the file, and then take the maximum value of field 7 using awk.
I also need to ignore the lines containing alphabetic characters. Can anyone suggest a way to use both commands together and get the required output?
Sample file:
Linux 3.10.0-957.5.1.el7.x86_64 (j051s784) 11/24/2020 _x86_64_ (8 CPU)
12:00:02 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
12:10:01 AM 4430568 61359128 93.27 1271144 27094976 66771548 33.04 39005492 16343196 1348
12:20:01 AM 4423380 61366316 93.28 1271416 27102292 66769396 33.04 39012312 16344668 1152
12:30:04 AM 4406324 61383372 93.30 1271700 27108332 66821724 33.06 39028320 16343668 2084
12:40:01 AM 4404100 61385596 93.31 1271940 27107724 66799412 33.05 39031244 16344532 1044
06:30:04 PM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
07:20:01 PM 3754904 62034792 94.29 1306112 27555948 66658632 32.98 39532204 16476848 2156
Average: 4013043 61776653 93.90 1293268 27368986 66755606 33.03 39329729 16427160 2005
Commands used:
cat testfile | awk '{print $7}' | head -n -1 | tail -n+7
awk 'BEGIN{a= 0}{if ($7>0+a) a=$7} END{print a}' testfile
Expected output:
The maximum value of column 7, excluding the lines that contain alphabetic characters.
1st solution (generic): Adding a generic solution here: pass the field name whose maximum you want into an awk variable, and the script will automatically find its field number from the very first line and work accordingly. This assumes your first line is the header line containing that field name.
awk -v var="kbcached" '
FNR==1{
for(i=1;i<=NF;i++){
if($i==var){ field=i }
}
next
}
/kbmemused/{
next
}
{
if($2!~/^[AP]M$/){
val=$(field-1)
}
else{
val=$field
}
}
{
max=(max>val?max:val)
val=""
}
END{
print "Maximum value is:" max
}
' Input_file
2nd solution (as per shown samples only): Please try the following, based on your shown samples only. I am assuming you want the value of the kbcached column.
awk '
/kbmemfree/{
next
}
{
if($2!~/^[AP]M$/){
val=$6
}
else{
val=$7
}
}
{
max=(max>val?max:val)
val=""
}
END{
print "Maximum value is:" max
}
' Input_file
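With the sample file shown, this should print:
Maximum value is:27555948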
awk '$7 ~ /^[[:digit:]]+$/ && $1 != "Average:" {
max[$7]=""
}
END {
PROCINFO["sorted_in"]="#ind_num_asc";
for (i in max) {
maxtot=i
}
print maxtot
}' file
One liner:
awk '$7 ~ /^[[:digit:]]+$/ && $1 != "Average:" { max[$7]="" } END { PROCINFO["sorted_in"]="#ind_num_asc";for (i in max) { maxtot=i } print maxtot }' file
Using GNU awk, search for lines where field 7 consists only of digits and field one is not "Average:". In these instances, create an array entry with field 7 as the index. At the end, sort the array in ascending numeric index order and loop through it, setting a maxtot variable; the last entry in the max array is the highest kbcached value, so print maxtot.
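For the sample file above, both forms should print:
27555948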

How to return 0 if awk returns null from processing an expression?

I currently have an awk method to parse whether or not an expression's output contains more than one line. If it does, it aggregates and prints the sum. For example:
someexpression=$'JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)'
might be the one-liner where it DOESN'T yield any information. Then,
echo "$someexpression" | awk '
NR>1 {a[$4]++}
END {
for (i in a) {
printf "%d\n", a[i]
}
}'
this will yield NULL or an empty return. Instead, I would like to have it return a numeric value of $0$ if empty. How can I modify the above to do this?
Nothing in UNIX "returns" anything (despite the unfortunately named keyword for setting the exit status of a function); everything (tools, functions, scripts) outputs X and exits with status Y.
Consider these 2 identical functions named foo(), one in C and one in shell:
C (x=foo() means set x to the return code of foo()):
foo() {
printf "7\n"; // this is outputting 7 from the full program
return 3; // this is returning 3 from this function
}
x=foo(); <- 7 is output on screen and x has value '3'
shell (x=foo means set x to the output of foo()):
foo() {
printf "7\n"; # this is outputting 7 from just this function
return 3; # this is setting this functions exit status to 3
}
x=foo <- nothing is output on screen, x has value '7', and '$?' has value '3'
Note that what the return statement does is vastly different in each. Within an awk script, printing and return codes from functions behave the same as they do in C but in terms of a call to the awk tool, externally it behaves the same as every other UNIX tool and shell script and produces output and sets an exit status.
So when discussing anything in UNIX, avoid using the term "return" as it's imprecise and ambiguous; some people will think you mean "output" while others will think you mean "exit status".
In this case I assume you mean "output" BUT you should instead consider setting a non-zero exit status when there's no match like grep does, e.g.:
echo "$someexpression" | awk '
NR>1 {a[$4]++}
END {
for (i in a) {
print a[i]
}
exit (NR < 2)
}'
and then your code that uses the above can test for the success/fail exit status rather than testing for a specific output value, just like if you were doing the equivalent with grep.
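For example, the calling shell code might then look something like this (a sketch; the variable names are illustrative):
if counts=$(echo "$someexpression" | awk '
    NR>1 {a[$4]++}
    END {
        for (i in a) print a[i]
        exit (NR < 2)
    }'); then
    echo "per-user counts: $counts"
else
    echo "no matches" >&2
fi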
You can of course tweak the above to:
echo "$someexpression" | awk '
NR>1 {a[$4]++}
END {
if ( NR > 1 ) {
for (i in a) {
print a[i]
}
}
else {
print "$0$"
exit 1
}
}'
if necessary and then you have both a specific output value and a success/fail exit status.
You may set a flag inside the for loop to detect whether the loop has executed or not:
echo "$someexpression" |
awk 'NR>1 {
a[$4]++
}
END {
for (i in a) {
p = 1
printf "%d\n", a[i]
}
if (!p)
print "$0$"
}'
$0$

awk | Add new row or update existing row in a file

I want to update file1 on the basis of file2. If any row is new in file2 then it should be added in file1. If any row from file2 is already in file1, then update that row with the row from file2 if the time is greater in file2.
file1
DL,1111111100,201312051013,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111101,201312051014,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111102,201312051015,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111103,201312051016,val,FIX01,OptIn,N,Ext1,Ext2
file2
DL,1111111101,201312041013,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111102,201312051016,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111102,201312051017,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111104,201312051014,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111104,201312051016,val,FIX02,OptIn,Y,Ext1,Ext2
newfile1
DL,1111111100,201312051013,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111101,201312051014,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111102,201312051017,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111103,201312051016,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111104,201312051016,val,FIX02,OptIn,Y,Ext1,Ext2
Notes:
2nd field should be unique in the output.
Addition of a new value: for "1111111104", the latest row in file2 is taken, since its date (201312051016) is newer than the other one (201312051014), based on the date column (3rd field).
Update of an existing value: "1111111102" is updated with the newer value, based on the date in the 3rd column.
file1 is very LARGE whereas file2 has 5-10 entries only.
The row with 2nd field "1111111101" doesn't need to be updated because its entry in file1 already has the later date "201312051014" compared to the new date "201312041013" in file2.
I haven't tried much on this because the conditions are really complex for me as a beginner.
BEGIN { FS = OFS = "," }
FNR == NR {
m=$2;
a[m] = $0;
next
}
{
if($2 in a)
{
split(a[$2],datetime,",")
if($3>datetime[3])
print $0;
else
print a[$2]"Old time"
}
else print $0"NOMATCH";
delete a[$2];
}
Assuming that you can start your awk as follows:
awk -f script.awk input2.csv input1.csv > result.csv
you can use the following script to obtain the desired output:
BEGIN {
FS = OFS = ","
}
FILENAME == "input2.csv" {
date[$2] = $3
data[$2] = $0
used[$2] = 0
}
FILENAME == "input1.csv" {
if ($2 in date) {
used[$2] = 1
if ($3 < date[$2])
print data[$2]
else
print $0
} else {
print $0
}
}
END {
for (key in used) {
if (used[key] == 0)
print data[key]
}
}
Notes:
The script takes advantage of the assumption that file2 is smaller than file1, because it uses an array only for the few entries in file2.
The new entries are simply appended to the output; there is no sorting. If sorting is required, extra effort will be needed.
EDIT
Heeding @JonathanLeffler's remark about the way I determine which file is being processed, I would like to offer an alternate version that may (or may not :-) ) be a little more straightforward to understand than checking FNR == NR. However, it only works with sufficiently recent versions of awk, which are capable of returning the size of an array as length(array):
BEGIN {
FS = ","
}
{
# The following effectively creates an array entry for each filename found (for "known" filenames existing entries are overwritten).
files[FILENAME] = 1
# check the number of files we have so far
if (length(files) == 1) {
# we are still in the first file
date[$2] = $3
data[$2] = $0
used[$2] = 0
} else {
# we are in the second file (or any other following file)
if ($2 in date) {
used[$2] = 1
if ($3 < date[$2])
print data[$2]
else
print $0
} else {
print $0
}
}
}
END {
for (key in used) {
if (used[key] == 0)
print data[key]
}
}
Also, if you require your output to be sorted by the second field, you can replace the call to awk by this:
awk -f script.awk input2.csv input1.csv | sort -t "," -n -k 2 > result.csv
The latter, of course, works for both versions of the script.
Since file1 is very large but file2 is very small (5-10 entries), you need to read all of file2 into memory first, dealing with the duplicate values. As a result, you'll have an array, indexed by the record key (field 2), holding the new data; you should also record the date for each key in a separate array. Then, as you read the main file, you look up the key and the date in the arrays and, if you need to, substitute the saved new record for the incoming old record.
Your outline script is most of the way there. This version is a little more complex because it also saves the dates as file2 is read (your outline didn't). This more or less works:
awk -F, '
FNR == NR { if (!($2 in date) || date[$2] < $3) { date[$2] = $3; line[$2] = $0; } next; }
{ if ($2 in date)
{
if (date[$2] > $3)
print line[$2]
else
print
delete line[$2]
delete date[$2]
}
else
print
}
END { for (l in line) print line[l]; }' file2 file1
Sample output for given data:
DL,1111111100,201312051013,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111101,201312051014,val,FIX01,OptIn,Y,Ext1,Ext2
DL,1111111102,201312051017,val,FIX02,OptIn,N,Ext1,Ext2
DL,1111111103,201312051016,val,FIX01,OptIn,N,Ext1,Ext2
DL,1111111104,201312051016,val,FIX02,OptIn,Y,Ext1,Ext2
However, if there were 4 new records, there's no guarantee that they'd be in sorted order, though they would all be at the end of the list. It would be possible to upgrade the script to print the new records at the appropriate place in the list if the input is guaranteed to be in sorted order. You simply have to search through the list of lines to see whether there are any lines that should be printed before the current line, and if so, do so (and delete the record so that they are not printed at the end).
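If sorted output is required, the simplest route (as the other answer also notes) is to sort afterwards; assuming the inline script above were saved as merge.awk (a file name chosen for illustration):
awk -F, -f merge.awk file2 file1 | sort -t "," -n -k 2
This keeps the merge itself cheap on the large file and leaves the ordering to sort.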
Note that uniqueness in the output depends on uniqueness in the input (file1). That is, if field 2 in the input is repeated, this code won't notice. There is also nothing that can be done with the current design even if a duplicate were spotted; the old row has already been printed, so printing the new row would simply produce a duplicate. If you were worried about this, you could design the awk script to keep the whole of file1 in memory and only print anything once the whole of the input has been processed. Needless to say, this uses a lot more memory than the current design, and will generally be less efficient because of that. Nevertheless, it could be done if needed.