Multiple awk print statements in a single command - awk

Here are the two commands we need to execute. There are two ways to run them on one line, either with ; or with |. Is there any other way to do it within a single awk command?
The commands below run awk twice. Is it possible to have one command with multiple awk print statements, as in the example command tried below?
isi_classic snapshot usage | tail -n 1 | awk '{printf "\t\t\tSnapshot USED %=%.1f%%\n", $4}'
Snapshot USED =0.6%
isi_classic snapshot usage | tail -n -1 | awk '{ print "\t\t\tSnapshot USED:" $1}'
Snapshot USED=3.2T
Example command tried:
isi_classic snapshot usage | tail -n 1 | awk '{printf "\t\t\tSnapshot USED %:%.1f%%\n", $4}'; awk '{ print "\t\t\tSnapshot USED:" $1}'
Snapshot USED =0.6%
Snapshot USED=3.2T

You can definitely do it with a one-line command:
isi_classic snapshot usage | awk -v OFS='\t\t\t' 'END{printf "%sSnapshot USED %=%.1f%%\n%sSnapshot USED:%s\n",OFS,$4,OFS,$1}'
Brief explanation:
No need for tail; an awk END{} block can do the same thing (a portable variant is sketched below)
You can combine your printf and print commands into one
Substituting the '\t\t\t' with OFS makes the command more readable
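One caveat: referring to $1 and $4 inside END relies on the fields of the last input record still being set, which gawk and most modern awks honor but POSIX leaves unspecified. A portable sketch saves the fields explicitly on each record (it also writes the literal = directly, since the original %= is not a standard printf conversion):
isi_classic snapshot usage | awk -v OFS='\t\t\t' '
{ used = $1; pct = $4 }   # remember the fields of the most recent record
END { printf "%sSnapshot USED =%.1f%%\n%sSnapshot USED:%s\n", OFS, pct, OFS, used }'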


Output of awk in color

I am trying to set up polybar on my newly-installed Arch system. I know very little bash scripting. I am just getting started. If this is not an appropriate question for this forum, I will gladly delete it. I want to get the following awk output in color:
sensors | grep "Package id 0:" | tr -d '+' | awk '{print $4}'
I know how to do this with echo, so I tried to pass the output so that with the echo command, it would be rendered in color:
sensors | grep "Package id 0:" | tr -d '+' | awk '{print $4}' | echo -e "\e[1;32m ???? \033[0m"
where I want to put the appropriate information where the ??? are.
The awk output is just a temperature, something like this: 50.0°C.
edit: It turns out that there is a very easy way to pass colors to outputs of bash scripts (even python scripts too) in polybar. But I am still stumped as to why the solutions suggested here in the answers work in the terminal but not in the polybar modules. I have several custom modules that use scripts with no problems.
Using awk
$ sensors | awk '/Package id 0:/{gsub(/\+/,""); print "\033[32m"$4"\033[0m"}'
If that does not work, you can try this approach:
$ sensors | awk -v color="$(tput setaf 2)" -v end="$(tput sgr0)" '/Package id 0:/{gsub(/\+/,""); print color $4 end}'
Here you want to capture the output of awk in a variable. Since awk can do what grep and tr do, I've folded the whole pipeline into one awk invocation:
temp=$(sensors | awk '/Package id 0:/ {gsub(/\+/, ""); print $4}')
echo -e "\e[1;32m $temp \033[0m"
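Regarding the edit about polybar: polybar does not interpret ANSI escape sequences, which is most likely why these answers work in a terminal but not in a module. Polybar has its own inline formatting tags; here is a minimal sketch using its %{F...} foreground tag (the green hex value is just an example):
# Output for a polybar custom/script module: wrap the value in polybar's
# format tags instead of ANSI escapes; %{F#00ff00} sets the foreground
# color and %{F-} restores the default.
temp=$(sensors | awk '/Package id 0:/ {gsub(/\+/, ""); print $4}')
echo "%{F#00ff00}${temp}%{F-}"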

Is using awk without '-F' always fine?

What is the difference on Ubuntu between awk and awk -F? For example, to display the frequency of CPU core 0 we use the command
cat /proc/cpuinfo | grep -i "^cpu MHz" | awk -F ":" '{print $2}' | head -1
But why does it use awk -F? We could use awk without the -F and it would work, of course (already tested).
Because without -F, awk wouldn't know which separator to split the line on, and so couldn't print the right result. -F is simply the way to specify the field separator for that awk invocation. Without it, awk falls back to the default separator, whitespace: for example, if I type ps | grep xeyes | awk '{print $1}' in the terminal, the line is split on spaces and $1 is the PID of the xeyes process. I found this at https://www.shellunix.com/awk.html. Thanks to all.
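To see the difference concretely, compare how the two invocations split a /proc/cpuinfo-style line (the sample line below is illustrative):
# With -F ":" the line splits at the colon, so $2 is everything after it:
echo 'cpu MHz         : 2400.000' | awk -F ":" '{print $2}'   # -> " 2400.000"
# Without -F, awk splits on whitespace, so $2 is a different field entirely:
echo 'cpu MHz         : 2400.000' | awk '{print $2}'          # -> "MHz"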

How to assign Awk output to netcat?

I want to transfer the first field of a file. I am using awk to pull out the first field and then sending it with netcat, but I don't get anything on the other side. I am using the following command:
awk -F, '{print $1}' sample.csv | netcat -lk 9999
Any hints would be much appreciated.
I ran into this same problem when piping awk output to netcat. Turns out, awk buffers a lot.
Output can be flushed after each line with the fflush() function. The following works for me:
awk -F, '{print $1; fflush()}' sample.csv | netcat -lk 9999
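If your awk lacks fflush(), you can instead force line-buffered output from the outside with stdbuf (a sketch, assuming GNU coreutils and an awk that uses C stdio):
# stdbuf -oL makes awk's stdout line-buffered, so each record reaches netcat immediately
stdbuf -oL awk -F, '{print $1}' sample.csv | netcat -lk 9999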

Tips for awk with changing parameters

I've got several pieces of code that look like:
for ff in `seq 3 $nlpN`;
do
npc1[$ff]=`awk 'NR=='$ff' {print $1}' p_walls.raw`;
echo ${npc1[$ff]};
npc2[$ff]=`awk 'NR=='$ff' {print $2}' p_walls.raw`;
npc3[$ff]=`awk 'NR=='$ff' {print $3}' p_walls.raw`;
npRs[$ff]=`awk 'NR=='$ff' {print $4}' p_walls.raw`;
echo $ff
done
As you can see, I'm invoking awk several times. Is there a faster way to do this, like invoking awk once and doing the assignments with the changing parameters?
Thanks a lot in advance!
Input looks like:
...
3.76023 0.79528 0.307771 8729.82
3.76024 0.814664 0.307849 8650.2
3.76026 0.845679 0.307978 8802.97
3.76025 0.826293 0.307897 8690.43
3.76017 0.65959 0.30722 8936.07
...
I'm looking for something like a single awk invocation that does all the assignments.
TY
That does look pretty inefficient. As written, awk processes the input file in its entirety four times on every pass of the loop. The following replaces the multiple awk runs with a single pass over the data file per iteration that stops as soon as it finds the line; cut then extracts the individual fields from the saved record.
for ff in `seq 3 $nlpN`
do
data=`awk 'NR=='$ff' { print $1, $2, $3, $4; exit }' p_walls.raw`
npc1[$ff]=`echo "$data" | cut -f1 -d ' '`
echo ${npc1[$ff]}
npc2[$ff]=`echo "$data" | cut -f2 -d ' '`
npc3[$ff]=`echo "$data" | cut -f3 -d ' '`
npRs[$ff]=`echo "$data" | cut -f4 -d ' '`
echo $ff
done
Note that I added an exit statement so that awk will exit after processing the line. This prevents it from reading the entire file on every pass. If all that you need to do is extract a single line from a file, then you might want to use sed instead since (IMHO) the script is easier to read and it seems to be a little faster on large files. The following sed expression is equivalent to the awk line:
data=`sed -n -e "$ff p" -e "$ff q" p_walls.raw`
The -n tells sed to print only the lines selected by the script, which here is supplied as two -e parameters, each an address followed by a command. Multiple commands are separated by newlines in sed scripts, but they can also be given as multiple -e parameters with the same address. Putting this together, the expression 42 p tells sed to select line 42 and run the p command, which prints the selected pattern space (the 42nd line), and 42 q tells it to quit after processing line 42. So our sed expression reads the first $ff lines of "p_walls.raw", prints the $ff-th one, and exits.
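The same thing can be written as a single expression by attaching a command block to one address (a minor variant with identical behavior):
# select line $ff, print it, and quit so sed never reads the rest of the file
data=`sed -n "${ff}{p;q;}" p_walls.raw`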
Run awk a single time and process the output on each iteration separately.
awk "(NR > 3 && NR <= $nlpN)"' { print NR, $1, $2, $3, $4 }' p_walls.raw |
while read ff c1 c2 c3 Rs
do
npc1[$ff]=$c1
echo ${npc1[$ff]};
npc2[$ff]=$c2
npc3[$ff]=$c3
npRs[$ff]=$Rs
echo $ff
done
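One caveat with this approach: in most shells a while loop at the end of a pipeline runs in a subshell, so arrays assigned inside it vanish when the loop finishes. In bash you can keep the assignments in the current shell by feeding the loop with process substitution instead (a sketch of the same loop):
while read ff c1 c2 c3 Rs
do
npc1[$ff]=$c1
npc2[$ff]=$c2
npc3[$ff]=$c3
npRs[$ff]=$Rs
done < <(awk -v n="$nlpN" 'NR >= 3 && NR <= n { print NR, $1, $2, $3, $4 }' p_walls.raw)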

Copy data from one database into another using bash

I need to copy data from one database into my own database. Because I want to run it as a daily cronjob, I prefer to have it in bash. I also need to store the values in variables so I can run various checks/validations on them. This is what I've got so far:
echo "SELECT * FROM table WHERE value='ABC' AND value2 IS NULL ORDER BY time" | mysql -u user -h ip db -p | sed 's/\t/,/g' | awk -F, '{print $3,$4,$5,$7 }' > Output
cat Output | while read line
do
Value1=$(awk '{print "",$1}')
Value2=$(awk '{print "",$2}')
Value3=$(awk '{print "",$3}')
Value4=$(awk '{print "",$4}')
echo "INSERT INTO db (value1,value2,value3,value4,value5) VALUES($Value1,$Value2,'$Value3',$Value4,'n')" | mysql -u rb db -p
done
I get the data I need from the database and store it in a new file separated by spaces. Then I read the file line by line and store the values in variables, and last I run an insert query with the right variables.
I think something goes wrong while storing the values, but I can't really figure out what.
None of those awk commands get their input from $line; with no file argument, they read from the loop's stdin instead (so the first one swallows the rest of the file). You can fix this as:
Value1=$(echo $line | awk '{print $1}')
Value2=$(echo $line | awk '{print $2}')
Value3=$(echo $line | awk '{print $3}')
Value4=$(echo $line | awk '{print $4}')
There's no reason to call awk four times in a loop; that could be very slow. If you don't need the temporary file "Output" for another reason, then you don't need it at all - just pipe the output into the while loop. You may not need sed to change tabs into commas (you could use tr for that, by the way), since awk splits fields on tabs and spaces by default (unless your data contains embedded spaces, which it doesn't appear to).
echo "SELECT * FROM table WHERE value='ABC' AND value2 IS NULL ORDER BY time" |
mysql -u user -h ip db -p |
sed 's/\t/,/g' | # can this be eliminated?
awk -F, '{print $3,$4,$5,$7 }' | # if you eliminate the previous line then omit the -F,
while read line
do
tmparray=($line)
Value1=${tmparray[0]}
Value2=${tmparray[1]}
Value3=${tmparray[2]}
Value4=${tmparray[3]}
echo "INSERT INTO predb (value1,value2,value3,value4,value5) VALUES($Value1,$Value2,'$Value3',$Value4,'n')" | mysql -u rb db -p
done
That uses a temporary array to split the values out of the line. This is another way to do that:
set -- $line
Value1=$1
Value2=$2
Value3=$3
Value4=$4
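Simpler still, read can split the line into the variables itself, which removes both the temporary array and the set -- trick (a sketch; the trailing junk variable soaks up any extra fields):
while read -r Value1 Value2 Value3 Value4 junk
do
echo "INSERT INTO predb (value1,value2,value3,value4,value5) VALUES($Value1,$Value2,'$Value3',$Value4,'n')" | mysql -u rb db -p
done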