connect to a DB inside awk script - sql

In a shell script we can connect to a database using sqlplus on Unix.
Can I perform the same thing inside an awk script?
I need to access the output of a SELECT query inside an awk script. Is that possible?

I'd run the query and feed its output into awk:
echo 'select onething from another;' | sqlplus -s user/pass@db | awk '{ weave awk magic here }'
Just like any other command:
pax> ls -alF | awk '{print $9}'
file1.txt
file2.txt
my_p0rn_dir/

Just use some sort of command line client for your SQL database (if available) and pipe the output to awk.
E.g. with sqlite (the exact invocation for SQL*Plus differs, but the idea is the same):
echo "select * from foo;" | sqlite3 file.db | awk ...
awk can't do it by itself. This is the UNIX tools philosophy: instead of having a few tools that do many tasks, you use many little tools that each do one task and connect them together.
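Putting that together for SQL*Plus, a minimal sketch (user/password@db is a placeholder connection string, and the SET line just suppresses SQL*Plus's headers and pagination so awk sees clean rows):
# -s runs sqlplus silently; the here-document supplies the SQL,
# and the query output is piped straight into awk
sqlplus -s user/password@db <<'EOF' | awk '{print $1}'
set pagesize 0 feedback off heading off
select onething from another;
EOF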

Related

Output of awk in color

I am trying to set up polybar on my newly-installed Arch system. I know very little bash scripting. I am just getting started. If this is not an appropriate question for this forum, I will gladly delete it. I want to get the following awk output in color:
sensors | grep "Package id 0:" | tr -d '+' | awk '{print $4}'
I know how to do this with echo, so I tried to pass the output so that with the echo command, it would be rendered in color:
sensors | grep "Package id 0:" | tr -d '+' | awk '{print $4}' | echo -e "\e[1;32m ???? \033[0m"
where I want to put the appropriate information where the ???? are.
The awk output is just a temperature, something like this: 50.0°C.
edit: It turns out that there is a very easy way to pass colors to outputs of bash scripts (even python scripts too) in polybar. But I am still stumped as to why the solutions suggested here in the answers work in the terminal but not in the polybar modules. I have several custom modules that use scripts with no problems.
Using awk
$ sensors | awk '/Package id 0:/{gsub(/\+/,""); print "\033[32m"$4"\033[0m"}'
If that does not work, you can try this approach:
$ sensors | awk -v color="$(tput setaf 2)" -v end="$(tput sgr0)" '/Package id 0:/{gsub(/\+/,""); print color $4 end}'
This is where you want to capture the output of awk. Since awk can do what grep and tr do, I've integrated the pipeline into one awk invocation:
temp=$(sensors | awk '/Package id 0:/ {gsub(/\+/, ""); print $4}')
echo -e "\e[1;32m $temp \033[0m"
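If the goal is to show the colored temperature inside polybar itself, note that polybar does not interpret ANSI escape sequences; it uses its own %{F#RRGGBB}...%{F-} format tags, which is likely why the answers above work in a terminal but not in a module. A sketch of a custom/script module (the script path and module name here are hypothetical):
# ~/.config/polybar/cputemp.sh: print the temperature wrapped in polybar format tags
temp=$(sensors | awk '/Package id 0:/ {gsub(/\+/, ""); print $4}')
echo "%{F#00FF00}${temp}%{F-}"
with a matching module in the polybar config:
[module/cputemp]
type = custom/script
exec = ~/.config/polybar/cputemp.sh
interval = 5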

Multiple awk print in single command

Here are the two commands we need to execute. There are two ways to run them on one line, either with ; or with |. Is there any other way to do it via a single awk command?
Below are the commands that currently get executed twice; is it possible to have one command with multiple awk prints, as shown in the example command tried?
isi_classic snapshot usage | tail -n 1 | awk '{printf "\t\t\tSnapshot USED =%.1f%%\n", $4}'
Snapshot USED =0.6%
isi_classic snapshot usage | tail -n 1 | awk '{ print "\t\t\tSnapshot USED:" $1}'
Snapshot USED:3.2T
Example command tried:
isi_classic snapshot usage | tail -n 1 | awk '{printf "\t\t\tSnapshot USED =%.1f%%\n", $4}'; awk '{ print "\t\t\tSnapshot USED:" $1}'
Snapshot USED =0.6%
Snapshot USED:3.2T
You can definitely use a one-line command to do it:
isi_classic snapshot usage | awk -v OFS='\t\t\t' 'END{printf "%sSnapshot USED =%.1f%%\n%sSnapshot USED:%s\n",OFS,$4,OFS,$1}'
Brief explanation:
No need to use tail; in awk's END block the fields of the last line read are still available, so END{} can do the same thing (see the small demo below)
You can combine your printf and print commands into one
Storing the '\t\t\t' in a variable (here OFS) makes the command more readable
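A quick demonstration of the first point (this holds in gawk and other modern awks, where $0 and the fields keep the values of the last record inside END):
printf 'a 1\nb 3.2T\n' | awk 'END{print "last record:", $1, $2}'
# prints: last record: b 3.2T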

Combining awk search with standard awk and awk delimiter

I'm working on a set of data for which I need specific fields as output:
The data looks like this:
/home/oracle/db.log.gz:2013-1-19T00:00:25 <user.info> 1 2013-1-19T00:00:53.911 host_name RT_FLOW [junos#26.1.1.1.2.4 source-address="10.1.2.0" source-port="616" destination-address="100.1.1.2" destination-port="23" service-name="junos-telnet" nat-source-address="20x.2x.1.2" nat-source-port="3546" nat-destination-address="9x.12x.3.0"]
From above I need three things:
(I) - 2013-1-19T00:00:53.911, which is $4
(II) - source-address="10.1.2.0", which is $8, of which I need only 10.1.2.0
(III) - destination-address="100.1.1.2", which is $10, of which I need only 100.1.1.2
I cannot use a simple awk like awk '{print $4 "\t" $8 "\t" $10}' since some of the fields after "device_name" are not always present in all log lines, so I have to make use of delimiters such as
awk -F 'source-address=' '{print $2}' | awk '{print $1}' -> this gives the source-address IP, which is requirement (II)
I'm not sure how to combine an awk search for (I), (II) and (III).
Can someone help?
I believe sed is better for this job
sed -r 's/([^ ]+[ ]+){3}([^ ]+).*[ ]+source-address="([^"]+)".*[ ]+destination-address="([^"]+)".*/\2\t\3\t\4/' file
Output:
2013-1-19T00:00:53.911 10.1.2.0 100.1.1.2
What exactly do you want?
solve the problem using any (reasonably standard) tool
solve this challenge using one instance of awk
solve the problem using just awk, no matter how many instances it costs
For the first case, you could parse the line using a scripting language of your choice (mine would be Perl), or do it the hard way using sed and a single big substitution. Or something in between: use three regexes to get the parts you want.
For the second case, you could adapt any of the former solutions, preferably the sed one. Awk and sed solutions have already been posted.
For the third case, you could just run the obvious awk solutions you mentioned in your question and send the results to a single pipe like { awk …; awk …; awk …; } < file | consumer.
Try doing this:
awk '{print gensub(/.*\s+([0-9]{4}-[0-9]+-[0-9]+T[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]+).*source-address="([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}).*destination-address="([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}).*/, "(I) \\1\n(II) \\2\n(III) \\3", "g"); }' file
Another solution, using perl:
perl -lne 'print "(", "I" x ++$c, ") $_" for m/.*?\s+(\d{4}-\d+-\d+T\d{2}:\d{2}:\d{2}.\d+).*source-address="(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}).*destination-address="(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}).*/' file
Output:
(I) 2013-1-19T00:00:53.911
(II) 10.1.2.0
(III) 100.1.1.2
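If gensub is not available (it is gawk-specific), here is a sketch in portable awk using match() and substr(), assuming the timestamp is always field 4; match() finds the leftmost occurrence, so the plain source-address/destination-address keys are picked up before the nat-* variants that appear later in the line:
awk '{
    ts = $4
    src = dst = ""
    # match() sets RSTART/RLENGTH; skip the 16-char key prefix source-address="
    # and drop the closing quote
    if (match($0, /source-address="[^"]+"/))
        src = substr($0, RSTART + 16, RLENGTH - 17)
    # same idea for the 21-char prefix destination-address="
    if (match($0, /destination-address="[^"]+"/))
        dst = substr($0, RSTART + 21, RLENGTH - 22)
    print ts "\t" src "\t" dst
}' file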

How to quote a shell variable in a TCL-expect string

I'm using the following awk command in an expect script to get the gateway for a particular destination
route | grep $dest | awk '{print $2}'
However the expect script does not like the $2 in the above statement.
Does anyone know of an alternative to awk to perform the same function as above, i.e. output the 2nd column?
You can use cut:
route | grep $dest | cut -d ' ' -f 2
That uses a space as the field delimiter and pulls out the second field. Note that cut treats every single space as a delimiter, so this only works cleanly when the columns are separated by exactly one space.
To answer your Expect question, single quotes have no special meaning to the Tcl parser. You need to use braces to protect the body of the awk script:
route | grep $dest | awk {{print $2}}
And as awk can do what grep does, you can get away with one less process:
route | awk -v d=$dest {$0 ~ d {print $2}}
Before switching to another utility, check whether changing the field separator works; see the GNU Awk documentation on field separators.
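For instance, if the columns were colon-separated, -F would set the separator (a generic illustration, not specific to route's output):
echo 'gw:192.168.0.1:eth0' | awk -F: '{print $2}'
# prints 192.168.0.1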
sed is the best alternative to use. If you don't mind a dependency, Perl should also be sufficient to solve the task.
Depending on the structure of your data, you can use either cut, or use sed to do both filtering and printing the second column.
Alternatively, you could use Perl:
perl -ne 'if(/foo/) { @_ = split(/:/); print $_[1]; }'
This will print second token of each line containing foo, with : as token separator.
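Applied to the original pipeline at a plain shell prompt, Perl's -a switch autosplits each line into @F on whitespace, so the second column is $F[1] (a sketch; inside Expect the quoting issue discussed above applies here as well):
route | grep $dest | perl -lane 'print $F[1]'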

Copy data from one database into another using bash

I need to copy data from one database into my own database. Because I want to run it as a daily cronjob, I prefer to have it in bash. I also need to store the values in variables so I can run various checks/validations on them. This is what I've got so far:
echo "SELECT * FROM table WHERE value='ABC' AND value2 IS NULL ORDER BY time" | mysql -u user -h ip db -p | sed 's/\t/,/g' | awk -F, '{print $3,$4,$5,$7 }' > Output
cat Output | while read line
do
Value1=$(awk '{print "",$1}')
Value2=$(awk '{print "",$2}')
Value3=$(awk '{print "",$3}')
Value4=$(awk '{print "",$4}')
echo "INSERT INTO db (value1,value2,value3,value4,value5) VALUES($Value1,$Value2,'$Value3',$Value4,'n')" | mysql -u rb db -p
done
I get the data I need from the database and store it in a new file separated by spaces. Then I read the file line by line and store the values in variables, and last I run an insert query with the right variables.
I think something goes wrong while storing the values, but I can't really figure out what.
None of the awk calls reads its input from $line; each reads from the loop's standard input, so the first one consumes the rest of the file. You can fix this as:
Value1=$(echo $line | awk '{print $1}')
Value2=$(echo $line | awk '{print $2}')
Value3=$(echo $line | awk '{print $3}')
Value4=$(echo $line | awk '{print $4}')
There's no reason to call awk four times in a loop. That could be very slow. If you don't need the temporary file "Output" for another reason then you don't need it at all - just pipe the output into the while loop. You may not need to use sed to change tabs into commas (you could use tr, by the way) since awk will split fields on tabs (and spaces) by default (unless your data contains spaces, but some of it seems not to).
echo "SELECT * FROM table WHERE value='ABC' AND value2 IS NULL ORDER BY time" |
mysql -u user -h ip db -p |
sed 's/\t/,/g' | # can this be eliminated?
awk -F, '{print $3,$4,$5,$7 }' | # if you eliminate the previous line then omit the -F,
while read line
do
tmparray=($line)
Value1=${tmparray[0]}
Value2=${tmparray[1]}
Value3=${tmparray[2]}
Value4=${tmparray[3]}
echo "INSERT INTO predb (value1,value2,value3,value4,value5) VALUES($Value1,$Value2,'$Value3',$Value4,'n')" | mysql -u rb db -p
done
That uses a temporary array to split the values out of the line. This is another way to do that:
set -- $line
Value1=$1
Value2=$2
Value3=$3
Value4=$4
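A third way (a sketch under the same assumptions, i.e. the four wanted columns arrive whitespace-separated) is to let read do the splitting itself; the trailing _ soaks up any extra fields:
... | while read -r Value1 Value2 Value3 Value4 _
do
    echo "INSERT INTO predb (value1,value2,value3,value4,value5) VALUES($Value1,$Value2,'$Value3',$Value4,'n')" | mysql -u rb db -p
done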