How to insert an argument into an awk script? - awk

I'm writing a shell script that shuts down some services, and I'm trying to get the service's PID with the following awk script.
However, this awk script can't find the PID. What's wrong with it?
ps -ef | awk -v port_no=10080 '/[m]ilk.*port=port_no/{print $2}'
The result of ps -ef is like this:
username 13155 27705 0 16:06 pts/2 00:00:00 /home/username/.rbenv/versions/2.3.6/bin/ruby /home/username/.rbenv/versions/2.3.6/bin/milk web --no-browser --host=example.com --port=10080
The same program also runs with a different port argument, so I want to kill only the process running with port=10080.
The awk script below works fine, but when I pass the port number with awk -v as above, it doesn't work.
ps -ef | awk '/[m]ilk.*port=10080/{print $2}'
awk version: GNU Awk 4.0.2

The /../ pattern-matching syntax does not expand variables inside the regular expression; awk treats port_no there as literal text. You need to use the ~ operator and build the regex as a string instead.
awk -v port_no=10080 '$0 ~ "[m]ilk.*port="port_no{print $2}'
Note that the regex string on the right-hand side of ~ is inside double quotes, except for the variable holding the port number, which must stay outside the quotes so that its value is concatenated into the pattern.
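Putting it together, a minimal sketch of finding and killing the matching process (the "milk" service name and port are taken from the question; the variable names are illustrative):
PORT=10080
pids=$(ps -ef | awk -v port_no="$PORT" '$0 ~ "[m]ilk.*port="port_no {print $2}')
# unquoted on purpose: there may be more than one PID
[ -n "$pids" ] && kill $pids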

This task is easily accomplished using pgrep:
$ pgrep -f '[m]ilk.*port=10080'
Have a look at man pgrep for details.
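If the goal is to kill the process rather than just print its PID, pkill accepts the same -f pattern (a sketch using the port from the question):
pkill -f '[m]ilk.*port=10080'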

Related

Running an awk command with $SHELL -c returns different results

I am trying to use awk to print the unique lines returned by a command. For simplicity, assume the command is ls -alh.
If I run the following command in my Z shell, awk shows all lines printed by ls -alh
ls -alh | awk '!seen[$0]++'
However, if I run the same command with $SHELL -c while escaping the ! with backslash, I only see the first line of the output printed.
$SHELL -c "ls -alh | awk '\!seen[$0]++'"
How can I ensure the latter command prints the exact same outputs as the former?
EDIT 1:
I initially thought the ! could be the issue. But changing the expression '!seen[$0]++' to 'seen[$0]++==0' has the same problem.
EDIT 2:
It looks like I should have escaped $ too. Since I do not know the reason behind it, I will not post an answer.
In the second form, $0 is being treated as a shell variable in the double-quoted string. The substitution creates an interestingly mangled awk command:
> print $SHELL -c "ls -alh | awk '\!seen[$0]++'"
/bin/zsh -c ls -alh | awk '!seen[-zsh]++'
The variable is not substituted in the first form since it is inside single quotes.
This answer discusses how single- and double-quoted strings are treated in bash and zsh:
Difference between single and double quotes in Bash
Escaping the $ so that $0 is passed to awk should work, but note that quoting in commands that are parsed multiple times can get really tricky.
> print $SHELL -c "ls -alh | awk '\!seen[\$0]++'"
/bin/zsh -c ls -alh | awk '!seen[$0]++'
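One way to sidestep the nested escaping entirely (a sketch, not part of the original answer) is to keep the awk program in a shell variable, so the outer command line never contains a literal ! or $ that needs escaping:
prog='!seen[$0]++'   # single quotes: nothing expands here
$SHELL -c "ls -alh | awk '$prog'"   # $prog expands in the outer shell; the inner shell gets the program verbatim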

Pass shell variable into awk's pattern search

In a script I want to search for connections established between specific systems, on ports gathered with another command and stored in the PORT variable.
The PORT variable is passed to awk using -v p=${PORT},
but I don't know how to use "p" inside the rest of the pattern.
This does not work:
$ lsof -i -P|awk -vp=${PORT} '$(NF-1)~/vm7.+:'$p'->(vm9|vm11).+ESTABLISHED/{print $(NF-1)}'
$ lsof -i -P|awk -vp=${PORT} '$(NF-1)~/vm7.+:'p'->(vm9|vm11).+ESTABLISHED/{print $(NF-1)}'
give this a try:
awk -v p="$PORT" '{pat="yourHost(or whatever):"p}$(NF-1)~pat{print $(NF-1)}'
Build the pattern (pat) with p, then match it against the field $(NF-1).
You don't need (and shouldn't have) ESTABLISHED in the pattern, since it lives in the last field $NF, not in $(NF-1).
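Putting it together with lsof, a minimal sketch (assuming the vm7/vm9/vm11 host names from the question):
lsof -i -P | awk -v p="$PORT" '{pat="vm7.+:" p "->(vm9|vm11)"} $(NF-1)~pat {print $(NF-1)}'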
Use match:
$ awk -v p=$port 'match($(NF-1),"vm7.+:" p "->(vm9|vm11)"){print $(NF-1)}'
There might be some errors as there was no test material. Removed the ESTABLISHED as it is in $NF, not $(NF-1) (in my systems, at least).
... or don't:
$ awk -v p=$port '$(NF-1) ~ "vm7.+:" p "->(vm9|vm11)" {print $(NF-1)}'
Today I learned something.

"awk" Command Behaves Differently On SuSE 11 vs. Solaris 10

Friends,
I'm trying to extract the last part of the following path in a ksh script:
TOOL_HOME=/export/fapps/mytool/mytool-V2-3-4
I want to extract the version # (i.e., 2-3-4) from the above.
awk runs fine on SuSE:
echo $TOOL_HOME | awk -F'mytool-V' '{print $2}'
#2-3-4
However, on Solaris 10, it produces the following:
#ytool
So on Solaris, awk ignores everything after the first character of -F'mytool-V'.
What should I do to get the same output on both OSes?
On Solaris use /usr/xpg4/bin/awk, not /bin/awk (aka "old, broken awk").
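For example, the command from the question works unchanged with the POSIX awk (a sketch, using the path named above):
echo "$TOOL_HOME" | /usr/xpg4/bin/awk -F'mytool-V' '{print $2}'
#2-3-4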
Solaris awk is broken...
$ echo "$TOOL_HOME" | awk '{sub(/.*mytool-V/,"")}1'
2-3-4
or simply with sed
$ echo "$TOOL_HOME" | sed 's/.*mytool-V//'
2-3-4
No need to use awk or any other external program. ksh can do that:
echo ${TOOL_HOME##*mytool-V}
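For example, inside a ksh script (a minimal sketch using the path from the question):
TOOL_HOME=/export/fapps/mytool/mytool-V2-3-4
VERSION=${TOOL_HOME##*mytool-V}
echo "$VERSION"
#2-3-4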

How to put this command in a Makefile?

I have the following command I want to execute in a Makefile but I'm not sure how.
The command is docker rmi -f $(docker images | grep "<none>" | awk "{print \$3}")
The command between $(..) should produce output that is fed to docker rmi, but it is not working from within the Makefile. I think that's because $ has a special meaning in Makefiles, but I'm not sure how to adapt the command.
Any ideas?
$ in Makefiles needs to be doubled to prevent substitution by make:
docker rmi -f $$(docker images | grep "<none>" | awk "{print \$$3}")
Also, it'd be simpler to use a single-quoted string in the awk command to prevent expansion of $3 by the shell:
docker rmi -f $$(docker images | grep "<none>" | awk '{print $$3}')
I really recommend the latter. It's usually better to have awk code in single quotes because it tends to contain a lot of $s, and all the backslashes hurt readability.
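In context, a Makefile rule might look like this (a sketch; the target name is illustrative, and recipe lines must start with a tab):
clean-dangling:
	docker rmi -f $$(docker images | grep "<none>" | awk '{print $$3}')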

Nginx Version + AWK Not Working

Today I've noticed that I can't use "awk" on "nginx -v".
I've tried running this command: nginx -v | awk -F/ '{print $2}'
This should have produced output like this: nginx/1.4.3
But instead it gives me nginx version: nginx/1.4.3
Any idea why it behaves this way!?
Also, you can't redirect the output to a file by running: nginx -v > file.txt
nginx must be writing that message to standard error, not standard output. If you want to pipe it, you have to redirect stderr to stdout:
nginx -v 2>&1 | awk -F/ '{print $2}'
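The same reasoning applies to the note about files: the message goes to stderr, so redirect that stream to capture it (a sketch):
nginx -v 2> file.txt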