awk command not working with kubectl exec

From outside container:
$ kubectl exec -it ui-gateway-0 -- bash -c "ps -ef | grep entities_api_svc | head -1"
root 14 9 0 10:34 ? 00:00:02 /svc/bin/entities_api_svc
$ kubectl exec -it ui-gateway-0 -- bash -c "ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'"
root 14 9 0 10:34 ? 00:00:02 /svc/bin/entities_api_svc
From inside container:
[root@ui-gateway-0 /]# ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'
14

I find it easier to use single quotes on the sh/bash command argument so it is closer to what you would type in the shell:
kubectl exec -it ui-gateway-0 -- \
bash -c 'ps -ef | grep entities_api_svc | head -1 | awk "{print \$2}"'
This means the awk program uses double quotes, which requires the shell variable marker $ to be escaped so the inner shell does not expand it.
In the original command, the shell running kubectl (outside the container) expanded $2 to a zero-length string before kubectl ever ran, so awk saw only print, which prints the whole line.
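The difference is easy to reproduce locally, using echo as a stand-in for the ps pipeline:

```shell
# Outer double quotes: the calling shell expands $2 (unset, so empty)
# before bash -c runs, and awk ends up with '{print }' -> whole line.
echo 'a b c' | bash -c "awk '{print $2}'"
# -> a b c

# Outer single quotes: \$2 reaches awk intact, so it prints field 2.
echo 'a b c' | bash -c 'awk "{print \$2}"'
# -> b
```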
Multiple levels of nesting
Nested shell escaping gets very obscure very quickly and is hard to debug:
$ printf '%q\n' 'echo "single nested $var" | awk "print $2"'
echo\ \"single\ nested\ \$var\"\ \|\ awk\ \"print\ \$2\"
$ printf '%q\n' "$(printf '%q\n' 'echo "double nested $var" | awk "print $2"')"
echo\\\ \\\"double\\\ nested\\\ \\\$var\\\"\\\ \\\|\\\ awk\\\ \\\"print\\\ \\\$2\\\"
If you add a file grep-entities.sh in the container:
#!/bin/bash
set -uex -o pipefail
ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'
You then don't need to worry about escaping at all:
pid=$(sshpass -p "password" ssh vm@10.10.0.1 kubectl exec ui-gateway-0 -- /grep-entities.sh)
Also, pgrep does the script's job for you:
kubectl exec ui-gateway-0 -- pgrep entities_api_svc

Related

Ansible grep from shell variable

I am trying to create an Ansible playbook to pull out the MTU size for one exact NIC (unfortunately I have 5k VMs and this exact NIC does not have the same name on all VMs). I need to parse the IP from a file into a variable and grep by that.
The command I will use in the playbook:
/sbin/ifconfig -a | grep -C 1 $IP | grep MTU | awk '{print $5}' | cut -c 5-10
And the output should look like this:
9000
This one GNU awk command should do it:
ifconfig -a | awk -v ip="$IP" -v RS= -F'MTU:' '$0~ip {split($2,a," ");print a[1]}'
9216
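To see why RS= works here, a sketch with canned ifconfig-style output (the interface blocks below are made up for illustration):

```shell
# RS= puts awk in paragraph mode: each blank-line-separated interface
# block is one record, so the IP match and the MTU value are guaranteed
# to be in the same record.
sample='eth0  Link encap:Ethernet
  inet addr:10.0.0.5  Bcast:10.0.0.255
  UP BROADCAST RUNNING  MTU:9216  Metric:1

lo  Link encap:Local Loopback
  inet addr:127.0.0.1
  UP LOOPBACK RUNNING  MTU:65536  Metric:1'
echo "$sample" | awk -v ip="10.0.0.5" -v RS= -F'MTU:' '$0~ip {split($2,a," "); print a[1]}'
# -> 9216
```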
Other variations:
ifconfig -a | awk -v ip="$IP" 'f {split($6,a,":");print a[2];exit} $0~ip{f=1}'
ifconfig -a | awk -v ip="$IP" 'f {print substr($6,5,99);exit} $0~ip{f=1}'
9216

How to execute a remote command over ssh?

I am trying to connect to a remote server by ssh and execute a command.
Given the situation, I can only execute a single command per connection.
For example
ssh -i ~/auth/aws.pem ubuntu@server "echo 1"
It works very well, but I have a problem with the following
case1
ssh -i ~/auth/aws.pem ubuntu@server "cd /"
ssh -i ~/auth/aws.pem ubuntu@server "ls"
case2
ssh -i ~/auth/aws.pem ubuntu@server "export a=1"
ssh -i ~/auth/aws.pem ubuntu@server "echo $a"
The session is not maintained.
Of course, you can use "cd /; ls", but I can only execute one command at a time.
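The underlying issue can be reproduced locally with bash -c, since each ssh invocation likewise starts a fresh shell that forgets its state on exit:

```shell
bash -c 'cd /'         # the cwd change dies with this shell
bash -c 'pwd'          # prints the caller's cwd, not /
bash -c 'cd / && pwd'  # one shell for both commands, so the state persists
# -> /
```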
...
Reflecting the comments, I developed a bash script:
function cmd()
{
    local command_delete="$@"
    if [ -f /tmp/variables.current ]; then
        set -a
        source /tmp/variables.current
        set +a
        cd "$PWD"
    fi
    if [ ! -f /tmp/variables.before ]; then
        comm -3 <(declare | sort) <(declare -f | sort) > /tmp/variables.before
    fi
    echo $command_delete > /tmp/export_command.sh
    source /tmp/export_command.sh
    comm -3 <(declare | sort) <(declare -f | sort) > /tmp/variables.after
    diff /tmp/variables.before /tmp/variables.after \
        | sed -ne 's/^> //p' \
        | sed '/^OLDPWD/ d' \
        | sed '/^PWD/ d' \
        | sed '/^_/ d' \
        | sed '/^PPID/ d' \
        | sed '/^BASH/ d' \
        | sed '/^SSH/ d' \
        | sed '/^SHELLOPTS/ d' \
        | sed '/^XDG_SESSION_ID/ d' \
        | sed '/^FUNCNAME/ d' \
        | sed '/^command_delete/ d' \
        > /tmp/variables.current
    echo "PWD=$(pwd)" >> /tmp/variables.current
}
ssh -i ~/auth/aws.pem ubuntu@server "cmd cd /"
ssh -i ~/auth/aws.pem ubuntu@server "cmd ls"
What better solution?
$ cat <<'EOF' | ssh user@server
export a=1
echo "${a}"
EOF
Pseudo-terminal will not be allocated because stdin is not a terminal.
user@server's password:
1
In this way you send all the commands to ssh as a single script, so you can put in any number of commands. Note that the EOF delimiter is wrapped in single quotes ('EOF'), which prevents the local shell from expanding variables before the script is sent.
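The difference the quoted delimiter makes, demonstrated locally:

```shell
a=local-value
cat <<'EOF'
echo "$a"
EOF
# -> echo "$a"              (quoted delimiter: text sent verbatim)
cat <<EOF
echo "$a"
EOF
# -> echo "local-value"     (unquoted: expanded locally before sending)
```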

Executing Openstack-Rally test cases

I have Openstack installed on my Ubuntu server. I need to run all the Rally test cases. I have done the Rally deployment. Now I am able to execute a single JSON file and get the HTML and XML output.
eg:
root@ubuntu:/usr/share/rally/samples/tasks/scenarios/nova# rally task start list-images.json
This way I can execute individual JSON files only.
My requirement:
I have around 250 JSON files to be executed. How to execute them all in one shot?
What tools does the Openstack framework have to execute the entire rally cases(JSON files)?
Actually, you should not run 250 separate files; you should run one task that contains them all. Rally allows you to put as many test cases as you want in a single file. For example:
---
NovaServers.boot_and_delete_server:
  -
    args:
      flavor:
        name: "m1.tiny"
      image:
        name: "^cirros.*uec$"
      force_delete: false
    runner:
      type: "constant"
      times: 10
      concurrency: 2
    context:
      users:
        tenants: 3
        users_per_tenant: 2

NovaServers.boot_and_list_server:
  -
    args:
      flavor:
        name: "m1.tiny"
      image:
        name: "^cirros.*uec$"
      detailed: True
    runner:
      type: "constant"
      times: 1
      concurrency: 1
    context:
      users:
        tenants: 1
        users_per_tenant: 1
Take into account that Rally accepts Jinja2 templates, so you can use all the features of Jinja2, including file includes. Take a look here:
https://rally.readthedocs.io/en/latest/tutorial/step_5_task_templates.html
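If you do want to drive the individual files as-is, a simple loop works too. This is a sketch: the directory comes from the question, and it assumes each JSON file is a self-contained Rally task:

```shell
# Start each sample task file in turn (path taken from the question).
task_dir=/usr/share/rally/samples/tasks/scenarios/nova
for task in "$task_dir"/*.json; do
    rally task start "$task"
done
```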
The best way is to write a script to run all the tasks, for example:
#!/bin/bash
cd `dirname $0`
time=`date +%H:%M:%S`
mkdir -p testcase_result/$time
testcase_file=testcase_result/$time/rally_testcase.txt
total_file=testcase_result/$time/rally_total.txt
rally_task_dir=source/samples/tasks/scenarios
#keystone
keystone_case=`find $rally_task_dir/keystone -name "*.yaml"`
keystone_num=`grep -rn '\<Keystone' $keystone_case | wc -l`
echo "Keystone Testcases Number: "$keystone_num > $total_file
echo "Keystone" > $testcase_file
grep -rn '\<Keystone' $keystone_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Keystone.*\.//g' $testcase_file
#glance
glance_case=`find $rally_task_dir/glance -name "*.yaml"`
glance_num=`grep -rn '\<Glance' $glance_case | wc -l`
echo "Glance Testcases Number: "$glance_num >> $total_file
echo "" >> $testcase_file
echo "Glance" >> $testcase_file
grep -rn '\<Glance' $glance_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Glance.*\.//g' $testcase_file
#nova
nova_case=`find $rally_task_dir/nova -name "*.yaml"`
nova_num=`grep -rn '\<Nova' $nova_case | wc -l`
echo "Nova Testcases Number: "$nova_num >> $total_file
echo "" >> $testcase_file
echo "Nova" >> $testcase_file
grep -rn '\<Nova' $nova_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Nova.*\.//g' $testcase_file
#neutron
neutron_case=`find $rally_task_dir/neutron -name "*.yaml"`
neutron_num=`grep -rn '\<Neutron' $neutron_case | wc -l`
echo "Neutron Testcases Number: "$neutron_num >> $total_file
echo "" >> $testcase_file
echo "Neutron" >> $testcase_file
grep -rn '\<Neutron' $neutron_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Neutron.*\.//g' $testcase_file
#cinder
cinder_case=`find $rally_task_dir/cinder -name "*.yaml"`
cinder_num=`grep -rn '\<Cinder' $cinder_case | wc -l`
echo "Cinder Testcases Number: "$cinder_num >> $total_file
echo "" >> $testcase_file
echo "Cinder" >> $testcase_file
grep -rn '\<Cinder' $cinder_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Cinder.*\.//g' $testcase_file
#total
let total=$keystone_num+$glance_num+$nova_num+$neutron_num+$cinder_num
echo "Total Testcases Number: $total" >> $total_file
sed -i 's/:$//' $testcase_file
# Run Scripts tests
cd testcase_result/$time
for i in ../../rally_scripts/*.sh
do
    bash "$i"
done

A script to change file names

I am new to awk and shell-based programming. I have a bunch of files named file_0001.dat, file_0002.dat, ..., file_1000.dat. I want to rename them so that the number after file_ is 4 times the number in the original file name. So I want to change
file_0001.dat to file_0004.dat
file_0002.dat to file_0008.dat
and so on.
Can anyone suggest a simple script to do it? I have tried the following, without any success.
#!/bin/bash
a=$(echo $1 | sed -e 's:file_::g' -e 's:.dat::g')
b=$(echo "${a}*4" | bc)
shuf file_${a}.dat > file_${b}.dat
This script will do that trick for you:
#!/bin/bash
for i in `ls -r *.dat`; do
    a=`echo $i | sed 's/file_//g' | sed 's/\.dat//g'`
    almost_b=`bc -l <<< "$a*4"`
    b=`printf "%04d" $almost_b`
    rename "s/$a/$b/g" $i
done
Files before:
file_0001.dat file_0002.dat
Files after first execution:
file_0004.dat file_0008.dat
Files after second execution:
file_0016.dat file_0032.dat
Here's a pure bash way of doing it (without bc, rename or sed).
#!/bin/bash
for i in $(ls -r *.dat); do
    prefix="${i%%_*}_"
    oldnum="${i//[^0-9]/}"
    newnum="$(printf "%04d" $(( 10#$oldnum * 4 )))"
    mv "$i" "${prefix}${newnum}.dat"
done
To test it you can do
mkdir tmp && cd $_
touch file_{0001..1000}.dat
(paste code into convert.sh)
chmod +x convert.sh
./convert.sh
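The 10#$oldnum is the subtle part: in arithmetic expansion, bash treats numbers with a leading zero as octal unless a base is forced. A quick demonstration:

```shell
echo $(( 0010 * 4 ))     # -> 32: leading zero means octal, so this is 8*4
echo $(( 10#0010 * 4 ))  # -> 40: the 10# prefix forces base 10
# Without 10#, a name like file_0008.dat would even abort with
# "value too great for base", since 08 is not a valid octal number.
```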
Using bash/sed/find:
files=$(find -name 'file_*.dat' | sort -r)
for file in $files; do
    n=$(sed 's/[^_]*_0*\([^.]*\).*/\1/' <<< "$file")
    let n*=4
    nfile=$(printf "file_%04d.dat" "$n")
    mv "$file" "$nfile"
done
ls -r1 | awk -F '[_.]' '{printf "%s %s_%04d.%s\n", $0, $1, 4*$2, $3}' | xargs -n2 mv
ls -r1 lists the files in reverse order to avoid renaming conflicts.
The awk part generates the old and new filenames as a pair. For example, file_0002.dat becomes: file_0002.dat file_0008.dat
xargs -n2 passes two arguments at a time to mv.
This might work for you:
paste <(seq -f'mv file_%04g.dat' 1000) <(seq -f'file_%04g.dat' 4 4 4000) |
sort -r |
sh
This can help:
#!/bin/bash
for i in `cat /path/to/requestedfiles | grep -o '[0-9]*'`; do
    count=`bc -l <<< "$i*4"`
    echo $count
done

Counting number of processes using AWK

I'm trying to come up with a command using AWK which will list all processes along with the number of running instances of each:
I'm using following command
ps axo pid,command | awk -F/ '{print $1, $4}'
and I'm getting the following result:
1727 sshd
1807 httpd
1834 abrtd
1842 abrt-dump-oops -d abrt -rwx
1848 httpd
1849 httpd
1879 gpm -m
I want to modify the above command so that it displays the instance count along with each process, something as follows:
1 1727 sshd
3 1807 httpd
1 1834 abrtd
1 1842 abrt-dump-oops -d abrt -rwx
1 1879 gpm -m
In fact, I want to kill any process running more than 5 instances, no matter what process it is.
This is not awk but it should produce the output you want.
ps axo pid,command | sort -k2 | uniq -c -f 1
Try this:
ps -eo command | grep -v COMMAND | awk '{count[$0]++}END{for(j in count) print count[j] ,j}' | sort -rn | head
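For the kill-anything-over-5-instances requirement, a hedged sketch that counts by command name (the pkill step is left commented out so nothing is killed by accident):

```shell
# Count instances per command name and print the names with more than 5.
ps -eo comm= | sort | uniq -c | awk '$1 > 5 {print $2}'
# ... | xargs -r pkill -x    # uncomment to actually kill them (dangerous)
```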