How to execute a remote command over ssh?

I am trying to connect to a remote server over ssh and execute a command there. Given my situation, I can only execute a single command per ssh invocation.
For example:
ssh -i ~/auth/aws.pem ubuntu@server "echo 1"
This works very well, but I have a problem with the following cases.
Case 1:
ssh -i ~/auth/aws.pem ubuntu@server "cd /"
ssh -i ~/auth/aws.pem ubuntu@server "ls"
Case 2:
ssh -i ~/auth/aws.pem ubuntu@server "export a=1"
ssh -i ~/auth/aws.pem ubuntu@server "echo $a"
The session is not maintained between invocations.
Of course I could use "cd /; ls",
but I can only execute one command at a time.
...
Update, reflecting the comments: I developed a bash script.
function cmd()
{
    local command_delete="$@"
    # Restore the variables and working directory saved by the previous invocation
    if [ -f /tmp/variables.current ]; then
        set -a
        source /tmp/variables.current
        set +a
        cd "$PWD"
    fi
    # Snapshot the variables (excluding functions) before running the command
    if [ ! -f /tmp/variables.before ]; then
        comm -3 <(declare | sort) <(declare -f | sort) > /tmp/variables.before
    fi
    echo "$command_delete" > /tmp/export_command.sh
    source /tmp/export_command.sh
    # Snapshot again and persist only what the command changed, minus shell-internal variables
    comm -3 <(declare | sort) <(declare -f | sort) > /tmp/variables.after
    diff /tmp/variables.before /tmp/variables.after \
        | sed -ne 's/^> //p' \
        | sed '/^OLDPWD/ d' \
        | sed '/^PWD/ d' \
        | sed '/^_/ d' \
        | sed '/^PPID/ d' \
        | sed '/^BASH/ d' \
        | sed '/^SSH/ d' \
        | sed '/^SHELLOPTS/ d' \
        | sed '/^XDG_SESSION_ID/ d' \
        | sed '/^FUNCNAME/ d' \
        | sed '/^command_delete/ d' \
        > /tmp/variables.current
    echo "PWD=$(pwd)" >> /tmp/variables.current
}
ssh -i ~/auth/aws.pem ubuntu@server "cmd cd /"
ssh -i ~/auth/aws.pem ubuntu@server "cmd ls"
Is there a better solution?

$ cat <<'EOF' | ssh user@server
export a=1
echo "${a}"
EOF
Pseudo-terminal will not be allocated because stdin is not a terminal.
user@server's password:
1
In this way you send all the commands to ssh as a single script, so you can include any number of commands. Note the single quotes around EOF ('EOF'): they prevent the local shell from expanding variables such as ${a} before the script is sent to the remote host.
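A related sketch (my own, not from the answer): the same idea works with a local script file fed to a remote shell, so cd and export stay in effect for the whole script; the file name commands.sh is hypothetical.
# commands.sh might contain "cd /" followed by "ls"; one remote shell runs all of it
ssh -i ~/auth/aws.pem ubuntu@server 'bash -s' < commands.sh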

Related

How can I access a VPN inside a VMWare Fusion VM

I have a VPN connection in MacOS BigSur but I can't access it inside a Linux VM running under VMWare Fusion V12.1.2.
The issue has been fixed in V12.2.0; see the VMWare Fusion 12.2.0 Release Notes.
The solution is to manually create the VPN tunnel and link it to the VM. As there are multiple commands involved and the IP address can change, I created the following script to execute the required commands.
#!/bin/bash
function ask_yes_or_no() {
    read -p "$1 ([y]es or [N]o): "
    case $(echo $REPLY | tr '[A-Z]' '[a-z]') in
        y|yes) echo "yes" ;;
        *)     echo "no" ;;
    esac
}
# Grab the NAT rules that Internet Sharing currently has loaded
currNatRules=$(sudo pfctl -a com.apple.internet-sharing/shared_v4 -s nat 2>/dev/null)
if test -z "$currNatRules"
then
    echo -e "\nThere are currently no NAT rules loaded\n"
    exit 0
fi
utunCheck=$(echo $currNatRules | grep utun)
if test -n "$utunCheck"
then
    echo -e "\nIt looks like the VPN tunnel utun2 has already been created"
    echo -e "\n$currNatRules\n"
    if [[ "no" == $(ask_yes_or_no "Do you want to continue?") ]]
    then
        echo -e "\nExiting\n"
        exit 0
    fi
fi
# Extract the NAT CIDR and the interface the VPN route uses
natCIDR=$(echo $currNatRules | grep en | grep nat | cut -d' ' -f 6)
if test -z "$natCIDR"
then
    echo -e "\nCannot extract the NAT CIDR from:"
    echo -e "\n$currNatRules\n"
    exit 0
fi
interface=$(route get 10/8 | grep interface | cut -d' ' -f 4)
echo -e "\nNAT CIDR=$natCIDR Interface=$interface\n"
newRule="nat on ${interface} inet from ${natCIDR} to any -> (${interface}) extfilter ei"
echo -e "\nAdding new rule: $newRule\n"
# Rewrite the anchor with the existing rules plus the new VPN rule
configFile="fixnat_rules.conf"
[[ -f $configFile ]] && rm $configFile
echo "$currNatRules" > $configFile
echo "$newRule" >> $configFile
sudo pfctl -a com.apple.internet-sharing/shared_v4 -N -f ${configFile} 2>/dev/null
echo -e "\nConfig update applied\n"
sudo pfctl -a com.apple.internet-sharing/shared_v4 -s nat 2>/dev/null
echo -e "\n"
exit 0
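A usage sketch (the file name fixnat.sh is my assumption, not from the original answer):
chmod +x fixnat.sh
./fixnat.sh   # re-run whenever the VM or VPN is restarted, since the IP address can change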

awk command not working with kubectl exec

From outside the container:
$ kubectl exec -it ui-gateway-0 -- bash -c "ps -ef | grep entities_api_svc | head -1"
root 14 9 0 10:34 ? 00:00:02 /svc/bin/entities_api_svc
$ kubectl exec -it ui-gateway-0 -- bash -c "ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'"
root 14 9 0 10:34 ? 00:00:02 /svc/bin/entities_api_svc
From inside the container:
[root@ui-gateway-0 /]# ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'
14
I find it easier to use single quotes on the sh/bash command argument so it is closer to what you would type in the shell:
kubectl exec -it ui-gateway-0 -- \
bash -c 'ps -ef | grep entities_api_svc | head -1 | awk "{print \$2}"'
This means the awk uses double quotes, which requires the shell variable marker $ to be escaped.
In the original command, the shell running kubectl was replacing $2 with a zero-length string, so awk would see only {print }, which prints the whole line.
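You can reproduce the substitution locally (a small sketch, not from the original answer): echo the double-quoted command string and watch the outer shell expand $2 before awk ever sees it.
$ echo "awk '{print $2}'"
awk '{print }'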
Multiple levels of nesting
Nested shell escaping gets very obscure very quickly and hard to debug:
$ printf '%q\n' 'echo "single nested $var" | awk "print $2"'
echo\ \"single\ nested\ \$var\"\ \|\ awk\ \"print\ \$2\"
$ printf '%q\n' "$(printf '%q\n' 'echo "double nested $var" | awk "print $2"')"
echo\\\ \\\"double\\\ nested\\\ \\\$var\\\"\\\ \\\|\\\ awk\\\ \\\"print\\\ \\\$2\\\"
If you add a file grep-entities.sh in the container:
#!/bin/bash
set -uex -o pipefail
ps -ef | grep entities_api_svc | head -1 | awk '{print $2}'
you then don't need to worry about escaping:
pid=$(sshpass -p "password" ssh vm@10.10.0.1 kubectl exec ui-gateway-0 -- /grep-entities.sh)
Also, pgrep does the script's job for you:
kubectl exec ui-gateway-0 -- pgrep entities_api_svc
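For example, to capture the PID into a shell variable (a sketch; the variable name is my own):
pid=$(kubectl exec ui-gateway-0 -- pgrep entities_api_svc)
echo "$pid"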

How to suppress "killed by signal 1." error on ssh jump connection

I am using scp and ssh connections with the following commands, yet I keep getting "Killed by signal 1." errors.
scp example:
$ scp -q '-oProxyCommand=ssh -W %h:%p {user}@{jump_server}' /path/file.txt {user}@{server2}:/tmp/
Killed by signal 1.
ssh example:
$ ssh -A -J {user}@{jump_server} -q -o BatchMode=yes -o ServerAliveInterval=10 {user}@{server2} 'ps -ef | grep mysql | wc -l 2>&1'
2
Killed by signal 1.
I tried using -t:
$ ssh -t -A -J {user}@{jump_server} -t -q -o BatchMode=yes -o ServerAliveInterval=10 {user}@{server2} 'ps -ef | grep mysql | wc -l 2>&1'
2
Killed by signal 1.
I tried using LogLevel:
$ ssh -o LogLevel=QUIET -A -J {user}@{jump_server} -q -o BatchMode=yes -o ServerAliveInterval=10 -o LogLevel=QUIET {user}@{server2} 'ps -ef | grep mysql | wc -l 2>&1'
2
Killed by signal 1.
I tried using the ProxyCommand option:
$ ssh -q -oProxyCommand="ssh -W %h:%p {user}@{jump_server}" -q -o BatchMode=yes -o ServerAliveInterval=10 {user}@{server2} 'ps -ef | grep mysql | wc -l 2>&1'
2
Killed by signal 1.
How do I suppress this error message on the command line in a bash script?
Add 2>/dev/null to the ssh command:
$ ssh -q -oProxyCommand="ssh -W %h:%p {user}@{jump_server}" 2>/dev/null
If you need to see errors you could go through sed:
$ ssh ... 2>&1 | sed '/^Killed by signal.*/d'
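If you only want to drop that one message while keeping any other stderr output separate from stdout, a process-substitution sketch (my own, not from the answer) filters stderr alone:
$ ssh ... 2> >(grep -v '^Killed by signal' >&2)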

Executing Openstack-Rally test cases

I have OpenStack installed on my Ubuntu server and I need to run all the Rally test cases. I have done the Rally deployment, and I can now execute a single JSON file and get the HTML and XML output.
For example:
root@ubuntu:/usr/share/rally/samples/tasks/scenarios/nova# rally task start list-images.json
This way I can only execute individual JSON files.
My requirement:
I have around 250 JSON files to execute. How can I execute them all in one shot?
What tools does the OpenStack framework provide to execute all the Rally cases (JSON files)?
Actually, you should not want to run 200 separate files; you want to run one task that contains them all. Rally allows you to put as many test cases as you want in a single file. For example:
---
  NovaServers.boot_and_delete_server:
    -
      args:
        flavor:
          name: "m1.tiny"
        image:
          name: "^cirros.*uec$"
        force_delete: false
      runner:
        type: "constant"
        times: 10
        concurrency: 2
      context:
        users:
          tenants: 3
          users_per_tenant: 2

  NovaServers.boot_and_list_server:
    -
      args:
        flavor:
          name: "m1.tiny"
        image:
          name: "^cirros.*uec$"
        detailed: True
      runner:
        type: "constant"
        times: 1
        concurrency: 1
      context:
        users:
          tenants: 1
          users_per_tenant: 1
Take into account that Rally accepts jinja2 templates, so you can use all the features of jinja2, including file include options. Take a look here:
https://rally.readthedocs.io/en/latest/tutorial/step_5_task_templates.html
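For instance, splitting the scenarios across several files and assembling them in one task template might look like this (a sketch of my own; it assumes jinja2's {% include %} is resolved relative to the task file, which I have not verified for every Rally version, and the included file names are hypothetical):
---
{% include "keystone_tasks.yaml" %}
{% include "nova_tasks.yaml" %}
{% include "cinder_tasks.yaml" %}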
The best way is to write a script to run all the tasks. For example:
#!/bin/bash
cd `dirname $0`
time=`date +%H:%M:%S`
mkdir -p testcase_result/$time
testcase_file=testcase_result/$time/rally_testcase.txt
total_file=testcase_result/$time/rally_total.txt
rally_task_dir=source/samples/tasks/scenarios
#keystone
keystone_case=`find $rally_task_dir/keystone -name "*.yaml"`
keystone_num=`grep -rn '\<Keystone' $keystone_case | wc -l`
echo "Keystone Testcases Number: "$keystone_num > $total_file
echo "Keystone" > $testcase_file
grep -rn '\<Keystone' $keystone_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Keystone.*\.//g' $testcase_file
#glance
glance_case=`find $rally_task_dir/glance -name "*.yaml"`
glance_num=`grep -rn '\<Glance' $glance_case | wc -l`
echo "Glance Testcases Number: "$glance_num >> $total_file
echo "" >> $testcase_file
echo "Glance" >> $testcase_file
grep -rn '\<Glance' $glance_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Glance.*\.//g' $testcase_file
#nova
nova_case=`find $rally_task_dir/nova -name "*.yaml"`
nova_num=`grep -rn '\<Nova' $nova_case | wc -l`
echo "Nova Testcases Number: "$nova_num >> $total_file
echo "" >> $testcase_file
echo "Nova" >> $testcase_file
grep -rn '\<Nova' $nova_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Nova.*\.//g' $testcase_file
#neutron
neutron_case=`find $rally_task_dir/neutron -name "*.yaml"`
neutron_num=`grep -rn '\<Neutron' $neutron_case | wc -l`
echo "Neutron Testcases Number: "$neutron_num >> $total_file
echo "" >> $testcase_file
echo "Neutron" >> $testcase_file
grep -rn '\<Neutron' $neutron_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Neutron.*\.//g' $testcase_file
#cinder
cinder_case=`find $rally_task_dir/cinder -name "*.yaml"`
cinder_num=`grep -rn '\<Cinder' $cinder_case | wc -l`
echo "Cinder Testcases Number: "$cinder_num >> $total_file
echo "" >> $testcase_file
echo "Cinder" >> $testcase_file
grep -rn '\<Cinder' $cinder_case | awk '{print NR":",$2}' >> $testcase_file
sed -i 's/Cinder.*\.//g' $testcase_file
#total
let total=$keystone_num+$glance_num+$nova_num+$neutron_num+$cinder_num
echo "Total Testcases Number: $total" >> $total_file
sed -i 's/:$//' $testcase_file
# Run Scripts tests
cd testcase_result/$time
for i in ../../rally_scripts/*.sh
do
    bash "$i"
done
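If all you need is to fire off every sample JSON one after another, a simpler sketch along the same lines (it just loops over the directory shown in the question):
#!/bin/bash
# run every nova sample task sequentially; each run produces its own Rally task
for task in /usr/share/rally/samples/tasks/scenarios/nova/*.json; do
    rally task start "$task"
done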

ssh on remote server to kill process, pushd script folder and execute the script is not working

I have a requirement to kill a remote process matching a specific pattern, pushd to the startup path, and execute the script.
So far I have tried:
pid=$(ssh -q username@virt ps -ef|grep $APP|grep $PORT|awk '{print $2}')
ssh -q username@virt kill -9 $pid
ssh -q username@virt "find /shared/local/path1/app -name "start_app*" -exec grep -nl "9122" {} \;| xargs -0 -I '{}' bash -c 'pushd $(dirname {});bash {};'"
When I execute the above commands, the kill step works fine. The final step, finding the script file and executing it after pushd-ing to its folder, is not working.
For some reason the pushd does not work.
The command does work fine on the local server:
find /shared/local/path1/app -name "start_app*" -exec grep -nl "9122" {} \;| xargs -0 -I '{}' bash -c 'pushd $(dirname {});bash {};'
Please help me find a more effective solution to accomplish this task.
You have an error in the quotes here:
ssh -q username@virt "find /shared/local/path1/app -name "start_app*" -exec grep -nl "9122" {} \;| xargs -0 -I '{}' bash -c 'pushd $(dirname {});bash {};'"
Try this instead:
ssh -q username@virt "find /shared/local/path1/app -name 'start_app*' -exec grep -nl '9122' {} \;| xargs -0 -I '{}' bash -c 'pushd \$(dirname {});bash {};'"
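If the nested quoting keeps getting in the way, another option (my own sketch, not part of the original answer) is to feed the whole pipeline to a remote shell on stdin, so nothing has to be escaped for the local shell; note it uses a plain while-read loop instead of xargs -0, since find's output here is newline-delimited rather than NUL-delimited:
ssh -q username@virt 'bash -s' <<'EOF'
find /shared/local/path1/app -name "start_app*" -exec grep -l "9122" {} \; |
while read -r script; do
    pushd "$(dirname "$script")" && bash "$script" && popd
done
EOF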