How to suppress the "Killed by signal 1." error on an ssh jump connection

I am using scp and ssh connections with the following commands and yet keep getting "Killed by signal 1." errors.
scp example:
$ scp -q '-oProxyCommand=ssh -W %h:%p {user}@{jump_server}' /path/file.txt {user}@{server2}:/tmp/
Killed by signal 1.
ssh example:
$ ssh -A -J {user}@{jump_server} -q -o BatchMode=yes -o ServerAliveInterval=10 {user}@{server2} 'ps -ef | grep mysql | wc -l 2>&1'
2
Killed by signal 1.
I tried using -t:
$ ssh -t -A -J {user}@{jump_server} -t -q -o BatchMode=yes -o ServerAliveInterval=10 {user}@{server2} 'ps -ef | grep mysql | wc -l 2>&1'
2
Killed by signal 1.
I tried using LogLevel:
$ ssh -o LogLevel=QUIET -A -J {user}@{jump_server} -q -o BatchMode=yes -o ServerAliveInterval=10 -o LogLevel=QUIET {user}@{server2} 'ps -ef | grep mysql | wc -l 2>&1'
2
Killed by signal 1.
I tried using the ProxyCommand option:
$ ssh -q -oProxyCommand="ssh -W %h:%p {user}@{jump_server}" -q -o BatchMode=yes -o ServerAliveInterval=10 {user}@{server2} 'ps -ef | grep mysql | wc -l 2>&1'
2
Killed by signal 1.
How do I suppress this error message on the command line in a bash script?

Add 2>/dev/null to the ssh command:
$ ssh -q -oProxyCommand="ssh -W %h:%p {user}@{jump_server}" 2>/dev/null

If you still need to see other errors, you can filter the message out with sed:
$ ssh ... 2>&1 | sed '/^Killed by signal.*/d'
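If you would rather silence the message at its source: it is printed by the inner ProxyCommand ssh (the jump connection) when it is hung up at session close, and the outer -q does not apply to that inner process. Passing -q to the inner ssh should therefore also work (an assumption worth testing in your setup):
$ ssh -oProxyCommand="ssh -q -W %h:%p {user}@{jump_server}" -o BatchMode=yes {user}@{server2} 'ps -ef | grep mysql | wc -l'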

Related

How to execute a remote command over ssh?

I connect to the remote server by ssh and execute a command there. But given the situation, I can only execute a single command per ssh invocation.
For example
ssh -i ~/auth/aws.pem ubuntu@server "echo 1"
It works very well, but I have a problem with the following cases:
case1
ssh -i ~/auth/aws.pem ubuntu@server "cd /"
ssh -i ~/auth/aws.pem ubuntu@server "ls"
case2
ssh -i ~/auth/aws.pem ubuntu@server "export a=1"
ssh -i ~/auth/aws.pem ubuntu@server "echo $a"
The session is not maintained: each ssh invocation starts a fresh shell on the server, so the cd and the exported variable are gone by the next call.
Of course, you can use "cd /; ls", but I can only execute one command at a time.
...
Reflecting the comments, I developed a bash script:
function cmd()
{
    local command_delete="$@"
    # Restore the variables and working directory saved by the previous call
    if [ -f /tmp/variables.current ]; then
        set -a
        source /tmp/variables.current
        set +a
        cd "$PWD"
    fi
    # Snapshot the environment before running the command (first call only)
    if [ ! -f /tmp/variables.before ]; then
        comm -3 <(declare | sort) <(declare -f | sort) > /tmp/variables.before
    fi
    # Run the requested command in the current shell
    echo "$command_delete" > /tmp/export_command.sh
    source /tmp/export_command.sh
    # Save whatever the command changed, filtering out shell internals
    comm -3 <(declare | sort) <(declare -f | sort) > /tmp/variables.after
    diff /tmp/variables.before /tmp/variables.after \
        | sed -ne 's/^> //p' \
        | sed -e '/^OLDPWD/d' \
              -e '/^PWD/d' \
              -e '/^_/d' \
              -e '/^PPID/d' \
              -e '/^BASH/d' \
              -e '/^SSH/d' \
              -e '/^SHELLOPTS/d' \
              -e '/^XDG_SESSION_ID/d' \
              -e '/^FUNCNAME/d' \
              -e '/^command_delete/d' \
        > /tmp/variables.current
    echo "PWD=$(pwd)" >> /tmp/variables.current
}
ssh -i ~/auth/aws.pem ubuntu@server "cmd cd /"
ssh -i ~/auth/aws.pem ubuntu@server "cmd ls"
What would be a better solution?
$ cat <<'EOF' | ssh user@server
export a=1
echo "${a}"
EOF
Pseudo-terminal will not be allocated because stdin is not a terminal.
user@server's password:
1
In this way you send all the commands to ssh as a single script, so you can put in any number of commands. Please note the quoted 'EOF': the single quotes prevent the local shell from expanding ${a} before the script is sent.
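The same idea works when the commands live in a local file: feed the file to a shell spawned on the remote side. A minimal sketch (local_script.sh is a hypothetical file name):
$ ssh -i ~/auth/aws.pem ubuntu@server 'bash -s' < local_script.sh
Everything still runs over a single connection in a single shell, so cd and variable assignments inside the script behave as expected.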

Shared connection to server closed

When I run this:
$ ansible -i s1, s1 -m raw -a 'echo test' -u root -k
I get:
s1 | SUCCESS | rc=0 >>
test
Shared connection to s1 closed.
But this way:
$ ansible -i s1, s1 -m command -a 'echo test' -u root -k
I don't get the "Shared connection to s1 closed." part:
s1 | SUCCESS | rc=0 >>
test
Why is that?
P.S. The above is a simplified way to reproduce the issue. What I'm facing is that when running a playbook I get this extra line, which gets in the way.
UPD: The line is clearly coming from ssh. And if I run the raw command with -vvvv, I get:
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
<s1> ESTABLISH SSH CONNECTION FOR USER: root
<s1> SSH: EXEC sshpass -d13 ssh -vvv -C -o ControlMaster=auto
-o ControlPersist=60s -o User=root -o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r -tt s1
'echo test'
s1 | SUCCESS | rc=0 >>
test
OpenSSH_7.4p1, OpenSSL 1.0.2k 26 Jan 2017
debug1: Reading configuration data /home/yuri/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/yuri/.ansible/cp/ansible-ssh-s1-22-root" does not exist
<...a lot of output from ssh...>
But with command, it's just:
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from
/usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
Using module file
/usr/lib/python2.7/site-packages/ansible/modules/core/commands/command.py
<s1> ESTABLISH SSH CONNECTION FOR USER: root
<s1> SSH: EXEC sshpass -d13 ssh -vvv -C -o ControlMaster=auto
-o ControlPersist=60s -o User=root -o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r s1
'/bin/sh -c '"'"'(
umask 77
&& mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737 `"
&& echo ansible-tmp-1488989540.6-73006073289737="` echo ~/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737 `"
) && sleep 0'"'"''
<s1> PUT /tmp/tmpes82wL TO
/root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/command.py
<s1> SSH: EXEC sshpass -d13 sftp -o BatchMode=no -b - -vvv -C
-o ControlMaster=auto -o ControlPersist=60s -o User=root
-o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r '[s1]'
<s1> ESTABLISH SSH CONNECTION FOR USER: root
<s1> SSH: EXEC sshpass -d13 ssh -vvv -C -o ControlMaster=auto
-o ControlPersist=60s -o User=root -o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r s1
'/bin/sh -c '"'"'
chmod u+x /root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/ /root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/command.py
&& sleep 0'"'"''
<s1> ESTABLISH SSH CONNECTION FOR USER: root
<s1> SSH: EXEC sshpass -d13 ssh -vvv -C -o ControlMaster=auto
-o ControlPersist=60s -o User=root -o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r -tt s1
'/bin/sh -c '"'"'
/usr/bin/python /root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/command.py;
rm -rf "/root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/" > /dev/null 2>&1
&& sleep 0'"'"''
s1 | SUCCESS | rc=0 >>
test
Where has all the ssh output gone?
Shared connection to s1 closed.
This message is an error message from the ssh client.
With raw: echo test, Ansible executes ssh <many parameters> s1 'echo test' and you get the stdout/stderr of the ssh command itself. This is why the message about the shared connection pops up in your task result.
With command: echo test, Ansible copies a Python wrapper (command.py) to the remote host and executes that wrapper, which in turn spawns echo test and captures stdout/stderr from the echo command. command.py then prints echo's result as a JSON object with stdout/stderr/rc keys. The ssh error message still occurs, but you don't see it (it is filtered out by Ansible), because Ansible takes the task result from the JSON object's keys and not from ssh's plain stdout/stderr/rc.
Where has all the ssh output gone?
This comes down to the same difference in how raw and command are handled. To see detailed ssh output, set the ANSIBLE_DEBUG=1 environment variable.
If you want to hide the error message, you can set the ansible_ssh_extra_args='-o LogLevel=QUIET' inventory variable, but I'm not sure whether this can have other unexpected results.
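For example, set per host in an INI inventory (a sketch matching the s1 host from the question, untested):
s1 ansible_ssh_extra_args='-o LogLevel=QUIET'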

Ansible giving ssh_exchange_identification ERROR

My Ansible playbook connects to a remote node through a proxy.
When the playbook runs, it gives the following error during the ssh step:
[root@vm1-msdp ANSIBLE]# ansible-playbook fend_file.yaml -i env/target -vvvvv
PLAY [LAB1] *******************************************************************
GATHERING FACTS ***************************************************************
<10.169.99.222> ESTABLISH CONNECTION FOR USER: msdp
<10.169.99.222> REMOTE_MODULE setup
<10.169.99.222> EXEC sshpass -d9 ssh -C -tt -vvv -o ProxyCommand="nc -x 142.133.134.161:1088 %h %p" -o StrictHostKeyChecking=no -o GSSAPIAuthentication=no -o PubkeyAuthentication=no -o User=msdp -o ConnectTimeout=10 10.169.99.222 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1473708903.98-28407509853006 && echo $HOME/.ansible/tmp/ansible-tmp-1473708903.98-28407509853006'
fatal: [10.169.99.222] => SSH Error: ssh_exchange_identification: Connection closed by remote host
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
But when I run the ssh command myself, I am able to successfully connect.
[root@vm1-msdp ANSIBLE]# ssh -C -tt -o ProxyCommand="nc -x 142.133.134.161:1088 %h %p" -o StrictHostKeyChecking=no -o GSSAPIAuthentication=no -o PubkeyAuthentication=no -o User=root -o ConnectTimeout=10 10.169.99.222
root@10.169.99.222's password:
Last login: Mon Sep 12 12:28:19 2016 from 10.169.102.6
root@IC02 ~ #
Do I need to clear any Ansible files?
When you run the SSH command manually, you are specifying the root user, while your Ansible playbook is using your local user, msdp. Try setting the ansible_user variable in your inventory file. Maybe something like:
10.169.99.222 ansible_user=root
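A one-off alternative, if you don't want to touch the inventory, is to override the remote user on the command line (same effect for a single run):
[root@vm1-msdp ANSIBLE]# ansible-playbook fend_file.yaml -i env/target -u root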

Error connecting with Ansible to Vagrant guest using SSH

I'm using Ubuntu to run Vagrant and I'm receiving an error every time I try to connect (reformatted for readability):
<127.0.0.1> ESTABLISH CONNECTION FOR USER: vagrant
<127.0.0.1> REMOTE_MODULE ping
<127.0.0.1> EXEC ssh -C -tt -vvv -o ControlMaster = auto -o ControlPersist = 60s
-o ForwardAgent = yes -o ControlPath="/home/naruto/.ansible/cp/ansible-ssh-%h-%p-%r"
-o StrictHostKeyChecking=no -o Port=2202
-o IdentityFile="/home/naruto/test/.vagrant/machines/default/virtualbox/private_key"
-o KbdInteractiveAuthentication=no
-o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=vagrant
-o ConnectTimeout=10 127.0.0.1 /bin/sh -c
'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1452294114.84-103845443589966
&& chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1452294114.84-103845443589966
&& echo $HOME/.ansible/tmp/ansible-tmp-1452294114.84-103845443589966'
testserver | FAILED => SSH Error: command-line line 0: missing argument.
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug
output to help diagnose the issue.
This is the information in the vagrant ssh-config:
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2202
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/naruto/test/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
And this is the hosts file for Ansible:
[example]
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2202 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/home/naruto/test/.vagrant/machines/default/virtualbox/private_key
Any idea what the problem is or how I can troubleshoot it further? Thanks.
ssh -C -tt -vvv -o ControlMaster = auto -o ControlPersist = 60s
-o ForwardAgent = yes
This part is wrong. You can't put spaces between an option and its argument. Use the same form as the other options:
ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s
-o ForwardAgent=yes
Though I have no idea where it comes from.
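One place worth checking (an assumption, since the question doesn't show where these options are defined): ssh_args under [ssh_connection] in ansible.cfg is passed to ssh verbatim, so spaces around = there would produce exactly this "missing argument" error. The corrected form would look like:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes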

How can I send a password safely to tmux?

The following is my code in create_tmux.zsh
#!/bin/zsh
SESSIONNAME=$1
echo $SESSIONNAME
tmux has-session -t $SESSIONNAME &> /dev/null
if [ $? != 0 ]
then
    tmux new-session -d -s $SESSIONNAME -n emacs
    tmux new-window -t $SESSIONNAME:1 -n a
    tmux send-keys -t $SESSIONNAME:1 'ssh -Y a@bc.com;$2' C-m
fi
tmux attach -t $SESSIONNAME
This works if I run
create_tmux.zsh ab $%^^&av1#
But this way the password not only shows up in the terminal, it is also recorded in the shell history.
How can I solve this?
Thank you
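One way to keep the password out of argv and shell history (a sketch, not from the original thread; zsh's read -s suppresses echo) is to prompt for it at runtime instead of passing it as $2:
#!/bin/zsh
SESSIONNAME=$1
# prompt silently; the password never appears on the command line or in history
read -s 'password?Password: '
tmux send-keys -t $SESSIONNAME:1 'ssh -Y a@bc.com' C-m
# type the password into the pane once ssh prompts for it (prompt timing not handled here)
tmux send-keys -t $SESSIONNAME:1 "$password" C-m
Note that the password still passes through tmux send-keys and may be visible in the pane; key-based authentication would avoid the prompt entirely.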