Unable to SSH into server with Ansible

I am unable to SSH into a server with Ansible:
$ ansible myserver -m ping -u username@company.com -vvvvv
Using /etc/ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<ip.ad.dr.es> ESTABLISH SSH CONNECTION FOR USER: username@company.com
<ip.ad.dr.es> SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<ip.ad.dr.es> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=22)
<ip.ad.dr.es> SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/Users/username/.ssh/id_rsa")
<ip.ad.dr.es> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<ip.ad.dr.es> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=username@company.com)
<ip.ad.dr.es> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<ip.ad.dr.es> SSH: PlayContext set ssh_common_args: ()
<ip.ad.dr.es> SSH: PlayContext set ssh_extra_args: (-A)
<ip.ad.dr.es> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/Users/username/.ansible/cp/ansible-ssh-%h-%p-%r)
<ip.ad.dr.es> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=22 -o 'IdentityFile="/Users/username/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=username@company.com -o ConnectTimeout=10 -o ControlPath=/Users/username/.ansible/cp/ansible-ssh-%h-%p-%r ip.ad.dr.es '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1469804843.87-104204648028091 `" && echo ansible-tmp-1469804843.87-104204648028091="` echo $HOME/.ansible/tmp/ansible-tmp-1469804843.87-104204648028091 `" ) && sleep 0'"'"''
ip.ad.dr.es | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I am able to log in to the same server by manually doing an SSH from my Mac:
ssh -p 22 -A -i ~/.ssh/id_rsa username@company.com@ip.ad.dr.es -X -C
Any idea how to troubleshoot this further?
I looked for /var/log/auth.log on the server and did not find the file. I am not sure which other file to look at to see what is going on.
Edit #1:
I also tried this:
ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)(-o)(ForwardAgent=yes)
i.e. I added ForwardAgent=yes to the ssh_args and removed the --ssh-extra-args="-A". That did not help either.
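For reference, the (-o)(...) rendering above is just Ansible's debug formatting of the option list; in the actual ansible.cfg file the equivalent line would look like this (a sketch of the file syntax, not a fix for this issue):

```ini
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes
```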

See comment on ControlPath being too long. This page - http://docs.ansible.com/ansible/intro_configuration.html#control-path - has the fix.
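The fix on that page shortens the control socket path so it fits within the unix-socket path length limit. A minimal ansible.cfg sketch (the %(directory)s placeholder and the doubled %% escapes follow the syntax shown in the linked docs):

```ini
[ssh_connection]
# Shorter socket path: %(directory)s expands to Ansible's control path
# directory, and %%h / %%r are ssh's host and remote-user placeholders.
control_path = %(directory)s/%%h-%%r
```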

Related

kubespray: Ansible can't send data over ssh to a node with the ansible-playbook command

In step 10 of the tutorial
https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product
for deploying a production-ready Kubernetes cluster with Kubespray, an error occurred when running the ansible-playbook command. The error is:
ERROR! SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh
Passwordless ssh is active between the nodes, and I can ssh from each node without a password.
Can anyone help me? Thanks.
This is my command and its output:
master-node@master-node:~/kubespray$ sudo ansible all -i inventory/mycluster/hosts.ini -m ping -vvv
ansible 2.7.8
config file = /home/master-node/kubespray/ansible.cfg
configured module search path = [u'/home/master-node/kubespray/library']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /home/master-node/kubespray/ansible.cfg as config file
/home/master-node/kubespray/inventory/mycluster/hosts.ini did not meet host_list requirements, check plugin documentation if this is unexpected
/home/master-node/kubespray/inventory/mycluster/hosts.ini did not meet script requirements, check plugin documentation if this is unexpected
/home/master-node/kubespray/inventory/mycluster/hosts.ini did not meet yaml requirements, check plugin documentation if this is unexpected
Parsed /home/master-node/kubespray/inventory/mycluster/hosts.ini inventory source with ini plugin
META: ran handlers
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/system/ping.py
<192.168.1.107> ESTABLISH SSH CONNECTION FOR USER: worker-node
<192.168.1.107> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=worker-node -o ConnectTimeout=10 -o ControlPath=/home/master-node/.ansible/cp/e24ed02313 192.168.1.107 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/system/ping.py
<192.168.1.142> ESTABLISH SSH CONNECTION FOR USER: master-node
<192.168.1.142> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=master-node -o ConnectTimeout=10 -o ControlPath=/home/master-node/.ansible/cp/01ac2924af 192.168.1.142 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
master-node | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to remote host \"192.168.1.142\". Make sure this host can be reached over ssh",
"unreachable": true
}
worker-node | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to remote host \"192.168.1.107\". Make sure this host can be reached over ssh",
"unreachable": true
}
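One thing worth checking (a sketch of a possible cause, not a confirmed diagnosis): the ansible command above runs under sudo, so ssh is executed as root and reads its keys and known_hosts from root's home, while the passwordless setup was verified as master-node. The difference can be seen without touching the remote hosts:

```shell
# Passwordless ssh was verified as the login user (master-node), but
# `sudo ansible ...` runs ssh as root, which looks in root's ~/.ssh
# rather than the login user's. Compare where ssh would read keys:
echo "as the login user, ssh reads keys from: $HOME/.ssh"
echo "under sudo (typically), ssh reads keys from: /root/.ssh"
```

If that is the cause, either run ansible without sudo, or copy/configure the key pair for root as well.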

Issue passing arguments to Ansible ssh

The ssh connection below works fine:
ssh -i /opt/cert/id_rsa_prod targetuser@targethost -t bash
My Ansible hosts file has the entry below:
[target*]
targethost ansible_python_interpreter=/opt/bin/python2.7 ansible_ssh_extra_args="-t bash" ansible_ssh_common_args="-t" ansible_ssh_private_key_file=/opt/cert/id_rsa_prod USER_RUN=targetuser
When I run the Ansible playbook, it fails to connect to the target host and throws the error output below:
23:53:42 ESTABLISH SSH CONNECTION FOR USER: targetuser
23:53:42 SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o
ControlPersist=60s -o 'IdentityFile="/opt/cert/id_rsa_prod"' -o
KbdInteractiveAuthentication=no -o
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=targetuser -o ConnectTimeout=10 -t bash -o ControlPath=/home/sourceuser/.ansible/cp/e8313d01d6 targethost '/bin/sh -c '"'"'echo ~targetuser && sleep 0'"'"''
23:53:42 (255, '', 'OpenSSH_7.7p1 (CentrifyDC build
5.5.1-395) , OpenSSL 1.0.2o-fips 27 Mar 2018\r\ndebug1: Reading configuration data /home/sourceuser/.ssh/config\r\ndebug1: Reading
configuration data /etc/centrifydc/ssh/ssh_config\r\ndebug1:
/etc/centrifydc/ssh/ssh_config line 3: Applying options for
*\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/sourceuser/.ansible/cp/e8313d01d6" does not
exist\r\ndebug2: resolving "bash" port 22\r\nssh: Could not resolve
hostname bash: Name or service not known\r\n')
23:53:42 fatal: [targethost]: UNREACHABLE! => {
23:53:42 "changed": false,
23:53:42 "msg": "Failed to connect to the host via ssh:
OpenSSH_7.7p1 (CentrifyDC build 5.5.1-395) , OpenSSL 1.0.2o-fips 27
Mar 2018\r\ndebug1: Reading configuration data
/home/sourceuser/.ssh/config\r\ndebug1: Reading configuration data
/etc/centrifydc/ssh/ssh_config\r\ndebug1:
/etc/centrifydc/ssh/ssh_config line 3: Applying options for
*\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/sourceuser/.ansible/cp/e8313d01d6\" does not
exist\r\ndebug2: resolving \"bash\" port 22\r\nssh: Could not resolve
hostname bash: Name or service not known\r\n",
23:53:42 "unreachable": true
23:53:42 }
23:53:42 to retry, use: --limit
@/opt/scripts/myfolder/site.retry
23:53:42
23:53:42 PLAY RECAP
23:53:42 targethost : ok=0 changed=0 unreachable=1 failed=0
Can you please suggest how to fix this connectivity issue?
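The debug output above already shows the root cause: the extra args -t bash are inserted before the hostname, so ssh treats bash as the host to connect to ("Could not resolve hostname bash"). ansible_ssh_extra_args and ansible_ssh_common_args are meant for ssh options only, never for a remote command; Ansible runs its own commands over the connection. A sketch of the inventory entry with those removed (ansible_user=targetuser is my assumption for how the login user should be set; the original USER_RUN variable and group pattern are left out for brevity):

```ini
[target]
targethost ansible_python_interpreter=/opt/bin/python2.7 ansible_ssh_private_key_file=/opt/cert/id_rsa_prod ansible_user=targetuser
```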

Authentication or permission failure for some hosts in inventory

I have an inventory with around 10 hosts, and my playbook runs on all of them except 2. I am able to log in to those 2 hosts passwordlessly from the Ansible server. But when I run the playbook, or even a simple ping module, I get this error:
192.168.x.xxx | UNREACHABLE! => {
"changed": false,
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo $HOME/.ansible/tmp/ansible-tmp-1498895076.45-202255130489130 `\" && echo ansible-tmp-1498895076.45-202255130489130=\"` echo $HOME/.ansible/tmp/ansible-tmp-1498895076.45-202255130489130 `\" ), exited with result 1",
"unreachable": true
}
I have already tried changing remote_dir in ansible.cfg and changing the connection type, as suggested in https://github.com/ansible/ansible/issues/5725
The verbose mode output is:
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/ping.py
<192.168.x.xxx> ESTABLISH SSH CONNECTION FOR USER: None
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/ping.py
<192.168.x.xxx> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<192.168.x.xxx> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<192.168.x.xxx> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<192.168.x.xxx> SSH: PlayContext set ssh_common_args: ()
<192.168.x.xxx> SSH: PlayContext set ssh_extra_args: ()
<192.168.x.xxx> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/webtech/.ansible/cp/ansible-ssh-%h-%p-%r)
<192.168.x.xxx> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/webtech/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.x.xxx '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1498903623.28-136703981609211 `" && echo ansible-tmp-1498903623.28-136703981609211="` echo $HOME/.ansible/tmp/ansible-tmp-1498903623.28-136703981609211 `" ) && sleep 0'"'"''
Nothing helped.
Please help me: how can I run my playbook on those 2 hosts?
Add -s at the end to run it as the sudo user.

Shared connection to server closed

When I run this:
$ ansible -i s1, s1 -m raw -a 'echo test' -u root -k
I get:
s1 | SUCCESS | rc=0 >>
test
Shared connection to s1 closed.
But this way:
$ ansible -i s1, s1 -m command -a 'echo test' -u root -k
I don't get the "Shared connection to s1 closed." part:
s1 | SUCCESS | rc=0 >>
test
Why is that?
P.S. The above is a simplified way to reproduce the issue. What I'm facing is that, when running a playbook, I get this extra line, which is in the way.
UPD: The line is clearly coming from ssh. If I run the raw command with -vvvv, I get:
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
<s1> ESTABLISH SSH CONNECTION FOR USER: root
<s1> SSH: EXEC sshpass -d13 ssh -vvv -C -o ControlMaster=auto
-o ControlPersist=60s -o User=root -o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r -tt s1
'echo test'
s1 | SUCCESS | rc=0 >>
test
OpenSSH_7.4p1, OpenSSL 1.0.2k 26 Jan 2017
debug1: Reading configuration data /home/yuri/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/yuri/.ansible/cp/ansible-ssh-s1-22-root" does not exist
<...a lot of output from ssh...>
But with command, it's just:
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from
/usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
Using module file
/usr/lib/python2.7/site-packages/ansible/modules/core/commands/command.py
<s1> ESTABLISH SSH CONNECTION FOR USER: root
<s1> SSH: EXEC sshpass -d13 ssh -vvv -C -o ControlMaster=auto
-o ControlPersist=60s -o User=root -o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r s1
'/bin/sh -c '"'"'(
umask 77
&& mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737 `"
&& echo ansible-tmp-1488989540.6-73006073289737="` echo ~/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737 `"
) && sleep 0'"'"''
<s1> PUT /tmp/tmpes82wL TO
/root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/command.py
<s1> SSH: EXEC sshpass -d13 sftp -o BatchMode=no -b - -vvv -C
-o ControlMaster=auto -o ControlPersist=60s -o User=root
-o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r '[s1]'
<s1> ESTABLISH SSH CONNECTION FOR USER: root
<s1> SSH: EXEC sshpass -d13 ssh -vvv -C -o ControlMaster=auto
-o ControlPersist=60s -o User=root -o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r s1
'/bin/sh -c '"'"'
chmod u+x /root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/ /root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/command.py
&& sleep 0'"'"''
<s1> ESTABLISH SSH CONNECTION FOR USER: root
<s1> SSH: EXEC sshpass -d13 ssh -vvv -C -o ControlMaster=auto
-o ControlPersist=60s -o User=root -o ConnectTimeout=10
-o ControlPath=/home/yuri/.ansible/cp/ansible-ssh-%h-%p-%r -tt s1
'/bin/sh -c '"'"'
/usr/bin/python /root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/command.py;
rm -rf "/root/.ansible/tmp/ansible-tmp-1488989540.6-73006073289737/" > /dev/null 2>&1
&& sleep 0'"'"''
s1 | SUCCESS | rc=0 >>
test
Where has all the ssh output gone?
Shared connection to s1 closed.
This message is an error message from the ssh client.
With raw: echo test, Ansible executes ssh <many parameters> s1 'echo test' and you get the stdout/stderr of the ssh command itself. This is why the message about the shared connection pops up in your task result.
With command: echo test, Ansible copies a Python wrapper (command.py) to the target and executes it; the wrapper in turn spawns echo test and captures stdout/stderr from the echo command. command.py then prints echo's result as a JSON object with stdout/stderr/rc keys. The ssh error message still occurs, but you don't see it (Ansible filters it out), because Ansible takes the task result from the JSON object's keys and not from ssh's plain stdout/stderr/rc.
Where has all the ssh output gone?
This is due to the same difference in how raw and command are handled. To see detailed ssh output, set the ANSIBLE_DEBUG=1 environment variable.
If you want to hide this error message, you can use the ansible_ssh_extra_args='-o LogLevel=QUIET' inventory variable, but I'm not sure whether this might have other unexpected results.
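As an inventory sketch (the group name is a placeholder; s1 is the host from the question above):

```ini
[myhosts]
s1 ansible_ssh_extra_args='-o LogLevel=QUIET'
```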

SSH ok but Ansible returns "unreachable"

My SSH login using keys is set up properly:
ssh admin@192.168.1.111
admin@DiskStation:~$
But Ansible returns an error:
TASK [setup] *******************************************************************
<192.168.1.111> ESTABLISH SSH CONNECTION FOR USER: admin
<192.168.1.111> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/Users/Shared/Jenkins/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.1.111 '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479205446.3-33100049148171 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1479205446.3-33100049148171 `" )'"'"''
<192.168.1.111> PUT /var/folders/pd/8q63k3z93nx_78dggb9ltm4c00007x/T/tmpNJvc43 TO /var/services/homes/admin/.ansible/tmp/ansible-tmp-1479205446.3-33100049148171/setup
<192.168.1.111> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=admin -o ConnectTimeout=10 -o ControlPath=/Users/Shared/Jenkins/.ansible/cp/ansible-ssh-%h-%p-%r '[192.168.1.111]'
fatal: [192.168.1.111]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
Can someone help me?
Ansible returns "unreachable" for the SFTP connection, not SSH.
Either enable SFTP on the target node (or on a firewall in between), or configure Ansible to use SCP instead, in ansible.cfg:
scp_if_ssh = True
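In ansible.cfg, that setting goes under the [ssh_connection] section:

```ini
[ssh_connection]
scp_if_ssh = True
```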
I had a similar "unreachable" error, but in my case it was because my inventory file specified the host this way:
[webservers]
ubuntu@123.456.789.111
This worked for us in the past, so presumably it works with some Ansible versions, but not with my version (2.0.0.2). Instead, I changed it to what the documentation recommends:
[webservers]
123.456.789.111 ansible_user=ubuntu
and now the SFTP connection does not fail.
After many years of trial and error, I now always have these settings in my ansible.cfg:
[defaults]
host_key_checking = false
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o ServerAliveInterval=20
scp_if_ssh = True
[connection]
pipelining = true
Pipelining is my personal preference when dealing with multiple hosts.
The ssh_args deal with hangs and timeouts, which is useful when your target remote has an unstable connection.
Please check whether Python is installed on the target machines; it is a prerequisite:
sudo apt install python3
sudo apt install python
sudo apt install python-minimal