Ansible SSH connection issue

I know there are a few questions about this, but so far nothing has worked for me.
I am trying to learn Ansible and I got stuck on this SSH connection issue. I think I did everything right, but I would appreciate some help. Here are the files I have configured and the result I get.
### ansible.cfg ###
[defaults]
inventory = ./Playbooks/hosts
remote_user = ansible
private_key_file = .ssh/id_key.pub
### Playbooks/hosts ###
[server]
ubu1 ansible_ssh_host=192.16.20.69 ansible_ssh_pass=qwerty ansible_ssh_user=ansible
### Command executed ###
sudo ansible -m ping -vvvv ubu1
### The result I get ###
Using /home/ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<192.16.20.69> ESTABLISH SSH CONNECTION FOR USER: ansible
<192.16.20.69> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile=".ssh/id_key.pub"' -o User=ansible -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r 192.16.20.69 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470766758.25-258256142287087 `" && echo ansible-tmp-1470766758.25-258256142287087="` echo $HOME/.ansible/tmp/ansible-tmp-1470766758.25-258256142287087 `" ) && sleep 0'"'"''
ubu1 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
Unfortunately I am unable to continue learning Ansible until I get this solved. One thing I am wondering is whether ssh-agent is interfering with Ansible; if it is, I must admit I have no clue what to do next.
Any help would be appreciated.
Thanks
Perry

The answer from the comments above:
Try ANSIBLE_DEBUG=1 ansible -m ping -vvvv ubu1 and check the exact error message.
This made it possible to track down problems with the IP address and the Python installation on the target.
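A minimal sketch of that debugging workflow, assuming the host alias ubu1 from the question and that the private key lives at ~/.ssh/id_key (the manual ssh checks are additions, not part of the original answer):

ANSIBLE_DEBUG=1 ansible -m ping -vvvv ubu1                        # full Ansible debug output plus SSH verbosity
ssh -i ~/.ssh/id_key ansible@192.16.20.69                         # reproduce the connection manually; the identity file should be the private key, not the .pub
ssh ansible@192.16.20.69 'python --version || python3 --version'  # confirm a usable Python exists on the target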

Related

Trying to test run playbook. Getting permission denied

I am trying to do a "dry run" of a playbook. I am able to ssh into the machine I am targeting, and vice versa. When I run ansible all -m ping -vvv, this is the output.
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/ping.py
<192.168.4.136> ESTABLISH SSH CONNECTION FOR USER: hwaraich207970
<192.168.4.136> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=username -o ConnectTimeout=10 -o ControlPath=/home/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.4.136 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 `" && echo ansible-tmp-1604952591.08-32914241935398="` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 `" ) && sleep 0'"'"''
192.168.4.136 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
"unreachable": true
This can happen even if you have set up passwordless SSH between System A and System B (either with the ssh-copy-id command or by manually copying the public key, i.e. the contents of id_rsa.pub on System A, into .ssh/authorized_keys on System B). If it does, one possible reason is the users' home directories.
If the user's home directory on System A is, say, /home/tester and on System B it is /users/tester, passwordless SSH might not work. Making sure both users have the same home directory solves this issue. I observed this case on CentOS machines, and once the home directories matched, the issue was resolved.
Ansible typically works when the SSH public key of the controller node is added to the authorized keys of the remote node. This lets Ansible SSH into the remote node from the control node without a password.
There is an alternative way to make Ansible work without sharing public keys, using sshpass. In this case you provide the remote user's password via the ansible_ssh_pass variable, which can be set in the inventory file, in group_vars, or via --extra-vars.
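As a hedged illustration (host, group, and password values below are placeholders, and sshpass must be installed on the control node), the same password can be supplied in any of these three places:

# Inventory file
[server]
ubu1 ansible_ssh_host=192.16.20.69 ansible_user=ansible ansible_ssh_pass=qwerty

# group_vars/server.yml
ansible_ssh_pass: qwerty

# Or on the command line via extra-vars
ansible server -m ping --extra-vars "ansible_ssh_pass=qwerty"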
Regarding the error you shared: it says "Permission denied", meaning something is wrong with either the SSH key sharing or the password setup.
msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
Debug mode provides more info related to the issue:
SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=username -o ConnectTimeout=10 -o ControlPath=/home/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.4.136 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 ` " && echo ansible-tmp-1604952591.08-32914241935398="` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935
Some relevant information you can extract from the above snippet:
-o User=username: the playbook is trying to connect as the username user.
-o PasswordAuthentication=no: this forces Ansible to use public keys rather than a password.
The authentication failure is happening for 192.168.4.136.
Please check this for the official documentation on connections in Ansible.
Check this for generating and sharing SSH keys between the nodes.
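For completeness, a short sketch of generating and sharing a key pair (the user name and address are taken from the log above; adjust as needed):

ssh-keygen -t rsa -b 4096             # generate a key pair on the control node (accept the default path)
ssh-copy-id username@192.168.4.136    # append the public key to ~/.ssh/authorized_keys on the remote node
ssh username@192.168.4.136 'echo ok'  # verify key-based login works before retrying Ansible
ansible all -m ping -vvv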

Authentication or permission failure for some hosts in inventory

I have an inventory with around 10 hosts and my playbook runs on all of them except 2. I can log in to those 2 hosts passwordlessly from the Ansible server, but when I run the playbook, or even a simple ping module, I get this error:
192.168.x.xxx | UNREACHABLE! => {
"changed": false,
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo $HOME/.ansible/tmp/ansible-tmp-1498895076.45-202255130489130 `\" && echo ansible-tmp-1498895076.45-202255130489130=\"` echo $HOME/.ansible/tmp/ansible-tmp-1498895076.45-202255130489130 `\" ), exited with result 1",
"unreachable": true
}
I have already tried changing the remote temp dir in ansible.cfg and changing the connection type, as suggested in https://github.com/ansible/ansible/issues/5725
The verbose mode output is:
Using /etc/ansible/ansible.cfg as config file
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/ping.py
<192.168.x.xxx> ESTABLISH SSH CONNECTION FOR USER: None
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/ping.py
<192.168.x.xxx> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<192.168.x.xxx> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<192.168.x.xxx> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<192.168.x.xxx> SSH: PlayContext set ssh_common_args: ()
<192.168.x.xxx> SSH: PlayContext set ssh_extra_args: ()
<192.168.x.xxx> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/webtech/.ansible/cp/ansible-ssh-%h-%p-%r)
<192.168.x.xxx> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/webtech/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.x.xxx '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1498903623.28-136703981609211 `" && echo ansible-tmp-1498903623.28-136703981609211="` echo $HOME/.ansible/tmp/ansible-tmp-1498903623.28-136703981609211 `" ) && sleep 0'"'"''
Nothing has helped.
Please help me: how can I run my playbook on those 2 hosts?
Add -s at the end to run it as the sudo user.
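For example (note that -s/--sudo is the older syntax; on current Ansible versions the equivalent is -b/--become):

ansible all -m ping -s                    # older syntax from this answer
ansible all -m ping -b --ask-become-pass  # current equivalent, prompting for the sudo password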

Ansible: "Failed to connect to the host via ssh" error

I'm trying to get set up with Ansible for the first time, to connect to a Raspberry Pi. Following the official 'getting started' steps, I've made an inventory file:
192.168.1.206
.. but the ping fails as follows:
$ ansible all -m ping -vvv
No config file found; using defaults
<192.168.1.206> ESTABLISH SSH CONNECTION FOR USER: pi
<192.168.1.206> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=pi -o ConnectTimeout=10 -o ControlPath=/Users/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.1.206 '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1464128959.67-131325759126042 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1464128959.67-131325759126042 `" )'"'"''
192.168.1.206 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
This looks the same as this question, but adding the password/user bits has no effect for me, shouldn't be necessary for ping, and isn't in the official example anyhow. In any case, I'd prefer to configure Ansible to use a specific public/private key pair (as with the ssh -i ~/.ssh/keyfile approach).
Grateful for assistance.
Oh, and yes, the Raspberry Pi is reachable at that address:
$ ping 192.168.1.206
PING 192.168.1.206 (192.168.1.206): 56 data bytes
64 bytes from 192.168.1.206: icmp_seq=0 ttl=64 time=83.822 ms
Despite what its name might suggest, Ansible's ping module does not perform an ICMP ping.
It tries to connect to the host and verifies that a usable version of Python is installed, as stated in the documentation:
ping - Try to connect to host, verify a usable python and return pong on success.
If you want to use a specific private key, you can specify ansible_ssh_private_key_file in your inventory file:
[all]
192.168.1.206 ansible_ssh_private_key_file=/home/example/.ssh/keyfile
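Alternatively (this is an addition to the answer above, with example paths), the same key can be passed on the command line or configured globally in ansible.cfg:

ansible all -m ping -u pi --private-key ~/.ssh/keyfile

# ansible.cfg
[defaults]
remote_user = pi
private_key_file = ~/.ssh/keyfile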
This works for me:
10.23.4.5 ansible_ssh_pass='password' ansible_user='root'
You can also troubleshoot by executing ssh in debug mode and compare the results when running:
ssh -v pi@192.168.1.206
with:
ansible all -m ping -vvvv

ansible SSH connection fail

I'm trying to run an Ansible role on multiple servers, but I get an error:
fatal: [192.168.0.10]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh.", "unreachable": true}
My /etc/ansible/hosts file looks like this:
192.168.0.10 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.11 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.12 ansible_sudo_pass='passphrase' ansible_ssh_user=user
I have no idea what's going on - everything looks fine and I can log in via SSH, but the Ansible ping returns the same error.
The log from verbose execution:
<192.168.0.10> ESTABLISH SSH CONNECTION FOR USER: user
<192.168.0.10> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.10 '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829 `" )'"'"''
Can you help me somehow? If I have to use Ansible in local mode (-c local), then it's useless to me.
I've tried deleting ansible_sudo_pass and ansible_ssh_user, but it didn't help.
You need to set ansible_ssh_pass as well, or use an SSH key. For example, I am using this in my inventory file:
192.168.33.100 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant
After that I can connect to the remote host:
ansible all -i tests -m ping
With the following result:
192.168.33.100 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Hope that helps you.
EDIT: ansible_ssh_pass & ansible_ssh_user don't work in the latest version of Ansible. They have been renamed to ansible_user & ansible_password.
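A hedged example of the newer variable names on an inventory line (the older ansible_ssh_* forms are still accepted as aliases on many versions):

192.168.33.100 ansible_user=vagrant ansible_password=vagrant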
mkdir /etc/ansible
cat > hosts
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
Go to your playbook directory and run ansible all -m ping or ansible "server-group-name" -m ping
I had this issue, but for a different reason than documented in the other answers. The host I was trying to deploy to was only reachable through a jump box. Originally I thought Ansible wasn't picking up my SSH config file, but it was. The solution for me was to make sure that the user in the SSH config file matched the user in the Ansible playbook. That resolved the issue for me.
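For reference, a sketch of the kind of ~/.ssh/config entry involved; the host names, addresses, and user are placeholders, and the key point from this answer is that User here must match the user Ansible connects as:

# ~/.ssh/config
Host jumpbox
    HostName jump.example.com
    User deploy

Host target
    HostName 10.0.0.5
    User deploy          # must match remote_user / ansible_user in the playbook
    ProxyJump jumpbox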
Try modifying your hosts file to:
192.168.0.10
192.168.0.11
192.168.0.12
$ ansible -m ping all -vvv
After installing Ansible on Ubuntu or CentOS, you may see the message below. Do not panic: you just need the right access permissions on the user's temp directory [/home/user_name/.ansible/tmp/].
"Authentication or permission failure".
Following this recommendation will solve the problem:
[Your_server ~]$ ansible -m ping all
rusub-bm-gt | SUCCESS => {
"changed": false,
"ping": "pong"
}
Your_server | SUCCESS => {
"changed": false,
"ping": "pong"
}
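A hedged sketch of checking and fixing those permissions on the remote host (user_name is a placeholder):

ls -ld /home/user_name/.ansible /home/user_name/.ansible/tmp  # check ownership of the temp directory
sudo chown -R user_name:user_name /home/user_name/.ansible    # give the connecting user ownership
chmod 700 /home/user_name/.ansible                            # restrict access to that user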
Best practice for me: I'm using SSH keys to access the server hosts.
1. Create a hosts file in the inventories folder:
[example]
example1.com
example2.com
example3.com
2. Create an ansible-playbook file, playbook.yml:
---
- hosts:
    - all
  roles:
    - example
3. Deploy the playbook to multiple server hosts:
ansible-playbook playbook.yml -i inventories/hosts example --user vagrant
In my case, the ansible_ssh_port had changed after reloading the VM:
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
So I had to update the inventory/hosts file as follows:
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='centos' ansible_ssh_private_key_file=<key path>
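If you use Vagrant, the current forwarded port, user, and key path can be checked with vagrant ssh-config (a hedged suggestion, not part of the original answer):

vagrant ssh-config
# Host default
#   HostName 127.0.0.1
#   User vagrant
#   Port 2222
#   IdentityFile .vagrant/machines/default/virtualbox/private_key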

I can ssh just fine, but ansible says "no route to host"

I wrote a script to spin up several VMs using Vagrant, which I then have to provision with Ansible. Unfortunately my host is a Windows machine, so I thought I could solve the issue by putting all the VMs into a VPN and provisioning them from another machine in the same VPN.
In theory, it works... I can ssh into the other machines without trouble. But when I run my ansible playbook, ansible fails.
At first I got the message "ssh: connect to host 10.1.2.100 [10.1.2.100] port 22: No route to host" when running ansible with -vvvv
This was in the evening, and I was very tired, and the error didn't recur the following morning. I'm not sure whether that has something to do with the VM I'm deploying from being rebooted in the meantime, or with the receiving machine being destroyed and brought up again since then. In any case, the problem has not gone away.
The results now, after recreating both VMs:
# ansible-playbook -i vms -k -u vagrant vms.yml -vvvv
result:
<10.1.2.100> ESTABLISH SSH CONNECTION FOR USER: vagrant
<10.1.2.100> SSH: EXEC sshpass -d14 ssh -C -vvv -o ServerAliveInterval=50 -o User=vagrant -o ConnectTimeout=10 -tt 10.1.2.100 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1455781388.36-25193904947084 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1455781388.36-25193904947084 )" )'
fatal: [10.1.2.100]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
So far so clear. I ssh into the other instance to add it to the known hosts. This works without any trouble.
Back to ansible, I try the same command again. The result now is:
<10.1.2.100> ESTABLISH SSH CONNECTION FOR USER: vagrant
<10.1.2.100> SSH: EXEC sshpass -d14 ssh -C -vvv -o ServerAliveInterval=50 -o StrictHostKeyChecking=no -o User=vagrant -o ConnectTimeout=10 -tt 10.1.2.100 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916 )" )'
<10.1.2.100> PUT /tmp/tmpXQKa8Z TO /home/vagrant/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916/setup
<10.1.2.100> SSH: EXEC sshpass -d14 sftp -b - -C -vvv -o ServerAliveInterval=50 -o StrictHostKeyChecking=no -o User=vagrant -o ConnectTimeout=10 '[10.1.2.100]'
fatal: [10.1.2.100]: UNREACHABLE! => {"changed": false, "msg": "ERROR! SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
Well, I made sure the host was reachable by ssh, thank you very much! Ansible still can't get through, and I'm about to get a brain tumor from thinking of things that might be the problem.
Any suggestions as to what might be the problem?
This issue was reported here, with some workarounds:
https://github.com/ansible/ansible/issues/15321
The consensus seems to be either (a) use ansible_password or (b) pass -u username in the connection parameters. However, any number of things can disrupt an SSH connection in ways that make it look "unreachable" to higher-level applications, so I recommend going through each of the steps outlined in that ticket.
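Hedged examples of those two workarounds, with placeholder values:

# a. supply the password via ansible_password on the inventory line
10.1.2.100 ansible_user=vagrant ansible_password=vagrant

# b. pass the user explicitly (and prompt for the SSH password with -k)
ansible-playbook -i vms vms.yml -u vagrant -k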