I'm trying to use Ansible to set up hosts that will initially only be accessible via SSH with a password, not a key file (yes, my first playbook is to set up key-based access).
I can access the hosts using SSH passwords from the command line.
Running Ansible in verbose mode gives the following output
EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="debian"' -o ConnectTimeout=30 -o ControlPath=/home/home/.ansible/cp/2d22e058dc 192.168.122.11 '/bin/sh -c '"'"'echo ~debian && sleep 0'"'"''
<192.168.122.11> (255, b'', b'OpenSSH_8.9p1 Ubuntu-3, OpenSSL 3.0.2 15 Mar 2022
debug1: Reading configuration data /home/home/.ssh/config
...
debug3: no such identity: /home/home/.ssh/id_dsa: No such file or directory
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
debian@192.168.122.11: Permission denied (publickey,password).
It looks like the SSH client is being forced not to use passwords (PasswordAuthentication=no), and nothing in the rest of the output indicates that it is even trying.
This is my hosts file (no, they are not the real passwords):
all:
  children:
    init:
      hosts:
        bullseye-apps:
        bullseye-backup:
      vars:
        ansible_ssh_pass: 'password'
        ansible_become_pass: 'password'
So I think I should be giving Ansible the option to use passwords.
I run my playbook as follows
ansible-playbook -i test-hosts.yml playbook.yml
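For reference, a quick way to rule out the inventory variables is to supply the password interactively; this is just a sketch, and it assumes sshpass is installed on the control node and that debian is the remote user:
# sshpass is required for Ansible's password-based SSH
sudo apt install sshpass
# prompt for the SSH password instead of reading it from the inventory
ansible -i test-hosts.yml init -m ping -u debian --ask-pass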
I've recently upgraded my OS (to Pop!_OS 22.04) and haven't run these playbooks in a while, so possibly a change in Ansible?
$ ansible --version
ansible 2.10.8
config file = /home/home/Projects/federated-agency/ansible.cfg
configured module search path = ['/home/home/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.4 (main, Apr 2 2022, 09:04:19) [GCC 11.2.0]
Any thoughts?
So it looks like I misspelled the user name for one of the hosts, damnit!
Related
I am trying to do a "dry-run" of a playbook. I am able to SSH into the machine I am targeting, and vice versa. When I run ansible all -m ping -vvv, this is the output:
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/ping.py
<192.168.4.136> ESTABLISH SSH CONNECTION FOR USER: hwaraich207970
<192.168.4.136> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=username -o ConnectTimeout=10 -o ControlPath=/home/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.4.136 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 ` " && echo ansible-tmp-1604952591.08-32914241935398="` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 `" ) && sleep 0'"'"''
192.168.4.136 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
    "unreachable": true
}
This can happen even if you have set up passwordless SSH between System A and System B (say, using the ssh-copy-id command, or by manually copying the public key, i.e. the content of the id_rsa.pub file on System A, into the .ssh/authorized_keys file on System B). If this is happening, one of the reasons could be the user home directories.
If on System A the user's home directory is, say, /home/tester and on System B it is /users/tester, then passwordless SSH might not work. Making sure both users have the same home directory solves this issue. I observed this case on CentOS machines, and once the home directories for the users matched, the issue was resolved.
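A quick way to compare the configured home directories on both systems (a sketch, using tester as the example user from above):
# run on System A and on System B, then compare the output
getent passwd tester | cut -d: -f6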
Ansible typically works when the SSH public key of the controller node is added to the authorized keys of the remote node. This enables Ansible to SSH into the remote node from the controller node without the need for a password.
There is an alternate way to make Ansible work without sharing public keys, using sshpass. In this case, you need to supply the password of the remote user via the ansible_ssh_pass variable. This can be done via the inventory file, group_vars, or extra-vars, as sketched below.
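A minimal sketch of the sshpass approach; the host and user are the ones from this question, and your_password is a placeholder:
# inventory (INI format); sshpass must be installed on the control node
[webservers]
192.168.4.136 ansible_ssh_user=username ansible_ssh_pass=your_password

# or supply the password via extra-vars instead
ansible-playbook -i inventory playbook.yml --extra-vars "ansible_ssh_pass=your_password"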
Regarding the error you shared: it says "Permission denied", meaning there is something wrong with either the SSH key sharing or the password setup.
msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
Debug mode provides more info related to the issue:
SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=username -o ConnectTimeout=10 -o ControlPath=/home/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.4.136 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 ` " && echo ansible-tmp-1604952591.08-32914241935398="` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935
Some relevant information you can extract from the above snippet:
-o User=username: the playbook is trying to connect as the username user.
-o PasswordAuthentication=no: this forces Ansible to use public keys rather than a password.
The authentication failure is happening for 192.168.4.136.
Please check the official Ansible documentation for more info regarding connection methods.
Check the ssh-keygen and ssh-copy-id man pages for generating and sharing SSH keys between the nodes.
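For reference, generating a key pair on the controller and pushing the public key to the remote node usually looks something like this (the user and IP are the ones from the output above):
ssh-keygen -t rsa -b 4096             # generate a key pair on the controller node
ssh-copy-id username@192.168.4.136    # append the public key to the remote node's authorized_keys
ssh username@192.168.4.136            # verify that key-based login now works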
In short,
ssh-agent will supply the passphrase when I ssh into the remote server from the command line, but whenever I execute an Ansible playbook it asks for the passphrase. My question is: why won't ssh-agent supply the passphrase for Ansible, and how can I get it to work?
In detail,
I created a password protected private key and corresponding public key and uploaded the public key to the server.
I invoked the ssh-agent using eval $(ssh-agent) and then ssh-add /etc/ansible/ssh/private-key.pem
Typing ssh-add -l shows that the key has been added.
I can successfully ssh into the machine from the command line using ssh username@ipaddress without being asked for the passphrase.
But if I execute a playbook or run something like sudo ansible -m ping server, it will say:
Enter passphrase for key '/etc/ansible/ssh/private-key.pem':
I tried it again in verbose mode and it gives me the following information
ansible 2.4.2.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/etc/ansible/library']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.12 (default, Nov 20 2017, 18:23:56) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
Parsed /etc/ansible/hosts inventory source with ini plugin
META: ran handlers
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/ping.py
<35.230.127.195> ESTABLISH SSH CONNECTION FOR USER: user6
<35.230.127.195> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile="/etc/ansible/ssh/private-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=user6 -o ConnectTimeout=10 -o ControlPath=/home/user6/.ansible/cp/e26536be01 35.230.127.195 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
Enter passphrase for key '/etc/ansible/ssh/private-key.pem':
My Environment
Ansible version is 2.4.2.0
Python version is 2.7.12
OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g
The ssh keys were created using RSA (not SSH-1 RSA) and 4096 bits.
In ansible.cfg, transport is set to smart.
The key is encrypted using ansible-vault, but I've tried doing it without encryption and it makes no difference.
Please help, I don't have much hair left.
UPDATE: Using transport = local executes everything locally (i.e. it doesn't run the playbook on the remote server, even though it looks like it does).
Go to the ansible.cfg file at the location below:
/etc/ansible/ansible.cfg
And set transport = local:
transport = local
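If you do go this route, note that the setting lives under the [defaults] section of ansible.cfg, roughly:
[defaults]
transport = local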
Thanks
I'm new to Ansible. I set up an Ubuntu virtual machine using Vagrant. I'm able to SSH into the machine using ssh vagrant@172.16.23.228. I have created an SSH key with the same password as the VM, added it to the agent, and specified the path in my hosts file.
After following the instructions here, I started to receive the following errors when running this command (ansible all --inventory-file=hosts.ini --module-name ping -u vagrant -vvvv):
Not sure what I'm missing from my setup; what else do I need to check?
<172.16.23.228> ESTABLISH CONNECTION FOR USER: vagrant
<172.16.23.228> REMOTE_MODULE ping
<172.16.23.228> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/user/.ansible/cp/ansible-ssh-%h-%p-%r" -o Port=22 -o IdentityFile="~Users/user/.ssh/onemachine_rsa" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 172.16.23.228 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557 && echo $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557'
172.16.23.228 | FAILED => SSH Error: tilde_expand_filename: No such user Users
while connecting to 172.16.23.228:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
My hosts file looks like:
[testserver]
172.16.23.228 ansible_ssh_port=22 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~Users/user/.ssh/onemachine_rsa
What you're doing can work, but I highly recommend using the built-in Ansible provisioner in Vagrant. It will make your life easier and improve your Vagrant skills at the same time. And if you need to execute any shell scripts, use the shell provisioner.
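A minimal sketch of what that looks like in a Vagrantfile (the playbook path and the inline command are placeholders):
Vagrant.configure("2") do |config|
  # Ansible provisioner: runs a playbook against the VM when it is brought up
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
  # shell provisioner: for one-off bootstrap commands or scripts
  config.vm.provision "shell", inline: "apt-get update"
end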
Providing this answer for the benefit of those, like me, who arrive late to the party. Recent Vagrant installations generate a per-machine private key in a local directory instead of using the admittedly insecure shared private key for every VM. You'll have to create an ansible_hosts file like this one:
[vagrantboxes]
jessie ansible_ssh_port=2222 ansible_ssh_host=127.0.0.1
[vagrantboxes:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
The key part is the last line, which provides the path to the actual private key used by the virtual machine started up from this particular directory.
The path to your ansible_ssh_private_key_file is incorrect. Try ansible_ssh_private_key_file=~/.ssh/onemachine_rsa instead. The tilde in this case expands to the home directory of your user on the local machine you're running ansible from.
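In other words, the hosts entry from the question would become something like:
[testserver]
172.16.23.228 ansible_ssh_port=22 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~/.ssh/onemachine_rsa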
I am creating a VM in OpenStack (a Linux VM) and launching an Ansible script from there. I am getting the SSH error shown below. This is my playbook:
---
- hosts: licproxy
  user: my-user
  sudo: yes
  tasks:
    - name: Install tinyproxy
      command: sudo apt-get install tinyproxy
    - name: Update tinyproxy
      command: sudo apt-get update
    - name: Install bind9
      shell: yes '' | sudo apt-get install bind9
Though I am able to SSH directly to machine 10.32.1.40 from the Linux box in OpenStack using the admin-key-dev29 key:
PLAY [licproxy] ***********************************************************
GATHERING FACTS ***************************************************************
<10.32.1.40> ESTABLISH CONNECTION FOR USER: my-user
<10.32.1.40> REMOTE_MODULE setup
<10.32.1.40> EXEC ssh -C -tt -vvv -o StrictHostKeyChecking=no -o IdentityFile="/opt/apps/installer/tenant-dev29/ssh/admin-key-dev29" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=my-user -o ConnectTimeout=10 10.32.1.40 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && echo $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238'
EXEC previous known host file not found for 10.32.1.40
fatal: [10.32.1.40] => SSH Error: ssh: connect to host 10.32.1.40 port 22: Connection refused
while connecting to 10.32.1.40:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [Install tinyproxy] *****************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
I removed the entry from known_hosts and ran the script again; it is still showing the same message.
UPDATE
I observed that manual SSH works fine, but the Ansible script gives the SSH error.
I logged in to the newly created VM using the SSH key and checked the /var/log/auth.log file:
Dec 30 13:00:33 licproxy-vm sshd[1184]: Server listening on :: port 22.
Dec 30 13:01:10 licproxy-vm sshd[1448]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Dec 30 13:01:10 licproxy-vm sshd[1448]: Connection closed by 192.168.0.106 [preauth]
Dec 30 13:01:32 licproxy-vm sshd[1450]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
The VM has sshd version OpenSSH_6.6.1.
I checked the /etc/ssh folder and found ssh_host_ed25519_key and ssh_host_ed25519_key.pub missing.
I created those files using the command ssh-keygen -A.
Now I want to know why these files were missing from the ssh folder. Is this a bug?
The problem was SSH port 22: the port was not up yet.
I added the following code, which basically waits for the SSH port to come up.
while ! nc -z $PROXY_SERVER_IP 22; do
sleep 10s
done
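For reference, an Ansible-native way to do the same wait is the wait_for module; this is a sketch that assumes a proxy_server_ip variable holding the new VM's address:
- name: Wait for SSH to come up on the new VM
  wait_for:
    host: "{{ proxy_server_ip }}"
    port: 22
    delay: 10
    timeout: 300
  delegate_to: localhost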
OK, strange question. I have SSH forwarding working with Vagrant. But I'm trying to get it working when using Ansible as a Vagrant provisioner.
I found out exactly what Ansible is executing and tried it myself from the command line; sure enough, it fails there too.
[/common/picsolve-ansible/u12.04%]ssh -o HostName=127.0.0.1 \
-o User=vagrant -o Port=2222 -o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no -o PasswordAuthentication=no \
-o IdentityFile=/Users/bryanhunt/.vagrant.d/insecure_private_key \
-o IdentitiesOnly=yes -o LogLevel=FATAL \
-o ForwardAgent=yes "/bin/sh \
-c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' "
Permission denied (publickey,password).
But when I just run vagrant ssh, the agent forwarding works correctly, and I can check out my Bitbucket project read/write.
[/common/picsolve-ansible/u12.04%]vagrant ssh
vagrant@vagrant-ubuntu-precise-64:~$ /bin/sh -c 'git clone git@bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker'
Cloning into '/home/vagrant/poc_docker'...
remote: Counting objects: 18, done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 18 (delta 4), reused 0 (delta 0)
Receiving objects: 100% (18/18), done.
Resolving deltas: 100% (4/4), done.
vagrant@vagrant-ubuntu-precise-64:~$
Has anyone got any idea how it is working?
Update:
By means of ps awux I determined the exact command being executed by Vagrant.
I replicated it and git checkout worked.
ssh vagrant@127.0.0.1 -p 2222 \
-o Compression=yes \
-o StrictHostKeyChecking=no \
-o LogLevel=FATAL \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
-o IdentitiesOnly=yes \
-i /Users/bryanhunt/.vagrant.d/insecure_private_key \
-o ForwardAgent=yes \
-o LogLevel=DEBUG \
"/bin/sh -c 'git clone git#bitbucket.org:bryan_picsolve/poc_docker.git /home/vagrant/poc_docker' "
As of ansible 1.5 (devel aa2d6e47f0) last updated 2014/03/24 14:23:18 (GMT +100) and Vagrant 1.5.1 this now works.
My Vagrant configuration contains the following:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "../playbooks/basho_bench.yml"
ansible.sudo = true
ansible.host_key_checking = false
ansible.verbose = 'vvvv'
ansible.extra_vars = { ansible_ssh_user: 'vagrant',
ansible_connection: 'ssh',
ansible_ssh_args: '-o ForwardAgent=yes'}
It is also a good idea to explicitly disable sudo use. For example, when using the Ansible git module, I do this:
- name: checkout basho_bench repository
  sudo: no
  action: git repo=git@github.com:basho/basho_bench.git dest=basho_bench
The key difference appears to be the UserKnownHostsFile setting. Even with StrictHostKeyChecking turned off, ssh quietly disables certain features, including agent forwarding, when there is a conflicting entry in the known hosts file (these conflicts are common for Vagrant since multiple VMs may have the same address at different times). It works for me if I point UserKnownHostsFile to /dev/null:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
ansible.raw_ssh_args = ['-o UserKnownHostsFile=/dev/null']
end
Here's a workaround:
Create an ansible.cfg file in the same directory as your Vagrantfile with the following lines:
[ssh_connection]
ssh_args = -o ForwardAgent=yes
You can simply add this line to your Vagrantfile to enable the ssh forwarding:
config.ssh.forward_agent = true
Note: Don't forget to execute the task with become: false
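For example, a task that uses the forwarded agent might look like this (a sketch; the repository and destination are taken from the question):
- name: Clone the private repository over the forwarded agent
  git:
    repo: git@bitbucket.org:bryan_picsolve/poc_docker.git
    dest: /home/vagrant/poc_docker
  become: false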
Hope this helps.
I've found that I need to do two separate things (on Ubuntu 12.04) to get it working:
the -o ForwardAgent thing that @Lorin mentions
adding /etc/sudoers.d/01-make_SSH_AUTH_SOCK_AVAILABLE with these contents:
Defaults env_keep += "SSH_AUTH_SOCK"
I struggled with a very similar problem for a few hours.
Vagrant 1.7.2
ansible 1.9.4
My symptoms:
failed: [vagrant1] => {"cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "rc": 128}
stderr: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
msg: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
FATAL: all hosts have already failed -- aborting
SSH'ing into the guest, I found that my ssh-agent was forwarding as expected:
vagrant@vagrant-ubuntu-trusty-64:~$ ssh -T git@github.com
Hi baxline! You've successfully authenticated, but GitHub does not provide shell access.
However, from the host machine, I could not open the connection:
$ ansible web -a "ssh-add -L"
vagrant1 | FAILED | rc=2 >>
Could not open a connection to your authentication agent.
After confirming that my ansible.cfg file was set up, as @Lorin noted, and my Vagrantfile set config.ssh.forward_agent = true, I still came up short.
The solution was to delete all lines in my host's ~/.ssh/known_hosts file that were associated with my guest. For me, they were the lines that started with:
[127.0.0.1]:2201 ssh-rsa
[127.0.0.1]:2222 ssh-rsa
[127.0.01]:2222 ssh-rsa
[127.0.0.1]:2200 ssh-rsa
Note that the third line has a funny IP address. I'm not certain, but I believe that line was the culprit. These lines are created as I destroy and create Vagrant VMs.
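For what it's worth, rather than editing known_hosts by hand, entries like these can be removed with ssh-keygen (shown here for the well-formed hosts above):
ssh-keygen -R "[127.0.0.1]:2200"    # removes the known_hosts entry for that host:port
ssh-keygen -R "[127.0.0.1]:2201"
ssh-keygen -R "[127.0.0.1]:2222"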