Using the Ansible authorized_key module to copy an SSH key fails with "sshpass needed" error

I am trying to copy my .pub key file located in ~/.ssh/mykey.pub to one of the remote hosts using Ansible.
I have a very simple playbook containing this task:
- name: SSH-copy-key to target
  hosts: all
  tasks:
    - name: Copying local SSH key to target
      ansible.posix.authorized_key:
        user: '{{ ansible_user_id }}'
        state: present
        key: "{{ lookup('file', '~/.ssh/mykey.pub') }}"
Because the host is new, I am adding the --ask-pass option so that I am prompted for the SSH password on the first connection attempt.
However, I receive an error saying that I need to install the sshpass program.
The following is being returned:
➜ ansible ansible-playbook -i inventory.yaml ssh.yaml --ask-pass
SSH password:
PLAY [SSH-copy-key to target] ********************************************************************
TASK [Gathering Facts] ***************************************************************************
fatal: [debian]: FAILED! => {"msg": "to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program"}
PLAY RECAP ***************************************************************************************
debian : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
➜ ansible
I am executing Ansible from a MacBook. I also tried replacing the value of 'key' with the URL of my GitHub account's keys; the same error appears.
key: https://github.com/myuseraccount.keys
Any ideas?
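For context: the ssh connection plugin shells out to sshpass whenever a password is supplied, so the two usual fixes are installing sshpass on the macOS control node or switching to the paramiko connection plugin, which handles password authentication itself. A minimal sketch (the Homebrew tap name is a commonly used third-party tap, not an official package, so verify it before use):

# Option 1: install sshpass from a third-party Homebrew tap (assumption: tap name)
brew install hudochenkov/sshpass/sshpass

# Option 2: use the paramiko connection plugin, which does not need sshpass
ansible-playbook -i inventory.yaml ssh.yaml --ask-pass -c paramiko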


Ansible ssh fails with error: Data could not be sent to remote host

I have an Ansible playbook that executes a shell script on the remote host "10.8.8.88" as many times as the number of files provided as a parameter:
ansible-playbook test.yml -e files="file1,file2,file3,file4"
The playbook looks like this:
- name: Call ssh
  shell: ~./execute.sh {{ item }}
  with_items: "{{ files.split(',') }}"
This works fine for a smaller number of files, say 10 to 15. But I happen to have 145 files in the argument.
This is when the execution broke and the play failed mid-way with the error message below:
TASK [shell] *******************************************************************
[WARNING]: conditional statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found: entrycurrdb.stdout.find("{{ BASEPATH
}}/{{ vars[(item | splitext)[1].split('.')[1]] }}/{{ item | basename }}") == -1
and actualfile.stat.exists == True
[WARNING]: sftp transfer mechanism failed on [10.8.8.88]. Use ANSIBLE_DEBUG=1
to see detailed information
[WARNING]: scp transfer mechanism failed on [10.8.8.88]. Use ANSIBLE_DEBUG=1
to see detailed information
fatal: [10.8.8.88]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"10.8.8.88\". Make sure this host can be reached over ssh: ", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
10.8.8.88 : ok=941 changed=220 unreachable=1 failed=0 skipped=145 rescued=0 ignored=0
localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I have the latest Ansible, and the "pipelining" and "ssh" settings in ansible.cfg are at their defaults.
I have the following questions:
How can I resolve the above issue?
I guess this could be due to a network issue. Is it possible to run an endless SSH ping to the remote server for testing purposes, to see when the ansible command line breaks? It would help me prove my case; a sample command that keeps pinging the remote over SSH is what I'm looking for.
Is it possible to force Ansible to retry the SSH connection a few times in case of such failures, so that it may connect during the retries? If so, I would appreciate knowing where and how that can be set in the playbook as a vars variable rather than in the ansible.cfg file.
https://docs.ansible.com/ansible/2.4/intro_configuration.html#retries
Something similar to:
vars:
  ansible_ssh_private_key_file: "{{ key1 }}"
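For both points, a minimal sketch. To keep probing the host over SSH until it stops answering (the user name is a placeholder):

while ssh -o ConnectTimeout=5 user@10.8.8.88 'echo alive'; do sleep 2; done

And, assuming a recent enough Ansible where the ssh connection plugin exposes its retries setting as the ansible_ssh_retries variable, retries can be set per play instead of in ansible.cfg:

- hosts: all
  vars:
    ansible_ssh_retries: 5   # assumption: retries option of the ssh connection plugin, set as a play variable
  tasks:
    - name: example task
      shell: echo ok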
Many Thanks !!

Ansible SSH error during play

I get a strange error with Ansible. The first role works fine, but when Ansible tries to execute the second one, it fails because of an SSH error.
Environment:
OS: CentOS 7
Ansible version: 2.2.1.0
Python version: 2.7.5
OpenSSH version: OpenSSH_6.6.1p1, OpenSSL 1.0.1e-fips 11 Feb 2013
Ansible command which is executed:
ansible-playbook -vvvv -i inventory/dev playbook_update_system.yml --limit "db[0]"
Playbook:
- name: "HUB Playbook | Updating system packages on {{ ansible_hostname }}"
hosts: release_first_half
roles:
- upgrade_system_package
- reboot_server
Role: upgrade_system_package:
- name: "upgrading CentOS system packages on {{ ansible_hostname }}"
shell: sudo puppet apply -e 'exec{"upgrade-package":command => "/usr/bin/yum clean all; /usr/bin/yum -y update;"}'
when: ansible_distribution == 'CentOS' and 'cassandra' not in group_names
Role: reboot_server:
- name: "reboot CentOS [{{ ansible_hostname }}] server"
shell: sudo puppet apply -e 'exec{"reboot-os":command => "/usr/sbin/reboot"}'
when: ansible_distribution == 'CentOS' and 'cassandra' not in group_names
Current behavior:
Connection to "db1" node and execute role "upgrade system packages" => OK
Try to connect to "db1" and execute role "reboot_server" => failed due to ssh.
Error message returned by Ansible:
fatal: [db1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /USR/newtprod/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 64994\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Control master terminated unexpectedly\r\nShared connection to db1 closed.\r\n",
"unreachable": true
}
I don't understand, because the previous role was executed successfully on this node. Moreover, we have a lot of playbooks which use the same inventory file and they work fine. I tried on another node too, but got the same result.
It's a simple and pretty well-known issue: the shutdown process causes the SSH daemon to quit, and this breaks the current SSH session (you get the "broken pipe" error). The server reboots properly, but the Ansible flow gets interrupted.
You need to add a delay to your shell command and run it with the async option, so that Ansible's SSH session can finish before the daemon is killed.
shell: sleep 5; sudo puppet apply -e 'exec{"reboot-os":command => "/usr/sbin/reboot"}'
async: 1   # must be non-zero for fire-and-forget; async: 0 would run the task synchronously
poll: 0    # do not wait for the result
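On Ansible 2.7 and newer there is also a built-in reboot module that reboots the host and waits for it to come back, which avoids the sleep/async workaround entirely; a minimal sketch (the when condition is copied from the role above):

- name: reboot CentOS server and wait for it to return
  reboot:
    reboot_timeout: 600   # seconds to wait for the host to come back
  become: yes
  when: ansible_distribution == 'CentOS' and 'cassandra' not in group_names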

Ansible provisioning ERROR! Using a SSH password instead of a key is not possible

I would like to provision my three nodes from the last one by using Ansible.
My host machine is Windows 10.
My Vagrantfile looks like:
Vagrant.configure("2") do |config|
(1..3).each do |index|
config.vm.define "node#{index}" do |node|
node.vm.box = "ubuntu"
node.vm.box = "../boxes/ubuntu_base.box"
node.vm.network :private_network, ip: "192.168.10.#{10 + index}"
if index == 3
node.vm.provision :setup, type: :ansible_local do |ansible|
ansible.playbook = "playbook.yml"
ansible.provisioning_path = "/vagrant/ansible"
ansible.inventory_path = "/vagrant/ansible/hosts"
ansible.limit = :all
ansible.install_mode = :pip
ansible.version = "2.0"
end
end
end
end
end
My playbook looks like:
---
# my little playbook
- name: My little playbook
  hosts: webservers
  gather_facts: false
  roles:
    - create_user
My hosts file looks like:
[webservers]
192.168.10.11
192.168.10.12
[dbservers]
192.168.10.11
192.168.10.13
[all:vars]
ansible_connection=ssh
ansible_ssh_user=vagrant
ansible_ssh_pass=vagrant
After executing vagrant up --provision I got the following error:
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node3: Running provisioner: setup (ansible_local)...
node3: Running ansible-playbook...
PLAY [My little playbook] ******************************************************
TASK [create_user : Create group] **********************************************
fatal: [192.168.10.11]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
fatal: [192.168.10.12]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
PLAY RECAP *********************************************************************
192.168.10.11 : ok=0 changed=0 unreachable=0 failed=1
192.168.10.12 : ok=0 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I extended my Vagrantfile with ansible.limit = :all and added [all:vars] to the hosts file, but still cannot get past the error.
Has anyone encountered the same issue?
Create a file ansible/ansible.cfg in your project directory (i.e. ansible.cfg in the provisioning_path on the target) with the following contents:
[defaults]
host_key_checking = false
This assumes your Vagrant box already has sshpass installed. That is unclear: the error message in your question suggests it was installed (otherwise it would read "ERROR! to use the 'ssh' connection type with passwords, you must install the sshpass program"), yet in your own answer you install it explicitly (sudo apt-get install sshpass), as if it were not.
I'm using Ansible version 2.6.2 and the host_key_checking = false solution doesn't work for me.
Adding the environment variable export ANSIBLE_HOST_KEY_CHECKING=False skips the fingerprint check.
This error can also be solved by simply exporting the ANSIBLE_HOST_KEY_CHECKING variable:
export ANSIBLE_HOST_KEY_CHECKING=False
source: https://github.com/ansible/ansible/issues/9442
This SO post gave the answer.
I just extended the known_hosts file on the machine that is responsible for the provisioning like this:
Snippet from my modified Vagrantfile:
...
if index == 3
  node.vm.provision :pre, type: :shell, path: "install.sh"
  node.vm.provision :setup, type: :ansible_local do |ansible|
    ...
My install.sh looks like:
# add web/database hosts to known_hosts (IP is defined in Vagrantfile)
ssh-keyscan -H 192.168.10.11 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.12 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.13 >> /home/vagrant/.ssh/known_hosts
chown vagrant:vagrant /home/vagrant/.ssh/known_hosts
# reload ssh in order to load the known hosts
/etc/init.d/ssh reload
I had a similar challenge when working with Ansible 2.9.6 on Ubuntu 20.04.
When I run the command:
ansible all -m ping -i inventory.txt
I get the error:
target | FAILED! => {
"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."
}
Here's how I fixed it:
When you install Ansible, it creates a file called ansible.cfg, which can be found in the /etc/ansible directory. Simply open the file:
sudo nano /etc/ansible/ansible.cfg
Uncomment this line to disable SSH host key checking:
host_key_checking = False
Now save the file and you should be fine.
Note: You could also add the host's fingerprint to your known_hosts file by SSHing into the server from your machine; this prompts you to save the host's fingerprint:
promisepreston@ubuntu:~$ ssh myusername@192.168.43.240
The authenticity of host '192.168.43.240 (192.168.43.240)' can't be established.
ECDSA key fingerprint is SHA256:9Zib8lwSOHjA9khFkeEPk9MjOE67YN7qPC4mm/nuZNU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.43.240' (ECDSA) to the list of known hosts.
myusername@192.168.43.240's password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-53-generic x86_64)
That's all.
I hope this helps
Run the command below; it resolved my issue:
export ANSIBLE_HOST_KEY_CHECKING=False && ansible-playbook -i
All the provided solutions require changing a global config file or adding an environment variable, which creates problems when onboarding new people.
Instead, you can add the following variable to your inventory or host vars:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
Adding ansible_ssh_common_args='-o StrictHostKeyChecking=no' to either your inventory, like:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
[all:children]
servers
[servers]
host1
or to a single host:
[servers]
host1 ansible_ssh_common_args='-o StrictHostKeyChecking=no'
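The same setting can also live in a YAML group_vars file, which keeps it inside the project rather than in a global config or an environment variable (the file path is illustrative):

# group_vars/all.yml
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'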

Ansible ping to remote host works on local connection but not otherwise

I am a complete Ansible newbie, so apologies in advance!
I am trying to run an Ansible playbook whose role is to enable file sharing/transfer/synchronization between programs I build locally and on a remote machine (as you can guess, I didn't write the playbook).
My issue is that I cannot ping the remote host unless I use --connection=local. I can, however, SSH to the remote host. When I run the playbook, it throws the error:
host1 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true}
If I do
ansible-playbook cvmfs.yml --connection=local
then I don't get the ssh error, but the playbook can't do anything, since I suspect the connection should be other than local for it to run.
For further information, here is my /etc/ansible/hosts file:
[group_name]
host1 ansible_ssh_host=lengau.chpc.ac.za
I also have a /etc/ansible/host_vars entry giving my username on that machine.
Any help on this issue would be deeply appreciated!
In answer to the comments: when I run ansible-playbook -vvv cvmfs.yml, I get the output:
PLAYBOOK: cvmfs.yml ************************************************************
2 plays in /home/testuser/Documents/DevOps-master/Ansible/cvmfs.yml
PLAY [Enable CVMFS] ************************************************************
TASK [setup] *******************************************************************
Using module file /usr/lib/python2.6/site-packages/ansible-2.2.0-py2.6.egg/ansible/modules/core/system/setup.py
<lengau.chpc.ac.za> ESTABLISH SSH CONNECTION FOR USER: khenninger
<lengau.chpc.ac.za> SSH: EXEC ssh -q -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/testuser/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=khenninger -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r lengau.chpc.ac.za '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468582573.96-15730857177484 `" && echo ansible-tmp-1468582573.96-15730857177484="` echo $HOME/.ansible/tmp/ansible-tmp-1468582573.96-15730857177484 `" ) && sleep 0'"'"''
fatal: [host1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
to retry, use: --limit @/home/testuser/Documents/DevOps-master/Ansible/cvmfs.retry
PLAY RECAP *********************************************************************
host1 : ok=0 changed=0 unreachable=1 failed=0
In response to the other question:
I set up the private keys like this:
On my "home" machine I now have a file /home/testuser/.ssh/id_rsa, which contains the private key I obtained via
ssh-keygen -t rsa.
This private key is stored in the directory /home/user/.ssh/ on the remote machine as well.
As far as I can gather, that was the right thing to do.
I still get the same issues as above when I run ansible-playbook or when I ping.
And to add some weirdness, all of this only happens if I am root. If I am a normal user on my home machine, the ssh works fine, and the playbook runs on the local connection with a new error message, as below:
ansible-playbook cvmfs.yml --connection=local
PLAY [Enable CVMFS] ************************************************************
TASK [setup] *******************************************************************
ok: [196.24.44.83]
ok: [host1]
TASK [Inform the team] *********************************************************
fatal: [host1]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'slack_token' is undefined\n\nThe error appears to have been in '/home/testuser/Documents/DevOps-master/Ansible/cvmfs.yml': line 7, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n pre_tasks:\n - name: Inform the team\n ^ here\n"}
fatal: [196.24.44.83]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'slack_token' is undefined\n\nThe error appears to have been in '/home/testuser/Documents/DevOps-master/Ansible/cvmfs.yml': line 7, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n pre_tasks:\n - name: Inform the team\n ^ here\n"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @cvmfs.retry
PLAY RECAP *********************************************************************
196.24.44.83 : ok=1 changed=0 unreachable=0 failed=1
host1 : ok=1 changed=0 unreachable=0 failed=1
There is something seriously wrong somewhere on my local machine, I think...
I found the answer! I feel both elated and very dumb (blush)...
The problem was fixed by (after doing all the steps above) running:
ansible-playbook cvmfs.yml --ask-pass
After which point it runs fine. The output of
ansible all -m ping --ask-pass
was then "success" in all cases, not just over the local network. And the playbook runs fine. Yay!

Ansible - Define Inventory at run time

I am a little new to Ansible, so bear with me if my questions are a bit basic.
Scenario:
I have a few groups of remote hosts, such as [EPCs], [Clients] and [Testers].
I am able to configure them just the way I want them to be.
Problem:
I need to write a playbook, which when runs, asks the user for the inventory at run time.
As an example when a playbook is run the user should be prompted in the following way:
"Enter the number of EPCs you want to configure"
"Enter the number of clients you want to configure"
"Enter the number of testers you want to configure"
What should happen:
Now, for instance, the user enters 2, 5 and 8 respectively.
The playbook should then only address the first 2 nodes in the group [EPCs], the first 5 nodes in the group [Clients] and the first 8 nodes in the group [Testers].
I don't want to create a large number of sub-groups. For instance, if I have 20 EPCs, I don't want to define 20 groups for them; I want a somewhat dynamic inventory, which should automatically configure the machines according to the user input at run time, using the vars_prompt option or something similar.
Let me post a partial part of my playbook for better understanding of what is to happen:
---
- hosts: epcs # Now this is the part where I need a lot of flexibility
  vars_prompt:
    name: "what is your name?"
    quest: "what is your quest?"
  gather_facts: no
  tasks:
    - name: Check if path exists
      stat: path=/home/khan/Desktop/tobefetched/file1.txt
      register: st
    - name: It exists
      debug: msg='Path existence verified!'
      when: st.stat.exists
    - name: It doesn't exist
      debug: msg="Path does not exist"
      when: st.stat.exists == false
    - name: Copy file2 if it exists
      fetch: src=/home/khan/Desktop/tobefetched/file2.txt dest=/home/khan/Desktop/fetched/ flat=yes
      when: st.stat.exists
    - name: Run remotescript.sh and save the output of script to output.txt on the Desktop
      shell: cd /home/imran/Desktop; ./remotescript.sh > output.txt
    - name: Find and replace a word in a file placed on the remote node using variables
      shell: cd /home/imran/Desktop/tobefetched; sed -i 's/{{name}}/{{quest}}/g' file1.txt
      tags:
        - replace
@gli I tried your solution. I have a group in my inventory named test with two nodes in it. When I enter 0..1 I get:
TASK: [echo sequence] *********************************************************
changed: [vm2] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix0)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix1)
Similarly when I enter 1..2 I get:
TASK: [echo sequence] *********************************************************
changed: [vm2] => (item=some_prefix1)
changed: [vm1] => (item=some_prefix1)
changed: [vm2] => (item=some_prefix2)
changed: [vm1] => (item=some_prefix2)
Likewise, when I enter 4..5 (nodes that are not even present in the inventory), I get:
TASK: [echo sequence] *********************************************************
changed: [vm1] => (item=some_prefix4)
changed: [vm2] => (item=some_prefix4)
changed: [vm1] => (item=some_prefix5)
changed: [vm2] => (item=some_prefix5)
Any help would be really appreciated. Thanks!
You should use vars_prompt for getting information from the user, add_host for updating hosts dynamically, and with_sequence for loops:
$ cat aaa.yml
---
- hosts: localhost
  gather_facts: False
  vars_prompt:
    - name: range
      prompt: Enter range of EPCs (e.g. 1..5)
      private: False
      default: "1"
  pre_tasks:
    - name: Set node id variables
      set_fact:
        start: "{{ range.split('..')[0] }}"
        stop: "{{ range.split('..')[-1] }}"
    - name: "Add hosts:"
      add_host: name="host_{{item}}" groups=just_created
      with_sequence: "start={{start}} end={{stop}}"

- hosts: just_created
  gather_facts: False
  tasks:
    - name: echo sequence
      shell: echo "cmd"
The output will be:
$ ansible-playbook aaa.yml -i 'localhost,'
Enter range of EPCs (e.g. 1..5) [1]: 0..1
PLAY [localhost] **************************************************************
TASK: [Set node id variables] *************************************************
ok: [localhost]
TASK: [Add hosts:] ************************************************************
ok: [localhost] => (item=0)
ok: [localhost] => (item=1)
PLAY [just_created] ***********************************************************
TASK: [echo sequence] *********************************************************
fatal: [host_0] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
fatal: [host_1] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/gli/aaa.retry
host_0 : ok=0 changed=0 unreachable=1 failed=0
host_1 : ok=0 changed=0 unreachable=1 failed=0
localhost : ok=2 changed=0 unreachable=0 failed=0
Here it failed because host_0 and host_1 are unreachable; for you it will work fine.
By the way, I used the more powerful concept of a "range of nodes". If you don't need it, it is quite simple to hard-code "start=0" and ask only for the "stop" value in the prompt.
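If the goal is to address the first N nodes of an existing group such as [EPCs] rather than generated host names, the same add_host pattern can loop over a slice of the real group instead. A minimal sketch (the group name EPCs and the prompt wording come from the question; the rest is illustrative):

- hosts: localhost
  gather_facts: False
  vars_prompt:
    - name: epc_count
      prompt: Enter the number of EPCs you want to configure
      private: False
  tasks:
    - name: Add the first N EPCs to a temporary group
      add_host: name="{{ item }}" groups=selected_epcs
      with_items: "{{ groups['EPCs'][:(epc_count | int)] }}"

- hosts: selected_epcs
  gather_facts: False
  tasks:
    - name: echo sequence
      shell: echo "cmd"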
I don't think you can define an inventory at run time. One thing you can do is write a wrapper script around Ansible which first prompts the user for the hosts and then dynamically builds an ansible-playbook command.
I would prefer doing this in Python, but you can use any language of your choice.
$ cat ansible_wrapper.py
import ConfigParser
import os

nodes = ''
inv = {}

# one count per inventory group, in the same order as the sections in 'hosts'
hosts = raw_input("Enter hosts: ")
hosts = hosts.split(",")

config = ConfigParser.ConfigParser(allow_no_value=True)
config.readfp(open('hosts'))
sections = config.sections()

# map each group name to the requested number of hosts
for i in range(len(sections)):
    inv[sections[i]] = hosts[i]

# collect the first N host names of each group into a semicolon-separated string
for key, value in inv.items():
    for i in range(int(value)):
        nodes = nodes + config.items(key)[i][0] + ";"

command = 'ansible-playbook -i hosts myplaybook.yml -e "nodes=%s"' % (nodes)
print "Running command: ", command
os.system(command)
Note: I have only tested this script with Python 2.7.
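For illustration, an inventory layout the wrapper could parse (group and host names are made up); entering 2,1,2 at the prompt would pick the first two EPCs, one client and two testers and pass them to the playbook in the nodes extra variable:

$ cat hosts
[EPCs]
epc1
epc2
epc3

[Clients]
client1
client2

[Testers]
tester1
tester2

$ python ansible_wrapper.py
Enter hosts: 2,1,2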