Context: I'm trying to automate the provisioning of a fresh new server, but when a new machine is spawned and my Ansible playbook is run against it from my provisioning server, the usual message pops up:
The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:QF/AyFhYXaz5bjZ1O+kvceoOjBzmI8M1PYmg3lukYmE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I am aware this question has already been answered a couple of times, but I do not want to add this line to my .cfg file or pass the corresponding argument when I launch an ansible-playbook command.
Problem: So this answer came to my attention: https://stackoverflow.com/a/54735937/18647199
I copy-pasted the two tasks into my playbook, and when they run by themselves the script works properly, skipping the aforementioned prompt (even though it also skips it on one server to which I have yet to make the first connection). See:
TASK [Check known_hosts for 192.168.1.14] **************************************
ok: [192.168.1.16 -> localhost]
ok: [192.168.1.14 -> localhost]
ok: [192.168.1.25 -> localhost]
TASK [Ignore host key for 192.168.1.14 on first run] ***************************
skipping: [192.168.1.14]
skipping: [192.168.1.16]
skipping: [192.168.1.25]
PLAY RECAP *********************************************************************
192.168.1.14 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.1.16 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.1.25 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
But if I add just one more task to it, the authenticity prompt I'm trying to skip appears again.
p.s. using OpenSSH, latest current version.
What I'm trying to run:
---
#all
- hosts: all
  #connection: local
  become: true
  gather_facts: false # otherwise the ssh prompt appears
  tasks:
    - name: Check known_hosts
      local_action: shell ssh-keygen -F "{{ inventory_hostname }}"
      register: is_known
      failed_when: false
      changed_when: false
      ignore_errors: yes

    - name: debug message
      debug:
        msg: the "{{ inventory_hostname }}" was tested with output "{{ is_known }}"

    - name: Ignore host key for "{{ inventory_hostname }}" on first run
      when: is_known.rc == 1
      set_fact:
        ansible_ssh_common_args: '-o StrictHostKeyChecking=no'

    - name: Bootstrap check
      stat:
        path: /home/bot/bootstrapped-ok
      register: bootstrap_result

[..] more code
Debug output:
ansible-playbook debug-bootstrap.yml
PLAY [all] *********************************************************************
TASK [Check known_hosts] *******************************************************
ok: [192.168.1.16 -> localhost]
ok: [192.168.1.14 -> localhost]
ok: [192.168.1.25 -> localhost]
TASK [debug message] ***********************************************************
ok: [192.168.1.14] => {
"msg": "the \"192.168.1.14\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.14\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.940041', 'end': '2022-04-02 12:30:50.943287', 'delta': '0:00:00.003246', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
ok: [192.168.1.16] => {
"msg": "the \"192.168.1.16\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.16\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.937097', 'end': '2022-04-02 12:30:50.941015', 'delta': '0:00:00.003918', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
ok: [192.168.1.25] => {
"msg": "the \"192.168.1.25\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.25\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.978944', 'end': '2022-04-02 12:30:50.982119', 'delta': '0:00:00.003175', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
TASK [Ignore host key for "192.168.1.14" on first run] *************************
skipping: [192.168.1.14]
skipping: [192.168.1.16]
skipping: [192.168.1.25]
TASK [Bootstrap check] *********************************************************
The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:QF/AyFhYXaz5bjZ1O+kvceoOjBzmI8M1PYmg3lukYmE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? ok: [192.168.1.16]
ok: [192.168.1.14]
So it seems like the command shell ssh-keygen -F "{{ inventory_hostname }}" isn't doing what it's supposed to do, at least not in the same way as when it is launched from a terminal.
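Judging by the rc: 255 and the stderr in the debug output above, ssh-keygen -F appears to exit with 255 when the known_hosts file itself is missing, rather than with 1 as it does for a merely unknown host, so the when: is_known.rc == 1 condition never matches. A hypothetical terminal transcript matching the output above, on a controller with no ~/.ssh/known_hosts yet:

shell> ssh-keygen -F 192.168.1.25
do_known_hosts: hostkeys_foreach failed: No such file or directory
shell> echo $?
255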
Question: Does anyone know how to implement that "one-time skip", or have a better way to do this for fully automated provisioning / deployment?
(I tried to create a single .yml file with scarce results; I hit a wall and don't have many ideas left on how to continue with fully automated provisioning.)
I just added my answer to How to ignore ansible SSH authenticity checking?, which lists lots of options.
This is what we are using in the inventory file for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key when connecting to the host for the first time):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'
And this is what we have for temporary hosts (in the end this will ignore the host key entirely):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
There is also an environment variable, or you can add it to a group/host variables file. There's no need to have it in the inventory - it was just convenient in our case.
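For example, a one-off run can disable the check entirely via Ansible's standard environment setting (shown here with the playbook name from the question above):

shell> ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook debug-bootstrap.yml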
Maybe this could help?
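Another option, if you'd rather keep host key checking strict, is to pre-populate the controller's known_hosts before the first real connection. A minimal sketch, assuming the controller can reach every host and you accept the keys on first scan (trust-on-first-use):

- name: Add each host's key to the controller's known_hosts
  known_hosts:
    name: "{{ inventory_hostname }}"
    key: "{{ lookup('pipe', 'ssh-keyscan -t ecdsa ' ~ inventory_hostname) }}"
    state: present
  delegate_to: localhost
  become: false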
My issue is that I have one server where the sudoers entry for the ansible user looks like this:
ansible ALL=(root) NOPASSWD: /usr/bin/su - root
Hence, the only way to switch to the root user is:
sudo su - root
When I try to run the below ansible playbook:
---
- name: Configure Local Repo server address
  hosts: lab
  remote_user: ansible
  become: yes
  become_user: root
  become_method: runas
  tasks:
    - name: test whoami
      become: yes
      shell:
        cmd: whoami
      register: whoami_output

    - debug: var=whoami_output

    - name: Deploy local.repo file to the hosts
      become: yes
      copy:
        src: /etc/ansible/files/local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: ansible
        group: ansible
        mode: 0644
        backup: yes
      register: deploy_file_output

    - debug: var=deploy_file_output
I got the following error:
ansible-playbook --private-key /etc/ansible/keys/ansible_key /etc/ansible/playbooks/local_repo_provisioning.yml
PLAY [Configure Local Repo server address] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12]
TASK [test whoami] *****************************************************************************************************************************************************************************************************************************
changed: [10.175.65.12]
TASK [debug] ***********************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12] => {
"whoami_output": {
"changed": true,
"cmd": "whoami",
"delta": "0:00:00.003301",
"end": "2023-01-15 17:53:56.312715",
"failed": false,
"msg": "",
"rc": 0,
"start": "2023-01-15 17:53:56.309414",
"stderr": "",
"stderr_lines": [],
"stdout": "ansible",
"stdout_lines": [
"ansible"
]
}
}
TASK [Deploy local.repo file to the hosts] *****************************************************************************************************************************************************************************************************
fatal: [10.175.65.12]: FAILED! => {"changed": false, "checksum": "2356deb90d20d5f31351c719614d5b5760ab967d", "msg": "Destination /etc/yum.repos.d not writable"}
PLAY RECAP *************************************************************************************************************************************************************************************************************************************
10.175.65.12 : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
When I tried to use become_method: sudo I got the "Missing sudo password" message. Further, when I tried become_method: su I got the "Timeout (12s) waiting for privilege escalation prompt:" message.
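For reference, this is roughly what Ansible builds on the target for each method, as far as I can tell (simplified sketches, not the exact generated one-liners):

become_method: sudo  ->  sudo -H -S -n -u root /bin/sh -c '<module command>'
become_method: su    ->  su root -c '/bin/sh -c "<module command>"'
become_method: runas ->  a Windows-only method; not applicable to Linux targets

Neither of the first two matches the only command allowed by that sudoers rule, /usr/bin/su - root, which would explain the password prompts.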
All in all, could someone explain how Ansible runs commands depending on the become_method that is set? Is there a way to switch to the root user with that kind of sudoers configuration?
Thanks in advance!
I want to overwrite some variables in my playbook file, defined as "vars_prompt", from the inventory file for a host. If I understand it correctly, Ansible shouldn't prompt for variables that were already set beforehand; however, it still prompts for them when I try to execute the playbook.
How can I overwrite the "vars_prompt" variables from the inventory, or is this not possible because of Ansible's variable precedence rules?
Example:
playbook.yml
---
- name: Install Gateway
  hosts: all
  become: yes

  vars_prompt:
    - name: "hostname"
      prompt: "Hostname"
      private: no
...
inventory.yml
---
all:
  children:
    gateways:
      hosts:
        gateway:
          ansible_host: 192.168.1.10
          ansible_user: user
          hostname: "gateway-name"
...
Q: "If I understand it correctly, Ansible shouldn't prompt for the variables if they were already set before, however, it still prompts for the variables when I try to execute the playbook."
A: You're wrong. Ansible won't prompt for variables defined by the command line --extra-vars. Quoting from Interactive input: prompts:
Prompts for individual vars_prompt variables will be skipped for any variable that is already defined through the command line --extra-vars option, ...
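One consequence: supplying the variable as an extra var on the command line will skip the prompt, e.g. with the playbook and value from the question:

shell> ansible-playbook playbook.yml -e "hostname=gateway-name"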
You can't overwrite vars_prompt variables from the inventory. See Understanding variable precedence. Inventory variables (3.-9.) are lower precedence than play vars_prompt (13.). The precedence of extra vars is 22.
Use the module pause to ask for the hostname if any variable is not defined. For example, the inventory
shell> cat hosts
host_1
host_2
and the playbook
- hosts: all
  gather_facts: false
  vars:
    hostnames: "{{ ansible_play_hosts_all|
                   map('extract', hostvars, 'hostname')|
                   list }}"
    hostnames_undef: "{{ hostnames|from_yaml|
                         select('eq', 'AnsibleUndefined')|
                         length > 0 }}"
  tasks:
    - debug:
        msg: |
          hostnames: {{ hostnames }}
          hostnames_undef: {{ hostnames_undef }}
      run_once: true

    - pause:
        prompt: "Hostname"
      register: out
      when: hostnames_undef
      run_once: true

    - set_fact:
        hostname: "{{ out.user_input }}"
      when: hostname is not defined

    - debug:
        var: hostname
gives
shell> ansible-playbook pb.yml
PLAY [all] ************************************************************************************
TASK [debug] **********************************************************************************
ok: [host_1] =>
msg: |-
hostnames: [AnsibleUndefined, AnsibleUndefined]
hostnames_undef: True
TASK [pause] **********************************************************************************
[pause]
Hostname:
gw.example.com^Mok: [host_1]
TASK [set_fact] *******************************************************************************
ok: [host_1]
ok: [host_2]
TASK [debug] **********************************************************************************
ok: [host_1] =>
hostname: gw.example.com
ok: [host_2] =>
hostname: gw.example.com
PLAY RECAP ************************************************************************************
host_1: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host_2: ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The playbook won't overwrite variables defined in the inventory. For example
shell> cat hosts
host_1
host_2 hostname=gw2.example.com
gives
TASK [debug] **********************************************************************************
ok: [host_1] =>
hostname: gw.example.com
ok: [host_2] =>
hostname: gw2.example.com
I don't know if you can stop the prompts, but you can set a default value directly in vars_prompt. This way you do not need to type "gateway-name" every time.
vars_prompt:
  - name: "hostname"
    prompt: "Hostname"
    private: no
    default: "gateway-name"
Source: https://docs.ansible.com/ansible/latest/user_guide/playbooks_prompts.html
I wish to capture, in a variable sshreachable, whether each host in group all_hosts is reachable over SSH or not.
I wrote the playbook below for this.
- name: Play 3- check telnet nodes
  hosts: localhost
  ignore_unreachable: yes
  tasks:
    - name: Check all port numbers are accessible from current host
      include_tasks: innertelnet.yml
      with_items: "{{ groups['all_hosts'] }}"
cat innertelnet.yml
---
- name: Check ssh connectivity
  block:
    - raw: "ssh -o BatchMode=yes root@{{ item }} echo success"
      ignore_errors: yes
      register: sshcheck

    - debug:
        msg: "SSHCHECK variable:{{ sshcheck }}"

    - set_fact:
        sshreachable: 'SSH SUCCESS'
      when: sshcheck.unreachable == 'false'

    - set_fact:
        sshreachable: 'SSH FAILED'
      when: sshcheck.unreachable == 'true'

    - debug:
        msg: "INNERSSH1: {{ sshreachable }}"
Unfortunately, I get an error like the one below:
Output:
TASK [raw] *********************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 10.9.9.126 closed.", "skip_reason": "Host localhost is unreachable", "unreachable": true}
TASK [debug] ***********************************************************************************************************************************************************
task path:
ok: [localhost] => {
"msg": "SSHCHECK variable:{'msg': u'Failed to connect to the host via ssh: Shared connection to 10.9.9.126 closed.', 'unreachable': True, 'changed': False}"
}
TASK [set_fact] ****************************************************************
skipping: [localhost]
TASK [set_fact] ****************************************************************
skipping: [localhost]
TASK [debug] *******************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'sshreachable' is undefined\n\nThe error appears to be in '/app/playbook/checkssh/innertelnet.yml': line 45, column 10, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
PLAY RECAP *********************************************************************
10.0.116.194 : ok=101 changed=1 unreachable=9 failed=0 skipped=12 rescued=0 ignored=95
localhost : ok=5 changed=0 unreachable=1 failed=1 skipped=4 rescued=0 ignored=0
Can you please suggest changes to my code to get this to work?
The error seems to indicate that the sshreachable variable is not getting set because the when: conditions do not match, i.e. sshcheck.unreachable might not be something returned by raw.
For this purpose the command module should be enough, and we can evaluate the result of the command in set_fact.
You could do something like:
- block:
    - command: ssh -o BatchMode=yes user@host1 echo success
      ignore_errors: yes
      register: sshcheck

    - set_fact:
        sshreachable: "{{ sshcheck is success }}"

    - debug:
        msg: "Host1 reachable: {{ sshreachable | string }}"
Update:
The raw module seems to work the same way. Example (including @mdaniel's valuable input):
- block:
    - raw: ssh -o BatchMode=yes user@host1 echo success
      ignore_errors: yes
      register: sshcheck

    - set_fact:
        sshreachable: SSH SUCCESS
      when: sshcheck is success

    - set_fact:
        sshreachable: SSH FAILED
      when: sshcheck is failed

    - debug:
        msg: "Host1 reachable: {{ sshreachable }}"
I wish to obtain the value of the fdet_APP variable from Play 2 in Play 3.
Below is my playbook, which can be used as a test case:
- name: "Play 1"
hosts: localhost
tasks:
- add_host: name={{ item }}
groups=dest_nodes
ansible_user={{ USER }}
with_items: "{{ Dest_IP.split(',') }}"
- name: "Play 2"
hosts: dest_nodes
user: "{{ USER }}"
tasks:
- set_fact:
fdet_APP: "Yellow"
- name: "Play 3"
hosts: localhost
user: "{{ USER }}"
vars:
dbfiledet: "{{ hostvars['dest_nodes']['fdet_APP'] }}"
tasks:
- debug: msg="{{ dbfiledet.stdout }}"
I get the below error for my attempt.
Playbook run command:
ansible-playbook variabletest.yml -e "USER=user1 Dest_IP=10.17.44.26,10.17.54.26"
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Play 1] ******************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [add_host] ****************************************************************
changed: [localhost] => (item=10.17.44.26)
changed: [localhost] => (item=10.17.54.26)

PLAY [Play 2] ******************************************************************

TASK [Gathering Facts] *********************************************************
ok: [10.17.54.26]
ok: [10.17.44.26]

TASK [set_fact] ****************************************************************
ok: [10.17.44.26]
ok: [10.17.54.26]

PLAY [Play 3] ******************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [debug] *******************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['dest_nodes']\" is undefined\n\nThe error appears to be in 'variabletest.yml': line 36, column 6, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug: msg=\"{{ dbfiledet.stdout }}\"\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"}

PLAY RECAP *********************************************************************
10.17.44.26 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.17.54.26 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost   : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I'm on the latest version of Ansible, with Python 2.7.5.
Can someone suggest what is wrong and how I can get the value of the variable in Play 3, please?
hostvars are tied to a single Ansible-managed host, not to an inventory group. Try running debug: var=hostvars to see what I mean. In your case dest_nodes is an inventory group, not a host.
If you just want to pull the variable from any host in the group, try:
- debug:
    msg: "{{ hostvars[groups.dest_nodes|first]['fdet_APP'] }}"
If you are looking to parse the value for all hosts in the group, then you'll need to implement either a loop or a json_query filter; see the sketch below.
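For example, the loop variant could look something like this (a minimal sketch of my own, not from the original thread; it prints fdet_APP for every host in the group):

- debug:
    msg: "{{ hostvars[item]['fdet_APP'] }}"
  loop: "{{ groups['dest_nodes'] }}"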
As the subject says. I have some host variables defined in my hosts inventory file. How do I access them in my playbook?
Here is an example. Based on all my research, I was expecting foo and bar to be part of hostvars. I can put host-specific variables in separate var files, but I would love to keep them in my inventory file, "attached" to a host. I don't want to use them in templates.
ansible version: 1.3.2, ansible_distribution_version: 6.4
bash $
bash $ ansible --version
ansible 1.3.2
bash $
bash $ cat test_inv.ini
[foobar]
someHost foo="some string" bar=123
someOtherHost foo="some other string" bar=456
bash $
bash $ cat test.yml
---
- name: test variables...
  hosts: all
  vars:
    - some_junk: "1"
  # gather_facts: no # foo and bar are unavailable whether I gather facts or not.
  tasks:
    - debug: msg="hostvars={{hostvars}}"
    - debug: msg="vars={{vars}}"
    - debug: msg="groups={{groups}}"
    - debug: msg="some_junk={{some_junk}}"
    # - debug: msg="???? HOW DO I PRINT values of host specific variables foo and bar defined in inventory file ???"
bash $
bash $
bash $ ansible-playbook -i test_inv.ini test.yml
PLAY [test variables...] ******************************************************
GATHERING FACTS ***************************************************************
ok: [someHost]
TASK: [debug msg="hostvars={{hostvars}}"] *************************************
ok: [someHost] => {"msg": "hostvars={'someHost': {u'facter_operatingsystem': u'RedHat', u'facter_selinux_current_mode': u'enforcing', u'facter_hostname': u'someHost', 'module_setup': True, u'facter_memoryfree_mb': u'1792.70', u'ansible_distribution_version': u'6.4' // ...........snip...........// u'VMware IDE CDR10'}}"}
TASK: [debug msg="vars={{vars}}"] *********************************************
ok: [someHost] => {"msg": "vars={'some_junk': '1', 'delegate_to': None, 'changed_when': None, 'register': None, 'inventory_dir': '/login/sg219898/PPP/automation/ansible', 'always_run': False, 'ignore_errors': False}"}
TASK: [debug msg="groups={{groups}}"] *****************************************
ok: [someHost] => {"msg": "groups={'ungrouped': [], 'foobar': ['someHost'], 'all': ['someHost']}"}
TASK: [debug msg="some_junk=1"] ***********************************************
ok: [someHost] => {"msg": "some_junk=1"}
PLAY RECAP ********************************************************************
someHost : ok=5 changed=0 unreachable=0 failed=0
bash $
Doing the following should work:
- debug: msg="foo={{foo}}"
The foo variable will be evaluated in the context of the current host. Tested locally with ansible 1.3.4.
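If you ever need foo for a host other than the current one, indexing hostvars by the inventory name should also work (a small sketch, using the someOtherHost entry from the inventory above):

- debug: msg="other foo={{ hostvars['someOtherHost']['foo'] }}"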