How to detect unreachable target hosts in Ansible (SSH)

I want to capture, in a variable sshreachable, whether each target host in the group all_hosts is reachable or not.
I wrote the playbook below for this purpose.
- name: Play 3- check telnet nodes
  hosts: localhost
  ignore_unreachable: yes
  tasks:
    - name: Check all port numbers are accessible from current host
      include_tasks: innertelnet.yml
      with_items: "{{ groups['all_hosts'] }}"
cat innertelnet.yml
---
- name: Check ssh connectivity
  block:
    - raw: "ssh -o BatchMode=yes root@{{ item }} echo success"
      ignore_errors: yes
      register: sshcheck
    - debug:
        msg: "SSHCHECK variable:{{ sshcheck }}"
    - set_fact:
        sshreachable: 'SSH SUCCESS'
      when: sshcheck.unreachable == 'false'
    - set_fact:
        sshreachable: 'SSH FAILED'
      when: sshcheck.unreachable == 'true'
    - debug:
        msg: "INNERSSH1: {{ sshreachable }}"
Unfortunately, I get an error like the one below:
Output:
TASK [raw] *********************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 10.9.9.126 closed.", "skip_reason": "Host localhost is unreachable", "unreachable": true}
TASK [debug] ***********************************************************************************************************************************************************
task path:
ok: [localhost] => {
"msg": "SSHCHECK variable:{'msg': u'Failed to connect to the host via ssh: Shared connection to 10.9.9.126 closed.', 'unreachable': True, 'changed': False}"
}
TASK [set_fact] ****************************************************************
skipping: [localhost]
TASK [set_fact] ****************************************************************
skipping: [localhost]
TASK [debug] *******************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'sshreachable' is undefined\n\nThe error appears to be in '/app/playbook/checkssh/innertelnet.yml': line 45, column 10, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
PLAY RECAP *********************************************************************
10.0.116.194 : ok=101 changed=1 unreachable=9 failed=0 skipped=12 rescued=0 ignored=95
localhost : ok=5 changed=0 unreachable=1 failed=1 skipped=4 rescued=0 ignored=0
Can you please suggest changes to my code to get this to work?

The error indicates that the sshreachable variable is not getting set because neither when: condition matches, i.e. sshcheck.unreachable might not be something returned by raw. (Your debug output also shows unreachable as the boolean True, which would never equal the string 'true'.)
For this purpose, the command module should be enough, and we can evaluate the return code of the command in a set_fact.
You could do something like:
- block:
    - command: ssh -o BatchMode=yes user@host1 echo success
      ignore_errors: yes
      register: sshcheck
    - set_fact:
        sshreachable: "{{ sshcheck is success }}"
    - debug:
        msg: "Host1 reachable: {{ sshreachable | string }}"
Update:
The raw module seems to work the same way. Example (including @mdaniel's valuable input):
- block:
    - raw: ssh -o BatchMode=yes user@host1 echo success
      ignore_errors: yes
      register: sshcheck
    - set_fact:
        sshreachable: SSH SUCCESS
      when: sshcheck is success
    - set_fact:
        sshreachable: SSH FAILED
      when: sshcheck is failed
    - debug:
        msg: "Host1 reachable: {{ sshreachable }}"

Related

Overwrite vars_prompt variable in playbook with host variable from inventory in Ansible

I want to overwrite some variables in my playbook file from the inventory file for a host; the variables are defined under vars_prompt. If I understand it correctly, Ansible shouldn't prompt for the variables if they were already set before, however, it still prompts for the variables when I try to execute the playbook.
How can I overwrite the vars_prompt variables from the inventory, or is this not possible because of Ansible's variable precedence rules?
Example:
playbook.yml
---
- name: Install Gateway
  hosts: all
  become: yes
  vars_prompt:
    - name: "hostname"
      prompt: "Hostname"
      private: no
...
inventory.yml
---
all:
  children:
    gateways:
      hosts:
        gateway:
          ansible_host: 192.168.1.10
          ansible_user: user
          hostname: "gateway-name"
...
Q: "If I understand it correctly, Ansible shouldn't prompt for the variables if they were already set before, however, it still prompts for the variables when I try to execute the playbook."
A: You're wrong. Ansible won't prompt for variables defined by the command line --extra-vars. Quoting from Interactive input: prompts:
Prompts for individual vars_prompt variables will be skipped for any variable that is already defined through the command line --extra-vars option, ...
You can't overwrite vars_prompt variables from the inventory. See Understanding variable precedence: inventory variables (positions 3-9) have lower precedence than play vars_prompt (position 13). Extra vars have precedence 22.
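So, per the documentation quoted above, supplying the variable on the command line skips the prompt (the variable name here matches the vars_prompt in the question):
shell> ansible-playbook playbook.yml -e hostname=gateway-name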
Use the module pause to ask for the hostname if the variable is not defined for some host. For example, given the inventory
shell> cat hosts
host_1
host_2
and the playbook
- hosts: all
  gather_facts: false
  vars:
    hostnames: "{{ ansible_play_hosts_all|
                   map('extract', hostvars, 'hostname')|
                   list }}"
    hostnames_undef: "{{ hostnames|from_yaml|
                         select('eq', 'AnsibleUndefined')|
                         length > 0 }}"
  tasks:
    - debug:
        msg: |
          hostnames: {{ hostnames }}
          hostnames_undef: {{ hostnames_undef }}
      run_once: true
    - pause:
        prompt: "Hostname"
      register: out
      when: hostnames_undef
      run_once: true
    - set_fact:
        hostname: "{{ out.user_input }}"
      when: hostname is not defined
    - debug:
        var: hostname
gives
shell> ansible-playbook pb.yml
PLAY [all] ************************************************************************************
TASK [debug] **********************************************************************************
ok: [host_1] =>
msg: |-
hostnames: [AnsibleUndefined, AnsibleUndefined]
hostnames_undef: True
TASK [pause] **********************************************************************************
[pause]
Hostname:
gw.example.com^Mok: [host_1]
TASK [set_fact] *******************************************************************************
ok: [host_1]
ok: [host_2]
TASK [debug] **********************************************************************************
ok: [host_1] =>
hostname: gw.example.com
ok: [host_2] =>
hostname: gw.example.com
PLAY RECAP ************************************************************************************
host_1: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host_2: ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The playbook won't overwrite variables defined in the inventory. For example
shell> cat hosts
host_1
host_2 hostname=gw2.example.com
gives
TASK [debug] **********************************************************************************
ok: [host_1] =>
hostname: gw.example.com
ok: [host_2] =>
hostname: gw2.example.com
I don't know if you can suppress the prompt, but you can set a default value directly in vars_prompt. That way you don't need to type "gateway-name" every time.
vars_prompt:
  - name: "hostname"
    prompt: "Hostname"
    private: no
    default: "gateway-name"
Source: https://docs.ansible.com/ansible/latest/user_guide/playbooks_prompts.html

How to deal with multiple when condition for registered variable in ansible

I have a playbook with 3 (or more) raw tasks running sample commands like the ones below:
Playbook mytest.yml
- hosts: remotehost
  gather_facts: no
  tasks:
    - name: Execute command1
      raw: "ls -ltr"
      register: cmdoutput
      when: remcmd == "list"
    - name: Execute command2
      raw: "hostname"
      register: cmdoutput
      when: remcmd == "host"
    - name: Execute command3
      raw: "uptime"
      register: cmdoutput
      when: remcmd == "up"

- hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg: "Printing {{ hostvars['remotehost']['cmdoutput'] }}"
This is my inventory myhost.yml:
[remotehost]
myserver1
Here is how I run the playbook:
ansible-playbook -i myhost.yml mytest.yml -e remcmd="host"
PLAY [remotehost] ***************************************************************************************************************
TASK [Execute command1] *********************************************************************************************************
Thursday 06 October 2022 07:06:06 -0500 (0:00:00.013) 0:00:00.013 ******
skipping: [myserver1]
TASK [Execute command2] *********************************************************************************************************
Thursday 06 October 2022 07:06:06 -0500 (0:00:00.023) 0:00:00.036 ******
changed: [myserver1]
TASK [Execute command3] *********************************************************************************************************
Thursday 06 October 2022 07:06:06 -0500 (0:00:00.521) 0:00:00.557 ******
skipping: [myserver1]
PLAY [localhost] ****************************************************************************************************************
TASK [debug] ********************************************************************************************************************
Thursday 06 October 2022 07:06:06 -0500 (0:00:00.032) 0:00:00.590 ******
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['remotehost']\" is undefined\n\nThe error appears to be in '/home/wladmin/mytest.yml': line 22, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - debug:\n ^ here\n"}
PLAY RECAP **********************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
myserver1 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
My requirement: no matter what value is passed for remcmd, my localhost play should print the stdout_lines of cmdoutput.
Preliminary notes:
Using raw is evil.
Don't use raw except to install prerequisites (i.e. Python) on the target host. Then switch to modules, or at the very least command/shell.
If you still intend to use raw, go back to point 1 above
In case your forgot to go back to point 1: using raw is evil
Don't register several tasks with the same var name (the last one always wins, even if skipped). Don't create tasks you can avoid in the first place.
As an illustration of the above principles
- hosts: remotehost
  gather_facts: no
  vars:
    cmd_map:
      list: ls -ltr
      host: hostname
      up: uptime
  tasks:
    - name: Make sure remcmd is known
      assert:
        that: remcmd in cmd_map.keys()
        fail_msg: "remcmd must be one of: {{ cmd_map.keys() | join(', ') }}"
    - name: Execute command
      command: "{{ cmd_map[remcmd] }}"
      register: cmdoutput
    - name: Show entire result from above task
      debug:
        var: cmdoutput
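A quick way to exercise this, reusing the question's inventory (note the map key for uptime is "up", so any other value fails the assert):
shell> ansible-playbook -i myhost.yml mytest.yml -e remcmd=host
shell> ansible-playbook -i myhost.yml mytest.yml -e remcmd=uptime
The second invocation stops at the assert task, with the fail_msg listing the valid keys.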
my localhost play should print stdout_lines of cmdoutput
As far as I understand "How the debug module works", it can only print on the Control Node.
Therefore you could just remove three (3) lines in your example
- hosts: localhost
  gather_facts: no
  tasks:
and give it a try with
- hosts: remotehost
  gather_facts: no
  tasks:
    - name: Execute command1
      raw: "ls -ltr"
      register: cmdoutput
      when: remcmd == "list"
    - name: Execute command2
      raw: "hostname"
      register: cmdoutput
      when: remcmd == "host"
    - name: Execute command3
      raw: "uptime"
      register: cmdoutput
      when: remcmd == "up"
    - debug:
        msg: "Printing {{ cmdoutput }}"
and regardless of which task got executed, the result would be printed.
Apart from this answer about "How the debug module works", I'd recommend proceeding with Zeitounator's answer, since it addresses your possible use case more completely.

Ansible ssh error: mux_client_read_packet: read header failed: Broken pipe Received exit status from master

I have a script /wd/remoteuser/stopALL.sh on a remote host, i.e. 10.0.0.211; it takes 3 seconds to execute and has full permissions (775) for remoteuser.
Note: /wd/remoteuser/stopALL.sh does not exist on the host where ansible runs.
I wish to trigger the stop script on the remote host from my Ansible host.
Below is how I run my playbook:
ansible-playbook /app/playbook/ovs.yml -i /app/playbook/ovs.hosts -t stop -f 5 -e Environment=PROD -e Country=SRILANKA -vvvv
cat /app/playbook/ovs.yml
---
- name: Play 1- check for login and mount point
  hosts: "*{{ Country }}_{{ Environment }}"
  user: "{{ USER }}"
  any_errors_fatal: true
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no -o ConnectTimeout=90 -o ServerAliveInterval=50
    ansible_ssh_private_key_file: /app/ssh_keys/id_rsa
  gather_facts: false
  tasks:
    - name: Execute backup stop1 script
      tags: stop,restart
      script: "{{ stopscript }}"
      args:
        chdir: "{{ stopscript | dirname }}"
      register: stopscriptoutput
    - name: Debug stopscript
      tags: stop,restart
      debug:
        msg: "{{ stopscriptoutput.stdout }}"
cat /app/playbook/ovs.hosts
[APP_SRILANKA_PROD]
10.0.0.211 USER=remoteuser stopscript=/wd/remoteuser/stopALL.sh countrydet=SRILANKA evt=PROD
Output:
<10.0.0.211> (0, '', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256#libssh.org,diffie-hellman-group14-sha1,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,gss-gex-sha1-,gss-group14-sha1-]\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 190236\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
fatal: [10.0.0.211]: FAILED! => {
"changed": false,
"msg": "Could not find or access '/wd/remoteuser/stopALL.sh' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"
}
NO MORE HOSTS LEFT *****************************************************************************************************************************************************
PLAY RECAP *************************************************************************************************************************************************************
10.0.0.211 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I get this ssh read header failed: Broken pipe error even if I use the shell module as shown below.
- name: Execute backup stop1 script
  tags: stop
  shell: "sleep 90; {{ stopscript }}; sleep 90"
  register: stopscriptoutput
Kindly suggest how can I resolve the ssh broken pipe error and get the script to execute remotely?
Setting the proxy in an environment variable got it working.
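Independent of the proxy, note the "Could not find or access ... on the Ansible Controller" message in the output: the script module copies a script from the controller to the target before running it, while stopALL.sh exists only on the remote host. For a script that already lives on the target, a sketch like this (reusing the stopscript inventory variable) avoids that error:

- name: Execute stop script that already exists on the remote host
  tags: stop,restart
  command: "{{ stopscript }}"   # runs the remote file directly instead of copying it over
  args:
    chdir: "{{ stopscript | dirname }}"
  register: stopscriptoutput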

Ansible hostname and IP address

How can I use the values of both the hostname and the IP address from the hosts inventory file?
For example, I have only one host in the hosts file, named by its FQDN, which is registered on the DNS server.
I tried some vars, but I always get the hostname; I need both of them.
Output of request to DNS server:
nslookup host1.dinamarca.com
Server: 10.10.1.1
Address: 10.10.1.1#53
Name: host1.dinamarca.com
Address: 192.168.1.10
Example hosts file (it contains only one host):
host1.dinamarca.com
I run Ansible with the command:
ansible-playbook --ask-pass -i hosts test.yml
My test.yml file:
---
- name: test1
  hosts: host1.dinamarca.com
  remote_user: usertest
  tasks:
    - name: show ansible_ssh_host
      debug:
        msg: "{{ ansible_ssh_host }}"
    - name: show inventary_hostname
      debug: var=inventory_hostname
    - name: show ansible_hostname
      debug: var=ansible_hostname
...
Output is:
TASK [show ansible_ssh_host] ****************************************************************************************************************************************
ok: [host1.dinamarca.com] => {
"msg": "host1.dinamarca.com"
}
TASK [show inventary_hostname] **************************************************************************************************************************************
ok: [host1.dinamarca.com] => {
"inventory_hostname": "host1.dinamarca.com"
}
TASK [show ansible_hostname] ****************************************************************************************************************************************
ok: [host1.dinamarca.com] => {
"ansible_hostname": "host1"
}
PLAY RECAP *************************************************************************************************************************************************************
host1.dinamarca.com : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
There is an Ansible fact called ansible_fqdn. If you need both the hostname and FQDN, you can have tasks like this:
tasks:
  - name: show ansible_ssh_host
    debug:
      msg: "{{ ansible_ssh_host }}"
  - name: show inventory_hostname
    debug:
      msg: "{{ inventory_hostname }}"
  - name: show ansible_hostname
    debug:
      msg: "{{ ansible_fqdn }}"

"Failed to connect to the host via ssh" error Ansible

I am trying to run the following playbook on Ansible:
- hosts: localhost
  connection: local
  remote_user: test
  gather_facts: no
  vars_files:
    - files/aws_creds.yml
    - files/info.yml
  tasks:
    - name: Basic provisioning of EC2 instance
      ec2:
        assign_public_ip: no
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        region: "{{ aws_region }}"
        image: "{{ standard_ami }}"
        instance_type: "{{ free_instance }}"
        key_name: "{{ ssh_keyname }}"
        count: 3
        state: present
        group_id: "{{ secgroup_id }}"
        wait: no
        #delete_on_termination: yes
        instance_tags:
          Name: Dawny33Template
      register: ec2
    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: launched
      with_items: "{{ ec2.instances }}"
    ## Here lies the SSH code
    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 60
        timeout: 320
        state: started
      with_items: "{{ ec2.instances }}"

- name: Configure instance(s)
  hosts: launched
  become: True
  gather_facts: True
  #roles:
  #  - my_awesome_role
  #  - my_awesome_test

- name: Terminate instances
  hosts: localhost
  connection: local
  tasks:
    - name: Terminate instances that were previously launched
      ec2:
        state: 'absent'
        instance_ids: '{{ ec2.instance_ids }}'
I am getting the following error:
TASK [setup] *******************************************************************
fatal: [52.32.183.176]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.32.183.176' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
fatal: [52.34.255.16]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.34.255.16' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
fatal: [52.34.253.51]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '52.34.253.51' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
My ansible.cfg file already has the following:
[defaults]
host_key_checking = False
Yet, the playbook run is failing. Can someone help me with what I am doing wrong?
The answer has to lie in:
Permission denied (publickey).
You got past host key checking - your problem is with authentication.
Are you intending to use key-based authentication? If so, does
ssh <host> -l <ansible_user>
work for you, or does it produce a password prompt?
Are you trying to use password authentication? If so, it looks like your node does not allow it.
Edit:
Adding -vvvv to your playbook run enables SSH debugging.
Is SSH set up properly? The logs indicate your public key isn't working.
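For example (the key path is an assumption; ec2-user is the default user on Amazon Linux AMIs):
shell> ssh -i ~/.ssh/mykey.pem ec2-user@52.32.183.176 -vvv
shell> ansible-playbook provision.yml -vvvv
If the manual ssh works but Ansible does not, point Ansible at the same key and user, e.g. with --private-key and -u on the command line, or ansible_ssh_private_key_file and remote_user in the inventory/playbook.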