When I run this Ansible playbook against GNS3, I get the error below. Can anyone please help me with it?
Hosts File
[ios]
172.20.10.55
[ios:vars]
ansible_network_os=ios
ansible_user=admin
ansible_password=cisco
ansible_become=yes
ansible_become_method=enable
Playbook
- name: multiple commands
  hosts: ios
  gather_facts: false
  connection: network_cli
  tasks:
    - name: configure ospf
      ios_config:
        lines:
          - configure terminal
          - 10 permit ip host 192.168.1.1 any log
        parents: ip access-list extended test
Error
TASK [configure ospf] **********************************************************
fatal: [172.20.10.55]: FAILED! => {"changed": false, "msg": "unable to elevate privilege to enable mode, at prompt [\nR1>] with error: failed to elevate privilege to enable mode still at prompt [\nR1>]"}
PLAY RECAP *********************************************************************
172.20.10.55 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
You need to set ansible_become_password.
https://docs.ansible.com/ansible/latest/network/user_guide/platform_ios.html
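For example, extending the [ios:vars] block from the question (a minimal sketch; the enable secret shown is a placeholder for whatever is configured on your device):
[ios:vars]
ansible_network_os=ios
ansible_user=admin
ansible_password=cisco
ansible_become=yes
ansible_become_method=enable
ansible_become_password=cisco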
Related
I'm running an Ansible playbook with several tasks and hosts. In this playbook I'm trying to rerun tasks on failed hosts. I'll try to reconstruct the situation:
Inventory:
[hostgroup_1]
host1 ansible_host=1.1.1.1
host2 ansible_host=1.1.1.2
[hostgroup_2]
host3 ansible_host=1.1.1.3
host4 ansible_host=1.1.1.4
The hosts from "hostgroup_1" are supposed to fail, so I can check the error-handling on the two hosts.
Playbook:
---
- name: firstplaybook
  hosts: all
  gather_facts: false
  connection: network_cli
  vars:
    - ansible_network_os: ios
  tasks:
    - name: sh run
      cisco.ios.ios_command:
        commands: show run
    - name: sh run
      cisco.ios.ios_command:
        commands: show run
As expected, the first two hosts (1.1.1.1 & 1.1.1.2) fail and are not considered for the second task. After looking through several Ansible documentation pages I found the meta clear_host_errors task. So I tried to run the playbook like this:
---
- name: firstplaybook
  hosts: all
  gather_facts: false
  connection: network_cli
  vars:
    - ansible_network_os: ios
  tasks:
    - name: sh run
      cisco.ios.ios_command:
        commands: show run
    - meta: clear_host_errors
    - name: sh run
      cisco.ios.ios_command:
        commands: show run
Sadly the meta task did not reset the hosts, and the playbook went on without considering the failed hosts again.
I would just like to know how to make Ansible consider failed hosts again within the same run, so I can carry on with them.
Thank y'all in advance
Regards, Lucas
Do you get any different results when using:
ignore_errors: true
or
ignore_unreachable: yes
with the first task?
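For example, a minimal sketch of the first task from the playbook above, with the keyword added at task level:
- name: sh run
  cisco.ios.ios_command:
    commands: show run
  ignore_errors: true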
Q: "How Ansible considers failed hosts in a run again?"
A: Use ignore_unreachable (New in version 2.7.). For example, in the play below the host test_99 is unreachable
- hosts: test_11,test_12,test_99
  gather_facts: false
  tasks:
    - ping:
    - debug:
        var: inventory_hostname
As expected, the debug task omits the unreachable host:
PLAY [test_11,test_12,test_99] ********************************************

TASK [ping] ***************************************************************
fatal: [test_99]: UNREACHABLE! => changed=false
  msg: 'Failed to connect to the host via ssh: ssh: Could not resolve
    hostname test_99: Name or service not known'
  unreachable: true
ok: [test_11]
ok: [test_12]

TASK [debug] ***************************************************************
ok: [test_11] =>
  inventory_hostname: test_11
ok: [test_12] =>
  inventory_hostname: test_12

PLAY RECAP *****************************************************************
If you set ignore_unreachable: true, the host is skipped in the failing task but included in the next one:
- hosts: test_11,test_12,test_99
  gather_facts: false
  tasks:
    - ping:
      ignore_unreachable: true
    - debug:
        var: inventory_hostname
PLAY [test_11,test_12,test_99] ********************************************

TASK [ping] ***************************************************************
fatal: [test_99]: UNREACHABLE! => changed=false
  msg: 'Failed to connect to the host via ssh: ssh: Could not resolve
    hostname test_99: Name or service not known'
  skip_reason: Host test_99 is unreachable
  unreachable: true
ok: [test_11]
ok: [test_12]

TASK [debug] ***************************************************************
ok: [test_11] =>
  inventory_hostname: test_11
ok: [test_12] =>
  inventory_hostname: test_12
ok: [test_99] =>
  inventory_hostname: test_99

PLAY RECAP *****************************************************************
In my Ansible run I am getting the following error:
PLAY [test hashi vault] ******************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vault.domain'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
ok: [192.168.1.200]
TASK [show bar] **************************************************************************************************************
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vault.domain'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
fatal: [192.168.1.200]: FAILED! => {"msg": "Incorrect sudo password"}
PLAY RECAP *******************************************************************************************************************
192.168.1.200 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I know that the password is correct, having done a debug, and the same password works when extracted from Vault using curl. This is the new code where I get the error:
---
- name: test hashi vault
  hosts: all
  remote_user: ec2-user
  tasks:
    - name: show bar
      systemd:
        state: restarted
        name: sssd.service
      async: 45
      become: yes
      become_method: sudo
This is what I'm running:
ansible-playbook -l 192.168.1.200 test.yml --private-key=/home/rehna/.ssh/testKeyPair.pem --vault-password-file /etc/ansible/ansible.vault -e @credentials
contents of credentials:
ansible_user: ec2-user
ansible_become_pass: "{{ lookup('hashi_vault', 'secret=secret/test/ec2_password auth_method=userpass username={{vault_user}} password={{vault_password}} url={{vault_url}}:{{vault_port}} validate_certs=false') }}"
hosts
[ec2]
192.168.1.200
[test_env]
192.168.1.200 remote_user=ec2-user
from /var/log/secure:
unix_chkpwd[30174]: password check failed for user (ec2-user)
sudo: pam_unix(sudo:auth): authentication failure; logname=ec2-user uid=1000 euid=0 tty=/dev/pts/4 ruser=ec2-user rhost= user=ec2-user
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [ec2-user]
It should look like this:
sudo: ec2-user : TTY=pts/4 ; PWD=/home/ec2-user ; USER=root ; COMMAND=/bin/passwd --stdin ec2-user
sudo: pam_unix(sudo:session): session opened for user root by ec2-user(uid=0)
sudo: pam_unix(sudo:session): session closed for user root
The data returned is a dict of key/value pairs. You need to extract the content from the return data provided by the lookup:
ec2_pass: "{{ lookup('hashi_vault', 'secret=secret/test/ec2_password auth_method=userpass username={{vault_user}} password={{vault_password}} url={{vault_url}}:{{vault_port}} validate_certs=false') }}"
ansible_become_pass: "{{ec2_pass.value}}"
I have the following playbook:
---
- name: Get Nokia Info
  hosts: LAB9ERIP008
  connection: local
  gather_facts: no
  tasks:
    - name: run show version command
      sros_command:
        commands: show version
      register: config
    - name: create backup of configuration
      copy:
        content: "{{config.stdout[0]}}"
        dest: "/home/dafe/scripts/ansible/backups/show_version_{{inventory_hostname}}.txt"
And when I run the playbook, it gives me the following error:
[dafe@CETPMGIP001 ansible]$ ansible-playbook nokia.yml -i myhostsfile
PLAY [Get Cisco Info] **************************************************************************************************************
TASK [run show version command] ****************************************************************************************************
fatal: [LAB9ERIP008]: FAILED! => {"msg": "paramiko: The authenticity of host '10.150.16.129' can't be established.\nThe ssh-rsa key fingerprint is fca0d4eb97414dc5b5a13fa552e5dd69."}
to retry, use: --limit @/home/dafe/scripts/ansible/nokia.retry
PLAY RECAP *************************************************************************************************************************
LAB9ERIP008 : ok=0 changed=0 unreachable=0 failed=1
I tried to put this var in myhostsfile:
ansible_ssh_private_key_file=/home/dafe/.ssh/known_hosts
But it continues to give the same error.
If I SSH manually to the host and add the key:
[dafe@CETPMGIP001 ansible]$ ssh dafernandes@10.150.16.129
The authenticity of host '10.150.16.129 (10.150.16.129)' can't be established.
RSA key fingerprint is SHA256:0YQYfLnRCQDZzpZ1+8ekW/Gks6mTxpI4xA56siaQUsM.
RSA key fingerprint is MD5:fc:a0:d4:eb:97:41:4d:c5:b5:a1:3f:a5:52:e5:dd:69.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.150.16.129' (RSA) to the list of known hosts.
TiMOS-C-16.0.R6 cpm/hops64 Nokia 7750 SR Copyright (c) 2000-2019 Nokia.
All rights reserved. All use subject to applicable license agreements.
Built on Wed Feb 27 14:42:05 PST 2019 by builder in /builds/c/160B/R6/panos/main
dafernandes@10.150.16.129's password:
And after that, running the playbook no longer gives the error:
[dafe@CETPMGIP001 ansible]$ ansible-playbook nokia.yml -i myhostsfile
PLAY [Get Cisco Info] **************************************************************************************************************
TASK [run show version command] ****************************************************************************************************
ok: [LAB9ERIP008]
TASK [create backup of configuration] **********************************************************************************************
ok: [LAB9ERIP008]
PLAY RECAP *************************************************************************************************************************
LAB9ERIP008 : ok=2 changed=0 unreachable=0 failed=0
How can I solve this?
Thanks.
David
In the [defaults] section of your ansible.cfg file, try setting host_key_checking = false.
This is obviously less secure.
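A minimal sketch of that setting:
[defaults]
host_key_checking = false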
Since SSH is the primary mechanism Ansible uses to communicate with target hosts, it is important that SSH is configured properly in your environment before attempting to execute Ansible playbooks.
The underlying problem in this case is likely that the SSH key associated with the host you are trying to connect to has changed and no longer matches what is in ~/.ssh/known_hosts. More information about what SSH host keys are for can be found here.
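If the host key really did change and you trust the new one, you can remove the stale entry and accept the new key on the next connection (IP taken from the error above):
ssh-keygen -R 10.150.16.129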
I was not able to find where the actual problem is. I executed the playbook below with my private key:
---
- hosts: localhost
  gather_facts: false
  sudo: yes
  tasks:
    - name: Install package libpcre3-dev
      apt: name=libpcre3-dev state=latest
But I am getting the error below on Vagrant Ubuntu machine:
PLAY [localhost] *********************************************************************

TASK [Install package ] ***************************************************
fatal: [vagrant]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n", "unreachable": true}
to retry, use: --limit @/home/vagrant/playbooks/p1.retry

PLAY RECAP *********************************************************************
vagrant : ok=0 changed=0 unreachable=1 failed=0
Any suggestions on what could be wrong?
You are running a playbook against localhost with an SSH connection (the default in Ansible) and this fails. Most likely because you never configured the account on your machine to accept a key from itself. Using defaults, you'd need to add your ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys.
Instead, to run locally, add connection: local to the play:
---
- hosts: localhost
  connection: local
  tasks:
    - debug:
And it will give you a proper response:
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "Hello world!"
}
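Alternatively, you can keep the play untouched and set the connection type in the inventory instead; a one-line sketch:
localhost ansible_connection=local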
I have a Raspberry Pi which I can connect to via SSH from a terminal, through an ethernet cable from my MacBook to the Pi, using the command 'ssh pi@169.254.0.2'.
Yet, when I run an Ansible playbook against this host:
[pis]
169.254.0.2
I get the following error:
PLAY [Ansible Playbook for configuring brand new Raspberry Pi] *****************
TASK [setup] *******************************************************************
<169.254.0.2> ESTABLISH CONNECTION FOR USER: pi on PORT 22 TO 169.254.0.2
CONNECTION: pid 2118 waiting for lock on 10
CONNECTION: pid 2118 acquired lock on 10
fatal: [169.254.0.2]: UNREACHABLE! => {"changed": false, "msg": "ERROR! (25, 'Inappropriate ioctl for device')", "unreachable": true}
PLAY RECAP *********************************************************************
169.254.0.2 : ok=0 changed=0 unreachable=1 failed=0
My Ansible version is 2.0.0.2.
How can I configure Ansible so that it connects in the same way as I am successfully able to connect with SSH from the terminal?
Always include the Ansible version when reporting issues like this. I had a similar issue when multiple SSH connections were opened by Ansible. Can you set pipelining to False in the Ansible config file (/etc/ansible/ansible.cfg) and try again? Check what it is set to now before changing it.
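For reference, in most Ansible versions this setting is read from the [ssh_connection] section of ansible.cfg; a minimal sketch:
[ssh_connection]
pipelining = False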
I got this same error when trying to run Ansible from inside a Docker container. This answer led me to the solution: you have to add the -t flag, which allocates a pseudo-TTY.
E.g.
sudo docker run -t -v `pwd`:/ansible -w /ansible ansible:latest ansible-playbook -i inventory.yml site.yml