I have the following playbook:
---
- name: Get Nokia Info
  hosts: LAB9ERIP008
  connection: local
  gather_facts: no
  tasks:
    - name: run show version command
      sros_command:
        commands: show version
      register: config
    - name: create backup of configuration
      copy:
        content: "{{config.stdout[0]}}"
        dest: "/home/dafe/scripts/ansible/backups/show_version_{{inventory_hostname}}.txt"
And when I run the playbook, it gives me the following error:
[dafe@CETPMGIP001 ansible]$ ansible-playbook nokia.yml -i myhostsfile
PLAY [Get Cisco Info] **************************************************************************************************************
TASK [run show version command] ****************************************************************************************************
fatal: [LAB9ERIP008]: FAILED! => {"msg": "paramiko: The authenticity of host '10.150.16.129' can't be established.\nThe ssh-rsa key fingerprint is fca0d4eb97414dc5b5a13fa552e5dd69."}
to retry, use: --limit @/home/dafe/scripts/ansible/nokia.retry
PLAY RECAP *************************************************************************************************************************
LAB9ERIP008 : ok=0 changed=0 unreachable=0 failed=1
I tried to put the following var in myhostsfile:
ansible_ssh_private_key_file=/home/dafe/.ssh/known_hosts
But it continues to give the same error.
If I SSH manually to the host and accept the key:
[dafe@CETPMGIP001 ansible]$ ssh dafernandes@10.150.16.129
The authenticity of host '10.150.16.129 (10.150.16.129)' can't be established.
RSA key fingerprint is SHA256:0YQYfLnRCQDZzpZ1+8ekW/Gks6mTxpI4xA56siaQUsM.
RSA key fingerprint is MD5:fc:a0:d4:eb:97:41:4d:c5:b5:a1:3f:a5:52:e5:dd:69.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.150.16.129' (RSA) to the list of known hosts.
TiMOS-C-16.0.R6 cpm/hops64 Nokia 7750 SR Copyright (c) 2000-2019 Nokia.
All rights reserved. All use subject to applicable license agreements.
Built on Wed Feb 27 14:42:05 PST 2019 by builder in /builds/c/160B/R6/panos/main
dafernandes@10.150.16.129's password:
And then, when I run the playbook, the error no longer occurs:
[dafe@CETPMGIP001 ansible]$ ansible-playbook nokia.yml -i myhostsfile
PLAY [Get Cisco Info] **************************************************************************************************************
TASK [run show version command] ****************************************************************************************************
ok: [LAB9ERIP008]
TASK [create backup of configuration] **********************************************************************************************
ok: [LAB9ERIP008]
PLAY RECAP *************************************************************************************************************************
LAB9ERIP008 : ok=2 changed=0 unreachable=0 failed=0
How can I solve this?
Thanks.
David
In the [defaults] section of your ansible.cfg file try setting the key host_key_checking = false.
This is obviously not as secure.
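For example, a minimal sketch of the relevant section:
[defaults]
host_key_checking = false
The same effect can be had per run with the ANSIBLE_HOST_KEY_CHECKING=False environment variable.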
Being that SSH is the primary mechanism Ansible uses to communicate with target hosts, it is important that SSH is configured properly in your environment before attempting to execute Ansible playbooks.
The underlying problem in this case is likely that the SSH key associated with the SSH host you are trying to connect to has changed and no longer matches what is in ~/.ssh/known_hosts. More information about what SSH host keys are for can be found in the OpenSSH documentation.
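If you would rather keep host key checking enabled, a sketch of the usual remediation (using the IP from your error output) is to remove any stale entry and then connect once manually so the current key is recorded:
ssh-keygen -R 10.150.16.129
ssh dafernandes@10.150.16.129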
In my Ansible run I am getting the following error:
PLAY [test hashi vault] ******************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vault.domain'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
ok: [192.168.1.200]
TASK [show bar] **************************************************************************************************************
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vault.domain'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
fatal: [192.168.1.200]: FAILED! => {"msg": "Incorrect sudo password"}
PLAY RECAP *******************************************************************************************************************
192.168.1.200 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I know that the password is correct, having done a debug, and the same password works when extracting it from Vault using curl. This is the new code where I get the error:
---
- name: test hashi vault
  hosts: all
  remote_user: ec2-user
  tasks:
    - name: show bar
      systemd:
        state: restarted
        name: sssd.service
      async: 45
      become: yes
      become_method: sudo
This is what I'm running:
ansible-playbook -l 192.168.1.200 test.yml --private-key=/home/rehna/.ssh/testKeyPair.pem --vault-password-file /etc/ansible/ansible.vault -e @credentials
Contents of credentials:
ansible_user: ec2-user
ansible_become_pass: "{{ lookup('hashi_vault', 'secret=secret/test/ec2_password auth_method=userpass username={{vault_user}} password={{vault_password}} url={{vault_url}}:{{vault_port}} validate_certs=false') }}"
hosts
[ec2]
192.168.1.200
[test_env]
192.168.1.200 remote_user=ec2-user
From /var/log/secure:
unix_chkpwd[30174]: password check failed for user (ec2-user)
sudo: pam_unix(sudo:auth): authentication failure; logname=ec2-user uid=1000 euid=0 tty=/dev/pts/4 ruser=ec2-user rhost= user=ec2-user
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [ec2-user]
It should look like this:
sudo: ec2-user : TTY=pts/4 ; PWD=/home/ec2-user ; USER=root ; COMMAND=/bin/passwd --stdin ec2-user
sudo: pam_unix(sudo:session): session opened for user root by ec2-user(uid=0)
sudo: pam_unix(sudo:session): session closed for user root
The data returned by the lookup is a dict of key/value pairs.
You need to extract the content from the return data provided by the lookup:
ec2_pass: "{{ lookup('hashi_vault', 'secret=secret/test/ec2_password auth_method=userpass username={{vault_user}} password={{vault_password}} url={{vault_url}}:{{vault_port}} validate_certs=false') }}"
ansible_become_pass: "{{ec2_pass.value}}"
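If you are unsure of the exact structure the lookup returns, a throwaway debug task (a sketch reusing the same lookup string as above; note that it prints the secret to the console) will show it:
- name: inspect the hashi_vault lookup result
  debug:
    msg: "{{ lookup('hashi_vault', 'secret=secret/test/ec2_password auth_method=userpass username={{vault_user}} password={{vault_password}} url={{vault_url}}:{{vault_port}} validate_certs=false') }}"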
I'm using Molecule and Vagrant to deploy a CentOS 7 instance. For some reasons, I need to use the ssh command to access the Molecule instance instead of molecule login. The SSH information will then be pasted into one of my VS Code extensions.
molecule.yml
---
dependency:
  name: gilt
driver:
  name: vagrant
  provider:
    name: virtualbox
lint:
  name: yamllint
platforms:
  - name: openresty-instance
    box: centos/7
    instance_raw_config_args:
      - "ssh.insert_key = false"
      - "vm.network 'forwarded_port', guest: 22, host: 22"
      - "vm.network 'forwarded_port', guest: 80, host: 8080"
    interfaces:
      - auto_config: true
        network_name: private_network
        ip: '192.168.33.111'
provisioner:
  name: ansible
  log: true
  lint:
    name: ansible-lint
verifier:
  name: testinfra
  lint:
    name: flake8
The IP above lets me access port 80 from outside Vagrant.
But the ssh command to the Molecule instance IP is not working.
Error:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:wVk4Da5pWWNHLiypvEKAJuwzG/2FLOMgwPkrO4oFBZQ.
Please contact your system administrator.
Add correct host key in /Users/abel/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/abel/.ssh/known_hosts:32
ECDSA host key for 192.168.33.111 has changed and you have requested strict checking.
Host key verification failed.
This message can mean exactly what it says, that there is something nasty going on, if you see it in an environment with static servers.
But if you have, say, a testing environment where you create and destroy virtual machines as a daily procedure, this is a "normal" security warning.
It just means "hey, I know this guy, but his fingerprint doesn't match the one in my document archive". If this is intended (like I said, in a test environment), then just go into the "document archive", delete "this guy's fingerprint" and "take a new fingerprint of him".
So in your case (/Users/abel/.ssh/known_hosts:32), just open your known_hosts file and delete line 32.
Or use the command:
ssh-keygen -R 192.168.33.111 -f "/Users/abel/.ssh/known_hosts"
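Alternatively, for throwaway test machines only (a sketch, not part of the original answer; the vagrant user name is an assumption), you can tell ssh not to record or check the host key at all:
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null vagrant@192.168.33.111
This silences the warning at the cost of disabling man-in-the-middle protection for that host.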
I'm on a Mac machine.
$ which ansible
/Library/Frameworks/Python.framework/Versions/3.5/bin/ansible
Or, I guess, ansible could be located at a generic location such as /usr/bin/ansible (for example, on CentOS/Ubuntu).
$ ansible --version
ansible 2.2.0.0
Running the following playbook works fine from my other Vagrant/Ubuntu box.
The playbook file looks like:
- hosts: all
  become: true
  gather_facts: true
  roles:
    - a_role_which_just_say_hello_world_debug_msg
From my local machine, I can successfully SSH to the target servers, including the one below (without any password, as I have already added the .pem key file using ssh-add), yet those same servers fail at the [setup] (gather facts) step of the Ansible playbook run.
On the Mac machine, I'm getting this error sometimes (not every time): "Failed to connect to the host via ssh: Connection timed out during banner exchange".
$ ansible-playbook -i inventory -l tag_cluster_mycluster myplabook.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [myclusterSomeServer01_i_07f318688f6339971]
fatal: [myclusterSomeServer02_i_03df6f1f988e665d9]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Connection timed out during banner exchange\r\n", "unreachable": true}
OK, I tried a couple of times with the same behavior: out of the 15 servers in the mycluster cluster, the [setup] step fails while gathering facts on some of them, and on the next run it works fine.
Retried:
$ ansible-playbook -i inventory -l tag_cluster_mycluster myplabook.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [myclusterSomeServer01_i_07f318688f6339971]
ok: [myclusterSomeServer02_i_03df6f1f988e665d9]
ok: [myclusterSomeServer03_i_057dfr56u88e665d9]
...
.....more...this time it worked for all servers.
As you can see above, this time the step worked fine for all servers. The same issue (SSH connection timed out) also happens during some tasks/actions (for example, where I'm trying to install something using the Ansible yum module). If I try again, it works fine for the server that failed last time, but it may then fail for another server that was successful last time. Thus, the behavior is random.
My /etc/ansible/ansible.cfg file has:
[ssh_connection]
scp_if_ssh = True
Adding the following timeout setting to the /etc/ansible/ansible.cfg config file worked once I increased it to 25. When it was 10 or 15, I still saw the banner-exchange connection timeout errors on some servers.
[defaults]
timeout = 25
[ssh_connection]
scp_if_ssh = True
Apart from the above, I had to use serial: N or serial: "N%" (where N is a number) to run my playbook on only N servers (or N percent of the servers) at a time; then it worked fine.
i.e.
- hosts: all
  become: true
  gather_facts: true
  serial: 2
  #serial: "10%"
  #serial: "{{ serialNumber }}"
  #serial: "{{ serialNumber }}%"
  vars:
    - serialNumber: 5
  roles:
    - a_role_which_just_say_hello_world_debug_msg
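For example, if you uncomment the serial: "{{ serialNumber }}" variant, the batch size can be overridden per run (a sketch using the inventory and playbook names from earlier):
ansible-playbook -i inventory -l tag_cluster_mycluster myplabook.yml -e serialNumber=3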
I was not able to find where the actual problem is. I executed the playbook below with my private key:
---
- hosts: localhost
  gather_facts: false
  sudo: yes
  tasks:
    - name: Install package libpcre3-dev
      apt: name=libpcre3-dev state=latest
But I am getting the error below on the Vagrant Ubuntu machine:
PLAY [localhost] *********************************************************************
TASK [Install package ] ***************************************************
fatal: [vagrant]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n", "unreachable": true}
to retry, use: --limit @/home/vagrant/playbooks/p1.retry
PLAY RECAP *********************************************************************
vagrant : ok=0 changed=0 unreachable=1 failed=0
What could be a possible solution?
You are running a playbook against localhost with an SSH connection (the default in Ansible) and this fails, most likely because you never configured the account on your machine to accept a key from itself. Using defaults, you'd need to add ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys.
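If you actually want the SSH-to-localhost route to work, the standard way to do that (a sketch; it assumes your key pair is ~/.ssh/id_rsa) is:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys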
Instead, to run locally, add connection: local to the play:
---
- hosts: localhost
  connection: local
  tasks:
    - debug:
And it will give you a proper response:
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "Hello world!"
}
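If you prefer to leave the play untouched, an alternative (not from the original answer, just a common pattern) is to set the connection type for the host in the inventory instead:
localhost ansible_connection=local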
I have a Raspberry Pi which I can connect to via SSH from the terminal, through an Ethernet cable from my MacBook to the Pi, using the command 'ssh pi@169.254.0.2'.
Yet, when I run an Ansible playbook against this host:
[pis]
169.254.0.2
I get the following error:
PLAY [Ansible Playbook for configuring brand new Raspberry Pi] *****************
TASK [setup] *******************************************************************
<169.254.0.2> ESTABLISH CONNECTION FOR USER: pi on PORT 22 TO 169.254.0.2
CONNECTION: pid 2118 waiting for lock on 10
CONNECTION: pid 2118 acquired lock on 10
fatal: [169.254.0.2]: UNREACHABLE! => {"changed": false, "msg": "ERROR! (25, 'Inappropriate ioctl for device')", "unreachable": true}
PLAY RECAP *********************************************************************
169.254.0.2 : ok=0 changed=0 unreachable=1 failed=0
My ansible version is 2.0.0.2.
How can I configure Ansible so that it connects in the same way as I am successfully able to connect with SSH from the terminal?
Always include the Ansible version when reporting issues like this. I had a similar issue when multiple SSH connections were opened by Ansible. Can you set pipelining to False in the Ansible config file (/etc/ansible/ansible.cfg) and try again? Check what it is currently set to before changing it.
pipelining = False
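For reference, a minimal sketch of how that looks in /etc/ansible/ansible.cfg (the [ssh_connection] section is where this key normally lives):
[ssh_connection]
pipelining = False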
I got this same error when trying to run Ansible from inside a Docker container. This answer led me to the solution, which was that you have to add the -t flag, which allocates a pseudo-TTY.
E.g.
sudo docker run -t -v `pwd`:/ansible -w /ansible ansible:latest ansible-playbook -i inventory.yml site.yml