Ansible Inventory Specifying the Same Host with Different Users and Keys for Initial SSH User Setup and Disabling Root Access - ssh

I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that accesses with the root user, used to create a new user, set up ssh settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:
---
# ./hosts.yaml
---
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is there a cleaner way to do this?
Is this an anti-pattern in any way? It is not idempotent. It would be nice to have this run in a way that running the same playbook twice always has the same output - either "success", or "no change".
I am using DigitalOcean and they have a functionality to have this done via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the users & ssh settings and disabling root access:
---
# ./initial-host-setup.yaml
---
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
---
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash

    - name: add authorized keys for the infraops user
      authorized_key: 'user=infraops key="{{ item }}"'
      with_file:
        '{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'

    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s

    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd

    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd

    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
Everything after this would use the masters inventory.
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses, didn't research the others), which is cross-provider enough for me to just stick with a bash init script solution.
I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without using a cloud-init script you can't have this: either the server starts with root (or a similar user) available to perform these actions, or the server starts without a user with those powers, in which case you can't perform these actions at all.
Further, I have seen Ansible playbooks and bash scripts that try to solve the desired "idempotence" (complete with no errors even if root is already disabled) by testing root ssh access, then falling back to another user, but "I can't ssh with root" is a poor test for "is the root user disabled" because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh.
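For reference, a minimal cloud-init user-data sketch of that approach (illustrative only, not my actual script; the public key line is a placeholder) would create the user and lock down sshd before Ansible ever connects:
#cloud-config
# Illustrative sketch: create the infraops user with passwordless sudo and an
# authorized key, then disable root login and password authentication.
users:
  - name: infraops
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-rsa AAAA... # placeholder -- replace with the real public key
disable_root: true
ssh_pwauth: false
With something like this in place, the root entries disappear from the inventory entirely and every play can target the infraops user.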
EDIT 2: placing this here, since I can't use newlines in my response to a comment.
β.εηοιτ.βε responded to my assertion:
"but 'I can't ssh with root' is a poor test for 'is the root user disabled' because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh"
with:
"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
- if success, we know user/ssh setup tasks have not completed, so run those tasks
- if failure, attempt ssh with infraops
- if success, go ahead and run everything except the user creation again to ensure ssh config is as desired
- if failure... ? something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible playbook; a rough sketch of one possibility follows.
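Something along these lines might work (purely illustrative and untested; it assumes the inventory above, and the probe command is my own addition):
- hosts: masters
  gather_facts: false
  tasks:
    - name: Probe whether root SSH access still works
      # BatchMode prevents an interactive password prompt; rc is 0 only if
      # root can still log in with this key.
      shell: >
        ssh -o BatchMode=yes -o ConnectTimeout=5
        -i {{ ansible_ssh_private_key_file }}
        root@{{ ansible_host }} true
      delegate_to: localhost
      register: root_probe
      failed_when: false
      changed_when: false

    - name: Connect as root for the bootstrap tasks if the probe succeeded
      set_fact:
        ansible_user: root
      when: root_probe.rc == 0
The user-creation and sshd tasks could then follow in the same play, running as root on first contact and as infraops on every later run.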

You can overwrite host variables for a given play by using vars.
- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:

You could define only the demo_master group and alter ansible_user and ansible_ssh_private_key_file at run time, using the command-line flags --user and --private-key.
So with a hosts.yaml containing
all:
  children:
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
With the playbook run against hosts: masters, the first run would then be, for example,
ansible-playbook initial-host-setup.yaml \
  --user root \
  --private-key ~/.ssh/id_rsa_root
while the subsequent runs would simply be
ansible-playbook subsequent-host-setup.yaml
since all the required values are already in the inventory.

Related

Ansible "Failed to connect to the host via ssh: ubuntu# because target machine uses ec2-user

So I'm trying to use roles in Ansible and I'm not sure how to tell Ansible to use a specific user to ssh.
So I have 2 files:
site.yml
- hosts: _uat_web
- import_playbook: ../static-assignments/uat-webservers.yml
uat-webservers.yml
---
- hosts: _uat_web
  remote_user: ec2-user
  roles:
    - webservers
So if I run ansible-playbook uat-webservers.yml, everything works as expected, but the idea is for site.yml to call uat-webservers.yml.
So when I run ansible-playbook site.yml, I get this issue:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ubuntu@ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).", "unreachable": true}
I know the issue is that the target machine is running Red Hat, therefore I need the ec2-user user for ssh to work.
I tried putting remote_user: ec2-user in site.yml, but it did not work. FYI, I'm executing the Ansible playbooks from an Ubuntu machine, which is why it defaults to the ubuntu user:
- hosts: _uat_web #uat-webservers
- remote_user: ec2-user
- import_playbook: ../static-assignments/uat-webservers.yml
In addition, I'm using the aws_ec2 dynamic inventory. I know that with a static inventory you can specify the user in the inventory. I would love a solution in the playbook itself, such as remote_user, but that doesn't seem to work when using the import. Thank you.
In site.yml, this line by itself doesn't do anything (aside from gather facts). So it is redundant and can be removed.
- hosts: _uat_web
So if you remove that line, your import_playbook should work on its own, i.e.:
# site.yml
- import_playbook: ../static-assignments/uat-webservers.yml
If you really wanted that section because you wanted to run some stuff before importing the playbook, then do:
# site.yml
- hosts: _uat_web
  remote_user: ec2-user # Notice this line doesn't start with a "-" like it did in your example
  tasks: # or roles:
    ...
- import_playbook: ../static-assignments/uat-webservers.yml
Edit:
Once the UNREACHABLE error is resolved, I think you may encounter another error where it cannot find the role. I'm not sure how your directory structure is set up, but when you use import_playbook, the imported playbook will look for roles relative to itself.
I.e. if your ../static-assignments/uat-webservers.yml playbook calls the webservers role, it will try to find it in ../static-assignments/roles/webservers, which may not exist at that path.
Some potential solutions are to look into the roles_path setting in ansible.cfg, or to use a symlink pointing to your main roles directory.
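For example, an ansible.cfg next to site.yml along these lines (a sketch; adjust the path to wherever your shared roles directory actually lives) would let both playbooks resolve the role:
# ansible.cfg (sketch)
[defaults]
roles_path = ./roles:../roles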

Ansible dynamically add proxy host, then use proxy host to login to machine behind it, without ssh_config

So I would like to provision a proxy host (I can do this) and add it to the dynamic Ansible inventory via add_host (done).
Then, in the next play, run tasks on that proxy host to find another machine behind it, and update something on the Ansible side so it knows this new host's location and that it needs to be proxy-jumped via the current proxy host.
Then, in the next play, target this new machine behind the proxy host.
I am at a loss here; I was hoping to do it without all of these ssh_config changes. Is this possible? Has anyone done this? Thoughts?
I have an answer to my question. I think it is a perfectly valid question, and a lot of the Ansible documentation semi-answers it, but it is not put into the context of being dynamic, nor is it stated that this can be done completely dynamically.
Pretext: using Terraform within Ansible to generate hosts, with the following configuration:
control_box (running ansible/terraform from) ----> dynamically created Bastion/proxy/jump_host ---> some_server (behind the bastion)
Playbook:
# Make the bastion host, and add it to the just_created group
- hosts: 127.0.0.1
  roles:
    - terraform_logic_add_host_logic

- hosts: just_created # aka bastion
  tasks:
    - name: Include task list in play
      include: "get_the_private_ip_and_add_to_behind_bastion_group.yml"

# Log into the behind_bastion group.....
- hosts: behind_bastion_group
  vars:
    - ansible_connection: ssh
    - ansible_ssh_common_args: '-o ProxyCommand="ssh -i {{ some_pem_key }} -o StrictHostKeyChecking=no -W %h:%p -q ec2-user@{{ the_bastion_ip }}"'
  tasks:
    - name: Include task list in play
      include: "do_stuff_finally.yml"
I have done my research as well, FYI. Posts such as these do not show the complete end-to-end solution of doing this all dynamically:
https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/
Ansible with a bastion host / jump box?
https://selivan.github.io/2018/01/29/ansible-ssh-bastion-host.html

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key to the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key in playbook execution (e.g. ansible-playbook -u username --private-key play.yml). As far as I understand, this isn't for me, as I'm calling the playbook via the Vagrantfile. I've also read articles which mention ssh forwarding (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need:
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
    ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often set up machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
        IdentityFile ~/.ssh/aptiko_ro_id_rsa
        User aptiko_ro
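If you also want the controlled machine to trust the third-party server's host key up front (the question mentions inserting a known_hosts file), one option is Ansible's known_hosts module; a sketch, assuming the keyscan runs on the controlled machine at provisioning time:
- name: Pre-trust bitbucket.org's host key (sketch)
  known_hosts:
    path: /root/.ssh/known_hosts
    name: bitbucket.org
    key: "{{ lookup('pipe', 'ssh-keyscan -t rsa bitbucket.org') }}"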

In Ansible, is it possible to define the authentication method per playbook?

TL;DR: Is it possible to chain two playbooks with one ansible-playbook command where one playbook is password auth and the other playbook is key auth? (see last section for real-world purpose).
Setup:
I have two playbooks, the second of which includes the first.
PlaybookA.yml
---
- name: PlaybookA # requires password authentication
  hosts: sub.domain.ext
  remote_user: root
  roles:
    - { role: role1, sudo: yes }
...
PlaybookB.yml
---
- name: Run PlaybookA
  include: PlaybookA.yml

- name: PlaybookB # requires ssh-key authentication
  hosts: sub.domain.ext
  remote_user: ansible
  roles:
    - { role: role2, sudo: yes }
...
Requirements:
Execute only one command.
Use password auth for PlaybookA.
Use ssh-key auth for PlaybookB.
Question 1:
Is it possible within Ansible (versions 1.9.4 or lower) to execute one ansible-playbook command that will successfully run PlaybookB using ssh-key authentication but when PlaybookB includes PlaybookA, run PlaybookA using password authentication?
Question 2:
If this is not possible with Ansible 1.9.4 or lower, is this possible with 2.0.0+?
Notes of worth:
Ansible provides --ask-pass (or -k) as a command line switch enabling password authentication.
Ansible provides ask_pass as a variable but it seems as though it can only be set within ansible.cfg (I haven't been able to set this as a playbook variable to the desired effect).
Attempting to set ask_pass as an instruction within a playbook results in the following: ERROR: ask_pass is not a legal parameter of an Ansible Play. If this parameter was legal, it would provide a way to instruct ansible on a per-playbook level, what authentication method to use.
Purpose / Real World:
I'm attempting to create a configuration management workflow with Ansible that will be simple enough that others at work will be able to learn / adapt to it (and hopefully the use of Ansible in general for CM and orchestration).
For any new machine (VM or physical) that gets built, I intend for us to run two playbooks immediately. PlaybookA (as shown above) has the responsibility of logging in with the correct default user (typically depends upon the infrastructure [aws, vsphere, none, etc]). Once in, its very limited job is to:
Create the standardized user for ansible to run as (and install its ssh-key).
Remove any non-root users that may exist (artifacts of the vm infrastructure, etc).
Disable root access.
Disable password authentication (ssh-key only from this point on).
Depending upon the vm infrastructure (or lack thereof), the default user or the default authentication method can be different. Toward the goal of adoption of Ansible, I'm attempting to keep things extremely simple for fellow co-workers, so I'd like to automate as much of this flow-control as possible.
Once PlaybookA has locked down the vm and setup the standardized user, PlaybookB uses that standardized user to perform all other operations necessary to bring our vm's up to the necessary baseline of tools and utilities, etc.
Any tips, hints, suggestions would be greatly appreciated.
I have been facing the same problem today. Two ideas may help you here:
You can ask for the password using vars_prompt in your playbook instead of --ask-pass
Set the password using set_fact:
- name: "set password for the play"
set_fact: ansible_ssh_pass="{{ my_pass }}"
You could store the password in a file, or prompt for it, as in the example below. In my example, the sshd config that is being created will forbid password logins, but using Ansible defaults you will be surprised that the second playbook still gets executed (!), even though I "forgot" to create an authorized_key. That's due to the fact that Ansible uses the ControlPersist option of ssh and simply keeps the connection open between single tasks. You can turn that off in ansible.cfg.
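For example, something like this in ansible.cfg (a sketch) disables the connection sharing, so every task has to authenticate again instead of reusing the connection opened under the old settings:
# ansible.cfg (sketch)
[ssh_connection]
ssh_args = -o ControlMaster=no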
Example Playbook:
- name: "MAKE BARE: Run preparatory steps on a newly acquired server"
hosts: blankee
tasks:
- name: "set password for the play"
set_fact: ansible_ssh_pass="{{ my_pass }}"
- name: "Create directory {{ pathsts }}/registry/ansible-init"
file: name="{{ pathsts }}/registry/ansible-init" state=directory owner=root group=www-data mode=770
- name: "copy sshd config file"
copy:
src: 'roles/newhost/files/sshd_config'
dest: '/etc/ssh/sshd_config'
owner: 'root'
group: 'root'
mode: '0644'
- name: "Check syntax of sshd configuration"
shell: sshd -t
register: result
changed_when: false
failed_when: "result.rc != 0"
- name: "Restart SSHD and enable Service to start at boot"
service: name=sshd state=restarted
changed_when: false
vars:
my_pass2: foobar
vars_prompt:
- name: "my_pass"
prompt: "########## Enter PWD:\n "
- name: "Second run: This should authenticate w/out password:"
hosts: blankee
tasks:
- name: "Create directory {{ pathsts }}/registry/ansible-init"
file: name="{{ pathsts }}/registry/ansible-init22" state=directory owner=root group=www-data mode=770
I don't know a way to change the authentication method within the play. I think I'd prefer running two different playbooks as a Jenkins job or similar, but I can think of a pure Ansible workaround: instead of including the second playbook, you could get Ansible to run a shell command as a local action, and run the command to execute the second playbook from the first one. Here's a rough proof of concept:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - debug: msg="Run your first role here."

    - name: Then call Ansible to run the second playbook.
      local_action: shell ansible-playbook -i ~/workspace/hosts ~/workspace/second_playbook.yml
      register: playbook_results

    - debug: var=playbook_results.stdout_lines
Here's the output:
GATHERING FACTS ***************************************************************
ok: [vagrantbox]
TASK: [debug msg="Run your first role here."] *********************************
ok: [vagrantbox] => {
"msg": "Run your first role here."
}
TASK: [Then call Ansible to run the second playbook.] *************************
changed: [vagrantbox -> 127.0.0.1]
TASK: [debug var=playbook_results.stdout_lines] *******************************
ok: [vagrantbox] => {
"var": {
"playbook_results.stdout_lines": [
"",
"PLAY [Proof of concept] ******************************************************* ",
"",
"GATHERING FACTS *************************************************************** ",
"ok: [vagrantbox]",
"",
"TASK: [debug msg=\"This playbook was called from another playbook!\"] *********** ",
"ok: [vagrantbox] => {",
" \"msg\": \"This playbook was called from another playbook!\"",
"}",
"",
"PLAY RECAP ******************************************************************** ",
"vagrantbox : ok=2 changed=0 unreachable=0 failed=0 "
]
}
}
PLAY RECAP ********************************************************************
vagrantbox : ok=4 changed=1 unreachable=0 failed=0

How to ignore ansible SSH authenticity checking?

Is there a way to ignore the SSH authenticity checking made by Ansible? For example, when I've just set up a new server, I have to answer yes to this question:
GATHERING FACTS ***************************************************************
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
RSA key fingerprint is xx:yy:zz:....
Are you sure you want to continue connecting (yes/no)?
I know that this is generally a bad idea but I'm incorporating this in a script that first creates a new virtual server at my cloud provider and then automatically calls my ansible playbook to configure it. I want to avoid any human intervention in the middle of the script execution.
Two options - the first, as you said in your own answer, is setting the environment variable ANSIBLE_HOST_KEY_CHECKING to False.
The second way to set it is to put it in an ansible.cfg file, and that's a really useful option because you can either set that globally (at system or user level, in /etc/ansible/ansible.cfg or ~/.ansible.cfg), or in a config file in the same directory as the playbook you are running.
To do that, make an ansible.cfg file in one of those locations, and include this:
[defaults]
host_key_checking = False
You can also set a lot of other handy defaults there, like whether or not to gather facts at the start of a play, whether to merge hashes declared in multiple places or replace one with another, and so on. There's a whole big list of options here in the Ansible docs.
Edit: a note on security.
SSH host key validation is a meaningful security layer for persistent hosts - if you are connecting to the same machine many times, it's valuable to accept the host key locally.
For longer-lived EC2 instances, it would make sense to accept the host key with a task run only once on initial creation of the instance:
- name: Write the new ec2 instance host key to known hosts
  connection: local
  shell: "ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts"
There's no security value for checking host keys on instances that you stand up dynamically and remove right after playbook execution, but there is security value in checking host keys for persistent machines. So you should manage host key checking differently per logical environment.
- Leave checking enabled by default (in ~/.ansible.cfg)
- Disable host key checking in the working directory for playbooks you run against ephemeral instances (./ansible.cfg alongside the playbook for unit tests against vagrant VMs, automation for short-lived ec2 instances)
I found the answer: you need to set the environment variable ANSIBLE_HOST_KEY_CHECKING to False. For example:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ...
Changing host_key_checking to false for all hosts is a very bad idea.
The only time you want to ignore it is on "first contact", which this playbook will accomplish:
---
- name: Bootstrap playbook
  hosts: all
  # Don't gather facts automatically because that will trigger
  # a connection, which needs to check the remote host key
  gather_facts: false
  tasks:
    - name: Check known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -F {{ inventory_hostname }}
      register: has_entry_in_known_hosts_file
      changed_when: false
      ignore_errors: true

    - name: Ignore host key for {{ inventory_hostname }} on first run
      when: has_entry_in_known_hosts_file.rc == 1
      set_fact:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"

    # Now that we have resolved the issue with the host key
    # we can "gather facts" without issue
    - name: Delayed gathering of facts
      setup:
So we only turn off host key checking if we don't have the host key in our known_hosts file.
You can pass it as a command-line argument while running the playbook:
ansible-playbook play.yml --ssh-common-args='-o StrictHostKeyChecking=no'
Following on from nikobelia's answer: for those using Jenkins to run the playbook, I just added the environment variable ANSIBLE_HOST_KEY_CHECKING = False to my Jenkins job before running ansible-playbook.
For instance this:
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook 'playbook.yml' \
--extra-vars="some vars..." \
--tags="tags_name..." -vv
If you don't want to modify ansible.cfg or the playbook.yml then you can just set an environment variable:
export ANSIBLE_HOST_KEY_CHECKING=False
Ignoring checking is a bad idea as it makes you susceptible to Man-in-the-middle attacks.
I took the liberty of improving nikobelia's answer by only adding each machine's key once and actually setting the ok/changed status in Ansible:
- name: Accept EC2 SSH host keys
  connection: local
  become: false
  shell: |
    ssh-keygen -F {{ inventory_hostname }} ||
      ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
  register: known_hosts_script
  changed_when: "'found' not in known_hosts_script.stdout"
However, Ansible starts gathering facts before the script runs, which requires an SSH connection, so we have to either disable this task or manually move it to later:
- name: Example play
  hosts: all
  gather_facts: no # gather facts AFTER the host key has been accepted instead
  tasks:
    # https://stackoverflow.com/questions/32297456/
    - name: Accept EC2 SSH host keys
      connection: local
      become: false
      shell: |
        ssh-keygen -F {{ inventory_hostname }} ||
          ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
      register: known_hosts_script
      changed_when: "'found' not in known_hosts_script.stdout"

    - name: Gathering Facts
      setup:
One kink I haven't been able to work out is that it marks all as changed even if it only adds a single key. If anyone could contribute a fix that would be great!
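One way that kink could perhaps be addressed is to split the check from the append, so a change is only reported for hosts whose key was actually added; a rough, untested sketch:
- name: Check whether the host key is already known
  connection: local
  become: false
  command: ssh-keygen -F {{ inventory_hostname }}
  register: known_hosts_check
  failed_when: false
  changed_when: false

- name: Add the host key only when it is missing
  connection: local
  become: false
  shell: ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
  when: known_hosts_check.rc != 0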
You can simply tell SSH to automatically accept fingerprints for new hosts. Just add
StrictHostKeyChecking=accept-new
to your ~/.ssh/config. It does not disable host-key checking entirely, it merely disables this annoying question whether you want to add a new fingerprint to your list of known hosts. In case the fingerprint for a known machine changes, you will still get the error.
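For example (a sketch; you can narrow the Host pattern to just the machines you manage instead of using a wildcard):
# ~/.ssh/config (sketch)
Host *
    StrictHostKeyChecking accept-new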
This policy also works with ANSIBLE_HOST_KEY_CHECKING and other ways of passing this param to SSH.
I know the question has been answered, and correctly, but I just wanted to link the Ansible doc where it's clearly explained when and why the respective check should be added: host-key-checking
Most problems appear when you want to add a new host to the dynamic inventory (via the add_host module) in a playbook. I don't want to disable fingerprint host checking permanently, so solutions like disabling it in a global config file are not OK for me. Exporting a variable like ANSIBLE_HOST_KEY_CHECKING before running the playbook is yet another thing that has to be remembered before each run.
It's better to add a local config file in the same directory as the playbook. Create a file named ansible.cfg and paste the following text:
[defaults]
host_key_checking = False
No need to remember to add something to env vars or to ansible-playbook options, and it's easy to put this file into the Ansible git repo.
This is the working solution I used in my environment. I took the idea from this ticket: https://github.com/mitogen-hq/mitogen/issues/753
- name: Example play
  gather_facts: no
  hosts: all
  tasks:
    - name: Check SSH known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -l -F {{ inventory_hostname }}
      register: checkForKnownHostsEntry
      failed_when: false
      changed_when: false
      ignore_errors: yes

    - name: Add {{ inventory_hostname }} to SSH known hosts automatically
      when: checkForKnownHostsEntry.rc == 1
      changed_when: checkForKnownHostsEntry.rc == 1
      local_action:
        module: shell
        args: ssh-keyscan -H "{{ inventory_hostname }}" >> $HOME/.ssh/known_hosts
Host key checking is an important security measure, so I would not just skip it everywhere. Yes, it can be annoying if you keep reinstalling the same testing host (without backing up its SSH certificates), or if you have stable hosts but run your playbook from Jenkins without a simple option to accept the host key when connecting to the host for the first time. So:
This is what we are using for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key when connecting to the host for the first time) in the inventory file:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'
And this is what we have for temporary hosts (in the end, this will ignore the host key entirely):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
There is also the environment variable, or you can add it to a group/host variables file; no need to have it in the inventory, that was just convenient in our case. A group variables file could carry the same setting, as in the sketch below.
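A sketch of the equivalent of the [all:vars] entries above, assuming a group_vars/all.yml next to the inventory:
# group_vars/all.yml (sketch)
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'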
I used some of the other responses here and a co-worker's solution, thank you!
Use the parameter named validate_certs to ignore the ssh validation:
- ec2_ami:
    instance_id: i-0661fa8b45a7531a7
    wait: yes
    name: ansible
    validate_certs: false
    tags:
      Name: ansible
      Service: TestService
By doing this it ignores the ssh validation process