Set Ansible connection differently according to a given condition - ssh

I have a playbook where, for one of the hosts, how I need to connect differs depending on whether certain tasks have previously succeeded.
In this specific case there's a tunnel between two of them, and one routes all its traffic over that tunnel, so once it is configured I need to use the other as a jump box in order to connect. But I can imagine many other circumstances where you might want to change the connection method mid-playbook, something as simple as changing users or passwords.
How can I have a conditional connection method?
I can't simply update it with set_fact, since by the time I reach that task Ansible will already have tried, and possibly failed, to gather facts at the start, and won't proceed.

The devil is in the details for such a question, for sure, but in general I think use of add_host will be the most legible way to do what you want. You can also change the connection on a per-task basis, or conditionally change the connection for the whole playbook against that host:
- hosts: all
  connection: ssh   # <-- or whatever bootstrap connection plugin
  gather_facts: no
  tasks:
    - command: echo "do something here"
      register: the_thing
    # now, you can either switch to the alternate connection per task:
    - command: echo "do the other thing"
      connection: lxd   # <-- or whatever
      when: the_thing is success
    # OR, you can make the alternate connection the default
    # for the rest of the current playbook
    - name: switch the rest of the playbook
      set_fact:
        ansible_connection: chroot
      when: the_thing is success
    # OR, perhaps run another playbook using the alternate connection
    # by adding the newly configured host to a special group
    - add_host:
        name: '{{ ansible_host }}'
        groups:
          - configured_hosts
      when: the_thing is success

# and then running the other playbook against configured hosts
- hosts: configured_hosts
  connection: docker   # <-- or whatever connection you want
  tasks:
    - setup:

I use the following snippet as a role and invoke it depending on whether I need a jump host (bastion or proxy) or not. An example is also given in the comments. This role can add multiple hosts at the same time. Put the following contents in roles/inventory/tasks/main.yml:
# Description: |
#   Adds given hosts to inventory.
# Inputs:
#   hosts_info: |
#     (mandatory)
#     List of hosts with the structure which looks like this:
#
#     - name: <host name>
#       address: <url or ip address of host>
#       groups: []  list of groups to which this host will be added.
#       user: <SSH user>
#       ssh_priv_key_path: <private key path for ssh access to host>
#       proxy: <define following structure if host should be accessed using proxy>
#         ssh_priv_key_path: <priv key path for ssh access to proxy node>
#         user: <login user on proxy node>
#         host: <proxy host address>
#
# Example Usage:
# - include_role:
#     name: inventory
#   vars:
#     hosts_info:
#       - name: controller-0
#         address: 10.100.10.13
#         groups:
#           - controller
#         user: user1
#         ssh_priv_key_path: /home/user/.ssh/id_rsa
#       - name: node-0
#         address: 10.10.1.14
#         groups:
#           - worker
#           - nodes
#         user: user1
#         ssh_priv_key_path: /home/user/.ssh/id_rsa
#         proxy:
#           ssh_priv_key_path: /home/user/jumphost_key.rsa.priv
#           user: jumphost-user
#           host: 10.100.10.13

- name: validate inventory input
  assert:
    that:
      - "single_host_info.name is defined"
      - "single_host_info.groups is defined"
      - "single_host_info.address is defined"
      - "single_host_info.user is defined"
      - "single_host_info.ssh_priv_key_path is defined"
  loop: "{{ hosts_info }}"
  loop_control:
    loop_var: single_host_info

- name: validate inventory proxy input
  assert:
    that:
      - "single_host_info.proxy.host is defined"
      - "single_host_info.proxy.user is defined"
      - "single_host_info.proxy.ssh_priv_key_path is defined"
  when: "single_host_info.proxy is defined"
  loop: "{{ hosts_info }}"
  loop_control:
    loop_var: single_host_info

- name: Add hosts to inventory without proxy
  add_host:
    groups: "{{ single_host_info.groups | join(',') }}"
    name: "{{ single_host_info.name }}"
    host: "{{ single_host_info.name }}"
    hostname: "{{ single_host_info.name }}"
    ansible_host: "{{ single_host_info.address }}"
    ansible_connection: ssh
    ansible_ssh_user: "{{ single_host_info.user }}"
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
    ansible_ssh_private_key_file: "{{ single_host_info.ssh_priv_key_path }}"
  loop: "{{ hosts_info | json_query(\"[?contains(keys(@), 'proxy') == `false`]\") | list }}"
  loop_control:
    loop_var: single_host_info

- name: Add hosts to inventory with proxy
  add_host:
    groups: "{{ single_host_info.groups | join(',') }}"
    name: "{{ single_host_info.name }}"
    host: "{{ single_host_info.name }}"
    hostname: "{{ single_host_info.name }}"
    ansible_host: "{{ single_host_info.address }}"
    ansible_connection: ssh
    ansible_ssh_user: "{{ single_host_info.user }}"
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
    ansible_ssh_private_key_file: "{{ single_host_info.ssh_priv_key_path }}"
    ansible_ssh_common_args: >-
      -o ProxyCommand='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
      -W %h:%p -q -i {{ single_host_info.proxy.ssh_priv_key_path }}
      {{ single_host_info.proxy.user }}@{{ single_host_info.proxy.host }}'
  loop: "{{ hosts_info | json_query(\"[?contains(keys(@), 'proxy') == `true`]\") }}"
  loop_control:
    loop_var: single_host_info

delegate_to ignores configured ssh_port

Use-Case:
We are deploying virtual machines into a cloud with a default Linux image (Ubuntu 22.04 at the moment). After deploying a machine, we configure our default users and change the SSH port from 22 to 2222 with Ansible.
Side note: We are using a jump concept through the internet - Ansible Automation Platform / AWS => internet => SSH jump host => target host.
To keep Ansible able to connect to the new machine after changing the SSH port, I found multiple Stack Overflow / blog entries that check and set ansible_ssh_port, basically by running wait_for on ports 22 and 2222 and setting the SSH variable depending on the result (code below).
Right now this works fine for the first SSH host (the jumphost), but it always fails for the second host due to issues with establishing the SSH connection.
Side note: The SSH daemon is running. If I use my user from the jump host, I can get an SSH response on 22/2222 (depending on the current state of deployment).
Edit from questions:
The deployment tasks should only be run on the target host, not on the jumphost as well.
I run the deployment on the jumphost first and make sure it is up, running and configured.
After that, I run the deployment on all machines behind the jumphost to configure them.
This also ensures that if I ever need to reboot, I don't kill all tunneled SSH sessions by accident.
Ansible inventory example
all:
  hosts:
  children:
    jumphosts:
      hosts:
        example_jumphost:
          ansible_host: 123.123.123.123
    cloud_hosts:
      hosts:
        example_cloud_host01:   # local DNS is resolved on the jumphost - no ansible_host here (yet)
          ansible_ssh_common_args: '-oProxyCommand="ssh -W %h:%p -oStrictHostKeyChecking=no -q ansible@123.123.123.123 -p 2222"'   # Tunnel through the appropriate jumphost
          delegation_host: "ansible@123.123.123.123"   # delegate jobs to the jumphost in each project if needed
  vars:
    ansible_ssh_port: 2222
SSH check_port role
- name: Set SSH port to 2222
  set_fact:
    ansible_ssh_port: 2222

- name: "Check backend port 2222"
  wait_for:
    port: 2222
    state: "started"
    host: "{{ inventory_hostname }}"
    connect_timeout: "5"
    timeout: "5"
  # delegate_to: "{{ delegation_host }}"
  # vars:
  #   ansible_ssh_port: 2222
  ignore_errors: true
  register: ssh_port

- name: "Check backend port 22"
  wait_for:
    port: "22"
    state: "started"
    host: "{{ inventory_hostname }}"
    connect_timeout: "5"
    timeout: "5"
  # delegate_to: "{{ delegation_host }}"
  # vars:
  #   ansible_ssh_port: 2222
  ignore_errors: true
  register: ssh_port_default
  when:
    - ssh_port is defined
    - ssh_port.state is undefined

- name: Set backend SSH port to 22
  set_fact:
    ansible_ssh_port: 22
  when:
    - ssh_port_default.state is defined
The playbook itself
- hosts: "example_cloud_host01"
  gather_facts: false
  roles:
    - role: check_port       # check if we already have the correct port or need 22
    - role: sshd             # Set port to 2222 and restart sshd
    - role: check_port       # check the port again, after it has been changed
    - role: install_apps
    - role: configure_apps
Error message:
With delegate_to for the task Check backend port 2222:
fatal: [example_cloud_host01 -> ansible#123.123.123.123]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 123.123.123.123 port 22: Connection refused", "unreachable": true}
This confuses me, because I expect the delegation host to use the same ansible_ssh_port as the target host.
Without delegate_to for the tasks Check backend port 2222 and Check backend port 22:
fatal: [example_cloud_host01]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"}, "changed": false, "elapsed": 5, "msg": "Timeout when waiting for example_cloud_host01:2222"}
fatal: [example_cloud_host01]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"}, "changed": false, "elapsed": 5, "msg": "Timeout when waiting for example_cloud_host01:22"}
I have no idea why this happens. If I try the connection manually, it works fine.
What I tried so far:
I played around with delegate_to, vars, ... as mentioned above.
I wanted to see if I can provide delegate_to with the proper port 2222 for the jump host.
I wanted to see if I can run this without delegate_to (since it should automatically use the proxy command to run on the jump host anyway).
Neither way gave me a solution on how to connect to my second-tier servers after changing the SSH port.
Right now, I split the playbook into two (a rough sketch of that split follows below):
deploy the sshd config with port 22
run our full deploy afterwards on port 2222
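Roughly, that split looks like this. This is just a sketch: the role names are the ones from the playbook above, and forcing ansible_ssh_port to 22 in the first play (overriding the inventory value) is an assumption on my part:
# Play 1: reach the machine on the default port and move sshd to 2222
- hosts: example_cloud_host01
  gather_facts: false
  vars:
    ansible_ssh_port: 22
  roles:
    - role: sshd             # sets Port 2222 and restarts sshd

# Play 2: run the full deploy on the new port
- hosts: example_cloud_host01
  gather_facts: false
  vars:
    ansible_ssh_port: 2222
  roles:
    - role: install_apps
    - role: configure_apps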
I would do the following (I somewhat tested this with fake values in the inventory, using localhost as a jumphost to check ports on localhost as well).
Edit: modified my examples to try to show you a way after your comments on your question and on this answer.
Inventory
---
all:
  vars:
    ansible_ssh_port: 2222
  children:
    proxies:
      vars:
        ansible_user: ansible
      hosts:
        example_jumphost1:
          ansible_host: 123.123.123.123
        example_jumphost2:
          ansible_host: 231.231.231.231
        # ... and more jump hosts ...
    cloud_hosts:
      vars:
        jump_vars: "{{ hostvars[jump_host] }}"
        ansible_ssh_common_args: '-oProxyCommand="ssh -W %h:%p -oStrictHostKeyChecking=no -q {{ jump_vars.ansible_user }}@{{ jump_vars.ansible_host }} -p {{ jump_vars.ansible_ssh_port | d(22) }}"'
      children:
        cloud_hosts_north:
          vars:
            jump_host: example_jumphost1
          hosts:
            example_cloud_host01:
            example_cloud_host02:
            # ... and more ...
        cloud_hosts_south:
          vars:
            jump_host: example_jumphost2
          hosts:
            example_cloud_host03:
            example_cloud_host04:
            # ... and more ...
        # ... and more cloud groups ...
Tasks to check ports.
- name: "Check backend inventory configured port {{ ansible_ssh_port }}"
  wait_for:
    port: "{{ ansible_ssh_port }}"
    state: "started"
    host: "{{ inventory_hostname }}"
    connect_timeout: "5"
    timeout: "5"
  delegate_to: "{{ jump_host }}"
  ignore_errors: true
  register: ssh_port

- name: "Check backend default ssh port if relevant"
  wait_for:
    port: "22"
    state: "started"
    host: "{{ inventory_hostname }}"
    connect_timeout: "5"
    timeout: "5"
  delegate_to: "{{ jump_host }}"
  ignore_errors: true
  register: ssh_port_default
  when: ssh_port is failed

- name: "Set backend SSH port to 22 if we did not change it yet"
  set_fact:
    ansible_ssh_port: 22
  when:
    - ssh_port_default is not skipped
    - ssh_port_default is success
Please note that if checks for ports 22/2222 both fail, your configured port will still be 2222 but any later task will obviously fail. You might want to fail fast after checks for those relevant hosts:
- name: "Fail host if no port is available"
  fail:
    msg:
      - "Host {{ inventory_hostname }} does not have"
      - "any ssh port available (tested 22 and 2222)"
  when:
    - ssh_port is failed
    - ssh_port_default is failed
With this in place, you can use different targets in your play to reach the relevant hosts:
For jump hosts
Run on a single bastion host: e.g. hosts: example_jumphost1
Run on all bastion hosts: hosts: proxies
For cloud hosts
Run on all cloud hosts: hosts: cloud_hosts
Run on a single child group: e.g. hosts: cloud_hosts_north
Run on all cloud hosts except a subgroup: e.g. hosts: cloud_hosts:!cloud_hosts_south
For more, see Ansible patterns.
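For instance, a play targeting one of those patterns might look like this (a sketch; the check_port role name is borrowed from the question's playbook):
- hosts: cloud_hosts:!cloud_hosts_south
  gather_facts: false
  roles:
    - role: check_port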

Ansible playbook passing variable to next play

I have built a playbook in Ansible that creates 2 groups of EC2 instances.
In a second playbook, I want the first play to list the existing groups to the user so the user can choose one. Then, in a second play, use this group in hosts:
---
- name: playbook
  hosts: localhost
  vars_prompt:
    - name: groupvar
      prompt: "Select a group"
      private: no
  tasks:
    - name: task 1
      debug:
        msg: "{{ groupvar }}"

- name: Another play
  hosts: "{{ groupvar }}"
  # ...
How can I pass on the value of groupvar to the second play in this playbook?
Note: make sure you are not simply re-inventing the existing --limit option of the ansible-playbook command line
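For reference, that option limits a run to a chosen group from the command line, e.g. (playbook and group names are placeholders):
ansible-playbook playbook.yml --limit "chosen_group"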
As you found out, vars_prompt variables do not survive the play they're declared in. In that case you have to use set_fact. Here is an example using your above code as a starting point:
- name: playbook
  hosts: localhost
  vars_prompt:
    - name: groupvar
      prompt: "Select a group"
      private: no
  tasks:
    - name: task 1
      debug:
        msg: "{{ groupvar }}"
    - name: Save value in a fact for current host
      set_fact:
        groupvar: "{{ groupvar }}"

- name: Another play running on above chosen group
  # Remember we have set the fact on current host above which was localhost
  hosts: "{{ hostvars['localhost'].groupvar }}"
  # ... rest of your play.

Ansible, saving list of hosts into a variable

I have a playbook where I first run a SQL statement to get a list of hosts from a database. I then save that list into a variable and want to run the next set of tasks over this list of hosts. But I am not sure how to do this, or whether it is even possible to define the hosts dynamically from an Ansible variable.
Below is a snippet of my code and what I am trying to do.
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Get list of hosts
      command: sqlcmd -d testDB -q "SET NOCOUNT ON; SELECT DISTINCT HostName FROM Servers" -S "Central_Server" -h -1
      register: sql_servers

- hosts: '{{ sql_servers.stdout_lines }}'
  serial: 1
  gather_facts: no
  tasks:
    ........
    other tasks
    ........
In the above code, I am trying to save the list of hosts into the sql_servers variable and want to run the second part of my playbook over those hosts.
You're running the play with hosts: all. This implies that there might be more hosts and, as a result, more lists of sql_servers too. Let's concatenate the lists. For example, whatever the source of the lists might be, given the inventory
shell> cat hosts
srv1 sql_servers='["a", "b"]'
srv2 sql_servers='["c", "a"]'
srv3 sql_servers='["e", "b"]'
the play
- hosts: all
  tasks:
    - set_fact:
        srvs: "{{ ansible_play_hosts|
                  map('extract', hostvars, 'sql_servers')|
                  flatten|unique }}"
      run_once: true
gives
srvs:
- a
- b
- c
- e
Now, use add_host and create group sql_servers
- add_host:
    hostname: "{{ item }}"
    groups: sql_servers
  loop: "{{ srvs }}"
  run_once: true
Use this group in the next play. The complete simplified playbook
- hosts: all
  tasks:
    - add_host:
        hostname: "{{ item }}"
        groups: sql_servers
      loop: "{{ ansible_play_hosts|
                map('extract', hostvars, 'sql_servers')|
                flatten|unique }}"
      run_once: true

- hosts: sql_servers
  tasks:
    - debug:
        var: ansible_play_hosts_all
      run_once: true
gives
ansible_play_hosts_all:
- a
- b
- c
- e
Fit the control flow to your needs.

Ansible Return Value - Need IP Address

We are currently using Ansible in conjunction with OpenStack. I've written a playbook (to deploy a new server via OpenStack) where I use the os_server module with auto_ip: yes, so the new server gets an IP address assigned by OpenStack.
If I use the -vvvv output option, I get a long output with an IP address listed somewhere in the middle.
So, because I am a lazy guy, I want to put just this IP address in a variable and have it shown in an extra field.
It should look like this:
"........output stuf.....
................................
.............................
..............................
..............................."
"The IP Adress of the New server is ....."
Is there any way to put this IP address field in a variable, or to filter that output down to the IP address?
If you need a screenshot to see what I mean, no problem, just say so and I'll provide it!
The Ansible OpenStack module uses the shade Python package to create a server.
According to the shade source code, the create_server method returns a dict representing the created server.
Register the result of os_server and debug it. The IP address should be there.
Example:
- name: launch a compute instance
  hosts: localhost
  tasks:
    - name: launch an instance
      os_server:
        state: present
        ...
        auto_ip: yes
      register: result
    - debug: var=result
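If the registered result contains the usual openstack dict, you can print just the address. The field names here are taken from the sample excerpt further below (private_v4 / public_v4) and may differ depending on your module version, so treat this as a sketch:
- debug:
    msg: "The IP address of the new server is {{ result.openstack.public_v4 }}"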
Also, you can have a look at this sample playbook, which does exactly this.
Here's an excerpt:
- name: create cluster notebook VM
  register: notebook_vm
  os_server:
    name: "{{ cluster_name }}-notebook"
    flavor: "{{ notebook_flavor }}"
    image: "CentOS-7.0"
    key_name: "{{ ssh_key }}"
    network: "{{ network_name }}"
    security_groups:
      - "{{ cluster_name }}-notebook"
    auto_ip: yes
    boot_from_volume: "{{ notebook_boot_from_volume }}"
    terminate_volume: yes
    volume_size: 25

- name: add notebook to inventory
  add_host:
    name: "{{ cluster_name }}-notebook"
    groups: notebooks
    ansible_ssh_host: "{{ notebook_vm.openstack.private_v4 }}"
    ansible_ssh_user: cloud-user
    public_ip: "{{ notebook_vm.openstack.public_v4 }}"
    public_name: "{{ lookup('dig', notebook_vm.openstack.public_v4 + '/PTR', wantlist=True)[0] }}"
  tags: ['vm_creation']

Ansible copy ssh key from one host to another

I have 2 app servers with a load balancer in front of them and 1 database server in my system. I'm provisioning them using Ansible. The app servers run Nginx + Passenger for a Rails app. I will use Capistrano for deployment, but I have an issue with SSH keys. My Git repo is on another server, and I have to generate SSH public keys on the app servers and add them to the Git server (to the authorized_keys file). How can I do this in an Ansible playbook?
PS: I may have more than 2 app servers.
This does the trick for me: it collects the public SSH keys from the nodes and distributes them over all the nodes, so they can communicate with each other.
- hosts: controllers
  gather_facts: false
  remote_user: root
  tasks:
    - name: fetch all public ssh keys
      shell: cat ~/.ssh/id_rsa.pub
      register: ssh_keys
      tags:
        - ssh
    - name: check keys
      debug: msg="{{ ssh_keys.stdout }}"
      tags:
        - ssh
    - name: deploy keys on all servers
      authorized_key: user=root key="{{ item[0] }}"
      delegate_to: "{{ item[1] }}"
      with_nested:
        - "{{ ssh_keys.stdout }}"
        - "{{ groups['controllers'] }}"
      tags:
        - ssh
Info: This is for the user root
Take a look at the authorized_key module for info on how to manage your public keys.
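A minimal, hypothetical example of that module (the user name and key file path are placeholders):
- name: Authorize an app server key on the git server
  authorized_key:
    user: git
    key: "{{ lookup('file', 'app_keys/id_rsa.pub') }}"
    state: present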
The most straightforward solution I can think of would be to generate a fresh key pair for your application, to be shared across all your app instances. This may have security implications (you are indeed sharing keys between all instances!), but it'll simplify the provisioning process a lot.
You'll also require a deploy user on each app machine, to be used later on during the deployment process. You'll need your public key (or the Jenkins one) in each deploy user's authorized_keys.
A sketch playbook:
---
- name: ensure app/deploy public key is present on git server
  hosts: gitserver
  tasks:
    - name: ensure app public key
      authorized_key:
        user: "{{ git_user }}"
        key: app_keys/id_dsa.pub
        state: present

- name: provision app servers
  hosts: appservers
  tasks:
    - name: ensure app/deploy user is present
      user:
        name: "{{ deploy_user }}"
        state: present
    - name: ensure you'll be able to deploy later on
      authorized_key:
        user: "{{ deploy_user }}"
        key: "{{ path_to_your_public_key }}"
        state: present
    - name: ensure private key and public one are present
      copy:
        src: keys/myapp.private
        dest: "/home/{{ deploy_user }}/.ssh/{{ item }}"
        mode: 0600
      with_items:
        - app_keys/id_dsa.pub
        - app_keys/id_dsa
I created a parameterized role to make sure an SSH key pair is generated for a source user on a source remote host and its public key is copied to a target user on a target remote host.
You can invoke that role in a nested loop of source and target host lists as shown at the bottom:
---
#****h* ansible/ansible_roles_ssh_authorize_user
# NAME
#   ansible_roles_ssh_authorize_user - Authorizes user via ssh keys
#
# FUNCTION
#
#   Copies user's SSH public key from a source user in a source host
#   to a target user in a target host
#
# INPUTS
#
#   * ssh_authorize_user_source_user
#   * ssh_authorize_user_source_host
#   * ssh_authorize_user_target_user
#   * ssh_authorize_user_target_host
#****
#****h* ansible_roles_ssh_authorize_user/main.yml
# NAME
#   main.yml - Main playbook for role ssh_authorize_user
# HISTORY
#   $Id: $
#****
- assert:
    that:
      - ssh_authorize_user_source_user != ''
      - ssh_authorize_user_source_host != ''
      - ssh_authorize_user_target_user != ''
      - ssh_authorize_user_target_host != ''
  tags:
    - check_vars

- name: Generate SSH Keypair in Source
  user:
    name: "{{ ssh_authorize_user_source_user }}"
    state: present
    ssh_key_comment: "ansible-generated for {{ ssh_authorize_user_source_user }}@{{ ssh_authorize_user_source_host }}"
    generate_ssh_key: yes
  delegate_to: "{{ ssh_authorize_user_source_host }}"
  register: source_user

- name: Install SSH Public Key in Target
  authorized_key:
    user: "{{ ssh_authorize_user_target_user }}"
    key: "{{ source_user.ssh_public_key }}"
  delegate_to: "{{ ssh_authorize_user_target_host }}"

- debug:
    msg: "{{ ssh_authorize_user_source_user }}@{{ ssh_authorize_user_source_host }} authorized to log in to {{ ssh_authorize_user_target_user }}@{{ ssh_authorize_user_target_host }}"
Invoking the role in a nested loop of source and target host lists:
- name: Authorize User
  include_role:
    name: ssh_authorize_user
  vars:
    ssh_authorize_user_source_user: "{{ git_user }}"
    ssh_authorize_user_source_host: "{{ item[0] }}"
    ssh_authorize_user_target_user: "{{ git_user }}"
    ssh_authorize_user_target_host: "{{ item[1] }}"
  with_nested:
    - "{{ app_server_list }}"
    - "{{ git_server_list }}"
I would create a deploy user that is restricted to pull access to your repos. You can either allow this through HTTP, or there are a few options to do it over SSH.
If you don't care about limiting the user to read-only access to your repo, then you can create a normal SSH user. Once the user is created, you can use Ansible to add the user's public key to the authorized_keys file on the Git server using the authorized_key module.
Once that is set up, you have two options:
If you use SSH, use SSH key forwarding so that the user running the Ansible task presents their key to the dev server.
Temporarily transfer the key and use the ssh_opts git module option to use the deploy user's key (a sketch follows below).
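As a rough sketch of that second option: the git module also accepts key_file alongside ssh_opts, so a checkout with a temporarily transferred deploy key could look like this (the repo URL, paths, and branch are placeholders):
- name: Clone the repo with the deploy user's key
  git:
    repo: "git@gitserver.example.com:myapp.git"
    dest: /srv/myapp
    version: master
    accept_hostkey: yes
    key_file: /home/deploy/.ssh/deploy_key
    ssh_opts: "-o StrictHostKeyChecking=no"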
Use the openssh_keypair and authorized_key modules to create and deploy the keys at the same time, without saving them on your Ansible host.
- openssh_keypair:
    group: root
    owner: root
    path: /some/path/in/your/server
  register: ssh_key

- name: Store public key into origin
  delegate_to: central_server_name
  authorized_key:
    key: "{{ ssh_key.public_key }}"
    comment: "{{ ansible_hostname }}"
    user: any_user_on_central
This will create the SSH key on your server (if needed) and make sure it enables an SSH connection to central_server_name.
I wanted to contribute this code by removing the shell module and using slurp. Thanks a lot Jonas Libbrecht for the code. It is quite useful.
- name: Get ssh keys
  slurp:
    src: /home/nsbl/.ssh/id_ed25519.pub
  register: ssh_keys
  tags:
    - ssh

- name: Check keys
  debug: msg="{{ ssh_keys['content'] | b64decode }}"
  tags:
    - ssh

- name: deploy keys on nodes 1
  authorized_key:
    user: root
    key: "{{ item[1] }}"
  delegate_to: "{{ item[0] }}"
  with_nested:
    - "{{ groups['cluster'] }}"
    - "{{ ssh_keys['content'] | b64decode }}"
  tags:
    - ssh
Thanks community.
This is what I use to exchange RSA keys between multiple hosts (many to many). I have variations that create the user accounts with the key pairs and also to deal with 'one to many' and 'many to one' scenarios.
#:TASK: Exchange SSH RSA keys between multiple hosts (many to many)
#:....: RSA keypairs are created as required at play (1)
#:....: authorized_keys updated at play <root user (2a.1 & 2a.2)>, <non root user (2b.1)>
#:....: -- We need a 2a or 2b option because there is a 'chicken & egg' issue for the root user!
#:....: known_hosts files are updated at play (3)
#:REQD: *IF* your security policy allows:
#:....: -- Add 'host_key_checking = False' to ansible.cfg
#:....: -- Or use one of the variations of 'StrictHostKeyChecking=no' elsewhere:
#:....:    e.g. inventory setting - ansible_ssh_common_args='-o StrictHostKeyChecking=no'
#:....:    - or - host variable - ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
#:USER: RUN this as the 'root' user; it hasn't been tested or adapted to be run as any other user
#:EXEC: ansible-playbook <playbook>.yml -e "nodes=<inventory_hosts> user=<username>"
#:VERS: 20230119.01
#
---
- name: Exchange RSA keys and update known_hosts between multiple hosts
  hosts: "{{ nodes }}"
  vars:
    ip: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
  tasks:
    - name: (1) Generate an SSH RSA key pair
      community.crypto.openssh_keypair:
        path: "~{{ user }}/.ssh/id_rsa"
        comment: "{{ user }}@{{ ip }}"
        size: 2048

    - name: (2) Retrieve RSA key/s then exchange it with other hosts
      block:
        - name: (2a.1) Retrieve client public RSA key/s to a variable
          slurp:
            src: ".ssh/id_rsa.pub"
          register: rsa_key

        # Using the debug module here seems to make the slurp above more reliable
        # as during testing not all hosts that were slurped worked.
        - debug:
            msg: "{{ rsa_key['content'] | b64decode }} / {{ ip }} / {{ user }}"

        - name: (2a.2) Exchange RSA keys between hosts and update authorized_key files
          delegate_to: "{{ item }}"
          authorized_key:
            user: "{{ user }}"
            key: "{{ rsa_key['content'] | b64decode }}"
          with_items:
            - "{{ ansible_play_hosts }}"
          when: item != inventory_hostname
      when: user == "root"

    - name: (2b.1) Exchange RSA keys between hosts and update authorized_key files
      block:
        - delegate_to: "{{ item }}"
          authorized_key:
            user: "{{ user }}"
            key: "{{ rsa_key['content'] | b64decode }}"
          with_items:
            - "{{ ansible_play_hosts }}"
          when: item != inventory_hostname
      when: user != "root"

    - name: (3) Ensure nodes are present in known_hosts file
      become: yes
      become_user: "{{ user }}"
      known_hosts:
        name: "{{ item }}"
        path: "~{{ user }}/.ssh/known_hosts"
        key: "{{ lookup('pipe', 'ssh-keyscan -t rsa {{ item }}') }}"
      when: item != inventory_hostname
      with_items:
        - "{{ ansible_play_hosts }}"