I have 2 app servers with a load balancer in front of them and 1 database server in my system. I'm provisioning them with Ansible. The app servers run Nginx + Passenger for a Rails app. I will use Capistrano for deployment, but I have an issue with SSH keys. My git repo is on another server, and I have to generate SSH public keys on the app servers and add them to the git server (to its authorized_keys file). How can I do this in an Ansible playbook?
PS: I may have more than 2 app servers.
This does the trick for me: it collects the public SSH keys on the nodes and distributes them over all the nodes, so they can communicate with each other.
- hosts: controllers
  gather_facts: false
  remote_user: root
  tasks:
    - name: fetch all public ssh keys
      shell: cat ~/.ssh/id_rsa.pub
      register: ssh_keys
      tags:
        - ssh

    - name: check keys
      debug: msg="{{ ssh_keys.stdout }}"
      tags:
        - ssh

    - name: deploy keys on all servers
      authorized_key: user=root key="{{ item[0] }}"
      delegate_to: "{{ item[1] }}"
      with_nested:
        - "{{ ssh_keys.stdout }}"
        - "{{ groups['controllers'] }}"
      tags:
        - ssh
Info: this is for the root user.
Take a look at the authorized_key module for information on how to manage your public keys.
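For instance, a minimal hedged sketch of revoking a previously distributed key with the same module (the user name and key file path are illustrative, not from the playbook above):

- name: revoke an old public key (illustrative file path)
  authorized_key:
    user: root
    key: "{{ lookup('file', 'files/old_id_rsa.pub') }}"  # hypothetical local copy of the old key
    state: absent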
The most straightforward solution I can think of would be to generate a fresh key pair for your application, to be shared across all your app instances. This may have security implications (you are indeed sharing keys between all instances!), but it will simplify the provisioning process a lot.
You'll also need a deploy user on each app machine, to be used later on during the deployment process. You'll need your public key (or the Jenkins one) in each deploy user's authorized_keys.
A sketch playbook:
---
- name: ensure app/deploy public key is present on git server
  hosts: gitserver
  tasks:
    - name: ensure app public key
      authorized_key:
        user: "{{ git_user }}"
        key: "{{ lookup('file', 'app_keys/id_dsa.pub') }}"
        state: present

- name: provision app servers
  hosts: appservers
  tasks:
    - name: ensure app/deploy user is present
      user:
        name: "{{ deploy_user }}"
        state: present

    - name: ensure you'll be able to deploy later on
      authorized_key:
        user: "{{ deploy_user }}"
        key: "{{ path_to_your_public_key }}"
        state: present

    - name: ensure private key and public one are present
      copy:
        src: "{{ item }}"
        dest: "/home/{{ deploy_user }}/.ssh/{{ item | basename }}"
        mode: 0600
      with_items:
        - app_keys/id_dsa.pub
        - app_keys/id_dsa
I created a parameterized role to make sure an SSH key pair is generated for a source user on a source remote host and that its public key is copied to a target user on a target remote host.
You can invoke that role in a nested loop of source and target host lists, as shown at the bottom:
---
#****h* ansible/ansible_roles_ssh_authorize_user
# NAME
#   ansible_roles_ssh_authorize_user - Authorizes user via ssh keys
#
# FUNCTION
#
#   Copies user's SSH public key from a source user in a source host
#   to a target user in a target host
#
# INPUTS
#
#   * ssh_authorize_user_source_user
#   * ssh_authorize_user_source_host
#   * ssh_authorize_user_target_user
#   * ssh_authorize_user_target_host
#****
#****h* ansible_roles_ssh_authorize_user/main.yml
# NAME
#   main.yml - Main playbook for role ssh_authorize_user
# HISTORY
#   $Id: $
#****
- assert:
    that:
      - ssh_authorize_user_source_user != ''
      - ssh_authorize_user_source_host != ''
      - ssh_authorize_user_target_user != ''
      - ssh_authorize_user_target_host != ''
  tags:
    - check_vars

- name: Generate SSH Keypair in Source
  user:
    name: "{{ ssh_authorize_user_source_user }}"
    state: present
    ssh_key_comment: "ansible-generated for {{ ssh_authorize_user_source_user }}@{{ ssh_authorize_user_source_host }}"
    generate_ssh_key: yes
  delegate_to: "{{ ssh_authorize_user_source_host }}"
  register: source_user

- name: Install SSH Public Key in Target
  authorized_key:
    user: "{{ ssh_authorize_user_target_user }}"
    key: "{{ source_user.ssh_public_key }}"
  delegate_to: "{{ ssh_authorize_user_target_host }}"

- debug:
    msg: "{{ ssh_authorize_user_source_user }}@{{ ssh_authorize_user_source_host }} authorized to log in to {{ ssh_authorize_user_target_user }}@{{ ssh_authorize_user_target_host }}"
Invoking role in a loop:
- name: Authorize User
  include_role:
    name: ssh_authorize_user
  vars:
    ssh_authorize_user_source_user: "{{ git_user }}"
    ssh_authorize_user_source_host: "{{ item[0] }}"
    ssh_authorize_user_target_user: "{{ git_user }}"
    ssh_authorize_user_target_host: "{{ item[1] }}"
  with_nested:
    - "{{ app_server_list }}"
    - "{{ git_server_list }}"
I would create a deploy user that is restricted to pull access on your repos. You can either allow this over HTTP, or there are a few options to do it over SSH.
If you don't care about limiting the user to read-only access to your repo, then you can create a normal SSH user. Once the user is created, you can use Ansible's authorized_key module to add the user's public key to the authorized_keys file on the git server.
Once that is set up, you have two options (a sketch of the second one follows the list):
If you use SSH, use SSH agent forwarding so that the key of the user running the Ansible task is available on the dev server.
Temporarily transfer the key and use the ssh_opts option of the git module to use the deploy user's key.
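As an illustration of the second option, here is a minimal sketch of a clone task using the git module's key_file and accept_hostkey parameters; the repository URL, destination path, deploy_user variable, and key location are placeholders you would adapt:

- name: clone the app repo as the deploy user (sketch; URL and paths are placeholders)
  git:
    repo: git@gitserver.example.com:myorg/myapp.git
    dest: /var/www/myapp
    version: master
    key_file: "/home/{{ deploy_user }}/.ssh/id_rsa"  # deploy user's private key on the app server
    accept_hostkey: yes
  become: yes
  become_user: "{{ deploy_user }}"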
Use the openssh_keypair and authorized_key modules to create and deploy the keys at the same time, without saving them on your Ansible control host.
- openssh_keypair:
    group: root
    owner: root
    path: /some/path/in/your/server
  register: ssh_key

- name: Store public key into origin
  delegate_to: central_server_name
  authorized_key:
    key: "{{ ssh_key.public_key }}"
    comment: "{{ ansible_hostname }}"
    user: any_user_on_central
This will create the key pair if needed and make sure the SSH key on your server enables SSH connections to central_server_name.
I wanted to contribute this code, which removes the shell module in favour of slurp. Thanks a lot to Jonas Libbrecht for the original code; it is quite useful.
- name: Get ssh keys
  slurp:
    src: /home/nsbl/.ssh/id_ed25519.pub
  register: ssh_keys
  tags:
    - ssh

- name: Check keys
  debug: msg="{{ ssh_keys['content'] | b64decode }}"
  tags:
    - ssh

- name: deploy keys on nodes 1
  authorized_key:
    user: root
    key: "{{ item[1] }}"
  delegate_to: "{{ item[0] }}"
  with_nested:
    - "{{ groups['cluster'] }}"
    - "{{ ssh_keys['content'] | b64decode }}"
  tags:
    - ssh
Thanks community.
This is what I use to exchange RSA keys between multiple hosts (many to many). I have variations that create the user accounts with the key pairs, and others that deal with 'one to many' and 'many to one' scenarios.
#:TASK: Exchange SSH RSA keys between multiple hosts (many to many)
#:....: RSA keypairs are created as required at play (1)
#:....: authorized_keys updated at play <root user (2a.1 & 2a.2)>, <non root user (2b.1)>
#:....: -- We need a 2a or 2b option because there is a 'chicken & egg' issue for the root user!
#:....: known_hosts files are updated at play (3)
#:REQD: *IF* your security policy allows:
#:....: -- Add 'host_key_checking = False' to ansible.cfg
#:....: -- Or use one of the variations of 'StrictHostKeyChecking=no' elsewhere:
#:....:    e.g. inventory setting - ansible_ssh_common_args='-o StrictHostKeyChecking=no'
#:....:    - or - host variable   - ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
#:USER: RUN this as the 'root' user; it hasn't been tested or adapted to be run as any other user
#:EXEC: ansible-playbook <playbook>.yml -e "nodes=<inventory_hosts> user=<username>"
#:VERS: 20230119.01
#
---
- name: Exchange RSA keys and update known_hosts between multiple hosts
  hosts: "{{ nodes }}"
  vars:
    ip: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
  tasks:
    - name: (1) Generate an SSH RSA key pair
      community.crypto.openssh_keypair:
        path: "~{{ user }}/.ssh/id_rsa"
        comment: "{{ user }}@{{ ip }}"
        size: 2048

    - name: (2) Retrieve RSA key/s then exchange them with the other hosts
      block:
        - name: (2a.1) Retrieve client public RSA key/s to a variable
          slurp:
            src: ".ssh/id_rsa.pub"
          register: rsa_key

        # Using the debug module here seems to make the slurp above more reliable,
        # as during testing not all hosts that were slurped worked.
        - debug:
            msg: "{{ rsa_key['content'] | b64decode }} / {{ ip }} / {{ user }}"

        - name: (2a.2) Exchange RSA keys between hosts and update authorized_key files
          delegate_to: "{{ item }}"
          authorized_key:
            user: "{{ user }}"
            key: "{{ rsa_key['content'] | b64decode }}"
          with_items:
            - "{{ ansible_play_hosts }}"
          when: item != inventory_hostname
      when: user == "root"

    - name: (2b.1) Exchange RSA keys between hosts and update authorized_key files
      block:
        - delegate_to: "{{ item }}"
          authorized_key:
            user: "{{ user }}"
            key: "{{ rsa_key['content'] | b64decode }}"
          with_items:
            - "{{ ansible_play_hosts }}"
          when: item != inventory_hostname
      when: user != "root"

    - name: (3) Ensure nodes are present in known_hosts file
      become: yes
      become_user: "{{ user }}"
      known_hosts:
        name: "{{ item }}"
        path: "~{{ user }}/.ssh/known_hosts"
        key: "{{ lookup('pipe', 'ssh-keyscan -t rsa ' + item) }}"
      when: item != inventory_hostname
      with_items:
        - "{{ ansible_play_hosts }}"
Related
I have built a playbook in Ansible that creates 2 groups of EC2 instances.
In a second playbook, I want the first play to list the existing groups to the user so the user can choose one. Then, in a second play, I want to use this group in hosts:
---
- name: playbook
  hosts: localhost
  vars_prompt:
    - name: groupvar
      prompt: "Select a group"
      private: no
  tasks:
    - name: task 1
      debug:
        msg: "{{ groupvar }}"

- name: Another play
  hosts: "{{ groupvar }}"
  # ...
How can I pass on the value of groupvar to the second play in this playbook?
Note: make sure you are not simply re-inventing the existing --limit option of the ansible-playbook command line.
As you found out, vars_prompt variables do not survive beyond the play they're declared in. In that case you have to use set_fact. Here is an example using your code above as a starting point:
- name: playbook
  hosts: localhost
  vars_prompt:
    - name: groupvar
      prompt: "Select a group"
      private: no
  tasks:
    - name: task 1
      debug:
        msg: "{{ groupvar }}"

    - name: Save value in a fact for current host
      set_fact:
        groupvar: "{{ groupvar }}"

- name: Another play running on above chosen group
  # Remember we set the fact on the current host above, which was localhost
  hosts: "{{ hostvars['localhost'].groupvar }}"
  # ... rest of your play.
I have a playbook where, for one of the hosts, the way I need to connect differs depending on whether certain tasks have previously succeeded.
In this specific case there's a tunnel between two of them, and one routes all its traffic over that tunnel, so once that is configured I need to use the other as a jump box in order to connect. But I can imagine many other circumstances where you might want to change the connection method mid-playbook, from something as simple as modified users/passwords.
How can I have a conditional connection method?
I can't simply update it with set_fact, since by the time I reach that task Ansible will already have tried, and possibly failed, to 'gather facts' at the start, and won't proceed.
The devil is in the details for such a question, for sure, but in general I think use of add_host will be the most legible way to do what you want. You can also change the connection on a per-task basis, or conditionally change the connection for the whole playbook against that host:
- hosts: all
  connection: ssh  # <-- or whatever bootstrap connection plugin
  gather_facts: no
  tasks:
    - command: echo "do something here"
      register: the_thing

    # now, you can either switch to the alternate connection per task:
    - command: echo "do the other thing"
      connection: lxd  # <-- or whatever
      when: the_thing is success

    # OR, you can make the alternate connection the default
    # for the rest of the current playbook
    - name: switch the rest of the playbook
      set_fact:
        ansible_connection: chroot
      when: the_thing is success

    # OR, perhaps run another playbook using the alternate connection
    # by adding the newly configured host to a special group
    - add_host:
        name: '{{ ansible_host }}'
        groups:
          - configured_hosts
      when: the_thing is success

# and then running the other playbook against configured hosts
- hosts: configured_hosts
  connection: docker  # <-- or whatever connection you want
  tasks:
    - setup:
I use the following snippet as a role and invoke it depending on whether I need a jump host (bastion or proxy) or not. An example is also given in the comments. This role can add multiple hosts at the same time. Put the following contents in roles/inventory/tasks/main.yml:
# Description: |
#   Adds given hosts to inventory.
# Inputs:
#   hosts_info: |
#     (mandatory)
#     List of hosts with a structure which looks like this:
#
#     - name: <host name>
#       address: <url or ip address of host>
#       groups: [] list of groups to which this host will be added.
#       user: <SSH user>
#       ssh_priv_key_path: <private key path for ssh access to host>
#       proxy: <define following structure if host should be accessed using proxy>
#         ssh_priv_key_path: <priv key path for ssh access to proxy node>
#         user: <login user on proxy node>
#         host: <proxy host address>
#
# Example Usage:
# - include_role:
#     name: inventory
#   vars:
#     hosts_info:
#       - name: controller-0
#         address: 10.100.10.13
#         groups:
#           - controller
#         user: user1
#         ssh_priv_key_path: /home/user/.ssh/id_rsa
#       - name: node-0
#         address: 10.10.1.14
#         groups:
#           - worker
#           - nodes
#         user: user1
#         ssh_priv_key_path: /home/user/.ssh/id_rsa
#         proxy:
#           ssh_priv_key_path: /home/user/jumphost_key.rsa.priv
#           user: jumphost-user
#           host: 10.100.10.13

- name: validate inventory input
  assert:
    that:
      - "single_host_info.name is defined"
      - "single_host_info.groups is defined"
      - "single_host_info.address is defined"
      - "single_host_info.user is defined"
      - "single_host_info.ssh_priv_key_path is defined"
  loop: "{{ hosts_info }}"
  loop_control:
    loop_var: single_host_info

- name: validate inventory proxy input
  assert:
    that:
      - "single_host_info.proxy.host is defined"
      - "single_host_info.proxy.user is defined"
      - "single_host_info.proxy.ssh_priv_key_path is defined"
  when: "single_host_info.proxy is defined"
  loop: "{{ hosts_info }}"
  loop_control:
    loop_var: single_host_info

- name: Add hosts to inventory without proxy
  add_host:
    groups: "{{ single_host_info.groups | join(',') }}"
    name: "{{ single_host_info.name }}"
    host: "{{ single_host_info.name }}"
    hostname: "{{ single_host_info.name }}"
    ansible_host: "{{ single_host_info.address }}"
    ansible_connection: ssh
    ansible_ssh_user: "{{ single_host_info.user }}"
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
    ansible_ssh_private_key_file: "{{ single_host_info.ssh_priv_key_path }}"
  loop: "{{ hosts_info | json_query(\"[?contains(keys(@), 'proxy') == `false`]\") | list }}"
  loop_control:
    loop_var: single_host_info

- name: Add hosts to inventory with proxy
  add_host:
    groups: "{{ single_host_info.groups | join(',') }}"
    name: "{{ single_host_info.name }}"
    host: "{{ single_host_info.name }}"
    hostname: "{{ single_host_info.name }}"
    ansible_host: "{{ single_host_info.address }}"
    ansible_connection: ssh
    ansible_ssh_user: "{{ single_host_info.user }}"
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
    ansible_ssh_private_key_file: "{{ single_host_info.ssh_priv_key_path }}"
    ansible_ssh_common_args: >-
      -o ProxyCommand='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
      -W %h:%p -q -i {{ single_host_info.proxy.ssh_priv_key_path }}
      {{ single_host_info.proxy.user }}@{{ single_host_info.proxy.host }}'
  loop: "{{ hosts_info | json_query(\"[?contains(keys(@), 'proxy') == `true`]\") }}"
  loop_control:
    loop_var: single_host_info
I'm programming a simple task with Ansible to create a user and add an existing RSA public key. This is the code I wrote:
- name: SYSTEM - Create test user
  tags: system-user
  user:
    name: "{{ test_user }}"
    state: present
    createhome: yes

- name: SYSTEM - Add existing pub key for test user
  tags: system-user
  copy:
    content: "{{ test_user_pubkey }}"
    dest: "/tmp/test_user_id_rsa.pub"
    force: no
    owner: "{{ test_user }}"
    group: "{{ test_user }}"
    mode: 0600

- name: SYSTEM - Set authorized key for test_user taken from file
  tags: system-user
  authorized_key:
    user: "{{ test_user }}"
    state: present
    key: "{{ lookup('file', '/tmp/test_user_id_rsa.pub') }}"
The code I wrote is not elegant, and I think the best option would be to add the existing RSA public key within the user creation block, so that the authorized_keys file is created and filled in one step.
I've read about the Ansible user module, but the ssh_key_file option does not offer the possibility of echoing the value of an existing public key into the authorized_keys file (the end goal is to be able to connect remotely over SSH using the user and the private key).
ssh_key_file: Optionally specify the SSH key filename. If this is a relative filename then it will be relative to the user's home directory.
Is it possible with Ansible to manage this process within the user module?
The answer to your problem is:
- name: SYSTEM - Create test user
  tags: system-user
  user:
    name: "{{ test_user }}"
    state: present
    createhome: yes

- name: SYSTEM - Set authorized key for test_user taken from file
  tags: system-user
  authorized_key:
    user: "{{ test_user }}"
    state: present
    key: "{{ test_user_pubkey }}"
That's all that is needed.
Regarding your reading of the documentation, ssh_key_file pertains to generating an SSH key pair, which is not what you want.
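For contrast, here is a minimal hedged sketch of what ssh_key_file actually controls, namely where the user module writes a key pair it generates itself; the path and key size below are illustrative, not something from your playbook:

- name: generate a key pair for the user (this is what ssh_key_file is for)
  user:
    name: "{{ test_user }}"
    generate_ssh_key: yes
    ssh_key_file: .ssh/id_rsa   # relative to the user's home directory
    ssh_key_bits: 2048          # illustrative key size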
So I've been lurking in this thread trying to get this wrapped around my head, and I ended up being able to work my way around this.
First things first, I tend to cram everything into dicts and then use | dict2items whenever I need to loop within Jinja2.
My main problem is that once the user module generates the SSH keys, there is no clean way to use the authorized_key module with what you just made (or so I think; I am probably not the smartest guy in here) without bending Ansible in impossible ways (slurping? It seems impossible to place one variable within another, from what I've tried: "{{ slurp_{{ item.key }} | b64decode }}" seems undoable).
So if you are using massive loops and are unwilling to copy all the keys to your localhost (which honestly is time-consuming), I've found this sneaky trick that does not make reading your code an Olympian challenge:
- name: Prepare the SFTP user
  user:
    name: "{{ item.key }}"
    groups: sftp_users
    home: /home/{{ item.key }}
    password: "{{ impossible_sftp_pass }}"
    generate_ssh_key: yes
    ssh_key_file: .ssh/id_rsa
    shell: /bin/nologin
  with_dict: "{{ instances }}"

- name: sneaky way to get the keys right
  shell: cat /home/{{ item.key }}/.ssh/id_rsa.pub > /home/{{ item.key }}/.ssh/authorized_keys
  args:
    creates: /home/{{ item.key }}/.ssh/authorized_keys
  with_dict: "{{ instances }}"
In this example, our goal is to set up an SFTP bastion host that will eventually rsync the SFTP data repos to the appropriate web front ends inside a private network.
I'm following the documentation for creating an instance using Ansible:
http://docs.ansible.com/ansible/guide_gce.html
However, when I run this I get:
Required 'compute.zones.list' permission for 'projects/quick-line-137923'
I don't know where I'm meant to configure these permissions for a service account, because the documentation seems to suggest that you can only configure permissions for a service account when an instance is created:
"You can set scopes only when you create a new instance"
When I try to grant IAM permissions to this service account (admin), it isn't in the list, and when I select the service account under 'Service accounts' I'm only asked to add a member for domain-wide permissions; there is nowhere to assign this service account the compute.zones.list permission.
Any help?
My playbook looks like this:
- name: "Create instance(s)"
hosts: localhost
gather_facts: no
connection: local
vars:
machine_type: n1-standard-1 # default
image: ubuntu-1404-lts
service_account_email: admin-531#quick-line-137923.iam.gserviceaccount.com
credentials_file: /Users/Mike/Downloads/project.json
project_id: quick-line-137923
tasks:
- name: "Launch instances"
gce:
instance_names: dev
machine_type: "{{ machine_type }}"
image: "{{ image }}"
service_account_email: "{{ service_account_email }}"
credentials_file: "{{ credentials_file }}"
project_id: "{{ project_id }}"
tags: webserver
register: gce
- name: "Wait for SSH to come up"
wait_for: host={{ item.public_ip }} port=22 delay=10 timeout=60
with_items: gce.instance_data
- name: "Add host to groupname"
add_host: hostname={{ item.public_ip }} groupname=new_instances
with_items: gce.instance_data
- name: "Manage new instances"
vars_files:
- "vars/webserver.yml"
hosts: new_instances
connection: ssh
sudo: True
roles:
- geerlingguy.apache
- geerlingguy.php
- geerlingguy.drush
- geerlingguy.mysql
Add the Compute Instance Admin and Service Account Actor roles to the service account.
You also have to activate the service account. The gcloud tool can be used for this: https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
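If you want to drive that activation from Ansible itself, a minimal hedged sketch could run the gcloud CLI via the command module inside your existing localhost play, reusing the vars already defined there (this task is my assumption, not part of your original playbook):

    - name: Activate the service account before launching instances (sketch)
      command: >
        gcloud auth activate-service-account {{ service_account_email }}
        --key-file={{ credentials_file }}
      changed_when: false  # activation doesn't change managed state, so don't report a change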
I'm working with Ansible and trying to push an SSH key from my server to another remote server.
Here is my code:
- name: Add RSA key to the remote host
  authorized_key:
    user:
      name:"{{ item.user }}"
      key:"{{ lookup('file', '/home/ansible/.ssh/id_rsa.pub') }}"
      path:"/home/{{ item.username }}/.ssh/authorized_keys"
  when: item.get('state', 'present') == 'present'
  with_items: USER_LIST
and I get the following error every time I try to execute it:
ERROR: Syntax Error while loading YAML script, /home/ansible/public_html/ansible/roles/user/tasks/main.yml
Note: The error may actually appear before this position: line 39, column 5
Your syntax is wrong, try this:
- name: Add RSA key to the remote host
  authorized_key:
    user: "{{ item.user }}"
    key: "{{ lookup('file', '/home/ansible/.ssh/id_rsa.pub') }}"
    path: "/home/{{ item.username }}/.ssh/authorized_keys"
  when: item.get('state', 'present') == 'present'
  with_items: "{{ USER_LIST }}"