Ansible - Moving ssh keys between two nodes

Here is the problem I'm working on.
I have an Ansible server.
I have another server, M.
I have other servers B1, B2, B3... all known by Ansible.
I have a hosts file such as this:
[CTRL]
M
[SLAVES]
B1
B2
B3
I want to generate an ssh key on my master (not the Ansible server itself) and deploy it to the slave servers so that the master can connect to the slaves by key.
Here is what I tried:
- hosts: CTRL
  remote_user: root
  vars_prompt:
    - name: ssh_password
      prompt: Please enter password for ssh key copy on remote nodes
      private: yes
  tasks:
    - yum: name=sshpass state=present
      sudo: yes
    - name: generate ssh key on the controller
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
    - name: copy ssh key to the other nodes
      shell: sshpass -p '{{ ssh_password }}' ssh-copy-id root@'{{ item }}'
      with_items: groups['SLAVES']
      delegate_to: "{{ groups['CTRL'][0] }}"
The key generation works, but no matter what I try I have a problem copying the key to the slave hosts:
failed: [M -> M] => (item=B1) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B1'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
failed: [M -> M] => (item=B2) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B2'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
failed: [M -> M] => (item=B3) => {"changed": true, "cmd": "sshpass -p 'mypassword' ssh-copy-id root@'B3'", "delta": "0:00:00.101102", "end": "2016-07-18 11:08:56.985623", "item": "B1", "rc": 6, "start": "2016-07-18 11:08:56.884521", "warnings": []}
Do you know how I could correct my code, or is there maybe a simpler way to do what I want?
Thank you.

This is a neater solution, without a file fetch:
---
- hosts: M
  tasks:
    - name: generate key pair
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
      args:
        creates: /root/.ssh/id_rsa
    - name: test public key
      shell: ssh-keygen -l -f /root/.ssh/id_rsa.pub
      changed_when: false
    - name: retrieve public key
      shell: cat /root/.ssh/id_rsa.pub
      register: master_public_key
      changed_when: false

- hosts: SLAVES
  tasks:
    - name: add master public key to slaves
      authorized_key:
        user: root
        key: "{{ hostvars['M'].master_public_key.stdout }}"
One possible solution (my first answer):
---
- hosts: M
  tasks:
    - name: generate key pair
      shell: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
    - name: fetch public key
      fetch:
        src: /root/.ssh/id_rsa.pub
        dest: tmp/
        flat: yes

- hosts: SLAVES
  tasks:
    - name: add master public key to slaves
      authorized_key:
        user: root
        key: "{{ lookup('file', 'tmp/id_rsa.pub') }}"

Related

Run ansible as root with specific sudoers

My issue is that I have one server where the sudoers entry for the ansible user looks like this:
ansible ALL=(root) NOPASSWD: /usr/bin/su - root
Hence, the only way to switch to the root user is:
sudo su - root
When I try to run the below ansible playbook:
---
- name: Configure Local Repo server address
  hosts: lab
  remote_user: ansible
  become: yes
  become_user: root
  become_method: runas
  tasks:
    - name: test whoami
      become: yes
      shell:
        cmd: whoami
      register: whoami_output
    - debug: var=whoami_output
    - name: Deploy local.repo file to the hosts
      become: yes
      copy:
        src: /etc/ansible/files/local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: ansible
        group: ansible
        mode: 0644
        backup: yes
      register: deploy_file_output
    - debug: var=deploy_file_output
I got the following error:
ansible-playbook --private-key /etc/ansible/keys/ansible_key /etc/ansible/playbooks/local_repo_provisioning.yml
PLAY [Configure Local Repo server address] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12]
TASK [test whoami] *****************************************************************************************************************************************************************************************************************************
changed: [10.175.65.12]
TASK [debug] ***********************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12] => {
    "whoami_output": {
        "changed": true,
        "cmd": "whoami",
        "delta": "0:00:00.003301",
        "end": "2023-01-15 17:53:56.312715",
        "failed": false,
        "msg": "",
        "rc": 0,
        "start": "2023-01-15 17:53:56.309414",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "ansible",
        "stdout_lines": [
            "ansible"
        ]
    }
}
TASK [Deploy local.repo file to the hosts] *****************************************************************************************************************************************************************************************************
fatal: [10.175.65.12]: FAILED! => {"changed": false, "checksum": "2356deb90d20d5f31351c719614d5b5760ab967d", "msg": "Destination /etc/yum.repos.d not writable"}
PLAY RECAP *************************************************************************************************************************************************************************************************************************************
10.175.65.12 : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
When I tried become_method: sudo I got a "Missing sudo password" message, and with become_method: su I got "Timeout (12s) waiting for privilege escalation prompt:".
All in all, could someone explain how Ansible runs commands depending on the become_method that is set? Is there a way to switch to the root user with that kind of sudoers configuration?
Thanks in advance!
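A rough sketch of what each become_method does here (my summary, not from the original post): Ansible wraps the module invocation on the remote host in an escalation command, approximately like this:

# become_method: sudo   ->  sudo -H -S -n -u root /bin/sh -c '<module command>'
#                           (that sudoers rule does not permit /bin/sh, so sudo wants a password: "Missing sudo password")
# become_method: su     ->  su root -c '/bin/sh -c "<module command>"'
#                           (su asks for root's password; none is supplied, so the escalation prompt times out)
# become_method: runas  ->  a Windows-only become plugin; over ssh to a Linux host it does not escalate at all,
#                           which is why whoami still returned "ansible"

As far as I can tell, a sudoers rule that only allows the exact command /usr/bin/su - root cannot help any of these methods: the sudo method generates a /bin/sh command the rule does not cover, and the su method bypasses sudo entirely and needs root's password. Broadening the sudoers entry (or supplying root's password for su) is usually required before Ansible can become root non-interactively.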

How to get inside Vault (ssh) with an Ansible playbook?

I'm using Vagrant and I want to start the Vault server from inside the Vagrant box, via an Ansible playbook.
To do so without the playbook, I need to execute vagrant ssh; then I'm in the Vagrant box and can start the Vault server with vault server -dev.
I want to execute vault server -dev directly from the playbook. Any ideas how?
This is my playbook:
---
- name: Playbook to install and use Vault
  become: true
  hosts: all
  tasks:
    - name: Uptade1
      become: true
      become_user: root
      shell: apt update
    - name: gpg
      become: true
      become_user: root
      shell: apt install gpg
    - name: verify key
      become: true
      become_user: root
      shell: wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null
    - name: fingerprint
      become: true
      become_user: root
      shell: gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint
    - name: repository
      become: true
      become_user: root
      shell: echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
    - name: update2
      become: true
      become_user: root
      shell: apt update
    - name: vault install
      become: true
      become_user: root
      shell: apt install vault
    - name: start vault
      become: true
      become_user: vagrant
      shell: vault server -dev -dev-listen-address=0.0.0.0:8200
The last task is my attempt to start the Vault server, but the play gets stuck at
TASK [start vault] *********************************************************
I also tried adding
  - name: start vault
    become: true
    shell: vagrant ssh
before it, but then I get:
TASK [start vault] *************************************************************
fatal: [default]: FAILED! => {"changed": true, "cmd": "vagrant ssh", "delta": "0:00:00.003245", "end": "2022-07-03 16:18:31.480702", "msg": "non-zero return code", "rc": 127, "start": "2022-07-03 16:18:31.477457", "stderr": "/bin/sh: 1: vagrant: not found", "stderr_lines": ["/bin/sh: 1: vagrant: not found"], "stdout": "", "stdout_lines": []}
This is my Vagrantfile, if needed:
Vagrant.configure("2") do |config|
  VAGRANT_DEFAULT_PROVIDER = "virtualbox"
  config.vm.hostname = "carebox-idan"
  config.vm.provision "ansible", playbook: "playbook.yml"
  config.vm.box = "laravel/homestead"
  config.vm.network "forwarded_port", guest: 8200, host: 8200, auto_correct: "true"
  config.ssh.forward_agent = true
end
Thank you.
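One way to keep that last task from hanging (a sketch on my part, not from the original post; the log path is arbitrary) is to start the dev server in the background as a fire-and-forget task and then wait for the port:

- name: start vault in the background
  become: true
  become_user: vagrant
  # nohup + & detaches the server; async/poll: 0 tells Ansible not to wait for it
  shell: nohup vault server -dev -dev-listen-address=0.0.0.0:8200 > /home/vagrant/vault.log 2>&1 &
  async: 10
  poll: 0

- name: wait for vault to listen on port 8200
  wait_for:
    port: 8200
    delay: 2

Keep in mind that a -dev server keeps its unseal key and root token in memory only, so this is for experimentation rather than anything durable.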

Ansible synchronize (rsync) fails

ansible.posix.synchronize, a wrapper for rsync, is failing with the message:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
My playbook
---
- name: Test rsync
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Populate scripts/common using copy
      copy:
        src: common/
        dest: /home/postgres/scripts/common
    - name: Populate scripts/common using rsync
      ansible.posix.synchronize:
        src: common/
        dest: /home/postgres/scripts/common
Populate scripts/common using copy executes with no problem.
Full error output
fatal: [<host>]: FAILED! => {
    "changed": false,
    "cmd": "sshpass -d3 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' /opt/common/ pg_deployment@<host>:/home/postgres/scripts/common",
    "invocation": {
        "module_args": {
            "_local_rsync_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "_local_rsync_path": "rsync",
            "_substitute_controller": false,
            "archive": true,
            "checksum": false,
            "compress": true,
            "copy_links": false,
            "delay_updates": true,
            "delete": false,
            "dest": "pg_deployment@<host>:/home/postgres/scripts/common",
            "dest_port": null,
            "dirs": false,
            "existing_only": false,
            "group": null,
            "link_dest": null,
            "links": null,
            "mode": "push",
            "owner": null,
            "partial": false,
            "perms": null,
            "private_key": null,
            "recursive": null,
            "rsync_opts": [],
            "rsync_path": "sudo -u postgres rsync",
            "rsync_timeout": 0,
            "set_remote_user": true,
            "src": "/opt/common/",
            "ssh_args": null,
            "ssh_connection_multiplexing": false,
            "times": null,
            "verify_host": false
        }
    },
    "msg": "Warning: Permanently added '<host>' (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n",
    "rc": 5
}
Notes:
User pg_deployment has passwordless sudo to postgres. This ansible playbook is being run inside a docker container.
After messing with it a bit more, I found that I can run the rsync command directly (not using Ansible):
SSHPASS=<my_ssh_pass> sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/
The only difference I can see is that I used sshpass -e while Ansible defaulted to sshpass -d<n>. Could the credentials Ansible was trying to pass in be incorrect? If they are incorrect for ansible.posix.synchronize, then why aren't they incorrect for other Ansible tasks?
EDIT
Confirmed that if I run
sshpass -d10 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres
(I chose an arbitrary number, 10, for the file descriptor) I get the same error as above:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
This suggests the problem is with whatever file descriptor Ansible is using? It isn't a huge problem, since I can just pass the password to sshpass as an environment variable in my Docker container (it's ephemeral), but I would still like to know what is going on with Ansible here.
SOLUTION (using command)
---
- name: Create Postgres Cluster
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Create Scripts Directory
      file:
        path: /home/postgres/scripts
        state: directory
    - name: Populate scripts/common
      command: sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/scripts
      delegate_to: 127.0.0.1
      become: no
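Note that sshpass -e reads the password from the SSHPASS environment variable, so if this workaround runs somewhere that does not already export it (as the ephemeral container above does), the task needs something like the following. This is a sketch; rsync_password is a placeholder variable name, not from the original post:

- name: Populate scripts/common
  command: sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/scripts
  environment:
    SSHPASS: "{{ rsync_password }}"   # placeholder variable; supply it from a vault or prompt, not plain text
  delegate_to: 127.0.0.1
  become: no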

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role, create_ec2_instance, creates the in-memory group ec2hosts used within the second portion of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple ec2 module creation. This works as desired. (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included the extra roles for transparency, though they are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated the playbook above to use remote_user; below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit @<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example: in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root), and then become 'root'. Rarely, I connect as 'ansible', then become the 'apache' user. So remote_user specifies the username used to connect, and become_user is the username after the connection is made.
PS: Passwordless sudo for the ansible user:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is a known workaround; see: Specify sudo password for Ansible
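A slightly safer variant of that workaround (my addition, not the original answer) writes a sudoers drop-in file and validates it with visudo before installing it, so a syntax error cannot break sudo:

- name: nopasswd sudo for ansible user (validated drop-in)
  become: true
  copy:
    dest: /etc/sudoers.d/ansible
    content: "{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL\n"
    owner: root
    group: root
    mode: "0440"
    validate: /usr/sbin/visudo -cf %s   # reject the file if it is not valid sudoers syntax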

Creating ssh keys on remote hosts using ansible fails

I am using Ansible to create ssh keys on remote hosts. Following is the playbook code
- name: Test playbook
  hosts: all
  remote_user: admin
  tasks:
    - name: Create ssh keys
      expect:
        command: ssh-keygen -t rsa
        echo: yes
        timeout: 5
        responses:
          "file": ""         ## Enter file in which to save the key (/home/admin/.ssh/id_rsa)
          "Overwrite": "n"   ## Overwrite (y/n)?
          "passphrase": ""   ## Enter passphrase (empty for no passphrase)
However, I get the following error:
fatal: [10.1.1.1]: FAILED! => {"changed": true, "cmd": "ssh-keygen -t rsa", "delta": "0:00:00.301769", "end": "2015-12-30 09:56:29.465815", "failed": true, "invocation": {"module_args": {"chdir": null, "command": "ssh-keygen -t rsa", "creates": null, "echo": true, "removes": null, "responses": {"Overwrite": "n", "file": "", "passphrase": ""}, "timeout": 5}, "module_name": "expect"}, "rc": 1, "start": "2015-12-30 09:56:29.164046", "stdout": "Generating public/private rsa key pair.\r\nEnter file in which to save the key (/home/admin/.ssh/id_rsa): \r\n/home/admin/.ssh/id_rsa already exists.\r\nOverwrite (y/n)? n", "stdout_lines": ["Generating public/private rsa key pair.", "Enter file in which to save the key (/home/admin/.ssh/id_rsa): ", "/home/admin/.ssh/id_rsa already exists.", "Overwrite (y/n)? n"]}
This does work fine when "Overwrite" is mapped to "y".
If that's the case then it sounds like your task is working properly. ssh-keygen will only prompt to overwrite the file if it already exists, and your response to "Overwrite" in the task is "n". If you tell ssh-keygen to not overwrite the file then it will exit immediately with a non-zero return code, which Ansible interprets as an error.
If you only want this task to execute when the key doesn't exist (in order to create a new key but not overwrite an existing one) then you probably want to add the following to your task:
creates: /home/admin/.ssh/id_rsa
The creates modifier will prevent the task from executing if the specified file already exists.
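For reference, a sketch of the task with that change applied (the expect module accepts creates directly, as the module_args in the error output above also show):

- name: Create ssh keys
  expect:
    command: ssh-keygen -t rsa
    echo: yes
    timeout: 5
    creates: /home/admin/.ssh/id_rsa   # skip the whole task if the key already exists
    responses:
      "file": ""
      "Overwrite": "n"
      "passphrase": ""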
I used the following to create keys for a specific user with the right access rights:
- name: Create ssh key
  shell: |
    ssh-keygen -t rsa -N "" -f /home/{{ ansible_user }}/.ssh/id_ed25519 -C {{ ansible_user }}@{{ inventory_hostname }}
    chown {{ ansible_user }}:{{ ansible_user }} /home/{{ ansible_user }}/.ssh/id_ed25519*
  args:
    creates: '/home/{{ ansible_user }}/.ssh/id_ed25519'
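An alternative I would consider (my suggestion, not part of the answer above) is the builtin user module, which can generate the key for a given user and sets the ownership and permissions itself:

- name: Create ssh key for the connection user
  become: true
  user:
    name: "{{ ansible_user }}"
    generate_ssh_key: yes              # does not overwrite an existing key
    ssh_key_type: ed25519
    ssh_key_file: .ssh/id_ed25519      # relative to the user's home directory
    ssh_key_comment: "{{ ansible_user }}@{{ inventory_hostname }}"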