I have the following playbook which I am using to create a new user on an Ubuntu 16.04 host:
---
- hosts: all
become: yes
become_method: sudo
tasks:
- name: Create user guybrush
user: name=guybrush comment="Guybrush Threepwood" shell=/bin/bash groups=pirates append=yes
- name: Copy SSH key for guybrush
authorized_key: user=guybrush key={{ lookup("file", "/home/guybrush/.ssh/id_rsa.pub") }}
However, when I log in as the new user and try to set my password using passwd, I get asked for my (current) UNIX password. Given that no password has been set for the user, I'm not sure why I am asked for it.
How can I fix my playbook so that newly created users can easily set their passwords after logging in the first time?
You can set the password to an empty string, so passwd will not ask for the old password.
You can also mark it as expired to force the user to change the password on first login.
- user: name=jsmith password=""
- authorized_key: user=jsmith key={{ lookup("file", "/somepath/id_rsa.pub") }}
- command: chage -d 0 jsmith
If there are security concerns with an empty password, you can set it to some random predefined one instead and use the SSH welcome message to display it on first login.
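For example, a minimal sketch of that variant (the literal default password changeme is only a placeholder; password_hash is the standard Ansible filter):
- user:
    name: jsmith
    # hash a known default password instead of leaving it empty
    password: "{{ 'changeme' | password_hash('sha512') }}"
    update_password: on_create
- command: chage -d 0 jsmith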
I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that connects as the root user, which is used to create a new user, set up SSH settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:
---
# ./hosts.yaml
---
all:
children:
master_roots:
hosts:
demo_master_root:
ansible_host: a.b.c.d # same ip as below
ansible_user: root
ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
masters:
hosts:
demo_master:
ansible_host: a.b.c.d # same ip as above
ansible_user: infraops
ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is there a cleaner way to do this?
Is this an anti-pattern in any way? It is not idempotent. It would be nice to have this run in a way that running the same playbook twice always has the same output - either "success", or "no change".
I am using DigitalOcean and they have a functionality to have this done via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the user and SSH settings and disabling root access:
---
# ./initial-host-setup.yaml
---
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
---
- hosts: master_roots
become: 'yes'
tasks:
- name: create the 'infraops' user
user:
state: present
name: infraops
password_lock: 'yes'
groups: sudo
append: 'yes'
createhome: 'yes'
shell: /bin/bash
- name: add authorized keys for the infraops user
authorized_key: 'user=infraops key="{{item}}"'
with_file:
'{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'
- name: allow infraops user to have passwordless sudo
lineinfile:
dest: /etc/sudoers
line: 'infraops ALL=(ALL) NOPASSWD: ALL'
validate: visudo -cf %s
- name: disable empty password login for all users
lineinfile:
dest: /etc/ssh/sshd_config
regexp: '^#?PermitEmptyPasswords'
line: PermitEmptyPasswords no
notify: restart sshd
- name: disable password login for all users
lineinfile:
dest: /etc/ssh/sshd_config
regexp: '^(#\s*)?PasswordAuthentication '
line: PasswordAuthentication no
notify: restart sshd
- name: Disable remote root user login
lineinfile:
dest: /etc/ssh/sshd_config
regexp: '^#?PermitRootLogin'
line: 'PermitRootLogin no'
notify: restart sshd
handlers:
- name: restart sshd
service:
name: sshd
state: restarted
Everything after this would use the masters inventory.
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses; I didn't research the others), which is cross-provider enough for me to just stick with a bash init script solution.
I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without using the cloud-init script you can't have this. Either the server starts with root or a similar user to perform these actions, or it starts without a user with those powers, in which case you can't perform these actions.
Further, I have seen Ansible playbooks and bash scripts that try to solve the desired "idempotence" (complete with no errors even if root is already disabled) by testing root ssh access, then falling back to another user, but "I can't ssh with root" is a poor test for "is the root user disabled" because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh.
EDIT 2: placing this here, since I can't use newlines in my response to a comment:
β.εηοιτ.βε responded to my assertion:
"but "I can't ssh with root" is a poor test for "is the root user disabled" because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh
with
then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
- if success, we know user/ssh setup tasks have not completed, so run those tasks
- if failure, attempt ssh with infraops
- if success, go ahead and run everything except the user creation again to ensure ssh config is as desired
- if failure... ? something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible script
You can overwrite host variables for a given play by using vars.
- hosts: masters
become: 'yes'
vars:
ansible_ssh_user: "root"
ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
tasks:
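A later play in the same playbook can then drop those overrides and fall back to the inventory values; a sketch (the ping task is just a stand-in for the real work):
- hosts: masters
  become: 'yes'
  tasks:
    - name: continue as the infraops user from the inventory
      ping: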
You could define only the demo_master entry and alter the ansible_user and ansible_ssh_private_key_file at run time, using the command-line flags --user and --private-key.
So with a hosts.yaml containing
all:
children:
masters:
hosts:
demo_master:
ansible_host: a.b.c.d # same ip as above
ansible_user: infraops
ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
And, with the play run against - hosts: masters, the first run would, for example, be:
ansible-playbook initial-host-setup.yaml \
--user root \
--private-key ~/.ssh/id_rsa_root
while the subsequent runs would simply be
ansible-playbook subsequent-host-setup.yaml
Since all the required values are in the inventory already.
I'm working on an Ansible playbook that creates users who log in via SSH with a public key, but I also need a default password; once users log in for the first time, they should be forced to change that default password. So far I have tested update_password: on_create but with no luck. Any suggestions?
- name: Add {{ user }} user
user:
name: "{{ user }}"
state: present
groups: "wheel"
shell: /bin/bash
update_password: on_create
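For what it's worth, update_password: on_create only takes effect when a password is actually supplied to the user module. A sketch of one way to do this, assuming a default_password variable and the password_hash filter (neither is in the original task):
- name: Add {{ user }} user with a default password
  user:
    name: "{{ user }}"
    state: present
    groups: "wheel"
    shell: /bin/bash
    # only set the password when the user is first created
    update_password: on_create
    password: "{{ default_password | password_hash('sha512') }}"
- name: Expire the default password so it must be changed at first login
  command: "chage -d 0 {{ user }}"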
Ansible deploys to multiple servers: dev, qa, uat, prod etc. SSH keys are set up for all environments.
I would like to restrict the deployment to prod only after entering a specific password (note: not an SSH password).
How do I enforce this only while running on the prod inventory?
To prevent accidental deploys you can use extra vars or environment variables (add it as your first task):
---
- hosts: localhost
gather_facts: no
tasks:
- assert:
that: "'prod' not in group_names or ('prod' in group_names and (allow_prod_deploy | default(false) or lookup('env', 'ALLOW_PROD_DEPLOY') | default(false)))"
msg: "Trying to deploy to production, but allow_prod_deploy is not set!"
Execute prod deploy as follows:
ansible-playbook -e allow_prod_deploy=1 myplaybook.yml
or
ALLOW_PROD_DEPLOY=1 ansible-playbook myplaybook.yml
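If you specifically want an interactive password prompt rather than a flag, a variation on the same idea (a hedged sketch; the expected phrase is a placeholder, and note that vars_prompt asks on every run, not only for prod):
- hosts: all
  gather_facts: no
  vars_prompt:
    - name: prod_confirmation
      prompt: "Enter the production deploy password"
      private: yes
  tasks:
    - assert:
        that: prod_confirmation == 'my-secret-phrase'
        msg: "Trying to deploy to production, but the confirmation password is wrong!"
      when: "'prod' in group_names"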
The easiest thing you can do is to actually use an SSH password, but at the same time set up a requirement on the production servers to provide both the password and the key for the Ansible user.
Add the following to the end of sshd_config and restart the SSH daemon:
Match User ansible
PasswordAuthentication yes
AuthenticationMethods "publickey,password" "publickey,keyboard-interactive"
(of course, replace ansible with the actual account name)
This way you don't subvert the security:
the ansible user must use both: a password (you'd need to add -k to the Ansible call for production servers) and key-based authentication
all other users must use key-based authentication (assuming you have PasswordAuthentication no in the general section of sshd_config)
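If you want Ansible itself to manage that sshd_config snippet, a sketch using blockinfile (the restart sshd handler is assumed to exist, as in the playbooks above):
- name: Require both key and password for the ansible user
  blockinfile:
    path: /etc/ssh/sshd_config
    marker: "# {mark} ANSIBLE MANAGED: ansible user authentication"
    block: |
      Match User ansible
          PasswordAuthentication yes
          AuthenticationMethods "publickey,password" "publickey,keyboard-interactive"
    validate: 'sshd -t -f %s'
  notify: restart sshd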
I have the following task in my ansible playbook that adds my ssh public key for a remote user pranjal that was already created by a previous task.
- authorized_key:
user: pranjal
key: "{{ lookup('file', 'pranjal.pub') }}"
When I run the ansible playbook, it runs successfully. However when I try logging in to the server using: ssh pranjal@<server_ip>
I get a Permission denied (publickey) error.
To be sure, I logged into the server as another user and double-checked that the key listed in /home/pranjal/.ssh/authorized_keys matches the local public key I am using to log in.
I am guessing the issue here is a permissions problem, and I understood the solution from a related question.
But how do we change permissions of authorized_key from within the Ansible task itself? (So that I don't have to separately log into the instance to modify permissions of .ssh/authorized_keys)
- file: path=/home/pranjal/.ssh state=directory owner=pranjal mode=0700
- file: path=/home/pranjal/.ssh/authorized_keys state=file owner=pranjal mode=0600
You may also want to check/verify /etc/ssh/sshd_config has the following:
PubkeyAuthentication yes
You can debug further with ssh -vvv pranjal@<server_ip>
TL;DR: Is it possible to chain two playbooks with one ansible-playbook command where one playbook is password auth and the other playbook is key auth? (see last section for real-world purpose).
Setup:
I have two playbooks, the second of which includes the first.
PlaybookA.yml
---
- name: PlaybookA # requires password authentication
hosts: sub.domain.ext
remote_user: root
roles:
- { role: role1, sudo: yes }
...
PlaybookB.yml
---
- name: Run PlaybookA
include: PlaybookA.yml
- name: PlaybookB # requires ssh-key authentication
hosts: sub.domain.ext
remote_user: ansible
roles:
- { role: role2, sudo: yes }
...
Requirements:
Execute only one command.
Use password auth for PlaybookA.
Use ssh-key auth for PlaybookB.
Question 1:
Is it possible within Ansible (versions 1.9.4 or lower) to execute one ansible-playbook command that will successfully run PlaybookB using ssh-key authentication but when PlaybookB includes PlaybookA, run PlaybookA using password authentication?
Question 2:
If this is not possible with Ansible 1.9.4 or lower, is this possible with 2.0.0+?
Notes of worth:
Ansible provides --ask-pass (or -k) as a command line switch enabling password authentication.
Ansible provides ask_pass as a variable but it seems as though it can only be set within ansible.cfg (I haven't been able to set this as a playbook variable to the desired effect).
Attempting to set ask_pass as an instruction within a playbook results in the following: ERROR: ask_pass is not a legal parameter of an Ansible Play. If this parameter was legal, it would provide a way to instruct ansible on a per-playbook level, what authentication method to use.
Purpose / Real World:
I'm attempting to create a configuration management workflow with Ansible that will be simple enough that others at work will be able to learn / adapt to it (and hopefully the use of Ansible in general for CM and orchestration).
For any new machine (VM or physical) that gets built, I intend for us to run two playbooks immediately. PlaybookA (as shown above) has the responsibility of logging in with the correct default user (typically depends upon the infrastructure [aws, vsphere, none, etc]). Once in, its very limited job is to:
Create the standardized user for ansible to run as (and install its ssh-key).
Remove any non-root users that may exist (artifacts of the vm infrastructure, etc).
Disable root access.
Disable password authentication (ssh-key only from this point on).
Depending upon the vm infrastructure (or lack thereof), the default user or the default authentication method can be different. Toward the goal of adoption of Ansible, I'm attempting to keep things extremely simple for fellow co-workers, so I'd like to automate as much of this flow-control as possible.
Once PlaybookA has locked down the vm and setup the standardized user, PlaybookB uses that standardized user to perform all other operations necessary to bring our vm's up to the necessary baseline of tools and utilities, etc.
Any tips, hints, suggestions would be greatly appreciated.
I have been facing the same problem today. Two ideas may help you here:
You can ask for the password using vars_prompt in your playbook instead of --ask-pass
Set the password using set_fact:
- name: "set password for the play"
set_fact: ansible_ssh_pass="{{ my_pass }}"
You could store the password in a file, or prompt for it, as in the example below. In my example, the sshd config that's being created will forbid password logins, but, using Ansible defaults, you will be surprised that the second playbook will still be executed (!), even though I "forgot" to create an authorized_key. That's because Ansible uses the ControlPersist option of SSH and simply keeps the connection open between single tasks. You can turn that off in ansible.cfg.
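For reference, a sketch of the ansible.cfg setting that disables the persistent connection, so the second play really has to re-authenticate:
[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPersist=no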
Example Playbook:
- name: "MAKE BARE: Run preparatory steps on a newly acquired server"
hosts: blankee
tasks:
- name: "set password for the play"
set_fact: ansible_ssh_pass="{{ my_pass }}"
- name: "Create directory {{ pathsts }}/registry/ansible-init"
file: name="{{ pathsts }}/registry/ansible-init" state=directory owner=root group=www-data mode=770
- name: "copy sshd config file"
copy:
src: 'roles/newhost/files/sshd_config'
dest: '/etc/ssh/sshd_config'
owner: 'root'
group: 'root'
mode: '0644'
- name: "Check syntax of sshd configuration"
shell: sshd -t
register: result
changed_when: false
failed_when: "result.rc != 0"
- name: "Restart SSHD and enable Service to start at boot"
service: name=sshd state=restarted
changed_when: false
vars:
my_pass2: foobar
vars_prompt:
- name: "my_pass"
prompt: "########## Enter PWD:\n "
- name: "Second run: This should authenticate w/out password:"
hosts: blankee
tasks:
- name: "Create directory {{ pathsts }}/registry/ansible-init"
file: name="{{ pathsts }}/registry/ansible-init22" state=directory owner=root group=www-data mode=770
I don't know a way to change the authentication method within the play. I think I'd prefer running two different playbooks as Jenkins jobs or similar, but I can think of a pure Ansible workaround: instead of including the second playbook, you could get Ansible to run a shell command as a local action, and run the command to execute the second playbook from the first one. Here's a rough proof of concept:
---
- hosts: all
vars_files:
- vars.yml
tasks:
- debug: msg="Run your first role here."
- name: Then call Ansible to run the second playbook.
local_action: shell ansible-playbook -i ~/workspace/hosts ~/workspace/second_playbook.yml
register: playbook_results
- debug: var=playbook_results.stdout_lines
Here's the output:
GATHERING FACTS ***************************************************************
ok: [vagrantbox]
TASK: [debug msg="Run your first role here."] *********************************
ok: [vagrantbox] => {
"msg": "Run your first role here."
}
TASK: [Then call Ansible to run the second playbook.] *************************
changed: [vagrantbox -> 127.0.0.1]
TASK: [debug var=playbook_results.stdout_lines] *******************************
ok: [vagrantbox] => {
"var": {
"playbook_results.stdout_lines": [
"",
"PLAY [Proof of concept] ******************************************************* ",
"",
"GATHERING FACTS *************************************************************** ",
"ok: [vagrantbox]",
"",
"TASK: [debug msg=\"This playbook was called from another playbook!\"] *********** ",
"ok: [vagrantbox] => {",
" \"msg\": \"This playbook was called from another playbook!\"",
"}",
"",
"PLAY RECAP ******************************************************************** ",
"vagrantbox : ok=2 changed=0 unreachable=0 failed=0 "
]
}
}
PLAY RECAP ********************************************************************
vagrantbox : ok=4 changed=1 unreachable=0 failed=0