Issue with user creation & SSH key pair creation using Ansible

In my scenario, the requirement on my dev server is to take each user's first name and create a user account. We have a devs group on our development server, and I now want to create an Ansible playbook that covers the following steps:
adduser with first name
usermod -a -G devs $1
mkdir /home/$1/.ssh
chmod 700 /home/$1/.ssh
touch /home/$1/.ssh/authorized_keys
chmod 600 /home/$1/.ssh/authorized_keys
mkdir /var/www/html/$1-dev.abc.com/
mkdir /var/log/httpd/$1-dev.abc.com/
chown $1.devs /var/www/html/$1-dev.abc.com/
chown $1.devs /var/log/httpd/$1-dev.abc.com/
I'm not able to get the /var/www/html (and /var/log/httpd) part working.
What is the best advice here? If you have time, any help at all would be appreciated. Thank you.
I have tried this:
- hosts: cluster
  tasks:
    # create users for us
    # note user skb added to devs group
    # on many system you may need to use wheel
    # user in devs or wheel group can sudo
    - user:
        name: skb
        comment: "santosh baruah"
        shell: /bin/bash
        groups: devs
        append: yes
        generate_ssh_key: yes
        ## run command 'mkpasswd --method=sha-512' to create your own encrypted password ##
        password: $6$gF1EHgeUSSwDT3$xgw22QBdZfNe3OUjJkwXZOlEsL645TpacwiYwTwlUyah03.Zh1aUTTfh7iC7Uu5WfmHBkv5fxdbJ2OkzMAPkm/
        ssh_key_type: ed25519
    # upload ssh key
    - authorized_key:
        user: skb
        state: present
        manage_dir: yes
        key: "{{ lookup('file', '/home/skb/.ssh/id_ed25519.pub') }}"
    # configure ssh server
    - template:
        src: ssh-setup.j2
        dest: /etc/ssh/sshd_config
        owner: root
        mode: '0600'
        validate: /usr/sbin/sshd -t -f %s
        backup: yes
    # restart sshd
    - service:
        name: sshd
        state: restarted
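For the /var/www/html and /var/log/httpd steps from the shell commands above, one possible addition is the file module, which creates each directory with the right owner and group in one go. This is only a hedged sketch (not part of the original playbook), assuming the devs group already exists and that the vhost and log paths are exactly as shown:

    # Hypothetical follow-on tasks covering the remaining shell steps.
    - name: Create the per-user vhost and log directories
      file:
        path: "{{ item }}"
        state: directory
        owner: skb
        group: devs
        mode: '0755'
      loop:
        - /var/www/html/skb-dev.abc.com
        - /var/log/httpd/skb-dev.abc.com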


Ansible ssh variable options coding issues

What I am trying to accomplish overall is to SSH into systems that are untouched by Ansible and have Ansible set them up, including its account, SSH keys, adding them to the dynamic inventory, and so on. In this case, the connection goes through a proxy jump. Unfortunately that means SSHing into them with the ssh command via the shell module, as well as storing a password. Keep in mind I am on Ansible 2.9, and this is a build environment, so passwords can be copied to files during the build and deleted at the end of the run; this isn't a problem. If this succeeds, we can set up accounts and SSH keys, then delete the build files, and everyone is happy.
I don't need all that much, I hope; I would just like to get one sticky piece of this working better: the SSH options that are needed for a ProxyJump connection. ansible-controller doesn't have direct access to host p0, but the ecc67 host does. I have it working in the shell command with no problem, but for whatever reason I can't move it up into the ansible_ssh_common_args variable where it belongs.
Here is the working example of the task as it functions now:
- name: sshpass attempt with the raw module for testing.
  shell: sshpass -p "{{ access_var.ansible_ssh_pass_ssn }}" ssh -o 'ProxyCommand=ssh -W %h:%p bob@ecc67 nc %h %p' bob@p0 "w; exit"
  register: output_1
The above works just fine and leaves ansible_ssh_common_args undefined. nc is the netcat binary and is simply passed options through the proxy command. Below is the playbook in which I tried to complete my stated mission; however, it is not functional and fails at the sshpass task:
- name: Play that is testing for a successful ProxyJump connection to p0 through ecc67.
  hosts: ansible-controller
  remote_user: bob
  become: no
  become_method: sudo
  gather_facts: no
  vars:
    ansible_connection: ssh
    ansible_ssh_common_args: '-o "ProxyCommand=ssh -W %h:%p bob@ecc67 nc %h %p"'
  tasks:
    - name: Import the password file so that we have the bob account's password.
      include_vars:
        file: ~/project/copyable-files/dynamic-files/build/active-vars-repository/access.yml
        name: access_var
    - name: Set password for the bob account from the file value using previous operator input.
      set_fact:
        ansible_ssh_pass: "{{ access_var.ansible_ssh_pass_b }}"
        ansible_become_password: "{{ access_var.ansible_ssh_pass_b }}"
        cacheable: yes
    - name: sshpass attempt with the raw module for testing.
      shell: sshpass -p "{{ ansible_ssh_pass_b }}" ssh "{{ ansible_ssh_common_args }}" bob@p0 "hostname; exit"
      register: output_1
    - debug:
        var: output_1
The error I get when I attempt to use the above playbook with the reworked task and variables is as follows:
TASK [sshpass attempt with the raw module for testing.] ***********************************************
fatal: [ansible-controller]: UNREACHABLE! => {"changed": false, "msg": "Invalid/incorrect password: Killed by signal 1.", "unreachable": true}
The password is not the issue, despite what the error says, though it's possible it's accessing something I don't expect. Is there any way to do what I want, or even just a better way to go about it that I didn't think of? Any suggestions would be helpful, thanks!
From your description I understand that the issue is with special characters in variables, quoting, templating and debugging. Therefore I am explicitly not addressing the question "Is there ... a better way to go?".
To address the different topics, I've created the following minimal example playbook:
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    ansible_ssh_pass: !unsafe "P4$$w0rd!_%&"
    ansible_ssh_common_args: !unsafe '-o "ProxyCommand=ssh -W %h:%p user@jump.example.com nc %h %p"'
  tasks:
    - name: Debug task to show command content
      lineinfile:
        path: ssh.file
        create: true
        line: 'sshpass -p {{ ansible_ssh_pass | quote }} ssh {{ ansible_ssh_common_args }} user@test.example.com "hostname; exit"'
resulting in the following output
sshpass -p 'P4$$w0rd!_%&' ssh -o "ProxyCommand=ssh -W %h:%p user@jump.example.com nc %h %p" user@test.example.com "hostname; exit"
... which is the content of ssh.file and what the shell would "see".
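The same rendered command line can also be shown with the debug module instead of writing it to a file; this is a minimal, hedged variant of the task above (same variables, hostname still a placeholder):

    - name: Show the command as Ansible renders it, without touching any file
      debug:
        msg: 'sshpass -p {{ ansible_ssh_pass | quote }} ssh {{ ansible_ssh_common_args }} user@test.example.com "hostname; exit"'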
Further Documentation

Advanced playbook syntax - Unsafe or raw strings, for usage of !unsafe:
"The most common use cases include passwords that allow special characters"

Using filters to manipulate data:
"You can use YAML single quote escaping ... Escaping single quotes within single quotes in YAML is done by doubling the single quote." (see the small example after this list)

Using filters to manipulate data - Manipulating strings, for usage of quote:
"To add quotes for shell usage ... | quote"

Templating (Jinja2):
"Ansible uses Jinja2 templating to enable dynamic expressions and access to variables and facts."
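As a small, hedged illustration of the single-quote escaping point (a self-contained snippet, not from the original answer):

- hosts: localhost
  gather_facts: false
  vars:
    # Doubling the single quote escapes it inside a single-quoted YAML scalar.
    example_password: 'pass''word'
  tasks:
    - debug:
        msg: "{{ example_password }}"   # prints pass'word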

Ansible Inventory Specifying the Same Host with Different Users and Keys for Initial SSH User Setup and Disabling Root Access

I am attempting to have playbooks that run once to set up a new user and disable root ssh access.
For now, I am doing that by declaring all of my inventory twice. Each host needs an entry that connects as the root user, which is used to create a new user, set up SSH settings, and then disable root access.
Then each host needs another entry with the new user that gets created.
My current inventory looks like this. It's only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:
---
# ./hosts.yaml
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
Is this an anti-pattern in any way? It is not idempotent. It would be nice if running the same playbook twice always had the same outcome: either "success" or "no change".
I am using DigitalOcean, and they have functionality to do this via a bash script before the VM comes up for the first time, but I would prefer a platform-independent solution.
Here is the playbook for setting up the users & SSH settings and disabling root access:
---
# ./initial-host-setup.yaml
#
# References
# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
# - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash
    - name: add authorized keys for the infraops user
      authorized_key: 'user=infraops key="{{ item }}"'
      with_file:
        - '{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'
    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s
    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd
    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd
    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
Everything after this would use the masters inventory.
EDIT
After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses; I didn't research the others), which is cross-provider enough for me to just stick with a bash init script solution.
I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.
Regardless of any Ansible limitations, it seems that without a cloud-init script you can't have this. Either the server starts with a root (or similar) user that can perform these actions, or it starts without a user with those powers, in which case you can't perform them at all.
Further, I have seen Ansible playbooks and bash scripts that try to achieve the desired "idempotence" (complete with no errors even if root is already disabled) by testing root SSH access and then falling back to another user. But "I can't ssh with root" is a poor test for "is the root user disabled", because there are plenty of ways your SSH access could fail even though the server is still configured to allow root to ssh.
EDIT 2 (placing this here, since I can't use newlines in my response to a comment):
β.εηοιτ.βε responded to my assertion
"but 'I can't ssh with root' is a poor test for 'is the root user disabled' because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh"
with
"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"
It sounds like the suggestion is:
- attempt ssh with root
- if success, we know user/ssh setup tasks have not completed, so run those tasks
- if failure, attempt ssh with infraops
- if success, go ahead and run everything except the user creation again to ensure ssh config is as desired
- if failure... ? something else is probably wrong, since I can't ssh with either user
I am not sure what this sort of if-then failure recovery actually looks like in an Ansible playbook.
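For illustration only, here is a hedged sketch (not from the thread) of what that probe-and-fallback could look like. It assumes Ansible 2.7+ for ignore_unreachable, and initial-setup-tasks.yaml is a hypothetical file holding the setup tasks from the playbook above:

---
# Hypothetical probe-and-fallback sketch, not a definitive implementation.
- hosts: masters
  gather_facts: false
  vars:
    ansible_user: root
  tasks:
    - name: Probe whether root can still log in over SSH
      ping:
      ignore_unreachable: true        # keep the play going even if root auth fails
      register: root_probe

    - name: Run the initial setup only while root access still works
      include_tasks: initial-setup-tasks.yaml   # hypothetical task file with the tasks above
      when: not (root_probe.unreachable | default(false))

- hosts: masters
  gather_facts: false
  become: true                        # relies on the passwordless sudo set up earlier
  vars:
    ansible_user: infraops
  tasks:
    - name: Verify the hardening as infraops, whichever branch ran above
      command: grep -E '^PermitRootLogin no' /etc/ssh/sshd_config
      changed_when: false             # read-only check; fails if the line is missing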
You can overwrite host variables for a given play by using vars.
- hosts: masters
  become: 'yes'
  vars:
    ansible_ssh_user: "root"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
  tasks:
Alternatively, you could define only the demo_master group and override ansible_user and ansible_ssh_private_key_file at run time, using the command-line flags --user and --private-key.
So, with a hosts.yaml containing
all:
  children:
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
and the play run against hosts: masters, the first run would, for example, be
ansible-playbook initial-host-setup.yaml \
  --user root \
  --private-key ~/.ssh/id_rsa_root
while subsequent runs would simply be
ansible-playbook subsequent-host-setup.yaml
since all the required values are already in the inventory.

Use ansible vault passwords for ask-become-pass and ssh password

I would like to use Ansible Vault passwords for the SSH and become passwords when running ansible-playbook, so that I don't need to type them in when using --ask-become-pass or the SSH password prompt.
Problem:
Every time I run my ansible-playbook command, I am prompted for an SSH password and a become password.
My original command, where I need to type the SSH and become passwords:
ansible-playbook playbook.yaml --ask-become-pass -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k --ask-vault-pass -T 40
The command I have tried, to make ansible-playbook use my vault passwords instead of typing them in:
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars @group_vars/all/main.yaml
I created the directory structure group_vars/all/main.yaml relative to where the command is run, where main.yaml holds my Ansible Vault values for "ansible_ssh_user", "ansible_ssh_pass", and "ansible_become_pass".
I even tried putting my password in the command:
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars ansible_ssh_pass=$'"MyP455word"'
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars ansible_ssh_pass='MyP455word'
Every time I run my playbook command, I keep getting prompted for an SSH password and a become password. What am I missing here?
I have already read these two posts, both of which were not clear to me on the exact process, so neither helped:
https://serverfault.com/questions/686347/ansible-command-line-retriving-ssh-password-from-vault
Ansible vault password in group_vars not detected
Any recommendations?
EDIT: Including my playbook, role, settings.yaml, and inventory file as well.
Here is my playbook:
- name: Enable NFS server
  hosts: nfs_server
  gather_facts: False
  become: yes
  roles:
    - { role: nfs_enable }
Here is the role located in roles/nfs_enable/tasks/main.yaml
- name: Include vars
  include_vars:
    file: ../../../settings.yaml
    name: settings

- name: Start NFS service on server
  systemd:
    state: restarted
    name: nfs-kernel-server.service
Here is my settings file
# nfs share directory
nfs_ssh_user: admin
nfs_share_dir: "/nfs-share/logs/"

ansible_become_pass: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  55543131373731393764333932626261383765326432613239356638616234643335643438326165
  3332363366623937386635653463656537353663326139360a316436356634386135653038643238
  61313123656332663232633833366133373630396434346165336337623364383261356234653461
  3335386135553835610a303666346561376161366330353935363937663233353064653938646263
  6539

ansible_ssh_pass: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  55543131373731393764333932626261383765326432613239356638616234643335643438326165
  3332363366623937386635653463656537353663326139360a316436356634386135653038643238
  61313123656332663232633833366133373630396434346165336337623364383261356234653461
  3335386135553835610a303666346561376161366330353935363937663233353064653938646263
  6539
Here is my inventory
[nfs_server]
10.10.10.10 ansible_ssh_user=admin ansible_ssh_private_key_file=~/.ssh/id_ed25519
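For what it's worth, one commonly used layout is sketched below. This is a hedged guess, not from the original thread, and the file path and values are placeholders: keep the vaulted connection variables in group_vars/all/main.yaml (which Ansible loads automatically for the inventory) and drop -k and --ask-become-pass, so the only secret asked for is the vault password itself, which can also come from a file.

# group_vars/all/main.yaml (hypothetical; values produced with 'ansible-vault encrypt_string')
ansible_user: admin
ansible_ssh_pass: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...                                 # encrypted value elided
ansible_become_pass: "{{ ansible_ssh_pass }}"

With that in place, the run would look roughly like:

ansible-playbook playbook.yaml -i inventory -e ansible_python_interpreter='/usr/bin/python3' --vault-password-file ~/.vault_pass.txt

Note that password-based SSH with the default connection plugin still needs sshpass installed on the control node; the vault password file path here is only an example.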

How to run an Ansible playbook against multiple servers the right way?

Ansible uses SSH to set up software on remote hosts.
If some fresh machines have just been installed, running an Ansible playbook from one host will not be able to connect to them, because there is no authorized_keys file on the remote hosts yet.
The Ansible host's public key can be copied to the target hosts like this:
$ ssh user@server "echo \"`cat .ssh/id_rsa.pub`\" >> .ssh/authorized_keys"
but first one has to SSH in and create the file on every remote host:
$ mkdir .ssh
$ touch .ssh/authorized_keys
Is this the common way to run an Ansible playbook against remote servers? Is there a better way?
I think it's better to do that using Ansible as well, with the authorized_key module. For example, to authorize your key for user root:
ansible <hosts> -m authorized_key -a "user=root state=present key=\"$(cat ~/.ssh/id_rsa.pub)\"" --ask-pass
This can be done in a playbook also, with the target user as a variable that defaults to root:
- hosts: <NEW_HOSTS>
  vars:
    username: root
  tasks:
    - name: Add authorized key
      authorized_key:
        user: "{{ username }}"
        state: present
        key: "{{ lookup('file', '/home/<YOUR_USER>/.ssh/id_rsa.pub') }}"
And executed with:
ansible-playbook auth.yml --ask-pass -e username=<TARGET_USER>
Your user should have the needed privileges; if not, use become.
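If the connecting account is not root, a hedged variant of the ad-hoc command above (all host and user names are placeholders) escalates with become while still authorizing the key for root:

# Connect as a non-root sudo user and authorize the key for root via privilege escalation.
ansible <hosts> -m authorized_key \
  -a "user=root state=present key=\"$(cat ~/.ssh/id_rsa.pub)\"" \
  -u <SUDO_USER> --ask-pass --become --ask-become-pass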

SSH authorization in Ansible between local and remote host

I have a Vagrant box with CentOS 7 in which I am creating LXC containers. Ansible runs inside the Vagrant box. I create a container with Ansible like this:
- name: Create containers
  lxc_container:
    name: localdev_nginx
    container_log: true
    template: centos
    container_config:
      - 'lxc.network.ipv4 = 192.168.42.110/24'
      - 'lxc.network.ipv4.gateway = 192.168.42.1'
    container_command: |
      yum -y install openssh-server
      echo "Som*th1ng" | passwd root --stdin
      ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
    state: started
This creates the container for me, but afterwards I can't access the container from Ansible. It only works if I add the container's SSH host key to the Vagrant box's known_hosts like this:
- name: Tell the host about our servers it might want to ssh to
  shell: ssh-keyscan -t rsa 192.168.42.110 >> /root/.ssh/known_hosts
And if I add the container's root password to the Ansible hosts file like this:
[dev-webservers]
loc-dev-www1.internavenue.com hostname=loc-dev-www1.internavenue.com ansible_ssh_host=192.168.42.110 ansible_connection=ssh ansible_user=root ansible_ssh_pass=Som*th1ng
I hope there is a better solution, because this one is really bad. How can I do it properly?
I copied the Vagrant box's public key into the container's authorized_keys and set this in the hosts file:
ansible_ssh_extra_args="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
This is only available in Ansible 2.0 and later.
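A hedged sketch of that key copy (not from the original answer; the key path is an assumption): the Vagrant box's public key is injected while the container is created, so neither the password in the inventory nor the ssh-keyscan step is needed afterwards.

- name: Create the container with the controller's public key pre-authorized
  lxc_container:
    name: localdev_nginx
    template: centos
    container_config:
      - 'lxc.network.ipv4 = 192.168.42.110/24'
      - 'lxc.network.ipv4.gateway = 192.168.42.1'
    container_command: |
      yum -y install openssh-server
      mkdir -p /root/.ssh && chmod 700 /root/.ssh
      # The lookup runs on the Ansible controller (the Vagrant box); the key path is an assumption.
      echo "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}" >> /root/.ssh/authorized_keys
      chmod 600 /root/.ssh/authorized_keys
    state: started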