FreeIPA: restrict access of sudo rules - LDAP

I have a user admin who needs to run some commands as other users with sudo privileges.
Example:
sudo -u dev_dummy chown dev_dummy /tmp/dump_file
For that I created:
HBAC rule (to grant access to the sudo service):
Description: Generated_rule_dummy_sudo
Enabled: TRUE
Users: admin
Hosts: enode2.26f5de01-5e40-4d8a-98bd-a4353b7bf5e3.com
Services: sudo
and a sudo rule (to allow the admin user to run all commands):
Rule name: [admin]_RunAs
Description: Allows user admin to run any command as dev_* users: sudo -u dev_* <command>
Enabled: TRUE
Command category: all
Users: admin
Hosts: enode2.26f5de01-5e40-4d8a-98bd-a4353b7bf5e3.com
Groups of RunAs Users: g_airflow_dev
Sudo Option: !authenticate
This configuration works well; I just want to restrict the admin user to running specific commands via sudo -u as the dev_* users, instead of allowing all commands.
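One way to tighten this is to drop the catch-all command category from the sudo rule and attach explicit allowed commands instead. A sketch using the ipa CLI (the rule name is taken from above; the chown path and description are assumptions for illustration):

```shell
# Unset the "all" command category on the rule; a rule cannot combine
# cmdcategory=all with explicit allowed commands
ipa sudorule-mod '[admin]_RunAs' --cmdcat=''

# Register the specific binary the admin needs (path is an assumption)
ipa sudocmd-add /usr/bin/chown --desc='chown for dev dump files'

# Allow only that command in the rule
ipa sudorule-add-allow-command '[admin]_RunAs' --sudocmds=/usr/bin/chown
```

If unsetting the category is rejected on your IPA version, deleting and re-creating the rule without Command category: all achieves the same result.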

Related

Issue with User Creation & SSH key pair creation using Ansible

In my scenario, the requirement on my dev server is to take each user's first name and create a user. We have a devs group on our developer server, and I now want to create an ansible-playbook with the following requirements:
adduser with first name
usermod -a -G devs $1
mkdir /home/$1/.ssh
chmod 700 /home/$1/.ssh
touch /home/$1/.ssh/authorized_keys
chmod 600 /home/$1/.ssh/authorized_keys
mkdir /var/www/html/$1-dev.abc.com/
mkdir /var/log/httpd/$1-dev.abc.com/
chown $1.devs /var/www/html/$1-dev.abc.com/
chown $1.devs /var/log/httpd/$1-dev.abc.com/
I'm not able to get the /var/www/html (and /var/log/httpd) directories created.
What is the best advice here? If you have time, any help at all would be appreciated. Thank you.
I have tried this:
- hosts: cluster
  tasks:
    # create users for us
    # note user skb added to devs group
    # on many systems you may need to use wheel
    # users in the devs or wheel group can sudo
    - user:
        name: skb
        comment: "santosh baruah"
        shell: /bin/bash
        groups: devs
        append: yes
        generate_ssh_key: yes
        ## run 'mkpasswd --method=sha-512' to create your own encrypted password ##
        password: $6$gF1EHgeUSSwDT3$xgw22QBdZfNe3OUjJkwXZOlEsL645TpacwiYwTwlUyah03.Zh1aUTTfh7iC7Uu5WfmHBkv5fxdbJ2OkzMAPkm/
        ssh_key_type: ed25519
    # upload ssh key
    - authorized_key:
        user: skb
        state: present
        manage_dir: yes
        key: "{{ lookup('file', '/home/skb/.ssh/id_ed25519.pub') }}"
    # configure ssh server
    - template:
        src: ssh-setup.j2
        dest: /etc/ssh/sshd_config
        owner: root
        mode: '0600'
        validate: /usr/sbin/sshd -t -f %s
        backup: yes
    # restart sshd
    - service:
        name: sshd
        state: restarted
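The playbook above covers the user and ssh setup but not the per-developer web and log directories from the shell script, which is the part the question says is missing. A sketch of tasks that could be appended to the same task list (paths mirror the shell script; the skb user is the example from above):

```yaml
# create the per-user vhost directories the shell script made with mkdir/chown
- file:
    path: "{{ item }}"
    state: directory
    owner: skb
    group: devs
  loop:
    - /var/www/html/skb-dev.abc.com
    - /var/log/httpd/skb-dev.abc.com
```

Running these with become would be needed if the controller user cannot write under /var/www and /var/log.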

Use ansible vault passwords for ask-become-pass and ssh password

I would like to use Ansible Vault values for the SSH and become passwords when running ansible-playbook. This way I don't need to type them in when prompted by --ask-become-pass or the SSH password prompt.
Problem:
Every time I run my ansible-playbook command I am prompted for a ssh and become password.
My original command where I need to type the SSH and become password:
ansible-playbook playbook.yaml --ask-become-pass -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k --ask-vault-pass -T 40
Command I have tried to make ansible-playbook use my vault passwords instead of my typing them in:
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars @group_vars/all/main.yaml
I tried creating the directory structure group_vars/all/main.yaml relative to where the command is run, where main.yaml has my Ansible Vault values for "ansible_ssh_user", "ansible_ssh_pass", and "ansible_become_pass".
I even tried putting my password in the command:
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars ansible_ssh_pass=$'"MyP455word"'
ansible-playbook playbook.yaml -e ansible_python_interpreter='/usr/bin/python3' -i inventory -k -T 40 --extra-vars ansible_ssh_pass='MyP455word'
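As a side note on those last two attempts: the `$'...'` (ANSI-C quoting) form keeps the embedded double quotes as part of the value, so Ansible would receive a different password string than with plain single quotes. A quick shell check:

```shell
# $'...' preserves the inner double quotes as part of the value
printf '%s\n' $'"MyP455word"'   # prints "MyP455word" (quotes included)
# plain single quotes pass the bare string
printf '%s\n' 'MyP455word'      # prints MyP455word
```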
Every time I run my playbook command, I keep getting prompted for a SSH pass and become pass. What am I missing here?
I have already read these two posts, but neither was clear to me on the exact process, so neither helped:
https://serverfault.com/questions/686347/ansible-command-line-retriving-ssh-password-from-vault
Ansible vault password in group_vars not detected
Any recommendations?
EDIT: Including my playbook, role, settings.yaml, and inventory file as well.
Here is my playbook:
- name: Enable NFS server
  hosts: nfs_server
  gather_facts: False
  become: yes
  roles:
    - { role: nfs_enable }
Here is the role located in roles/nfs_enable/tasks/main.yaml
- name: Include vars
  include_vars:
    file: ../../../settings.yaml
    name: settings

- name: Start NFS service on server
  systemd:
    state: restarted
    name: nfs-kernel-server.service
Here is my settings file
#nfs share directory
nfs_ssh_user: admin
nfs_share_dir: "/nfs-share/logs/"
ansible_become_pass: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          55543131373731393764333932626261383765326432613239356638616234643335643438326165
          3332363366623937386635653463656537353663326139360a316436356634386135653038643238
          61313123656332663232633833366133373630396434346165336337623364383261356234653461
          3335386135553835610a303666346561376161366330353935363937663233353064653938646263
          6539
ansible_ssh_pass: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          55543131373731393764333932626261383765326432613239356638616234643335643438326165
          3332363366623937386635653463656537353663326139360a316436356634386135653038643238
          61313123656332663232633833366133373630396434346165336337623364383261356234653461
          3335386135553835610a303666346561376161366330353935363937663233353064653938646263
          6539
Here is my inventory
[nfs_server]
10.10.10.10 ansible_ssh_user=admin ansible_ssh_private_key_file=~/.ssh/id_ed25519
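Worth noting about the commands above: the -k (--ask-pass) flag itself forces the SSH password prompt, and --ask-vault-pass forces the vault prompt, regardless of any vaulted variables. A sketch of an invocation that lets the vaulted ansible_ssh_pass/ansible_become_pass variables do the work instead (the vault password file path is an assumption):

```shell
# store the vault password once, readable only by you
echo 'my-vault-password' > ~/.vault_pass && chmod 600 ~/.vault_pass

# drop -k, --ask-become-pass and --ask-vault-pass; read the vault
# password from the file so the vaulted vars supply the credentials
ansible-playbook playbook.yaml -i inventory -T 40 \
  -e ansible_python_interpreter=/usr/bin/python3 \
  --vault-password-file ~/.vault_pass
```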

Ansible unable to create folder on localhost with different user

I'm executing an Ansible playbook as appuser, whereas I wish to create a folder as user webuser on localhost.
SSH keys are set up for webuser on my localhost, so after logging in as appuser I can simply ssh webuser@localhost to switch to webuser.
Note: I do not have sudo privileges, so I cannot sudo to switch from appuser to webuser.
Below is my playbook, which runs as appuser but needs to create a folder 04May2020 on localhost as webuser:
- name: "Play 1"
  hosts: localhost
  remote_user: "webuser"
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
  tasks:
    - name: create folder for today's print
      file:
        path: "/webWeb/htdocs/print/04May2020"
        state: directory
      remote_user: webuser
However, the output shows that the folder is created as appuser instead of webuser. See the output below, showing the connection established as appuser instead of webuser:
ansible-playbook /app/Ansible/playbook/print_oracle/print.yml -i /app/Ansible/playbook/print_oracle/allhosts.hosts -vvv
TASK [create folder for today] ***********************************
task path: /app/Ansible/playbook/print_oracle/print.yml:33
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
Can you please suggest if it is possible without sudo?
Putting all my comments together in a comprehensive answer.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
This indicates that you are connecting to localhost through the local connection plugin, either because you explicitly re-declared the host as such or because you are using the implicit localhost. From the discussion, you are in the second situation.
When using the local connection plugin, as noted in the documentation, remote_user is ignored. Trying to change the user has no effect, as you can see in the test run below (user and group ids changed):
# Check we are locally running as user1
$ id -a
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Running the same command through ansible returns the same result
$ ansible localhost -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Trying to change the remote user has no effect
$ ansible localhost -u whatever -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
Without changing your playbook and/or inventory, the only solution is to launch the playbook as the user who needs to create the directory.
Since you have ssh available, another solution is to declare a new host that you will use only for this purpose, targeting the local IP over ssh. (Note: you can explicitly declare localhost this way, but then all connections to it will go through ssh, which might not be what you want.)
Somewhere at the top of your inventory, add the line:
localssh ansible_host=127.0.0.1
And in your playbook, change the play target to:
hosts: localssh
Now the connection to your local machine will go through ssh and the remote_user will be obeyed correctly.
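With the localssh alias in place, a quick ad-hoc check (a sketch, reusing the key path from the question) can confirm which user the connection runs as:

```shell
# should report webuser, because the connection now goes over ssh
ansible localssh -i inventory -u webuser \
  --private-key /app/misc_automation/ssh_keys_id_rsa -a 'id -un'
```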
Another way you can try is changing the connection plugin that the implicit localhost uses. To do this, in the directory from which you run ansible commands, create a host_vars directory. In that sub-directory, create a file named localhost containing the line ansible_connection: smart

How to run Ansible playbook to multiple servers in a right way?

Ansible uses ssh to set up software on remote hosts.
If some machines have just been freshly installed, running an Ansible playbook from one host will fail to connect to them because there are no authorized_keys on the remote hosts.
You could copy the Ansible host's public key to those target hosts like:
$ ssh user@server "echo \"`cat .ssh/id_rsa.pub`\" >> .ssh/authorized_keys"
but first you would have to log in over ssh and create the file on every remote host:
$ mkdir .ssh
$ touch .ssh/authorized_keys
Is this the common way to run an Ansible playbook against remote servers? Does a better way exist?
I think it's better to do that using Ansible as well, with the authorized_key module. For example, to authorize your key for user root:
ansible <hosts> -m authorized_key -a "user=root state=present key=\"$(cat ~/.ssh/id_rsa.pub)\"" --ask-pass
This can also be done in a playbook, with the target user as a variable that defaults to root:
- hosts: <NEW_HOSTS>
  vars:
    username: root
  tasks:
    - name: Add authorized key
      authorized_key:
        user: "{{ username }}"
        state: present
        key: "{{ lookup('file', '/home/<YOUR_USER>/.ssh/id_rsa.pub') }}"
And executed with:
ansible-playbook auth.yml --ask-pass -e username=<TARGET_USER>
Your user should have the needed privileges; if not, use become.

cloud-init .yaml script on Digital Ocean droplets

I am stuck creating users with cloud-config.
Did anyone succeed, and can you share a .yaml gist to create a new user with ssh-authorized-keys and the ability to sudo?
This article on DigitalOcean runs through doing some of the most common initial tasks using cloud-config files.
Setting up a new sudo user would look like this:
#cloud-config
users:
  - name: username
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCv60WjxoM39LgPDbiW7ne3gu18q0NIVv0RE6rDLNal1quXZ3nqAlANpl5qmhDQ+GS/sOtygSG4/9aiOA4vXO54k1mHWL2irjuB9XbXr00+44vSd2q/vtXdGXhdSMTf4/XK17fjKSG/9y3yD6nml6 user@example.com
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
Of course, replace the ssh key with your public key.
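If you want to sanity-check the file before attaching it to a droplet, recent cloud-init releases ship a schema validator (the subcommand name has varied across versions, e.g. older releases used cloud-init devel schema):

```shell
# validate the user-data file locally before booting a droplet with it
cloud-init schema --config-file user-data.yaml
```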