Run ansible as root with specific sudoers - ssh

My issue is that I have one server where the sudoers entry for the ansible user looks like this:
ansible ALL=(root) NOPASSWD: /usr/bin/su - root
Hence, the only way to switch to the root user is:
sudo su - root
When I try to run the below ansible playbook:
---
- name: Configure Local Repo server address
  hosts: lab
  remote_user: ansible
  become: yes
  become_user: root
  become_method: runas
  tasks:
    - name: test whoami
      become: yes
      shell:
        cmd: whoami
      register: whoami_output
    - debug: var=whoami_output
    - name: Deploy local.repo file to the hosts
      become: yes
      copy:
        src: /etc/ansible/files/local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: ansible
        group: ansible
        mode: 0644
        backup: yes
      register: deploy_file_output
    - debug: var=deploy_file_output
I got the following error:
ansible-playbook --private-key /etc/ansible/keys/ansible_key /etc/ansible/playbooks/local_repo_provisioning.yml
PLAY [Configure Local Repo server address] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12]
TASK [test whoami] *****************************************************************************************************************************************************************************************************************************
changed: [10.175.65.12]
TASK [debug] ***********************************************************************************************************************************************************************************************************************************
ok: [10.175.65.12] => {
    "whoami_output": {
        "changed": true,
        "cmd": "whoami",
        "delta": "0:00:00.003301",
        "end": "2023-01-15 17:53:56.312715",
        "failed": false,
        "msg": "",
        "rc": 0,
        "start": "2023-01-15 17:53:56.309414",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "ansible",
        "stdout_lines": [
            "ansible"
        ]
    }
}
TASK [Deploy local.repo file to the hosts] *****************************************************************************************************************************************************************************************************
fatal: [10.175.65.12]: FAILED! => {"changed": false, "checksum": "2356deb90d20d5f31351c719614d5b5760ab967d", "msg": "Destination /etc/yum.repos.d not writable"}
PLAY RECAP *************************************************************************************************************************************************************************************************************************************
10.175.65.12 : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
When I tried to use become_method: sudo I got the "Missing sudo password" message. Further, when I tried become_method: su I got the "Timeout (12s) waiting for privilege escalation prompt:" message.
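As far as I can tell, the difference is in the command each become_method builds on the target (the sudo form is the one Ansible itself prints in verbose mode; the rest is my rough understanding, so treat this as a sketch):
become_method: sudo   ->  sudo -H -S -n -u root /bin/sh -c '<module command>'
become_method: su     ->  su root -c '<module command>'  (plus any become_flags, e.g. "-")
become_method: runas  ->  the Windows become method; judging by the whoami output above it had no effect here and the task simply ran as the login user
Since the sudoers rule only whitelists /usr/bin/su - root, the sudo form is refused without a password (hence "Missing sudo password"), while plain su asks for root's own password (hence the timeout waiting for the escalation prompt).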
All in all, could someone explain how Ansible runs the commands depending on the "become_method" that is set? Is there a way to switch to the root user with that kind of sudoers configuration?
Thanks in advance!

Related

how to get inside Vault (ssh) with Ansible playbook?

I'm using Vagrant and I want to start the Vault server from inside the Vagrant box via an Ansible playbook.
To do so without a playbook, I need to execute "vagrant ssh"; then I'm in the Vagrant box and can start the Vault server using "vault server -dev".
I want to execute "vault server -dev" directly from the playbook. Any ideas how?
This is my playbook:
---
- name: Playbook to install and use Vault
  become: true
  hosts: all
  tasks:
    - name: Uptade1
      become: true
      become_user: root
      shell: apt update
    - name: gpg
      become: true
      become_user: root
      shell: apt install gpg
    - name: verify key
      become: true
      become_user: root
      shell: wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null
    - name: fingerprint
      become: true
      become_user: root
      shell: gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint
    - name: repository
      become: true
      become_user: root
      shell: echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
    - name: update2
      become: true
      become_user: root
      shell: apt update
    - name: vault install
      become: true
      become_user: root
      shell: apt install vault
    - name: start vault
      become: true
      become_user: vagrant
      shell: vault server -dev -dev-listen-address=0.0.0.0:8200
The last task is my attempt to start the Vault server, but it gets stuck at:
TASK [start vault] *********************************************************
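One idea I have not verified yet is to background the server so the task can return instead of blocking, something along these lines (untested sketch; the log path is just an example):
- name: start vault in the background
  become: true
  become_user: vagrant
  shell: nohup vault server -dev -dev-listen-address=0.0.0.0:8200 > /tmp/vault-dev.log 2>&1 &
but I am not sure this is the proper way to do it.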
I also tried adding
- name: start vault
  become: true
  shell: vagrant ssh
before it, but then I get:
TASK [start vault] *************************************************************
fatal: [default]: FAILED! => {"changed": true, "cmd": "vagrant ssh", "delta": "0:00:00.003245", "end": "2022-07-03 16:18:31.480702", "msg": "non-zero return code", "rc": 127, "start": "2022-07-03 16:18:31.477457", "stderr": "/bin/sh: 1: vagrant: not found", "stderr_lines": ["/bin/sh: 1: vagrant: not found"], "stdout": "", "stdout_lines": []}
This is my Vagrantfile, if needed:
Vagrant.configure("2") do |config|
  VAGRANT_DEFAULT_PROVIDER = "virtualbox"
  config.vm.hostname = "carebox-idan"
  config.vm.provision "ansible", playbook: "playbook.yml"
  config.vm.box = "laravel/homestead"
  config.vm.network "forwarded_port", guest: 8200, host: 8200, auto_correct: "true"
  config.ssh.forward_agent = true
end
Thank you.

Ansible: How do you properly skip ssh first connection to fresh host?

Context: I'm trying to automate the provisioning of a fresh new server, but when a new machine is spawned and my Ansible playbook is run against it from my provisioning server, the usual message pops up:
The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:QF/AyFhYXaz5bjZ1O+kvceoOjBzmI8M1PYmg3lukYmE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I am aware this question has been answered a couple of times already, but I do not want to add this line to my .cfg file or pass the corresponding argument when I launch an ansible-playbook command.
Problem: So this answer came to my attention: https://stackoverflow.com/a/54735937/18647199
I copy-pasted the two tasks into my playbook, and when they are by themselves the script runs properly, skipping the aforementioned prompt (even though it skips it for one server that I still have to make the first connection to). See:
TASK [Check known_hosts for 192.168.1.14] **************************************
ok: [192.168.1.16 -> localhost]
ok: [192.168.1.14 -> localhost]
ok: [192.168.1.25 -> localhost]
TASK [Ignore host key for 192.168.1.14 on first run] ***************************
skipping: [192.168.1.14]
skipping: [192.168.1.16]
skipping: [192.168.1.25]
PLAY RECAP *********************************************************************
192.168.1.14 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.1.16 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
192.168.1.25 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
But if I add just one more task to it, the authentication prompt that I'm trying to skip appears again.
P.S. I'm using OpenSSH, latest current version.
What I'm trying to run:
---
#all
- hosts: all
  #connection: local
  become: true
  gather_facts: false #otherwise ssh prompt appears
  tasks:
    - name: Check known_hosts
      local_action: shell ssh-keygen -F "{{ inventory_hostname }}"
      register: is_known
      failed_when: false
      changed_when: false
      ignore_errors: yes
    - name: debug message
      debug:
        msg: the "{{ inventory_hostname }}"" was tested with output "{{ is_known }}"
    - name: Ignore host key for "{{ inventory_hostname }}" on first run
      when: is_known.rc == 1
      set_fact:
        ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
    - name: Bootstrap check
      stat:
        path: /home/bot/bootstrapped-ok
      register: bootstrap_result
[..] more code
Debug output:
ansible-playbook debug-bootstrap.yml
PLAY [all] *********************************************************************
TASK [Check known_hosts] *******************************************************
ok: [192.168.1.16 -> localhost]
ok: [192.168.1.14 -> localhost]
ok: [192.168.1.25 -> localhost]
TASK [debug message] ***********************************************************
ok: [192.168.1.14] => {
"msg": "the \"192.168.1.14\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.14\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.940041', 'end': '2022-04-02 12:30:50.943287', 'delta': '0:00:00.003246', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
ok: [192.168.1.16] => {
"msg": "the \"192.168.1.16\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.16\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.937097', 'end': '2022-04-02 12:30:50.941015', 'delta': '0:00:00.003918', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
ok: [192.168.1.25] => {
"msg": "the \"192.168.1.25\"\" was tested with output \"{'msg': 'non-zero return code', 'cmd': 'ssh-keygen -F \"192.168.1.25\"', 'stdout': '', 'stderr': 'do_known_hosts: hostkeys_foreach failed: No such file or directory', 'rc': 255, 'start': '2022-04-02 12:30:50.978944', 'end': '2022-04-02 12:30:50.982119', 'delta': '0:00:00.003175', 'changed': False, 'failed': False, 'stdout_lines': [], 'stderr_lines': ['do_known_hosts: hostkeys_foreach failed: No such file or directory'], 'failed_when_result': False}\""
}
TASK [Ignore host key for "192.168.1.14" on first run] *************************
skipping: [192.168.1.14]
skipping: [192.168.1.16]
skipping: [192.168.1.25]
TASK [Bootstrap check] *********************************************************
The authenticity of host '192.168.1.25 (192.168.1.25)' can't be established.
ECDSA key fingerprint is SHA256:QF/AyFhYXaz5bjZ1O+kvceoOjBzmI8M1PYmg3lukYmE.
Are you sure you want to continue connecting (yes/no/[fingerprint])? ok: [192.168.1.16]
ok: [192.168.1.14]
So it seems like the shell command ssh-keygen -F "{{ inventory_hostname }}" isn't doing what it's supposed to do, i.e. what it would do if we launched it from a terminal.
Question: Does anyone know how to implement that "one-time skip", or have a better way to do this for a fully automated provisioning/deploy?
(I tried to create a single .yml file with scarce results; I hit a wall and don't have many ideas left on how to continue with a fully automated provisioning.)
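Update: looking at the debug output above, ssh-keygen -F seems to exit with rc 255 (not 1) when there is no ~/.ssh/known_hosts file at all, so the when: is_known.rc == 1 condition never matches and the set_fact task is skipped. A possible (untested) tweak would be to treat any non-zero exit code as "host not known":
- name: Ignore host key for "{{ inventory_hostname }}" on first run
  when: is_known.rc != 0
  set_fact:
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no'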
I just added my answer to How to ignore ansible SSH authenticity checking?, which lists lots of options.
This is what we are using for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key when connecting to the host for the first time) in the inventory file:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'
And this is what we have for temporary hosts (in the end this will ignore the host key entirely):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
There is also an environment variable, or you can add it to a group/host variables file. It doesn't need to be in the inventory; that was just convenient in our case.
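For example, the same setting as a group variable might look like this (sketch, assuming a standard group_vars layout), and the check can also be switched off entirely via the environment on the control node:
# group_vars/all.yml -- same effect as the [all:vars] inventory entry above
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'
# or, to disable host key checking altogether:
#   export ANSIBLE_HOST_KEY_CHECKING=False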
Maybe this could help?

How to detect unreachable target hosts in ansible

I wish to capture, in a variable sshreachable, whether the target hosts in all_hosts are reachable or not.
I wrote the playbook below for this.
- name: Play 3- check telnet nodes
  hosts: localhost
  ignore_unreachable: yes
  tasks:
    - name: Check all port numbers are accessible from current host
      include_tasks: innertelnet.yml
      with_items: "{{ groups['all_hosts'] }}"
cat innertelnet.yml
---
- name: Check ssh connectivity
  block:
    - raw: "ssh -o BatchMode=yes root@{{ item }} echo success"
      ignore_errors: yes
      register: sshcheck
    - debug:
        msg: "SSHCHECK variable:{{ sshcheck }}"
    - set_fact:
        sshreachable: 'SSH SUCCESS'
      when: sshcheck.unreachable == 'false'
    - set_fact:
        sshreachable: 'SSH FAILED'
      when: sshcheck.unreachable == 'true'
    - debug:
        msg: "INNERSSH1: {{ sshreachable }}"
Unfortunately, I get an error like the one below:
Output:
TASK [raw] *********************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 10.9.9.126 closed.", "skip_reason": "Host localhost is unreachable", "unreachable": true}
TASK [debug] ***********************************************************************************************************************************************************
task path:
ok: [localhost] => {
"msg": "SSHCHECK variable:{'msg': u'Failed to connect to the host via ssh: Shared connection to 10.9.9.126 closed.', 'unreachable': True, 'changed': False}"
}
TASK [set_fact] ****************************************************************
skipping: [localhost]
TASK [set_fact] ****************************************************************
skipping: [localhost]
TASK [debug] *******************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'sshreachable' is undefined\n\nThe error appears to be in '/app/playbook/checkssh/innertelnet.yml': line 45, column 10, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
PLAY RECAP *********************************************************************
10.0.116.194 : ok=101 changed=1 unreachable=9 failed=0 skipped=12 rescued=0 ignored=95
localhost : ok=5 changed=0 unreachable=1 failed=1 skipped=4 rescued=0 ignored=0
Can you please suggest changes to my code to get this to work?
The error seems to indicate that the sshreachable variable is not getting set because the when: condition does not match, i.e. sshcheck.unreachable might not be something returned by raw.
For this purpose, the command module should be enough, and we can evaluate the return code of the command in set_fact.
You could do something like:
- block:
    - command: ssh -o BatchMode=yes user@host1 echo success
      ignore_errors: yes
      register: sshcheck
    - set_fact:
        sshreachable: "{{ sshcheck is success }}"
    - debug:
        msg: "Host1 reachable: {{ sshreachable | string }}"
Update:
The raw module seems to work the same way. Example (including @mdaniel's valuable input):
- block:
    - raw: ssh -o BatchMode=yes user@host1 echo success
      ignore_errors: yes
      register: sshcheck
    - set_fact:
        sshreachable: SSH SUCCESS
      when: sshcheck is success
    - set_fact:
        sshreachable: SSH FAILED
      when: sshcheck is failed
    - debug:
        msg: "Host1 reachable: {{ sshreachable }}"
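A different, connection-free way to probe reachability from the control node could be a plain TCP check of port 22 with wait_for (a sketch, assuming the play runs on localhost as in the question):
- name: Probe SSH port on each target from localhost
  wait_for:
    host: "{{ item }}"
    port: 22
    timeout: 5
  ignore_errors: yes
  register: ssh_port_check
  with_items: "{{ groups['all_hosts'] }}"
This only tells you the port is open, not that the SSH login itself works, so the ssh/raw approach above remains the more complete test.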

Ansible not reporting distribution info on Ubuntu 20.04?

Example on Ubuntu 18.04 reporting distribution info in 'ansible_facts':
$ ansible -i hosts ubuntu1804 -u root -m setup -a "filter=ansible_distribution*"
ubuntu1804 | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Ubuntu",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "18",
        "ansible_distribution_release": "bionic",
        "ansible_distribution_version": "18.04"
    },
    "changed": false
}
Example of same command against Ubuntu 20.04:
$ ansible -i hosts ubuntu2004 -u root -m setup -a "filter=ansible_distribution*"
ubuntu2004 | SUCCESS => {
    "ansible_facts": {},
    "changed": false
}
Is this an issue with Ubuntu or Ansible? Is there a workaround?
Issue resolved with today's update to ansible 2.9.7.
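If the control node uses a pip-based install, upgrading past that version could look roughly like the task below; this is only a sketch, so adjust it to however Ansible was installed on your machine:
- hosts: localhost
  connection: local
  tasks:
    - name: Upgrade the control node's Ansible (pip-based installs only)
      pip:
        name: ansible>=2.9.7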
After some research into how to find the Ubuntu 20.04 release version, here is what worked for us using Ansible 2.5.1:
- hosts: localhost
  become: true
  gather_facts: yes
  tasks:
    - name: System details
      debug:
        msg: "{{ ansible_facts['lsb']['release'] }}"
    - name: ubuntu 18
      shell: echo "hello 18"
      register: ub18
      when: ansible_facts['lsb']['release'] == "18.04"
    - debug:
        msg: "{{ ub18 }}"
    - name: ubuntu 20
      shell: echo "hello 20"
      register: ub20
      when: ansible_facts['lsb']['release'] == "20.04"
    - debug:
        msg: "{{ ub20 }}"
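Note that the lsb facts used above are, as far as I know, only populated when the lsb_release utility is installed on the target, so on a minimal image you may need something like this first (sketch for Debian/Ubuntu targets):
- name: Ensure lsb_release is present so ansible_facts['lsb'] gets populated
  become: true
  apt:
    name: lsb-release
    state: present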

Ansible: setting user on dynamic ec2

I don't appear to be connecting to the remote host. Why not?
Command-line: ansible-playbook -i "127.0.0.1," -c local playbook.yml
This is the playbook. The role create_ec2_instance creates the ec2hosts group used within the second portion of the playbook (ansible/playbook.yml):
# Create instance
- hosts: 127.0.0.1
  connection: local
  gather_facts: false
  roles:
    - create_ec2_instance

# Configure and install all we need
- hosts: ec2hosts
  remote_user: admin
  gather_facts: false
  roles:
    - show-hosts
    - prepare-target-system
    - install-project-dependencies
    - install-project
This is just a simple ec2 module creation. This works as desired. (ansible/roles/create-ec2-instance/tasks/main.yml):
- name: Create instance
  ec2:
    region: "{{ instance_values['region'] }}"
    zone: "{{ instance_values['zone'] }}"
    keypair: "{{ instance_values['key_pair'] }}"
    group: "{{ instance_values['security_groups'] }}"
    instance_type: "{{ instance_values['instance_type'] }}"
    image: "{{ instance_values['image_id'] }}"
    count_tag: "{{ instance_values['name'] }}"
    exact_count: 1
    wait: yes
    instance_tags:
      Name: "{{ instance_values['name'] }}"
  when: ec2_instances.instances[instance_values['name']]|default("") == ""
  register: ec2_info

- name: Wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ ec2_info.instances[0].public_dns_name }}"
    port: 22
  when: ec2_info|changed

- name: Add new instance to ec2hosts group
  add_host:
    hostname: "{{ ec2_info.instances[0].public_ip }}"
    groupname: ec2hosts
    instance_id: "{{ ec2_info.instances[0].id }}"
  when: ec2_info|changed
I've included the extra roles for transparency, though these are really basic (ansible/roles/show-hosts/tasks/main.yml):
- name: List hosts
  debug: msg="groups={{groups}}"
  run_once: true
and we have (ansible/roles/prepare-target-system/tasks/main.yml):
- name: get the username running the deploy
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host

- name: Add necessary system packages
  become: yes
  become_method: sudo
  package: "name={{item}} state=latest"
  with_items:
    - software-properties-common
    - python-software-properties
    - devscripts
    - build-essential
    - libffi-dev
    - libssl-dev
    - vim
Edit: I've updated to remote_user above, and below is the error output:
TASK [prepare-target-system : debug] *******************************************
task path: <REDACTED>/ansible/roles/prepare-target-system/tasks/main.yml:5
ok: [35.166.52.247] => {
    "username_on_the_host": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.009067",
        "end": "2017-01-07 08:23:42.033551",
        "rc": 0,
        "start": "2017-01-07 08:23:42.024484",
        "stderr": "",
        "stdout": "brianbruggeman",
        "stdout_lines": [
            "brianbruggeman"
        ],
        "warnings": []
    }
}
TASK [prepare-target-system : Ensure that we can update apt-repository] ********
task path: /<REDACTED>/ansible/roles/prepare-target-system/tasks/Debian.yml:2
Using module file <REDACTED>/.envs/dg2/lib/python2.7/site-packages/ansible/modules/core/packaging/os/apt.py
<35.166.52.247> ESTABLISH LOCAL CONNECTION FOR USER: brianbruggeman
<35.166.52.247> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" && echo ansible-tmp-1483799022.33-268449475843769="` echo $HOME/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769 `" ) && sleep 0'
<35.166.52.247> PUT /var/folders/r9/kv1j05355r34570x2f5wpxpr0000gn/T/tmpK2__II TO <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py
<35.166.52.247> EXEC /bin/sh -c 'chmod u+x <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/ <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py && sleep 0'
<35.166.52.247> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-owktjrfvqssjrqcetaxjkwowkzsqfitq; /usr/bin/python <REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/apt.py; rm -rf "<REDACTED>/.ansible/tmp/ansible-tmp-1483799022.33-268449475843769/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [35.166.52.247] (item=[u'software-properties-common', u'python-software-properties', u'devscripts', u'build-essential', u'libffi-dev', u'libssl-dev', u'vim']) => {
    "failed": true,
    "invocation": {
        "module_name": "apt"
    },
    "item": [
        "software-properties-common",
        "python-software-properties",
        "devscripts",
        "build-essential",
        "libffi-dev",
        "libssl-dev",
        "vim"
    ],
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE"
}
to retry, use: --limit #<REDACTED>/ansible/<redacted playbook>.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=6 changed=2 unreachable=0 failed=0
35.166.52.247 : ok=3 changed=1 unreachable=0 failed=1
Use become:
remote_user: ansible
become: true
become_user: root
Ansible docs: Become (Privilege Escalation)
For example: in my scripts I connect to the remote host as user 'ansible' (because ssh is disabled for root) and then become 'root'. Rarely, I connect as 'ansible' and then become the 'apache' user. So, remote_user specifies the username to connect with, and become_user is the username after the connection is made.
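A minimal sketch of that second case might look like this (the host group and file path are just placeholders):
- hosts: webservers
  remote_user: ansible
  tasks:
    - name: run a task as the apache user
      become: true
      become_user: apache
      file:
        path: /var/www/html/healthcheck.txt
        state: touch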
PS Passwordless sudo for user ansible:
- name: nopasswd sudo for ansible user
  lineinfile: "dest=/etc/sudoers state=present regexp='^{{ ansible_user }}' line='{{ ansible_user }} ALL=(ALL) NOPASSWD: ALL'"
This is a known workaround; see here: Specify sudo password for Ansible
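As an alternative to editing sudoers, the become password can also be supplied at run time with --ask-become-pass (-K), or via the ansible_become_pass variable. A sketch (the group name and vaulted_sudo_password are placeholders, and the real value should be stored with ansible-vault rather than in plain text):
# group_vars/ec2hosts.yml
ansible_become_pass: "{{ vaulted_sudo_password }}"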