Ansible to Conditionally Prompt for a Variable?

I would like to be able to prompt for my super secure password variable if it is not already in the environment variables. (I'm thinking that I might not want to put the definition into .bash_profile or one of the other spots.)
This is not working. It always prompts me.
vars:
  THISUSER: "{{ lookup('env','LOGNAME') }}"
  SSHPWD: "{{ lookup('env','MY_PWD') }}"

vars_prompt:
  - name: "release_version"
    prompt: "Product release version"
    default: "1.0"
    when: SSHPWD == null
NOTE: I'm on a Mac, but I'd like for any solutions to be platform-independent.

According to the replies from the devs and a quick test I've done with the latest version, the vars_prompt is run before "GATHERING FACTS". This means that the env var SSHPWD is always null at the time of your check with when.
Unfortunately it seems there is no way of allowing the vars_prompt statement at task level.
Michael DeHaan's reasoning for this is that allowing prompts at the task-level would open up the doors to roles asking a lot of questions. This would make using Ansible Galaxy roles which do this difficult:
There's been a decided emphasis in automation in Ansible and asking questions at task level is not something we really want to do.
However, you can still ask vars_prompt questions at play level and use those variables throughout tasks. You just can't ask questions in roles.
And really, that's what I would like to enforce -- if a lot of Galaxy roles start asking questions, I can see that being annoying :)
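For reference, a minimal sketch of that play-level pattern (the play, role and variable names here are made up for illustration):

- hosts: all
  vars_prompt:
    - name: ssh_password            # hypothetical variable name
      prompt: "Enter the SSH password"
      private: yes                  # do not echo the input
  roles:
    # the role can reference {{ ssh_password }}, but cannot prompt by itself
    - role: my_galaxy_role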

I might be late to the party, but a quick way to avoid vars_prompt is to disable the interactive mode with this simple trick:
echo -n | ansible-playbook -e MyVar=blih site.yaml
This adds no control over which vars_prompt to skip, but coupled with default: "my_default" it can be used in a script.
Full example here:
---
- hosts: localhost
  vars_prompt:
    - prompt: Enter blah value
      default: "{{ my_blah }}"
      name: blah

echo -n | ansible-playbook -e my_blah=blih site.yaml
EDIT:
I've found that using the pause module and the prompt argument was doing what I wanted:
---
- pause:
    prompt: "Sudo password for localhost "
  when: ( env == 'local' ) and
        ( inventory_hostname == "localhost" ) and
        ( hostvars["localhost"]["ansible_become_password"] is not defined )
  register: sudo_password
  no_log: true
  tags:
    - always

Based on tehmoon's answer with some modifications I did it that way:
- hosts:
    - hostA
  become: yes
  pre_tasks:
    - pause:
        prompt: "Give your username"
      register: prompt
      no_log: yes
      run_once: yes

    - set_fact:
        username: "{{ prompt.user_input }}"
      no_log: yes
      run_once: yes

    - pause:
        prompt: "Give your password"
        echo: no
      register: prompt
      no_log: yes
      run_once: yes

    - set_fact:
        password: "{{ prompt.user_input }}"
      no_log: yes
      run_once: yes
  tags: [my_role_using_user_pass]
  roles:
    - role: my_role_using_user_pass

This is indeed not possible by default in Ansible. I understand the reasoning behind not allowing it, yet I think it can be appropriate in some contexts. I've been writing an AWS EC2 deploy script, using the blue/green deploy system, and at some point in the role I need to ask the user if a rollback needs to be done if something has gone awry. As said, there is no way to do this (conditionally and/or non-fugly).
So I wrote a very simple Ansible (2.x) action plugin, based on the pause action from the standard library. It's a bit spartan in that it only accepts a single key press, but it might be of use. You can find it in a GitHub gist here. You need to copy the whole gist file to the action_plugins directory of your playbook directory. See the documentation in the file.

As can be seen in the source code, the when keyword isn't implemented for vars_prompt (and in fact never was). The same was mentioned in this Github comment.
The only way in which vars_prompt is currently conditional is that it only prompts when the variable (defined in name) is not already defined, for example via the command-line extra_vars argument.
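To illustrate that behaviour, here is a minimal sketch (the playbook and variable names are assumptions): passing the variable on the command line with -e skips the prompt, while omitting it triggers the prompt.

# site.yml (sketch)
- hosts: localhost
  gather_facts: false
  vars_prompt:
    - name: my_secret               # hypothetical variable
      prompt: "Enter the secret"
      private: yes
  tasks:
    - debug:
        msg: "my_secret has a value"

# prompts:         ansible-playbook site.yml
# does not prompt: ansible-playbook site.yml -e my_secret=foo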

This works for me (2.3): do two bits in the one file.
This allows me to construct a temporary vars file when running the playbook via Jenkins, but also allows prompting on the command line.
And you get to do it with only the one var used.
---
- name: first bit
  hosts: all
  connection: local
  tasks:
    - set_fact:
        favColour: "{{ favColour }}"
      when: favColour is defined

- name: second bit
  hosts: all
  connection: local
  vars_prompt:
    favColour:
      prompt: "Whats ya favorite colour: "
      when: favColour is not defined
  tasks:
    - debug: msg="{{ favColour }}"

Based on @tehmoon's answer, this is what worked for me with ansible-core 2.14:
tasks:
  - name: Prompt SSH password if necessary
    when: ansible_password is undefined
    block:
      - name: Conditionally prompt for ssh/sudo password
        ansible.builtin.pause:
          prompt: "Password for {{ ansible_user_id }}@{{ ansible_host }}"
          echo: false
        register: password_prompt
        no_log: true

      - name: Set ansible_password
        ansible.builtin.set_fact:
          ansible_password: "{{ password_prompt.user_input }}"
        no_log: true

      - name: Set ansible_become_password
        ansible.builtin.set_fact:
          ansible_become_password: "{{ ansible_password }}"
        no_log: true
        when: ansible_become_password is undefined

Related

Ansible ssh variable options coding issues

What I am trying to accomplish overall is to ssh into systems which are untouched by ansible, and have them set up by ansible, including its account, and ssh keys, and adding to the dynamic inventory... and so on and so forth. In this case, it's via a proxy jump. Unfortunately this means having to ssh into them using the ssh command and the shell module, as well as storing a password. Keep in mind I am on ansible 2.9, and this is a build environment, so passwords can be copied to files during build for use and then deleted at the end of the run, so this isn't a problem. If this succeeds, we can set up accounts and ssh keys, then delete the build files and everyone is happy.
I don't need all that much I hope, I would just like to get one sticky piece of that working better. That part is the ssh options that are needed for a proxyjump connection. ansible-controller doesn't have direct access to host p0, but the ecc67 host does. I have it working in the shell command no problem, but for whatever reason, I can't shift it up to the ansible_ssh_common_args variable where it belongs.
Here is the working example of the task as it functions now:
- name: sshpass attempt with the raw module for testing.
  shell: sshpass -p "{{ access_var.ansible_ssh_pass_ssn }}" ssh -o 'ProxyCommand=ssh -W %h:%p bob@ecc67 nc %h %p' bob@p0 "w; exit"
  register: output_1
The above works just fine and uses an undefined ansible_ssh_common_args. The nc is the netcat binary and is simply being passed options through the proxy command. Then we have the playbook below, in which I tried to complete my stated mission; however, it is not functional and fails at the sshpass task:
- name: Play that is testing for a successful proxyjump connection to p0 through ecc67.
  hosts: ansible-controller
  remote_user: bob
  become: no
  become_method: sudo
  gather_facts: no
  vars:
    ansible_connection: ssh
    ansible_ssh_common_args: '-o "ProxyCommand=ssh -W %h:%p bob@ecc67 nc %h %p"'
  tasks:
    - name: Import the password file so that we have the bob account's password.
      include_vars:
        file: ~/project/copyable-files/dynamic-files/build/active-vars-repository/access.yml
        name: access_var

    - name: Set password for the bob account from the file value using previous operator input.
      set_fact:
        ansible_ssh_pass: "{{ access_var.ansible_ssh_pass_b }}"
        ansible_become_password: "{{ access_var.ansible_ssh_pass_b }}"
        cacheable: yes

    - name: sshpass attempt with the raw module for testing.
      shell: sshpass -p "{{ ansible_ssh_pass_b }}" ssh "{{ ansible_ssh_common_args }}" bob@p0 "hostname; exit"
      register: output_1

    - debug:
        var: output_1
The error I get when I attempt to use the above playbook with the reworked task and variables is as follows:
TASK [sshpass attempt with the raw module for testing.] ***********************************************
fatal: [ansible-controller]: UNREACHABLE! => {"changed": false, "msg": "Invalid/incorrect password: Killed by signal 1.", "unreachable": true}
The password is not the issue despite the error stating it is, though it's possible it's accessing something I don't expect. Is there any way to do what I want, heck, is there even just a better way to go about it that I didn't think of? Any suggestions would be helpful thanks!
From your description I understand that there is an issue with special characters in variables, quoting, templating and debugging. Therefore I am explicitly not addressing the question "Is there ... a better way to go?".
To address the different topics I've created the following minimal example playbook
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    ansible_ssh_pass: !unsafe "P4$$w0rd!_%&"
    ansible_ssh_common_args: !unsafe '-o "ProxyCommand=ssh -W %h:%p user@jump.example.com nc %h %p"'
  tasks:
    - name: Debug task to show command content
      lineinfile:
        path: ssh.file
        create: true
        line: 'sshpass -p {{ ansible_ssh_pass | quote }} ssh {{ ansible_ssh_common_args }} user@test.example.com "hostname; exit"'
resulting in an output of
sshpass -p 'P4$$w0rd!_%&' ssh -o "ProxyCommand=ssh -W %h:%p user@jump.example.com nc %h %p" user@test.example.com "hostname; exit"
... the content of ssh.file and what the shell would "see".
Further Documentation
Advanced playbook syntax - Unsafe or raw strings for usage of !unsafe
The most common use cases include passwords that allow special characters
Using filters to manipulate data
You can use YAML single quote escaping ... Escaping single quotes within single quotes in YAML is done by doubling the single quote.
Using filters to manipulate data - Manipulating strings for usage of quote
To add quotes for shell usage ... | quote
Templating (Jinja2)
Ansible uses Jinja2 templating to enable dynamic expressions and access to variables and facts.
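As a tiny illustration of the single-quote escaping mentioned above (the variable name is made up), doubling the quote inside a single-quoted YAML scalar produces a literal single quote:

vars:
  example_value: 'It''s escaped by doubling the quote'   # value becomes: It's escaped by doubling the quote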

Ansible vmware_guest customization runonce: parameter not being executed

I am using the vmware_guest module to deploy VMs to our vSphere environment and everything is working well. Now there is a requirement to handle post-processing of the VMs once they are deployed, which requires WinRM on Windows. The goal is to use the vmware_guest module's runonce: parameter to configure WinRM with the ConfigureRemotingForAnsible.ps1 script, and then to join the Windows system to the domain using the joindomain: parameter.
The issue I am running into is that the runonce: parameter appears to be executed after the system is joined to the domain. Once the system is joined, it auto-logs on to the domain, but a cyber banner pops up and someone has to hit "OK" to continue the login process. This interferes with running the PowerShell script, so I decided to try to break up the two items.
My thoughts were to create two vmware_guest Ansible tasks, the first one to create the vm and run the script to configure WinRM and the second vmware_guest task to join the system automatically to AD.
The first customization block works well and looks like ..
customization:
  autologon: yes
  autologoncount: 8
  password: "{{ local_pass }}"
  existing_vm: false
  hostname: "{{ vm_name }}"
  dns_servers:
    - "{{ dns_ns1 }}"
    - "{{ dns_ns2 }}"
  runonce:
    - powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert
wait_for_customization: yes
The second customization block in the second Ansible task looks like ...
customization:
  autologon: yes
  autologoncount: 8
  password: "{{ local_pass }}"
  existing_vm: true
  domainadmin: "{{ elevated }}"
  domainadminpassword: "{{ elevated_pass }}"
  joindomain: my-domain
wait_for_customization: yes
No errors are produced but the second customization block in the second Ansible task doesn't seem to be executed. The first task is marked as "changed" while the second task is marked as "ok."
Any ideas?

Ansible: How to check SSH access

Good morning all,
I'm racking my brains over a simple subject.
I'm on a "master" server and I would like to check whether it can connect via SSH to a list of servers.
Example
ansible-playbook -i inventaire_test test_ssh.yml
---
tasks:
  - name: test unreachable
    ansible.builtin.ping:
    register: test_ssh
    ignore_unreachable: true

  - name: test
    fail:
      msg: "test"
    when: test_ssh.unreachable is defined

  - name: header CSV
    lineinfile:
      insertafter: EOF
      dest: /home/list.csv
      line: "Server;OS;access"
    delegate_to: localhost

  - name: Info
    lineinfile:
      dest: /home/list.csv
      line: "{{ inventory_hostname }};OK"
      state: present
    when: test_ssh is successful
    delegate_to: localhost

  - name: Info csv
    lineinfile:
      dest: /home/list.csv
      line: "{{ inventory_hostname }};KO"
      state: present
    when: test_ssh.unreachable is undefined
    delegate_to: localhost
I can't find a check_ssh module. There is ansible.builtin.ssh but I can't use it.
Do you have an idea?
Thanks in advance.
Regarding
I'm on a "master" server and I would like to check if he manages to connect in SSH on a server list. ... I can't find a check_ssh module.
According to the documentation there is a
ping module – Try to connect to host, verify a usable python and return pong on success
... test module, this module always returns pong on successful contact. It does not make sense in playbooks, but it is useful from /usr/bin/ansible to verify the ability to login and that a usable Python is configured.
which seems to be doing what you are looking for.
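For completeness, a minimal sketch of that idea inside a playbook (the group name test_servers is an assumption); it is essentially what the playbook in the question already does:

- hosts: test_servers
  gather_facts: false
  tasks:
    - name: Try to reach the host over SSH
      ansible.builtin.ping:
      register: ping_result
      ignore_unreachable: true

    - name: Report reachability
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} is {{ 'KO' if ping_result.unreachable is defined else 'OK' }}"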

ansible created password for a user does not work for ssh sessions

I know this question has been asked several times; however, I am still having the issue where users created with Ansible, with the password set up as referenced in the Ansible docs, cannot log in over SSH.
I understand that the password has to be hashed rather than plain text. I tried the following, but I still can't SSH to the remote host.
---
- hosts: all # modify your server list
  remote_user: root
  vars:
    # created using sha-512
    password: $6$i77J0vHI5M$/cWpyM72mGY5h8V6PW1KTg3Tjh6VH5jtdBTm2nLwjxKzW/iR2zbzm2X.eUYT833xEDaco5NxZgY.obtDNhPNz0
  tasks:
    - include_vars: users.yml

    - name: Creating users to Jump Server
      user: name="{{ item.username }}" password= "{{ password }}" state=present
      with_items: "{{ users }}"

    - name: Placing SSH Key to Authorized Key
      # note: this assumes the public-private key pairs are already generated and that the public keys of all
      # users created above are copied in one place, i.e. the keyfiles directory, for ease of use
      authorized_key: user="{{ item.username }}" key="{{ lookup('file', './keyfiles/authorized_keys.{{ item.username }}.pub') }}"
      with_items: "{{ users }}"
/etc/shadow looks like this on all hosts
root@serverX:/home# cat /etc/shadow | grep sam
sam::17393:0:99999:7:::
What am I doing wrong or missing? I would appreciate it if someone could shed some light. Thanks a lot in advance.
You can also use your plain-text password variable directly and let Ansible hash it with the password_hash filter.
Your password variable:
password: "my_secure_password"
Then modify your user creation task:
- name: Creating users to Jump Server
  user:
    name: "{{ item.username }}"
    password: "{{ password | password_hash('sha512') }}"
    state: present
  with_items: "{{ users }}"
I have figured out the way; there was a small syntax error. I had tried the password variable as password={{ password }} instead of putting it in quotation marks.
Now the /etc/shadow file has the same hashed password as set in the variable:
root@serverX:/home# cat /etc/shadow | grep sam
sam:$6$i77J0vHI5M$/cWpyM72mGY5h8V6PW1KTg3Tjh6VH5jtdBTm2nLwjxKzW/iR2zbzm2X.eUYT833xEDaco5NxZgY.obtDNhPNz0:17393:0:99999:7:::
Hopefully this will help someone else too.
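For reference, a sketch of the corrected inline form (same variables as above), with the hashed password value quoted and no stray space around the = sign:

- name: Creating users to Jump Server
  user: name="{{ item.username }}" password="{{ password }}" state=present
  with_items: "{{ users }}"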

In Ansible, is it possible to define the authentication method per playbook?

TL;DR: Is it possible to chain two playbooks with one ansible-playbook command where one playbook is password auth and the other playbook is key auth? (see last section for real-world purpose).
Setup:
I have two playbooks, the second of which includes the first.
PlaybookA.yml
---
- name: PlaybookA # requires password authentication
  hosts: sub.domain.ext
  remote_user: root
  roles:
    - { role: role1, sudo: yes }
...
PlaybookB.yml
---
- name: Run PlaybookA
  include: PlaybookA.yml

- name: PlaybookB # requires ssh-key authentication
  hosts: sub.domain.ext
  remote_user: ansible
  roles:
    - { role: role2, sudo: yes }
...
Requirements:
Execute only one command.
Use password auth for PlaybookA.
Use ssh-key auth for PlaybookB.
Question 1:
Is it possible within Ansible (versions 1.9.4 or lower) to execute one ansible-playbook command that will successfully run PlaybookB using ssh-key authentication but when PlaybookB includes PlaybookA, run PlaybookA using password authentication?
Question 2:
If this is not possible with Ansible 1.9.4 or lower, is this possible with 2.0.0+?
Notes of worth:
Ansible provides --ask-pass (or -k) as a command line switch enabling password authentication.
Ansible provides ask_pass as a variable but it seems as though it can only be set within ansible.cfg (I haven't been able to set this as a playbook variable to the desired effect).
Attempting to set ask_pass as an instruction within a playbook results in the following: ERROR: ask_pass is not a legal parameter of an Ansible Play. If this parameter was legal, it would provide a way to instruct ansible on a per-playbook level, what authentication method to use.
Purpose / Real World:
I'm attempting to create a configuration management workflow with Ansible that will be simple enough that others at work will be able to learn / adapt to it (and hopefully the use of Ansible in general for CM and orchestration).
For any new machine (VM or physical) that gets built, I intend for us to run two playbooks immediately. PlaybookA (as shown above) has the responsibility of logging in with the correct default user (which typically depends upon the infrastructure [aws, vsphere, none, etc]). Once in, its very limited job (see the sketch after this list) is to:
Create the standardized user for ansible to run as (and install its ssh-key).
Remove any non-root users that may exist (artifacts of the vm infrastructure, etc).
Disable root access.
Disable password authentication (ssh-key only from this point on).
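A rough sketch of how those steps could look as tasks (the group name, user name, key path and module choices are illustrative assumptions, not the actual role1; the user-cleanup step is omitted):

- hosts: new_machine
  remote_user: root
  tasks:
    - name: Create the standardized ansible user
      user: name=ansible state=present

    - name: Install its ssh key
      authorized_key: user=ansible key="{{ lookup('file', 'files/ansible.pub') }}"

    - name: Disable root login and password authentication
      lineinfile: dest=/etc/ssh/sshd_config regexp="{{ item.regexp }}" line="{{ item.line }}"
      with_items:
        - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin no' }
        - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
      notify: restart sshd

  handlers:
    - name: restart sshd
      service: name=sshd state=restarted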
Depending upon the vm infrastructure (or lack thereof), the default user or the default authentication method can be different. Toward the goal of adoption of Ansible, I'm attempting to keep things extremely simple for fellow co-workers, so I'd like to automate as much of this flow-control as possible.
Once PlaybookA has locked down the vm and setup the standardized user, PlaybookB uses that standardized user to perform all other operations necessary to bring our vm's up to the necessary baseline of tools and utilities, etc.
Any tips, hints, suggestions would be greatly appreciated.
I have been facing the same problem today. Two ideas may help you here:
You can ask for the password using vars_prompt in your playbook instead of --ask-pass
Set the password using set_fact:
- name: "set password for the play"
set_fact: ansible_ssh_pass="{{ my_pass }}"
You could store the password in a file, or prompt for it, as in the example below. In my example, the sshd config that is being created will forbid password logins, but using Ansible defaults you will be surprised that the second playbook will still be executed (!), even though I "forgot" to create an authorized_key. That's due to the fact that Ansible uses the ControlPersist option of ssh and simply keeps the connection open between single tasks. You can turn that off in ansible.cfg.
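For example, a sketch of the ansible.cfg change (this replaces Ansible's default ssh_args, which normally enable ControlMaster/ControlPersist connection sharing):

[ssh_connection]
# the default is roughly: -C -o ControlMaster=auto -o ControlPersist=60s
ssh_args = -o ControlMaster=no -o ControlPersist=no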
Example Playbook:
- name: "MAKE BARE: Run preparatory steps on a newly acquired server"
hosts: blankee
tasks:
- name: "set password for the play"
set_fact: ansible_ssh_pass="{{ my_pass }}"
- name: "Create directory {{ pathsts }}/registry/ansible-init"
file: name="{{ pathsts }}/registry/ansible-init" state=directory owner=root group=www-data mode=770
- name: "copy sshd config file"
copy:
src: 'roles/newhost/files/sshd_config'
dest: '/etc/ssh/sshd_config'
owner: 'root'
group: 'root'
mode: '0644'
- name: "Check syntax of sshd configuration"
shell: sshd -t
register: result
changed_when: false
failed_when: "result.rc != 0"
- name: "Restart SSHD and enable Service to start at boot"
service: name=sshd state=restarted
changed_when: false
vars:
my_pass2: foobar
vars_prompt:
- name: "my_pass"
prompt: "########## Enter PWD:\n "
- name: "Second run: This should authenticate w/out password:"
hosts: blankee
tasks:
- name: "Create directory {{ pathsts }}/registry/ansible-init"
file: name="{{ pathsts }}/registry/ansible-init22" state=directory owner=root group=www-data mode=770
I don't know a way to change the authentication method within the play. I think I'd prefer running the two playbooks as separate Jenkins jobs or similar, but I can think of a pure Ansible workaround: instead of including the second playbook, you could get Ansible to run a shell command as a local action, and execute the second playbook from the first one. Here's a rough proof of concept:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - debug: msg="Run your first role here."

    - name: Then call Ansible to run the second playbook.
      local_action: shell ansible-playbook -i ~/workspace/hosts ~/workspace/second_playbook.yml
      register: playbook_results

    - debug: var=playbook_results.stdout_lines
Here's the output:
GATHERING FACTS ***************************************************************
ok: [vagrantbox]
TASK: [debug msg="Run your first role here."] *********************************
ok: [vagrantbox] => {
"msg": "Run your first role here."
}
TASK: [Then call Ansible to run the second playbook.] *************************
changed: [vagrantbox -> 127.0.0.1]
TASK: [debug var=playbook_results.stdout_lines] *******************************
ok: [vagrantbox] => {
"var": {
"playbook_results.stdout_lines": [
"",
"PLAY [Proof of concept] ******************************************************* ",
"",
"GATHERING FACTS *************************************************************** ",
"ok: [vagrantbox]",
"",
"TASK: [debug msg=\"This playbook was called from another playbook!\"] *********** ",
"ok: [vagrantbox] => {",
" \"msg\": \"This playbook was called from another playbook!\"",
"}",
"",
"PLAY RECAP ******************************************************************** ",
"vagrantbox : ok=2 changed=0 unreachable=0 failed=0 "
]
}
}
PLAY RECAP ********************************************************************
vagrantbox : ok=4 changed=1 unreachable=0 failed=0