Ansible vmware_guest module: customization runonce parameter not being executed

I am using the vmware_guest module to deploy VMs to our vSphere environment and everything is working well. Now there is a requirement to handle post-processing of the VMs once they are deployed, which requires WinRM on Windows. The goal is to use the vmware_guest module's "runonce:" parameter to configure WinRM with the ConfigureRemotingForAnsible.ps1 script, and then to join the Windows system to the domain using the "joindomain:" parameter.
The issue I am running into is that the "runonce:" parameter appears to be executed after the system is joined to the domain. Once the system is joined, it autologons to the domain, but a cyber-awareness banner pops up and someone has to click "OK" to continue the login process. This interferes with running the PowerShell script, so I decided to split the two steps.
My thought was to create two vmware_guest Ansible tasks: the first creates the VM and runs the script that configures WinRM, and the second joins the system to AD automatically.
The first customization block works well and looks like this:
customization:
  autologon: yes
  autologoncount: 8
  password: "{{ local_pass }}"
  existing_vm: false
  hostname: "{{ vm_name }}"
  dns_servers:
    - "{{ dns_ns1 }}"
    - "{{ dns_ns2 }}"
  runonce:
    - powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert
wait_for_customization: yes
The second customization block in the second Ansible task looks like ...
customization:
  autologon: yes
  autologoncount: 8
  password: "{{ local_pass }}"
  existing_vm: true
  domainadmin: "{{ elevated }}"
  domainadminpassword: "{{ elevated_pass }}"
  joindomain: my-domain
wait_for_customization: yes
No errors are produced but the second customization block in the second Ansible task doesn't seem to be executed. The first task is marked as "changed" while the second task is marked as "ok."
Any ideas?
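(For reference, each customization block above sits inside a vmware_guest task shaped roughly like the following; the vCenter connection parameters and variable names here are placeholders, not my actual values.)

- name: Deploy the VM and run the WinRM configuration script
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter }}"
    name: "{{ vm_name }}"
    template: "{{ template_name }}"
    state: poweredon
    customization:
      # ... first customization block from above ...
    wait_for_customization: yes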

Related

Ansible variable precedence and vault

I am reworking my ansible inventory to use ansible-vault.
Everything is working fine, however I have an issue with, I think, the precedence of variables. When I try to make a local connection to ansiblemaster (localhost, 127.0.0.1), it seems to be using the sudo password from the global configuration instead of the one in host_vars.
this is my setup:
hosts.ini
group_vars/all/config.yml
group_vars/all/secrets.yml
host_vars/ansiblemaster
So I have this defined in group_vars/all/config.yml:
### GLOBAL ###
ansible_become_password: "{{ secret_ansible_become_password }}"
ansible_password: "{{ secret_ansible_password }}"
ansible_user: "{{ secret_ansible_user }}"
And I have this defined in host_vars/ansiblemaster:
ansible_ssh_host: 127.0.0.1
ansible_user: "{{secret_master_ansible_user}}"
ansible_password: "{{secret_master_ansible_password}}"
ansible_become_password: "{{secret_master_ansible_become_password}}"
ansible_become_user: "{{secret_master_ansible_become_user}}"
ansible_connection: local
I keep getting:
password: \nsudo: 1 incorrect password attempt\n"
When I run a playbook that makes a local connection and performs sudo.
Does my definition in host_vars/ansiblemaster not override group_vars/all/config.yml?
I've solved it. It comes down to this:
I had a local_action task that wasn't picking up the variables for "ansiblemaster" (which is localhost). I changed it to use delegate_to: ansiblemaster and now it does pick up the variables in my host_vars/.
Not sure if this is best practice.
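For illustration, the difference looks roughly like this (the task itself is just a placeholder):

# Before: local_action runs against the implicit localhost, so host_vars/ansiblemaster is not applied
- name: Run something on the control node
  local_action: command whoami

# After: delegating to the inventory host "ansiblemaster" picks up its host_vars
- name: Run something on the control node
  command: whoami
  delegate_to: ansiblemaster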

In Ansible, is it possible to define the authentication method per playbook?

TL;DR: Is it possible to chain two playbooks with one ansible-playbook command where one playbook is password auth and the other playbook is key auth? (see last section for real-world purpose).
Setup:
I have two playbooks, the second of which includes the first.
PlaybookA.yml
---
- name: PlaybookA # requires password authentication
  hosts: sub.domain.ext
  remote_user: root
  roles:
    - { role: role1, sudo: yes }
...
PlaybookB.yml
---
- name: Run PlaybookA
  include: PlaybookA.yml

- name: PlaybookB # requires ssh-key authentication
  hosts: sub.domain.ext
  remote_user: ansible
  roles:
    - { role: role2, sudo: yes }
...
Requirements:
Execute only one command.
Use password auth for PlaybookA.
Use ssh-key auth for PlaybookB.
Question 1:
Is it possible within Ansible (versions 1.9.4 or lower) to execute one ansible-playbook command that will successfully run PlaybookB using ssh-key authentication but when PlaybookB includes PlaybookA, run PlaybookA using password authentication?
Question 2:
If this is not possible with Ansible 1.9.4 or lower, is this possible with 2.0.0+?
Notes of worth:
Ansible provides --ask-pass (or -k) as a command line switch enabling password authentication.
Ansible provides ask_pass as a variable but it seems as though it can only be set within ansible.cfg (I haven't been able to set this as a playbook variable to the desired effect).
Attempting to set ask_pass as an instruction within a playbook results in the following: ERROR: ask_pass is not a legal parameter of an Ansible Play. If this parameter was legal, it would provide a way to instruct ansible on a per-playbook level, what authentication method to use.
Purpose / Real World:
I'm attempting to create a configuration management workflow with Ansible that will be simple enough that others at work will be able to learn / adapt to it (and hopefully the use of Ansible in general for CM and orchestration).
For any new machine (VM or physical) that gets built, I intend for us to run two playbooks immediately. PlaybookA (as shown above) has the responsibility of logging in with the correct default user (typically depends upon the infrastructure [aws, vsphere, none, etc]). Once in, its very limited job is to:
Create the standardized user for ansible to run as (and install its ssh-key).
Remove any non-root users that may exist (artifacts of the vm infrastructure, etc).
Disable root access.
Disable password authentication (ssh-key only from this point on).
Depending upon the vm infrastructure (or lack thereof), the default user or the default authentication method can be different. Toward the goal of adoption of Ansible, I'm attempting to keep things extremely simple for fellow co-workers, so I'd like to automate as much of this flow-control as possible.
Once PlaybookA has locked down the vm and setup the standardized user, PlaybookB uses that standardized user to perform all other operations necessary to bring our vm's up to the necessary baseline of tools and utilities, etc.
Any tips, hints, suggestions would be greatly appreciated.
I have been facing the same problem today. Two ideas may help you here:
You can ask for the password using vars_prompt in your playbook instead of --ask-pass
Set the password using set_fact:
- name: "set password for the play"
set_fact: ansible_ssh_pass="{{ my_pass }}"
You could store the password in a file, or prompt for it, as in the example below. In my example, the sshd config that is being created will forbid password logins, but with Ansible defaults you will be surprised that the second playbook is still executed (!), even though I "forgot" to create an authorized_key. That's because Ansible uses SSH's ControlPersist feature and simply keeps the connection open between single tasks. You can turn that off in ansible.cfg.
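For example, something along these lines in ansible.cfg disables connection sharing (the exact values here are my assumption, not part of the original answer):

[ssh_connection]
# default is "-C -o ControlMaster=auto -o ControlPersist=60s";
# turning ControlMaster/ControlPersist off forces a fresh SSH connection per task
ssh_args = -o ControlMaster=no -o ControlPersist=no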
Example Playbook:
- name: "MAKE BARE: Run preparatory steps on a newly acquired server"
hosts: blankee
tasks:
- name: "set password for the play"
set_fact: ansible_ssh_pass="{{ my_pass }}"
- name: "Create directory {{ pathsts }}/registry/ansible-init"
file: name="{{ pathsts }}/registry/ansible-init" state=directory owner=root group=www-data mode=770
- name: "copy sshd config file"
copy:
src: 'roles/newhost/files/sshd_config'
dest: '/etc/ssh/sshd_config'
owner: 'root'
group: 'root'
mode: '0644'
- name: "Check syntax of sshd configuration"
shell: sshd -t
register: result
changed_when: false
failed_when: "result.rc != 0"
- name: "Restart SSHD and enable Service to start at boot"
service: name=sshd state=restarted
changed_when: false
vars:
my_pass2: foobar
vars_prompt:
- name: "my_pass"
prompt: "########## Enter PWD:\n "
- name: "Second run: This should authenticate w/out password:"
hosts: blankee
tasks:
- name: "Create directory {{ pathsts }}/registry/ansible-init"
file: name="{{ pathsts }}/registry/ansible-init22" state=directory owner=root group=www-data mode=770
I don't know a way to change the authentication method within the play. I think I'd prefer running two different playbooks as Jenkins jobs or similar, but I can think of a pure Ansible workaround: instead of including the second playbook, you could have Ansible run a shell command as a local action, and run the command to execute the second playbook from the first one. Here's a rough proof of concept:
---
- hosts: all
  vars_files:
    - vars.yml
  tasks:
    - debug: msg="Run your first role here."

    - name: Then call Ansible to run the second playbook.
      local_action: shell ansible-playbook -i ~/workspace/hosts ~/workspace/second_playbook.yml
      register: playbook_results

    - debug: var=playbook_results.stdout_lines
Here's the output:
GATHERING FACTS ***************************************************************
ok: [vagrantbox]
TASK: [debug msg="Run your first role here."] *********************************
ok: [vagrantbox] => {
"msg": "Run your first role here."
}
TASK: [Then call Ansible to run the second playbook.] *************************
changed: [vagrantbox -> 127.0.0.1]
TASK: [debug var=playbook_results.stdout_lines] *******************************
ok: [vagrantbox] => {
"var": {
"playbook_results.stdout_lines": [
"",
"PLAY [Proof of concept] ******************************************************* ",
"",
"GATHERING FACTS *************************************************************** ",
"ok: [vagrantbox]",
"",
"TASK: [debug msg=\"This playbook was called from another playbook!\"] *********** ",
"ok: [vagrantbox] => {",
" \"msg\": \"This playbook was called from another playbook!\"",
"}",
"",
"PLAY RECAP ******************************************************************** ",
"vagrantbox : ok=2 changed=0 unreachable=0 failed=0 "
]
}
}
PLAY RECAP ********************************************************************
vagrantbox : ok=4 changed=1 unreachable=0 failed=0

How to ignore ansible SSH authenticity checking?

Is there a way to ignore the SSH authenticity checking made by Ansible? For example when I've just setup a new server I have to answer yes to this question:
GATHERING FACTS ***************************************************************
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
RSA key fingerprint is xx:yy:zz:....
Are you sure you want to continue connecting (yes/no)?
I know that this is generally a bad idea but I'm incorporating this in a script that first creates a new virtual server at my cloud provider and then automatically calls my ansible playbook to configure it. I want to avoid any human intervention in the middle of the script execution.
Two options - the first, as you said in your own answer, is setting the environment variable ANSIBLE_HOST_KEY_CHECKING to False.
The second way to set it is to put it in an ansible.cfg file, and that's a really useful option because you can either set it globally (at system or user level, in /etc/ansible/ansible.cfg or ~/.ansible.cfg), or in a config file in the same directory as the playbook you are running.
To do that, make an ansible.cfg file in one of those locations, and include this:
[defaults]
host_key_checking = False
You can also set a lot of other handy defaults there, like whether or not to gather facts at the start of a play, whether to merge hashes declared in multiple places or replace one with another, and so on. There's a whole big list of options here in the Ansible docs.
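For example, a couple of the other defaults mentioned above could look like this (the values are just illustrations):

[defaults]
host_key_checking = False
# don't gather facts unless a play explicitly runs the setup module
gathering = explicit
# merge hashes declared in multiple places instead of replacing them
hash_behaviour = merge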
Edit: a note on security.
SSH host key validation is a meaningful security layer for persistent hosts - if you are connecting to the same machine many times, it's valuable to accept the host key locally.
For longer-lived EC2 instances, it would make sense to accept the host key with a task run only once on initial creation of the instance:
- name: Write the new ec2 instance host key to known hosts
  connection: local
  shell: "ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts"
There's no security value in checking host keys on instances that you stand up dynamically and remove right after playbook execution, but there is security value in checking host keys for persistent machines. So you should manage host key checking differently per logical environment:
Leave checking enabled by default (in ~/.ansible.cfg).
Disable host key checking in the working directory for playbooks you run against ephemeral instances (a ./ansible.cfg alongside the playbook, e.g. for unit tests against Vagrant VMs or automation for short-lived EC2 instances).
I found the answer: you need to set the environment variable ANSIBLE_HOST_KEY_CHECKING to False. For example:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ...
Changing host_key_checking to false for all hosts is a very bad idea.
The only time you want to ignore it is on "first contact", which this playbook accomplishes:
---
- name: Bootstrap playbook
  # Don't gather facts automatically because that will trigger
  # a connection, which needs to check the remote host key
  gather_facts: false
  tasks:
    - name: Check known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -F {{ inventory_hostname }}
      register: has_entry_in_known_hosts_file
      changed_when: false
      ignore_errors: true

    - name: Ignore host key for {{ inventory_hostname }} on first run
      when: has_entry_in_known_hosts_file.rc == 1
      set_fact:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"

    # Now that we have resolved the issue with the host key
    # we can "gather facts" without issue
    - name: Delayed gathering of facts
      setup:
So we only turn off host key checking if we don't have the host key in our known_hosts file.
You can pass it as command line argument while running the playbook:
ansible-playbook play.yml --ssh-common-args='-o StrictHostKeyChecking=no'
Following up on nikobelia's answer: for those using Jenkins to run the playbook, I just added the environment variable ANSIBLE_HOST_KEY_CHECKING=False to my Jenkins job before running ansible-playbook. For instance:
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook 'playbook.yml' \
    --extra-vars="some vars..." \
    --tags="tags_name..." -vv
If you don't want to modify ansible.cfg or the playbook.yml then you can just set an environment variable:
export ANSIBLE_HOST_KEY_CHECKING=False
Ignoring checking is a bad idea as it makes you susceptible to Man-in-the-middle attacks.
I took the liberty of improving nikobelia's answer by only adding each machine's key once and actually setting ok/changed status in Ansible:
- name: Accept EC2 SSH host keys
  connection: local
  become: false
  shell: |
    ssh-keygen -F {{ inventory_hostname }} ||
      ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
  register: known_hosts_script
  changed_when: "'found' not in known_hosts_script.stdout"
However, Ansible starts gathering facts before the script runs, which requires an SSH connection, so we have to either disable fact gathering or manually move it to later:
- name: Example play
  hosts: all
  gather_facts: no # gather facts AFTER the host key has been accepted instead
  tasks:
    # https://stackoverflow.com/questions/32297456/
    - name: Accept EC2 SSH host keys
      connection: local
      become: false
      shell: |
        ssh-keygen -F {{ inventory_hostname }} ||
          ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
      register: known_hosts_script
      changed_when: "'found' not in known_hosts_script.stdout"

    - name: Gathering Facts
      setup:
One kink I haven't been able to work out is that it marks all as changed even if it only adds a single key. If anyone could contribute a fix that would be great!
You can simply tell SSH to automatically accept fingerprints for new hosts. Just add
StrictHostKeyChecking=accept-new
to your ~/.ssh/config. It does not disable host-key checking entirely, it merely disables this annoying question whether you want to add a new fingerprint to your list of known hosts. In case the fingerprint for a known machine changes, you will still get the error.
This policy also works with ANSIBLE_HOST_KEY_CHECKING and other ways of passing this param to SSH.
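A minimal sketch of the ~/.ssh/config entry (the Host pattern is up to you):

Host *
    StrictHostKeyChecking accept-new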
I know the question has already been answered correctly, but I just wanted to link the Ansible doc that clearly explains when and why this check should be added: host-key-checking
Most problems appear when you want to add a new host to a dynamic inventory (via the add_host module) in a playbook. I don't want to disable fingerprint host checking permanently, so solutions like disabling it in a global config file are not OK for me. Exporting a variable like ANSIBLE_HOST_KEY_CHECKING before running the playbook is yet another thing that has to be remembered.
It's better to add a local config file in the same directory as the playbook. Create a file named ansible.cfg and paste the following text:
[defaults]
host_key_checking = False
No need to remember to add something to environment variables or to the ansible-playbook options, and it's easy to put this file into the Ansible git repo.
This is the one that works in my environment. I used the idea from this ticket: https://github.com/mitogen-hq/mitogen/issues/753
- name: Example play
  gather_facts: no
  hosts: all
  tasks:
    - name: Check SSH known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -l -F {{ inventory_hostname }}
      register: checkForKnownHostsEntry
      failed_when: false
      changed_when: false
      ignore_errors: yes

    - name: Add {{ inventory_hostname }} to SSH known hosts automatically
      when: checkForKnownHostsEntry.rc == 1
      changed_when: checkForKnownHostsEntry.rc == 1
      local_action:
        module: shell
        args: ssh-keyscan -H "{{ inventory_hostname }}" >> $HOME/.ssh/known_hosts
Host key checking is an important security measure, so I would not just skip it everywhere. Yes, it can be annoying if you keep reinstalling the same testing host (without backing up its SSH host keys) or if you have stable hosts but you run your playbook from Jenkins without a simple option to add the host key when connecting for the first time. So:
This is what we are using for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key when connecting to the host for the first time), in the inventory file:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'
And this is what we have for temporary hosts (in the end, this ignores the host key entirely):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
There is also an environment variable, or you can add it to a group/host variables file. No need to have it in the inventory - it was just convenient in our case.
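The same setting as a group variable would look roughly like this (the file path is just an example):

# group_vars/all.yml
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'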
Used some other responses here and a co-worker solution, thank you!
Use the parameter named validate_certs to ignore the ssh validation:
- ec2_ami:
    instance_id: i-0661fa8b45a7531a7
    wait: yes
    name: ansible
    validate_certs: false
    tags:
      Name: ansible
      Service: TestService
By doing this it ignores the ssh validation process

How do I get a variable with the name of the user running ansible?

I'm scripting a deployment process that takes the name of the user running the ansible script (e.g. tlau) and creates a deployment directory on the remote system based on that username and the current date/time (e.g. tlau-deploy-2014-10-15-16:52).
You would think this is available in ansible facts (e.g. LOGNAME or SUDO_USER), but those are all set to either "root" or the deployment id being used to ssh into the remote system. None of those contain the local user, the one who is currently running the ansible process.
How can I script getting the name of the user running the ansible process and use it in my playbook?
If you gather facts, which is enabled by default for playbooks, there is a built-in fact called ansible_user_id that provides the user name the tasks are being run as. You can then use this variable in other tasks or templates with {{ ansible_user_id }}. This saves you the step of running a task to register that variable.
See: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts
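For example (a minimal sketch):

- name: Show which user the tasks run as
  debug:
    var: ansible_user_id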
If you mean the username on the host system, there are two options:
You can run a local action (which runs on the host machine rather than the target machine):
- name: get the username running the deploy
  become: false
  local_action: command whoami
  register: username_on_the_host

- debug: var=username_on_the_host
In this example, the output of the whoami command is registered in a variable called "username_on_the_host", and the username will be contained in username_on_the_host.stdout.
(the debug task is not required here, it just demonstrates the content of the variable)
The second option is to use a "lookup plugin":
{{ lookup('env', 'USER') }}
Read about lookup plugins here: docs.ansible.com/ansible/playbooks_lookups.html
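Tying this back to the original question, here is a sketch of a task that builds the deployment directory name from the local user and the current date (the path is illustrative, and ansible_date_time requires gathered facts):

- name: Create a deployment directory named after the local user and timestamp
  file:
    path: "/deploy/{{ lookup('env', 'USER') }}-deploy-{{ ansible_date_time.date }}-{{ ansible_date_time.time }}"
    state: directory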
I put something like the following in all templates:
# Placed here by {{ lookup('env','USER') }} using Ansible, {{ ansible_date_time.date }}.
When templated over it shows up as:
# Placed here by staylorx using Ansible, 2017-01-11.
If I use {{ ansible_user_id }} and I've become root then that variable indicates "root", not what I want most of the time.
This seems to work for me (ansible 2.9.12):
- name: get the non root remote user
  set_fact:
    remote_regular_user: "{{ ansible_env.SUDO_USER or ansible_user_id }}"
You can also simply set this as a variable - e.g. in your group_vars/all.yml:
remote_regular_user: "{{ ansible_env.SUDO_USER or ansible_user_id }}"
This reads the user name from the remote system, because it is not guaranteed that the user names on the local and the remote system are the same (it is possible to change the name in the SSH configuration).
- name: Run whoami without become.
  command: whoami
  changed_when: false
  become: false
  register: whoami

- name: Set a fact with the user name.
  set_fact:
    login_user: "{{ whoami.stdout }}"
When you use the "become" option to launch Ansible or run a task, the logged in user will change to the user you are changing to (typically root). To get the name of the original user used to log in to the remote host with (ie: before escalating) you can use the ansible_user special variable. In addition, if you want to gather facts for a specific user other than the one currently running a task, you can use the user built-in module by doing something like this:
- user:
    name: "username"
  register: user_data
Now the user_data fact contains a bunch of useful information about that user, including their uid, gid, home folder, and a bunch of other stuff. See the return value for this task in the docs for details. Using this technique, you can get details about the original user Ansible was launched with by doing something like this:
- user:
    name: "{{ ansible_user }}"
  register: user_data
Conversely, if all you want is the name of the active user that is running a specific task (ie: which accounts for any user-switches that occur with the "become" operation) you can use the ansible_user_id fact instead.
If you want to get the user who ran the template in Ansible Tower, you can use the {{ tower_user_name }} variable in your playbook, but it is only defined for manual executions.
tower_user_name: The user name of the Tower user that started this job. This is not available for callback or scheduled jobs.
Check the docs: https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html

Ansible to Conditionally Prompt for a Variable?

I would like to be able to prompt for my super secure password variable if it is not already in the environment variables. (I'm thinking that I might not want to put the definition into .bash_profile or one of the other spots.)
This is not working. It always prompts me.
vars:
  THISUSER: "{{ lookup('env','LOGNAME') }}"
  SSHPWD: "{{ lookup('env','MY_PWD') }}"

vars_prompt:
  - name: "release_version"
    prompt: "Product release version"
    default: "1.0"
    when: SSHPWD == null
NOTE: I'm on a Mac, but I'd like for any solutions to be platform-independent.
According to the replies from the devs and a quick test I've done with the latest version, the vars_prompt is run before "GATHERING FACTS". This means that the env var SSHPWD is always null at the time of your check with when.
Unfortunately it seems there is no way of allowing the vars_prompt statement at task level.
Michael DeHaan's reasoning for this is that allowing prompts at the task-level would open up the doors to roles asking a lot of questions. This would make using Ansible Galaxy roles which do this difficult:
There's been a decided emphasis in automation in Ansible and asking questions at task level is not something we really want to do.
However, you can still ask vars_prompt questions at play level and use those variables throughout tasks. You just can't ask questions in roles.
And really, that's what I would like to enforce -- if a lot of Galaxy roles start asking questions, I can see that being annoying :)
I might be late to the party, but a quick way to avoid vars_prompt is to disable interactive mode with this simple trick:
echo -n | ansible-playbook -e MyVar=blih site.yaml
This gives no control over which vars_prompt to avoid, but coupled with default: "my_default" it can be used in a script.
Full example here:
---
- hosts: localhost
  vars_prompt:
    - prompt: Enter blah value
      default: "{{ my_blah }}"
      name: blah
echo -n | ansible-playbook -e my_blah=blih site.yaml
EDIT:
I've found that using the pause module and the prompt argument was doing what I wanted:
---
- pause:
    prompt: "Sudo password for localhost "
  when: ( env == 'local' ) and
    ( inventory_hostname == "localhost" ) and
    ( hostvars["localhost"]["ansible_become_password"] is not defined )
  register: sudo_password
  no_log: true
  tags:
    - always
Based on tehmoon's answer with some modifications I did it that way:
- hosts:
    - hostA
  become: yes
  pre_tasks:
    - pause:
        prompt: "Give your username"
      register: prompt
      no_log: yes
      run_once: yes
    - set_fact:
        username: "{{prompt.user_input}}"
      no_log: yes
      run_once: yes
    - pause:
        prompt: "Give your password"
        echo: no
      register: prompt
      no_log: yes
      run_once: yes
    - set_fact:
        password: "{{prompt.user_input}}"
      no_log: yes
      run_once: yes
  tags: [my_role_using_user_pass]
  roles:
    - role: my_role_using_user_pass
This is indeed not possible by default in Ansible. I understand the reasoning behind not allowing it, yet I think it can be appropriate in some contexts. I've been writing an AWS EC2 deploy script using the blue/green deploy system, and at some point in the role I need to ask the user whether a rollback needs to be done if something has gone awry. As said, there is no way to do this conditionally and/or non-fugly.
So I wrote a very simple Ansible (2.x) action plugin, based on the pause action from the standard library. It's a bit spartan in that it only accepts a single key press, but it might be of use. You can find it in a GitHub gist here. You need to copy the whole gist file into the action_plugins directory of your playbook directory. See the documentation in the file.
As can be seen in the source code, the when keyword isn't implemented for vars_prompt (and in fact never was). The same was mentioned in this Github comment.
The only way in which vars_prompt is currently conditional is that it does not prompt when the variable (defined in name) is already set, for example via the command-line extra_vars argument.
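A minimal sketch of that behaviour (the playbook and variable names are made up):

---
- hosts: localhost
  vars_prompt:
    - name: my_secret
      prompt: "Enter the secret"
      private: yes
  tasks:
    - debug:
        msg: "Got {{ my_secret }}"

Running ansible-playbook site.yml -e my_secret=from_cli skips the prompt; running it without -e asks interactively.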
This works for me (2.3) - do the two bits in the one file.
This allows me to construct a tmp vars file when running the playbook via Jenkins, but also allows prompting on the command line.
And you get to do it with only the one var used:
---
- name: first bit
  hosts: all
  connection: local
  tasks:
    - set_fact:
        favColour: "{{ favColour }}"
      when: favColour is defined

- name: second bit
  hosts: all
  connection: local
  vars_prompt:
    favColour:
      prompt: "Whats ya favorite colour: "
      when: favColour is not defined
  tasks:
    - debug: msg="{{favColour}}"
Based on tehmoon's answer, this is what worked for me with ansible-core 2.14:
tasks:
  - name: Prompt SSH password if necessary
    when: ansible_password is undefined
    block:
      - name: Conditionally prompt for ssh/sudo password
        ansible.builtin.pause:
          prompt: "Password for {{ ansible_user_id }}#{{ ansible_host }}"
          echo: false
        register: password_prompt
        no_log: true

      - name: Set ansible_password
        ansible.builtin.set_fact:
          ansible_password: "{{ password_prompt.user_input }}"
        no_log: true

      - name: Set ansible_become_password
        ansible.builtin.set_fact:
          ansible_become_password: "{{ ansible_password }}"
        no_log: true
        when: ansible_become_password is undefined