Ansible connect when there are multiple interactive prompts - ssh

I have an ansible inventory file with a single managed node.
When I try to connect via the CLI, I get the following interactive prompt:
login as: 286
Keyboard-interactive authentication prompts from server:
| Password authentication
| Password:
Notice the extra line Password authentication.
This is just informational; you cannot actually input anything at that point.
To run an SSH connection test, I run the following ad-hoc command:
ansible --ask-pass all -i hosts -m win_ping
and get the error:
Server | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ",
"unreachable": true
}
I then ran ssh with -v (passed via ansible_ssh_common_args) and got the following line among the debug output:
Next authentication method: keyboard-interactive\r\nPassword authentication\r\ndebug1: Authentication succeeded (keyboard-interactive).\r\nAuthenticated to 108.16.126.45
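For reference, this is roughly how the verbose flag was passed in the inventory (a sketch; the group name is an assumption, ansible_ssh_common_args is the standard variable for extra OpenSSH options, and 286 is the login from the prompt above):
[windows]
108.16.126.45 ansible_user=286 ansible_ssh_common_args='-v'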
I believe the extra message during keyboard-interactive authentication confuses the Python library that Ansible uses for the connection, since ssh itself appears to authenticate to the managed node successfully.
Has anyone encountered anything similar to this?

Related

How to fix the issue "Failed to connect to the host via ssh" in Ansible

When I execute an Ansible playbook from one server against another remote server, I get the following error:
"msg": "Failed to connect to the host via ssh: ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory\r\nHost key verification failed.", "unreachable": true"
Below is my playbook:
- hosts: igwcluster_AM:igwcluster_IS
  become: true
  become_method: sudo
  gather_facts: True
  tasks:
    - name: Install Oracle Java 8
      script:/data2/jenkins/workspace/PreReq_Install_To_Servers/IGW/IGW_Cluster/prereqs_Products/Java.sh
I'm using two host groups and each group has 2 servers.
Error log:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory\r\nHost key verification failed.", "unreachable": true}
Note: I have tried with
host_key_checking = False
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
But it still fails. Please advise me on this.
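For reference, a sketch of where those two settings normally live in ansible.cfg (assuming the stock section layout):
[defaults]
host_key_checking = False

[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no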
First of all, you have to put a space after "script:" and place script exactly under "name:", so it will look like this:
tasks:
  - name: Install Oracle Java 8
    script: /data2/jenkins/workspace/PreReq_Install_To_Servers/IGW/IGW_Clust/prereqs_Products/Java.sh
Try using an SSH key for SSH authentication.
On the server that you execute the Ansible playbook from, generate an SSH key if you haven't already; you can do it with a simple command:
ssh-keygen
(press Enter until the command exits)
Next, copy it to the remote server with the ssh-copy-id command:
ssh-copy-id <remote server IP/FQDN>
After this, your Ansible server will be able to connect to the remote server without a password prompt and this error should not appear.
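A quick way to verify the key-based login before re-running the playbook (a sketch; the ping target assumes your inventory already defines the igwcluster_AM group from the playbook above):
ssh <remote server IP/FQDN>
ansible igwcluster_AM -m ping
The first command should log you in without a password prompt, and the second should return pong for each host in the group.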
If this method doesn't work for you, please share this information:
your hosts file
the become user that you are using to run this playbook

Ansible Permission denied (public key) but ssh using same key works

I'm running this Ansible ad-hoc command on Ubuntu 16.x (ansible ver. 2.2.1.0 and 2.2.2.0)
ansible host_alias -a "df -h" -u USER
where host_alias is defined in the Ansible hosts file (it defines an EC2 instance and its .pem file).
The hosts file looks like this:
[host_alias]
my_host.compute.amazonaws.com
private_key_file=/path/to/key/my_key.pem
I get this error:
private_key_file=/path/to/key/my_key.pem | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname private_key_file=/path/to/key/my_key.pem: Name or service not known\r\n",
"unreachable": true
}
my_host.compute.amazonaws.com | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n",
"unreachable": true
The same host and key work fine when I ssh (defined by ~/.ssh/config).
I have made triple sure the key is there and has read permissions. I also tried setting the ansible_user in the Ansible hosts file.
Any ideas?
Please check the format of the Ansible inventory file in the documentation.
You have defined two hosts in a host group named host_alias:
the first host is: my_host.compute.amazonaws.com,
the second host is: private_key_file=/path/to/key/my_key.pem.
Ansible complains it cannot connect to the second host:
Could not resolve hostname private_key_file=/path/to/key/my_key.pem
It also cannot connect to the first host, because the SSH key is not defined:
Failed to connect to the host via ssh: Permission denied (publickey).
On top of the mistake of splitting the hostname and the parameter into separate lines, you also got the name of the parameter wrong -- it should be ansible_ssh_private_key_file.
The parameters are listed in a later section of the same document.
Your inventory file should look like this:
[host_group_name]
my_host.compute.amazonaws.com ansible_ssh_private_key_file=/path/to/key/my_key.pem
and your command:
ansible host_group_name -a "df -h" -u USER
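If you prefer not to repeat the key on every host line, the same variable can also be set once for the whole group (a sketch, using the group-vars section syntax described in the same docs):
[host_group_name:vars]
ansible_ssh_private_key_file=/path/to/key/my_key.pem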
The second line needs to be dropped from the [host_alias] section. That section is meant for hosts only.
Once you do that, try
ansible all -m ping
to check if you can ping the host.
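After dropping it, the section contains only the host (a sketch), and the key then has to be supplied another way, for example appended to the host line as in the first answer above:
[host_alias]
my_host.compute.amazonaws.com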

Ansible: Cannot login to local Vagrant server

I have two Vagrant instances running with different IPs:
192.168.33.17 [Ansible installed here]
192.168.33.19 [Another server where I am trying to connect]
My Ansible hosts file is in /etc/ansible/hosts and it looks like:
[example]
192.168.33.19:2222
I can easily connect via SSH to the second server with command:
ssh vagrant@192.168.33.19
without password.
But running the Ansible command yields error:
[root#centos72x64 vagrant]# ansible example -m ping -u vagrant
192.168.33.19 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
How can I solve this error?
My Ansible hosts file is in /etc/ansible/hosts and it looks like
[example]
192.168.33.19:2222
You don't put the port number in the Ansible inventory file this way. To learn how to do it, consult the docs.
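A sketch of how the port is usually set per host instead (ansible_port is the current variable name; older releases used ansible_ssh_port):
[example]
192.168.33.19 ansible_port=2222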
But you also mentioned:
I can easily connect via ssh to the second server with command
ssh vagrant@192.168.33.19
So you don't use port 2222 at all.

Ansible - establishing initial SSH connection

I am trying to copy an SSH public key to a newly created VM:
- hosts: vm1
  remote_user: root
  tasks:
    - name: deploy ssh key to account
      authorized_key: user='root' key="{{lookup('file','/root/.ssh/id_rsa.pub')}}"
But getting error:
fatal: [jenkins]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}
So to establish SSH I need first to establish SSH?
How can I establish SSH access to a newly created KVM guest automatically, without manually copying the key?
(host_key_checking = False is set in ansible.cfg)
Assuming the target machine allows root login with a password (from the error message it seems it does), you must provide the credentials to your playbook:
ansible-playbook playbook.yml --extra-vars "ansible_ssh_user=root ansible_ssh_pass=password"
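The same credentials can also be kept in the inventory instead of on the command line (a sketch; <VM IP> is a placeholder, and sshpass must be installed on the control machine for password-based SSH):
[vm1]
jenkins ansible_ssh_host=<VM IP> ansible_ssh_user=root ansible_ssh_pass=password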
Something I tried (and it worked) when I had this same issue:
ansible target-server-name -m command -a "whatever command" -k
The -k prompts you for the ssh password to the target server.
Add the following to the /etc/ansible/hosts file:
[target-server-name]
target_server_ip
Example:
ansible target-server-name -m ping -k

Ansible: "sudo: a password is required\r\n" [duplicate]

This question already has an answer here:
How can a user with SSH keys authentication have sudo powers in Ansible? [duplicate]
(1 answer)
Closed 5 years ago.
quick question
I have set up an Ubuntu server with a user named test. I copied the authorized_keys to it, and I can ssh with no problem.
If I do $ ansible -m ping ubu1, no problem, I get a response:
ubu1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
What I don't get is this. If I do
$ ansible-playbook -vvvv Playbooks/htopInstall.yml
fatal: [ubu1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "setup"}, "module_stderr": "OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g-fips 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 6109\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.1.112 closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "parsed": false}
If I do $ ansible-playbook --ask-sudo-pass Playbooks/htopInstall.yml, then it asks for my user password and the play is a success.
If I rename the authorized_keys, it tells me "Failed to connect to the host via ssh.", which is expected.
What I don't understand is why it is asking for a sudo password. I definitely missed something along the way.
My ansible.cfg file looks like this:
[defaults]
nocows = 1
inventory = ./Playbooks/hosts
remote_user = test
private_key_file = /home/test/.ssh/id_ubu
host_key_checking = false
My hosts file looks like this:
[servers]
ubu1 ansible_ssh_host=192.168.1.112 ansible_ssh_user=test
What I don't understand is why it is asking for a sudo password.
We can't say for certain without seeing your playbook, but it's almost certainly because a) your playbook asks Ansible to run a particular command with sudo (via the sudo or become directives) and b) the test user does not have password-less sudo enabled.
It sounds like you are aware of (a) but are confused about (b); specifically, what I'm picking up is that you don't understand the difference between ssh authentication and sudo authentication. Again, without more information I can't confirm if this is the case, but I'll take a stab at explaining it in case I guessed correctly.
When you connect to a machine via ssh, there are two primary ways in which sshd authenticates you and allows you to log in as a particular user. The first is to ask for the account's password, which it hands off to the system, allowing a login if it is correct. The second is through public-key cryptography, in which you prove that you have access to a private key that corresponds to a public key fingerprint in ~/.ssh/authorized_keys. Passing sshd's authentication checks gives you a shell on the machine.
When you invoke a command with sudo, you're asking sudo to elevate your privileges beyond what the account normally gets. This is an entirely different system, with rules defined in /etc/sudoers (which you should edit using sudo visudo) that control which users are allowed to use sudo, what commands they should be able to run, whether they need to re-enter their password or not when using the command, and a variety of other configuration options.
When you run the playbook normally, Ansible is presented with a sudo prompt and doesn't know how to continue - it doesn't know the account password. That's why --ask-sudo-pass exists: you're giving the password to Ansible so that it can pass it on to sudo when prompted. If you don't want to have to type this every time and you've decided it's within your security parameters to allow anyone logged in as the test user to perform any action as root, then you can consult man sudoers on how to set passwordless sudo for that account.
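For illustration, a minimal sketch of the kind of sudoers rule man sudoers describes for this case (edit it with sudo visudo; test is the account from the question):
test ALL=(ALL) NOPASSWD: ALL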
I solved this exact error (sudo: a password is required\n), which I got when running my playbook with become: true but with a task somewhere in it delegating to localhost, something like this:
uri:
  url: "{{ some_url }}"
  return_content: yes
  status_code: 200
delegate_to: 127.0.0.1
If I understood correctly, become: true causes Ansible to log into the remote host as my user and then use sudo in order to execute all commands on the remote host as root. Now when delegating to 127.0.0.1, sudo is also used, and as it happens, on my localhost a password is expected when using sudo.
For me the solution was simply to remove the delegate_to, which was not actually needed in that particular use case.
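If the delegation is actually needed, an alternative sketch is to turn privilege escalation off for just that task (become: false is a per-task override of the play-level become: true):
uri:
  url: "{{ some_url }}"
  return_content: yes
  status_code: 200
delegate_to: 127.0.0.1
become: false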