Ansible SSH giving permission denied error

I am trying to execute an Ansible playbook against server1.xxx.com and I am getting a permission denied error.
I created an SSH key with the command
ssh-keygen -f t11pkey
added a passphrase, and copied the key to the server:
ssh-copy-id -i /home/user.name/t11pkey.pub user.name@server1.xxx.com
My ~/.ssh/config:
Host server?.xxx.com
    User user.name
    Port 22
    IdentityFile /home/user.name/.ssh/t11pkey.pub
Permissions of my keys:
-rw------- 1 user.name Domain Users 1766 Dec 5 10:55 t11pkey
-rw------- 1 user.name Domain Users 412 Dec 5 10:55 t11pkey.pub
ansible.cfg
[defaults]
filter_plugins = ./filter_plugins
roles_path = ./roles
sudo_user = root
host_key_checking = False
retry_files_enabled = False
[ssh_connection]
ssh_args = -F /home/user.name/.ssh/config -o ControlMaster=auto -o ControlPersist=30m
control_path = ~/.ssh/ansible-%%r@%%h:%%p
inventory file
[new]
server1.xxx.com
My playbook:
- hosts: new
  remote_user: user.name
  become: true
  vars_files:
    - xx.yml
    - xx.yml
    - xx.yml
  roles:
    - role: ~/path/to/the/role
Ansible error:
TASK [Gathering Facts] *****************************************************************************************************************************************************
Enter passphrase for key '/home/user.name/.ssh/t11pkey.pub':
Enter passphrase for key '/home/user.name/.ssh/t11pkey.pub':
Enter passphrase for key '/home/user.name/.ssh/t11pkey.pub':
fatal: [server1.xxx.com]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}
ansible --version: ansible 2.3.1.0 (stable-2.3 5512c94017) last updated 2017/06/21 22:56:43 (GMT -400)

The IdentityFile parameter in the config file should point to the private key (t11pkey), not the public one (t11pkey.pub).
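A minimal sketch of the corrected ~/.ssh/config, using the same paths as the question:
Host server?.xxx.com
    User user.name
    Port 22
    IdentityFile /home/user.name/.ssh/t11pkey
Because the key has a passphrase, loading it into ssh-agent first avoids the repeated "Enter passphrase" prompts during the play (assuming an agent is available on the control machine):
eval "$(ssh-agent -s)"
ssh-add /home/user.name/.ssh/t11pkey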

Related

Need to use ansible to connect to a host via a jump host using a key in the jump host

I have a machine that is accessible through a jump host.
What I need is this.
A is my local machine
B is the jump host
C is the destination machine
I need to connect to C using Ansible via B, but using a private key that is stored on B.
The current config in the inventory file is as shown below:
[deployment_host:vars]
ansible_port = 22 # remote host port
ansible_user = <user_to_the_Target_machine> # remote user host
private_key_file = <key file to bastion in my laptop> # laptop key to login to bastion host
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o ProxyCommand="ssh -o \'ForwardAgent yes\' <user>@<bastion> -p 2222 \'ssh-add /home/<user>/.ssh/id_rsa && nc %h 22\'"'
[deployment_host]
10.200.120.218 ansible_ssh_port=22 ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
How can I do that?
I have not made any changes to my SSH config, and when I run ansible like below
ansible -vvv all -i inventory.ini -m shell -a 'hostname'
I get this error:
ansible 2.9.0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.9.5 (default, May 11 2021, 08:20:37) [GCC 10.3.0]
No config file found; using defaults
host_list declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /root/temp_ansible/inventory.ini as it did not pass its verify_file() method
Parsed /root/temp_ansible/inventory.ini inventory source with ini plugin
META: ran handlers
<10.200.120.218> ESTABLISH SSH CONNECTION FOR USER: <user> # remote user host
<10.200.120.218> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o Port=22 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="<user> # remote user host"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o 'ProxyCommand=ssh -o '"'"'ForwardAgent yes'"'"' <user>#35.223.214.105 -p 2222 '"'"'ssh-add /home/<user>/.ssh/id_rsa && nc %h 22'"'"'' -o StrictHostKeyChecking=no -o ControlPath=/root/.ansible/cp/ec0480070b 10.200.120.218 '/bin/sh -c '"'"'echo '"'"'"'"'"'"'"'"'~<user> # remote user host'"'"'"'"'"'"'"'"' && sleep 0'"'"''
<10.200.120.218> (255, b'', b'kex_exchange_identification: Connection closed by remote host\r\nConnection closed by UNKNOWN port 65535\r\n')
10.200.120.218 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host\r\nConnection closed by UNKNOWN port 65535",
"unreachable": true
}
I figured out the solution.
For me it was adding entries for both servers into ~/.ssh/config:
Host bastion1
    HostName <IP/FQDN>
    StrictHostKeyChecking no
    User <USER>
    IdentityFile <file to log into the first bastion server> # should be present on your local machine

Host bastion2
    HostName <IP/FQDN>
    StrictHostKeyChecking no
    User <USER>
    IdentityFile <file to log into the second bastion server> # should be present on your local machine
    ProxyJump bastion1
Then edit the inventory file as shown below.
[deployment_host]
VM_IP ansible_user=<vm_user> ansible_ssh_extra_args='-o StrictHostKeyChecking=no' ansible_ssh_private_key_file=<file to login to VM>
[deployment_host:vars]
ansible_ssh_common_args='-J bastion1,bastion2'
Then any ansible command with this inventory should work without issue:
❯ ansible all -i inventory.ini -m shell -a "hostname"
<VM_IP> | CHANGED | rc=0 >>
development-host-1
NOTE: All of these SSH keys should be on your local system. You can get
the bastion2 private key from bastion1 using Ansible, and likewise the
VM key from bastion2, using an ansible ad-hoc fetch.
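As a sketch of that ad-hoc fetch (the key path and destination directory are assumptions; bastion1 resolves through the ~/.ssh/config entry above, and the trailing comma makes it an inline host list):
ansible all -i 'bastion1,' -m fetch -a "src=/home/<USER>/.ssh/id_rsa dest=./keys/ flat=yes"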

incorrect sudo password ansible

In my Ansible run I am getting the following error:
PLAY [test hashi vault] ******************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vault.domain'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
ok: [192.168.1.200]
TASK [show bar] **************************************************************************************************************
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'vault.domain'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
fatal: [192.168.1.200]: FAILED! => {"msg": "Incorrect sudo password"}
PLAY RECAP *******************************************************************************************************************
192.168.1.200 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I know that the password is correct, having done a debug, and the same password works when extracting it from Vault using curl. This is the new code where I get the error:
---
- name: test hashi vault
  hosts: all
  remote_user: ec2-user
  tasks:
    - name: show bar
      systemd:
        state: restarted
        name: sssd.service
      async: 45
      become: yes
      become_method: sudo
This is what I'm running:
ansible-playbook -l 192.168.1.200 test.yml --private-key=/home/rehna/.ssh/testKeyPair.pem --vault-password-file /etc/ansible/ansible.vault -e @credentials
Contents of credentials:
ansible_user: ec2-user
ansible_become_pass: "{{ lookup('hashi_vault', 'secret=secret/test/ec2_password auth_method=userpass username={{vault_user}} password={{vault_password}} url={{vault_url}}:{{vault_port}} validate_certs=false') }}"
hosts
[ec2]
192.168.1.200
[test_env]
192.168.1.200 remote_user=ec2-user
from /var/log/secure:
unix_chkpwd[30174]: password check failed for user (ec2-user)
sudo: pam_unix(sudo:auth): authentication failure; logname=ec2-user uid=1000 euid=0 tty=/dev/pts/4 ruser=ec2-user rhost= user=ec2-user
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [ec2-user]
It should be like this:
sudo: ec2-user : TTY=pts/4 ; PWD=/home/ec2-user ; USER=root ; COMMAND=/bin/passwd --stdin ec2-user
sudo: pam_unix(sudo:session): session opened for user root by ec2-user(uid=0)
sudo: pam_unix(sudo:session): session closed for user root
The data returned by the lookup is a dict of key/value pairs.
You need to extract the content from the return data provided by the lookup:
ec2_pass: "{{ lookup('hashi_vault', 'secret=secret/test/ec2_password auth_method=userpass username={{vault_user}} password={{vault_password}} url={{vault_url}}:{{vault_port}} validate_certs=false') }}"
ansible_become_pass: "{{ec2_pass.value}}"
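To see exactly which keys the lookup returns (and confirm that 'value' is among them), a throwaway debug task can print the raw dict; this is a minimal sketch that reuses the same vault_* variables from the question:
- name: inspect the raw return of the hashi_vault lookup
  debug:
    msg: "{{ lookup('hashi_vault', 'secret=secret/test/ec2_password auth_method=userpass username={{vault_user}} password={{vault_password}} url={{vault_url}}:{{vault_port}} validate_certs=false') }}"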

how to fix the issue "Failed to connect to the host via ssh" in ansible

When I execute an Ansible playbook from one server against another remote server, I get this error:
"msg": "Failed to connect to the host via ssh: ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory\r\nHost key verification failed.", "unreachable": true"
Below is my playbook:
- hosts: igwcluster_AM:igwcluster_IS
become: true
become_method: sudo
gather_facts: True
tasks:
- name: Install Oracle Java 8
script:/data2/jenkins/workspace/PreReq_Install_To_Servers/IGW/IGW_Cluster/prereqs_Products/Java.sh
I'm using two host groups and each group has 2 servers.
Error log:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory\r\nHost key verification failed.", "unreachable": true}
Note: I have tried with
host_key_checking = False
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
But it still fails. Please advise me on this.
First of all, you have to put a space after "script:" and place the script path exactly under "name:", so it will look like this:
tasks:
- name: Install Oracle Java 8
  script: /data2/jenkins/workspace/PreReq_Install_To_Servers/IGW/IGW_Cluster/prereqs_Products/Java.sh
Try using an SSH key for SSH authentication.
On the server that you execute the Ansible playbook from, generate an SSH key if you haven't already; you can do it with a simple command:
ssh-keygen
(press Enter until the command exits)
Next, copy it to the remote server with the ssh-copy-id command:
ssh-copy-id <remote server IP/FQDN>
After this, your Ansible server will be able to connect to the remote server without a password prompt, and this error should not appear.
If this method doesn't work for you, please share this information:
your hosts file
the become user that you are using to run this playbook
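If the "Host key verification failed" part of the message keeps coming back, note that the two settings the question mentions belong in different sections of ansible.cfg; a minimal sketch of where they go:
[defaults]
host_key_checking = False

[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no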

Ansible - establishing initial SSH connection

I am trying to copy an SSH public key to a newly created VM:
- hosts: vm1
  remote_user: root
  tasks:
    - name: deploy ssh key to account
      authorized_key: user='root' key="{{lookup('file','/root/.ssh/id_rsa.pub')}}"
But I am getting this error:
fatal: [jenkins]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}
So to establish SSH I first need to establish SSH?
How can I establish SSH access to a newly created KVM guest automatically, without a manual key copy?
(host_key_checking = False is set in ansible.cfg)
Assuming the target machine allows root login with a password (from the error message it seems it does), you must provide the credentials to your playbook:
ansible-playbook playbook.yml --extra-vars "ansible_ssh_user=root ansible_ssh_pass=password"
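On newer Ansible releases the same idea is usually written with the ansible_user / ansible_password variable names; a minimal sketch (sshpass must be installed on the control machine for password-based SSH):
ansible-playbook playbook.yml --extra-vars "ansible_user=root ansible_password=password"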
Something I tried (and it worked) when I had this same issue:
ansible target-server-name -m command -a "whatever command" -k
The -k flag prompts you for the SSH password to the target server.
Add the below changes to the /etc/ansible/hosts file:
[target-server-name]
target_server_ip
Example:
ansible target-server-name -m ping -k
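The same -k prompt also works with ansible-playbook, so the public key from the question's playbook can be deployed once over password authentication and key-based logins used afterwards; a sketch, assuming the playbook is saved as deploy-key.yml (the file name is illustrative):
ansible-playbook deploy-key.yml -k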

pinging ec2 instance from ansible

I have an EC2 Amazon Linux instance running which I can SSH into using:
ssh -i "keypair.pem" ec2-user@some-ip.eu-west-1.compute.amazonaws.com
but when I try to ping the server using ansible I get:
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I'm using the following hosts file:
testserver ansible_ssh_host=some-ip.eu-west-1.compute.amazonaws.com ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/Users/me/playbook/key-pair.pem
and running the following command:
ansible testserver -i hosts -m ping -vvvvv
The output is:
<some-ip.eu-west-1.compute.amazonaws.com> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible.cfg set ssh_args: (-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set: (-o)(IdentityFile="/Users/me/playbook/key-pair.pem")
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ec2-user)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_common_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: PlayContext set ssh_extra_args: ()
<some-ip.eu-west-1.compute.amazonaws.com> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r)
<some-ip.eu-west-1.compute.amazonaws.com> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/me/playbook/key-pair.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/Users/me/.ansible/cp/ansible-ssh-%h-%p-%r ec2-52-18-106-35.eu-west-1.compute.amazonaws.com '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1462096401.65-214839021792201 `" )'"'"''
testserver | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
What am I doing wrong?
Try this solution; it worked fine for me:
ansible ipaddress -m ping -i inventory -u ec2-user
where inventory is the host file name.
inventory:
[host]
xx.xx.xx.xx
[host:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/location of your pem file/filename.pem
I was facing this problem because I didn't give the location of the hosts file I was referring to.
This is what my hosts file looks like.
[apache] is the group of hosts on which we are going to install the Apache server.
ansible_ssh_private_key_file should be the path to the downloaded .pem file used to access your instances. In my case both instances have the same credentials.
[apache]
50.112.133.205 ansible_ssh_user=ubuntu
54.202.7.87 ansible_ssh_user=ubuntu
[apache:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/hashimyousaf/Desktop/hashim_key_oregon.pem
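With an inventory like this, a quick connectivity check can be run with the ping module before installing anything (assuming the file is saved as hosts):
ansible apache -i hosts -m ping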
I was having a similar problem, and reading through Troubleshooting Connecting to Your Instance helped me. Specifically, I was pinging an Ubuntu instance from an Amazon Linux instance but forgot to change the connection username from "ec2-user" to "ubuntu"!
You have to change the hosts file and make sure you have the correct username
test2 ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com ansible_ssh_user=theUser
'test2' is the name I have given to the machine in my local Ansible hosts file.
'ansible_ssh_host=something.something.eu-west-1.compute.amazonaws.com' is the connection address of the EC2 instance.
'ansible_ssh_user=theUser' is the user of the instance. (Important)
SSH into your instance ([theUser@Instance:]) and make sure you copy 'theUser' into the hosts file as the 'ansible_ssh_user' value.
Then try to ping it.
If this does not work, check whether ICMP traffic is allowed in your AWS security group settings.
This worked for me:
vi inventory
[hosts]
serveripaddress ansible_ssh_user=ec2-user
[hosts:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/home/someuser/ansible1.pem
chmod 400 ansible1.pem
ansible -i inventory hosts -m ping -u ec2-user