I am trying to use Ansible to set up the infrastructure for SSH connections between my servers.
- name: Copy ssh key to each server
  copy: src=static_folder_key dest=/home/ec2-user/.ssh/ mode=0600

- name: Enable ssh Agent
  shell: eval $(ssh-agent -s)

- name: Adding ssh key for static forlder project
  shell: ssh-add /home/ec2-user/.ssh/static_folder_key
  sudo: True
I create a new SSH key and copy it to each server. Then I start the agent and add the new key to allow the connection. But when I run the playbook I get this error:
TASK: [git | Adding ssh key for static forlder project] ***********************
failed: [admin_vehicles] => {"changed": true, "cmd": "ssh-add /home/ec2-user/.ssh/static_folder_key", "delta": "0:00:00.004346", "end": "2015-08-12 15:05:00.878208", "rc": 2, "start": "2015-08-12 15:05:00.873862", "warnings": []}
stderr: Could not open a connection to your authentication agent.
failed: [leads_messages] => {"changed": true, "cmd": "ssh-add /home/ec2-user/.ssh/static_folder_key", "delta": "0:00:00.004508", "end": "2015-08-12 15:05:01.286031", "rc": 2, "start": "2015-08-12 15:05:01.281523", "warnings": []}
stderr: Could not open a connection to your authentication agent.
FATAL: all hosts have already failed -- aborting
If I execute these actions manually, everything works fine:
ssh-add /home/ec2-user/.ssh/static_folder_key
Identity added: /home/ec2-user/.ssh/static_folder_key (/home/ec2-user/.ssh/static_folder_key)
So any tips? Maybe I am missing something in my playbook task?
The solution is to invoke eval "$(ssh-agent)" before the ssh-add. Initially I tried this with two separate Ansible tasks, but it failed the same way: each task runs in its own shell, so the agent environment does not persist from one task to the next. The solution I ended up with is to invoke both commands in a single task, like this:
- name: Evaluating the authentication agent & adding the key...
  shell: |
    eval "$(ssh-agent)"
    ssh-add ~/.ssh/id_rsa_svn_ssh
The environment of each task is independent, so ssh-agent settings made in one task do not carry over to other tasks.
I strongly recommend using SSH agent forwarding instead. Put the following in ~/.ssh/config on your local machine, then run ssh-agent and ssh-add static_folder_key locally before running ansible-playbook. That's all.
Host *
  ForwardAgent yes
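For reference, the local sequence would look roughly like this (site.yml stands in for whatever playbook you actually run):
eval "$(ssh-agent -s)"              # start a local agent
ssh-add ~/.ssh/static_folder_key    # load the key locally
ansible-playbook site.yml           # the forwarded agent is used on the remote hosts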
Even if agent forwarding is not an option, you don't need to run ssh-agent at all for a private key file with no passphrase. Put the following configuration in ~/.ssh/config on the remote hosts and then just ssh to static-folder-host.
Host static-folder-host
  Hostname static-folder-host.static-folder-domain
  User static-folder-user
  IdentityFile ~/.ssh/static_folder_key
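With that configuration in place, connecting from the remote host is simply:
ssh static-folder-host
and the key file is picked up automatically, without any agent.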
Related
When I execute an Ansible playbook from one server against another remote server, I get this error:
"msg": "Failed to connect to the host via ssh: ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory\r\nHost key verification failed.", "unreachable": true"
Below is my playbook:
- hosts: igwcluster_AM:igwcluster_IS
  become: true
  become_method: sudo
  gather_facts: True

  tasks:
    - name: Install Oracle Java 8
      script:/data2/jenkins/workspace/PreReq_Install_To_Servers/IGW/IGW_Cluster/prereqs_Products/Java.sh
I'm using two host groups and each group has 2 servers.
Error log:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory\r\nHost key verification failed.", "unreachable": true}
Note: I have tried with
host_key_checking = False
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
but it still fails. Please advise.
First of all, you have to put a space after "script:" and indent the script path under "name:", so it looks like this:
tasks:
  - name: Install Oracle Java 8
    script: /data2/jenkins/workspace/PreReq_Install_To_Servers/IGW/IGW_Cluster/prereqs_Products/Java.sh
Also, try using an SSH key for authentication.
On the server you run the Ansible playbook from, generate an SSH key if you haven't already; you can do it with a simple command:
ssh-keygen
(press Enter until the command exits)
Next, copy it to the remote server with the ssh-copy-id command:
ssh-copy-id <remote server IP/FQDN>
After this your Ansible server will be able to connect to the remote server without a password prompt, and this error should not appear.
If this method doesn't work for you, please share the following information:
your hosts file
the become user you are using to run this playbook
In a shell I follow the approach below to become the root user without a password, and it works fine:
ssh-agent bash
ssh-add /repository/ansible/.ssh/id_rsa_ansible
ssh -A ansible@e8-df1
[ansible@e8-df1 ~]$ sudo -i
[root@e8-df1 ~]#
However, in Ansible I cannot achieve the same thing and I get an error. Below are my Ansible inventory and playbook.
Inventory:
[qv]
e8-df1
e8-df2
[qv:vars]
ansible_ssh_user=ansible
ansible_ssh_private_key_file=/repository/ansible/.ssh/id_rsa_ansible
Playbook:
---
- hosts: qv
  become: yes
  roles:
    - abc
Error:
fatal: [e8-df1]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "setup"
},
"module_stderr": "Shared connection to e8-df1 closed.\r\n",
"module_stdout": "sudo: a password is required\r\n",
"msg": "MODULE FAILURE"
}
fatal: [e8-df2]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "setup"
},
"module_stderr": "Shared connection to e8-df2 closed.\r\n",
"module_stdout": "sudo: a password is required\r\n",
"msg": "MODULE FAILURE"
}
I have gone through some documents and Q&As, and they suggest adding the line below to the sudoers file:
ansible ALL=(ALL) NOPASSWD: ALL
Now I don't understand why the shell procedure works without this sudoers configuration, and whether there is another way to achieve the same thing in Ansible.
The problem is that when you connect via the shell, you forward the agent over the SSH connection with the -A parameter; in Ansible you have to configure this behaviour explicitly if you want the agent forwarded over the SSH connection.
Here is a related question with a solution: SSH Agent Forwarding with Ansible
Basically, you need to set the SSH parameters you want in ansible.cfg; alternatively, you can set the parameters for the hosts you connect to in the SSH client configuration in ~/.ssh/config.
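As a rough sketch, the ansible.cfg route could look like this (note that setting ssh_args replaces Ansible's default SSH options, so add back any you still need):
[ssh_connection]
ssh_args = -o ForwardAgent=yes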
You need to set private_key_file = /path/to/file in the configuration file /etc/ansible/ansible.cfg.
For your case it should look like this:
private_key_file = /repository/ansible/.ssh/id_rsa_ansible
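For completeness, that line goes in the [defaults] section of ansible.cfg:
[defaults]
private_key_file = /repository/ansible/.ssh/id_rsa_ansible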
Hope this helps.
I have two Vagrant instances running with different IPs:
192.168.33.17 [Ansible installed here]
192.168.33.19 [Another server where I am trying to connect]
My Ansible hosts file is in /etc/ansible/hosts and it looks like:
[example]
192.168.33.19:2222
I can easily connect via SSH to the second server, without a password, with the command:
ssh vagrant@192.168.33.19
But running the Ansible command yields an error:
[root@centos72x64 vagrant]# ansible example -m ping -u vagrant
192.168.33.19 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
How can I solve this error?
My Ansible hosts file is in /etc/ansible/hosts and it looks like
[example]
192.168.33.19:2222
You don't specify the port number in the Ansible inventory file this way. To learn how to do it, consult the docs.
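For reference, the usual documented way is a host variable, roughly like this (ansible_port on Ansible 2.0+, ansible_ssh_port on older releases):
[example]
192.168.33.19 ansible_port=2222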
But you also mentioned:
I can easily connect via SSH to the second server with the command
ssh vagrant@192.168.33.19
So you don't use port 2222 at all.
I am trying to copy an SSH public key to a newly created VM:
- hosts: vm1
  remote_user: root
  tasks:
    - name: deploy ssh key to account
      authorized_key: user='root' key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
But I get this error:
fatal: [jenkins]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n", "unreachable": true}
So to establish SSH, I first need to establish SSH?
How can I set up SSH access to a newly created KVM guest automatically, without copying the key manually?
(host_key_checking = False is set in ansible.cfg)
Assuming the target machine allows root login with a password (from the error message it seems it does), you must provide the credentials to your playbook:
ansible-playbook playbook.yml --extra-vars "ansible_ssh_user=root ansible_ssh_pass=password"
Something I tried (and it worked) when I had this same issue:
ansible target-server-name -m command -a "whatever command" -k
The -k flag prompts you for the SSH password of the target server.
Also add the following to the /etc/ansible/hosts file:
[target-server-name]
target_server_ip
Example:
ansible target-server-name -m ping -k
I'm not able to connect to a host in Ansible. This is the error:
192.168.1.12 | UNREACHABLE! => {
"changed": false,
"msg": "ERROR! SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which
will enable SSH debugging output to help diagnose the issue",
"unreachable": true }
This is my hosts file:
[test]
192.168.1.12
And this is the ad-hoc command:
ansible all -m ping
I'm able to connect via raw ssh.
By default Ansible tries to use SSH keys. It seems that your keys are wrong or not set up correctly. Try password authentication instead:
ansible all -m ping --ask-pass --ask-sudo-pass
I hope it helps.
@bigdestroyer, to set up SSH public keys use this playbook:
- hosts: all
  remote_user: root
  vars:
    authorized_key_list:
      - name: root
        authorized_keys:
          - key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
            state: present
  roles:
    - { role: GROG.authorized-key }
Execute this playbook with --ask-pass, since you'll be using it to set up public key authentication:
ansible-playbook setup_ssh.yml --ask-pass
This role will add your current user's public key to the remote host's authorized_keys file.
NOTE
--ask-pass prompts only once per run, so it will only work for hosts that share the same password.
I usually use --limit and run the playbook in batches against hosts that have the same password.
For example, let's assume host1, host2 and host3 have the password foo, and host4 and host5 have bar:
ansible-playbook setup-ssh.yml --ask-pass -l host1,host2,host3
(provide the password foo)
ansible-playbook setup-ssh.yml --ask-pass -l host4,host5
(provide the password bar)
THEN
ansible -m ping host1,host2,host3,host4,host5
You can read the role documentation here
For those who come here running Ansible 2.6 or later: --ask-sudo-pass is now deprecated. The correct syntax is:
ansible all -m ping --ask-pass --ask-become-pass
I encountered this issue because my SSH keys weren't set up correctly. I fixed it as follows:
Make sure each machine has an SSH key set up, using the ssh-keygen command:
ssh-keygen
Pass your public key over to the target machine, using the ssh-copy-id command:
ssh-copy-id -i <location of id_rsa.pub> <ip-address of host>
This helped resolve my error, hopefully it helps!
I resolved this issue by adding the --ask-pass argument.
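For example, with the same ad-hoc ping as above:
ansible all -m ping --ask-pass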