I am new to Ansible and trying to get it working. I have tried every approach I could find on the Internet, including all the related questions, but I still can't resolve the error. How can I fix it?
I installed Ansible on my MacBook Pro. I created a VM whose IP address is 10.4.1.141; the host's IP address is 10.4.1.140.
I tried to connect to the VM from the host via SSH. It connected with the following command:
ssh user@10.4.1.141
and I got shell access, so my SSH connection is working fine.
Now I tried the following command for Ansible:
ansible all -m ping
The content of /etc/ansible/hosts is 10.4.1.141.
Then it shows the following error:
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to rerun the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Then I tried creating a config file in the ~/.ssh/ folder on the host machine, but the error is still the same.
The content of the config file is:
IdentityFile ~/.ssh/id_rsa
which is the path to my private key.
Then I ran the same command ansible all -m ping and got the same error again.
Next, I tried another command:
ansible all -m ping -u user --ask-pass
It asked for the SSH password. I entered it (I am very sure the password is correct), but I got this error:
10.4.1.141 | FAILED => FAILED: Authentication failed.
This is the log using -vvvv:
<10.4.1.141> ESTABLISH CONNECTION FOR USER: rajatg
<10.4.1.141> REMOTE_MODULE ping
<10.4.1.141> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/rajatg/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 10.4.1.141 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && echo $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007'
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
I am still not able to figure out what the problem is. Asking here is my last resort after doing all my research. This is the link I referred to.
I fixed the issue. The problem was in my /etc/ansible/hosts file.
The content of /etc/ansible/hosts was 10.4.1.141. When I changed it to rajat@10.4.1.141, the issue was fixed.
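An equivalent and arguably cleaner fix (a sketch, assuming the remote login user really is rajat) is to keep the bare IP in the inventory and set the user as a host variable instead:
10.4.1.141 ansible_ssh_user=rajat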
If you log in with ssh user@10.4.1.141:
Option 1
Then make sure that your hosts file inside /etc/ansible contains:
[server01]
10.4.1.141
Then from within /etc/ansible run:
ansible all -m ping -u user --ask-pass
Option 2
If you want to log in without typing the SSH password, then in your hosts file inside /etc/ansible add:
[server01]
10.4.1.141 ansible_ssh_pass=xxx ansible_ssh_user=user
Then from within /etc/ansible run:
ansible all -m ping
For me it worked both ways.
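As an aside (not part of the original answer): on Ansible 2.0 and later the same variables are usually spelled ansible_user and ansible_password, so the Option 2 entry would look like:
[server01]
10.4.1.141 ansible_user=user ansible_password=xxx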
My case was that I have multiple private keys in my ~/.ssh directory.
Here is how I fixed it, by telling Ansible to use a specific private key:
ansible-playbook -i ../../inventory.ini --private-key=~/.ssh/id_rsa_ansiadmin update.yml
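If you would rather not pass the key on every run, the same key can be configured in ansible.cfg (a sketch, reusing the key path from the command above):
[defaults]
private_key_file = ~/.ssh/id_rsa_ansiadmin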
The previous solutions didn't work for me, unfortunately (DevOps layman here!).
But the one below worked for me.
Change your inventory file to:
[webserver]
10.4.1.141 ansible_user=ubuntu
ansible webserver --private-key pem_file.pem -m ping
Running the command with -vvvv helped me to debug it further.
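Equivalently, the key can be attached to the host in the inventory itself (a sketch, reusing the names from this answer):
[webserver]
10.4.1.141 ansible_user=ubuntu ansible_ssh_private_key_file=pem_file.pem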
Reference: Failed to connect to the host via ssh: Permission denied (publickey,password) #19584
If you execute Ansible with sudo, for example
sudo ansible -m ping all
Please keep in mind that root's public key then has to be on the server you want to reach as well, not only the public key of your non-root user. Otherwise, you get the error message above as well.
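In that case you also need to authorize root's key on the target, for example (a sketch, assuming root already has a key pair at the default path):
sudo ssh-copy-id -i /root/.ssh/id_rsa.pub user@10.4.1.141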
Most of these issues happen when connecting to Ubuntu machines listed in the hosts file.
Solution: tell Ansible which user to connect as, because Ubuntu does not enable a root login by default.
In the hosts file:
[Test-Web-Server]
10.192.168.10 ansible_ssh_pass=foo ansible_ssh_user=foo
The problem lies in the inventory file.
vi /etc/ansible/hosts
It should be:
[webserver]
192.###.###.### ansible_ssh_user=user ansible_ssh_pass=pass
I have fixed this issue as well.
My issue was also in my hosts file, /etc/ansible/hosts.
I changed my hosts file from
172.28.2.101
to
name-of-server-in-ssh-config
I had IP addresses in the hosts file. Since I already have SSH configurations set up for those names, I do not need to put a variable or username in front of the hosts.
[name-stg-web]
server-name-stg-web[01:02]
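For reference, the corresponding entry in ~/.ssh/config might look roughly like this (a hypothetical sketch; the user and key path are placeholders):
Host server-name-stg-web01
    HostName 172.28.2.101
    User deploy
    IdentityFile ~/.ssh/id_rsa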
What first worked for me was to hardcode the target machine's root password in /etc/ansible/hosts like this:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root ansible_password=root_password_in_target
But of course this is not recommended, because of the security implications.
Then I figured out a solution from the docs, by doing:
ssh-agent bash
and then
ssh-add /my/private/ssh-key
After this, my hosts file looks like this and ansible all -m ping works fine:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root
Specifying the username in the /etc/ansible/hosts file can also resolve the issue.
sudo vim /etc/ansible/hosts
[test-server]
ip_address ansible_user="remote pc's username"
[jenkinsserver]
publicdnsname ansible_user=ubuntu ansible_ssh_private_key_file=ubuntu.cer
Over the years, some operating systems have begun to require stronger SSH keys and no longer accept RSA and DSA keys. The message Permission denied (publickey,password) may therefore indicate that the OS needs a stronger SSH key instead of the default id_rsa.
Use the following command to generate a new key:
ssh-keygen -t ecdsa -f ~/.ssh/id_ecdsa -N ""
Ensure that the server has the option
PubkeyAuthentication yes
in /etc/ssh/sshd_config or /etc/openssh/sshd_config.
Some other options may be required as well (read the documentation of your OS first), for example:
Protocol 2
PermitRootLogin without-password
AuthorizedKeysFile /etc/openssh/authorized_keys/%u /etc/openssh/authorized_keys2/%u .ssh/authorized_keys .ssh/authorized_keys2
Do not forget to restart the sshd service to apply the changes.
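On systemd-based distributions that is typically:
sudo systemctl restart sshd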
Copy the new key to the server with ssh-copy-id -i ~/.ssh/id_ecdsa user@remote_host (substituting your own user and host); then you can connect to the remote server using Ansible.
On the control (host) machine, install sshpass with the command below:
sudo apt install sshpass -y
and use this command to ping:
ansible all -i slaves.txt -m ping -u test --ask-pass
It will give you an interactive keyboard prompt, where you enter the password of the slave machine.
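Here slaves.txt is just a plain inventory file; a hypothetical example:
[slaves]
10.4.1.141
10.4.1.142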
Related
I have an ansible playbook which connects to a virtual machine via a non-standard ssh port (forwarded to localhost) and a different user than the host user (vagrant).
The ssh port is specified in the ansible inventory:
[vms]
localhost:2222
The username is given on the command line to ansible-playbook:
ansible-playbook -i <inventory from above> <some playbook> -u vagrant
Communication with the VM works correctly; however, %p always expands to 22 and %r to the host username.
Consequently, I cannot flush the SSH connection (for the user's changed group membership to take effect) like this:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop {{inventory_hostname}}
  delegate_to: 127.0.0.1
Am I making a silly mistake somewhere? Alternatively, is there a different way to flush the SSH connection?
The percent placeholders are not expanded by Ansible, but by ssh later on.
Sorry, I forgot to add the most important part.
Using
command: ssh -o ControlPath=[...] -O stop {{inventory_hostname}}
will use the default port, because you didn't specify it on the command line. You would also have to specify the port to "flush" the connection this way:
command: ssh -o ControlPath=[...] -O stop -p {{inventory_port}} {{inventory_hostname}}
But I don't think it is needed. Ansible should clean up the connections when the playbook ends, and I don't see any other reason to do that.
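A minimal sketch of the adjusted task, assuming the forwarded port ends up in ansible_port (older Ansible releases exposed it as ansible_ssh_port instead):
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop -p {{ ansible_port }} {{ inventory_hostname }}
  delegate_to: 127.0.0.1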
I generated an SSH key and copied it to the remote server. When I try to SSH to that server, everything works fine:
ssh user@ip_address
The user is not root. If I try to SSH through Ansible:
ansible-playbook -i hosts playbook.yml
with ansible playbook:
---
- hosts: web
  remote_user: user
  tasks:
    - name: test connection
      ping:
and hosts file:
[web]
192.168.0.103
I got an error:
...
Permission denied (publickey,password)
What's the problem?
Ansible is using a different key than the one you are using to connect to that 'web' machine.
You can explicitly configure Ansible to use a specific private key with
private_key_file=/path/to/key_rsa
as mentioned in the docs. Make sure that you authorize the key Ansible uses for the remote user on the remote machine with ssh-copy-id -i /path/to/key_rsa.pub user@webmachine_ip_address.
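A minimal ansible.cfg sketch with that setting (the key path is the placeholder from above):
[defaults]
private_key_file = /path/to/key_rsa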
In my case I got a similar error while running an Ansible playbook after the host changed its fingerprint. I noticed this when trying to establish an SSH connection from the command line. After running ssh-keygen -f "/root/.ssh/known_hosts" -R my_ip, the problem was solved.
Run the play as below; by default Ansible runs the play as root.
ansible-playbook -i hosts playbook.yml -u user
If you still get the error, run the command below and paste the output here.
ansible-playbook -i hosts playbook.yml -u user -vvv
I'm stuck in Permission denied (publickey) hell, trying to copy a public key to a remote server so Jenkins can rsync files during builds.
Running:
sudo ssh-copy-id -i id_rsa.pub ubuntu@xx.xx.xx.xx
I have done this for another server, but that one has a separate key pair for SSH assigned by EC2, and my current guess is that ssh-copy-id is trying to use the wrong private key for this connection. Is there a way to pass -vv to ssh-copy-id so I can see which key it's trying to use? I've looked into the -o switch, but can't seem to get it right.
Thank you.
So here's what I've done:
I added the following to /etc/ssh/ssh_config:
Host xx.xx.xx.xx
    User ubuntu
    IdentityFile ~/.ssh/key-name-for-that-machine.pem
Then I copied key-name-for-that-machine.pem into /var/lib/jenkins/.ssh.
I didn't run ssh-copy-id again; I simply have rsync use that key file when moving stuff. Here's the rsync script:
rsync -rvh -e 'ssh -v' "/tmp/project-DEV-${BUILD_ID}/" ubuntu#xx.xx.xx.xx:"/www/www.project-dir.net/"
My guess would be to run it without sudo, but that depends on how you normally log into the server.
If you normally log in using ssh ubuntu@xx.xx.xx.xx, then drop the sudo.
If not, then try to log in with sudo ssh ubuntu@xx.xx.xx.xx.
Reading your question, at least one of these should fail.
I am using gcutil to attempt to SSH into my new Google Cloud instance. When it went through the setup, it asked me to enter a passphrase. Now when I attempt to access the instance, it requests the passphrase; I enter it, and I get Permission denied.
Here is the error that I'm getting:
INFO: Running command line:
ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /Users/admin/.ssh/google_compute_engine -A -p 22 root#173.255.119.141 --
Warning: Permanently added '173.255.119.141' (RSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
The passphrase you entered when you set it up initially should be the one for /Users/admin/.ssh/google_compute_engine. From the output you've got here, I'd suggest you try with the username of "admin" instead of "root". If that fails, then open the console webpage and make sure you've got the ssh-key entered into the metadata for this VM.
We have created a startup script to self-manage and troubleshoot SSH connectivity issues: https://github.com/GoogleCloudPlatform/compute-ssh-diagnostic-sh/
I want to sync a directory /var/sites/example.net/ from a remote machine to a directory at the same path on my local machine.
The remote machine only authenticates SSH connections with keys, not passwords.
On my local machine I have an alias set up in ~/.ssh/config so that I can easily run ssh myserver to get in.
I'm trying rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ but it fails because my local user does not have permission to edit the local directory /var/sites/example.net/.
If I try sudo rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ (just adding sudo), I can fix the local permission issue, but then I encounter a different one: my local root user does not see the proper SSH key or SSH alias.
Is there a way I can accomplish this file sync by modifying this rsync command? I'd like to avoid changing anything else (e.g. no changes to file permissions or SSH setup).
Try this:
sudo rsync -e "sudo -u localuser ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/
This runs rsync as root, but the -e flag causes rsync to run ssh as your local user (using sudo -u localuser), so the ssh command has access to the necessary credentials. Rsync itself is still running as root, so it has the necessary filesystem permissions.
Just improving on top of larsks's response:
sudo rsync -e "sudo -u $USER ssh" ...
So in your case change rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ to sudo rsync -e "sudo -u $USER ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/.
With regard to larsks' answer, if you have your key loaded into the ssh-agent (which is my use case), you can instead do:
sudo rsync -e "env SSH_AUTH_SOCK=$SSH_AUTH_SOCK ssh" /source/path /destination/path
Instead of the double sudo.
My use case, if anyone is interested in replicating it, is that I'm SSHing to a non-root sudoer account on remote A and need to rsync root-owned files between remote A and remote B. Authentication to both remotes is done using keys I have on my real local machine, and I use -A to forward the ssh-agent authentication socket to remote A.
Guss's answer works well if you want to use sudo rsync for local file permissions but want to utilise your user's SSH session. However, it falls short when you also want to use your SSH config file.
You can follow Wernight's approach by using sudo to switch the user for the SSH connection and supplying a path to the config file, but this won't work if you have to enter a passphrase. So, you can combine both approaches by making use of the --preserve-env flag:
sudo --preserve-env=SSH_AUTH_SOCK rsync -e "sudo --preserve-env=SSH_AUTH_SOCK -u $USER ssh" hostname:/source/path /destination/path
Note that it's necessary to cascade this flag through both sudo commands so it does look a bit messy!
As requested by Derek above:
When sudo asks for a password, you need to modify the sudoers config with sudo visudo and add an entry with NOPASSWD: in front of the rsync command.
For details you could consult man sudoers.
This will work in every mode, even via cron, at, systemd service + timer, etc.
Test it with: ssh <user>@<your-server> "sudo <your-rsync-command>"
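A minimal sketch of such a sudoers entry (the user name and rsync path are hypothetical; adjust both to your system):
youruser ALL=(root) NOPASSWD: /usr/bin/rsync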