rsync ssh file copying to GCE instance fails with permission denied - ssh

I'm executing the following on my local machine which is authenticated with my project in Google Compute Engine via the Google Cloud SDK:
rsync -avu --omit-dir-times -e ssh \
-o UserKnownHostsFile=/dev/null \
-o CheckHostIP=no -o StrictHostKeyChecking=no \
-i /home/fredrik/.ssh/google_compute_engine \
/somefolder/hello.txt \
1.2.3.4:/mymount/
...where 1.2.3.4 is the public IP of my GCE instance and I get the following error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
The machine I execute the command on is authenticated and can, e.g., successfully execute gcloud compute ssh instance-1 in order to SSH into the same instance.
What do I need to do in order to successfully execute the rsync command?

Quotes around the -e argument solved it. Without them, rsync takes only ssh as the remote shell and parses the remaining flags itself (to rsync, -o means --owner and -i means --itemize-changes), so ssh never receives the identity file:
rsync -avu --omit-dir-times -e "ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/fredrik/.ssh/google_compute_engine" /somefolder/hello.txt 1.2.3.4:/mymount/

Related

Trying to test run playbook. Getting permission denied

I am trying to do a "dry-run" of a playbook. The machine I am targeting I am able to ssh into and vice versa. When I run the ansible all -m ping -vvv this is the output.
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/ping.py
<192.168.4.136> ESTABLISH SSH CONNECTION FOR USER: hwaraich207970
<192.168.4.136> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=username -o ConnectTimeout=10 -o ControlPath=/home/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.4.136 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 ` " && echo ansible-tmp-1604952591.08-32914241935398="` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 `" ) && sleep 0'"'"''
192.168.4.136 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
"unreachable": true
This can happen even if you have set up passwordless SSH between System A and System B (say, using the ssh-copy-id command, or by manually copying the public key, i.e. the content of the id_rsa.pub file on System A, into the .ssh/authorized_keys file on System B). If it does, one possible reason is the user home directories.
If the user's home directory on System A is, say, /home/tester while on System B it is /users/tester, passwordless SSH might not work. Making sure both users have the same home directory solves this. I observed this case on CentOS machines, and once the home directories matched, the issue was resolved.
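A quick way to compare the home directories on both systems (a sketch, assuming the user is named tester):
# run on System A and on System B, then compare the sixth field (the home directory)
getent passwd tester | cut -d: -f6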
Ansible typically works when the SSH public key of the control node is added to the authorized keys of the remote node. This enables Ansible to SSH into the remote node from the control node without needing a password.
There is an alternate way to make Ansible work without sharing public keys, using sshpass. In this case, you need to supply the password of the remote user via the ansible_ssh_pass variable. This can be done via the inventory file, group_vars, or extra-vars.
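For example (a sketch; the group name, user, and password are placeholders matching the log above), an INI inventory entry using a password instead of keys could look like:
[remote]
192.168.4.136 ansible_user=username ansible_ssh_pass=secret
Note that sshpass must be installed on the control node for password authentication to work.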
Regarding the error you shared: it says "Permission denied", meaning something is wrong with either the SSH key sharing or the password setup:
"msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
Debug mode provides more info related to the issue:
SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=username -o ConnectTimeout=10 -o ControlPath=/home/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.4.136 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 ` " && echo ansible-tmp-1604952591.08-32914241935398="` echo ~/.ansible/tmp/ansible-tmp-1604952591.08-32914241935398 `" ) && sleep 0'"'"''
Some relevant information you can extract from the above snippet:
-o User=username: this means the playbook is trying to connect as the user username.
-o PasswordAuthentication=no: this forces Ansible to use public keys instead of a password.
This authentication failure is happening for 192.168.4.136.
Please check the official Ansible documentation for details on how Ansible connects to hosts.
Also see the OpenSSH documentation for generating and sharing SSH keys between the nodes.
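A minimal key-sharing workflow between the control node and the remote node might look like this (a sketch, reusing the user and host from the log above):
# on the control node: generate a key pair if one doesn't exist yet
ssh-keygen -t rsa -b 4096
# copy the public key into the remote user's authorized_keys
ssh-copy-id username@192.168.4.136
# verify that passwordless login works before re-running ansible
ssh username@192.168.4.136 true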

Docker-machine can't use userdata to add a key to an SSH cloud image

My provider: OpenStack
VM OS: Ubuntu 16.04
Docker-machine version: 0.14.0
Problem:
I want to use userdata to add another public key to authorized_keys, using the --openstack-user-data-file option to specify my userdata.yml.
Here is my userdata.yml:
#cloud-config
users:
  - default
  - name: ubuntu
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa XXXXXXXXXXXXXX
Use the docker-machine command to create the VM:
docker-machine --debug create --driver openstack \
  --openstack-auth-url http://x.x.x.x:5001/v3 \
  --openstack-domain-id default \
  --openstack-endpoint-type adminURL \
  --openstack-floatingip-pool ext-net \
  --openstack-keypair-name mykey \
  --openstack-flavor-id 4 \
  --openstack-image-name ubuntu-16.04-cloud \
  --openstack-net-name private \
  --openstack-password XXXXX \
  --openstack-private-key-file /home/demo/id_rsa \
  --openstack-sec-groups default \
  --openstack-ssh-user ubuntu \
  --openstack-tenant-name admin \
  --openstack-user-data-file /home/demo/userdata.yml \
  --openstack-username admin \
  vm
After creating the VM, docker-machine gets stuck at "waiting for ssh to be available".
Here is the debug output:
Getting to WaitForSSH function...
(vm) Calling .GetSSHHostname
(vm) Calling .GetSSHPort
(vm) Calling .GetSSHKeyPath
(vm) Calling .GetSSHKeyPath
(vm) Calling .GetSSHUsername
Using SSH client type: external
Using SSH private key: /root/.docker/machine/machines/vm/id_rsa (-rw-------)
&{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none ubuntu@10.50.2.36 -o IdentitiesOnly=yes -i /root/.docker/machine/machines/vm/id_rsa -p 22] /usr/bin/ssh <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: exit status 255:
Error getting ssh command 'exit 0' : ssh command error:
command : exit 0
err : exit status 255
output :
I tried to SSH to the VM with:
ssh -i /root/.docker/machine/machines/vm/id_rsa ubuntu@10.50.2.36
But got the error message:
Permission denied (publickey).
So I tried the other key, the one given via --openstack-private-key-file /home/demo/id_rsa:
ssh -i /home/demo/id_rsa ubuntu@10.50.2.36
That ssh was successful!
I compared the two keys, /root/.docker/machine/machines/vm/id_rsa and /home/demo/id_rsa,
and they are identical.
I am confused: if the keys are the same, why can one SSH in while the other can't?
In order for docker-machine to set up a virtual machine on OpenStack with user data, you need to activate the config_drive option: docker-machine create --driver openstack --openstack-config-drive [OTHER_OPTIONS] <MACHINE_NAME>
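Once the machine boots, you can check whether cloud-init actually picked up the user data (a sketch; run these on the VM):
# list the keys that actually landed in authorized_keys
cat ~/.ssh/authorized_keys
# inspect cloud-init's log for user-data processing errors
sudo tail -n 50 /var/log/cloud-init.log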

Ansible giving ssh_exchange_identification ERROR

My Ansible playbook connects to a remote node through a proxy.
When the playbook runs, it gives the following error during the SSH step.
[root@vm1-msdp ANSIBLE]# ansible-playbook fend_file.yaml -i env/target -vvvvv
PLAY [LAB1] *******************************************************************
GATHERING FACTS ***************************************************************
<10.169.99.222> ESTABLISH CONNECTION FOR USER: msdp
<10.169.99.222> REMOTE_MODULE setup
<10.169.99.222> EXEC sshpass -d9 ssh -C -tt -vvv -o ProxyCommand="nc -x 142.133.134.161:1088 %h %p" -o StrictHostKeyChecking=no -o GSSAPIAuthentication=no -o PubkeyAuthentication=no -o User=msdp -o ConnectTimeout=10 10.169.99.222 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1473708903.98-28407509853006 && echo $HOME/.ansible/tmp/ansible-tmp-1473708903.98-28407509853006'
fatal: [10.169.99.222] => SSH Error: ssh_exchange_identification: Connection closed by remote host
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
But when I run the ssh command myself, I am able to successfully connect.
[root@vm1-msdp ANSIBLE]# ssh -C -tt -o ProxyCommand="nc -x 142.133.134.161:1088 %h %p" -o StrictHostKeyChecking=no -o GSSAPIAuthentication=no -o PubkeyAuthentication=no -o User=root -o ConnectTimeout=10 10.169.99.222
root@10.169.99.222's password:
Last login: Mon Sep 12 12:28:19 2016 from 10.169.102.6
root@IC02 ~ #
Do I need to clear any Ansible files?
When you run the SSH command manually, you are specifying the root user, but your Ansible playbook is using your local user, msdp. Try setting the ansible_user variable in your inventory file, something like:
10.169.99.222 ansible_user=root
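To confirm the fix before re-running the playbook, you can test the same connection manually as root, reusing the proxy command from the question:
ssh -o ProxyCommand="nc -x 142.133.134.161:1088 %h %p" -o StrictHostKeyChecking=no root@10.169.99.222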

Ansible: "Failed to connect to the host via ssh" error

I'm trying to get set up with Ansible for the first time, to connect to a Raspberry Pi. Following the official 'getting started' steps, I've made an inventory file:
192.168.1.206
...but the ping fails as follows:
$ ansible all -m ping -vvv
No config file found; using defaults
<192.168.1.206> ESTABLISH SSH CONNECTION FOR USER: pi
<192.168.1.206> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=pi -o ConnectTimeout=10 -o ControlPath=/Users/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.1.206 '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1464128959.67-131325759126042 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1464128959.67-131325759126042 `" )'"'"''
192.168.1.206 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
This looks the same as this question, but adding the password/user bits makes no difference for me, shouldn't be necessary for a ping, and isn't in the official example anyhow. In any case, I'd prefer to configure Ansible to use a specific public/private key pair (as per the ssh -i ~/.ssh/keyfile method).
Grateful for assistance.
Oh, and yes, the Raspberry Pi is reachable at that address:
$ ping 192.168.1.206
PING 192.168.1.206 (192.168.1.206): 56 data bytes
64 bytes from 192.168.1.206: icmp_seq=0 ttl=64 time=83.822 ms
Despite what its name suggests, Ansible's ping module doesn't perform an ICMP ping.
It tries to connect to the host over SSH and makes sure a compatible version of Python is installed (as stated in the documentation):
ping - Try to connect to host, verify a usable python and return pong on success.
If you want to use a specific private key, you can specify ansible_ssh_private_key_file in your inventory file:
[all]
192.168.1.206 ansible_ssh_private_key_file=/home/example/.ssh/keyfile
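Equivalently (a sketch), the key can be set once for all hosts in ansible.cfg instead of per host:
[defaults]
private_key_file = /home/example/.ssh/keyfile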
This works for me:
10.23.4.5 ansible_ssh_pass='password' ansible_user='root'
You can also troubleshoot by running ssh in debug mode and comparing the results of:
ssh -v pi@192.168.1.206
with:
ansible all -m ping -vvvv
In the ssh -v output, pay attention to which identity files are offered and accepted, and compare them with the key Ansible uses in its SSH: EXEC line.

Ansible : SSH Error: ControlPath too long

I run Ubuntu 15.10 and am trying to run Vagrant with Ansible.
Before starting, I should say that I don't have much experience with server management, and especially with Ansible.
The reason I am setting up my system this way is that I have started working on a project that requires this installation.
The problem is that while provisioning the Vagrant machine I get the following message:
<aaa.dev> ESTABLISH CONNECTION FOR USER: vagrant
<aaa.dev> REMOTE_MODULE setup
<aaa.dev> EXEC ssh -C -tt -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o IdentityFile=/media/merianos/Large Internal/Vagrant/ansible-project/.vagrant/machines/default/virtualbox/private_key -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/merianos/.ansible/cp/%h-%r" -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 aaa.dev /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1446622406.54-199921739516776 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1446622406.54-199921739516776 && echo $HOME/.ansible/tmp/ansible-tmp-1446622406.54-199921739516776'
fatal: [aaa.dev] => SSH Error: ControlPath too long
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
So, is it possible for someone to help me with this issue?
I have already tried this article: https://github.com/ansible/ansible/issues/11536 and changed the control_path in my ansible.cfg to control_path = %(directory)s/%%h-%%r, but it is still not working.
Note: my installation path contains a space that I can't remove, because many other projects run on the same HDD and reconfiguring all of them would be huge. I don't know if that space is the problem, but I mention it just in case.
UPDATE #1
Result before I change anything:
<aaa.dev> ESTABLISH CONNECTION FOR USER: vagrant
<aaa.dev> REMOTE_MODULE setup
<aaa.dev> EXEC ssh -C -tt -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o IdentityFile=/media/merianos/Large Internal/Vagrant/ansible-project/.vagrant/machines/default/virtualbox/private_key -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/merianos/.ansible/cp/ansible-ssh-%h-%p-%r" -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 aaa.dev /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1446628138.53-155680153347939 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1446628138.53-155680153347939 && echo $HOME/.ansible/tmp/ansible-tmp-1446628138.53-155680153347939'
fatal: [aaa.dev] => SSH Error: ControlPath too long
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Result with control_path = %(directory)s/%%h-%%r:
<aaa.dev> ESTABLISH CONNECTION FOR USER: vagrant
<aaa.dev> REMOTE_MODULE setup
<aaa.dev> EXEC ssh -C -tt -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o IdentityFile=/media/merianos/Large Internal/Vagrant/ansible-project/.vagrant/machines/default/virtualbox/private_key -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/merianos/.ansible/cp/%h-%r" -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 aaa.dev /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1446628320.4-231606404275563 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1446628320.4-231606404275563 && echo $HOME/.ansible/tmp/ansible-tmp-1446628320.4-231606404275563'
fatal: [aaa.dev] => SSH Error: ControlPath too long
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
UPDATE #2
After setting ssh_args = -o ControlMaster=off I get the following result:
<aaa.dev> ESTABLISH CONNECTION FOR USER: vagrant
<aaa.dev> REMOTE_MODULE setup
<aaa.dev> EXEC ssh -C -tt -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o IdentityFile=/media/merianos/Large Internal/Vagrant/ansible-project/.vagrant/machines/default/virtualbox/private_key -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/merianos/.ansible/cp/ansible-ssh-%h-%p-%r" -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 aaa.dev /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1446628489.4-10074395967553 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1446628489.4-10074395967553 && echo $HOME/.ansible/tmp/ansible-tmp-1446628489.4-10074395967553'
fatal: [aaa.dev] => SSH Error: ControlPath too long
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
In general, for every modification I make, the error message stays the same, so maybe the configuration is being applied at some other level, not from ansible.cfg.
Unfortunately I don't know where that would be :(
I described the problem in a similar question.
You need to change the control path to something shorter (if you have a long hostname). As a test you can try just ./master, but for real use you should use at least ./s/%%h-%%r.
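In ansible.cfg that would look like this (a sketch; %(directory)s is Ansible's placeholder for its local control-socket directory, and the percent signs are doubled because % must be escaped in the INI file):
[ssh_connection]
control_path = %(directory)s/%%h-%%r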
The very first sentence of the documentation for OpenSSH specific settings in Ansible says:
Under the [ssh_connection] header, the following settings are tunable for SSH connections.
So you need to place the ssh_args variable in the [ssh_connection] section of ansible.cfg, for example:
[defaults]
timeout = 600
[ssh_connection]
ssh_args = -o ControlMaster=off
In fact, overriding ssh_args with an empty value disables Ansible's defaults for ControlMaster/ControlPersist/ControlPath, so it can simply be:
[ssh_connection]
ssh_args =
Short Answers
Pass this as an argument to ansible or ansible-playbook commands:
-e "ansible_ssh_common_args='-o ControlPath=/tmp/ssh-%r#%h:%p'"
Or as an environment variable:
export ANSIBLE_SSH_ARGS="-o ControlPath=/tmp/ssh-%r@%h:%p"
Or as an argument in the host definition in the inventory file:
[web_servers]
host ansible_ssh_common_args='-o ControlPath=/tmp/ssh-%r@%h:%p'
Avoid using the /tmp directory, though, as it's not secure (see the long answer below).
Long Answer
"ControlPath too long" is an error that comes from SSH itself. SSH creates a control (Unix) socket to reuse a TCP connection, and the control path is where this socket is saved. Unix socket paths have a hard length limit (roughly 104-108 bytes on most systems), which is why a long prefix overflows it.
Ansible may use its local config directory as the location of the ControlPath, and if that directory's path is long, this error is raised. The same can happen if the home directory itself sits at a long path.
Typically, this can be fixed on the SSH side by simply using a shorter path in the local ssh config file (~/.ssh/config):
Host *
    ControlPath /tmp/ssh-%r@%h:%p
This creates the socket file in the /tmp directory (a very short path) using the username (%r), hostname (%h), and port number (%p) of the SSH connection as part of the filename.
/tmp is world-writable and is not safe: an attacker could use the socket file in /tmp to log in to the target SSH server. Prefer a directory under your home directory, provided it sits at a short path.
Ansible should typically read ~/.ssh/config and pick up the ControlPath settings. If in some cases it doesn't, the following can be done:
Use the ANSIBLE_SSH_CONFIG environment variable, like:
export ANSIBLE_SSH_CONFIG=~/.ssh/config
Or use the Ansible configuration file: create an ansible.cfg file in the Ansible project directory and add this to it:
[ssh_connection]
ssh_args = -F /full/path/to/ssh_config
For me, ~ or \~ in the cfg file didn't work, so ~/.ssh/config wouldn't load; you need to specify the full path to the ssh config, e.g. /home/xxxx/.ssh/config.