MacBook-Pro:rails1 woo$ ssh vagrant@10.0.1.92
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-91-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Tue Jul 5 03:52:20 UTC 2016
System load: 0.0 Users logged in: 1
Usage of /: 4.0% of 39.34GB IP address for eth0: 10.0.2.15
Memory usage: 32% IP address for eth1: 10.0.1.100
Swap usage: 0% IP address for eth2: 10.0.1.92
Processes: 80
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Tue Jul 5 03:52:20 2016 from 10.0.1.19
vagrant@vagrant-ubuntu-trusty-64:~$
But when I run Ansible:
$ ansible -vvvv all -m ping -u vagrant
/Library/Python/2.7/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
Using /Users/woo/vagrant_vms/rails1/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<10.0.1.92> ESTABLISH SSH CONNECTION FOR USER: vagrant
<10.0.1.92> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 10.0.1.92 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1467746604.02-144506913281055 `" && echo ansible-tmp-1467746604.02-144506913281055="` echo $HOME/.ansible/tmp/ansible-tmp-1467746604.02-144506913281055 `" ) && sleep 0'"'"''
10.0.1.92 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I've done:
cat ~/.ssh/id_rsa.pub | ssh vagrant@10.0.1.92 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
and it was successful, as verified by the ssh login shown above.
I don't understand why I keep getting the "Failed to connect" message.
10.0.1.92 is in the hosts file, and the VM's IP is set to that address.
Can you try this:
ansible -vvvv all -m ping -u vagrant
Try issuing these two commands before connecting to the Vagrant box with Ansible:
$ eval $(ssh-agent -s)
$ ssh-add
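A minimal sketch of that workflow, assuming the key pair copied to the VM is the ~/.ssh/id_rsa pair from the question:
$ eval $(ssh-agent -s)      # start an agent for this shell
$ ssh-add ~/.ssh/id_rsa     # load the private key into the agent
$ ssh-add -l                # confirm the key is listed
$ ansible -vvvv all -m ping -u vagrant
If the agent route doesn't help, pointing Ansible directly at the key is another option worth trying: ansible all -m ping -u vagrant --private-key ~/.ssh/id_rsa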
In step 10 of the tutorial
https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product
for deploying a production-ready Kubernetes cluster with Kubespray, an error occurred when running the ansible-playbook command. The error is:
ERROR! SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh
Passwordless SSH is set up between the nodes, and I can ssh from each node without a password.
Can anyone help me?
Thanks.
This is my command and its output:
master-node@master-node:~/kubespray$ sudo ansible all -i inventory/mycluster/hosts.ini -m ping -vvv
ansible 2.7.8
config file = /home/master-node/kubespray/ansible.cfg
configured module search path = [u'/home/master-node/kubespray/library']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /home/master-node/kubespray/ansible.cfg as config file
/home/master-node/kubespray/inventory/mycluster/hosts.ini did not meet host_list requirements, check plugin documentation if this is unexpected
/home/master-node/kubespray/inventory/mycluster/hosts.ini did not meet script requirements, check plugin documentation if this is unexpected
/home/master-node/kubespray/inventory/mycluster/hosts.ini did not meet yaml requirements, check plugin documentation if this is unexpected
Parsed /home/master-node/kubespray/inventory/mycluster/hosts.ini inventory source with ini plugin
META: ran handlers
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/system/ping.py
<192.168.1.107> ESTABLISH SSH CONNECTION FOR USER: worker-node
<192.168.1.107> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=worker-node -o ConnectTimeout=10 -o ControlPath=/home/master-node/.ansible/cp/e24ed02313 192.168.1.107 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/system/ping.py
<192.168.1.142> ESTABLISH SSH CONNECTION FOR USER: master-node
<192.168.1.142> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=master-node -o ConnectTimeout=10 -o ControlPath=/home/master-node/.ansible/cp/01ac2924af 192.168.1.142 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
master-node | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to remote host \"192.168.1.142\". Make sure this host can be reached over ssh",
"unreachable": true
}
worker-node | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to remote host \"192.168.1.107\". Make sure this host can be reached over ssh",
"unreachable": true
}
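One way to narrow this down is to replay the exact ssh command Ansible printed in the EXEC lines above, both as the regular user and under sudo: because sudo ansible makes the SSH connection as root, ssh will by default look for root's keys in /root/.ssh rather than master-node's. A hedged sketch (host and user copied from the output above):
# as master-node, the user whose passwordless SSH was set up:
ssh -o PasswordAuthentication=no -o ConnectTimeout=10 worker-node@192.168.1.107 'echo ok'
# and under sudo, matching how the ansible command above was run:
sudo ssh -o PasswordAuthentication=no -o ConnectTimeout=10 worker-node@192.168.1.107 'echo ok'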
My provider: OpenStack
VM OS: Ubuntu 16.04
Docker-machine Version: 0.14.0
Problem:
I want to use userdata to add another public key to authorized_keys,
using the --openstack-user-data-file option to specify my userdata.yml.
Here is my userdata.yml:
#cloud-config
users:
  - default
  - name: ubuntu
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa XXXXXXXXXXXXXX
Use docker-machine command to create vm:
docker-machine --debug create --driver openstack \
  --openstack-auth-url http://x.x.x.x:5001/v3 \
  --openstack-domain-id defaule \
  --openstack-endpoint-type adminURL \
  --openstack-floatingip-pool ext-net \
  --openstack-keypair-name mykey \
  --openstack-flavor-id 4 \
  --openstack-image-name ubuntu-16.04-cloud \
  --openstack-net-name private \
  --openstack-password XXXXX \
  --openstack-private-key-file /home/demo/id_rsa \
  --openstack-sec-groups default \
  --openstack-ssh-user ubuntu \
  --openstack-tenant-name admin \
  --openstack-user-data-file /home/demo/userdata.yml \
  --openstack-username admin \
  vm
After creating the VM, docker-machine got stuck at "waiting for ssh to be available".
Here is debug output:
Getting to WaitForSSH function...
(vm) Calling .GetSSHHostname
(vm) Calling .GetSSHPort
(vm) Calling .GetSSHKeyPath
(vm) Calling .GetSSHKeyPath
(vm) Calling .GetSSHUsername
Using SSH client type: external
Using SSH private key: /root/.docker/machine/machines/vm/id_rsa (-rw-------)
&{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none ubuntu@10.50.2.36 -o IdentitiesOnly=yes -i /root/.docker/machine/machines/vm/id_rsa -p 22] /usr/bin/ssh <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: exit status 255:
Error getting ssh command 'exit 0' : ssh command error:
command : exit 0
err : exit status 255
output :
I tried to ssh to the VM with:
ssh -i /root/.docker/machine/machines/vm/id_rsa ubuntu@10.50.2.36
But I got the error message:
Permission denied (publickey).
So I tried the other key, the one given to --openstack-private-key-file (/home/demo/id_rsa):
ssh -i /home/demo/id_rsa ubuntu@10.50.2.36
ssh was successful!
I compared the two keys, /root/.docker/machine/machines/vm/id_rsa and /home/demo/id_rsa,
and they are identical.
I am confused: if the keys are the same, why does ssh work with one and not the other?
In order for docker-machine to set up a virtual machine on OpenStack, you need to activate the config_drive option: docker-machine create --openstack-config-drive [OTHER_OPTIONS] <MACHINE_NAME>
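Applied to the create command from the question, the only change is the extra flag; a sketch with the other flags kept exactly as shown above:
docker-machine --debug create --driver openstack \
  --openstack-config-drive \
  --openstack-user-data-file /home/demo/userdata.yml \
  --openstack-ssh-user ubuntu \
  vm   # plus the remaining --openstack-* flags from the question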
I know there are a few questions about this, but so far nothing seems to work for me.
I am trying to learn Ansible and I got stuck on this SSH connection issue. I think I did everything right, but I would appreciate some help. Let me post the files I have configured and the result I get.
### ansible.cfg ###
[defaults]
inventory = ./Playbooks/hosts
remote_user = ansible
private_key_file = .ssh/id_key.pub
### Playbooks/hosts ###
[server]
ubu1 ansible_ssh_host=192.16.20.69 ansible_ssh_pass=qwerty ansible_ssh_user=ansible
### Command executed ###
sudo ansible -m ping -vvvv ubu1
### The result I get ###
Using /home/ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<192.16.20.69> ESTABLISH SSH CONNECTION FOR USER: ansible
<192.16.20.69> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile=".ssh/id_key.pub"' -o User=ansible -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/ansible-ssh-%h-%p-%r 192.16.20.69 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1470766758.25-258256142287087 `" && echo ansible-tmp-1470766758.25-258256142287087="` echo $HOME/.ansible/tmp/ansible-tmp-1470766758.25-258256142287087 `" ) && sleep 0'"'"''
ubu1 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
Unfortunately, I am unable to continue learning Ansible until I get this solved. One thing I am wondering is whether ssh-agent is interfering with Ansible; if so, I must admit I have no clue what to do next.
Any help would be appreciated.
Thanks
Perry
The answer from comments above:
Try ANSIBLE_DEBUG=1 ansible -m ping -vvvv ubu1 and check the exact error message.
This allowed tracking down problems with the IP addresses and the Python installation.
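A couple of concrete things to try against the configuration shown above (a hedged sketch; the private-key path is an assumption). Note that private_key_file in ansible.cfg normally points at the private key, while the config in the question points at the .pub file, so overriding the key on the command line is a quick way to test:
$ ANSIBLE_DEBUG=1 ansible -m ping -vvvv ubu1
$ ansible -m ping -vvvv ubu1 -u ansible --private-key ~/.ssh/id_key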
I'm trying to get set up with Ansible for the first time, to connect to a Raspberry Pi. Following the official 'getting started' steps, I've made an inventory file:
192.168.1.206
... but the ping fails as follows:
$ ansible all -m ping -vvv
No config file found; using defaults
<192.168.1.206> ESTABLISH SSH CONNECTION FOR USER: pi
<192.168.1.206> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=pi -o ConnectTimeout=10 -o ControlPath=/Users/username/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.1.206 '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1464128959.67-131325759126042 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1464128959.67-131325759126042 `" )'"'"''
192.168.1.206 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
This looks the same as this question, but adding password/user settings has no effect for me, shouldn't be necessary for a ping, and isn't in the official example anyhow. In any case, I'd prefer to configure Ansible to use a specific public/private key pair (as with the ssh -i ~/.ssh/keyfile method).
Grateful for assistance.
Oh, and yes, the Raspberry Pi is reachable at that address:
$ ping 192.168.1.206
PING 192.168.1.206 (192.168.1.206): 56 data bytes
64 bytes from 192.168.1.206: icmp_seq=0 ttl=64 time=83.822 ms
Despite what its name might suggest, Ansible's ping module doesn't perform an ICMP ping.
It tries to connect to the host and makes sure a compatible version of Python is installed (as stated in the documentation):
ping - Try to connect to host, verify a usable python and return pong on success.
If you want to use a specific private key, you can specify ansible_ssh_private_key_file in your inventory file:
[all]
192.168.1.206 ansible_ssh_private_key_file=/home/example/.ssh/keyfile
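Alternatively, since the question mentions preferring the ssh -i ~/.ssh/keyfile style, the key can also be passed on the command line (a sketch using the key path from the question):
$ ansible all -m ping -u pi --private-key ~/.ssh/keyfile -vvv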
It works for me with the following inventory line (password-based authentication):
10.23.4.5 ansible_ssh_pass='password' ansible_user='root'
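Note that password-based SSH in Ansible relies on the sshpass program being installed on the control machine (the sshpass -d12 invocation visible in the debug output earlier on this page comes from it). A hedged install sketch for a Debian/Ubuntu control node:
$ sudo apt-get install sshpass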
You can also troubleshoot by running ssh in debug mode and comparing the results of:
ssh -v pi@192.168.1.206
with:
ansible all -m ping -vvvv
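A hedged way to do that comparison side by side is to capture both outputs and look at the authentication-related lines (the file names here are arbitrary):
$ ssh -v pi@192.168.1.206 exit 2> ssh_debug.txt
$ ansible all -m ping -vvvv > ansible_debug.txt 2>&1
$ grep -i -E 'authentic|identity|offer' ssh_debug.txt ansible_debug.txt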
I am creating a VM in OpenStack (a Linux VM) and launching the Ansible script below from there, and I am getting an SSH error.
---
- hosts: licproxy
  user: my-user
  sudo: yes
  tasks:
    - name: Install tinyproxy
      command: sudo apt-get install tinyproxy
    - name: Update tinyproxy
      command: sudo apt-get update
    - name: Install bind9
      shell: yes '' | sudo apt-get install bind9
Though I am able to ssh directly to the machine 10.32.1.40 from the Linux box in OpenStack with the admin-key-dev29 key, I get:
PLAY [licproxy] ***********************************************************
GATHERING FACTS ***************************************************************
<10.32.1.40> ESTABLISH CONNECTION FOR USER: my-user
<10.32.1.40> REMOTE_MODULE setup
<10.32.1.40> EXEC ssh -C -tt -vvv -o StrictHostKeyChecking=no -o IdentityFile="/opt/apps/installer/tenant-dev29/ssh/admin-key-dev29" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=my-user -o ConnectTimeout=10 10.32.1.40 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && echo $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238'
EXEC previous known host file not found for 10.32.1.40
fatal: [10.32.1.40] => SSH Error: ssh: connect to host 10.32.1.40 port 22: Connection refused
while connecting to 10.32.1.40:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [Install tinyproxy] *****************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
I removed the known_hosts entry and ran the script again; it still shows me the same message.
UPDATE
I observed that manual ssh works fine, but the Ansible script gives the SSH error.
I logged in to the newly created VM using the SSH key and checked the /var/log/auth.log file:
Dec 30 13:00:33 licproxy-vm sshd[1184]: Server listening on :: port 22.
Dec 30 13:01:10 licproxy-vm sshd[1448]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Dec 30 13:01:10 licproxy-vm sshd[1448]: Connection closed by 192.168.0.106 [preauth]
Dec 30 13:01:32 licproxy-vm sshd[1450]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
The VM runs sshd version OpenSSH_6.6.1.
I checked the /etc/ssh folder and found ssh_host_ed25519_key and ssh_host_ed25519_key.pub missing.
I created those files using the command ssh-keygen -A.
Now I want to know why these files were missing from the ssh folder. Is this a bug?
The problem was SSH port 22: the port was not up yet.
I added the following code, which basically waits for the SSH port to come up.
# block until the SSH port on the target VM accepts connections
while ! nc -z $PROXY_SERVER_IP 22; do
    sleep 10s
done
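A variant with an upper bound, so a VM that never comes up doesn't block the script forever (a sketch; the 300-second budget is an arbitrary assumption):
elapsed=0
until nc -z "$PROXY_SERVER_IP" 22; do
    sleep 10
    elapsed=$((elapsed + 10))
    if [ "$elapsed" -ge 300 ]; then
        echo "Timed out waiting for SSH on $PROXY_SERVER_IP" >&2
        exit 1
    fi
done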