Ansible script ssh error - ssh

I am creating a VM in OpenStack (a Linux VM) and launching an Ansible script from there. I am getting the following SSH error.
---
- hosts: licproxy
  user: my-user
  sudo: yes
  tasks:
    - name: Install tinyproxy
      command: sudo apt-get install tinyproxy
    - name: Update tinyproxy
      command: sudo apt-get update
    - name: Install bind9
      shell: yes '' | sudo apt-get install bind9
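As an aside, newer Ansible versions prefer become over sudo and the apt module over shelling out to apt-get; a rough equivalent of the playbook above would look like this (a sketch, not the poster's original):
---
- hosts: licproxy
  remote_user: my-user
  become: yes
  tasks:
    - name: Update the apt cache
      apt: update_cache=yes
    - name: Install tinyproxy
      apt: name=tinyproxy state=present
    - name: Install bind9
      apt: name=bind9 state=present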
Though I am able to SSH directly to the machine 10.32.1.40 from the Linux box in OpenStack using the key admin-key-dev29, I get this output:
PLAY [licproxy] ***********************************************************
GATHERING FACTS ***************************************************************
<10.32.1.40> ESTABLISH CONNECTION FOR USER: my-user
<10.32.1.40> REMOTE_MODULE setup
<10.32.1.40> EXEC ssh -C -tt -vvv -o StrictHostKeyChecking=no -o IdentityFile="/opt/apps/installer/tenant-dev29/ssh/admin-key-dev29" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=my-user -o ConnectTimeout=10 10.32.1.40 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && echo $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238'
EXEC previous known host file not found for 10.32.1.40
fatal: [10.32.1.40] => SSH Error: ssh: connect to host 10.32.1.40 port 22: Connection refused
while connecting to 10.32.1.40:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [Install tinyproxy] *****************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
I removed the entry from known_hosts and ran the script again, but it still shows the same message.
UPDATE
I observed that manual SSH works fine, but the Ansible script still gives the SSH error.
I logged in to the newly created VM using the SSH key and checked the /var/log/auth.log file:
Dec 30 13:00:33 licproxy-vm sshd[1184]: Server listening on :: port 22.
Dec 30 13:01:10 licproxy-vm sshd[1448]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Dec 30 13:01:10 licproxy-vm sshd[1448]: Connection closed by 192.168.0.106 [preauth]
Dec 30 13:01:32 licproxy-vm sshd[1450]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
The VM runs sshd version OpenSSH_6.6.1.
I checked the /etc/ssh folder and found that ssh_host_ed25519_key and ssh_host_ed25519_key.pub were missing.
I created those files using the command ssh-keygen -A.
Now I want to know why these files were missing from the ssh folder. Is this a bug?

The problem was with SSH port 22: the port was not up yet.
I added the following code, which simply waits for the SSH port to come up.
while ! nc -z $PROXY_SERVER_IP 22; do
    sleep 10s
done
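The same wait can also be expressed inside Ansible itself; a minimal sketch using the wait_for module, run from the control host (proxy_server_ip is a hypothetical variable holding 10.32.1.40):
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Wait for SSH on the proxy VM to come up
      wait_for:
        host: "{{ proxy_server_ip }}"
        port: 22
        delay: 10
        timeout: 300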

Related

Docker-machine can't use userdata add key to ssh cloud image

My provider : OpenStack
VM OS: Ubuntu 16.04
Docker-machine Version: 0.14.0
Problem:
I want to use userdata to add another public key to authorized_keys,
using the --openstack-user-data-file option to specify my userdata.yml.
Here is my userdata.yml:
#cloud-config
users:
  - default
  - name: ubuntu
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa XXXXXXXXXXXXXX
I use the docker-machine command to create the VM:
docker-machine --debug create --driver openstack \
    --openstack-auth-url http://x.x.x.x:5001/v3 \
    --openstack-domain-id defaule \
    --openstack-endpoint-type adminURL \
    --openstack-floatingip-pool ext-net \
    --openstack-keypair-name mykey \
    --openstack-flavor-id 4 \
    --openstack-image-name ubuntu-16.04-cloud \
    --openstack-net-name private \
    --openstack-password XXXXX \
    --openstack-private-key-file /home/demo/id_rsa \
    --openstack-sec-groups default \
    --openstack-ssh-user ubuntu \
    --openstack-tenant-name admin \
    --openstack-user-data-file /home/demo/userdata.yml \
    --openstack-username admin \
    vm
After creating the VM, docker-machine gets stuck at "waiting for SSH to be available".
Here is debug output:
Getting to WaitForSSH function...
(vm) Calling .GetSSHHostname
(vm) Calling .GetSSHPort
(vm) Calling .GetSSHKeyPath
(vm) Calling .GetSSHKeyPath
(vm) Calling .GetSSHUsername
Using SSH client type: external
Using SSH private key: /root/.docker/machine/machines/vm/id_rsa (-rw-------)
&{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none ubuntu@10.50.2.36 -o IdentitiesOnly=yes -i /root/.docker/machine/machines/vm/id_rsa -p 22] /usr/bin/ssh <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: exit status 255:
Error getting ssh command 'exit 0' : ssh command error:
command : exit 0
err : exit status 255
output :
I try to SSH to the VM with the command:
ssh -i /root/.docker/machine/machines/vm/id_rsa ubuntu@10.50.2.36
But I got the error message:
Permission denied (publickey).
So I tried another key, the one given in the --openstack-private-key-file option (/home/demo/id_rsa):
ssh -i /home/demo/id_rsa ubuntu@10.50.2.36
This ssh was successful!
I compared the two keys, /root/.docker/machine/machines/vm/id_rsa and /home/demo/id_rsa,
and they are identical.
I am confused: if the keys are the same, why does one work for SSH and the other doesn't?
In order for docker-machine to set up a virtual machine on OpenStack, you need to activate the config_drive option: docker-machine create --openstack-config-drive [OTHER_OPTIONS] <MACHINE_NAME>
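For example, applied to the create command from the question (a sketch; the other flags stay exactly as shown above):
docker-machine create --driver openstack \
    --openstack-config-drive \
    --openstack-user-data-file /home/demo/userdata.yml \
    [OTHER_OPTIONS] \
    vm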

Setup ICP 2.1.0 (IBM Cloud Private) fails due to ssh troubles. Single host installation under Ubuntu

When running
sudo docker run --net=host -t -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception:2.1.0-ee install
I get fatal: [192.168.201.130] => Failed to connect to the host via ssh: Permission denied (publickey,password).
I have debugged the session:
root@icpecm:/opt/ibm-cloud-private-2.1.0/cluster# ssh -vvv -i cluster/ssh_key root@192.168.201.130
this is successful.
Have you copied the public key to all the nodes?
In your case:
$ ssh-copy-id -i .ssh/id_rsa root@192.168.201.130
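Since the manual ssh -i cluster/ssh_key already works, it is also worth confirming that the key pair the installer uses is the one authorized on the node; a quick check (a sketch):
ssh-keygen -y -f cluster/ssh_key                          # prints the public key derived from the private key
ssh root@192.168.201.130 'cat ~/.ssh/authorized_keys'     # the printed key should appear in this list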

Running ansible but keep getting failed to connect via ssh

MacBook-Pro:rails1 woo$ ssh vagrant@10.0.1.92
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-91-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Tue Jul 5 03:52:20 UTC 2016
System load: 0.0 Users logged in: 1
Usage of /: 4.0% of 39.34GB IP address for eth0: 10.0.2.15
Memory usage: 32% IP address for eth1: 10.0.1.100
Swap usage: 0% IP address for eth2: 10.0.1.92
Processes: 80
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Tue Jul 5 03:52:20 2016 from 10.0.1.19
vagrant#vagrant-ubuntu-trusty-64:~$
But,
>ansible -vvvv all -m ping -u vagrant
/Library/Python/2.7/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
Using /Users/woo/vagrant_vms/rails1/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<10.0.1.92> ESTABLISH SSH CONNECTION FOR USER: vagrant
<10.0.1.92> SSH: EXEC ssh -C -vvv -o ForwardAgent=yes -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 10.0.1.92 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1467746604.02-144506913281055 `" && echo ansible-tmp-1467746604.02-144506913281055="` echo $HOME/.ansible/tmp/ansible-tmp-1467746604.02-144506913281055 `" ) && sleep 0'"'"''
10.0.1.92 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
I've done:
cat ~/.ssh/id_rsa.pub | ssh vagrant@10.0.1.92 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
and it was successful as tested by the ssh command.
I don't understand why I keep getting the "Failed to connect" message.
The address 10.0.1.92 is in the hosts file, and the IP of the VM is set to that address.
Can you try this:
ansible -vvvv all -m ping -u vagrant
Try issuing these two commands before connecting to the Vagrant box with Ansible:
$ eval $(ssh-agent -s)
$ ssh-add
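You can then confirm the key is loaded and, if needed, point Ansible at it explicitly (a sketch, assuming ~/.ssh/id_rsa is the key whose .pub was appended to the box's authorized_keys):
$ ssh-add -l                                              # the key should be listed here
$ ansible all -m ping -u vagrant --private-key ~/.ssh/id_rsa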

Ansible SSH connection fails

I'm trying to run an Ansible role on multiple servers, but I get an error:
fatal: [192.168.0.10]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh.", "unreachable": true}
My /etc/ansible/hosts file looks like this:
192.168.0.10 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.11 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.12 ansible_sudo_pass='passphrase' ansible_ssh_user=user
I have no idea what's going on - everything looks fine. I can log in via SSH, but ansible ping returns the same error.
The log from verbose execution:
<192.168.0.10> ESTABLISH SSH CONNECTION FOR USER: user <192.168.0.10>
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o
KbdInteractiveAuthentication=no -o
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.10
'/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829 `" )'"'"''
Can you help me somehow? If I have to use Ansible in local mode (-c local), then it's useless to me.
I've tried deleting ansible_sudo_pass and ansible_ssh_user, but it didn't help.
You need to set ansible_ssh_pass as well, or use an SSH key. For example, I am using this in my inventory file:
192.168.33.100 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant
After that I can connect to the remote host:
ansible all -i tests -m ping
With the following result:
192.168.33.100 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Hope that helps you.
EDIT: ansible_ssh_pass and ansible_ssh_user are deprecated in recent versions of Ansible; they have been replaced by ansible_password and ansible_user.
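With the newer variable names, the same inventory line would look like this (a sketch):
192.168.33.100 ansible_user=vagrant ansible_password=vagrant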
mkdir /etc/ansible
cat > hosts
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
Go to your playbook directory and run ansible all -m ping or ansible "server-group-name" -m ping.
I had this issue, but it was for a different reason than was documented in other answers. My host that I was trying to deploy to was only available by going through a jump box. Originally, I thought that it was because Ansible wasn't recognizing my SSH config file, but it was. The solution for me was to make sure that the user that was present in the SSH config file matched the user in the Ansible playbook. That resolved the issue for me.
Try to modify your host file to:
192.168.0.10
192.168.0.11
192.168.0.12
$ansible -m ping all -vvv
After installing Ansible on Ubuntu or CentOS you may get the message below. Do not panic: it usually means the remote user does not have the right permissions on its Ansible temp directory [/home/user_name/.ansible/tmp/].
"Authentication or permission failure".
Fixing those permissions solves the problem (see the sketch below); the ping then succeeds:
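A minimal repair might look like this (a sketch, assuming user_name is the remote SSH user and Your_server resolves to the host):
ssh user_name@Your_server 'mkdir -p ~/.ansible/tmp && chmod -R 700 ~/.ansible'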
[Your_server ~]$ ansible -m ping all
rusub-bm-gt | SUCCESS => {
"changed": false,
"ping": "pong"
}
Your_server | SUCCESS => {
"changed": false,
"ping": "pong"
}
Best practice for me: I'm using SSH keys to access the server hosts.
1. Create a hosts file in the inventories folder:
[example]
example1.com
example2.com
example3.com
2. Create the playbook file playbook.yml:
---
- hosts:
    - all
  roles:
    - example
3. Deploy the playbook against the multiple server hosts:
ansible-playbook playbook.yml -i inventories/hosts -l example --user vagrant
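If the hosts are reached with an SSH key rather than a password, the key can be passed on the command line too (a sketch, assuming ~/.ssh/id_rsa is the key authorized on the example hosts):
ansible-playbook playbook.yml -i inventories/hosts -l example --user vagrant --private-key ~/.ssh/id_rsa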
The ansible_ssh_port changed while reloading the vm.
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
So I had to update the inventory/hosts file as follows:
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='centos' ansible_ssh_private_key_file=<key path>
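Rather than chasing the port by hand after every reload, the current values can be read from Vagrant itself; its output maps directly onto the ansible_ssh_* variables (a sketch, the output will vary):
vagrant ssh-config
# Host default
#   HostName 127.0.0.1
#   User vagrant
#   Port 2222            <- or whatever Vagrant picked this time
#   IdentityFile .vagrant/machines/default/virtualbox/private_key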

I can ssh just fine, but ansible says "no route to host"

I wrote a script to bring up several VMs using Vagrant, which I then have to provision with Ansible. Unfortunately my host is a Windows machine, so I thought I could solve the issue by putting all the VMs into a VPN and then provisioning them from another machine in the same VPN.
In theory it works: I can SSH into the other machines without trouble. But when I run my Ansible playbook, Ansible fails.
At first I got the message "ssh: connect to host 10.1.2.100 [10.1.2.100] port 22: No route to host" when running Ansible with -vvvv.
This was in the evening, and I was very tired, and the error didn't recur the following morning. I'm not sure whether that has something to do with the VM I'm deploying from being rebooted in the meantime, or with the receiving machine having been destroyed and brought up again since then. In any case, the problem has not gone away.
The results now, after recreating both VMs:
# ansible-playbook -i vms -k -u vagrant vms.yml -vvvv
result:
<10.1.2.100> ESTABLISH SSH CONNECTION FOR USER: vagrant <10.1.2.100>
SSH: EXEC sshpass -d14 ssh -C -vvv -o ServerAliveInterval=50 -o
User=vagrant -o ConnectTimeout=10 -tt 10.1.2.100 '( umask 22 && mkdir
-p "$( echo $HOME/.ansible/tmp/ansible-tmp-1455781388.36-25193904947084 )" && echo
"$( echo $HOME/.ansible/tmp/ansible-tmp-1455781388.36-25193904947084
)" )' fatal: [10.1.2.100]: FAILED! => {"failed": true, "msg": "ERROR!
Using a SSH password instead of a key is not possible because Host Key
checking is enabled and sshpass does not support this. Please add
this host's fingerprint to your known_hosts file to manage this
host."}
So far so clear. I ssh into the other instance to add it to the known hosts. This works without any trouble.
Back to ansible, I try the same command again. The result now is:
<10.1.2.100> ESTABLISH SSH CONNECTION FOR USER: vagrant <10.1.2.100>
SSH: EXEC sshpass -d14 ssh -C -vvv -o ServerAliveInterval=50 -o
StrictHostKeyChecking=no -o User=vagrant -o ConnectTimeout=10 -tt
10.1.2.100 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916 )" &&
echo "$( echo
$HOME/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916 )" )'
<10.1.2.100> PUT /tmp/tmpXQKa8Z TO
/home/vagrant/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916/setup
<10.1.2.100> SSH: EXEC sshpass -d14 sftp -b - -C -vvv -o
ServerAliveInterval=50 -o StrictHostKeyChecking=no -o User=vagrant -o
ConnectTimeout=10 '[10.1.2.100]' fatal: [10.1.2.100]: UNREACHABLE! =>
{"changed": false, "msg": "ERROR! SSH Error: data could not be sent to
the remote host. Make sure this host can be reached over ssh",
"unreachable": true}
Well, I made sure the host was reachable by ssh, thank you very much! Ansible still can't get through, and I'm about to get a brain tumor from thinking of things that might be the problem.
Any suggestions what might be the problem?
This issue was reported here, with some workarounds:
https://github.com/ansible/ansible/issues/15321
The consensus seems to be either to (a) use ansible_password or (b) pass -u username in the connection parameters. However, any number of things can disrupt an SSH connection in ways that make it look "unreachable" to higher-level apps, so I recommend going through each of the steps outlined in that ticket.
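Given the first error above (password authentication via sshpass combined with host key checking), one workaround that comes up repeatedly is to disable host key checking on the control machine, e.g. in ansible.cfg (a sketch; it trades away some SSH security):
[defaults]
host_key_checking = False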