Purpose
I want Ansible to provision VirtualBox VMs on my Windows 8 machine [via Vagrant]. Everything needs to run locally, and since Ansible doesn't run on Windows, I bootstrap a Debian VM with Ansible as the control machine. This code served as an example.
After struggling with the system I got it somewhat working, but not completely (although Ansible doesn't tell me so).
Question
What configuration is required for a multi-machine setup using Ansible [in a VM], Vagrant and VirtualBox [on a Windows host] if we want:
SSH access from the host machine to the ansible-vm as well as to all the slaves
SSH access from the ansible-vm to all the slaves
the ability to shield the multi-machine network from the host's network, if possible
Problem
Running ansible all -m ping -i path-to-hosts yields SSH errors. It seems Ansible tries to reach the machines named web1 and db1, but can't resolve those hostnames.
ESTABLISH CONNECTION FOR USER: vagrant
REMOTE_MODULE ping
ESTABLISH CONNECTION FOR USER: vagrant
REMOTE_MODULE ping
EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'web1', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1398362619.41-142470238612762 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1398362619.41-142470238612762 && echo $HOME/.ansible/tmp/ansible-tmp-1398362619.41-142470238612762'"]
EXEC previous known host file not found for web1
EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', 'db1', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1398362619.41-4982781019922 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1398362619.41-4982781019922 && echo $HOME/.ansible/tmp/ansible-tmp-1398362619.41-4982781019922'"]
EXEC previous known host file not found for db1
web1 | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.0p1 Debian-4, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/vagrant/.ansible/cp/ansible-ssh-web1-22-vagrant" does not exist
debug2: ssh_connect: needpriv 0
ssh: Could not resolve hostname web1: Name or service not known
db1 | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.0p1 Debian-4, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/vagrant/.ansible/cp/ansible-ssh-db1-22-vagrant" does not exist
debug2: ssh_connect: needpriv 0
ssh: Could not resolve hostname db1: Name or service not known
Code
The following code tries to provision:
1. ansible-master: the control machine running Ansible
2. db1: a database server
3. web1: a web server
Vagrantfile
Vagrant.configure("2") do |config|
config.vm.box = "wheezy64"
config.vm.box_url = "http://puppet-vagrant-boxes.puppetlabs.com/debian-70rc1-x64-vbox4210.box"
config.vm.synced_folder ".", "/vagrant", :mount_options => ['dmode=777','fmode=666']
config.vm.network :public_network
config.vm.provider "virtualbox" do |v|
v.customize [
"modifyvm", :id,
"--groups", "/Vagrant/Ansible",
# "--natdnshostresolver1", "on"
]
end
config.vm.define :ansiblemaster do |ansiblemaster|
# ansiblemaster.vm.network :private_network, ip: "192.168.111.101"
ansiblemaster.vm.hostname = "ansiblemaster"
# ansiblemaster.vm.network :forwarded_port, guest: 80, host: 8080
ansiblemaster.ssh.forward_agent = true
ansiblemaster.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", 512]
vb.customize ["modifyvm", :id, "--name", "ansible-master"]
vb.name = "ansiblemaster"
end
ansiblemaster.vm.provision :shell, :inline =>
"if [[ ! -f /apt-get-run ]]; then sudo apt-get update && sudo touch /apt-get-run; fi"
ansiblemaster.vm.provision :shell do |sh|
sh.path = "provision.sh"
sh.args = "./ansible provisioning/site.yml provisioning/hosts/dev_hosts"
end
end
config.vm.define :web1 do |slave|
slave.vm.hostname = "web1"
# slave.vm.network :private_network, ip: "192.168.111.201"
slave.vm.synced_folder "./src", "/var/www/site", id: "proj-root"
slave.vm.provider :virtualbox do |vb|
vb.name = "web1"
vb.customize ["modifyvm", :id, "--memory", "512"]
end
end
config.vm.define :db1 do |slave|
slave.vm.hostname = "db1"
#slave.vm.network :private_network, ip: "192.168.111.202"
slave.vm.provider :virtualbox do |vb|
vb.name = "db1"
vb.customize ["modifyvm", :id, "--memory", "512"]
end
end
end
Provision.sh
#!/bin/bash
# Bootstraps Ansible on the control VM and runs the playbook against the
# inventory copied from the synced folder.
ANSIBLE_DIR=$1
ANSIBLE_PLAYBOOK=$2
ANSIBLE_HOSTS=$3
TEMP_HOSTS="/tmp/ansible_hosts"

if [ ! -f /vagrant/$ANSIBLE_PLAYBOOK ]; then
  echo "Cannot find Ansible playbook"
  exit 1
fi

if [ ! -f /vagrant/$ANSIBLE_HOSTS ]; then
  echo "Cannot find Ansible hosts"
  exit 2
fi

# Install dependencies and clone Ansible from source on the first run only
if [ ! -d $ANSIBLE_DIR ]; then
  echo "Updating apt cache"
  apt-get update
  echo "Installing Ansible dependencies and Git"
  apt-get install -y git python-yaml python-paramiko python-jinja2
  echo "Cloning Ansible"
  git clone git://github.com/ansible/ansible.git ${ANSIBLE_DIR}
fi

cd ${ANSIBLE_DIR}
cp /vagrant/${ANSIBLE_HOSTS} ${TEMP_HOSTS} && chmod -x ${TEMP_HOSTS}

echo "Running Ansible"
echo "dir is now: " $(pwd)
source hacking/env-setup
echo "source ${ANSIBLE_DIR}/hacking/env-setup" >> /home/vagrant/.bashrc
ansible-playbook /vagrant/${ANSIBLE_PLAYBOOK} --inventory-file=${TEMP_HOSTS} --connection=local
rm ${TEMP_HOSTS}
provisioning/hosts/dev_hosts
[webservers]
web1
[dbservers]
db1
To answer my own question: the problem was resolved by upgrading Ansible and importing the SSH keys of the other machines in Provision.sh.
# fix permissions on private key file
chmod 600 /home/vagrant/.ssh/id_rsa
# add web/database hosts to known_hosts (IP is defined in Vagrantfile)
ssh-keyscan -H 192.168.51.4 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.52.4 >> /home/vagrant/.ssh/known_hosts
chown vagrant:vagrant /home/vagrant/.ssh/known_hosts
# reload ssh in order to load the known hosts
/etc/init.d/ssh reload
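For completeness: the id_rsa whose permissions get fixed above has to land on the control VM somehow, and the original Provision.sh does not show that step. A minimal hedged sketch, assuming the key pair is kept in the synced /vagrant folder (the keys/ path is hypothetical):
# hypothetical: copy a pre-generated private key from the synced folder so the
# control VM can SSH to the slaves
mkdir -p /home/vagrant/.ssh
cp /vagrant/keys/id_rsa /home/vagrant/.ssh/id_rsa
chown vagrant:vagrant /home/vagrant/.ssh/id_rsa
chmod 600 /home/vagrant/.ssh/id_rsa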
That's a really long question/problem description.
Ansible can't resolve the hostnames: "ssh: Could not resolve hostname web1: Name or service not known".
Option 1
To keep things simple, I boot my Vagrant VMs with static IPs (vm.network :private_network, ip: "xxx.xxx.xxx.xxx") and edit my Ansible hosts file:
provisioning/hosts/dev_hosts
[webservers]
web1 ansible_ssh_host=xxx.xxx.xxx.xxx
[dbservers]
db1 ansible_ssh_host=xxx.xxx.xxx.yyy
Option 2
Use DNS or a hosts file.
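For example, a minimal sketch of the hosts-file variant, run on the ansible-master (e.g. as an inline shell provisioner); the IPs are assumptions, taken from the private_network lines commented out in the question's Vagrantfile:
# make web1/db1 resolvable by name from the control VM
echo "192.168.111.201 web1" | sudo tee -a /etc/hosts
echo "192.168.111.202 db1"  | sudo tee -a /etc/hosts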
Hope that helps
Using "normal" Ansible modules to manage Windows boxes isn´t possible. Instead you have to use one of the windows modules. That´s the same for ping. It tries to connect via SSH to the Windows box, which doesn´t work.
Like skinnedknuckles already said, Ansible uses native Powershell remoting (and WinRM) instead of SSH to communicate with a Windows machine. So the win_ping module is the right way to do a ping with Ansible onto a Windows box:
ansible -m win_ping -all -i path-to-hosts
I assume you prepared your Windows 8 VM, like the docs are describing!?! If not, there´s this blog post explaining how to do all the steps incl. Vagrant setup with winrm connectivity in quite compact form.
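In case it helps, a hedged sketch of what the inventory for such a Windows box might look like; the host name, IP and credentials are placeholders, and the winrm variables mirror the ones used later in this thread:
# write a throwaway inventory and ping the Windows group over WinRM
cat > path-to-hosts <<'EOF'
[windows]
win8vm ansible_host=192.168.56.10

[windows:vars]
ansible_user=vagrant
ansible_password=vagrant
ansible_connection=winrm
ansible_port=5986
ansible_winrm_server_cert_validation=ignore
EOF

ansible windows -m win_ping -i path-to-hosts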
Related
I have set the following ansible variables:
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
and running my playbook via ansible-playbook -i ansible/inventory.ini -vvvvv ansible/playbook.yml works fine.
Now I would like Vagrant to trigger the Ansible provisioning. The Vagrantfile looks like this:
Vagrant.configure(2) do |config|
  config.vm.define "virtualbox_windows_server_2016_1" do |s|
    ...
    s.vm.provision "ansible" do |ansible|
      ansible.playbook = "ansible/playbook.yml"
      ansible.inventory_path = "ansible/inventory.ini"
      ansible.config_file = "ansible/ansible.cfg"
      ansible.verbose = "-vvvvv"
    end
  end
end
Doing a vagrant provision or vagrant up --provision results in the following error:
fatal: [virtualbox_windows_server_2016_1]: UNREACHABLE! => {
"changed": false,
"msg": "ssl: HTTPSConnectionPool(host='192.168.57.3', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)')))",
"unreachable": true
}
The Vagrant log shows that it runs the following ansible command:
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_CONFIG='ansible/ansible.cfg' ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o IdentityFile=/Users/user/.vagrant.d/insecure_private_key -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --extra-vars=ansible_user\=\'vagrant\' --limit="virtualbox_windows_server_2016_1" --inventory-file=ansible/inventory.ini -vvvvv ansible/playbook.yml
Interestingly, when I copy and paste the above command and run it separately (i.e. in the terminal, not via Vagrant), there is no error and everything works just like the short ansible-playbook command I mentioned above.
It also works, both with and without Vagrant, if I set
ansible_port: 5985 # not 5986
What is the problem here?
Vagrant 2.2.4
ansible 2.8.1
Python 3.7.3
macOS 10.13.6
I would like to provision my three nodes from the last one using Ansible.
My host machine is Windows 10.
My Vagrantfile looks like:
Vagrant.configure("2") do |config|
(1..3).each do |index|
config.vm.define "node#{index}" do |node|
node.vm.box = "ubuntu"
node.vm.box = "../boxes/ubuntu_base.box"
node.vm.network :private_network, ip: "192.168.10.#{10 + index}"
if index == 3
node.vm.provision :setup, type: :ansible_local do |ansible|
ansible.playbook = "playbook.yml"
ansible.provisioning_path = "/vagrant/ansible"
ansible.inventory_path = "/vagrant/ansible/hosts"
ansible.limit = :all
ansible.install_mode = :pip
ansible.version = "2.0"
end
end
end
end
end
My playbook looks like:
---
# my little playbook
- name: My little playbook
  hosts: webservers
  gather_facts: false
  roles:
    - create_user
My hosts file looks like:
[webservers]
192.168.10.11
192.168.10.12
[dbservers]
192.168.10.11
192.168.10.13
[all:vars]
ansible_connection=ssh
ansible_ssh_user=vagrant
ansible_ssh_pass=vagrant
After executing vagrant up --provision I got the following error:
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node3: Running provisioner: setup (ansible_local)...
node3: Running ansible-playbook...
PLAY [My little playbook] ******************************************************
TASK [create_user : Create group] **********************************************
fatal: [192.168.10.11]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
fatal: [192.168.10.12]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
PLAY RECAP *********************************************************************
192.168.10.11 : ok=0 changed=0 unreachable=0 failed=1
192.168.10.12 : ok=0 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I extended my Vagrantfile with ansible.limit = :all and added [all:vars] to the hosts file, but still cannot get past the error.
Has anyone encountered the same issue?
Create a file ansible/ansible.cfg in your project directory (i.e. ansible.cfg in the provisioning_path on the target) with the following contents:
[defaults]
host_key_checking = false
This works provided that your Vagrant box already has sshpass installed. It's unclear whether it does: the error message in your question suggests it was installed (otherwise it would read "ERROR! to use the 'ssh' connection type with passwords, you must install the sshpass program"), but in your answer you install it explicitly (sudo apt-get install sshpass), as if it was not.
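If you prefer to script that, a minimal sketch run from the project directory on the host (the ansible/ directory name matches the provisioning_path /vagrant/ansible used in the question's Vagrantfile):
# create the config the ansible_local provisioner will pick up
mkdir -p ansible
printf '[defaults]\nhost_key_checking = false\n' > ansible/ansible.cfg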
I'm using Ansible version 2.6.2 and the host_key_checking = false solution doesn't work for me.
Adding the environment variable export ANSIBLE_HOST_KEY_CHECKING=False skips the fingerprint check.
This error can also be solved by simply exporting the ANSIBLE_HOST_KEY_CHECKING variable:
export ANSIBLE_HOST_KEY_CHECKING=False
source: https://github.com/ansible/ansible/issues/9442
This SO post gave the answer.
I just extended the known_hosts file on the machine that is responsible for the provisioning like this:
Snippet from my modified Vagrantfile:
...
if index == 3
  node.vm.provision :pre, type: :shell, path: "install.sh"
  node.vm.provision :setup, type: :ansible_local do |ansible|
...
My install.sh looks like:
# add web/database hosts to known_hosts (IP is defined in Vagrantfile)
ssh-keyscan -H 192.168.10.11 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.12 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.13 >> /home/vagrant/.ssh/known_hosts
chown vagrant:vagrant /home/vagrant/.ssh/known_hosts
# reload ssh in order to load the known hosts
/etc/init.d/ssh reload
I had a similar challenge when working with Ansible 2.9.6 on Ubuntu 20.04.
When I run the command:
ansible all -m ping -i inventory.txt
I get the error:
target | FAILED! => {
"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."
}
Here's how I fixed it:
When you install Ansible, it creates a file called ansible.cfg in the /etc/ansible directory. Simply open the file:
sudo nano /etc/ansible/ansible.cfg
Uncomment this line to disable SSH host key checking:
host_key_checking = False
Save the file and you should be fine.
Note: You could also add the host's fingerprint to your known_hosts file by SSHing into the server from your machine, which prompts you to save the fingerprint:
promisepreston@ubuntu:~$ ssh myusername@192.168.43.240
The authenticity of host '192.168.43.240 (192.168.43.240)' can't be established.
ECDSA key fingerprint is SHA256:9Zib8lwSOHjA9khFkeEPk9MjOE67YN7qPC4mm/nuZNU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.43.240' (ECDSA) to the list of known hosts.
myusername@192.168.43.240's password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-53-generic x86_64)
That's all.
I hope this helps
Run the command below; it resolved my issue:
export ANSIBLE_HOST_KEY_CHECKING=False && ansible-playbook -i
All the provided solutions require changes to a global config file or an environment variable, which creates problems when onboarding new people.
Instead, you can add the following variable to your inventory or host vars:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
Add ansible_ssh_common_args='-o StrictHostKeyChecking=no' to your inventory, either globally, like:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
[all:children]
servers
[servers]
host1
OR:
[servers]
host1 ansible_ssh_common_args='-o StrictHostKeyChecking=no'
I'm trying to run an Ansible role on multiple servers, but I get an error:
fatal: [192.168.0.10]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh.", "unreachable": true}
My /etc/ansible/hosts file looks like this:
192.168.0.10 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.11 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.12 ansible_sudo_pass='passphrase' ansible_ssh_user=user
I have no idea what's going on - everything looks fine. I can log in via SSH, but the Ansible ping returns the same error.
The log from verbose execution:
<192.168.0.10> ESTABLISH SSH CONNECTION FOR USER: user <192.168.0.10>
SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o
KbdInteractiveAuthentication=no -o
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
-o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.10
'/bin/sh -c '"'"'( umask 22 && mkdir -p "echo
$HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829" &&
echo "echo
$HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829"
)'"'"''
Can you help me somehow? If I have to use Ansible in local mode (-c local), then it's useless.
I've tried to delete ansible_sudo_pass and ansible_ssh_user, but it didn't help.
You need to set ansible_ssh_pass as well (or use an SSH key). For example, I am using this in my inventory file:
192.168.33.100 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant
After that I can connect to the remote host:
ansible all -i tests -m ping
With the following result:
192.168.33.100 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Hope that helps.
EDIT: ansible_ssh_pass & ansible_ssh_user don't work in the latest versions of Ansible. They have been replaced by ansible_user & ansible_password.
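A hedged way to check the newer names against the same host is to pass them as extra vars (this assumes Ansible 2.x and sshpass on the control machine, since it is still password-based SSH):
# same inventory file ("tests") as above, newer variable names passed explicitly
ansible all -i tests -m ping -e 'ansible_user=vagrant ansible_password=vagrant'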
mkdir /etc/ansible
cat > hosts
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
Go to your playbook directory and run ansible all -m ping or ansible "server-group-name" -m ping
I had this issue, but it was for a different reason than was documented in other answers. My host that I was trying to deploy to was only available by going through a jump box. Originally, I thought that it was because Ansible wasn't recognizing my SSH config file, but it was. The solution for me was to make sure that the user that was present in the SSH config file matched the user in the Ansible playbook. That resolved the issue for me.
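For reference, a minimal sketch of the kind of ~/.ssh/config entry this refers to; the host names, jump box and user are all hypothetical, the point being that User must match what the playbook or inventory uses:
# append a hypothetical jump-box entry; Ansible's ssh connection plugin reads
# ~/.ssh/config just like plain ssh does
cat >> ~/.ssh/config <<'EOF'
# User below must match ansible_user / remote_user in the playbook
Host target-host
    HostName 10.0.0.5
    User deploy
    ProxyJump jumpbox.example.com
EOF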
Try to modify your host file to:
192.168.0.10
192.168.0.11
192.168.0.12
$ ansible -m ping all -vvv
After installing Ansible on Ubuntu or CentOS you may see the messages below. Do not panic: you just need the right access permissions on the user's temporary directory (/home/user_name/.ansible/tmp/).
"Authentication or permission failure".
The following recommendation will solve the problem.
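A hedged example of what that fix can look like (user_name is a placeholder):
# make sure the remote user owns its Ansible temp directory and can write to it
sudo mkdir -p /home/user_name/.ansible/tmp
sudo chown -R user_name:user_name /home/user_name/.ansible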
[Your_server ~]$ ansible -m ping all
rusub-bm-gt | SUCCESS => {
"changed": false,
"ping": "pong"
}
Your_server | SUCCESS => {
"changed": false,
"ping": "pong"
}
Best practice for me: I'm using SSH keys to access the server hosts.
1. Create a hosts file in the inventories folder:
[example]
example1.com
example2.com
example3.com
2. Create the playbook file playbook.yml:
---
- hosts:
    - all
  roles:
    - example
3. Let's try to deploy the playbook against multiple server hosts:
ansible-playbook playbook.yml -i inventories/hosts --limit example --user vagrant
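The key distribution itself isn't shown above; a minimal hedged sketch of how the keys might be put in place for the vagrant user on each host (key type and file name are assumptions):
# generate a key once (no passphrase, purely for the example) and copy it to
# every host from the inventory in step 1
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
for host in example1.com example2.com example3.com; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub vagrant@"$host"
done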
The ansible_ssh_port changed while reloading the vm.
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
So I had to update the inventory/hosts file as follows:
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='centos' ansible_ssh_private_key_file=<key path>
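If you are unsure which port Vagrant currently forwards, you can read it back from Vagrant instead of hard-coding it, e.g.:
# show the SSH settings Vagrant currently uses for the machine "default";
# Port and IdentityFile are what belongs in the inventory line above
vagrant ssh-config default | grep -E 'HostName|Port|IdentityFile'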
I have just discovered Ansible and it is great! I have written some cool playbooks to manage zero-downtime Docker deployments on my servers, but I waste quite a bit of time waiting for things to happen because I sometimes have to work with a poor internet connection. So I thought I might be able to run Ansible against boot2docker, but had no success, and after a little research I realized it would be too hacky and would never behave like my actual Ubuntu server. So here I am, trying to make it work with Vagrant.
I want to achieve something like Laptop > Ansible > Vagrant Box; I don't want to run the playbooks from the Vagrant box!
Vagrantfile
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.ssh.forward_agent = true
end
vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile "/Users/cesco/Code/vagrant/.vagrant/machines/default/virtualbox/private_key"
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
Thanks to some SO question I was able to do this:
$ vagrant ssh-config > vagrant-ssh
$ ssh -F vagrant-ssh default
vagrant@vagrant-ubuntu-trusty-64:~$
But I keep getting localhost | FAILED => SSH Error: Permission denied (publickey,password). every time I try to run the Ansible ping on the Vagrant box.
Ansible inventory
[staging]
vagrant@localhost
Ansible config
[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
-o PasswordAuthentication=no \
-o IdentityFile=/Users/cesco/.vagrant.d/insecure_private_key \
-o IdentitiesOnly=yes \
-o LogLevel=FATAL \
-p 2222
How do I translate the SSH settings into the Ansible configuration?
It does not work from the command line either:
ssh -vvv vagrant@localhost -p 2222 -i /Users/cesco/.vagrant.d/insecure_private_key -o IdentitiesOnly=yes -o LogLevel=FATAL -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
To use Vagrant with a classic SSH connection, first add another private IP to your Vagrantfile.
config.vm.network "private_network", ip: "192.168.1.2"
Reload your instance
vagrant reload
Then you can connect over SSH using the private key (on the private IP, sshd listens on the default port 22):
ssh -vvv vagrant@192.168.1.2 -i /Users/cesco/.vagrant.d/insecure_private_key
That is the best way.
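Assuming that private IP, a matching inventory entry and a quick ping from the laptop might look like this (a sketch, not part of the original answer; the key path is the per-machine key shown by vagrant ssh-config above, relative to the Vagrant project directory):
# throwaway inventory pointing at the private IP, then ping it
cat > vagrant_hosts <<'EOF'
[staging]
192.168.1.2 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
EOF

ansible staging -i vagrant_hosts -m ping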
You misunderstand: the Vagrant Ansible provisioner does not run Ansible from the Vagrant box, but instead SSHs into the box from your local machine. That's the way to go, since with a few small changes you can target a remote host instead.
To get it working you need to add something like this to your Vagrantfile:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "ansible/vagrant.yml"
ansible.sudo = true
ansible.ask_vault_pass = true # comment out if you don't need
ansible.verbose = 'vv' # comment out if you don't want
ansible.groups = {
"tag_Role_myrole" => ["myrole"]
}
ansible.extra_vars = {
role: "myrole"
}
end
# Set the name of the VM.
config.vm.define "myrole" do |myrole|
luigi.vm.hostname = "myrole"
end
Create/update your ansible.cfg file with:
hostfile = ../.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
Create a hosts inventory file containing:
localhost ansible_ssh_host=127.0.0.1 ansible_connection=local
Now vagrant up will bring up and provision the instance, or run vagrant provision to (re)provision a running vagrant.
To run a playbook directly against your vagrant use:
ansible-playbook -u vagrant --private-key=~/.vagrant.d/insecure_private_key yourplaybook.yml
I am creating a VM in OpenStack (a Linux VM) and launching the Ansible script below from there. I am getting the following SSH error.
---
- hosts: licproxy
  user: my-user
  sudo: yes
  tasks:
    - name: Install tinyproxy
      command: sudo apt-get install tinyproxy
    - name: Update tinyproxy
      command: sudo apt-get update
    - name: Install bind9
      shell: yes '' | sudo apt-get install bind9
Though I am able to SSH directly to the machine 10.32.1.40 from the Linux box in OpenStack using admin-key-dev29:
PLAY [licproxy] ***********************************************************
GATHERING FACTS ***************************************************************
<10.32.1.40> ESTABLISH CONNECTION FOR USER: my-user
<10.32.1.40> REMOTE_MODULE setup
<10.32.1.40> EXEC ssh -C -tt -vvv -o StrictHostKeyChecking=no -o IdentityFile="/opt/apps/installer/tenant-dev29/ssh/admin-key-dev29" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=my-user -o ConnectTimeout=10 10.32.1.40 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238 && echo $HOME/.ansible/tmp/ansible-tmp-1450797442.33-90087292637238'
EXEC previous known host file not found for 10.32.1.40
fatal: [10.32.1.40] => SSH Error: ssh: connect to host 10.32.1.40 port 22: Connection refused
while connecting to 10.32.1.40:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
TASK: [Install tinyproxy] *****************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
I removed the known_hosts entry and ran the script again; it is still showing me the same message.
UPDATE
I observed that manual SSH works fine, but the Ansible script gives the SSH error above.
I logged in to the newly created VM using the SSH key and checked the /var/log/auth.log file:
Dec 30 13:00:33 licproxy-vm sshd[1184]: Server listening on :: port 22.
Dec 30 13:01:10 licproxy-vm sshd[1448]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Dec 30 13:01:10 licproxy-vm sshd[1448]: Connection closed by 192.168.0.106 [preauth]
Dec 30 13:01:32 licproxy-vm sshd[1450]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
The VM has sshd version OpenSSH_6.6.1.
I checked the /etc/ssh folder and found ssh_host_ed25519_key and ssh_host_ed25519_key.pub missing.
I created those files using the command ssh-keygen -A.
Now I want to know why these files were missing from the ssh folder. Is this a bug?
The problem was that SSH port 22 was not up yet.
I added the following code, which basically waits for the SSH port to come up:
while ! nc -z $PROXY_SERVER_IP 22; do
  sleep 10s
done