I'm following a tutorial showing how to set up a Vagrant VM to practice using Ansible.
I have the following directory structure, files, and configuration:
➜ trusty64 tree
.
├── Vagrantfile
├── ansible
│ ├── hosts
│ ├── playbooks
│ └── roles
└── ansible.cfg
3 directories, 3 files
➜ trusty64 cat ansible/hosts
[vagrantboxes]
vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
[vagrantboxes:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
➜ trusty64 cat ansible.cfg
[defaults]
host_key_checking = False
hostfile = ./ansible/hosts
roles_path = ./ansible/roles
➜ trusty64 vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/max/Desktop/vagrantboxes/trusty64/.vagrant/machines/default/virtualbox/private_key
IdentitiesOnly yes
LogLevel FATAL
But when I try pinging my VM, it doesn't work:
➜ trusty64 ansible all -m ping
vagrant | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
➜ trusty64 ansible all -m ping -u vagrant
vagrant | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
What am I doing wrong?
Thanks :)
You might also want to put your Ansible management node into a Vagrant VM itself; this tutorial shows it quite well.
Vagrant replaces the insecure key with a machine-specific one on the first VM boot.
Change ansible_ssh_private_key_file to the actual key from vagrant ssh-config:
ansible_ssh_private_key_file=/Users/max/Desktop/vagrantboxes/trusty64/.vagrant/machines/default/virtualbox/private_key
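Putting it together, a minimal sketch of the corrected ansible/hosts, reusing the group from the question and the key path printed by vagrant ssh-config above:
[vagrantboxes]
vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
[vagrantboxes:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/Users/max/Desktop/vagrantboxes/trusty64/.vagrant/machines/default/virtualbox/private_key
After that, ansible all -m ping should come back with pong.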
My hosts:
➜ ansible cat hosts
[Production]
60.205.94.138
My ansible.cfg:
➜ ansible cat ansible.cfg
[Production]
60.205.94.138 ansible_ssh_private_key_file=/Users/yuanyuan/.ssh/yyb
My command, and its results:
The ssh command:
ssh-copy-id -i ~/.ssh/yyb.pub root@60.205.94.138
What is the problem?
You're using ansible.cfg incorrectly: the content you have in there belongs in your hosts file.
Try this hosts file:
[Production]
60.205.94.138 ansible_ssh_private_key_file=/Users/yuanyuan/.ssh/yyb
and:
$ ansible all -i hosts -u root -m ping
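For completeness, ansible.cfg itself would then only carry defaults, for example (a sketch; the inventory key replaces the older hostfile setting in Ansible 2.x, and the path assumes hosts sits next to ansible.cfg):
[defaults]
inventory = ./hosts
host_key_checking = False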
I would like to provision my three nodes from the last one (node3) using Ansible.
My host machine is Windows 10.
My Vagrantfile looks like:
Vagrant.configure("2") do |config|
  (1..3).each do |index|
    config.vm.define "node#{index}" do |node|
      node.vm.box = "ubuntu"
      node.vm.box = "../boxes/ubuntu_base.box"
      node.vm.network :private_network, ip: "192.168.10.#{10 + index}"
      if index == 3
        node.vm.provision :setup, type: :ansible_local do |ansible|
          ansible.playbook = "playbook.yml"
          ansible.provisioning_path = "/vagrant/ansible"
          ansible.inventory_path = "/vagrant/ansible/hosts"
          ansible.limit = :all
          ansible.install_mode = :pip
          ansible.version = "2.0"
        end
      end
    end
  end
end
My playbook looks like:
---
# my little playbook
- name: My little playbook
  hosts: webservers
  gather_facts: false
  roles:
    - create_user
My hosts file looks like:
[webservers]
192.168.10.11
192.168.10.12
[dbservers]
192.168.10.11
192.168.10.13
[all:vars]
ansible_connection=ssh
ansible_ssh_user=vagrant
ansible_ssh_pass=vagrant
After executing vagrant up --provision I got the following error:
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node3: Running provisioner: setup (ansible_local)...
node3: Running ansible-playbook...
PLAY [My little playbook] ******************************************************
TASK [create_user : Create group] **********************************************
fatal: [192.168.10.11]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
fatal: [192.168.10.12]: FAILED! => {"failed": true, "msg": "ERROR! Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
PLAY RECAP *********************************************************************
192.168.10.11 : ok=0 changed=0 unreachable=0 failed=1
192.168.10.12 : ok=0 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I extended my Vagrantfile with ansible.limit = :all and added [all:vars] to the hosts file, but I still cannot get past the error.
Has anyone encountered the same issue?
Create a file ansible/ansible.cfg in your project directory (i.e. ansible.cfg in the provisioning_path on the target) with the following contents:
[defaults]
host_key_checking = false
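To confirm that the Ansible run on the guest actually picks this file up, you can check which config file is in use from the provisioning directory; a quick sketch, assuming the /vagrant/ansible path from the question:
cd /vagrant/ansible && ansible --version
# ansible --version prints a "config file = ..." line; it should point at /vagrant/ansible/ansible.cfg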
This assumes your Vagrant box already has sshpass installed. It's unclear whether it does: the error message in your question suggests it was installed (otherwise it would read "ERROR! to use the 'ssh' connection type with passwords, you must install the sshpass program"), yet in your answer you install it explicitly (sudo apt-get install sshpass), as if it was not.
I'm using Ansible 2.6.2 and the host_key_checking = false solution doesn't work for me.
Setting the environment variable export ANSIBLE_HOST_KEY_CHECKING=False skips the fingerprint check.
This error can also be solved by simply exporting the ANSIBLE_HOST_KEY_CHECKING variable:
export ANSIBLE_HOST_KEY_CHECKING=False
source: https://github.com/ansible/ansible/issues/9442
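The variable can also be set inline for a single run instead of exporting it; a sketch using the playbook and inventory paths from the question above:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i /vagrant/ansible/hosts /vagrant/ansible/playbook.yml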
This SO post gave the answer.
I just extended the known_hosts file on the machine that is responsible for the provisioning like this:
Snippet from my modified Vagrantfile:
...
if index == 3
  node.vm.provision :pre, type: :shell, path: "install.sh"
  node.vm.provision :setup, type: :ansible_local do |ansible|
    ...
My install.sh looks like:
# add web/database hosts to known_hosts (IP is defined in Vagrantfile)
ssh-keyscan -H 192.168.10.11 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.12 >> /home/vagrant/.ssh/known_hosts
ssh-keyscan -H 192.168.10.13 >> /home/vagrant/.ssh/known_hosts
chown vagrant:vagrant /home/vagrant/.ssh/known_hosts
# reload ssh in order to load the known hosts
/etc/init.d/ssh reload
I had a similar challenge when working with Ansible 2.9.6 on Ubuntu 20.04.
When I run the command:
ansible all -m ping -i inventory.txt
I get the error:
target | FAILED! => {
"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."
}
Here's how I fixed it:
When you install Ansible it creates a file called ansible.cfg, which can be found in the /etc/ansible directory. Simply open the file:
sudo nano /etc/ansible/ansible.cfg
Uncomment this line to disable SSH host key checking:
host_key_checking = False
Save the file and you should be fine.
Note: You could also add the host's fingerprint to your known_hosts file by SSHing into the server from your machine; this prompts you to save the fingerprint:
promisepreston@ubuntu:~$ ssh myusername@192.168.43.240
The authenticity of host '192.168.43.240 (192.168.43.240)' can't be established.
ECDSA key fingerprint is SHA256:9Zib8lwSOHjA9khFkeEPk9MjOE67YN7qPC4mm/nuZNU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.43.240' (ECDSA) to the list of known hosts.
myusername@192.168.43.240's password:
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-53-generic x86_64)
That's all.
I hope this helps
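A non-interactive way to get the same result is ssh-keyscan (already used in the Vagrant answer further up); for example, for the host above:
ssh-keyscan -H 192.168.43.240 >> ~/.ssh/known_hosts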
Running the command below resolved my issue:
export ANSIBLE_HOST_KEY_CHECKING=False && ansible-playbook -i
All the solutions provided so far require changes to a global config file or an environment variable, which makes it harder to onboard new people.
Instead you can add the following variable to your inventory or host vars:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
Add ansible_ssh_common_args='-o StrictHostKeyChecking=no' to your inventory, either as a group variable:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
[all:children]
servers
[servers]
host1
or per host:
[servers]
host1 ansible_ssh_common_args='-o StrictHostKeyChecking=no'
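If you prefer to keep the option out of the inventory file entirely, the same variable can live in a group_vars file; a sketch, assuming a conventional group_vars/all.yml next to your inventory:
# group_vars/all.yml
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'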
I'm trying to run an Ansible role on multiple servers, but I get an error:
fatal: [192.168.0.10]: UNREACHABLE! => {"changed": false, "msg":
"Failed to connect to the host via ssh.", "unreachable": true}
My /etc/ansible/hosts file looks like this:
192.168.0.10 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.11 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.12 ansible_sudo_pass='passphrase' ansible_ssh_user=user
I have no idea what's going on - everything looks fine: I can log in via SSH, but ansible ping returns the same error.
The log from verbose execution:
<192.168.0.10> ESTABLISH SSH CONNECTION FOR USER: user
<192.168.0.10> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.10 '/bin/sh -c '"'"'( umask 22 && mkdir -p "`echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829`" && echo "`echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829`" )'"'"''
Can you help me somehow? If I have to use Ansible in local mode (-c local), it's useless to me.
I've tried deleting ansible_sudo_pass and ansible_ssh_user, but it didn't help.
You need to set ansible_ssh_pass as well (or an SSH key). For example, I am using this in my inventory file:
192.168.33.100 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant
After that I can connect to the remote host:
ansible all -i tests -m ping
With the following result:
192.168.33.100 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Hope that helps you.
EDIT: ansible_ssh_pass & ansible_ssh_user don't work in the latest version of Ansible; they have changed to ansible_user & ansible_password.
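With the newer variable names, the inventory line above would look like this (a sketch; the ansible_ssh_* forms still work as deprecated aliases in many releases):
192.168.33.100 ansible_user=vagrant ansible_password=vagrant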
mkdir /etc/ansible
cat > /etc/ansible/hosts
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
Go to your playbook directory and run ansible all -m ping or ansible "server-group-name" -m ping.
I had this issue, but it was for a different reason than was documented in other answers. My host that I was trying to deploy to was only available by going through a jump box. Originally, I thought that it was because Ansible wasn't recognizing my SSH config file, but it was. The solution for me was to make sure that the user that was present in the SSH config file matched the user in the Ansible playbook. That resolved the issue for me.
Try to modify your host file to:
192.168.0.10
192.168.0.11
192.168.0.12
$ ansible -m ping all -vvv
After installing Ansible on Ubuntu or CentOS you may see the message below ("Authentication or permission failure").
Do not panic: the remote user must have write access to its Ansible temp directory (/home/user_name/.ansible/tmp/).
Fixing those permissions solves the problem.
[Your_server ~]$ ansible -m ping all
rusub-bm-gt | SUCCESS => {
"changed": false,
"ping": "pong"
}
Your_server | SUCCESS => {
"changed": false,
"ping": "pong"
}
Best practice for me: I'm using SSH keys to access the server hosts.
1. Create a hosts file in the inventories folder:
[example]
example1.com
example2.com
example3.com
2. Create the playbook file playbook.yml:
---
- hosts:
    - all
  roles:
    - example
3. Deploy the playbook against the multiple server hosts with ansible-playbook:
ansible-playbook playbook.yml -i inventories/hosts --limit example --user vagrant
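If the hosts only accept key authentication, the key can also be passed explicitly on the command line; the key path here is just an example:
ansible-playbook playbook.yml -i inventories/hosts --limit example --user vagrant --private-key ~/.ssh/id_rsa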
The ansible_ssh_port changed while reloading the VM.
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
So I had to update the inventory/hosts file as follows:
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='centos' ansible_ssh_private_key_file=<key path>
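To see which port (and key) Vagrant is currently using after a reload, vagrant ssh-config is the quickest check; the inventory entry just has to match its output:
vagrant ssh-config | grep -E 'HostName|Port|IdentityFile'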
When I execute my cp-sshkey.yml playbook (logged in as myself, not the vagrant user) from my top-level Vagrantfile directory...
ansible-playbook cp-sshkey.yml
I'm getting this error:
TASK: [authorized_key user=vagrant key="{{ lookup('file', './files/id_rsa_vagrant.pub') }}"] ***
fatal: [web1] => Failed to template user=vagrant key="{{ lookup('file', './files/id_rsa_vagrant.pub') }}": could not locate file in lookup: ./files/id_rsa_vagrant.pub
I don't understand why this error is occurring. It's a very simple playbook and the public key file is where I say it is:
.
├── .vagrant
│ └── machines
├── Vagrantfile
├── ansible.cfg
├── bootstrap-mgmt.sh
├── files
│ └── id_rsa_vagrant.pub
├── inventory.ini
├── secrets.yml
├── site.yml
├── website
└── cp-sshkey.yml
Here's my config and host files and the playbook:
# ansible.cfg
[defaults]
hostfile = inventory.ini
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key
host_key_checking = False
# inventory.ini
[local]
localhost ansible_connection=local
[web]
web1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
# cp-sshkey.yml
- name: Install vagrant's public key on VM
  hosts: web1
  sudo: True
  tasks:
    - authorized_key: user=vagrant key="{{ lookup('file', './files/id_rsa_vagrant.pub') }}"
What am I doing wrong here? Thanks.
Quick answer, which I will refine - I'm playing with this myself and just learning:
I assume you are trying to add your public key (or another key on your Ansible console that is not related to the Vagrant keys) to the Vagrant machine, to allow you to ssh into it without vagrant ssh.
I assume that you have checked all the file permissions etc., that you aren't juggling multiple instances, that you've tried with 127.0.0.1 and localhost, and that you've tried the full file path instead of one relative to the working directory - my examples use files in subfolders with templates, although not in the working snippet below.
Are you able to vagrant ssh and perhaps check the .ssh/authorized_keys file?
Are you able to confirm that Ansible can connect by doing something like ansible web -a df?
Are you able to ssh into the Vagrant machine using ssh -i .vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 -p 2222?
In my role task file I have this task.
- name: Copy origin public key to auth keys
  authorized_key: user=vagrant key="{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"
Also my host definition has the user:
web1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant
but I assume the config you use should work.
This play is working for me although my directory structure is different - I expect you're comfortable that yours is fine.
Some things to watch out for that caught me:
when you rebuild the Vagrant machine you will need to flush out your .ssh/known_hosts if you have already ssh'd in - you can remove the entry from known_hosts with ssh-keygen -R [localhost]:2222
it makes me uneasy seeing localhost as a machine tag
My Setup:
ansible 2.2.0.0
vagrant Version: 1.9.1
Mac OSX
VBox 5.1.6
Vagrant Instance - ubuntu/trusty64
I have just discovered Ansible and it is great! I have written some cool playbooks to manage zero-downtime Docker deployments on my servers, but I waste quite a bit of time waiting for things to happen because I sometimes have to work with a poor internet connection. So I thought I might be able to run Ansible against boot2docker, but had no success, and after doing a little research I realized it would be too hacky and would never behave like my actual Ubuntu server. So here I am trying to make it work with Vagrant.
I want to achieve something like Laptop > Ansible > Vagrant Box; I don't want to run the playbooks from the Vagrant Box!
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.ssh.forward_agent = true
end
vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile "/Users/cesco/Code/vagrant/.vagrant/machines/default/virtualbox/private_key"
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
Thanks to some SO question I was able to do this:
$ vagrant ssh-config > vagrant-ssh
$ ssh -F vagrant-ssh default
vagrant@vagrant-ubuntu-trusty-64:~$
But I keep getting localhost | FAILED => SSH Error: Permission denied (publickey,password). every time I try to run the Ansible ping on the Vagrant box.
Ansible inventory
[staging]
vagrant@localhost
Ansible config
[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
-o PasswordAuthentication=no \
-o IdentityFile=/Users/cesco/.vagrant.d/insecure_private_key \
-o IdentitiesOnly=yes \
-o LogLevel=FATAL \
-p 2222
How do I translate the ssh config file into Ansible configuration?
It does not work on the command line either:
ssh -vvv vagrant@localhost -p 2222 -i /Users/cesco/.vagrant.d/insecure_private_key -o IdentitiesOnly=yes -o LogLevel=FATAL -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
To use Vagrant with a classic SSH connection, first add another private network IP to your Vagrantfile:
config.vm.network "private_network", ip: "192.168.1.2"
Reload your instance
vagrant reload
Then you can connect over SSH using the private key (on the private IP the SSH daemon listens on the default port 22, not the forwarded 2222):
ssh -vvv vagrant@192.168.1.2 -i /Users/cesco/.vagrant.d/insecure_private_key
That is the best way.
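The matching inventory entry would then point at the private network IP instead of the forwarded port; a sketch reusing the key path from the question:
[staging]
192.168.1.2 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/Users/cesco/.vagrant.d/insecure_private_key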
You misunderstand. The Vagrant Ansible plugin does not run Ansible from inside the Vagrant box; instead it SSHs into the box from your local machine. That's the way to go, since with a few small changes you can target a remote host instead.
To get it working you need to add something like this to your Vagrantfile:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "ansible/vagrant.yml"
  ansible.sudo = true
  ansible.ask_vault_pass = true # comment out if you don't need it
  ansible.verbose = 'vv'        # comment out if you don't want it
  ansible.groups = {
    "tag_Role_myrole" => ["myrole"]
  }
  ansible.extra_vars = {
    role: "myrole"
  }
end

# Set the name of the VM.
config.vm.define "myrole" do |myrole|
  myrole.vm.hostname = "myrole"
end
Create/update your ansible.cfg file with:
hostfile = ../.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
Create a hosts inventory file containing:
localhost ansible_ssh_host=127.0.0.1 ansible_connection=local
Now vagrant up will bring up and provision the instance, or run vagrant provision to (re)provision a running vagrant.
To run a playbook directly against your vagrant use:
ansible-playbook -u vagrant --private-key=~/.vagrant.d/insecure_private_key yourplaybook.yml
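As a quick check before running the full playbook, the same connection can be tested with a ping against the Vagrant-generated inventory (same file as in the ansible.cfg above; adjust the relative path to wherever you run the command from):
ansible all -m ping -u vagrant --private-key=~/.vagrant.d/insecure_private_key -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory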