I run Ansible as a provisioning tool from Vagrant in Cygwin. The ansible-playbook command runs correctly from the command line, and also from Vagrant with a small hack.
My question is: how do I specify a hosts file to Vagrant, to work around the issue below?
[16:18:23 ~/Vagrant/Exercice 567 ]$ vagrant provision
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Reading package lists...
==> haproxy1: Building dependency tree...
==> haproxy1: Reading state information...
==> haproxy1: curl is already the newest version.
==> haproxy1: 0 upgraded, 0 newly installed, 0 to remove and 66 not upgraded.
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: ansible...
PYTHONUNBUFFERED=1 ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_NOCOLOR=true ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --user=vagrant --connection=ssh --timeout=30 --limit='haproxy' --inventory-file=C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory --extra-vars={"ansible_ssh_user":"root"} -vvvv ./haproxy.yml
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: haproxy.yml **********************************************************
1 plays in ./haproxy.yml
PLAY [haproxy] *****************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************
[WARNING]: Host file not found:
C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory
[WARNING]: provided hosts list is empty, only localhost is available
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"

  config.vm.provision :shell, :inline => 'rm -fr /root/.ssh && sudo mkdir /root/.ssh'
  config.vm.provision :shell, :inline => 'apt-get install -y curl'
  config.vm.provision :shell, :inline => 'curl -sS http://www.ngstones.com/id_rsa.pub >> /root/.ssh/authorized_keys'
  config.vm.provision :shell, :inline => "chmod -R 644 /root/.ssh"
  #config.vm.synced_folder ".", "/vagrant", type: "rsync"

  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--memory", 256]
  end

  config.vm.define :haproxy1, primary: true do |haproxy1_config|
    haproxy1_config.vm.hostname = 'haproxy1'
    haproxy1_config.vm.network :public_network, ip: "192.168.1.10"
    haproxy1_config.vm.provision "ansible" do |ansible|
      ansible.groups = {
        "web" => ["web1, web2"],
        "haproxy" => ["haproxy"]
      }
      ansible.extra_vars = { ansible_ssh_user: 'root' }
      ansible.limit = ["haproxy"]
      ansible.verbose = "vvvv"
      ansible.playbook = "./haproxy.yml"
      #ansible.inventory_path = "/etc/ansible/hosts"
    end

    # https://docs.vagrantup.com/v2/vagrantfile/tips.html
    (1..2).each do |i|
      config.vm.define "web#{i}" do |node|
        #node.vm.box = "ubuntu/trusty64"
        #node.vm.box = "ubuntu/precise32"
        node.vm.hostname = "web#{i}"
        node.vm.network :private_network, ip: "192.168.1.1#{i}"
        node.vm.network "forwarded_port", guest: 80, host: "808#{i}"
        node.vm.provider "virtualbox" do |vb|
          vb.memory = "256"
        end
      end
    end
  end
end
It's due to the inventory path starting with a C:/ drive letter, which ansible-in-cygwin can't handle.
See related issue here:
https://github.com/mitchellh/vagrant/issues/6607
I just discovered this "ansible-playbook-shim"; its PR #5 is supposed to fix that (but I haven't tried it):
https://github.com/rivaros/ansible-playbook-shim/pull/5
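If you don't want to rely on the shim, a manual workaround along the same lines is possible. This is an untested sketch (the location of the real ansible-playbook binary is an assumption): a wrapper script named ansible-playbook, placed earlier in PATH, that converts Windows-style paths with cygpath before delegating.
#!/bin/bash
# Hypothetical wrapper; rewrites C:/-style arguments (e.g. --inventory-file=C:/...)
# into Cygwin paths and then delegates to the real ansible-playbook.
args=()
for arg in "$@"; do
  case "$arg" in
    *=[Cc]:/*) key="${arg%%=*}"; win="${arg#*=}"; args+=("$key=$(cygpath -u "$win")") ;;
    [Cc]:/*)   args+=("$(cygpath -u "$arg")") ;;
    *)         args+=("$arg") ;;
  esac
done
exec /usr/bin/ansible-playbook "${args[@]}"   # assumed path of the real binary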
I believe your inventory is not accessible to the Vagrant environment. I think all you need to do is put the inventory in the Vagrant shared folder; it will then be available under /vagrant. Alternatively, point the provisioner at your own inventory file, as in the sketch below.
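For example (an untested sketch; the file name and host line are only illustrative), keep a static inventory next to the Vagrantfile and reference it with a relative path, so the auto-generated C:/ path is never used:
# ./inventory/hosts (hypothetical static inventory):
# [haproxy]
# haproxy1 ansible_ssh_host=192.168.1.10 ansible_ssh_user=vagrant

haproxy1_config.vm.provision "ansible" do |ansible|
  ansible.playbook       = "./haproxy.yml"
  ansible.inventory_path = "inventory/hosts"   # relative path avoids the C:/ prefix
  ansible.limit          = "haproxy"
end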
Hope this helps
I set up the SSH parameters of Vagrant 1.8.1 as described here.
In short, on the host my SSH config file contains:
Host bitbucket.org
Hostname bitbucket.org
IdentityFile ~/.ssh/id_bitbucket
User zuba
ForwardAgent yes
and in the Vagrantfile:
config.ssh.forward_agent = true
On the host machine, ssh-add -L shows the key, while on the Vagrant box it reports that the agent has no identities, and git clone fails due to an authentication failure.
How can I solve this issue?
UPDATE 1:
vagrant ssh -c 'ssh-add -l' shows the key
> vagrant ssh-config
Host p4
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/zuba/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
UPDATE 2:
I found a duplicate post with no answers: vagrant ssh agent forwarding only works for inline commands?
UPDATE 3:
Here is my Vagrantfile:
Vagrant.configure("2") do |config|
boxes = {
"p4" => "10.2.2.15",
}
boxes.each do |box_name, box_ip|
config.vm.define box_name do |config|
config.vm.box = "trusty-64"
config.vm.box_url = "https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
config.vm.hostname = "p4"
config.vm.network :private_network, ip: box_ip
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.network "forwarded_port", guest: 3001, host: 3001
config.vm.network "forwarded_port", guest: 3002, host: 3002
config.vm.network "forwarded_port", guest: 3003, host: 3003
config.vm.network "forwarded_port", guest: 6379, host: 6379 # Redis
config.vm.provider "virtualbox" do |vb|
vb.gui = false
vb.name = "p4"
# Use VBoxManage to customize the VM. For example to change memory:
vb.customize ["modifyvm", :id, "--memory", "1024"]
end
config.vm.synced_folder "../..", "/home/vagrant/my_src"
config.ssh.forward_agent = true # to use host keys added to agent
# provisioning
config.vm.provision :shell, :inline => "sudo apt-get update"
config.vm.provision "chef_solo" do |chef|
chef.log_level = "info"
chef.environment = "development"
chef.environments_path = "environments"
chef.cookbooks_path = ["cookbooks", "site-cookbooks"]
chef.roles_path = "roles"
chef.data_bags_path = "data_bags"
chef.json.merge!(JSON.parse(IO.read("nodes/#{box_ip}.json")))
end
config.exec.commands '*', directory: '/home/vagrant'
config.exec.commands 'apt-get', prepend: 'sudo'
config.exec.commands %w[rails rspec rake], prepend: 'bundle exec'
end
end
end
Finally I found that post, which helped me figure out what prevented Vagrant from using the agent's key.
I had run ssh-add in one GNU screen session, while doing vagrant ssh in another screen session. That is why the ssh-agent was effectively 'inaccessible' to Vagrant.
When I added the key and ran vagrant ssh in the same screen session, everything started working.
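In other words, the shell that runs vagrant ssh has to see the same agent as the shell where ssh-add was executed. A quick sanity check (plain shell commands, nothing Vagrant-specific) is to run these in the terminal or screen window you will use for vagrant ssh:
echo "$SSH_AUTH_SOCK"           # must point at the agent that holds the key
ssh-add -l                      # must list your key, not "The agent has no identities."
vagrant ssh -c 'ssh-add -l'     # should show the same key inside the box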
I have a problem with adding an SSH key to a Vagrant VM. Basically the setup that I have here works fine. Once the VMs are created, I can access them via vagrant ssh; the user "vagrant" exists and there's an SSH key for this user in the authorized_keys file.
What I'd like to do now is to be able to connect to those VMs via ssh or scp. So I would only need to add my public key from id_rsa.pub to authorized_keys - just as I'd do with ssh-copy-id.
Is there a way to tell Vagrant during the setup that my public key should be included? If not (which is likely, according to my Google results), is there a way to easily append my public key during the Vagrant setup?
You can use Ruby's core File module, like so:
config.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
This working example appends ~/.ssh/id_rsa.pub to the ~/.ssh/authorized_keys of both the vagrant and root user, which will allow you to use your existing SSH key.
Copying the desired public key falls squarely into the provisioning phase. The exact answer depends on which provisioner you fancy using (shell, Chef, Puppet, etc.). The most trivial would be a file provisioner for the key, something along these lines:
config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/me.pub"
Well, actually you need to append to authorized_keys. Use the shell provisioner, like so:
Vagrant.configure(2) do |config|
# ... other config
config.vm.provision "shell", inline: <<-SHELL
cat /home/vagrant/.ssh/me.pub >> /home/vagrant/.ssh/authorized_keys
SHELL
# ... other config
end
You can also use a true provisioner, like Puppet. For example see Managing SSH Authorized Keys with Puppet.
There's a more "elegant" way of accomplishing what you want to do. You can find the existing private key and use it instead of going through the trouble of adding your public key.
Proceed like this to see the path to the existing private key (look for IdentityFile below):
run vagrant ssh-config
result:
$ vagrant ssh-config
Host magento2.vagrant150
HostName 127.0.0.1
User vagrant
Port 3150
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile "/Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key"
IdentitiesOnly yes
LogLevel FATAL
Then you can use the private key like this; note also the option that switches off password authentication:
ssh -i /Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key -o PasswordAuthentication=no vagrant@127.0.0.1 -p 3150
This excellent answer was added by user76329 in a rejected Suggested Edit
Expanding on Meow's example, we can copy the local public/private SSH keys, set permissions, and make the inline script idempotent (it runs once and only repeats if the test condition fails, i.e. when provisioning is actually needed):
config.vm.provision "shell" do |s|
ssh_prv_key = ""
ssh_pub_key = ""
if File.file?("#{Dir.home}/.ssh/id_rsa")
ssh_prv_key = File.read("#{Dir.home}/.ssh/id_rsa")
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
else
puts "No SSH key found. You will need to remedy this before pushing to the repository."
end
s.inline = <<-SHELL
if grep -sq "#{ssh_pub_key}" /home/vagrant/.ssh/authorized_keys; then
echo "SSH keys already provisioned."
exit 0;
fi
echo "SSH key provisioning."
mkdir -p /home/vagrant/.ssh/
touch /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} > /home/vagrant/.ssh/id_rsa.pub
chmod 644 /home/vagrant/.ssh/id_rsa.pub
echo "#{ssh_prv_key}" > /home/vagrant/.ssh/id_rsa
chmod 600 /home/vagrant/.ssh/id_rsa
chown -R vagrant:vagrant /home/vagrant
exit 0
SHELL
end
Shorter and more correct code would be:
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
config.vm.provision 'shell', inline: 'mkdir -p /root/.ssh'
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys", privileged: false
Otherwise the user's .ssh/authorized_keys will belong to the root user.
It will still add a line on every provisioning run, but Vagrant is used for testing and a VM usually has a short life, so it's not a big problem.
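If the duplicate lines bother you, a small variation (an untested sketch) guards the append so it only happens once:
config.vm.provision 'shell', privileged: false, inline: <<-SHELL
  grep -qF "#{ssh_pub_key}" /home/vagrant/.ssh/authorized_keys 2>/dev/null \
    || echo "#{ssh_pub_key}" >> /home/vagrant/.ssh/authorized_keys
SHELL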
I ended up using code like this:
config.ssh.forward_agent = true
config.ssh.insert_key = false
config.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key","~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/$USER/.ssh/authorized_keys
sudo bash -c "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
SHELL
end
Note that we should not hard-code the path to /home/vagrant/.ssh/authorized_keys, since some Vagrant boxes do not use the vagrant username.
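If you want to be extra careful about boxes whose home directory is not under /home at all, an untested variant resolves the home directory from the passwd database instead of assuming /home/$USER:
s.inline = <<-SHELL
  home_dir="$(getent passwd "$USER" | cut -d: -f6)"   # resolve the actual home directory
  echo #{ssh_pub_key} >> "$home_dir/.ssh/authorized_keys"
  sudo bash -c "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
SHELL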
None of the older posts worked for me, although some came close. I had to make RSA keys with ssh-keygen in the terminal and go with custom keys; in other words, I gave up on using Vagrant's keys.
I'm on macOS Mojave as of the date of this post. I've set up two Vagrant boxes in one Vagrantfile. I'm showing all of the first box so newbies can see the context. I put the .ssh folder in the same folder as the Vagrantfile; otherwise use user9091383's setup.
Credit for this solution goes to this coder.
Vagrant.configure("2") do |config|
config.vm.define "pfbox", primary: true do |pfbox|
pfbox.vm.box = "ubuntu/xenial64"
pfbox.vm.network "forwarded_port", host: 8084, guest: 80
pfbox.vm.network "forwarded_port", host: 8080, guest: 8080
pfbox.vm.network "forwarded_port", host: 8079, guest: 8079
pfbox.vm.network "forwarded_port", host: 3000, guest: 3000
pfbox.vm.provision :shell, path: ".provision/bootstrap.sh"
pfbox.vm.synced_folder "ubuntu", "/home/vagrant"
pfbox.vm.provision "file", source: "~/.gitconfig", destination: "~/.gitconfig"
pfbox.vm.network "private_network", type: "dhcp"
pfbox.vm.network "public_network"
pfbox.ssh.insert_key = false
ssh_key_path = ".ssh/" # This may not be necessary. I may remove.
pfbox.vm.provision "shell", inline: "mkdir -p /home/vagrant/.ssh"
pfbox.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key", ".ssh/id_rsa"]
pfbox.vm.provision "file", source: ".ssh/id_rsa.pub", destination: ".ssh/authorized_keys"
pfbox.vm.box_check_update = "true"
pfbox.vm.hostname = "pfbox"
# VirtualBox
config.vm.provider "virtualbox" do |vb|
# vb.gui = true
vb.name = "pfbox" # friendly name for Oracle VM VirtualBox Manager
vb.memory = 2048 # memory in megabytes 2.0 GB
vb.cpus = 1 # cpu cores, can't be more than the host actually has.
end
end
config.vm.define "dbbox" do |dbbox|
...
This is an excellent thread that helped me solve a situation similar to the one the original poster describes.
While I ultimately used the settings/logic presented in smartwjw’s answer, I ran into a hitch, since I use the VAGRANT_HOME environment variable to keep the core vagrant.d directory on an external hard drive on one of my development systems.
So here is the adjusted code I am using in my Vagrantfile to accommodate a VAGRANT_HOME environment variable being set; the “magic” happens in the line vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d":
config.ssh.insert_key = false
config.ssh.forward_agent = true
vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d"
config.ssh.private_key_path = ["#{vagrant_home_path}/insecure_private_key", "~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |shell_action|
ssh_public_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
shell_action.inline = <<-SHELL
echo #{ssh_public_key} >> /home/$USER/.ssh/authorized_keys
SHELL
end
For the inline shell provisioners: it is common for a public key to contain spaces, comments, etc., so make sure to put (escaped) quotes around the variable that expands to the public key:
config.vm.provision 'shell', inline: "echo \"#{ssh_pub_key}\" >> /home/vagrant/.ssh/authorized_keys", privileged: false
A pretty complete example; I hope it helps whoever visits next. I moved all the concrete values to external config files. The IP assignment is just for trying things out.
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'yaml'

vmconfig = YAML.load_file('vmconfig.yml')

=begin
Script to create VMs with public IPs; VM creation is governed by the provided
config file.
All Vagrant configuration is done below. The "2" in Vagrant.configure
configures the configuration version (we support older styles for
backwards compatibility). Please don't change it unless you know what
you're doing.
The default user `vagrant` is created and its ssh key is overridden. Make sure to
have the files `vagrant_rsa` (private key) and `vagrant_rsa.pub` (public key) in
the path `./.ssh/`.
The same files need to be available for all the users you want to create in each
of these VMs.
=end

uid_start = vmconfig['uid_start']
ip_start = vmconfig['ip_start']
vagrant_private_key = Dir.pwd + '/.ssh/vagrant_rsa'
guest_sshkeys = '/' + Dir.pwd.split('/')[-1] + '/.ssh/'

Vagrant.configure('2') do |config|
  vmconfig['machines'].each do |machine|
    config.vm.define "#{machine}" do |node|
      ip_start += 1
      node.vm.box = vmconfig['vm_box_name']
      node.vm.box_version = vmconfig['vm_box_version']
      node.vm.box_check_update = false
      node.vm.boot_timeout = vmconfig['vm_boot_timeout']
      node.vm.hostname = "#{machine}"
      node.vm.network "public_network", bridge: "#{vmconfig['bridge_name']}", auto_config: false
      node.vm.provision "shell", run: "always", inline: "ifconfig #{vmconfig['ethernet_device']} #{vmconfig['public_ip_part']}#{ip_start} netmask #{vmconfig['subnet_mask']} up"
      node.ssh.insert_key = false
      node.ssh.private_key_path = ['~/.vagrant.d/insecure_private_key', "#{vagrant_private_key}"]
      node.vm.provision "file", source: "#{vagrant_private_key}.pub", destination: "~/.ssh/authorized_keys"
      node.vm.provision "shell", inline: <<-EOC
        sudo sed -i -e "\\#PasswordAuthentication yes# s#PasswordAuthentication yes#PasswordAuthentication no#g" /etc/ssh/sshd_config
        sudo systemctl restart sshd.service
      EOC
      vmconfig['users'].each do |user|
        uid_start += 1
        node.vm.provision "shell", run: "once", privileged: true, inline: <<-CREATEUSER
          sudo useradd -m -s /bin/bash -U #{user} -u #{uid_start}
          sudo mkdir /home/#{user}/.ssh
          sudo cp #{guest_sshkeys}#{user}_rsa.pub /home/#{user}/.ssh/authorized_keys
          sudo chown -R #{user}:#{user} /home/#{user}
          sudo su
          echo "%#{user} ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/#{user}
          exit
        CREATEUSER
      end
    end
  end
end
It's rather an old question, but maybe this will help someone nowadays.
What works like a charm for me is:
Vagrant.configure("2") do |config|
config.vm.box = "debian/bullseye64"
config.vm.define "debian-1"
config.vm.hostname = "debian-1"
# config.vm.network "private_network", ip: "192.168.56.2" # this enables Internal network mode for VirtualBox
config.vm.network "private_network", type: "dhcp" # this enables Host-only network mode for VirtualBox
config.vm.network "forwarded_port", guest: 8081, host: 8081 # with this you can hit http://mypc:8081 to load the web service configured in the vm..
config.ssh.host = "mypc" # use the base host's hostname.
config.ssh.insert_key = true # do not use the global public image key.
config.ssh.forward_agent = true # have already the agent keys preconfigured for ease.
config.vm.provision "ansible" do |ansible|
ansible.playbook = "../../../ansible/playbooks/configurations.yaml"
ansible.inventory_path = "../../../ansible/inventory/hosts.ini"
ansible.extra_vars = {
nodes: "#{config.vm.hostname}",
username: "vagrant"
}
ansible.ask_vault_pass = true
end
end
Then my Ansible provisioner playbook/role configurations.yaml contains this:
- name: Create .ssh folder if not exists
  file:
    state: directory
    path: "{{ ansible_env.HOME }}/.ssh"

- name: Add authorised key (for remote connection)
  authorized_key:
    state: present
    user: "{{ username }}"
    key: "{{ lookup('file', 'eos_id_rsa.pub') }}"

- name: Add public SSH key in ~/.ssh
  copy:
    src: eos_id_rsa.pub
    dest: "{{ ansible_env.HOME }}/.ssh"
    owner: "{{ username }}"
    group: "{{ username }}"

- name: Add private SSH key in ~/.ssh
  copy:
    src: eos_id_rsa
    dest: "{{ ansible_env.HOME }}/.ssh"
    owner: "{{ username }}"
    group: "{{ username }}"
    mode: 0600
Madis Maenni's answer is closest to the best solution:
just do:
vagrant ssh-config >> ~/.ssh/config
chmod 600 ~/.ssh/config
then you can just ssh via the hostname.
To get the list of hostnames configured in ~/.ssh/config:
grep -E '^Host ' ~/.ssh/config
My example:
$ grep -E '^Host' ~/.ssh/config
Host web
Host db
$ ssh web
[vagrant@web ~]$
Generate an RSA key pair for vagrant authentication: ssh-keygen -f ~/.ssh/vagrant
You might also want to add the vagrant identity files to your ~/.ssh/config:
IdentityFile ~/.ssh/vagrant
IdentityFile ~/.vagrant.d/insecure_private_key
For some reason we can't just specify the key we want to insert, so we take a few extra steps and generate a key ourselves. This way we get security and know exactly which key we need (plus all Vagrant boxes will get the same key).
Can't ssh to vagrant VMs using the insecure private key (vagrant 1.7.2)
How do I add my own public key to Vagrant VM?
config.ssh.insert_key = false
config.ssh.private_key_path = ['~/.ssh/vagrant', '~/.vagrant.d/insecure_private_key']
config.vm.provision "file", source: "~/.ssh/vagrant.pub", destination: "/home/vagrant/.ssh/vagrant.pub"
config.vm.provision "shell", inline: <<-SHELL
cat /home/vagrant/.ssh/vagrant.pub >> /home/vagrant/.ssh/authorized_keys
mkdir -p /root/.ssh
cat /home/vagrant/.ssh/authorized_keys >> /root/.ssh/authorized_keys
SHELL
My Vagrantfile:
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty32"
config.vm.box_check_update = false
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.synced_folder "./synced/", "/home/vagrant/"
config.ssh.private_key_path = "~/.ssh/id_rsa"
config.ssh.forward_agent = true
config.vm.provider "virtualbox" do |vb|
vb.memory = "1024"
vb.name = "test Ubuntu 14.04 box"
end
end
When I try to execute
vagrant ssh
ssh requires a password.
But Vagrant should use my local SSH key and not require a password.
I've faced the same issue. The problem is that you're syncing into the guest's home folder. I found the solution here; please refer to that post for more info. You need to change your sync paths.
Instead of
config.vm.synced_folder "./synced/", "/home/vagrant/"
do
config.vm.synced_folder "./synced/", "/home/vagrant/mySyncFolder"
Do you have a line like the one below in your ~/.ssh/config?
PubkeyAcceptedKeyTypes ssh-dss,ssh-rsa
In my case, after removing this line, vagrant ssh stopped asking me for a password.
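If some other (older) servers still need that directive, an alternative I have not verified with Vagrant is to scope it to those hosts instead of setting it globally; legacy.example.com below is just a placeholder:
Host legacy.example.com
    PubkeyAcceptedKeyTypes +ssh-dss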
I am currently trying to get into Ansible, and for that use case I have set up a cluster of 3 VMs using VirtualBox and Vagrant. My VM setup looks like this:
Vagrantfile
$inline_m1 = <<SCRIPT
yum -y update
yum install -y git
yum install -y ansible
SCRIPT
$inline_n1_n2 = <<SCRIPT
yum -y update
yum install -y git
SCRIPT
Vagrant.configure(2) do |config|
  config.vm.define "master1" do |conf|
    # conf.vm.box = "peru/my_centos-7-x86_64"
    # conf.vm.box_version = "20181211.01"
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'master1.vg'
    conf.vm.network "private_network", ip: "192.168.255.100"
    conf.vm.provider "virtualbox" do |v|
      v.memory = 6144
      v.cpus = 2
    end
    conf.vm.provision "shell", inline: $inline_m1
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc/hosts"
    conf.vm.provision "file", source: "./master1/etc.ansible.hosts", destination: "~/etc/ansible.hosts"
  end

  config.vm.define "node1" do |conf|
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'node1.vg'
    conf.vm.network "private_network", ip: "192.168.255.101"
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc/hosts"
    conf.vm.provision "shell", inline: $inline_n1_n2
  end

  config.vm.define "node2" do |conf|
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'node2.vg'
    conf.vm.network "private_network", ip: "192.168.255.102"
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc/hosts"
    conf.vm.provision "shell", inline: $inline_n1_n2
  end
end
So it is 1 master and 2 nodes. The master is supposed to have Ansible installed and access the nodes via SSH. All machines are up and running, and I can connect to my master using
vagrant ssh master1
I also have my modified /etc/hosts so I can reach master1.vg, node1.vg etc.
But there is one problem. I am supposed to connect via SSH to the nodes from inside the master, but
ssh node1.vg
will not work, as permission is denied after asking for a password. According to the documentation the default password should be "vagrant", but this is not the case here (I guess because the access method is already set to SSH with a key). I have googled quite a bit, as I thought this would be a common question, but found no satisfying answers. Do you have any idea how to make an SSH connection from the master1 VM to one of the node VMs?
I've also uploaded the config to a repo (https://github.com/relief-melone/vagrant-ansibletestingsetup)
OK, I solved it now. Vagrant generates your private keys, and you will need to get those keys into your master VM with the correct permissions. You will also need to set up your network correctly. So let's first tackle the network part.
Your /etc/hosts will have to be set up. In my setup it looks like this:
/etc/hosts
192.168.255.100 master1.me.vg
192.168.255.101 node1.me.vg
192.168.255.102 node2.me.vg
Your private keys are stored in ./.vagrant/machines/nodeX/virtualbox/private_key. You will need the keys for all the nodes you want to access from your master, so this leaves us with the following:
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.define "node1" do |conf|
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'node1.me.vg'
    conf.vm.network "private_network", ip: "192.168.255.101"
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc.hosts"
    conf.vm.provision "shell", path: "./node/shell.sh"
  end

  config.vm.define "node2" do |conf|
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'node2.me.vg'
    conf.vm.network "private_network", ip: "192.168.255.102"
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc.hosts"
    conf.vm.provision "shell", path: "./node/shell.sh"
  end

  config.vm.define "master1" do |conf|
    conf.vm.box = "centos/7"
    conf.vm.hostname = 'master1.me.vg'
    conf.vm.network "private_network", ip: "192.168.255.100"
    conf.vm.provider "virtualbox" do |v|
      v.memory = 6144
      v.cpus = 2
    end
    conf.vm.provision "file", source: "./etc.hosts", destination: "~/etc.hosts"
    conf.vm.provision "file", source: "./master1/etc.ansible.hosts", destination: "~/etc.ansible.hosts"
    conf.vm.provision "file", source: "./.vagrant/machines/node1/virtualbox/private_key", destination: "~/keys/node1"
    conf.vm.provision "file", source: "./.vagrant/machines/node2/virtualbox/private_key", destination: "~/keys/node2"
    conf.vm.provision "shell", path: "./master1/shell.sh"
  end
end
Lastly, you will have to set the permissions of the private keys, as permissions that are too open will be rejected by ssh later. My shell scripts look like this:
./master1/shell.sh
yum -y update
yum install -y git
yum install -y ansible
cp /home/vagrant/etc.hosts /etc/hosts
cp /home/vagrant/etc.ansible.hosts /etc/ansible/hosts
chmod 600 /home/vagrant/keys/*
./node/shell.sh
yum -y update
yum install -y git
cp /home/vagrant/etc.hosts /etc/hosts
After all that is done,
vagrant up
should run smoothly, and you can go to your master VM using
vagrant ssh master1
In the master you can now connect to, for example, the node2 machine using
ssh -i ~/keys/node2 vagrant@node2.me.vg
As this setup involves quite a few files, I also put it into a repo, which can be found here:
https://github.com/relief-melone/vagrant-ansibletestingsetup/tree/working-no-comments