Vagrant multi-VM SSH connection setup works on one but not the others - ssh

I have searched many of the similar issues but can't seem to figure out the one I'm having. I have a Vagrantfile with which I set up 3 VMs. I add a public key to each VM so I can run Ansible against the boxes after the vagrant up command (I don't want to use the ansible provisioner). I forward all the SSH ports on each box.
I can vagrant ssh <server_name> onto each box successfully.
With the following:
ssh vagrant@192.168.56.2 -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@192.168.56.3 -p 2712 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.3 port 2712: Connection refused
ssh vagrant@192.168.56.4 -p 2713 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.4 port 2713: Connection refused
And
ssh vagrant@localhost -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2712 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2713 -i ~/.ssh/ansible <-- successful connection
Ansible, likewise, can connect to the first one (vagrant@192.168.56.2) but not the other two. I can't seem to find out why it connects to one and not the others. Any ideas what I could be doing wrong?
The Ansible inventory:
{
"all": {
"hosts": {
"kubemaster": {
"ansible_host": "192.168.56.2",
"ansible_user": "vagrant",
"ansible_ssh_port": 2711
},
"kubenode01": {
"ansible_host": "192.168.56.3",
"ansible_user": "vagrant",
"ansible_ssh_port": 2712
},
"kubenode02": {
"ansible_host": "192.168.56.4",
"ansible_user": "vagrant",
"ansible_ssh_port": 2713
}
},
"children": {},
"vars": {}
}
}
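For reference, a quick way to test this inventory by hand (assuming it is saved as inventory.json and the key sits at ~/.ssh/ansible) is an ad-hoc ping:
ansible all -i inventory.json -m ping --private-key ~/.ssh/ansible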
The Vagrantfile:
# Define the number of master and worker nodes
NUM_MASTER_NODE = 1
NUM_WORKER_NODE = 2
PRIV_IP_NW = "192.168.56."
MASTER_IP_START = 1
NODE_IP_START = 2
# Vagrant configuration
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# default box
config.vm.box = "ubuntu/jammy64"
# automatic box update checking.
config.vm.box_check_update = false
# Provision master nodes
(1..NUM_MASTER_NODE).each do |i|
config.vm.define "kubemaster" do |node|
# Name shown in the GUI
node.vm.provider "virtualbox" do |vb|
vb.name = "kubemaster"
vb.memory = 2048
vb.cpus = 2
end
node.vm.hostname = "kubemaster"
node.vm.network :private_network, ip: PRIV_IP_NW + "#{MASTER_IP_START + i}"
node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"
# argo and traefik access
node.vm.network "forwarded_port", guest: 8080, host: "#{8080}"
node.vm.network "forwarded_port", guest: 9000, host: "#{9000}"
# synced folder for kubernetes setup yaml
node.vm.synced_folder "sync_folder", "/vagrant_data", create: true, owner: "root", group: "root"
node.vm.synced_folder ".", "/vagrant", disabled: true
# setup the hosts, dns and ansible keys
node.vm.provision "setup-hosts", :type => "shell", :path => "vagrant/setup-hosts.sh" do |s|
s.args = ["enp0s8"]
end
node.vm.provision "setup-dns", type: "shell", :path => "vagrant/update-dns.sh"
node.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
end
end
# Provision Worker Nodes
(1..NUM_WORKER_NODE).each do |i|
config.vm.define "kubenode0#{i}" do |node|
node.vm.provider "virtualbox" do |vb|
vb.name = "kubenode0#{i}"
vb.memory = 2048
vb.cpus = 2
end
node.vm.hostname = "kubenode0#{i}"
node.vm.network :private_network, ip: PRIV_IP_NW + "#{NODE_IP_START + i}"
node.vm.network :forwarded_port, guest: 22, host: "#{2711 + i}"
# synced folder for kubernetes setup yaml
node.vm.synced_folder ".", "/vagrant", disabled: true
# setup the hosts, dns and ansible keys
node.vm.provision "setup-hosts", :type => "shell", :path => "vagrant/setup-hosts.sh" do |s|
s.args = ["enp0s8"]
end
node.vm.provision "setup-dns", type: "shell", :path => "vagrant/update-dns.sh"
node.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
end
end
end

Your Vagrantfile confirms what I suspected:
You define port forwarding as follows:
node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"
That means port 22 of the guest is made reachable on the host on port 2710+i. For your 3 VMs, from the host's point of view, this means:
192.168.2.1:22 -> localhost:2711
192.168.2.2:22 -> localhost:2712
192.168.2.3:22 -> localhost:2713
As IP addresses for your VMs you have defined the range 192.168.2.0/24, but you try to access the range 192.168.56.0/24.
If a Private IP address is defined (for your 1st node e.g. 192.168.2.2), Vagrant implements this in the VM on VirtualBox as follows:
Two network adapters are defined for the VM:
NAT: this gives the VM Internet access
Host-Only: this gives the host access to the VM via IP 192.168.2.2.
For each /24 network, VirtualBox (and Vagrant) creates a separate VirtualBox Host-Only Ethernet Adapter, and the host is .1 on each of these networks.
What this means for you is that if you use an IP address from the 192.168.2.0/24 network, an adapter is created on your host that always gets the IP address 192.168.2.1/24, so you have the addresses 192.168.2.2 - 192.168.2.254 available for your VMs.
This means: for your master, you have an IP address collision with your host!
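You can check which host-only adapters already exist on your host, and which addresses they use, with:
VBoxManage list hostonlyifs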
But why does the access to your first VM work?
ssh vagrant@192.168.56.1 -p 2711 -i ~/.ssh/ansible <-- successful connection
That is relatively simple: The network 192.168.56.0/24 is the default network for Host-Only under VirtualBox, so you probably have a VirtualBox Host-Only Ethernet Adapter with the address 192.168.56.1/24.
Because you have defined port forwarding in your Vagrantfile, the SSH port of the 1st VM is mapped to localhost:2711. If you now access 192.168.56.1:2711, you are talking to your own host, i.e. localhost, where port 2711 forwards to the SSH of the 1st VM.
So what do you have to do now?
Change the IP addresses of your VMs, e.g. use 192.168.2.11 - 192.168.2.13.
The access to the VMs is possible as follows:
Node       | via Guest-IP    | via localhost
kubemaster | 192.168.2.11:22 | localhost:2711
kubenode01 | 192.168.2.12:22 | localhost:2712
kubenode02 | 192.168.2.13:22 | localhost:2713
Note: if you want to access via the guest IP address, use port 22; if you want to access via localhost, use the port 2710+i that you defined.
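For example, with the adjusted addresses, kubenode01 would be reachable in both of these ways (key as in your question):
ssh vagrant@192.168.2.12 -i ~/.ssh/ansible        # via the guest IP, default port 22
ssh vagrant@localhost -p 2712 -i ~/.ssh/ansible   # via the forwarded port on the host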

Related

Expo + vagrant, metro bundle doesn't work

I created a VM to work on Expo. I can't open Metro Bundler in a browser on my host on port 19002. Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
REACT_NATIVE_PACKAGER_HOSTNAME = Socket.ip_address_list.find { |ai| ai.ipv4? && !ai.ipv4_loopback? }.ip_address
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "geerlingguy/ubuntu1804"
config.ssh.insert_key = false
config.vm.provider :virtualbox do |v|
v.name = "mobile-app"
v.memory = 2048
v.cpus = 1
v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
v.customize ["modifyvm", :id, "--ioapic", "on"]
end
config.vm.synced_folder "./", "/home/vagrant/workspace", type: 'nfs', mount_options: ['nolock,vers=3,udp,noatime']
config.vm.hostname = "mobile-app"
config.vm.network :private_network, ip: "192.168.33.40"
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.network "forwarded_port", guest: 19000, host: 19000
config.vm.network "forwarded_port", guest: 19001, host: 19001
config.vm.network "forwarded_port", guest: 19002, host: 19002
config.vm.network "forwarded_port", guest: 19006, host: 19006
config.vm.provision "shell", path: "install.sh"
config.vm.provision "set_lan_ip", "type": "shell" do |installs|
installs.inline = "
echo 'export REACT_NATIVE_PACKAGER_HOSTNAME=#{REACT_NATIVE_PACKAGER_HOSTNAME}' >> /home/vagrant/.zshrc
"
end
end
When I run npm start, I see that Expo has started, and I can see the URL and QR image:
Starting project at /home/vagrant/workspace/app
Expo DevTools is running at http://localhost:19002
Opening DevTools in the browser... (press shift-d to disable)
Starting Metro Bundler
When I open localhost:19002 and 192.168.33.40:19002, I see: This site can’t be reached. But when I launch expo start:web and open 192.168.33.40:19006, it works fine... When I run curl localhost:19002 on the guest, I can see HTML, but the same command on the host gets: Recv failure (and the same error for 192.168.33.40).
I checked ports:
node 1116 vagrant 21u IPv4 21603 0t0 TCP 127.0.0.1:19002 (LISTEN)
node 1116 vagrant 22u IPv6 21661 0t0 TCP *:19000 (LISTEN)
node 1160 vagrant 20u IPv6 21720 0t0 TCP *:19001 (LISTEN)
and the one where the web build is active:
node 1220 vagrant 22u IPv4 22679 0t0 TCP *:19006 (LISTEN)
I think it may be a problem with the port: 19002 is bound to localhost (127.0.0.1), but it should be allowed on all interfaces. Can I set this in Expo?
Where is my mistake?
resources: https://github.com/jean553/react-native-dev, Expo and Vagrant
Solution:
export EXPO_DEVTOOLS_LISTEN_ADDRESS=192.168.33.40
I had used another parameter, but this one is correct...
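For example, on the guest, before starting the packager again (the ss line is just to confirm the new bind address):
export EXPO_DEVTOOLS_LISTEN_ADDRESS=192.168.33.40
npm start
# in another guest shell: DevTools should now listen on the private IP instead of 127.0.0.1
ss -tln | grep 19002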

proxycommand doesn't seem to work with ansible and my environment

I've tried many combinations to get this to work but cannot for some reason. I am not using keys in our environment so passwords will have to do.
I've tried proxyjump and sshuttle as well.
It's strange, as the ping module works, but another module or a playbook doesn't.
Rough set up is:
laptop running ubuntu with ansible installed
[laptop] ---> [productionjumphost] ---> [production_iosxr_router]
ansible.cfg:
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=/tmp/ansible-%r@%h:%p -F ssh.config
~/.ssh/config == ssh.cfg (I configured both):
Host modeljumphost
HostName modeljumphost.fqdn.com.au
User user
Port 22
Host productionjumphost
HostName productionjumphost.fqdn.com.au
User user
Port 22
Host model_iosxr_router
HostName model_iosxr_router
User user
ProxyCommand ssh -W %h:22 modeljumphost
Host production_iosxr_router
HostName production_iosxr_router
User user
ProxyCommand ssh -W %h:22 productionjumphost
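For reference, each ProxyCommand entry should be roughly equivalent to a manual jump from the laptop, something like:
ssh -J user@productionjumphost.fqdn.com.au user@production_iosxr_router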
inventory:
[local]
192.168.xxx.xxx
[router]
production_iosxr_router ansible_connection=network_cli ansible_user=user ansible_ssh_pass=password
[router:vars]
ansible_network_os=iosxr
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@productionjumphost.fqdn.com.au"'
ansible_user=user
ansible_ssh_pass=password
playbook.yml:
---
- name: Network Getting Started First Playbook
hosts: router
gather_facts: no
connection: network_cli
tasks:
- name: show version
iosxr_command:
commands: show version
I can run an ad-hoc ansible command and a successful ping is returned:
result: ansible production_iosxr_router -i inventory -m ping -vvvvv
production_iosxr_router | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"ping": "pong"
}
running playbook: ansible-playbook -i inventory playbook.yml -vvvvv
production_iosxr_router | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"msg": "[Errno -2] Name or service not known"
}

SSH forwarding does not work for vagrant

I set up ssh params of Vagrant 1.8.1 as described here
In short, I have this in the host's ssh config file:
Host bitbucket.org
Hostname bitbucket.org
IdentityFile ~/.ssh/id_bitbucket
User zuba
ForwardAgent yes
in Vagrantfile:
config.ssh.forward_agent = true
On the host machine ssh-add -L shows the key, while on the Vagrant box it reports that the agent has no identities, and git clone fails due to an authentication failure.
How to solve this issue?
UPDATE 1:
vagrant ssh -c 'ssh-add -l' shows the key
> vagrant ssh-config
Host p4
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/zuba/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
UPDATE 2:
found the duplicate post with no answers vagrant ssh agent forwarding only works for inline commands?
UPDATE 3:
Here it is my Vagrantfile:
Vagrant.configure("2") do |config|
boxes = {
"p4" => "10.2.2.15",
}
boxes.each do |box_name, box_ip|
config.vm.define box_name do |config|
config.vm.box = "trusty-64"
config.vm.box_url = "https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
config.vm.hostname = "p4"
config.vm.network :private_network, ip: box_ip
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.network "forwarded_port", guest: 3001, host: 3001
config.vm.network "forwarded_port", guest: 3002, host: 3002
config.vm.network "forwarded_port", guest: 3003, host: 3003
config.vm.network "forwarded_port", guest: 6379, host: 6379 # Redis
config.vm.provider "virtualbox" do |vb|
vb.gui = false
vb.name = "p4"
# Use VBoxManage to customize the VM. For example to change memory:
vb.customize ["modifyvm", :id, "--memory", "1024"]
end
config.vm.synced_folder "../..", "/home/vagrant/my_src"
config.ssh.forward_agent = true # to use host keys added to agent
# provisioning
config.vm.provision :shell, :inline => "sudo apt-get update"
config.vm.provision "chef_solo" do |chef|
chef.log_level = "info"
chef.environment = "development"
chef.environments_path = "environments"
chef.cookbooks_path = ["cookbooks", "site-cookbooks"]
chef.roles_path = "roles"
chef.data_bags_path = "data_bags"
chef.json.merge!(JSON.parse(IO.read("nodes/#{box_ip}.json")))
end
config.exec.commands '*', directory: '/home/vagrant'
config.exec.commands 'apt-get', prepend: 'sudo'
config.exec.commands %w[rails rspec rake], prepend: 'bundle exec'
end
end
end
Finally I found the post that helped me figure out what prevented Vagrant from using the agent's key.
I had run ssh-add in one GNU screen session while doing vagrant ssh in another screen session. That is why the ssh-agent was effectively 'inaccessible' to Vagrant.
When I added the key and ssh-ed into the Vagrant box in the same screen session, everything started working.
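A quick way to confirm the agent is reachable from the session you run vagrant in (generic checks, not specific to this setup):
echo $SSH_AUTH_SOCK            # should point at this session's agent socket
ssh-add -l                     # should list the bitbucket key
vagrant ssh -c 'ssh-add -l'    # should show the same key inside the box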

How do I add my own public key to Vagrant VM?

I got a problem with adding an ssh key to a Vagrant VM. Basically the setup that I have here works fine. Once the VMs are created, I can access them via vagrant ssh, the user "vagrant" exists and there's an ssh key for this user in the authorized_keys file.
What I'd like to do now is: to be able to connect to those VMs via ssh or use scp. So I would only need to add my public key from id_rsa.pub to the authorized_keys - just like I'd do with ssh-copy-id.
Is there a way to tell Vagrant during the setup that my public key should be included? If not (which is likely, according to my google results), is there a way to easily append my public key during the vagrant setup?
You can use Ruby's core File module, like so:
config.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
This working example appends ~/.ssh/id_rsa.pub to the ~/.ssh/authorized_keys of both the vagrant and root user, which will allow you to use your existing SSH key.
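After provisioning you can then connect with your own key, for example over the forwarded SSH port (2222 is Vagrant's usual default; adjust if yours differs, and somefile is just a placeholder):
ssh -i ~/.ssh/id_rsa -p 2222 vagrant@127.0.0.1
scp -i ~/.ssh/id_rsa -P 2222 somefile vagrant@127.0.0.1:/home/vagrant/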
Copying the desired public key would fall squarely into the provisioning phase. The exact answer depends on what provisioner you fancy using (shell, Chef, Puppet etc). The most trivial would be a file provisioner for the key, something along these lines:
config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/me.pub"
Well, actually you need to append to authorized_keys. Use the shell provisioner, like so:
Vagrant.configure(2) do |config|
# ... other config
config.vm.provision "shell", inline: <<-SHELL
cat /home/vagrant/.ssh/me.pub >> /home/vagrant/.ssh/authorized_keys
SHELL
# ... other config
end
You can also use a true provisioner, like Puppet. For example see Managing SSH Authorized Keys with Puppet.
There's a more "elegant" way of accomplishing what you want to do. You can find the existing private key and use it instead of going through the trouble of adding your public key.
Proceed like this to see the path to the existing private key (look for IdentityFile below):
run vagrant ssh-config
result:
$ vagrant ssh-config
Host magento2.vagrant150
HostName 127.0.0.1
User vagrant
Port 3150
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile "/Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key"
IdentitiesOnly yes
LogLevel FATAL
Then you can use the private key like this; note also the option that switches off password authentication:
ssh -i /Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key -o PasswordAuthentication=no vagrant@127.0.0.1 -p 3150
This excellent answer was added by user76329 in a rejected Suggested Edit
Expanding on Meow's example, we can copy the local public/private ssh keys, set permissions, and make the inline script idempotent (it runs once and will only run again if the test condition fails, i.e. when provisioning is actually needed):
config.vm.provision "shell" do |s|
ssh_prv_key = ""
ssh_pub_key = ""
if File.file?("#{Dir.home}/.ssh/id_rsa")
ssh_prv_key = File.read("#{Dir.home}/.ssh/id_rsa")
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
else
puts "No SSH key found. You will need to remedy this before pushing to the repository."
end
s.inline = <<-SHELL
if grep -sq "#{ssh_pub_key}" /home/vagrant/.ssh/authorized_keys; then
echo "SSH keys already provisioned."
exit 0;
fi
echo "SSH key provisioning."
mkdir -p /home/vagrant/.ssh/
touch /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} > /home/vagrant/.ssh/id_rsa.pub
chmod 644 /home/vagrant/.ssh/id_rsa.pub
echo "#{ssh_prv_key}" > /home/vagrant/.ssh/id_rsa
chmod 600 /home/vagrant/.ssh/id_rsa
chown -R vagrant:vagrant /home/vagrant
exit 0
SHELL
end
Shorter and more correct code would be:
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
config.vm.provision 'shell', inline: 'mkdir -p /root/.ssh'
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys", privileged: false
Otherwise the user's .ssh/authorized_keys will belong to the root user.
Still, it will add a line on every provisioning run, but Vagrant is used for testing and a VM usually has a short life, so it is not a big problem.
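If the duplicate lines bother you, the echo can be guarded with grep. A sketch of the shell part only, where PUBKEY is a placeholder for the expanded #{ssh_pub_key} value:
grep -qxF "$PUBKEY" /home/vagrant/.ssh/authorized_keys || echo "$PUBKEY" >> /home/vagrant/.ssh/authorized_keys  # append only if missing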
I ended up using code like this:
config.ssh.forward_agent = true
config.ssh.insert_key = false
config.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key","~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/$USER/.ssh/authorized_keys
sudo bash -c "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
SHELL
end
Note that we should not hard-code the path /home/vagrant/.ssh/authorized_keys, since some Vagrant boxes do not use the vagrant username.
None of the older posts worked for me, although some came close. I had to make RSA keys with ssh-keygen in the terminal and go with custom keys. In other words, I gave up on using Vagrant's keys.
I'm on macOS Mojave as of the date of this post. I've set up two Vagrant boxes in one Vagrantfile. I'm showing all of the first box so newbies can see the context. I put the .ssh folder in the same folder as the Vagrantfile, otherwise use user9091383's setup.
Credit for this solution goes to this coder.
Vagrant.configure("2") do |config|
config.vm.define "pfbox", primary: true do |pfbox|
pfbox.vm.box = "ubuntu/xenial64"
pfbox.vm.network "forwarded_port", host: 8084, guest: 80
pfbox.vm.network "forwarded_port", host: 8080, guest: 8080
pfbox.vm.network "forwarded_port", host: 8079, guest: 8079
pfbox.vm.network "forwarded_port", host: 3000, guest: 3000
pfbox.vm.provision :shell, path: ".provision/bootstrap.sh"
pfbox.vm.synced_folder "ubuntu", "/home/vagrant"
pfbox.vm.provision "file", source: "~/.gitconfig", destination: "~/.gitconfig"
pfbox.vm.network "private_network", type: "dhcp"
pfbox.vm.network "public_network"
pfbox.ssh.insert_key = false
ssh_key_path = ".ssh/" # This may not be necessary. I may remove.
pfbox.vm.provision "shell", inline: "mkdir -p /home/vagrant/.ssh"
pfbox.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key", ".ssh/id_rsa"]
pfbox.vm.provision "file", source: ".ssh/id_rsa.pub", destination: ".ssh/authorized_keys"
pfbox.vm.box_check_update = "true"
pfbox.vm.hostname = "pfbox"
# VirtualBox
config.vm.provider "virtualbox" do |vb|
# vb.gui = true
vb.name = "pfbox" # friendly name for Oracle VM VirtualBox Manager
vb.memory = 2048 # memory in megabytes 2.0 GB
vb.cpus = 1 # cpu cores, can't be more than the host actually has.
end
end
config.vm.define "dbbox" do |dbbox|
...
This is an excellent thread that helped me solve a similar situation as the original poster describes.
While I ultimately used the settings/logic presented in smartwjw’s answer, I ran into a hitch since I use the VAGRANT_HOME environment variable to keep the core vagrant.d directory on an external hard drive on one of my development systems.
So here is the adjusted code I am using in my Vagrantfile to accommodate a VAGRANT_HOME environment variable being set; the “magic” happens in this line: vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d":
config.ssh.insert_key = false
config.ssh.forward_agent = true
vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d"
config.ssh.private_key_path = ["#{vagrant_home_path}/insecure_private_key", "~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |shell_action|
ssh_public_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
shell_action.inline = <<-SHELL
echo #{ssh_public_key} >> /home/$USER/.ssh/authorized_keys
SHELL
end
For the inline shell provisioners - it is common for a public key to contain spaces, comments, etc. So make sure to put (escaped) quotes around the var that expands to the public key:
config.vm.provision 'shell', inline: "echo \"#{ssh_pub_key}\" >> /home/vagrant/.ssh/authorized_keys", privileged: false
A pretty complete example; hope this helps whoever visits next. All the concrete values are moved to external config files. The IP assignment is just for trying things out.
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'
vmconfig = YAML.load_file('vmconfig.yml')
=begin
Script to create VMs with public IPs; VM creation is governed by the provided
config file.
All Vagrant configuration is done below. The "2" in Vagrant.configure
configures the configuration version (we support older styles for
backwards compatibility). Please don't change it unless you know what
you're doing
Default user `vagrant` is created and its ssh key is overridden. Make sure to have
the files `vagrant_rsa` (private key) and `vagrant_rsa.pub` (public key) in the
path `./.ssh/`
Same files need to be available for all the users you want to create in each of
these VMs
=end
uid_start = vmconfig['uid_start']
ip_start = vmconfig['ip_start']
vagrant_private_key = Dir.pwd + '/.ssh/vagrant_rsa'
guest_sshkeys = '/' + Dir.pwd.split('/')[-1] + '/.ssh/'
Vagrant.configure('2') do |config|
vmconfig['machines'].each do |machine|
config.vm.define "#{machine}" do |node|
ip_start += 1
node.vm.box = vmconfig['vm_box_name']
node.vm.box_version = vmconfig['vm_box_version']
node.vm.box_check_update = false
node.vm.boot_timeout = vmconfig['vm_boot_timeout']
node.vm.hostname = "#{machine}"
node.vm.network "public_network", bridge: "#{vmconfig['bridge_name']}", auto_config: false
node.vm.provision "shell", run: "always", inline: "ifconfig #{vmconfig['ethernet_device']} #{vmconfig['public_ip_part']}#{ip_start} netmask #{vmconfig['subnet_mask']} up"
node.ssh.insert_key = false
node.ssh.private_key_path = ['~/.vagrant.d/insecure_private_key', "#{vagrant_private_key}"]
node.vm.provision "file", source: "#{vagrant_private_key}.pub", destination: "~/.ssh/authorized_keys"
node.vm.provision "shell", inline: <<-EOC
sudo sed -i -e "\\#PasswordAuthentication yes# s#PasswordAuthentication yes#PasswordAuthentication no#g" /etc/ssh/sshd_config
sudo systemctl restart sshd.service
EOC
vmconfig['users'].each do |user|
uid_start += 1
node.vm.provision "shell", run: "once", privileged: true, inline: <<-CREATEUSER
sudo useradd -m -s /bin/bash -U #{user} -u #{uid_start}
sudo mkdir /home/#{user}/.ssh
sudo cp #{guest_sshkeys}#{user}_rsa.pub /home/#{user}/.ssh/authorized_keys
sudo chown -R #{user}:#{user} /home/#{user}
sudo su
echo "%#{user} ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/#{user}
exit
CREATEUSER
end
end
end
end
It's a rather old question, but maybe this will help someone nowadays, hopefully.
What works like a charm for me is:
Vagrant.configure("2") do |config|
config.vm.box = "debian/bullseye64"
config.vm.define "debian-1"
config.vm.hostname = "debian-1"
# config.vm.network "private_network", ip: "192.168.56.2" # this enables Internal network mode for VirtualBox
config.vm.network "private_network", type: "dhcp" # this enables Host-only network mode for VirtualBox
config.vm.network "forwarded_port", guest: 8081, host: 8081 # with this you can hit http://mypc:8081 to load the web service configured in the vm..
config.ssh.host = "mypc" # use the base host's hostname.
config.ssh.insert_key = true # do not use the global public image key.
config.ssh.forward_agent = true # have already the agent keys preconfigured for ease.
config.vm.provision "ansible" do |ansible|
ansible.playbook = "../../../ansible/playbooks/configurations.yaml"
ansible.inventory_path = "../../../ansible/inventory/hosts.ini"
ansible.extra_vars = {
nodes: "#{config.vm.hostname}",
username: "vagrant"
}
ansible.ask_vault_pass = true
end
end
Then my Ansible provisioner playbook/role configurations.yaml contains this:
- name: Create .ssh folder if not exists
file:
state: directory
path: "{{ ansible_env.HOME }}/.ssh"
- name: Add authorised key (for remote connection)
authorized_key:
state: present
user: "{{ username }}"
key: "{{ lookup('file', 'eos_id_rsa.pub') }}"
- name: Add public SSH key in ~/.ssh
copy:
src: eos_id_rsa.pub
dest: "{{ ansible_env.HOME }}/.ssh"
owner: "{{ username }}"
group: "{{ username }}"
- name: Add private SSH key in ~/.ssh
copy:
src: eos_id_rsa
dest: "{{ ansible_env.HOME }}/.ssh"
owner: "{{ username }}"
group: "{{ username }}"
mode: 0600
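The same playbook can also be re-run outside Vagrant while iterating; a sketch that mirrors the extra_vars the provisioner passes (paths as in the Vagrantfile above):
ansible-playbook -i ../../../ansible/inventory/hosts.ini ../../../ansible/playbooks/configurations.yaml -e username=vagrant -e nodes=debian-1 --ask-vault-pass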
Madis Maenni's answer is closest to the best solution. Just do:
vagrant ssh-config >> ~/.ssh/config
chmod 600 ~/.ssh/config
then you can just ssh via hostname.
To get list of hostnames configured in ~/.ssh/config
grep -E '^Host ' ~/.ssh/config
My example:
$ grep -E '^Host' ~/.ssh/config
Host web
Host db
$ ssh web
[vagrant#web ~]$
Generate an RSA key pair for vagrant authentication: ssh-keygen -f ~/.ssh/vagrant
You might also want to add the vagrant identity files to your ~/.ssh/config
IdentityFile ~/.ssh/vagrant
IdentityFile ~/.vagrant.d/insecure_private_key
For some reason we can't just specify the key we want to insert, so we take a few extra
steps and generate a key ourselves. This way we get security and knowledge of exactly
which key we need (plus all Vagrant boxes will get the same key).
Can't ssh to vagrant VMs using the insecure private key (vagrant 1.7.2)
How do I add my own public key to Vagrant VM?
config.ssh.insert_key = false
config.ssh.private_key_path = ['~/.ssh/vagrant', '~/.vagrant.d/insecure_private_key']
config.vm.provision "file", source: "~/.ssh/vagrant.pub", destination: "/home/vagrant/.ssh/vagrant.pub"
config.vm.provision "shell", inline: <<-SHELL
cat /home/vagrant/.ssh/vagrant.pub >> /home/vagrant/.ssh/authorized_keys
mkdir -p /root/.ssh
cat /home/vagrant/.ssh/authorized_keys >> /root/.ssh/authorized_keys
SHELL

(vagrant & ssh) require password

my Vagrantfile:
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty32"
config.vm.box_check_update = false
config.vm.network "forwarded_port", guest: 3000, host: 3000
config.vm.synced_folder "./synced/", "/home/vagrant/"
config.ssh.private_key_path = "~/.ssh/id_rsa"
config.ssh.forward_agent = true
config.vm.provider "virtualbox" do |vb|
vb.memory = "1024"
vb.name = "test Ubuntu 14.04 box"
end
end
When I try to execute
vagrant ssh
ssh requires a password.
But Vagrant should use my local ssh key and not require a password.
I've faced the same issue. The problem is that you're trying to sync into the guest's home folder. I've found the solution here; please refer to that post for more info. You need to change your sync paths.
Instead of
config.vm.synced_folder "./synced/", "/home/vagrant/"
do
config.vm.synced_folder "./synced/", "/home/vagrant/mySyncFolder"
Do you have a line like the one below in your ~/.ssh/config?
PubkeyAcceptedKeyTypes ssh-dss,ssh-rsa
In my case, after removing this line, vagrant ssh stopped asking me for a password.
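If it is unclear which client-side option is interfering, verbose output from the underlying ssh call usually shows it; anything after -- is passed straight to ssh:
vagrant ssh -- -v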