Vagrant and Ansible provisioning from Cygwin - SSH

I run Ansible as a provisioning tool from Vagrant in Cygwin. ansible-playbook runs correctly from the command line, and also from Vagrant with a small hack.
My question is: how do I specify a hosts file to Vagrant, to work around the issue below?
[16:18:23 ~/Vagrant/Exercice 567 ]$ vagrant provision
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Reading package lists...
==> haproxy1: Building dependency tree...
==> haproxy1: Reading state information...
==> haproxy1: curl is already the newest version.
==> haproxy1: 0 upgraded, 0 newly installed, 0 to remove and 66 not upgraded.
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: ansible...
PYTHONUNBUFFERED=1 ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_NOCOLOR=true ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --user=vagrant --connection=ssh --timeout=30 --limit='haproxy' --inventory-file=C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory --extra-vars={"ansible_ssh_user":"root"} -vvvv ./haproxy.yml
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: haproxy.yml **********************************************************
1 plays in ./haproxy.yml
PLAY [haproxy] *****************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************
[WARNING]: Host file not found:
C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory
[WARNING]: provided hosts list is empty, only localhost is available
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.provision :shell, :inline => 'rm -fr /root/.ssh && sudo mkdir /root/.ssh'
config.vm.provision :shell, :inline => 'apt-get install -y curl'
config.vm.provision :shell, :inline => 'curl -sS http://www.ngstones.com/id_rsa.pub >> /root/.ssh/authorized_keys'
config.vm.provision :shell, :inline => "chmod -R 644 /root/.ssh"
#config.vm.synced_folder ".", "/vagrant", type: "rsync"
config.vm.provider "virtualbox" do |v|
v.customize ["modifyvm", :id, "--memory", 256]
end
config.vm.define :haproxy1, primary: true do |haproxy1_config|
haproxy1_config.vm.hostname = 'haproxy1'
haproxy1_config.vm.network :public_network, ip: "192.168.1.10"
haproxy1_config.vm.provision "ansible" do |ansible|
ansible.groups = {
"web" => ["web1, web2"],
"haproxy" => ["haproxy"]
}
ansible.extra_vars = { ansible_ssh_user: 'root' }
ansible.limit = ["haproxy"]
ansible.verbose = "vvvv"
ansible.playbook = "./haproxy.yml"
#ansible.inventory_path = "/etc/ansible/hosts"
end
# https://docs.vagrantup.com/v2/vagrantfile/tips.html
(1..2).each do |i|
config.vm.define "web#{i}" do |node|
#node.vm.box = "ubuntu/trusty64"
#node.vm.box = "ubuntu/precise32"
node.vm.hostname = "web#{i}"
node.vm.network :private_network, ip: "192.168.1.1#{i}"
node.vm.network "forwarded_port", guest: 80, host: "808#{i}"
node.vm.provider "virtualbox" do |vb|
vb.memory = "256"
end
end
end
end
end

It's because the inventory path starts with a C:/ drive letter, and Ansible running inside Cygwin can't handle that.
See related issue here:
https://github.com/mitchellh/vagrant/issues/6607
I just discovered this "ansible-playbook-shim"; its PR #5 is supposed to fix that (but I haven't tried it):
https://github.com/rivaros/ansible-playbook-shim/pull/5

I believe your inventory is not accessible from the Vagrant environment. All you need to do is put the inventory in the Vagrant shared folder; it will then be available under /vagrant.
Hope this helps.
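Building on both suggestions, here is a minimal, untested sketch. It assumes you keep a hand-written static inventory file (called inventory.ini here, a name chosen for illustration) next to the Vagrantfile, and that Vagrant passes the relative ansible.inventory_path through to ansible-playbook unchanged, so the Cygwin-hostile C:/ path never appears:
# inventory.ini (hypothetical name), kept next to the Vagrantfile:
#   [haproxy]
#   haproxy1 ansible_ssh_host=192.168.1.10 ansible_ssh_user=root
#
#   [web]
#   web1 ansible_ssh_host=192.168.1.11
#   web2 ansible_ssh_host=192.168.1.12

haproxy1_config.vm.provision "ansible" do |ansible|
  ansible.playbook       = "./haproxy.yml"
  ansible.limit          = "haproxy"
  ansible.verbose        = "vvvv"
  ansible.extra_vars     = { ansible_ssh_user: 'root' }
  # Relative path to the static inventory above, instead of the
  # auto-generated C:/... inventory that Ansible under Cygwin rejects.
  ansible.inventory_path = "inventory.ini"
end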

Related

Rsync error in Vagrant 2.2.3 (IPC code) when updating

I have an issue when updating a Vagrant box (Vagrant 2.2.3 and Windows 10).
The cause of the error is rsync; it can't synchronize (so my shared folders are not working, I think):
Command: "rsync" "--verbose" "--archive" "--delete" "-z" "--copy-links" "--chmod=ugo=rwX" "--no-perms" "--no-owner" "--no-group" "--rsync-path" "sudo rsync" "-e" "ssh -p 2222 -o LogLevel=FATAL -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/my_user/boxes-puphpet/debian/.vagrant/machines/default/virtualbox/private_key'" "--exclude" ".vagrant/" "/cygdrive/c/Users/my_user/boxes-puphpet/debian/" "vagrant#127.0.0.1:/vagrant"
Error: rsync: pipe: Connection timed out (116)
rsync error: error in IPC code (code 14) at pipe.c(59) [sender=3.1.3]
INFO interface: Machine: error-exit ["Vagrant::Errors::RSyncError", "There was an error when attempting to rsync a synced folder.\nPlease inspect the error message below for more info.\n\nHost path: /cygdrive/c/Users/my_user/boxes-puphpet/debian/\nGuest path: /vagrant\nCommand: \"rsync\" \"--verbose\" \"--archive\" \"--delete\" \"-z\" \"--copy-links\" \"--chmod=ugo=rwX\" \"--no-perms\" \"--no-owner\" \"--no-group\" \"--rsync-path\" \"sudo rsync\" \"-e\" \"ssh -p 2222 -o LogLevel=FATAL -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/my_user/boxes-puphpet/debian/.vagrant/machines/default/virtualbox/private_key'\" \"--exclude\" \".vagrant/\" \"/cygdrive/c/Users/my_user/boxes-puphpet/debian/\" \"vagrant@127.0.0.1:/vagrant\"\nError: rsync: pipe: Connection timed out (116)\nrsync error: error in IPC code (code 14) at pipe.c(59) [sender=3.1.3]\n"]
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "debian/jessie64"
config.vm.box_version = "8.10.0"
config.vm.network "private_network", ip: "192.168.56.222"
config.vm.synced_folder "C:/Users/f.pestre/www/debian.vm/www/", "/var/www"
config.vm.provider "virtualbox" do |vb|
vb.memory = "4048"
end
#config.vm.provision :shell, path: "bootstrap.sh"
end
I can log in with vagrant ssh, but the synced folder doesn't work at all.
Add the following to your Vagrantfile:
config.vm.synced_folder '.', '/vagrant', disabled: true
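For context, a minimal sketch of how that line fits into the Vagrantfile from the question (this assumes nothing under /vagrant is needed in the guest; only the default project-folder sync, which is the rsync step that times out, is disabled, while the explicit /var/www share is kept):
Vagrant.configure("2") do |config|
  config.vm.box = "debian/jessie64"
  config.vm.box_version = "8.10.0"
  config.vm.network "private_network", ip: "192.168.56.222"

  # Disable the default "." -> /vagrant sync (the failing rsync step).
  config.vm.synced_folder ".", "/vagrant", disabled: true

  # Keep the share that is actually used.
  config.vm.synced_folder "C:/Users/f.pestre/www/debian.vm/www/", "/var/www"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4048"
  end
end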

Vagrant ssh stuck with "default: Warning: Connection timeout. Retrying..."

I am running Vagrant (1.7.4) with Salt on VirtualBox 4.3 on a headless Ubuntu 14.04. Salt is standalone (masterless). The reason I am using these versions is that they work on my local Ubuntu.
On vagrant up I get the following output:
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: drupal_default_1452863894453_19933
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2201.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 => 2201 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2201
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
vagrant ssh-config gives:
Host default
HostName 127.0.0.1
User vagrant
Port 2201
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/user/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
My Vagrantfile is:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.host_name = "site#{rand(0..999999)}"
config.vm.provider "virtualbox" do |v|
config.ssh.insert_key = false
v.memory = 2048
v.cpus = 1
end
## For masterless, mount your salt file root
config.vm.synced_folder "salt/roots/","/srv/salt/"
# Network
config.vm.network :private_network, ip: "172.16.0.100"
# Server provisioner
config.vm.provision :salt do |salt|
salt.masterless = true
salt.minion_config = "salt/minion"
salt.run_highstate = true
salt.bootstrap_options = "-P"
end
# Provisioning scripts
config.vm.provision "dbsync", type: "shell", path: "provision/db.sh"
end
What could I have missed? Any Ubuntu network configuration? Any SSH configuration?

Vagrant permissions issue

Here is my Vagrantfile. The problem is that inside /var/www I can't set a particular folder's permissions using the configuration below. For example, the /var/www/sample folder must be set to 777 permissions, but I can't do it, using either the root or the vagrant account. I tried changing the mount type to rsync; still the same problem.
# -*- mode: ruby -*-
# vi: set ft=ruby :
# check and install required Vagrant plugins
required_plugins = ["vagrant-hostmanager"]
required_plugins.each do |plugin|
if Vagrant.has_plugin?(plugin) then
system "echo OK: #{plugin} already installed"
else
system "echo Not installed required plugin: #{plugin} ..."
system "vagrant plugin install #{plugin}"
end
end
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
config.vm.provision :shell,
keep_color: true,
path: "provision/setup.sh"
config.vm.box_check_update = true
config.vm.network "private_network", ip: "192.168.56.10"
config.vm.synced_folder '.', '/vagrant', disabled: true
config.vm.synced_folder "./", "/var/www", create: true, group: "vagrant", owner: "vagrant", type: "rsync"
config.vm.provider "virtualbox" do |vb|
vb.name = "Web Server"
vb.gui = false
vb.memory = "512"
end
end
What am I doing wrong?
Change it to:
config.vm.synced_folder "./", "/var/www", group: "vagrant", owner: "vagrant", mount_options: ["dmode=777", "fmode=664"]
This mounts directories with mode 777 and files with mode 664; you can adjust those values based on your needs.
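Note that dmode/fmode are options of the VirtualBox (vboxsf) shared-folder mount, so this assumes you also drop type: "rsync" from the original line (the replacement above already omits it) and let the folder be mounted the default VirtualBox way. A sketch of the adjusted line in the context of the question's Vagrantfile:
# vboxsf mount: directories get mode 777, files get mode 664.
config.vm.synced_folder "./", "/var/www",
  create: true,
  owner: "vagrant", group: "vagrant",
  mount_options: ["dmode=777", "fmode=664"]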

How do I add my own public key to Vagrant VM?

I got a problem with adding an ssh key to a Vagrant VM. Basically the setup that I have here works fine. Once the VMs are created, I can access them via vagrant ssh, the user "vagrant" exists and there's an ssh key for this user in the authorized_keys file.
What I'd like to do now is: to be able to connect to those VMs via ssh or use scp. So I would only need to add my public key from id_rsa.pub to the authorized_keys - just like I'd do with ssh-copy-id.
Is there a way to tell Vagrant during the setup that my public key should be included? If not (which is likely, according to my google results), is there a way to easily append my public key during the vagrant setup?
You can use Ruby's core File module, like so:
config.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
This working example appends ~/.ssh/id_rsa.pub to the ~/.ssh/authorized_keys of both the vagrant and root user, which will allow you to use your existing SSH key.
Copying the desired public key would fall squarely into the provisioning phase. The exact answer depends on which provisioner you fancy using (shell, Chef, Puppet, etc.). The most trivial would be a file provisioner for the key, something along these lines:
config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "~/.ssh/me.pub"
Well, actually you need to append to authorized_keys. Use the shell provisioner, like so:
Vagrant.configure(2) do |config|
# ... other config
config.vm.provision "shell", inline: <<-SHELL
cat /home/vagrant/.ssh/me.pub >> /home/vagrant/.ssh/authorized_keys
SHELL
# ... other config
end
You can also use a true provisioner, like Puppet. For example see Managing SSH Authorized Keys with Puppet.
There's a more "elegant" way of accomplishing what you want to do. You can find the existing private key and use it instead of going through the trouble of adding your public key.
Proceed like this to see the path to existing private key (look below for IdentityFile):
run vagrant ssh-config
result:
$ vagrant ssh-config
Host magento2.vagrant150
HostName 127.0.0.1
User vagrant
Port 3150
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile "/Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key"
IdentitiesOnly yes
LogLevel FATAL
Then you can use the private key like this; note also the switch for turning off password authentication:
ssh -i /Users/madismanni/m2/vagrant-magento/.vagrant/machines/magento2.vagrant150/virtualbox/private_key -o PasswordAuthentication=no vagrant@127.0.0.1 -p 3150
This excellent answer was added by user76329 in a rejected Suggested Edit
Expanding on Meow's example, we can copy the local public/private SSH keys, set permissions, and make the inline script idempotent (it runs once and only runs again if the test condition fails, i.e. when provisioning is actually needed):
config.vm.provision "shell" do |s|
ssh_prv_key = ""
ssh_pub_key = ""
if File.file?("#{Dir.home}/.ssh/id_rsa")
ssh_prv_key = File.read("#{Dir.home}/.ssh/id_rsa")
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
else
puts "No SSH key found. You will need to remedy this before pushing to the repository."
end
s.inline = <<-SHELL
if grep -sq "#{ssh_pub_key}" /home/vagrant/.ssh/authorized_keys; then
echo "SSH keys already provisioned."
exit 0;
fi
echo "SSH key provisioning."
mkdir -p /home/vagrant/.ssh/
touch /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} > /home/vagrant/.ssh/id_rsa.pub
chmod 644 /home/vagrant/.ssh/id_rsa.pub
echo "#{ssh_prv_key}" > /home/vagrant/.ssh/id_rsa
chmod 600 /home/vagrant/.ssh/id_rsa
chown -R vagrant:vagrant /home/vagrant
exit 0
SHELL
end
Shorter and more correct code would be:
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
config.vm.provision 'shell', inline: 'mkdir -p /root/.ssh'
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
config.vm.provision 'shell', inline: "echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys", privileged: false
Otherwise the user's .ssh/authorized_keys will belong to the root user.
It will still add a line on every provision run, but Vagrant is used for testing and a VM usually has a short life, so it's not a big problem.
I ended up using code like:
config.ssh.forward_agent = true
config.ssh.insert_key = false
config.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key","~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/$USER/.ssh/authorized_keys
sudo bash -c "echo #{ssh_pub_key} >> /root/.ssh/authorized_keys"
SHELL
end
Note that we should not hard-code the path to /home/vagrant/.ssh/authorized_keys, since some Vagrant boxes do not use the vagrant username.
None of the older posts worked for me, although some came close. I had to make RSA keys with ssh-keygen in the terminal and go with custom keys. In other words, I gave up on using Vagrant's keys.
I'm on macOS Mojave as of the date of this post. I've set up two Vagrant boxes in one Vagrantfile. I'm showing all of the first box so newbies can see the context. I put the .ssh folder in the same folder as the Vagrantfile; otherwise use user9091383's setup.
Credit for this solution goes to this coder.
Vagrant.configure("2") do |config|
config.vm.define "pfbox", primary: true do |pfbox|
pfbox.vm.box = "ubuntu/xenial64"
pfbox.vm.network "forwarded_port", host: 8084, guest: 80
pfbox.vm.network "forwarded_port", host: 8080, guest: 8080
pfbox.vm.network "forwarded_port", host: 8079, guest: 8079
pfbox.vm.network "forwarded_port", host: 3000, guest: 3000
pfbox.vm.provision :shell, path: ".provision/bootstrap.sh"
pfbox.vm.synced_folder "ubuntu", "/home/vagrant"
pfbox.vm.provision "file", source: "~/.gitconfig", destination: "~/.gitconfig"
pfbox.vm.network "private_network", type: "dhcp"
pfbox.vm.network "public_network"
pfbox.ssh.insert_key = false
ssh_key_path = ".ssh/" # This may not be necessary. I may remove.
pfbox.vm.provision "shell", inline: "mkdir -p /home/vagrant/.ssh"
pfbox.ssh.private_key_path = ["~/.vagrant.d/insecure_private_key", ".ssh/id_rsa"]
pfbox.vm.provision "file", source: ".ssh/id_rsa.pub", destination: ".ssh/authorized_keys"
pfbox.vm.box_check_update = "true"
pfbox.vm.hostname = "pfbox"
# VirtualBox
config.vm.provider "virtualbox" do |vb|
# vb.gui = true
vb.name = "pfbox" # friendly name for Oracle VM VirtualBox Manager
vb.memory = 2048 # memory in megabytes 2.0 GB
vb.cpus = 1 # cpu cores, can't be more than the host actually has.
end
end
config.vm.define "dbbox" do |dbbox|
...
This is an excellent thread that helped me solve a similar situation to the one the original poster describes.
While I ultimately used the settings/logic presented in smartwjw’s answer, I ran into a hitch since I use the VAGRANT_HOME environment variable to save the core vagrant.d directory stuff on an external hard drive on one of my development systems.
So here is the adjusted code I am using in my Vagrantfile to accommodate a VAGRANT_HOME environment variable being set; the “magic” happens in this line: vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d":
config.ssh.insert_key = false
config.ssh.forward_agent = true
vagrant_home_path = ENV["VAGRANT_HOME"] ||= "~/.vagrant.d"
config.ssh.private_key_path = ["#{vagrant_home_path}/insecure_private_key", "~/.ssh/id_rsa"]
config.vm.provision :shell, privileged: false do |shell_action|
ssh_public_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
shell_action.inline = <<-SHELL
echo #{ssh_public_key} >> /home/$USER/.ssh/authorized_keys
SHELL
end
For the inline shell provisioners: it is common for a public key to contain spaces, comments, etc., so make sure to put (escaped) quotes around the variable that expands to the public key:
config.vm.provision 'shell', inline: "echo \"#{ssh_pub_key}\" >> /home/vagrant/.ssh/authorized_keys", privileged: false
A pretty complete example; hope this helps whoever visits next. All the concrete values are moved to an external config file. The IP assignment is just for trying things out.
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'
vmconfig = YAML.load_file('vmconfig.yml')
=begin
Script to create VMs with public IPs; VM creation is governed by the provided
config file.
All Vagrant configuration is done below. The "2" in Vagrant.configure
configures the configuration version (we support older styles for
backwards compatibility). Please don't change it unless you know what
you're doing.
The default user `vagrant` is created and its ssh key is overridden. Make sure
to have the files `vagrant_rsa` (private key) and `vagrant_rsa.pub` (public key)
in the path `./.ssh/`.
The same files need to be available for all the users you want to create in
each of these VMs.
=end
uid_start = vmconfig['uid_start']
ip_start = vmconfig['ip_start']
vagrant_private_key = Dir.pwd + '/.ssh/vagrant_rsa'
guest_sshkeys = '/' + Dir.pwd.split('/')[-1] + '/.ssh/'
Vagrant.configure('2') do |config|
vmconfig['machines'].each do |machine|
config.vm.define "#{machine}" do |node|
ip_start += 1
node.vm.box = vmconfig['vm_box_name']
node.vm.box_version = vmconfig['vm_box_version']
node.vm.box_check_update = false
node.vm.boot_timeout = vmconfig['vm_boot_timeout']
node.vm.hostname = "#{machine}"
node.vm.network "public_network", bridge: "#{vmconfig['bridge_name']}", auto_config: false
node.vm.provision "shell", run: "always", inline: "ifconfig #{vmconfig['ethernet_device']} #{vmconfig['public_ip_part']}#{ip_start} netmask #{vmconfig['subnet_mask']} up"
node.ssh.insert_key = false
node.ssh.private_key_path = ['~/.vagrant.d/insecure_private_key', "#{vagrant_private_key}"]
node.vm.provision "file", source: "#{vagrant_private_key}.pub", destination: "~/.ssh/authorized_keys"
node.vm.provision "shell", inline: <<-EOC
sudo sed -i -e "\\#PasswordAuthentication yes# s#PasswordAuthentication yes#PasswordAuthentication no#g" /etc/ssh/sshd_config
sudo systemctl restart sshd.service
EOC
vmconfig['users'].each do |user|
uid_start += 1
node.vm.provision "shell", run: "once", privileged: true, inline: <<-CREATEUSER
sudo useradd -m -s /bin/bash -U #{user} -u #{uid_start}
sudo mkdir /home/#{user}/.ssh
sudo cp #{guest_sshkeys}#{user}_rsa.pub /home/#{user}/.ssh/authorized_keys
sudo chown -R #{user}:#{user} /home/#{user}
sudo su
echo "%#{user} ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/#{user}
exit
CREATEUSER
end
end
end
end
It's rather an old question, but maybe this will help someone nowadays.
What works like a charm for me is:
Vagrant.configure("2") do |config|
config.vm.box = "debian/bullseye64"
config.vm.define "debian-1"
config.vm.hostname = "debian-1"
# config.vm.network "private_network", ip: "192.168.56.2" # this enables Internal network mode for VirtualBox
config.vm.network "private_network", type: "dhcp" # this enables Host-only network mode for VirtualBox
config.vm.network "forwarded_port", guest: 8081, host: 8081 # with this you can hit http://mypc:8081 to load the web service configured in the vm..
config.ssh.host = "mypc" # use the base host's hostname.
config.ssh.insert_key = true # do not use the global public image key.
config.ssh.forward_agent = true # have already the agent keys preconfigured for ease.
config.vm.provision "ansible" do |ansible|
ansible.playbook = "../../../ansible/playbooks/configurations.yaml"
ansible.inventory_path = "../../../ansible/inventory/hosts.ini"
ansible.extra_vars = {
nodes: "#{config.vm.hostname}",
username: "vagrant"
}
ansible.ask_vault_pass = true
end
end
Then my Ansible provisioner playbook/role configurations.yaml contains this:
- name: Create .ssh folder if not exists
file:
state: directory
path: "{{ ansible_env.HOME }}/.ssh"
- name: Add authorised key (for remote connection)
authorized_key:
state: present
user: "{{ username }}"
key: "{{ lookup('file', 'eos_id_rsa.pub') }}"
- name: Add public SSH key in ~/.ssh
copy:
src: eos_id_rsa.pub
dest: "{{ ansible_env.HOME }}/.ssh"
owner: "{{ username }}"
group: "{{ username }}"
- name: Add private SSH key in ~/.ssh
copy:
src: eos_id_rsa
dest: "{{ ansible_env.HOME }}/.ssh"
owner: "{{ username }}"
group: "{{ username }}"
mode: 0600
Madis Maenni's answer is closest to the best solution:
just do:
vagrant ssh-config >> ~/.ssh/config
chmod 600 ~/.ssh/config
Then you can just ssh via hostname.
To get the list of hostnames configured in ~/.ssh/config:
grep -E '^Host ' ~/.ssh/config
My example:
$ grep -E '^Host' ~/.ssh/config
Host web
Host db
$ ssh web
[vagrant@web ~]$
Generate an RSA key pair for vagrant authentication: ssh-keygen -f ~/.ssh/vagrant
You might also want to add the vagrant identity files to your ~/.ssh/config:
IdentityFile ~/.ssh/vagrant
IdentityFile ~/.vagrant.d/insecure_private_key
For some reason we can't just specify the key we want to insert, so we take a
few extra steps and generate a key ourselves. This way we get security and
knowledge of exactly which key we need (plus all vagrant boxes will get the same key).
Can't ssh to vagrant VMs using the insecure private key (vagrant 1.7.2)
How do I add my own public key to Vagrant VM?
config.ssh.insert_key = false
config.ssh.private_key_path = ['~/.ssh/vagrant', '~/.vagrant.d/insecure_private_key']
config.vm.provision "file", source: "~/.ssh/vagrant.pub", destination: "/home/vagrant/.ssh/vagrant.pub"
config.vm.provision "shell", inline: <<-SHELL
cat /home/vagrant/.ssh/vagrant.pub >> /home/vagrant/.ssh/authorized_keys
mkdir -p /root/.ssh
cat /home/vagrant/.ssh/authorized_keys >> /root/.ssh/authorized_keys
SHELL

Cannot SSH to a Docker-provided container with Vagrant; vagrant ssh doesn't work either

I am fairly new to Vagrant and Docker both.
What I am trying to do here is to get a container provided via docker in Vagrant and install a small webapp using the shell provisioner.
Here is my Vagrantfile
Vagrant.configure(2) do |config|
# config.vm.provision :shell, path: "bootstrap.sh"
config.vm.provision :shell, inline: 'echo Hi there !!!'
config.vm.provider :docker do |d|
d.name="appEnvironment"
d.image = "phusion/baseimage"
d.remains_running = true
d.has_ssh = true
d.cmd = ["/sbin/my_init","--enable-insecure-key"]
end
end
The problem I am facing here is that after the container is created, it keeps retrying the following and eventually just stops.
I can see a running Docker container when I type docker ps, but it hasn't run the provisioning part. I am assuming it is because the SSH wasn't successful.
==> default: Creating the container...
default: Name: appEnvironment
default: Image: phusion/baseimage
default: Cmd: /sbin/my_init --enable-insecure-key
default: Volume: /home/devops/vagrantBoxForDemo:/vagrant
default: Port: 127.0.0.1:2222:22
default:
default: Container created: 56a87b7cd10c22fe
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 172.17.0.50:22
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection refused. Retrying...
default: Warning: Connection refused. Retrying...
default: Warning: Connection refused. Retrying...
Can someone let me know where I might be wrong? I tried changing the image as well, but without success.
First download the insecure key provided by phusion from:
https://github.com/phusion/baseimage-docker/blob/master/image/insecure_key
* Remember the insecure key should be used only for development purposes.
Now, you need to enable ssh by adding the following into your Dockerfile:
FROM phusion/baseimage
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN /usr/sbin/enable_insecure_key
Enable ssh and specify the key file in your Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.define "app" do |app|
app.vm.provider "docker" do |d|
d.build_dir = "."
d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
d.has_ssh = true
end
end
config.ssh.username = "root"
config.ssh.private_key_path = "path/to/your/insecure_key"
end
Bring your environment up:
vagrant up
Now you should be able to access your container via SSH:
vagrant ssh app
phusion/baseimage does not have the insecure private key enabled by default. You have to create your own base image FROM phusion/baseimage with the following:
RUN /usr/sbin/enable_insecure_key