I use Vagrant with a third-party Linux box.
The box has the default vagrant/vagrant credentials.
In my Vagrantfile I want it to use my SSH key, so I have this:
config.vm.provision :shell, :path => "bootstrap.sh"
config.ssh.private_key_path = "~/.ssh/id_rsa"
config.ssh.forward_agent = true
In my bootstrap script I want to add my public key to authorized_keys. This works if I do it post VM creation.
But when I re-provision the VM from scratch, authentication fails: the VM has not yet received the public key through my bootstrap shell script.
How can I have vagrant install my public key in authorized_keys and authenticate with vagrant/vagrant until this has happened? Or is there a better way?
Found something that works
Based on "Vagrant insecure by default?", where we have
config.ssh.private_key_path = ["#{ENV['HOME']}/.ssh/id_rsa", \
"#{ENV['HOME']}/.vagrant.d/insecure_private_key"]
This makes Vagrant try each key in turn until it finds one that works (the example enumerates host file system paths too, which is very nice indeed).
Related
Is there a way to migrate my host's public SSH key to my VM?
The use case is:
I have a user who has a public SSH key which has access to a certain repository.
I am creating a VM that will be distributed to other developers (who have access with their SSH keys to this repository)
I would like to automate git cloning of the repository so it happens during exec-once ..
What should I do that involves as few manual steps as possible?
PS: I am using https://puphpet.com/ to generate the Vagrant machine for me - I am not editing the Vagrantfile directly.
As mentioned in the comments, you can use a Vagrant file provisioner to copy your public key to your VM.
In your Vagrantfile add:
Vagrant.configure("2") do |config|
  # ... other configuration
  config.vm.provision "file", source: "~/.ssh/id_rsa.pub", destination: "/home/vagrant/id_rsa.pub"
end
I'm following a few tutorials to learn Vagrant and Ansible. I get to a point in a tutorial where I have an inventory file of boxes that it will supposedly provision for me:
[loadbalancer]
lb01
[webserver]
app01
app02
[database]
db01
[control]
control ansible_connection=local
Please correct me where I'm wrong, but I think I should have set up the authorized_keys file for each of these machines manually, by running vagrant up followed by vagrant ssh lb01 and placing my public key in authorized_keys by hand. Or is there a quicker way to do this part? I certainly hope so.
If you are using Vagrant, you can use the ansible provisioner.
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
end
Vagrant takes care of setting up the inventory file and the related SSH private keys for you.
If you do want to see what inventory file has been generated, you can find that at
.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
I have a new user in my Vagrant box (trusty64) and I am trying to SSH into it. Instead of logging in as the vagrant user after vagrant up, I want to log in as my own username.
What I have done so far
Created a user in my guest machine.
Created an SSH key on my host using ssh-keygen.
Copied the public key to the guest using ssh-copy-id -p 2222 -i shash@127.0.0.1
The relevant part of my Vagrantfile looks like this:
config.vm.box = "ubuntu/trusty64"
config.ssh.username = "shash"
config.ssh.forward_agent = true
config.ssh.private_key_path = "~/.ssh/authorized_keys"
I can use ssh -p '2222' 'shash@127.0.0.1' to log in directly, but when I run vagrant up I keep getting the following error:
default: Warning: Connection timeout. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
Any help in sorting this out is really appreciated. Thanks!
A complete set-up guide would be really helpful
When you specify '~' in the Vagrantfile, it expands to the home directory of the user running vagrant on the host. Note also that config.ssh.private_key_path must point to a private key, not to an authorized_keys file (that file belongs on the guest):
config.ssh.private_key_path = "/home/shash/.ssh/id_rsa"
Give that a go!
Add it to the Vagrantfile:
Vagrant.configure("2") do |config|
  config.ssh.private_key_path = "~/.ssh/id_rsa"
  config.ssh.forward_agent = true
end
config.ssh.private_key_path is your local private key
Your private key must be available to the local ssh-agent. You can check with ssh-add -L; if it's not listed, add it with ssh-add ~/.ssh/id_rsa.
Don't forget to add your public key to ~/.ssh/authorized_keys on the Vagrant VM. You can do it by copying and pasting or using a tool like ssh-copy-id.
https://stackoverflow.com/a/23554973/3563993
I have a cluster of 3 VMs. Here is the Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
hosts = {
  "host0" => "192.168.33.10",
  "host1" => "192.168.33.11",
  "host2" => "192.168.33.12"
}
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"
  config.ssh.private_key_path = File.expand_path('~/.vagrant.d/insecure_private_key')
  hosts.each do |name, ip|
    config.vm.define name do |machine|
      machine.vm.hostname = "%s.example.org" % name
      machine.vm.network :private_network, ip: ip
      machine.vm.provider "virtualbox" do |v|
        v.name = name
        # v.customize ["modifyvm", :id, "--memory", 200]
      end
    end
  end
end
This used to work until I upgraded recently:
ssh -i ~/.vagrant.d/insecure_private_key vagrant@192.168.33.10
Instead, vagrant asks for a password.
It seems that recent versions of vagrant (I'm on 1.7.2) create a secure private key for each machine. I discovered it by running
vagrant ssh-config
The output shows different keys for each host. I verified the keys are different by diffing them.
I tried to force the insecure key by setting config.ssh.private_key_path in the Vagrantfile, but it doesn't work.
The reason I want to use the insecure key for all machines is that I want to provision them from the outside using ansible. I don't want to use the Ansible provisioner, but treat the VMs as remote servers. So, the Vagrantfile is just used to specify the machines in the cluster and then provisioning will be done externally.
The documentation still says that by default machines will use the insecure private key.
How can I make my VMs use the insecure private key?
Vagrant changed this behaviour between versions 1.6 and 1.7: it now inserts an auto-generated key in place of the default insecure one.
You can cancel this behaviour by setting config.ssh.insert_key = false in your Vagrantfile.
Vagrant shouldn't replace the key if you specify your own private_key_path like you did; however, the internal logic checks whether the private_key_path points to the default insecure_private_key, and if it does, Vagrant will replace it.
More info can be found here.
When Vagrant creates a new SSH key it is saved, with the default configuration, below the Vagrantfile directory at .vagrant/machines/default/virtualbox/private_key.
Using the auto-generated key you can log in from the same directory as the Vagrantfile like this:
ssh -i .vagrant/machines/default/virtualbox/private_key -p 2222 vagrant@localhost
To see the full details of the actual SSH configuration of a Vagrant box, use the vagrant ssh-config command.
# vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/babo/src/centos/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
After adding config.ssh.insert_key = false to the Vagrantfile and removing the new VM private key .vagrant/machines/default/virtualbox/private_key, vagrant ssh-config automatically picks up the correct private key ~/.vagrant.d/insecure_private_key again. The last thing I had to do was SSH into the VM and update its authorized_keys file:
curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub > ~/.ssh/authorized_keys
tl;dr:
ssh vagrant@127.0.0.1 -p 2222 -i ~/www/vw/vw-environment/.vagrant/machines/default/virtualbox/private_key
I couldn't get this to work, so in the end I added the following to the ssh.rb Ruby script (/opt/vagrant/embedded/gems/gems/vagrant-1.7.1/lib/vagrant/util/ssh.rb):
print(*command_options)
just before this line that executes the ssh call
SafeExec.exec("ssh", *command_options)
That prints out all the command options passed to the ssh call; from there you can work out something that works for you, based on what Vagrant calculates to be the correct ssh parameters.
If you are specifically using Ansible (not the Vagrant Ansible provisioner), you might want to consider using the vagrant dynamic inventory script from Ansible's repo:
https://github.com/ansible/ansible/blob/devel/contrib/inventory/vagrant.py
Alternatively, you can handcraft your own script and dynamically build your own vagrant inventory file:
SYSTEMS=$(vagrant status | grep running | cut -d ' ' -f1)
echo '[vagrant_systems]' > vagrant.ini
for SYSTEM in ${SYSTEMS}; do
  SSHCONFIG=$(vagrant ssh-config ${SYSTEM})
  IDENTITY_FILE=$(echo "${SSHCONFIG}" | grep -o "\/.*${SYSTEM}.*")
  PORT=$(echo "${SSHCONFIG}" | grep -oE '[0-9]{4,5}')
  echo "${SYSTEM} ansible_ssh_host=127.0.0.1 ansible_ssh_port=${PORT} ansible_ssh_private_key_file=${IDENTITY_FILE}" >> vagrant.ini
done
Then use ansible-playbook -i vagrant.ini
If you try to use ~/.ssh/config, you'll have to dynamically create or edit the existing entries, as the SSH ports can change (due to the collision detection in Vagrant).
Rather than create a new SSH key pair on a Vagrant box, I would like to re-use the key pair I have on my host machine, using agent forwarding. I've tried setting config.ssh.forward_agent to true in the Vagrantfile, then rebooted the VM, and tried using:
vagrant ssh -- -A
...but I'm still getting prompted for a password when I try to do a git checkout. Any idea what I'm missing?
I'm using vagrant 2 on OS X Mountain Lion.
Vagrant.configure("2") do |config|
  config.ssh.private_key_path = "~/.ssh/id_rsa"
  config.ssh.forward_agent = true
end
config.ssh.private_key_path is your local private key
Your private key must be available to the local ssh-agent. You can check with ssh-add -L; if it's not listed, add it with ssh-add ~/.ssh/id_rsa.
Don't forget to add your public key to ~/.ssh/authorized_keys on the Vagrant VM. You can do it by copying and pasting or using a tool like ssh-copy-id.
Add it to the Vagrantfile:
Vagrant::Config.run do |config|
  # stuff
  config.ssh.forward_agent = true
end
See the docs
In addition to adding config.ssh.forward_agent = true to the Vagrantfile, make sure the host computer is set up for agent forwarding. GitHub provides a good guide for this (check out the troubleshooting section).
I had this working with the above replies on 1.4.3, but it stopped working on 1.5. I now have to run ssh-add to make it work fully with 1.5.
For now I add the following line to my ansible provisioning script.
- name: Make sure ssh keys are passed to guest.
local_action: command ssh-add
I've also created a gist of my setup: https://gist.github.com/KyleJamesWalker/9538912
If you are on Windows, SSH forwarding in Vagrant does not work properly by default (because of a bug in net-ssh). See this particular Vagrant bug report: https://github.com/mitchellh/vagrant/issues/1735
However, there is a workaround! Simply auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your Vagrantfile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
When we recently tried out the vagrant-aws plugin with Vagrant 1.1.5, we ran into an issue with SSH agent forwarding. It turned out that Vagrant was forcing IdentitiesOnly=yes without an option to change it to no. This forced Vagrant to only look at the private key we listed in the Vagrantfile for the AWS provider.
I wrote up our experiences in a blog post. It may turn into a pull request at some point.
Make sure that the VM does not launch its own SSH agent. I had this line in my ~/.profile
eval `ssh-agent`
After removing it, SSH agent forwarding worked.
The real problem is that Vagrant uses 127.0.0.1:2222 as the default port-forward.
You can add another one (not 2222; 2222 is already occupied by default):
config.vm.network "forwarded_port", guest: 22, host: 2333, host_ip: "0.0.0.0"
host_ip "0.0.0.0" is what makes the port accept requests from external connections.
Then
ssh -p 2333 vagrant@192.168.2.101 (change to your own host IP address)
will work just fine.
On Windows, the problem is that Vagrant doesn't know how to communicate with git-bash's ssh-agent. It does, however, know how to use PuTTY's Pageant. So, as long as Pageant is running and has loaded your SSH key, and as long as you've set config.ssh.forward_agent, this should work.
See this comment for details.
If you use Pageant, then the workaround of updating the Vagrantfile to copy SSH keys on Windows is no longer necessary.