I've narrowed down this question - regarding a MySQL over SSH connection only working once - to a conflicting line in my host computer's known_hosts file.
Essentially, I cannot get into my database GUI of choice because the host key is different for the same IP address (after re-provisioning, reloading, etc.).
Once I delete any offending lines, I can get in just fine.
So, how can I modify the host machine's ~/.ssh/known_hosts file through the shell provisioner I'm using with Vagrant?
EDIT:
I found a temporary fix that involves creating a ~/.ssh/config file on the host (this assumes the VM uses a private IP address):
Host 192.168.*.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
That should let you in. It is not a real fix, though, as this kind of blanket setting can be a security concern. Look below for a much better answer.
Sorry for taking you away from what you need!
Changing HOST files from the Vagrantfile:
What you actually want is very simple. The Vagrantfile is interpreted by Vagrant each time you run a vagrant command. Since it is regular Ruby code, if you want to change a file on the host, all you need to do is put Ruby code that performs the change into the Vagrantfile. Here's the example code I've put at the end of my Vagrantfile:
require 'tempfile'
require 'fileutils'

# File.open does not expand '~' itself, so resolve the full path first.
path = File.expand_path('~/.ssh/known_hosts')
temp_file = Tempfile.new('known_hosts')
begin
  # Copy every line except the ones matching the pattern below.
  File.open(path, 'r') do |file|
    file.each_line do |line|
      temp_file.puts line unless line =~ /REGEX_OF_LINE_YOU_WANT_TO_EXCLUDE/
    end
  end
  # Flush and close the temp file before moving it over the original.
  temp_file.close
  FileUtils.mv(temp_file.path, path)
ensure
  temp_file.close unless temp_file.closed?
  temp_file.unlink
end
Remember to edit the code above, replacing REGEX_OF_LINE_YOU_WANT_TO_EXCLUDE with your own pattern.
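For illustration, with a hypothetical VM at 192.168.33.10, the filter line in the loop above might become:

temp_file.puts line unless line =~ /^192\.168\.33\.10[\s,]/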
Hope I at least partially fixed my mistake by providing this answer :)
Original Answer
For anyone's (mine as well) future reference, I'm leaving part of the answer that refers to changing GUEST OS files or copying files to GUEST OS:
Vagrant provides a couple of provisioners.
Solution number 1: For a simple file copy you may use the Vagrant file provisioner. The following code in your Vagrantfile would copy ~/known_hosts.template from your host system to the VM's /home/vagrant/.ssh/known_hosts:
# Context -------------------------------------------------------
Vagrant.configure('2') do |config|
  # ...
  # This is the section to add ----------------------------------
  config.vm.provision :file do |file|
    file.source      = '~/known_hosts.template'
    file.destination = '/home/vagrant/.ssh/known_hosts'
  end
  #--------------------------------------------------------------
end
The file provisioner is poorly documented on the Vagrant site, so thanks go to @tmatilai, who answered a similar question on Server Fault.
Keep in mind that you should use an absolute path in the destination field, and that the copying is performed by the vagrant user, so the file will end up with vagrant as its owner and group.
Solution number 2: If you need to copy a file with root privileges, or really have to change the file without using templates, consider using the well-documented shell provisioner. File copying in this case only works if the file is placed in a folder visible from within the VM (guest OS), but you have all the power of the shell; a sketch follows below.
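A minimal sketch of this approach, assuming the template sits next to the Vagrantfile so it is visible inside the guest under the default /vagrant synced folder (the filename and destination are placeholders):

config.vm.provision :shell, inline: <<-SHELL
  # The shell provisioner runs as root by default.
  mkdir -p /root/.ssh
  cp /vagrant/known_hosts.template /root/.ssh/known_hosts
  chown root:root /root/.ssh/known_hosts
  chmod 600 /root/.ssh/known_hosts
SHELL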
Solution number 3: Though it would be overkill in this case, you might use the very powerful Chef or Puppet as a provisioner and perform the action via one of those frameworks. I know nothing about Puppet, so I can only speak for Chef. The cookbook would be very simple: create a template file (.erb) with the desired content, and your recipe just places the file where necessary. Of course you'll need a box with the Chef packages in it.
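A sketch of such a recipe, assuming a cookbook with a hypothetical known_hosts.erb in its templates directory:

# Renders templates/default/known_hosts.erb to the vagrant user's .ssh dir.
template '/home/vagrant/.ssh/known_hosts' do
  source 'known_hosts.erb'
  owner  'vagrant'
  group  'vagrant'
  mode   '0644'
end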
I use plain ssh to enter my machines in order to do provisioning:
$ ssh-add ~/.vagrant.d/insecure_private_key
With this setup, known_hosts is bound to give problems, but I do not want to turn off host key checking, as I use it for external hosts as well. Given that my hosts include the pattern foo, I did this on the shell:
$ sed -i '' '/foo/d' ~/.ssh/known_hosts
Remove the empty '' argument after -i if you have a GNU/Linux host instead of BSD/macOS.
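Alternatively, when you know the exact host name or address (192.168.33.10 below is a made-up example), ssh-keygen can remove a single entry without any regex editing:

$ ssh-keygen -R 192.168.33.10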
You can then install the vagrant trigger plugin:
$ vagrant plugin install vagrant-triggers
And add the following snippet to the Vagrantfile (mind the backticks):
config.trigger.after :destroy do
  puts "Removing known host entries"
  `sed -i '' '/foo/d' ~/.ssh/known_hosts`
end
This is what I do:
I define the IP_ADDRESS and DOMAIN_NAME variables at the top of the Vagrantfile.
Then inside Vagrant.configure I add:
config.trigger.after :destroy do |trigger|
  trigger.info = "Removing known_hosts entries"
  # A second trigger.run assignment would overwrite the first, so both
  # entries are removed in a single inline command here.
  trigger.run = {inline: "sh -c 'ssh-keygen -R #{IP_ADDRESS}; ssh-keygen -R #{DOMAIN_NAME}'"}
end
Related
docker-machine has an scp command, but docker-cloud doesn't seem to have any way to transfer a file from my local machine to the cloud container or vice-versa.
I'm submitting an answer below that I've finally figured out (in hopes that it will help someone), but I'd love to hear better answers if there are any!
(I realize docker-cloud is going away, but perhaps this will be helpful for other cloud platforms as well)
To transfer a file from your local machine to a docker-cloud instance that is running linux with the tee command available:
docker-cloud container exec id12345 tee filename.ext < file_to_copy.ext > /dev/null
(you'll want to redirect output to /dev/null as shown unless you want the entire contents of the file to be echoed to the terminal... twice)
Transferring a file to your local machine is somewhat easier:
docker-cloud container exec id12345 cat file_to_copy.ext > filename.ext
Note: I'm not sure this works for binary files, and it can even cause issues with linefeed characters in text files depending on terminal settings, etc. - but short of using an external service like https://transfer.sh, it's the best answer I've got. A possible workaround for binary files is sketched below.
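An untested sketch for binary files, building on the cat trick above: have the container base64-encode the file so only ASCII crosses the terminal, then decode locally. This assumes base64 exists in the container; -i tells GNU base64 to ignore any stray carriage returns the terminal may add:

docker-cloud container exec id12345 base64 file_to_copy.bin | base64 -di > filename.bin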
I've been using PuPHPet to create virtual development environments.
Yesterday I generated a config file for a new box. When I try to spin it up using the vagrant up command, I get the following error message:
C:\xx>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
There are errors in the configuration of this machine.
Please fix the following errors and try again:

SSH:
* private_key_path file must exist: P://.vagrant.d/insecure_private_key
I came across this question and moved the insecure_private_key from puphpet\files\dot\ssh to the same directory as where the Vagrantfile is. However this gives the same error.
I'm also confused by the directory given in the error message:
P://.vagrant.d/insecure_private_key
Why is the 'P' drive mentioned?
My Vagrantfile can be found here.
Appreciate any advice on solving this error.
I fixed the problem by hard-coding the path to the insecure_private_key file.
So it went from:
config.ssh.private_key_path = [
customKey,
"#{ENV['HOME']}/.vagrant.d/insecure_private_key"
]
To:
config.ssh.private_key_path = [
customKey,
"C:/Users/My.User/.vagrant.d/insecure_private_key"
]
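If you'd rather avoid a user-specific absolute path, a more portable sketch is to fall back to USERPROFILE on Windows (the same variable the answers below rely on):

home = ENV['HOME'] || ENV['USERPROFILE']
config.ssh.private_key_path = [
  customKey,
  "#{home}/.vagrant.d/insecure_private_key"
]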
It looks like the cause is that you performed a vagrant destroy, which deleted the insecure_private_key.
The Vagrantfile, however, looks at the files in puphpet\files\dot\ssh; if they are there, it still expects the insecure_private_key.
Delete (or rename) the id_rsa files in puphpet\files\dot\ssh.
This fixed it for me!
When you are sharing your PuPHPet configuration with your teammates, hard-coding the private_key_path as in the accepted answer is not advisable.
My host computer is Windows, so I added a new environment variable VAGRANT_HOME with the value %USERPROFILE%, since that is where my .vagrant.d folder resides. When you add this variable, make sure you close any open command prompts so that the variable is applied; an example follows below.
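For example, from a Windows command prompt (setx persists the variable for future sessions; it does not affect already-open ones):

C:\> setx VAGRANT_HOME "%USERPROFILE%"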
Hope this helps
You can also just delete all the files in the puphpet folder (rm -rf puphpet/files/dot/ssh/*) and the VM should regenerate them when you run vagrant provision.
I'm not sure what's wrong with your Vagrant installation, but this line:
vagrant_home = (ENV['VAGRANT_HOME'].to_s.split.join.length > 0) ? ENV['VAGRANT_HOME'] : "#{ENV['HOME']}/.vagrant.d"
is what sets up the variable that is later on used here:
config.ssh.private_key_path = [
customKey,
"#{vagrant_home}/insecure_private_key"
]
The reason this is happening is that, as of Vagrant 1.7, Vagrant generates a unique private key for each VM you have. There's what I consider to be a bug: Vagrant completely ignores the user-defined private_key_path if it detects that it previously generated a unique key.
What PuPHPet is doing here is letting Vagrant generate its unique SSH key, then once the VM boots up and has SSH access, it goes in and generates another key to replace it.
The reason we're replacing it is because this new Vagrant feature only works on OSX/Linux hosts, due to Windows not having the required tools.
My way works across all OS because it does the SSH key generation within the VM itself.
All this is semi-related to your question, but the answer is that something's wrong with your Vagrant installation if those environment variables have not been defined.
Adding to PunctuationMark's answer: you can also set the VAGRANT_HOME environment variable in your Vagrantfile itself: ENV['VAGRANT_HOME'] = ENV['USERPROFILE']
Editing the following line in the Vagrantfile worked for me:
PRIVATE_KEY_SOURCE = '~/.vagrant.d/insecure_private_key'
I want to preface this question by mentioning that I have looked over most, if not all, of the vagrant "Waiting for VM to Boot" troubleshooting threads. Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: we are using Vagrant 1.2.2 since we do not at the moment have time to migrate our configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an official folder which includes things such as the Vagrantfile. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the preview of the VirtualBox VM and see that it was simply stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I can tell, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI says command not found. Running sudo dhclient eth0 was one of the suggested fixes.
This fix: https://superuser.com/a/343775/298915 of "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0." did nothing to fix the issue.
Conclusion:
Having re-installed everything in the belief that I had messed up a file, the issue remains, and I am unable to get past it. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Set up your private/public keys using the link provided. My box is Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub you have to use /home/vagrant/.ssh/id_rsa.pub (and the respective id_rsa path for the private key).
Note: make sure your files have the right permissions. Check using ls -l path and change using chmod; a sketch of the usual values follows below. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
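A sketch of the conventional permission fixes (these are the standard modes sshd expects for key-based login):

touch /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh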
Boot your VM with the VirtualBox GUI (either through the Vagrantfile's GUI-boot setting, or by starting the VM directly from VirtualBox). Log in with vagrant/vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out it was off when I tried to wget the private/public keys from the tutorial above and was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me. Though it may be different for other machines, for whatever reason Vagrant likes to break.
Suggestion: can this be saved as a script so we don't need to do it manually every time?
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?
I want to take a backup of my website, which is hosted on GoDaddy.
I used the pscp command from the Windows command prompt and tried to download the whole public_html folder.
My command is:
pscp -r user@host:public_html/ d:\sites\;
Files and folders are downloading properly, but public_html and the other subfolders contain the two entries "./" and "../". Because of these two entries the copy fails and I get the error:
security violation: remote host attempted to write to a '.' or '..' path!
I hope someone can help with this.
Note: I only have SSH access and have to download the files using SSH commands.
Appending a star to the source should fix it, e.g.
pscp -r user@host:public_html/* d:\sites\;
You can also achieve the same thing by not adding '/' at the end of your source path.
For example:
pscp -r user@host:public_html d:\sites
The command above will create the public_html directory at the destination (d:\sites) if it does not already exist.
Simply put, it makes an exact clone of public_html at d:\sites.
One important thing: you may need to define the port number explicitly, e.g. "-P 22":
pscp -r -P 22 user@host:public_html/* D:\sites
In my case, the command only worked once I passed the port number explicitly.
I would like to have a VM to look at how applications appear and to develop OS-specific applications. However, I want to keep all my code on my Windows machine, so if I decide to nuke a VM it's all still there.
If it matters, I'm using VirtualBox.
This is usually handled with network shares. Share your code folder from your host machine and access it from the VMs.
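Since the asker mentions VirtualBox, a shared folder is one way to do this from the host's command line (a sketch; the VM name "MyVM" and the host path are placeholders):

VBoxManage sharedfolder add "MyVM" --name code --hostpath "C:\Users\me\code" --automount

Inside a Linux guest with the Guest Additions installed, the share can then be mounted with the vboxsf filesystem, as the last answer in this section shows in detail.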
Aside from network shares, another tool to use for this is a version-control system.
You should always be able to make a normal network connection between the VM and the host OS, as though it were another computer on the same network. Which, in some sense, it is.
I do this all the time.
I have a directory on a Windows drive that I mount in my host Ubuntu 12.04.
I run an Ubuntu 13.04 VirtualBox guest.
I want the guest to mount the Windows directory with full non-root permissions.
I do almost all my work from a bash shell, so this method is natural for me.
When searching for methods to automatically mount VirtualBox shared folders, reliable and correct methods are hard to distinguish from those that fail. Failures include problems getting and setting permissions, among other issues.
Methods that fail include:
modifying /etc/fstab
modifying /etc/rc.local
I am fairly certain that rc.local can be made to work, but no method I have tried worked.
I welcome improvements on these guidelines.
On VirtualBox 4.2.14, using a bash terminal under Nautilus on an Ubuntu 13.04 guest, below is a working method to mount Common (the share name) on /home/$USER/Desktop/Common (the mount point) with full permissions. (Note the "\" command-continuation characters in the find command.)
First time only: create your mount point, add the find command below to your .bashrc file, and source it, responding with your password when requested.
These are the four command lines needed:
mkdir $HOME/Desktop/Common
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers  # a plain 'sudo echo ... >>' would fail: the redirection runs without root
find $HOME/Desktop/Common -maxdepth 0 -type d -empty -exec sudo \
mount -t vboxsf -o \
uid=`id -u $USER`,gid=`id -g $USER` Common $HOME/Desktop/Common \;
source ~/.bashrc # Needed if you want to mount Common in this bash.
All other times: simply launch a bash shell.
The find command mounts the shared directory if the mountpoint directory is empty.
If the mountpoint directory is not empty, it does not run the mount command.
I hope this is error-free and sufficiently general.
Please let me know of corrections and improvements.