How to fix Vagrant error: `private_key_path` file must exist: - ssh

I've been using PuPHPet to create virtual development environments.
Yesterday I generated a config file for a new box. When I try to spin it up using the vagrant up command, I get the following error message:
C:\xx>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
There are errors in the configuration of this machine.
Please fix the following errors and try again:
SSH:
* private_key_path file must exist: P://.vagrant.d/insecure_private_key
I came across this question and moved the insecure_private_key from puphpet\files\dot\ssh to the same directory as the Vagrantfile. However, this gives the same error.
I'm also confused by the directory given in the error message:
P://.vagrant.d/insecure_private_key
Why is the 'P' drive mentioned?
My Vagrantfile can be found here.
Appreciate any advice on solving this error.

I fixed the problem by hard-coding the path to the insecure_private_key file.
So it went from:
config.ssh.private_key_path = [
  customKey,
  "#{ENV['HOME']}/.vagrant.d/insecure_private_key"
]
To:
config.ssh.private_key_path = [
  customKey,
  "C:/Users/My.User/.vagrant.d/insecure_private_key"
]

It looks like this happens because you may have performed a vagrant destroy, which deleted the insecure_private_key.
But the Vagrantfile looks at the files in puphpet\files\dot\ssh; if they are there, it also looks for the insecure_private_key.
Delete (or rename) the id_rsa files in puphpet\files\dot\ssh; this fixed it for me!
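For example, from the project root on a Windows host (a sketch only; on an OSX/Linux host use mv instead of ren):
ren puphpet\files\dot\ssh\id_rsa id_rsa.bak
ren puphpet\files\dot\ssh\id_rsa.pub id_rsa.pub.bak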

When you are sharing your PuPHPet configuration with your teammates, hard-coding the private_key_path is not advisable, as in the accepted answer.
My host computer is Windows, so I added a new environment variable VAGRANT_HOME with the value %USERPROFILE%, since this is where my .vagrant.d folder resides. When you add this variable, make sure you close any open command prompts so the variable is applied.
Hope this helps
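For example, one way to set it persistently from a Windows command prompt (an illustration only; %USERPROFILE% is expanded at the time you run it, so adjust the value if your .vagrant.d folder lives elsewhere):
setx VAGRANT_HOME "%USERPROFILE%"
Only command prompts opened after this will see the new variable, which matches the note above about closing open prompts.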

You can also just delete all the files in the puphpet ssh folder (rm -rf puphpet/files/dot/ssh/*) and the VM should regenerate them when you run vagrant provision.

I'm not sure what's wrong with your Vagrant installation, but this line:
vagrant_home = (ENV['VAGRANT_HOME'].to_s.split.join.length > 0) ? ENV['VAGRANT_HOME'] : "#{ENV['HOME']}/.vagrant.d"
is what sets up the variable that is later on used here:
config.ssh.private_key_path = [
  customKey,
  "#{vagrant_home}/insecure_private_key"
]
The reason this is happening is that as of Vagrant 1.7, it generates a unique private key for each VM you have. There's what I consider to be a bug here: Vagrant completely ignores the user-defined private_key_path if it detects that it previously generated a unique key.
What PuPHPet is doing here is letting Vagrant generate its unique SSH key, then once the VM boots up and has SSH access, it goes in and generates another key to replace it.
The reason we're replacing it is that this new Vagrant feature only works on OSX/Linux hosts, because Windows lacks the required tools.
My way works across all OS because it does the SSH key generation within the VM itself.
All this is semi-related to your question, but the answer is that something's wrong with your Vagrant installation if those environment variables have not been defined.
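If you want to confirm what that fallback actually resolves to on your machine, a quick debug line (my own addition, not part of PuPHPet) placed right after the vagrant_home assignment will print it on every vagrant command:
# Temporary debug output: shows which key directory the Vagrantfile will use
puts "vagrant_home resolves to: #{vagrant_home.inspect}"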

Adding to PunctuationMark's answer, you can also set the VAGRANT_HOME environment variable in your Vagrantfile: ENV['VAGRANT_HOME'] = ENV['USERPROFILE']
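A minimal sketch of how that line could sit near the top of a shared Vagrantfile; the guard around it is my own addition so that teammates on OSX/Linux hosts (where USERPROFILE is not set) are unaffected:
# Only point VAGRANT_HOME at the user profile on Windows hosts,
# and only if it has not been set already
ENV['VAGRANT_HOME'] ||= ENV['USERPROFILE'] if ENV['USERPROFILE']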

Editing the following line in the Vagrantfile worked for me.
PRIVATE_KEY_SOURCE = '~/.vagrant.d/insecure_private_key'

Related

Can't connect VS Code to Linux machine for remote development

I am getting this error on VS Code and have no clue why it fails
[15:14:59.543] Log Level: 2
[15:14:59.555] remote-ssh#0.51.0
[15:14:59.555] win32 x64
[15:14:59.560] SSH Resolver called for "ssh-remote+xx.xx.xx.xx", attempt 1
[15:14:59.561] SSH Resolver called for host: xx.xx.xx.xx
[15:14:59.561] Setting up SSH remote "xx.xx.xx.xx"
[15:14:59.621] Using commit id "0ba0ca52957102ca3527cf479571617f0de6ed50" and quality "stable" for server
[15:14:59.624] Install and start server if needed
[15:15:01.964] getPlatformForHost was canceled
[15:15:01.965] Resolver error: Connecting was canceled
[15:15:01.973] ------
Add one key to your settings.json as below. Remember to replace $remote_server_name with your own server name.
"remote.SSH.remotePlatform": {
"$remote_server_name": "linux"
}
Menu: File -> Preferences -> Settings
Or click the icon to open settings.json.
In the dialog box where you typed user@host, type/select Linux/Windows/etc. depending on what you are using, then type/select Continue, then type the password for the remote session.
For those getting this error on Windows: check whether you have multiple SSH clients installed.
I solved it by adding my SSH configuration to ALL ssh config files.
In my case I had one in
C:\Users\USER_NAME\.ssh\config (this is the one that the Remote extension used to give me connection options)
and another in C:\ProgramData\ssh\ssh_config.
After adding my SSH config settings to both, I got the prompt to select the virtual host's OS. I tried editing the settings.json file directly, but I think it gets confused because of the multiple SSH configurations.
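For reference, a typical Host entry of the kind described above (the alias, address, user, and key path are placeholders, not values from this setup):
Host myserver
    HostName xx.xx.xx.xx
    User your_user
    IdentityFile C:\Users\USER_NAME\.ssh\id_rsa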
P.S.
Tested it with both private-key and password-enabled connections, and it works with either.
I had a similar problem, but the error logs were bigger. Before that, I had deleted Python and reinstalled it; perhaps this led to the problem. Reinstalling the "Remote - SSH" extension in VS Code fixed it for me.
In my case there were two files that look like
vscode-remote-lock.<user>.<xxx>
vscode-remote-lock.<user>.<xxx>.target
where <user> was my remote user name and <xxx> the VS Code Remote Server build hash.
These two files were on the remote server in the folder /run/user/1000/.
I deleted both files and then VS Code came up right away. I have encountered this a few times now. VS Code Remote Server install is not very robust. I use it on about 7 remote machines and every once in a while something goes awry and it cannot recover from simple errors and gets stuck in installation loops.
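A hedged one-liner for the cleanup described above, assuming the same /run/user/1000/ location and file naming on your remote machine:
rm /run/user/1000/vscode-remote-lock.*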
This trick only works if there is a valid ~/.vscode-server on the remote machine with a hash that matches your local VS Code installation.
If you got here because you were trying to install VS Code in the first place and for whatever reason VS Code had issues with the remote installation, I highly recommend installing it manually by downloading and extracting the tar file to the remote machine directly.
I have tried playing with the "Remote.SSH: Use Flock" setting and other tricks posted on Stack Overflow, but none of these work for me whenever I have remote installation issues. I cannot figure out why a smooth remote installation is not possible on some machines, even when all of my SSH keys and remote IDs have been copied and tested from both the Windows command line and inside a WSL Ubuntu instance.
If VS Code Remote Server installation had slightly better error logic and better error messages none of us would be wasting hours doing this simple task.
I was getting the exact same error as the original poster, and yet none of the other answers addressed my issue.

Vagrant stuck in "Waiting for VM to Boot"

I want to preface this question by mentioning that I have indeed looked over most, if not all, Vagrant "Waiting for VM to Boot" troubleshooting threads.
Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: We are using Vagrant 1.2.2 since we do not at the moment have time to change configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an /official/ folder which includes things such as Vagrantfile inside. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the "preview" of the VirtualBox VM, and it would simply be stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I know, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI says command not found. Running sudo dhclient eth0 was one of the suggested fixes.
This fix: https://superuser.com/a/343775/298915 of "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0." did nothing to fix the issue.
Conclusion:
I tried re-installing everything, thinking I had messed up a file, but it did not seem to help. I am unable to work around this issue. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Setup your private/public keys using the link provided. My box is a Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub, you have to use /home/vagrant/.ssh/id_rsa.pub (and the respective id_rsa path for the private key).
Note: make sure your files have the right permissions. Check using ls -l path, and change using chmod. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
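A sketch of the commands involved, run inside the VM (prefix with sudo where needed; the permission values are the usual OpenSSH requirements, and the paths assume the default vagrant user):
mkdir -p /home/vagrant/.ssh
touch /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh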
Boot your VM with the VirtualBox GUI (either through the Vagrantfile GUI-boot setting, or by starting your VM directly in VirtualBox). Log in with vagrant / vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out that it was off when I tried to wget the private/public keys in the tutorial above, but was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me. Though it may be different for other machines, for whatever reason Vagrant likes to break.
Suggestion: can this be saved as a script so we don't need to do this manually every time?
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?

Vagrant corrupted index file C:\Users\USERNAME\.vagrant.d/data/machine-index/index

My Windows 8.1 just crashed. Now I have some files on my disk that are corrupted. This includes my Vagrant machine index (not sure if the naming is right, but I know that it is this file -> C:\Users\USERNAME\.vagrant.d/data/machine-index/index).
So there is a lot of binary or hexadecimal stuff in there (again, not sure, because I don't usually deal with this stuff, so correct me if I'm wrong!), and Vagrant spits out the following message if I try to start everything after boot.
vagrant up returns this
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/Username/.vagrant.d/data/machine-index/index
Same thing happened to me. So I just deleted the index file and the .lock file from the machine-index folder to get Vagrant working again.
When using Vagrant 2.2.5 in Windows 10, I had to navigate to /Users/{yourname}/.vagrant.d/data/machine-index and remove both index and index.lock, so rm index then rm index.lock.
Finally I navigated back to the Homestead folder and ran vagrant up.
When my laptop accidentally crashed, I had the same Vagrant issue (index) on my first attempt to run vagrant up.
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/{user}/.vagrant.d/data/machine-index/index
Unfortunately my issue was not solved by deleting the index and index.lock files as the most-upvoted answer suggested. I rebooted my VM using the VirtualBox GUI (used as the VM provider), and the following message showed up.
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
I realised that the crash had produced errors on the VM's filesystem. After some searching and investigation, I got past the issue by executing the command below.
xfs_repair -v -L /dev/dm-0
Environment info: Windows 10, VirtualBox 6.1, Vagrant 2.2.7, and CentOS 7 as the VM OS.

"The machine with the name 'c6401' was not found configured for this Vagrant environment." Error

I plan to work with Apache Ambari. To start, I've done everything according to https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide. But whenever I try to start the VMs, I get the following error:
The machine with the name 'c6401' was not found configured for
this Vagrant environment.
I had this error today on a Mac and decided to update this post with the solution that worked for me.
Steps (a shell sketch of the same sequence follows the list):
Delete all redundant machine folders in ./.vagrant/machines (.vagrant is a hidden folder within the repo)
Run the vagrant global-status --prune command in the root of the project
Destroy the Vagrant setup: vagrant destroy
Ensure no related machine exists in the VirtualBox UI
Run vagrant up again
cheers!
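The same sequence as shell commands, run from the root of the project (a sketch only; the VirtualBox UI check in between remains a manual step):
rm -rf .vagrant/machines/*       # delete redundant machine folders
vagrant global-status --prune    # drop stale entries from Vagrant's index
vagrant destroy                  # destroy the current Vagrant setup
# check the VirtualBox UI for any leftover related machine, then:
vagrant up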
Please open the VagrantFile under the centos6.4 directory and verify that you see contents like this:
config.vm.define :c6401 do |c6401|
  # uncomment the line below to set up the ambari dev environment
  # c6401.vm.provision :shell, :path => "dev-bootstrap.sh"
  c6401.vm.hostname = "c6401.ambari.apache.org"
  c6401.vm.network :private_network, ip: "192.168.64.101"
end
I had faced a similar issue. The problem was that I had deleted the Vagrantfile that comes by default with ambari-vagrant.git and had run 'vagrant init', which creates a standard template file that doesn't have any reference to c6401 machine.
If you are in the same boat, just do a
git checkout centos6.4/VagrantFile
from under ambari-vagrant directory and try re-running
vagrant up c6401

Remove line in host's known_host file through Vagrant

I've narrowed down this question - regarding a MySQL over SSH connection only working once - to a conflicting line in my host computer's known_hosts file.
Essentially, I cannot get into my database GUI of choice because the key is different for the same IP address (after re-provisioning, reloading, etc.).
Once I delete any offending lines, I can get in just fine.
So, through Vagrant's shell command (that I'm provisioning with) how can I modify the host machine's ~/.ssh/known_hosts file?
EDIT:
I found a temp fix that involves adding/creating a ~/.ssh/config file (this involves using a private IP address):
Host 192.168.*.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
This should let you in. It is not a real fix, as this kind of workaround can be a security concern. Look below for a much better answer.
Sorry for taking you away from what you need!
Changing HOST files from the Vagrantfile:
What you actually want is very simple. The Vagrantfile is interpreted by Vagrant each time you run a vagrant command. It is regular Ruby code, so if you want to change a host file, all you need to do is put Ruby code that performs this change into the Vagrantfile. Here's the example code I've put at the end of my Vagrantfile:
require 'tempfile'
require 'fileutils'

# Ruby does not expand '~' by itself, so resolve the full path explicitly
path = File.expand_path('~/.ssh/known_hosts')
temp_file = Tempfile.new('known_hosts')
begin
  # Copy every line that does NOT match the pattern into the temp file
  File.open(path, 'r') do |file|
    file.each_line do |line|
      if line !~ /REGEX_OF_LINE_YOU_WANT_TO_EXCLUDE/ then
        temp_file.puts line
      end
    end
  end
  # Flush the temp file and replace the original known_hosts with it
  temp_file.close
  FileUtils.mv(temp_file.path, path)
ensure
  temp_file.close
  temp_file.unlink
end
Note: edit the code above and put in your own value for REGEX_OF_LINE_YOU_WANT_TO_EXCLUDE (for example, a pattern matching the IP address of the entry you want to remove).
Hope I at least partially fixed my mistake by providing this answer :)
Original Answer
For anyone's (mine as well) future reference, I'm leaving part of the answer that refers to changing GUEST OS files or copying files to GUEST OS:
Vagrant provides a couple of provisioners.
Solution number 1: For a simple file copy you may use the Vagrant file provisioner. The following code in your Vagrantfile would copy the file ~/known_hosts.template from your host system to the VM's /home/vagrant/.ssh/known_hosts:
# Context -------------------------------------------------------
Vagrant.configure('2') do |config|
  # ...
  # This is the section to add ----------------------------------
  config.vm.provision :file do |file|
    file.source = '~/known_hosts.template'
    file.destination = '/home/vagrant/.ssh/known_hosts'
  end
  #--------------------------------------------------------------
end
The file provisioner is poorly documented on the Vagrant site, and we've got to thank tmatilai, who answered a similar question on Server Fault.
Keep in mind that you should use an absolute path in the destination field, and that the copy is performed by the vagrant user, so the file will have vagrant's owner:group.
Solution number 2: If you need to copy a file with root privileges, or really have to change the file without using templates, consider using the well-documented shell provisioner. File copying in this case works only if the file is placed in a folder visible from within the VM (guest OS), but you have all the power of the shell.
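A sketch of solution number 2, assuming the template has been placed in the project folder so that it is visible inside the VM through the default /vagrant synced folder (the paths are illustrative):
config.vm.provision :shell, inline: <<-SHELL
  mkdir -p /root/.ssh
  cp /vagrant/known_hosts.template /root/.ssh/known_hosts
  chown root:root /root/.ssh/known_hosts
  chmod 600 /root/.ssh/known_hosts
SHELL
The shell provisioner runs with root privileges by default, which is what makes the copy into /root/.ssh possible here.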
Solution number 3: Though it would be overkill in this case, you might use very powerful Chef or Puppet as provisioners, and perform action via one of those frameworks. I know nothing about Puppet, and may talk only about Chef. Cookbook would be very simple. Create template file (.erb) with desired content, and then your recipe will just place the file where necessary. Of course you'll need a box with Chef packeges in it.
I use plain ssh to enter my machines in order to do provisioning:
$ ssh-add ~/.vagrant.d/insecure_private_key
With this setup known_hosts is bound to give problems, but I do not want to turn off host key checking, as I use that also for external hosts. Given that my hosts match the pattern foo, I did this on the shell:
$ sed -i '' '/foo/d' ~/.ssh/known_hosts
Remove the empty '' argument after -i if you have a GNU/Linux host instead of BSD/MacOSX.
You can then install the vagrant trigger plugin:
$ vagrant plugin install vagrant-triggers
And add the following snippet to the Vagrantfile (mind the backticks around the sed command):
config.trigger.after :destroy do
  puts "Removing known host entries"
  `sed -i '' '/foo/d' ~/.ssh/known_hosts`
end
This is what I do:
I define the IP_ADDRESS and DOMAIN_NAME variables at the top of the Vagrantfile.
Then inside Vagrant.configure I add:
# Note: assigning trigger.run twice in one block would overwrite the first
# command, so each ssh-keygen call gets its own trigger.
config.trigger.after :destroy do |trigger|
  trigger.info = "Removing known_hosts entries"
  trigger.run = {inline: "ssh-keygen -R #{IP_ADDRESS}"}
end
config.trigger.after :destroy do |trigger|
  trigger.run = {inline: "ssh-keygen -R #{DOMAIN_NAME}"}
end