Vagrant corrupted index file C:\Users\USERNAME\.vagrant.d/data/machine-index/index

My Windows 8.1 just crashed. Now I have some files on my disk that are corrupted. This includes my Vagrant machine index (not sure if the naming is right, but I know it is this file -> C:\Users\USERNAME\.vagrant.d/data/machine-index/index).
So there is a lot of binary or hexadecimal stuff in there (again, not sure, because I don't usually deal with this stuff, so correct me if I'm wrong!), and Vagrant spits out the following message when I try to start everything after boot.
vagrant up returns this:
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/Username/.vagrant.d/data/machine-index/index

The same thing happened to me. I just deleted the index file and the .lock file from the machine-index folder to get Vagrant working again.
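A minimal sketch of those deletions from a Windows command prompt (assuming the default Vagrant home under your user profile; adjust the path if yours differs):
cd /d %USERPROFILE%\.vagrant.d\data\machine-index
del index
del index.lock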

When using Vagrant 2.2.5 on Windows 10, I had to navigate to /Users/{yourname}/.vagrant.d/data/machine-index and remove both index and index.lock, so rm index then rm index.lock.
Finally, I navigated back to the Homestead folder and ran vagrant up.

When my laptop accidentally crashed, I had the same Vagrant issue (corrupt index) on my first attempt to run vagrant up.
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/{user}/.vagrant.d/data/machine-index/index
Unfortunately, my issue was not solved by deleting the index and index.lock files as the most upvoted answer suggested. When I rebooted my VM using the VirtualBox GUI (used as the VM provider), the following message showed up.
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
I realised that the crash had produced errors on the VM's file system. After some searching and investigation, I overcame the issue by executing the command below.
xfs_repair -v -L /dev/dm-0
Environment info: Windows 10 host, VirtualBox 6.1, Vagrant 2.2.7, and CentOS 7 as the VM OS.
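For context, a rough sketch of the recovery sequence from the emergency shell (this assumes the root filesystem is XFS on /dev/dm-0, as in the command above; -L zeroes the metadata log, so use it only when the log cannot be replayed):
umount /dev/dm-0            # only needed if the damaged filesystem is currently mounted
xfs_repair -v -L /dev/dm-0  # repair the filesystem, forcing the log to be zeroed
reboot                      # boot normally once the repair finishes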

Related

Docker build always fails with error hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1) (Windows Containers)

Steps to reproduce are very easy.
Create a Dockerfile.
My Dockerfile has many more lines, but I have trimmed them so we can focus on the source of the problem.
That said, these two lines alone (with nothing more) show the problem.
FROM microsoft/iis
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; $VerbosePreference = 'Continue'; "]
Run docker build . and you get hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1).
Windows 10 Pro 1909 (but it happened in 1903 too)
Docker version: 2.1.0.5
Engine: 19.03.5
Machine: 0.16.2
I have found the solution to the problem.
Reading through the whole https://github.com/docker/for-win/issues/3884 issue, some have found a simple solution: rename C:\Windows\System32\drivers\cbfsconnect2017.sys so it isn't loaded on the next boot.
Disabling that driver let me do a docker build with Windows containers for the first time in almost a year.
In my case, Box Sync was the one using that driver.
EDIT: @GustavoTM found that pCloud raises the same problem.
EDIT 2: @VonC noticed that some people in the GitHub issue have solved it by deleting this other file: C:\Windows\System32\drivers\cbfs6.sys. I haven't tried that, but I mention it in case it helps others.
The good thing is that I don't need to uninstall Box, but only rename that file.
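A minimal sketch of that rename from an elevated (Administrator) command prompt, followed by a reboot so the driver is no longer loaded (the .bak suffix is just an example; any new name works):
ren C:\Windows\System32\drivers\cbfsconnect2017.sys cbfsconnect2017.sys.bak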
This is still an issue (still open) with Win10.
It looks like uninstalling cloud storage providers with file system filters (Dropbox, Box, etc.) is a workaround for some users.
Uninstall cloud storage providers or virus scanners; if you identify which one is causing the problem, please share it in https://github.com/docker/for-win/issues/3884
In my case the problem was similar, but the file cbfs6.sys had been left behind by an uninstalled application, Jungle Disk, somewhere in the folder c:\Program files\Jungle disk .... It's part of the Callback File System signed by EldoS Corporation.
The folder could only be renamed, not deleted directly. So I deleted it immediately after the PC restarted, before running Docker. It could presumably also be deleted during a Docker service restart.

"Windows Subsystem for Linux has no installed distributions" even though 'Ubuntu' is installed

I recently moved my WSL directory to another drive due to low storage in the C: drive. As per the answer provided in this StackOverflow post, I used the lxrunoffline tool and moved my Ubuntu distribution to another drive (E:\wsl in my case). As soon as the distribution was moved successfully, I ran wsl to test and it worked like a charm.
Everything went fine until one day I accidentally renamed the E:\wsl folder to something else. Well, as expected, wsl didn't work. Then I reverted to the name wsl and expected it to work, but to my surprise it didn't find any installed distribution after that, even though it's installed... 😕
E:> wsl
Windows Subsystem for Linux has no installed distributions.
Distributions can be installed by visiting the Microsoft Store:
https://aka.ms/wslstore
Is there any way to revert back to the old directory or make wsl point to a manual location?
EDIT: I don't want to reset Ubuntu as I want to retain the installed packages and preferences...
Well, I finally found a solution to this problem. 😊
This is as simple as registering the distribution with the lxrunoffline tool using the rg (register) command.
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg
[ERROR] the option '-d' is required but missing
Options:
-n arg Name of the distribution
-d arg The directory containing the distribution.
-c arg The config file to use. This argument is optional.
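Putting those options together, the full register command looks something like this (the distribution name and directory are examples based on this question; substitute your own):
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg -n Ubuntu -d E:\wsl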
After running the register command, I was able to start wsl as usual. But that logs you in as the "root" user and thus starts in the "/root" directory. I ran the following command to start wsl as a different user (this is for Ubuntu):
ubuntu config --default-user <user-name>

How to fix Vagrant error: `private_key_path` file must exist:

I've been using PuPHPet to create virtual development environments.
Yesterday I generated a config file for a new box. When I try to spin it up using the vagrant up command, I get the following error message:
C:\xx>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
There are errors in the configuration of this machine. Please fix the following errors and try again:
SSH:
* private_key_path file must exist: P://.vagrant.d/insecure_private_key
I came across this question and moved the insecure_private_key from puphpet\files\dot\ssh to the same directory as the Vagrantfile. However, this gives the same error.
I'm also confused by the directory given in the error message:
P://.vagrant.d/insecure_private_key
Why is the 'P' drive mentioned?
My Vagrantfile can be found here.
I'd appreciate any advice on solving this error.
I fixed the problem by hard-coding the path to the insecure_private_key file.
So it went from:
config.ssh.private_key_path = [
  customKey,
  "#{ENV['HOME']}/.vagrant.d/insecure_private_key"
]
To:
config.ssh.private_key_path = [
  customKey,
  "C:/Users/My.User/.vagrant.d/insecure_private_key"
]
It looks like it's because you may have performed a vagrant destroy, which deleted the insecure_private_key.
But the Vagrantfile looks at the puphpet\files\dot\ssh files; if they are there, it looks for the insecure_private_key.
Delete (or rename) the id_rsa files in puphpet\files\dot\ssh.
This fixed it for me!
When you are sharing your PuPHPet configuration with your teammates, hard-coding the private_key_path as in the accepted answer is not advisable.
My host computer is Windows, so I added a new environment variable VAGRANT_HOME with the value %USERPROFILE%, since this is where my .vagrant.d folder resides. When you add this variable, just make sure you close any command prompts that are open so the variable will be applied.
Hope this helps
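For example, a sketch from a command prompt (setx stores the value for future sessions, so reopen any prompts that are already open):
setx VAGRANT_HOME "%USERPROFILE%"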
You can also just delete all the files in the PuPHPet SSH folder (rm -rf puphpet/files/dot/ssh/*) and the VM should regenerate them when you run vagrant provision.
I'm not sure what's wrong with your Vagrant installation, but this line:
vagrant_home = (ENV['VAGRANT_HOME'].to_s.split.join.length > 0) ? ENV['VAGRANT_HOME'] : "#{ENV['HOME']}/.vagrant.d"
is what sets up the variable that is later on used here:
config.ssh.private_key_path = [
  customKey,
  "#{vagrant_home}/insecure_private_key"
]
The reason this is happening is that, as of Vagrant 1.7, it generates a unique private key for each VM you have. There's what I consider to be a bug in that Vagrant completely ignores the user-defined private_key_path if it detects that it generated a unique key previously.
What PuPHPet is doing here is letting Vagrant generate its unique SSH key, then, once the VM boots up and has SSH access, it goes in and generates another key to replace it.
The reason we're replacing it is that this new Vagrant feature only works on OSX/Linux hosts, because Windows doesn't have the required tools.
My way works across all OS because it does the SSH key generation within the VM itself.
All this is semi-related to your question, but the answer is that something's wrong with your Vagrant installation if those environment variables have not been defined.
Adding to PunctuationMark's answer, you can also set the VAGRANT_HOME environment variable in your Vagrantfile: ENV['VAGRANT_HOME'] = ENV['USERPROFILE']
Editing this following line in Vagrantfile worked for me.
PRIVATE_KEY_SOURCE = '~/.vagrant.d/insecure_private_key'

Vagrant stuck in "Waiting for VM to Boot"

I want to preface this question by mentioning that I have indeed looked over most if not all vagrant "Waiting for VM to Boot" troubleshooting threads:
Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: We are using Vagrant 1.2.2 since we do not at the moment have time to change configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an /official/ folder which includes things such as the Vagrantfile. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the "preview" of the VirtualBox VM, and it would simply be stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I know, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI says command not found. Running sudo dhclient eth0 was one of the suggested fixes.
This fix, https://superuser.com/a/343775/298915, of "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0", did nothing to fix the issue.
Conclusion:
Having tried to re-install everything, thinking I had messed up a file, the issue did not improve. I am unable to work around this issue. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Set up your private/public keys using the link provided. My box is Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub, you have to use /home/vagrant/.ssh/id_rsa.pub (and the respective id_rsa path for the private key).
Note: make sure your files have the right permissions. Check using ls -l path, and change using chmod. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
Boot your VM using the VirtualBox GUI (either through the Vagrantfile GUI-boot option, or by starting your VM in VirtualBox). Log in with vagrant / vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out that it was off when I tried to wget the private/public keys in the tutorial above, but was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me. Though it may be different for other machines, for whatever reason Vagrant likes to break.
Suggestion: can this be saved as a script so we don't need to do this manually every time?
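Here's a rough sketch of such a script, collecting the guest-side steps above (the vagrant home path and the eth0 interface name are assumptions from this particular box; run it inside the VM as the vagrant user):
#!/bin/sh
# Make sure the vagrant user's SSH files exist with sane permissions
mkdir -p /home/vagrant/.ssh
touch /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys
# Bring up networking so the host can reach the VM over SSH
sudo dhclient eth0 -v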
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?

Oracle VirtualBox inaccessible

I am using Oracle VirtualBox version 4.2.16 r86992. Everything was fine until yesterday's shutdown.
Today, it shows inaccessible and throws this error:
Runtime error opening C:\Users\xxxxxx\VirtualBox VMs\vBoxxxxXubuntu_Beta\vBoxxxxXubuntu_Beta.vbox for reading: -102 (File not found.).
D:\tinderbox\win-4.2\src\VBox\Main\src-server\MachineImpl.cpp[725] (long __cdecl Machine::registeredInit(void)).
It would be great to restore this to a working state; it would save a lot of time and preserve configuration settings and data. Thanks for your support.
This normally happens if the host OS crashes or you pull the plug on it, leaving the .vbox file unsaved.
In the location:
C:\Users\xxxxxxx\VirtualBox VMs\vBoxxxxXubuntu_Beta\
you should find two files:
vBoxxxxXubuntu_Beta.vbox-prev
vBoxxxxXubuntu_Beta.vbox-tmp
Copy vBoxxxxXubuntu_Beta.vbox-prev to vBoxxxxXubuntu_Beta.vbox.
Select vBoxxxxXubuntu_Beta.vbox in the VBox manager, right-click, and then click Refresh.
Observe that it now shows Powered Off.
Now you are good to go.
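A minimal sketch of the copy step from a command prompt (the folder and VM name are the ones from this question; substitute your own):
cd /d "C:\Users\xxxxxxx\VirtualBox VMs\vBoxxxxXubuntu_Beta"
copy vBoxxxxXubuntu_Beta.vbox-prev vBoxxxxXubuntu_Beta.vbox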
Based on my experience: I was on Windows 7, running Ubuntu 14.04 as the guest OS in a virtual machine.
Go to your Virtualbox folder (in my case):
C:\Users\Dev12\VirtualBox VMs\Ubuntu
You'll see files with the extensions Ubuntu.vbox-tmp or Ubuntu.vbox-prev.
Remove -tmp from the file name Ubuntu.vbox-tmp so that it reads Ubuntu.vbox.
Exit the virtual machine and start it again.
You should now see that the error has gone away.
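From a command prompt, that rename looks roughly like this (folder and file names as in this answer):
cd /d "C:\Users\Dev12\VirtualBox VMs\Ubuntu"
ren Ubuntu.vbox-tmp Ubuntu.vbox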
The VirtualBox files with the extension .vbox contain metadata that the VirtualBox hypervisor requires to resolve the guest OS's configuration.
If the main .vbox file is corrupted (i.e. it appears to be empty), then use the backup .vbox-prev file to recover the contents of the original file.
Do this by renaming the empty .vbox file to a temporary name (e.g. rename originalVM.vbox to originalVM-empty.vbox).
Then make a copy of the backup file originalVM.vbox-prev, where the copy has the same name as the original but with the word "copy" appended to it (i.e. the copy is named originalVM (copy).vbox-prev).
It is important to retain the original backup .vbox-prev file; it should not be altered or itself renamed.
Now rename the newly created copy, originalVM (copy).vbox-prev, to the original name of the empty .vbox file, and be mindful to also change its extension from .vbox-prev back to just .vbox.
That is, rename originalVM (copy).vbox-prev to originalVM.vbox. Once this is done, you may add the .vbox file (guest OS) back into the VirtualBox hypervisor. This will recover the state and snapshots of the "inaccessible" guest VM. Now delete the original empty .vbox file.
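For illustration, the sequence described above as command-prompt steps (originalVM is a placeholder name; run these in the VM's folder):
ren originalVM.vbox originalVM-empty.vbox
copy originalVM.vbox-prev "originalVM (copy).vbox-prev"
ren "originalVM (copy).vbox-prev" originalVM.vbox
del originalVM-empty.vbox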
I faced the same issue using CentOS 6.8 on VirtualBox 5.1 installed on Windows 7, and AjayKumarBasuthkar's solution worked perfectly for me:
I went to C:\Users\\VirtualBox VMs\CentOS6.8
Made a copy of the file CentOS6.8.vbox-prev and gave it the name of CentOS6.8.vbox
Went to the VirtualBox GUI, right-clicked the VM instance and hit refresh
The CentOS instance went from the state Inaccessible to Powered Off.
VirtualBox 4.3 has been released; could it be that you updated, or that there were some issues while updating?
In any case, if you are not able to bring up VirtualBox, remember to back up the VirtualBox VMs folder; going for a fresh install should be the best way forward.
I faced the same problem and resolved it by doing the following in Oracle VirtualBox 4.3.28 with Ubuntu 14.04 LTS, with the VirtualBox VM closed.
Moved ubuntu.vbox to another folder outside the VirtualBox folder.
Removed -prev from the file ubuntu.vbox-prev.
Started Oracle VirtualBox; it works perfectly.
On a Windows 7 host, I found that the Daemon Tools service had a hold on the file.
The solution was to uninstall Daemon Tools, but I suspect if you stop the service and remove the file association, you would be sorted.
The other issue might be that, if your virtual machine was on an external hard drive, the drive letter has changed. If so, go to Computer Management, select the hard drive, right-click to change the drive letter, and save (note that this is for Windows).
This is going to sound stupid, but try reinstalling VirtualBox. It may work.
I am adding one critical and important comment to the previous great answers. Make sure that the original .vbox file is corrupted and empty before you copy the content from the .vbox-prev file. If that is not the case and you find it with lines of readable content, don't replace the content of the .vbox.
Changes made to the VM directly before it became inaccessible might not be present in the .vbox-prev backup file; they may not have been synced before the OS upgrade or system change that led to the inaccessibility.
If you find your VM inaccessible after an OS upgrade or system change, first check whether the .vbox file is still readable in a text editor and has content. If so, you just need to remove the VM from the VirtualBox Manager list (just remove the appliance from the list; don't delete the files), then reopen the .vbox file and it should work perfectly.
If the original .vbox file is corrupted or empty when you open it in a text editor, then and only then, copy the content from the .vbox-prev file and follow the instructions highlighted above.
This was my experience, and I wanted to share it so you can avoid losing last-minute changes made before an OS upgrade or crash.