Booting a raw-disk Windows 10 VM in VirtualBox boots to a GRUB shell

I have a dual-boot setup with Windows 10 and Kubuntu 18. Following instructions found here and there, I managed to get Windows to run as a guest VM on the Kubuntu host using VirtualBox.
sudo usermod -a -G disk $USER
VBoxManage internalcommands createrawvmdk -filename "/path/to/vm/win10.vmdk" -rawdisk /dev/sda -partitions 1,3,4 -relative
The first line is to avoid running VirtualBox as superuser.
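For completeness, the raw vmdk then has to be attached to the VM like any other disk. Something along these lines should do it (the VM and controller names here are just placeholders for whatever your setup uses):
# Attach the raw-disk vmdk to an existing VM's SATA controller
VBoxManage storageattach "win10" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "/path/to/vm/win10.vmdk"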
When I boot the VM, I briefly see an error message
Boot Failed. EFI DVD/CDROM
SystemBootOrder not found. Initializing defaults.
Creating boot entry "Boot0003" with label "ubuntu" for file "\EFI\ubuntu\shimx64.efi"
and then end up in grub shell. Now, when I run the commands
insmod chain
set root=(hd0,gpt1)
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
boot
Windows boots and works just fine but entering these every time is not exactly smooth workflow. Any idea how to permanently fix this?
Please note that I'd still like to be able to physically boot into both OS's.
Thanks,

I had the same problem. I fixed it, but then updated my kernel and so grub re-un-fixed it for me! Figuring it out for the second time was quicker, but I figured it'd be even quicker next time to find my answer on StackOverflow!
My grub.cfg file in /boot/efi/EFI/ubuntu looked like this:
search.fs_uuid 47d6233f-c0ae-4f89-bf18-184452eac803 root hd0,gpt6
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
Because we set up the VirtualBox vmdk file with only the partitions needed for Windows to work, the search.fs_uuid command was failing, $root was left empty, and so GRUB couldn't find $prefix/grub.cfg (/boot/grub/grub.cfg on my Linux root filesystem, which is on sda6 == gpt6).
I automated it by changing the EFI grub.cfg; note my EFI System Partition is number 2, not 1 as in your example:
search.fs_uuid 47d6233f-c0ae-4f89-bf18-184452eac803 root hd0,gpt6
set prefix=($root)'/boot/grub'
if [ -f $prefix/grub.cfg ]
then
configfile $prefix/grub.cfg
else
insmod chain
set root=(hd0,gpt2)
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
boot
fi
Now if grub can find the cfg file it will give me the menu to select the boot as before, but if it can't - when I'm in VirtualBox - it'll just boot straight into Win10.
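If you need to adapt this to your own layout, you can look up the UUID for search.fs_uuid and the partition numbers from the Linux side first; a quick sketch (device names are the ones from this example, adjust to your disk):
# UUID of the Linux root filesystem that holds /boot/grub (sda6 == gpt6 in my case)
sudo blkid /dev/sda6
# or let GRUB's own tooling report it
sudo grub-probe --target=fs_uuid /boot/grub
# list the GPT partition numbers, e.g. to find the EFI System Partition and the Windows partition
sudo parted /dev/sda print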
Hope this helps!


Why is fdisk -l showing different results for the same vdi virtual drive when different virtual machines are used in VirtualBox

VirtualBox (Version 5.2.24 r128163 (Qt5.6.2)) user with xubuntu guest (Ubuntu 18.04.2 LTS) and Windows 10 host here.
I recently tried to resize my vdi from ~100GB to 200GB. In windows I used the command:
./VBoxManage modifyhd "D:\xub2\xub2.vdi" --resize 200000
That went fine. Then I used a GParted live CD to create a VM, attached the vdi, and resized the partitions:
[screenshot: GParted GUI showing the resized partitions]
All looks good. If I then use the 'fdisk -l' command whilst in the gparted vm the increased partition sizes are visible as expected.
[screenshot: fdisk -l results for the vdi attached to the GParted VM]
If I try and resize the file system for one of the newly resized logical drives with 'resize2fs /dev/sda5' I am told it is already 46265856 blocks long and there is nothing to do.
However....
If I then re-attach this vdi to an Ubuntu VM and boot from it, the 'fdisk -l' command gives different results, basically telling me that the drive is still 100GB in size.
[screenshot: fdisk -l results for the same vdi attached to the Ubuntu VM]
The 'df' command confirms that it is not resized.
[screenshot: df output with the same vdi attached to the Ubuntu VM]
If I try the command 'resize2fs /dev/sda5' I get the result:
The filesystem is already 22003712 (4k) blocks long. Nothing to do!
How can I fix this and make the Ubuntu VM see that the disk and partitions have been increased in size?
OK, I will answer my own question (thank you for the negative vote, anonymous internet).
This issue occurs when you have existing snapshots of the drive that you are trying to expand associated with a VirtualBox VM.
I found this described on the VirtualBox forums:
https://forums.virtualbox.org/viewtopic.php?f=35&t=50661
One suggested solution is to delete the snapshots, however I got an error message when I attempted that.
The solution that worked for me was to clone my VM. The cloned VM (which did not have any snapshots associated with it), behaved as expected and showed the correct size for the resized disk.
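If you prefer the command line, cloning can also be done with VBoxManage; a minimal sketch (the VM name is just an example):
# Full clone, registered as a new VM with no snapshots attached
VBoxManage clonevm "xub2" --name "xub2-clone" --register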
To be clear: the situation I described above is 100% true.
Hope that helps someone.

"Windows Subsystem for Linux has no installed distributions" even though 'Ubuntu' is installed

I recently moved my wsl directory to another drive due to low storage in C: drive. As per the answer provided in this StackOverflow post, I used lxrunoffline tool and moved my Ubuntu distribution to another drive (E:\wsl in my case). As soon as the distribution was moved successfully, I ran wsl to test and it worked like a charm.
Everything went fine until one day I accidentally renamed the E:\wsl folder to something else. Well, as expected, wsl didn't work. Then I reverted to the name wsl and expected it to work, but to my surprise it didn't find any installed distribution after that, even though it is installed... 😕
E:\> wsl
Windows Subsystem for Linux has no installed distributions.
Distributions can be installed by visiting the Microsoft Store:
https://aka.ms/wslstore
Is there any way to revert back to the old directory or make wsl point to a manual location?
EDIT: I don't want to reset Ubuntu as I want to retain the installed packages and preferences...
Well, I finally found a solution to this problem. 😊
It is as simple as registering the distribution with the lxrunoffline tool, using the rg (register) command.
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg
[ERROR] the option '-d' is required but missing
Options:
-n arg Name of the distribution
-d arg The directory containing the distribution.
-c arg The config file to use. This argument is optional.
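So, putting the required options together, the actual registration looks something like this (distribution name and directory are taken from my setup above; adjust them to yours):
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg -n Ubuntu -d E:\wsl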
After running the register command, I was able to start wsl as usual. But that logs you in as the "root" user and thus starts in the "/root" directory. I ran the following command to make wsl start as a different user (this is for Ubuntu):
ubuntu config --default-user <user-name>

virsh console hangs whenever I connect to Virtual Machine

Whenever I try to connect to a VM using virsh console <vm name>, my screen hangs and displays:
Connected to domain <vm name>
Escape character is ^]
I have found many solutions on the internet but nothing has worked for me, and I am not even able to find the /etc/init directory, as CentOS 7 has a different directory structure.
I need the /etc/init directory to create a script which I found on the internet as a suggested solution.
I am using only ssh connection and no GUI and I do not have any access to the physical machine.
I think you need to start a console (e.g. on ttyS0).
For example, on my Debian 8 I enable it with systemd:
systemctl enable getty@tty1.service
Enable Serial Console on CentOS/RHEL 7
On the virtual machine, add 'console=ttyS0' at the end of the kernel lines in the /boot/grub2/grub.cfg file:
grubby --update-kernel=ALL --args="console=ttyS0"
Note: Alternatively, you can edit the /etc/default/grub file, add console=ttyS0 to the GRUB_CMDLINE_LINUX variable, set
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
and then execute
grub2-mkconfig -o /boot/grub2/grub.cfg
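If the guest still does not show a login prompt after rebooting with those kernel arguments, you can also explicitly enable a getty on the serial port inside the CentOS 7 guest (a sketch using the standard systemd unit name):
# On the CentOS 7 guest: run a login getty on ttyS0 now and on every boot
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service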
I had the same issue right after virt-install, then after trying to connect to the guest, too. I tried all the suggested solutions but none of them helped. Then I realized that I forgot to install KVM. A simple 'yum -y install kvm' resolved the issue.

reboot VM (run on vbox) into specific (compiled) kernel from shell

I'm running Ubuntu 14.04 in VirtualBox. In this machine I compiled and run kernel 3.14, which I choose from the GRUB menu when Ubuntu loads in VirtualBox.
The host also runs Ubuntu 14.04.
I wanted to ask: is there a way to boot the guest Ubuntu into a specific kernel with a shell command?
I can start a VM in VirtualBox from the command line with this command:
VBoxManage startvm ubuservloc --type headless
but it's not quite what I need.
I don't know of any way to directly communicate from the host to the guest's GRUB, but there are several indirect ways you could go:
mount the /boot filesystem from the host and drop a file there that is read by the guest's grub.cfg.
VBoxManage controlvm keyboardputscancode to type a hotkey which is assigned to the correct kernel in GRUB (shortly after starting the VM)
Configure GRUB to listen on a (virtual) serial port and select the kernel by writing to that port from the host
In case a second reboot is acceptable (first boot into the default kernel, then reboot into the desired one), there are also several ways: you can use the grub-set-default command from the guest to choose your desired kernel and issue a reboot (see the sketch after this list). Some ways I can think of here:
VBoxManage guestcontrol run to call a shell script from host in the guest (after guest additions have been loaded)
VBoxManage guestproperty to set a property from host and VBoxControl guestproperty to read it from an init script and decide from there
Just SSH into the guest and reboot from there :D
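As a concrete example of the second-reboot route, from inside the guest you can use grub-reboot, the one-shot cousin of grub-set-default (this needs GRUB_DEFAULT=saved in /etc/default/grub; the menu entry title below is a placeholder, check yours with grep menuentry /boot/grub/grub.cfg):
# Inside the guest: select the custom kernel for the next boot only, then reboot
sudo grub-reboot "Advanced options for Ubuntu>Ubuntu, with Linux 3.14"
sudo reboot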
Obviously, if you always want to boot that kernel, why not make it the default? And in case you always want to alternate between two kernels, you can also set the default for the next boot to the other one directly from grub.cfg.

Vagrant stuck in "Waiting for VM to Boot"

I want to preface this question by mentioning that I have indeed looked over most if not all vagrant "Waiting for VM to Boot" troubleshooting threads:
Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: We are using Vagrant 1.2.2 since we do not at the moment have time to change configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an /official/ folder which includes things such as the Vagrantfile. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the "preview" of the VirtualBox VM, and it would simply be stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I know, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI, it says command not found. Running sudo dhclient eth0 was one of the suggested fixes.
This fix: https://superuser.com/a/343775/298915 of "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0." did nothing to fix the issue.
Conclusion:
I tried re-installing everything, thinking I had messed up a file, but that did not seem to ameliorate the issue. I am stuck on this. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Set up your private/public keys using the link provided. My box is a Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub, you have to use /home/vagrant/.ssh/id_rsa.pub (and the corresponding id_rsa path for the private key).
Note: make sure your files have the right permissions. Check using ls -l path, and change using chmod. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
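For reference, the permissions OpenSSH typically insists on look like this (paths as above, run inside the guest as the vagrant user):
# Inside the guest: directory and key file permissions sshd will accept
mkdir -p /home/vagrant/.ssh
touch /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys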
Boot your VM with the VirtualBox GUI (either through the Vagrantfile GUI setting, or by starting your VM directly from VirtualBox). Log in with vagrant / vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out that it was off when I tried to wget the private/public keys in the tutorial above, but was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me. Though it may be different for other machines, for whatever reason Vagrant likes to break.
Suggestion: can this be saved as a script so we don't need to do this manually every time?
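One way to avoid running dhclient by hand on every boot (not something I have verified on this particular box, but standard Debian networking) is to let the guest bring eth0 up itself in /etc/network/interfaces:
# /etc/network/interfaces on the guest
auto eth0
iface eth0 inet dhcp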
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?