"Windows Subsystem for Linux has no installed distributions" even though 'Ubuntu' is installed - windows-subsystem-for-linux

I recently moved my WSL directory to another drive due to low storage on the C: drive. As per the answer provided in this StackOverflow post, I used the LxRunOffline tool and moved my Ubuntu distribution to another drive (E:\wsl in my case). As soon as the distribution was moved successfully, I ran wsl to test and it worked like a charm.
Everything went fine until one day I accidentally renamed the E:\wsl folder to something else. Well, as expected, wsl didn't work. Then I reverted to the name wsl and expected it to work, but to my surprise, it didn't find any installed distribution after that even though it's installed... 😕
E:> wsl
Windows Subsystem for Linux has no installed distributions.
Distributions can be installed by visiting the Microsoft Store:
https://aka.ms/wslstore
Is there any way to revert back to the old directory or make wsl point to a manual location?
EDIT: I don't want to reset Ubuntu as I want to retain the installed packages and preferences...

Well, I finally found a solution to this problem. 😊
This is as simple as registering the distribution with the LxRunOffline tool, using its rg (or register) command.
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg
[ERROR] the option '-d' is required but missing
Options:
-n arg Name of the distribution
-d arg The directory containing the distribution.
-c arg The config file to use. This argument is optional.
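For reference, the actual command looks like this (a sketch assuming the distribution name Ubuntu and the restored E:\wsl directory from the question):
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline register -n Ubuntu -d E:\wsl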
After running the register command, I was able to start wsl as usual. But that would log you in as the "root" user and would thus start in the "/root" directory. I ran the following command to start wsl as a different user (this is for Ubuntu):
ubuntu config --default-user <user-name>
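You can also confirm the distribution is registered again by listing the installed distributions; assuming the standard WSL listing command, it should now show Ubuntu:
wsl --list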


WSL can't detect VS Code

At first, I tried to fix a problem I had with the npm command,
so I added
[interop]
appendWindowsPath = false
to /etc/wsl.conf
It worked, but another problem appeared.
When I type code .
Command 'code' not found, did you mean:
command 'node' from deb nodejs (12.22.9~dfsg-1ubuntu3)
command 'cdde' from deb cdde (0.3.1-1build1)
command 'ode' from deb plotutils (2.6-11)
command 'tcode' from deb emboss (6.6.0+dfsg-11ubuntu1)
command 'cde' from deb cde (0.1+git9-g551e54d-1.2)
Try: sudo apt install <deb name>
the above error message appears.
Then I tried the following:
export PATH=$PATH:"/mnt/c/Users/%USERNAME%/AppData/Local/Programs/Microsoft VS Code/bin"
It also works properly.
But whenever I restart WSL, the npm command still works, while the code command stops working again.
What should I do to fix the problem?
Thanks in advance!
My main suggestion would be to not use appendWindowsPath = false to fix your NPM problem. That's like using a sledgehammer as a flyswatter. As I said in this answer:
Please do not follow the recommendations (like this answer) to completely remove all Windows paths from WSL, as that will severely limit your ability to run Windows applications in WSL (one of its great features).
You'll also lose the ability to easily run PowerShell scripts and commands from within WSL, and you won't have direct access to wsl.exe itself from inside WSL (which comes in handy).
You can type the full paths to these commands, of course, but most instructions and other answers you find here are going to assume that you've left the Windows path intact.
Instead, figure out where npm is installed in your WSL distribution and then determine why it ends up later in the PATH than your Windows directories. Windows paths are added at the end of the Linux PATH for a reason. If something in your startup files is adding to the PATH, it should prepend (put the new directory at the beginning) so it takes precedence. E.g.:
export PATH="newdir:$PATH"
Note that I'm not saying that you should change your export statement above since, as mentioned, that Windows path would normally come at the end anyway. It's really not going to matter unless you put another code executable somewhere else in your path.
But whenever I restart WSL, the npm command still works, while the code command stops working again.
If you do want the "quick and dirty" (not recommended) solution, then you can simply add that export command that "makes it work" to your ~/.bashrc. That file is processed each time the Bash shell starts interactively.
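For example, a minimal sketch of that approach, assuming VS Code is installed in the default per-user location (replace <YourWindowsUser> with your actual Windows user name):
# appended to ~/.bashrc: put the VS Code launcher back on the PATH for every interactive shell
export PATH="$PATH:/mnt/c/Users/<YourWindowsUser>/AppData/Local/Programs/Microsoft VS Code/bin"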

Booting raw-disk Windows 10 VM in VirtualBox boots to grub shell

I have a dual-boot setup with Windows 10 and Kubuntu 18. Following instructions found here and there, I managed to get Windows to run as a guest VM in the Kubuntu host using VirtualBox:
sudo usermod -a -G disk $USER
VBoxManage internalcommands createrawvmdk -filename "/path/to/vm/win10.vmdk" -rawdisk /dev/sda -partitions 1,3,4 -relative
The first command is there to avoid having to run VirtualBox as superuser.
When I boot the VM, I briefly see an error message
Boot Failed. EFI DVD/CDROM
SystemBootOrder not found. Initializing defaults.
Creating boot entry "Boot0003" with label "ubuntu" for file "\EFI\ubuntu\shimx64.efi"
and then I end up in the grub shell. Now, when I run the commands
insmod chain
set root=(hd0,gpt1)
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
boot
Windows boots and works just fine, but entering these commands every time is not exactly a smooth workflow. Any idea how to permanently fix this?
Please note that I'd still like to be able to physically boot into both OS's.
Thanks,
I had the same problem. I fixed it, but then updated my kernel and so grub re-un-fixed it for me! Figuring it out for the second time was quicker, but I figured it'd be even quicker next time to find my answer on StackOverflow!
My grub.cfg file in /boot/efi/EFI/ubuntu looked like this:
search.fs_uuid 47d6233f-c0ae-4f89-bf18-184452eac803 root hd0,gpt6
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
Because we set up the VirtualBox vmdk file with only the selected partitions that Windows needs, the search.fs_uuid command was failing, $root was empty, and so grub couldn't find $prefix/grub.cfg (/boot/grub/grub.cfg in my Linux rootfs, which is on sda6 == gpt6).
I automated it by changing the EFI grub.cfg; note that my EFI System Partition is 2, not 1 as in your example:
search.fs_uuid 47d6233f-c0ae-4f89-bf18-184452eac803 root hd0,gpt6
set prefix=($root)'/boot/grub'
if [ -f $prefix/grub.cfg ]
then
configfile $prefix/grub.cfg
else
insmod chain
set root=(hd0,gpt2)
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
boot
fi
Now, if grub can find the cfg file, it will give me the menu to select the boot entry as before; but if it can't (when I'm in VirtualBox), it'll just boot straight into Win10.
Hope this helps!

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

I've spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server and have found lots of tutorials on how to do this. However, the fact is that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from my desktop PC, but that did not help my CrashPlan Pro app connect to the CrashPlan server service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote-access behaviour changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file at
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want CrashPlan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
Then restart CrashPlan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change necessary on the server, for the easiest method (without an SSH tunnel), is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
Either do this every time you want to connect (because the token will change after every CrashPlan restart), or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it: you can start CrashPlan on your remote machine and it will connect properly; there are no other changes necessary. The latest CrashPlan (>4.4.1) will actually use the IP address from .ui_info.
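As a rough sketch of that copy-and-edit step, assuming an OS X client that reads .ui_info from /Library/Application Support/CrashPlan (the exact location varies by OS and client version) and the example server address from above:
# copy the token file from the server to the client machine
scp root@192.168.1.20:/var/lib/crashplan/.ui_info "/Library/Application Support/CrashPlan/.ui_info"
# rewrite the last comma-separated field so it holds the server's address
sed -i '' 's/,[^,]*$/,192.168.1.20/' "/Library/Application Support/CrashPlan/.ui_info"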
Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.

Vagrant stuck in "Waiting for VM to Boot"

I want to preface this question by mentioning that I have indeed looked over most, if not all, of the Vagrant "Waiting for VM to Boot" troubleshooting threads. Things I've tried include:
vagrant failed to connect VM
https://superuser.com/questions/342473/vagrant-ssh-fails-with-virtualbox
https://github.com/mitchellh/vagrant/issues/410
http://vagrant.wikia.com/wiki/Usage
http://scotch.io/tutorials/get-vagrant-up-and-running-in-no-time
And more.
Here's how I setup my Vagrant:
Note: We are using Vagrant 1.2.2 since we do not at the moment have time to change configs to newer versions. I am also using VirtualBox 4.2.26.
My office has an /official/ folder which includes things such as the Vagrantfile. Inside my Vagrantfile are these custom settings:
config.vm.box = "my_box"
config.ssh.private_key_path = "~/.ssh/github_rsa"
config.ssh.forward_agent = true
config.ssh.forward_x11 = true
config.ssh.max_tries = 300
config.vm.provision :shell, :inline => "/etc/init.d/networking restart"
I installed our custom box (called package.box) via vagrant box add my_box absolute_path/package.box which went without a hitch.
Running vagrant up, I would look at the "preview" of the VirtualBox VM, and it would simply be stuck at the login page. My terminal would also only say: Waiting for VM to boot. This can take a few minutes. As far as I know, this is an SSH issue, or a private key issue, though in my Vagrantfile I explicitly pointed to my private key location.
Interesting Notes:
Running dhclient within the VirtualBox GUI says "command not found". Running sudo dhclient eth0 was one of the suggested fixes.
This fix (https://superuser.com/a/343775/298915), which says to "modify the /etc/rc.local file to include the line sh /etc/init.d/networking restart just before exit 0", did nothing to resolve the issue.
Conclusion:
Having tried to re-install everything, thinking I had messed up a file, the issue did not go away. I am unable to get past it. Could someone give me some insight?
So after around twelve hours of dejected troubleshooting, I was able to (finally) get the VM to boot.
Set up your private/public keys using the link provided. My box is Debian Linux 3.2.0-4-amd64, so instead of /root/.ssh/id_rsa.pub, you have to use /home/vagrant/.ssh/id_rsa.pub (and the respective id_rsa path for the private key).
Note: make sure your files have the right permissions. Check using ls -l path, and change using chmod. Your machine may not have /home/vagrant/.ssh/authorized_keys, so generate that file with touch /home/vagrant/.ssh/authorized_keys.
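A sketch of that key setup, run inside the guest as the vagrant user (700/600 are the permissions sshd expects; the id_rsa.pub path is the one mentioned above):
# create the .ssh directory and authorized_keys file with strict permissions
mkdir -p /home/vagrant/.ssh
touch /home/vagrant/.ssh/authorized_keys
chmod 700 /home/vagrant/.ssh
chmod 600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh
# append the public key so the vagrant user can log in over SSH
cat /home/vagrant/.ssh/id_rsa.pub >> /home/vagrant/.ssh/authorized_keys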
Boot your VM using the VirtualBox GUI (either through the Vagrantfile GUI option, or by starting your VM directly in VirtualBox). Log in using vagrant / vagrant when prompted.
Within the GUI, manually start dhclient using sudo dhclient eth0 -v. Why is it off by default? I have no idea. I found out that it was off when I tried to wget the private/public keys in the tutorial above, but was unable to.
Go to your local machine's command line and reload vagrant using vagrant reload. It should boot, and no longer hang at "Waiting for VM to Boot."
This worked for me. Though it may be different for other machines, for whatever reason Vagrant likes to break.
Suggestion: can this be saved as a script so we don't need to do this manually every time?
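One rough, untested option in that direction: assuming the guest honours /etc/rc.local (this Debian image should), you could request a DHCP lease at every boot so the manual step disappears:
# /etc/rc.local inside the guest -- add this line just before the final 'exit 0'
dhclient eth0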
EDIT: Update to the latest version of Vagrant, and you will never see this issue again. About time, huh?

Vagrant corrupted index file C:\Users\USERNAME\.vagrant.d/data/machine-index/index

My Windows 8.1 just crashed. Now I have some files on my disk that are corrupted. This includes my Vagrant machine index (not sure if the naming is right, but I know that it is this file -> C:\Users\USERNAME\.vagrant.d/data/machine-index/index).
So there is a lot of binary or hexadecimal stuff in there (again, not sure, because I don't usually deal with this stuff, so correct me if I'm wrong!), and Vagrant spits out the following message when I try to start everything after boot.
vagrant up returns this
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/Username/.vagrant.d/data/machine-index/index
Same thing happened to me. So I just deleted the index file and the .lock file from the machine-index folder to get Vagrant working again.
When using Vagrant 2.2.5 on Windows 10, I had to navigate to /Users/{yourname}/.vagrant.d/data/machine-index and remove both index and index.lock, so rm index then rm index.lock.
Finally, I navigated back to my Homestead folder and ran vagrant up.
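On Windows, the same cleanup can be done from a Command Prompt in one line (a sketch; adjust the path if your .vagrant.d folder lives elsewhere):
del "%USERPROFILE%\.vagrant.d\data\machine-index\index" "%USERPROFILE%\.vagrant.d\data\machine-index\index.lock"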
When my laptop crashed unexpectedly, I had the same Vagrant issue (the corrupt index) on my first attempt to run vagrant up.
The machine index which stores all required information about
running Vagrant environments has become corrupt. This is usually
caused by external tampering of the Vagrant data folder.
Vagrant cannot manage any Vagrant environments if the index is
corrupt. Please attempt to manually correct it. If you are unable
to manually correct it, then remove the data file at the path below.
This will leave all existing Vagrant environments "orphaned" and
they'll have to be destroyed manually.
Path: C:/Users/{user}/.vagrant.d/data/machine-index/index
Unfortunately, my issue was not solved by deleting the index and index.lock files as the most upvoted answer suggests. I rebooted my VM using the VirtualBox GUI (VirtualBox being the VM provider) and the following message showed up:
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
I realised that the crash had produced errors on the VM's file system. So, after some searching and investigation, I overcame the issue by executing the command below:
xfs_repair -v -L /dev/dm-0
Environment info: Windows 10, VirtualBox 6.1, Vagrant 2.2.7, and the VM OS is CentOS 7.