I've installed WSL2 on my Windows machine and haven't been able to figure out where the files are actually stored.
Note that I don't mean browsing them inside File Explorer - I know that can be done by typing \\wsl$\ into the Explorer address bar.
If I had to guess, I'd say the files are stored on the same hard drive as the OS.
So I actually have two related questions.
Where are the files stored?
If they are stored on the same hard drive as my OS, can I somehow relocate my WSL installation to another hard drive?
EDIT:
I was able to locate the installation path; on my machine it is:
C:\Users\Eliran\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04onWindows_79rhkp1fndgsc\LocalState
Is there a way to mount this to another location?
All the files are stored in an ext4.vhdx file in the installation directory, which you can't mount directly in Windows because it is formatted as ext4 (obviously).
There are two ways to change the location of the above-mentioned VHDX file: the official, tedious way and an unofficial, quick-and-dirty way.
The official tedious way
Export the distro to a location with wsl.exe --export <Distro> <FileName> from CMD/PowerShell
Import the distro to a different location with wsl.exe --import <Distro> <InstallLocation> <FileName> [Options]
The problem with this is that it's quite time-consuming, and after you do it you have to hope that it exported and imported several gigabytes' worth of thousands of files without any problems.
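For example, moving a distro named Ubuntu-20.04 to D:\wsl might look roughly like this (a sketch, not taken from the question; the distro name, target folder and tar path are placeholders):
wsl --shutdown
wsl --export Ubuntu-20.04 D:\backup\ubuntu-20.04.tar
wsl --unregister Ubuntu-20.04
wsl --import Ubuntu-20.04 D:\wsl\Ubuntu-20.04 D:\backup\ubuntu-20.04.tar --version 2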
The quick and dirty way
This involves an unofficial, open-source WSL manager called LxRunOffline.
To install it (takes a minute at most), read through the instructions by the dev here.
If you installed it by manually downloading the binaries from the release page, make sure to install it to a directory in PATH, like C:\Windows.
Now the process is as simple as lxrunoffline move -n <distroname> -d <destination-folder>
For example lxrunoffline move -n Ubuntu-20.04 -d G:\wsl\
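Put together, a reasonable sequence looks like this (a sketch; the get-dir verification step assumes your LxRunOffline build provides that subcommand):
wsl --shutdown
lxrunoffline move -n Ubuntu-20.04 -d G:\wsl\
lxrunoffline get-dir -n Ubuntu-20.04
The last command should print the new installation directory, confirming the move.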
Hope I helped
I executed these commands in PowerShell to move my Ubuntu distro from C: to drive D:\wsl-ubuntu:
PS C:\Users\smarc> mkdir D:\wsl-ubuntu (create new location)
PS C:\Users\smarc> wsl -l -v (list wsl distros)
NAME STATE VERSION
Ubuntu Running 2
PS C:\Users\smarc> wsl --shutdown
PS C:\Users\smarc> wsl -l -v (verify it is stopped)
NAME STATE VERSION
Ubuntu Stopped 2
PS C:\Users\smarc> wsl --export Ubuntu ubuntu.tar
PS C:\Users\smarc> wsl --unregister Ubuntu
PS C:\Users\smarc> wsl --import Ubuntu D:\wsl-ubuntu\ .\ubuntu.tar --version 2
and reboot the computer at the end.
The only problem I have is that when I start the Ubuntu application, the default user is root. I need to execute $ su sergio to switch to my personal user.
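One way to avoid switching users manually (a sketch, assuming a reasonably recent WSL build and that the wanted user is named sergio) is to set the default user in /etc/wsl.conf inside the imported distro:
[user]
default=sergio
Then run wsl --shutdown and start the distro again; it should log you in as that user.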
You can delete the ubuntu.tar at the end of process.
Edit 2021-04-13: As pointed out in the comments, I had forgotten the --export command.
This is an answer to your last question: use symbolic links
open command prompt as administrator
shut down wsl vm using wsl --shutdown
change folder to C:\Users\Eliran\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04onWindows_79rhkp1fndgsc\
move the LocalState folder to another location like Z:\wsl\Ubuntu\
create a directory junction with mklink /J LocalState Z:\WSL\Ubuntu\LocalState
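Put together from an elevated command prompt, the whole move might look something like this (a sketch; Z:\WSL\Ubuntu is a placeholder target, and robocopy /E /MOVE is just one way to move the folder across drives):
wsl --shutdown
cd C:\Users\Eliran\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04onWindows_79rhkp1fndgsc
robocopy LocalState Z:\WSL\Ubuntu\LocalState /E /MOVE
mklink /J LocalState Z:\WSL\Ubuntu\LocalState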
I would also edit/create the .wslconfig file in your user folder to move the swap file to the folder where you store your WSL VMs, and maybe add options for CPU cores and RAM assignment:
[wsl2]
memory=4GB
processors=2
swap=1GB
swapFile=Z:\\WSL\\swap.vhdx
memory is the maximum amount of RAM that WSL will use;
processors is the number of cores allocated to your WSL VM;
swap is the size of the swap file;
swapFile is the location of your swap file and, to my knowledge, is used by all WSL VMs; note the double backslashes in the path, they are mandatory.
Start your WSL VM as you normally would.
Related
How can I change the default location for storing Docker images in Windows? I currently have Docker installed on my C: drive, and the images are stored in the following location:
C:\Users\xxxxx\AppData\Local\Docker\wsl\data.
I want to change the default location to my D: drive. I am using WSL2 as the backend for Docker, and I have read that I can use the .wslconfig file to configure Docker. However, I am not sure how to set up the .wslconfig file to change the default image location. My WSL2 installation is located on my D: drive, which I installed from the Microsoft Store.
I'm using Docker version 20.10.21, and these are my WSL specs:
WSL version: 1.0.3.0
Kernel version: 5.15.79.1
WSLg version: 1.0.47
MSRDC version: 1.2.3575
Direct3D version: 1.606.4
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22000.1335
I'm using Ubuntu distro in WSL, and Docker Desktop v.4.15.0
I tried making some changes in .wslconfig, but there was no option for storage or anything similar.
Caveats/Preface:
I've tried this and it works, but I cannot guarantee that long-term it will continue to work. There's the potential that something will break when Docker Desktop upgrades in the future.
In general I don't recommend registry hacks, but I'm not aware of another way to do this. Other than the previous caveat, this seems fairly safe.
No, there's no .wslconfig option for changing the location of a distribution.
With that in mind, here's what I did to move docker-desktop-data to the D: drive:
Create the directory. I'll use D:\wsl\docker-desktop-data as an example.
Stop Docker Desktop by right-clicking the status bar icon and selecting Quit Docker Desktop.
From PowerShell:
wsl --shutdown
Confirm the location (BasePath) and registry key (PSChildName) of the docker-desktop-data via:
Get-ChildItem HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss\ |
ForEach-Object {
(Get-ItemProperty $_.PSPATH)
} | Where-Object {
$_.DistributionName -eq "docker-desktop-data"
}
Move ext4.vhdx from the BasePath directory identified above to the D:\wsl\docker-desktop-data directory.
In regedit, navigate to:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Lxss
Find the subkey matching the PSChildName from above.
Modify the BasePath to point to \\?\D:\wsl\docker-desktop-data (a PowerShell alternative to this regedit step is sketched after these steps)
Restart Docker Desktop
Test that your existing images are still available by running one of them.
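If you'd rather not open regedit, the same BasePath change can be made from PowerShell; this is a sketch that assumes the query above finds exactly one matching subkey:
$key = Get-ChildItem HKCU:\Software\Microsoft\Windows\CurrentVersion\Lxss\ |
    Where-Object { (Get-ItemProperty $_.PSPath).DistributionName -eq "docker-desktop-data" }
Set-ItemProperty -Path $key.PSPath -Name BasePath -Value '\\?\D:\wsl\docker-desktop-data'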
I want to use WSL like a virtual machine (for learning Hadoop), because my PC's performance is poor.
I use following command to create 3 instances:
wsl --import <Distribution Name> <Install Folder> <.TAR.GZ File Path>
but I found that they use the same file system. I want the 3 instances to work like 3 separate PCs.
Is it possible?
Your question might need more detail, because the answer seems fairly obvious, but let's see if we can get you in the right direction.
Because each WSL instance is a virtualized environment, you aren't going to improve file system performance unless you distribute each instance to a separate Windows drive. For instance:
wsl --import Hadoop1 c:\wsl\hadoop1 image.tar --version 2
wsl --import Hadoop2 d:\wsl\hadoop2 image.tar --version 2
wsl --import Hadoop3 e:\wsl\hadoop3 image.tar --version 2
I'm not a Hadoop expert, but I would assume that C:, D:, and E: would need to be different physical drives. If they are all on the same physical drive (even different partitions), then that drive's IO is being shared amongst the three.
And, of course, you probably aren't going to improve performance if you use one SSD and two HDDs.
Note that you should make sure that you are using WSL2 instances for this purpose. WSL1 instances create their filesystem directly on the Windows drive, but WSL2 uses a virtual HDD (ext4.vhdx) for each image.
You can also wsl --set-default-version 2 to make this the default for all instances (if you haven't already).
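To confirm that each imported instance got its own virtual disk, you can check the install folders for their ext4.vhdx files (paths match the hypothetical example above):
dir c:\wsl\hadoop1\ext4.vhdx
dir d:\wsl\hadoop2\ext4.vhdx
dir e:\wsl\hadoop3\ext4.vhdx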
Due to increasing space consumption of WSL I was forced to move my WSL distros to another disk.
Ubuntu
docker-desktop
docker-desktop-data
I used these commands.
wsl --shutdown
wsl --export (on all three of those distros)
wsl --import (already on another disk)
Now my environment is running fine, but the ext4.vhdx in AppData\Local\Docker\wsl\data is still present and I can't remove it because it is still in use.
When I look at process handles, it's still being used by System, which doesn't tell me much.
If I run wsl --shutdown, all virtual disks present on disk E: lose their handles, but the one on disk C: is still in use.
Would you know how to find out what part of WSL (or whether it even is WSL) is using it?
Since shutting down WSL does not remove that handle, it might be used by something else.
It's not Docker for Desktop; that one uses a different disk.
Thanks for your suggestions.
Docker Desktop for Windows, which uses WSL2, stores all image and container files in a separate virtual volume (vhdx). This virtual hard disk file can automatically grow when it needs more space (to a certain limit). Unfortunately, if you reclaim some space, i.e. by removing unused images, vhdx doesn't shrink automatically. Luckily, you can reduce its size manually by calling this command in PowerShell (as Administrator):
Optimize-VHD -Path $Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx -Mode Full
If the above command fails with
The system failed to compact 'C:\Users\Maxx\AppData\Local\Docker\wsl\data\ext4.vhdx':
The process cannot access the file because it is being used by another process. (0x80070020).
exit from Docker Desktop or stop the services and tasks using that file:
net stop com.docker.service
taskkill /IM "docker.exe" /F
taskkill /IM "Docker Desktop.exe" /F
wsl --shutdown
I reclaimed 15Gb of 40Gb.
Origin of the solution.
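If Optimize-VHD isn't available (it ships with the Hyper-V PowerShell module, which Windows Home editions lack), diskpart can compact the file as well; a sketch, with the path as a placeholder for your own:
wsl --shutdown
diskpart
then, inside the diskpart prompt:
select vdisk file="C:\Users\<you>\AppData\Local\Docker\wsl\data\ext4.vhdx"
compact vdisk
exit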
You can just clean the data from the interface: Troubleshoot -> Clean / Purge data.
Upgrading from WSL1 to WSL2 made it a bit messy, but resetting docker-desktop to its default settings and then purging data from WSL (using the docker-desktop troubleshoot option) cleared it for me.
I recently moved my wsl directory to another drive due to low storage in C: drive. As per the answer provided in this StackOverflow post, I used lxrunoffline tool and moved my Ubuntu distribution to another drive (E:\wsl in my case). As soon as the distribution was moved successfully, I ran wsl to test and it worked like a charm.
Everything went fine until one day I accidentally renamed the E:\wsl folder to something else. Well, as expected, wsl didn't work. Then, I reverted back to the name wsl and expected it to work but to my surprise, it didn't find any installed distribution after that even though it's installed... 😕
E:\> wsl
Windows Subsystem for Linux has no installed distributions.
Distributions can be installed by visiting the Microsoft Store:
https://aka.ms/wslstore
Is there any way to revert back to the old directory or make wsl point to a manual location?
EDIT: I don't want to reset Ubuntu as I want to retain the installed packages and preferences...
Well, I finally found a solution to this problem. 😊
It is as simple as registering the distribution with the lxrunoffline tool, using the rg or register command.
E:\LxRunOffline\LxRunOffline-v3.3.3>lxrunoffline rg
[ERROR] the option '-d' is required but missing
Options:
-n arg Name of the distribution
-d arg The directory containing the distribution.
-c arg The config file to use. This argument is optional.
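So the actual command looks something like this (the distro name is a placeholder; point -d at wherever the distribution folder actually lives, E:\wsl in this case):
lxrunoffline rg -n Ubuntu-20.04 -d E:\wsl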
After running the register command, I was able to start wsl as usual. But that would log you in as the "root" user and thus start in the "/root" directory. I ran the following command to start wsl as a different user (this is for Ubuntu):
ubuntu config --default-user <user-name>
I would like to have a VM to look at how applications appear and to develop OS-specific applications; however, I want to keep all my code on my Windows machine so that if I decide to nuke a VM or anything like that, it's all still there.
If it matters, I'm using VirtualBox.
This is usually handled with network shares. Share your code folder from your host machine and access it from the VMs.
Aside from network shares, another tool to use for this is a version-control system.
You should always be able to make a normal network connection between the VM and the hosting OS, as though it were another computer on the same network. Which, in some sense, it is.
I do this all the time.
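Since the question mentions VirtualBox specifically, one concrete way to set this up (a sketch; the VM name and host path are placeholders) is a VirtualBox shared folder, which behaves much like a network share from the guest's point of view:
VBoxManage sharedfolder add "MyDevVM" --name code --hostpath "C:\Users\me\code" --automount
With Guest Additions installed and --automount set, a Linux guest typically sees the share under /media/sf_code.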
I have a directory in a Windows drive that I mount in my host ubuntu 12.04.
I run virtualbox ubuntu 13.04 as a guest.
I want the guest to mount the Windows directory with full non-root permissions.
I do almost all my work from a bash shell, so this method is natural for me.
When searching for methods to automatically mount virtualbox shared folders,
reliable and correct methods are hard to distinguish from those that fail.
Failures include getting and setting permissions, as well as other problems.
Methods that fail include:
modifying /etc/fstab
modifying /etc/rc.local
I am fairly certain that rc.local can be used,
but no methods I have tried worked.
I welcome improvements on these guidelines.
On VirtualBox 4.2.14, running a bash terminal in Nautilus on an Ubuntu 13.04 guest, below is a working method to mount Common (share name)
on /home/$USER/Desktop/Common (mount point) with full permissions.
(Note the '\' command continuation character in the find command.)
First time only: create your mount point, add the find command below to your .bashrc file, and run these commands.
Respond with your password when requested.
These are the four command lines needed:
mkdir $HOME/Desktop/Common
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers
find $HOME/Desktop/Common -maxdepth 0 -type d -empty -exec sudo \
mount -t vboxsf -o \
uid=`id -u $USER`,gid=`id -g $USER` Common $HOME/Desktop/Common \;
source ~/.bashrc # Needed if you want to mount Common in this bash.
All other times: simply launch a bash shell.
The find command mounts the shared directory if the mountpoint directory is empty.
If the mountpoint directory is not empty, it does not run the mount command.
I hope this is error-free and sufficiently general.
Please let me know of corrections and improvements.