WSL2 mount VHDX virtual disk issue with Windows Docker volumes - windows-subsystem-for-linux

On Windows 10 Insider Preview (prerelease.201207-1443) I have successfully mounted a VHDX file with WSL2.
PS C:\Users> wmic diskdrive list brief
Caption DeviceID Model Partitions Size
..
Microsoft Virtual Disk \\.\PHYSICALDRIVE2 Microsoft Virtual Disk 1 5362882560
PS C:\Users> wsl --mount \\.\PHYSICALDRIVE2 --bare
Inside Ubuntu 20.04 it looks OK; I could mount and format (ext4) the virtual disk:
/dev/sde1 4.9G 20M 4.6G 1% /mnt/docker/vol/pg_disk_1
I can use it, read, write without any problem. Performance is good.
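For reference, the steps inside the distro were roughly like this (a sketch; the device /dev/sde1 and the mount point are taken from above and may differ on other machines):
sudo mkfs.ext4 /dev/sde1
sudo mkdir -p /mnt/docker/vol/pg_disk_1
sudo mount /dev/sde1 /mnt/docker/vol/pg_disk_1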
Issue: I want to use this disk for application data when I create a Docker container, but it is not visible inside the container.
docker run -d -it --name devtest --mount type=bind,source=/mnt/docker/vol/pg_disk_1/nginx,target=/app nginx:latest
docker inspect shows the bind mount:
"Mounts": [
{
"Type": "bind",
"Source": "/mnt/docker/vol/pg_disk_1/nginx",
"Destination": "/app",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
When I create a file on the host, e.g. example_host.html:
User@Pince:/mnt/docker/vol/pg_disk_1/nginx$ ls -l
total 0
-rw-r--r-- 1 r858 r858 0 Jan 3 11:30 example_host.html
it is not visible when I check inside the container:
root@078e4d7007a8:/app# ls -la
total 8
drwxr-xr-x 2 root root 4096 Jan 3 10:32 .
drwxr-xr-x 1 root root 4096 Jan 3 10:15 ..
root@078e4d7007a8:/app# pwd
/app
And if I create a file here inside the container, it persists when I recreate the container, but not on the Linux-mounted disk; I found it here instead:
\\wsl$\docker-desktop\mnt\host\wsl\docker-desktop-bind-mounts\Ubuntu-20.04\...
Question: is this normal and I missed something, or is it just not ready yet at this stage of development? And does anybody know a workaround for using a VHDX-mounted disk for Docker application data on Windows?

The problem is that /dev/sde1 is created and managed by WSL, so it is not (easily?) accessible to Windows programs like Docker Desktop.
One workaround is to mount \\.\PHYSICALDRIVE2 as drive D: in Windows and then bind mount it as:
docker run -v 'D:\data:/data' ...
This way, you can access it both from Windows and from WSL.
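For example, a rough sketch of the Windows side (assumptions: the Hyper-V PowerShell module is available, the VHDX lives at the hypothetical path C:\vhd\pg_disk_1.vhdx, and it carries a filesystem Windows can read, e.g. NTFS rather than ext4):
# Detach the disk from WSL if it is still attached there
wsl --unmount \\.\PHYSICALDRIVE2
# Attach the VHDX on the Windows side (Hyper-V PowerShell module)
Mount-VHD -Path 'C:\vhd\pg_disk_1.vhdx'
# Assign a drive letter (e.g. D:) in Disk Management, then bind mount it:
docker run -v 'D:\data:/data' nginx:latest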

Related

How to access \\wsl$\othercontainer\some\file from within a WSL container?

From Windows, I can access the file systems of all the WSL containers from under \\wsl$.
And from inside a WSL container, I can access the windows C:\ drive as /mnt/c.
But how can I access another container's drive from inside a WSL container?
I'm trying to access \\wsl$\othercontainer\some\file from inside a WSL container.
wslpath can normally convert Windows file paths to paths accessible from WSL:
WSL2#~» wslpath 'C:\Windows\System32\drivers\etc\hosts'
/mnt/c/Windows/System32/drivers/etc/hosts
But it doesn't work for:
WSL2#~» wslpath '\\wsl$\othercontainer\some\file'
wslpath: \\wsl$\othercontainer\some\file
WSL2#~» echo $?
1
And of course:
WSL2#~» ls -l '\\wsl$\othercontainer\some\file'
ls: cannot access '\\wsl$\othercontainer\some\file': No such file or directory
This answer provided the solution:
sudo mkdir /mnt/othercontainer
sudo mount -t drvfs '\\wsl$\othercontainer' /mnt/othercontainer
ls -l /mnt/othercontainer/some/file
NOTE: It looks like symbolic links aren't supported. When one is encountered, we get an error like:
$ ls -l /mnt/othercontainer/bin
ls: cannot read symbolic link '/mnt/othercontainer/bin': Function not implemented
lrwxrwxrwx 1 root root 7 Apr 23 2020 /mnt/othercontainer/bin
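If you want to detach it again later, a plain umount should work:
sudo umount /mnt/othercontainer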

How to set mounted folder permission in podman

Abstract
When I mount a folder into my container and the path to the folder does not yet exist in the container, Podman will create it for me. I can set the permissions of the mounted folder on my host machine to match the container user, but the created path folders do not get the same permissions.
Steps to reproduce
For example, let's assume that in my image the home directory of the user is empty. Then I do this on my host:
$ mkdir foo
$ podman unshare chown 1000:100 foo
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] some/image:latest
In my container that results in:
~ # ls -la
drwxr-xr-t 3 root root 4096 Jan 28 12:43 bar
~ # cd bar
~/bar # ls -la
drwxrwxr-x 2 1000 users 4096 Jan 28 12:42 foo
~/bar #
Is this behavior intentional?
Is there a way to tell Podman to create the path with the same permissions as the destination folder?
I can imagine a workaround, but it would be nice if I could specify it in the run command.
Use Case
In my case I try to run different Jupyter notebooks as disposable containers directly from docker.io, but I do want to share the user settings. The user-settings folder is not present when the container mounts the volumes, so Podman creates it, but as root. The Jupyter user then cannot access the folders created by Podman and fails.
I could create a Buildfile from the images and create the folders in the build phase, but I use different images all the time and I don't want to create a custom image for all my use cases.
I could mount the volume to the parent folder, but all kinds of different stuff gets stored there and I don't want to share that with all the different containers.
I could keep the containers instead of disposing of them after the initial boot, but I don't know when I would want to reuse such a container, if at all...
Maybe it is possible to map the jupyter user to your user with the --uidmap command-line option?
(untested)
$ mkdir foo
$ jupyterUID=1234 # Replace 1234 with the correct UID for the jupyter user
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] --uidmap=0:1:$jupyterUID --uidmap=$(expr $jupyterUID + 1):$(expr $jupyterUID + 1):$(expr 65536 - $jupyterUID - 1) --uidmap=${jupyterUID}:0:1 some/image:latest
I think something like this is needed when the container starts as the container root user and then runs a program as another user. If that other user writes files in a bind-mounted directory, they will be owned by your normal user on the host. I don't know, though, if that is the case with your Jupyter container image.
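To make the arithmetic concrete, here is how the command expands for a hypothetical jupyterUID of 1000 (check the real UID with id inside the image):
# Assumed expansion for jupyterUID=1000 (rootless Podman; intermediate ID 0 is your own host user):
#   --uidmap=0:1:1000        container UIDs 0..999 map to intermediate IDs 1..1000
#   --uidmap=1001:1001:64535 container UIDs 1001..65535 map to themselves
#   --uidmap=1000:0:1        container UID 1000 maps to intermediate ID 0, i.e. your host user
podman run -v $PWD/foo:/home/myuser/bar/foo:z --uidmap=0:1:1000 --uidmap=1001:1001:64535 --uidmap=1000:0:1 some/image:latest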
Edit 4 April 2022
A related Stack Overflow answer that I wrote:
https://stackoverflow.com/a/71741794/757777
I also wrote a troubleshooting tip about using --uidmap and --gidmap in the Podman troubleshooting guide.

Unable to mount files in Vagrant

Error description
I'm unable to mount files in Vagrant or Docker, so it seems like it's an issue caused by some kind of a permission error.
My OS is Ubuntu 18.04 LTS (Bionic Beaver), I'm not running any access control modules like SELinux as far as I'm aware.
The Docker-related discussion of the error is found in another question:
Unable to mount files in Docker
Troubleshooting Vagrant
As a result I'm unable to mount files into Vagrant boxes (even though I have vboxsf):
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:
mount -t vboxsf -o uid=1000,gid=1000 srv_salt /srv/salt
The error output from the command was:
/sbin/mount.vboxsf: mounting failed with the error: No such device
I have specified these statements in the Vagrantfile and it works in my colleagues' local builds, but in my build these folders do not get mounted or copied into the boxes:
host.vm.synced_folder "salt/", "/srv/salt"
host.vm.synced_folder "pillar/", "/srv/pillar"
Conclusion
It seems something is very messed up on my local machine when it comes to copying over files; it possibly has to do with how my user is configured and what access it has for mounting files into VMs and containers.
If anyone can shed some light on this I'd appreciate it.
Updates
1)
As user @BMitch suggested, I went through my VirtualBox Guest Additions installation.
Whenever I update my packages I log it to logfiles; I found this in update-20180405_153850.txt from almost 2 months ago:
Preparing to unpack .../virtualbox-guest-additions-iso_5.1.34-0ubuntu1.16.04.2_all.deb ...
Unpacking virtualbox-guest-additions-iso (5.1.34-0ubuntu1.16.04.2) over (5.0.40-0ubuntu1.16.04.1) ...
But this doesn't make sense to me: my bootstrapping script (which runs vagrant up for the boxes) fails the first time I run it with this:
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:
mount -t vboxsf -o uid=1000,gid=1000 srv_salt /srv/salt
The error output from the command was:
/sbin/mount.vboxsf: mounting failed with the error: No such device
Traceback (most recent call last):
File "launch-vagrant.py", line 166, in <module>
main()
File "launch-vagrant.py", line 120, in main
vagrant()
File "launch-vagrant.py", line 95, in vagrant
main()
File "launch-vagrant.py", line 67, in main
start()
File "launch-vagrant.py", line 83, in start
_exec('vagrant', 'up')
File "launch-vagrant.py", line 129, in _exec
subprocess.check_call(list(args))
File "/usr/lib/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['vagrant', 'up']' returned non-zero exit status 1.
The second time I run it, it succeeds, but with the following warning message:
Copy iso file /usr/share/virtualbox/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Mounting Virtualbox Guest Additions ISO to: /mnt
mount: /dev/loop0 is write-protected, mounting read-only
Installing Virtualbox Guest Additions 5.2.11 - guest version is unknown
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.2.11 Guest Additions for Linux........
VirtualBox Guest Additions installer
Removing installed version 5.2.11 of VirtualBox Guest Additions...
Copying additional installer modules ...
Installing additional modules ...
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
kernel-devel kernel-devel-3.10.0-693.21.1.el7.x86_64
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
kernel-devel kernel-devel-3.10.0-693.21.1.el7.x86_64
An error occurred during installation of VirtualBox Guest Additions 5.2.11. Some functionality may not work as intended.
In most cases it is OK that the "Window System drivers" installation failed.
Clearly, this is the VBox Guest Additions installed in the actual boxes and not on my local machine.
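Based on that message, I suppose installing the matching kernel headers and build tools inside the (CentOS 7) guest would let the kernel modules build, something like this (untested):
sudo yum install -y gcc make kernel-devel-$(uname -r)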
I did, however, change the file mode for /usr/share/virtualbox/VBoxGuestAdditions.iso, so maybe that will fix the issue:
petrus@DD-XPS-15-9550:/usr/share/virtualbox$ ll
total 56656
drwxr-xr-x 3 root root 4096 maj 14 00:28 ./
drwxr-xr-x 384 root root 16384 maj 16 12:58 ../
drwxr-xr-x 2 root root 12288 maj 13 23:06 nls/
-rw-r--r-- 1 root root 57970688 apr 20 15:50 VBoxGuestAdditions.iso
-rwxr-xr-x 1 root root 2600 nov 29 2016 VBox.sh*
-rwxr-xr-x 1 root root 4163 apr 13 18:37 VBoxSysInfo.sh*
petrus@DD-XPS-15-9550:/usr/share/virtualbox$ sudo chmod ugo+w VBoxGuestAdditions.iso
[sudo] password for petrus:
petrus@DD-XPS-15-9550:/usr/share/virtualbox$ ll
total 56656
drwxr-xr-x 3 root root 4096 maj 14 00:28 ./
drwxr-xr-x 384 root root 16384 maj 16 12:58 ../
drwxr-xr-x 2 root root 12288 maj 13 23:06 nls/
-rw-rw-rw- 1 root root 57970688 apr 20 15:50 VBoxGuestAdditions.iso
-rwxr-xr-x 1 root root 2600 nov 29 2016 VBox.sh*
-rwxr-xr-x 1 root root 4163 apr 13 18:37 VBoxSysInfo.sh*
To the Vagrant problem:
By default Vagrant mounts the folder containing the host Vagrantfile to /vagrant. Check the output of 'ls -l /vagrant' to see whether the basic file system mount works.
The ls command should show at least your Vagrantfile in /vagrant.
If the Vagrantfile is visible inside the virtual machine, the basic mount works with Vagrant.
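A quick way to run that check from the host (a sketch; it assumes the box is already up):
vagrant ssh -c 'ls -l /vagrant'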
host.vm.synced_folder "salt/", "/srv/salt"
host.vm.synced_folder "pillar/", "/srv/pillar"
It seems you mount your additional folders relative to your Vagrantfile. In that case you can modify your VM internally to link them to the desired destination folders.
You can do that in the provisioning stage of your VM, for example:
# Vagrantfile snippet
config.vm.provision "shell", inline: <<-SHELL
ln -s /vagrant/salt /srv/salt
ln -s /vagrant/pillar /srv/pillar
SHELL
Maybe these folders are not available at provision time; in that case run the links as a one-time task after your first login, or put something similar into /etc/rc.local of your VM, as sketched below.
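A sketch of such an /etc/rc.local inside the VM (hypothetical; it assumes /vagrant is mounted by the time rc.local runs):
#!/bin/sh -e
# recreate the links on every boot; -sfn replaces existing symlinks
ln -sfn /vagrant/salt /srv/salt
ln -sfn /vagrant/pillar /srv/pillar
exit 0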

Docker - non-privileged user can write to / inside container

I've created a container based on the centos:6.8 image using the following Dockerfile:
FROM centos:6.8
RUN adduser -m test
USER test
The image is then built using docker build:
docker build -t dockerdemo .
Then I start a container with:
docker run -ti dockerdemo bash
When I am inside the container, I appear to be able to write as the "test" user into the root directory of the container:
[test@9af9c4aeb990 /]$ ls -ld /
drwxr-xr-x 29 root root 4096 Oct 25 09:49 /
[test@9af9c4aeb990 /]$ id -a
uid=500(test) gid=500(test) groups=500(test)
[test@9af9c4aeb990 /]$ touch /test-file
[test@9af9c4aeb990 /]$ ls -l /test-file
-rw-rw-r-- 1 test test 0 Oct 25 09:49 /test-file
I am expecting to see Permission denied when I run the touch command.
If I alter the Dockerfile and remove the USER statement, and rebuild, then I can su to the "test" user inside the container and I get the behaviour I would expect:
[root@d16277f693d8 /]# su - test
[test@d16277f693d8 ~]$ id
uid=500(test) gid=500(test) groups=500(test)
[test@d16277f693d8 ~]$ ls -ld /
drwxr-xr-x 29 root root 4096 Oct 25 09:50 /
[test@d16277f693d8 ~]$ touch /test-file
touch: cannot touch `/test-file': Permission denied
Have I misunderstood how user permissions work inside containers?
Is there a way to produce my expected behaviour?
There was a vulnerability announced in Docker 1.12.2 that matches your scenario. Release 1.12.3 just came out yesterday to fix this issue, and CVE-2016-8867 was registered for the vulnerability. It's an internal container privilege escalation, so the impact is limited, but it's still worth the upgrade.
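To confirm you are on a fixed release and that the behaviour is gone, something like this should do (a sketch reusing the dockerdemo image built in the question):
docker version --format '{{.Server.Version}}'
# expected after upgrading to 1.12.3 or later: "Permission denied"
docker run --rm -ti dockerdemo bash -c 'touch /test-file'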

How can I mount an S3 volume with proper permissions using FUSE

I have an Amazon S3 bucket (let's call it static.example.com) that I need to mount on an EC2 instance (Ubuntu 12.04.2). I've installed s3fs. I'm able to mount the volume, but I can't write to the bucket. I have tried:
sudo s3fs static.example.com -o use_cache=/tmp,allow_other,uid=33,gid=33 /mnt/static.example.com
I can then cd /mnt and ls -la to see:
drwxr-xr-x 5 root root 4096 Mar 28 18:03 .
drwxr-xr-x 25 root root 4096 Feb 19 19:22 ..
lrwxrwxrwx 1 root root 7 Feb 21 19:19 httpd -> /httpd/
drwx------ 2 root root 16384 Oct 9 2012 lost+found
drwxr-xr-x 1 www-data www-data 0 Jan 1 1970 static.example.com
This all looks good, but when I cd static.example.com and mkdir test, I get:
mkdir: cannot create directory `test': Permission denied
The only way I can actually create a directory or touch a file is to force it with sudo. This is not a viable option, however, because I want to write files to the bucket from Apache. My Apache server runs as user:group www-data. Running mount yields:
s3fs on /mnt/static.example.com type fuse.s3fs (rw,nosuid,nodev,allow_other)
How can I mount this bucket in a manner that will allow me to write to the bucket?
I'm the lead developer and maintainer of the open-source project RioFS: a userspace filesystem to mount Amazon S3 buckets.
Our project is an alternative to the s3fs project; its main advantages compared to s3fs are simplicity, speed of operations, and bug-free code. Currently the project is in beta, but it's been running on several heavily loaded file servers for quite some time.
We are looking for more people to join our project and help with testing. In return we offer quick bug fixes and will listen to your requests for new features.
Regarding your issue:
if you use RioFS, you can mount a bucket and have write access to it using the following command (assuming you have installed RioFS and have exported the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables):
riofs -o allow_other http://s3.amazonaws.com bucket_name /mnt/static.example.com
(please refer to the project description for command-line arguments)
Please note that the project is still in development; there could still be a number of bugs left.
If you find that something doesn't work as expected, please file an issue report on the project's GitHub page.
Hope this helps, and we look forward to seeing you join our community!
This works for me:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache
If you need to debug, just add ,f2 -f -d:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache,f2 -f -d
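Since you want Apache (www-data, uid/gid 33) to do the writing, a hedged variant of the same call that maps ownership to that user (it assumes your s3fs version supports the uid, gid and umask options):
sudo s3fs static.example.com /mnt/static.example.com -o allow_other,use_cache=/tmp,uid=33,gid=33,umask=0022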
Try this method using S3Backer, which exposes the bucket as:
mountpoint/
file # (e.g., can be used as a virtual loopback)
stats # human readable statistics
Read more about it here:
http://www.turnkeylinux.org/blog/exploring-s3-based-filesystems-s3fs-and-s3backer