When I go into WSL the C drive is automatically mounted at /mnt/c. I would further like to mount a folder C:\D to the mount point /mnt/d.
The contents of /etc/fstab:
LABEL=cloudimg-rootfs / ext4 defaults 0 0
/mnt/c/D /mnt/d none bind
The contents of /etc/wsl.conf:
# Enable extra metadata options by default
[automount]
enabled = true
root = /mnt/
options = "metadata,umask=22,fmask=11"
mountFsTab = true
# Enable DNS – even though these are turned on by default, we’ll specify here just to be explicit.
[network]
generateHosts = true
generateResolvConf = true
When I run sudo mount -a it mounts correctly. However, it is not mounted at startup, and running mount -a without sudo reports "mount: only root can use "--all" option".
The question is old, but in case somebody else hits it: I found the answer in the WSL release notes:
WSL now processes the /etc/fstab file during instance start [GH 2636].
This is done prior to automatically mounting DrvFs drives; any drives
that were already mounted by fstab will not be remounted
automatically, allowing you to change the mount point for specific
drives.
Therefore, before the bind mount, you have to add a mount entry for the Windows drive, e.g.:
# <file system> <dir> <type> <options> <dump> <pass>
C: /mnt/c drvfs rw,noatime,uid=1000,gid=1000,case=off,umask=0027,fmask=0137 0 0
/mnt/c/directory/for/mount /where/to/mount none bind,defaults 0 0
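Applied to the question above, a sketch of the whole /etc/fstab might look like this (the drvfs options are only an example; adjust uid/gid and the masks to your setup):
# <file system> <dir> <type> <options> <dump> <pass>
LABEL=cloudimg-rootfs / ext4 defaults 0 0
C: /mnt/c drvfs rw,noatime,uid=1000,gid=1000,metadata,umask=22,fmask=11 0 0
/mnt/c/D /mnt/d none bind 0 0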
I need help with adding an ACL on a mounted directory.
My ultimate goal here is to set default permissions for files and subdirectories.
Many people suggested using the "setfacl" command, so I tried using it as:
setfacl -d -m g::rwx /SharedA/dirA/dir1
However, I've got error message of
setfacl: /SharedA/dirA/dir1: Operation not supported
SharedA is a shared directory mounted on the server using the line in /etc/fstab shown below.
IPADDR:/volume4/SharedA /SharedA nfs defaults 0 0
Therefore, I added acl after defaults as shown below:
IPADDR:/volume4/SharedA /SharedA nfs defaults,acl 0 0
and remounted it without any error using:
sudo mount -o remount /SharedA
However, it still does not seem to be working.
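One thing I have not checked yet is which NFS version and mount options were actually negotiated; a quick way to look (assuming the standard nfs-utils tools are installed) would be:
nfsstat -m
mount | grep SharedA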
The server is built on CentOS7. Would anybody know what I should try?
I hope I've provided enough info. Please advise.
Thank you!
Abstract
When I mount a folder into my container and the path to the folder does not yet exist in the container, Podman will create it for me. I can set the permissions of the mounted folder on my host machine so that they match the container user, but the path folders that Podman creates do not get the same permissions.
Steps to reproduce
For example, let's assume that in my image the home directory of the user is empty. Then I do the following on my host:
$ mkdir foo
$ podman unshare chown 1000:100 foo
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] some/image:latest
That results in the following inside my container:
~ # ls -la
drwxr-xr-t 3 root root 4096 Jan 28 12:43 bar
~ # cd bar
~/bar # ls -la
drwxrwxr-x 2 1000 users 4096 Jan 28 12:42 foo
~/bar #
Is this behavior intentional?
Is there a way to tell Podman to create the path with the same permissions as the destination folder?
I can imagine a workaround, but it would be nice if I could express this in the run command.
Use Case
In my case I try to run different Jupyter notebooks as disposable containers directly from docker.io, but I do want to share the user settings. The user-settings folder is not present when the container mounts the volumes, so Podman creates it, but as root. The Jupyter user therefore cannot access the folders created by Podman and fails.
I could create a Buildfile from the images and create the folders in the build phase, but I use different images all the time and I don't want to create a custom image for every use case.
I could mount the volume at the parent folder, but all kinds of different stuff gets stored there and I don't want to share that with all the different containers.
I could keep the containers instead of disposing of them after the initial boot, but I don't know when I would want to reuse a given container, if at all...
Maybe it is possible to map the jupyter user to your user with the --uidmap command-line option?
(untested)
$ mkdir foo
$ jupyterUID=1234 # Replace 1234 with the correct UID for the jupyter user
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] --uidmap=0:1:$jupyterUID --uidmap=$(expr $jupyterUID + 1):$(expr $jupyterUID + 1):$(expr 65536 - $jupyterUID - 1) --uidmap=${jupyterUID}:0:1 some/image:latest
I think something like this is needed when the container starts as the container root user and then runs a program as another user. If that other user writes files into a bind-mounted directory, the files would then be owned by your normal user on the host. I don't know, though, whether that is the case with your Jupyter container image.
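A possibly simpler alternative (also untested) would be --userns=keep-id, which maps your host UID and GID to the same IDs inside the container; if the jupyter user in the image happens to use the same UID as your host user, files it writes into the bind mount end up owned by you on the host:
$ mkdir foo
$ podman run --userns=keep-id -v $PWD/foo:/home/myuser/bar/foo:z [...] some/image:latest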
Edit 4 April 2022
A related Stackoverflow answer that I wrote:
https://stackoverflow.com/a/71741794/757777
I also wrote a troubleshooting tip about using --uidmap and --gidmap in the Podman troubleshooting guide.
We have a VirtualBox (Vagrant) environment. By mistake I made an entry in /etc/security/limits.conf [without having a root shell open :( ] and now I am unable to ssh in (the connection drops immediately).
We had one such scenario previously (limits broken by someone else); back then I was able to fix it with the vboxmanage guestcontrol copyto CLI, overwrite limits.conf, and ssh was allowed again. This time around the vboxmanage CLI also hangs.
I tried to open the VM in the GUI and went to the console and tried a few options, but could not get to single-user mode.
Since you already tried the vbox CLI commands and they hang, it means even VirtualBox cannot access the system or get a shell to open.
In this case you will have to bring up a ubuntu VM and use the qemu-nbd module to fix this. The steps are given below.
Bring up a very simple ubuntu vm using hashicorp’s bionic64 on the same host machine by executing the following steps.
mkdir bionic
cd bionic
vagrant box add hashicorp/bionic64
vagrant init
Open the Vagrantfile and change the config.vm.box = "base" to config.vm.box = "hashicorp/bionic64"
Also mount the host folder where the .vdi file for the VM is located by adding the following line to the Vagrantfile (replace the file path with the correct one for your system; here /nbd2 will be created on the Ubuntu machine and will contain the files, including the .vdi file):
config.vm.synced_folder "/home/topcat/VirtualBox\ VMs/your_vm", "/nbd2"
Now do vagrant up
Once the machine boots up
vagrant ssh #to ssh as vagrant
sudo su #to become root
apt-get update #This will refresh the apt cache
apt-get install qemu
modprobe nbd (loads the nbd kernel module; it will exit without any output if the module is loaded successfully)
qemu-nbd -c /dev/nbd1 "/nbd2/box-disk001.vdi" (change the path to whatever you gave in the config.vm.synced_folder property)
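If you are not sure which partition on the attached disk holds the root filesystem, you can list the partitions first (an optional check):
lsblk /dev/nbd1
fdisk -l /dev/nbd1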
mkdir -p /mnt/vdi-boot
mount /dev/nbd1p1 /mnt/vdi-boot
cd /mnt/vdi-boot/etc/security (This folder will have all the files as it were in your VM)
touch limits.conf (if the file is already there, delete it)
chmod 644 limits.conf
chown root:root limits.conf
open the /mnt/vdi-boot/etc/nsswitch.conf file and check that the following three lines are present
passwd: files
shadow: files
group: files
umount /mnt/vdi-boot (unmounts the mounted path)
qemu-nbd -d /dev/nbd1 (disconnects from qemu-nbd)
Exit the helper VM, then start your original VM
Open another shell and try to ssh. It should go through fine this time.
I have found that running
chmod u+x myfile
does not work when I ssh into my local Vagrant machine. The same goes for adding or removing the write bit; in fact chmod has no effect at all. How can I modify permissions inside my Vagrant instance?
Running on Windows 10 machine
VirtualBox version 5.1.10
Vagrant 1.8.7
Also this is the line in the Vagrantfile:
config.vm.synced_folder ".", "/var/www", :mount_options => ["dmode=777", "fmode=666"]
666 = rw-rw-rw-. I suspect that permissions aren't changeable, since they stay that way no matter what I do.
(To clarify, my purpose is to practice with a PHP shell script, but I can't run the script if it's not executable)
You're telling Vagrant to share the folder with full permissions on directories (dmode) but to set permissions to 666 on files (fmode), so you can't make a file executable afterwards.
If you have no particular reason to set the directory/file modes for your shared folder, just remove the mount options and leave it as:
config.vm.synced_folder ".", "/var/www"
The permissions below should be enough for write and execute:
chmod 755 filename
I was searching hours on the Internet, but for this specific problem I could not find any solution.
1: I have Xubuntu Linux on my PC. I use it in an average way: browse the Internet, watch videos, etc. It also hosts my PHPStorm app, but not the project files. This is the HOST. It has a host-only network: 192.168.56.1.
2: I have a VirtualBox Debian Linux (no GUI) system. It is meant to represent a development version of my real webserver and has all the project files. This VM is on an external drive, so I can take it everywhere (e.g. to the office). 192.168.56.101. This is the GUEST.
3: on the HOST I use dnsmasq to force every *.dev domain to be redirected to the GUEST. So I can test my projects easily.
4: on the GUEST I exported the /var/www folder in the /etc/exports:
/var/www 192.168.56.1(rw,sync,no_root_squash,no_subtree_check)
The problem: I want to use the PHPStorm on the HOST to edit the files on the GUEST "locally". But I cannot mount the GUEST's /var/www folder into the HOST's /home/gabor/Projects folder with full permissions. I tried to use the following:
$> sudo mount 192.168.56.101:/var/www /home/gabor/Projects
This looks okay at first, but the folder is mounted as nobody:nogroup and I have no permission to edit.
I want /home/gabor/Projects to be owned by gabor:gabor, and everything I create in this folder must be owned by www-data:www-data on the Debian side. But for an NFS mount I cannot specify the user.
$> sudo mount -o umask=0022,gid=1000,uid=1000 192.168.56.101:/var/www /home/gabor/Projects
mount.nfs: an incorrect mount option was specified
I also failed to mount --bind /var/www with a different user (it should be nobody:nogroup) on the Debian side, so that I could export that one instead...
How can I solve this problem?
Please help me.
Thank you.
NFS v2 and v3 do not support uid/gid mount options.
See man nfs on Ubuntu.
Adding this answer for posterity, as I ended up here with the same question.
Try this in /etc/exports:
/var/www 192.168.56.1(rw,root_squash)
Then on the client, put this in /etc/fstab:
192.168.56.101:/var/www /home/gabor/Projects nfs defaults,user,noauto,relatime,rw 0 0
The user option will allow a non-root user to mount the volume. Adjust other options as needed.
Then on the client again, become the user you want to mount the volume as, and then mount the volume you added to /etc/fstab:
$ id
uid=1000(gabor) gid=1000(gabor) groups=1000(gabor)
$ mount /home/gabor/Projects
$
Make sure that the uid and/or gid are the same on the server. I'm not sure if the usernames can be different or not. Also make sure that the directory being exported on the server is writable by the user or group. See this blog post for additional info about setting up NFS in a similar manner.
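As an illustration of those checks, something like this on the Debian guest should do (assuming a gabor account exists there as well):
# check that the uid/gid match the client user (1000/1000 in this case)
id gabor
# check that the exported directory is writable by that user or group
ls -ld /var/www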
Caution: This is an insecure configuration without authentication. Use NFS v4 with Kerberos for strong authentication.
Ok, I found a solution that exactly does what I want.
First, install the sshfs:
$> sudo apt-get install sshfs
Then mount the remote /var/www:
$> sshfs -o uid=33,gid=33 root@192.168.56.101:/var/www /home/gabor/Projects
And that is it!
$> ls -la /home/gabor | grep Projects
drwxr-xr-x 1 www-data www-data 4096 Okt 14 21:10 Projects
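If you also want this mount to come up automatically, an /etc/fstab entry along these lines should work (assuming the sshfs package is installed and root can log in to the guest with an SSH key; allow_other lets your own user access a mount made by root at boot):
root@192.168.56.101:/var/www /home/gabor/Projects fuse.sshfs uid=33,gid=33,allow_other,_netdev 0 0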