From Windows, I can access the file systems of all the WSL containers from under \\wsl$.
And from inside a WSL container, I can access the windows C:\ drive as /mnt/c.
But how can I access another container's drive from inside a WSL container?
I'm trying to access \\wsl$\othercontainer\some\file from inside a WSL container.
wslpath can normally convert Windows file paths to paths accessible from WSL:
WSL2#~» wslpath 'C:\Windows\System32\drivers\etc\hosts'
/mnt/c/Windows/System32/drivers/etc/hosts
But it doesn't work for:
WSL2#~» wslpath '\\wsl$\othercontainer\some\file'
wslpath: \\wsl$\othercontainer\some\file
WSL2#~» echo $?
1
And of course:
WSL2#~» ls -l '\\wsl$\othercontainer\some\file'
ls: cannot access '\\wsl$\othercontainer\some\file': No such file or directory
This answer provided the solution:
sudo mkdir /mnt/othercontainer
sudo mount -t drvfs '\\wsl$\othercontainer' /mnt/othercontainer
ls -l /mnt/othercontainer/some/file
NOTE: It looks like symbolic links aren't supported. When one is encountered, we get an error like:
$ ls -l /mnt/othercontainer/bin
ls: cannot read symbolic link '/mnt/othercontainer/bin': Function not implemented
lrwxrwxrwx 1 root root 7 Apr 23 2020 /mnt/othercontainer/bin
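For repeated use, a small helper can create the mount point and mount the other distro only if it is not already mounted. This is a hedged sketch that reuses the othercontainer name from the example above:

OTHER=othercontainer   # name of the other WSL distro (example value)
MNT=/mnt/$OTHER
sudo mkdir -p "$MNT"
# mount only if nothing is mounted there yet; the quoted UNC path expands to \\wsl$\othercontainer
mountpoint -q "$MNT" || sudo mount -t drvfs "\\\\wsl\$\\$OTHER" "$MNT"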
Related
I have made a Python script in WSL using VS Code. I want to call this script from another Python script that is running in Windows.
My plan was to use os.system() to shell out to wsl and start the script from its directory; however, when I go into Windows Terminal to try to start it manually, I can't access any of my directories.
The path I am attempting to access from the terminal is \\wsl.localhost\Ubuntu\home\ben\apps, and the root user can't access it either.
Windows terminal:
(base) PS C:\Users\benja> wsl
ben@DESKTOP:/mnt/c/Users/benja$ cd ~
ben@DESKTOP:~$ ls
ben@DESKTOP:~$ ls -a
. .. .bash_history .bash_logout .bashrc .profile
ben@DESKTOP:~$ cd ..
ben@DESKTOP:/home$ ls
ben
ben@DESKTOP:/home$ cd /ben
-bash: cd: /ben: No such file or directory
ben@DESKTOP:/home$ cd ..
ben@DESKTOP:/$ ls
bin boot dev etc home init lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
ben@DESKTOP:/$ cd ..
ben@DESKTOP:/$ cd /home/ben/apps
You're accessing /ben, i.e. a directory named ben at the root of the file system.
Try ~/apps or cd ben instead.
WSL has both Ubuntu and Debian installed. apps is in Ubuntu, but my WSL was defaulting to Debian; that is why the directories appeared empty.
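If the script has to run in a specific distro rather than the default one, wsl.exe can be told which distro to use. This is a hedged sketch; the script path is the example path from above, the script name is a placeholder, and the distro names are whatever wsl -l -v reports on your machine:

wsl -l -v                                               # list installed distros and the current default
wsl -d Ubuntu -- python3 /home/ben/apps/your_script.py  # run the script inside the Ubuntu distro explicitly
wsl --set-default Ubuntu                                # or change the default distro once and for all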
Abstract
When I mount a folder into my container and the path to the mount point does not yet exist inside the container, Podman will create it for me. I can set the permissions of the mounted folder on my host machine to match the container user, but the intermediate directories that Podman creates do not get the same permissions.
Steps to reproduce
For example, let's assume that in my image the user's home directory is empty. Then I do this on my host:
$ mkdir foo
$ podman unshare chown 1000:100 foo
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] some/image:latest
That results in the following inside my container:
~ # ls -la
drwxr-xr-t 3 root root 4096 Jan 28 12:43 bar
~ # cd bar
~/bar # ls -la
drwxrwxr-x 2 1000 users 4096 Jan 28 12:42 foo
~/bar #
Is this behavior intentional?
Is there a way to tell Podman to create the path with the same permissions as the destination folder?
I can imagine a workaround, but it would be nice if I could express this in the run command.
Use Case
In my case I am trying to run different Jupyter notebooks as disposable containers directly from docker.io, but I do want to share the user settings between them. The user-settings folder is not present when the container mounts the volumes, so Podman creates it, but as root; the jupyter user then cannot access the folders created by Podman and fails.
I could create a build file from the images and create the folders in the build phase, but I use different images all the time and I don't want to create a custom image for every use case.
I could mount the volume at the parent folder, but all kinds of other stuff gets stored there and I don't want to share that with all the different containers.
I could keep the container around after the initial boot instead of disposing of it, but I don't know when, if at all, I will want to reuse it...
Maybe it is possible to map the jupyter user to your user with the --uidmap command-line option?
(untested)
$ mkdir foo
$ jupyterUID=1234 # Replace 1234 with the correct UID for the jupyter user
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] --uidmap=0:1:$jupyterUID --uidmap=$(expr $jupyterUID + 1):$(expr $jupyterUID + 1):$(expr 65536 - $jupyterUID - 1) --uidmap=${jupyterUID}:0:1 some/image:latest
I think something like this is needed when the container starts as the container root user and then runs a program as another user. If that other user writes files into a bind-mounted directory, the files will be owned by your normal user on the host. I don't know, though, whether that is the case with your Jupyter container image.
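For readability, here is the same --uidmap juggling spelled out with comments. It is only a reformatted, still untested version of the command above:

# Untested sketch: identical mapping to the one-liner above, just easier to read.
# 0:1:N              maps container UIDs 0..N-1 to intermediate UIDs 1..N
# N:0:1              maps container UID N (the jupyter user) to intermediate UID 0, i.e. your host user
# N+1:N+1:65536-N-1  maps all remaining UIDs onto themselves
jupyterUID=1234   # replace 1234 with the real UID of the jupyter user in the image
podman run \
  -v "$PWD/foo:/home/myuser/bar/foo:z" \
  --uidmap=0:1:"$jupyterUID" \
  --uidmap="$jupyterUID":0:1 \
  --uidmap=$((jupyterUID + 1)):$((jupyterUID + 1)):$((65536 - jupyterUID - 1)) \
  some/image:latest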
Edit 4 April 2022
A related Stack Overflow answer that I wrote:
https://stackoverflow.com/a/71741794/757777
I also wrote a troubleshooting tip about using --uidmap and --gidmap in the Podman troubleshooting guide.
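On newer Podman versions there is also --userns=keep-id, which maps your host UID and GID to the same IDs inside the container and often removes the need for hand-written --uidmap arguments. Whether it behaves well with this particular Jupyter image is untested:

# Untested alternative: keep your host UID/GID inside the container.
podman run --userns=keep-id -v "$PWD/foo:/home/myuser/bar/foo:z" some/image:latest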
I currently have BusyBox installed in the /system/bin/ folder of an embedded system, and I can invoke the vi editor manually by typing busybox vi. HOWEVER, I want to create a symbolic link to busybox vi so that I can just type vi file.txt instead of busybox vi file.txt and not have to type busybox every time. How do I do this? I already tried this:
Installing Busybox
If the Busybox executable is renamed to one of the commands it supports, it will act as that command automatically:
ln -s busybox pwd
./pwd
...from BusyBox's website, but it still doesn't work. All my terminal shows for the which command is:
127|root@nitrogen6x:/system/bin # ln -s busbox which
root@nitrogen6x:/system/bin # which ls
/system/bin/sh: which: not found
127|root@nitrogen6x:/system/bin # ls -la which
lrwxrwxrwx root root 1970-01-03 18:15 which -> busbox
Any ideas what I'm doing wrong? My $PATH is: /sbin:/vendor/bin:/system/sbin:/system/bin:/system/xbin
I figured out how to get this to work
HERE'S HOW:
So I went to root directory:
cd /
Then I remounted the /system/ directory:
mount -o rw,remount /system
Then I went into the binary folder where busybox was located:
cd /system/bin/
Then I used the link command for the busybox binary I wanted:
ln -s busybox lsusb (remember you must be in /system/bin directory already)
For Already Linked Files:
For already-linked files like ls, remove the existing link and replace it with a link to the BusyBox binary instead (I know it sounds crazy, but you can always go back to the system's binary utilities):
sudo rm /system/bin/ls
ln -s busybox ls (remember you must be in /system/bin directory already)
You should get something like this when you do ls -l ls:
lrwxrwxrwx 1 0 0 7 Jan 4 21:53 ls -> busybox
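If you want links for every applet rather than creating them one by one, BusyBox can generate them itself, provided your build includes the installer feature. A hedged sketch, run from a writable /system/bin as above:

busybox --list                # show the applets this build supports
busybox --install -s .        # create symbolic links (-s) for all applets in the current directory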
One point to consider is which file systems are involved.
A hard link cannot cross from one mounted file system to another; a symbolic link can, but it will dangle whenever the file system holding its target is not mounted.
So if your / and /usr are not on the same mounted file system, as may be the case on embedded systems, a symbolic link /usr/bin/which pointing to /bin/busybox only works once both are mounted.
One possible solution is to put a copy of the busybox binary in /usr/bin and create the link to that.
file = open('/etc/shadow', 'r')
print(file)
I am getting an error like this:
file = open('/etc/shadow', 'r')
IOError: [Errno 13] Permission denied: '/etc/shadow'
On most systems /etc/shadow is owned by root, with read and write permission for root only.
$ ls -la /etc/shadow
-rw------- 1 root root 692 Jun 10 19:24 /etc/shadow
You need to either:
Change the permissions (don't do this, it is not safe):
you could do it by running chmod o+r /etc/shadow as root, which gives the 'other' users read access to the file.
Or run your program as root, either by:
a. Starting it as root
su -c 'python myPython.py'   # you will be asked to provide the root password
b. Starting it with sudo [1]
sudo python myPython.py
This all depends on your sudo configuration, but it is your best bet other than just starting Python as root.
There is also an example of calling sudo from within Python [5].
c. Set setuid bit on the program [2]
This will most likely not work, as Python is an interpreted language and most modern Unix systems disallow setuid on interpreted programs (Perl being the traditional exception), as opposed to compiled binaries.
chown root programName   # set the owner to root
chmod +s programName     # this gives the program itself the right to run as root, regardless of who starts it
[1]http://en.wikipedia.org/wiki/Sudo
[2]http://en.wikipedia.org/wiki/Setuid
[3]Open a file as superuser in python
[4]Setuid bit on python script : Linux vs Solaris
[5]Using sudo with Python script
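For option (b), if the script should run via sudo without a password prompt every time, one approach is a narrowly scoped sudoers entry. This is a hedged sketch; the user name and paths are placeholders:

# Edit a drop-in file with visudo so a syntax error cannot lock you out:
#   sudo visudo -f /etc/sudoers.d/myPython
# and add a line such as:
#   youruser ALL=(root) NOPASSWD: /usr/bin/python3 /home/youruser/myPython.py
# then the script can be started without interactive password entry:
sudo python3 /home/youruser/myPython.py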
The problem is not with the source code or with Python, but with not having the correct file system rights to the /etc/shadow file.
In Python 3 you can do:
import spwd
spwd.getspnam('username')
More information about the spwd module can be found here: https://docs.python.org/3/library/spwd.html#module-spwd
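If you only need to look up a single entry from the shell, getent offers the same lookup; like spwd, it still needs sufficient privileges (root, or membership in the shadow group on some distributions). A hedged example:

sudo getent shadow username   # prints the shadow entry for 'username', fields separated by ':'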
I have an Amazon S3 bucket (let's call it static.example.com) that I need to mount on an EC2 instance (Ubuntu 12.04.2). I've installed s3fs. I'm able to mount the volume, but I can't write to the bucket. I have tried:
sudo s3fs static.example.com -o use_cache=/tmp,allow_other,uid=33,gid=33 /mnt/static.example.com
I can then cd /mnt and ls -la to see:
drwxr-xr-x 5 root root 4096 Mar 28 18:03 .
drwxr-xr-x 25 root root 4096 Feb 19 19:22 ..
lrwxrwxrwx 1 root root 7 Feb 21 19:19 httpd -> /httpd/
drwx------ 2 root root 16384 Oct 9 2012 lost+found
drwxr-xr-x 1 www-data www-data 0 Jan 1 1970 static.example.com
This all looks good, but when I cd static.example.com and mkdir test, I get:
mkdir: cannot create directory `test': Permission denied
The only way I can actually create a directory or touch a file is to force it with sudo. This is not a viable option, however, because I want to write files to the bucket from Apache. My Apache server runs as user:group www-data. Running mount yields:
s3fs on /mnt/static.example.com type fuse.s3fs (rw,nosuid,nodev,allow_other)
How can I mount this bucket in a manner that will allow me to write to the bucket?
I'm the lead developer and maintainer of the open-source project RioFS: a userspace file system for mounting Amazon S3 buckets.
Our project is an alternative to s3fs; its main advantages compared to s3fs are simplicity, the speed of operations, and bug-free code. Currently the project is in a beta state, but it has been running on several high-load file servers for quite some time.
We are looking for more people to join our project and help with testing. From our side we offer quick bug fixes and will listen to your requests for new features.
Regarding your issue:
If you use RioFS, you can mount a bucket and have write access to it using the following command (assuming you have installed RioFS and have exported the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables):
riofs -o allow_other http://s3.amazonaws.com bucket_name /mnt/static.example.com
(please refer to the project description for the command-line arguments)
Please note that the project is still in development; there could still be a number of bugs left.
If you find that something doesn't work as expected, please file an issue report on the project's GitHub page.
Hope it helps, and we look forward to seeing you join our community!
This works for me:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache
If you need to debug, just add ,f2 -f -d:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache,f2 -f -d
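To make the mount survive reboots, s3fs can also be declared in /etc/fstab. This is a hedged sketch reusing the bucket name and options from the command above:

# /etc/fstab entry (fuse.s3fs is the fstab type for s3fs); _netdev delays mounting until the network is up
bucketname /mnt/folder fuse.s3fs _netdev,allow_other,nosuid,use_cache=/mnt/foldercache 0 0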
Try this method using S3Backer:
mountpoint/
file # (e.g., can be used as a virtual loopback)
stats # human readable statistics
Read more about it here:
http://www.turnkeylinux.org/blog/exploring-s3-based-filesystems-s3fs-and-s3backer
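Roughly, s3backer exposes the bucket as a single block-device-like file (the file entry shown above), which you then format and loop-mount yourself. A hedged sketch; the sizes, bucket name, and mount points are placeholders:

s3backer --blockSize=128k --size=10g yourbucket /mnt/s3backer   # expose the bucket as /mnt/s3backer/file
mkfs.ext4 -F /mnt/s3backer/file                                 # first time only: put a filesystem on the backing file
mount -o loop /mnt/s3backer/file /mnt/s3data                    # loop-mount it like a normal disk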