I have an Amazon S3 bucket (let's call it static.example.com) that I need to mount on an EC2 instance (Ubuntu 12.04.2). I've installed s3fs. I'm able to mount the volume, but I can't write to the bucket. I have tried:
sudo s3fs static.example.com -o use_cache=/tmp,allow_other,uid=33,gid=33 /mnt/static.example.com
I can then cd /mnt and ls -la to see:
drwxr-xr-x 5 root root 4096 Mar 28 18:03 .
drwxr-xr-x 25 root root 4096 Feb 19 19:22 ..
lrwxrwxrwx 1 root root 7 Feb 21 19:19 httpd -> /httpd/
drwx------ 2 root root 16384 Oct 9 2012 lost+found
drwxr-xr-x 1 www-data www-data 0 Jan 1 1970 static.example.com
This all looks good, but when I cd static.example.com and mkdir test, I get:
mkdir: cannot create directory `test': Permission denied
The only way I can actually create a directory or touch a file is to force it with sudo. This is not a viable option, however, because I want to write files to the bucket from Apache. My Apache server runs as user:group www-data. Running mount yields:
s3fs on /mnt/static.example.com type fuse.s3fs (rw,nosuid,nodev,allow_other)
How can I mount this bucket in a manner that will allow me to write to the bucket?
I'm the lead developer and maintainer of the open-source project RioFS: a userspace filesystem for mounting Amazon S3 buckets.
Our project is an alternative to s3fs; its main advantages are simplicity, faster operations, and a cleaner codebase. The project is currently in beta, but it has been running on several high-load file servers for quite some time.
We are looking for more people to join the project and help with testing. In return, we offer quick bug fixes and will listen to your requests for new features.
Regarding your issue:
If you use RioFS, you can mount a bucket with write access using the following command (assuming you have installed RioFS and exported the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables):
riofs -o allow_other http://s3.amazonaws.com bucket_name /mnt/static.example.com
(please refer to the project description for the command-line arguments)
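As a minimal end-to-end sketch (the key values are placeholders, and the unmount step uses the standard FUSE tool):
export AWSACCESSKEYID=your_access_key        # placeholder
export AWSSECRETACCESSKEY=your_secret_key    # placeholder
mkdir -p /mnt/static.example.com
riofs -o allow_other http://s3.amazonaws.com static.example.com /mnt/static.example.com
# ... use the mount, then unmount when done:
fusermount -u /mnt/static.example.com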
Please note that the project is still in development, so there could still be a number of bugs left.
If you find that something doesn't work as expected, please file an issue report on the project's GitHub page.
Hope this helps, and we look forward to seeing you join our community!
This works for me:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache
If you need to debug, just add ,f2 to the options plus the -f -d flags (which keep s3fs in the foreground and enable debug output):
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache,f2 -f -d
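If the mount is still only writable by root, passing explicit ownership and permission options may help. A sketch, assuming your s3fs version supports the uid, gid, and umask options (33 is www-data on Debian/Ubuntu):
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache,uid=33,gid=33,umask=0022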
Try this method using S3Backer:
mountpoint/
    file     # the bucket exposed as a single file (e.g., can be used as a virtual loopback device)
    stats    # human-readable statistics
Read more about it here:
http://www.turnkeylinux.org/blog/exploring-s3-based-filesystems-s3fs-and-s3backer
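A rough sketch of the s3backer workflow (block size, volume size, and names are illustrative; check the s3backer documentation for your version):
# expose the bucket as a single large "device" file
s3backer --blockSize=128k --size=1t mybucket /mnt/s3backer
# format the virtual device once, then loop-mount it as a normal filesystem
mke2fs -F /mnt/s3backer/file
mount -o loop /mnt/s3backer/file /mnt/s3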
From Windows, I can access the file systems of all the WSL containers from under \\wsl$.
And from inside a WSL container, I can access the Windows C:\ drive as /mnt/c.
But how can I access another container's drive from inside a WSL container?
I'm trying to access \\wsl$\othercontainer\some\file from inside a WSL container.
wslpath can normally convert Windows file paths to paths accessible from WSL:
WSL2#~» wslpath 'C:\Windows\System32\drivers\etc\hosts'
/mnt/c/Windows/System32/drivers/etc/hosts
But it doesn't work for:
WSL2#~» wslpath '\\wsl$\othercontainer\some\file'
wslpath: \\wsl$\othercontainer\some\file
WSL2#~» echo $?
1
And of course:
WSL2#~» ls -l '\\wsl$\othercontainer\some\file'
ls: cannot access '\\wsl$\othercontainer\some\file': No such file or directory
This answer provided the solution:
sudo mkdir /mnt/othercontainer
sudo mount -t drvfs '\\wsl$\othercontainer' /mnt/othercontainer
ls -l /mnt/othercontainer/some/file
NOTE: It looks like symbolic links aren't supported. When one is encountered, we get an error like:
$ ls -l /mnt/othercontainer/bin
ls: cannot read symbolic link '/mnt/othercontainer/bin': Function not implemented
lrwxrwxrwx 1 root root 7 Apr 23 2020 /mnt/othercontainer/bin
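To avoid re-mounting by hand after every WSL restart, an /etc/fstab entry along these lines may work (an untested sketch, using the same drvfs filesystem type as the manual mount above):
\\wsl$\othercontainer /mnt/othercontainer drvfs defaults 0 0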
I have a Proxmox server (so, Debian underneath), and I want to mount a remote directory from my Synology NAS to make backups.
I normally use SSH mounts without any problem.
But this time I get an error I have never encountered: I can create files, but not delete them.
I find this very strange and I don't see where it can come from.
root@proxmox:/mnt# sshfs user@192.168.0.1:home/data /mnt/dist-folder/ -o reconnect,ServerAliveInterval=60,ServerAliveCountMax=30,allow_other,default_permissions,uid=0,gid=0,umask=007
root@proxmox:/mnt# cd dist-folder/
root@proxmox:/mnt/dist-folder# touch aa.txt
root#proxmox:/mnt/dist-folder# ls -la
total 12
drwxrwx--- 1 root root 114 Mar 13 09:53 .
drwxr-xr-x 7 root root 4096 Mar 13 09:37 ..
-rwxrwx--- 1 root root 0 Mar 13 09:53 aa.txt
root@proxmox:/mnt/dist-folder# rm aa.txt
rm: cannot remove 'aa.txt': Permission denied
This is with uid=0,gid=0 for the root user and group.
Thanks
This turned out to be a problem specific to Synology.
When mounting, it is absolutely necessary to use the full path, starting with
/homes/<user>/home/
So it becomes:
sshfs user@192.168.0.1:/homes/proxmox/home/data /mnt/dist-folder/
And it works fine!
It's not the first time I've hit an abnormal configuration with this Synology tool... Grrr
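Putting this together with the options from the original attempt, the full mount would presumably look like this (untested sketch):
sshfs user@192.168.0.1:/homes/proxmox/home/data /mnt/dist-folder/ -o reconnect,ServerAliveInterval=60,ServerAliveCountMax=30,allow_other,default_permissions,uid=0,gid=0,umask=007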
Abstract
When I mount a folder into my container and the path to the folder does not yet exist in the container, podman creates it for me. I can set the permissions of the mounted folder on my host machine to match the container user, but the intermediate folders podman creates do not get the same permissions.
Steps to reproduce
For example, let's assume the user's home directory in my image is empty. Then, on my host, I do:
$ mkdir foo
$ podman unshare chown 1000:100 foo
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] some/image:latest
which results in the following inside my container:
~ # ls -la
drwxr-xr-t 3 root root 4096 Jan 28 12:43 bar
~ # cd bar
~/bar # ls -la
drwxrwxr-x 2 1000 users 4096 Jan 28 12:42 foo
~/bar #
Is this behavior intentional?
Is there a way to tell podman to create the path with the same permissions as the destination folder?
I can imagine a workaround, but it would be nice if I could express this in the run command.
Use Case
In my case I run various Jupyter notebooks as disposable containers straight from docker.io, but I want to share the user settings between them. The user-settings folder does not exist when the container mounts the volumes, so podman creates it, but as root. The Jupyter user then cannot access the folders created by podman, and startup fails.
I could create a Buildfile from the images and create the folders during the build phase, but I use different images all the time and I don't want to maintain a custom image for every use case.
I could mount the volume at the parent folder, but all kinds of other things are stored there and I don't want to share those with all the different containers.
I could keep the container around after the initial boot instead of disposing of it, but I don't know when, if ever, I would reuse it...
Maybe it is possible to map the jupyter user to your user with the --uidmap command-line option?
(untested)
$ mkdir foo
$ jupyterUID=1234 # Replace 1234 with the correct UID for the jupyter user
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] \
    --uidmap=0:1:$jupyterUID \
    --uidmap=$(expr $jupyterUID + 1):$(expr $jupyterUID + 1):$(expr 65536 - $jupyterUID - 1) \
    --uidmap=${jupyterUID}:0:1 \
    some/image:latest
I think something like this is needed when the container starts as the container root user and then runs a program as another user. If that other user writes files into a bind-mounted directory, the files end up owned by your normal user on the host. I don't know, though, whether that is the case for your Jupyter container image.
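For a concrete instance of the mapping, here is the same command with a hypothetical Jupyter UID of 1000 plugged in (the three --uidmap entries shift container root into the subordinate range, map container UID 1000 to your host user, and pass the remaining IDs through unchanged):
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z \
    --uidmap=0:1:1000 \
    --uidmap=1000:0:1 \
    --uidmap=1001:1001:64535 \
    some/image:latest
Recent podman versions also provide --userns=keep-id, which sets up a related (though not identical) mapping automatically.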
Edit 4 April 2022
A related Stack Overflow answer that I wrote:
https://stackoverflow.com/a/71741794/757777
I also wrote a troubleshooting tip about using --uidmap and --gidmap in the Podman troubleshooting guide.
Error:
java.lang.UnsatisfiedLinkError: /opt/gurobi600/linux64/lib/libGurobiJni60.so: libgurobi60.so: cannot open shared object
The JVM finds the path correctly when I add it via JVM settings, but for some reason the library is not found when relying only on the LD_LIBRARY_PATH environment variable. Either way, it has trouble with libgurobi60.so. I tried adding all of this to glassfish_home/domains/domain1/lib/applibs and ext, to no avail.
Here are the permissions for /opt/gurobi600/linux64/lib
-rw-r--r-- gurobi.jar
lrwxrwxrwx libgurobi60.so -> ./libgurobi.so.6.0.0
lrwxrwxrwx libgurobi_c++.a -> ./libgurobi_g++4.2.a
-rw-r--r-- libgurobi_g++4.1.a
-rw-r--r-- libgurobi_g++4.2.a
-rwxr-xr-x libGurobiJni60.so
-rwxrwxrwx libgurobi.so.6.0.0
I had this working on my previous server running Ubuntu 12.04; this one is on 14.04. Previously, copying the .so file to /usr/local/bin seemed to fix the issue, but that does not work on the new server.
Running the following two commands fixed it:
echo "/opt/gurobi600/linux64/lib" | sudo tee /etc/ld.so.conf.d/gurobi.conf
sudo ldconfig
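This registers the Gurobi library directory with the dynamic linker system-wide. If you would rather not touch the system configuration, passing the path straight to the JVM should also work (a sketch; the jar name is a placeholder):
java -Djava.library.path=/opt/gurobi600/linux64/lib -jar yourapp.jar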
I searched the Internet for hours, but I could not find any solution to this specific problem.
1: I have Xubuntu Linux on my PC. I use it in an average way: browsing the Internet, watching videos, etc. It also hosts my PHPStorm app, but not the project files. This is the HOST. It has a host-only network address: 192.168.56.1.
2: I have a VirtualBox Debian Linux (no GUI) system, meant to represent a development version of my real web server. It has all the project files. The VM is on an external drive, so I can take it everywhere (e.g., to the office). Its address is 192.168.56.101. This is the GUEST.
3: On the HOST I use dnsmasq to redirect every *.dev domain to the GUEST, so I can test my projects easily.
4: On the GUEST I exported the /var/www folder in /etc/exports:
/var/www 192.168.56.1(rw,sync,no_root_squash,no_subtree_check)
The problem: I want to use PHPStorm on the HOST to edit the files on the GUEST "locally", but I cannot mount the GUEST's /var/www folder onto the HOST's /home/gabor/Projects folder with full permissions. I tried the following:
$> sudo mount 192.168.56.101:/var/www /home/gabor/Projects
This looks okay at first, but the folder is mounted as nobody:nogroup and I have no permission to edit it.
I want /home/gabor/Projects to be owned by gabor:gabor, and everything I create in this folder must be owned by www-data:www-data on the Debian side. But with an NFS mount I cannot specify the user.
$> sudo mount -o umask=0022,gid=1000,uid=1000 192.168.56.101:/var/www /home/gabor/Projects
mount.nfs: an incorrect mount option was specified
I also failed to mount --bind /var/www as a different user (it should be nobody:nogroup) on the Debian side, so that I could export that one instead...
How can I solve this problem?
Please help me.
Thank you.
NFS v2 and v3 do not support uid/gid mount options; see man nfs on Ubuntu.
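If the goal is simply to force specific ownership on everything written through the mount, one server-side sketch is to squash all client access to a fixed uid/gid in /etc/exports (33 is www-data on Debian; adjust to your setup):
/var/www 192.168.56.1(rw,all_squash,anonuid=33,anongid=33,no_subtree_check)
After editing, re-export with sudo exportfs -ra.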
Adding this answer for posterity, as I ended up here with the same question.
Try this in /etc/exports:
/var/www 192.168.56.1(rw,root_squash)
Then on the client, put this in /etc/fstab:
192.168.56.101:/var/www /home/gabor/Projects nfs defaults,user,noauto,relatime,rw 0 0
The user option will allow a non-root user to mount the volume. Adjust other options as needed.
Then on the client again, become the user you want to mount the volume as, and then mount the volume you added to /etc/fstab:
$ id
uid=1000(gabor) gid=1000(gabor) groups=1000(gabor)
$ mount /home/gabor/Projects
$
Make sure that the uid and/or gid are the same on the server. I'm not sure if the usernames can be different or not. Also make sure that the directory being exported on the server is writable by the user or group. See this blog post for additional info about setting up NFS in a similar manner.
Caution: This is an insecure configuration without authentication. Use NFS v4 with Kerberos for strong authentication.
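Before mounting, you can check from the client what the server actually exports (assuming the nfs-common package is installed); given the export above, the output should look roughly like this:
$ showmount -e 192.168.56.101
Export list for 192.168.56.101:
/var/www 192.168.56.1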
OK, I found a solution that does exactly what I want.
First, install sshfs:
$> sudo apt-get install sshfs
Then mount the remote /var/www:
$> sshfs -o uid=33,gid=33 root@192.168.56.101:/var/www /home/gabor/Projects
And that is it!
$> ls -la /home/gabor | grep Projects
drwxr-xr-x 1 www-data www-data 4096 Okt 14 21:10 Projects
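To make this survive reboots, an /etc/fstab entry along these lines may work (an untested sketch; it requires key-based SSH login for root, and allow_other additionally needs user_allow_other enabled in /etc/fuse.conf):
root@192.168.56.101:/var/www /home/gabor/Projects fuse.sshfs uid=33,gid=33,allow_other,_netdev 0 0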