How do I change the umask of a shared docker volume? - apache

This is related to "https://stackoverflow.com/questions/36477636/how-do-i-implement-a-shared-hosting-like-service-with-docker".
I am trying to set up a bunch of apache-php containers that should be able to rw to the docroot, and one single sftp container that should also be able to rw to the same directory. So the plan is to create a shared volume and mount it inside both containers (obviously in a different place in the sftp container). I modified the official php:7-apache launch script like so:
#!/bin/bash
set -e
### > My stuff
chmod g+s /var/www/html
chown -R :2048 /var/www/html
umask
umask 0002
umask
### < My stuff
# Apache gets grumpy about PID files pre-existing
rm -f /var/run/apache2/apache2.pid
exec apache2 -DFOREGROUND
and in fact the two umask lines confirm that the sandwiched umask 0002 runs correctly. However, when I bash into the container and test this myself, umask returns 022, and any file I create gets permissions 644. I can then manually run umask 0002, and any new file is created with permissions 664 (and 775 for directories).
For this use case, it is critical that both containers (sftp and apache) can read and write in the shared volume. So, how do I go about this?
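What you are seeing is expected: umask is per-process and only inherited by child processes. The exec'd apache2 does inherit the 0002 set in the entrypoint, but a shell opened with docker exec is a brand-new process that never ran the entrypoint, so it starts with the default 022. A minimal sketch to verify this, assuming a container named web1 (a placeholder) started from the image above:
docker exec web1 sh -c 'umask'
# prints 0022 - this shell never ran the entrypoint
docker exec web1 sh -c 'umask 0002; touch /var/www/html/t; ls -l /var/www/html/t'
# -rw-rw-r-- ... t - 664 once the umask is set in this shell
# Files written by Apache/PHP themselves should already come out 664, since apache2
# inherits the entrypoint's umask. To make docker exec'd bash shells match too (the
# image is Debian-based, so interactive bash reads /etc/bash.bashrc), something like
# this in the entrypoint may work:
echo 'umask 0002' >> /etc/bash.bashrc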

Related

How to set mounted folder permission in podman

Abstract
When I mount a folder into my container and the path to the folder does not yet exist, podman will create it for me. I can set the permissions of the mounted folder on my host machine to match the container user, but the path folders podman creates do not get the same permissions.
Steps to reproduce
For example, let's assume that in my image the home directory of the user is empty. Then I do on my host:
$ mkdir foo
$ podman unshare chown 1000:100 foo
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] some/image:latest
which results in my container as:
~ # ls -la
drwxr-xr-t 3 root root 4096 Jan 28 12:43 bar
~ # cd bar
~/bar # ls -la
drwxrwxr-x 2 1000 users 4096 Jan 28 12:42 foo
~/bar #
Is this behavior intentional?
Is there a way to tell podman to create the path with the same permissions as the destination folder?
I can imagine a workaround, but it would be nice if I could specify this in the run command.
Use Case
In my case I try to run different Jupyter notebooks as disposable containers directly from docker.io, but I do want to share the user settings. The user-settings folder is not present when the container mounts the volumes, so podman creates it, but as root. The jupyter user therefore cannot access the folders created by podman and fails.
I could create a Buildfile from the images and create the folders in the build phase, but I use different images all the time and I don't want to create a custom image for every use case.
I could mount the volume to the parent folder, but all kinds of different stuff gets stored there and I don't want to share that with all the different containers.
I could keep the containers instead of disposing of them after the initial boot, but I don't know when, if at all, I will want to reuse a given container...
Maybe it is possible to map the jupyter user to your user with the --uidmap command-line option?
(untested)
$ mkdir foo
$ jupyterUID=1234 # Replace 1234 with the correct UID for the jupyter user
$ podman run -v $PWD/foo:/home/myuser/bar/foo:z [...] \
    --uidmap=0:1:$jupyterUID \
    --uidmap=$(expr $jupyterUID + 1):$(expr $jupyterUID + 1):$(expr 65536 - $jupyterUID - 1) \
    --uidmap=${jupyterUID}:0:1 \
    some/image:latest
I think something like this is needed when the container starts as the container root user and then runs a program as another user. If that other user writes files to a bind-mounted directory, the files will be owned by your normal user on the host. I don't know, though, whether that is the case with your Jupyter container image.
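If it helps, a sketch for finding the UID to plug into jupyterUID (some/image:latest is the placeholder used above):
$ podman run --rm some/image:latest id -u
$ # or, if the image defines its own entrypoint, bypass it:
$ podman run --rm --entrypoint id some/image:latest -u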
Edit 4 April 2022
A related Stackoverflow answer that I wrote:
https://stackoverflow.com/a/71741794/757777
I also wrote a troubleshooting tip about using --uidmap and --gidmap in the Podman troubleshooting guide.

Trying to transfer local files to web server

I recently set up a LAMP stack on Ubuntu 14.04 for my web server. I'm working through DigitalOcean. These are the steps I went through...
On local machine I logged in to my web server with
sftp user@web_server_ip
Then
sftp> cd /var/www/html
How would I go about getting the files for the site from my local machine, and how would I transfer them?
I know that I have to use the [get] and [put] commands.
I'm just confused about what's considered local and what's remote if I'm logged into the remote server from my local machine. Am I overthinking it?
This is the tutorial I'm trying to follow: How To Use SFTP to Securely Transfer Files with a Remote Server
Edit:
So I tried moving a whole directory from my local machine and this is what I ended up doing
scp -r /path/directory_name name@ip_address:/var/www/html
scp: /var/www/html/portfolio.take7: Permission denied
Should I be changing permission by using sudo prior to scp -r?
Edit2:
I have also tried
Where_directory_is$ scp -r /path/directory_name name@ip_address:/var/www/html
/var/www/html: No such file or directory
It might be easier to start with SCP, which lets you copy files with one command. So, for example, if you had a local file /path/filename.css and wanted to transfer it to your server, you could use the following command on your local machine:
scp /path/filename.css username@remote_hostname_or_IP:~
This command copies the local file and transfers it to the home directory of username on the remote server over SSH. You can then SSH in (ssh username@remote_hostname_or_IP) and do what you need with the file sitting in your home directory, such as moving it to the proper Apache directory.
Once you start to get more comfortable, you can switch to sftp if you like.
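To clear up the local/remote confusion: inside an sftp session, put uploads from the machine you launched sftp on (local) to the server (remote), and get downloads in the other direction. A small example session, reusing the placeholder names from the question:
sftp user@web_server_ip
sftp> cd /var/www/html            # remote directory on the server
sftp> lcd /path/directory_name    # local directory on your machine
sftp> put index.html              # upload a local file (index.html is a placeholder)
sftp> put -r directory_name       # upload a whole local directory
sftp> get somefile.log            # download a remote file (placeholder name)
sftp> bye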
Update
Here is how to set up your Apache permissions. Let's say you have an account named you on the Linux computer running Apache, and we'll say its IP is 192.168.1.100.
On your local machine, create this shell script, secure.sh, and remember shell scripts need to have execute privileges (chmod +x secure.sh). Fill it with the following contents:
#!/usr/bin/env bash
# Lockdown the public web files
find /var/www -exec chown you:www-data {} \;
find /var/www -type d -exec chmod -v 750 {} \;
find /var/www -type f -exec chmod -v 640 {} \;
This shell script sets the permissions for everything under the /var/www/ directory to 750 for directories and 640 for files. This gives you complete read/write permissions on the files and gives www-data (the account Apache runs as) read permissions. Run this any time you have uploaded files to ensure the permissions are always set correctly.
Next, SSH into your remote computer and go to the /var/www/html directory. Ensure that the ownership is not set to root. If it is, scp the secure.sh file into your remote computer, become root and run it. This only needs to be done once, so you can remotely set the permissions.
Now you can copy directly to /var/www/ through the scp -r command on your local computer from the top of the directory you wish to copy to /var/www/html:
scp -r ./ you@192.168.1.100:/var/www/html/
Then run this command to remotely run the secure.sh shell script and send the output to out.txt:
ssh you@192.168.1.100 -p 23815 ./secure.sh > out.txt
Then cat out.txt to see that the file permissions changed accordingly.
If this is a public-facing computer, then you must add an SSH key to your scp connection. You can use this tutorial to find out more about generating your own keys; it is quite easy. To use the key, you only need to add -i private_key_file to your scp and ssh commands. Lastly, it would actually be safer to keep the /var/www files owned by root, SSH into the computer, su to become root, then run secure.sh as root (with the owner changed to root in the shell script). It all depends on the level of security you need to worry about. If it is a development computer (which is what I am assuming), no worries then.
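For reference, a sketch of the same two commands once you have a key pair (the path ~/.ssh/id_rsa is an assumption):
scp -i ~/.ssh/id_rsa -r ./ you@192.168.1.100:/var/www/html/
ssh -i ~/.ssh/id_rsa -p 23815 you@192.168.1.100 ./secure.sh > out.txt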
For folders use
scp -r root@yourIp:/home/path/ /pathOfDirectory/
For files
scp root@yourIp:/home/path/file /pathOfDirectory/fileNameCopied

Docker wrong permission apache2

I have a problem with my installation of Docker. When I launch docker-compose up I get this error:
front_1 | /var/lock/apache2 already exists but is not a directory owned by www-data.
front_1 | Please fix manually. Aborting.
I get this error because I added this line in my Dockerfile:
RUN usermod -u 1000 www-data
But if I delete this line, my Symfony project doesn't work with Docker.
Do you have any ideas to solve my problem?
Best regards
As I see it, you are trying to change the UID of the user www-data inside Docker to match the UID of the host machine user (you), so you can open the project files in your IDE.
This introduces file-permission problems for the apache2 service, which can't read its own files (config, pid, ...), simply because it is not the same user anymore.
A quick 'dirty' solution is to change only the owner of the Symfony project files to UID 1000, but keep the group (GID) as www-data. This applies only to a dev machine; otherwise you don't need it. Run this command inside the container:
chown -R 1000:www-data /home/project
You can create a bash alias inside the container to have it at hand.
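For example (a sketch; the alias name is made up, /home/project is the path used above):
# append to the container's ~/.bashrc
alias fixowner='chown -R 1000:www-data /home/project'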
Another option is to use ACLs, which set permissions on existing files and folders and are inherited by newly created files under the given folder. This could go in a bootstrap script inside the container, but only for DEV mode. This way you won't need to run chown.
chown -R 1000:www-data /home/project #set for existing files
/usr/bin/setfacl -R -m u:www-data:rwx -m u:0:rwx -m u:1000:rwx /home/project
/usr/bin/setfacl -dR -m u:www-data:rwx -m u:0:rwx -m u:1000:rwx /home/project
Each -m is for a different user. The first is www-data (apache2), the second is 0 (root) and the third is 1000 (you).
Remember that a UID can change at any time, so this could create a security hole if the mentioned users don't have the expected UIDs.
I used the second method only for folders where PHP via apache2 sets the permissions (uploaded files, cache, ...) but the host user needs to access the files.
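To check that the ACLs are in place (a sketch; getfacl ships with the acl package):
getfacl /home/project
# expect entries like:
#   user:www-data:rwx
#   default:user:www-data:rwx   (the "default:" lines are what new files inherit)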

CakePHP based application installation error: 777 permission directory is not writable?

Trying to install my CakePHP based application on server, but got following error:
Warning: _cake_core_ cache was unable to write 'cake_dev_en-us' to File cache in /var/www/html/cakephp-2460/lib/Cake/Cache/Cache.php on line 325
Warning: /var/www/html/tmp/cache/persistent/ is not writable
Sounds simple, but it is not, because my 'persistent' directory IS writable; in fact, /tmp and its subdirectories are writable.
Can you point me to where the problem is? Am I missing some PHP modules on the server, or something like that?
Does it have something to do with SELinux?
Check that the user and group for that directory are correct.
Maybe the owning group does not have the required permissions and therefore cannot write.
You may need to do the following on your server:
chown root:root -R /path_to_cake/app/tmp
Yes, it is a problem with your SELinux. You have to set the www/..path../tmp directory to the httpd_cache_t type, so open your terminal and:
List all httpd_cache_t contexts in the system:
# semanage fcontext -l | grep httpd
Set your www/.../tmp directory
# semanage fcontext -a -t 'httpd_cache_t' 'www/..path../tmp(/.*)?'
# restorecon -Rvvv /path/to/wwwroot/cache
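You can verify the context before and after restorecon, for example (path taken from the warning above):
# ls -Zd /var/www/html/tmp/cache/persistent
The type field should now show httpd_cache_t (or another httpd-writable type such as httpd_sys_rw_content_t).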

Files saved by my web application has different file permissions from parent

I have a folder in which new subfolders and files will be created automatically, by a script.
I want to maintain the user and group permissions recursively for all new folders and files placed in the parent directory. I know this involves setting a sticky bit, but I can't seem to find a command that shows exactly what I need.
This is what I have done so far:
sudo mkdir -p /path/to/parent
sudo chmod -R 660 myself:somegroup /path/to/parent
Thereafter, I want the 660 permissions to be set recursively to any folders and files placed in /path/to/parent
However, whenever Apache saves a folder/file it assigns it permissions of 700 with user and group set to the apache user. This is NOT what I want. I want all files/subfolders under the parent to have 660 permissions for myself:somegroup.
Actually the octal flag 660 is probably not even correct. The permissions I want are:
Directories placed under /path/to/parent are executable (traversable) by the permitted users
Files are read/writeable by user myself and members of somegroup
Files and folders in /path/to/parent are NOT world readable
Can someone help please?
I am running on Ubuntu 10.04 LTS
sudo mkdir -p /path/to/parent
sudo chmod -R 660 myself:somegroup /path/to/parent
erm - not ideal.
If you want new files/dirs created to have the same group ownership, you need to set the group sticky (setgid) bit on the directories:
find /path/to/parent -type d -exec chmod g+s {} \;
And you need to make directories executable:
find /path/to/parent -type d -exec chmod ug+x {} \;
...
However, whenever Apache saves a folder/file it assigns it permissions of 700 with user and group set to the apache user
Then you also need to set the umask for the executing code (or change the permissions directly) and ensure the apache UID is a member of somegroup. With the group sticky (setgid) bit set on the directories, a umask of 007 gives new directories 770 and new files 660 - but do check the manual.
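Putting it together, a sketch using the names from the question (run the first two commands as root or via sudo):
chgrp somegroup /path/to/parent
chmod 2770 /path/to/parent          # the leading 2 is the setgid bit
# then, in the process that writes the files (your script, Apache, ...):
umask 007                           # new dirs -> 770, new files -> 660
mkdir /path/to/parent/sub           # drwxrws--- myself somegroup
touch /path/to/parent/sub/file      # -rw-rw---- myself somegroup
The umask has to be set in whatever process does the writing; the answer below shows how to do that for Apache.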
You can set Apache to write new files with a specific umask.
To do so on RedHat/CentOS, edit the file /etc/sysconfig/httpd and add:
umask 007
On Debian/Ubuntu, use the file /etc/apache2/envvars with the same setting.
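For example, on Debian/Ubuntu (a sketch; /etc/apache2/envvars is sourced by apache2ctl, so a umask set there applies to the Apache processes it starts):
# add to the end of /etc/apache2/envvars:
umask 007
# then restart Apache:
sudo service apache2 restart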