Apache on Docker: set file permissions in a shared web hosting environment - apache

I'm new to Docker. I'm trying to switch from a traditional VM setup to a dockerized one for a bunch of websites I manage. I tried with Docker Compose and Wordpress; this is my docker-compose.yml file:
version: "3"
services:
blog2:
image: wordpress:4.9.6-apache
volumes:
- blog2:/var/www/html
environment:
WORDPRESS_DB_PASSWORD:
depends_on:
- mysql
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD:
volumes:
blog2:
It works and it creates a blog2 volume I can access on the main filesystem from /var/lib/docker/volumes/blog2. I can also connect with SFTP and edit files; everything works.
Files in the /var/www/html directory are owned by the www-data user. Editing them is fine, but if I add a new file, it is owned by the user I'm using on the server (in my test case it's root, but it could be any other user). So www-data cannot modify those new files if the webserver needs to edit or delete them.
How can I fix this problem? My idea is to add a user to every Docker container, add it to the www-data group, and chown the entire /var/www/html to this user, so that initial and future files can be read and written by both, no matter whether they are created by www-data or by this user.
Can this work? And can I write it in the docker-compose.yml file so that it is set up when I run docker-compose up -d at container creation? :)
Thank you in advance.

One solution to your problem is to start the wordpress container as a different user. This is documented under "Running as an arbitrary user" on the Docker Hub page for the wordpress image.
Inside the docker-compose file you can set the user that the container will run as. For instance, you can specify user 1000, which maps to UID 1000 on the host machine.
Thus you can find the uid of the www-data user and use that uid to start the container:
...
services:
  blog2:
    image: wordpress:4.9.6-apache
    user: "1000:1000"
    volumes:
      - blog2:/var/www/html
    environment:
      WORDPRESS_DB_PASSWORD:
    depends_on:
      - mysql
...
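On the host, you can look up the uid and gid to use like this (a quick sketch; on Debian-based systems www-data is typically 33):
id -u www-data   # prints the uid, e.g. 33
id -g www-data   # prints the gid, e.g. 33
Substitute those numbers into the user: field above.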

Related

How to stop anonymous access to redis databases

I run the redis image with docker-compose.
I passed redis.conf (and Redis says "configuration loaded").
In redis.conf I added a user:
user pytest ><password> ~pytest/* on #set #get
And yet I can communicate with Redis as anonymous, even with this line uncommented:
requirepass <password>
The Redis docs on the Security and ACL topics do not explain how to deny access to everyone else. Probably I am not understanding something fundamental.
my docker-compose.yaml:
version: '3'
services:
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 6000s
      timeout: 30s
      retries: 50
    restart: always
    volumes:
      - redis-db:/data
      - redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
volumes:
  redis-db:
  redis.conf:
And yet I can communicate with Redis as anonymous, even with the requirepass line uncommented
Because there's a default user, and you didn't disable it. If you want to totally disable anonymous access, add the following to your redis.conf:
user default off
Secondly, the configuration for user 'pytest' is incorrect. If you want to allow user 'pytest' only the set and get commands on the given key pattern, configure it as follows:
user pytest ><password> ~pytest/* on +set +get
You also need to ensure that docker-compose is using your config file.
Assuming you have redis.conf in the same directory as your docker-compose.yml, the 'volumes' section in the service declaration would be:
- ./redis.conf:/usr/local/etc/redis/redis.conf
and also remove the named volume declaration at the bottom:
redis.conf:
Users would still be able to connect to Redis, but without AUTH they can't perform any action if you enable:
requirepass <password>
The right way to restrict GET and SET operations to the keys pytest/* would be:
user pytest ><password> ~pytest/* on +set +get
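You can verify the lockdown with redis-cli (a quick sketch; <password> remains a placeholder and the service name redis matches the compose file above):
docker-compose exec redis redis-cli
127.0.0.1:6379> get pytest/foo
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth pytest <password>
OK
127.0.0.1:6379> get pytest/foo
(nil)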

How to properly configure certbot with autorenewal?

I'm learning Docker, so I'm sorry if this question might sound silly. Anyway, my goal is to create a LAMP container which handles all the databases in one place, and also to set up multiple virtual hosts for many sites. For each of these sites I want to use certbot to request an SSL certificate.
To do so, I wrote the following docker-compose.yaml:
version: "3"
services:
web:
image: webdevops/php-apache:alpine-php7
ports:
- "80:80"
volumes:
- ./www:/app
- ./php.ini:/opt/docker/etc/php/php.ini
- ./sites-available:/opt/docker/etc/httpd/vhost.common.d
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
ports:
- "3306:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
MYSQL_ROOT_PASSWORD: root
ports:
- "8088:80"
certbot:
image: webdevops/certbot
volumes:
- ./etc/letsencrypt:/etc/letsencrypt
In the first service I'm declaring Apache as web, using the alpine image created by webdevops (here is the documentation). I bind port 80 so I can reach Apache externally without specifying custom ports.
In the volumes section I added the www folder, which contains the PHP scripts.
I also specified a custom php.ini to override the default PHP settings. Then, as the last part of volumes, I mounted all the virtual hosts I created inside the sites-available folder into the vhost.common.d directory.
Then I have the certbot container as the last part of my docker-compose file, and I would like to know:
How can I request a certificate for my subdomain, which is actually configured inside the sites-available folder mounted as a volume of the web container?
How can I set up a cron job, or something like a scheduled task, that auto-renews all the certificates?
How can I store the obtained certificates in a volume?
I will admit, Docker at times is a struggle to piece together all the appropriate parts. With that said, my answer will not be complete, but hopefully it will get you a step closer.
The following will create a certificate (note the --dry-run; it is highly recommended you use this for testing, or else you'll get throttled):
docker run -it --rm \
-v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
-v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
-v /vol/to/the/web/root:/data/letsencrypt \
certbot/certbot certonly \
--noninteractive \
--webroot --webroot-path=/data/letsencrypt \
-d sub.domain.com \
--dry-run
-v /docker-volumes/etc/letsencrypt:/etc/letsencrypt
this is needed to store the cert itself
-v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt
not required, but in case you want to review log messages
-v /vol/to/the/web/root:/data/letsencrypt
you need to give certbot access to your web root so it can create the .well-known dir and do its checks; this one was tricky, as you need to link/use the same location used for your web container's web-root volume
--noninteractive
certbot will bypass asking you questions
--webroot --webroot-path=/data/letsencrypt
tells certbot where to find the webroot (i.e. within its own container)
Although not in the command above, you can add the following to avoid being prompted for an email address (I'm not sure whether it is a requirement or not):
--email [email_address] --agree-tos --no-eff-email
Things to keep in mind:
run certbot in --dry-run mode while testing, or else you will be throttled
certbot needs HTTP access to the host; your vhost declaration should not redirect or deny access to plain-HTTP requests, at least not to the .well-known directory
you will need to add the appropriate SSL options to your vhost; I think certbot can do this automatically, but I have not used that myself
you will then need to reload Apache, like so: /etc/init.d/apache2 reload
remove -it when/if you are running in cron
explore wrapping the cert creation and renewal in a shell script, as in the sketch below
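For example, a renewal wrapper might look like this (only a sketch under the assumptions above; the script name renew-certs.sh, the host paths, and the container name web are illustrative, and certbot renew only renews certificates that are close to expiry):
#!/bin/sh
# renew-certs.sh (hypothetical): renew certs, then reload Apache
docker run --rm \
  -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
  -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
  -v /vol/to/the/web/root:/data/letsencrypt \
  certbot/certbot renew \
  --webroot --webroot-path=/data/letsencrypt
# reload Apache inside the web container so it picks up renewed certs
docker exec web /etc/init.d/apache2 reload
A cron entry could then call it once a day, e.g.:
0 3 * * * /path/to/renew-certs.sh >> /var/log/certbot-renew.log 2>&1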
While I know this is not "the answer", hopefully some of this helps.

In Fedora 31 how do I set permissions for nginx running in a Podman container?

I am trying to set up a local dev LEMP stack for a Slim-4 project using podman-compose. So far I have containers for PHP and Nginx. Nginx runs but gives a 500 error when trying to access the log directory: permission denied. This directory is outside of the public directory served by Nginx.
I have SELinux set to permissive to rule out its issues.
I have used podman unshare to set ownership to the container's Nginx UID:GID.
I tried the setup with only a simple index file, and the file is served with no issues. So nginx/podman has access to the Nginx configuration file on the host. The issue must be with write permissions.
Here is my docker-compose file:
version: '3.7'
# Services
services:
  # Nginx Service
  nginx:
    image: nginx:1.17
    ports:
      - 8090:80
    volumes:
      - .:/var/www/php:z
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - php
  # PHP Service
  php:
    image: php:7.4-fpm
    working_dir: /var/www/php
    volumes:
      - .:/var/www/php
What am I missing?
The issue was that I incorrectly assumed I needed to set permissions to allow the Nginx user access.
Instead, I needed to grant access permissions to the group www-data.
How I did it:
log into the running Nginx container: podman exec -it [container ID] bash
find the www-data GID (group ID): from the container command line, run cat /etc/passwd | grep www-data
note the GID (in the result you will see something like ...x:33:33...; 33:33 is the UID:GID)
exit the container CLI with exit
in your development/host CLI, at the root of your project, run podman unshare chown -R 0:[the www-data GID you found above] . (don't miss the '.')
Explanation:
podman unshare puts you in a modified user namespace that matches the container's
chown changes ownership
-R means recursive
the number to the left of the ':' is the UID (user ID); the number to the right is the GID
the '.' is the current directory
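For reference, the same steps as one shell session might look like this (a sketch; the container name nginx and GID 33 are illustrative):
# look up www-data's UID:GID inside the running container
podman exec -it nginx grep www-data /etc/passwd
# www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
# back on the host, from the project root, remap ownership
podman unshare chown -R 0:33 .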
I hope this helps someone. I spent hours learning the above.

Share a data volume between two docker containers

I use Docker version 1.13.1, build 092cba3, on Windows 10.
I have a custom Jenkins container that builds code from Github in a volume.
The volume is /var/jenkins_home/workspace/myjob.
I also have an Apache container that I want to share the volume with.
The docker-compose.yml file is:
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: jenkins:v1
    environment:
      JAVA_OPTS: "-Djava.awt.headless=true"
      JAVA_OPTS: "-Djenkins.install.runSetupWizard=false" # Start jenkins unlocked
    ports:
      # - "50000:50000" # jenkins nodes
      - "8686:8080" # jenkins UI
    volumes:
      - myjob_volume:/var/jenkins_home/workspace/myjob
  apache:
    container_name: httpd
    image: httpd:2.2
    volumes_from:
      - jenkins
volumes:
  myjob_volume:
I basically want the Jenkins container to fetch the code into a volume, which will then be visible to the Apache (httpd) container, so that every change I make to the code from my IDE and push to Github becomes visible in the Apache container. The volume is created in the Apache container, but when I successfully build the code in the Jenkins container, it does not appear in the volume in Apache.
EDIT:
After launching the 2 containers with docker-compose up -d,
I enable their volumes from Kitematic
I change the volume path for Apache to point to the Jenkins volume
and when I build the code from Jenkins, Apache sees it as I would like.
So... how should I do the same from the docker-compose file?
You are using volumes_from, which "copies" the mount definition from the container you specify. As a result, the myjob_volume will be mounted at /var/jenkins_home/workspace/myjob inside the Apache container as well. The official Apache image from Docker Hub (https://hub.docker.com/_/httpd/) uses /usr/local/apache2/htdocs/ as the webroot.
To mount the volume at that location, update the docker-compose file to look like this:
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: jenkins:v1
    environment:
      JAVA_OPTS: "-Djava.awt.headless=true"
      JAVA_OPTS: "-Djenkins.install.runSetupWizard=false" # Start jenkins unlocked
    ports:
      # - "50000:50000" # jenkins nodes
      - "8686:8080" # jenkins UI
    volumes:
      - myjob_volume:/var/jenkins_home/workspace/myjob
  apache:
    container_name: httpd
    image: httpd:2.2
    volumes:
      - myjob_volume:/usr/local/apache2/htdocs/
volumes:
  myjob_volume:
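After recreating the containers, you can check that both sides see the same files (a quick sketch; hello.txt is just an illustrative file):
docker-compose up -d
# create a file through the Jenkins side of the volume...
docker exec jenkins touch /var/jenkins_home/workspace/myjob/hello.txt
# ...and it should appear under Apache's webroot
docker exec httpd ls /usr/local/apache2/htdocs/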

Redis in docker-compose: any way to specify a redis.conf file?

My Redis container is defined as a standard image in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379"
I guess it's using standard settings, like binding Redis to localhost.
I need to bind it to 0.0.0.0. Is there any way to add a local redis.conf file to change the binding and have docker-compose use it?
Thanks for any trick...
Yes. Just mount your redis.conf over the default with a volume:
redis:
  image: redis
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379"
Alternatively, create a new image based on the redis image with your conf file copied in. Full instructions are at: https://registry.hub.docker.com/_/redis/
However, the redis image does bind to 0.0.0.0 by default. To access it from the host, you need to use the port that Docker has mapped to the host for you, which you can find with docker ps or the docker port command. You can then access it at localhost:32678, where 32678 is the mapped port. Alternatively, you can specify a fixed port to map to in the docker-compose.yml.
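For example (a sketch; the container name my_redis_1 and the mapped port shown are illustrative):
# find the host port Docker mapped to the container's 6379
docker port my_redis_1 6379
# 0.0.0.0:32768
# then connect from the host:
redis-cli -p 32768 ping
# PONG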
As you seem to be new to Docker, this might all make a bit more sense if you start by using raw Docker commands rather than starting with Compose.
Old question, but if someone still wants to do that, it is possible with volumes and command:
command: redis-server /usr/local/etc/redis/redis.conf
volumes:
  - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
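For the original question's goal of binding to 0.0.0.0, the mounted redis.conf might then contain something like this (a sketch; note that on Redis >= 3.2, with no password set, protected-mode must also be disabled before remote clients can connect, so understand the security implications first):
# listen on all interfaces
bind 0.0.0.0
# allow remote connections without a password (risky; prefer requirepass)
protected-mode no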
Unfortunately, with Docker things become a little tricky when it comes to the Redis configuration file, and the answer voted as best (I'm sure by people that didn't actually test it) DOESN'T work.
But what DOES work, fast and without hassles, is this:
command: redis-server --bind redis-container-name --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
You can pass all the option variables you want in the command section of the compose file by adding "--" in front of each option name, followed by its value.
Never forget to set a password, and if possible close port 6379.
Thank me later.
PS: If you noticed, in the command I didn't use the typical 127.0.0.1, but instead the Redis container name. This is done because Docker assigns IP addresses internally via its embedded DNS server. In other words, this bind address becomes dynamic, hence adding an extra layer of security.
If your Redis container is called "redis" and you execute docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis (to verify the running container's internal IP address), then as far as Docker is concerned, the command given in the compose file will be translated internally to something like: redis-server --bind 172.19.0.5 --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
Based on David's answer, but a more "Docker Compose" way is:
redis:
  image: redis:alpine
  command: redis-server --include /usr/local/etc/redis/redis.conf
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
That way, you include the .conf file via the docker-compose.yml file and don't need a custom image.
mount your config at /usr/local/etc/redis/redis.conf
add a command to execute redis-server with your config:
redis:
  image: redis:7.0.4-alpine
  restart: unless-stopped
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  command: redis-server /usr/local/etc/redis/redis.conf
  ########################################
  # or use this command if the mount does not work
  ########################################
  command: >
    redis-server --bind 127.0.0.1
    --appendonly no
    --save ""
    --protected-mode yes
It is an old question, but I have a solution that seems elegant and saves me from executing commands every time ;).
1. Create your Dockerfile like this:
#/bin/redis/Dockerfile
FROM redis
CMD ["redis-server", "--include /usr/local/etc/redis/redis.conf"]
What we are doing is telling the server to include that file in the Redis configuration. The settings you put there will override the defaults Redis ships with.
2. Create your docker-compose:
redisall:
  build:
    context: ./bin/redis
  container_name: 'redisAll'
  restart: unless-stopped
  ports:
    - "6379:6379"
  volumes:
    - ./config/redis:/usr/local/etc/redis
3. Create your configuration file; it must have the same name and path as referenced in the Dockerfile:
//config/redis/redis.conf
requirepass some-long-password
appendonly yes
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1
# ...plus any other directives you want to set; whatever you put here
# overrides the default settings the container ships with.
I had the same problem when using Redis in a Docker environment: Redis could not save data to disk in dump.rdb.
The problem was that Redis could not read the configuration in redis.conf. I solved it by passing the required configuration with the command option in docker-compose, as below:
redis19:
  image: redis:5.0
  restart: always
  container_name: redis19
  hostname: redis19
  command: redis-server --requirepass some-secret --stop-writes-on-bgsave-error no --save 900 1 --save 300 10 --save 60 10000
  volumes:
    - $PWD/redis/redis_data:/data
    - $PWD/redis/redis.conf:/usr/local/etc/redis/redis.conf
    - /etc/localtime:/etc/localtime:ro
and it works fine.
I think it will be helpful to share working code from my local setup:
redis:
  container_name: redis
  hostname: redis
  image: redis
  command: >
    --include /usr/local/etc/redis/redis.conf
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"