I'm learning Docker, so I'm sorry if this question sounds silly. Anyway, my goal is to create a LAMP container that handles all the databases in one place, and I also want to set up multiple virtual hosts for several sites. For each of these sites I want to use certbot to request an SSL certificate.
To do so, I wrote the following docker-compose.yaml:
version: "3"
services:
web:
image: webdevops/php-apache:alpine-php7
ports:
- "80:80"
volumes:
- ./www:/app
- ./php.ini:/opt/docker/etc/php/php.ini
- ./sites-available:/opt/docker/etc/httpd/vhost.common.d
db:
image: mysql
command: --default-authentication-plugin=mysql_native_password
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
ports:
- "3306:3306"
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
MYSQL_ROOT_PASSWORD: root
ports:
- "8088:80"
certbot:
image: webdevops/certbot
volumes:
- ./etc/letsencrypt:/etc/letsencrypt
In the first service I declare Apache as web, using the Alpine image created by webdevops (see their documentation). I bind port 80 so I can reach Apache externally without specifying a custom port.
In the volumes section I added the www folder, which contains the PHP scripts.
I also specified a custom php.ini to override the default PHP settings. Then, as the last part of volumes, I mounted all the virtual hosts I created inside the sites-available folder into the vhost.common.d directory.
Then I have the certbot container as the last part of my docker-compose file, and I would like to do the following:
How can I request a certificate for my subdomain, whose virtual host file is stored inside the sites-available folder mounted as a volume of the web container?
How can I set up a cron job, or a similar task, that automatically renews all the certificates?
How can I store the obtained certificates in a volume?
I will admit, Docker is often a struggle to piece together all the appropriate parts. With that said, my answer will not be complete, but hopefully it will get you a step closer.
The following will create a certificate (note the --dry-run; it is highly recommended that you use this while testing, or else you'll get throttled):
docker run -it --rm \
-v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
-v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
-v /vol/to/the/web/root:/data/letsencrypt \
certbot/certbot certonly \
--noninteractive \
--webroot --webroot-path=/data/letsencrypt \
-d sub.domain.com \
--dry-run
-v /docker-volumes/etc/letsencrypt:/etc/letsencrypt
this is needed to store the cert itself
-v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt
not required, but useful in case you want to review log messages
-v /vol/to/the/web/root:/data/letsencrypt
you need to give certbot access to your web root so it can create the .well-known directory and do its checks; this one is tricky because you need to point at the same location that is used as the web-root volume of your web container
--noninteractive
certbot will bypass asking you questions
--webroot --webroot-path=/data/letsencrypt
tells certbot where to find the webroot (i.e. the path inside its own container)
Although not in the command above, you can add the following to help create the cert if you are prompted for an email address (not sure whether it is a requirement or not):
--email [email_address] --agree-tos --no-eff-email
Things to keep in mind:
run certbot in --dry-run mode first, or else you will be throttled
certbot will need HTTP access to the host; your vhost declaration should not redirect or deny HTTP requests, at least not to the .well-known directory
you will need to add the appropriate SSL options to your vhost; I think certbot can do this automatically, but I have not used that myself
you will then need to reload Apache, like so: /etc/init.d/apache2 reload
remove -it when/if you are running this from cron
explore wrapping the cert creation and renewal in a shell script, as sketched below
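A minimal sketch of such a wrapper, assuming the same volume paths as the certonly command above and a compose service called web (the script name, project paths, and reload command are illustrative, not taken from your setup):
#!/bin/sh
# renew-certs.sh - hypothetical renewal wrapper
# Renew any certificates that are close to expiry (keep --dry-run while testing).
docker run --rm \
  -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
  -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
  -v /vol/to/the/web/root:/data/letsencrypt \
  certbot/certbot renew \
  --webroot --webroot-path=/data/letsencrypt
# Reload Apache so it picks up renewed certificates; run this from the
# directory that holds docker-compose.yaml, and adjust the reload command
# to whatever your web image provides.
docker-compose exec -T web /etc/init.d/apache2 reload
For the auto-renewal, a host crontab entry along these lines (twice a day, as certbot suggests) would call the script:
0 3,15 * * * cd /path/to/project && ./renew-certs.sh >> /var/log/certbot-renew.log 2>&1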
While I know this is not "the answer", hopefully some of this helps.
I need to get a certificate for my domain, hosted on AWS Route 53, from Let's Encrypt. I do not have port 80 or 443 exposed, since the server is used for a VPN and does not have public access.
So the only way to do this is via DNS validation through Route 53.
So far I have installed certbot and dns-route53 plugin
sudo snap install --beta --classic certbot
sudo snap set certbot trust-plugin-with-root=ok
sudo snap install --beta certbot-dns-route53
sudo snap connect certbot:plugin certbot-dns-route53
I have created a special user in my AWS account who has access to Route 53, and I have added the access key ID and secret access key to ~/.aws/config and also ~/.aws/credentials, which looks something like this:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
Basically, I followed every step given here: https://certbot-dns-route53.readthedocs.io/en/stable/
Now when I run the following command:
sudo certbot certonly -d mydomain.com --dns-route53
It gives the following output:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator dns-route53, Installer None
Requesting a certificate for mydomain.com
Performing the following challenges:
dns-01 challenge for mydomain.com
Cleaning up challenges
Unable to locate credentials
To use certbot-dns-route53, configure credentials as described at https://boto3.readthedocs.io/en/latest/guide/configuration.html#best-practices-for-configuring-credentials and add the necessary permissions for Route53 access.
I went to the documentation given in the error message: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#best-practices-for-configuring-credentials
but I do not think I am doing anything wrong.
I even switched to root with sudo su and exported the AWS keys as environment variables there, and exported them in my home shell as well, but it still throws the same error.
I also ran into this same issue, and it's likely because you are running certbot with sudo. When you do that, whichever user's ~/ you put the credentials under is ignored; certbot looks in /root/ instead.
I fixed it as follows (centos is my user, and it has the .aws/ directory with the config and credentials files):
sudo -s
ln -s /home/centos/.aws/ ~/.aws
ls -lsa ~/.aws
... /root/.aws -> /home/centos/.aws/
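Alternatively, since the plugin reads credentials through boto3, you could pass the keys straight through sudo as environment variables instead of symlinking. A sketch with placeholder values (I have not tested this with the snap install):
sudo env AWS_ACCESS_KEY_ID=foo AWS_SECRET_ACCESS_KEY=bar \
    certbot certonly -d mydomain.com --dns-route53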
I am trying to set up a local dev LEMP stack for a Slim-4 project using podman-compose. So far I have containers for PHP and Nginx. Nginx runs but gives a 500 error when trying to access the log directory - permission denied. This directory is outside of the public directory served by nginx.
I have SELinux set to permissive to rule it out as the cause.
I have used podman unshare to set ownership to the container's Nginx UID:GID.
I tried the setup with only a simple index file, and the file is served with no issues. So nginx/podman has access to the nginx configuration file on the host; the issue must be with write permissions.
Here is my docker-compose file:
version: '3.7'

# Services
services:

  # Nginx Service
  nginx:
    image: nginx:1.17
    ports:
      - 8090:80
    volumes:
      - .:/var/www/php:z
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - php

  # PHP Service
  php:
    image: php:7.4-fpm
    working_dir: /var/www/php
    volumes:
      - .:/var/www/php
What am I missing?
The issue was that I incorrectly assumed I needed to set permissions to allow the Nginx user access.
Instead, I needed to grant access permissions to the www-data group.
How I did it:
log into the running Nginx container: podman exec -it [container ID] bash
find the www-data GID (group ID): from the container command line, run cat /etc/passwd | grep www-data
note the GID (in the result you will see something like ...x:33:33...; 33:33 is user:group)
exit the container CLI with exit
in your development/host CLI, at the root of your project, run podman unshare chown -R 0:[the www-data GID you found above] . (don't miss the '.'); a combined sketch follows the explanation below
Explanation:
podman unshare puts you in a modified userspace that matches the container
chown changes ownership
-R means recursive
the number to the left of the ':' is the UID (User ID), the number to the right is the GID
the '.' is the current directory.
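Put together, the whole sequence might look like this (the container name nginx_container and the GID 33 are placeholders taken from the example above; yours may differ):
# find the www-data GID inside the running Nginx container
podman exec nginx_container grep www-data /etc/passwd    # e.g. www-data:x:33:33:...
# back on the host, from the project root, give that group ownership
podman unshare chown -R 0:33 .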
I hope this helps someone. I spent hours learning the above.
I have a Debian server with apache2 on it. I can access it by an ip address.
What I want is to be able to access the containers inside it (which run an apache2 server) from the outside, via a URL like "myIpAddress/container1". What I currently have is access to those containers only from within the Debian server.
I thought about using a reverse proxy, but I cannot make it work.
Thank you for your help! :-)
Map the docker container's port to a host port and access the docker container from <host-ip>:port.
docker run -p host-port:container-port image
For example, running a container with the command below makes it available at 127.0.0.1:
docker run -p 80:5000 training/webapp
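You can then check it from the host, for example:
curl http://127.0.0.1/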
Update:
Setting up reverse proxy using NGINX
This example uses a plain NGINX container as site A and plain Apache server as site B.
Run the reverse proxy.
docker run -d \
--name nginx-proxy \
-p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Start the container for site A, specifying the domain name in the VIRTUAL_HOST variable.
docker run -d --name site-a -e VIRTUAL_HOST=a.example.com nginx
Check out your website at http://a.example.com.
With site A still running, start the container for site B.
docker run -d --name site-b -e VIRTUAL_HOST=b.example.com httpd
Check out site B at http://b.example.com.
Note: Make sure you have set up DNS to forward the subdomains to the host running nginx-proxy. If you're using AWS, the easiest way is to use Route53.
For testing locally, map sub-domains to resolve to localhost by adding entries in /etc/hosts file.
127.0.0.1 a.example.com
127.0.0.1 b.example.com
References
jwilder NGINX Proxy GitHub
NGINX reverse proxy using Docker
I am building a "hackme" challenge. Users can hack there way into the website and gain root access.
I made this in docker containers to prevent users from harming the host or each others game play(container).
I can't just redirect it to a different port for every user because users could just port scan the server and find the different containers.
What i want is http://example.com/challange1/A1B2C3
were "A1B2C3" is the unique identifier for their own container.
Could someone tell me how to do this?
With nginx you can do the following - rewrite the requests for given paths, as shown here: https://gist.github.com/soheilhy/8b94347ff8336d971ad0#step-7----rewriting-requests
The names used here are those of the existing, running containers.
server {
    listen ...;
    ...
    location /a1b2c3 {
        proxy_pass http://a1b2c3:8080;
    }
    location /a2b1c4 {
        proxy_pass http://a2b1c4:8080;
    }
    ...
}
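For proxy_pass http://a1b2c3:8080 to resolve the container names, the nginx container and the challenge containers need to share a user-defined Docker network, since that is what gives you name-based DNS between containers. A rough sketch, with made-up image and network names:
docker network create challenges
docker run -d --name a1b2c3 --network challenges challenge-image
docker run -d --name a2b1c4 --network challenges challenge-image
docker run -d --name gateway --network challenges -p 80:80 \
    -v /path/to/default.conf:/etc/nginx/conf.d/default.conf:ro nginx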
I would go with jwilder/nginx-proxy for this task. Just start the different containers with "random" subdomains.
Start jwilder/nginx-proxy:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Start your Containers
for i in {1..5}
do
docker run -e VIRTUAL_HOST=foo-$i.bar.com mycontainer
done
jwilder/nginx-proxy is updated on every container start and will register the new VHOST.
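If DNS for the subdomains is not set up yet, you can still check a vhost from the proxy host by sending the Host header manually, for example:
curl -H 'Host: foo-1.bar.com' http://localhost/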
Our Docker images ship closed-source code, so we need to store them somewhere safe, using our own private Docker registry.
We are looking for the simplest way to deploy a private Docker registry with a simple authentication layer.
I found:
this manual way http://www.activestate.com/blog/2014/01/deploying-your-own-private-docker-registry
and the shipyard/docker-private-registry docker image based on stackbrew/registry and adding basic auth via Nginx - https://github.com/shipyard/docker-private-registry
I am thinking of using shipyard/docker-private-registry, but is there another, better way?
I'm still learning how to run and use Docker, consider this an idea:
# Run the registry on the server, allow only localhost connection
docker run -p 127.0.0.1:5000:5000 registry
# On the client, setup ssh tunneling
ssh -N -L 5000:localhost:5000 user@server
The registry is then accessible at localhost:5000, authentication is done through ssh that you probably already know and use.
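With the tunnel open, pushing and pulling are then the usual tag/push/pull against localhost:5000, for example (the image name is just an illustration):
docker tag my-image localhost:5000/my-image
docker push localhost:5000/my-image
docker pull localhost:5000/my-image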
Sources:
https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
https://docs.docker.com/userguide/dockerlinks/
You can also use an Nginx front-end with a Basic Auth and an SSL certificate.
Regarding the SSL certificate, I spent a couple of hours trying to get a working self-signed certificate, but Docker wasn't able to work with the registry. To solve this I used a free signed certificate, which works perfectly (I used StartSSL, but there are others).
Also be careful when generating the certificate: if you want the registry running at the URL registry.damienroch.com, you must generate the certificate for that URL including the sub-domain, otherwise it's not going to work.
You can perform all this setup using Docker and my nginx-proxy image (See the README on Github: https://github.com/zedtux/nginx-proxy).
This means that, in case you have installed nginx using the distribution package manager, you will replace it with a containerised nginx.
Place your certificate (.crt and .key files) on your server in a folder (I'm using /etc/docker/nginx/ssl/ and the certificate names are private-registry.crt and private-registry.key)
Generate a .htpasswd file and upload it on your server (I'm using /etc/docker/nginx/htpasswd/ and the filename is accounts.htpasswd)
Create a folder where the images will be stored (I'm using /etc/docker/registry/)
Run my nginx-proxy image with docker run
Run the Docker registry with some environment variables that nginx-proxy will use to configure itself.
Here is an example of the commands to run for the previous steps:
sudo docker run -d --name nginx -p 80:80 -p 443:443 -v /etc/docker/nginx/ssl/:/etc/nginx/ssl/ -v /var/run/docker.sock:/tmp/docker.sock -v /etc/docker/nginx/htpasswd/:/etc/nginx/htpasswd/ zedtux/nginx-proxy:latest
sudo docker run -d --name registry -e VIRTUAL_HOST=registry.damienroch.com -e MAX_UPLOAD_SIZE=0 -e SSL_FILENAME=private-registry -e HTPASSWD_FILENAME=accounts -e DOCKER_REGISTRY=true -v /etc/docker/registry/data/:/tmp/registry registry
The first line starts nginx and the second one the registry. It's important to do it in this order.
When both are up and running you should be able to login with:
docker login https://registry.damienroch.com
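Once logged in, pushing works the same way as with the public hub, just prefixed with the registry host, for example (the image name is just an illustration):
docker tag my-private-image registry.damienroch.com/my-private-image
docker push registry.damienroch.com/my-private-image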
I have created an almost ready-to-use, and certainly ready-to-function, setup for running a docker-registry: https://github.com/kwk/docker-registry-setup .
Maybe it helps.
Everything (registry, auth server, and LDAP server) runs in containers, which makes parts replaceable as soon as you're ready. The setup is fully configured to make it easy to get started. There are even demo certificates for HTTPS, but they should be replaced at some point.
If you don't want LDAP authentication but simple static authentication, you can disable it in auth/config/config.yml and put in your own combination of usernames and hashed passwords.