I know how to start a containerized private registry with TLS enabled, and how to copy domain.crt to other Docker hosts so they can access the registry.
I have a private container registry server already running (not as a container, but installed directly on an office server), and I can log in with a username and password. How can I use it with an SSL certificate?
I know I can generate a CA certificate. But how do I get the certificate onto the registry? Something like SSH with GitLab, where we upload the public key to GitLab and keep the private key on the host machine.
Or, how can I download the domain.crt file from the registry to the Docker host?
What am I missing?
Thanks and regards
I played with this a couple of years ago and got it working with nginx, using a configuration along these lines:
upstream private-docker-registry {
    server docker_registry:5000;
}

server {
    listen 443 ssl;
    listen 80;
    server_name mydockerregistry.com;

    ssl_certificate /etc/nginx/certs/mydockerregistry.com.crt;
    ssl_certificate_key /etc/nginx/certs/mydockerregistry.com.key;

    proxy_set_header Host $http_host;        # required for the Docker client's sake
    proxy_set_header X-Real-IP $remote_addr; # pass on the real client IP
    client_max_body_size 0;                  # disable limits to avoid HTTP 413 on large image uploads

    # required to avoid HTTP 411: see issue #1486
    # (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;

    location / {
        # let nginx know about our auth file
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/authentication/docker-registry.htpasswd;
        proxy_pass http://private-docker-registry;
    }

    location /_ping {
        auth_basic off;
        proxy_pass http://private-docker-registry;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://private-docker-registry;
    }
}
Create an htpasswd file for authentication; in this example I called it docker-registry.htpasswd.
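As a minimal sketch of creating that file (the user name and password here are placeholders), you can use `openssl passwd`, or `htpasswd` from apache2-utils if it is installed:

```shell
# Create docker-registry.htpasswd with one user "myuser".
# Uses openssl's apr1 (MD5-crypt) hasher, which nginx's auth_basic understands;
# "htpasswd -Bbc docker-registry.htpasswd myuser mysecret" is the equivalent
# with apache2-utils.
printf 'myuser:%s\n' "$(openssl passwd -apr1 mysecret)" > docker-registry.htpasswd
```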
Then run an nginx image linked to the Docker registry container (called docker_registry in this example; in the nginx configuration above, the registry is expected to be listening on port 5000). Running the nginx container will look something like this:
sudo docker run -d \
--name="nginx_docker_registry" \
-p 443:443 \
-p 80:80 \
--link my_docker_registry_container_name:docker_registry \
-v "path_to_htpasswd_dir_in_host:/etc/nginx/authentication:ro" \
-v "path_to_certificates_in_host:/etc/nginx/certs:ro" \
-v "path_to_nginx_conf_in_host:/etc/nginx/nginx.conf:ro" \
nginx
Related
Given the following docker containers:
an nginx service that runs an unmodified official nginx:latest image
container name: proxy
two applications running in separate containers based on modified official php:7.4.1-apache images
container names: app1 and app2
all three containers (proxy, app1, and app2) are in the same Docker-created network, with automatic DNS resolution
With the following example domain names:
local.web.test
local1.web.test
local2.web.test
I want to achieve the following behavior:
serve local.web.test from nginx as the default server block
configure nginx to proxy requests from local1.web.test and local2.web.test to app1 and app2, respectively, both listening on port 80
configure nginx to serve all three domain names using a wildcard SSL certificate
I experience two problems:
I notice the following error in the nginx logs:
2020/06/28 20:00:59 [crit] 27#27: *36 SSL_read() failed (SSL: error:14191044:SSL routines:tls1_enc:internal error) while waiting for request, client: 172.19.0.1, server: 0.0.0.0:443
mod_rpaf seems not to work properly (i.e., the IP address in the Apache access logs is that of the nginx server [e.g., 172.19.0.2] instead of the IP of the client that issued the request):
172.19.0.2 - - [28/Jun/2020:20:05:05 +0000] "GET /favicon.ico HTTP/1.0" 404 457 "http://local1.web.test/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
the output of phpinfo() for Apache Environment shows that:
HTTP_X_REAL_IP lists the client ip
SERVER_ADDR lists the app1 container ip (e.g., 172.19.0.4)
REMOTE_ADDR shows the proxy container ip (e.g., 172.19.0.2) instead of the client ip
To make this reproducible, this is how everything is set up. I tried this on my Windows machine so there are two preliminary steps.
Preliminary steps
a. in my C:\Windows\System32\drivers\etc\hosts file I added the following:
127.0.0.1 local.web.test
127.0.0.1 local1.web.test
127.0.0.1 local2.web.test
b. I generated a self-signed SSL certificate with the Common Name set to *.local.test via
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 -keyout localhost.key -out localhost.crt
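For reproducibility, the same self-signed certificate can be generated non-interactively by supplying the subject on the command line (a sketch; the wildcard Common Name matches the one stated above):

```shell
# Non-interactive variant: -subj supplies the Common Name directly,
# so openssl does not prompt for certificate fields.
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=*.local.test" \
    -keyout localhost.key -out localhost.crt
```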
The proxy service setup
a. the nginx.yml for the docker-compose:
version: "3.8"

services:
  nginx:
    image: nginx:latest
    container_name: proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./certs:/etc/ssl/nginx
      - ./static/local.web.test:/usr/share/nginx/html
    networks:
      - proxy

networks:
  proxy:
    driver: bridge
b. within ./nginx that is bind mounted at /etc/nginx/conf.d there is a file default.conf that contains:
server {
listen 80 default_server;
server_name local.web.test;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name local.web.test;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
ssl_certificate /etc/ssl/nginx/localhost.crt;
ssl_certificate_key /etc/ssl/nginx/localhost.key;
}
c. the ./certs:/etc/ssl/nginx bind mounts the folder containing the self-signed certificate and key
d. the ./static/local.web.test:/usr/share/nginx/html makes available a file index.html that contains
<h1>local.web.test</h1>
The app1 and app2 services setup
a. the apache.yml for the docker-compose:
version: "3.8"

services:
  app1:
    build:
      context: .
      dockerfile: apache.dockerfile
    image: app1
    container_name: app1
    volumes:
      - ./static/local1.web.test:/var/www/html
    networks:
      - exp_proxy
  app2:
    build:
      context: .
      dockerfile: apache.dockerfile
    image: app2
    container_name: app2
    volumes:
      - ./static/local2.web.test:/var/www/html
    networks:
      - exp_proxy

networks:
  # Note: the network is named `exp_proxy` because the root directory is `exp`.
  exp_proxy:
    external: true
b. the apache.dockerfile image looks like this:
# Base image.
FROM php:7.4.1-apache
# Install dependencies.
RUN apt-get update && apt-get install -y curl nano wget unzip build-essential apache2-dev
# Clear cache.
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Change working directory.
WORKDIR /root
# Fetch mod_rpaf.
RUN wget https://github.com/gnif/mod_rpaf/archive/stable.zip
# Unzip.
RUN unzip stable.zip
# Change working directory.
WORKDIR /root/mod_rpaf-stable
# Compile and install.
RUN make && make install
# Register the module for load.
RUN echo "LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf.so" > /etc/apache2/mods-available/rpaf.load
# Copy the configuration for mod_rpaf.
COPY ./apache/mods/rpaf.conf /etc/apache2/mods-available/rpaf.conf
# Enable the module.
RUN a2enmod rpaf
# Set working directory.
WORKDIR /var/www/html
c. the file ./apache/mods/rpaf.conf copied contains:
<IfModule mod_rpaf.c>
RPAF_Enable On
RPAF_Header X-Real-Ip
RPAF_ProxyIPs 127.0.0.1
RPAF_SetHostName On
RPAF_SetHTTPS On
RPAF_SetPort On
</IfModule>
d. the ./static/local1.web.test:/var/www/html bind mounts an index.php file containing:
<h1>local1.web.test</h1>
<?php phpinfo(); ?>
the same goes for ./static/local2.web.test:/var/www/html
e. the 000-default.conf virtual hosts in app1 and app2 are not modified:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Starting the setup
a. start the proxy server
docker-compose -f nginx.yml up -d --build
b. start the app1 and app2 services
docker-compose -f apache.yml up -d --build
c. check containers to see if mod_rpaf is enabled
docker-compose -f apache.yml exec app1 apachectl -t -D DUMP_MODULES
d. add two files in ./nginx that will be available on the proxy container at /etc/nginx/conf.d
local1.web.test.conf containing
upstream app-1 {
server app1;
}
server {
listen 80;
server_name local1.web.test;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name local1.web.test;
location / {
proxy_pass http://app-1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
ssl_certificate /etc/ssl/nginx/localhost.crt;
ssl_certificate_key /etc/ssl/nginx/localhost.key;
}
the second file is local2.web.test.conf with a similar setup (i.e., number 1 is replaced with 2)
e. check the config and restart the proxy container (or reload the nginx server)
docker-compose -f nginx.yml exec proxy nginx -t
docker-compose -f nginx.yml exec proxy service nginx reload
The issues:
when I run docker logs proxy -f I notice the SSL internal error mentioned above: SSL_read() failed
someone faced a similar error (http2: SSL read failed while sending req in nginx), but in that case the message points more specifically to the certificate authority
if I run docker logs app1 -f and visit https://local1.web.test, the ip in the GET request matches the ip of the proxy container (i.e., 172.19.0.2) and not that of the remote client
I suspect the culprit is the RPAF_ProxyIPs 127.0.0.1 directive, but I can't hard-code the IP because I don't know which IP the container will get in the exp_proxy network
also, I can't use the hostname, because RPAF_ProxyIPs expects an IP
docker inspect proxy shows "IPAddress": "172.19.0.2"
docker inspect app1 shows "IPAddress": "172.19.0.4"
I can't seem to understand what goes wrong and would appreciate your help.
I have an issue with my nginx configuration: I can't get my server_name to work.
I tried to build my Docker container with the nginx configuration inside.
Body of my Dockerfile.
FROM nginx
RUN rm -rf /etc/nginx/conf.d/*
RUN mkdir /etc/nginx/ssl
RUN chown -R root:root /etc/nginx/ssl
RUN chmod -R 600 /etc/nginx/ssl
COPY etc/ssl/certs/qwobbleprod.crt /etc/nginx/ssl
COPY etc/ssl/certs/app.qwobble.com.key /etc/nginx/ssl
COPY nginx/default.conf /etc/nginx/conf.d/
COPY dist /usr/share/nginx/html
EXPOSE 443
and my nginx configuration file:
server {
listen 443 ssl default_server;
root /usr/share/nginx/html;
server_name blabla.com www.blabla.com;
access_log /var/log/nginx/nginx.access.log;
error_log /var/log/nginx/nginx.error.log;
ssl on;
ssl_certificate /etc/nginx/ssl/blabla.crt;
ssl_certificate_key /etc/nginx/ssl/blabla.com.key;
sendfile on;
location / {
try_files $uri /index.html =404;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico|html)$ {
expires max;
log_not_found off;
}
}
I tried to build and run my docker container
docker build -t <name> .
docker run -it -p 443:443 <name>
As a result, I can reach my app at https://localhost:443,
but I don't have access to my app through https://blabla.com:443 or https://www.blabla.com:443.
I'm a newbie with Docker and nginx, and I have no idea what is wrong.
I will be grateful for any help!
In this case I would expect that you actually need to own the blabla.com domain, and that its DNS (Domain Name System) record should point to your external IP address.
You must then configure your router to accept connections on port 443 (as you desire) and forward them (port forwarding) to the computer running your Docker image, on the port it is actually published on.
It might also be necessary to open the firewall on the computer Docker is running on.
I see you also want to serve HTTPS, so you will need certificates for that.
Or, if you want to fake it, you can edit your hosts file (/etc/hosts on Mac or Linux) and add an entry like:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 blabla.com
but now blabla.com will only work on your machine...
Hope it helps
I recently installed this web application on my Ubuntu server which runs Apache (SSL disabled).
No matter how much I try, I can't get the application to use HTTP. I tried the -p flag, but then it exposes port 443 and binds something else. I hate browser warnings about SSL; I just want to use HTTP with port 8080.
The application uses nginx, which only listens on 443. I want my application URL to look like http://localhost:8080. The application uses Google OAuth for logins; I'm assuming it will work over HTTP.
How do I get it to work over HTTP?
You must edit nginx.conf in order to use plain HTTP (nginx will never speak HTTP on an HTTPS port, except for some errors).
Change:
listen 443;
server_name localhost;
access_log /dev/stdout;
error_log /dev/stderr;
ssl on;
ssl_certificate /src/openseedbox/conf/host.cert;
ssl_certificate_key /src/openseedbox/conf/host.key;
To:
listen 8080;
server_name localhost;
access_log /dev/stdout;
error_log /dev/stderr;
Then after docker build, run with:
docker run -p 8080:8080 .......
Alternatively, you can set up Apache as an HTTP virtual host that reverse-proxies to the HTTPS nginx, but I think it is easier to modify the nginx config.
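A rough sketch of that Apache alternative (the site file name is hypothetical; it assumes mod_proxy, mod_proxy_http, and mod_ssl are enabled, and that nginx stays on 443 with its self-signed certificate):

```apache
# /etc/apache2/sites-available/plain-http.conf (hypothetical name)
<VirtualHost *:8080>
    ProxyPreserveHost On
    # Speak TLS to the backend nginx
    SSLProxyEngine On
    # Accept nginx's self-signed certificate
    SSLProxyVerify none
    SSLProxyCheckPeerName off
    ProxyPass        / https://localhost:443/
    ProxyPassReverse / https://localhost:443/
</VirtualHost>
```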
Approach #2
You can add another nginx container to act as a reverse proxy. I'm not sure whether the application behind it will break, but it acts as an HTTP "plainer":
docker-compose.yml
# Add this:
plain_nginx:
  image: nginx
  volumes:
    - ./plain_nginx.conf:/etc/nginx/conf.d/default.conf
  ports:
    - 8080:80
  links:
    - openseedbox
plain_nginx.conf
server {
listen 80;
server_name _;
access_log /dev/stdout;
error_log /dev/stderr;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
proxy_pass https://openseedbox;
}
}
Then, from the ./docker/ directory in that repo, run:
docker-compose up
Then you have http://localhost:8080 acting as reverse proxy of the SSL stuff
I have one server, one IP, and multiple websites.
What I'm trying to do is isolate all the websites, each in its own Docker container.
What I need is nginx as a proxy, and one Docker container for each website.
I tried something like that (as root on server):
/etc/hosts :
127.0.0.1 example.com
NGINX config:
http {
upstream app-a {
server 127.0.0.1:3000;
}
server {
listen 80;
server_name example.com www.example.com;
location / {
proxy_pass http://app-a;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
}
}
Docker:
docker run -d -p 3000:80 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
curl example.com :
<title>Welcome to nginx!</title>
Why do I get the NGINX response and not the Apache server response?
If I connect on Docker container, the Apache server is started:
sudo docker exec -i -t ID /bin/bash
curl 127.0.0.1 :
<html><body><h1>It works!</h1>
Also, on a remote computer (my PC), I have test.com pointed to my server's public IP. If I access example.com, I get the same NGINX response.
And one more question: is this a good approach for isolating/running live websites in Docker containers? Or should I look for another VM solution?
Thanks in advance.
Edited: the problem was inside my nginx config file. The strange thing is that I got no error from the nginx server.
Changed from
location \ {
to
location / {
It works fine now. Thanks @Oliver Charlesworth.
I need a little direction here. I want to get https with my hostname that I generated at No-IP working with my Plex Media Server. I can connect through my hostname to my plex media server just fine I just want letsencrypt to generate secure SSL certs for it.
I run the following command:
sudo su -
./certbot-auto --webroot "/var/lib/plexmediaserver/Library/Application Support" -d example.com
and it return the following error:
letsencrypt: error: unrecognized arguments: /var/lib/plexmediaserver/Library/Application Support
If I run the following command:
sudo su -
./certbot-auto certonly --standalone -d example.com
It return the following error:
Failed authorization procedure. example.com (tls-sni-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Incorrect validation certificate for TLS-SNI-01 challenge. Requested e1b6ab6aa7251a908a0f2fc1dd6a3597.beae34c6504c7db8412d92c3f1885e08.acme.invalid from 1.2.3.4:443. Received certificate containing '*.0beedbf17c2042c089ef5e20952e62c8.plex.direct'
I really don't even know if that is the right webroot or not. I'm at a complete loss as to where to go from here. This is the last step in my puzzle, and any direction would be helpful.
Note: This is running on a Raspberry Pi 3.
I'm assuming you already have Plex set up, so I will skip that part; if not, look at this link: wesleysinstructions.weebly.com
Go to No-IP (or any other service you want to use for a hostname) and setup a hostname
Login To the dashboard.
On the side bar click "Dynamic DNS"
Select "Hostnames"
On that page click the button "Add Hostname"
Fill that out and you now have a hostname (note: this takes about 5 minutes to become active)
Install the Dynamic DNS client to link your Plex IP address (which is always changing) to your hostname on No-IP.com
Note: They have instructions on their website on how to do this
On your router port forward 443/80 to where you're hosting plex
Visit portforward.com for instructions regarding your exact router
SSH into your plex server
Install "certbot" by LetsEncrypt
mkdir ~/certs
cd ~/certs
wget https://dl.eff.org/certbot-auto
sudo chmod a+x certbot-auto
sudo ./certbot-auto certonly --standalone -d <hostname>
NOTE: This will attempt to verify the host over 443.
If everything goes well you should get a message that looks something like this:
Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/<hostname>/fullchain.pem. Your cert
will expire on..
Set up a reverse nginx proxy to serve your cert.
sudo apt-get update
sudo apt-get install nginx -y
sudo unlink /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/sites-available/reverse
The "reverse" file is set up something like the following:
server {
    listen 80;
    server_name <hostname>;
    rewrite ^ https://$host$request_uri? permanent;

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/<hostname>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<hostname>/privkey.pem;

    #root /usr/share/nginx/html;
    #index index.html index.htm;

    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        proxy_pass http://127.0.0.1:32400;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
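Note that a single server block that listens on both 80 and 443 while unconditionally rewriting to HTTPS will loop on the HTTPS side. A common variant (a sketch, using the same certificate paths and Plex port as above) splits it into two server blocks:

```nginx
# Redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name <hostname>;
    return 301 https://$host$request_uri;
}

# Terminate TLS and proxy to Plex
server {
    listen 443 ssl;
    server_name <hostname>;

    ssl_certificate     /etc/letsencrypt/live/<hostname>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<hostname>/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:32400;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```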
Note: This assumes you have the default plex setup where it is using port 32400.
Finish the setup
sudo ln -s /etc/nginx/sites-available/reverse /etc/nginx/sites-enabled/reverse
sudo nginx -t
sudo service nginx restart
Hopefully I didn't type anything wrong. If I did, at least this is the setup process you will need to go through.