Nginx in Docker: can't access the server through server_name (SSL)

I have an issue with my Nginx configuration: I can't get my server_name to work.
I built my Docker container with the Nginx configuration inside.
The body of my Dockerfile:
FROM nginx
RUN rm -rf /etc/nginx/conf.d/*
RUN mkdir /etc/nginx/ssl
RUN chown -R root:root /etc/nginx/ssl
RUN chmod -R 600 /etc/nginx/ssl
COPY etc/ssl/certs/qwobbleprod.crt /etc/nginx/ssl
COPY etc/ssl/certs/app.qwobble.com.key /etc/nginx/ssl
COPY nginx/default.conf /etc/nginx/conf.d/
COPY dist /usr/share/nginx/html
EXPOSE 443
and my Nginx configuration file:
server {
    listen 443 ssl default_server;
    root /usr/share/nginx/html;
    server_name blabla.com www.blabla.com;
    access_log /var/log/nginx/nginx.access.log;
    error_log /var/log/nginx/nginx.error.log;
    ssl on;
    ssl_certificate /etc/nginx/ssl/blabla.crt;
    ssl_certificate_key /etc/nginx/ssl/blabla.com.key;
    sendfile on;
    location / {
        try_files $uri /index.html =404;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|html)$ {
        expires max;
        log_not_found off;
    }
}
I built and ran my Docker container:
docker build -t <name> .
docker run -it -p 443:443 <name>
As a result, my app is available at https://localhost:443,
but I can't access it through https://blabla.com:443 or https://www.blabla.com:443.
I'm a newbie in working with Docker and Nginx, and I have no idea what is wrong.
I will be grateful for any help!

In this case you actually need to own the blabla.com domain, and its DNS (Domain Name System) record should point to your external IP address.
You must then configure your router to accept connections on port 443 (as you desire) and forward them (port forwarding) to the computer running your Docker image, on the port it is actually published on.
It might also be necessary to open the firewall on the computer Docker is running on.
Since you want to listen on HTTPS, you will also need certificates that match the domain.
Or, if you want to fake it, you can edit your hosts file (/etc/hosts on macOS or Linux) and add an entry like:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 blabla.com
but now blabla.com will only work on your machine...
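If editing the hosts file feels too invasive, curl can fake the DNS lookup for a single request instead; a quick sketch (blabla.com and the self-signed certificate are assumptions carried over from the post):

```shell
# Map blabla.com:443 to 127.0.0.1 for this one request only;
# -k skips certificate verification in case the cert is self-signed.
curl -k --resolve blabla.com:443:127.0.0.1 https://blabla.com/
```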
Hope it helps

Related

How to correctly use Nginx as reverse proxy for multiple Apache Docker containers with SSL?

Given the following docker containers:
an nginx service that runs an unmodified official nginx:latest image
container name: proxy
two applications running in separate containers based on modified official php:7.4.1-apache images
container names: app1 and app2
all containers proxy, app1, and app2 are in the same Docker-created network with automatic DNS resolution
With the following example domain names:
local.web.test
local1.web.test
local2.web.test
I want to achieve the following behavior:
serve local.web.test from nginx as the default server block
configure nginx to proxy requests from local1.web.test and local2.web.test to app1 and app2, respectively, both listening on port 80
configure nginx to serve all three domain names using a wildcard SSL certificate
I experience two problems:
I notice the following error in the nginx logs:
2020/06/28 20:00:59 [crit] 27#27: *36 SSL_read() failed (SSL: error:14191044:SSL routines:tls1_enc:internal error) while waiting for request, client: 172.19.0.1, server: 0.0.0.0:443
mod_rpaf seems not to work properly (i.e., the IP address in the Apache access logs is that of the nginx server [e.g., 172.19.0.2] instead of that of the client issuing the request):
172.19.0.2 - - [28/Jun/2020:20:05:05 +0000] "GET /favicon.ico HTTP/1.0" 404 457 "http://local1.web.test/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
the output of phpinfo() for Apache Environment shows that:
HTTP_X_REAL_IP lists the client ip
SERVER_ADDR lists the app1 container ip (e.g., 172.19.0.4)
REMOTE_ADDR shows the proxy container ip (e.g., 172.19.0.2) instead of the client ip
To make this reproducible, this is how everything is set up. I tried this on my Windows machine so there are two preliminary steps.
Preliminary steps
a. in my C:\Windows\System32\drivers\etc\hosts file I added the following:
127.0.0.1 local.web.test
127.0.0.1 local1.web.test
127.0.0.1 local2.web.test
b. I generated a self-signed SSL certificate with the Common Name set to *.local.test via
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 -keyout localhost.key -out localhost.crt
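As a side note, the interactive prompts of that command can be skipped with -subj, and the resulting Common Name can be checked afterwards; a sketch using the file names from the post:

```shell
# Generate the self-signed wildcard certificate non-interactively,
# with the Common Name set to *.local.test as described above.
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=*.local.test" \
  -keyout localhost.key -out localhost.crt

# Confirm the Common Name on the certificate that was just written.
openssl x509 -in localhost.crt -noout -subject
```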
The proxy service setup
a. the nginx.yml for the docker-compose:
version: "3.8"
services:
  nginx:
    image: nginx:latest
    container_name: proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./certs:/etc/ssl/nginx
      - ./static/local.web.test:/usr/share/nginx/html
    networks:
      - proxy
networks:
  proxy:
    driver: bridge
b. within ./nginx that is bind mounted at /etc/nginx/conf.d there is a file default.conf that contains:
server {
    listen 80 default_server;
    server_name local.web.test;
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    server_name local.web.test;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    ssl_certificate /etc/ssl/nginx/localhost.crt;
    ssl_certificate_key /etc/ssl/nginx/localhost.key;
}
c. the ./certs:/etc/ssl/nginx bind mounts the folder containing the self-signed certificate and key
d. the ./static/local.web.test:/usr/share/nginx/html makes available a file index.html that contains
<h1>local.web.test</h1>
The app1 and app2 services setup
a. the apache.yml for the docker-compose:
version: "3.8"
services:
  app1:
    build:
      context: .
      dockerfile: apache.dockerfile
    image: app1
    container_name: app1
    volumes:
      - ./static/local1.web.test:/var/www/html
    networks:
      - exp_proxy
  app2:
    build:
      context: .
      dockerfile: apache.dockerfile
    image: app2
    container_name: app2
    volumes:
      - ./static/local2.web.test:/var/www/html
    networks:
      - exp_proxy
networks:
  # Note: the network is named `exp_proxy` because the root directory is `exp`.
  exp_proxy:
    external: true
b. the apache.dockerfile image looks like this:
# Base image.
FROM php:7.4.1-apache
# Install dependencies.
RUN apt-get update && apt-get install -y curl nano wget unzip build-essential apache2-dev
# Clear cache.
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Change working directory.
WORKDIR /root
# Fetch mod_rpaf.
RUN wget https://github.com/gnif/mod_rpaf/archive/stable.zip
# Unzip.
RUN unzip stable.zip
# Change working directory.
WORKDIR /root/mod_rpaf-stable
# Compile and install.
RUN make && make install
# Register the module for load.
RUN echo "LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf.so" > /etc/apache2/mods-available/rpaf.load
# Copy the configuration for mod_rpaf.
COPY ./apache/mods/rpaf.conf /etc/apache2/mods-available/rpaf.conf
# Enable the module.
RUN a2enmod rpaf
# Set working directory.
WORKDIR /var/www/html
c. the file ./apache/mods/rpaf.conf copied contains:
<IfModule mod_rpaf.c>
    RPAF_Enable On
    RPAF_Header X-Real-Ip
    RPAF_ProxyIPs 127.0.0.1
    RPAF_SetHostName On
    RPAF_SetHTTPS On
    RPAF_SetPort On
</IfModule>
d. the ./static/local1.web.test:/var/www/html bind mounts an index.php file containing:
<h1>local1.web.test</h1>
<?php phpinfo(); ?>
the same goes for ./static/local2.web.test:/var/www/html
e. the 000-default.conf virtual hosts in app1 and app2 are not modified:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Starting the setup
a. start the proxy server
docker-compose -f nginx.yml up -d --build
b. start the app1 and app2 services
docker-compose -f apache.yml up -d --build
c. check containers to see if mod_rpaf is enabled
docker-compose -f apache.yml exec app1 apachectl -t -D DUMP_MODULES
d. add two files in ./nginx that will be available on the proxy container at /etc/nginx/conf.d
local1.web.test.conf containing
upstream app-1 {
    server app1;
}
server {
    listen 80;
    server_name local1.web.test;
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    server_name local1.web.test;
    location / {
        proxy_pass http://app-1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    ssl_certificate /etc/ssl/nginx/localhost.crt;
    ssl_certificate_key /etc/ssl/nginx/localhost.key;
}
the second file is local2.web.test.conf with a similar setup (i.e., number 1 is replaced with 2)
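That second file can be generated mechanically rather than by hand; a sketch, assuming the only differences are the 1/2 suffixes used above:

```shell
# Derive local2.web.test.conf from local1.web.test.conf by renaming
# the host name, the upstream, and the container references.
sed -e 's/local1/local2/g' \
    -e 's/app-1/app-2/g' \
    -e 's/app1/app2/g' \
    local1.web.test.conf > local2.web.test.conf
```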
e. check the config and restart the proxy container (or reload the nginx server)
docker-compose -f nginx.yml exec proxy nginx -t
docker-compose -f nginx.yml exec proxy service nginx reload
The issues:
when I run docker logs proxy -f I notice the SSL internal error mentioned above: SSL_read() failed
someone faced a similar error (http2: SSL read failed while sending req in nginx) but in that case, the message more specifically points to the certificate authority
if I run docker logs app1 -f and visit https://local1.web.test, the ip in the GET request matches the ip of the proxy container (i.e., 172.19.0.2) and not that of the remote client
I suspect the culprit is this line: RPAF_ProxyIPs 127.0.0.1, but I can't hard-code the correct IP because I don't know which address the container will get in the exp_proxy network
also I can't use the hostname because RPAF_ProxyIPs expects an ip
docker inspect proxy shows "IPAddress": "172.19.0.2"
docker inspect app1 shows "IPAddress": "172.19.0.4"
I can't seem to understand what goes wrong and would appreciate your help.
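For reference, the unpredictable address described above can be pinned down: a compose file may fix the network's subnet via ipam, so RPAF_ProxyIPs could reference a known range instead of one unknown container address. A sketch only; the subnet below is an arbitrary example, not the one Docker actually assigned here:

```yaml
networks:
  exp_proxy:
    driver: bridge
    ipam:
      config:
        # With a fixed subnet, the proxy's address is drawn from a known
        # range that RPAF_ProxyIPs could then refer to.
        - subnet: 172.30.0.0/16
```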

Certbot Fails Domain Authentication

I am stumped. I have 2 different domains that I'm trying to install an SSL cert for with Certbot on a Digital Ocean Ubuntu server. Here is the final command I run to obtain the SSL cert:
sudo certbot --nginx -d mydomain1.com -d www.mydomain1.com
I run the exact same command for mydomain1.com and mydomain2.com
Here's what makes no sense. Authentication passes for mydomain1.com but FAILS for mydomain2.com
I'm using identical Nginx Server block config files for both domains. Yes, this includes the root filepath and the server names being identical in the Nginx config file for both.
I have both the config files set to the following:
listen 80;
listen [::]:80;
root /var/www/mydomain2.com;
index index.html index.htm index.nginx-debian.html;
server_name mydomain2.com www.mydomain2.com;
Yes, both config files (for mydomain1.com and mydomain2.com) are set to a root path of mydomain2.com and a server name of mydomain2.com, because I need Nginx to serve up the exact same content that I have in that directory. My intention was to have mydomain1.com redirect to mydomain2.com, but it appears it doesn't work like that; that is a separate problem. Right now I'm just trying to validate the SSL cert for mydomain2.com, then I'll figure out the redirect.
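For what it's worth, the redirect mentioned here is normally done with a separate server block rather than by reusing the same root and server_name; a minimal sketch using the question's placeholder domains:

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name mydomain1.com www.mydomain1.com;
    # Send everything to the canonical domain, preserving the path.
    return 301 https://mydomain2.com$request_uri;
}
```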
Thank you in advance for your help.
If I understood the problem correctly, the answer is as follows:
certbot certonly --standalone --preferred-challenges http -d domain1.com -d www.domain1.com
You do not need to modify the nginx default.conf file. Use the following method instead:
nano /etc/nginx/sites-available/domain1.com
server {
    listen 80;
    listen [::]:80;
    server_name domain1.com www.domain1.com;
    return 301 https://domain1.com$request_uri;
}
server {
    listen 443 ssl;
    server_name domain1.com;
    ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem;
}
Create the sites-enabled symbolic link:
ln -s /etc/nginx/sites-available/domain1.com /etc/nginx/sites-enabled
Do the same for domain2.com.

How to have both docker and apache using only one port?

I have Apache using port 80, and the server is only accessible from outside on port 80 due to the firewall. If I run a command like the following, there will be a conflict on port 80. Could anybody show me how to support both applications on the same port? (The two domains are mapped to the same IP, so they can presumably be separated by domain name.) Thanks.
docker run -d -p 80:8787 quay.io/hemberg-group/scrna-seq-course-rstudio
Running the container with -p 80:8787 will try to bind port 80 on the physical machine, and Apache also listens on port 80 by default.
You can't run more than one program listening on the same port, but if you just want to map the public port 80 to backend servers, you can use Nginx as a reverse proxy in front; then you can run any number of servers behind it.
Here is a simple Nginx config:
upstream rstudio_backend {
    server 127.0.0.1:8787;
    server 127.0.0.1:8788;
    server 127.0.0.1:8780;
}
server {
    listen 80;
    server_name domain1.com www.domain1.com;
    access_log logs/domain1.access.log main;
    location / {
        proxy_pass http://rstudio_backend;
    }
}
And your containers can be run as follows:
docker run -d -p 8787:8787 quay.io/hemberg-group/scrna-seq-course-rstudio
docker run -d -p 8788:8787 quay.io/hemberg-group/scrna-seq-course-rstudio
And your Apache server can be configured to use another port, for example 8780.
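Moving Apache to another port is a small change on a Debian-style layout; a sketch, assuming the stock ports.conf and default virtual host (8780 as above):

```apache
# /etc/apache2/ports.conf — replace the default "Listen 80"
Listen 8780

# /etc/apache2/sites-enabled/000-default.conf — match the new port:
# <VirtualHost *:8780>
#     ...
# </VirtualHost>
```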

ssl certificate for private docker repository

I know about starting a containerized private registry with TLS enabled, and about copying domain.crt to other Docker hosts to access the registry.
I have a private container registry server already running (not as a container; it is hosted at the office) and I can log in using a username and password. How can I use it with an SSL certificate?
I know I can generate a CA certificate. But how do I upload the private key to the registry? Like using SSH, where we upload the key to GitLab and keep the public key on the host machine.
Or how can I download the domain.crt file from the registry to the Docker host?
What am I missing?
Thanks and regards
I played with it a couple of years ago and got it working with nginx, with a configuration along these lines:
upstream private-docker-registry {
    server docker_registry:5000;
}
server {
    listen 443 ssl;
    listen 80;
    server_name mydockerregistry.com;
    ssl_certificate /etc/nginx/certs/mydockerregistry.com.crt;
    ssl_certificate_key /etc/nginx/certs/mydockerregistry.com.key;
    proxy_set_header Host $http_host; # required for Docker client sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client IP
    client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads
    # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;
    location / {
        # let Nginx know about our auth file
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/authentication/docker-registry.htpasswd;
        proxy_pass http://private-docker-registry;
    }
    location /_ping {
        auth_basic off;
        proxy_pass http://private-docker-registry;
    }
    location /v1/_ping {
        auth_basic off;
        proxy_pass http://private-docker-registry;
    }
}
Create an htpasswd file for authentication; in this example I called it docker-registry.htpasswd.
Then run an nginx image linked to the Docker registry container (called docker_registry in this example; in the nginx configuration above it is listening on port 5000). Running the nginx container will look something like this:
sudo docker run -d \
--name="nginx_docker_registry" \
-p 443:443 \
-p 80:80 \
--link my_docker_registry_container_name:docker_registry \
-v "path_to_htpasswd_file_in_host:/etc/nginx/authentication:ro" \
-v "path_to_certificates_in_host:/etc/nginx/certs:ro" \
-v "path_to_nginx_conf_in_host:/etc/nginx/nginx.conf:ro" \
nginx
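The docker-registry.htpasswd file mentioned above can be created with the htpasswd tool from apache2-utils, or with openssl alone; a sketch with example credentials:

```shell
# Write an apr1 (htpasswd-style MD5) entry for user "myuser";
# replace the example credentials with real ones.
printf 'myuser:%s\n' "$(openssl passwd -apr1 mypassword)" > docker-registry.htpasswd
```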

SSL Certs for Plex Media Server using Letstencrypt

I need a little direction here. I want to get HTTPS working with my Plex Media Server using the hostname I generated at No-IP. I can connect to my Plex Media Server through my hostname just fine; I just want Let's Encrypt to generate SSL certs for it.
I run the following command:
sudo su -
./certbot-auto --webroot "/var/lib/plexmediaserver/Library/Application Support" -d example.com
and it returns the following error:
letsencrypt: error: unrecognized arguments: /var/lib/plexmediaserver/Library/Application Support
If I run the following command:
sudo su -
./certbot-auto certonly --standalone -d example.com
It returns the following error:
Failed authorization procedure. example.com (tls-sni-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Incorrect validation certificate for TLS-SNI-01 challenge. Requested e1b6ab6aa7251a908a0f2fc1dd6a3597.beae34c6504c7db8412d92c3f1885e08.acme.invalid from 1.2.3.4:443. Received certificate containing '*.0beedbf17c2042c089ef5e20952e62c8.plex.direct'
I really don't even know if that is the right webroot or not. I'm at a complete loss as to where to go from here. This is the last step in my puzzle, and any direction would be helpful.
Note: This is running on a Raspberry Pi 3.
I'm assuming you already have Plex set up, so I will skip that part; if not, look at this link: wesleysinstructions.weebly.com
Go to No-IP (or any other service you want to use for a hostname) and setup a hostname
Login To the dashboard.
On the side bar click "Dynamic DNS"
Select "Hostnames"
On that page click the button "Add Hostname"
- Fill that out and you now have a hostname (Note: This takes about 5 minutes to become active)
Install the Dynamic DNS client to link your Plex IP address (which is always changing) to your hostname on No-IP.com
Note: They have instructions on their website on how to do this
On your router port forward 443/80 to where you're hosting plex
Visit portforward.com for instructions regarding your exact router
SSH into your plex server
Install "certbot" by LetsEncrypt
mkdir ~/certs
cd ~/certs
wget https://dl.eff.org/certbot-auto
sudo chmod a+x certbot-auto
sudo ./certbot-auto certonly --standalone -d <hostname>
NOTE: This will attempt to verify the host over 443.
If everything goes well you should get a message that looks something like this:
Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/<hostname>/fullchain.pem. Your cert
will expire on..
Setup a Reverse Nginx proxy to serve your cert.
sudo apt-get update
sudo apt-get install nginx -y
sudo unlink /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/sites-available/reverse
The "reverse" file is setup something like the following:
server {
listen 80;
server_name <hostname>;
rewrite https://$host$request_uri? permanent;
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/<hostname>/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/<hostname>/privkey.pem;
#root /usr/share/nginx/html;
#index index.html index.htm;
ssl_stapling on;
ssl_stapling_verify on;
location / {
proxy_pass http://127.0.0.1:32400;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Note: This assumes you have the default plex setup where it is using port 32400.
Finish the setup
sudo ln -s /etc/nginx/sites-available/reverse /etc/nginx/sites-enabled/reverse
sudo nginx -t
sudo service nginx restart
Hopefully I didn't type anything wrong. If I did at least this is the setup process you will need to go through.