SSL Certs for Plex Media Server using Let's Encrypt

I need a little direction here. I want to get HTTPS working with my Plex Media Server using the hostname I generated at No-IP. I can already connect to my Plex Media Server through my hostname just fine; I just want Let's Encrypt to generate valid SSL certs for it.
I run the following command:
sudo su -
./certbot-auto --webroot "/var/lib/plexmediaserver/Library/Application Support" -d example.com
and it returns the following error:
letsencrypt: error: unrecognized arguments: /var/lib/plexmediaserver/Library/Application Support
If I run the following command:
sudo su -
./certbot-auto certonly --standalone -d example.com
it returns the following error:
Failed authorization procedure. example.com (tls-sni-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Incorrect validation certificate for TLS-SNI-01 challenge. Requested e1b6ab6aa7251a908a0f2fc1dd6a3597.beae34c6504c7db8412d92c3f1885e08.acme.invalid from 1.2.3.4:443. Received certificate containing '*.0beedbf17c2042c089ef5e20952e62c8.plex.direct'
I really don't even know if that is the right webroot or not. I'm at a complete loss as to where to go from here. This is the last step in my puzzle and any direction would be helpful.
Note: This is running on a Raspberry Pi 3.

I'm assuming you already have Plex set up, so I will skip that part; if not, look at this link: wesleysinstructions.weebly.com
Go to No-IP (or any other service you want to use for a hostname) and set up a hostname:
Login To the dashboard.
On the side bar click "Dynamic DNS"
Select "Hostnames"
On that page click the button "Add Hostname"
- Fill that out and you now have a hostname (Note: this takes about 5 minutes to become active)
Install the Dynamic DNS client to keep your Plex server's (ever-changing) IP address linked to your hostname on No-IP.com.
Note: They have instructions on their website on how to do this; a rough sketch follows.
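For reference, a minimal sketch of installing No-IP's classic Linux update client (noip2). The download URL and version number here are assumptions based on No-IP's long-standing instructions, so check their site for the current ones:
cd /usr/local/src
sudo wget http://www.noip.com/client/linux/noip-duc-linux.tar.gz
sudo tar xzf noip-duc-linux.tar.gz
cd noip-2.1.9-1
sudo make install   # prompts for your No-IP account credentials
sudo noip2          # start the update client in the background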
On your router, forward ports 443 and 80 to the machine hosting Plex.
Visit portforward.com for instructions regarding your exact router
SSH into your Plex server
Install "certbot" by LetsEncrypt
mkdir ~/certs
cd ~/certs
wget https://dl.eff.org/certbot-auto
sudo chmod a+x certbot-auto
sudo ./certbot-auto certonly --standalone -d <hostname>
NOTE: This will attempt to verify the host over port 443, so make sure nothing else is bound to that port while the challenge runs.
If everything goes well you should get a message that looks something like this:
Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/<hostname>/fullchain.pem. Your cert
will expire on..
Set up an Nginx reverse proxy to serve your cert.
sudo apt-get update
sudo apt-get install nginx -y
sudo unlink /etc/nginx/sites-enabled/default
sudo nano /etc/nginx/sites-available/reverse
The "reverse" file is setup something like the following:
server {
listen 80;
server_name <hostname>;
rewrite https://$host$request_uri? permanent;
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/<hostname>/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/<hostname>/privkey.pem;
#root /usr/share/nginx/html;
#index index.html index.htm;
ssl_stapling on;
ssl_stapling_verify on;
location / {
proxy_pass http://127.0.0.1:32400;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Note: This assumes the default Plex setup, where Plex listens on port 32400.
Finish the setup
sudo ln -s /etc/nginx/sites-available/reverse /etc/nginx/sites-enabled/reverse
sudo nginx -t
sudo service nginx restart
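Once nginx restarts cleanly, it is worth checking the certificate from outside and wiring up renewal, since Let's Encrypt certs expire after 90 days. A rough sketch (the hooks free port 443 for the standalone challenge; the path assumes the certbot-auto download from the step above):
# confirm the proxy now serves the Let's Encrypt cert, not the plex.direct one
curl -vI https://<hostname> 2>&1 | grep -i 'subject\|issuer'
# renew (e.g. from a weekly cron job), stopping nginx around the challenge
sudo ~/certs/certbot-auto renew --pre-hook "service nginx stop" --post-hook "service nginx start"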
Hopefully I didn't type anything wrong; if I did, at least this outlines the setup process you will need to go through.

Related

How to correctly use Nginx as reverse proxy for multiple Apache Docker containers with SSL?

Given the following docker containers:
an nginx service that runs an unmodified official nginx:latest image
container name: proxy
two applications running in separate containers based on modified official php:7.4.1-apache images
container names: app1 and app2
all containers proxy, app1, and app2 are in the same Docker-created network with automatic DNS resolution
With the following example domain names:
local.web.test
local1.web.test
local2.web.test
I want to achieve the following behavior:
serve local.web.test from nginx as the default server block
configure nginx to proxy requests from local1.web.test and local2.web.test to app1 and app2, respectively, both listening on port 80
configure nginx to serve all three domain names using a wildcard SSL certificate
I experience two problems:
I notice the following error in the nginx logs:
2020/06/28 20:00:59 [crit] 27#27: *36 SSL_read() failed (SSL: error:14191044:SSL routines:tls1_enc:internal error) while waiting for request, client: 172.19.0.1, server: 0.0.0.0:443
mod_rpaf seems not to work properly (i.e., the IP address in the Apache access logs is that of the nginx container [e.g., 172.19.0.2] instead of the IP of the client that issued the request):
172.19.0.2 - - [28/Jun/2020:20:05:05 +0000] "GET /favicon.ico HTTP/1.0" 404 457 "http://local1.web.test/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
the output of phpinfo() for Apache Environment shows that:
HTTP_X_REAL_IP lists the client ip
SERVER_ADDR lists the app1 container ip (e.g., 172.19.0.4)
REMOTE_ADDR shows the proxy container ip (e.g., 172.19.0.2) instead of the client ip
To make this reproducible, this is how everything is set up. I tried this on my Windows machine so there are two preliminary steps.
Preliminary steps
a. in my C:\Windows\System32\drivers\etc\hosts file I added the following:
127.0.0.1 local.web.test
127.0.0.1 local1.web.test
127.0.0.1 local2.web.test
b. I generated a self-signed SSL certificate with the Common Name set to *.local.test via
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 -keyout localhost.key -out localhost.crt
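One detail worth flagging here (not part of the original question): modern TLS clients validate the subjectAltName extension rather than the Common Name, and a wildcard of *.local.test would not match local1.web.test in any case. A hedged variant of the same command that sets both, assuming OpenSSL 1.1.1+ for the -addext flag:
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 \
  -keyout localhost.key -out localhost.crt \
  -subj "/CN=*.web.test" \
  -addext "subjectAltName=DNS:*.web.test,DNS:web.test"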
The proxy service setup
a. the nginx.yml for the docker-compose:
version: "3.8"
services:
nginx:
image: nginx:latest
container_name: proxy
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx:/etc/nginx/conf.d
- ./certs:/etc/ssl/nginx
- ./static/local.web.test:/usr/share/nginx/html
networks:
- proxy
networks:
proxy:
driver: bridge
b. within ./nginx that is bind mounted at /etc/nginx/conf.d there is a file default.conf that contains:
server {
    listen 80 default_server;
    server_name local.web.test;
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    server_name local.web.test;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    ssl_certificate /etc/ssl/nginx/localhost.crt;
    ssl_certificate_key /etc/ssl/nginx/localhost.key;
}
c. the ./certs:/etc/ssl/nginx bind mounts the folder containing the self-signed certificate and key
d. the ./static/local.web.test:/usr/share/nginx/html makes available a file index.html that contains
<h1>local.web.test</h1>
The app1 and app2 services setup
a. the apache.yml for the docker-compose:
version: "3.8"
services:
app1:
build:
context: .
dockerfile: apache.dockerfile
image: app1
container_name: app1
volumes:
- ./static/local1.web.test:/var/www/html
networks:
- exp_proxy
app2:
build:
context: .
dockerfile: apache.dockerfile
image: app2
container_name: app2
volumes:
- ./static/local2.web.test:/var/www/html
networks:
- exp_proxy
networks:
# Note: the network is named `exp_proxy` because the root directory is `exp`.
exp_proxy:
external: true
b. the apache.dockerfile image looks like this:
# Base image.
FROM php:7.4.1-apache
# Install dependencies.
RUN apt-get update && apt-get install -y curl nano wget unzip build-essential apache2-dev
# Clear cache.
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Change working directory.
WORKDIR /root
# Fetch mod_rpaf.
RUN wget https://github.com/gnif/mod_rpaf/archive/stable.zip
# Unzip.
RUN unzip stable.zip
# Change working directory.
WORKDIR /root/mod_rpaf-stable
# Compile and install.
RUN make && make install
# Register the module for load.
RUN echo "LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf.so" > /etc/apache2/mods-available/rpaf.load
# Copy the configuration for mod_rpaf.
COPY ./apache/mods/rpaf.conf /etc/apache2/mods-available/rpaf.conf
# Enable the module.
RUN a2enmod rpaf
# Set working directory.
WORKDIR /var/www/html
c. the copied file ./apache/mods/rpaf.conf contains:
<IfModule mod_rpaf.c>
    RPAF_Enable On
    RPAF_Header X-Real-Ip
    RPAF_ProxyIPs 127.0.0.1
    RPAF_SetHostName On
    RPAF_SetHTTPS On
    RPAF_SetPort On
</IfModule>
d. the ./static/local1.web.test:/var/www/html bind mounts an index.php file containing:
<h1>local1.web.test</h1>
<?php phpinfo(); ?>
the same goes for ./static/local2.web.test:/var/www/html
e. the 000-default.conf virtual hosts in app1 and app2 are not modified:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Starting the setup
a. start the proxy server
docker-compose -f nginx.yml up -d --build
b. start the app1 and app2 services
docker-compose -f apache.yml up -d --build
c. check containers to see if mod_rpaf is enabled
docker-compose -f apache.yml exec app1 apachectl -t -D DUMP_MODULES
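If mod_rpaf compiled and loaded correctly, the module list printed by that command should include a line like the following (module name taken from the LoadModule directive above):
rpaf_module (shared)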
d. add two files in ./nginx that will be available on the proxy container at /etc/nginx/conf.d
local1.web.test.conf containing
upstream app-1 {
    server app1;
}
server {
    listen 80;
    server_name local1.web.test;
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    server_name local1.web.test;
    location / {
        proxy_pass http://app-1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    ssl_certificate /etc/ssl/nginx/localhost.crt;
    ssl_certificate_key /etc/ssl/nginx/localhost.key;
}
the second file is local2.web.test.conf with a similar setup (i.e., number 1 is replaced with 2)
e. check the config and restart the proxy container (or reload the nginx server)
docker-compose -f nginx.yml exec proxy nginx -t
docker-compose -f nginx.yml exec proxy service nginx reload
The issues:
when I run docker logs proxy -f I notice the SSL internal error mentioned above: SSL_read() failed
someone faced a similar error (http2: SSL read failed while sending req in nginx) but in that case, the message more specifically points to the certificate authority
if I run docker logs app1 -f and visit https://local1.web.test, the ip in the GET request matches the ip of the proxy container (i.e., 172.19.0.2) and not that of the remote client
I suspect the culprit is this RPAF_ProxyIPs 127.0.0.1, but I can't hard-code the proxy's IP because I don't know which IP the container will get in the exp_proxy network
also, I can't use the hostname because RPAF_ProxyIPs expects an IP (one possible workaround is sketched after this question)
docker inspect proxy shows "IPAddress": "172.19.0.2"
docker inspect app1 shows "IPAddress": "172.19.0.4"
I can't seem to understand what goes wrong and would appreciate your help.
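One possible workaround, sketched here as an untested suggestion rather than a confirmed fix: create the exp_proxy network with a fixed subnet, so the proxy's address range is known up front, and then whitelist that whole range instead of a single address.
docker network create --driver bridge --subnet 172.19.0.0/16 exp_proxy
Then, in ./apache/mods/rpaf.conf, list that range (the gnif fork of mod_rpaf documents CIDR ranges in RPAF_ProxyIPs; if your build only accepts single addresses, pin the proxy container to a fixed ipv4_address instead):
RPAF_ProxyIPs 127.0.0.1 172.19.0.0/16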

Add certificate to my site when accessing without “www”

My domain is: www.nace.network
My web server is (include version): nginx version: nginx/1.15.8
The operating system my web server runs on is (include version): Ubuntu 14.04.6 LTS
I can login to a root shell on my machine (yes or no, or I don’t know): yes
The version of my client is (e.g. output of certbot --version or certbot-auto --version if you’re using Certbot): certbot 0.31.0
Recently I was able to renew the certificate for my website. I can access it through www.nace.network, but when accessing my site without the “www” it shows the “Warning: Potential Security Risk Ahead” alert. How could I fix it? This is the content of my nginx file:
server {
    listen 8080 default_server;
    listen [::]:8080 default_server ipv6only=on;
    server_name www.nace.network;
    root /home/ubuntu/nace/public; #could maybe change this to dummy location like /nul
    location / {
        return 301 https://$host$request_uri;
    }#location
}#server
server {
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    server_name www.nace.network;
    passenger_enabled on;
    rails_env production;
    root /home/ubuntu/nace/public;
    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location / {
        deny 46.229.168.0;
        deny 51.68.152.0;
    }#location
    location = /50x.html {
        root html;
    }#location
    ssl_certificate /etc/letsencrypt/live/www.nace.network/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.nace.network/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
}#server
At the time, I renewed the certificate with this command:
ubuntu@ip-112-33-0-224:~/letsencrypt$ sudo -H ./letsencrypt-auto certonly --standalone -d nace.network -d www.nace.network
and this was the result:
./letsencrypt-auto has insecure permissions!
To learn how to fix them, visit https://community.letsencrypt.org/t/certbot-auto-deployment-best-practices/91979/
/opt/eff.org/certbot/venv/local/lib/python2.7/site-packages/cryptography/hazmat/primitives/constant_time.py:26: CryptographyDeprecationWarning: Support for your Python version is deprecated. The next version of cryptography will remove support. Please upgrade to a release (2.7.7+) that supports hmac.compare_digest as soon as possible.
utils.PersistentlyDeprecated2018,
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for nace.network
Cleaning up challenges
Problem binding to port 80: Could not bind to IPv4 or IPv6.
I tried to combine the certificates with the command: certbot certonly -t -n --standalone --expand --rsa-key-size 4096 --agree-tos -d www.nace.network,nace.network
but it throws me the following:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Attempting to parse the version 0.39.0 renewal configuration file found at /etc/letsencrypt/renewal/www.nace.network.conf with version 0.31.0 of Certbot. This might not work.
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for nace.network
Cleaning up challenges
Problem binding to port 80: Could not bind to IPv4 or IPv6.
What names were configured on the cert?
Hi again. Reviewing your configs, I noticed that you do not have a server block for the name without www.
You can follow this: Nginx no-www to www and www to no-www
or simply add a server_name entry for the one without "www" and then redirect it to www.yourdomain.stuff.
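As a sketch of that second option (assuming the expanded certificate covering both names was successfully issued), the extra server block could look like:
server {
    listen 443 ssl;
    server_name nace.network;
    ssl_certificate /etc/letsencrypt/live/www.nace.network/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.nace.network/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    return 301 https://www.nace.network$request_uri;
}
As for "Problem binding to port 80": the standalone authenticator needs that port free, so something (most likely nginx) was already listening on it; stopping nginx around the renewal, e.g. with --pre-hook "service nginx stop" --post-hook "service nginx start", should let the challenge through.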

Nginx in Docker: no access to server through server_name

I have an issue with my Nginx configuration: I can't get my server_name to work.
I tried to build my Docker container with the Nginx configuration inside.
Body of my Dockerfile:
FROM nginx
RUN rm -rf /etc/nginx/conf.d/*
RUN mkdir /etc/nginx/ssl
RUN chown -R root:root /etc/nginx/ssl
RUN chmod -R 600 /etc/nginx/ssl
COPY etc/ssl/certs/qwobbleprod.crt /etc/nginx/ssl
COPY etc/ssl/certs/app.qwobble.com.key /etc/nginx/ssl
COPY nginx/default.conf /etc/nginx/conf.d/
COPY dist /usr/share/nginx/html
EXPOSE 443
and my Nginx configuration file:
server {
    listen 443 ssl default_server;
    root /usr/share/nginx/html;
    server_name blabla.com www.blabla.com;
    access_log /var/log/nginx/nginx.access.log;
    error_log /var/log/nginx/nginx.error.log;
    ssl on;
    ssl_certificate /etc/nginx/ssl/blabla.crt;
    ssl_certificate_key /etc/nginx/ssl/blabla.com.key;
    sendfile on;
    location / {
        try_files $uri /index.html =404;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|html)$ {
        expires max;
        log_not_found off;
    }
}
I tried to build and run my Docker container:
docker build -t <name> .
docker run -it -p 443:443 <name>
As a result, I can reach my app at https://localhost:443,
but I can't access it through https://blabla.com:443 or https://www.blabla.com:443.
I'm a newbie at working with Docker and Nginx, and I have no idea what is wrong.
I will be grateful for any help!
In this case I would expect that you actually need to own the blabla.com domain and that its DNS (Domain Name System) record should point to your external IP address.
You must then configure the router to accept connections on port 443 (as you intend) and forward them (port forwarding) to the computer running your Docker image, on the port the container is actually published on.
It might also be necessary to open firewall settings on the computer Docker is running on.
I see you also want to serve HTTPS, so you will need certificates for that.
Or, if you want to fake it, you can edit your hosts file (on macOS or Linux, /etc/hosts) and add an entry like:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 blabla.com
but now blabla.com will only work on your machine...
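Alternatively, you can test name-based access without touching the hosts file at all; a quick sketch using curl's --resolve option (-k skips verification, since the certificate is presumably not trusted locally):
curl -vk --resolve blabla.com:443:127.0.0.1 https://blabla.com/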
Hope it helps

ssl certificate for private docker repository

I know about starting a containerized private registry with TLS enabled, and copying domain.crt to other Docker hosts so they can access the registry.
I have a private container registry server already running (not as a container, but on an office server), and I can log in using a username and password. How can I use it with an SSL certificate?
I know I can generate a CA certificate. But how do I upload the private key to the registry? Like with SSH, where we upload the key to GitLab and keep the public key on the host machine.
Or how can I download the domain.crt file from the registry to the Docker host?
What am I missing?
Thanks and regards
I played with it a couple of years ago and got it working with nginx, with a configuration along these lines:
upstream private-docker-registry {
    server docker_registry:5000;
}
server {
    listen 443 ssl;
    listen 80;
    server_name mydockerregistry.com;
    ssl_certificate /etc/nginx/certs/mydockerregistry.com.crt;
    ssl_certificate_key /etc/nginx/certs/mydockerregistry.com.key;
    proxy_set_header Host $http_host; # required for Docker client sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client IP
    client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads
    # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;
    location / {
        # let Nginx know about our auth file
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/authentication/docker-registry.htpasswd;
        proxy_pass http://private-docker-registry;
    }
    location /_ping {
        auth_basic off;
        proxy_pass http://private-docker-registry;
    }
    location /v1/_ping {
        auth_basic off;
        proxy_pass http://private-docker-registry;
    }
}
Create an htpasswd file for authentication; in this example I called it docker-registry.htpasswd.
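For example (assuming the apache2-utils package, which provides the htpasswd tool; the username is just a placeholder):
sudo apt-get install apache2-utils
htpasswd -c docker-registry.htpasswd myuser   # -c creates the file and prompts for a password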
Then run an nginx image linked to the Docker registry container (called docker_registry in the nginx configuration above, listening on port 5000). Running the nginx container will look something like this:
sudo docker run -d \
--name="nginx_docker_registry" \
-p 443:443 \
-p 80:80 \
--link my_docker_registry_container_name:docker_registry \
-v "path_to_htpasswd_file_in_host:/etc/nginx/authentication:ro" \
-v "path_to_certificates_in_host:/etc/nginx/certs:ro" \
-v "path_to_nginx_conf_in_host:/etc/nginx/nginx.conf:ro" \
nginx
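Once the nginx container is up, a rough way to verify the whole chain, using the example names from above:
# the ping endpoints are exempt from basic auth in the config above
curl https://mydockerregistry.com/v1/_ping
# docker login exercises TLS and the htpasswd file together
docker login mydockerregistry.com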

using certbot-auto for nginx

I have an nginx running.
Now I want my nginx to use SSL:
certbot-auto --nginx -d my.domain.com -n --agree-tos --email admin@mail.com
OUTPUT:
Performing the following challenges:
tls-sni-01 challenge for my.domain.com
Cleaning up challenges
Cannot find a VirtualHost matching domain my.domain.com.
my.domain.com points to the IP of my server; it's the server's DNS name.
What am I doing wrong? I already did this for Apache and it worked fine. My nginx is running (and I was not able to restart it manually after running certbot-auto, but that wasn't necessary when I used certbot-auto --apache).
In my case, I had to add the "server_name" line because it wasn't in my nginx config, so it was giving me the error message "Cannot find a VirtualHost matching domain my.domain.com" when I ran:
certbot --nginx
Make sure this is in your config:
server {
    server_name my.domain.com;
    ....
}
You are probably missing some server block (virtual host) files in the sites-enabled folder. Check whether your config files exist in /etc/nginx/sites-available and /etc/nginx/sites-enabled. If they are not present in the sites-enabled folder, create symbolic links for them:
$ sudo ln -s /etc/nginx/sites-available/my.domain.com /etc/nginx/sites-enabled/
Add your site, check for config errors, and restart nginx:
$ sudo certbot --nginx -d my.domain.com
$ sudo nginx -t
$ sudo service nginx restart
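If issuance succeeds, a dry run is a cheap way to confirm that unattended renewal will also work; certbot simulates the full challenge against the staging environment without saving any certificates:
$ sudo certbot renew --dry-run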