I am trying to set up HTTPS on my Strapi production server, but it is proving very difficult. I followed the tutorial written by derrickmehaffy, but it does not work for me. :'(
My pm2 service:
(screenshot of the pm2 configuration)
My /etc/nginx/conf.d/upstream.conf file :
# Strapi upstream server
upstream strapi-gatsby {
server localhost:1337;
}
My /etc/nginx/sites-available/strapi.live-for-good.org.conf file :
server {
# Listen HTTP
listen 80;
server_name strapi.live-for-good.org;
# Define LE Location
location ~ ^/.well-known/acme-challenge/ {
default_type "text/plain";
root /var/www/html;
}
# Else Redirect to HTTPS // API
# location / {
# return 301 https://$host$request_uri;
# }
}
I removed the default config from sites-available and sites-enabled and created the symlink. When I check it with service nginx configtest, it fails.
But when I run sudo nginx -t && sudo service nginx reload, I get this:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
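For reference, this is the kind of HTTPS server block the tutorial builds toward; a minimal sketch, assuming certbot has already issued a certificate for strapi.live-for-good.org at the default Let's Encrypt paths:
server {
    # Listen HTTPS
    listen 443 ssl;
    server_name strapi.live-for-good.org;
    # Certificate paths (assumed Let's Encrypt defaults)
    ssl_certificate /etc/letsencrypt/live/strapi.live-for-good.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/strapi.live-for-good.org/privkey.pem;
    # Proxy everything to the Strapi upstream defined in upstream.conf
    location / {
        proxy_pass http://strapi-gatsby;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}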
System
Node.js version: 12.16.1
NPM version: 6.13.4
Strapi version: 3.0.0-beta.19.3
Database: mongoDB on MLab
Operating system: ubuntu 18.04
Related
Given the following docker containers:
an nginx service that runs an unmodified official nginx:latest image
container name: proxy
two applications running in separate containers based on modified official php:7.4.1-apache images
container names: app1 and app2
all containers proxy, app1, and app2 are in the same Docker-created network with automatic DNS resolution
With the following example domain names:
local.web.test
local1.web.test
local2.web.test
I want to achieve the following behavior:
serve local.web.test from nginx as the default server block
configure nginx to proxy requests from local1.web.test and local2.web.test to app1 and app2, respectively, both listening on port 80
configure nginx to serve all three domain names using a wildcard SSL certificate
I experience two problems:
I notice the following error in the nginx logs:
2020/06/28 20:00:59 [crit] 27#27: *36 SSL_read() failed (SSL: error:14191044:SSL routines:tls1_enc:internal error) while waiting for request, client: 172.19.0.1, server: 0.0.0.0:443
mod_rpaf does not seem to work properly (i.e., the IP address in the Apache access logs is that of the nginx server [e.g., 172.19.0.2] instead of the IP of the client that issued the request):
172.19.0.2 - - [28/Jun/2020:20:05:05 +0000] "GET /favicon.ico HTTP/1.0" 404 457 "http://local1.web.test/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36"
the output of phpinfo() for Apache Environment shows that:
HTTP_X_REAL_IP lists the client ip
SERVER_ADDR lists the app1 container ip (e.g., 172.19.0.4)
REMOTE_ADDR shows the proxy container ip (e.g., 172.19.0.2) instead of the client ip
To make this reproducible, this is how everything is set up. I tried this on my Windows machine so there are two preliminary steps.
Preliminary steps
a. in my C:\Windows\System32\drivers\etc\hosts file I added the following:
127.0.0.1 local.web.test
127.0.0.1 local1.web.test
127.0.0.1 local2.web.test
b. I generated a self-signed SSL certificate with the Common Name set to *.local.test via
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 -keyout localhost.key -out localhost.crt
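Note that modern browsers ignore the Common Name and require a subjectAltName, and a wildcard covering the three example domains above would be *.web.test. If that matters here, the certificate could instead be generated along these lines (a variation on the same command; -addext assumes OpenSSL 1.1.1 or newer):
openssl req -x509 -sha256 -nodes -newkey rsa:2048 -days 365 \
  -keyout localhost.key -out localhost.crt \
  -subj "/CN=*.web.test" \
  -addext "subjectAltName=DNS:*.web.test,DNS:web.test"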
The proxy service setup
a. the nginx.yml for the docker-compose:
version: "3.8"
services:
nginx:
image: nginx:latest
container_name: proxy
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx:/etc/nginx/conf.d
- ./certs:/etc/ssl/nginx
- ./static/local.web.test:/usr/share/nginx/html
networks:
- proxy
networks:
proxy:
driver: bridge
b. within ./nginx that is bind mounted at /etc/nginx/conf.d there is a file default.conf that contains:
server {
listen 80 default_server;
server_name local.web.test;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name local.web.test;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
ssl_certificate /etc/ssl/nginx/localhost.crt;
ssl_certificate_key /etc/ssl/nginx/localhost.key;
}
c. the ./certs:/etc/ssl/nginx bind mounts the folder containing the self-signed certificate and key
d. the ./static/local.web.test:/usr/share/nginx/html makes available a file index.html that contains
<h1>local.web.test</h1>
The app1 and app2 services setup
a. the apache.yml for the docker-compose:
version: "3.8"
services:
app1:
build:
context: .
dockerfile: apache.dockerfile
image: app1
container_name: app1
volumes:
- ./static/local1.web.test:/var/www/html
networks:
- exp_proxy
app2:
build:
context: .
dockerfile: apache.dockerfile
image: app2
container_name: app2
volumes:
- ./static/local2.web.test:/var/www/html
networks:
- exp_proxy
networks:
# Note: the network is named `exp_proxy` because the root directory is `exp`.
exp_proxy:
external: true
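Since the exp_proxy network is declared external here, it must already exist when this file is brought up; it is created by the nginx.yml run (project directory exp plus network name proxy), or it can be created by hand first:
docker network create exp_proxy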
b. the apache.dockerfile image looks like this:
# Base image.
FROM php:7.4.1-apache
# Install dependencies.
RUN apt-get update && apt-get install -y curl nano wget unzip build-essential apache2-dev
# Clear cache.
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Change working directory.
WORKDIR /root
# Fetch mod_rpaf.
RUN wget https://github.com/gnif/mod_rpaf/archive/stable.zip
# Unzip.
RUN unzip stable.zip
# Change working directory.
WORKDIR /root/mod_rpaf-stable
# Compile and install.
RUN make && make install
# Register the module for load.
RUN echo "LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf.so" > /etc/apache2/mods-available/rpaf.load
# Copy the configuration for mod_rpaf.
COPY ./apache/mods/rpaf.conf /etc/apache2/mods-available/rpaf.conf
# Enable the module.
RUN a2enmod rpaf
# Set working directory.
WORKDIR /var/www/html
c. the file ./apache/mods/rpaf.conf copied contains:
<IfModule mod_rpaf.c>
RPAF_Enable On
RPAF_Header X-Real-Ip
RPAF_ProxyIPs 127.0.0.1
RPAF_SetHostName On
RPAF_SetHTTPS On
RPAF_SetPort On
</IfModule>
d. the ./static/local1.web.test:/var/www/html bind mounts an index.php file containing:
<h1>local1.web.test</h1>
<?php phpinfo(); ?>
the same goes for ./static/local2.web.test:/var/www/html
e. the 000-default.conf virtual hosts in app1 and app2 are not modified:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Starting the setup
a. start the proxy server
docker-compose -f nginx.yml up -d --build
b. start the app1 and app2 services
docker-compose -f apache.yml up -d --build
c. check containers to see if mod_rpaf is enabled
docker-compose -f apache.yml exec app1 apachectl -t -D DUMP_MODULES
d. add two files in ./nginx that will be available on the proxy container at /etc/nginx/conf.d
local1.web.test.conf containing
upstream app-1 {
server app1;
}
server {
listen 80;
server_name local1.web.test;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name local1.web.test;
location / {
proxy_pass http://app-1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
ssl_certificate /etc/ssl/nginx/localhost.crt;
ssl_certificate_key /etc/ssl/nginx/localhost.key;
}
the second file is local2.web.test.conf with a similar setup (i.e., number 1 is replaced with 2)
e. check the config and restart the proxy container (or reload the nginx server)
docker-compose -f nginx.yml exec proxy nginx -t
docker-compose -f nginx.yml exec proxy service nginx reload
The issues:
when I run docker logs proxy -f I notice the SSL internal error mentioned above: SSL_read() failed
someone faced a similar error (http2: SSL read failed while sending req in nginx), but in that case the message more specifically points to the certificate authority
if I run docker logs app1 -f and visit https://local1.web.test, the ip in the GET request matches the ip of the proxy container (i.e., 172.19.0.2) and not that of the remote client
I suspect the culprit is this RPAF_ProxyIPs 127.0.0.1, but I can't hard-code the correct IP because I don't know what IP the container will get in the exp_proxy network
also, I can't use the hostname, because RPAF_ProxyIPs expects an IP
docker inspect proxy shows "IPAddress": "172.19.0.2"
docker inspect app1 shows "IPAddress": "172.19.0.4"
I can't seem to understand what goes wrong and would appreciate your help.
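One way around the unpredictable container IPs (my suggestion, not part of the original setup) would be to pin the subnet of the proxy network in nginx.yml so that RPAF_ProxyIPs can reference a known range; the gnif mod_rpaf README indicates CIDR ranges are accepted, but that is worth verifying against the installed version:
networks:
  proxy:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/16
# and in rpaf.conf, instead of 127.0.0.1:
RPAF_ProxyIPs 172.19.0.0/16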
My domain is: www.nace.network
My web server is (include version): nginx version: nginx/1.15.8
The operating system my web server runs on is (include version): Ubuntu 14.04.6 LTS
I can login to a root shell on my machine (yes or no, or I don’t know): yes
The version of my client is (e.g. output of certbot --version or certbot-auto --version if you’re using Certbot): certbot 0.31.0
Recently I was able to renew the certificate for my website. I can access it through www.nace.network, but when accessing my site without the "www" it shows the “Warning: Potential Security Risk Ahead” alert. How could I fix that? This is the content of my nginx file:
server {
listen 8080 default_server;
listen [::]:8080 default_server ipv6only=on;
server_name www.nace.network;
root /home/ubuntu/nace/public; #could maybe change this to dummy location like /nul
location / {
return 301 https://$host$request_uri;
}#location
}#server
server {
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
server_name www.nace.network;
passenger_enabled on;
rails_env production;
root /home/ubuntu/nace/public;
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location / {
deny 46.229.168.0;
deny 51.68.152.0;
}#location
location = /50x.html {
root html;
}#location
ssl_certificate /etc/letsencrypt/live/www.nace.network/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/www.nace.network/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
}#server
At the time, I renewed the certificate with this command:
ubuntu@ip-112-33-0-224:~/letsencrypt$ sudo -H ./letsencrypt-auto certonly --standalone -d nace.network -d www.nace.network
and this was the result
./letsencrypt-auto has insecure permissions!
To learn how to fix them, visit https://community.letsencrypt.org/t/certbot-auto-deployment-best-practices/91979/
/opt/eff.org/certbot/venv/local/lib/python2.7/site-packages/cryptography/hazmat/primitives/constant_time.py:26: CryptographyDeprecationWarning: Support for your Python version is deprecated. The next version of cryptography will remove support. Please upgrade to a release (2.7.7+) that supports hmac.compare_digest as soon as possible.
utils.PersistentlyDeprecated2018,
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for nace.network
Cleaning up challenges
Problem binding to port 80: Could not bind to IPv4 or IPv6.
I tried to combine the certificates with the command: certbot certonly -t -n --standalone --expand --rsa-key-size 4096 --agree-tos -d www.nace.network,nace.network
but it throws me the following:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Attempting to parse the version 0.39.0 renewal configuration file found at /etc/letsencrypt/renewal/www.nace.network.conf with version 0.31.0 of Certbot. This might not work.
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for nace.network
Cleaning up challenges
Problem binding to port 80: Could not bind to IPv4 or IPv6.
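The "Problem binding to port 80" error is what --standalone reports when something, presumably nginx here, is already listening on that port. One common workaround (a sketch, not from the original post) is to stop nginx for the duration of the challenge using certbot's hooks:
sudo ./letsencrypt-auto certonly --standalone --expand \
  -d nace.network -d www.nace.network \
  --pre-hook "service nginx stop" --post-hook "service nginx start"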
What names were configured on the cert?
Hi again. Reviewing your configs, I noticed that you do not have a server_name without www.
You can follow this: Nginx no-www to www and www to no-www
or simply edit the server_name to the one without "www" and then redirect it to www.yourdomain.stuff.
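For example, a redirect block along these lines could be added (a sketch only; it assumes the renewed certificate also covers the bare nace.network):
server {
    listen 443 ssl;
    server_name nace.network;
    ssl_certificate /etc/letsencrypt/live/www.nace.network/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.nace.network/privkey.pem; # managed by Certbot
    return 301 https://www.nace.network$request_uri;
}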
I'm trying to automate the setup of certbot + nginx on a server using Ansible.
The first time it runs, there are no Let's Encrypt certificates yet. However, I create the nginx conf as follows, referencing the SSL/cert paths that will be created by certbot:
server {
listen 443 ssl;
server_name example.co;
# ...
# SSL
ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
if ($host = example.co) {
return 301 https://$host$request_uri;
}
listen 80;
server_name example.co;
return 404;
}
Then later in the Ansible play I run certbot-auto with the --nginx plugin, but I receive an error:
> /usr/local/bin/certbot-auto certonly --nginx -n --agree-tos --text -d example.co --email admin@example.co
Error while running nginx -c /etc/nginx/nginx.conf -t.
nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/example.co/fullchain.pem"
It seems that certbot first checks the nginx conf before proceeding (which makes sense), but the conf fails validation since it refers to files that don't exist yet. Also, the --nginx plugin (or at least some plugin) is required, so I can't leave it off.
So I'm in a sort of chicken-and-egg situation:
I can't create the nginx conf before running certbot, because certbot tries to validate the nginx conf, and it fails because it references files that don't exist
I can't run certbot before creating the nginx conf, because certbot uses the site's conf to request new certificates
The only option I can see is to:
create the nginx conf without the #SSL lines
run certbot to get new certs
update the nginx conf file to add in the #SSL lines
This feels messy, but I'm not sure if there's another way?
What's the right order to run this in?
Thanks!
The .conf file surely needs to be there before running certbot. Certbot will then itself write the path to the certificates into the file, so step 3 should not be necessary.
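If the conf really has to reference certificate paths before certbot has run, one bootstrap approach (my suggestion, not part of the answer above) is to point the ssl_certificate directives at a throwaway self-signed certificate so that nginx -t passes, then let certbot's nginx installer swap in the real paths (note that certonly alone obtains the certificate but does not edit the conf). The /etc/nginx/ssl/temp.* paths below are illustrative:
sudo mkdir -p /etc/nginx/ssl
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
  -subj "/CN=example.co" \
  -keyout /etc/nginx/ssl/temp.key \
  -out /etc/nginx/ssl/temp.crt
The two Let's Encrypt include lines (options-ssl-nginx.conf and ssl-dhparams.pem) would likewise be left out of the initial conf, since those files also don't exist until certbot runs.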
Error that I'm getting:
nginx_prod_vet | 2019/03/07 20:57:11 [error] 6#6: *1 connect() failed
(111: Connection refused) while connecting to upstream, client:
172.23.0.1, server: , request: "GET /backend HTTP/1.1", upstream: "http://172.23.0.2:81/backend", host: "localhost:90"
My goal is to use nginx as a reverse proxy to deliver the frontend files and proxy the other services, so the backend would be accessible at localhost:90/backend when called from localhost:90/.
I tried to access the backend from outside the container, but it gives me the error above.
Here are the most relevant files:
# docker-compose.yml
version: '3'
services:
nginx:
container_name: nginx_prod_vet
build:
context: .
dockerfile: nginx/prod/Dockerfile
ports:
- "90:80"
volumes:
- ./nginx/prod/prod.conf:/etc/nginx/nginx.conf:ro
networks:
- main
depends_on:
- backend
backend:
container_name: backend_prod_vet
build:
context: .
dockerfile: apache/Dockerfile
ports:
- "81:81"
networks:
- main
networks:
main:
driver: bridge
# apache/Dockerfile
FROM httpd:2.4.32-alpine
RUN apk update; \
apk upgrade;
# Copy apache vhost file to proxy php requests to php-fpm container
COPY apache/apache.conf /usr/local/apache2/conf/apache.conf
RUN echo "Include /usr/local/apache2/conf/apache.conf" \
>> /usr/local/apache2/conf/httpd.conf
# apache/apache.conf
ServerName localhost
LoadModule deflate_module /usr/local/apache2/modules/mod_deflate.so
LoadModule proxy_module /usr/local/apache2/modules/mod_proxy.so
LoadModule proxy_fcgi_module /usr/local/apache2/modules/mod_proxy_fcgi.so
<VirtualHost *:81>
# Proxy .php requests to port 9000 of the php-fpm container
# ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://php:9000/var/www/html/$1
DocumentRoot /var/www/html/
<Directory /var/www/html/>
# DirectoryIndex index.php
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>
# Send apache logs to stdout and stderr
CustomLog /proc/self/fd/1 common
ErrorLog /proc/self/fd/2
</VirtualHost>
# nginx/prod/prod.conf
user nginx;
worker_processes 1;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
client_max_body_size 100m;
upstream backend {
server backend:81;
}
server {
listen 80;
charset utf-8;
root /dist/;
index index.html;
location /backend {
proxy_redirect off;
proxy_pass http://backend;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
}
}
}
# nginx/prod/Dockerfile
# build stage
FROM node:10.14.2-jessie as build-stage
WORKDIR /app/
COPY frontend/package.json /app/
RUN npm cache verify
RUN npm install
COPY frontend /app/
RUN npm run build
# production stage
FROM nginx:1.13.12-alpine as production-stage
COPY nginx/prod/prod.conf /etc/nginx/nginx.conf
COPY --from=build-stage /app/dist /dist/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Edit:
docker-compose exec backend netstat -lnpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.11:38317 0.0.0.0:* LISTEN -
tcp 0 0 :::80 :::* LISTEN 1/httpd
docker-compose exec nginx sh -c "nc backend 81 && echo opened || echo closed"
closed.
docker-compose exec backend netstat -lnpt shows us that the httpd webserver for service backend is listening on port 80 and not 81.
So most probably, your Dockerfile apache/Dockerfile is incorrect in how it tries to provide your custom httpd configuration apache/apache.conf.
To investigate further:
Make sure the main Apache conf contents are what you expect with: docker-compose exec backend cat /usr/local/apache2/conf/httpd.conf
Inspect your backend service log: docker-compose logs backend
Doing so, you will realize you are missing the Listen 81 directive in the main Apache config file. You can fix this in your apache/Dockerfile:
# apache/Dockerfile
FROM httpd:2.4.32-alpine
RUN apk update; \
apk upgrade;
# Copy apache vhost file to proxy php requests to php-fpm container
COPY apache/apache.conf /usr/local/apache2/conf/apache.conf
RUN echo "Listen 81" >> /usr/local/apache2/conf/httpd.conf
RUN echo "Include /usr/local/apache2/conf/apache.conf" >> /usr/local/apache2/conf/httpd.conf
Why have your backend container listen on port 81 at all?
It does not add any value to make your different containers open different ports. Each container has its own IP address, so there is no need to avoid port collisions between the services defined in a docker-compose project.
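In other words (a sketch of the simplification, assuming the stock httpd config), the backend could keep listening on its default port 80 and the upstream would simply point there:
# nginx/prod/prod.conf (excerpt)
upstream backend {
    # httpd listens on 80 by default, so no Listen 81 override is needed
    server backend:80;
}
The ports: "81:81" mapping in docker-compose.yml and the Listen 81 line then become unnecessary, since nginx reaches the backend over the internal Docker network.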
I am trying to password-protect the default server in my Nginx config. However, no username/password dialog is shown when I visit the site; Nginx returns the content as usual. Here is the complete configuration:
worker_processes 1;
events
{
multi_accept on;
}
http
{
include mime.types;
sendfile on;
tcp_nopush on;
keepalive_timeout 30;
tcp_nodelay on;
gzip on;
# Set path for Maxmind GeoLite database
geoip_country /usr/share/GeoIP/GeoIP.dat;
# Get the header set by the load balancer
real_ip_header X-Forwarded-For;
set_real_ip_from 0.0.0.0/0;
real_ip_recursive on;
server {
listen 80;
server_name sub.domain.com;
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/htpasswd/sub.domain.com.htpasswd;
expires -1;
access_log /var/log/nginx/sub.domain.com.access default;
error_log /var/log/nginx/sub.domain.com.error debug;
location / {
return 200 '{hello}';
}
}
}
Interestingly, when I tried using an invalid file path as the value of auth_basic_user_file, the configtest still passed. This should not be the case.
Here's the Nginx and system info:
[root@ip nginx]# nginx -v
nginx version: nginx/1.8.0
[root@ip nginx]# uname -a
Linux 4.1.7-15.23.amzn1.x86_64 #1 SMP Mon Sep 14 23:20:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
We are using the Nginx RPM available through yum.
You need to add auth_basic and auth_basic_user_file inside of your location block instead of the server block.
location / {
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/htpasswd/sub.domain.com.htpasswd;
return 200 '{hello}';
}
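In case it helps, the password file itself can be created with the htpasswd utility from apache2-utils (httpd-tools on RPM-based systems); the username below is a placeholder:
htpasswd -c /etc/nginx/htpasswd/sub.domain.com.htpasswd myuser
The -c flag creates the file; omit it when adding further users to an existing file.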
Did you try to reload or restart your nginx after basic auth was added to the config? It is necessary to reload nginx with something like:
sudo -i service nginx reload
in order to make the new settings take effect.
Also, I would double-check the URLs that are under test.
(Once I tried to test Nginx Basic Auth in an Nginx proxy configuration by accessing the URL of the resource behind the proxy rather than the URL of Nginx itself.)
P.S.
Using an invalid file path as the value of auth_basic_user_file still doesn't cause the configtest to fail in 2018 either.
Here's my version of Nginx:
nginx version: nginx/1.10.2
Though an invalid file path causes the Basic Auth check to fail and results in:
403 Forbidden
as the HTTP response after credentials are provided.
In my case, adding the directives to /etc/nginx/sites-available/default worked, whereas adding the directives to /etc/nginx/nginx.conf did not.
Of course this only happens if you have this in your nginx.conf file:
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
The config is simple (put it under location for a specific part of your website, or under server for your whole website):
server {
location /foo/ {
auth_basic "This part of website is restricted";
auth_basic_user_file /etc/apache2/.htpasswd;
}
}