I am setting up a server for my university that must be accessible only from inside its network.
That part is easy to do with nginx. Unfortunately, some people do not have access to this network/IP range. I could use basic authentication with source IP whitelisting, but I would prefer to use a client certificate.
Is there a way to first check whether the request comes from within the allowed IP range and, if not, ask for a certificate?
I tried something like:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/test.de/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/test.de/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    # server_name _;
    server_name test.de;

    ssl_client_certificate /etc/nginx/client_certs/ca.crt;
    ssl_verify_client optional;

    error_log /var/log/nginx/errors.log debug;

    location / {
        satisfy any;
        allow 123.0.0.0/16;
        allow 456.0.0.0/16;
        deny all;

        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
        try_files $uri $uri/ =403;
    }
}

server {
    if ($host = test.de) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name test.de;
    return 404; # managed by Certbot
}
which is not working.
I could check the IP before checking $ssl_client_verify, like:
if ($remote_addr = 1.2.3.4) {
    proxy_pass http://10.10.10.1;
    break;
}

if ($ssl_client_verify != "SUCCESS") {
    return 403;
}
but this would not be feasible for every single IP address.
How could I handle this efficiently?
Thank you in advance
~Fabian
You may be able to use a geo block instead of the allow/deny statements and use $ssl_client_verify as the default value.
For example:
geo $verify {
    123.0.0.0/16 "SUCCESS";
    456.0.0.0/16 "SUCCESS";
    default      $ssl_client_verify;
}

server {
    if ($verify != "SUCCESS") { return 403; }
    ...
}
See the nginx geo module documentation for details.
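If your nginx version does not expand a variable used as a geo value, a similar effect can be had by chaining geo with a map, since map result values may contain variables. A sketch, reusing the placeholder ranges from the question; not tested against your setup:

# http context
geo $ip_allowed {
    default      0;
    123.0.0.0/16 1;   # placeholder ranges from the question
    456.0.0.0/16 1;
}

# $verify is "SUCCESS" for whitelisted IPs; otherwise it carries
# the client-certificate verification result
map $ip_allowed $verify {
    1       "SUCCESS";
    default $ssl_client_verify;
}

server {
    # listen/ssl directives as in the question
    ...
    ssl_verify_client optional;

    location / {
        if ($verify != "SUCCESS") {
            return 403;
        }
        try_files $uri $uri/ =403;
    }
}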
I've been struggling to figure it all out, but I finally have something working with the following:
nginx:
server {
    server_name example.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/example/example;
    }

    location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        include proxy_params;
        proxy_pass http://unix:/tmp/daphne.sock;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name example.com;
    return 404; # managed by Certbot
}
daphne.service:
[Unit]
Description=daphne daemon
After=network.target
[Service]
User=example
Group=www-data
WorkingDirectory=/home/example/example
StandardOutput=file:/var/example.log
StandardError=file:/var/example.log
ExecStart=/home/example/example/venv/bin/daphne \
-u /tmp/daphne.sock \
project.asgi:application
[Install]
WantedBy=multi-user.target
I'm constrained to using HTTPS. The page does load with that. However, if I try to make a websocket connection it fails with something about mixed protocols. So I change 'ws://' to 'wss://', and now I get "The URL 'wss://' is invalid". How can I get this to work?
The problem was quite simple to resolve. In my JavaScript I had
new WebSocket(
    window.location.protocol == 'https:' ? 'wss://' : 'ws://'
    + window.location.host
    + '/ws/'
);
when I should have had
new WebSocket(
    (window.location.protocol == 'https:' ? 'wss://' : 'ws://')
    + window.location.host
    + '/ws/'
);
Note the parentheses: + binds more tightly than the conditional operator, so without them the host and path are appended only to 'ws://'. On an HTTPS page the whole expression therefore evaluates to just 'wss://', which is exactly the invalid URL from the error message.
So, I manage the domain britoanderson.com and I am trying to get SSL to work on it.
I used certbot to create the certificate for both the www subdomain and the main britoanderson.com domain.
I set Cloudflare to "Full" encryption mode.
For some reason, the SSL certificate works on https://www.britoanderson.com/ but not on https://britoanderson.com/, where the website just refuses to open.
Here is my nginx default file:
server {
    if ($host = www.britoanderson.com) {
        return 301 https://www.$host$request_uri;
    } # managed by Certbot

    if ($host = britoanderson.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html/;
    index index.php index.html index.htm;
    server_name britoanderson.com www.britoanderson.com;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/britoanderson.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/britoanderson.com/privkey.pem; # managed by Certbot

    root /var/www/html/;
    index index.php index.html index.htm;
    server_name britoanderson.com www.britoanderson.com;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}
Both A records, for the main domain britoanderson.com and for the www subdomain, have been set in Cloudflare.
What am I doing wrong? Why does the main website just refuse to open?
Have you restarted nginx after issuing the certificates? I can only access the HTTP site, not the HTTPS site, so it looks like the HTTPS forwarding that certbot set up isn't working yet either.
It turns out my PC was giving me DNS_PROBE_FINISHED_NXDOMAIN, while the actual error was in the redirects. Removing the
if ($host = www.britoanderson.com) {
    return 301 https://www.$host$request_uri;
} # managed by Certbot

if ($host = britoanderson.com) {
    return 301 https://$host$request_uri;
} # managed by Certbot
fixed the issue.
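If HTTP-to-HTTPS redirection is still wanted after dropping those if blocks, replacing the port-80 server's content handling with a single unconditional redirect is a common pattern. A sketch, not taken from the actual fixed config:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name britoanderson.com www.britoanderson.com;

    # send everything on port 80 to the HTTPS server
    return 301 https://$host$request_uri;
}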
I have an nginx server with SSL. When I use proxy_pass to http://127.0.0.1:8765 with SSL added, it gives "This site can't be reached". Without SSL it was working correctly. If I send a request to http://domain-name:8765 in a web browser, it returns the output correctly. Below is my configuration:
server {
    server_name domain-name.in;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;

    location / {
        proxy_pass http://127.0.0.1:8765;
    }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/domain-name.in/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/domain-name.in/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = domain-name.in) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name domain-name.in;
    return 404; # managed by Certbot
}
I have two virtual hosts configured in nginx, both using SSL, so that http://www.firstsite.com redirects to https://www.firstsite.com, which works correctly. The problem is that http://www.secondsite.com does not redirect to https://www.secondsite.com, but to https://www.firstsite.com.
This is the first config file:
server {
    listen 80;
    return 301 https://www.dianadelvalle.com$request_uri;
    server_name www.dianadelvalle.com;
}

server {
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/www.koohack.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.koohack.com/privkey.pem;

    root /home/pi/www.dianadelvalle.com/;
    index commingsoon.html index.html index.htm index.nginx-debian.html;
    server_name www.dianadelvalle.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # max upload size
    client_max_body_size 5M; # adjust to taste

    location / {
        try_files $uri $uri/ =404;
    }
}
And this is the second config file:
# the upstream component nginx needs to connect to
upstream django {
    server unix:///home/pi/koohack/mysite.sock; # for a file socket
    #server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

server {
    listen 80;
    server_name www.koohack.com;
    return 301 https://www.koohack.com$request_uri;
}

# configuration of the server
server {
    listen 443 ssl;
    server_name www.koohack.com;

    ssl_certificate /etc/letsencrypt/live/www.koohack.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.koohack.com/privkey.pem;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # max upload size
    client_max_body_size 15M; # adjust to taste

    if (-f /home/pi/koohack/.maintenance) {
        return 503;
    }

    error_page 503 @maintenance;
    location @maintenance {
        rewrite ^(.*)$ /home/pi/koohack/static/maintenance.html break;
    }

    # Django media
    location /media {
        alias /home/pi/koohack/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /home/pi/koohack/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }

    location /.well-known {
        alias /home/pi/koohack/.well-known;
    }
}
I spared the server names, log paths and certificate paths for clarity. What am I doing wrong? Any suggestions?
Necessary note: I already looked at this possible answer to avoid content duplication, but it didn't help.
You may have the following configs:
server_name my.domain.com;
ssl_certificate /etc/nginx/chain.pem;
ssl_certificate_key /etc/nginx/my.domain.key;
Check that your second site is also listening on the SSL ports:
listen 443 ssl;
listen [::]:443 ssl;
If the second site is missing the listen config, its requests will be handled by the default server instead, regardless of the SSL certificate configs.
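In other words, every HTTPS site needs its own server block with its own listen 443 and certificate lines. A minimal sketch for the second site; the names and paths here are placeholders, not taken from the configs above:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.secondsite.com;

    ssl_certificate     /etc/letsencrypt/live/www.secondsite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.secondsite.com/privkey.pem;

    root /var/www/secondsite;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}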
So, I have multiple domains with multiple Let's Encrypt SSL certificates (one per domain), which all point to the same app (upstream). Currently I am using the config below. However, it is quite a lot of code, especially if I have to replicate it for every domain. So I am wondering whether there is a way to combine it so that most of the code exists only once, which would make it much easier to maintain.
The redirect for https://www.any-domain-here is problematic, as is the last, main server block, since both require the SSL certificate and I would need to include those for all the different domains. So is there a way to do this without duplicating those code blocks?
############################
#
# Upstream
#
upstream upstream {
    least_conn;
    server app:8080;
}

upstream blog.upstream {
    least_conn;
    server app_nginx;
}

############################
#
# redirect all 80 to 443
# and allow Let's Encrypt
#
server {
    server_name ~.;
    listen 80;
    listen [::]:80;

    # config for .well-known
    include /etc/nginx/includes/letsencrypt.conf;

    location / {
        return 301 https://$host$uri;
    }
}

############################
#
# Redirect all www to non-www
#
server {
    server_name "~^www\.(.*)$";
    return 301 https://$1$request_uri;

    ssl_certificate /etc/letsencrypt/live/www.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.domain.com/privkey.pem;
}

##########################
# HTTPS
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name domain.com;

    location /blog/ {
        proxy_set_header Host $host;
        proxy_pass http://blog.upstream;
    }

    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    # access_log
    access_log /var/log/nginx/access.log;

    # proxy_pass config
    location / {
        # include proxy presets
        include /etc/nginx/includes/proxy.conf;
        proxy_pass http://domain.com$uri;
    }

    # general ssl parameters
    include /etc/nginx/includes/ssl-params-with-preload.conf;

    root /var/www/html;
}
I solved this by creating quite a few include files.
I have the following default.conf now:
# don't redirect proxy
proxy_redirect off;

# turn off global logging
access_log off;

# DON'T enable gzip as it opens up vulnerabilities

# logging format
log_format compression '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $bytes_sent '
                       '"$http_referer" "$http_user_agent" "$gzip_ratio"';

############################
#
# redirect all 80 to 443
# and allow Let's Encrypt
#
server {
    listen 80;
    listen [::]:80;
    server_name ~.;

    location /.well-known/acme-challenge {
        root /var/www/html;
        default_type text/plain;
        # allow all;
    }

    location / {
        return 301 https://$host$uri;
    }
}

# include website configs
include /etc/nginx/includes/nginx-server.conf;
My nginx-server.conf has the following content:
############################
#
# Upstream
#
upstream veare_upstream {
    server veare:8080;
}

############################
#
# redirect all 80 to 443
# and allow Let's Encrypt
#
server {
    server_name www.veare.de;
    listen 80;
    listen [::]:80;
    root /var/www/html;

    location /.well-known/acme-challenge {
        default_type text/plain;
    }

    location / {
        return 301 https://$host$uri;
    }
}

############################
#
# Redirect all www to non-www
#
server {
    listen 80;
    listen [::]:80;
    server_name "~^www\.(.*)$";
    return 301 https://$1$request_uri;
}

##########################
# HTTPS
include /etc/nginx/includes/domains/*.conf;
The last line includes all my domain files, e.g. veare.de.conf; they are all named exactly after the domain:
############################
#
# Redirect all www to non-www
#
#
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.veare.de;

    ssl_certificate /etc/letsencrypt/live/www.veare.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.veare.de/privkey.pem;

    return 301 https://veare.de$request_uri;
}

##########################
# HTTPS
server {
    server_name veare.de;

    ssl_certificate /etc/letsencrypt/live/veare.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/veare.de/privkey.pem;

    location ^~ /.well-known/acme-challenge {
        allow all;
        # Set correct content type. According to this:
        # https://community.letsencrypt.org/t/using-the-webroot-domain-verification-method/1445/29
        # Current specification requires "text/plain" or no content header at all.
        # It seems that "text/plain" is a safe option.
        default_type "text/plain";
        root /var/www/html;
    }

    include /etc/nginx/includes/main-server.conf;
}
This works perfectly for me.
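The main-server.conf pulled in at the end holds the parts every per-domain HTTPS server shares; its exact contents are not shown in the answer. A hypothetical sketch based on the directives used earlier (the listen, proxy and include lines are assumptions, not the author's actual file):

# /etc/nginx/includes/main-server.conf (hypothetical contents)
listen 443 ssl http2;
listen [::]:443 ssl http2;

# shared TLS parameters
include /etc/nginx/includes/ssl-params-with-preload.conf;

root /var/www/html;

location / {
    # shared proxy presets, then hand off to the app upstream
    include /etc/nginx/includes/proxy.conf;
    proxy_pass http://veare_upstream;
}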