After http -> https nginx does not find static content - ssl

I had a working setup for static file serving with nginx, using http.
Later, when I switched to https, changing the port from 80 to 443 as well as adding ssl certificate and key, the web server cannot find the static files any more. I get 404 responses for all static files. This is my nginx configuration file:
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    upstream http_backend {
        server 127.0.0.1:3001;
        keepalive 32;
    }

    server {
        listen 80;
        server_name test.com;
        # Redirect to https
        location / {
            rewrite ^ https://$host$request_uri permanent;
        }
    }

    server {
        listen 443 default_server ssl;
        server_name localhost;
        root /var/www/myblog/app/resources/public;
        ssl on;
        ssl_certificate /etc/nginx/certificate.crt;
        ssl_certificate_key /etc/nginx/key.pem;

        location / {
            proxy_redirect off;
            proxy_pass http://http_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-Proto https;
            access_log /var/www/logs/myblog.access.log;
            error_log /var/www/logs/myblog.error.log;
        }

        location ^~ /(css|js) {
            root /var/www/myblog/app/resources/public;
        }
    }
}
My access log:
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /js/compiled/foo.js HTTP/1.1" 404 38 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /css/mui.css HTTP/1.1" 404 38 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
37.201.226.254 - - [26/Aug/2018:06:41:25 +0000] "GET /css/style.css HTTP/1.1" 304 0 "https://www.test.com/" "Mozilla/5.0 (X11; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0"
The files are in /var/www/myblog/app/resources/public, which is what root is set to. I run the web server from /var/www/myblog/app.
Bear in mind this is my first deployment with nginx. Does anyone have an idea what I forgot to configure in order to serve static files over https? My OS is Linux.

I found the problem through trial and error, and in hindsight the reason is clear: ^~ is a prefix-match modifier, not a regex, so
location ^~ /(css|js) {
    root /var/www/myblog/app/resources/public;
}
matches the literal prefix /(css|js), which no real URI starts with. Those requests fell through to location /, were proxied to the backend, and came back as 404s. Changing the block to a regex location:
location ~ ^/(css/|js/) {
    root /var/www/myblog/app/resources/public/;
}
fixes it.
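The same effect can be had without a regex by using one prefix location per directory (a sketch against the same root; note that ^~ takes a literal prefix, not a pattern, which is why the original /(css|js) never matched anything):

```nginx
# ^~ means: if this prefix matches, use this block and skip regex locations
location ^~ /css/ {
    root /var/www/myblog/app/resources/public;
}
location ^~ /js/ {
    root /var/www/myblog/app/resources/public;
}
```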

Related

nginx seems to cut off GET-params

I have a problem with a QGIS system behind an nginx reverse proxy.
Some requests result in a 404 when they go through nginx; without nginx, the 404 errors do not happen.
Here are the relevant parts of the nginx config:
log_format main '$host $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" $request_time $upstream_addr $upstream_http_status "$upstream_addr$request_uri"';
access_log /data/logs/nginx/access.log main buffer=4k;

upstream backend-geodatenportal {
    ip_hash;
    zone http_backend 256k;
    server 172.28.136.21:80 weight=1 max_fails=3 fail_timeout=30s;
    keepalive 1024;
}

server {
    listen geodatenportal.domain.tld:443 ssl;
    server_name geodatenportal.domain.tld;
    ...
    location / {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_set_header ClientProtocol https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend-main;
    }
    location ~/kommunalportal(.*)$ {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_set_header ClientProtocol https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend-geodatenportal/kommunalportal$1;
    }
Now the problematic request, as it appears in the nginx log:
geodatenportal.domain.tld - $IP - - [18/Sep/2019:09:59:54 +0200] "GET /kommunalportal/index.php/view/media/illustration?repository=kp&project=m01_Klostermansfeld HTTP/2.0" 404 72 "https://geodatenportal.domain.tld/kommunalportal/index.php/view/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" "-" 0.031 172.28.136.21:80 - "172.28.136.21:80/kommunalportal/index.php/view/media/illustration?repository=kp&project=m01_Klostermansfeld"
According to the log format above, the final quoted string should be the request sent to the backend, but this is what Apache logs:
194.113.79.210 - - [18/Sep/2019:09:59:54 +0200] "GET /kommunalportal/index.php/view/media/illustration HTTP/1.1" 404 72
So it seems the GET parameters get lost on the way from nginx to Apache.
Can someone explain this?
You use a regex location to match against the URI and then reuse the capture in proxy_pass. NGINX matches regex locations against the URI without its arguments, so the query string is never part of the capture.
If you want to pass the arguments on, just append them like so:
proxy_pass http://backend-geodatenportal/kommunalportal$1$is_args$args;
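In context, the fixed location block would look something like this (a sketch; $is_args expands to "?" only when the request actually has arguments, so URLs without a query string are unaffected):

```nginx
location ~ /kommunalportal(.*)$ {
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $host;
    proxy_set_header ClientProtocol https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # $1 holds only the captured path; $is_args$args re-appends the query string
    proxy_pass http://backend-geodatenportal/kommunalportal$1$is_args$args;
}
```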

How to serve devpi with https?

I have an out-of-the-box devpi-server running on http://
I need to get it to work over https:// instead.
I already have the certificates for the domain.
I followed the documentation for the nginx site config and created an /etc/nginx/conf.d/domain.conf file with a server{} block that points to my certificates (excerpt below).
However, devpi-server --start --init totally ignores any and all nginx configuration.
How do I point devpi-server at the nginx configuration? Is that even possible, or am I totally missing the point?
/etc/nginx/conf.d/domain.conf file contents:
server {
    server_name localhost $hostname "";
    listen 8081 ssl default_server;
    listen [::]:8081 ssl default_server;
    server_name domain;
    ssl_certificate /root/certs/domain/domain.crt;
    ssl_certificate_key /root/certs/domain/domain.key;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    gzip on;
    gzip_min_length 2000;
    gzip_proxied any;
    gzip_types application/json;
    proxy_read_timeout 60s;
    client_max_body_size 64M;

    # set to where your devpi-server state is on the filesystem
    root /root/.devpi/server;

    # try serving static files directly
    location ~ /\+f/ {
        # workaround to pass non-GET/HEAD requests through to the named location below
        error_page 418 = @proxy_to_app;
        if ($request_method !~ (GET)|(HEAD)) {
            return 418;
        }
        expires max;
        try_files /+files$uri @proxy_to_app;
    }

    # try serving docs directly
    location ~ /\+doc/ {
        try_files $uri @proxy_to_app;
    }

    location / {
        # workaround to pass all requests to / through to the named location below
        error_page 418 = @proxy_to_app;
        return 418;
    }

    location @proxy_to_app {
        proxy_pass https://localhost:8081;
        proxy_set_header X-outside-url $scheme://$host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
This is the answer I gave to the same question on Superuser.
Devpi doesn't know anything about nginx; it just serves HTTP traffic. When we want to interact with a web app over HTTPS instead, we as the client need to talk to a front end that can handle it (nginx), which in turn communicates with our web app. This use of nginx is known as a reverse proxy. As a reverse proxy, nginx can also serve static files more efficiently than the web app would itself (hence the "try serving..." location blocks).
Here is a complete working nginx config that I use for devpi. Note that this is an /etc/nginx/nginx.conf file rather than a domain config like yours, because I run nginx and devpi in Docker with Compose, but you should be able to pull out what you need:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # Define the location for devpi
    upstream pypi-backend {
        server localhost:8080;
    }

    # Redirect HTTP to HTTPS
    server {
        listen 80;
        listen [::]:80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name example.co.uk; # This is the accessing address eg. https://example.co.uk
        root /devpi/server; # This is where your devpi server directory is
        gzip on;
        gzip_min_length 2000;
        gzip_proxied any;
        proxy_read_timeout 60s;
        client_max_body_size 64M;
        ssl_certificate /etc/nginx/certs/cert.crt; # Path to certificate
        ssl_certificate_key /etc/nginx/certs/cert.key; # Path to certificate key
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
        access_log /var/log/nginx/pypi.access.log;

        # try serving static files directly
        location ~ /\+f/ {
            error_page 418 = @pypi_backend;
            if ($request_method !~ (GET)|(HEAD)) {
                return 418;
            }
            expires max;
            try_files /+files$uri @pypi_backend;
        }

        # try serving docs directly
        location ~ /\+doc/ {
            try_files $uri @pypi_backend;
        }

        location / {
            error_page 418 = @pypi_backend;
            return 418;
        }

        location @pypi_backend {
            proxy_pass http://pypi-backend; # Using the upstream definition
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-outside-url $scheme://$host:$server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
With nginx using this configuration and devpi running on http://localhost:8080, you should be able to access https://localhost, or, with appropriate DNS, https://example.co.uk. A request will flow:
client (HTTPS) > Nginx (HTTP) > devpi (HTTP) > Nginx (HTTPS) > client
This also means that you need to make sure nginx is running yourself, as devpi start won't know any better. At the very least you should see an nginx welcome page.

Assets Directory returning 404 via SSL on unicorn nginx Ubuntu 14

This is a similar question to this one, but the suggested answer does not work.
I have confirmed that the assets exist, and I have restarted nginx as well as the unicorn service:
$ service nginx restart
$ service unicorn_app_name restart
This is my /etc/nginx/sites-enabled/[app_name] config:
upstream unicorn {
    server unix:/home/unicorn_user/apps/app_name/shared/sock/unicorn.unicorn_user.sock fail_timeout=0;
}

server {
    listen 80;
    server_name staging.mydomain.org mydomain.org;
    # Do not use a /tmp folder or other users can obtain certificates.
    location '/.well-known/acme-challenge' {
        default_type "text/plain";
        root /etc/letsencrypt/webrootauth;
    }
    location / {
        rewrite ^/(.*) https://staging.mydomain.org/$1 permanent;
    }
}

ssl_certificate /etc/letsencrypt/live/staging.mydomain.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/staging.mydomain.org/privkey.pem;

server {
    listen 443 ssl;
    server_name www.staging.mydomain.org;
    rewrite ^(.*) https://staging.mydomain.org/$1 permanent;
}

server {
    listen 443 ssl;
    server_name staging.mydomain.org;
    root /home/unicorn_user/apps/app_name/current/public;
    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    location ~ ^/(assets)/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
        #add_header Last-Modified "";
        #add_header ETag "";
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 60;
}
I see the 404s in the error and access logs. For example:
2016/05/18 07:46:01 [error] 2490#0: *4 open() "/home/unicorn_user/apps/app_name/current/public/assets/loading.gif" failed (2: No such file or directory), client: ip_address, server: staging.mydomain.org, request: "GET /assets/loading.gif HTTP/1.1", host: "staging.mydomain.org", referrer: "https://staging.mydomain.org/path/to/page"
Permissions on all the assets are 644 and 755, owned by unicorn_user and in a group of the same name.
Can anyone suggest another log or configuration to check, or a service to restart? Is there a misconfiguration in the nginx config?
The 404 I was getting was because one particular asset, loading.gif, was generated in current/app/assets/ while the server looks in current/public/assets/.
I had assumed that other JavaScript errors were being caused by this setting, but it's simply that my Rails ignorance is so vast, I wasn't sure where to look for what.
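For reference, nginx resolves static requests in that assets location by plain concatenation of root and the URI, so only public/ is ever consulted (a sketch annotating the block from the question):

```nginx
# root is inherited from the server block:
#   root /home/unicorn_user/apps/app_name/current/public;
# so GET /assets/loading.gif is looked up at
#   /home/unicorn_user/apps/app_name/current/public/assets/loading.gif
# Files under current/app/assets/ are never consulted; in Rails they
# must first be precompiled into public/assets/.
location ~ ^/(assets)/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
}
```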

Docker push to artifactory gives a 403

I am trying to push a Docker image to a local Docker repository on Artifactory:
docker push myNginxlb:2222/ubuntu
This gets a 403 "Access is forbidden" error. Following is my reverse proxy configuration under /etc/nginx/sites-enabled/artifactory:
upstream artifactory_lb {
    server mNginxLb.mycompany.com:8081;
    server mNginxLb.mycompany.com backup;
}

log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';

server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/my-certs/myCert.pem;
    ssl_certificate_key /etc/nginx/ssl/my-certs/myserver.key;
    client_max_body_size 2048M;
    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://artifactory_lb;
        proxy_read_timeout 90;
    }
    access_log /var/log/nginx/access.log upstreamlog;
    location /basic_status {
        stub_status on;
        allow all;
    }
}

# Server configuration
server {
    listen 2222 ssl;
    server_name mNginxLb.mycompany.com;
    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }
    rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;
    client_max_body_size 0;
    chunked_transfer_encoding on;
    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://artifactory_lb;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The access log shows the following HTTP requests:
"GET /v2/ HTTP/1.1" 404 465 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"GET /v2/ HTTP/1.1" 404 465 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"GET /v1/_ping HTTP/1.1" 404 469 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
"PUT /v1/repositories/ubuntu/ HTTP/1.1" 403 449 "-" "docker/1.9.1 go/go1.4.2 git-commit/a34a1d5 kernel/3.13.0-24-generic os/linux arch/amd64"
Also, in Artifactory I have configured the Docker local repository to use the v2 API. What am I missing?
I fixed this by appending the intermediate certificate to the SSL certificate in question.
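Concretely, "appending" means concatenating the PEM files, server certificate first, then the intermediate, and pointing ssl_certificate at the result. A runnable sketch with placeholder file contents (the real inputs would be myCert.pem from the config above plus whatever intermediate certificate your CA issued; the file names here are illustrative):

```shell
# Placeholder PEM data standing in for the real certificates
printf 'SERVER-CERT\n' > myCert.pem             # your server certificate
printf 'INTERMEDIATE-CERT\n' > intermediate.pem # the CA's intermediate certificate

# nginx expects the server certificate first, then the chain
cat myCert.pem intermediate.pem > myCert-chained.pem

# ssl_certificate would then reference myCert-chained.pem
cat myCert-chained.pem   # server cert line first, then the intermediate
```

After swapping the chained file into the ssl_certificate directive, reload nginx so clients (including the Docker daemon) receive the full chain.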

Nginx Enable HTTPS/SSL when forwarding to other URL

Currently I'm working with an AWS Ubuntu EC2 instance running a Node.js app on port 3000 behind an nginx reverse proxy. I have been trying to enable HTTPS and add an SSL certificate, and I've been successful in that I get no errors from the nginx.conf file. However, my main website, example.com, redirects to the public DNS of the AWS server, and when I try to load http://example.com or https://example.com, I get an "Unable to Connect" error from Firefox, my testing browser. Also, sudo nginx -t reports no syntax errors in the configuration file, and /var/log/nginx/error.log is empty. Below is my current nginx.conf file.
Update: I changed server_name from example.com to the public DNS of my server; let's call it amazonaws.com. Now, when I type in https://amazonaws.com, the page loads, and the SSL certificate shows up when I run the website through ssllabs.com. However, when I type in amazonaws.com or http://amazonaws.com, I get a blank page like before.
user root;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    # max_clients = worker_processes * worker_connections / 4
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    gzip on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;

    # backend applications
    upstream nodes {
        server 127.0.0.1:3000;
        keepalive 64;
    }

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        ssl_certificate /etc/nginx/ssl/example_com.crt;
        ssl_certificate_key /etc/nginx/ssl/example_com.key;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers HIGH:!aNULL:!MD5;
        server_name example.com;

        # everything else goes to backend node apps
        location / {
            proxy_pass http://nodes;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $host;
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header Connection "";
            proxy_http_version 1.1;
        }
    }
}
You should give this server definition
server {
    listen 80;
    return 301 https://$host$request_uri;
}
a server_name (e.g. amazonaws.com) as well.
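Put together, the catch-all HTTP server would look something like this (a sketch, using the amazonaws.com placeholder name from the question):

```nginx
server {
    listen 80;
    # Name the same host the HTTPS server answers for, so plain
    # http:// requests to it are caught here and redirected
    server_name amazonaws.com;
    return 301 https://$host$request_uri;
}
```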