nginx: proxy_pass subdirectories to other servers - ssl

I'm using nginx as a web server and reverse proxy with SSL enabled.
The web server serves a WoltLab Suite Forum 5.0.0 (formerly known as Burning Board) and proxies some subdomains to different hosts, like a NodeJS backend, a Tomcat backend and many other services.
This has worked great so far, but now I have the problem that I can no longer use subdomains to accomplish this.
Please don't ask why, please don't.
Now that I can no longer use subdomains, I'm trying to get it working with subdirectories.
An example:
I had xyz.example.com point to my nginx server at 12.13.14.15.
nginx proxied all requests to xyz.example.com to 10.20.30.40:1234
Now I want nginx to proxy all requests to example.com/xyz/ to 10.20.30.40:1234.
I got this working with Apache Archiva as the backend service, but all other services, like my NodeJS backend, refuse to work correctly with my current configuration.
It sends me to the Burning Board, which shows me its page-not-found page.
example.com/xyz/admin/index.php becomes example.com/admin/index.php, which of course won't work.
The location that proxies to Archiva has the exact same configuration, just with different directory names, of course.
The Archiva URL looks like this after I open it in the browser:
example.com/repo/ becomes example.com/repo/#welcome and shows me Archiva's welcome page.
This is exactly what I want for my other services too.
Here are my current configuration files for nginx (sensitive data replaced with X):
<=== sites-available/default ===>
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name XXXX.XX www.XXXX.XX;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    include snippets/ssl-XXXXX.XX.conf;
    include snippets/ssl-params.conf;

    root /var/www/html;

    include /etc/nginx/snippets/proxies.conf;

    # the last try_files argument is for SEO friendly URLs for the WSF 5.0
    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$uri&$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    }

    # Lets Encrypt automation
    location ~ /.well-known {
        allow all;
    }

    location ~ /\.ht {
        deny all;
    }
}
<=== snippets/proxies.conf ===>
# Apache Archiva
location /repo {
    rewrite ^/repo(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# Git Solution
location /git {
    rewrite ^/git(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# Filehosting
location /cloud {
    rewrite ^/cloud(/.*)$ $1 break;
    proxy_pass http://XXXXXXXX:XXXXX;
    return 404;
}

# NodeJS
location /webinterface {
    rewrite ^/webinterface(/.*)$ $1 break;
    proxy_pass https://XXXXXXXX:XXXXX;
    include /etc/nginx/snippets/websocket-magic.conf;
    return 404;
}
Any ideas on how to solve this problem?
Also, please tell me if you need more information, like the nginx version or the like.

Related

Without specifying a port number inside an API request, how does the server decide whether the request is for port 80 or 443

I did not mention a port number in my API endpoint, so how does the server decide whether to serve this request on port 80 or 443?
const {data} = await axios.get('/api/users/currentuser');
Use something like this in NGINX:
server {
    root /var/www/html;
    server_name _;

    # this is the react/angular frontend application endpoints
    location / {
        try_files $uri $uri/ /index.html;
    }

    # this is the api endpoints
    location /api {
        proxy_pass http://127.0.0.1:8000;
    }
}
However, you have not explicitly mentioned in the question whether you are using NGINX, but nginx is one of the five tags on this question, so I am assuming you are using NGINX and answering in that context.
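To the actual question: a relative URL like /api/users/currentuser inherits the scheme, host and port of the page it was loaded from, so the browser decides whether the request goes out on 80 or 443; nginx then only answers on the ports it actually listens on. A minimal sketch serving both ports (certificate paths are placeholders):

server {
    listen 80;
    listen 443 ssl;

    # placeholder paths - a certificate is required for the 443 listener
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location /api {
        proxy_pass http://127.0.0.1:8000;
    }
}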

Unable to redirect from 80 to 8080 with nginx

I am unable to redirect my website from domain_name:8080 to domain_name:80
This is the code in my /etc/nginx/sites-available/file.
server {
    listen *:80;
    root /var/www/;
    server_name domain_name;

    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
}
What's interesting is that I am able to access it without :8080 on the LAN. I am using the com.bmuschko.tomcat plugin with Gradle. Am I missing something?

Nginx redirection conflicts with other ports

My situation is as follows:
app 1 running at: server.domain.com (192.168.1.3)
app 2 running at: server.domain.com:8080 (192.168.1.2)
My router is set up to route requests on port 80 to app 1 and port 8080 to app 2.
So far so good, this scenario has been working for ages.
Recently I tried switching to nginx and I decided to redirect http traffic to https traffic for app 1.
I set up a container with nginx and am using the following config:
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

# main server block
server {
    listen 443 ssl default_server;

    root /config/www;
    index index.html index.htm index.php;

    server_name _;

    ssl_certificate /path to cert;
    ssl_certificate_key /path to cert;
    ssl_dhparam /path to cert;
    ssl_ciphers '';
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        try_files $uri $uri/ /index.html /index.php?$args =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # With php7-cgi alone:
        fastcgi_pass 127.0.0.1:9000;
        # With php7-fpm:
        #fastcgi_pass unix:/var/run/php7-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}
This successfully redirects http to https and app 1 works as expected.
However, when trying to visit app 2 I am also redirected to https (which I shouldn't be; app 2 doesn't support it).
Now I already figured out why this happens.
Google Chrome has a cache, so when I visit the non-https URL it gets a 301 redirect to the https version. It saves this in its cache and now thinks I always want https, regardless of the port.
The workaround I've found is going to chrome://net-internals and clearing the cache there. Opening app 2 then succeeds, but after visiting app 1 I end up in the same loop all over again.
I've tried several default fixes found all over the net but none of them have worked thus far.
Anyone know what I have to put in my config to fix this?
ps: cert paths, domain names and ports are fake representations of the real situation
First off, it would be helpful if you labeled which server definition in the nginx config corresponds to app 1 and which to app 2, because it appears there may be a mix-up in the configuration. You are also missing some configuration, such as listening on port 8080. So first I'll clarify the requirements you clearly stated for both apps:
App 1:
Listens on port 80
Uses SSL
App 2:
Listens on port 8080
Does not use SSL / doesn't support it.
So I'd recommend config closer to:
# Corresponds better to app 2 given your requirements
server {
    listen 8080 default_server;
    server_name _;
    # NOTE: You may want to listen for certain routes, without redirect EG
    # location /foo/* { . . . }
    return 301 $scheme://$host$request_uri;
}

# main server block - app 1
server {
    listen 443 ssl default_server;
    . . . # The rest of your definition here is fine for an SSL server
}
My main point here is that the server block on port 80, as you've defined it above, is just a hardcoded redirect machine to https. This block contradicts the requirements that you "route requests on port 80 to app 1" and that you "use SSL for app 1", since your SSL configuration is actually in the second server definition. What you've set up in the first server definition is a pattern used to force SSL redirects, leaving you in a position where you'll never serve non-SSL HTTP traffic. This might clear up the issue somewhat; perhaps I can help more once the server blocks more closely match the stated requirements.
Finally, note that it is possible to listen on multiple ports and route both http and https traffic within one server definition block:
server {
    listen 80;
    listen 443 ssl;
    # can force some routes to be ssl or non ssl accordingly
}
Configuration like this may be more ideal if both app servers are hosted on the same machine using the same nginx service.
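For example, forcing a single route to https inside such a dual-listen block could look roughly like this sketch (the /admin path is just a made-up example, and the ssl_* directives are omitted):

server {
    listen 80;
    listen 443 ssl;

    # made-up example route that should always be served over https
    location /admin/ {
        # only plain-http requests for this route get redirected
        if ($scheme = http) {
            return 301 https://$host$request_uri;
        }
        # ... normal handling continues for https requests ...
    }
}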

Configure proxy_pass for intermittent service

I'm using Nginx within a Docker container to host my application.
I'm trying to configure Nginx to proxy traffic for the /.well-known/ directory to another container that handles the letsencrypt process to set up & renew SSL certificates, but I don't need that container to be running all the time, only when attempting to renew the certificates.
My idea was to use proxy_pass for the directory-specific traffic through to the letsencrypt container, but as it's not always running, the Nginx process exits, complaining that the upstream is not available.
Is there a way to configure Nginx not to check the status of the upstream for the proxy_pass setting?
Here's the current config, if it's useful…
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    server_name domain.com;
    root /var/www/html/web;

    location / {
        return 301 https://$host$request_uri;
    }

    location ^~ /.well-known/ {
        proxy_pass http://letsencrypt/.well-known/;
    }
}
I guess I could use in-app forwarding of files, but this feels clunky. I'd rather configure it within Nginx.
location ^~ /.well-known/ {
    resolver 127.0.0.1;
    set $upstream letsencrypt;
    proxy_pass http://$upstream/.well-known/; # use variables to make nginx startable
}
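A note on why this works, as far as I understand it: with a literal hostname in proxy_pass, nginx resolves it once at startup and refuses to start if that fails; with a variable, resolution is deferred to request time through the configured resolver, so an unavailable upstream only produces a 502 for that location instead of a dead nginx. If the containers share a Docker user-defined network, the embedded Docker DNS usually sits at 127.0.0.11 (an assumption about your setup), so a variant sketch would be:

location ^~ /.well-known/ {
    resolver 127.0.0.11 valid=30s;  # Docker's embedded DNS; assumes a user-defined network
    set $upstream letsencrypt;
    proxy_pass http://$upstream/.well-known/;
}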

Invalid ports added in redirects on AWS EC2 nginx using SSL decryption offloaded to ELB

On AWS, I'm trying to migrate a PHP Symfony app running on nginx. I want to be able to test the app by directly talking to the EC2 server and via an ELB (the public route in).
I've set up an Elastic Load Balancer to decrypt all the SSL traffic and pass it on to my EC2 server via port 80, as well as pass plain port 80 traffic directly on to my EC2 server via port 80.
Initially this caused infinite redirects in my app, but I researched and then fixed this by adding
fastcgi_param HTTPS $https;
with some custom logic that looks at $http_x_forwarded_proto to figure out when it's actually via SSL.
There remains one issue I can't solve. When a user logs into the Symfony app, if they come via the ELB, the form POST eventually returns a redirect back to
https://elb.mysite.com:80/dashboard
instead of
https://elb.mysite.com/dashboard
which gives the user an error of "SSL connection error".
I've tried setting
fastcgi_param SERVER_PORT $fastcgi_port;
to force it away from 80 and I've also added the
port_in_redirect off
directive, but neither makes any difference.
The only way I've found to fix this is to alter the ELB 443 listener to pass traffic via https. The EC2 server has a self-signed SSL certificate configured. But this means the EC2 server is wasting capacity performing this unnecessary second decryption.
Any help very much appreciated. Maybe there is a separate way within nginx of telling POST requests to not apply port numbers?
Nginx vhost config:
server {
    port_in_redirect off;

    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/nginx/ssl/mysite.com/self-ssl.crt;
    ssl_certificate_key /etc/nginx/ssl/mysite.com/self-ssl.key;

    # Determine if HTTPS being used either locally or via ELB
    set $fastcgi_https off;
    set $fastcgi_port 80;
    if ( $http_x_forwarded_proto = 'https' ) {
        # ELB is using https
        set $fastcgi_https on;
        # set $fastcgi_port 443;
    }
    if ( $https = 'on' ) {
        # Local connection is using https
        set $fastcgi_https on;
        # set $fastcgi_port 443;
    }

    server_name *.mysite.com my-mysite-com-1234.eu-west-1.elb.amazonaws.com;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log error;

    rewrite ^/app\.php/?(.*)$ /$1 permanent;

    location / {
        port_in_redirect off;
        root /var/www/vhosts/mysite.com/web;
        index app.php index.php index.html index.html;
        try_files $uri @rewriteapp;
    }

    location ~* \.(jpg|jpeg|gif|png)$ {
        root /var/www/vhosts/mysite.com/web;
        access_log off;
        log_not_found off;
        expires 30d;
    }

    location ~* \.(css|js)$ {
        root /var/www/vhosts/mysite.com/web;
        access_log off;
        log_not_found off;
        expires 2h;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    location ~ ^/(app|app_dev|config)\.php(/|$) {
        port_in_redirect off;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param HTTPS $fastcgi_https;
        # fastcgi_param SERVER_PORT $fastcgi_port;
        #fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /var/www/vhosts/mysite.com/web$fastcgi_script_name;
        include fastcgi_params;
    }
}
References:
FastCGI application behind NGINX is unable to detect that HTTPS secure connection is used
https://serverfault.com/questions/256191/getting-correct-server-port-to-php-fpm-through-nginx-and-varnish
http://nginx.org/en/docs/http/ngx_http_core_module.html#port_in_redirect
Finally got a solution via another channel.
The answer is to comment out the SERVER_PORT line with a # in the fastcgi_params file.
Many thanks to Maxim from Nginx.
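For anyone else hitting this, the line in question is the SERVER_PORT entry in /etc/nginx/fastcgi_params; on a stock install it looks roughly like this once commented out (treat the exact spacing as an assumption), followed by an nginx reload:

# /etc/nginx/fastcgi_params (excerpt)
#fastcgi_param  SERVER_PORT        $server_port;  # commented out so the app stops seeing port 80 behind the ELB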