Nginx proxy pass to reverse proxy - ssl

I'm fairly new to nginx and stuck with the current configuration.
I already checked "ssl - nginx does redirect", "nginx as proxy for web app", "nginx proxy_pass", "nginx proxy rewrite" and another post related to my question.
I also looked into some other posts, which didn't help me either; I admittedly haven't read all of the roughly 21,500 posts on nginx and proxying.
Google also failed to point me to a solution.
Current setup is:
[CMS (Plone in LAN)]<--->[Reverse-Proxy (Apache / http://oldsite.foo)]
This is the old site's setup. Basically we need a redesign of the CMS, but it has grown with plenty of dependencies and self-written modules from at least two developers (who never met each other). Replacing it properly will take roughly a year. There is also some weird stuff in the Apache config, so we can't avoid using Apache at the moment.
Unfortunately we need a visual redesign as soon as possible.
So we came up with the idea of using Diazo/XSLT in Nginx to re-theme the old website and show our assessors some results.
Therefore I'm trying the following setup:
[Plone]<--->[Apache]<--->[Proxy (XSLT in Nginx / https://newsite.foo)]
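For context, the nginx side of a Diazo/XSLT theme boils down to the XSLT filter module transforming the proxied responses. A minimal sketch of such a location, assuming nginx was built with the XSLT module (--with-http_xslt_module or the dynamic ngx_http_xslt_filter_module) and that /etc/nginx/diazo/theme.xsl is the compiled Diazo rules file (a placeholder path), could look like this:
location / {
    proxy_pass http://oldsite.foo/;
    proxy_set_header Host $host;
    proxy_set_header Accept-Encoding "";        # response filters need uncompressed bodies
    xslt_types text/html;                       # by default only text/xml is transformed
    xslt_stylesheet /etc/nginx/diazo/theme.xsl; # placeholder: compiled Diazo/XSLT theme
}
Note that the stock XSLT filter expects the upstream body to parse as XML, so whether this works out of the box depends on how well-formed the old site's HTML is.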
Here is my xslt_for_oldsite config file (Cache-Control is set to no-cache only for debugging):
add_header Cache-Control no-cache;

server {
    server_name newsite.foo;
    server_tokens off;

    listen b.b.b.b:80;
    return 301 https://$server_name$request_uri;

    access_log /var/log/nginx/newsite.port80.access.log;
    error_log /var/log/nginx/newsite.port80.error.log;
}

server {
    server_name newsite.foo;
    server_tokens off;

    listen b.b.b.b:443 ssl;

    access_log /var/log/nginx/newsite.port443.access.log;
    error_log /var/log/nginx/newsite.port443.error.log;

    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5:!ADH:!AECDH;
    ssl_session_cache shared:SSL:5m;

    proxy_http_version 1.1;
    #proxy_set_header X-Forwarded-Host $host:$server_port;
    #proxy_set_header X-Forwarded-Server $host;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # proxy_set_header Connection "";
    # proxy_ignore_headers Expires;
    # proxy_set_header X-Real-IP $remote_addr;
    # proxy_set_header X-forwarded-host $host;

    sub_filter_types *;
    sub_filter_once off;
    sub_filter "http://oldsite.foo" "https://newsite.foo";

    location / {
        proxy_pass http://oldsite.foo/;
        proxy_redirect off;
        #proxy_redirect http://oldsite.foo/ https://newsite.foo/;
        proxy_set_header Host $host;
    }
}
If I point my browser at http://oldsite.foo it loads:
1 HTML document from oldsite
3 CSS files from oldsite
9 JS files from oldsite
10 graphic files from oldsite
But if I open https://newsite.foo in my browser it loads:
1 HTML document from newsite
only 5 graphic files from oldsite (requested directly by my browser)
everything else is missing
While the HTML document retrieved with wget https://newsite.foo -o index.html has all its links rewritten to https://newsite.foo (http://oldsite.foo correctly replaced with https://newsite.foo), the browser shows all links unmodified: http://oldsite.foo instead of https://newsite.foo.
I get the following response headers with curl -I https://newsite.foo:
HTTP/1.1 200 OK
Server: nginx
Date: Fri, 11 Sep 2020 10:28:15 GMT
Content-Type: text/html
Connection: keep-alive
Accept-Ranges: none
Accept-Ranges: bytes
X-Varnish: 1216306480
Age: 0
Via: 1.1 varnish
Set-Cookie: I18N_LANGUAGE="de"; Path=/
Via: 1.1 oldsite.foo
Vary: Accept-Encoding
Cache-Control: no-cache
I played around with add_header, proxy_set_header and proxy_redirect. I also tried
location ~* .* {
    proxy_pass http://oldsite.foo$request_uri;
    proxy_redirect off;
    proxy_set_header Host $host;
}
but none of my changes altered this behaviour: the GET requests still end up going to http://oldsite.foo, while the responses are presented as if they came from https://newsite.foo.
I have no answer to these questions:
Why does my browser keep connecting to http://oldsite.foo? It should connect to https://newsite.foo.
Why are the links in the HTML different between the wget version and what the browser shows?
Why does more than half of the website never reach the browser via https://newsite.foo?
How can I fix this?
Is there anyone out there who can point me in the right direction?
Thanks in advance. At least thank you for reading my post.
Best regards.

Meanwhile I found the solution.
Apache sent the data gzipped and sub_filter couldn't handle it (see the official documentation: sub_filter).
I had indeed tried to avoid this by using proxy_set_header Accept-Encoding "";, but it didn't work.
The reason is that this directive must be set in the location context.
Hence the correct configuration, for Ubuntu 20.04 LTS with Nginx 1.14.0 at the time of writing (2020-09-15), is:
...
server {
    server_name newsite.foo;
    server_tokens off;

    listen b.b.b.b:443 ssl;

    access_log /var/log/nginx/newsite.port443.access.log;
    error_log /var/log/nginx/newsite.port443.error.log;

    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key;
    # Double-check and modify this part BEFORE using it in production:
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5:!ADH:!AECDH;
    ssl_session_cache shared:SSL:5m;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Accept-Encoding ""; # MUST be written HERE, in this context!
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://oldsite.foo;
        proxy_redirect off;

        sub_filter_types text/html text/css text/javascript; # If unsure you may use '*'
        sub_filter_once off;
        sub_filter http://oldsite.foo https://newsite.foo;
    }
...
Thanks to adrianTNT who pointed out the crucial part for me (see the missing detail).

Related

Drupal Media, nginx proxy to apache, getting 502 Bad Request for progress-bar

I have migrated a Drupal site to my server. The server uses nginx for SSL termination and lets Apache do the rest, i.e. nginx works as a proxy.
However, when I use the Drupal media browser to upload a file, I get a "502 Bad Gateway" error for the request to /file/progress/xyz (I guess that's the progress bar); the actual file upload works, though.
This is the nginx server block for the site:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name www.example.com;
    port_in_redirect off;

    ssl on;
    ssl_certificate /etc/ssl/certs/xyz.crt;
    ssl_certificate_key /etc/ssl/certs/xyz.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 60m;

    add_header Strict-Transport-Security "max-age=31536000";
    add_header X-Content-Type-Options nosniff;

    location / {
        gzip_static on;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header HTTPS "on";
        include /etc/nginx/proxy.conf;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name www.example.com;
    return 301 https://$server_name$request_uri;
}
And this is my proxy.conf:
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffering Off;
proxy_buffers 32 4m;
proxy_busy_buffers_size 25m;
proxy_buffer_size 512k;
proxy_ignore_headers "Cache-Control" "Expires";
proxy_max_temp_file_size 0;
client_max_body_size 1024m;
client_body_buffer_size 4m;
proxy_connect_timeout 75s;
proxy_read_timeout 300s;
proxy_send_timeout 300s;
proxy_intercept_errors off;
I also tried adding this to the http block of nginx.conf:
fastcgi_temp_file_write_size 10m;
fastcgi_busy_buffers_size 512k;
fastcgi_buffer_size 512k;
fastcgi_buffers 16 512k;
client_max_body_size 50M;
with no success. I basically tried everything I found on the web regarding this topic, also without success. I'm pretty new to nginx though, so maybe I am just overlooking something?
Nginx logs this to the error_log:
2019/05/15 08:09:26 [error] 21245#0: *42 upstream prematurely closed connection while reading response header from upstream,
client: 55.10.229.62, server: www.example.com, request: "GET /file/progress/190432132829 HTTP/1.1",
upstream: "http://127.0.0.1:8080/file/progress/190432132829",
host: "www.example.com",
referrer: "https://www.example.com/media/browser?render=media-popup&options=Uusog2IwkXxNr-0EaqD1L6-Y0aBHQVunf-k4J1oUb_U&plugins="
So maybe it's because the upstream is plain http?
What worries me even more is that I get a segfault logged in the httpd error_log:
[core:notice] [pid 21277] AH00052: child pid 21280 exit signal Segmentation fault (11)
I have the latest Drupal 7.67 core and all modules are up to date,
using PHP 7.2.17 on CentOS 7,
with nginx 1:1.12.2-2.el7
and httpd 2.4.6-88.el7.centos.
I also added this to Drupal's settings.php:
$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = ['127.0.0.1'];
but it doesn't seem to have any effect.
Update:
For completeness, here are the details of the failing request (from the Chrome network tab):
Response Headers
Connection: keep-alive
Content-Length: 575
Content-Type: text/html
Date: Wed, 15 May 2019 06:09:26 GMT
Server: nginx/1.12.2
Request Headers
Accept: application/json, text/javascript, */*; q=0.01
Accept-Encoding: gzip, deflate, br
Accept-Language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
Connection: keep-alive
Cookie: _pk_ses.10.9e92=1; has_js=1; cookie-agreed=2; SSESS812a016a321fb8affaf4=pY3nnqqagiCksF61R45R6Zrmi6g6DdMcYRxSPM1HLP0; Drupal.toolbar.collapsed=0; _pk_id.10.9e92=614e315e332df7.1557898005.1.1557900255.1557898005.
Host: www.example.com
Referer: https://www.example.com/media/browser?render=media-popup&options=Uusog2IwkXxNr-0EaqD1L6-Y0aBHQVunf-k4J1oUb_U&plugins=
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36
X-Requested-With: XMLHttpRequest
When I remove the PHP pecl-uploadprogress extension with
yum remove php-pecl-uploadprogress.x86_64
the error is gone, but then the progress bar no longer works, even though I have APC. On the pecl-uploadprogress page they mention that SAPI implementations other than Apache with mod_php unfortunately still have issues.
I guess I ran into one of these;
however, I would highly appreciate being able to let Apache report the progress.
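If it helps with letting Apache do the reporting: one nginx setting that directly affects upload-progress tracking behind a proxy is request buffering. By default nginx buffers the whole upload before handing it to Apache, so Apache-side progress tracking only sees the body once it is already complete. A hedged addition to proxy.conf (the directive exists since nginx 1.7.11, so 1.12.2 has it); whether it also avoids the segfault is a separate question:
proxy_request_buffering off;  # stream the request body to Apache as it arrives instead of buffering it first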

NGINX ignore bad certificate and configuration and just run?

We have an app that uploads automatically generated SSL certificates to our NGINX load balancers. One time a "bad certificate" got uploaded and an automated nginx reload was executed afterwards; our server went offline for a while, causing DNS issues (DNS not found) for our server domain and a huge downtime for our clients.
However, it is a feature of our application to let apps upload SSL certificates which our backend server then installs automatically. Is there a way to tell NGINX to ignore bad conf files and crt/key files altogether? Looking at the logs from before the incident, I remember seeing something like an SSL handshake error.
Here's what our main nginx-jelastic.conf looks like:
######## HTTP SECTION PROTOTYPE ########
http {
    server_tokens off;
    ### other settings hidden for simplicity
    include /etc/nginx/conf.d/*.conf;
}
######## TCP SECTION PROTOTYPE ########
So what I am wondering is whether it's possible for nginx to just ignore any bad conf files located there. Here's a sample of what gets uploaded to the conf.d folder:
#
# www.example-domain.com HTTPS server configuration
#
server {
    listen 443 ssl;
    server_name www.example-domain.com;

    ssl_certificate /var/lib/nginx/ssl/www.example-domain.com.crt;
    ssl_certificate_key /var/lib/nginx/ssl/www.example-domain.com.key;

    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;
    proxy_temp_path /var/nginx/tmp/;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    location / {
        set $upstream_name common;
        include conf.d/ssl.upstreams.inc;
        proxy_pass http://$upstream_name;
        proxy_next_upstream error;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Host $http_host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-URI $uri;
        proxy_set_header X-ARGS $args;
        proxy_set_header Refer $http_refer;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
For some reason the certificate and key referenced in such a configuration can be wrong, and that wrecks the nginx server. Since our domain points to this server via an A record, it is a total disaster if nginx fails, because DNS issues follow and it can take 24-48 hours for DNS to recover.

Nginx load balancing with Node.js and Google Oauth

I created a Node.js site that uses Google authentication. The site is used by 100+ users concurrently, which affects performance. So I understand that Nginx could help scale the site by running multiple instances of the Node.js app on multiple ports and using Nginx as a load balancer.
So I configured Nginx, but the issue is that it does not seem to work with Google authentication. I am able to see the first page of my site and to log in via Google, but it does not work past that point.
Any suggestions as to what could be missing to make this work?
This is my configuration file:
upstream my_app {
    least_conn; # Use Least Connections strategy
    server ip:3001; # NodeJS Server 2 (I changed the actual ip)
    server ip:3002; # NodeJS Server 3
    server ip:3003; # NodeJS Server 4
    server ip:3004; # NodeJS Server 5
    keepalive 256;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    expires epoch;
    add_header Cache-Control "no-cache, public, must-revalidate, proxy-revalidate";

    server_name ip;
    access_log /var/log/nginx/example.com-access.log;
    error_log /var/log/nginx/example.com-error.log error;

    # Browsers and robots always look for these;
    # turn off logging for them
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { log_not_found off; access_log off; }

    # Pass the request to the node.js server
    # with some correct headers for proxy-awareness
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_buffers 8 16k;
        proxy_buffer_size 32k;

        proxy_pass http://my_app;
        proxy_redirect off;
        add_header Pragma "no-cache";

        # Handle WebSocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I just started learning about nginx. I checked that it works when the upstream has only one IP address, i.e. it works as a reverse proxy but not as a load balancer, and my guess is that this is due to the nature of Google authentication.
The error I receive in the error log is "connection refused".
Thanks.
I figured out what was wrong. The least_conn load-balancing method was not the right choice, since it does not give session persistence. I changed it to
hash $remote_addr (ip_hash works as well) and it is working.
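For reference, a minimal sketch of an upstream block with IP-based affinity (ip_hash is the built-in directive; the addresses are placeholders) might look like:
upstream my_app {
    ip_hash;              # pin each client IP to one backend so the OAuth session stays on the same instance
    server 10.0.0.1:3001; # placeholder addresses
    server 10.0.0.1:3002;
    server 10.0.0.1:3003;
    server 10.0.0.1:3004;
}
Note that this only pins clients to a backend; it does not share session state between the Node.js instances.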

Another nginx reverse proxy issue

I'm putting together an nginx reverse proxy. Here is a working nginx conf file snippet:
upstream my_upstream_server {
server 10.20.30.40:12345;
}
server {
server_name ssl-enabled.example.com;
listen 443 ssl;
ssl_certificate /etc/ssl/server.crt;
ssl_certificate_key /etc/ssl/server.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
location / {
proxy_pass http://my_upstream_server/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
This allows us to serve requests from my_upstream_server without changing any of its configuration files, and in the bargain serve them up via ssl. So far so good.
What I really want to do, though, is configure this so that instead of going to https://ssl-enabled.example.com/, we can direct users to https://ssl-enabled.example.com/upstream/. (I want to do this so we can have multiple virtual hosts running, each proxying a different service that we want to ssl-enable.) I've tried changing the location line from location / to location /upstream/; when I do that, the index page of the application (https://ssl-enabled.example.com/upstream/) renders fine, but pages underneath it generate 404 errors. Here's an example:
This link is broken
Nginx tries to serve /some/link.html instead of /upstream/some/link.html, which doesn't work.
I tried to create a rewrite that would send the request to /upstream$1, but for the main page (which nginx now thinks is https://.../upstream/) it goes into an endless loop, tries to serve /upstream/upstream/upstream/..., and of course fails.
I suspect I'm missing something both vital and simple, but so far I haven't figured out what it might be. The documentation may provide a clue, but if it does I'm not seeing it. Any help from the nginx experts out there would be greatly appreciated. Thanks.
The config below should do a redirect similar to what you described, without entering a loop:
upstream my_upstream_server {
    server 10.20.30.40:12345;
}

server {
    server_name ssl-enabled.example.com;
    listen 443 ssl;

    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location /upstream {
        proxy_pass http://my_upstream_server/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location / {
        return 301 https://ssl-enabled.example.com/upstream$request_uri;
    }
}
Basically two location blocks: one for requests starting with /upstream, which are proxied to the upstream, and one for everything else, which is redirected.
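One detail worth noting about prefix locations like this: when proxy_pass carries a URI part, nginx replaces the matched location prefix with that URI, so with trailing slashes on both sides the mapping stays clean. A hedged illustration:
location /upstream/ {
    # /upstream/some/link.html is forwarded as http://my_upstream_server/some/link.html
    proxy_pass http://my_upstream_server/;
}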
Alexey is right about / being easier to use, and around the time he posted his comment I realized that, since I can create DNS entries for example.com, instead of trying to direct people to https://server.example.com/upstream/ it would be much easier to just create a DNS entry for https://upstream.example.com/.
So that's what I did, and it looks like the configuration now does exactly what I want. Thanks to Alexey and Dayo for their replies.
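For anyone taking the same route, a minimal sketch of the per-subdomain variant (assuming a DNS record for upstream.example.com pointing at the nginx host, and a certificate that covers that name) could look like:
server {
    server_name upstream.example.com;
    listen 443 ssl;

    ssl_certificate /etc/ssl/server.crt;     # must cover upstream.example.com
    ssl_certificate_key /etc/ssl/server.key;

    location / {
        # The whole vhost is proxied, so absolute links like /some/link.html resolve without rewriting.
        proxy_pass http://my_upstream_server/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}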

How do you access the original request (and port) in nginx

I have the following network configuration:
F5 LB --> 2 NGINX nodes --> App server
For server-to-server calls we sign the request based on scheme, port and URI on the source server, and verify this signature on the destination by re-signing the request based on the same parameters.
Server-to-server calls follow this path:
source server --> F5 LB --> NGINX --> destination server.
The original request from the source server goes to https without an explicit port, and is thus signed without a port (or with the default port, for that matter).
The LB adds a custom port to the request and passes it to NGINX. NGINX in turn is configured to pass the server scheme, host and port with the request to the app server:
proxy_set_header Host $host:$server_port;
proxy_set_header X-Scheme $scheme;
The destination server receives the port added by the LB instead of the one sent with the original request from the source server, so the signature check on the destination server fails.
The same setup was tested with Apache, using AJP to the proxied servers, and there the forwarded request holds the original port, not the one added by the LB.
After thorough reading, it comes down to a simple question:
How do you access the original request (and its port) in nginx?
Here's the rest of the relevant configuration:
proxy.conf:
proxy_redirect off;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
proxy_buffer_size 8k;
proxy_http_version 1.0;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
The server configuration:
log_format upstreamlog '[$time_local] $remote_addr $status "$request" $body_bytes_sent - $server_name to: $upstream_addr $upstream_response_time sec "$http_user_agent"';
server {
    listen 9080;
    listen 9443 ssl;
    server_name myserver.com;
    root html;

    error_log /data/server_openresty/error.log info;
    access_log /ldata/server_openresty/logs/access.log upstreamlog;

    gzip on;
    gzip_types text/plain text/xml text/css text/javascript application/javascript application/xhtml+xml application/xml;

    ssl_certificate /data/server_openresty/nginx/certs/dev_wildCard.crt;
    ssl_certificate_key /code/server_openresty/nginx/certs/dev_wildCard.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:MEDIUM:!aNULL:!MD5;

    ### headers passed to the proxies
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Scheme $scheme;

    location /api/serverA {
        proxy_pass http://serverA-cluster;
    }

    location /api/serverB {
        proxy_pass http://serverB-cluster;
    }
}
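For what it's worth: in nginx, $http_host is the Host header exactly as it arrived (including a port, if the client sent one), $host is the same value with the port stripped (falling back to the server name), and $server_port is the port this nginx listener accepted the connection on. Assuming the F5 forwards the client's original Host header unchanged, a hedged variant of the proxy headers above that preserves the original values for the signature check could be:
### headers passed to the proxies
proxy_set_header Host $http_host;                # original Host header, port included only if the client sent one
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-Port $server_port;  # the listener port, if the backend needs it separately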