Another nginx reverse proxy issue - ssl

I'm putting together an nginx reverse proxy. Here is a working nginx conf file snippet:
upstream my_upstream_server {
    server 10.20.30.40:12345;
}
server {
    server_name ssl-enabled.example.com;
    listen 443 ssl;
    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    location / {
        proxy_pass http://my_upstream_server/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
This allows us to serve requests from my_upstream_server without changing any of its configuration files, and in the bargain serve them up via ssl. So far so good.
What I really want to do, though, is configure this so that instead of going to https://ssl-enabled.example.com/, we can direct users to https://ssl-enabled.example.com/upstream/. (I want to do this so we can have multiple virtual hosts running, each proxying a different service that we want to ssl-enable.) I've tried changing the location line from location / to location /upstream/; when I do that, the index page of the application (https://ssl-enabled.example.com/upstream/) renders fine, but pages underneath it generate 404 errors. Here's an example:
<a href="/some/link.html">This link is broken</a>
Nginx tries to serve /some/link.html instead of /upstream/some/link.html, which doesn't work.
I tried to create a rewrite that would send the request to /upstream$1, but for the main page (which nginx now thinks is https://.../upstream/) it goes into an endless loop, tries to serve /upstream/upstream/upstream/..., and of course fails.
I suspect I'm missing something both vital and simple, but so far I haven't figured out what it might be. The documentation may provide a clue, but if it does I'm not seeing it. Any help from the nginx experts out there would be greatly appreciated. Thanks.

The config below should do a redirect similar to the one you mentioned, without entering a loop:
upstream my_upstream_server {
    server 10.20.30.40:12345;
}
server {
    server_name ssl-enabled.example.com;
    listen 443 ssl;
    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    location /upstream {
        proxy_pass http://my_upstream_server/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
    location / {
        return 301 https://ssl-enabled.example.com/upstream$request_uri;
    }
}
Basically, two location blocks: one for requests starting with /upstream, which are proxied to the upstream server, and one for everything else, which is redirected. The second block also catches the absolute links like /some/link.html that the application's pages emit, redirecting them back under /upstream instead of letting them 404.

Alexey is right about / being easier to use. Around the time he posted his comment, I realized that since I can create DNS entries for example.com, instead of trying to direct people to https://server.example.com/upstream/ it would be much easier to just create a DNS entry for https://upstream.example.com/.
So that's what I did, and it looks like the configuration is doing exactly what I want. Thanks to Alexey and Dayo for their replies.
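For anyone else taking the same route, here is a sketch of what each per-service virtual host might look like, reusing the working config from the question; the upstream.example.com name and the certificate paths are stand-ins for whatever your DNS entries and certificates actually are:
upstream my_upstream_server {
    server 10.20.30.40:12345;
}
server {
    # one server block per SSL-enabled service, selected by its own DNS name
    server_name upstream.example.com;
    listen 443 ssl;
    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    location / {
        # proxying at / means absolute links from the application keep working
        proxy_pass http://my_upstream_server/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}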

Related

Harbor 2.5.0 behind Apache reverse proxy

I installed Harbor on a server inside the company farm and I can use it without problems through https://my-internal-server.com/harbor.
I tried to add reverse proxy rules to Apache to access it through the public server for the harbor, v2, chartrepo, and service endpoints, like https://my-public-server.com/harbor, but this doesn't work.
For example:
ProxyPass /harbor https://eslregistry.eng.it/harbor
ProxyPassReverse /harbor https://eslregistry.eng.it/harbor
I also set in harbor.yaml:
external_url: https://my-public-server.com
When I try to access https://my-public-server.com/harbor with the browser, I see a Loading... page and 404 errors for static resources, because it tries to get them with this GET:
https://my-public-server.com/scripts.a459d5a2820e9a99.js
How can I configure it to work?
You should pass the whole domain, not only the path. Take a look at the official Nginx config to get an idea of how this might look.
upstream harbor {
    server harbor_proxy_ip:8080;
}
server {
    listen 443 ssl;
    server_name harbor.mycomp.com;
    ssl_certificate /etc/nginx/conf.d/mycomp.com.crt;
    ssl_certificate_key /etc/nginx/conf.d/mycomp.com.key;
    client_max_body_size 0;
    chunked_transfer_encoding on;
    location / {
        proxy_pass http://harbor/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_request_buffering off;
    }
}
Note that you should disable proxy buffering and request buffering, as in the last two lines above.

Nginx reverse proxy https to https

I have a QNAP TS-253A with its admin interface exposed to the internet.
The QNAP has its own certificate installed by a dedicated tool (i.e. I don't know exactly where to locate the certificate).
https://mydomain.myqnapcloud.com points to my static IP, and my router has a firewall rule, which forwards port 443 to 192.168.200.6 which is the internal address of my QNAP.
That all works as it should.
Now I have spun up a Docker container on 192.168.200.18, which I would like to expose to https://identity.someotherdomain.com.
My idea was to spin up another container with an Nginx reverse proxy (192.168.200.8), and change the firewall rule to forward 443 (and 80) to the reverse proxy.
There are lots of guides on using nginx to sit in front of an HTTP server and add an SSL certificate, thereby converting an existing HTTP site to HTTPS. But my use case should be even simpler, as the server I forward to is already HTTPS.
I have tried this, which doesn't work:
upstream qnap {
    server 192.168.200.6:443;
}
server {
    listen 192.168.200.8:443;
    server_name mydomainmyqnapcloud.com;
    location / {
        proxy_pass https://qnap;
    }
}
How do I configure nginx to forward traffic intended for https://mydomain.myqnapcloud.com to https://192.168.200.6, and traffic intended for https://identity.someotherdomain.com to https://192.168.200.18?
The way I got this working was to locate the certificate and key on the QNAP (in /etc/stunnel), copy them to a folder shared into the reverse proxy Docker image, and include them in the nginx.conf:
server {
    listen 443 ssl;
    server_name mydomain.myqnapcloud.com;
    ssl_certificate /etc/ssl/private/backup.cert;
    ssl_certificate_key /etc/ssl/private/backup.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;
    access_log /var/log/nginx/access.log;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://192.168.200.6;
        proxy_read_timeout 90;
        proxy_redirect https://192.168.200.6 https://mydomain.myqnapcloud.com;
    }
}
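The second service from the question would presumably get its own server block alongside the one above; nginx then picks the right block by server_name, so both domains can share the single forwarded port 443. A sketch, assuming identity.someotherdomain.com has its own certificate and key (the paths below are placeholders):
server {
    listen 443 ssl;
    server_name identity.someotherdomain.com;
    # placeholder certificate and key for identity.someotherdomain.com
    ssl_certificate /etc/ssl/private/identity.cert;
    ssl_certificate_key /etc/ssl/private/identity.key;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # the Docker container is reached over https, as described in the question
        proxy_pass https://192.168.200.18;
    }
}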

NGINX ignore bad certificate and configuration and just run?

We have an app that uploads automatically generated SSL certificates to our NGINX load balancers. One time a "bad certificate" got uploaded and an automated nginx reload was executed afterwards; our server went offline for a while, causing DNS issues ("DNS not found") for our server domain and a huge downtime for our clients.
However, it is a feature of our application to allow apps to upload SSL certificates, and our backend server installs them automatically. Is there a way to tell nginx to ignore bad conf files and crt/key files altogether? Looking back at the logs, I remember seeing something like an SSL handshake error before the incident.
Here's what our main nginx-jelastic.conf looks like:
######## HTTP SECTION PROTOTYPE ########
http {
    server_tokens off;
    ### other settings hidden for simplicity
    include /etc/nginx/conf.d/*.conf;
}
######## TCP SECTION PROTOTYPE ########
So what I am wondering is whether it's possible for nginx to just ignore any bad conf files located there. Here's a sample of what gets uploaded to the conf.d folder:
#
# www.example-domain.com HTTPS server configuration
#
server {
    listen 443 ssl;
    server_name www.example-domain.com;
    ssl_certificate /var/lib/nginx/ssl/www.example-domain.com.crt;
    ssl_certificate_key /var/lib/nginx/ssl/www.example-domain.com.key;
    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;
    proxy_temp_path /var/nginx/tmp/;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
    location / {
        set $upstream_name common;
        include conf.d/ssl.upstreams.inc;
        proxy_pass http://$upstream_name;
        proxy_next_upstream error;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Host $http_host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-URI $uri;
        proxy_set_header X-ARGS $args;
        proxy_set_header Refer $http_refer;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
For some reason the certificate and key indicated in the configuration could be wrong, and that is going to wreck the nginx server. Since our domain is pointed to this server via an A record, it is a total disaster if nginx fails: DNS issues happen and it can take 24-48 hours for DNS to recover.

Nginx configuration leads to endless redirect loop

So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and passenger 3.0.2.
Here's the ssl config
server {
    listen 443 default ssl;
    server_name <redacted>.com www.<redacted>.com;
    root /home/app/<redacted>/public;
    passenger_enabled on;
    rails_env production;
    ssl_certificate /home/app/ssl/<redacted>.com.pem;
    ssl_certificate_key /home/app/ssl/<redacted>.key;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X_FORWARDED_PROTO https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Url-Scheme $scheme;
    proxy_redirect off;
    proxy_max_temp_file_size 0;
    location /blog {
        rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
    }
    location ~* \.(js|css|jpg|jpeg|gif|png)$ {
        if (-f $request_filename) {
            expires max;
            break;
        }
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Here's the non-ssl config
server {
    listen 80;
    server_name <redacted>.com www.<redacted>.com;
    root /home/app/<redacted>/public;
    passenger_enabled on;
    rails_env production;
    location /blog {
        rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
    }
    location ~* \.(js|css|jpg|jpeg|gif|png)$ {
        if (-f $request_filename) {
            expires max;
            break;
        }
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Let me know if there's any additional info I can give to help diagnose the issue.
It's your line here:
listen 443 default ssl;
change it to:
listen 443;
ssl on;
This I'll call the old style.
Also, that along with
proxy_set_header X_FORWARDED_PROTO https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Url-Scheme $scheme;
proxy_redirect off;
proxy_max_temp_file_size 0;
did the trick for me. I see now I am missing the real IP line you have, but so far, this got rid of my infinite loop problem with ssl_requirement and ssl_enforcer.
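Putting that together, the top of the ssl server block from the question would then start something like this sketch (paths redacted as in the question; a later reply below suggests the hyphenated X-Forwarded-Proto header instead):
server {
    # the combined "listen 443 default ssl;" is split into the old-style pair
    listen 443;
    ssl on;
    server_name <redacted>.com www.<redacted>.com;
    ssl_certificate /home/app/ssl/<redacted>.com.pem;
    ssl_certificate_key /home/app/ssl/<redacted>.key;
    proxy_set_header X_FORWARDED_PROTO https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Url-Scheme $scheme;
    proxy_redirect off;
    proxy_max_temp_file_size 0;
    # root, passenger_enabled, rails_env and the location blocks continue
    # unchanged from the question
}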
I've toyed around with a bunch of these answers but nothing worked for me. Then I realized since I use Cloudflare the problem may not be in the server but with Cloudflare. Lo and behold when I set my SSL to Full (Strict) everything works as it should!
I found that it was this line
proxy_set_header Host $http_host;
Which should be changed to
proxy_set_header Host $host;
According to the nginx documentation, by using $http_host you're passing the "unchanged request-header".
Have you tried using "X-Forwarded-Proto" instead of X_FORWARDED_PROTO?
I've run into a problem with this header before, it wasn't causing redirects, but changing this header fixed it for me.
Since you have a rewrite statement in both the ssl and non-ssl sections:
location /blog {
    rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
}
Where is the server section for blog.<redacted>.com? Could that be the source of the issue?
I had a similar issue with my symfony2 application, albeit from a different cause: I had set fastcgi_param HTTPS off; when I of course needed fastcgi_param HTTPS on; in my nginx configuration.
location ~ ^/(app|app_dev|config)\.php(/|$) {
    satisfy any;
    allow all;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param HTTPS on;
}
In case someone else stumbles on this: I was attempting to configure both http and https via the same server {} block, but I only added the "listen 443" directive, believing that "this line is default and implied" meant it would also listen on 80 as well. It didn't. Uncommenting the "listen 80" line so that both listen lines were present corrected the infinite loop. No idea why it would have even been getting a redirect at all, but it did.
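In config terms, that means keeping both listen directives in the one server block; a minimal sketch (with a modern nginx the ssl parameter goes on the 443 listener rather than a separate "ssl on", and the name and paths here are placeholders):
server {
    # serve the same site over plain http and over https from one block
    listen 80;
    listen 443 ssl;
    server_name example.com;
    # placeholder certificate and key
    ssl_certificate /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;
    root /var/www/html;
}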
For those who are desperately searching for why their owncloud keeps hitting a redirect loop in spite of having a good configuration file, I've found why it's not working.
My config:
nginx + php-fpm + mysql on a fresh centos 6.5
When installing php-fpm and nginx, the default ownership of /var/lib/php/session/ is root:apache.
php-fpm (through nginx) stores PHP sessions there; if nginx does not have permission to write to it, it fails miserably to keep any login session, resulting in an infinite loop.
So just add nginx to the apache group (usermod -a -G apache nginx) or change the ownership of this folder.
Have a nice day.
X_FORWARDED_PROTO, as in your file, can cause errors, and it did in my case. X-Forwarded-Proto is correct; the hyphens matter more than uppercase or lowercase letters.
You can avoid those problems by sticking to conventions ;)
See also here: Custom HTTP headers: naming conventions, and here: http://www.ietf.org/rfc/rfc2047.txt

Apache handling SSL requests and passing them through to HAProxy

I am trying to set up a front-end reverse proxy with HAProxy forwarding requests to Apache web servers in the back end. My problem is that I have been unsuccessful in getting it to work with SSL requests using Apache.
I know that HAProxy cannot handle SSL requests, so I am trying to set up Apache to accept clients' requests on port 443 and forward them to HAProxy, which will then pick up and forward the requests to the right Apache back-end web server. Has anyone done this successfully? If yes, can you provide examples of the Apache and HAProxy config, please?
Yes I have, please see the configuration here: link text
I use nginx, here is an example nginx.conf:
server {
    listen 443;
    server_name localhost;
    ssl on;
    ssl_certificate /etc/pki/tls/certs/localhost.crt;
    ssl_certificate_key /etc/pki/tls/private/localhost.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
    location / {
        root html;
        index index.html index.htm;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_max_temp_file_size 0;
        proxy_pass http://127.0.0.1:8000;
        break;
    }
}
In haproxy.cfg, set:
listen http_proxy 127.0.0.1:8000