nginx auth_request to remote authentication script

I'm trying to set up an nginx reverse proxy in front of some internal servers, with auth_request to protect them from unauthorized users. I have an authentication script running at 192.168.1.101/scripts/auth/user.php which is accessed inside the /auth block. The problem is that I'd like to use a named location rather than a URI match, so that there is no risk of a URI collision with the internal service (which I don't control). The following works:
server {
    listen 80;
    server_name private.domain.com;

    location /auth {
        proxy_pass http://192.168.1.101/scripts/auth/user.php;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }

    location / {
        auth_request /auth;
        proxy_pass http://internal.domain.com;
    }
}
I'd like to replace the /auth with a named location (@auth), but when I do, nginx throws an error during reload. I've read that the fix is to replace the proxy_pass inside the auth location with just the IP address, but when I do that the auth_request never makes it to the script. Any thoughts on the correct way to proceed with this configuration?

Due to nginx restrictions, named locations can't be used for subrequests. You can instead prevent outside access to the auth location with the internal directive. Try this config:
location = /scripts/auth/user.php {
    internal;
    proxy_pass http://192.168.1.101;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

location / {
    auth_request /scripts/auth/user.php;
    proxy_pass http://internal.domain.com;
}
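If you'd rather not mirror the upstream script path in your own URI space, another option is to keep a local prefix and put the full script URI in proxy_pass. This is only a sketch based on the question's original working block; the /_auth path is an arbitrary choice:

location = /_auth {
    internal;
    proxy_pass http://192.168.1.101/scripts/auth/user.php;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

location / {
    auth_request /_auth;
    proxy_pass http://internal.domain.com;
}

When proxy_pass is given with a URI, nginx replaces the part of the request URI that matched the location with that URI; with only an address (the bare-IP attempt from the question), the subrequest URI /auth is passed through unchanged, which is why the script was never reached. The internal directive still keeps the endpoint unreachable from outside.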

Related

troubleshooting an application behind an nginx reverse proxy, as POST/PUT requests are answered with Error 400 (Bad Request)

I'm trying to host my Angular/ASP.NET Core 3.1 application on Linux for the first time.
Nginx listens on port 80, serves the static files, and acts as a reverse proxy for the API part, which is passed to the .NET/Kestrel server.
The problem is that I systematically get a 400 status code (Bad Request) on any web API request containing a body, like POST and PUT, while GET is fine.
I added some logs through a middleware, just to see if I can get the requests. Basically, something like:
app.Use(async (context, next) => {
    context.Request.EnableBuffering();
    string reqBody;
    using (var reader = new StreamReader(context.Request.Body))
    {
        reqBody = await reader.ReadToEndAsync();
        context.Request.Body.Seek(0, SeekOrigin.Begin);
        ms_oLogger.Debug($"Incoming request: METHOD={context.Request.Method} PATH={context.Request.Path} BODY=\"{reqBody}\"");
    }
    await next();
});
This only logs stuff for GET requests; nothing appears for the problematic PUT/POST requests... Can I conclude that this is purely an nginx problem?
I also enabled the logs on nginx for the given "/api" location, but I cannot tell what happens... How can I know which tier generated the 400 status code?
EDIT1: I started a blank new project with a bare Web API project containing one GET and one POST method, just to check whether there was something wrong with my application, but I still get the problem.
So I set up a new Ubuntu server (this time Ubuntu Server instead of the desktop edition) and now it works!
I compared the configurations etc. but could not figure out what was wrong.
But my initial question is still valid: how can I troubleshoot where the problem comes from?
EDIT2: This is my default.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location /fusion {
        root /opt/fichet/WebUI/NgApp;
    }

    location /fusion/api {
        proxy_pass http://localhost:5000/api;

        error_log /var/log/nginx/fusion_error_logs.log debug;
        access_log /var/log/nginx/fusion_access.log;

        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
It seems that no firewall is enabled ("sudo ufw status verbose" tells us that it is "inactive")
Remove these lines from your nginx config.
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
These headers are used for WebSocket connections, and shouldn't be present for non-websocket requests. My guess is your upstream server doesn't mind them for GET requests but does for POST/PUT, for some reason.
If you are not using WebSockets, you can simply leave them out.
If you are using WebSockets, you need nginx to add these headers (or not) depending on whether the request is a WebSocket upgrade. Something like this should work:
http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        ...
        location /fusion/api {
            proxy_pass ...
            ...
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
See here for more info.
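As for the remaining troubleshooting question (which tier produced the 400): one way to see it from nginx's side is an access log format that records both the status nginx returned to the client and the status it got from the upstream. This is a sketch; the format name upstream_debug and the log path are arbitrary:

http {
    log_format upstream_debug '$remote_addr "$request" '
                              'status=$status upstream_status=$upstream_status '
                              'upstream_addr=$upstream_addr request_length=$request_length';

    server {
        location /fusion/api {
            access_log /var/log/nginx/fusion_access.log upstream_debug;
            # ... proxy_pass and headers as above ...
        }
    }
}

If upstream_status is logged as "-" while status is 400, nginx itself rejected the request (for example due to client_max_body_size or a malformed header) and it never reached Kestrel; if both show 400, the response came from the application.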

Is there a way to use multiple auth_request directives in nginx?

I would like to use multiple auth_request directives in order to try authentication with multiple servers - i.e. if the first auth server returns 403, try the second auth server. I tried a straightforward approach like this:
location /api {
    satisfy any;
    auth_request /auth-1/;
    auth_request /auth-2/;
    proxy_pass http://api_impl;
}

location /auth-1/ {
    internal;
    proxy_pass http://auth_server_1;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

location /auth-2/ {
    internal;
    proxy_pass http://auth_server_2;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
But nginx wouldn't parse the config file. I received the response:
nginx: [emerg] "auth_request" directive is duplicate
Is there a way to achieve such functionality in nginx?
Here is my solution, after finding this question on Google while looking for the same thing:
1. Set up an upstream group that passes to loopback nginx servers.
2. Make those loopback servers do what your /auth-1 and /auth-2 endpoints were doing, but have them return 503 on authentication failure (except the last in the chain, which still returns 401 to signal to nginx that there are no more servers to try).
3. Tell nginx on /auth to just use this upstream, so it will try all authentication "servers" sequentially (thanks to the 503 return codes) until one of them succeeds OR the last one returns 401.
upstream auth {
    server 127.0.2.1:8000 max_fails=0;
    server 127.0.2.1:8001 max_fails=0;
    server 127.0.2.1:8002 max_fails=0;
}

# Method 1
server {
    listen 127.0.2.1:8000;
    location / {
        proxy_pass http://auth_server_1;  # Returns **503** on failure
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}

# Method 2
server {
    listen 127.0.2.1:8001;
    location / {
        proxy_pass http://auth_server_2;  # Returns **503** on failure
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}

# Method 3
server {
    listen 127.0.2.1:8002;
    location / {
        proxy_pass http://auth_server_3;  # Returns **401** on failure
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}

server {
    # ...
    location /api {
        auth_request /auth;
        proxy_pass http://api_impl;
    }

    location /auth {
        proxy_pass http://auth/;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URL $request_uri;
        proxy_next_upstream error timeout http_503;
    }
    # ...
}
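A note on that last /auth block: proxy_next_upstream error timeout http_503 is what makes nginx walk the upstream list, treating a 503 as "try the next server". If the auth backends can be slow, it may also be worth bounding the chain explicitly; the directives below are standard ngx_http_proxy_module options, and the values are only illustrative:

location /auth {
    proxy_pass http://auth/;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URL $request_uri;
    proxy_next_upstream error timeout http_503;
    proxy_next_upstream_tries 3;      # stop after trying three auth "servers"
    proxy_next_upstream_timeout 5s;   # or after 5 seconds in total
}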
I had a similar problem, but found a different solution that may be acceptable in certain scenarios. It essentially creates a double-proxy layer which has a significant performance penalty. The first proxy layer is for "authentication" and the second proxy layer is for "authorization".
The following configuration is untested and is to convey the concept rather than a working example.
### What I wanted
location /protected_path {
    auth_request /authenticate;  # Login to IDP
    auth_request /authorize;     # Apply Role/Group based authorization
    # Final routing logic
}
### My solution
server {
    listen 443;
    location /protected_path {
        auth_request /authenticate;
        auth_request_set $idp_data $arbitrary_data_from_idp_server;
        proxy_pass http://localhost:8000;
        proxy_set_header Arbitrary-Header $idp_data;
    }
}

server {
    listen 127.0.0.1:8000;
    location /protected_path {
        auth_request /authorize;
        # Final routing logic
    }
}

nginx proxy authentication intercept

I have a couple of services, and they stand behind an nginx instance. In order to handle authentication, in nginx I am intercepting each request and sending it to the authentication service. There, if the credentials are correct, I set a cookie which includes user-related info.
The request should now be routed to the appropriate service, with the cookie set.
Here is my nginx config:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream xyz {
        server ***;
    }

    upstream auth {
        server ***;
    }

    server {
        listen 8080;

        location ~ ^/(abc|xyz)/api(/.*)?$ {
            auth_request /auth-proxy;
            set $query $2;
            proxy_pass http://$1/api$query$is_args$args;
            proxy_set_header X-Target $request_uri;
            proxy_set_header Host $http_host;
        }

        location = /auth-proxy {
            internal;
            proxy_pass http://auth;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Target $request_uri;
            proxy_set_header Host $http_host;
            proxy_set_header X-CookieName "auth";
            proxy_set_header Cookie "auth=$cookie_auth";
            proxy_set_header Set-Cookie "auth=$cookie_auth";
            proxy_cookie_path / "/; Secure; HttpOnly";
            add_header Cookie "auth=$cookie_auth";
            add_header Set-Cookie "auth=$cookie_auth";
        }
    }
}
If I make a request to /auth-proxy with an x-target header set manually, the response contains the cookie as expected.
If I make a request to the desired target, the request is intercepted, it reaches /auth-proxy which correctly sets the cookie. However, when the request reaches the target, it does not contain the cookie.
I assume that nginx is not forwarding the cookie when doing the target request.
I've been struggling with this for the last couple of days... what am I missing?
Thanks!
I've finally figured it out. I used auth_request_set to read the cookie from the auth response, and I manually set it both on the response to the caller and on the subsequent request to the target.
Because "if" is evil in nginx configuration, I added the check in Lua.
server {
    listen 8080;

    location ~ ^/(abc|xyz)/api(/.*)?$ {
        auth_request /auth-proxy;

        # read the cookie from the auth response
        auth_request_set $cookie $upstream_cookie_auth;

        access_by_lua_block {
            if not (ngx.var.cookie == nil or ngx.var.cookie == '') then
                ngx.header['Set-Cookie'] = "auth=" .. ngx.var.cookie .. "; Path=/"
            end
        }

        # add the cookie to the target request
        proxy_set_header Cookie "auth=$cookie";

        set $query $2;
        proxy_pass http://$1/api$query$is_args$args;
        proxy_set_header X-Target $request_uri;
        proxy_set_header Host $http_host;
    }
}

Nginx as Exchange-proxy

I've been looking for a solution for this for quite a few hours already. I'm rather new to Nginx as well, so if someone could help me with a demo config, it would be superb.
1 public IP address (this is what's causing so much trouble)
Nginx as proxy
Exchange 2013
Current situation:
http: apps.domain.org, video.domain.org, geo.domain.org. Traffic on port 80 goes to the Nginx server.
https: mail.domain.org. Traffic on port 443 goes straight to Exchange 2013.
Now, we need https / SSL on our apps.domain.org.
Our firewall only checks the IP addresses and forwards traffic.
So basically, my idea is to have all traffic go to Nginx.
There, I need to detect what's destined for mail.domain.org and forward it to Exchange. Specifically, I need everything to work: OWA and Autodiscover are OK, but I'm struggling with what seems to be RPC.
Someone mentioned I should use a stream config in Nginx to manage that.
But I don't know how to differentiate, so that only mail.domain.org goes through a stream block while apps.domain.org stays in an http config.
My current config (thanks to the links below, but in particular tigunov's comment about getting Outlook Anywhere aka RPC to work) gets me further than before. It currently fails at the FolderSync attempt when I try Microsoft's Remote Connectivity Analyzer. In Outlook, the credentials box still pops up.
server {
    (server_name, SSL certs, etc.)

    # Set global proxy settings
    proxy_pass_header Date;
    proxy_pass_header Server;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Accept-Encoding "";

    keepalive_timeout 3h;
    proxy_read_timeout 3h;
    #reset_timedout_connection on;
    tcp_nodelay on;
    client_max_body_size 3G;

    #proxy_pass_header Authorization;
    proxy_pass_request_headers on;
    proxy_http_version 1.1;
    proxy_request_buffering off;
    proxy_buffering off;
    proxy_set_header Connection "Keep-Alive";
}
The test now reports everything as fine, including the ActiveSync OPTIONS step, but:
Attempting the FolderSync command on the Exchange ActiveSync session.
The test of the FolderSync command failed.
Exception details:
Message: The request was aborted: The request was canceled.
Type: System.Net.WebException
Stack trace:
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.Exchange.Tools.ExRca.Extensions.RcaHttpRequest.GetResponse()
Elapsed Time: 526 ms.
No further details to be seen in the connectivity tool.
This configuration is based on Tad DeVries' configuration found here and Daniel Kempkens' fix for autodiscover and RPC issues found here.
Note that since I don't have an Exchange environment to test against, I'm not sure if this configuration will work properly, but it's worth a try.
server {
    listen 80;
    #listen [::]:80;
    server_name mail.gwtest.us autodiscover.gwtest.us;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    #listen [::]:443 ipv6only=on;

    ssl on;
    ssl_certificate /etc/ssl/nginx/mail.gwtest.us.crt;
    ssl_certificate_key /etc/ssl/nginx/mail.gwtest.us.open.key;
    ssl_session_timeout 5m;

    server_name mail.gwtest.us;

    location / {
        return 301 https://mail.gwtest.us/owa;
    }

    proxy_http_version 1.1;
    proxy_read_timeout 360;
    proxy_pass_header Date;
    proxy_pass_header Server;
    proxy_pass_header Authorization;
    proxy_set_header Accept-Encoding "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    more_set_input_headers 'Authorization: $http_authorization';
    more_set_headers -s 401 'WWW-Authenticate: Basic realm="exch1.test.local"';

    location ~* ^/owa { proxy_pass https://exch1.test.local; }
    location ~* ^/Microsoft-Server-ActiveSync { proxy_pass https://exch1.test.local; }
    location ~* ^/ecp { proxy_pass https://exch1.test.local; }
    location ~* ^/rpc { proxy_pass https://exch1.test.local; }
    #location ~* ^/mailarchiver { proxy_pass https://mailarchiver.local; }

    error_log /var/log/nginx/owa-ssl-error.log;
    access_log /var/log/nginx/owa-ssl-access.log;
}
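Regarding the question of how to send only mail.domain.org through a stream block while keeping apps.domain.org in the http config: the stream module's ssl_preread can route TLS connections by SNI without terminating them, which lets nginx pass Exchange traffic straight through. A rough, untested sketch; the Exchange IP and the 8443 port are placeholders, and nginx's own https server for apps.domain.org would then listen on 8443:

stream {
    map $ssl_preread_server_name $tls_backend {
        mail.domain.org    192.168.1.50:443;   # Exchange 2013 (placeholder IP)
        # add autodiscover.domain.org here as well if it resolves to the same public IP
        default            127.0.0.1:8443;     # nginx's own http{} SSL server
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $tls_backend;
    }
}

With this in place, port 443 on the public IP hits the stream block first, Exchange traffic is relayed untouched, and everything else lands on the local https server that terminates SSL for apps.domain.org.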

Nginx does redirect, not proxy

I want to set up Nginx as a reverse proxy for an https service, because we have a special use case where we need to "un-https" a connection:
http://nginx_server:8080/myserver ==> https://mysecureservice
But what happens is that the actual https service isn't proxied. Nginx redirects me to the actual service, so the URL in the browser changes. I want to interact with Nginx as if it were the actual service, just without https.
This is what I have:
server {
    listen 0.0.0.0:8080 default_server;

    location /myserver {
        proxy_pass https://myserver/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
You have to use proxy_redirect to handle the redirection. From the docs:
    Sets the text that should be changed in the “Location” and “Refresh” header fields of a proxied server response. Suppose a proxied server returned the header field “Location: https://myserver/uri/”. The directive will rewrite this string to “Location: http://nginx_server:8080/uri/”.
Example:
proxy_redirect https://myserver/ http://nginx_server:8080/;
Source: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
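Applied to the server block from the question, it would look roughly like this (note that the replacement includes the /myserver prefix, since that is where the service is exposed on the proxy):

server {
    listen 0.0.0.0:8080 default_server;

    location /myserver {
        proxy_pass https://myserver/;
        # map redirects coming back from the upstream onto the proxy's own address space
        proxy_redirect https://myserver/ http://nginx_server:8080/myserver/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}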
You can set up nginx like this if you do not want the server to do redirects:
server {
    listen 80;
    server_name YOUR.OWN.DOMAIN.URL;

    location / {
        proxy_pass http://THE.SITE.URL.YOU.WANT.TO.DELEGATE/;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
For me, this config was sufficient:
events {
}

http {
    server {
        location / {
            resolver 8.8.8.8;
            proxy_pass https://www.example.com$request_uri;
        }
    }
}
(Note that the resolver directive has nothing to do with the problem in the OP; I just needed it to be able to proxy an external domain such as example.com.)
The problem for me was just that I was missing the www. in www.example.com. In the Firefox developer console, I could see the GET request to localhost coming back with a 301, so I thought that NGINX was issuing 301s instead of just mirroring example.com. Not so: in fact example.com was returning 301s to redirect to www.example.com, NGINX was dutifully mirroring those 301s, and Firefox then "changed the URL" (followed the redirect) straight from localhost to www.example.com.
I was having a similar issue. In my case, I was able to resolve it by adding a trailing slash to the proxy_pass URL:
before
server {
    location / {
        proxy_pass http://example.com/path/to/some/folder;
    }
}
after
server {
    location / {
        # added trailing slash
        proxy_pass http://example.com/path/to/some/folder/;
    }
}
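The reason the slash matters: when proxy_pass contains a URI part, nginx replaces the portion of the request URI matched by the location with that URI and appends the remainder verbatim. With the two blocks above, a request for /foo therefore maps as follows (paths shown only to illustrate the concatenation):

# before:  GET /foo  ->  http://example.com/path/to/some/folderfoo   (no separating slash)
# after:   GET /foo  ->  http://example.com/path/to/some/folder/foo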