nginx proxy authentication intercept

I have a couple of services and they stand behind an nginx instance. To handle authentication, I am intercepting each request in nginx and sending it to the authentication service. There, if the credentials are correct, I set a cookie which includes user-related info.
The request should now be routed to the appropriate service, with the cookie set.
Here is my nginx config:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    upstream xyz {
        server ***;
    }

    upstream auth {
        server ***;
    }

    server {
        listen 8080;

        location ~ ^/(abc|xyz)/api(/.*)?$ {
            auth_request /auth-proxy;
            set $query $2;
            proxy_pass http://$1/api$query$is_args$args;
            proxy_set_header X-Target $request_uri;
            proxy_set_header Host $http_host;
        }

        location = /auth-proxy {
            internal;
            proxy_pass http://auth;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Target $request_uri;
            proxy_set_header Host $http_host;
            proxy_set_header X-CookieName "auth";
            proxy_set_header Cookie "auth=$cookie_auth";
            proxy_set_header Set-Cookie "auth=$cookie_auth";
            proxy_cookie_path / "/; Secure; HttpOnly";
            add_header Cookie "auth=$cookie_auth";
            add_header Set-Cookie "auth=$cookie_auth";
        }
    }
}
If I make a request to /auth-proxy with an X-Target header set manually, the response contains the cookie as expected.
If I make a request to the desired target, the request is intercepted, it reaches /auth-proxy which correctly sets the cookie. However, when the request reaches the target, it does not contain the cookie.
I assume that nginx is not forwarding the cookie when doing the target request.
I've been struggling with this for the last couple of days... what am I missing?
Thanks!

I've finally figured it out. I used auth_request_set to read the cookie from the auth response, and I manually set it both on the response to the caller and on the subsequent request to the target.
Because "if" is evil, I've added the check in Lua.
server {
    listen 8080;

    location ~ ^/(abc|xyz)/api(/.*)?$ {
        auth_request /auth-proxy;

        # read the cookie from the auth response
        auth_request_set $cookie $upstream_cookie_auth;

        access_by_lua_block {
            if not (ngx.var.cookie == nil or ngx.var.cookie == '') then
                ngx.header['Set-Cookie'] = "auth=" .. ngx.var.cookie .. "; Path=/"
            end
        }

        # add the cookie to the target request
        proxy_set_header Cookie "auth=$cookie";

        set $query $2;
        proxy_pass http://$1/api$query$is_args$args;
        proxy_set_header X-Target $request_uri;
        proxy_set_header Host $http_host;
    }
}
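A Lua-free variant may also be possible (untested sketch; the variable name $auth_set_cookie is my own): add_header skips header fields whose value evaluates to an empty string, so a map can turn "no cookie in the auth response" into "no Set-Cookie header".

```nginx
# Hypothetical alternative without access_by_lua_block. The map produces an
# empty string when no auth cookie was captured, and add_header silently
# drops headers whose value is empty.
map $cookie $auth_set_cookie {
    ""      "";
    default "auth=$cookie; Path=/";
}

server {
    listen 8080;

    location ~ ^/(abc|xyz)/api(/.*)?$ {
        auth_request /auth-proxy;
        auth_request_set $cookie $upstream_cookie_auth;
        add_header Set-Cookie $auth_set_cookie;
        proxy_set_header Cookie "auth=$cookie";
        set $query $2;
        proxy_pass http://$1/api$query$is_args$args;
    }
}
```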

Related

Bad gRPC response. HTTP status code: 502 on https with nginx. Working fine on local http

I have deployed a sample gRPC service on my Ubuntu server with .NET Core 3.1. I am able to connect using a plain HTTP URL, but when trying to access it via the reverse proxy I am getting a bad gRPC response error.
My nginx setting is like:
server {
    listen 80;
    server_name abc.def.net;

    location / {
        proxy_pass http://10.10.10.10:8086/;
        proxy_next_upstream error http_502;
        proxy_redirect off;
        server_tokens off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 25m;
        client_body_buffer_size 256k;
        proxy_connect_timeout 180;
        proxy_send_timeout 180;
        proxy_read_timeout 180;
        proxy_buffer_size 8k;
        proxy_buffers 8 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_buffering on;
        access_log /var/log/nginx/abc.def.net_access_log;
        error_log /var/log/nginx/abc.def.net_error_log notice;
    }
}
My code for accessing the gRPC service is like:
var serverAddress = "https://abc.def.net/";
// var serverAddress = "http://10.10.10.10:8086/";
//AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);
var channel = GrpcChannel.ForAddress(serverAddress);
var client = new CreditRatingCheck.CreditRatingCheckClient(channel);
var creditRequest = new CreditRequest { CustomerId = "id0201", Credit = 7000 };
var reply = client.CheckCreditRequest(creditRequest);
Console.WriteLine($"Credit for customer {creditRequest.CustomerId} {(reply.IsAccepted ? "approved" : "rejected")}!");
Console.WriteLine("Press any key to exit...");
Console.ReadKey();
Only adding a config example showing how we used it (per the request of one of the commenters above).
In this config:
We are using variables so nginx doesn't fail if the container is offline.
We detect whether the content type is gRPC and route the request to the correct internal service.
We are using a sub-path here, so we had to rewrite the URL so dotnet would handle the request properly (it doesn't map correctly without that rewrite under a sub-path).
Replace [serviceName] with the sub-path if needed, or remove it if not used.
location /[serviceName]/ {
    set $internal_service http://[containerName]:6009;
    set $internal_grpc_service grpcs://[containerName]:5009;

    if ($content_type = 'application/grpc') {
        rewrite ^/[serviceName]/(.*) /$1 break;
        grpc_pass $internal_grpc_service;
    }

    proxy_set_header X-Forwarded-Prefix /[serviceName];
    proxy_pass $internal_service;
}
The problem was in the nginx file: we have to use grpc_pass instead of proxy_pass.
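For reference, a minimal grpc_pass setup could look like the sketch below (untested; it assumes the backend at 10.10.10.10:8086 speaks plaintext gRPC, and note that gRPC also requires HTTP/2 on the listening socket):

```nginx
server {
    listen 80 http2;            # gRPC needs HTTP/2 end-to-end
    server_name abc.def.net;

    location / {
        # grpc:// for a plaintext backend, grpcs:// for TLS
        grpc_pass grpc://10.10.10.10:8086;
    }
}
```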

Troubleshooting an application behind an nginx reverse proxy, as POST/PUT requests are answered with Error 400 (Bad Request)

I'm trying to host my Angular/ASP.NET Core 3.1 application on Linux for the first time.
Nginx listens on port 80, serves the static files, and acts as a reverse proxy for the API part, which is passed to the .NET/Kestrel server.
The problem is that I systematically get a 400 status code (Bad Request) on any web API request containing a body, like POST & PUT, but GET is OK.
I added some logs through a middleware, just to see if I can get the requests. Basically, something like:
app.Use(async (context, next) =>
{
    context.Request.EnableBuffering();
    string reqBody;
    // leaveOpen: true so disposing the reader doesn't close the request stream
    using (var reader = new StreamReader(context.Request.Body, leaveOpen: true))
    {
        reqBody = await reader.ReadToEndAsync();
        context.Request.Body.Seek(0, SeekOrigin.Begin);
        ms_oLogger.Debug($"Incoming request: METHOD={context.Request.Method} PATH={context.Request.Path} BODY=\"{reqBody}\"");
    }
    await next();
});
This only logs stuff for GET requests; nothing appears for the problematic PUT/POST requests... Can I conclude that this is purely an nginx problem?
I also enabled the logs on nginx for the given "/api" location, but I cannot tell what happens... How can I know which tier generated the 400 status code?
EDIT1: I started a blank new project with just a bare Web API containing one GET and one POST method, to check whether something was wrong with my application, but I still get the problem.
So I set up a new Ubuntu server (this time Ubuntu Server instead of Desktop) and now it works!
I compared the configurations but could not figure out what was different.
But my initial question is still valid: how can I troubleshoot where the problem comes from?
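One generic way to narrow down which tier produced the 400 is to have nginx log its view of the upstream next to the final status (a sketch; the log format name upstream_debug is arbitrary). If $upstream_status is logged as "-", nginx generated the response itself and the request never reached Kestrel.

```nginx
# Log both the status returned to the client and the status nginx received
# from the upstream; "-" in upstream_status means nginx answered by itself.
log_format upstream_debug '$remote_addr "$request" '
                          'status=$status upstream_status=$upstream_status '
                          'request_length=$request_length';

server {
    # ...
    access_log /var/log/nginx/fusion_debug.log upstream_debug;
}
```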
EDIT2: This is my default.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location /fusion {
        root /opt/fichet/WebUI/NgApp;
    }

    location /fusion/api {
        proxy_pass http://localhost:5000/api;
        error_log /var/log/nginx/fusion_error_logs.log debug;
        access_log /var/log/nginx/fusion_access.log;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
It seems that no firewall is enabled ("sudo ufw status verbose" tells us that it is "inactive")
Remove these lines from your nginx config.
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
These headers are used for WebSocket connections and shouldn't be present for non-WebSocket requests. My guess is that your upstream server doesn't mind them on GET requests but does on POST/PUT, for some reason.
If you are not using WebSockets, you can leave them out entirely.
If you are using WebSockets, you need nginx to add or omit these headers based on whether the request is a WebSocket upgrade. Something like this should work:
http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        ...

        location /fusion/api {
            proxy_pass ...
            ...
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
See here for more info.

Is there a way to use multiple auth_request directives in nginx?

I would like to use multiple auth_request directives in order to try authentication with multiple servers - i.e. if the first auth server returns 403, try the second auth server. I tried a straightforward approach like this:
location /api {
    satisfy any;
    auth_request /auth-1/;
    auth_request /auth-2/;
    proxy_pass http://api_impl;
}

location /auth-1/ {
    internal;
    proxy_pass http://auth_server_1;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

location /auth-2/ {
    internal;
    proxy_pass http://auth_server_2;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
But nginx wouldn't parse the config file. I received the response
nginx: [emerg] "auth_request" directive is duplicate
Is there a way to achieve such functionality in nginx?
Here is my solution, after finding this question on Google while looking for the same thing:
Set up an upstream that passes to loopback nginx servers.
Make those loopback servers do what your /auth-1 and /auth-2 endpoints were doing, except that they return 503 on authentication error (except the last in the chain, which still returns 401 to signal to nginx that there are no more servers to try).
Tell nginx on /auth to just use this upstream, so it tries all authentication "servers" sequentially (thanks to the 503 return codes) until one of them succeeds OR the last one returns 401.
upstream auth {
    server 127.0.2.1:8000 max_fails=0;
    server 127.0.2.1:8001 max_fails=0;
    server 127.0.2.1:8002 max_fails=0;
}

# Method 1
server {
    listen 127.0.2.1:8000;

    location / {
        proxy_pass http://auth_server_1; # Returns **503** on failure
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}

# Method 2
server {
    listen 127.0.2.1:8001;

    location / {
        proxy_pass http://auth_server_2; # Returns **503** on failure
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}

# Method 3
server {
    listen 127.0.2.1:8002;

    location / {
        proxy_pass http://auth_server_3; # Returns **401** on failure
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}

server {
    # ...

    location /api {
        auth_request /auth;
        proxy_pass http://api_impl;
    }

    location /auth {
        proxy_pass http://auth/;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
        proxy_next_upstream error timeout http_503;
    }

    # ...
}
I had a similar problem but found a different solution that may be acceptable in certain scenarios. It essentially creates a double proxy layer, which carries a significant performance penalty: the first proxy layer handles "authentication" and the second handles "authorization".
The following configuration is untested and is to convey the concept rather than a working example.
### What I wanted

location /protected_path {
    auth_request /authenticate; # Login to IDP
    auth_request /authorize;    # Apply Role/Group based authorization
    # Final routing logic
}
### My solution

server {
    listen 443;

    location /protected_path {
        auth_request /authenticate;
        auth_request_set $idp_data $arbitrary_data_from_idp_server;
        proxy_pass http://localhost:8000;
        proxy_set_header Arbitrary-Header $idp_data;
    }
}

server {
    listen 127.0.0.1:8000;

    location /protected_path {
        auth_request /authorize;
        # Final routing logic
    }
}

nginx auth_request to remote authentication script

I'm trying to set up an nginx reverse proxy in front of some internal servers, with auth_request to protect them from unauthorized users. I have an authentication script running at 192.168.1.101/scripts/auth/user.php which is accessed inside the /auth block. The problem is that I'd like to use a named location rather than a matching URI, so that there is no risk of URI collision with the internal service (which I don't control). The following works:
server {
    listen 80;
    server_name private.domain.com;

    location /auth {
        proxy_pass http://192.168.1.101/scripts/auth/user.php;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }

    location / {
        auth_request /auth;
        proxy_pass http://internal.domain.com;
    }
}
I'd like to replace the /auth with @auth; however, when I do, nginx throws an error during reload. I've read that the fix is to replace the proxy_pass inside the auth location with just the IP address, but when I do that the auth_request never makes it to the script. Any thoughts on the correct way to proceed with this configuration?
Due to nginx restrictions, named locations can't be used for subrequests. You can prevent outside access to the auth location with the internal config option. Try this config:
location = /scripts/auth/user.php {
    internal;
    proxy_pass http://192.168.1.101;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}

location / {
    auth_request /scripts/auth/user.php;
    proxy_pass http://internal.domain.com;
}

How to set response as header value in Nginx auth

Pardon me if I am asking a poorly worded question. I am thinking of using Nginx as a proxy layer for my API. Here is what I want to do:

NGINX -------- auth request ----> AUTH PROXY
  |
  | <--- 200 + Response <------ SUCCESS
  |
  +----> underlying request + Response from auth call ----> BACKEND SERVER

The problem is that I am not able to figure out how to take a value from the auth response and add it as a header in nginx auth.
This is my conf:
location / {
    auth_request /_auth;
    auth_request_set $user <Response Value>;
    proxy_set_header x-user $user;
    proxy_pass http://backend_server;
}

location = /_auth {
    internal;
    proxy_pass https://auth;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}