Nginx -> Apache 2 authentication -> return to Nginx - authentication

We have an nginx server and an apache2 server.
Apache2 is configured to manage Kerberos (Active Directory) authentication.
We have a website managed by nginx with a reserved area.
I would like to know if the following is possible:
the user goes to the main site managed by nginx;
from the main site, there is a link to "/login" mapped to apache2:
location /login/ {
    proxy_pass http://apache2server/testlogin;
}
when the login is successful, apache2 is configured to send the user to another nginx webpage, using ProxyPass too:
ProxyPass /testlogin http://nginxserver/logindone.php
ProxyPassReverse /testlogin http://nginxserver/logindone.php
I wonder if this is the right solution to the problem.

The best way to implement external authentication for your NGiNX website is to use the auth_request directive.
Basically, you can protect any request by making a subrequest to any external web server. The subrequest must return an HTTP 2XX code to allow proceeding to the content; any other HTTP code returned will deny access.
To accomplish that, be sure your NGiNX has auth_request enabled (compiled with --with-http_auth_request_module). To check that, use the following command in a shell:
nginx -V 2>&1 | grep "http_auth_request_module"
Add the auth_request directive to the location you want to protect, specifying the internal location the authorization subrequest will be forwarded to:
location /system/ {
    auth_request /auth;
    #...
}
So, when a request is made to the /system/ location, NGiNX will create a subrequest to the /auth location. Now we need to create the internal /auth location. We can use the example below:
location = /auth {
    internal;
    proxy_pass http://my.app.webserver/auth_endpoint;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    #...
}
Here, we created the internal /auth location. We used the internal directive to disable external access (any external request to /auth will not be processed by this location). We also discarded the request body and cleared the Content-Length header, so nothing from the original request body is forwarded. The subrequest to http://my.app.webserver/auth_endpoint passes all of the request's cookies, so your backend application can determine whether the user has access or not.
If you need to know the originally requested URI, you can pass it in an extra HTTP header on the subrequest by adding:
proxy_set_header X-Original-URI $request_uri;
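Putting the pieces together, a minimal sketch of the whole setup could look like the one below (my.app.webserver and /auth_endpoint are placeholders, as in the snippets above):

# Protected location: every request triggers an authorization subrequest to /auth.
location /system/ {
    auth_request /auth;
    # ... your usual content or proxy directives ...
}

# Internal-only location that forwards the authorization subrequest to the external backend.
location = /auth {
    internal;
    proxy_pass http://my.app.webserver/auth_endpoint;
    proxy_pass_request_body off;                    # the auth backend does not need the body
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;   # let the backend see the original URI
}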
You can learn more about the NGiNX auth_request directive here.


How to fix incorrect nginx s3 reverse proxy paths?

I'm working on an nginx s3 reverse proxy container image to proxy frontend files (Angular apps) from s3 behind an Application Load Balancer. The frontend files are located in a folder named after the given app in the s3 bucket. These are angular apps built using the standard angular commands. The dist contents are uploaded to s3, and then the ALB route paths, along with the nginx locations, map to those app folders in s3. For example, here is my nginx conf file:
server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/ssl/nginx-server.crt;
    ssl_certificate_key /etc/ssl/nginx-server.key;
    server_name timemachine.com;
    sendfile on;
    default_type application/octet-stream;
    resolver 8.8.8.8;
    server_tokens off;

    location ~ ^/app1/(.*) {
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_ignore_headers "Set-Cookie";
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        proxy_hide_header x-amz-meta-s3cmd-attrs;
        proxy_hide_header Set-Cookie;
        proxy_set_header Authorization "";
        proxy_intercept_errors on;
        rewrite ^/app1/?$ /app1/index.html;
        proxy_pass https://<s3 bucket name here>;
        break;
    }
}
So there is a corresponding /app1 bucket folder in s3 which has the dist contents and is serving up the index.html. On the ALB, there are two route paths. The first is /app1, which redirects to https:{port}//app1/, and the second is /app1/*, which just forwards to the nginx reverse proxy container deployed via ECS Fargate.
This is not using cloudfront. The bucket is proxied internally over https and specific permissions are set on the bucket so it is accessible within the given VPC.
The angular apps have specific modules, but the issue is that since I'm not storing any of this content in the container, I can't just use try_files or set an index to make this work, since all of this is proxied from s3 and the content is accessed differently.
I can access the app with the given proxy configuration above, but for other paths, say when I navigate to the part of the app at /app1/account and then do a refresh, the page throws an access denied error on the bucket and I just get the standard xml page in the browser.
How do I get this to work with all of those other relative paths without having to add each of those paths to nginx or the ALB routes? In other words, I don't want to have to add
location /app1/account {
}
and so on, or something like that. Yes, I'm sort of new to nginx, so I'm still figuring things out.
I was expecting the above proxy to work with all paths under /app1, but I'm unsure what other route paths need to be added to the ALB, whether the regex is off, or what else needs to be added to the nginx conf file.
All that to say, when I enter this
https://timemachine.com/app1
or this,
https://timemachine.com/app1/
both work and just rewrite to index.html, which is good.
After this, when I click on another icon in the UI that directs to another path on /app1/, I get directed to the page correctly at...
https://timemachine.com/app1/news
but then on a refresh of this path, instead of getting the page with all the data that was shown when I accessed it through the UI, the url stays at https://timemachine.com/app1/news but the page defaults to the s3 bucket access denied response for that route (.xml).
The goal is just to be able to reload the pages I can already access without the UI blowing up and defaulting to the access denied message. So I would like to be able to just enter https://timemachine.com/app1/news, which will display the content, then do a refresh and see the content again.
There are various modules within the angular apps and so these are relative paths, which may be part of the problem.
NOTE: All files, aside from assets folder, are in the base app1 bucket folder. So https://<s3_bucket_name>/app1 (with app1 being the folder).
Angular's docs indicate you should use the Front Controller pattern for static files, like so:
Use try_files, as described in Front Controller Pattern Web Apps,
modified to serve index.html:
try_files $uri $uri/ /index.html;
Obviously, that won't work here (since the files aren't local to nginx) so my understanding is we're looking for equivalent logic to that for when the files are hosted elsewhere.
Route not-assets to index.html
All assets are in the /assets/ folder, so the simplest solution is to look for anything not starting with that and proxy those requests to the html file for the response:
server {
    location ~ /app1/ {
        rewrite ^/app1/(?!assets/) /index.html;
        proxy_pass https://domain/bucket/app1/;
    }
}
That regex means that:
/app1/assets/some.css gets proxied to https://domain/bucket/app1/assets/some.css
/app1/ gets proxied to https://domain/bucket/app1/index.html
/app1/something/else gets proxied to https://domain/bucket/app1/index.html
etc.
Do note that this is going to make your app respond HTTP 200 OK with html to almost any url, which may be confusing.
If there are any problems setting this up, enable the nginx debug log to see which urls requests are being proxied to, and work out how that differs from what's desired.
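As a rough illustration only, the same idea dropped into the original server block might look like the sketch below. It is untested and assumes, as above, that everything the browser fetches directly lives under /app1/assets/; proxy_pass is kept without a URI part so the (possibly rewritten) request URI is forwarded to the bucket as-is:

location ~ ^/app1/ {
    proxy_http_version 1.1;
    proxy_buffering off;
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_set_header Authorization "";
    proxy_intercept_errors on;

    # Anything under /app1/ that is not an asset request falls back to the SPA entry point.
    rewrite ^/app1/(?!assets/).*$ /app1/index.html break;

    # No URI part here: /app1/... maps directly onto the app1/ folder of the bucket.
    proxy_pass https://<s3 bucket name here>;
}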

nginx reverse proxy not setting cookie from upstream server to the client browser

I have an expressjs server to authenticate login requests from a front app built in svelte.
The front app is running on frontenddomain.com and the expressjs server is running on backenddomain.com
Here is my login post route that authenticates and sets the cookies:
app.post('/login', (req, res) => {
    // check the db, find the user, write a jwt token and put it in a cookie to send to the
    // browser
    res.cookie("accesstoken", accessToken)
    res.cookie("refreshtoken", refreshtoken)
    res.send(...)
})
This server code is deployed to an ubuntu server with Nginx running as a reverse proxy; here is my nginx server block configuration:
server {
    ...
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_cookie_domain localhost .frontenddomain.com;
        proxy_cookie_domain ~^(.+)(Domain=frontenddomain.com)(.+)$ "$1 Domain=.frontenddomain.com $3";
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name backenddomain.com www.backenddomain.com;
    root /var/www/backenddomain.com;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
When I run the server and the svelte app (front app) on my local machine, everything works (the customer provides credentials, the cookies are sent to the client browser, and upon inspecting the google dev tools I confirm that the cookies have been set correctly in the client's browser).
When I deploy my expressjs server to ubuntu (20.04) and use pm2 to run it, it starts and I can see all my console.log output. My front app runs; I go to my login page, enter credentials and click submit, and the app logs me in (because the credentials are correct and the user flag is set to true in the front app's localStorage), but NO COOKIES are set in the browser.
I read the nginx docs, and material and posts from different sites on how to set the Nginx reverse proxy cookie domain, but I was unable to fix the problem: the server issues the cookies, but they are not set in the browser; my proxy server is not passing them on.
These questions about reverse proxies and cookies come up, the posters come back and post vague answers to their own questions, and there are no other answers. It seems like there are not enough technical people out there with knowledge of this issue.
My location block has proxy_cookie_domain localhost .frontenddomain.com;
How do you configure an nginx reverse proxy to pass cookies from the upstream server on to the browser?
So it wasn't related to the nginx block configuration; it was the cookie settings. For cross-site cookies to work, they have to be set with the sameSite: "none" and secure flags. Make sure that both your backend and front end are using domains (an ip is not allowed in the latest cookie draft as of this writing).
You also need both the front and back domains to be secured (https) with an ssl certificate.
Your ufw firewall on the nginx host needs to allow https.
Cookie settings:
res.cookie("name", "value", { sameSite: "none", secure: true })
Restart nginx after updating the server code and your nginx conf in sites-available, and it should work.
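Since both domains have to be served over https, the backend server block above (which currently only listens on port 80) also needs an ssl listener. A minimal sketch, with placeholder certificate paths, might look like this:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name backenddomain.com www.backenddomain.com;

    # Placeholder paths; point these at your real certificate and key.
    ssl_certificate     /etc/ssl/backenddomain.com.crt;
    ssl_certificate_key /etc/ssl/backenddomain.com.key;

    # ... same location / block as above ...
}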

nginx authentication from external service

My goal is to allow access to one (static) web page/folder only for authenticated users.
auth_basic is NOT an option.
There are 2 servers: nginx (serving the html, css and javascript) and a REST server (doing authentication and providing the secure content).
The REST server provides an access token.
Only authenticated users should be able to access www.somesite.com/secret (hosted by nginx).
I have tried something similar to this https://www.nginx.com/resources/admin-guide/restricting-access-auth-request/
this is my config example:
location /secret/ {
    auth_request /restauth;
    auth_request_set $auth_status $upstream_status;
}

location /restauth/ {
    internal;
    proxy_pass http://localhost:8080/api/auth/login;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
With this, only a GET request is sent, but the REST service supports only POST.
How can I send a POST with the access token, and get code 200 back to allow access to the content of the protected folder?
Another interesting thing I found is nginScript, but the simple example from that site doesn't work for me.
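For what it's worth, one direction to explore (untested; it assumes the REST endpoint accepts the token from a header, and the cookie name accesstoken is made up here) is to force the subrequest method with proxy_method and forward the token explicitly:

location = /restauth {
    internal;
    proxy_method POST;                              # the auth subrequest is sent as GET otherwise
    proxy_pass http://localhost:8080/api/auth/login;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    # Hypothetical cookie name: forward the access token so the REST service can validate it.
    proxy_set_header Authorization "Bearer $cookie_accesstoken";
}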

Jenkins/Nginx - Double prompted for basic auth, why? Why is there an internal Jenkins auth?

Below is my nginx configuration file for Jenkins. Most of it is exactly as I've read in the documentation.
Config file:
upstream app_server {
    server 127.0.0.1:8080 fail_timeout=0;
}

server {
    listen 80;
    listen [::]:80 default ipv6only=on;
    server_name sub.mydomain.net;

    location ^~ /jenkins/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://app_server;
            break;
        }

        auth_basic "[....] Please confirm identity...";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
When navigating to http://sub.mydomain.net/jenkins I get prompted for basic auth with Server says: [....] Please confirm identity....
This is correct, but as soon as I enter the proper credentials I get PROMPTED AGAIN for basic auth, this time with: Server says: Jenkins.
Where is this second hidden basic_auth coming from?! It's not making any sense to me.
Hitting CANCEL on the first prompt I then correctly receive a 401 authorization required error.
Hitting CANCEL on the second basic auth ("Server says: Jenkins") I get:
HTTP ERROR 401
Problem accessing /jenkins/. Reason:
Invalid password/token for user: _____
Powered by Jetty://
Does anyone know what's possibly going on?
I found the solution to my issue by searching for Nginx used as a reverse proxy for any other application with basic_auth.
Solution was the answer found here:
https://serverfault.com/questions/511846/basic-auth-for-a-tomcat-app-jira-with-nginx-as-reverse-proxy
The line I was missing from my nginx configuration was:
# Don't forward auth to Tomcat
proxy_set_header Authorization "";
By default, it appears that after basic auth Nginx will additionally forward the auth headers to Jenkins, and this is what was causing my issue. Jenkins receives the forwarded auth headers and then thinks it needs to authorize the request itself too.
If we set our reverse proxy not to forward any authorization headers, as shown above, then everything works as it should. Nginx will prompt for basic_auth, and after successful auth we explicitly clear the auth headers when forwarding the request upstream.
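Applied to the configuration from the question, that would look something like the sketch below (untested; it simply adds the Authorization reset next to the other proxy_set_header lines):

location ^~ /jenkins/ {
    auth_basic "[....] Please confirm identity...";
    auth_basic_user_file /etc/nginx/.htpasswd;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # Don't forward the basic auth credentials to Jenkins,
    # otherwise Jenkins tries to authenticate with them itself.
    proxy_set_header Authorization "";
    proxy_redirect off;

    if (!-f $request_filename) {
        proxy_pass http://app_server;
        break;
    }
}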
I had this issue as well; in my case it was caused by having security enabled in Jenkins itself, and disabling security resolved the issue.
According to their docs:
If you do access control in Apache, do not enable security in Jenkins, as those two things will interfere with each other.
https://wiki.jenkins-ci.org/display/JENKINS/Apache+frontend+for+security
What seems to be happening is that nginx forwards the basic auth credentials to Jenkins, which then attempts to perform its own basic auth with them. I have not yet found a satisfying solution to the issue.

web proxy to URL in cookie value

I want to configure a reverse proxy server to be able to proxy a client to a new server created specifically for them.
So let's say client A logs in to http://server.com and has a cookie proxy_url=http://ec2-2323A.server.com; the solution should proxy them to that server, while they see only http://server.com on the main page.
I tried to do it with nginx after reading this: Nginx redirect if cookie present,
but I get some silly errors,
and my custom configuration does not seem to work:
location / {
    set $upstream 127.0.0.1:8080/;
    if ($http_cookie ~* "proxy_url=([^;]+)(?:;|$)") {
        set $upstream $1;
    }
    proxy_pass $upstream;
    ...
Is there any simple solution, with nginx or another server?
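As a rough sketch of the cookie-based approach only (untested): nginx exposes each cookie as a $cookie_NAME variable, and a variable-based proxy_pass needs a resolver so the upstream host can be looked up at runtime:

resolver 8.8.8.8;                 # required because the upstream host is only known at runtime

location / {
    set $upstream http://127.0.0.1:8080;      # default backend when no cookie is present
    if ($cookie_proxy_url != "") {
        set $upstream $cookie_proxy_url;      # e.g. http://ec2-2323A.server.com
    }
    proxy_pass $upstream;
    proxy_set_header Host $host;              # the upstream still sees the original host header
}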