Azure AD WebApp behind reverse proxy receives 502 Bad Gateway - asp.net-core

I have an ASP.NET Core app running on a server behind an nginx reverse proxy.
The reverse proxy forwards xxx.mydomain.com to https://localhost:5000. If I use Azure AD for authentication, I get a 502 Bad Gateway after the sign-in procedure. The callback path (/signin-oidc) seems correct, and I added the full address to the portal.
EDIT:
I was able to get the nginx log from the server and I get the following error:
2017/03/05 22:13:20 [error] 20059#20059: *635 upstream sent too big header while reading response header from upstream, client: xx.xx.xxx.xxx, server: xxx.mydomain.com, request: "POST /signin-oidc HTTP/1.1", upstream: "https://192.168.3.20:5566/signin-oidc", host: "xxx.mydomain.com", referrer: "https://login.microsoftonline.com/5712e004-887f-4c52-8fa1-fcc61882e0f9/oauth2/authorize?client_id=37b8827d-c501-4b03-b86a-7eb69ddf9a8d&redirect_uri=https%3A%2F%2...ch%2Fsignin-oidc&response_type=code%20id_token&scope=openid%20profile&response_mode=form_post&nonce=636243452000653500.NzRjYmY2ZTMtOTcyZS00N2FlLTg5NGQtMTYzMDJi..."
As I read in many other posts, I tried increasing the buffer sizes and similar settings, but none of that worked.
I am out of ideas about where to look. Any ideas?

To answer this question: it was the buffer size set in the nginx reverse proxy.
The problem was that I was running this on my Synology, and after every reboot the nginx settings are reset. So what I ended up doing was writing a small bash script that runs after the reboot, copies my edited settings back, and restarts the reverse proxy.
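The restore script mentioned above might look something like this. This is only a sketch: the backup location is an assumption, and the live path and `synoservicectl` call match what the other answer below describes for Synology DSM.

```shell
#!/bin/sh
# Hypothetical boot-time restore of a saved nginx override on Synology DSM.
# The backup path is an assumption; adjust to wherever you keep your copy.

restore_nginx_conf() {
    src="$1"   # backed-up config, e.g. /volume1/backup/custom.conf
    dst="$2"   # live location, e.g. /usr/local/etc/nginx/sites-enabled/custom.conf
    cp "$src" "$dst" || return 1

    # Restart the reverse proxy so the restored settings take effect.
    # synoservicectl only exists on Synology DSM, so guard the call.
    if command -v synoservicectl >/dev/null 2>&1; then
        synoservicectl --restart nginx
    fi
}
```

Registered as a boot-up triggered task in DSM's Task Scheduler, it would be invoked as `restore_nginx_conf /volume1/backup/custom.conf /usr/local/etc/nginx/sites-enabled/custom.conf`.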

I had the same issue with a Synology NAS as the reverse proxy for the application using Azure AD.
What I did:
Created a file under /usr/local/etc/nginx/sites-enabled named custom.conf with the following content:
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
Values should be adjusted if needed; these worked fine for me with Azure AD.
All files under that directory are loaded by nginx.
Simply restart the nginx service using:
synoservicectl --restart nginx

Related

Problems with HTTP/2 on nginx for Windows?

I am using nginx 1.17.10.1 Unicorn build from http://nginx-win.ecsds.eu/ and Apache/2.4.43 build from Apachelounge on Windows Server 2012 R2.
Nginx serves static files and proxies Apache responses for PHP scripts. Everything was fine until recently.
Twice a day, without any distinct reason, the websites stop responding. Memory/CPU/network usage is OK. Apache starts logging lines like
XX.XX.XX.XX 0ms [01/Jul/2020:05:05:20 -0700] "-" 408 - "-" "-"
for each request.
Nginx log shows
2020/07/01 06:04:54 [error] 11800#12192: *5002230 WSARecv() failed (10053: An established connection was aborted by the software in your host machine) while reading response header from upstream, client: YY.YY.YY.YY, server: example.com, request: "GET /the/url/here HTTP/2.0", upstream: "http://XX.XX.XX.XX:8181/the/url/here", host: "example.com"
Server reboot doesn't help. I can connect to the backend directly and it serves the response without any problem.
The only way I could resolve the problem was to switch HTTP/2 off in nginx configuration.
So what can cause this behavior?
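The workaround mentioned in the question (disabling HTTP/2) is a one-flag change to the listen directive. A hedged sketch, with the backend port taken from the log above and everything else illustrative:

```nginx
server {
    # was: listen 443 ssl http2;
    listen 443 ssl;                        # fall back to HTTP/1.1 over TLS
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8181;  # Apache backend, port from the error log
    }
}
```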

SSL_CLOSE_NOTIFY Not received by client when nginx proxy is used

I am using an Nginx proxy server as a reverse proxy and HTTPS load balancer. Clients connect to the backend server through the reverse proxy in a load-balanced environment. I have set up the correct HTTPS configuration (with SSL certificates and so on) so that my SSL communication goes through the proxy. The only problem I am facing is that the client doesn't receive SSL_CLOSE_NOTIFY when the server disconnects the connection gracefully (in my case, the server always disconnects the connection). My client and server run fine on their own, but with the nginx proxy in between, the SSL close notify is not received by the client.
I found the solution, so I am copying it here.
This was happening because the connection was closed forcibly by the server before the client received SSL_CLOSE_NOTIFY, which in turn was because proxy_read_timeout and client_body_timeout were missing from nginx.conf.
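A hedged sketch of the two directives the answer identifies as missing; the values and the upstream name are illustrative, not tuned recommendations:

```nginx
server {
    client_body_timeout 60s;        # how long to wait for the client request body

    location / {
        proxy_pass https://backend; # "backend" is a placeholder upstream
        proxy_read_timeout 60s;     # how long to wait for the upstream response
                                    # before nginx tears the connection down
    }
}
```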

Google Cloud Load Balancer - 502 - Unmanaged instance group failing health checks

I currently have an HTTPS Load Balancer setup operating with a 443 Frontend, Backend and Health Check that serves a single host nginx instance.
When navigating directly to the host via browser the page loads correctly with valid SSL certs.
When trying to access the site through the load balancer IP, I receive a 502 Server Error message. I check the Google logs and I notice "failed_to_pick_backend" errors at the load balancer. I also notice that it is failing health checks.
Some digging around leads me to these two links: https://cloudplatform.googleblog.com/2015/07/Debugging-Health-Checks-in-Load-Balancing-on-Google-Compute-Engine.html
https://github.com/coreos/bugs/issues/1195
Issue #1 - Not sure if google-address-manager is running on the server
(RHEL 7). I do not see an entry for the HTTPS load balancer IP in the
routes. The Google SDK is installed. This is a Google-provided image
and if I update the IP address in the console, it also gets updated on
the host. How do I check if google-address-manager is running on
RHEL7?
[root@server]# ip route ls table local type local scope host
10.212.2.40 dev eth0 proto kernel src 10.212.2.40
127.0.0.0/8 dev lo proto kernel src 127.0.0.1
127.0.0.1 dev lo proto kernel src 127.0.0.1
Output of all google services
[root@server]# systemctl list-unit-files
google-accounts-daemon.service enabled
google-clock-skew-daemon.service enabled
google-instance-setup.service enabled
google-ip-forwarding-daemon.service enabled
google-network-setup.service enabled
google-shutdown-scripts.service enabled
google-startup-scripts.service enabled
Issue #2: Not receiving a 200 OK response. The certificate is valid
and the same on both the LB and server. When running curl against the
app server I receive this response.
root@server.com curl -I https://app-server.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Thoughts?
You should add firewall rules for the health check service (https://cloud.google.com/compute/docs/load-balancing/health-checks#health_check_source_ips_and_firewall_rules) and make sure that your backend service listens on the load balancer IP (easiest is to bind to 0.0.0.0). This is definitely true for an internal load balancer; I'm not sure about HTTPS with an external IP.
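Such a firewall rule could be created with gcloud along these lines; the rule name and network are placeholders, while the source ranges are Google's documented health-check ranges (also confirmed in the update below):

```shell
# Allow Google's health-check source ranges to reach the backends on 443.
# Rule name and network are illustrative; adjust to your project.
gcloud compute firewall-rules create allow-gcp-health-checks \
    --network default \
    --allow tcp:443 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16
```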
A couple of updates and lessons learned:
I have found out that google-address-manager is now deprecated and replaced by google-ip-forwarding-daemon, which is running.
[root@server ~]# sudo service google-ip-forwarding-daemon status
Redirecting to /bin/systemctl status google-ip-forwarding-daemon.service
google-ip-forwarding-daemon.service - Google Compute Engine IP Forwarding Daemon
Loaded: loaded (/usr/lib/systemd/system/google-ip-forwarding-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-12-22 20:45:27 UTC; 17h ago
Main PID: 1150 (google_ip_forwa)
CGroup: /system.slice/google-ip-forwarding-daemon.service
└─1150 /usr/bin/python /usr/bin/google_ip_forwarding_daemon
There is an active firewall rule allowing IP ranges 130.211.0.0/22 and 35.191.0.0/16 for port 443. The target is also properly set.
Finally, the health check was using the default "/" path. The developers had put authentication in front of the site during the development process. If I bypassed the SSL cert error, I received a 401 Unauthorized when running curl. This was the root cause of the issue we were experiencing. To remedy it, we modified the nginx basic authentication configuration to disable authentication on a new route (e.g. /health).
Once the nginx configuration was updated and the health check was pointed at the new /health route, we received valid 200 responses. This allowed the health check to report healthy instances and allowed the LB to pass traffic through.
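A hedged sketch of that nginx change: the site stays behind basic auth, while a dedicated /health route is exempted for the load balancer's checks. File paths and the realm string are illustrative.

```nginx
server {
    listen 443 ssl;
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;  # path is illustrative

    location = /health {
        auth_basic off;      # the health checker sends no credentials
        return 200 'ok';
    }
}
```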

502 bad gateway with Sails and SSL

I have a Sails application that is hosted on DigitalOcean via dokku. Everything runs and deploys fine, and if I navigate to my domain, I can see that the app is working.
Now I have added a TLS certificate (so that my app is accessible via HTTPS) by:
Creating my private key and CSR request.
Using them to get a certificate from a CA.
Adding my private key and issued certificate to config/local.js
Tarballing the key and certificate and adding them to dokku via dokku certs:add
After all that, when I push my app to dokku, it boots just fine without any errors during the deployment phase. From the buildpack logs I can clearly see that upon deployment my app should be accessible via HTTPS:
...
-----> Creating https nginx.conf
-----> Running nginx-pre-reload
Reloading nginx
-----> Setting config vars
DOKKU_APP_RESTORE: 1
-----> Shutting down old containers in 60 seconds
=====> c302066ebd1ecc0ac5323c3cbbcaf9132eebf905f5616e5b4407cecf2b316969
=====> Application deployed:
http://my-domain-here.com
https://my-domain-here.com
The only problem is that when I navigate to my domain, I get a "502 Bad Gateway" error in the browser, and if I look at the app's nginx error log, I can see the following errors there:
2016/07/14 03:09:30 [error] 7827#0: *391 upstream prematurely closed connection while reading response header from upstream, client: --hidden--, server: my-domain-here.com, request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:5000/", host: "getmocky.com"
What is wrong? How to fix it?
OK, I have figured it out. If you read closely about deployment in Sails, you can see a note like:
don't worry about configuring Sails to use an SSL certificate. SSL will almost always be resolved at your load balancer/proxy server, or by your PaaS provider
What this means is that I have to exclude step 3 (adding the key and certificate to config/local.js) from my list above; after that, everything works.

Running Fiddler as a Reverse Proxy for HTTPS server

I have the following situation: 2 hosts, one is a client and the other an HTTPS server.
Client (:<brwsr-port>) <=============> Web server (:443)
I installed Fiddler on the server so that I now have Fiddler running on my server on port 8888.
The situation i would like to reach is the following:
|Client (:<brwsr-port>)| <===> |Fiddler (:8888) <===> Web server (:443)|
|-Me-------------------| |-Server--------------------------------|
From my computer I want to contact Fiddler which will redirect traffic to the web server. The web server however uses HTTPS.
On the server, I set up Fiddler to handle and decrypt HTTPS sessions. I was asked to install Fiddler's fake CA certificate on the server, and I did. I also inserted the script suggested by the Fiddler wiki page to redirect HTTPS traffic:
// HTTPS redirect -----------------------
FiddlerObject.log("Connect received...");
if (oSession.HTTPMethodIs("CONNECT") && (oSession.PathAndQuery == "<server-addr>:8888")) {
    oSession.PathAndQuery = "<server-addr>:443";
}
// --------------------------------------
However, when I try https://myserver:8888/index.html, it fails!
Failure details
When using Fiddler on the client, I can see that the CONNECT request starts, but the session fails because the response is HTTP error 502. It looks like no one is listening on port 8888. In fact, if I stop Fiddler on the server, I get the same situation: 502 Bad Gateway.
Please note that when I try https://myserver/index.html and https://myserver:443/index.html everything works!
Question
What am I doing wrong?
Is it possible that...?
I thought that, since TLS/SSL works on port 443, I should have Fiddler listen there and move my web server to another port, like 444 (I should probably then set an HTTPS binding on port 444 in IIS). Is that correct?
If Fiddler isn't configured as the client's proxy and is instead running as a reverse proxy on the Server, then things get a bit more complicated.
Running Fiddler as a Reverse Proxy for HTTPS
Move your existing HTTPS server to a new port (e.g. 444)
Inside Tools > Fiddler Options > Connections, tick Allow Remote Clients to Connect. Restart Fiddler.
Inside Fiddler's QuickExec box, type !listen 443 ServerName where ServerName is whatever the server's hostname is; for instance, for https://Fuzzle/ you would use fuzzle for the server name.
Inside your OnBeforeRequest method, add:
if ((oSession.HostnameIs("fuzzle")) &&
    (oSession.oRequest.pipeClient.LocalPort == 443))
{
    oSession.host = "fuzzle:444";
}
Why do you need to do it this way?
The !listen command instructs Fiddler to create a new endpoint that will perform an HTTPS handshake with the client upon connection; the default proxy endpoint doesn't do that, because when a proxy receives a connection for HTTPS traffic it gets an HTTP CONNECT request instead of a handshake.
I just ran into a similar situation where I have VS2013 (IISExpress) running a web application on HTTPS (port 44300) and I wanted to browse the application from a mobile device.
I configured Fiddler to "act as a reverse proxy" and "allow remote clients to connect" but it would only work on port 80 (HTTP).
Following on from EricLaw's suggestion, I changed the listening port from 8888 to 8889 and ran the command "!listen 8889 [host_machine_name]", and bingo, I was able to browse my application over HTTPS on port 8889.
Note: I had previously entered the forwarding port number into the registry (as described here), so Fiddler already knew which port to forward the requests to.