Forward HTTPS client's headers to the backend application - SSL

I'm using nginx as a reverse proxy to a backend application.
- The clients connect to nginx with certificate-A.pem, and nginx uses an intermediate CA chained with the root CA to validate the clients (ALL OK).
Now comes the problem:
- nginx must forward the requests to the backend application using the same headers used by the real host (containing certificate-A.pem). The packets forwarded to the backend application must be identical to the client packets received by nginx.
Why? Because I have a very large number of clients to manage (each one with a different certificate issued by the same intermediate CA), the backend needs the certificate (it is used to do stuff), but I want to pre-verify the certificate with nginx (which is much faster).
I've tried different configurations, but I can't figure out how to proxy_set the correct header:
upstream proxy_server {
    server 127.0.0.1:8443;  # my backend application
}

server {
    listen 443 ssl;
    listen [::]:443;
    ssl_certificate /home/mender/Projects/utility-scripts/gwsw-root-gen/cert_tree/intermediate_CA/gwsw/x509-med_ca_gwsw.pem;
    ssl_certificate_key /home/mender/Projects/utility-scripts/gwsw-root-gen/cert_tree/intermediate_CA/gwsw/x509-med_ca_gwsw-key.pem;

    location / {
        proxy_pass https://proxy_server;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Custom-Referrer $http_x_custom_referrer;
    }
}
The certificate cannot be verified.

The packets forwarded to the backend application must be identical to the client packets received by nginx.
If this is the case, I believe you just need to use nginx in stream mode, acting as a pure TCP proxy. I don't know that you'd be able to do any validation on the nginx side, however; it'll be purely up to the backend application.
It's not possible (I don't think) to both have nginx terminate the SSL connection and also pass the original encrypted data on to the backend.
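For reference, a minimal sketch of that stream-mode setup, assuming nginx is built with the stream module; the backend address is taken from the question, everything else is an assumption:

stream {
    upstream backend_tls {
        # Backend from the question; it now receives the raw TLS traffic.
        server 127.0.0.1:8443;
    }

    server {
        # Plain TCP listener - no "ssl" here, nginx never terminates the TLS session.
        listen 443;
        proxy_pass backend_tls;
    }
}

The stream block lives at the top level of nginx.conf, outside any http block. With it the TLS handshake (including the client certificate) reaches the backend untouched, so the backend itself has to verify the certificate against the intermediate/root CA.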

Related

Can't run ASP.NET Core app on AWS EC2 Ubuntu

Tried many step-by-step guides, but still unable to run a very simple ASP.NET Core 5.0 app on Ubuntu Linux 20.04 on AWS EC2.
What I did:
Configured Inbound rules for the instance:
80 TCP 0.0.0.0/0
80 TCP ::/0
22 TCP 0.0.0.0/0
443 TCP 0.0.0.0/0
443 TCP ::/0
Installed the .NET SDK on the instance
Installed NGINX
Configured NGINX with these settings:
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
The syntax of the config file is correct
Reloaded NGINX
The status of NGINX is Active/Running
When I run the application it shows this:
So, it shows that it is OK and listening on ports 5000/5001
And finally, when I try to access the instance in the browser by IP like this:
http://33.333.333:80
in the console I see this:
but the app does not come up in the browser - I see ERR_CONNECTION_TIMED_OUT and the page is redirected to https://3.33.33.333:5001/
In Startup.cs, in the Configure method, at the very beginning, I have this:
app.UseForwardedHeaders(new ForwardedHeadersOptions { ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto });
So, the request from the browser definitely reaches the server, but the rest is not working.
Which step am I missing?
I assume that your goal is to enable HTTPS/TLS access to the app. There are two levels of response I can provide:
Without nginx, everything would work fine on ports 5000 and 5001. This is confirmed by the result of your attempt to run wget https://localhost:5001 --no-check-certificate, which was successful (except for the certificate trust issue).
So to run the app without nginx, it seems to be enough to change the ports to 80 and 443 in your ASP.NET Core config file and solve the TLS certificate trust issue. To learn more, refer to the Microsoft Docs on development certificates for ASP.NET Core. At the same time, from the question it looks like you need to refresh the general theory on HTTPS, TLS and certificate trust.
For some reason, you'd like to include nginx in the setup. In this case you need to consider the items below (a rough config sketch follows the list):
1) Potentially reconsider your design choice: which component (nginx or the app) is responsible for redirecting users from port 80 to port 443? This can be done in nginx and will remove this concern from your app.
2) Enable proxying of requests from port 443 to 5001 in the nginx config file. The nginx.conf included in your question doesn't contain the necessary directives.
3) Configure a trusted TLS certificate in the nginx config file. The question of trust is a broad one and the answer depends on the goals of the deployment you are creating. Is it for development, testing or production?
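As a rough sketch of items 1-3 (the hostname and certificate paths are placeholders, and the app's HTTPS endpoint on 5001 is assumed to keep using the untrusted development certificate):

# Item 1: redirect plain HTTP to HTTPS at the nginx level.
server {
    listen 80;
    server_name example.com;                               # placeholder hostname
    return 301 https://$host$request_uri;
}

# Items 2 and 3: terminate TLS with a trusted certificate and proxy to the app on 5001.
server {
    listen 443 ssl;
    server_name example.com;                               # placeholder hostname
    ssl_certificate     /etc/ssl/certs/example.com.pem;    # placeholder path
    ssl_certificate_key /etc/ssl/private/example.com.key;  # placeholder path

    location / {
        proxy_pass https://localhost:5001;
        proxy_ssl_verify off;    # Kestrel's development certificate is not trusted by nginx
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Alternatively, nginx could proxy to the plain-HTTP endpoint on port 5000 instead, which avoids dealing with the development certificate on the proxied connection altogether.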

How to configure SSL passthrough on NGINX when an NGINX reverse proxy is introduced in front of an existing setup? [duplicate]

This question already has answers here:
SSL Pass-Through in Nginx Reverse proxy?
(2 answers)
Nginx TCP forwarding based on hostname
(3 answers)
Closed 3 years ago.
I have an internal LAMP (Ubuntu 18.04) server that I use for various personal projects. It has always been exposed directly on ports 80 and 443. It hosts 4 sites (Apache virtual hosts) and I use CloudFlare full SSL for the domains. I issued the Let's Encrypt certs using certbot. This was all done following tutorials online as I've never been a sysadmin.
Last night a friend put an NGINX server up, and all traffic on ports 80 and 443 now goes to it instead. We're working on some projects together and now have several servers in my network, hence the nginx reverse proxy. I have been asked to simply use SSL passthrough to re-enable access to my sites.
I know where the config files are located and I know how to restart the nginx service, but that's about it. I have never worked with NGINX and I have no idea what configuration to use or how to proceed.
The answer you are looking for is here:
https://reinout.vanrees.org/weblog/2017/05/02/https-behind-proxy.html
In summary, here are the config options you need:
server {
    listen 443 ssl;
    server_name sitename.example.org;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass https://internal-server-name;
        proxy_http_version 1.1;
    }

    ....
    ssl_certificate /etc/ssl/certs/wildcard.example.org.pem;
    ssl_certificate_key /etc/ssl/private/wildcard.example.org.key;
}
Of course, you'll need to set up SSL on your proxied-to server as well, but this is the basic idea.
Noteworthy is the fact that nginx by default will proxy using HTTP/1.0; that's why you need the proxy_http_version directive.
I suggest you read the full blog post for more background.

HTTP/HTTPS redirect problem with nginx and Bitnami's dockerized Osclass

I'm having a problem with an nginx configuration which I use as a reverse proxy for different containerized applications.
Basically, nginx is listening on port 80 and redirects every request to HTTPS. On different subdomains I then proxy pass to the ports of the applications.
For example, my GitLab config:
server {
    listen 443 ssl; # managed by Certbot
    server_name gitlab.foo.de www.gitlab.foo.de;

    location / {
        proxy_pass http://localhost:1080;
    }
}
I'm proxying to the GitLab HTTP (not HTTPS) port. The system's nginx is taking care of SSL; I don't care whether the traffic behind it is encrypted or not.
This has been working for every app until yesterday.
I'd like to test https://github.com/bitnami/bitnami-docker-osclass for a non-profit association. Same config as above, but it is not working as intended.
Resources are downloaded via HTTPS while the main page gets a redirect to HTTP.
Example: https://osclass.foo.de --> redirect --> http://osclass.foo.de:1234/ (yes, with the port in the domain, which is very strange)
I don't get why. So I changed the config a little to:
server {
    listen 443 ssl; # managed by Certbot
    server_name osclass.foo.de www.osclass.foo.de;

    location / {
        proxy_pass http://localhost:1234;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Now the main page is loaded via HTTPS and I don't have the port in my domain anymore. But the whole page is broken because no resources will be loaded, due to a "mixed-content" warning:
SEC7111: [Mixed-Content] Origin "https://osclass.foo.de" [...] "http://osclass.foo.de/oc-includes/osclass/assets/js/fineuploader/fineuploader.css"
Do I have a conflict with the integrated Apache in the Docker image, or what am I doing wrong?
Any hints are appreciated!
Kind regards from Berlin!
I found a solution to fix the mixed-content problem. I just edited the following lines in /opt/bitnami/osclass/config.php:
# define('WEB_PATH', 'http://osclass.foo.de/');
define('WEB_PATH', 'https://osclass.foo.de/'); # with https

How to point example.com/directory to another EC2 instance with SSL?

I have all my website files for example.com on my EC2 server (Ubuntu and Apache) with SSL - EC2 instance 1. I want example.com/blog to go to another EC2 instance - EC2 instance 2. How can I do that with SSL?
I'm using Ubuntu, Apache and Route 53. Thanks!
One easy way to do this is with CloudFront, described in this answer at Server Fault, where you can use path patterns to determine which URLs will be handed off to which server.
Another is an Application Load Balancer (ELB/2.0), which allows the instance to be selected based on path rules.
Both of these solutions support free SSL certificates from AWS Certificate Manager.
Or, you can use ProxyPass in the Apache config on the main example.com web server to relay all requests matching specific paths over to a different instance.
You cannot accomplish this with Route 53 alone, because DNS does not work at the path level. This is not a limitation in Route 53, it's a fundamental part of how DNS works.
You can quickly and easily achieve this by using an nginx reverse proxy. Your SSL will still be managed and offloaded at the ELB level, that is, listener 443 => 80.
1) Install nginx:
yum install nginx
2) Add this to the nginx config:
upstream server1 {
    server 127.0.0.1:8080;
}

upstream server2 {
    server server2_IP_address_here:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://server1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /blog {
        proxy_pass http://server2;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

If I'm using a reverse proxy on Nginx do I need an SSL certificate for the reverse proxy and the server?

So I'm starting to learn about nginx and reverse proxies, and I have a question about SSL. The thing is that I have a reverse proxy server like this:
upstream vnoApp {
    server vyno.mx:81;
}

server {
    listen 80;
    server_name app.vno.mx;

    location / {
        proxy_pass http://vnoApp/;
        proxy_set_header X-Real-IP $remote_addr; # http://wiki.nginx.org/HttpProxyModule
        proxy_set_header Host $host; # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
        proxy_http_version 1.1; # recommended with keepalive connections - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
What this is doing, as you might expect, is listening on http://app.vno.mx and reverse-proxying it to http://vyno.mx:81, and everything works just fine. But now I want to add SSL support for the site, and my question is whether I have to add an SSL certificate to both vyno.mx and app.vno.mx (wildcard *.vno.mx), or whether it will work fine if I just add it to app.vno.mx. Thanks to all in advance!
No problem, you just need a certificate for the user-facing host.
As a side note, unless circumstances justify it, it is generally ill-advised to forward anything to a publicly available port and host.
So, unless there is a reason not to do so, you should firewall port 81 on vyno.mx to accept connections only from the app.vno.mx server.
If they are the same server, that's it, or perhaps using 127.0.0.1 is even better.
If they are distant, however, you might wish to encrypt the internal connection as well; you can do that with a snakeoil (self-signed) certificate.
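A loose sketch of that distant-server variant, assuming the hostnames from the question, made-up certificate paths, and that the backend has been switched to serve TLS with its self-signed certificate on port 81:

server {
    listen 443 ssl;
    server_name app.vno.mx;
    ssl_certificate     /etc/ssl/certs/app.vno.mx.pem;    # assumed path to the user-facing certificate
    ssl_certificate_key /etc/ssl/private/app.vno.mx.key;  # assumed path

    location / {
        # Re-encrypt towards the backend, which presents its snakeoil certificate.
        proxy_pass https://vyno.mx:81;
        proxy_ssl_verify off;    # self-signed certificate, so verification is skipped here
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
    }
}

Pinning the backend's certificate with proxy_ssl_trusted_certificate and turning proxy_ssl_verify on would be the stricter option.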