AWS Elastic Beanstalk VPC - HTTPS from ELB to instance - apache

I'm trying to figure out the best way to manage HTTPS for an EB docker application.
At the moment I'm using the following approach.
ELB accepts HTTPS connections on 443, and forwards to HTTP port 80 on the instance.
ELB accepts HTTP connections on 80, and forwards to HTTP port 8080 on the instance.
Instance accepts HTTP connections on port 80 and forwards to docker app.
Instance accepts HTTP connections on port 8080 and redirects them to HTTPS.
This all works reasonably well. It means the docker app doesn't have to worry about redirects at all. It just listens on port 80 and does its thing; the ELB and docker host do the rest (a rough sketch of the host-side config is at the end of this question).
My only concern with this setup is that the docker app doesn't know it's running securely. If I check for this within my application, the check will fail.
I want to completely separate my docker app from domain names and SSL certificates that may change, so I would prefer to continue terminating the original HTTPS connection at the ELB. I'm wondering if there is some way I can get the docker host (or ELB) to forward (re-encrypt) requests over HTTPS, but using a self-signed certificate, so I can keep it completely generic.
Just to be clear, this would only be between the ELB and/or docker host, and my docker app, not to the browser.
If I create a non-expiring self-signed certificate, register it with the web server in the docker app (currently Apache2, but potentially nginx), and then simply tell the ELB or docker host to forward requests as HTTPS, will this work? Or would it fall over at some point because the certificate isn't trusted?
Or is there some way to terminate an HTTPS connection at the docker app's web server without pre-generating a certificate at all (I'm guessing not, as presumably it would need to generate a certificate on the fly)?
Is there a recommended best practice way to do this kind of thing?
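For reference, the host-side piece currently looks roughly like the sketch below, assuming Apache on the docker host with mod_proxy and mod_rewrite enabled; the container port (localhost:8000) is a placeholder for however the container is actually published.

    # Port 80: plain HTTP forwarded from the ELB, proxied straight to the docker app
    <VirtualHost *:80>
        ProxyPreserveHost On
        ProxyPass / http://localhost:8000/
        ProxyPassReverse / http://localhost:8000/
    </VirtualHost>

    # Port 8080: requests that arrived at the ELB as plain HTTP; bounce them to HTTPS
    Listen 8080
    <VirtualHost *:8080>
        RewriteEngine On
        RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
    </VirtualHost>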

A common solution when you have a load balancer terminating client connections and forwarding to a backend is for the load balancer to add headers to the backend requests that restore any information stripped by having the load balancer in the middle.
ELB has a page on this and uses the following headers:
X-Forwarded-For - The client IP
X-Forwarded-Proto - The scheme/protocol
X-Forwarded-Port - The incoming port.
You would generally not accept these headers directly from the client unless it is a trusted client. I assume ELB takes care of that for you.
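As a rough illustration of how the web server behind the load balancer can consume these headers, here is an Apache sketch using mod_remoteip and mod_setenvif; the trusted proxy range is an assumption for a typical VPC and should match wherever your ELB and docker host actually live.

    # Restore the real client IP from X-Forwarded-For (mod_remoteip)
    RemoteIPHeader X-Forwarded-For
    # Assumption: the ELB / docker host addresses fall inside this VPC range
    RemoteIPInternalProxy 10.0.0.0/8

    # Mark the request as secure when the ELB terminated HTTPS (mod_setenvif);
    # many frameworks key off the HTTPS environment variable
    SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on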

Related

GCP load balancing ("internal" traffic over HTTPS)

I have a GCP instance group with 2 instances. Both are up and running. I want to configure a load balancer (HTTPS) to manage the traffic.
I've set up a forwarding rule with the HTTPS protocol and a certificate managed by Google. This all works, but only when the traffic between the load balancer and the backend (the instances) is plain HTTP.
Steps I have taken so far
I created an instance template for a normal N1-series machine and checked the boxes to create firewall rules allowing HTTP and HTTPS traffic.
I created a firewall rule named "allow-ports". This rule targets all instances in the network, has a 0.0.0.0/0 source IP range, and allows tcp ports 80 and 443. As I see it, this rule should open both the HTTP (80) and HTTPS (443) ports.
I created an instance group with the port mappings "http-port" = 80 and "https-port" = 443, using the template I just created.
Once the instance group was created, I checked that it was running. Via SSH I got access to the instances and installed Apache (sudo apt-get install -y apache2) on both of them. When navigating to their external IPs in the browser, I could see them both.
I created an HTTP(S) load balancer with the option "From internet to my VMs". For the backend configuration, I added a backend service with my instance group, protocol HTTP, and named port "http-port". For the frontend configuration, I set up the HTTPS protocol, created an IPv4 address and a Google-managed SSL certificate, and I was done. I also added health checks, by the way.
Now... these steps work (after a few minutes). With Cloud DNS, I have set up a domain name which points to the IP address of the load balancer. When going to that domain, I see the Apache page.
What doesn't work?
When I change the backend configuration to HTTPS (and named port "https-port"), I get a 502 server error. So it seems to me that there is some connection, but something is failing. Could this be an Apache error?
I have spent a whole day, creating and recreating instance groups, firewall rules, load balancers, ... but nothing seems to work. I'm surely missing something, probably something dumb, but I have no clue what it could be.
What do I want to achieve?
I do not only want a secure (HTTPS) connection between the client and my load balancer, I also want a secure connection between the load balancer and the backend service (the instance group). Because GCP offers the option to use the HTTPS protocol when creating a backend service, I feel that this could be done.
To be honest: I've read some articles saying that internal traffic is already secured, so an HTTPS connection is not necessary. But that doesn't matter to me; I really want to know how this works!
EDIT
I'm using the correct VPC (default). I also edited the firewall rule from 0.0.0.0/0 to 130.211.0.0/22 and 35.191.0.0/16 (see: https://cloud.google.com/compute/docs/tutorials/globally-autoscaling-a-web-service-on-compute-engine?hl=nl#configure_the_load_balancer).
In addition to my previous comment: I followed your steps in a test project to find out the cause of your issue. I set up the same configuration and checked it with HTTP at the backend; as expected, I found no errors. After that, I installed SSL certificates on the backend and the load balancer, then switched my backend, load balancer, and health checks to HTTPS and disabled HTTP at the backend. At this point I also found no errors.
So I decided to try to reproduce the 502 error in my test configuration. I switched the health check on the load balancer to HTTP. A few minutes later I tried to reach my test service again and got a 502 error. When I switched the health check back to HTTPS, the 502 error went away.
During this test I didn't change any firewall rules; I allowed HTTP and HTTPS traffic in my instance template and used the default network.
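To make the HTTPS backend concrete, on each instance this amounts to an SSL-enabled Apache virtual host, something like the sketch below with a self-signed certificate (the file paths are placeholders). As far as I know, the GCP HTTPS load balancer does not validate the backend's certificate, so self-signed is fine here; just make sure the health check uses HTTPS as well.

    # Backend HTTPS vhost (requires mod_ssl); the load balancer re-encrypts to port 443
    Listen 443
    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/selfsigned.crt
        SSLCertificateKeyFile /etc/ssl/private/selfsigned.key
        DocumentRoot /var/www/html
    </VirtualHost>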

Google Cloud http load balancer SSL termination

I have an instance listening on port 8080.
I want to create a load balancer to map 443 (SSL) to the instance's port 8080, so that SSL terminates at the load balancer and traffic between the LB and the instance is not encrypted.
I have uploaded the SSL cert and created an HTTP(S) load balancer, but can't figure out how to set up that kind of forwarding.
Coming from AWS ELB, there's a simple way to do this; I can't find a way to do it on Google Cloud Platform.
Any thoughts?
Found it.
Create an instance group that has at least one live instance
Create an HTTP(S) load balancer with the following:
Upload an SSL certificate
Create a backend service that points to the instance group. Make sure the protocol is HTTP
Create a target HTTPS proxy with the certificate you uploaded
Finally, create a global forwarding rule that points HTTPS (443) to the target proxy you created before.
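On the instance side this is just a plain HTTP listener on 8080; a minimal Apache sketch of that piece (the document root is a placeholder) would be:

    # Plain HTTP on 8080; TLS is terminated at the load balancer
    Listen 8080
    <VirtualHost *:8080>
        DocumentRoot /var/www/html
    </VirtualHost>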

AWS ELB HTTPS Port forward setting

When I set up the ELB on AWS, I configured SSL offload to forward requests from port 443 on the ELB to port 80 on the EC2 instances.
I am not sure whether this is right, although I've seen many people do it. After I did that, I got lots of errors in the browser console:
"This request has been blocked; the content must be served over HTTPS."
Should I change the forwarding port from 80 to 443 and install SSL on each instance?
443 to 80 is correct. Your EC2 instance serves plain HTTP; the ELB encrypts it before sending it out of AWS.
There are a few nuances to this: the server thinks it is serving plain content on port 80, so any URLs it generates will typically be http:// without some extra configuration (it depends on your framework, server, etc.).
The second nuance is that hardcoded http:// URLs will break in a similar manner. That error message has been explained in a Stack Overflow question about Ajax; some form of that is causing your problem.
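If fixing the hardcoded URLs isn't practical right away, one mitigation (assuming Apache with mod_headers on the instance) is to ask browsers to upgrade the sub-resource requests themselves:

    # Mitigation for hardcoded http:// asset URLs: browsers will upgrade
    # sub-resource requests to HTTPS before fetching them (requires mod_headers)
    Header always set Content-Security-Policy "upgrade-insecure-requests"

It also helps to set HTTPS=on from X-Forwarded-Proto (as in the first answer above) so the framework generates https:// URLs in the first place.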

Adding SSL communication between ELB and EC2 on AWS and forcing HTTPS-only communication

I am trying to add SSL support for my site which is on AWS infrastructure.
I am using (Ubuntu, Apache, cake-php).
I installed the certificate from GoDaddy on the AWS ELB per this guide.
When I test my domain via HTTPS it works fine, but the site is also available via HTTP.
I would like to redirect all HTTP calls to HTTPS, but per the guide the ELB and EC2 communicate over HTTP, so every request reaches my EC2 instance on port 80. The instance therefore can't tell whether the original request was HTTP or HTTPS, and so has no way to decide when to redirect the user.
If I change the ELB-to-EC2 setting to HTTPS, it no longer works. I assume some configuration is required (on the ELB, on EC2?), but I could not find any documentation on this.
Any input, links etc. would be greatly appreciated!
Thanks
ELB sets the X-Forwarded-Proto header; you can use it to detect whether the original request was made over HTTP and redirect to HTTPS in that case.
Take a look at the ELB docs.
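The usual Apache rule for that redirect (in the virtual host or .htaccess, requires mod_rewrite) looks something like the sketch below; you may want to exclude the ELB health-check path so the instance keeps answering it over plain HTTP.

    RewriteEngine On
    # Only redirect requests that reached the ELB over plain HTTP
    RewriteCond %{HTTP:X-Forwarded-Proto} !=https
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]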

Using Apache and mod_proxy in a forward proxy to convert http requests to https

I've previously used both Apache and nginx as a reverse proxy performing HTTPS termination (listening on port 443) and forwarding the unencrypted HTTP traffic to Tomcat on port 8080.
However, what I need to do now is the opposite. I have some client applications running on localhost that (for simplicity) just talk plain HTTP. I want to be able to point these client apps at a forward proxy (on localhost) that will convert the requests to HTTPS and use a client-side certificate for the communication to the origin. That is, the client will think it is speaking plain HTTP on port 80, but the traffic will actually leave the host as HTTPS on port 443.
Does anyone know how to configure mod_proxy to do this (or even if it is possible)?
At a later stage, I may need to configure the proxy to use different client certificates based on headers set by the client, and also have mod_proxy use RFC 5077 (stateless TLS session resumption via session tickets).
It doesn't have to be Apache (if nginx or squid can do the job, I'm happy with that), as long as it's not a resource hog. We already have Apache running as a reverse proxy anyway, so it would be handy if Apache could do it.
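For what it's worth, what I'm imagining is something like the per-origin "gateway" sketch below, where origin.example.com and the certificate path are placeholders: SSLProxyEngine turns on TLS for the outbound leg and SSLProxyMachineCertificateFile supplies the client certificate. As far as I can tell, mod_proxy can't transparently rewrite the scheme in true forward-proxy mode, so a gateway vhost per origin seems to be the practical route, but I'm not sure this is the right approach.

    # Clients speak plain HTTP to localhost:80; Apache re-issues the request
    # to the origin over HTTPS, presenting a client certificate.
    <VirtualHost 127.0.0.1:80>
        ProxyRequests Off
        SSLProxyEngine On
        # All-in-one PEM file (client certificate + private key); placeholder path
        SSLProxyMachineCertificateFile /etc/apache2/ssl/client-cert.pem
        ProxyPass / https://origin.example.com/
        ProxyPassReverse / https://origin.example.com/
    </VirtualHost>

I gather nginx can do something similar with proxy_pass to an https:// upstream plus proxy_ssl_certificate / proxy_ssl_certificate_key, if that turns out to be the better tool.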