AWS Elastic Beanstalk SSL in a load-balanced environment

I am trying to make sense of a note in the AWS documentation about configuring HTTPS for an Elastic Beanstalk application.
The note reads:
If at any point you decide to redeploy your application using a load-balanced environment, you risk opening port 443 to all incoming traffic from the Internet. In that case, delete the configuration file from your .ebextensions directory. Then create a load-balanced environment and set up SSL using the Load Balancer section of the Configuration page of the Elastic Beanstalk management console.
Here is the link to the original documentation page: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html#configuring-https-elb
Can you help me make sense of the warning?

This documentation is poorly written. There are actually two separate possibilities:
Load-balanced environment: all traffic goes through a load balancer first, then to the instance.
Non-load-balanced environment: all traffic goes directly to your instance.
If all the traffic goes directly to the instance, you need to open your instance's HTTPS port 443 to everyone, which is what the snippet from the documentation does:
    Resources:
      sslSecurityGroupIngress:
        Type: AWS::EC2::SecurityGroupIngress
        Properties:
          GroupId: {Ref: AWSEBSecurityGroup}
          IpProtocol: tcp
          ToPort: 443
          FromPort: 443
          CidrIp: 0.0.0.0/0
If you didn't, no one would be able to access your site.
If all your traffic goes through a load balancer, you can restrict the instance's security group so it only talks to the load balancer and ignores the rest of the internet. This is more secure, because only your load balancer is exposed to everyone. It matters because, if a Linux bug ever let someone take over your machine by sending something strange to port 443, the load balancer would see the request first and normalize it, making it harder for your instance to be attacked.
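A minimal sketch of that lockdown using the AWS CLI, assuming hypothetical security group IDs for the instance and the load balancer (in an .ebextensions file, the equivalent is to replace CidrIp with a SourceSecurityGroupId referencing the load balancer's group):
    # Allow 443 on the instance security group only from the ELB's
    # security group; both IDs below are placeholders
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0instance0example0 \
        --protocol tcp \
        --port 443 \
        --source-group sg-0loadbalancer0example0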

Related

Azure Load Balancer - cannot reach port unless it is open in Network Security Group

The Load Balancer is configured to redirect TCP requests on front-end port 80 to back-end port 8080.
That worked fine until I removed the "Allow-Port-8080" rule from the Network Security Group attached to the pool VMs.
In my understanding, the Load Balancer is always allowed by default due to the AllowAzureLoadBalancerInBound security rule, which I did not touch. Isn't it?
Moreover, port 8080 on the pool VMs is reachable from hosts in the same virtual network, so there is no issue with a local firewall (which is not running on CentOS Azure hosts by default, by the way).
So to sum up, the question is why I should add an inbound security rule to let the Load Balancer redirect requests to a particular port.
After considering the issue a bit more, I've realized that the AllowAzureLoadBalancerInBound security rule only applies to traffic originated by the Load Balancer itself, such as health probes.
All traffic the Load Balancer merely forwards is evaluated against the general security rules, so we have to set up security rules for it accordingly.
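So the fix is simply to restore an inbound rule for the forwarded port. A sketch with the Azure CLI, assuming hypothetical resource group and NSG names:
    # Re-create the allow rule for the back-end port; the source stays
    # open because the Load Balancer preserves the original client address
    az network nsg rule create \
        --resource-group myResourceGroup \
        --nsg-name myNsg \
        --name Allow-Port-8080 \
        --priority 1000 \
        --direction Inbound \
        --access Allow \
        --protocol Tcp \
        --destination-port-ranges 8080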

GCP load balancing ("internal" traffic over HTTPS)

I have a GCP instance group with 2 instances. Both are up and running. I want to configure a load balancer (HTTPS) to manage the traffic.
I've set up a forwarding rule with the HTTP protocol and a certificate managed by Google. This all works, but only when the traffic between the load balancer and the backend (the instances) is plain HTTP.
Steps I've taken so far
I create a template; this template is just a normal N1-series machine. I check the boxes to create firewall rules allowing HTTP and HTTPS traffic.
I create a firewall rule named "allow-ports". This firewall rule targets all instances in the network, has a 0.0.0.0/0 IP range, and allows TCP ports 80 and 443. As I see it, this firewall rule should open both the HTTP (80) and HTTPS (443) ports.
I create an instance group with port mapping: "http-port" = 80, "https-port" = 443. I use the template I just created.
When the instance group is created, I check that it is running. With SSH, I get access to the instances and install Apache (sudo apt-get install -y apache2) on both. When navigating to their external IPs in the browser, I see them both.
I create an HTTP(S) load balancer with the option "From internet to my VMs". For the backend configuration, I add a backend service with my instance group, protocol HTTP, named port "http-port". For the frontend configuration, I set up the HTTPS protocol, create an IPv4 address, and create a Google-managed SSL certificate. I also added health checks, by the way.
Now... these steps work (after a few minutes). With Cloud DNS, I have set up a domain name which points to the IP address of the load balancer. When going to that domain, I see the Apache page.
What doesn't work?
When I change the backend configuration to HTTPS (and named port "https-port"), I get a 502 server error. So it seems to me that there is some connection, but there is an error. Could this be an Apache error?
I have spent a whole day creating and recreating instance groups, firewall rules, load balancers... but nothing seems to work. I'm surely missing something, probably something dumb, but I have no clue what it could be.
What do I want to achieve?
I don't only want a secure (HTTPS) connection between the client and my load balancer; I also want a secure connection between the load balancer and the backend service (the instance group). Because GCP offers the option to use the HTTPS protocol when creating a backend service, I feel this should be possible.
To be honest, I've read some articles saying that the internal traffic is already secured, so an HTTPS connection is not necessary. But that doesn't matter to me; I really want to know how this works!
EDIT
I'm using the correct VPC (default). I also edited the firewall rule from 0.0.0.0/0 to 130.211.0.0/22 and 35.191.0.0/16 (see: https://cloud.google.com/compute/docs/tutorials/globally-autoscaling-a-web-service-on-compute-engine?hl=nl#configure_the_load_balancer).
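For reference, the equivalent rule created with gcloud would look roughly like this (the rule name is a placeholder):
    # Allow the load balancer / health check ranges to reach the back ends
    gcloud compute firewall-rules create allow-lb-and-health-checks \
        --network default \
        --source-ranges 130.211.0.0/22,35.191.0.0/16 \
        --allow tcp:80,tcp:443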
In addition to my previous comment: I followed your steps in my test project to find out the cause of your issue. I set up the same configuration and checked it with HTTP at the backend. As expected, I found no errors. After that, I installed SSL certificates on the backend and on the load balancer, then switched my backend, load balancer, and health checks to HTTPS and disabled HTTP at the backend. At this point, I found no errors either.
So, I decided to reproduce the 502 error in my test configuration. I switched the health check on the load balancer to HTTP. A few minutes later I tried to reach my test service again and got a 502 error. When I switched the health check back to HTTPS, the 502 error went away.
During this test I didn't change any firewall rules, but I allowed HTTP and HTTPS traffic in my instance template and used the default network.
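In other words, when the backend speaks HTTPS, the health check must speak HTTPS too. A sketch of that change with gcloud, assuming placeholder names for the health check and the backend service:
    # Create an HTTPS health check against port 443
    gcloud compute health-checks create https my-https-check \
        --port 443 --request-path /

    # Switch the backend service to HTTPS and attach the new check
    gcloud compute backend-services update my-backend-service \
        --global --protocol HTTPS --health-checks my-https-check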

AWS Elastic Beanstalk VPC - HTTPS from ELB to instance

I'm trying to figure out the best way to manage HTTPS for an Elastic Beanstalk Docker application.
At the moment I'm using the following approach.
ELB accepts HTTPS connections on 443, and forwards to HTTP port 80 on the instance.
ELB accepts HTTP connections on 80, and forwards to HTTP port 8080 on the instance.
Instance accepts HTTP connections on port 80 and forwards to docker app.
Instance accepts HTTP connections on port 8080 and redirects them to HTTPS.
This all works reasonably well. It means the Docker app doesn't have to worry about redirects at all; it just listens on port 80 and does its thing. The ELB and Docker host do the rest.
My only concern with this setup is that the Docker app doesn't know it's being served securely. If I check for this within my application, the check will fail.
I want to completely separate my Docker app from domain names and SSL certificates that may change, so I would prefer to keep terminating the original HTTPS connection at the ELB. I'm wondering if there is some way to get the Docker host (or ELB) to forward (re-encrypt) requests over HTTPS, but using a self-signed certificate, so I can keep it completely generic.
Just to be clear, this would only be between the ELB and/or docker host, and my docker app, not to the browser.
If I create a non-expiring self-signed certificate, register it with the web server in the Docker app (currently Apache2, but potentially nginx), and then simply tell the ELB or Docker host to forward requests as HTTPS, will this work? Or would it fall over at some point because the certificate isn't trusted?
Or is there some way to terminate an HTTPS connection at the Docker app's web server without pre-generating a certificate? (I'm guessing not, since it would presumably need to generate a certificate on the fly.)
Is there a recommended best practice way to do this kind of thing?
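For what it's worth, the kind of long-lived self-signed certificate I have in mind could be generated with openssl; the file names and common name here are placeholders:
    # Ten-year self-signed certificate and key, no passphrase
    openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
        -keyout selfsigned.key -out selfsigned.crt \
        -subj "/CN=backend.internal"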
A common solution, when you have a load balancer terminating client connections and forwarding to a backend, is for the load balancer to add headers to the backend requests, restoring any information that terminating at the load balancer strips away.
ELB has a page on this and uses the following headers:
X-Forwarded-For - the client IP
X-Forwarded-Proto - the scheme/protocol
X-Forwarded-Port - the incoming port
You would generally not accept these headers directly from a client unless it was a trusted one. I assume ELB takes care of that for you.
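A quick way to see why that trust boundary matters: any client can set the same headers by hand, so the backend should only believe them when they arrive via the load balancer. For example, against a hypothetical backend listening locally:
    # Hand-crafted request that impersonates what the ELB would send
    curl -H "X-Forwarded-Proto: https" \
         -H "X-Forwarded-Port: 443" \
         -H "X-Forwarded-For: 203.0.113.10" \
         http://localhost:80/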

Google Load Balancer not passing traffic to back end

I am trying to use a load balancer to direct traffic to a container backend. The service in the containers serves web traffic on port 80. My health checks are all passing. If I SSH into the Kubernetes host for the containers, I can curl each container and get correct responses over port 80. When I try to access them through the load balancer's external IP, however, I receive a 502 response. I have firewall rules allowing traffic from 130.211.0.0/22 on tcp:1-5000 and on the NodePort. I've also tried adding firewall rules allowing 0.0.0.0/0 on ports 80 and 443 to those nodes.
When in the Kubernetes host, capturing with tcpdump, I see the health check requests to my containers, but no traffic is coming through when I make an external request.
I have an identical configuration that points to a single Compute Engine VM that works perfectly. This leads me to believe that the issue might be in the container setup rather than the load balancer.
Does anyone have any advice on resolving this issue?
I was able to resolve the problem by changing the named port that the load balancer was connecting to. By default, the load balancer connected to named port "http", which pointed to port 80. It was my assumption (always a bad thing) that this matched, since my application serves on port 80. Not so: because I'd exposed the containers through a NodePort, they were assigned another port, which is the port my health check was pointing to. By going to "Compute Engine -> Instance groups", selecting the group, and then "Edit group", I was able to change the named port "http" to match my NodePort number. Once I did that, traffic started flowing.
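A sketch of the same fix with kubectl and gcloud, assuming placeholder service, group, zone, and NodePort values:
    # Find the NodePort Kubernetes assigned to the service
    kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'

    # Point the instance group's "http" named port at that NodePort
    gcloud compute instance-groups set-named-ports my-instance-group \
        --zone us-central1-a --named-ports http:30080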

AWS ELB HTTPS Port forward setting

When I set up the ELB on AWS, I configured SSL offload to forward requests from port 443 on the ELB to port 80 on the EC2 instances.
I am not sure whether this is right, though I've seen many people do it. After I did, I got lots of errors in the browser console:
"This request has been blocked; the content must be served over HTTPS."
Should I change the forward port from 80 to 443 and install SSL on each instance?
443 to 80 is correct. Your EC2 instance serves plain HTTP, and the ELB encrypts it before sending it out of AWS.
There are a few nuances to this. The server thinks it is serving plain content on port 80, so any URLs it creates will typically be http:// without some configuration (it depends on your framework, server, etc.).
The second nuance is that hardcoded URLs will break things in a similar manner. That error message has been explained in a Stack Overflow question about Ajax; some form of that is causing your problem.
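A quick way to hunt for those hardcoded URLs is to fetch the page over HTTPS and grep for insecure references (the domain is a placeholder):
    # List any http:// URLs embedded in the served page
    curl -s https://www.example.com/ | grep -o 'http://[^"]*' | sort -u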