Single SSL certificate in a Load Balancing architecture

I am getting into load balancing and trying to understand how security with SSL certificates can be integrated with a load balancer.
Let's say that I want to expose several copies of the same RESTful web service over Amazon Elastic Load Balancer. All should be fine and smooth up until now. However, security has not yet been taken into consideration.
Now, let's say that we want the communication to be secured with an SSL certificate, so we go ahead and buy a certificate. We will have several IP addresses which are all exposing the same RESTful server with the load balancer. These IP addresses will all get mapped to the same domain name (https://thedomain.com). This way, the clients always connect to the same domain. It is then up to the load balancer to redirect to the web service which is getting the least traffic.
The main question is: is such an architecture possible with a single SSL certificate? If so, it would be possible to scale the number of service instances dynamically without having to change anything on the security side.
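As an aside, you can observe this one-domain-to-many-addresses mapping directly; here is a minimal Python sketch (thedomain.com is the hypothetical domain above, and in practice an ELB is addressed by its DNS name):

```python
import socket

# thedomain.com is the hypothetical domain from the question; a load
# balancer's DNS name typically resolves to several IP addresses.
records = socket.getaddrinfo("thedomain.com", 443, proto=socket.IPPROTO_TCP)
for family, socktype, proto, canonname, sockaddr in records:
    print(sockaddr[0])  # each distinct IP is an entry point to the same service
```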

It is then up to the load balancer to redirect to the web service which is getting the least traffic.
AFAIK, the ELB supports only round robin and sticky sessions, so what you describe above will not happen.
is it possible for such an architecture with a single SSL certificate?
You can install the SSL certificate on the ELB and let it do the SSL termination. The traffic between the ELB and your web nodes will then be unencrypted. You should explore AWS VPC, where you can have a public-facing ELB while your web nodes sit in a private subnet.
Alternatively, the ELB supports TCP load balancing. In this case, you install the certificate on the web nodes; the ELB accepts traffic on port 443 from the internet and simply forwards it to port 443 on the web nodes, which then handle the SSL encryption/decryption themselves.
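As a hedged boto3 sketch of those two listener setups on a classic ELB (the load balancer name, region, and certificate ARN are all placeholders):

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # region is an assumption

# Option 1: SSL termination at the ELB - HTTPS in, plain HTTP to the nodes.
https_termination = {
    "Protocol": "HTTPS",
    "LoadBalancerPort": 443,
    "InstanceProtocol": "HTTP",
    "InstancePort": 80,
    "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",  # placeholder
}

# Option 2: TCP passthrough - the ELB forwards raw port-443 traffic and the
# web nodes terminate SSL themselves (certificate installed on each node).
tcp_passthrough = {
    "Protocol": "TCP",
    "LoadBalancerPort": 443,
    "InstanceProtocol": "TCP",
    "InstancePort": 443,
}

# Pick one of the two; both cannot listen on port 443 at once.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-elb",  # placeholder load balancer name
    Listeners=[https_termination],
)
```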
Hope this helps.

Related

Is HTTPS necessary on internal proxies?

I have drawn the following chart to help explain our situation:
We have an API running on the Kestrel web server in the back end, deployed as several instances in Docker containers. In front of this, we have an HAProxy load balancer which directs traffic to the Kestrel instance with the least connections (a sketch of this selection logic appears after the question).
All of this so far is only available internally on our network.
We have one server accessible from the internet, running IIS on ports 80 and 443.
In IIS we have added an SSL certificate which encrypts the communication between the end user and our first entry point.
Is there any further need to encrypt the communication between IIS, which proxies the request to our HAProxy on the internal network, and from there on, from HAProxy to the backends?
As far as my knowledge goes, the only benefit of this would be that nobody on our internal office network could listen in on the connections between the servers, but please correct me if I am wrong here.
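For reference, here is a minimal Python sketch of the "least connections" decision that HAProxy's leastconn balancing makes; the backend names and connection counts are made up:

```python
# Made-up live connection counts for three Kestrel containers.
active_connections = {"kestrel-1": 12, "kestrel-2": 7, "kestrel-3": 9}

def pick_backend(conns: dict) -> str:
    # Route the next request to the backend with the fewest open connections.
    return min(conns, key=conns.get)

target = pick_backend(active_connections)  # -> "kestrel-2"
active_connections[target] += 1
```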

What is the difference between HTTPS Load-Balancer w/ non-TLS backend and HTTPS Load-Balancer w/ TLS backend

I'm trying to configure a load balancer to serve HTTPS with certificates provided by Let's Encrypt. I haven't managed it yet, but the article I'm reading gives steps to configure:
Self-signed certs
Network Load-Balancer w/ TLS backend
HTTPS Load-Balancer w/ non-TLS backend
HTTPS Load-Balancer w/ TLS backend
As I'm interested only in HTTPS, I wonder what the difference is between these two:
HTTPS Load-Balancer w/ non-TLS backend
HTTPS Load-Balancer w/ TLS backend
I don't mean the obvious difference, that the first one is not encrypted from the load balancer to the backend; I mean in terms of performance and the HTTP/2 connection. For example, will I continue to get all the benefits of HTTP/2, like multiplexing and streaming? Or is the first option
HTTPS Load-Balancer w/ non-TLS backend
only an illusion, where I won't actually get HTTP/2?
To talk HTTP/2 all web browsers require the use of HTTPS. And even without HTTP/2 it's still good to have HTTPS for various reasons.
So the point your web browser needs to talk to (often called the edge server), needs to be HTTPS enabled. This is often a load balancer, but could also be a CDN, or just a single web server (e.g. Apache) in front of an application server (e.g. Tomcat).
So then the question is: does the connection from that edge server to any downstream servers need to be HTTPS enabled? Well, ultimately the browser will never know, so it is not needed for the browser's sake. That leaves two reasons to encrypt this connection:
Because the traffic is still travelling across an insecure channel (e.g. CDN to origin server, across the Internet).
Many feel it's disingenuous to make the user think they are on a secure connection (with a green padlock) when in fact they are not secured for the full end-to-end connection.
This to me is less of an issue if your load balancer is in a segregated network area (or perhaps even on the same machine!) as the servers it is connecting to. For example, if the load balancer and the two (or more) web servers it is connecting to are all in a separate area of a segregated DMZ network, or in their own VPC.
Ultimately the traffic will be decrypted at some point and the question for server owners is where/when in your networking stack that happens and how comfortable you are with it.
Because you want HTTPS for some other reason (e.g. HTTP/2 all the way through).
On this one I think there is less of a good case to be made. HTTP/2 primarily helps high latency, low bandwidth connections (i.e. browser to edge node) and is less important for low latency, high bandwidth connections (as load balancer to web servers often are). My answer to this question discusses this more.
In both the above scenarios, if you are using HTTPS on your downstream servers, you can use self-signed certificates, including long-lived ones. This means you are not bound by the 90-day LetsEncrypt validity limit, nor does it require you to purchase longer-lived certificates from another CA. As the browser never sees these certificates, you only need your load balancer to trust them, which is in your control for self-signed certificates. This is also useful if the downstream web servers cannot talk to LetsEncrypt to get certificates in the first place.
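Not from the original answer, but as an illustrative sketch: generating a long-lived self-signed certificate for a downstream server with Python's cryptography package (the hostname and the ten-year lifetime are placeholder choices):

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Placeholder internal hostname; only the load balancer needs to trust this.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "backend.internal")])

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
now = datetime.now(timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                            # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=3650))  # ~10 years, no renewals
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("backend.internal")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("backend.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("backend.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),  # acceptable only if the key file is well protected
    ))
```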
The third option, if it really is important to have HTTPS and/or HTTP/2 all the way through, is to use a TCP load balancer (which is option 2 in your question, so apologies for confusing the numbering here!). This just forwards TCP packets to the downstream servers. The packets may still be HTTPS encrypted but the load balancer does not care - it just forwards them on, and if they are HTTPS encrypted then the downstream server is tasked with decrypting them. So you can still have HTTPS and HTTP/2 in this scenario; you just have the end-user certificates (i.e. the LetsEncrypt ones) on the downstream web servers. This can be difficult to manage (should the same certificate be used on all of them? Should they have different ones? Do we need sticky sessions so HTTPS traffic always hits the same downstream server?). It also means the load balancer cannot see or understand any HTTP traffic - it is all just TCP packets as far as it is concerned. So no filtering on HTTP headers, and no adding new HTTP headers (e.g. X-Forwarded-For with the original IP address).
To be honest it is absolutely fine, and even quite common, to have HTTPS to the load balancer and plain HTTP to the downstream servers - provided the network between the two is secure. It is usually the easiest to set up (one place to manage HTTPS certificates and renewals) and the most widely supported (e.g. some downstream servers may not easily support HTTPS or HTTP/2). Using HTTPS on this connection, whether with self-signed or CA-issued certificates, is equally fine but requires a bit more effort, and the TCP load balancer option is probably the most effort of all.

Using Google Cloud Load Balancer & SSL For MANY Domains

I'm planning to set up HTTP/HTTPS load balancing (https://cloud.google.com/compute/docs/load-balancing/http/) on the Google Cloud Platform for over 1,700 domains (different websites); and all will have TLS/SSL. However, you can only add up to 10 SSL certificates per load balancer, according to this: https://cloud.google.com/compute/docs/load-balancing/http/ssl-certificates
How should I go about trying to set up load balancing to serve websites using Compute Engine? I'd like to have instances in several different regions, and all of the steps in adding a domain should be automated (I have the deployment process figured out).
Of course I'll be providing my own SSL certificates. I can add up to 100 domains per certificate using Let's Encrypt (https://letsencrypt.org/docs/rate-limits/). But do I need a separate certificate for each domain on the Google Cloud load balancer? If I can use one certificate for every 100 domains, does that mean I can only use a load balancer for up to 1,000 domains (10*100)? Would I have to create multiple load balancers, each with its own frontend, using the same backend service? And how many load balancers am I allowed to create per project?
We had the same scenario and requirements (1,000+ domains, Let's Encrypt SSL, and Google load balancing) but alas couldn't use the Google HTTPS load balancer to do it. Instead we created a TCP load balancer rather than an HTTPS one, so that we could handle port 443 ourselves.
Now the requests come directly to our instances (still over SSL). We created an nginx configuration for every domain, each with its own Let's Encrypt certificate, and nginx serves the app based on the requested domain.
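To illustrate the mechanism that makes this work (one IP and port, many certificates, selected by SNI), here is a hedged Python standard-library sketch; the domains and certificate paths are placeholders, and nginx does the equivalent of this internally:

```python
import socket
import ssl

# Placeholder domains; in the real setup there were 1,000+ of these.
domains = ("example-a.com", "example-b.com")

contexts = {}
for domain in domains:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Placeholder certbot-style paths, one certificate per domain.
    ctx.load_cert_chain(f"/etc/letsencrypt/live/{domain}/fullchain.pem",
                        f"/etc/letsencrypt/live/{domain}/privkey.pem")
    contexts[domain] = ctx

default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain("default.pem", "default.key")  # fallback cert

def pick_certificate(sslsock, server_name, _ctx):
    # Called during the handshake with the SNI hostname the client sent;
    # swapping the context here selects that domain's certificate.
    if server_name in contexts:
        sslsock.context = contexts[server_name]

default_ctx.sni_callback = pick_certificate

with socket.create_server(("0.0.0.0", 443)) as server:
    with default_ctx.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # handshake picks the cert via SNI
        conn.close()
```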
The limitation on the number of certs is per IP, not per load balancer. The number of certs per IP is now 15, per the GCP docs. If in your case the sites can use a shared cert, that would cover 1,500 domains per IP address (15 certs at 100 domains each).
GCP quotas have a default but you can request an increase if your case needs it.

SSL protocol on AWS architecture?

Here's my AWS architecture
1 Load Balancer
2 Web/Application servers
1 DB server
If the client and my LB communicate over the SSL (HTTPS) protocol,
would it be safe for the internal LB-web/app-DB communication to happen over HTTP? Or should they communicate with the same SSL certificates internally too?
You certainly can terminate SSL on your web instances, but it is probably much easier to have SSL on your load balancer and communicate over HTTP between the ELB and the web instances.
This assumes you're running inside of a VPC of course.
As you scale, having SSL terminated at your LB and keeping internal traffic non-SSL will save you a great deal of overhead.
Using CloudFront
Another option is to create a CloudFront distribution in front of your ELB, so that your SSL connection is terminated at the nearest edge location. From CloudFront to the ELB (in a particular region) traffic travels over AWS's own wide-area network, so if you can live with that level of security you can also get better performance, with static content cached and delivered from the edge location. Another advantage is that you can get free SSL certificates from AWS for CloudFront regardless of your ELB's region.
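For instance, requesting one of those free certificates from ACM with boto3 might look like the sketch below (the domain is a placeholder; note that certificates for CloudFront must be issued in us-east-1):

```python
import boto3

# CloudFront only uses ACM certificates issued in the us-east-1 region.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="example.com",                     # placeholder domain
    SubjectAlternativeNames=["www.example.com"],  # placeholder SAN
    ValidationMethod="DNS",                       # prove ownership via a DNS record
)
print(response["CertificateArn"])  # attach this ARN to the CloudFront distribution
```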
As for the DB server, it is normally kept inside the same VPC as the web server and not exposed externally, so I don't see a great benefit in putting a separate certificate on DB access within the private network unless you have specific regulatory requirements.

Is it standard to use HTTPS from client to Load Balancer, but not from LB to app server?

Just wondering if it is standard practice to use an AWS load balancer to handle the HTTPS and forward it to the application as HTTP, so that none of the app instances have to worry about SSL certificates.
Yes, that's a common practice. One of the most important optimizations you could do for a website is to perform the SSL offloading geographically as close as possible to the client.
The SSL handshake consists of a couple of exchanges between the client and the server in order to establish the SSL session. By having the SSL offloading as close as possible to the user, you reduce the network latency. The load balancer can then dispatch the request to your web farm, which could be situated anywhere in the world.
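As a rough way to see that handshake cost, here is a small Python sketch that times the TCP connect separately from the TLS handshake (example.com is a placeholder endpoint):

```python
import socket
import ssl
import time

host = "example.com"  # placeholder; try your own endpoint

# Time the plain TCP connect...
t0 = time.monotonic()
raw = socket.create_connection((host, 443), timeout=5)
tcp_ms = (time.monotonic() - t0) * 1000

# ...then the TLS handshake layered on top of it.
ctx = ssl.create_default_context()
t1 = time.monotonic()
tls = ctx.wrap_socket(raw, server_hostname=host)  # handshake happens here
tls_ms = (time.monotonic() - t1) * 1000
tls.close()

print(f"TCP connect: {tcp_ms:.1f} ms, TLS handshake: {tls_ms:.1f} ms")
```

The further the terminating endpoint is from the client, the more those handshake round trips cost, which is exactly why offloading at a nearby edge helps.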