I'm planning to set up HTTP/HTTPS load balancing (https://cloud.google.com/compute/docs/load-balancing/http/) on Google Cloud Platform for over 1,700 domains (different websites), all of which will have TLS/SSL. However, according to https://cloud.google.com/compute/docs/load-balancing/http/ssl-certificates, you can only add up to 10 SSL certificates per load balancer.
How should I go about trying to set up load balancing to serve websites using Compute Engine? I'd like to have instances in several different regions, and all of the steps in adding a domain should be automated (I have the deployment process figured out).
Of course I'll be providing my own SSL certificates. With Let's Encrypt I can put up to 100 domains on a single certificate (https://letsencrypt.org/docs/rate-limits/). But do I need a separate certificate for each domain on the Google Cloud load balancer? And if I can use one certificate per 100 domains, does that mean a single load balancer tops out at 1,000 domains (10 * 100)? Would I have to create multiple load balancers, each with its own frontend, using the same backend service? How many load balancers am I allowed to create per project?
We had the same scenario and requirement (1,000+ domains, Let's Encrypt SSL, and a Google load balancer), but alas we couldn't use the Google load balancer for it. Instead we created a TCP load balancer rather than an HTTPS one, so that we could handle port 443 ourselves.
Requests now reach our instances directly (still over SSL): we have an nginx config for each domain, every domain has its own Let's Encrypt certificate, and the app is served based on the requested domain.
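For reference, here is a minimal sketch of how adding a domain could be automated under that approach. It assumes certbot with the nginx plugin, a conf.d-style nginx layout, and an app on 127.0.0.1:8080 that dispatches on the Host header; all names and paths are illustrative, not the exact setup described above.

    # Hypothetical helper: add one more domain to an nginx + Let's Encrypt setup
    # like the one described above. Paths, the upstream address, and the certbot
    # invocation are assumptions for illustration only.
    import subprocess
    from pathlib import Path

    NGINX_CONF_DIR = Path("/etc/nginx/conf.d")  # assumed layout

    SERVER_BLOCK = """\
    server {{
        listen 443 ssl;
        server_name {domain};

        ssl_certificate     /etc/letsencrypt/live/{domain}/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/{domain}/privkey.pem;

        location / {{
            proxy_pass http://127.0.0.1:8080;  # app serves based on Host header
            proxy_set_header Host $host;
        }}
    }}
    """

    def add_domain(domain: str) -> None:
        # Obtain a certificate for the new domain (challenge details omitted).
        subprocess.run(
            ["certbot", "certonly", "--nginx", "-d", domain, "--non-interactive"],
            check=True,
        )
        # Drop in a per-domain server block and reload nginx.
        (NGINX_CONF_DIR / f"{domain}.conf").write_text(SERVER_BLOCK.format(domain=domain))
        subprocess.run(["nginx", "-s", "reload"], check=True)

    if __name__ == "__main__":
        add_domain("example.com")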
The limit on the number of certs is per IP, not per load balancer, and per the GCP docs it is now 15 per IP. If in your case the sites can use shared certs, that would cover 1,500 domains per IP address.
GCP quotas have a default but you can request an increase if your case needs it.
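To make that concrete, attaching more SAN certificates and adding extra HTTPS frontends that reuse the same URL map (and therefore the same backend services) can be scripted. A hedged sketch shelling out to gcloud; all resource names (my-url-map, lb-ip-2, the cert bundles) are placeholders:

    # Hedged sketch: add another HTTPS frontend (cert + target proxy + forwarding
    # rule) that reuses the SAME url-map and backend services. Resource names are
    # placeholders.
    import subprocess

    def gcloud(*args: str) -> None:
        subprocess.run(["gcloud", "compute", *args], check=True)

    # 1. Upload a SAN certificate covering up to ~100 domains (Let's Encrypt limit).
    gcloud("ssl-certificates", "create", "bundle-cert-17",
           "--certificate=bundle17-fullchain.pem",
           "--private-key=bundle17-privkey.pem")

    # 2. Create an extra target HTTPS proxy that points at the existing url-map,
    #    so every frontend shares the same backends.
    gcloud("target-https-proxies", "create", "https-proxy-2",
           "--url-map=my-url-map",
           "--ssl-certificates=bundle-cert-16,bundle-cert-17")

    # 3. Give that proxy its own global IP / forwarding rule on port 443.
    gcloud("forwarding-rules", "create", "https-rule-2",
           "--global", "--target-https-proxy=https-proxy-2",
           "--ports=443", "--address=lb-ip-2")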
Related
I'm currently building an application that will reside at app.mydomain.com, which is running on Heroku. All users will have their own entry points, like app.mydomain.com/client1, app.mydomain.com/client2, etc. I want clients to be able to set up their own domain (www.clientdomain.com) and CNAME it to their entry point. I understand this is pretty straightforward up until now.
All my DNS is handled by Cloudflare, and I believe I can configure Cloudflare into Full (Strict) mode; all I need to do is install their Origin Cert onto my Heroku dyno. This will ensure that all direct connections to my domain are secure (going to app.mydomain.com/client1).
Question is, how does a client go about getting an SSL'ed connection for their domain? Do I need to get a multi-domain cert and start adding domains to it as I get clients, am I supposed to install their cert onto Heroku (I believe I can only install one, so that's a no-go), or is it supposed to live on Cloudflare somewhere, or are there additional options I'm not seeing? (I hope there are!)
I'm not wondering what to do for my own domains, but rather how clients set up an SSL connection with their domains that resolve onto my servers.
This is rather perplexing!
The flow would be (I think):
User Browser -> Client's DNS -> (CNAME to) My Cloudflare -> Heroku
Hmm, it looks like this might be a pretty solid solution to this issue...
https://blog.cloudflare.com/introducing-ssl-for-saas/
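For what it's worth, onboarding a client's domain under that SSL for SaaS model looks like a single call to Cloudflare's custom hostnames API. A hedged sketch; the zone ID, token, and SSL settings are placeholders and should be checked against the current API docs:

    # Hedged sketch: register a client's domain as a custom hostname so Cloudflare
    # issues and renews its certificate. Zone ID and token are placeholders.
    import requests

    CF_API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "your-zone-id"      # the zone that app.mydomain.com lives in
    TOKEN = "your-api-token"

    def add_custom_hostname(client_domain: str) -> dict:
        resp = requests.post(
            f"{CF_API}/zones/{ZONE_ID}/custom_hostnames",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={
                "hostname": client_domain,                 # e.g. www.clientdomain.com
                "ssl": {"method": "http", "type": "dv"},   # domain-validated cert
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # The client still CNAMEs www.clientdomain.com to app.mydomain.com as planned.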
Edit - after clarification
I'm currently building an application that will reside at app.mydomain.com which is running on Heroku. All users will have their own entry points, like app.mydomain.com/client1, app.mydomain.com/client2, etc. Question is, how does a client go about getting an SSL'ed connection for their domain; do I need to get a multidomain cert and start adding domains to it as I get clients?
If you are going to use the same Heroku app for all of your clients (I think this is a bad idea, by the way, but you might be required to), then yes: you should get a multi-domain certificate and keep adding domains to it as your list of clients expands.
Original answer - which explains SSL + Load Balancing on Heroku.
I'm currently building an application that will reside at app.mydomain.com which is running on Heroku. I want clients to be able to set up their own domain (www.clientdomain.com) and CNAME it to mine.
You will need a wildcard certificate to cover your subdomain (app.mydomain.com). You'll have to use that cert in Heroku.
...all I need to do is install their Origin Cert onto my Heroku dyno.
You are correct - except it's not on your Heroku dyno, it's on your Heroku app endpoint. There's a good read here: https://serverfault.com/questions/68753/does-each-server-behind-a-load-balancer-need-their-own-ssl-certificate
If you do your load balancing on the TCP or IP layer (OSI layer 4/3, a.k.a. L4, L3), then yes, all HTTP servers will need to have the SSL certificate installed.
If you load balance on the HTTPS layer (L7), then you'd commonly install the certificate on the load balancer alone, and use plain un-encrypted HTTP over the local network between the load balancer and the webservers (for best performance on the web servers).
So you should install your SSL certificate to your Heroku endpoint and let Heroku handle the rest.
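Mechanically, putting the cert on the app endpoint (not a dyno) is a couple of CLI calls; a hedged sketch via the Heroku CLI, with the app name and certificate file names as placeholders:

    # Hedged sketch: attach a certificate to the Heroku app endpoint and register
    # the custom domain. App name and certificate files are placeholders.
    import subprocess

    APP = "my-heroku-app"

    def heroku(*args: str) -> None:
        subprocess.run(["heroku", *args, "--app", APP], check=True)

    heroku("certs:add", "server.crt", "server.key")  # cert lives on the app, not a dyno
    heroku("domains:add", "app.mydomain.com")        # then point DNS at the target Heroku prints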
Question is, how does a client go about getting an SSL'ed connection; do I need to get a multidomain cert and start adding domains to it as I get clients, am I supposed to install their cert onto Heroku (I believe I can only install one, so that's a no-go), or is it supposed to live on Cloudflare somewhere?
If you're referring to adding servers to your service on Heroku, all you need to do is increase the number of web dynos; Heroku will handle the load balancing between those dynos. Your SSL certificate should be terminated at the load balancer, so your dynos will be serving requests for the same endpoint. You shouldn't need another SSL certificate for the endpoint you've defined, as long as you're serving traffic from multiple dynos attached to it.
My application domain was bought on GoDaddy, and the NS servers point to Route 53. Route 53 has A records pointing to Elastic Load Balancers for different URLs, which manage traffic to my Elastic Beanstalk environments.
Do I require more than one SSL certificate? Will a wildcard certificate be fine for this scenario (I assume it will be).
The reason I'm confused is the setup of the system and a lack of understanding of how SSL certificates work with A records and load balancers.
A wildcard certificate should be sufficient, but you'll need to use CNAME records to map your subdomains to your various ELBs. A records are a recipe for disaster, because the ELBs change IP frequently.
You'll also need to load your certificate into AWS and create an HTTPS listener for each ELB, which is a lot of fun. See the ELB developer's guide.
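If you'd rather script it than click through the console, here is a hedged boto3 sketch of adding an HTTPS listener to a classic ELB; the load balancer name and certificate ARN are placeholders, and the certificate must already be uploaded to IAM/ACM:

    # Hedged sketch: add an HTTPS listener (terminating SSL at the ELB) to a
    # classic ELB. Names, region, and the certificate ARN are placeholders.
    import boto3

    elb = boto3.client("elb", region_name="us-east-1")

    elb.create_load_balancer_listeners(
        LoadBalancerName="my-app-elb",
        Listeners=[{
            "Protocol": "HTTPS",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP",   # plain HTTP from the ELB to the instances
            "InstancePort": 80,
            "SSLCertificateId": "arn:aws:acm:us-east-1:123456789012:certificate/placeholder",
        }],
    )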
This question is a little old, and I wanted to point out that today there is a free way to do this (rather than a wildcard cert, which is typically expensive).
Using AWS Certificate Manager, you can request a cert for free and assign a number of domains and subdomains to it. I have a cert in use that covers a total of six subdomains across two different domains. Four of the subdomains point to the same application, with the Elastic Beanstalk application's load balancer using that one cert.
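For reference, requesting such a multi-domain ACM certificate can also be scripted; a hedged boto3 sketch with placeholder domain names (DNS validation still has to be completed separately):

    # Hedged sketch: request one free ACM certificate covering several subdomains
    # across two apex domains. Domain names are placeholders.
    import boto3

    acm = boto3.client("acm", region_name="us-east-1")

    response = acm.request_certificate(
        DomainName="app.example.com",
        SubjectAlternativeNames=[
            "api.example.com",
            "admin.example.com",
            "app.example.org",
            "api.example.org",
        ],
        ValidationMethod="DNS",
    )
    print(response["CertificateArn"])  # attach this ARN to the load balancer listener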
I'm not the most advanced AWS user there is, and have come across a bit of a roadblock.
I've got 2 Elastic Beanstalk environments, each with a load balancer and 2 EC2 instances, and they share an RDS instance. One environment is for Development and the other for Production.
I have purchased a wildcard SSL certificate from Thawte, and would like to install it on both the Development and Production environments. I've gone through other threads about adding SSL certificates in AWS, but the admin interface has changed since they were written so I've been going round in circles trying to figure it out.
Also, do I install the same SSL certificate on both Load Balancers? Or is it a case of only having one load balancer and redirecting traffic depending on the domain?
Thanks
You will need two load balancers, one for each environment. For uploading the certificate, it sounds like you are creating your Beanstalk environments through the console. In that case, after you create an environment, go to the EC2 tab, then 'Load Balancers', then 'Listeners'. Edit the listener and change the protocol to HTTPS; that is where you can choose and upload the certificate.
Once the cert is uploaded, you can use the Elastic Beanstalk configuration to have future environments use that cert.
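The same change can be applied to both environments from a script instead of the console; a hedged boto3 sketch using the classic-ELB listener namespace, with environment names and the certificate ARN as placeholders:

    # Hedged sketch: point both Beanstalk environments at the same wildcard cert
    # by updating the 443 listener settings. Names and the ARN are placeholders.
    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")
    CERT_ARN = "arn:aws:iam::123456789012:server-certificate/star.example.com"

    for env in ("myapp-dev", "myapp-prod"):
        eb.update_environment(
            EnvironmentName=env,
            OptionSettings=[
                {"Namespace": "aws:elb:listener:443", "OptionName": "ListenerProtocol", "Value": "HTTPS"},
                {"Namespace": "aws:elb:listener:443", "OptionName": "SSLCertificateId", "Value": CERT_ARN},
                {"Namespace": "aws:elb:listener:443", "OptionName": "InstancePort", "Value": "80"},
            ],
        )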
Here's my AWS architecture
1 Load Balancer
2 Web/Application servers
1 DB server
If the client and my LB communicate over SSL (HTTPS), would it be safe for the internal LB-web/app-DB communication to use plain HTTP? Or should they communicate with the same SSL certificates internally too?
You certainly can terminate SSL on your web instances, but it is probably much easier to have SSL on your load balancer and communicate over plain HTTP between the ELB and the web instances.
This assumes you're running inside of a VPC of course.
As you scale, having SSL terminated at your LB and keeping internal traffic non-SSL will save you a great deal of overhead.
Using CloudFront
Another option is to create a CloudFront distribution in front of your ELB, so the SSL connection is terminated at the nearest edge location. From CloudFront to the ELB (in a particular region) the traffic travels over AWS's network, so if you can live with that level of security you also get better performance, with static content cached and delivered from the edge locations. Another advantage is that you can get free SSL certificates from AWS for CloudFront regardless of your ELB's region.
As for the DB server, it is normally kept inside the same VPC as the web server and is not accessible externally, so I don't see a great deal of value in putting a separate certificate on DB access within the private network unless you have specific regulatory requirements.
I am getting into load balancing and how SSL certificates can be integrated with a load balancer.
Let's say that I want to expose several copies of the same RESTful web service over Amazon Elastic Load Balancer. All should be fine and smooth up until now. However, security has not yet been taken into consideration.
Now, let's say that we want the communication to be secured with an SSL certificate, so we go ahead and buy one. We will have several IP addresses, all exposing the same RESTful service behind the load balancer. These IP addresses all get mapped to the same domain name (https://thedomain.com), so clients always connect to the same domain. It is then up to the load balancer to route each request to the web service that is getting the least traffic.
The main question is: is such an architecture possible with a single SSL certificate? If so, it would be possible to scale the number of services dynamically without having to change anything about the security setup.
It is then up to the load balancer to redirect to the web service which is getting the least traffic.
AFAIK, ELB supports only round-robin and sticky sessions, so what you describe above will not happen.
is it possible for such an architecture with a single SSL certificate?
You can install the SSL certificate on the ELB and let it do the SSL termination. The traffic between the ELB and your web nodes will then be unencrypted. You should explore AWS VPC, where you can have a public-facing ELB while your web nodes sit in a private subnet.
Also, ELB supports TCP load balancing. In that case you install the certificate on the web nodes themselves; the ELB accepts traffic on port 443 from the internet and simply forwards it to port 443 on the web nodes, which then handle the SSL encryption/decryption.
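A hedged boto3 sketch contrasting the two classic-ELB listener setups described above; names and the certificate ARN are placeholders, and in practice you would pick one of the two (a single ELB can't have both listeners on port 443):

    # Hedged sketch of the two options above on a classic ELB. Placeholders only.
    import boto3

    elb = boto3.client("elb", region_name="us-east-1")

    # Option 1: terminate SSL at the ELB, plain HTTP to the web nodes.
    elb.create_load_balancer_listeners(
        LoadBalancerName="rest-api-elb",
        Listeners=[{
            "Protocol": "HTTPS", "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP", "InstancePort": 80,
            "SSLCertificateId": "arn:aws:acm:us-east-1:123456789012:certificate/placeholder",
        }],
    )

    # Option 2: TCP pass-through, web nodes do SSL themselves.
    elb.create_load_balancer_listeners(
        LoadBalancerName="rest-api-elb",
        Listeners=[{
            "Protocol": "TCP", "LoadBalancerPort": 443,
            "InstanceProtocol": "TCP", "InstancePort": 443,
        }],
    )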
Hope this helps.