How to set up an architecture for scalable custom domains & auto-SSL on Google Kubernetes Engine

We are researching the best solution to allow customers to use their domain names with our hosting services. The hosting services are based on Google App Engine standard. The requirements are:
Customers can point their domain name to our server via CNAME or A record
Our server should be able to generate SSL certs for them automatically using Let's Encrypt
Our server should be able to handle custom SSL certs uploaded by customers
Should be robust and reliable when adding new customers (new configs, SSL certs, etc.) to our servers
Should be scalable, and can handle a large number of custom domains and traffic (e.g. from 0 to 10000)
Minimum operation costs (the less time needed for maintaining the infrastructure, the better)
It seems Google Kubernetes Engine (formerly known as Google Container Engine) would be the direction to go. Is there a specific, proven way to set it up? Any suggestions/experiences sharing would be appreciated.

I would recommend going through this link to get started with setting up a GKE cluster.
For your purpose of SSL on GKE, I would recommend creating an Ingress as specified in this link, which automatically creates a load balancer resource in GCP if you use the default GLBC ingress controller. The resulting LB's configuration (ports, host and path rules, certificates, backend services, etc.) is defined by the configuration of the Ingress object itself. You can then point the domain name to the IP of the load balancer.
If you want to configure your Ingress (and consequently the resulting LB) to use certificates created by Let's Encrypt, you would modify the configuration in the YAML of the Ingress.
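As a rough sketch of what that Ingress YAML can look like (the resource, host, and Service names below are placeholders, and the annotation assumes the default GLBC controller), a per-domain TLS entry simply references a Secret that holds the certificate:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: customer-domains                    # hypothetical name
      annotations:
        kubernetes.io/ingress.class: gce        # default GLBC controller on GKE
    spec:
      tls:
      - hosts:
        - customer1.example.com                 # the host the customer's CNAME/A record points at
        secretName: customer1-example-com-tls   # Secret containing tls.crt / tls.key
      rules:
      - host: customer1.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hosting-frontend          # hypothetical backend Service
                port:
                  number: 80

The GLBC controller turns each entry under spec.tls into a certificate attached to the resulting load balancer, so adding a customer domain roughly amounts to adding one host rule plus one TLS Secret.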
Integrating Let's Encrypt with Kubernetes is possible by using a tool called cert-manager to automate the process of obtaining TLS/SSL certificates and storing them inside Secrets.
This link shows how to use cert-manager with GKE.
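As a hedged sketch of what that setup typically involves (the issuer name and email are placeholders, and the apiVersion depends on the cert-manager release you install), you create a ClusterIssuer for Let's Encrypt and point your Ingress at it with an annotation:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod                   # hypothetical issuer name
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: ops@example.com                 # placeholder contact address
        privateKeySecretRef:
          name: letsencrypt-prod-account-key   # Secret for the ACME account key
        solvers:
        - http01:
            ingress:
              class: gce                       # solve HTTP-01 challenges through the GLBC Ingress
    ---
    # On the Ingress itself, this annotation asks cert-manager to manage the TLS Secrets:
    #   cert-manager.io/cluster-issuer: letsencrypt-prod

cert-manager then watches the Ingress, performs the ACME challenge for each host listed under spec.tls, and keeps the referenced Secrets renewed.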
If you want to use self-managed SSL certificates, please see this link for more information. GKE itself is scalable via the cluster autoscaler, which automatically resizes the cluster based on the demands of the workloads you want to run.
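For the self-managed certificate case above, a customer-uploaded certificate is typically stored as a TLS Secret that the Ingress references (the name below is a placeholder and the data values are stand-ins; in practice you would create the Secret with kubectl create secret tls or via the API):

    apiVersion: v1
    kind: Secret
    metadata:
      name: customer1-example-com-tls          # referenced from spec.tls in the Ingress
    type: kubernetes.io/tls
    data:
      tls.crt: <base64-encoded certificate chain>   # uploaded by the customer
      tls.key: <base64-encoded private key>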

Related

kubernetes traffic with own certificates

I have a Kubernetes cluster in a corporate environment, where all HTTPS traffic is man-in-the-middled and the certificates are replaced with the company's own. Right now, all the applications running on the cluster get the company's certificates injected by rebuilding the Docker image or by mounting them from a Secret and adding them to the local trust store. This is painful and makes it harder to just use public Helm charts and Docker images without modifying them.
For example, I'm running Jenkins on the cluster, which tries to install plugins from https://updates.jenkins-ci.org/. This would normally fail in my case with an SSL exception, unless I add the certificates to the Jenkins keystore.
I was wondering if there's a way to set this up at the cluster level, so that there's some component that deals with this and the applications can then access the internet normally, without being aware of the whole certificate situation?
My thoughts were:
A cluster proxy pod that all the applications then use.
Some ambassador container in each pod that the apps connect to.
I would imagine I'm not the only one in this situation, but I couldn't find a generic solution for this.
You could have a look at Istio. It's a service mesh that uses sidecar proxies to (among other things) take over responsibility for encrypting traffic between applications.
The proxies use the concept of mutual TLS (mTLS), where all connections inside the mesh are encrypted out of the box. The applications themselves don't have to bother with certificates and can send messages in plain text.
Istio also provides a mechanism to migrate to mTLS, so you can include your applications in the mesh one by one, switch to mTLS, and drop your own certificate handling.
You can set everything up with your own PKI so you're still using your company's certificates. You also get a bunch of other features like enhanced observability, canary deployments, on-the-fly token-based authentication/authorization, and more.
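As a minimal sketch of the mTLS part (assuming Istio is installed with istio-system as its root namespace; the apiVersion can differ between Istio releases), a mesh-wide PeerAuthentication policy enforces mTLS, while a per-namespace PERMISSIVE policy supports the gradual migration mentioned above:

    # Mesh-wide policy: all sidecar-to-sidecar traffic must use mutual TLS
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system          # placing it in Istio's root namespace makes it mesh-wide
    spec:
      mtls:
        mode: STRICT
    ---
    # During migration, a namespace can still accept both plaintext and mTLS
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: legacy-apps           # hypothetical namespace that is still being migrated
    spec:
      mtls:
        mode: PERMISSIVE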

Certificate Management in Managed Kubernetes

We are trying to secure our AKS cluster by providing trusted CAs (SSL certs) to the Kubernetes control plane.
The default API server certificate is issued while the cluster is created.
Is there any way that we can embed trusted certificates into the control plane before provisioning the cluster?
For example, when we try to reach the Kubernetes API server, it shows an SSL certificate error.
To get rid of this, we must be able to add our organization's certificates to the API server.
When we create a cluster in the cloud (a managed Kubernetes cluster), we do not have access to the control plane nodes, so we won't be able to configure the API server.
Could anyone please help me figure out how to add SSL certs to the control plane of Kubernetes?
"When we create a cluster in the cloud (a managed Kubernetes cluster), we do not have access to the control plane nodes, so we won't be able to configure the API server."
And that's the biggest inconvenience and pain point for everyone who wants anything other than out-of-the-box solutions...
My answer is no. Unfortunately, you can't achieve this with AKS.
By the way, there is also some interesting info here: Self signed certificates used on management API. I'm copy-pasting it here for future reference, even though that answer doesn't help you directly.
You are correct that the normal PKI specification dictates the use of non-self-signed certificates for SSL transport. However, the reasons we do not currently support fully signed certificates are:
Kubernetes requires the ability to self-generate and sign certificates
Users injecting their own CA is known to be error-prone in Kubernetes as a whole
We are aware of the desire to move away from self-signed certificates; however, this requires work upstream to make this much more likely to succeed. The official documentation explains a lot of this, as well as the requirements, well:
https://kubernetes.io/docs/concepts/cluster-administration/certificates/
https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
https://kubernetes.io/docs/setup/best-practices/certificates/
Additionally, this post goes deeper into the issues around cert management:
https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/

Is it possible to use a dynamic route in the nginx ingress controller?

Our services sit behind a K8s Service with a reverse proxy that receives requests for multiple domains and routes them to our services; additionally, we manage SSL certificates powered by Let's Encrypt for every user that configures their domain in our service. To summarize, I have multiple .conf files in nginx, one for every domain that is configured. It works really well.
But now we need to increase our levels of security and availability, and we are ready to configure the Ingress in K8s to handle this problem for us, because that is what it is built for.
Everything looked fine until we discovered that every time I need to configure a new domain as a host in the Ingress, I have to alter the config file and re-apply it.
So that's the problem: I want to apply the same concept that I already have running, but in the nginx ingress controller. Is that possible? I have more than 10k domains up and running, and I can't configure them all in my Ingress resource file.
Any thoughts?
In terms of scaling, Kubernetes should be fine with 10k domains configured in an Ingress resource. You might want to check how much storage you have on the etcd nodes to make sure you can store enough data there.
The default etcd storage limit is 2 GB, which is something to keep in mind if you keep adding domains.
You can also refer to the K8s best practices for building large clusters.
Another practice you can use is to run apply rather than create when changing the Ingress resource, so that the changes are incremental. Furthermore, if you are using K8s 1.18 or later, you can take advantage of Server-Side Apply.
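As a hedged sketch of how that can look (hosts, Secret names, and the backend Service are placeholders), a single Ingress resource can carry many host rules and TLS entries, and each new customer domain is just one more pair of entries that you apply incrementally:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: customer-domains
    spec:
      tls:
      - hosts: [customer-a.example.com]
        secretName: customer-a-tls
      - hosts: [customer-b.example.com]
        secretName: customer-b-tls
      rules:
      - host: customer-a.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend           # hypothetical shared backend Service
                port:
                  number: 80
      - host: customer-b.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80

In practice a manifest like this would be generated from your domain database and applied with kubectl apply (or Server-Side Apply); you could also shard the domains across several Ingress resources if a single object becomes unwieldy.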

Certificates per cluster or certificate per service provider?

We have a service provider that takes a request and creates an Elasticsearch cluster.
What is the best practice for issuing SSL certificates?
1. Should we issue a certificate per cluster?
2. Or is one certificate for my service provider enough, to be used for accessing all the clusters?
I am assuming that issuing a new certificate when creating each cluster is better.
Please provide your input.
Also, inside the cluster, do I really need to enable SSL so that pods talk to each other using certificates?
Yes, you should definitely use TLS to encrypt network traffic to, from, and within your Elasticsearch clusters running on a shared, managed Kubernetes service (GKE).
Additionally, I would opt for maximum separation of customer spaces with:
Kubernetes namespaces
namespaced service accounts and role bindings (see the sketch after this list)
and even PD-SSD based volumes with customer-supplied encryption keys
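As a rough sketch of that per-customer isolation (the namespace, service account, and role names are all hypothetical), each customer gets a namespace-scoped Role bound to its own ServiceAccount:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: customer-a
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: customer-a-sa
      namespace: customer-a
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: customer-a-role
      namespace: customer-a              # permissions apply only inside this namespace
    rules:
    - apiGroups: ["", "apps"]
      resources: ["pods", "services", "secrets", "deployments", "statefulsets"]
      verbs: ["get", "list", "watch", "create", "update", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: customer-a-binding
      namespace: customer-a
    subjects:
    - kind: ServiceAccount
      name: customer-a-sa
      namespace: customer-a
    roleRef:
      kind: Role
      name: customer-a-role
      apiGroup: rbac.authorization.k8s.io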
I'm not sure if you are aware of the existence of 'Elastic Cloud on Kubernetes' (ECK); it applies the Kubernetes Operator pattern for running and operating Elasticsearch clusters on your own K8s cluster in GCP. Treat it also as a collection of best practices for running an Elasticsearch cluster in the most secure way; here is a quick start tutorial.
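For reference, a per-customer cluster under ECK is declared as a single custom resource; the sketch below follows the shape of the ECK quick start (the names, version, and storage class are placeholders):

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: customer-a-es
      namespace: customer-a                # keeps the cluster inside the customer's namespace
    spec:
      version: 8.5.0                       # example version
      nodeSets:
      - name: default
        count: 3
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data       # the claim name ECK expects for data volumes
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: pd-ssd-csek  # hypothetical PD-SSD class with customer-supplied keys
            resources:
              requests:
                storage: 100Gi

ECK also enables TLS for each cluster's HTTP and transport layers by default, which covers the question about pods talking to each other over encrypted connections.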

Good practices for handling TLS LetsEncrypt with Kubernetes Service

Considering an Nginx reverse proxy handling TLS Let's Encrypt certificates "in front" of a backend service, what is a good deployment architecture for this setup on Kubernetes?
My first thought was to make a container with both Nginx and my server, run as a StatefulSet.
All of those StatefulSet pods have access to a volume mounted at /etc/nginx/certificates.
All of those containers run a cron and are allowed to renew those certificates.
However, I do not think it's the best approach. This type of architecture is meant to be split up, not to run completely independent services everywhere.
Maybe I should run an independent proxy service which handles certificates and redirects to the backend server deployment (Ingress + a Job for certificate renewal)?
If you are using a managed service (such as the GCP HTTPS Load Balancer), how do you issue a publicly trusted certificate and renew it?
You want kube-lego.
kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt
It works with GKE + LoadBalancer and with nginx-ingress as well. Usage is trivial: automatic certificate requests (including renewals), using Let's Encrypt.
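As a hedged sketch of that usage (the host, Secret, and Service names are placeholders; kube-lego predates the networking.k8s.io/v1 API, so this follows the older Ingress version used in its examples), you mark an Ingress with the tls-acme annotation and kube-lego fills the referenced Secret with a Let's Encrypt certificate:

    apiVersion: extensions/v1beta1            # Ingress API version from the kube-lego era
    kind: Ingress
    metadata:
      name: example-site
      annotations:
        kubernetes.io/tls-acme: "true"        # tells kube-lego to manage this Ingress
        kubernetes.io/ingress.class: "nginx"  # or "gce" for the GKE load balancer
    spec:
      tls:
      - hosts:
        - example.com
        secretName: example-com-tls           # kube-lego stores the issued certificate here
      rules:
      - host: example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: frontend           # hypothetical backend Service
              servicePort: 80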
The README says, perhaps tongue in cheek, that you need a non-production use case. I have been using it in production and have found it to be reliable enough.
(Full disclosure: I'm loosely associated with the authors but not paid to advertise the product)