Setting up multiple TLS Certificates & Domains with Kubernetes and Helm

This question is more to give me some direction on how to go about the problem in general, not a specific solution to the problem.
I have a working Kubernetes cluster that's using an nginx ingress as the gate to the outside world. Right now everything is on minikube, but the end goal is to eventually move it to GKE, EKS or AKS (for on-premises clients that want our software).
For this I'm going to use Helm charts to parametrize the YAML files and ENV variables needed to set up the resources. I will keep using nginx as the ingress to avoid maintaining an ALB ingress or other cloud-specific ingress controllers.
My question is:
I'm not sure how to manage TLS certificates and then how to point the ingress to a public domain for people to use it.
I wanted some guidance on how to go about this in general. Is the TLS certificate something that the user can provide to the Helm chart before configuring it? Where can I see a small example of this? And finally, is the domain the responsibility of the Helm chart, or is this something that has to be set up on the DNS provider (Route53, for example)? Is there an example you can suggest I take a look at?
Thanks a lot for the help.

Installing certificates using Helm is perfectly fine; just make sure you don't accidentally put certificates into a public Git repo. Best practice is to keep those certificates only on your local laptop and add them to .gitignore. After that you can tell Helm to grab those certificates from their directory.
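As a rough sketch of that approach (the chart layout, values keys and file paths below are placeholders, not something from the question): you can pass the certificate and key to Helm at install time with --set-file and render them into a standard TLS Secret.

# templates/tls-secret.yaml -- minimal sketch; the tls.crt / tls.key values keys are hypothetical
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-tls
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.tls.crt | b64enc }}
  tls.key: {{ .Values.tls.key | b64enc }}

Installed with something like helm install my-app ./chart --set-file tls.crt=./certs/tls.crt --set-file tls.key=./certs/tls.key, the PEM files stay on your laptop (and in .gitignore) instead of ending up in the repository.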
Regarding the DNS - you may use external-dns to make Kubernetes create DNS records for you. You will first need to integrate external-dns with your DNS provider; it will then watch Ingress resources for domain names and create the records automatically.
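For illustration (the hostname and service name are placeholders), once external-dns is running and wired up to, say, Route53, it picks up the record name from the host field of an Ingress rule like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com    # external-dns creates the DNS record for this host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80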

Related

Is it possible to use dynamic routes in the nginx ingress controller?

Our services use a K8s service with a reverse proxy that receives requests for multiple domains and routes them to our services. Additionally, we manage SSL certificates, powered by Let's Encrypt, for every user that configures their domain in our service. In short, I have a separate .conf file in nginx for every domain that is configured. It works really well.
But now we need to raise our levels of security and availability, and we are now ready to configure the Ingress in K8s to handle this problem for us, because that is what it is built for.
Everything looked fine until we discovered that every time I need to configure a new domain as a host in the Ingress, I have to alter the config file and re-apply it.
So that's the problem: I want to apply the same concept that I already have running, but in the nginx ingress controller. Is that possible? I have more than 10k domains up and running; I can't configure them all in my Ingress resource file.
Any thoughts?
In terms of scaling, Kubernetes should be fine with 10k domains configured in an Ingress resource. You might want to check how much storage you have on the etcd nodes to make sure you can store enough data there.
The default etcd storage limit is 2 GB; if you keep adding domains, that is something to keep in mind.
You can also refer to the K8s best practices when it comes to building large clusters.
Another practice you can use is to use apply rather than create when changing the Ingress resource, so that the changes are incremental. Furthermore, if you are using K8s 1.18 or later you can take advantage of Server-Side Apply.
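A hedged sketch of what that looks like in practice (hostnames and the backend service are placeholders): each new customer domain is just one more host entry, applied incrementally rather than recreated.

# ingress.yaml -- adding a domain means appending one more tls/rules entry,
# then: kubectl apply --server-side -f ingress.yaml   (Server-Side Apply, K8s 1.18+)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-domains
spec:
  ingressClassName: nginx
  tls:
    - hosts: [customer-a.example.com]
      secretName: customer-a-tls
    # ...one more tls entry per customer certificate
  rules:
    - host: customer-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: reverse-proxy
                port:
                  number: 80
    # ...one more rules entry per customer domain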

Kubernetes cluster certificates rotation

I'm looking for some advice on the procedure for certificate rotation. I have been practicing installing a cluster from scratch with Kelsey Hightower's Kubernetes the Hard Way. It has been great for understanding the certificates needed to build trust between the components that form a Kubernetes cluster.
But when consulting the official documentation about certificate rotation, I've only found this resource, which covers only the kubelet component.
I guess that the idea of certificate rotation would be to change all of the certificates involved: controller-manager, kube-proxy, scheduler, api-server, etc.
So, my questions are:
Are there any resources about the subject that you would recommend?
Is there an order I should follow when updating the components to minimize service disruption? I imagine there will be a period with communication problems, because some components will be using the old certificates while others will be using the new ones.
Say I back up the old certificates (create a copy in a different path) and replace the current files with newly generated certificates. Will I still need to restart the system units (or static pods / regular pods) that reference those certificates, or will the configuration be "hot" reloaded?
Thanks
That would be better managed by a sidecar proxy service such as Istio.
It offers certificate TTL out of the box, with a default of 90 days.
The rotation is not automated though.
Using an external provider like Let's Encrypt can help (as described here).
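To illustrate that last point (the names and the issuer are placeholders, and this assumes cert-manager is installed): a Certificate resource lets cert-manager re-issue a certificate automatically before it expires, so rotation of that particular cert is handled for you.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-service-cert
spec:
  secretName: my-service-tls     # cert-manager keeps this Secret up to date
  dnsNames:
    - my-service.example.com
  duration: 2160h                # 90 days (ACME CAs such as Let's Encrypt fix the lifetime themselves)
  renewBefore: 360h              # re-issue 15 days before expiry
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer

Note that this covers certificates issued for workloads and ingress; the control-plane certificates from "the hard way" still have to be regenerated and the components restarted.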

How to configure a Flask web service in Kubernetes with SSL?

I have a containerized Flask application (a simple web service exposed to the internet) with SSL enabled by gunicorn through:
CMD ["gunicorn", "--certfile", "/var/tmp/fullchain.pem", "--keyfile", "/var/tmp/key.pem", "__init__:create_app()", "-b", ":8080"]
I have a bot that renews Let's Encrypt certificates in this path every 3 months.
Now I am creating a Kubernetes cluster to put this application in and orchestrate the replicas.
In a related question I've seen some ingress controllers provide this certificate creation/renew functionality so I would not need to map to .pem files anymore. There is also cert-manager that does that.
Now I don't know if I need gunicorn, or what is the easiest and recommended way of configuring this to run the application. I am also in the process of choosing an ingress controller for my cluster.
Now I don't know if I need gunicorn.
Gunicorn is like Java's Tomcat: a production application server that also improves performance for a Python web server, so using Gunicorn is recommended even without SSL.
If another service in the same cluster needs to talk to your Flask server and you want to protect that connection, you should configure Gunicorn with SSL. If not, I think using an ingress controller with a certificate manager is more convenient.
I am also in the process of choosing an ingress controller for my cluster.
Well, I think the cert-manager official docs can help you; they deploy cert-manager with the NGINX ingress controller.
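Roughly what that tutorial ends up with (the issuer name, email and hostname below are placeholders): an ACME ClusterIssuer plus an annotated Ingress, so cert-manager obtains the certificate and stores it in the Secret the Ingress references.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com              # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts: [api.example.com]
      secretName: flask-app-tls          # created and renewed by cert-manager
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-app
                port:
                  number: 8080

With TLS terminated at the ingress, the Gunicorn command can drop --certfile/--keyfile and serve plain HTTP on 8080.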
Theoretically you don't need to abandon your current setup: a Flask app exposed over HTTPS. For instance, the NGINX ingress controller can pass (encrypted) TLS packets directly to an upstream server (in your case Gunicorn) using the SSL Passthrough feature.
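A minimal sketch of that passthrough option (the hostname and service are placeholders, and it assumes the controller was started with --enable-ssl-passthrough):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-passthrough
  annotations:
    # TLS is not terminated at the ingress; encrypted traffic is forwarded to Gunicorn as-is
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-app
                port:
                  number: 8080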
But it would definitely be better to do it the recommended Kubernetes way, with TLS enabled for the Ingress (where the cert-manager add-on can help you obtain certificates from sources like Let's Encrypt).

How to set up an architecture of scalable custom domains & auto-SSL on Google Kubernetes Engine

We are researching the best solution to allow customers to use their domain names with our hosting services. The hosting services are based on Google App Engine standard. The requirements are:
Customers can point their domain name to our server via CNAME or A record
Our server should be able to generate SSL certs for them automatically using Let's Encrypt
Our server should be able to handle custom SSL certs uploaded by customers
Should be robust and reliable when adding new customers (new confs, SSL certs etc.) into our servers
Should be scalable, and can handle a large number of custom domains and traffic (e.g. from 0 to 10000)
Minimum operation costs (the less time needed for maintaining the infrastructure, the better)
It seems Google Kubernetes Engine (formerly known as Google Container Engine) would be the direction to go. Is there a specific, proven way to set it up? Any suggestions/experiences sharing would be appreciated.
I would recommend going through this link to get started with setting up a GKE cluster.
For your purpose of SSL on GKE I would recommend creating an Ingress as specified in this link, which automatically creates a load balancer resource in GCP if you use the default GLBC ingress controller. The resulting LB's configuration (ports, host path rules, certificates, backend services, etc.) is defined by the configuration of the Ingress object itself. You can point the domain name at the IP of the load balancer.
If you want to configure your Ingress (and consequently the resulting LB) to use certs created by Let's Encrypt, you would modify the configuration presented in the YAML of the Ingress.
For integrating Let's Encrypt with Kubernetes, it is possible to use a service called cert-manager to automate the process of obtaining TLS/SSL certificates and store them inside Secrets.
This link shows how to use cert-manager with GKE.
If you want to use self-managed SSL certificates, please see this link for more information. GKE is scalable via the cluster autoscaler, which automatically resizes clusters based on the demands of the workloads you want to run.
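A rough sketch of the self-managed variant (the secret, host and service names are placeholders): upload each customer's certificate as a TLS Secret and reference it from the Ingress that drives the GCP load balancer.

# kubectl create secret tls customer-a-tls --cert=customer-a.crt --key=customer-a.key
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-domains
spec:
  tls:
    - secretName: customer-a-tls        # one entry per customer certificate
      hosts: [www.customer-a.com]
  rules:
    - host: www.customer-a.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hosting-frontend
                port:
                  number: 80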

Kubernetes, GCE, Load balancing, SSL

To preface this, I'm working on GCE and Kubernetes. My goal is simply to expose all microservices on my cluster over SSL. Ideally it would work the same as when you expose a deployment via type='LoadBalancer' and get a single external IP. That is my goal, but SSL is not available with those basic load balancers.
From my research the best current solution would be to set up an nginx ingress controller, use ingress resources and services to expose my micro services. Below is a diagram I drew up with my understanding of this process.
I’ve got this all to successfully work over HTTP. I deployed the default nginx controller from here: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx . As well as the default backend and service for the default backend. The ingress for my own micro service has rules set as my domain name and path: /.
This was successful but there were two things that were confusing me a bit.
When exposing the Service resource for my backend (microservice), one guide I followed used type='NodePort' and the other just put a port to reach the service. Both set the target port to the backend app's port. I tried it both ways and they both seemed to work. Guide one is from the link above. Guide 2: http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html. What is the difference here?
Another point of confusion is that my Ingress always gets two IPs. My initial thought was that there should only be one external IP, which would hit my Ingress and then be routed by nginx. Or does the IP point directly to nginx? In any case, the first IP address created seemed to give me the expected results, whereas visiting the second IP fails.
Despite my confusion, things seemed to work fine over HTTP. Over HTTPS, not so much. At first, when I made a web request over HTTPS, things would just hang. I opened 443 in my firewall rules, which seemed to work, however I would hit my default backend rather than my microservice.
Reading led me to this from Kubernetes docs: Currently the Ingress resource only supports http rules.
This may explain why I am hitting the default backend because my rules are only for HTTP. But if so how am I supposed to use this approach for SSL?
Another thing I noticed is that if I write an Ingress resource with no rules and give it my desired backend, I still get directed to my original default backend. This is even more odd because kubectl describe ing is updated and states that my default backend is my desired backend...
Any help or guidance would be much appreciated. Thanks!
So, for #2, you've probably ended up provisioning a Google HTTP(S) load balancer, most likely because you're missing the kubernetes.io/ingress.class: "nginx" annotation as described here: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers.
GKE has its own ingress controller that you need to override by sticking that annotation on your Ingress resources. This article has a good explanation of that stuff.
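Concretely, and as a hedged sketch (the service and host are placeholders), the annotation on the Ingress looks like this; without it, GKE's GLBC controller claims the Ingress and provisions that second, cloud load balancer IP:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-microservice
  annotations:
    kubernetes.io/ingress.class: "nginx"   # let the nginx controller, not GLBC, handle this Ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-microservice
                port:
                  number: 80

On newer clusters the spec.ingressClassName field replaces this legacy annotation.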
The Kubernetes docs have a pretty good description of what NodePort means: basically, the Service will allocate a port from a high range on each node in your cluster, and the nodes will forward traffic from that port to your Service. It's one way of setting up load balancers in different environments, but for your approach it's not necessary. You can just omit the type field of your microservice's Service and it will be assigned the default type, ClusterIP.
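A sketch of the simplest form (names and ports are placeholders): omit type and the Service defaults to ClusterIP, which is all the ingress controller needs to reach the pods.

apiVersion: v1
kind: Service
metadata:
  name: my-microservice
spec:
  # no "type" field, so this defaults to ClusterIP (reachable only inside the cluster)
  selector:
    app: my-microservice
  ports:
    - port: 80            # port exposed inside the cluster
      targetPort: 8080    # port the backend app listens on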
As for SSL, it could be a few different things. I would make sure you've got the Secret set up just as they describe in the nginx controller docs, e.g. with tls.crt and tls.key fields.
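For reference, such a Secret looks like the following (the name is a placeholder, and the data values are your base64-encoded PEM files):

# usually created with:
#   kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...   # base64-encoded certificate (truncated placeholder)
  tls.key: LS0tLS1CRUdJTi...   # base64-encoded private key (truncated placeholder)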
I'd also check the logs of the nginx controller: find out which pod it's running as with kubectl get pods, and then tail its logs: kubectl logs nginx-pod-<some-random-hash> -f. This is helpful for finding out whether you've misconfigured anything, like a Service that doesn't have any endpoints configured. Most of the time I've messed up the ingress stuff, it's been due to some pretty basic misconfiguration of Services/Deployments.
You'll also need to set up a DNS record for your hostname pointing at the load balancer's static IP, or else hit your service with cURL's -H flag as they do in the docs; otherwise you might end up getting routed to the default backend's 404.
To respond directly to your questions, since that's the whole point... Disclaimer: I'm a n00b, so take this all with a grain of salt.
With respect to #2, the blog post I link to below suggests the following architecture:
Create a deployment that deploys the nginx controller pods
Create a Service of type LoadBalancer with a static IP that routes traffic to the controller pods (sketched after this list)
Create an ingress resource that gets used by the nginx controller pods
Create a secret that gets used by the nginx controller pods to terminate SSL
And other stuff too
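A hedged sketch of the second item in that list (the static IP and labels are placeholders, and it assumes the IP was reserved in GCP beforehand):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10        # previously reserved static IP (placeholder)
  selector:
    app: nginx-ingress-controller     # must match the controller pods' labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443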
From what I understand, the HTTP vs HTTPS stuff happens at the nginx controller pods. All of my ingress rules are also HTTP, but the nginx ingress controller forces SSL and takes care of all that, terminating SSL at the controller so that everything below it, all the ingress stuff, can be HTTP. I have all HTTP rules, but all of my traffic through the LoadBalancer service is forced to use SSL.
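In today's nginx ingress controller that behaviour maps to a TLS section plus an annotation like the one below (a sketch, not the exact config from the blog post; names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
  annotations:
    # terminate TLS at the controller and redirect plain HTTP to HTTPS,
    # so the rules and the backends themselves stay plain HTTP
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts: [myapp.example.com]
      secretName: my-tls-secret
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80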
Again, I'm a n00b. Take this all with a grain of salt. I'm speaking in layman's terms because I'm a layman trying to figure this all out.
I came across your question while looking for some answers to my own questions. I ran into a lot of the same issues that you ran into (I'm assuming past tense given the amount of time that has passed). I wanted to point you (and/or others with similar issues) to a blog post that I found helpful when learning about the nginx controller. So far (I'm still at an early stage and in the middle of using the post), everything in the post has worked.
You're probably already past this stuff now being that it's been a few months. But maybe this will help someone else even if it doesn't help you:
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
It helped me understand what resources needed to be created, how to deploy the controller pods, and how to expose the controller pods (create a LoadBalancer service for the controller pods with a static IP), and also force SSL. It helped me jump over several hurdles and get past the "how do all the moving parts fit together".
The Kubernetes technical documentation is helpful for how to use each piece, but doesn't necessarily lay it all out and slap pieces together like this blog post does. Disclaimer: the model in the blog post might not be the best way to do it though (I don't have enough experience to make that call), but it did help me at least get a working example of an nginx ingress controller that forced SSL.
Hope this helps someone eventually.
Andrew