I have a containerized Flask application (a simple web service exposed to the internet) with SSL enabled by gunicorn through:
CMD ["gunicorn", "--certfile", "/var/tmp/fullchain.pem", "--keyfile", "/var/tmp/key.pem", "__init__:create_app()", "-b", ":8080"]
I have a bot that renews Let's Encrypt certificates in this path every 3 months.
Now I am creating a Kubernetes cluster to run this application and orchestrate its replicas.
In a related question I've seen that some ingress controllers provide this certificate creation/renewal functionality, so I would not need to map the .pem files into the container anymore. There is also cert-manager, which does the same.
Now I don't know whether I still need gunicorn, or what the easiest and recommended way to configure this is to run the application. I am also in the process of choosing an ingress controller for my cluster.
Now I don't know if I need gunicorn.
Gunicorn plays a role similar to Java's Tomcat: it is a production WSGI server and also improves the performance of a Python web application, so using Gunicorn is still recommended even without SSL.
If another service in the same cluster needs to talk to your Flask server and you want to protect that connection, you should keep SSL configured in Gunicorn. If not, I think terminating TLS at an ingress controller with a certificate manager is more convenient.
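For example, if you terminate TLS at the ingress, Gunicorn can serve plain HTTP inside the cluster. A minimal sketch of such a Deployment (the name, image and labels are placeholders, not from the question; equivalently you could just drop --certfile/--keyfile from your Dockerfile CMD):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: registry.example.com/flask-app:latest   # placeholder image
          # Gunicorn without --certfile/--keyfile: plain HTTP, TLS is handled at the ingress
          command: ["gunicorn", "__init__:create_app()", "-b", ":8080"]
          ports:
            - containerPort: 8080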
I am also in the process of choosing an ingress controller for my cluster.
Well, I think the official cert-manager documentation can help you; it walks through deploying cert-manager with the NGINX ingress controller.
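Following that guide, the usual pattern is a ClusterIssuer plus an Ingress that references it; cert-manager then creates and renews the certificate Secret the ingress controller uses for TLS termination. A rough sketch (domain, e-mail, Secret and Service names are made up for illustration):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com           # placeholder contact e-mail
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com              # placeholder domain
      secretName: flask-app-tls        # cert-manager creates and renews this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-app
                port:
                  number: 8080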
In theory you don't need to give up your current setup: a Flask app exposed over HTTPS. For instance, the NGINX ingress controller can pass (encrypted) TLS packets directly to an upstream server (in your case Gunicorn) using the SSL Passthrough feature.
But it would definitely be better to do it the recommended Kubernetes way, with TLS enabled on the Ingress (where the cert-manager add-on can help you obtain certificates from sources like Let's Encrypt).
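If you do want to keep TLS in Gunicorn, the passthrough variant is just an annotation on the Ingress (and the controller has to be started with --enable-ssl-passthrough); with passthrough the ingress does not terminate or inspect the traffic. A sketch with placeholder names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-app-passthrough
  annotations:
    # requires the NGINX ingress controller to run with --enable-ssl-passthrough
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com            # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flask-app        # Service in front of the Gunicorn pods serving HTTPS
                port:
                  number: 8080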
Related
I have a Node.js app using the kafkajs package to connect to AWS MSK.
We are moving to Strimzi Kafka because we already have a Kubernetes cluster and don't need MSK anymore.
Until now we connected with SSL but didn't have to specify any CA path or anything. We used this connection method both in our Node.js apps and in kafka-ui, and it worked without issues.
We are trying to do the same with Strimzi Kafka, but we get "SSL handshake failed".
From my understanding, AWS MSK uses Amazon certificates that are publicly trusted, while Strimzi Kafka generates self-signed certificates, which is fine by us.
How can I keep connecting the same way we did with AWS MSK, i.e. just setting ssl: true in kafkajs? (That works there.)
Thanks.
The easiest way to use a certificate signed by a public CA is to use a listener certificate, which lets you provide your own server certificate for a given listener. I'm not sure how the Amazon CA works, but this blog post shows how to do it, for example using cert-manager and Let's Encrypt.
Keep in mind that to use public CAs, you usually need proper domain names and not just internal Kubernetes services. This might, for example, increase costs or latency if your applications run in the same Kubernetes cluster, because the traffic might need to go through a load balancer or ingress.
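As a sketch of what the listener certificate looks like in the Kafka custom resource (the Secret is one you create yourself, for example via cert-manager; names are placeholders and the rest of the Kafka spec is omitted):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    listeners:
      - name: external
        port: 9094
        type: loadbalancer             # or ingress/nodeport, depending on how you expose it
        tls: true
        configuration:
          # use your own certificate (e.g. issued by Let's Encrypt) instead of the
          # self-signed one generated by the Strimzi cluster CA
          brokerCertChainAndKey:
            secretName: kafka-listener-tls   # placeholder Secret name
            certificate: tls.crt
            key: tls.key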
I managed to run successfully:
multiple applications in different namespaces over HTTP
one application over HTTPS (using cert-manager and Let's Encrypt)
But I need to run multiple HTTPS apps.
I tried two paths:
Using multiple dedicated ingress controllers + cert-manager instances
Using only one controller + cert-manager and routing traffic with Ingress rules
Is there an open source (complete) example of a working solution for this configuration? Also one based on Azure Application Gateway Ingress Controller (AGIC) would do.
This question is more about getting some direction on how to approach the problem in general, not a specific solution.
I have a working Kubernetes cluster that uses an nginx ingress as the gate to the outside world. Right now everything is on minikube, but the end goal is to eventually move it to GKE, EKS or AKS (for on-premise clients that want our software).
For this I'm going to use Helm charts to parametrize the YAML files and env variables needed to set up the resources. I will keep using nginx as the ingress to avoid maintaining the ALB ingress or other cloud-specific ingress controllers.
My question is:
I'm not sure how to manage TLS certificates and then how to point the ingress to a public domain for people to use it.
I wanted some guidance on how to go about this in general. Is the TLS certificate something that the user can provide to the Helm chart before configuring it? Where can I see a small example of this? And finally, is the domain the responsibility of the Helm chart, or is it something that has to be set up with the DNS provider (Route53, for example)? Is there an example you can suggest I take a look at?
Thanks a lot for the help.
Installing certificates using Helm is perfectly fine; just make sure you don't accidentally push the certificates to a public Git repo. Best practice is to keep those certificates only on your local laptop and add them to .gitignore. After that you can tell Helm to grab the certificates from their directory.
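As a rough illustration of how that can look (the template path and value names are made up): a template renders the local certificate files, passed in at install time, into a TLS Secret.

# templates/tls-secret.yaml (hypothetical file in your chart)
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-tls
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.tls.cert | b64enc | quote }}
  tls.key: {{ .Values.tls.key | b64enc | quote }}

At install time you would pass the files from your laptop, e.g. helm install myapp ./chart --set-file tls.cert=fullchain.pem --set-file tls.key=key.pem, so the key material never lands in the repository.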
Regarding DNS: you can use external-dns to have Kubernetes create DNS records for you. You will first need to integrate external-dns with your DNS provider; it will then watch Ingress resources for domain names and create the records automatically.
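For reference, a minimal external-dns container snippet watching Ingress resources (the provider, domain filter and owner id are placeholders for your setup; the rest of the Deployment is omitted):

containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.14.0   # pick a current version
    args:
      - --source=ingress               # watch Ingress resources for hostnames
      - --provider=aws                 # e.g. Route53; swap for your DNS provider
      - --domain-filter=example.com    # placeholder zone to manage
      - --registry=txt
      - --txt-owner-id=my-cluster      # placeholder identifier for this cluster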
Our services sit behind a K8s Service with a reverse proxy that receives requests for multiple domains and routes them to our services; additionally, we manage SSL certificates from Let's Encrypt for every user that configures their domain in our service. In short, I have a separate nginx .conf file for every configured domain. It works really well.
But now we need to increase our level of security and availability, and we are ready to configure an Ingress in K8s to handle this for us, since that is what it is built for.
Everything looked fine until we discovered that every time I need to configure a new domain as a host in the Ingress, I need to alter the config file and re-apply it.
So that's the problem: I want to apply the same concept that I already have running, but with the nginx ingress controller. Is that possible? I have more than 10k domains up and running; I can't configure them all in my Ingress resource file.
Any thoughts?
In terms of Kubernetes scaling, 10k domains should be fine to configure in an Ingress resource. You might want to check how much storage you have on the etcd nodes to make sure you can store enough data there.
The default etcd storage quota is 2 GB, but if you keep adding domains it's something to keep in mind.
You can also refer to the K8s best practices when it comes to building large clusters.
Another practice is to use apply rather than create when changing the Ingress resource; that way the changes are incremental. Furthermore, if you are using K8s 1.18 or later you can take advantage of Server-Side Apply.
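In that model all the hosts live in one (or a few) Ingress resources and new domains are simply appended and re-applied, for example (domains and Service name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-domains
spec:
  ingressClassName: nginx
  tls:
    - hosts: [customer1.example.com]
      secretName: customer1-tls
    - hosts: [customer2.example.com]
      secretName: customer2-tls
    # one tls entry per configured domain
  rules:
    - host: customer1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
    - host: customer2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
    # additional hosts are appended here and re-applied with kubectl apply (or apply --server-side)

If a single object grows too large, you can split the hosts across several Ingress resources (for example in batches); the nginx controller merges rules from all of them, and each object stays small in etcd.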
Considering an Nginx reverse proxy handling TLS Let's Encrypt certificates "in front of" a backend service, what is a good deployment architecture for this setup on Kubernetes?
My first thought was to make a container with both Nginx and my server together, run as a StatefulSet.
All those StatefulSet pods have access to a volume mounted at /etc/nginx/certificates.
All those containers run a cron job and are allowed to renew those certificates.
However, I do not think it's the best approach. This type of architecture is meant to be split up, not to run completely independent services everywhere.
Maybe I should run an independent proxy service that handles the certificates and forwards traffic to the backend server Deployment (ingress + a job for certificate renewal)?
If you are using a managed service (such as the GCP HTTPS Load Balancer), how do you issue a publicly trusted certificate and renew it?
You want kube-lego.
kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt
It works with GKE + LoadBalancer and with nginx-ingress as well. Usage is trivial: automatic certificate requests (including renewals), using Let's Encrypt.
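The trigger is an annotation on the Ingress plus a tls section; kube-lego then requests and renews the certificate into the named Secret. A minimal sketch (host and names are placeholders):

apiVersion: extensions/v1beta1         # Ingress API version from the kube-lego era
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/tls-acme: "true"     # tells kube-lego to request a certificate
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - app.example.com              # placeholder domain
      secretName: my-app-tls           # kube-lego creates and renews this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: 80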
The README says (perhaps tongue in cheek) that you need a non-production use case. I have been using it in production and I have found it to be reliable enough.
(Full disclosure: I'm loosely associated with the authors but not paid to advertise the product)