Running multiple applications in the same AKS cluster with ingress controller(s) TLS termination - SSL

I have managed to run successfully:
multiple applications in different namespaces with HTTP
one application with HTTPS (using cert-manager and Let's Encrypt)
But now I need to run multiple HTTPS apps.
I tried two paths:
using multiple dedicated ingress controllers + cert-manager instances
using only one controller + cert-manager and routing traffic with ingress rules (see the sketch below)
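For context, the second path usually boils down to a single nginx ingress controller and one cert-manager installation shared by all applications, with one Ingress per namespace requesting its own certificate. A rough sketch, where the hostnames, the ClusterIssuer name, and the Service names are placeholders, not a tested solution:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1
  namespace: app1            # first application in its own namespace
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # placeholder ClusterIssuer
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app1.example.com                                 # placeholder hostname
    secretName: app1-tls     # cert-manager stores the issued certificate here
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-svc                             # placeholder Service
            port:
              number: 80
# a second application in another namespace gets its own Ingress with the same
# pattern (its own host, secretName, and backend Service); the ingress controller
# and the ClusterIssuer are shared by all of them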
Is there an open source (complete) example of a working solution for this configuration? One based on the Azure Application Gateway Ingress Controller (AGIC) would also do.

Related

Istio load balancer not working while application is running, pods are ok

I have one application with multiple microservices, which exposes a Service of type LoadBalancer on port 443. When deployed on EKS it generates an LB URL, and if I hit that URL the application works.
Now I am trying to do a blue-green deployment with Istio.
I installed Istio. It created a load balancer in the istio-system namespace. I set up my application, along with a Gateway and a VirtualService, in a different namespace.
I used HTTPS on port 443 in the Gateway manifest. All EKS instances are 'in service', yet the Istio load balancer does not work, and I don't know how to debug it.
Seeking help.
I am attaching the code as an image, sorry for that.
If I run the application with a LoadBalancer without involving Istio, it runs successfully, but Istio's load balancer is somehow not working. I am lost here.
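For reference (the actual manifests are only attached as an image), a Gateway and VirtualService for HTTPS on port 443 are normally expected to look roughly like this; the hostname, TLS credential, and service name below are placeholders, not the poster's real values:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: app-gateway
  namespace: app              # placeholder application namespace
spec:
  selector:
    istio: ingressgateway     # selects the ingress gateway pods in istio-system
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: app-tls # placeholder secret; for the default install it must
                              # live in the ingress gateway's namespace (istio-system)
    hosts:
    - "app.example.com"       # placeholder hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-vs
  namespace: app
spec:
  hosts:
  - "app.example.com"
  gateways:
  - app-gateway
  http:
  - route:
    - destination:
        host: app-svc         # placeholder Kubernetes Service name
        port:
          number: 443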

Is it possible to use dynamic routes in the NGINX ingress controller?

Our services use a K8s service with a reverse proxy that receives requests for multiple domains and redirects them to our services. Additionally, we manage SSL certificates powered by Let's Encrypt for every user that configures their domain in our service. In summary, I have multiple .conf files in NGINX, one for every domain that is configured. It works really well.
But now we need to increase our levels of security and availability, and we are ready to configure an Ingress in K8s to handle this problem for us, because that is what it is built for.
Everything looked fine until we discovered that every time I need to configure a new domain as a host in the Ingress, I need to alter the config file and re-apply it.
So that's the problem: I want to apply the same concept that I already have running, but in the NGINX ingress controller. Is that possible? I have more than 10k domains up and running; I can't configure them all by hand in my Ingress resource file.
Any thoughts?
In terms of scaling, Kubernetes should handle 10k domains configured in an Ingress resource just fine. You might want to check how much storage you have on the etcd nodes to make sure you can store enough data there.
The default etcd storage limit is 2 GB; if the number of domains keeps growing, that is something to keep in mind.
You can also refer to the K8s best practices for building large clusters.
Another practice is to use apply rather than create when changing the Ingress resource, so that the changes are incremental. Furthermore, if you are using K8s 1.18 or later you can take advantage of Server-Side Apply.
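As a rough illustration of that (hostnames and the backend Service are made up), the domains live as additional entries in a single Ingress resource and each new customer is appended and re-applied incrementally:

# applied with: kubectl apply --server-side -f ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-domains
spec:
  ingressClassName: nginx
  tls:
  - hosts: ["customer1.example.com"]
    secretName: customer1-tls        # one certificate secret per domain
  - hosts: ["customer2.example.com"]
    secretName: customer2-tls
  rules:
  - host: customer1.example.com      # one rule per customer domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app                # placeholder backend Service
            port: { number: 80 }
  - host: customer2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port: { number: 80 }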

How to configure a Flask web service in Kubernetes with SSL?

I have a containerized Flask application (a simple web service exposed to the internet) with SSL enabled by Gunicorn through:
CMD ["gunicorn", "--certfile", "/var/tmp/fullchain.pem", "--keyfile", "/var/tmp/key.pem", "__init__:create_app()", "-b", ":8080"]
I have a bot that renews Let's Encrypt certificates in this path every 3 months.
Now I am creating a Kubernetes cluster to run this application and orchestrate the replicas.
In a related question I've seen that some ingress controllers provide this certificate creation/renewal functionality, so I would not need to map the .pem files anymore. There is also cert-manager, which does that.
Now I don't know if I need Gunicorn, or what is the easiest and recommended way to configure this to run the application. I am also in the process of choosing an ingress controller for my cluster.
Now I don't know if I need gunicorn.
Gunicorn is like Java's Tomcat, and it can also improve performance for a Python web server, so using Gunicorn is recommended even without SSL.
If another service in the same cluster needs to talk to your Flask server and you want to protect that connection, you should configure Gunicorn with SSL. If not, I think using an ingress controller with a certificate manager is more convenient.
I am also in the process of choosing an ingress controller for my cluster.
Well, I think the official cert-manager documentation can help you; it walks through deploying cert-manager with the NGINX ingress controller.
Theoretically you don't need to give up your current setup: a Flask app exposed over HTTPS. For instance, the NGINX ingress controller can pass (encrypted) TLS packets directly to an upstream server (in your case Gunicorn) using its SSL passthrough feature.
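A minimal sketch of that option, assuming the NGINX ingress controller was started with the --enable-ssl-passthrough flag; the hostname and Service name are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-passthrough
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"   # hand TLS straight to the pod
spec:
  ingressClassName: nginx
  rules:
  - host: flask.example.com          # placeholder hostname (routing is based on SNI)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flask-svc          # Service in front of the Gunicorn pods serving HTTPS on 8080
            port:
              number: 8080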
But it would definitely be better to do it the recommended Kubernetes way, with TLS enabled on the Ingress (where the cert-manager add-on can help you obtain certificates from sources like Let's Encrypt).
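In that setup Gunicorn listens on plain HTTP and the Ingress terminates TLS. A minimal sketch, assuming an nginx ingress controller and a ClusterIssuer named letsencrypt-prod already exist (both names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # placeholder issuer created beforehand
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - flask.example.com              # placeholder hostname
    secretName: flask-app-tls        # cert-manager writes the certificate here
  rules:
  - host: flask.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: flask-svc          # Service pointing at Gunicorn (now plain HTTP)
            port:
              number: 8080

The container's CMD would then drop the certificate flags, e.g. CMD ["gunicorn", "__init__:create_app()", "-b", ":8080"].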

How to set up an architecture of scalable custom domains & auto-SSL on Google Kubernetes Engine

We are researching the best solution to allow customers to use their domain names with our hosting services. The hosting services are based on Google App Engine standard. The requirements are:
Customers can point their domain name to our server via CNAME or A record
Our server should be able to generate SSL certs for them automatically using Let's Encrypt
Our server should be able to handle custom SSL certs uploaded by customers
Should be robust and reliable when adding new customers (new confs, SSL certs etc.) into our servers
Should be scalable, and can handle a large number of custom domains and traffic (e.g. from 0 to 10000)
Minimum operation costs (the less time needed for maintaining the infrastructure, the better)
It seems Google Kubernetes Engine (formerly known as Google Container Engine) would be the direction to go. Is there a specific, proven way to set it up? Any suggestions/experiences sharing would be appreciated.
I would recommend going through this link to get started with setting up a GKE cluster.
For your purpose of SSL on GKE, I would recommend creating an Ingress as specified in this link, which automatically creates a load balancer resource in GCP if you use the default GLBC ingress controller. The resulting LB's configuration (ports, host/path rules, certificates, backend services, etc.) is defined by the configuration of the Ingress object itself. You can then point the domain name to the IP of the load balancer.
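For illustration, with the default GLBC controller an Ingress along these lines (the hostname, backend Service, and static-IP annotation are placeholders) is enough for GCP to provision a load balancer with the corresponding host/path rules and backends:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web-ip   # optional reserved static IP
spec:
  rules:
  - host: customer1.example.com      # the domain the customer points a CNAME/A record at
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hosting-frontend   # placeholder backend Service (NodePort or NEG-backed)
            port:
              number: 80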
If you want to configure your Ingress (and consequently the resulting LB) to use certs created by Let's Encrypt, you would modify the configuration in the YAML of the Ingress.
Integrating Let's Encrypt with Kubernetes is possible using cert-manager, which automates the process of obtaining TLS/SSL certificates and stores them in Secrets.
This link shows how to use cert-manager with GKE.
If you want to use self-managed SSL certificates, please see this link for more information. GKE is scalable via the cluster autoscaler, which automatically resizes clusters based on the demands of the workloads you want to run.
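Both the Let's Encrypt (cert-manager) path and the customer-uploaded path end up referencing a TLS Secret from the Ingress. A hedged sketch, where the Secret is either populated by cert-manager or created manually from the customer's files (all names are placeholders):

# self-managed certificate uploaded by a customer:
#   kubectl create secret tls customer1-tls --cert=customer1.crt --key=customer1.key
# or, for Let's Encrypt, let cert-manager populate the same secret via an Issuer/ClusterIssuer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # only needed for the cert-manager path
spec:
  tls:
  - hosts:
    - customer1.example.com          # placeholder customer domain
    secretName: customer1-tls        # the TLS secret, however it was created
  rules:
  - host: customer1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hosting-frontend
            port:
              number: 80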

Good practices for handling TLS LetsEncrypt with Kubernetes Service

Considering an NGINX reverse proxy handling TLS Let's Encrypt certificates "in front of" a backend service, what is a good deployment architecture for this setup on Kubernetes?
My first thought was to make a container with both NGINX and my server, deployed as a StatefulSet.
All those StatefulSets have access to a volume mounted at /etc/nginx/certificates.
All those containers run a cron job and are allowed to renew those certificates.
However, I do not think it's the best approach. This kind of architecture is meant to be split up, not to run completely independent services everywhere.
Maybe I should run an independent proxy service which handles the certificates and redirects to the backend server deployment (Ingress plus a Job for certificate renewal)?
If you are using a managed service (such as the GCP HTTPS Load Balancer), how do you issue a publicly trusted certificate and renew it?
You want kube-lego.
kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt
It works with GKE + LoadBalancer and with nginx-ingress as well. Usage is trivial: automatic certificate requests (including renewals), backed by Let's Encrypt.
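From memory, kube-lego is annotation-driven: usage amounts to adding the tls-acme annotation and a tls block to an existing Ingress. A sketch with placeholder hostname, secret, and Service names (kube-lego predates the networking.k8s.io/v1 API, so the old extensions/v1beta1 form is shown):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/tls-acme: "true"   # tells kube-lego to request/renew a certificate
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - app.example.com                # placeholder hostname
    secretName: app-example-com-tls  # kube-lego stores the issued certificate here
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app        # placeholder Service (old extensions/v1beta1 syntax)
          servicePort: 80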
The README says, perhaps tongue in cheek, that you need a non-production use case. I have been using it in production and have found it to be reliable enough.
(Full disclosure: I'm loosely associated with the authors but not paid to advertise the product)