Istio Ingress with cert-manager - SSL

I have a Kubernetes cluster running Kafka (via Strimzi) together with Istio. Certificates are managed by cert-manager. I want to use TLS passthrough on my ingress gateway, but I am a little confused about it.
When SIMPLE mode is used, there is a credentialName field, which must match the name of the secret:
tls:
  mode: SIMPLE
  credentialName: httpbin-credential
It is a nice and simple approach. But what about mode: PASSTHROUGH when I have many hosts? I studied the demo on the Istio website (https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/#deploy-an-nginx-server), where the certificate details are stored in the server configuration file and a ConfigMap is created. The official Istio documentation notes that credentialName only applies to MUTUAL and SIMPLE modes.
What is the correct and simple way to expose my hosts to external traffic through the Istio ingress gateway while using cert-manager?

The difference between SIMPLE & PASSTHROUGH is:
SIMPLE instructs the gateway to terminate TLS itself and forward the decrypted traffic to the backend; the certificate and key come from the secret named in credentialName.
PASSTHROUGH instructs the gateway to pass the ingress traffic through as-is, without terminating TLS. Routing is done on the SNI value and the backend presents its own certificate, so no credentialName is needed on the gateway. A minimal sketch follows.
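For PASSTHROUGH with cert-manager, the gateway never needs the certificate at all: cert-manager issues the cert into a secret, the backend workload (for Kafka, the Strimzi listener) presents it, and the gateway only routes on SNI. A minimal sketch, assuming a hypothetical host kafka.example.com and an in-cluster service my-kafka-bootstrap on port 9093 (both illustrative, not from the question):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kafka-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: PASSTHROUGH        # no credentialName; the backend terminates TLS
    hosts:
    - kafka.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kafka-passthrough
spec:
  hosts:
  - kafka.example.com
  gateways:
  - kafka-gateway
  tls:
  - match:
    - port: 443
      sniHosts:
      - kafka.example.com
    route:
    - destination:
        host: my-kafka-bootstrap   # hypothetical in-cluster service
        port:
          number: 9093
Because the gateway only inspects the SNI header, you add more hosts by listing them under hosts and adding matching sniHosts routes; the cert-manager secrets stay with the backends.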

Related

Using HTTP2 with GKE and Google Managed Certificates

I am using an Ingress with Google-managed SSL certs, mostly similar to what is described here:
https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate
However my backend service is a grpc service that is using HTTP2. According to the same documentation if I am using HTTP2 my backend needs to be "configured with SSL".
This sounds like I need a separate set of certificates for my backend service to configure it with SSL.
Is there a way to use the same Google managed certs here as well?
What are my other options here? I am using Google-managed certs for the Ingress precisely so that I don't have to manage any certs on my own; if I then use self-signed certificates for my service, that kind of defeats the purpose.
I don't think it's required to create SSL certificates for the backend services if you are terminating HTTPS at the LB level. You can attach your certs at the LB, and traffic will be HTTPS to the LB and plain HTTP from the LB to the backend.
You might need to create a new SSL/TLS cert if different ssl-protocols (TLSv1.2, TLSv1.3) or cipher suites are set in the ConfigMap of the ingress controller you are using (NGINX ingress controller, Kong, etc.); see the sketch below.
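For example, with the ingress-nginx controller the protocols and ciphers live in the controller's ConfigMap; a minimal sketch, assuming the default ConfigMap name and namespace of a standard ingress-nginx install (yours may differ):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"        # protocols offered to clients
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"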
If you are looking for end-to-end HTTPS traffic, you definitely need to create a cert for the backend service.
You can also let cert-manager create and manage the certificate in a Kubernetes secret and mount it into the deployment, where the service will use it; in that case you don't need to create or manage the certs yourself. The ingress will pass the HTTPS request through to the service directly.
In this case, it will be an end-to-end HTTPS setup, as sketched below.
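A minimal sketch of that setup, assuming a hypothetical deployment my-grpc-app and a cert-manager-created secret my-grpc-app-tls (both names illustrative): the secret is mounted as a volume and the application terminates TLS itself.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-grpc-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-grpc-app
  template:
    metadata:
      labels:
        app: my-grpc-app
    spec:
      containers:
      - name: app
        image: my-grpc-app:latest          # hypothetical image
        volumeMounts:
        - name: tls
          mountPath: /etc/tls              # app reads tls.crt / tls.key from here
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: my-grpc-app-tls      # secret created and renewed by cert-manager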
Update:
"Note: To ensure the load balancer can make a correct HTTP2 request to your backend, your backend must be configured with SSL. For more information on what types of certificates are accepted, see Encryption from the load balancer to the backends."
End-to-end TLS seems to be a requirement for HTTP2.
This is my site, https://findmeip.com; it runs on HTTP2 and terminates SSL/TLS at the Nginx level only.
It's definitely good to go with the suggested practice, so you can use the ESP option from Google, setting up a GKE Ingress + ESP + gRPC stack.
https://cloud.google.com/endpoints/docs/openapi/specify-proxy-startup-options?hl=tr
If you don't want to use ESP, see the suggestion above:
You can mount the certificate into the deployment, where it will be used by the service; in that case there is no need to manage or create the certs manually. In other words, cert-manager will create, manage, and renew the SSL/TLS certificate on your behalf in a Kubernetes secret, which the service then uses.
Google Managed Certificates can only be used for the frontend portion of the load balancer (i.e. client to LB). If you need encryption from the LB to the backends, you will have to use self-signed certificates or some other way to store said certificates on GKE as secrets, and configure the Ingress to connect to the backend using these secrets.
Like this https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#setting_up_https_tls_between_client_and_load_balancer
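On GKE specifically, the part that tells the load balancer to speak HTTP2 over TLS to the backend is the cloud.google.com/app-protocols annotation on the Service; the certificate the backend presents is not validated by the load balancer, so a self-signed one is enough. A sketch with illustrative names:
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
  annotations:
    cloud.google.com/app-protocols: '{"grpc-port": "HTTP2"}'   # LB uses HTTP2 over TLS to this port
spec:
  type: NodePort                      # or ClusterIP with container-native load balancing (NEGs)
  selector:
    app: my-grpc-app
  ports:
  - name: grpc-port
    port: 443
    targetPort: 8443                  # container must serve TLS here; self-signed is accepted
The container behind targetPort 8443 must actually serve TLS, otherwise the load balancer's HTTP2 health checks and requests will fail.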

Empty "ca.crt" file from cert-manager

I use cert-manager to generate TLS certificates for my application on Kubernetes with Let's Encrypt.
It is running and I can see "ca.crt", "tls.crt" and "tls.key" inside the container of my application (in /etc/letsencrypt/).
But "ca.crt" is empty, and the application complains about it (Error: Unable to load CA certificates. Check cafile "/etc/letsencrypt/ca.crt"). The two other files look like normal certificates.
What does that mean?
With cert-manager you use the nginx-ingress controller as the exposure point.
The ingress-nginx controller creates a load balancer, and you can set up your application's TLS certificate there.
There is nothing certificate-related stored inside the cert-manager pod itself.
So set up nginx-ingress together with cert-manager; cert-manager will manage the TLS certificate, and the certificate will be stored in a Kubernetes secret (a sketch follows the guide link below).
Please follow this guide for more details:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
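A minimal sketch of how the pieces fit together, assuming an ingress-nginx install and a hypothetical ClusterIssuer named letsencrypt-prod (host and service names are illustrative): cert-manager sees the annotation, solves the ACME challenge, and writes the issued certificate into the secret named under tls.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # hypothetical ClusterIssuer
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
  tls:
  - hosts:
    - app.example.com
    secretName: my-app-tls        # cert-manager stores tls.crt / tls.key here
With this pattern your pods do not read /etc/letsencrypt at all; TLS is terminated at the ingress, and ca.crt is only populated when the issuer provides a CA, which ACME/Let's Encrypt issuers typically do not.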
I noticed this:
$ kubectl describe certificate iot-mysmartliving -n mqtt
...
Status:
Conditions:
...
Message: Certificate issuance in progress. Temporary certificate issued.
and a related line in the docs:
https://docs.cert-manager.io/en/latest/tasks/issuing-certificates/index.html?highlight=gce#temporary-certificates-whilst-issuing
They explain that the existing files are a temporary certificate generated for compatibility reasons, but it is not valid until the issuer has done its work.
So that suggests that the issuer is not properly set up.
Edit: yes, that was the case. The DNS challenge was failing; the debug command that helped was
kubectl describe challenge --all-namespaces=true
More generally,
kubectl describe clusterissuer,certificate,order,challenge --all-namespaces=true
According to the documentation, cafile is for something else (trusted root certificates), and it would probably be more correct to use capath /etc/ssl/certs on most systems.
You can follow this guide if you are on the Windows operating system: tls.
The article is about how to enable Mosquitto and its clients to use the TLS protocol.
Establishing a secure TLS connection to the Mosquitto broker requires key and certificate files. Creating all these files with the correct settings is not the easiest thing, but it is rewarded with a secure way to communicate with the MQTT broker.
If you want to use TLS certificates generated with the Let's Encrypt service, be aware that current versions of Mosquitto never update listener settings while running, so when you regenerate the server certificates you will need to completely restart the broker.
If you use DigitalOcean Kubernetes, try following this guide: ca-ninx. You can use cert-manager and the ingress-nginx controller; together they work like certbot.
Another solution is to create the certificate locally on your machine, upload it to a Kubernetes secret, and reference that secret in the ingress.

Let's Encrypt, Kubernetes and Traefik on GKE

I am trying to set up Traefik on Kubernetes with Let's Encrypt enabled. Yesterday I managed to retrieve the first SSL certificate from Let's Encrypt, but I am a little stuck on how to store the SSL certificates.
I am able to create a volume to store the Traefik certificates, but that would limit me to a single replica (with multiple replicas I am unable to retrieve a certificate, since validation fails most of the time because the volume is not shared).
I read that Traefik can use something like Consul, but I am wondering whether I have to set up and run a complete Consul cluster just to store the fetched certificates.
You can store the certificate in a Kubernetes secret and reference this secret in your ingress:
spec:
  tls:
  - secretName: testsecret
The secret has to be in the same namespace as the ingress.
See also https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress
You can set up the ingress with a controller and request the SSL certificate from Let's Encrypt.
You can use a ClusterIssuer to manage the SSL certificates and reference the resulting TLS secret on the ingress (a sketch follows the link below). You can also use a different ingress controller such as nginx, or a service mesh such as Istio.
For more details you can check : https://docs.traefik.io/user-guide/kubernetes/
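A minimal sketch of such a ClusterIssuer, assuming cert-manager is installed and Traefik serves the HTTP-01 challenge (the email and class values are illustrative); the issued certificate lands in a Kubernetes secret referenced by the Ingress, so nothing has to be shared between Traefik replicas:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com               # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # where the ACME account key is stored
    solvers:
    - http01:
        ingress:
          class: traefik                   # challenge is served through Traefik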

HTTPS endpoints for local kubernetes backend service addresses, after SSL termination

I have a k8s cluster that sits behind a load balancer. The request for myapisite.com passes through the LB and is routed by k8s to the proper deployment, getting the SSL cert from the k8s load balancer ingress, which then routes to the service ingress, like so:
spec:
  rules:
  - host: myapisite.com
    http:
      paths:
      - backend:
          serviceName: ingress-605582265bdcdcee247c11ee5801957d
          servicePort: 80
        path: /
  tls:
  - hosts:
    - myapisite.com
    secretName: myapisitecert
status:
  loadBalancer: {}
So myapisite.com is served over HTTPS correctly.
My problem is that, while maintaining the above setup (if possible), I need to be able to go to my local service endpoints within the same namespace on HTTPS, i.e. from another pod I should be able to curl or wget the following without a cert error:
https://myapisite.namespace.svc.cluster.local
Even if I were interested in not terminating SSL until the pod level, creating a SAN entry on the cert for a .local address is not an option, so that solution is not viable.
Is there some simple way I'm missing to make all local DNS trusted in k8s? Or some other solution here that's hopefully not a reinvention of the wheel? I am using kubernetes version 1.11 with CoreDNS.
Thanks, and sorry in advance if this is a dumb question.
If your application can listen on both HTTP and HTTPS, you can configure both, meaning you will be able to access it via either protocol as you prefer. Now, how you create and distribute the certificate is a different story that you must solve on your own (probably by using your own CA and storing the cert/key in a secret, as sketched below), unless you want to use something like Istio and its mutual TLS support to secure traffic between services.
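One way to do the "own CA" part with cert-manager, as a sketch with hypothetical names: a CA-backed Issuer signs a Certificate whose dnsNames include the in-cluster service name, the resulting secret is mounted by the pod serving HTTPS, and client pods trust the CA's ca.crt.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: internal-ca
  namespace: namespace
spec:
  ca:
    secretName: internal-ca-keypair        # pre-created secret holding the CA cert/key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapisite-internal
  namespace: namespace
spec:
  secretName: myapisite-internal-tls       # mounted by the pod that serves HTTPS
  dnsNames:
  - myapisite.namespace.svc.cluster.local  # internal name clients will use
  - myapisite.namespace.svc
  issuerRef:
    name: internal-ca
    kind: Issuer
Clients inside the cluster can then call https://myapisite.namespace.svc.cluster.local with the CA certificate passed as the trusted root (for example curl --cacert ca.crt) instead of disabling verification.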
While you describe what you want to achieve, we don't really know why. The reason behind this requirement might actually help in suggesting the best solution.

What is the recommended way to update SSL certs in a Nginx cluster behind HAProxy?

So I want to have this:
          / Nginx1 (SSL)
HAProxy --- Nginx2 (SSL)
          \ Nginx3 (SSL)
But I have questions:
How do I update Let's Encrypt certs on all nodes?
If I can't do this with certbot (plus some config), how do you do it? Maybe with some distributed k/v storage?
The best approach is to use HTTP-only services (not HTTPS) on the Nginx nodes and configure SSL on the balancer.
Options:
Traefik: can be configured to auto-update Let's Encrypt certs.
Fabio: can also be configured to use SSL certs (I've used HashiCorp Vault to store them), but you need to configure updates yourself.
Those two integrate well with service discovery tools like Consul.