Lightweight tools for monitoring and alerting on SSL certificates in infrastructure - ssl

I'm looking for tools to help me track and monitor the SSL certificates in my infrastructure. So far I've been looking at Healthchecks, cert-manager (though that one is meant for Kubernetes clusters) and Prometheus with node-exporter, but I'm wondering if there are simpler or better tools out there.
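For illustration, the kind of check I'm after boils down to something like this (a minimal sketch using only Python's standard library; the host names and alert threshold are placeholders):

```python
import datetime
import socket
import ssl

def days_until_expiry(host: str, port: int = 443) -> int:
    """Days until the certificate served by host:port expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.datetime.utcfromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]))
    return (expires - datetime.datetime.utcnow()).days

# Placeholder hosts and threshold -- adjust for your environment.
for host in ["example.internal", "example.org"]:
    remaining = days_until_expiry(host)
    status = "WARNING" if remaining < 14 else "OK"
    print(f"{status}: certificate for {host} expires in {remaining} days")
```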
Thanks for any suggestion.

Related

kubernetes traffic with own certificates

I have a Kubernetes cluster in a corporate environment where all HTTPS traffic is man-in-the-middled and the certificates are replaced with the company's own. Right now, all the applications running on the cluster get the company's certificates injected by rebuilding the Docker image or by mounting them from a secret and adding them to the local store. This is painful and makes it harder to just use public Helm charts and Docker images without modifying them.
For example, I'm running Jenkins on the cluster, which tries to install plugins from https://updates.jenkins-ci.org/. In my case this would normally fail with an SSL exception unless I add the certificates to the Jenkins keystore.
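Outside of Java the same per-application workaround looks roughly like this (a minimal Python sketch; the mounted CA bundle path is a hypothetical placeholder):

```python
import ssl
import urllib.request

# Hypothetical path where the company CA bundle is mounted from a secret.
CORPORATE_CA_BUNDLE = "/etc/corp-ca/ca-bundle.crt"

# Each application has to be told about the corporate CA explicitly;
# otherwise outbound HTTPS fails with a certificate verification error.
context = ssl.create_default_context(cafile=CORPORATE_CA_BUNDLE)

with urllib.request.urlopen("https://updates.jenkins-ci.org/", context=context) as resp:
    print(resp.status)
```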
I was wondering if there's a way to set this up at the cluster level, so that some component deals with this and the applications can then access the internet normally, without being aware of the whole certificate situation?
My thoughts were:
A cluster proxy pod, that all the applications then use.
Some ambassador container on each pod, that the apps connect to
I imagine I'm not the only one in this situation, but I couldn't find a generic solution for this.
You could have a look at Istio. It's a service mesh that uses sidecar proxies to (among other things) take over responsibility for encrypting traffic between applications.
The proxies use the concept of mutual TLS (mTLS), where all connections inside the mesh are encrypted out of the box. The applications themselves don't have to bother with certificates and can send messages in plain text.
Istio also provides a mechanism for migrating to mTLS, so you can include your applications in the mesh one by one, switch them to mTLS and retire your own certificate handling.
You can set everything up with your own PKI, so you're still using your company's certificates. You also get a bunch of other features like enhanced observability, canary deployments, on-the-fly token-based authentication/authorization and more.
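To illustrate what "plain text" means from the application's point of view, a call between two services in the mesh is just ordinary HTTP in the application code; the sidecar proxies handle the encryption. A small sketch (the service name and port are made-up placeholders):

```python
import urllib.request

# Plain HTTP from the application's perspective; with the sidecars injected
# and mTLS enabled in the mesh, the proxies upgrade this hop to mutual TLS.
with urllib.request.urlopen("http://orders.default.svc.cluster.local:8080/healthz") as resp:
    print(resp.status, resp.read()[:100])
```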

Certificate Management in Managed Kubernetes

We are trying to secure our AKS cluster by providing trusted CAs (SSL certificates) to the Kubernetes control plane.
A default API server certificate is issued while the cluster is created.
Is there any way we can embed trusted certificates into the control plane before provisioning the cluster?
For example, when we try to reach the Kubernetes API server it reports an SSL certificate issue.
To get rid of this we need to be able to add our organization's certificates to the API server.
When we create a cluster in the cloud (a managed Kubernetes cluster) we do not have access to the control plane nodes, so we are unable to configure the API server.
Could anyone please help me figure out how to add SSL certificates to the Kubernetes control plane?
When we create a cluster in the cloud (a managed Kubernetes cluster) we do not have access to the control plane nodes, so we are unable to configure the API server.
And that's the biggest inconvenience and pain point for everyone who wants anything other than out-of-the-box solutions...
My answer is no. Unfortunately you can't achieve this with AKS.
By the way, there is also some interesting info here: Self signed certificates used on management API. I'm copy-pasting it below for future reference, even though the answer doesn't help you.
You are correct that the normal PKI specification dictates the use of non-self-signed certificates for SSL transport. However, the reasons we do not currently support fully signed certificates are:
Kubernetes requires the ability to self-generate and sign certificates
Users injecting their own CA is known to be error prone in Kubernetes as a whole
We are aware of the desire to move away from self-signed certificates, however this requires upstream work to make it much more likely to succeed. The official documentation explains a lot of this, as well as the requirements:
https://kubernetes.io/docs/concepts/cluster-administration/certificates/
https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
https://kubernetes.io/docs/setup/best-practices/certificates/
Additionally, this post goes deeper into the issues around cert management:
https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/

Kubernetes cluster certificates rotation

I'm looking for some advice on the procedure for certificate rotation. I have been practicing installing a cluster from scratch with Kelsey Hightower's Kubernetes The Hard Way. It has been great for understanding the certificates needed to build trust between the components that form a Kubernetes cluster.
But consulting the official documentation about certificate rotation, I've only found this resource, which covers only the kubelet component.
I guess the idea of certificate rotation is to change all of the certificates involved: controller-manager, kube-proxy, scheduler, api-server, etc.
So, my questions are:
Are there any resources about the subject that you would recommend?
Is there an order I should follow when updating the components to minimize service disruption? I imagine there will be a period with communication problems, because some components will be using the old certificates while others will be using the new ones.
Say I back up the old certificates (create a copy in a different path) and replace the current files with newly generated certificates. Will I still need to restart the system units (or static pods / regular pods) that reference those certificates, or will the configuration be "hot" reloaded?
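For the last question, one way to check empirically whether a component picked up a replaced file is to compare the certificate it is actually serving with the one on disk. A minimal sketch (it assumes the cryptography package, and the port and file path are placeholders that depend on your layout):

```python
import ssl
from cryptography import x509

# Placeholders -- adjust for your control plane layout.
API_SERVER = ("127.0.0.1", 6443)
CERT_ON_DISK = "/var/lib/kubernetes/kubernetes.pem"

# Fetch the certificate the API server is currently serving
# (no verification, we only want to inspect it).
served = x509.load_pem_x509_certificate(
    ssl.get_server_certificate(API_SERVER).encode())

with open(CERT_ON_DISK, "rb") as f:
    on_disk = x509.load_pem_x509_certificate(f.read())

print("served  :", served.serial_number, served.not_valid_after)
print("on disk :", on_disk.serial_number, on_disk.not_valid_after)
print("reloaded:", served.serial_number == on_disk.serial_number)
```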
Thanks
That would be better managed by a sidecar proxy service such as Istio.
It offers certificate TTLs out of the box, 90 days by default.
The rotation is not automated though.
Using an external provider like Let's Encrypt can help (as described here).

Security in Azure Managed Kubernetes Service (AKS)

I am trying to get more documentation and a better understanding of security in Azure Managed Kubernetes Service (AKS).
Does Azure encrypt the containers deployed to the AKS cluster at rest? If so, how is data encryption achieved at rest and in motion?
What are the ways to achieve SSL/TLS in AKS, any documentation is appreciated.
Thanks in advance
I can definitely tell you TLS termination is supported in AKS. I've been able to implement this.
HTTPS Ingress on Azure Kubernetes Service (AKS)
This documentation is slightly out of date though. You should use cert-manager instead of KUBE-LEGO.
I would welcome a more authoritative answer, but as far as I have determined managed disks are always encrypted (https://azure.microsoft.com/en-us/blog/azure-managed-disks-sse/), but the worker nodes are not encrypted by default. It would be necessary to run az vm encryption enable on every node (quite a chore if you are scaling up and down!). If you do that you should be covered, though.
As for SSL/TLS, Kubernetes supports TLS for Ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress), but I haven't tested it in AKS. We are using our own Nginx server as a gateway, and with that approach you can use any TLS tutorial out there. We feel we have more control that way.

Openshift .kubeconfig file and certificate authentication

I have been messing around with OpenShift and reading as much documentation as I can. Yet the authentication performed by default (using the admin .kubeconfig) puzzles me.
1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask because the contents of the certificate/key files are not the same as in .kubeconfig.
2) .kubeconfig (AFAIK) is used to authenticate against a Kubernetes master. Yet in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig?
Kind regards, and thank you for your patience.
OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube).
So the client is really an extension of kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a kubectl setup. You can talk to an OpenShift cluster via kubectl, so the reverse seems fair.
The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
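A quick way to convince yourself of that is to decode the embedded data and compare it with the file on disk, for example (a minimal sketch; it assumes PyYAML is available, and the kubeconfig and certificate paths are placeholders):

```python
import base64
import yaml

KUBECONFIG = "/root/.kube/config"           # placeholder path
CERT_FILE = "/etc/origin/master/admin.crt"  # placeholder path

with open(KUBECONFIG) as f:
    config = yaml.safe_load(f)

# Pick the first user entry; real configs may contain several.
embedded = config["users"][0]["user"]["client-certificate-data"]
decoded = base64.b64decode(embedded)

with open(CERT_FILE, "rb") as f:
    on_disk = f.read()

print("identical:", decoded == on_disk)
```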