I have a Kubernetes cluster in a corporate environment, where all HTTPS traffic is man-in-the-middled and the certificates are replaced with the company's own. Right now, all the applications running on the cluster get the company's certificates injected by rebuilding the Docker image or by mounting them from a secret and adding them to the local trust store. This is painful and makes it harder to just use public Helm charts and Docker images without modifying them.
For example, I'm running Jenkins on the cluster, which tries to install plugins from https://updates.jenkins-ci.org/. This would normally fail in my case with an SSL exception, unless I add the certificates to the Jenkins keystore.
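For context, the per-application workaround currently looks roughly like this (a minimal sketch; the Secret name, key name and mount path are assumptions):

```yaml
# Sketch: mount a corporate root CA from a Secret into the container's
# system certificate directory. Secret name, key name and paths are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins
spec:
  containers:
    - name: jenkins
      image: jenkins/jenkins:lts
      volumeMounts:
        - name: corporate-ca
          mountPath: /etc/ssl/certs/corporate-ca.crt  # many distros read extra CAs from here
          subPath: corporate-ca.crt                   # assumed key inside the Secret
          readOnly: true
  volumes:
    - name: corporate-ca
      secret:
        secretName: corporate-ca                      # assumed Secret holding the company root CA
```

For a Java application like Jenkins this alone isn't enough, since the certificate still has to be imported into the JVM keystore, which is exactly the per-application fiddling I'd like to avoid.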
I was wondering if there's a way to set this up at the cluster level, so that some component deals with this and the applications can then access the internet normally, without being aware of the whole certificate situation.
My thoughts were:
A cluster-wide proxy pod that all the applications then use.
Some ambassador container in each pod that the apps connect to.
I would imagine I'm not the only one in this situation but couldn't find a generic solution for this.
You could have a look at Istio. It's a service mesh that uses sidecar proxies to (among other things) take over responsibility for encrypting traffic between applications.
The proxies use mutual TLS (mTLS), so all connections inside the mesh are encrypted out of the box. The applications themselves don't have to bother with certificates and can send messages in plain text.
Istio also provides a migration path to mTLS, so you can include your applications in the mesh one by one, switch them to mTLS, and drop your own certificate-handling overhead.
You can set everything up with your own PKI, so you're still using your company's certificates. You also get a bunch of other features like enhanced observability, canary deployments, on-the-fly token-based authentication/authorization, and more.
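For illustration, mesh-wide mTLS in Istio is typically enabled with a PeerAuthentication resource along these lines (a sketch; switching the mode to PERMISSIVE instead allows the gradual, service-by-service migration mentioned above):

```yaml
# Sketch: require mutual TLS for all workloads in the mesh by placing the
# policy in the Istio root namespace (istio-system by default).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```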
We are trying to secure our AKS cluster by providing trusted CAs (SSL certs) to the Kubernetes control plane.
The default API server certificate is issued while the cluster is created.
Is there any way that we can embed trusted certificates into the control plane before provisioning the cluster?
For example, when we try to reach the Kubernetes API server, it shows an SSL certificate issue.
To get rid of this we must be able to add our organization's certificates to the API server.
When we create a cluster in the cloud (managed Kubernetes cluster), we do not have access to the control plane nodes, so we won't be able to configure the API server.
Could anyone please help me figure out how to add SSL certs to the control plane of Kubernetes?
When we create a cluster in the cloud (managed Kubernetes cluster), we do not have access to the control plane nodes, so we won't be able to configure the API server.
And that's the biggest inconvenience and pain point for everyone who wants anything other than out-of-the-box solutions...
My answer is no. Unfortunately, you can't achieve this with AKS.
By the way, there is also some interesting info here: Self signed certificates used on management API. I'm copy-pasting it below for future reference, even though it doesn't solve your problem.
You are correct that the normal PKI specification dictates the use of non-self-signed certificates for SSL transport. However, the reasons we do not currently support fully signed certificates are:
Kubernetes requires the ability to self-generate and sign certificates
Users injecting their own CA is known to be error-prone in Kubernetes as a whole
We are aware of the desire to move away from self-signed certificates; however, this requires upstream work to make it much more likely to succeed. The official documentation explains a lot of this, as well as the requirements:
https://kubernetes.io/docs/concepts/cluster-administration/certificates/
https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
https://kubernetes.io/docs/setup/best-practices/certificates/
Additionally, this post goes deeper into the issues around cert management:
https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/
We are researching the best solution to allow customers to use their domain names with our hosting services. The hosting services are based on Google App Engine standard. The requirements are:
Customers can point their domain name to our server via CNAME or A record
Our server should be able to generate SSL certs for them automatically using Let's Encrypt
Our server should be able to handle custom SSL certs uploaded by customers
Should be robust and reliable when adding new customers (new confs, SSL certs etc.) into our servers
Should be scalable, and can handle a large number of custom domains and traffic (e.g. from 0 to 10000)
Minimum operation costs (the less time needed for maintaining the infrastructure, the better)
It seems Google Kubernetes Engine (formerly known as Google Container Engine) would be the direction to go. Is there a specific, proven way to set it up? Any suggestions/experiences sharing would be appreciated.
I would recommend going through this link to get started with setting up a GKE cluster.
For your purpose of SSL on GKE I would recommend creating an Ingress as specified in this link, which automatically creates a load balancer resource in GCP if you use the default GLBC ingress controller. The resulting LB's configuration (ports, host path rules, certificates, backend services, etc.) is defined by the configuration of the Ingress object itself. You can then point the domain name to the IP of the load balancer.
If you want to configure your Ingress (and consequently the resulting LB) to use certs created by Let's Encrypt, you would be modifying the configuration presented in the YAML of the Ingress.
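As an example, an Ingress for the default GLBC controller that terminates TLS with a certificate stored in a Secret could look like this (a sketch; host names, Secret name and backend Service are placeholders):

```yaml
# Sketch: an Ingress handled by the default GCE ingress controller (GLBC),
# which provisions a Google Cloud HTTP(S) Load Balancer from this definition.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-domains
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - customer1.example.com          # placeholder customer domain
      secretName: customer1-tls          # assumed Secret holding tls.crt/tls.key
  rules:
    - host: customer1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-backend        # assumed backend Service
                port:
                  number: 80
```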
Integrating Let's Encrypt with Kubernetes is possible using a service called cert-manager, which automates the process of obtaining TLS/SSL certificates and stores them inside Secrets.
This link shows how to use cert-manager with GKE.
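A minimal cert-manager setup along those lines might look like the following (a sketch, assuming cert-manager is already installed; the issuer name, email address and solver class are placeholders):

```yaml
# Sketch: a ClusterIssuer pointing at Let's Encrypt production. Ingresses
# annotated with cert-manager.io/cluster-issuer: letsencrypt-prod then get
# their certificates obtained, stored in Secrets and renewed automatically.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com                  # assumed contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key      # Secret for the ACME account key
    solvers:
      - http01:
          ingress:
            class: gce                        # assumed ingress class for the HTTP-01 solver
```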
If you want to use self-managed SSL certificates, please see this link for more information. GKE is scalable via the cluster autoscaler, which automatically resizes clusters based on the demands of the workloads you want to run.
Considering an Nginx reverse proxy handling TLS Let's Encrypt certificates "in front of" a backend service, what is a good deployment architecture for this setup on Kubernetes?
My first thought was to build a container with both Nginx and my server, deployed as a StatefulSet.
All those StatefulSets have access to a volume mounted on /etc/nginx/certificates.
All those containers run a cron job and are allowed to renew those certificates.
However, I do not think this is the best approach. This type of architecture is meant to be split up, not to run completely independent services everywhere.
Maybe I should run an independent proxy service which handles the certificates and redirects to the backend server deployment (Ingress + a Job for certificate renewal)?
If you are using a managed service (such as the GCP HTTPS Load Balancer), how do you issue a publicly trusted certificate and renew it?
You want kube-lego.
kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt
It works with GKE+LoadBalancer and with nginx-ingress as well. Usage is trivial: automatic certificate requests (including renewals), backed by Let's Encrypt.
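For instance, enabling it for an Ingress is just an annotation (a sketch using the older extensions/v1beta1 Ingress shape that kube-lego was built against; host, Secret and Service names are placeholders):

```yaml
# Sketch: kube-lego watches Ingresses carrying the tls-acme annotation and
# requests/renews Let's Encrypt certificates for the listed hosts.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - app.example.com                # placeholder hostname
      secretName: my-app-tls             # kube-lego stores the issued cert here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app        # assumed backend Service
              servicePort: 80
```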
The README says, perhaps tongue in cheek, that you need a non-production use case. I have been using it in production and have found it to be reliable enough.
(Full disclosure: I'm loosely associated with the authors but not paid to advertise the product)
In a scenario with thousands of web services, are there reasons to also use a signed cert for each microservice, or is it just going to add overhead? Services communicate over a VPC behind a firewall, while public endpoints sit behind a public-facing Nginx with a valid CA cert.
The services run on multiple servers on AWS.
From my limited experience, I believe that it is overkill. If an attacker has access to listen in or interact with your internal network then there are most likely other issues which you should be contending with.
This article on auth0.com explains the use of SSL only on connections to the external client. I share this view and believe implementing SSL at an individual service level would get extremely difficult unless you were running some form of proxy such as HAProxy or Nginx on each individual host, which is sub-optimal, especially if you're using a managed cluster like Kubernetes or Docker Swarm.
My current thought is that it's fine to run SSL just for your edge services, to lock down your AWS network using something like Scout2, and to run inter-service communication on your LAN unencrypted.
Unless all intranets in the cloud are fully VLAN-configured and isolated, is it possible for other hosts on the same LAN that you don't own to steal your password by running a simple tcpdump? If that's the case, we need SSL or other encryption internally in the cloud too.
I have been messing around with OpenShift and reading as much documentation as I can. Yet the authentication performed by default (using the admin .kubeconfig) puzzles me.
1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask this because the contents of the certificate/key files are not the same as in .kubeconfig.
2) .kubeconfig (AFAIK) is used to authenticate against a Kubernetes master. Yet in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig?
Kind regards, and thank you for your patience.
OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube).
So, the client is really an extension of Kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a Kubectl setup. You can talk to an OpenShift cluster via kubectl, so vice versa seems fair.
The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
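To illustrate, the inline and file-based forms of the same credentials in a .kubeconfig look roughly like this (a sketch; the user name, paths and truncated base64 values are placeholders):

```yaml
# Sketch of the "users" section of a .kubeconfig.
users:
  - name: admin
    user:
      # Inline variant: the certificate and key are embedded as base64,
      # so the whole .kubeconfig can be shipped around as a single file.
      client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...  # truncated placeholder
      client-key-data: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0t...          # truncated placeholder
      # File-based variant (use instead of the -data fields): reference
      # the same certificate and key on disk.
      # client-certificate: /path/to/admin.crt
      # client-key: /path/to/admin.key
```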