How can I allow API access to a GKE K8S cluster without modifying the HTTP client - ssl

I set up a k8s cluster on GKE.
I want to control it via the k8s REST API (so, looking at deployments, pods, and whatnot, but not accessing what is actually running on the cluster over SSL). I have gotten the appropriate bearer token (curl --insecure [request] works) and can make API requests. However, the cluster's SSL certificate isn't trusted by my client (it's Java, if that matters), and I can't easily modify the client to accept the new root cert at this time.
I have been digging around and have examined the following three options:
1. Incorporate the cluster's root CA cert into a cert chain that already exists in my client (from my limited understanding of TLS, I'm not sure this is possible).
2. Replace the cluster root CA cert (so that I can use something my client already has in its keystore). This appears to be possible with vanilla k8s, but not with GKE: "An internal Google service manages root keys for this CA, which are non-exportable."
3. Allow k8s API access without TLS. I haven't seen anything about this in the docs, which are pretty explicit that k8s API access over the network must use TLS.
Are any of these viable options? Or is my best choice to modify the client?

There is an article named "Access Clusters Using the Kubernetes API" (https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/) that addresses your concerns about how to query the REST API using a Java Client (https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#java-client)
If you are running the Java app inside a pod, you can import your cluster's CA into your Java trust store (https://docs.oracle.com/cd/E19509-01/820-3503/6nf1il6er/index.html). The CA certificate for your cluster is present inside every pod running in your cluster, at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt. More information in (https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-using-a-proxy)
Regarding your questions:
1.- Import your cluster's CA cert into your trust store.
2.- You can't set your own CA in GKE, but you can rotate the CA certificate if needed (https://cloud.google.com/kubernetes-engine/docs/how-to/credential-rotation)
3.- You can't deactivate TLS communication in GKE (https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-trust)
Your best option is to use the official Java client, or add the cluster CA to the trust store your current client uses.
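Since the client here runs outside the cluster, one place to get that CA without touching a pod is the kubeconfig that gcloud container clusters get-credentials writes: the certificate-authority-data field is the base64-encoded cluster CA, which you can decode and import into the client's trust store. A trimmed sketch of the relevant part (cluster name, endpoint and token are placeholders):

apiVersion: v1
kind: Config
clusters:
- name: my-gke-cluster
  cluster:
    server: https://<cluster-endpoint>
    # base64-encoded cluster CA; decode this and import it into the client's trust store
    certificate-authority-data: <base64-encoded CA cert>
users:
- name: my-gke-cluster
  user:
    token: <bearer token>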

Based on some other feedback (in a Slack), I ended up putting a proxy between my GKE cluster and my client. Then I can just add the GKE cluster's CA cert to the proxy's keystore (and don't have to modify the client). For my purposes, I didn't need to have the proxy use SSL, but for production I would.

Related

Enable https traffic to Kubernetes pod with internal MTLS auth on EKS Fargate

I'm building a service that requires PKI mTLS X509Certificate authentication.
So I have an AWS ACM Private CA that issues private client certificates to identify the client, and a regular ACM-issued certificate to identify the server.
For the mTLS authentication I use Spring Security (Java), which requires a trust store containing the private root CA certificate for authenticating clients, as well as a PKCS#12 key store to enforce SSL (for the client to authenticate the server).
Everything works fine when I run it locally using SSL.
Before I enabled SSL in the application, everything worked fine in the cluster as well.
However, when I added the mTLS logic to the application, connections to the application in the cluster hang.
I'm guessing that I need to configure HTTPS for my service/ingress in the cluster, but everything I find specifies an ARN for the certificate to be used, while I already have it installed in the application.
All I want to do is to allow https traffic to pass through the load balancer into my application and let the application handle the SSL stuff.
Alternatively, would it be possible to configure X509Certificate authentication in Spring Security without the SSL certificate the client uses to verify the server?
In that case the SSL certificate would only be used in production and not locally.
Would that be possible, and what are the pros and cons of each approach?
So in the end I ended up using the nginx ingress controller with ssl-passthrough.
However, EKS Fargate pods do not support the nginx ingress controller, so I had to create a new managed cluster.
Technically I could have used a mixed cluster with both managed and Fargate nodes, but I felt that Fargate had given me enough headaches, and when I did some calculations I found that Fargate would probably cost more in our case.
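For reference, with the nginx ingress controller this passthrough is enabled per Ingress via an annotation (and the controller itself must be started with --enable-ssl-passthrough). A minimal sketch, where the host, port and service names are placeholders:

# Sketch: pass TLS straight through the ingress to the pod, which then
# terminates mTLS itself (Spring Security in this case). Requires the nginx
# ingress controller to run with --enable-ssl-passthrough.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-service
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mtls-service
            port:
              number: 8443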

Kubernetes cert-manager GoDaddy

I'm trying to apply SSL to my Kubernetes clusters (production & staging environments), but for now only on staging. I successfully installed cert-manager, and since I have 5 subdomains I want to use wildcards, so I want to configure it with dns01. The problem is, we use GoDaddy for DNS management, but it's currently not supported (I think) by cert-manager. There is an issue (https://github.com/jetstack/cert-manager/issues/1083) and also a PR to support this, but since there is not a lot of activity on the subject, I was wondering if there is a workaround to use GoDaddy with cert-manager. I want to use ACME so I can use Let's Encrypt for certificates.
I'm fairly new to kubernetes, so if I missed something let me know.
Is it possible to use Let's Encrypt with other types of issuers than ACME? Is there any other way to use GoDaddy DNS & Let's Encrypt with Kubernetes?
For now I don't have any Ingresses, only 2 externally facing services: one frontend and one API gateway, both exposed as LoadBalancer services.
Thanks in advance!
Yes, you can definitely use cert-manager with k8s, and Let's Encrypt is also a nice way to manage the certificates.
ACME has different API URLs for registering domains; from there you can also get a wildcard (*) SSL certificate for a domain.
In simple terms: install cert-manager, use the nginx ingress controller, and you will be done with it. You have to add the TLS cert and define it on the Ingress object.
You can refer to this tutorial for the setup of cert-manager and the nginx ingress controller:
https://docs.cert-manager.io/en/venafi/tutorials/quick-start/index.html
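To make "define it on the Ingress object" concrete, the usual pattern is to reference an issuer via an annotation and let cert-manager create and renew the TLS secret. A minimal sketch; the issuer name, host and service are placeholders, and the annotation key assumes a reasonably recent cert-manager:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    # cert-manager watches this annotation and provisions the secret named below
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.staging.example.com
    secretName: app-staging-tls   # created and renewed by cert-manager
  rules:
  - host: app.staging.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80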
If you are looking to connect publicly-trusted CAs to Kubernetes via cert-manager (such as GlobalSign, DigiCert, Entrust), you can use Venafi Cloud as an issuer with cert-manager to automate certificate renewals for Kubernetes.
Venafi Cloud connects to third-party CAs and is integrated with cert-manager. Venafi Cloud also has a built-in certification authority for privately trusted certificates for internal-facing infrastructure such as containers.
Here are the relevant links to get this set up:
https://cert-manager.io/docs/configuration/venafi/#creating-a-venafi-cloud-issuer
https://www.venafi.com/venaficloud
The accepted solution does work -- a different issuer is one way to go. Though if you want to use the ACME issuer, you'll need to solve challenges, which can be done via either an HTTP01 solver or a DNS01 solver. If you choose to go with the DNS01 solver, you'll need either:
to move your DNS hosting from GoDaddy to one of the supported providers,
or to try using this GoDaddy webhook provider, which you may already be aware of, though I can't guarantee that the project is in working order.
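If you do go the webhook route, cert-manager has a generic dns01 webhook solver on its (Cluster)Issuer. A sketch of the shape only; the groupName, solverName and config keys are assumptions that must match whatever the GoDaddy webhook deployment you install actually expects:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint; switch to the production URL once it works
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
    - dns01:
        webhook:
          groupName: acme.example.com   # placeholder, set when deploying the webhook
          solverName: godaddy           # placeholder, defined by the webhook project
          config:
            apiKeySecretRef:            # placeholder config shape
              name: godaddy-api-key
              key: token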

Securing Kubernetes Service with TLS

I have an application that is internal and exposed only to other applications on the cluster, via a service with a cluster IP. Other services access this application via its DNS name (serviceName.namespace.svc.cluster.local). This application handles sensitive data, so although all the communication is inside the cluster, I would like to use TLS to secure the communication to this application.
My question is - how can I enable TLS on a service? Does something for this already exist, or should I handle it in the application code? Also, is there already a CA on the cluster that can sign certificates for .svc.cluster.local?
To clarify, I know I can use ingress for this purpose. The only problem is keeping this service internal only - so only services inside the cluster will be able to access it.
Thanks,
Omer
I just found that the Kubernetes API can be used to generate a certificate that will be trusted by all the pods running on the cluster. This option might be simpler than the alternatives. You can find the documentation here, including the full flow of generating a certificate and using it.
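Roughly, that flow is: generate a private key and CSR yourself, submit it as a CertificateSigningRequest object, approve it, and read the signed certificate back from the object's status. A sketch of the object; whether a built-in signer actually signs it with the cluster CA depends on your Kubernetes version and configuration, so the signerName below is a placeholder:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  # base64-encoded PEM CSR generated beforehand (e.g. with openssl/cfssl)
  # for my-svc.my-namespace.svc.cluster.local
  request: <base64-encoded CSR>
  signerName: example.com/serving   # placeholder; must match a signer your cluster serves
  usages:
  - digital signature
  - key encipherment
  - server auth

After kubectl certificate approve my-svc.my-namespace, the issued certificate shows up in the object's status.certificate field.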
Following @vonc's comments below, I think I have a solution:
Purchase a public valid domain for this service (e.g. something.mycompany.com).
Use CoreDNS to add an override rule so all requests to something.mycompany.com go to something.namespace.svc.cluster.local, as the service is not exposed externally (for my use case this could also be done with a normal A record); see the sketch below.
Use Nginx or something else to handle TLS with the certificate for something.mycompany.com.
This sounds pretty complicated but might work. What do you think?
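For the CoreDNS step above, the override can be expressed with the rewrite plugin in the Corefile (kube-system/coredns ConfigMap). A trimmed sketch using the example names from the list; a real Corefile will carry more plugins than shown here:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # answer queries for the public name with the in-cluster service name
        rewrite name something.mycompany.com something.namespace.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }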
Check if the tutorial "Secure Kubernetes Services with Ingress, TLS and LetsEncrypt" could apply to you:
Ingress can be backed by different implementations through the use of different Ingress Controllers. The most popular of these is the Nginx Ingress Controller, however there are other options available such as Traefik, Rancher, HAProxy, etc. Each controller should support a basic configuration, but can even expose other features (e.g. rewrite rules, auth modes) via annotations.
Give it a domain name and enable TLS. LetsEncrypt is a free TLS certificate authority, and using the kube-lego controller we can automatically request and renew LetsEncrypt certificates for public domain names, simply by adding a few lines to our Ingress definition!
In order for this to work correctly, a public domain name is required and should have an A record pointing to the external IP of the Nginx service.
For limiting to inside the cluster domain though (svc.cluster.local), you might need CoreDNS.
On Google Cloud you can make a load balancer service internal, like this:
annotations = {
  "cloud.google.com/load-balancer-type" = "Internal"
}
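The same annotation on the Service manifest itself (newer GKE versions use networking.gke.io/load-balancer-type instead, so check the current docs) would look roughly like this; name, selector and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: something
  annotations:
    # provisions an internal (VPC-only) TCP load balancer on GKE
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: something
  ports:
  - port: 443
    targetPort: 8443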

How to use the existing certificates in Kubernetes cluster

I have a few questions regarding importing existing certificates.
How are certificates used internally in Kubernetes (e.g. between the API server and workers, master components, etc.)?
Is there a CA in Kubernetes? (How) does it generate certificates for internal use?
What certificates are required at each layer?
Certificates in Kubernetes are primarily used to secure communication from and to the API server. Taken from the official Kubernetes documentation:
Every Kubernetes cluster has a cluster root Certificate Authority (CA). The CA is generally used by cluster components to validate the API server's certificate, by the API server to validate kubelet client certificates, etc. To support this, the CA certificate bundle is distributed to every node in the cluster and is distributed as a secret attached to default service accounts. Optionally, your workloads can use this CA to establish trust. Your application can request a certificate signing using the certificates.k8s.io API using a protocol that is similar to the ACME draft.
When creating a cluster with kubeadm, the tool first creates a CA in /etc/kubernetes/pki and signs all subsequent certificates with its private key. The CA is later distributed to all nodes for verification, and is also found base64-encoded in /etc/kubernetes/admin.conf, where it is used to verify the API server via kubectl.
It is possible to use your own CA for cluster creation by placing it and its private key as ca.crt and ca.key in /etc/kubernetes/pki (or any folder later specified with --cert-dir) before invoking kubeadm init.
There are many other ways to install Kubernetes, but they all essentially create a CA before any actual Kubernetes code runs, or require one to exist beforehand.
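As a sketch of that flow: place your ca.crt and ca.key in the certificate directory and point kubeadm at it, and it will sign all component certificates with your CA. The kubeadm API version below is an assumption that depends on your kubeadm release:

# kubeadm.yaml -- run with: kubeadm init --config kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# directory that already contains your own ca.crt and ca.key
certificatesDir: /etc/kubernetes/pki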

How to use Kubernetes SSL certificates

I am trying to build an HTTPS proxy server in front of another service in Kubernetes, using either an Nginx proxy LoadBalancer service or an Ingress. Either way, I need a certificate and key so that my external requests get authenticated.
I'm looking at how to manage TLS in a cluster, and I've noticed that the certificate used to connect to the container cluster is the same one that is mounted at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt on a running pod.
So I'm thinking that my node cluster already has a registered certificate, and all I need is the key: throw it into a secret and mount that into my proxy server. But I can't find how.
Is it this simple? How would I do that? Or do I need to create a new certificate, sign it, etc.? Would I then need to replace the current certificate?
If you want an external request to get into your K8s cluster, then this is the job of an ingress controller, or of configuring the service with a LoadBalancer, if your cloud provider supports it.
The certificate discussed in your reference is really meant to be used for intra-cluster communications, as it says:
Every Kubernetes cluster has a cluster root Certificate Authority (CA). The CA is generally used by cluster components to validate the API server’s certificate, by the API server to validate kubelet client certificates, etc.
If you go for an ingress approach, then here is the doc for TLS. At the bottom there is a list of alternatives, such as the load balancer approach.
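To make the Ingress/TLS route concrete: you provision your own certificate and key (rather than reusing the cluster CA material), store them in a kubernetes.io/tls secret, and reference that secret from the Ingress. A sketch with placeholder names, host and port:

apiVersion: v1
kind: Secret
metadata:
  name: proxy-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-proxy
spec:
  tls:
  - hosts:
    - proxy.example.com
    secretName: proxy-tls   # the secret above; the ingress controller terminates TLS with it
  rules:
  - host: proxy.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8080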
I guess you could use the internal certificate externally if you are able to get all your external clients to trust it. Personally I'd probably use kube-lego, which automates getting certificates from Let's Encrypt, since most browsers trust this CA now.
Hope this helps