How to use Kubernetes SSL certificates

I am trying to build an HTTPS proxy server in front of another service in Kubernetes, using either an NGINX proxy exposed as a LoadBalancer Service, or an Ingress. Either way, I need a certificate and key so that my external requests are authenticated.
I'm looking at how to manage TLS in a cluster, and I've noticed that the certificate used to connect to the container cluster is the same one that is mounted at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt on a running pod.
So I'm thinking that my cluster already has a registered certificate; all I need is the key, then I can put both into a Secret and mount that into my proxy server. But I can't find out how to do that.
Is it this simple? How would I do that? Or do I need to create a new certificate, sign it, and so on? Would I then need to replace the current certificate?
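For what it's worth, once you do have a certificate and key of your own, getting them into the proxy is the easy part. A minimal sketch, assuming the cert and key already exist on disk (file and secret names are placeholders):

    # Create a TLS secret from an existing certificate/key pair
    kubectl create secret tls proxy-tls --cert=tls.crt --key=tls.key
    # The secret can then be mounted into the proxy pod as a volume,
    # or referenced from an Ingress "tls" section.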

If you want an external request to get into your K8s cluster, that is the job of an ingress controller, or of configuring the Service with a load balancer, if your cloud provider supports it.
The certificate discussed in your reference is really meant to be used for intra-cluster communications, as it says:
Every Kubernetes cluster has a cluster root Certificate Authority (CA). The CA is generally used by cluster components to validate the API server’s certificate, by the API server to validate kubelet client certificates, etc.
If you go for an ingress approach, then here is the doc for TLS. At the bottom there is a list of alternatives, such as the load balancer approach.
I guess you could use the internal certificate externally if you are able to get all your external clients to trust it. Personally I'd probably use kube-lego, which automates getting certificates from Let's Encrypt, since most browsers trust this CA now.
Hope this helps
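To make the ingress route concrete, here is a minimal sketch of an Ingress terminating TLS with a secret (the host, secret and service names are placeholders, not taken from your setup):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: proxy-ingress
    spec:
      tls:
      - hosts:
        - example.yourdomain.com
        secretName: proxy-tls          # kubernetes.io/tls secret holding cert + key
      rules:
      - host: example.yourdomain.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-service  # the service you are proxying to
                port:
                  number: 80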

Related

Enable https traffic to Kubernetes pod with internal MTLS auth on EKS Fargate

I'm building a service that requires PKI mTLS X509Certificate authentication.
So I have an AWS ACM Private CA that issues private client certificates to identify the client, and a regular ACM-issued certificate to identify the server.
For the MTLS authentication I use Spring security (Java), which requires a trust store containing the private root CA certificate for authenticating clients as well as a PKCS#12 key store to enforce SSL (for the client to authenticate the server).
Everything works fine when I run it locally using SSL.
Before I enabled SSL in the application, everything worked fine in the cluster as well.
However, when I added mTLS logic to the application, the connection hangs when talking to the application in the cluster.
I'm guessing that I need to configure HTTPS for my service/ingress in the cluster, but everything I find specifies an ARN for the certificate to be used, while I already have it installed in the application.
All I want to do is to allow https traffic to pass through the load balancer into my application and let the application handle the SSL stuff.
Alternatively, would it be possible to configure X509Certificate authentication in Spring Security without the SSL certificate that the client uses to verify the server?
In that case the SSL certificate would only be used in production and not locally.
Would that be possible, and what are the pros and cons of each approach?
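For context, the server-side mTLS setup described above typically boils down to Spring Boot configuration along these lines (a hedged sketch; the store paths, passwords and types are placeholders, not the asker's actual values):

    # application.yml
    server:
      ssl:
        enabled: true
        key-store: classpath:server-keystore.p12     # PKCS#12 key store identifying the server
        key-store-password: changeit
        key-store-type: PKCS12
        trust-store: classpath:truststore.p12        # contains the private root CA used to verify clients
        trust-store-password: changeit
        client-auth: need                            # require a client certificate (mTLS)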
So in the end I ended up using the nginx ingress controller with ssl-passthrough.
However, EKS Fargate pods do not support the nginx ingress controller, so I had to create a new managed cluster.
Technically I could have used a mixed cluster with both managed and Fargate nodes, but I felt that Fargate had given me enough headaches, and when I did some calculations I found that Fargate would probably cost more in our case.
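For anyone taking the same route, the passthrough configuration looks roughly like this (a sketch; the host and service names are placeholders, and the ingress-nginx controller must be started with the --enable-ssl-passthrough flag for the annotation to take effect):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mtls-passthrough
      annotations:
        # Pass the raw TLS stream to the pod so the Spring app terminates TLS and enforces mTLS itself
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    spec:
      ingressClassName: nginx
      rules:
      - host: api.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: spring-service
                port:
                  number: 8443

With passthrough, NGINX routes on SNI and forwards the raw TLS stream, so the application still performs both the server-side TLS and the client-certificate check on its own.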

How can I allow API access to a GKE K8S cluster without modifying the HTTP client

I set up a k8s cluster on GKE.
I want to control it via the k8s REST API (so, looking at deployments and pods and whatnot, but not accessing what is actually running on the k8s cluster over SSL). I have gotten the appropriate bearer token (curl --insecure [request] works) and can make API requests. However, the SSL certificate isn't valid for my client (it's Java, if that matters). I can't easily modify the client to accept the new root cert at this time.
I have been digging around and have examined the following three options:
incorporate the cluster's root CA cert into a cert chain that already exists in my client (from my limited understanding of TLS, I'm not sure this is possible).
replace the cluster root CA cert (so that I can use something my client already has in its keystore). The docs imply you can do this with vanilla k8s, but that you cannot with GKE: "An internal Google service manages root keys for this CA, which are non-exportable."
allow k8s API access without TLS. I haven't seen anything about this in the docs, which are pretty explicit that k8s API access over the network must use TLS.
Are any of these viable options? Or is my best choice to modify the client?
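For reference, the --insecure workaround mentioned above can usually be replaced by pointing the client at the cluster CA explicitly; a sketch with placeholder values:

    # Call the API server using the cluster CA instead of --insecure
    APISERVER=https://<your-cluster-endpoint>
    TOKEN=<your-bearer-token>
    curl --cacert ca.crt \
      -H "Authorization: Bearer ${TOKEN}" \
      "${APISERVER}/api/v1/namespaces/default/pods"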
There is an article named "Access Clusters Using the Kubernetes API" (https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/) that addresses your concerns about how to query the REST API using a Java Client (https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#java-client)
If you are running the Java app inside a pod, you can import your cluster's CA into your Java trust store (https://docs.oracle.com/cd/E19509-01/820-3503/6nf1il6er/index.html). The CA certificate for your cluster is mounted in every pod running within your cluster at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt. More information at (https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-using-a-proxy)
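For example, importing the mounted cluster CA into a trust store with keytool (a sketch; the store path, alias and password are placeholders):

    keytool -importcert \
      -file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
      -alias k8s-cluster-ca \
      -keystore /path/to/truststore.jks \
      -storepass changeit -noprompt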
Regarding your questions:
1.- Import your cluster's CA cert into your trust store.
2.- You can't set your own CA in GKE, but you can rotate the CA certificate if needed (https://cloud.google.com/kubernetes-engine/docs/how-to/credential-rotation)
3.- You can't deactivate TLS communication in GKE (https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-trust)
Your best option is to use the official Java client, or add the cluster CA to the trust store your current client already uses.
Based on some other feedback (in a Slack), I ended up putting a proxy between my GKE cluster and my client. Then I can just add the GKE cluster's k8s CA cert to the proxy's keystore (and don't have to modify the client). For my purposes, I didn't need to have the proxy use SSL, but for production I would.
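For a quick local variant of the same idea (the answer above doesn't say which proxy was used, so this is only an assumption): kubectl proxy terminates TLS and authentication towards the API server and exposes it over plain HTTP on localhost, so the Java client needs no trust-store changes at all.

    # Serve the cluster API on localhost over plain HTTP (port is arbitrary)
    kubectl proxy --port=8001 &
    # The client can now call the API without TLS or bearer tokens
    curl http://127.0.0.1:8001/api/v1/namespaces/default/pods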

GCP managed SSL certificate stuck at FAILED_RATE_LIMITED?

I am trying to issue an SSL certificate for my load balancer, which serves traffic to an application hosted on a GKE cluster in the backend. I have reserved a static IP for the load balancer, and I am able to access the application over HTTP.
Early this week, I had managed to generate SSL certificates and attach them to the load balancer using GCP managed SSL certificates, and my domain was working over HTTPS. However, I had to delete the load balancer and launch it again using a Kubernetes Ingress. But now I am not able to apply the certificates to the load balancer. The certificates fail to provision with the error: FAILED_RATE_LIMITED.
Now I have deleted all the previously generated certificates on my account, and there is only a single certificate now. According to the error, I have exhausted the number of GCP managed SSL certificates on my account. But I have deleted all the previously generated certs and have waited almost 10-11 hours, and the issue still exists. Is there a solution to this?
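For reference, the setup described above is normally wired together like this (a sketch; the names, domain and static IP are placeholders), with the ManagedCertificate attached to the Ingress through an annotation and provisioned asynchronously:

    apiVersion: networking.gke.io/v1
    kind: ManagedCertificate
    metadata:
      name: my-managed-cert
    spec:
      domains:
      - example.yourdomain.com
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
      annotations:
        networking.gke.io/managed-certificates: my-managed-cert
        kubernetes.io/ingress.global-static-ip-name: my-static-ip
    spec:
      defaultBackend:
        service:
          name: my-service
          port:
            number: 80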

Kubernetes cert-manager GoDaddy

I'm trying to apply SSL to my Kubernetes clusters (production & staging environments), but for now only on staging. I successfully installed cert-manager, and since I have 5 subdomains, I want to use wildcards, so I want to configure it with dns01. The problem is, we use GoDaddy for DNS management, but it's currently not supported (I think) by cert-manager. There is an issue (https://github.com/jetstack/cert-manager/issues/1083) and also a PR to support this, but I was wondering if there is a workaround to use GoDaddy with cert-manager, since there is not a lot of activity on this subject? I want to use ACME so I can use Let's Encrypt for certificates.
I'm fairly new to kubernetes, so if I missed something let me know.
Is it possible to use Let's Encrypt with types of issuers other than ACME? Is there any other way I can use GoDaddy DNS & Let's Encrypt with Kubernetes?
For now I don't have any Ingresses, only 2 services that are external-facing: one frontend and one API gateway, both exposed as LoadBalancer Services.
Thanks in advance!
Yes, you can definitely use cert-manager with k8s, and Let's Encrypt is also a nice way to manage the certificates.
ACME has different API URLs for registering a domain, and from there you can also get a wildcard (*) SSL certificate for the domain.
In simple terms: install cert-manager, use the NGINX ingress controller, and you will be done with it. You have to add the TLS cert by defining it on the Ingress object.
You can refer to this tutorial for the setup of cert-manager and the NGINX ingress controller.
https://docs.cert-manager.io/en/venafi/tutorials/quick-start/index.html
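To make that concrete, a minimal sketch along the lines of that tutorial (names, email and host are placeholders; this uses the HTTP01 challenge, so it does not cover the wildcard/dns01 case from the question):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        email: you@example.com
        privateKeySecretRef:
          name: letsencrypt-staging-account-key
        solvers:
        - http01:
            ingress:
              class: nginx
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: frontend
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-staging
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - staging.example.com
        secretName: frontend-tls   # cert-manager creates and renews this secret
      rules:
      - host: staging.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80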
If you are looking to connect publicly-trusted CAs to Kubernetes via cert-manager (such as GlobalSign, DigiCert, Entrust), you can use Venafi Cloud as an issuer with cert-manager to automate certificate renewals for Kubernetes.
Venafi Cloud connects to third-party CAs and is integrated with cert-manager. Venafi Cloud also has a built-in certification authority for privately trusted certificates for internal-facing infrastructure such as containers.
Here are the links that are relevant to get this set up:
https://cert-manager.io/docs/configuration/venafi/#creating-a-venafi-cloud-issuer
https://www.venafi.com/venaficloud
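For completeness, a cert-manager Issuer backed by Venafi Cloud looks roughly like this (a sketch based on the first link above; the zone, secret name and key are placeholders):

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: venafi-cloud-issuer
    spec:
      venafi:
        zone: 'My Application\My Issuing Template'   # Venafi Cloud zone (application\template)
        cloud:
          apiTokenSecretRef:
            name: venafi-cloud-secret                # secret holding the Venafi Cloud API key
            key: apikey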
The accepted solution does work -- a different issuer is one way to go. Though if you want to use the ACME issuer, you'll need to solve challenges. This can be done via either an HTTP01 solver or a DNS01 solver. If you choose to go with the DNS01 solver, you'll either need:
to move your DNS hosting from GoDaddy to one of the supported providers (see the dns01 sketch after this list).
or you can try using this GoDaddy Webhook provider, which you may already be aware of. Though I can't guarantee that the project is in working status.
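As an illustration of the first option, a dns01 solver for a supported provider; Cloudflare is used here purely as an example, and all names and the email are placeholders:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-dns01
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: you@example.com
        privateKeySecretRef:
          name: letsencrypt-dns01-account-key
        solvers:
        - dns01:
            cloudflare:
              apiTokenSecretRef:
                name: cloudflare-api-token   # secret containing a scoped DNS API token
                key: api-token

With a dns01 solver in place, a Certificate resource (or an annotated Ingress) can request a wildcard name such as *.staging.example.com, which is what the question was after.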

How to keep the SSL server certificate for verification in Cloud Foundry/Heroku?

I am developing an app to run in Cloud Foundry.
The app makes constant connections to a web service using https protocol.
The web service uses a self-signed certificate and key pair created with openssl.
As there is no DNS setup, I am using the IP address as the Common Name (CN) in the SSL certificate.
However, the web service's IP address changes from time to time, so the SSL certificate has to be re-generated each time.
In order for the app to connect, it needs to trust the SSL certificate, so I have been packaging the public key for the web service's SSL cert as a file with my app.
The problem is that I have to re-upload the app to Cloud Foundry once the public key of the SSL cert changes.
Here are some possible solutions:
Register a host name in DNS. In that case, the certificate is only bound to the host name. (Might not be possible because of the budget.)
Create a private CA and issue certificates from it, then install the CA as a trusted CA on the client. It is feasible and a common approach for internal services. However, what if the app is pushed to CF? How can we configure the node for the certs?
Disable SSL server authentication. Not sure whether it would put the app at risk if the authentication is skipped. For the time being, the app pulls data from the web service.
I've been thinking of keeping the public key in the database. In that case, I wouldn't need to re-upload the app for a change to take effect. But I am not sure whether that is a safe approach.
Question
I am looking for a common and safe way to keep the SSL server cert in a Cloud Foundry environment. Are any of the above solutions viable? If not, are there any other CF-preferred ways?
Thank you
This is a bit old, but in case this helps...
Did you try generating your server SSL certificate with an arbitrary hostname (even "localhost")? Since you are uploading this certificate with your application (i.e. to "blindly" trust it), I think it could work, and it would remove the dependency on your IP address.
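If you go that route, the certificate can be bound to a fixed hostname rather than the changing IP, for example (a sketch; the hostname and validity period are placeholders, and -addext needs OpenSSL 1.1.1 or newer):

    # Self-signed certificate bound to a hostname instead of an IP address
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server.key -out server.crt \
      -subj "/CN=my-web-service.internal" \
      -addext "subjectAltName=DNS:my-web-service.internal"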