Kubernetes add ca certificate to pods' trust root - ssl

In my 10-machine bare-metal Kubernetes cluster, one service needs to call another HTTPS-based service that uses a self-signed certificate.
However, since this self-signed certificate is not added to the pods' trusted root CAs, the call fails with an error saying the x.509 certificate can't be validated.
All pods are based on Ubuntu Docker images. However, the usual way to add a CA cert to the trust list on Ubuntu (using dpkg-reconfigure ca-certificates) no longer works inside a pod. And even if I succeeded in adding the CA cert to the trust root of one pod, it would be gone as soon as another pod is started.
I searched the Kubernetes documentation and was surprised to find nothing except how to configure certs to talk to the API server, which is not what I'm looking for. This should be a fairly common scenario whenever a secure channel is needed between pods. Any ideas?

If you want to bake the cert in at build time, edit your Dockerfile, adding commands to copy the cert from the build context and update the trust store. You could even add this as a layer on top of an image from Docker Hub, etc.
COPY my-cert.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
If you're trying to update the trust store at runtime, things get more complicated. I haven't done this myself, but you might be able to create a ConfigMap containing the certificate, mount it into your container at the path above, and then use an entrypoint script that runs update-ca-certificates before your main process.
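For example, a minimal sketch of that entrypoint approach (the script name and the my-app command are assumptions; this presumes a Debian/Ubuntu base where update-ca-certificates exists and the container runs with enough privileges to write /etc/ssl/certs):
#!/bin/sh
# entrypoint.sh - refresh the trust store, then hand off to the real process
set -e
# picks up any *.crt mounted under /usr/local/share/ca-certificates/,
# e.g. a ConfigMap mounted at /usr/local/share/ca-certificates/my-cert.crt via subPath
update-ca-certificates
exec "$@"        # e.g. exec my-app
You would point the image's ENTRYPOINT at this script and pass the original command as its arguments.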

Update: see option 3 below.
I can think of 3 options to solve your issue if I were in your scenario:
Option 1.) (The only complete solution I can offer; my other options are unfortunately only partial solutions. Credit to Paras Patidar / the following site:)
https://medium.com/@paraspatidar/add-ssl-tls-certificate-or-pem-file-to-kubernetes-pod-s-trusted-root-ca-store-7bed5cd683d
1.) Add the certificate to a ConfigMap:
Let's say your PEM file is my-cert.pem:
kubectl -n <namespace-for-config-map-optional> create configmap ca-pemstore --from-file=my-cert.pem
2.) Mount the ConfigMap as a volume into the existing CA root location of the container:
Mount that ConfigMap's file as a one-to-one file relationship in a volume mount, for example as a file in the directory /etc/ssl/certs/:
apiVersion: v1
kind: Pod
metadata:
  name: cacheconnectsample
spec:
  containers:
    - name: cacheconnectsample
      image: cacheconnectsample:v1
      volumeMounts:
        - name: ca-pemstore
          mountPath: /etc/ssl/certs/my-cert.pem
          subPath: my-cert.pem
          readOnly: false
      ports:
        - containerPort: 80
      command: [ "dotnet" ]
      args: [ "cacheconnectsample.dll" ]
  volumes:
    - name: ca-pemstore
      configMap:
        name: ca-pemstore
So I believe the idea here is that /etc/ssl/certs/ is the location of the TLS certs trusted by pods, and the subPath method allows you to add a file without wiping out the existing contents of that folder.
If all pods share this mountPath, then you might be able to add a PodPreset and ConfigMap to every namespace, but PodPresets are in alpha and only helpful for static namespaces. (If that were true, though, all your pods would trust that cert.)
Option 2.) (A half solution/idea that doesn't exactly answer your question but may solve your problem. I'm fairly confident it will work in theory; it will require research on your part, but I think you'll find it's the best option:)
In theory you should be able to leverage cert-manager + external-dns + Let's Encrypt (free) + a public domain name to replace the self-signed cert with a publicly trusted cert.
(cert-manager's end result is an auto-generated Kubernetes TLS secret in your cluster signed by Let's Encrypt. It supports a dns01 challenge that can be used to prove you own the domain, which means you should be able to leverage that solution even without an ingress / even if the cluster is only reachable on a private network.)
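As a rough, hedged sketch of the cert-manager side (the Cloudflare DNS solver, secret names, email and domain are all assumptions; pick the dns01 solver that matches wherever your DNS is hosted):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                 # assumption: your contact address
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudflare:                      # assumption: DNS hosted at Cloudflare
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-service
spec:
  secretName: internal-service-tls         # resulting kubernetes.io/tls Secret
  dnsNames:
    - service.internal.example.com         # assumption: a public domain you control
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer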
Edit: Option 3.) (Added after gaining more hands-on experience with Kubernetes.)
I believe that switchboard.op's answer is probably the best and should be the accepted answer. This "can" be done at runtime, but I'd argue it should never be done at runtime: doing so is extremely hacky and full of edge cases, and there's no universal way of doing it.
It also turns out that my Option 1 is only half correct.
Mounting the ca.crt on the pod alone isn't enough. After the file is mounted you need to run a command to trust it, which means you probably need to override the pod's startup command. You can't, for example, just run the default startup command (say, connect to the database) and then also run the command that updates the trusted CA certs; you'd have to overwrite the default startup script with a hand-crafted one that first updates the trusted CA certs and then connects to the database. The bigger problem is that Ubuntu, RHEL, Alpine, and others have different locations where you have to mount the CA certs, and sometimes different commands to trust them, so a universal at-runtime solution that you could apply to all pods in the cluster isn't impossible, but it would require tons of if statements and mutating webhooks/complexity. (A hand-crafted per-pod solution is very possible, though, if you just need to dynamically update the cert for a single pod.)
switchboard.op's answer is the way I'd do it if I had to do it. Build a new custom Docker image with your custom CA cert baked into the image's trust store. This is a universal solution and greatly simplifies the YAML side, and it's relatively easy to do on the Docker image side.
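For example, a minimal Dockerfile sketch of such an image (base image and cert filename are assumptions; Debian/Ubuntu and Alpine both use /usr/local/share/ca-certificates/ plus update-ca-certificates, while RHEL-family images use /etc/pki/ca-trust/source/anchors/ plus update-ca-trust):
FROM ubuntu:22.04
# ca-certificates provides the update-ca-certificates tool and the base bundle
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# the .crt extension matters: update-ca-certificates only picks up *.crt files here
COPY my-cert.crt /usr/local/share/ca-certificates/my-cert.crt
RUN update-ca-certificates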

Just out of curiosity, here is an example of a manifest utilizing the init container approach.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  # in my case it is CloudFlare CA used to sign certificates for origin servers
  origin_ca_rsa_root.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    name: demo
spec:
  nodeSelector:
    kubernetes.io/os: linux
  initContainers:
    - name: init
      # image: ubuntu
      # command: ["/bin/sh", "-c"]
      # args: ["apt -qq update && apt -qq install -y ca-certificates && update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      # alternative image with preinstalled ca-certificates utilities
      image: grafana/alpine:3.15.4
      command: ["/bin/sh", "-c"]
      args: ["update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      volumeMounts:
        - name: demo
          # note - we need to change the extension to .crt here
          mountPath: /usr/local/share/ca-certificates/origin_ca_rsa_root.crt
          subPath: origin_ca_rsa_root.pem
          readOnly: false
        - name: tmp
          mountPath: /artifact
          readOnly: false
  containers:
    - name: demo
      # note - even though the init container is Alpine based and this one is Debian based, everything still works
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: tmp
          mountPath: /etc/ssl/certs
          readOnly: false
  volumes:
    - name: demo
      configMap:
        name: demo
    # will be used to pass files between init container and actual container
    - name: tmp
      emptyDir: {}
and its usage:
kubectl apply -f demo.yml
kubectl exec demo -c demo -- curl --resolve foo.bar.com:443:10.0.14.14 https://foo.bar.com/swagger/v1/swagger.json
kubectl delete -f demo.yml
notes:
replace foo.bar.com with your domain name
replace 10.0.14.14 with your ingress controller's cluster IP
you may want to add -vv flag to see more details
Indeed it is kind of ugly and monstrous, but at least it works and is a proof of concept. Workarounds with a simple ConfigMap do not work because curl reads ca-certificates.crt, which is not modified by that approach.

Related

How to do TLS between microservices in Kubernetes?

Sorry for my bad English but I don't know how to solve my problem.
So...
Introduction:
I have 2 microservices (I called them gRPCClient and gRPCServer, although it doesn't matter what exactly they are). They need to communicate via TLS. Without Kubernetes, everything is quite simple: I create my CA via cfssl in a Docker container, then I get the root certificate from the CA and put it into the trust store of my gRPC applications (I do this in the Dockerfile), so that any certificate signed by my CA passes validation.
Now Kubernetes enters the game. I'm playing locally with minikube. I create a local cluster with "minikube start" on a Mac (maybe this is important, I don't know...).
Problem:
How will this flow work with Kubernetes? As I understand it, there is already a CA inside Kubernetes (correct me if that's not so). I read many articles, but I really didn't understand how it all fits together. I tried the examples from this article: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
Step by step:
Create a signature request
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "192.0.2.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
The first thing I did not understand was the hosts. For example, is my-svc.my-namespace.svc.cluster.local the full name of my service (I mean the Service in Kubernetes, as in kind: Service)? Mine is in the namespace "dev" and its name is user-app-service. Should I specify user-app-service.dev.svc.cluster.local then, or just user-app-service? Or is there some kind of command to get the full name of the service? 192.0.2.24, as I understand it, is the IP of the service; is it mandatory to specify it, or is the name of the service enough? What if I have clusterIP: None set, so the service has no IP? my-pod.my-namespace.pod.cluster.local - should I specify this? If I have several pods, should I list them all? Then the problem is the dynamics, because pods are recreated, deleted and added, and I would need to send a new signing request each time. The same questions I asked about the service apply to the "my-pod" and "namespace" parts; is it possible to see the full name of the pod with all this data? 10.0.34.2 is the pod's IP, and the same question applies to the pod's IP.
I tried specifying the host and CN as my service name "user-app-service" (as if I were working without Kubernetes). I created a signing request and a key. Then I followed all the steps and created a CertificateSigningRequest object in Kubernetes:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Then I did all that and received a certificate.
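(The approve-and-fetch steps from that doc look roughly like this, assuming the CSR name above:)
kubectl certificate approve my-svc.my-namespace
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' | base64 --decode > server.crt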
Further, for security, I need to store the key and certificate in Secrets and then mount them into the container (for the purposes of the test, I just hard-coded them into the container in the Dockerfile); this is on the gRPC server side. I ran the deployment and created a client in Go, specifying config := &tls.Config{} in the code so that it would pull the trusted certificates from the system itself. I thought that Kubernetes has a CA, but I did not find in the docs how to get its certificate; I assumed Kubernetes adds it to all containers itself. But I got the error Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". How should all this work? Where can I get a CA certificate from Kubernetes? And do I then need to add it to each container by hand in the Dockerfile, or is that the wrong tactic and there is some kind of automation from Kubernetes?
I found another way: deploy cfssl (https://hub.docker.com/r/cfssl/cfssl/) on Kubernetes and work with it as if there were no Kubernetes (I have not tried this method yet).
How do I put all this together into a working system? Which options should I use and why? Maybe there are some complete articles on this. I wrote a lot, but I hope it's clear. I really need the help.
I am going to break down my answer into a couple of parts:
Kubernetes Services and DNS Discovery
In general, it is recommended to put a Service in front of a Deployment that manages pods in Kubernetes.
The Service creates a stable DNS and IP endpoint for pods that may be deleted and be assigned a different
IP address when recreated. DNS service discovery is automatically enabled with a ClusterIP type service and
is in the format: <service name>.<kubernetes namespace>.svc.<cluster domain> where cluster domain is usually
cluster.local. This means that we can use the autocreated DNS and assigned ClusterIP in our altnames for our
certificate.
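To double-check the exact FQDN in your own cluster (e.g. the user-app-service Service in the dev namespace from the question), you can resolve it from a throwaway pod; the busybox image tag here is just an assumption:
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup user-app-service.dev.svc.cluster.local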
Kubernetes Internal CA
Kubernetes does have an internal CA along with API methods to post CSRs and have those CSRs signed
by the CA; however, I would not use the internal CA for securing microservices. The internal CA is
primarily used by the kubelet and other internal cluster processes to authenticate to the Kubernetes
API server. There is no functionality for auto-renewal, and I think the cert will always be signed for 30 days.
Kubernetes-native Certificate Management
You can install and use cert-manager to have the cluster automatically create and manage certificates
for you using custom resources. They have excellent examples on their website so I would encourage you
to check that out if it is of interest. You should be able to use the CA Issuer Type and create
Certificate Resources that will create a certificate as a Kubernetes Secret. For the altnames, refer
to the below certificate generation steps in the manual section of my response.
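A hedged sketch of what those resources might look like (the Issuer name, namespace, and the ca-key-pair Secret holding your CA's key pair are assumptions):
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: internal-ca
  namespace: default
spec:
  ca:
    secretName: ca-key-pair                # Secret containing your CA's tls.crt/tls.key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpcserver
  namespace: default
spec:
  secretName: grpcserver-tls               # resulting kubernetes.io/tls Secret
  commonName: grpcserver.default.svc.cluster.local
  dnsNames:
    - grpcserver
    - grpcserver.default.svc
    - grpcserver.default.svc.cluster.local
  issuerRef:
    name: internal-ca
    kind: Issuer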
Manually Create and Deploy Certificates
You should be able to achieve the same result as your "without Kubernetes" approach using cfssl:
generate CA using cfssl
add CA as trusted in image (using your Dockerfile approach)
create Kubernetes Service (for example purposes I will use kubectl create)
$ kubectl create service clusterip grpcserver --tcp=8000
describe the created Kubernetes Service, note IP will most likely be different in your case
$ kubectl describe service/grpcserver
Name: grpcserver
Namespace: default
Labels: app=grpcserver
Annotations: <none>
Selector: app=grpcserver
Type: ClusterIP
IP: 10.108.125.158
Port: 8000 8000/TCP
TargetPort: 8000/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
generate a certificate for gRPCServer with a CN of grpcserver.default.svc.cluster.local and the following altnames:
grpcserver
grpcserver.default.svc
grpcserver.default.svc.cluster.local
10.108.125.158
generate the client certificate with cfssl
put both certificates into Secret objects
kubectl create secret tls server --cert=server.pem --key=server.key
kubectl create secret tls client --cert=client.pem --key=client.key
mount the secrets into the pod spec (a minimal sketch follows)
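A minimal sketch of that last step (image name and mount path are assumptions; the server would read tls.crt and tls.key from the mounted directory):
apiVersion: v1
kind: Pod
metadata:
  name: grpcserver
spec:
  containers:
    - name: grpcserver
      image: grpcserver:latest             # assumption: your server image
      volumeMounts:
        - name: server-tls
          mountPath: /etc/tls
          readOnly: true
  volumes:
    - name: server-tls
      secret:
        secretName: server                 # created by `kubectl create secret tls server ...`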
There is a lot of boilerplate work that you need to do with this bespoke approach. If you have the option, I would suggest exploring a service mesh such as Istio or Linkerd to secure communication between microservices using TLS in Kubernetes.

Hashicorp Consul - How to do verified TLS from Pods in Kubernetes cluster

I'm having some difficulty understanding Consul end-to-end TLS. For reference, I'm using Consul in Kubernetes (via the hashicorp/consul Helm chart). Only one datacenter and Kubernetes cluster - no external parties or concerns.
I have configured my override values.yaml file like so:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
All other values are as default from the shipped values.yaml file.
This works, and the Consul client logs suggest that all agents are connecting nicely using TLS, with the relevant certs and keys being created by (as I understand it) the auto-encryption feature of Consul.
What I don't understand is how to initiate a HTTPS connection from an application on Kubernetes, running in a Pod, to a Consul server. Since the Pod's container does not (presumably) have the Consul root CA cert in its trust store, all HTTPS calls fail, as per wget example below:
# Connect to Pod:
laptop$> kubectl exec -it my-pod sh
# Attempt valid HTTPS connection:
my-pod$> wget -q -O - https://consul.service.consul:8501
Connecting to consul.service.consul:8501 (10.110.1.131:8501)
ssl_client: consul.service.consul: certificate verification failed: unable to get local issuer certificate
wget: error getting response: Connection reset by peer
# Retry, but ignore certificate validity issues:
my-pod$> wget --no-check-certificate -q -O - https://consul.service.consul:8501/v1/status/leader
"10.110.1.131:8300"
How am I supposed to enforce end-to-end (verified) HTTPS connections from my apps on Kubernetes to Consul if the container does not recognize the certificate as valid?
Am I misunderstanding something about certificate propagation?
Many thanks - Aaron
Solved with thanks to Hashicorp on their Consul discussion forum.
Create a Kubernetes secret named consul with a key named CONSUL_GOSSIP_ENCRYPTION_KEY and an appropriate encryption key value.
Generate value using consul keygen
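For example (a sketch; it assumes the consul binary is available on your workstation):
kubectl create secret generic consul \
  --from-literal=CONSUL_GOSSIP_ENCRYPTION_KEY=$(consul keygen)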
Install the hashicorp/consul Helm chart with a values-override.yaml such as the one below:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
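The install itself would then look roughly like this (the repo alias and release name are assumptions):
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul -f values-override.yaml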
Create an example Pod spec to represent our application.
Ensure it mounts the Consul server CA cert secret.
Ensure the Pod’s container has HOST_IP exposed as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: test-pod
spec:
  volumes:
    - name: consul-consul-ca-cert
      secret:
        secretName: consul-consul-ca-cert
  hostNetwork: false
  containers:
    - name: consul-test-pod
      image: alpine
      imagePullPolicy: IfNotPresent
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 24h; done"]
      volumeMounts:
        - name: consul-consul-ca-cert
          mountPath: /consul/tls/ca
Upon creation of the Pod, kubectl exec into it, and ensure the ca-certificates and curl packages are installed (I’m using Alpine Linux in this example).
(curl is purely for testing purposes)
#> apk update
#> apk add ca-certificates curl
Copy the mounted Consul server CA certificate into the /usr/local/share/ca-certificates/ directory and execute update-ca-certificates to add it to the system root CA store.
#> cp /consul/tls/ca/tls.crt /usr/local/share/ca-certificates/consul-server-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul server is now accessible (and trusted) over HTTPS as below:
#> curl https://consul.service.consul:8501/v1/status/leader
## No TLS errors ##
We also want to talk to the Consul client (instead of the server) over HTTPS, for performance reasons.
Since the Consul client has its own CA cert, we need to retrieve that from the server.
This requires the consul-k8s binary, so we need to get that.
#> cd /usr/local/bin
#> wget https://releases.hashicorp.com/consul-k8s/0.15.0/consul-k8s_0.15.0_linux_amd64.zip # (or whatever latest version is)
#> unzip consul-k8s_0.15.0_linux_amd64.zip
#> rm consul-k8s_0.15.0_linux_amd64.zip
Get the Consul client CA cert and install it via update-ca-certificates :
#> consul-k8s get-consul-client-ca -server-addr consul.service.consul -server-port 8501 -output-file /usr/local/share/ca-certificates/consul-client-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul client is now accessible (and trusted) over HTTPS as below:
#> curl https://$HOST_IP:8501/v1/status/leader
## No TLS errors ##
We can also access the Consul KV service from the client without issue:
#> curl https://$HOST_IP:8501/v1/kv/foo/bar/baz
## No TLS errors ##
Naturally, all of the above should be automated by the implementer. These manual steps are purely for demonstration purposes.

Is it possible to use PodPresets in OpenShift 3.11 (3.7+)?

I've installed an OpenShift cluster for testing purposes, and since I'm behind a corporate network, I need to include some Root Certificates in any Pod that wants to make external requests. What can I do to inject those certificates automatically at Pod creation?
I'm running OpenShift Origin (OKD) 3.11 in a local CentOS 7 VM, with a GlusterFS storage provisioning on top of it. I already had multiple issues with the VM itself, which gave me errors when trying to access the network: x509: certificate signed by unknown authority. I fixed that by adding my corporation root certificates in /etc/pki/ca-trust/source/anchors and by running the update-ca-trust command.
When I was running for example the docker-registry deployment in the OpenShift cluster, since the created Pods didn't have access to the host root certificates, they gave again x509: certificate signed by unknown authority errors when trying to pull images from docker.io. I resolved that by creating a ConfigMap containing all needed root certificates, and mounting them in a volume on the registry deployment config.
I thought I only needed to mount a volume in every deployment config that wants to reach the external network. But then I provisioned a Jenkins instance and realised something new: when a pipeline runs, Jenkins creates a Pod with an adapted agent (for example, a Spring Boot app will need a Maven agent). Since I have no control over those created pods, they can't have the mounted volume with all the root certificates. So for instance I have a pipeline that runs helm init --client-only before releasing my app chart, and this command gives a x509: certificate signed by unknown authority error, because this pod doesn't have the root certificates.
x509 Error screenshot
I found that a PodPreset could be the perfect way to resolve my problem, but when I enable this feature in the cluster and create the PodPreset, newly created pods are not modified. I read in the OpenShift documentation that PodPresets are no longer supported as of 3.7, so I think that could be the reason it is not working.
OpenShift docs screenshot
Here is my PodPreset definition file:
kind: PodPreset
apiVersion: settings.k8s.io/v1alpha1
metadata:
  name: inject-certs
spec:
  selector: {}
  volumeMounts:
    - mountPath: /etc/ssl/certs/cert1.pem
      name: ca
      subPath: cert1.pem
    - mountPath: /etc/ssl/certs/cert2.pem
      name: ca
      subPath: cert2.pem
    - mountPath: /etc/ssl/certs/cert3.pem
      name: ca
      subPath: cert3.pem
    - mountPath: /etc/ssl/certs/cert4.pem
      name: ca
      subPath: cert4.pem
    - mountPath: /etc/ssl/certs/cert5.pem
      name: ca
      subPath: cert5.pem
    - mountPath: /etc/ssl/certs/cert6.pem
      name: ca
      subPath: cert6.pem
  volumes:
    - configMap:
        defaultMode: 420
        name: ca-pemstore
      name: ca
I don't know if there is any way to make PodPresets work on OpenShift 3.11, or if there is another solution to inject certs file like this in created pods. This would be really great.
The RedHat CoP on GitHub contains a project with a PodPreset admission webhook controller you can use:
https://github.com/redhat-cop/podpreset-webhook
Basically, you deploy that project and change the apiVersion in your PodPreset to apiVersion: redhatcop.redhat.io/v1alpha1.
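Applied to the PodPreset from the question, only the header changes; a sketch (the spec fields stay exactly as in the original definition):
kind: PodPreset
apiVersion: redhatcop.redhat.io/v1alpha1
metadata:
  name: inject-certs
spec:
  selector: {}
  volumeMounts:
    - mountPath: /etc/ssl/certs/cert1.pem
      name: ca
      subPath: cert1.pem
    # ... cert2.pem through cert6.pem as in the original definition ...
  volumes:
    - configMap:
        defaultMode: 420
        name: ca-pemstore
      name: ca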

helm: x509: certificate signed by unknown authority

I'm using Kubernetes and I recently updated my admin certs used in the kubeconfig. However, after I did that, all the helm commands fail thus:
Error: Get https://cluster.mysite.com/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: x509: certificate signed by unknown authority
kubectl works as expected:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-1-0-34.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
ip-10-1-1-51.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
ip-10-1-10-120.eu-central-1.compute.internal Ready <none> 42d v1.7.10+coreos.0
ip-10-1-10-135.eu-central-1.compute.internal Ready <none> 27d v1.7.10+coreos.0
ip-10-1-11-71.eu-central-1.compute.internal Ready <none> 42d v1.7.10+coreos.0
ip-10-1-12-199.eu-central-1.compute.internal Ready <none> 8d v1.7.10+coreos.0
ip-10-1-2-110.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
As far as I've been able to read, helm is supposed to use the same certificates as kubectl, which makes me curious as to why kubectl works but helm doesn't.
This is a production cluster with internal releases handled through helm charts, so it being solved is imperative.
Any hints would be greatly appreciated.
As a workaround you can try to disable certificate verification. Helm uses the kubeconfig file (by default ~/.kube/config). You can add insecure-skip-tls-verify: true to the cluster section:
clusters:
- cluster:
    server: https://cluster.mysite.com
    insecure-skip-tls-verify: true
  name: default
Did you already try to reinstall helm/tiller?
kubectl delete deployment tiller-deploy --namespace kube-system
helm init
Also check if you have configured an invalid certificate in the cluster configuration.
In my case, I was running a single self-managed cluster and the config file also contained certificate-authority-data, so following the above answer threw the error below:
Error: Kubernetes cluster unreachable: Get "https://XX.XX.85.154:6443/version?timeout=32s": x509: certificate is valid for 10.96.0.1, 172.31.25.161, not XX.XX.85.154
And my config was:
- cluster:
    certificate-authority-data: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    server: https://54.176.85.154:6443
    insecure-skip-tls-verify: true
So I had to remove the certificate-authority-data.
- cluster:
    server: https://54.176.85.154:6443
    insecure-skip-tls-verify: true
Use --insecure-skip-tls-verify to skip TLS verification via the command line:
helm repo add stable --insecure-skip-tls-verify https://charts.helm.sh/stable
In my case the error was caused by an untrusted certificate from the Helm repository.
Downloading the certificate and specifying it using the --ca-file option solved the issue (at least in Helm version 3).
helm repo add --ca-file /path/to/certificate.crt repoName https://example/repository
--ca-file string, verify certificates of HTTPS-enabled servers using this CA bundle
Adding the line below under the cluster entry in my /home/centos/.kube/config file fixed my issue:
insecure-skip-tls-verify: true
My config file now looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/centos/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Nov 2021 20:51:44 EDT
        provider: minikube.sigs.k8s.io
        version: v1.23.2
      name: cluster_info
    server: https://192.168.49.2:8443
    insecure-skip-tls-verify: true
  name: minikube
contexts:
I encountered an edge case for this. You can also get this error if you have multiple kubeconfig files referenced in the KUBECONFIG variable, and more than one file has clusters with the same name.
In my case, it was an old version of helm (v3.6.3); after I upgraded to helm v3.9.0 (brew upgrade helm), everything worked again.
Although adding the repo with --ca-file did the trick, when I then tried to download from that repo with the command below, I still got x509: certificate signed by unknown authority:
helm dependency update helm/myStuff
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "myRepo" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 18 charts
Downloading myService from repo https://myCharts.me/
Save error occurred: could not download https://myCharts.me/stuff.tgz ...
x509: certificate signed by unknown authority
Deleting newly downloaded charts, restoring pre-update state
What I needed to do, apart from adding the repo with --ca-file, was to download the repository certificate and install it for the Current User:
Place all certificates in the following store: Trusted Root Certification Authorities.
After installing the certificate I also needed to restart the computer. After the restart, when you open the browser and paste the repo URL, it should connect without warning and trust the site (this way you know you installed the certificate successfully).
You can then go ahead and run the command; it should pick up the certificate this time.
helm dependency update helm/myStuff
....
Saving 18 charts
Downloading service1 from repo https://myCharts.me/
Downloading service2 from repo https://myCharts.me/
....

kubectl unable to connect to server: x509: certificate signed by unknown authority

I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7, with one master and one worker.
Here's my .kube\config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
the cluster is built using kubeadm with the default certificates on the pki directory
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via Docker Desktop ui 2.1.0.1
the installer created config file at ~/.kube/config
the value in ~/.kube/config for server is https://kubernetes.docker.internal:6443
using proxy
Issue: kubectl commands to this endpoint were going through the proxy, I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump which displayed the proxy html error page.
Fix: just making sure that this URL doesn't go through the proxy, in my case in bash I used export no_proxy=$no_proxy,*.docker.internal
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy and paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (these lines included) into a new text file, say myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly, it will trust the cluster (I tried renaming the file and it no longer trusted the cluster afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
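(For what it's worth, certificate-authority-data is just the PEM certificate base64-encoded, so an equivalent shortcut, assuming the cluster you want is the first entry in your kubeconfig, would be:)
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode > myCert.crt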
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
Here devops1-218400 is my project name; replace it with your project name.
I got the same error while running $ kubectl get nodes as a root user. I fixed it by pointing the KUBECONFIG environment variable at kubelet.conf:
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by adding --insecure-skip-tls-verify to the end of each kubectl command.
Sorry I wasn't able to provide this earlier; I just realized the cause:
on the master node we were running a kubectl proxy
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and voila the error was gone.
I'm now able to do
kubectl get nodes
NAME STATUS AGE VERSION
centos-k8s2 Ready 3d v1.7.5
localhost.localdomain Ready 3d v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you that were late to the thread like I was and none of these answers worked for you I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed) I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (a Windows 10 machine connecting to a Raspberry Pi cluster on the same network). Make sure that you do this; it may fix your problem like it did mine.
On GCP:
Check your gcloud version:
gcloud version
Then run:
gcloud container clusters get-credentials <clusterName> --zone=<zoneName>
Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list?
(ref: x509 errors with Marketplace deployments on GCP / Kubernetes)
I got this because I was not connected to the office's VPN
In case of this error, you should export the kubecfg, which contains the certs: kops export kubecfg "your cluster-name" and export KOPS_STATE_STORE=s3://"paste your S3 store".
Now you should be able to access and see the resources of your cluster.
This is an old question, but in case it helps someone else, here is another possible reason.
Let's assume that you have deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so Kubernetes can load the configuration from the .kube dir.
Update: when copying the ~/.kube/config file content to a local PC from a master node, make sure to replace the hostname of the load balancer with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.