I have set up a Kubernetes cluster from scratch. By that I mean I did not use a managed service from a provider, but used the k8s installer itself. We used to have other clusters from providers, and they give you the TLS cert and key for auth, etc. This cluster was set up by myself, and I have access via kubectl:
$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
$
I also tried this, and I can add a custom key, but when I try to query via curl I get: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope.
I cannot figure out where I can get the cert and key for a user to authenticate against the API using TLS auth. I have tried to understand the official docs, but have gotten nowhere. Can someone help me find where those files are, or how to add or get certificates that I can use for the REST API?
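For reference, what I am ultimately trying to get working is a call along these lines, using whatever cert and key files I am supposed to obtain (the file names and IP here are placeholders):
curl --cacert ca.crt --cert client.crt --key client.key https://private_IP:6443/api/v1/pods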
Edit 1: my ~/.kube/config file looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t(...)=
    server: https://private_IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS(...)Qo=
    client-key-data: LS0(...)tCg==
It works normally from localhost.
On the other hand, I noticed something: from localhost I can access the cluster by generating the token using this method.
Also note that for now I do not care about creating multiple roles for multiple users, etc. I just need access to the API from remote, and it can use the "default" authentication or roles.
Now when I try to do the same from remote I get the following:
I tried using that config to run kubectl get all from remote; it runs for a while and then ends with Unable to connect to the server: dial tcp private_IP:6443: i/o timeout.
This happens because the config has private_IP, so I changed the IP to Public_IP:6443 and now get the following: Unable to connect to the server: x509: certificate is valid for some_private_IP, My_private_IP, not Public_IP:6443
Keep in mind that this is an AWS EC2 instance with an Elastic IP (you can think of an Elastic IP as just a public IP on a traditional setup: the public IP lives on your public router, and that router forwards requests to your actual server on the private network). For AWS fans: like I said, I cannot use the EKS service here.
So how do I get this to be able to use the Public IP?
It seems your main problem is the TLS server certificate validation.
One option is to tell kubectl to skip the validation of the server certificate:
kubectl --insecure-skip-tls-verify ...
This obviously has the potential to be "insecure", but whether that matters depends on your use case.
Another option is to recreate the cluster with the public IP address added to the server certificate. It should also be possible to recreate only the certificate with kubeadm, without recreating the cluster. Details about the latter two points can be found in this answer.
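For what it's worth, a rough sketch of the kubeadm route, assuming default paths (the exact phase commands can differ between kubeadm versions):
# back up the current API server serving certificate and key
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/
# regenerate it with the public IP added as an extra SAN
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=Public_IP
# restart the kube-apiserver static pod so it picks up the new certificate,
# e.g. by briefly moving its manifest out of /etc/kubernetes/manifests and back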
You need to set up RBAC for the user: define roles and role bindings. Follow this link for reference -> https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
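A minimal sketch of what that can look like (the user name, namespace and resources are placeholders; the user name must match the CN of the client certificate):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: remote-user        # must match the CN of the client certificate
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io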
It's hard to ask when you don't know what to ask :) In general, my main question is: how do I connect with a certificate to the Schema Registry in Kafka? Now a little description: I set up a single-node Kafka with sasl_tls in an OpenShift cluster. I used a Helm chart, and I also set up a Schema Registry, which works fine. Kafka has its brokers set up, the Schema Registry is connected, and everything looks good. I can connect to the external Kafka broker using my truststore cert and, for example, get the list of topics. However, I don't understand how to connect to the Schema Registry, and in general what kind of certificate is needed. I created two JKS files, a keystore and a truststore, then signed them with a CA certificate, generated a secret, and put it in the OpenShift cluster. The Schema Registry is reachable from the browser; you can log in via the route using the user and password, and the server returns []. But some colleagues say they cannot connect to the Schema Registry. Here is a little piece of the Helm chart config:
....
auth:
  clientProtocol: sasl_tls
  interBrokerProtocol: sasl_tls
  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512
    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - admin
        - education
      clientPasswords:
        - "xxxx"
        - "xxxxx"
  ....
  tls:
    type: jks
    existingSecret: kafka-jks
    autoGenerated: false
    password: kafkapass
...
Any ideas? thanks
I am trying to create an HA cluster with HAProxy in front of 3 master nodes.
On the proxy I am following the official documentation, High Availability Considerations/haproxy configuration. I am passing the SSL verification through to the API server with the option ssl-hello-chk.
Having said that, I understand that in my ~/.kube/config file I am using the wrong certificate-authority-data, which I picked up from the prime master node, e.g.:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <something-something>
    server: https://ip:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <something-something>
    client-key-data: <something-something>
    token: <something-something>
I found a relevant ticket on GitHub, Unable to connect to the server: x509: certificate signed by unknown authority/onesolpark, which suggests that I should extract the certificate-authority-data of the proxy.
In that case I assume that I should extract the certificate-authority-data from one of the certs in /etc/kubernetes/pki/, most likely apiserver.*?
Any idea on this?
Thanks in advance for your time and effort.
Okay, I managed to figure it out.
When a k8s admin decides to create an HA cluster, he should have at least one LB, but ideally two LBs that are both able to load balance towards all master nodes (3, 5, etc.).
So when the user wants to send a request to the API server on one of the master nodes, the request will ideally go through a virtual IP and be forwarded to one of the LBs. As a second step, the LB forwards the request to one of the master nodes.
The problem that I wanted to solve is that the API server certificate had no record of the IP of the LB(s).
As a result, the user gets the error Unable to connect to the server: x509: certificate signed by unknown authority.
The solution can be found on this relevant question How can I add an additional IP / hostname to my Kubernetes certificate?.
The straight answer is to simply add the LB(s) to the kubeadm config file before launching the prime master node, e.g.:
apiServer:
  certSANs:
  - "ip-of-LB1"
  - "domain-of-LB1"
  - "ip-of-LB2"
  - "domain-of-LB2" # etc etc
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
But as also mentioned, the detailed documentation can be found here: Adding a Name to the Kubernetes API Server Certificate.
Also, if the user decides to create their own certificates instead of the default self-signed certificates (populated by k8s by default), they can add the nodes manually as documented on the official site: Certificates.
Then, if you want to copy the ca.crt, it is under the default dir /etc/kubernetes/pki/ca.crt (unless defined differently), or the user can choose to simply copy the ~/.kube/config file for kubectl communication.
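If needed, you can double-check which names actually ended up in the serving certificate with openssl (assuming the default pki path):
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'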
Hope this helps someone else to spend less time in the future.
Sorry for my bad English but I don't know how to solve my problem.
So...
Introduction:
I have 2 microservices (I called them gRPCClient and gRPCServer, although it doesn't matter what exactly). They need to communicate via TLS. Without Kubernetes, everything is quite simple: I create my CA via cfssl in a Docker container, then I get the root certificate from the CA and put it into the trust store of my gRPC applications (I do this in the Dockerfile), so that any certificate signed by my CA passes verification.
Now Kubernetes enters the game. I'm playing locally with minikube. I create a local cluster with "minikube start" on a Mac (maybe this is important, I don't know...).
Problem:
How will this flow work with Kubernetes? As I understand it, there is already a CA inside Kubernetes (correct me if this is not so). I read many articles, but I really didn't understand anything. I tried the examples from this article: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
Step by step:
Create a certificate signing request:
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "192.0.2.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
The first thing I did not understand was the hosts. For example, is my-svc.my-namespace.svc.cluster.local the full name of my service (I mean the Service in Kubernetes, as in kind: Service)? I have it in the namespace "dev" and its name is user-app-service. Should I then specify user-app-service.dev.svc.cluster.local, or just user-app-service? Or is there some kind of command to get the full name of the service? 192.0.2.24, as I understand it, is the IP of the service; it is also unclear whether it is mandatory to specify it, or whether the name of the service is enough. And what if I have clusterIP: None set, so that the service has no IP? my-pod.my-namespace.pod.cluster.local - should I specify this? If I have several pods, should I list them all? Then the problem is the dynamics, because pods are recreated, deleted and added, and I would need to send a new signing request each time. The same questions I asked about the service apply to the "my-pod" and "namespace" parts - is it possible to see the full name of the pod with all this data? 10.0.34.2 is the pod's IP; the same question applies to the pod's IP.
I tried to specify the host and CN as the name of my service, "user-app-service" (as if I were working without Kubernetes). I created a key and a signing request. Then I went through all the steps and created a request object for signing in Kubernetes:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Then I did this and received a certificate.
Further, for security, I need to store the key and the certificate in Secrets and then read them in the container (for the purposes of the test, I just hard-coded them into the container in the Dockerfile); this is on the gRPC server. I ran the deployment and created a client in Go, specifying config := &tls.Config{} in the code so that it would pull the trusted certificates from the system itself. I assumed that Kubernetes has a CA, but did not find in the docs how to get its certificate; I thought Kubernetes adds it to all the containers itself. But I got the error Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". How should all this work? Where can I get a CA certificate from Kubernetes? And then, do I need to add it to each container by hand in the Dockerfile? Or is this not the right tactic, and is there some kind of automation in Kubernetes?
I found another way: deploy cfssl (https://hub.docker.com/r/cfssl/cfssl/) on Kubernetes and work with it as if there were no Kubernetes (I have not tried this method yet).
How do I put all this into a working system, which options should I use, and why? Maybe there are some complete articles. I wrote a lot, but I hope it's clear. I really need help.
I am going to break down my answer into a couple of parts:
Kubernetes Services and DNS Discovery
In general, it is recommended to put a Service in front of a Deployment that manages pods in Kubernetes. The Service creates a stable DNS and IP endpoint for pods that may be deleted and assigned a different IP address when recreated. DNS service discovery is automatically enabled with a ClusterIP-type Service and is in the format <service name>.<kubernetes namespace>.svc.<cluster domain>, where the cluster domain is usually cluster.local. This means that we can use the autocreated DNS name and assigned ClusterIP in the altnames for our certificate.
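As a quick sanity check (a sketch; the busybox image tag and the service/namespace names from your question are assumptions), you can resolve such a name from inside the cluster:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup user-app-service.dev.svc.cluster.local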
Kubernetes Internal CA
Kubernetes does have an internal CA, along with API methods to post CSRs and have those CSRs signed by the CA. However, I would not use the internal CA for securing microservices. The internal CA is primarily used by the kubelet and other internal cluster processes to authenticate to the Kubernetes API server. There is no functionality for auto-renewal, and I think the cert will always be signed for 30 days.
Kubernetes-native Certificate Management
You can install and use cert-manager to have the cluster automatically create and manage certificates for you using custom resources. They have excellent examples on their website, so I would encourage you to check that out if it is of interest. You should be able to use the CA Issuer type and create Certificate resources that will create a certificate as a Kubernetes Secret. For the altnames, refer to the certificate generation steps in the manual section of my response below.
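For illustration only (not a definitive setup), a CA Issuer plus Certificate might look roughly like this; the names are placeholders, and you should check the cert-manager docs for the API version matching your install:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: my-ca-issuer
  namespace: default
spec:
  ca:
    secretName: my-ca-key-pair      # Secret holding your CA cert and key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpcserver-cert
  namespace: default
spec:
  secretName: grpcserver-tls        # resulting Secret with tls.crt / tls.key
  dnsNames:
  - grpcserver
  - grpcserver.default.svc
  - grpcserver.default.svc.cluster.local
  issuerRef:
    name: my-ca-issuer
    kind: Issuer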
Manually Create and Deploy Certificates
You should be able to achieve the same result as your "without Kubernetes" approach using cfssl:
generate CA using cfssl
add CA as trusted in image (using your Dockerfile approach)
create Kubernetes Service (for example purposes I will use kubectl create)
$ kubectl create service clusterip grpcserver --tcp=8000
describe the created Kubernetes Service, note IP will most likely be different in your case
$ kubectl describe service/grpcserver
Name: grpcserver
Namespace: default
Labels: app=grpcserver
Annotations: <none>
Selector: app=grpcserver
Type: ClusterIP
IP: 10.108.125.158
Port: 8000 8000/TCP
TargetPort: 8000/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
generate a certificate for gRPCServer with a CN of grpcserver.default.svc.cluster.local and the following altnames:
grpcserver
grpcserver.default.svc
grpcserver.default.svc.cluster.local
10.108.125.158
generate the client certificate with cfssl
put both certificates into Secret objects
kubectl create secret tls server --cert=server.pem --key=server.key
kubectl create secret tls client --cert=client.pem --key=client.key
mount the secret into the podspec
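A minimal sketch of that last step (the pod name, image and mount path are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: grpcserver
  labels:
    app: grpcserver
spec:
  containers:
  - name: grpcserver
    image: my-grpcserver:latest      # hypothetical image
    ports:
    - containerPort: 8000
    volumeMounts:
    - name: server-tls
      mountPath: /etc/tls            # server.pem / server.key appear here as tls.crt / tls.key
      readOnly: true
  volumes:
  - name: server-tls
    secret:
      secretName: server             # the Secret created above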
There is a lot of boilerplate work that you need to do with this bespoke approach. If you have the option, I would suggest exploring a service mesh such as Istio or Linkerd to secure communication between microservices using TLS in Kubernetes.
I'm trying to configure an HTTPS/Layer 7 Load Balancer with GKE. I'm following SSL certificates overview and GKE Ingress for HTTP(S) Load Balancing.
My config has worked for some time. I wanted to test Google's managed service.
This is how I've set it up so far:
k8s/staging/staging-ssl.yml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-staging-lb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-staging-global"
    ingress.gcp.kubernetes.io/pre-shared-cert: "staging-google-managed-ssl"
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
  - host: staging.my-app.no
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-svc
          servicePort: 3001
gcloud compute addresses list
#=>
NAME REGION ADDRESS STATUS
my-staging-global 35.244.160.NNN RESERVED
host staging.my-app.no
#=>
35.244.160.NNN
but it is stuck on FAILED_NOT_VISIBLE:
gcloud beta compute ssl-certificates describe staging-google-managed-ssl
#=>
creationTimestamp: '2018-12-20T04:59:39.450-08:00'
id: 'NNNN'
kind: compute#sslCertificate
managed:
  domainStatus:
    staging.my-app.no: FAILED_NOT_VISIBLE
  domains:
  - staging.my-app.no
  status: PROVISIONING
name: staging-google-managed-ssl
selfLink: https://www.googleapis.com/compute/beta/projects/my-project/global/sslCertificates/staging-google-managed-ssl
type: MANAGED
Any idea on how I can fix or debug this further?
I found a section in the doc I linked to at the beginning of the post
Associating SSL certificate resources with a target proxy:
Use the following gcloud command to associate SSL certificate resources with a target proxy, whether the SSL certificates are self-managed or Google-managed.
gcloud compute target-https-proxies create [NAME] \
--url-map=[URL_MAP] \
--ssl-certificates=[SSL_CERTIFICATE1][,[SSL_CERTIFICATE2], [SSL_CERTIFICATE3],...]
Is that necessary when I have this line in k8s/staging/staging-ssl.yml?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    . . .
    ingress.gcp.kubernetes.io/pre-shared-cert: "staging-google-managed-ssl"
    . . .
I have faced this issue recently. You need to check whether your A record correctly points to the Ingress static IP.
If you are using a service like Cloudflare, then disable the Cloudflare proxy setting so that pinging the domain gives the actual IP of the Ingress. This will let the Google-managed SSL certificate be created correctly within 10 to 15 minutes.
Once the certificate is up, you can enable the Cloudflare proxy setting again.
I'm leaving this for anyone who might end up in the same situation as me. I needed to migrate from a self-managed certificate to a Google-managed one.
I created the Google-managed certificate following the guide and was expecting to see it become active before applying the certificate to my Kubernetes Ingress (to avoid the possibility of downtime).
Turns out, as stated by the docs,
the target proxy must reference the Google-managed certificate resource
So applying the configuration with kubectl apply -f ingress-conf.yaml made the load balancer use the newly created certificate, which became active shortly after (15 min or so).
What worked for me, after checking the answers here (I worked with a load balancer, but IMO this is correct for all cases):
If some time has passed, the certificate may no longer work for you (it may be permanently gone, and it will take time to show that) - I created a new one and replaced it in the load balancer (just edit it)
Make sure that the certificate is being used a few minutes after creating it
Make sure that the DNS points to your service, and that your configuration works when using http!! - This is the best and safest way (also, if you just moved a domain, make sure that when you check it you reach the correct IP)
After creating a new cert, or once the problem is fixed, your domain will turn green, but you still need to wait (it can take an hour or more)
As per the following documentation which you provided, this should help you out:
The status FAILED_NOT_VISIBLE indicates that certificate provisioning failed for a domain because of a problem with DNS or the load balancing configuration. Make sure that DNS is configured so that the certificate's domain resolves to the IP address of the load balancer.
What is the TTL (time to live) of the A Resource Record for staging.my-app.no?
Use, e.g.,
dig +nocmd +noall +answer staging.my-app.no
to figure it out.
In my case, increasing the TTL from 60 seconds to 7200 let the domainStatus finally arrive at ACTIVE.
In addition to the other answers, when migrating from self-managed to google-managed certs I had to:
Enable http to my ingress service with kubernetes.io/ingress.allow-http: true
Leave the existing SSL cert running in the original ingress service until the new managed cert was Active
I also had an expired original SSL cert, though I'm not sure this mattered.
In my case, at work, we are leveraging managed certificates a lot in order to provide dynamic environments for developers & QA. As a result, we provision & remove managed certificates quite often, which means we are also updating the Ingress resource as we generate & remove them.
What we have found out is that even if you delete the reference to the managed certificate from this annotation:
networking.gke.io/managed-certificates: <list>
it seems that, at random, the Ingress does not remove the associated ssl-certificates from the load balancer, and they remain listed in:
ingress.gcp.kubernetes.io/pre-shared-cert: <list>
As a result, when the managed certificate is deleted, the Ingress gets "stuck" in a way that no new managed certificate can be provisioned. Hence, a new managed certificate will, after some time, transition from the PROVISIONING state to the FAILED_NOT_VISIBLE state.
The only solution that we have found so far: if a new certificate does not get provisioned after 30 min, we check whether the annotation ingress.gcp.kubernetes.io/pre-shared-cert contains an ssl-certificate that does not exist anymore.
You can check the existing ssl-certificates with the command below:
gcloud compute ssl-certificates list
If it turns out that an ssl-certificate that no longer exists is still hanging around in the annotation, we then remove the unnecessary ssl-certificate from the ingress.gcp.kubernetes.io/pre-shared-cert annotation manually.
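For reference, that manual edit can be done with kubectl annotate; the ingress name and the certificate names here are placeholders:
kubectl annotate ingress my-ingress \
  ingress.gcp.kubernetes.io/pre-shared-cert='still-existing-cert-1,still-existing-cert-2' \
  --overwrite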
After applying the updated configuration, in about 5 minutes the new managed certificate, which was in the FAILED_NOT_VISIBLE state, should be provisioned and in the ACTIVE state.
As already pointed out by Mitzi (https://stackoverflow.com/a/66578266/7588668), this is what worked for me:
Create cert with subdomains/domains
You must add it to the load balancer (I was waiting for it to become active, but it only becomes active once you add it!!)
Add the static IP as an A record for the domains/subdomains
It worked in 5 min
In my case I needed to alter the health check and point it to the proper endpoint (/healthz on nginx-ingress), and after the health check returned true I had to make sure the managed certificate was created in the same namespace as the gce-ingress. After these two things were done it finally went through; otherwise I got the same error, "FAILED_NOT_VISIBLE".
I met the same issue.
I fixed it by re-looking at the documentation.
https://cloud.google.com/load-balancing/docs/ssl-certificates/troubleshooting?_ga=2.107191426.-1891616718.1598062234#domain-status
FAILED_NOT_VISIBLE
Certificate provisioning failed for the domain. Either of the following might be the issue:
The domain's DNS record doesn't resolve to the IP address of the Google Cloud load balancer. To resolve this issue, update the DNS records to point to your load balancer's IP address.
The SSL certificate isn't attached to the load balancer's target proxy. To resolve this issue, update your load balancer configuration.
Google Cloud continues to try to provision the certificate while the managed status is PROVISIONING.
My load balancer is behind Cloudflare, which has its CDN proxy enabled by default. I needed to disable the proxy first; after the DNS was verified by Google, the cert state changed to active.
I had this problem for days. Even though the FQDN in Google Cloud public DNS zone correctly resolved to the IP of the HTTPS Load Balancer, certificate created failed with FAILED_NOT_VISIBLE. I eventually resolved the problem as my domain was set up in Google Domains with DNSSEC but had an incorrect DNSSEC record when pointing to the Google Cloud Public DNS zone. DNSSEC configuration can be verified using https://dnsviz.net/
I had the same problem. But my problem was in the deployment. I ran
kubectl describe ingress [INGRESS-NAME] -n [NAMESPACE]
The result showed an error in resources.timeoutSec for the deployment: allowed values must be less than 300 sec, and my original value was above that. I reduced readinessProbe.timeoutSeconds to a lower number, and after 30 minutes the SSL cert was generated and the subdomain was verified.
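For illustration, a readiness probe within that limit might look roughly like this (the path and port are placeholders):
readinessProbe:
  httpGet:
    path: /healthz
    port: 3001
  timeoutSeconds: 30   # keep this well under the 300-second limit mentioned above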
It turns out that I had mistakenly done some changes to the production environment and others to staging. Everything worked as expected when I figured that out and followed the guide. :-)
I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7, with master and worker nodes.
Here's my .kube\config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster is built using kubeadm with the default certificates in the pki directory.
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via Docker Desktop ui 2.1.0.1
the installer created config file at ~/.kube/config
the value in ~/.kube/config for server is https://kubernetes.docker.internal:6443
using proxy
Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just make sure that this URL doesn't go through the proxy; in my case, in bash, I used export no_proxy=$no_proxy,*.docker.internal
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster:
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy-paste everything starting from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (these lines included) into a new text file, say... myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly, it will trust the cluster (I tried renaming the file and it no longer trusted the cluster afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
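(For what it's worth, that field is normally just the base64-encoded PEM certificate, so a correct value could also be produced from the extracted file with the command below; -w0 is GNU coreutils and can be omitted on macOS.)
base64 -w0 myCert.crt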
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
here devops1-218400 is my project name. Replace it with your project name.
I got the same error while running $ kubectl get nodes as the root user. I fixed it by pointing the KUBECONFIG environment variable at kubelet.conf:
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by adding --insecure-skip-tls-verify at the end of the kubectl command, as a one-time workaround.
Sorry I wasn't able to provide this earlier, I just realized the cause:
So on the master node we're running a kubectl proxy
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and voila the error was gone.
I'm now able to do
kubectl get nodes
NAME STATUS AGE VERSION
centos-k8s2 Ready 3d v1.7.5
localhost.localdomain Ready 3d v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you who were late to the thread like I was and none of these answers worked for you, I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (the Windows 10 machine was connecting to a Raspberry Pi cluster on the same network). Make sure that you do this, and it may fix your problem like it did mine.
On GCP
Check your gcloud version:
localMacOS# gcloud version
Run:
localMacOS# gcloud container clusters get-credentials 'clusterName' --zone=us-'zoneName'
Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list?
ref: x509 # marketplace deployments on GCP # Kubernetes
I got this because I was not connected to the office's VPN
In case of this error, you should export the kubecfg, which contains the certs: run kops export kubecfg "your cluster-name" and export KOPS_STATE_STORE=s3://"paste your S3 store".
Now you should be able to access and see the resources of your cluster.
This is an old question, but in case it also helps someone else, here is another possible reason.
Let's assume that you deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so kubectl can load the configuration from the .kube dir.
Update: when copying the ~/.kube/config file content from a master node to a local PC, make sure to replace the hostname of the load balancer with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.