I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7, with one master and one worker.
Here's my .kube\config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster was built using kubeadm with the default certificates in the pki directory.
kubectl fails with: Unable to connect to the server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via the Docker Desktop UI 2.1.0.1
the installer created the config file at ~/.kube/config
the server value in ~/.kube/config is https://kubernetes.docker.internal:6443
working behind a proxy
Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just make sure that this URL doesn't go through the proxy; in my case, in bash, I used export no_proxy=$no_proxy,*.docker.internal
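A quick way to confirm the proxy is the culprit and that the exclusion took effect (a rough sketch using standard tools):
env | grep -i proxy                  # shows which proxy variables are currently set
export no_proxy=$no_proxy,*.docker.internal
kubectl cluster-info                 # should now reach the API server instead of the proxy's error page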
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster:
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be what is written after server: in your config.
Copy-paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (those lines included) into a new text file, say... myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(this assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly, kubectl will trust the cluster (I tried renaming the file and it no longer trusted the cluster afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
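(If, as I assume, certificate-authority-data is just base64-encoded PEM, the equivalent inline fix would be to re-encode the extracted certificate and paste the output back into that field, e.g. on a machine with GNU base64 available:)
base64 -w0 myCert.crt    # prints a single-line value suitable for certificate-authority-data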
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
Here devops1-218400 is my project name. Replace it with your project name.
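After that, you can check that the new context is active and the cluster is reachable:
kubectl config current-context
kubectl get nodes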
I got the same error while running $ kubectl get nodes as the root user. I fixed it by exporting kubelet.conf to the KUBECONFIG environment variable:
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by adding --insecure-skip-tls-verify at the end of the kubectl command, as a one-time workaround.
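For example (note that this skips certificate validation entirely, so treat it only as a temporary workaround):
kubectl get nodes --insecure-skip-tls-verify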
Sorry I wasn't able to provide this earlier; I just realized the cause:
On the master node we were running a kubectl proxy:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and voilà, the error was gone.
I'm now able to do:
kubectl get nodes
NAME                    STATUS    AGE       VERSION
centos-k8s2             Ready     3d        v1.7.5
localhost.localdomain   Ready     3d        v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you who, like me, are late to the thread and for whom none of these answers worked, I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (a Windows 10 machine connecting to a Raspberry Pi cluster on the same network). Make sure that you do this, and it may fix your problem like it did mine.
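In other words, the line to edit in .kube/config is the server field, for example:
server: https://127.0.0.1:6443      # as copied from the master, only valid on the master itself
becomes
server: https://192.168.x.x:6443    # the master's address on your network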
On GCP
Check your gcloud version:
localMacOS# gcloud version
Run:
localMacOS# gcloud container clusters get-credentials <clusterName> --zone=<zoneName>
Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list?
ref: x509 errors with #marketplace deployments on GCP #Kubernetes
I got this because I was not connected to the office's VPN
In case of this error, you should export the kubecfg, which contains the certs. Set export KOPS_STATE_STORE=s3://<your S3 store> and then run kops export kubecfg <your cluster name>.
Now you should be able to access and see the resources of your cluster.
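For example, with hypothetical names:
export KOPS_STATE_STORE=s3://my-kops-state-store    # hypothetical state-store bucket
kops export kubecfg my-cluster.example.com          # hypothetical cluster name
kubectl get nodes                                   # should work once the certs are in place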
This is an old question, but in case it helps someone else, here is another possible reason.
Let's assume that you have deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so kubectl can load the configuration from the .kube dir.
Update: When copying the ~/.kube/config file content to a local PC from a master node, make sure to replace the hostname of the load balancer with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.
I have a problem with my Docker registry. My "server" VM runs Kali Linux. I created the Docker registry over HTTP and used a CentOS VM as a client. I declared the registry as insecure in the client VM and it worked perfectly.
Now I am trying to move it to HTTPS. In order to do that, I use nginx as a proxy. I followed this tutorial: Step 5 — Setting Up SSL, except for part 8 to make it a service (I don't know why, but I can't do it).
Because I don't have a domain name, I used a fake one. In order for it to be recognized, I added my IP (192.168.X.X) and the domain name I used (myregistryexemple) to the /etc/hosts file on both VMs.
As asked by the tutorial, I generated the certificate on my "server" VM (the Kali one) and sent it by scp to my client VM. I made the CentOS VM trust the certificate with these commands:
yum install ca-certificates
update-ca-trust force-enable
cp cert.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
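(As a sanity check from the client VM, assuming the registry is served by nginx on port 443, the trust can be tested with something like the command below; a TLS verification error here would mean the CA is still not trusted.)
curl -v https://myregistryexemple/v2/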
I restarted the Docker service on the client VM, and launched the Docker registry and the nginx proxy with "docker-compose up" on my Kali VM.
I tagged and tried to push an Ubuntu image to the registry:
docker tag ubuntu myregistryexemple/ubuntu
docker push myregistryexemple/ubuntu
But I get this error :
The push refers to a repository [docker.io/myregistryexemple/ubuntu]
56827159aa8b: Preparing
440e02c3dcde: Preparing
29660d0e5bb2: Preparing
85782553e37a: Preparing
745f5be9952c: Preparing
denied: requested access to the resource is denied
Then I tried to push to localhost directly:
docker tag ubuntu localhost:5000/ubuntu && docker push localhost:5000/ubuntu
Then I ran docker login against the domain from the client VM; it worked. But when I tried to pull from my domain registry on the client VM, Docker could not find the images I had tried to push.
Does anyone have an idea why, and how to fix it?
OK, so I found a way to make it work.
It is quite simple: just follow the complete tutorial I quoted in the question ( https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04#step-5-%E2%80%94-setting-up-ssl ).
After you have created the repository, and before you push/pull a docker image:
You need to go, on both the client and the server VM, to /etc/hosts.
Add the line: serverVmIp domainChosen (the IP first, then the chosen domain name).
Save and quit.
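With the values from this setup, that line would look like:
192.168.X.X    myregistryexemple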
Now we need the client to trust the generated certificate. In order to do that, you can use this tutorial: http://kb.kerio.com/product/kerio-connect/server-configuration/ssl-certificates/adding-trusted-root-certificates-to-the-server-1605.html .
Then restart your registry and your Docker daemon, and you can normally use your domain name to push/pull to/from your registry over HTTPS.
We have created a private Docker registry in Artifactory.
Our Artifactory is a standalone installation and has nginx as a web server.
SSL certificates are trusted and work fine.
On the Docker client, I have copied the ca.crt to /etc/docker/certs.d/artifactory.host:5001/.
While I am trying to log in or push images from my Docker client, I see the error below.
[root@cds-dev-test ~]# docker login artifactory.host:5001
Username: raj
Password:
Email: raj@gmail.com
Error response from daemon: invalid registry endpoint
https://artifactory.host:5001/v0/: unable to ping registry endpoint
v2 ping attempt failed with error: Get https://artifactory.host:5001/v2/: Tunnel Connection Failed
v1 ping attempt failed with error: Get artifactory.host:5001/v1/_ping: Tunnel Connection Failed. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry artifactory.host:5001 to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/artifactory.host:5001/ca.crt
My Docker version is 1.9.1 and the Artifactory version is 4.4.3.
It works when I use the --insecure-registry option, but not the secure way. We have all the trusted certs in place and still see the error.
I have tried using proxy settings on the Docker client and also without a proxy; always the same error.
Any help, guys?
I figured it out.
I had proxy settings under my Docker daemon. I added NO_PROXY and it works fine.
FYI...
So, if you are using a trusted CA cert and your network is behind a proxy, make sure your Docker service file either has no proxy settings or adds NO_PROXY=artifactory.host in:
/etc/systemd/system/docker.service.d/http-proxy.conf
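For example, the drop-in could look something like this (the proxy URL is a placeholder for your own); after editing it, run systemctl daemon-reload && systemctl restart docker:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=artifactory.host"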
Thanks
My question is this: as I understand it, docker-machine uses the Docker remote API to do whatever it does, for example to regenerate certificates. I have checked the Docker API but couldn't find how it's possible to send certificates to that machine using only the Docker API. Can someone help, please?
The TLS files are hosted locally on the Docker client. For this reason you should protect the files as if they were a root password.
This page will walk you through generating the files needed to negotiate a connection over TLS. Note that the remote daemon must be running TLS.
https://docs.docker.com/engine/security/https/
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=$HOST:2376 version
Note: Docker over TLS should run on TCP port 2376.
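Equivalently, the same connection details can be supplied through environment variables so the flags don't have to be repeated on every call (a sketch; the cert path is an assumption):
export DOCKER_HOST=tcp://$HOST:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker    # directory containing ca.pem, cert.pem and key.pem
docker version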
Warning: As shown in the example above, you don't have to run the docker client with sudo or the docker group when you use certificate authentication. That means anyone with the keys can give any instructions to your Docker daemon, giving them root access to the machine hosting the daemon. Guard these keys as you would a root password!
For https access I need to add a CA cert file to /usr/local/share/ca-certificates on my Ubuntu host machine.
Currently, my Dockerfile's RUN wget https... step is failing because certificate verification fails.
How can Docker use the host machine's CA cert? Or is there an existing enhancement request open to allow this?
I've used CA and SSL certs via a passthrough mount, but it looks like you're trying to do it in the Dockerfile.
So my suggestion would be: copy the CA cert into the image as part of the Dockerfile and then proceed as normal. Or drop to http, or run wget --no-check-certificate if you're happy with that.
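A minimal sketch of the copy-the-CA approach, assuming a Debian/Ubuntu base image and a ca.crt sitting next to the Dockerfile (the base image, file names and URL are placeholders):
FROM ubuntu:16.04
# make the extra CA known inside the image before any https downloads
COPY ca.crt /usr/local/share/ca-certificates/extra-ca.crt
RUN apt-get update && apt-get install -y ca-certificates wget \
    && update-ca-certificates
# wget can now verify servers whose certificates are signed by that CA
RUN wget https://internal.example.com/some-file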
There are a few open bugs in this area:
https://github.com/docker/machine/issues/1799
https://github.com/docker/docker/issues/4372
https://github.com/docker/machine/issues/1435
https://github.com/deis/deis/issues/2230