Accessing Kubelet API Microk8s - ssl

I want to ask how to access the Kubelet API from a microk8s cluster.
I looked at this URL and it says that the Kubelet API requires a client certificate.
So I called this (from /var/snap/microk8s/current/certs)
curl -v https://127.0.0.1:10250 --cert ca.crt --cert-type PEM --cacert ca.crt --key ca.key
But I got an error saying:
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
How do I fix this issue? Also, what is the difference between kubelet.crt, server.crt, and ca.crt in microk8s?
Thank you!

Try this:
curl --verbose \
--cert ./server.crt \
--key ./server.key \
--insecure \
https://127.0.0.1:10250/healthz
The CA cert in the certs directory is not the signer of the cert that :10250 presents to the user. I don't know where the CA cert being presented comes from; it looks like it's rotated, as the issuer is CN=<servername>-ca@1567568834 (hence the --insecure).
The kube-apiserver command line will include the exact path to the kubelet client certs (or they could also be stored in a config file in the new k8s world):
--kubelet-client-certificate
--kubelet-client-key
$ pgrep -a kube-apiserver | perl -pe 's/ --/\n --/g'
22071 /snap/microk8s/1247/kube-apiserver
--cert-dir=/var/snap/microk8s/1247/certs
--service-cluster-ip-range=10.22.189.0/24
--authorization-mode=RBAC,Node
--basic-auth-file=/var/snap/microk8s/1247/credentials/basic_auth.csv
--service-account-key-file=/var/snap/microk8s/1247/certs/serviceaccount.key
--client-ca-file=/var/snap/microk8s/1247/certs/ca.crt
--tls-cert-file=/var/snap/microk8s/1247/certs/server.crt
--tls-private-key-file=/var/snap/microk8s/1247/certs/server.key
--kubelet-client-certificate=/var/snap/microk8s/1247/certs/server.crt
--kubelet-client-key=/var/snap/microk8s/1247/certs/server.key
--secure-port=16443
--token-auth-file=/var/snap/microk8s/1247/credentials/known_tokens.csv
--token-auth-file=/var/snap/microk8s/1247/credentials/known_tokens.csv
--etcd-servers=https://127.0.0.1:12379
--etcd-cafile=/var/snap/microk8s/1247/certs/ca.crt
--etcd-certfile=/var/snap/microk8s/1247/certs/server.crt
--etcd-keyfile=/var/snap/microk8s/1247/certs/server.key
--requestheader-client-ca-file=/var/snap/microk8s/1247/certs/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/var/snap/microk8s/1247/certs/front-proxy-client.crt
--proxy-client-key-file=/var/snap/microk8s/1247/certs/front-proxy-client.key
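Added example, not code from the question or the answer above: a bash script that does what the answer describes in one pass. It pulls the kubelet client cert and key out of the running kube-apiserver's command line (the same --kubelet-client-certificate / --kubelet-client-key flags shown in the pgrep output), and then calls the kubelet on 127.0.0.1:10250 with them. Instead of --insecure it pins the SPKI hash of the serving certificate the kubelet presents, using curl's --pinnedpubkey, so the TLS check is not switched off entirely. Assumptions: the kubelet listens on 127.0.0.1:10250, the apiserver's client cert is accepted by the kubelet's client CA (true here, since server.crt is signed by ca.crt), and curl is built against OpenSSL or GnuTLS so that --pinnedpubkey is available. SAMPLE_CMDLINE is a constructed copy of the pgrep output above, used only to exercise the flag parser.

```shell
#!/usr/bin/env bash
# Added example (not from the original post): find the kubelet client
# credentials kube-apiserver uses, then query the kubelet on :10250 with
# them, pinning the kubelet's serving key instead of passing --insecure.
set -eu

# Constructed sample: the answer's `pgrep -a kube-apiserver` output, flattened.
SAMPLE_CMDLINE='22071 /snap/microk8s/1247/kube-apiserver'
SAMPLE_CMDLINE="$SAMPLE_CMDLINE --cert-dir=/var/snap/microk8s/1247/certs"
SAMPLE_CMDLINE="$SAMPLE_CMDLINE --client-ca-file=/var/snap/microk8s/1247/certs/ca.crt"
SAMPLE_CMDLINE="$SAMPLE_CMDLINE --kubelet-client-certificate=/var/snap/microk8s/1247/certs/server.crt"
SAMPLE_CMDLINE="$SAMPLE_CMDLINE --kubelet-client-key=/var/snap/microk8s/1247/certs/server.key"
SAMPLE_CMDLINE="$SAMPLE_CMDLINE --secure-port=16443"

# Value of one --flag=value option in a kube-apiserver command line
# (first match wins; prints nothing if the flag is absent).
apiserver_flag() {
  local cmdline="$1" flag="$2"
  printf '%s\n' "$cmdline" | tr ' ' '\n' \
    | awk -v pfx="--${flag}=" 'index($0, pfx) == 1 { print substr($0, length(pfx) + 1); exit }'
}

# Issuer of the certificate the kubelet actually serves; this is the
# CN=<host>-ca@<timestamp> self-signed issuer the answer mentions, not ca.crt.
serving_issuer() {
  openssl s_client -connect "$1:$2" </dev/null 2>/dev/null | openssl x509 -noout -issuer
}

# SPKI SHA-256 of the served certificate, in the form curl --pinnedpubkey expects.
spki_pin() {
  openssl s_client -connect "$1:$2" </dev/null 2>/dev/null \
    | openssl x509 -pubkey -noout \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 -binary \
    | base64 | sed 's#^#sha256//#'
}

main() {
  local host="${1:-127.0.0.1}" port="${2:-10250}"
  local cmdline cert key pin
  cmdline="$(pgrep -a kube-apiserver | head -n1)"
  cert="$(apiserver_flag "$cmdline" kubelet-client-certificate)"
  key="$(apiserver_flag "$cmdline" kubelet-client-key)"
  echo "kubelet client cert: $cert"
  echo "kubelet client key:  $key"
  echo "kubelet serving cert $(serving_issuer "$host" "$port")"
  pin="$(spki_pin "$host" "$port")"
  [ -n "$pin" ] || { echo "could not fetch the kubelet serving certificate" >&2; return 1; }
  echo "pinning $pin"
  # The client cert authenticates us to the kubelet; the pin authenticates the
  # kubelet to us, so --insecure is not needed.
  curl -sS --cert "$cert" --key "$key" --pinnedpubkey "$pin" "https://${host}:${port}/healthz"
  echo
}

# Run as: sudo KUBELET_PROBE_RUN=1 ./kubelet-probe.sh [host] [port]
if [ "${KUBELET_PROBE_RUN:-0}" = "1" ]; then
  main "$@"
fi
```

The pin is taken from the first connection, so it only guarantees that later connections reach the same kubelet (trust on first use); if you can read the kubelet's serving certificate from disk on the node, compute the pin from that file instead so the first connection is checked too.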

Related

SEC_ERROR_UNKNOWN_PKCS11_ERROR when trying to run curl command in rhel7

I am trying to run curl command from Red Hat Enterprise Linux 7 server with curl below:-
curl -0 -v --cert-type p12 --cert /mycert.p12:<password> --cacert /my_sec_cert.cer -X GET "https://<url>"
and ended up with the error :-
unable to load client cert: -8018 (SEC_ERROR_UNKNOWN_PKCS11_ERROR)
NSS error -8018 (SEC_ERROR_UNKNOWN_PKCS11_ERROR)
When I use a PEM certificate with the private key for the same curl command, it works fine for me.
Can anyone suggest how to use p12 with RHEL 7 here? Thanks

Where do I find the ca certificates for mosquitto_sub and pub?

In this article mosquitto_sub with TLS enabled I understand that you need to provide a capath or cafile option to mosquitto_sub (and pub) but I am having trouble figuring out where those files/paths come from.
Back in October I was able to run mosquitto_sub -h mymosquitto.com -p 8883 -v -t 'jim/#' -u <u> -P <pw> --capath ssl/certs from my desktop computer (running Mint 19). That no longer works. I did an apt install ca-certificates and found the .crt files in /usr/share/ca-certificates/mozilla/ but when I used that path, it still gave me: Error: A TLS error occurred.
This is an Ubuntu 18.04 server running Let's Encrypt. I tried to point the --cafile to the chain.pem file which came from:
allow_anonymous false
password_file /etc/mosquitto/pwfile
listener 1883
listener 8883
certfile /etc/letsencrypt/live/mymosquitto.com/cert.pem
cafile /etc/letsencrypt/live/mymosquitto.com/chain.pem
keyfile /etc/letsencrypt/live/mymosquitto.com/privkey.pem
But that didn't work either. Can someone please help me understand what I should be doing?
From the mosquitto_sub man page:
--capath
Define the path to a directory containing PEM encoded CA certificates that are trusted. Used to enable SSL communication.
For --capath to work correctly, the certificate files must have ".crt" as the file ending and you must run "openssl rehash [path to
capath]" each time you add/remove a certificate.
If you want to use a directory of certs you will have to make sure the openssl rehash command mentioned has been run on that directory.
If you want to use a file from Let's Encrypt, use --cafile with the fullchain.pem file.
I have rethought my situation. Since my certs get regenerated every 3 months or so I'm going to have to redo my apps using the new files, so I decided to just go back to rolling my own. I did that using this site: http://www.steves-internet-guide.com/mosquitto-tls/ and I'm back to where I was in October. Thanks to hardillb for the advice.
Jim.
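Added example, not code from the question or answer above: a bash script that turns a PEM bundle (for instance the Let's Encrypt chain.pem or fullchain.pem from the question) into a directory that mosquitto_sub --capath can use, following the man page quoted in the answer: one certificate per .crt file, then `openssl rehash` to create the <subject-hash>.N links that OpenSSL looks up. It also shows how to confirm with `openssl verify -CApath` that a given certificate chains to something in that directory before pointing mosquitto at it. One caveat the page does not spell out: OpenSSL by default only accepts a chain that ends at a self-signed certificate present in the trust store, so the directory needs the root, not only the intermediate. Paths in the usage comment are placeholders; the test material is generated on the fly.

```shell
#!/usr/bin/env bash
# Added example (not from the original post): build a --capath directory
# for mosquitto_sub/mosquitto_pub from a PEM bundle and check it works.
set -eu

# Split $chain into one .crt file per certificate under $dir and create
# the hash links. Prints the number of certificates written.
build_capath() {
  local chain="$1" dir="$2"
  mkdir -p "$dir"
  awk -v dir="$dir" '
    /-----BEGIN CERTIFICATE-----/ { n++; f = sprintf("%s/cert%02d.crt", dir, n) }
    f != ""                       { print > f }
    /-----END CERTIFICATE-----/   { close(f); f = "" }
    END                           { print n + 0 }
  ' "$chain"
  # Without the <hash>.N links OpenSSL ignores the directory, which is why a
  # plain copy of /usr/share/ca-certificates/mozilla does not work as a capath.
  openssl rehash "$dir" 2>/dev/null || c_rehash "$dir" >/dev/null 2>&1
}

# Number of hash links openssl rehash produced (8 hex chars, dot, index).
hash_link_count() {
  find "$1" -maxdepth 1 -type l \
    -name '[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f].[0-9]*' \
    | wc -l | tr -d ' '
}

# 0 if $cert chains to a trusted (self-signed) root found in $dir, 1 otherwise.
# -no-CAfile keeps the system bundle out of the picture so only $dir is tested.
capath_verifies() {
  local dir="$1" cert="$2"
  openssl verify -no-CAfile -CApath "$dir" "$cert" >/dev/null 2>&1
}

# Usage (placeholder paths, matching the layout in the question):
#   CAPATH_DEMO_RUN=1 ./build-capath.sh /etc/letsencrypt/live/mymosquitto.com/fullchain.pem ./capath
#   mosquitto_sub -h mymosquitto.com -p 8883 --capath ./capath -t 'jim/#'
if [ "${CAPATH_DEMO_RUN:-0}" = "1" ]; then
  n=$(build_capath "$1" "$2")
  echo "wrote $n certificate(s) and $(hash_link_count "$2") hash link(s) to $2"
  if [ -n "${3:-}" ]; then
    capath_verifies "$2" "$3" && echo "$3 verifies against $2" || echo "$3 does NOT verify against $2"
  fi
fi
```

Re-run build_capath whenever the bundle changes (Let's Encrypt rotates every ~90 days); the man page's requirement to rehash after every add or remove is exactly the step that a one-off copy of .crt files skips.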

gitlab runner - x509: certificate signed by unknown authority

Well, I am trying to run gitlab-runner on my PC, which should be connected to our Gitlab on the server.
I am getting
ERROR: Registering runner... failed runner=XXXXXX status=couldn't execute POST against https://XXXXXXXXXX/api/v4/runners: Post https://XXXXXXXXXX/api/v4/runners: x509: certificate signed by unknown authority
PANIC: Failed to register this runner. Perhaps you are having network problems
I went through different pieces of advice, but nothing really changed.
My current setup is a self-signed certificate generated by
wget "https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt" -O "/Users/admin/gitlab-runner-certs/fs-tul-letsencrypt.pem"
(I also tried https://futurestud.io/tutorials/how-to-run-gitlab-with-self-signed-ssl-certificate),
script for gitlab-runner registration
#!/usr/bin/env bash
# tried also without sudo
sudo gitlab-runner register \
--non-interactive \
--registration-token OUR_GITLAB_TOKEN \
--url OUR_GITLAB_HOST_URL \
--tls-ca-file /Users/admin/gitlab-runner-certs/fs-tul-letsencrypt.pem \
--executor docker
And I am still getting that error. Any idea?
I also did not change anything on server side. Shouldn't I do anything there? (I did not find any mention about it, but still asking)
PS: gitlab-runner x509: certificate signed by unknown authority did not fix my problem
There was a problem on server side where gitlab was running.
There was wrong path to full-chain certificate.

Deploying own Docker Registry: Registry restarting

I know Docker Hub, and I know you can create your own repositories on it.
But you have to pay when you want to create multiple private repos.
So I want my own Docker Registry Server using self signed certificates.
I'm following the official documentation
So these are the steps:
Create certificates in certs/
mkdir -p certs && openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 365 -out certs/domain.crt
So this creates a domain.key and domain.crt in my certs/.
Now it's time to start my docker registry (using the keys):
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=certs/domain.key \
registry:2
After the deploy I see this:
$ docker ps
"/bin/registry /etc/d" 12 seconds ago Restarting (1) 1 seconds ago 0.0.0.0:5000->5000/tcp
My docker logs are telling me:
time="2015-12-11T10:18:19Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.5.2 instance.id=ee1b0d64-89eb-4be7-bc3e-e0e249bf117d version=v2.2.1
time="2015-12-11T10:18:19Z" level=info msg="redis not configured" go.version=go1.5.2 instance.id=ee1b0d64-89eb-4be7-bc3e-e0e249bf117d version=v2.2.1
time="2015-12-11T10:18:19Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.5.2 instance.id=ee1b0d64-89eb-4be7-bc3e-e0e249bf117d version=v2.2.1
time="2015-12-11T10:18:19Z" level=fatal msg="open certs/domain.crt: permission denied"
Can someone tell me what I'm doing wrong? Thanks
It's an issue with SELinux and Docker:
chcon -Rt svirt_sandbox_file_t ~/certs/

Docker Registry incorrectly claims an expired CA cert

I followed the Docker Registry installation docs precisely, and have a registry running on a remote Ubuntu VM. On that VM, the Docker container is running with the following command:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
registry:2
On the remote VM, I have the following directory structure:
/home/myuser/
certs/
registry.crt
registry.key
/etc/docker/certs.d/myregistry.example.com:5000/
ca.crt
ca.key
The ca.crt is the same exact cert as ~/certs/registry.crt (just renamed); same goes for ca.key and registry.key being the same/just renamed. I created the ca* files per a suggestion from the error output you'll see below.
I am almost 100% sure the CA cert is still valid, although any help ruling that out (e.g. how can I actually tell?) would be appreciated. When I start the container and look at the Docker logs, I don't see any errors.
I then attempt to login from my local laptop (Mac):
docker login myregistry.example.com:5000
It queries me for my username, password and email (although I don't recall ever specifying an email when setting up Basic Auth). After entering these correctly (I have checked and double checked...) I get the following error:
myuser@mymachine:~/tmp$ docker login myregistry.example.com:5000
Username: my_ciuser
Password:
Email: myuser@example.com
Error response from daemon: invalid registry endpoint https://myregistry.example.com:5000/v0/:
unable to ping registry endpoint https://myregistry.example.com:5000/v0/ v2 ping attempt failed with error:
Get https://myregistry.example.com:5000/v2/: x509: certificate has expired or is not yet valid
v1 ping attempt failed with error: Get https://myregistry.example.com:5000/v1/_ping: x509:
certificate has expired or is not yet valid. If this private registry supports only HTTP or
HTTPS with an unknown CA certificate, please add
`--insecure-registry myregistry.example.com:5000` to the daemon's
arguments. In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
So from my perspective, I guess the following are possible:
The CA cert is invalid (if so, why?!?)
The CA cert is an intermediary cert (if so, how can I tell?)
The CA cert is expired (if so, how do I tell?)
This is a bad error message, and some other facet of the registry is not configured properly (if so, how do I troubleshoot further?)
Perhaps my cert is not located in the correct place on the server, or doesn't have the right permissions set (if so, where does the cert need to be?)
Something else that I would never expect in a million years
Any ideas/thoughts?
As said in the error message:
... In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
where myregistry.example.com:5000 is your CN with port.
You should copy your ca.crt into each Docker Daemon that will connect to your Docker Registry and put it in this folder: /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
After this action you need to restart Docker daemon, for example, via sudo service docker stop && service docker start on CentOS (or call similar procedure on your OS).
I had a similar error.
Then I added my private registry to the insecure-registries list.
See the image below for Docker Desktop.
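Added example covering the gitlab-runner thread and the two Docker Registry threads above; it is not code from those posts. Both registry questions hinge on "how can I actually tell?" whether a certificate is expired or not yet valid, whether it is self-signed (so the same file legitimately doubles as ca.crt), and whether the key next to it really belongs to it, while the gitlab-runner thread was resolved by a server that was not serving its full chain. The functions below answer each of those with plain openssl calls. File names and the host in the usage comment are placeholders; the test generates its own throwaway certificates.

```shell
#!/usr/bin/env bash
# Added example (not from the original posts): quick diagnosis of a PEM
# certificate and of what a TLS endpoint actually serves.
set -eu

# Subject, issuer and validity window of a PEM certificate.
cert_summary() {
  openssl x509 -in "$1" -noout -subject -issuer -startdate -enddate
}

# 0 if the cert is still valid $2 seconds from now (default 0 = right now).
# Note: -checkend only detects expiry; a "not yet valid" cert shows up in
# cert_summary as a notBefore in the future (check the clock on both hosts).
cert_valid_for() {
  openssl x509 -in "$1" -noout -checkend "${2:-0}" >/dev/null 2>&1
}

# 0 if subject and issuer are the same name, i.e. the cert is self-signed.
# That is the situation in both registry threads: domain.crt/registry.crt is
# its own root, which is why copying it to certs.d/<host:port>/ca.crt works.
cert_self_signed() {
  [ "$(openssl x509 -in "$1" -noout -subject_hash)" = "$(openssl x509 -in "$1" -noout -issuer_hash)" ]
}

# 0 if the cert is marked as a CA (Basic Constraints CA:TRUE).
cert_is_ca() {
  openssl x509 -in "$1" -noout -text | grep -A1 'Basic Constraints' | grep -q 'CA:TRUE'
}

# 0 if the private key matches the certificate (same public key, compared by
# SHA-256 of the DER-encoded SubjectPublicKeyInfo).
cert_key_match() {
  local c k
  c=$(openssl x509 -in "$1" -noout -pubkey | openssl pkey -pubin -outform DER | openssl dgst -sha256)
  k=$(openssl pkey -in "$2" -pubout -outform DER | openssl dgst -sha256)
  [ "$c" = "$k" ]
}

# What a live endpoint serves: how many certs are in the presented chain and
# whether OpenSSL can verify it against $cafile. A count of 1 for a CA-issued
# cert means the server is not sending its intermediates (the gitlab-runner case).
served_chain() {
  local host="$1" port="$2" cafile="$3" out
  out=$(openssl s_client -connect "$host:$port" -servername "$host" -showcerts -CAfile "$cafile" </dev/null 2>/dev/null)
  printf 'certificates served: %s\n' "$(printf '%s\n' "$out" | grep -c 'BEGIN CERTIFICATE')"
  printf '%s\n' "$out" | grep 'Verify return code'
}

# Usage (placeholder paths/host):
#   CERT_CHECK_RUN=1 ./cert-check.sh certs/registry.crt certs/registry.key myregistry.example.com 5000
if [ "${CERT_CHECK_RUN:-0}" = "1" ]; then
  cert_summary "$1"
  cert_valid_for "$1"          && echo "valid now"        || echo "EXPIRED (or clock skew)"
  cert_valid_for "$1" 2592000  && echo "valid for 30 days" || echo "expires within 30 days"
  cert_self_signed "$1"        && echo "self-signed"       || echo "issued by another CA"
  cert_is_ca "$1"              && echo "CA:TRUE"           || echo "not a CA cert"
  cert_key_match "$1" "$2"     && echo "key matches cert"  || echo "KEY DOES NOT MATCH CERT"
  [ -n "${3:-}" ] && served_chain "$3" "${4:-443}" "$1"
fi
```

If cert_valid_for fails or served_chain reports a verify error, fix the certificate or the served chain rather than adding the host to insecure-registries or passing --insecure: both of those turn off the check that just caught the problem.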