Permanently replacing API server certificates - ssl

I have a MicroK8s cluster and expose the API server at my domain.
The server.crt and server.key in /var/snap/microk8s/1079/certs need to be replaced with the ones that include my domain.
Otherwise, as expected, I get the error:
Unable to connect to the server: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, not mydonaim.com
With the help of cert-manager I have produced certificates and replaced the originals; my system works well.
Problem: every time the server is restarted, server.crt and server.key are regenerated in
/var/snap/microk8s/1079/certs. My custom certs are deleted, making the API server unreachable remotely.
How can I stop the system from doing that all the time?
Workaround?
Should I place my certificates elsewhere and edit config files like /var/snap/microk8s/1079/args/kube-controller-manager with the path to those certificates? Are those config files auto-replaced as well?
Cluster information:
Kubernetes version: 1.16.3
Cloud being used: Bare metal, single-node cluster
Installation method: Ubuntu Server with Snaps
Host OS: Ubuntu 18.04.3 LTS

It looks like there is an existing issue that describes copying and modifying /var/snap/microk8s/current/certs/csr.conf.template to include any extra IP or DNS entries for the generated certificates.
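A rough sketch of that edit (the template path comes from the issue above; the alt_names index and the domain are placeholders, so adjust them to your own template):
sudo vim /var/snap/microk8s/current/certs/csr.conf.template
# in the [ alt_names ] section, add an entry for your domain, for example:
#   DNS.6 = mydomain.com
# then regenerate the certificates, e.g. with the refresh-certs command mentioned below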

Please be aware of the proposed updates in https://discuss.kubernetes.io/t/services-and-ports/11263/6. The following command was required in my environment:
sudo microk8s refresh-certs

Related

Drone Kubernetes Runner appending CA Certificate

I have a MicroK8s cluster running Gitea, Harbor and Drone CI. Everything is hosted under *.dev.mydomain.com and there is a wildcard certificate for that. The certificate is signed by a private CA.
I'm trying to push the CA certificate to the pods running the Drone CI builds so that they can push/pull from Gitea and Harbor while also being able to connect to external sources (to fetch other Docker images from Docker Hub, for example).
Drone CI and the Drone runner are installed using Helm. I have tried the following in the values.yaml file for the runner:
DRONE_RUNNER_VOLUMES: "/sslcerts:/etc/ssl/certs"
This overwrites the /etc/ssl/certs/ folder in the runner pod. Any requests made from the pod to Harbor or Gitea work; any requests to anything else fail with the error x509: certificate signed by unknown authority.
I also tried
DRONE_RUNNER_VOLUMES: "/sslcerts/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt"
This returned the error: mounting "/sslcerts/ca-certificates.crt" to rootfs at "/etc/ssl/certs/ca-certificates.crt" caused: mount through procfd: not a directory: unknown
Any ideas on how to go about what I'm trying to do? Thanks!
As the runners are Alpine Linux based, all you should have to do is mount your certificates into the /usr/local/share/ca-certificates/ folder (not into a subfolder, but right into that folder).
Alpine should then add all certificates from there to /etc/ssl/certs for you.
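In the runner's values.yaml that would look something like this (assuming your certificates live in /sslcerts on the host, as in the question):
DRONE_RUNNER_VOLUMES: "/sslcerts:/usr/local/share/ca-certificates"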

ERR_SSL_PROTOCOL_ERROR After Installing SSL

I am at a very newbie level when it comes to AWS and SSL.
I got an SSL certificate from GoDaddy. After that I generated the .csr file on AWS and I got an Elastic IP. I created a subdomain on GoDaddy, sub.mydomain.com, that points to that IP.
I installed the certs following instructions I found online, but now I get an error. I've tried installing apache2 on the EC2 instance and rebooting it, but no luck yet.
Is there a way to remove the SSL cert or fix the issue? When I got the SSL certificate from GoDaddy, the zip had 2 files, and I ran the following command to install them:
sudo java -jar lib/ace.jar import_cert gd_bundle-g2-g1.crt gdroot-g2.crt sfroot-g2.crt 54581acbeba8a74e.crt
The system said the certs were installed, but now I get that error. On the EC2 instance we have a UniFi controller, and we want to get SSL running to accept payments for the hotspot.
I had the same issue; my controller is hosted on an EC2 instance.
Check your system.properties, which sits in /var/lib/unifi/. Open the file with vim or your text editor of choice.
Have a look at your HTTPS options, the important ones are the ciphers and protocols.
The protocols you need are TLSv1 and potentially SSLv2Hello; there should be no other SSL protocols in there.
The ciphers you ideally want are TLS ones, for example TLS_RSA_WITH_AES_256_CBC_SHA and TLS_RSA_WITH_AES_128_CBC_SHA.
If you are still having issues, throw them all in. CAUTION: only do this in a demo/test environment.
unifi.https.ciphers=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_RC4_128_SHA
Remember once you have edited the system.properties you need to restart the controller.
sudo service unifi restart
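For reference, the protocols side of that edit might look like the line below; the exact property name is an assumption on my part, so double-check it against the system.properties documentation linked below:
unifi.https.sslEnabledProtocols=TLSv1,SSLv2Hello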
Lots of help on the Unifi page
UniFi - SSL Certificate Error
UniFi - Explaining the config.properties File
UniFi - system.properties File Explanation

kubectl unable to connect to server: x509: certificate signed by unknown authority

I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7, with one master and one worker node.
Here's my .kube\config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain#kubernetes
current-context: system:node:localhost.localdomain#kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster is built using kubeadm with the default certificates in the pki directory.
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via the Docker Desktop UI 2.1.0.1
the installer created the config file at ~/.kube/config
the value of server in ~/.kube/config is https://kubernetes.docker.internal:6443
using a proxy
Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just make sure this URL doesn't go through the proxy; in my case, in bash, I used export no_proxy=$no_proxy,*.docker.internal
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy and paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (those lines included) into a new text file, say myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert file correctly, kubectl will trust the cluster (I tried renaming the file and it was no longer trusted afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
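For reference, a non-interactive way to capture every certificate in the chain into a file is a sketch like this (using the IP and port from the config above; on Windows use openssl.exe):
openssl s_client -showcerts -connect 10.10.12.7:6443 </dev/null 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > myCert.crt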
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
here devops1-218400 is my project name. Replace it with your project name.
I got the same error while running $ kubectl get nodes as the root user. I fixed it by exporting kubelet.conf to the KUBECONFIG environment variable.
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by adding --insecure-skip-tls-verify at the end of kubectl commands, as a one-off.
Sorry I wasn't able to provide this earlier, I just realized the cause:
So on the master node we're running a kubectl proxy
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and voila the error was gone.
I'm now able to do
kubectl get nodes
NAME                    STATUS    AGE    VERSION
centos-k8s2             Ready     3d     v1.7.5
localhost.localdomain   Ready     3d     v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you who were late to the thread like I was and for whom none of these answers worked, I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (the Windows 10 machine connects to a Raspberry Pi cluster on the same network). Make sure you do this, and it may fix your problem like it did mine.
On GCP
Check your gcloud setup:
localMacOS# gcloud version
Run:
localMacOS# gcloud container clusters get-credentials 'clusterName' --zone='zoneName'
Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list?
ref: x509 errors with Marketplace deployments on GCP / Kubernetes
I got this because I was not connected to the office's VPN
In case of this error you should export the kubecfg, which contains the certs:
kops export kubecfg "your-cluster-name"
export KOPS_STATE_STORE=s3://<paste your S3 store>
Now you should be able to access and see the resources of your cluster.
This is an old question, but in case it also helps someone else, here is another possible reason.
Let's assume you have deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so Kubernetes can load the configuration from the .kube dir.
Update: when copying the ~/.kube/config file content from a master node to a local PC, make sure to replace the load balancer's hostname with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.

Which ca.crt does docker.sock use for docker pull?

When I do a docker pull from inside a container that uses /var/run/docker.sock to run Docker (Docker inside Docker), I get this error:
FATA[0000] Error response from daemon: v1 ping attempt failed with error: Get https://registry.com:5000/v1/_ping: x509: certificate has expired or is not yet valid. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry registry.com:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/registry.com:5000/ca.crt
So I followed the instructions and added the ca.crt inside that directory, and also added the insecure option to /etc/default/docker, but the error didn't go away.
I wonder where the daemon behind /var/run/docker.sock looks for the cert when I pull from inside the container, especially since pulling works from outside (on the host) with the same config (ca.crt in the right folder and the insecure option also added).
/var/run/docker.sock is not the thing that is looking for a cert. That is simply the socket that you use to communicate with dockerd. When you do a pull, you are asking the docker daemon to go talk to a registry.
Where did you get the ca.crt file? Is that really the signing certificate for your registry.com:5000 server's certificate? Did you put it in /etc/docker/certs.d/registry.com:5000/ca.crt on the host where dockerd is running, or inside the container?
That ca.crt file belongs where the daemon is running. Double check that you have that correct file in the correct place on the host, and that should fix the issue.
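On the host that would be something like the following (the directory comes from the error message quoted above):
sudo mkdir -p /etc/docker/certs.d/registry.com:5000
sudo cp ca.crt /etc/docker/certs.d/registry.com:5000/ca.crt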
Got it to work now; the solution is to restart the Docker daemon inside the container. I had actually tried that before, but the docker service kept going down after the restart, which made me think it was the Docker service from the host.
The reason I could not restart the docker service was that /var/run/docker.pid still existed, which prevented Docker from starting again. So I deleted that pid file and Docker restarted successfully.
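Roughly, inside the container (the exact restart command depends on the image's init system, so treat this as a sketch):
rm -f /var/run/docker.pid
service docker restart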

SSL renew certificate on Apache keeps using old certificate file

I'm trying to renew my SSL certificate, but there is some problem I'm probably missing. After I've done the following steps the server keeps using the old certificate and I don't know why.
Here's what I have done:
Create new csr file (domain.csr) + key file (domain.key)
openssl req -new -newkey rsa:2048 -nodes -keyout domain.key -out domain.csr
Copy csr file content and paste it to my ssl provider + get approval.
Get 5 files from them and upload them to the server (domain.der, domain.pem, domain.cer, chain.cer, domain.p7b).
In the Apache ssl.conf file, set
SSLCertificateFile (domain.cer) and SSLCertificateKeyFile (domain.key).
restart apache
For some reason my server is still using my old certificate.
Is there something I'm doing wrong?
Well, you figured it out yourself, but in case anyone else is in the same situation, here are some of the things you can check.
First up check locally whether this works, by running the following openssl command on the server (a crucial step we skipped!):
openssl s_client -connect localhost:443
This will show the cert presented to the client by Apache. If that's not the right one, then you know the Apache config is at fault. If it is the right one, then something downstream is the problem.
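To see just the subject and validity dates of the cert being served, you can pipe the output through openssl x509, for example:
openssl s_client -connect localhost:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates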
In your case you terminate SSL at the load balancer and forgot to change the cert there. Another issue could be the browser caching the SSL cert (restart it, Ctrl+F5 to force a refresh, or better yet try another browser or a third-party site like ssllabs.com).
Assuming it's a problem with Apache, you need to check the config to confirm all instances of the cert have been replaced. The command below will show all the vhosts and which config file each is configured in:
/usr/local/apache2/bin/apachectl -S
Alternatively just use standard find and grep unix commands to search your Apache config for the old or new cert:
find /usr/local/apache2/conf -name "*.conf" -exec grep olddomain.cer {} \; -print
Both those commands assume apache is installed in /usr/local/apache2 but change the path as appropriate.
If all looks good and you've definitely restarted Apache, then you can try a full stop and start, as I have noticed that a graceful restart of Apache doesn't always pick up new config. Before starting the web server back up again, check that you can't connect from your browser (to ensure you're connecting to the server you think you're connecting to) and that the process is down, with the following command:
ps -ef | grep httpd
and then finally start.
Another thing to check is that the cert you are installing is the one you think it is, using this openssl command to print out the cert details (assuming the cert is in x509 format but there are similar commands for other formats):
openssl x509 -in domain.cer -text
And last but not least, check the Apache log files to see if there are any errors in there, though I would expect that to mean no cert is loaded rather than just the old one.
Good answer from @Barry.
Another aspect is that Apache may not be the front-most web server. From this conversation, it is possible that there are other web servers in front of Apache, something like nginx. In our case it was an AWS ELB: we had to change the cert in the ELB for the change to take effect.
We had a similar problem to what @Akshay is describing above.
In order for the server to pick up the new certificates, we had to run some commands for the Google Cloud (Compute Engine) load balancer:
gcloud compute target-https-proxies update
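The full command typically also takes the proxy name and the new certificate; a sketch with placeholder names:
gcloud compute target-https-proxies update my-https-proxy --ssl-certificates=my-new-cert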
Hope this helps someone who is using GCP to host Apache.