I have been going around in circles for the past hour trying to change the domain for an HTTP(S) Load Balancer's SSL certificate.
I can't seem to find an option in the console or CLI to change/update the domains. After creating a new certificate, I cannot delete the old one because it is attached to the load balancer. To remove the old SSL certificate, I have to delete the LB and its dependencies and go through all the steps to create the load balancer again.
May I know if it is a bug or expected behavior?
Thanks.
Before you can delete an SSL certificate, you must first update each target proxy that references the certificate. For each target proxy, run the appropriate gcloud update command to update the target proxy's CERTIFICATE_LIST such that it no longer includes the SSL certificate you need to delete.
Please find below the steps for replacing SSL certificates.
1. Create a new SSL certificate resource. The new SSL certificate must have a unique name within the project.
2. Update the target proxy so that its list of SSL certificates includes the new SSL certificate in the first position, making it the primary certificate. After the new certificate, include any existing SSL certificates that you want to retain, and exclude the old SSL certificate that you no longer need. To avoid downtime, run a single gcloud command with the --ssl-certificates flag. For example:
For external HTTP(S) load balancers:
Use the gcloud compute target-https-proxies update command with the --global flag.
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
    --global \
    --ssl-certificates=new-ssl-cert,other-certificates \
    --global-ssl-certificates
For internal HTTP(S) load balancers:
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
    --region REGION \
    --ssl-certificates=new-ssl-cert,other-certificates \
    --ssl-certificates-region=REGION
For SSL proxy load balancers:
Use the gcloud compute target-ssl-proxies update command with the --ssl-certificates flag.
gcloud compute target-ssl-proxies update TARGET_PROXY_NAME \
    --ssl-certificates=new-ssl-cert,other-certificates
3. Verify that the load balancer is serving the replacement certificate by running the following OpenSSL command:
echo | openssl s_client -showcerts -connect IP_ADDRESS:443 -verify 99 -verify_return_error
4. Wait 15 minutes to ensure that the replacement certificate is available to all Google Front Ends (GFEs).
5. (Optional) Delete the old SSL certificate.
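For steps 1 and 5, a minimal sketch with gcloud (the certificate names and file paths below are placeholders, assuming a global self-managed certificate):
# Step 1: create the replacement certificate resource
gcloud compute ssl-certificates create new-ssl-cert \
    --certificate=new-cert.pem \
    --private-key=new-key.pem \
    --global
# Step 5: once no target proxy references it any more, delete the old resource
gcloud compute ssl-certificates delete old-ssl-cert --global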
For further reading, please follow the links below:
Deleting/replacing SSL certificates:
https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#delete-ssl-cert-resource
Replacing an existing SSL certificate
https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#replace-ssl
I'm in the process of moving web services from one Kubernetes cluster to another. The goal is to do that without service interruption.
This is difficult with cert-manager and HTTP challenges, because cert-manager on the new cluster can only retrieve a certificate once the DNS entry points to that cluster. However, if I switch the DNS entry to the new cluster, clients will potentially talk to the new cluster before a valid certificate has been generated. This is like a chicken-and-egg problem.
How do I move the cert-manager certificates to the new cluster, so that it already has the certs once I make the DNS switch?
Certificates are stored in Kubernetes secrets. Cert-manager will pick up existing secrets instead of creating new ones, if the secret matches the ingress object.
So assuming that the ingress object looks the same on both clusters, and that the same namespace is used, copying the secret is as simple as this:
kubectl --context OLD_CLUSTER -n NAMESPACE get secret SECRET_NAME --output yaml \
| kubectl --context NEW_CLUSTER -n NAMESPACE apply -f -
Replace OLD_CLUSTER and NEW_CLUSTER with the kubectl context names of the respective clusters (see kubectl config get-contexts).
Replace SECRET_NAME with the name of the secret where the certificate is stored. This name can be found in the ingress.
Replace NAMESPACE with the actual namespace that you're using.
The command simply exports the secret in YAML format, and then uses kubectl apply -f to create the same resource in the new cluster.
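If you are not sure which secret name to look for, one way to read it from the ingress on the old cluster (a sketch; INGRESS_NAME is a placeholder):
kubectl --context OLD_CLUSTER -n NAMESPACE get ingress INGRESS_NAME \
  -o jsonpath='{.spec.tls[*].secretName}'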
Once the ingress is in place on the new cluster, you can verify that the cert works by using openssl s_client:
openssl s_client -connect CLUSTER_IP:443 -servername SERVICE_DNS_NAME
Again, replace CLUSTER_IP and SERVICE_DNS_NAME accordingly.
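If you also manage Certificate resources with cert-manager, you can additionally check that cert-manager on the new cluster reports them as ready once it has picked up the copied secret (a sketch, assuming the cert-manager CRDs are installed there):
kubectl --context NEW_CLUSTER -n NAMESPACE get certificates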
I am trying to deploy my app to a Google Cloud cluster using the kubectl command, behind a corporate proxy that requires a ".crt" certificate file for HTTPS requests.
I already ran the gcloud container clusters get-credentials... command, and it also asked for a certificate. I followed the instructions given by Google and configured my certificate file without any issue, and it worked.
But when I try the kubectl get pods I am getting the following message:
"Unable to connect to the server: x509: certificate signed by unknown authority"
How can I configure my certificate file to be used by the kubectl command?
I searched for this subject, but the steps I found were too complicated. Could I just run something like this:
kubectl --set_ca_file /path/to/my/cert
Thank you
The short answer, as far as I know, is no.
Here [1] you can see a step-by-step guide for the easiest way I have found so far; it is not a one-line solution, but it is the closest to that.
After you have your cert files, you need to run this:
gcloud compute ssl-certificates create test-ingress-1 \
    --certificate [FIRST_CERT_FILE] \
    --private-key [FIRST_KEY_FILE]
Then you need to create your YAML file with the configuration (the link has two examples) and run this command:
kubectl apply -f [NAME_OF_YOUR_FILE].yaml
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl
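Once applied, you can check that the certificate shows up on the ingress with something like this (the ingress name is a placeholder taken from your YAML file):
kubectl describe ingress [NAME_OF_YOUR_INGRESS]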
At the moment my Kubernetes dashboard shows me that my session is insecure. I have updated the path at which I keep the dashboard certs (/root/certs/) and I need to know how to get Kubernetes to use them.
I have tried to:
Deleted the secret kubernetes-dashboard-certs, which deleted that secret successfully.
Added my new dashboard.crt & dashboard.key to /root/certs/ (readable by all)
Created the secret again with kubectl create secret generic kubernetes-dashboard-certs --from-file=/root/certs -n kubernetes
Logged into dashboard and it still shows insecure (because the SSL cert is not being updated)
Have you created an authentication token (RBAC)? You can get more information from here.
Try to regenerate your cluster certificates; check the documentation. If you used kubeadm, then from the control plane node you can run:
$ kubeadm alpha certs renew all
What can also help is to restart the apiserver:
user@test-calico:~$ sudo docker ps -a | grep apiserver
834ed10cbce3 5eb2d3fc7a44 "kube-apiserver --ad…" 4 weeks ago Up 4 weeks k8s_kube-apiserver_kube-apiserver-test-calico_kube-system_019eaca18f2defc3759027d8220b3451_0
87c22315ce21 k8s.gcr.io/pause:3.1 "/pause" 4 weeks ago Up 4 weeks k8s_POD_kube-apiserver-test-calico_kube-system_019eaca18f2defc3759027d8220b3451_0
$ sudo docker restart 834ed10cbce3
Please also refer to the following post:
Kubernetes: expired certificate.
I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7, with one master and one worker.
Here's my .kube\config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster is built using kubeadm with the default certificates in the pki directory.
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via the Docker Desktop UI 2.1.0.1
the installer created a config file at ~/.kube/config
the value in ~/.kube/config for server is https://kubernetes.docker.internal:6443
using a proxy
Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just make sure that this URL doesn't go through the proxy. In my case, in bash, I used export no_proxy=$no_proxy,*.docker.internal
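To confirm the endpoint is no longer being proxied, a quick check (a sketch) is:
kubectl cluster-info
which should now print the cluster endpoints instead of the proxy's error page.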
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy and paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (these lines included) into a new text file, say myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly it will trust the cluster (tried renaming the file and it no longer trusted afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
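For what it's worth, certificate-authority-data is simply the CA certificate PEM, base64-encoded. So if you ever want to go back to the inline form, you could encode the extracted file and paste the output as the value (a sketch, assuming GNU base64, e.g. in Git Bash or WSL):
# encode myCert.crt on a single line, suitable for certificate-authority-data
base64 -w0 myCert.crt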
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
Here devops1-218400 is my project name; replace it with your own project name.
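If it worked, the newly fetched context should be the current one (a quick check):
kubectl config current-context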
I got the same error while running $ kubectl get nodes as the root user. I fixed it by pointing the KUBECONFIG environment variable at kubelet.conf.
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by adding --insecure-skip-tls-verify to the end of kubectl commands, as a one-off.
Sorry I wasn't able to provide this earlier, I just realized the cause:
So on the master node we're running a kubectl proxy
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and voila the error was gone.
I'm now able to do
kubectl get nodes
NAME STATUS AGE VERSION
centos-k8s2 Ready 3d v1.7.5
localhost.localdomain Ready 3d v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you that were late to the thread like I was and none of these answers worked for you I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (the Windows 10 machine connects to a Raspberry Pi cluster on the same network). Make sure that you do this, and it may fix your problem like it did mine.
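One way to update the server address without hand-editing the file is kubectl config set-cluster (a sketch; it assumes the cluster entry in your kubeconfig is named kubernetes, as in a default kubeadm config, and 192.168.x.x stands in for your master's actual IP):
kubectl config set-cluster kubernetes --server=https://192.168.x.x:6443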
On GCP:
Check your gcloud version:
gcloud version
Then run:
gcloud container clusters get-credentials 'clusterName' --zone='zoneName'
Get clusterName and zoneName from your console here: https://console.cloud.google.com/kubernetes/list?
(ref: x509, Marketplace deployments on GCP, Kubernetes)
I got this because I was not connected to the office's VPN
In case of this error, you should export the kubecfg, which contains the certs:
export KOPS_STATE_STORE=s3://your-s3-store
kops export kubecfg your-cluster-name
Now you should be able to access and see the resources of your cluster.
This is an old question, but in case it helps someone else, here is another possible reason.
Let's assume that you deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so kubectl can load the configuration from the .kube dir (or point kubectl at it directly, as in the sketch below).
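A minimal sketch of the second option, assuming the deploying user is x as above:
export KUBECONFIG=/home/x/.kube/config
kubectl get nodes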
Update: when copying the ~/.kube/config file content from a master node to a local PC, make sure to replace the load balancer's hostname with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.