I'm trying to sort out certificates for applications deployed to a k8s cluster (running on docker-for-win, WSL2 on Windows 10 20H2).
I would like to use DNS names to connect to services, e.g. registry.default.svc.cluster.local, which I've verified is reachable. I created a certificate by following these steps:
Create an openssl.conf with the following content:
[ req ]
default_bits = 2048
prompt = no
encrypt_key = no
distinguished_name = req_dn
req_extensions = req_ext
[ req_dn ]
CN = *.default.svc.cluster.local
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = *.default.svc.cluster.local
Create the CSR and key file with openssl req -new -config openssl.conf -out wildcard.csr -keyout wildcard.key
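It's worth verifying that the SAN actually made it into the CSR before submitting it (a quick sanity check):

openssl req -in wildcard.csr -noout -text | grep -A1 'Subject Alternative Name'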
Created a certificate signing request with
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: wildcard_csr
spec:
  groups:
  - system:authenticated
  request: $(cat wildcard.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Approved the request: kubectl certificate approve wildcard_csr
Extracted the crt file: kubectl get csr wildcard_csr -o jsonpath='{.status.certificate}' | base64 -d > wildcard.crt
Deleted the request kubectl delete csr wildcard_csr.
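To double-check what was extracted, the issued certificate can be inspected with openssl before deploying it (a quick sanity check):

openssl x509 -in wildcard.crt -noout -issuer -subject
openssl x509 -in wildcard.crt -noout -text | grep -A1 'Subject Alternative Name'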
I've then started a pod with the registry:2 image and configured it to use the wildcard.crt and wildcard.key files.
In a different pod, I then tried to push to that registry, but got the error
Error response from daemon: Get https://registry.default.svc.cluster.local:2100/v2/: x509: certificate signed by unknown authority
So it seems that within the pod, the k8s CA isn't trusted. Am I right with this observation? If so, how can I make k8s trust itself (after all, it was a k8s component that signed the certificate)?
I found a way to achieve this with changes to the yaml only: on my machine (not sure how universal that is), the CA certificate is available in the service-account-token secret default-token-7g75m (run kubectl describe secrets to find the name; look for the secret of type kubernetes.io/service-account-token that contains a ca.crt entry).
So to trust this certificate, add a volume
name: "kube-certificate"
secret:
secretName: "default-token-7g75m"
and to the pod that requires the certificate, add a volumeMount
name: "kube-certificate"
mountPath: "/etc/ssl/certs/kube-ca.crt",
subPath: "ca.crt"
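Putting both pieces together, a trimmed pod spec might look like this (a sketch; the pod name, container name, and image are placeholders, and the secret name will differ per cluster):

apiVersion: v1
kind: Pod
metadata:
  name: registry-client   # hypothetical client pod
spec:
  containers:
  - name: client
    image: docker:dind    # placeholder image
    volumeMounts:
    - name: "kube-certificate"
      mountPath: "/etc/ssl/certs/kube-ca.crt"
      subPath: "ca.crt"
  volumes:
  - name: "kube-certificate"
    secret:
      secretName: "default-token-7g75m"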
Related
Hi team, I have followed this link to configure cert-manager for my Istio setup, but I am still not able to access the app through the Istio ingress.
My manifest files look like this:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-cert
  namespace: testing
spec:
  secretName: test-cert
  dnsNames:
  - "example.com"
  issuerRef:
    name: test-letsencrypt
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: test-letsencrypt
  namespace: testing
spec:
  acme:
    email: abc@example.com
    privateKeySecretRef:
      name: testing-letsencrypt-private-key
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: istio
      selector: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    certmanager.k8s.io/cluster-issuer: test-letsencrypt
  name: test-gateway
  namespace: testing
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "example.com"
    tls:
      mode: SIMPLE
      credentialName: test-cert
Can anyone help me with what I am missing here?
Error from browser:
Secure Connection Failed
An error occurred during a connection to skydeck-test.asteria.co.in. PR_CONNECT_RESET_ERROR
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the web site owners to inform them of this problem.
Learn more…
These are a few logs that may be helpful:
Normal Generated 5m13s cert-manager Stored new private key in temporary Secret resource "test-cert-sthkc"
Normal Requested 5m13s cert-manager Created new CertificateRequest resource "test-cert-htxcr"
Normal Issuing 4m33s cert-manager The certificate has been successfully issued
samirparhi@Samirs-Mac ~ % k get certificate -n testing
NAME READY SECRET AGE
test-cert True test-cert 19m
Note: this namespace (testing) has Istio sidecar injection enabled, and all HTTP requests work, but when I try to set up HTTPS, it fails.
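To confirm the issued secret actually holds a certificate for the expected host, it can be decoded (a diagnostic sketch; the jsonpath escapes the dot in the key name):

kubectl get secret test-cert -n testing -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -dates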
I encountered the same problem when my certificate was not signed by a trusted third party but by myself: I had to add an exception to my browser in order to access the site. So in the end it is simply a money issue.
I was also able to connect without problems by adding my certificate to the /etc/ssl directory of the client machine.
Another option that worked was adding the certificate as a TLS secret and referencing it from my gateway and virtual service configuration (see the sketch after the gateway below). You can try these too.
Examples:
TLS Secret:
kubectl create -n istio-system secret tls my-tls-secret --key=www.example.com.key --cert=www.example.com.crt
I assumed that you already have your certificate and its key, but in case you need them:
Certificate creation:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=My Company Inc./CN=example.com' -keyout example.com.key -out example.com.crt
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=World Wide Example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
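If you sign the server certificate with your own root as in the second and third commands, you can verify the chain before creating the secret (a quick check):

openssl verify -CAfile example.com.crt www.example.com.crt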
Just don't forget to fill the -subj fields in a reasonable manner; as I understand it, they are what carries the identity when it comes to SSL certs. For example, the first command above creates a key and certificate for your own organisation, which is not approved by any authority to be included in Mozilla's, Chrome's, or your OS's trust store.
That is why you get the "untrusted certificate" message. For that reason, you can instead create your DNS records in a trusted third party's DNS zone and, by paying them, use their trusted organisation's certificates to authenticate your own site.
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-secret # must be the same as secret
    hosts:
    - www.example.com
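As mentioned above, the gateway can then be referenced from a virtual service. A minimal sketch (my-virtual-service, my-service, and port 8080 are placeholders for your actual names):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
spec:
  hosts:
  - www.example.com
  gateways:
  - mygateway
  http:
  - route:
    - destination:
        host: my-service   # placeholder backend service
        port:
          number: 8080     # placeholder port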
Hope it helps.
Feel free to share "app" details.
I want to use cert-manager for issuing my own SSL certificate on AKS.
I already have a signed certificate (https://www.quovadisglobal.de/Zertifikate/SSLCertificates/BusinessSSLCertificates.aspx) which I want to use. In the cert-manager docs, I find only two relevant solutions:
https://cert-manager.io/docs/configuration/
SelfSigned: issues self-signed certificates.
CA: signs incoming certificate requests using a CA keypair stored in a Secret.
I tried the second one. Here what I did:
Install and verify cert-manager:
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml
$ kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-7c5748846c-b4nqb 1/1 Running 0 2d23h
cert-manager-cainjector-7b5965856-bgk4g 1/1 Running 1 2d23h
cert-manager-webhook-5759dd4547-mlgjs 1/1 Running 0 2d23h
Create Secret from private key and cert:
$ sudo kubectl create secret tls ssl-secret-p --cert=mycert.crt --key=mykey.key --namespace=cert-manager
Create issuer:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: ca-issuer
  namespace: cert-manager
spec:
  ca:
    secretName: ssl-secret-p
Error:
$ sudo kubectl get clusterissuers ca-issuer -n cert-manager -o wide
NAME READY STATUS AGE
ca-issuer False Error getting keypair for CA issuer: certificate is not a CA 5m
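Inspecting the certificate's basic constraints shows whether it can act as a CA (a diagnostic sketch, assuming mycert.crt is the file loaded into the secret; a CA certificate shows CA:TRUE here):

openssl x509 -in mycert.crt -noout -text | grep -A1 'Basic Constraints'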
What am I doing wrong?
EDIT:
sudo kubectl -n namespace get ing
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress ***.com 51.105.205.128 80, 443 13m
Cert-manager will carry out the ACME challenge verification. Try passing this secret name to the tls section in the ingress rule; once the ACME challenge validates, you will see a corresponding entry in the ingress.
kubectl -n namespace get ing
will show you that.
Then the certificate should reach the Ready state.
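For illustration, the tls section referred to above might look like this (a sketch with placeholder host and backend names; on older clusters the Ingress lives under extensions/v1beta1 instead):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  tls:
  - hosts:
    - example.com              # placeholder host
    secretName: ssl-secret-p   # the secret holding the served cert and key
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # placeholder backend
            port:
              number: 80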
I tried it, but I didn't use any pre-created TLS secret. You can refer to this Stack Overflow post; I guess it may turn out helpful to you.
I need to generate my own SSL certificates for Kubernetes cluster components (apiserver, apiserver-kubelet-client, apiserver-etcd-client, front-proxy-client etc.). The reason is that the validity period for those certificates is set to 1 year by default, and I need validity of more than one year for business reasons. When I generated my own set of certificates and initialized the cluster, everything worked perfectly: pods in the kube-system namespace started and communication with the apiserver worked. But I found that some commands like kubectl logs, kubectl port-forward, or kubectl exec stopped working and started throwing the following errors:
kubectl logs <kube-apiserver-pod> -n kube-system
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log <kube-apiserver-pod>))
or
kubectl exec -it <kube-apiserver-pod> -n kube-system sh
error: unable to upgrade connection: Unauthorized`
However, a docker exec into the k8s_apiserver container works properly.
During my debugging I found out that only the self-generated apiserver-kubelet-client key/cert pair causes this cluster behaviour.
Below is the process I used to generate and use my own cert/key pair for apiserver-kubelet-client.
I initialized the Kubernetes cluster to set its own certificates into the /etc/kubernetes/pki folder by running kubeadm init ...
Made a backup of the /etc/kubernetes/pki folder into /tmp/pki_k8s
Opened apiserver-kubelet-client.crt with openssl to check all set extensions, CN, O etc.
openssl x509 -noout -text -in /tmp/pki_k8s/apiserver-kubelet-client.crt
To ensure the same extensions and CN/O parameters appear in the certificate I generate myself, I created a .conf file for the extensions and a .csr file for the CN and O:
cd /tmp/pki_k8s/
cat <<-EOF_api_kubelet_client-ext > apiserver_kubelet_client-ext.conf
[ v3_ca ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF_api_kubelet_client-ext
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters,CN=kube-apiserver-kubelet-client"
Finally I generated my own apiserver-kubelet-client.crt. For its generation I reused the existing apiserver-kubelet-client.key and the ca.crt/ca.key generated by the K8S initialization:
openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -sha256 -out apiserver-kubelet-client.crt -extensions v3_ca -extfile apiserver_kubelet_client-ext.conf -days 3650
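Before reusing the result, its subject and extensions can be compared against the kubeadm-generated original (a quick check):

openssl x509 -in apiserver-kubelet-client.crt -noout -subject
openssl x509 -in apiserver-kubelet-client.crt -noout -text | grep -A1 'Extended Key Usage'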
Once I had generated my own apiserver-kubelet-client.crt, which overrides the one generated by the k8s initialization script, I reset the Kubernetes cluster with kubeadm reset. This purged the /etc/kubernetes folder.
Copied all certificates from /tmp/pki_k8s back into /etc/kubernetes/pki
and reinitialized the K8S cluster with kubeadm init ...
During that I saw that the K8S cluster used the already existing certificates stored in /etc/kubernetes/pki for setup:
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
After that, the K8S cluster is up; I can list pods, describe them, create deployments etc. However, I am not able to check logs or run exec commands as described above.
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-kjkp9 1/1 Running 0 2m
coredns-78fcdf6894-q88lx 1/1 Running 0 2m
...
kubectl logs <apiserver_pod> -n kube-system -v 7
I0818 08:51:12.435494 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.436355 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.438413 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.447751 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.448109 12811 round_trippers.go:383] GET https://<HOST_IP>:6443/api/v1/namespaces/kube-system/pods/<apiserver_pod>
I0818 08:51:12.448126 12811 round_trippers.go:390] Request Headers:
I0818 08:51:12.448135 12811 round_trippers.go:393] Accept: application/json, */*
I0818 08:51:12.448144 12811 round_trippers.go:393] User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f
I0818 08:51:12.462931 12811 round_trippers.go:408] Response Status: 200 OK in 14 milliseconds
I0818 08:51:12.471316 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.471949 12811 round_trippers.go:383] GET https://<HOST_IP>:6443/api/v1/namespaces/kube-system/pods/<apiserver_pod>/log
I0818 08:51:12.471968 12811 round_trippers.go:390] Request Headers:
I0818 08:51:12.471977 12811 round_trippers.go:393] Accept: application/json, */*
I0818 08:51:12.471985 12811 round_trippers.go:393] User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f
I0818 08:51:12.475827 12811 round_trippers.go:408] Response Status: 401 Unauthorized in 3 milliseconds
I0818 08:51:12.476288   12811 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server has asked for the client to provide credentials ( pods/log <apiserver_pod>)",
  "reason": "Unauthorized",
  "details": {
    "name": "<apiserver_pod>",
    "kind": "pods/log"
  },
  "code": 401
}]
F0818 08:51:12.476325 12811 helpers.go:119] error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log <apiserver_pod>))
See kubelet service file below:
[root@qa053 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="CA_CLIENT_CERT=--client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELE=--rotate-certificates=true"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CERTIFICATE_ARGS $CA_CLIENT_CERT
Do you have any ideas? :)
Thanks
Best regards
I found out the reason why it did not work.
When creating the .csr file I used this:
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters,CN=kube-apiserver-kubelet-client"
But the -subj had wrong formatting, which caused problems with parsing the right CN from the certificate. Instead of "/O=system:masters,CN=kube-apiserver-kubelet-client" it needs to be:
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"
Certificates generated from both .csr files look the same in the -text view, but they act differently.
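The difference shows up when openssl parses the subject: the comma form yields a single O RDN whose value contains the literal text ",CN=...", while the slash form yields separate O and CN fields. A sketch of checking it (exact output formatting varies by openssl version):

openssl req -in apiserver-kubelet-client.csr -noout -subject
# comma form:  subject=/O=system:masters,CN=kube-apiserver-kubelet-client   (one O field, no CN)
# slash form:  subject=/O=system:masters/CN=kube-apiserver-kubelet-client   (separate O and CN)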
I am using the following version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Here, I am trying to authenticate a user that makes use of x509 certs using the custom script below, which I put together from a few online forums and the Kubernetes docs.
#!/bin/bash
cluster=test-operations-k8
namespace=demo
username=jack
openssl genrsa -out $username.pem 2048
openssl req -new -key $username.pem -out $username.csr -subj "/CN=$username"
cat <<EOF | kubectl create -n $namespace -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user-request-$username
spec:
  groups:
  - system:authenticated
  request: $(cat $username.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
kubectl certificate approve user-request-$username
kubectl get csr user-request-$username -o jsonpath='{.status.certificate}' | base64 -d > $username.crt
kubectl --kubeconfig ~/.kube/config-$username config set-cluster $cluster --insecure-skip-tls-verify=true --server=https://$cluster.eastus.cloudapp.azure.com
kubectl --kubeconfig ~/.kube/config-$username config set-credentials $username --client-certificate=$username.crt --client-key=$username.pem --embed-certs=true
kubectl --kubeconfig ~/.kube/config-$username config set-context $cluster --cluster=$cluster --user=$username
kubectl --kubeconfig ~/.kube/config-$username config use-context $cluster
echo "Config file for $username has been created successfully !"
But while getting resources I get the below error:
error: You must be logged in to the server (Unauthorized)
Can someone please advise what needs to be done to fix this issue?
Also please note that the appropriate roles and rolebindings have also been created; I have not listed them here.
Make sure the CA used to sign the CSRs (the --cluster-signing-cert-file file given to kube-controller-manager) is in the --client-ca-file bundle given to kube-apiserver (which is what authenticates client certs presented to the apiserver).
Also ensure the certificate requested is a client certificate (has client auth in the usages field), as shown below.
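For the script above, that means changing the usages block of the CSR so it requests a client certificate rather than a server certificate:

usages:
- digital signature
- key encipherment
- client auth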
I am trying to perform Kubernetes webhook authentication using pam_hook, but I get a TLS handshake error at the webhook when the apiserver tries to contact it.
The following are the steps I followed:
apiserver IP: 192.168.20.30
pam webhook server IP: 192.168.20.50
Created a server certificate for my pam webhook server using the ca.crt present in /etc/kubernetes/pki/ (on the master node):
openssl genrsa -out ubuntuserver.key 2048
openssl req -new -key ubuntuserver.key -out ubuntuserver.csr -config myconf.conf
openssl x509 -req -in ubuntuserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ubuntuserver.crt -days 10000
Started my pam webhook server using this certificate:
./pam_hook-master -cert-file /root/newca/ubuntuserver.crt -key-file /root/newca/ubuntuserver.key -signing-key rootroot -bind-port 6000
I1109 07:21:41.388836 3882 main.go:327] Starting pam_hook on :6000
Created a kubeconfig file as follows:
$ cat webhook-config.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://192.168.20.50:6000/authenticate
  name: 192.168.20.50
users:
- name: root
  user:
    client-certificate: /etc/kubernetes/pki/client.crt
    client-key: /etc/kubernetes/pki/client.key
current-context: 192.168.20.50
contexts:
- context:
    cluster: 192.168.20.50
    user: root
  name: 192.168.20.50
Configured the apiserver manifest file /etc/kubernetes/manifests/kube-apiserver.yaml:

...
- --authentication-token-webhook-config-file=/etc/kubernetes/pki/webhook-config.yaml
- --runtime-config=authorization.k8s.io/v1beta1=true
...

The apiserver gets restarted.
Obtained a token by requesting the pam_hook server (I did this from my master node):
$ curl https://192.168.20.50:6000/token --cacert /etc/kubernetes/pki/ca.crt -u root
Enter host password for user 'root':
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M
Made a request to the apiserver using this token, which in turn should communicate with the webhook to perform authentication. But I am getting a 401 error:
$ curl -vvv --insecure -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M" https://192.168.20.38:6443/api/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
Message at the webhook server is:
2017/11/09 07:50:53 http: TLS handshake error from 192.168.20.38:49712: remote error: tls: bad certificate
The docs say that the apiserver sends an HTTP request with the token in its payload. If I recreate the same call in curl using the token and ca.crt, it gets authenticated:
$ curl -X POST https://192.168.20.50:6000/authenticate --cacert /etc/kubernetes/pki/ca.crt -d '{"ApiVersion":"authentication.k8s.io/v1beta1", "Kind": "TokenReview", "Spec":{"Token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M"}}'
{"apiVersion":"authentication.k8s.io/v1beta1","kind":"TokenReview","status":{"authenticated":true,"user":{"username":"root","uid":"0","groups":["root"]}}}
But when it is requested by the apiserver, the TLS handshake fails.
My understanding is that the TLS verification is done against the certificate-authority file mentioned in webhook-config.yaml, right? If so, the verification should have succeeded with ca.crt, but it is failing.
Does that mean the apiserver performs validation using some other CA? Which CA does it use? How do I get past this TLS verification successfully?
Finally I've fixed this. The mistake I made was that I used the IP of the webhook server throughout (192.168.20.50 in this case). I replaced it with the FQDN of the webhook server machine and things worked out.
I changed the IP to the FQDN in the following places:
the CN while generating the server certificate
the server field in the kubeconfig file
$ cat webhook-config.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://ubuntuserver:6000/authenticate
  name: 192.168.20.50
users:
- name: root
  user:
    client-certificate: /etc/kubernetes/pki/client.crt
    client-key: /etc/kubernetes/pki/client.key
current-context: 192.168.20.50
contexts:
- context:
    cluster: 192.168.20.50
    user: root
  name: 192.168.20.50
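As an aside, the IP address probably could have worked as well if the server certificate had carried an IP SAN, since Go's TLS client (which the apiserver uses) matches IPs only against IP SAN entries, as I understand it. A sketch of the relevant openssl config section (assuming myconf.conf is the config file used above, and that the signing step applies these extensions):

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = ubuntuserver
IP.1 = 192.168.20.50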