I am trying to perform Kubernetes webhook authentication using pam_hook, but I get a TLS handshake error at the webhook when the apiserver tries to contact it.
These are the steps I followed:
apiserver IP: 192.168.20.30
pam webhook server IP: 192.168.20.50
Created a server certificate for my pam webhook server using the ca.crt present in /etc/kubernetes/pki/ (on the master node):
openssl genrsa -out ubuntuserver.key 2048
openssl req -new -key ubuntuserver.key -out ubuntuserver.csr -config myconf.conf
openssl x509 -req -in ubuntuserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ubuntuserver.crt -days 10000
Started my pam webhook server using this certificate.
./pam_hook-master -cert-file /root/newca/ubuntuserver.crt -key-file /root/newca/ubuntuserver.key -signing-key rootroot -bind-port 6000
I1109 07:21:41.388836 3882 main.go:327] Starting pam_hook on :6000
Created a kubeconfig file as follows:
$ cat webhook-config.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://192.168.20.50:6000/authenticate
  name: 192.168.20.50
users:
- name: root
  user:
    client-certificate: /etc/kubernetes/pki/client.crt
    client-key: /etc/kubernetes/pki/client.key
current-context: 192.168.20.50
contexts:
- context:
    cluster: 192.168.20.50
    user: root
  name: 192.168.20.50
Configured the apiserver manifest file
/etc/kubernetes/manifests/kube-apiserver.yaml:
...
- --authentication-token-webhook-config-file=/etc/kubernetes/pki/webhook-config.yaml
- --runtime-config=authorization.k8s.io/v1beta1=true
...
The apiserver gets restarted.
Obtained a token by requesting the pam_hook server (I did this from my master node):
$ curl https://192.168.20.50:6000/token --cacert /etc/kubernetes/pki/ca.crt -u root
Enter host password for user 'root':
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M
Made a request to the apiserver using this token, which in turn should contact the webhook to authenticate it. But I am getting a 401 error.
$ curl -vvv --insecure -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M" https://192.168.20.38:6443/api/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
Message at the webhook server is:
> 2017/11/09 07:50:53 http: TLS handshake error from 192.168.20.38:49712: remote error: tls: bad certificate
The documentation says that the apiserver sends an HTTP POST request with the token in its payload. If I recreate the same call with curl using the token and ca.crt, it gets authenticated.
> $ curl -X POST https://192.168.20.50:6000/authenticate --cacert /etc/kubernetes/pki/ca.crt -d '{"ApiVersion":"authentication.k8s.io/v1beta1", "Kind": "TokenReview", "Spec":{"Token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M"}}'
{"apiVersion":"authentication.k8s.io/v1beta1","kind":"TokenReview","status":{"authenticated":true,"user":{"username":"root","uid":"0","groups":["root"]}}}
But when the apiserver makes the request, the TLS handshake fails.
My understanding is that TLS verification is done against the certificate-authority file mentioned in webhook-config.yaml, right? If so, the verification should have succeeded with ca.crt, but it is failing.
Does that mean the apiserver is validating against some other CA? Which CA does it use? How do I get past this TLS verification?
I finally fixed this. The mistake I made was using the IP of the webhook server throughout (192.168.20.50 in this case). I replaced it with the FQDN of the webhook server machine and things worked out.
I changed the IP to the FQDN in the following places:
- the CN used when generating the server certificate
- the server: field in the kubeconfig file
$ cat webhook-config.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://ubuntuserver:6000/authenticate
  name: 192.168.20.50
users:
- name: root
  user:
    client-certificate: /etc/kubernetes/pki/client.crt
    client-key: /etc/kubernetes/pki/client.key
current-context: 192.168.20.50
contexts:
- context:
    cluster: 192.168.20.50
    user: root
  name: 192.168.20.50
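For reference, an alternative that keeps both the IP and the hostname working is to put them in a subjectAltName when generating the server certificate. A rough sketch only: the file name myconf.conf comes from the original steps, but its contents below are my assumption.
cat > myconf.conf <<'EOF'
[ req ]
prompt             = no
distinguished_name = dn
req_extensions     = v3_req
[ dn ]
CN = ubuntuserver
[ v3_req ]
subjectAltName = DNS:ubuntuserver, IP:192.168.20.50
EOF
openssl genrsa -out ubuntuserver.key 2048
openssl req -new -key ubuntuserver.key -out ubuntuserver.csr -config myconf.conf
# -extfile/-extensions are needed so the SAN is copied into the signed certificate.
openssl x509 -req -in ubuntuserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out ubuntuserver.crt -days 10000 -extfile myconf.conf -extensions v3_req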
Related
Hi team, I have followed this link to configure cert-manager for my Istio setup, but I am still not able to access the app through the Istio ingress.
My manifest files look like this:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-cert
  namespace: testing
spec:
  secretName: test-cert
  dnsNames:
  - "example.com"
  issuerRef:
    name: test-letsencrypt
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: test-letsencrypt
  namespace: testing
spec:
  acme:
    email: abc@example.com
    privateKeySecretRef:
      name: testing-letsencrypt-private-key
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: istio
      selector: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    certmanager.k8s.io/cluster-issuer: test-letsencrypt
  name: test-gateway
  namespace: testing
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "example.com"
    tls:
      mode: SIMPLE
      credentialName: test-cert
Can anyone help me with what I am missing here?
Error from the browser:
Secure Connection Failed
An error occurred during a connection to skydeck-test.asteria.co.in. PR_CONNECT_RESET_ERROR
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the web site owners to inform them of this problem.
These are a few logs that may be helpful:
Normal Generated 5m13s cert-manager Stored new private key in temporary Secret resource "test-cert-sthkc"
Normal Requested 5m13s cert-manager Created new CertificateRequest resource "test-cert-htxcr"
Normal Issuing 4m33s cert-manager The certificate has been successfully issued
samirparhi@Samirs-Mac ~ % k get certificate -n testing
NAME READY SECRET AGE
test-cert True test-cert 19m
Note: this namespace (testing) has Istio sidecar injection enabled, and all HTTP requests work, but HTTPS fails when I try to set it up.
I encountered the same problem when my certificate was not issued by a trusted third party but was instead signed by me. I had to add an exception in my browser in order to access the site. So, essentially, a money issue.
I was also able to add my certificate to the /etc/ssl directory of the client machine and connect without problems.
I was also able to add certificates by using TLS secrets and adding them to my virtual service configuration. You can try that too.
Examples:
TLS Secret:
kubectl create -n istio-system secret tls my-tls-secret --key=www.example.com.key --cert=www.example.com.crt
I assumed that you already have your certificate and its key, but in case you need to create them:
Certificate creation:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=My Company Inc./CN=example.com' -keyout example.com.key -out example.com.crt
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=World Wide Example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Just don't forget to fill in the -subj fields in a reasonable manner; as I understand it, they are what carries the identity in SSL certificates. For example, the first openssl line above creates a key and certificate for your organisation, which is not approved by any authority and therefore is not in Mozilla's, Chrome's, or the OS's trusted CA store.
That is why you get the "Untrusted certificate" message. To avoid it, you can create your DNS records in a trusted third party's DNS zone and, by paying them, use their trusted organisation certificates to authenticate your own site.
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: mygateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: my-tls-secret # must be the same as secret
hosts:
- www.example.com
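The virtual service configuration mentioned earlier is not shown above; a minimal sketch of a VirtualService that routes traffic from that gateway to a backend service could look like this (the my-app service name and port 8080 are assumptions, not from the original answer):
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - www.example.com
  gateways:
  - mygateway
  http:
  - route:
    - destination:
        host: my-app        # Kubernetes Service name (assumed)
        port:
          number: 8080      # Service port (assumed)
EOF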
Hope it helps.
Feel free to share "app" details.
I'm trying to sort out certificates for applications deployed to a k8s cluster (running on docker-for-win, WSL2 on Windows 10 20H2).
I would like to use the DNS to connect to services, e.g. registry.default.svc.cluster.local, which I've verified is reachable. I created a certificate by following these steps:
Create an openssl.conf with content
[ req ]
default_bits = 2048
prompt = no
encrypt_key = no
distinguished_name = req_dn
req_extensions = req_ext
[ req_dn ]
CN = *.default.svc.cluster.local
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = *.default.svc.cluster.local
Create csr and key file with openssl req -new -config openssl.conf -out wildcard.csr -keyout wildcard.key
Created a certificate signing request with
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: wildcard_csr
spec:
  groups:
  - system:authenticated
  request: $(cat wildcard.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Approved the request: kubectl certificate approve wildcard_csr
Extracted the crt file: kubectl get csr wildcard_csr -o jsonpath='{.status.certificate}' | base64 -d > wildcard.crt
Deleted the request kubectl delete csr wildcard_csr.
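One thing worth checking along the way (not part of the original steps) is that the SAN from openssl.conf actually made it into the CSR and into the signed certificate:
openssl req -noout -text -in wildcard.csr | grep -A1 "Subject Alternative Name"
openssl x509 -noout -text -in wildcard.crt | grep -A1 "Subject Alternative Name"
# Both should show something like: DNS:*.default.svc.cluster.local
# (if it is missing from wildcard.crt, clients will reject the certificate by name)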
I've then started a pod with the registry:2 image and configured it to use the wildcard.crt and wildcard.key files.
In a different pod, I then tried to push to that registry, but got the error
Error response from daemon: Get https://registry.default.svc.cluster.local:2100/v2/: x509: certificate signed by unknown authority
So it seems that within the pod, the k8s CA isn't trusted. Am I right with this observation? If so, how can I make k8s trust itself (after all, it was a k8s component that signed the certificate)?
I found a way to achieve this with changes to the YAML only: on my machine (not sure how universal that is), the CA certificate is available in the service-account-token secret default-token-7g75m (run kubectl describe secrets to find the name; look for a secret of type kubernetes.io/service-account-token that contains a ca.crt entry).
So to trust this certificate, add a volume
name: "kube-certificate"
secret:
secretName: "default-token-7g75m"
and to the pod that requires the certificate, add a volumeMount
name: "kube-certificate"
mountPath: "/etc/ssl/certs/kube-ca.crt",
subPath: "ca.crt"
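Put together, a minimal pod sketch that wires in the volume and volumeMount from above could look like this (the pod name, container name, and image are assumptions; the secret name default-token-7g75m is the one from my cluster and will differ on yours):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ca-trust-example                 # assumed name
spec:
  volumes:
  - name: kube-certificate
    secret:
      secretName: default-token-7g75m    # service-account token secret that carries ca.crt
  containers:
  - name: my-client                      # whichever container needs to trust the cluster CA (assumed)
    image: alpine:3.12                   # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: kube-certificate
      mountPath: /etc/ssl/certs/kube-ca.crt
      subPath: ca.crt
EOF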
I am new to Kubernetes and I finally figured out how to launch the metrics-server as documented in kubernetes-sigs/metrics-server. In case anyone else wonders: you need to deploy it on the master node and also have at least one worker in the cluster.
So I get this error:
E0818 15:25:22.835094 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:<hostname-master>: unable to fetch metrics from Kubelet <hostname-master> (<hostname-master>): Get https://<hostname-master>:10250/stats/summary?only_cpu_and_memory=true: x509: certificate signed by unknown authority, unable to fully scrape metrics from source kubelet_summary:<hostname-worker>: unable to fetch metrics from Kubelet <hostname-worker> (<hostname-worker>): Get https://<hostname-worker>:10250/stats/summary?only_cpu_and_memory=true: x509: certificate signed by unknown authority]
I am using my own CAs (not self signed) and I have modified the components.yml file (sample):
args:
- --cert-dir=/tmp/metricsServerCas
- --secure-port=4443
- --kubelet-preferred-address-types=Hostname
I know that I can disable TLS verification by using the --kubelet-insecure-tls flag, and I have already tried it. I want to use my own CAs for extra security.
I have seen many other relevant questions, a few samples e.g.:
x509 certificate signed by unknown authority- Kubernetes and kubectl unable to connect to server: x509: certificate signed by unknown authority
Although I have already applied chown to my $HOME/.kube/config, I still see this error.
Where am I going wrong?
Update: On the worker I created a directory, e.g. /tmp/ca, and added the CA file(s) to it.
I am not really good with mount points yet and I assume that I am doing something wrong. The default manifest can be found at kubernetes-sigs/metrics-server/v0.3.7 (see the components.yml file).
I created the directory /tmp/ca on my worker and modified the flag to --cert-dir=/tmp/ca and the mountPath to /tmp/ca.
When I deploy the file, e.g.:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
I keep getting this error from the metrics-server-xxxx pod:
panic: open /tmp/client-ca-file805316981: read-only file system
Although I have given full access to the directory e.g.:
$ ls -la /tmp/ca
total 8
drwxr-xr-x. 2 user user 20 Aug 19 16:59 .
drwxrwxrwt. 18 root root 4096 Aug 19 17:34 ..
-rwxr-xr-x. 1 user user 1025 Aug 19 16:59 ca.crt
I am not sure where I am going wrong.
How is it meant to be configured so that someone can use non-self-signed certificates? I can see that most people disable TLS, which I would like to avoid.
Sample of my args in the image:
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7
        imagePullPolicy: IfNotPresent
        args:
        - --cert-dir=/tmp/ca
        - --secure-port=4443
        - --kubelet-preferred-address-types=Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp/ca
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
Update 2: Adding curl command from Master to Worker including error output:
$ curl --cacert /etc/kubernetes/pki/ca.crt https://node_hostname:10250/stats/summary?only_cpu_and_memory=true
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Posting this answer as a community wiki to give better visibility as the solution was posted in the comments.
The version that I used before was 1.18.2 with metrics-server v0.3.6, deployed through kubeadm. Yes, all the requirements were exactly as in metrics-server/requirements. The good news is that I got it running by upgrading my k8s version to 1.19.0 and using the latest metrics-server version, v0.3.7. It works with self-signed certificates.
The issue was resolved by upgrading:
Kubernetes: 1.18.2 -> 1.19.0
Metrics-server: 0.3.6 -> 0.3.7
This upgrade allowed to run metrics-server with tls enabled (self-signed certificates).
Additional resources that could help when deploying metrics-server with tls:
Github.com: Kubernetes-sigs: Metrics-server: FAQ: How to run metrics-server-securely
How to run metrics-server securely?
Suggested configuration:
Cluster with RBAC enabled
Kubelet read-only port disabled
Validate kubelet certificate by mounting CA file and providing --kubelet-certificate-authority flag to metrics server
Avoid passing insecure flags to metrics server (--deprecated-kubelet-completely-insecure, --kubelet-insecure-tls)
Consider using your own certificates (--tls-cert-file, --tls-private-key-file)
Github.com: Metrics-server: x509: certificate signed by unknown authority
Ftclausen.github.io: Setting up K8S with metrics-server
Create a configmap to store the CA certificate which was used to generate the kubelet serving certificates:
kubectl -n kube-system create configmap ca --from-file=ca.crt=/etc/kubernetes/pki/ca.crt -o yaml
Then use a volume and volumeMount to consume it in the metrics-server pod:
spec:
  volumes:
  - emptyDir: {}
    name: tmp-dir
  - configMap:
      defaultMode: 420
      name: ca
    name: ca-dir
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-certificate-authority=/ca/ca.crt
    - --kubelet-preferred-address-types=Hostname
    volumeMounts:
    - mountPath: /tmp
      name: tmp-dir
    - mountPath: /ca
      name: ca-dir
You can follow the same approach and use --tls-cert-file and --tls-private-key-file to serve your own certificate instead of a self-signed one.
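For that last point, a rough sketch of serving your own certificate (the secret name metrics-server-certs, the file names, and the /certs mount path are assumptions):
# Create a TLS secret from your own serving cert/key:
kubectl -n kube-system create secret tls metrics-server-certs \
  --cert=metrics-server.crt --key=metrics-server.key
# Mount that secret into the metrics-server container (e.g. at /certs) and add to its args:
#   - --tls-cert-file=/certs/tls.crt
#   - --tls-private-key-file=/certs/tls.key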
For my friends on EKS make sure you have the username set (and not set to just the session name like I did):
robert ❱ kubectl get configmaps -n kube-system aws-auth -o yaml | grep MyTeamRole$ -A 3
- rolearn: arn:aws:iam::123546789012:role/MyTeamRole
  username: {{SessionName}}
  groups:
    - system:masters
robert ❱ kubectl top node
error: You must be logged in to the server (Unauthorized)
robert ❱ 1 ❱ kubectl edit configmap -n kube-system aws-auth
configmap/aws-auth edited
robert ❱ kubectl get configmaps -n kube-system aws-auth -o yaml | grep MyTeamRole$ -A 3
- rolearn: arn:aws:iam::123546789012:role/MyTeamRole
  username: literally_anything:{{SessionName}}
  groups:
    - system:masters
robert ❱ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ip-10-0-3-103.us-west-2.compute.internal 341m 17% 1738Mi 52%
...
robert ❱ kubectl logs -n kube-system -l app.kubernetes.io/instance=metrics-server
E0407 22:34:45.879156 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=801591513699736721, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:34:49.399854 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=801591513699736721, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:34:50.691133 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=3949940469908359789, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:34:51.827629 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=3949940469908359789, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:39:07.288163 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=3949940469908359789, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:39:08.755492 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=801591513699736721, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:39:09.801957 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=801591513699736721, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:40:32.405458 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=801591513699736721, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:43:09.791769 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=3949940469908359789, SKID=, AKID= failed: x509: certificate signed by unknown authority"
E0407 22:44:14.244221 1 authentication.go:63] "Unable to authenticate the request" err="verifying certificate SN=3949940469908359789, SKID=, AKID= failed: x509: certificate signed by unknown authority"
robert ❱
I need to generate my own SSL certificates for the Kubernetes cluster components (apiserver, apiserver-kubelet-client, apiserver-etcd-client, front-proxy-client, etc.). The reason for this is that the validity period for those certificates is set to 1 year by default and I need a validity of more than one year, for business reasons. When I generated my own set of certificates and initialized the cluster, everything worked perfectly: pods in the kube-system namespace started and communication with the apiserver worked. But I found that some commands like kubectl logs, kubectl port-forward, or kubectl exec stopped working and started throwing the following errors:
kubectl logs <kube-apiserver-pod> -n kube-system
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log <kube-apiserver-pod>))
or
kubectl exec -it <kube-apiserver-pod> -n kube-system sh
error: unable to upgrade connection: Unauthorized
However, the docker exec command to log into the k8s_apiserver container works properly.
During my debugging I found out that only the self-generated apiserver-kubelet-client key/cert pair causes this cluster behaviour.
Below is the process I used to generate and use my own cert/key pair for apiserver-kubelet-client.
I initialized the Kubernetes cluster so it placed its own certificates into the /etc/kubernetes/pki folder by running kubeadm init ...
Made a backup of the /etc/kubernetes/pki folder into /tmp/pki_k8s.
Opened apiserver-kubelet-client.crt with openssl to check all the set extensions, CN, O, etc.:
openssl x509 -noout -text -in /tmp/pki_k8s/apiserver-kubelet-client.crt
To ensure the same extensions and CN/O parameters appear in the certificate generated by me, I created a .conf file for the extensions and a .csr file for the CN and O:
cd /tmp/pki_k8s/
cat <<-EOF_api_kubelet_client-ext > apiserver_kubelet_client-ext.conf
[ v3_ca ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF_api_kubelet_client-ext
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters,CN=kube-apiserver-kubelet-client"
Finally I generated my own apiserver-kubelet-client.crt. For this I reused the existing apiserver-kubelet-client.key and the ca.crt/ca.key generated by the K8s initialization:
openssl x509 -req -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -sha256 -out apiserver-kubelet-client.crt -extensions v3_ca -extfile apiserver_kubelet_client-ext.conf -days 3650
Once I had generated my own apiserver-kubelet-client.crt, which overrides the one previously generated by the k8s initialization itself, I reset the Kubernetes cluster by running kubeadm reset. This purged the /etc/kubernetes folder.
Then I copied all the certificates from /tmp/pki_k8s back into /etc/kubernetes/pki
and reinitialized the K8s cluster with kubeadm init ...
During that I saw that the K8s cluster used the already existing certificates stored in /etc/kubernetes/pki for setup:
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
After that, the K8s cluster is up; I can list pods, describe them, create deployments, etc. However, I am not able to check logs or run exec commands, as described above.
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-kjkp9 1/1 Running 0 2m
coredns-78fcdf6894-q88lx 1/1 Running 0 2m
...
kubectl logs <apiserver_pod> -n kube-system -v 7
I0818 08:51:12.435494 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.436355 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.438413 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.447751 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.448109 12811 round_trippers.go:383] GET https://<HOST_IP>:6443/api/v1/namespaces/kube-system/pods/<apiserver_pod>
I0818 08:51:12.448126 12811 round_trippers.go:390] Request Headers:
I0818 08:51:12.448135 12811 round_trippers.go:393] Accept: application/json, */*
I0818 08:51:12.448144 12811 round_trippers.go:393] User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f
I0818 08:51:12.462931 12811 round_trippers.go:408] Response Status: 200 OK in 14 milliseconds
I0818 08:51:12.471316 12811 loader.go:359] Config loaded from file /root/.kube/config
I0818 08:51:12.471949 12811 round_trippers.go:383] GET https://<HOST_IP>:6443/api/v1/namespaces/kube-system/pods/<apiserver_pod>/log
I0818 08:51:12.471968 12811 round_trippers.go:390] Request Headers:
I0818 08:51:12.471977 12811 round_trippers.go:393] Accept: application/json, */*
I0818 08:51:12.471985 12811 round_trippers.go:393] User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f
I0818 08:51:12.475827 12811 round_trippers.go:408] Response Status: 401 Unauthorized in 3 milliseconds
I0818 08:51:12.476288 12811 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server has asked for the client to provide credentials ( pods/log <apiserver_pod>)",
"reason": "Unauthorized",
"details": {
"name": "<apiserver_pod>",
"kind": "pods/log"
},
"code": 401
}]
F0818 08:51:12.476325 12811 helpers.go:119] error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log <apiserver_pod>))
See kubelet service file below:
[root@qa053 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="CA_CLIENT_CERT=--client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELE=--rotate-certificates=true"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CERTIFICATE_ARGS $CA_CLIENT_CERT
Do you have any ideas? :)
Thanks
Best Regards
I found out the reason why it did not work.
When creating the .csr file I used this:
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters,CN=kube-apiserver-kubelet-client"
But the -subj had wrong formatting, which caused problems with parsing the right CN from the certificate. Instead of "/O=system:masters,CN=kube-apiserver-kubelet-client" it needs to be:
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"
Certificates generated from both .csr files look the same in the -text view, but they act differently.
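A quick way to see the difference (this check is mine, not from the original post) is to print the parsed subject of each CSR with one field per line:
openssl req -noout -subject -nameopt multiline -in apiserver-kubelet-client.csr
# With the broken -subj the comma stays inside the O value, so there is no CN field at all:
#     organizationName = system:masters,CN=kube-apiserver-kubelet-client
# With "/O=system:masters/CN=kube-apiserver-kubelet-client" the output shows
# organizationName and commonName as separate fields, which is what the apiserver expects.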
I am using the following version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Here, I am trying to authenticate a user using x509 certs with the custom script below, which I created by looking into a few online forums and the Kubernetes docs.
#!/bin/bash
cluster=test-operations-k8
namespace=demo
username=jack
openssl genrsa -out $username.pem 2048
openssl req -new -key $username.pem -out $username.csr -subj "/CN=$username"
cat <<EOF | kubectl create -n $namespace -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user-request-$username
spec:
  groups:
  - system:authenticated
  request: $(cat $username.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
kubectl certificate approve user-request-$username
kubectl get csr user-request-$username -o jsonpath='{.status.certificate}' | base64 -d > $username.crt
kubectl --kubeconfig ~/.kube/config-$username config set-cluster $cluster --insecure-skip-tls-verify=true --server=https://$cluster.eastus.cloudapp.azure.com
kubectl --kubeconfig ~/.kube/config-$username config set-credentials $username --client-certificate=$username.crt --client-key=$username.pem --embed-certs=true
kubectl --kubeconfig ~/.kube/config-$username config set-context $cluster --cluster=$cluster --user=$username
kubectl --kubeconfig ~/.kube/config-$username config use-context $cluster
echo "Config file for $username has been created successfully !"
But while getting resources I get the below error:
error: You must be logged in to the server (Unauthorized)
Can someone please advise what needs to be done to fix this issue ?
Also, please note that the appropriate roles and rolebindings have been created as well; I have not listed them here.
Make sure the CA used to sign the CSRs (the --cluster-signing-cert-file file given to kube-controller-manager) is in the --client-ca-file bundle given to kube-apiserver (which is what authenticates client certs presented to the apiserver)
Also ensure the certificate requested is a client certificate (has client auth in the usages field)
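For that second point, in the script above this means requesting client auth instead of server auth in the CertificateSigningRequest; a sketch of the corrected block:
cat <<EOF | kubectl create -n $namespace -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user-request-$username
spec:
  groups:
  - system:authenticated
  request: $(cat $username.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth        # was "server auth" in the original script
EOF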