Kubernetes authentication with a client certificate

I am trying to authenticate with a locally hosted Kubernetes cluster (v1.6.4) using a certificate.
This takes place in the context of using the Kubernetes plugin for Jenkins.
I am following the guidelines for Minikube in the Kubernetes-plugin README file which I adapted to my scenario:
Convert the client certificate to PKCS#12 format:
$ sudo openssl pkcs12 -export -out kubernetes.pfx -inkey /etc/kubernetes/pki/apiserver.key -in /etc/kubernetes/pki/apiserver.crt -certfile /etc/kubernetes/pki/ca.crt -passout pass:jenkins
In Jenkins, create credentials using a certificate
Kind: Certificate
Certificate: Upload PKCS#12 certificate and upload file kubernetes.pfx
Password: jenkins (as specified during certificate creation)
Manage Jenkins -> Add new cloud -> Kubernetes
Kubernetes URL: https://10.179.1.121:6443 (as output by kubectl config view)
Kubernetes server certificate key: paste the contents of /etc/kubernetes/pki/ca.crt.
Disable https certificate check: checked because the test setup does not have a signed certificate
Kubernetes Namespace: tried both default and kubernetes-plugin
Credentials: CN=kube-apiserver (i.e. the credentials created above)
Now when I click on Test Connection, this is the error message shown in the Jenkins Web UI:
Error connecting to https://10.179.1.121:6443: Failure executing: GET at: https://10.179.1.121:6443/api/v1/namespaces/kubernetes-plugin/pods. Message: Unauthorized.
The Jenkins logs show this message:
Sep 05, 2017 10:22:03 AM io.fabric8.kubernetes.client.Config tryServiceAccount
WARNING: Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
The documentation is, unfortunately, mostly limited to Kubernetes running on Minikube and to Google Cloud Engine, but I do not see a conceptual difference between the former and a locally hosted Kubernetes cluster.
The following Curl call for testing results in a very different error message:
$ curl --insecure --cacert /etc/kubernetes/pki/ca.crt --cert kubernetex.pfx:secret https://10.179.1.121:6443
User "system:anonymous" cannot get at the cluster scope.
More verbose:
$ curl -v --insecure --cacert /etc/kubernetes/pki/ca.crt --cert kubernetex.pfx:secret https://10.179.1.121:6443
* About to connect() to 10.179.1.121 port 6443 (#0)
* Trying 10.179.1.121...
* Connected to 10.179.1.121 (10.179.1.121) port 6443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found: kubernetex.pfx
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=kube-apiserver
* start date: Jun 13 11:33:55 2017 GMT
* expire date: Jun 13 11:33:55 2018 GMT
* common name: kube-apiserver
* issuer: CN=kubernetes
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.179.1.121:6443
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Content-Type: text/plain
< X-Content-Type-Options: nosniff
< Date: Tue, 05 Sep 2017 10:34:23 GMT
< Content-Length: 57
<
* Connection #0 to host 10.179.1.121 left intact
I have also set up a ServiceAccount:
$ kubectl describe serviceaccount --namespace=kubernetes-plugin
Name: default
Namespace: kubernetes-plugin
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-6qwj1
Tokens: default-token-6qwj1
Name: jenkins
Namespace: kubernetes-plugin
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: jenkins-token-1d623
Tokens: jenkins-token-1d623
This question deals with a related problem, recommending the use of either a ServiceAccount or a certificate, but the answer for the latter approach lacks details on how to tie an RBAC profile to that certificate. The Kubernetes documentation about authentication does not seem to cover this use case.
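My working assumption is that tying RBAC to the certificate means binding a role to the user name the apiserver takes from the certificate's CN (here kube-apiserver), roughly like below, though I have not verified it (cluster-admin is chosen purely for illustration):
$ kubectl create clusterrolebinding jenkins-cert-admin \
    --clusterrole=cluster-admin \
    --user=kube-apiserver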

The WARNING: Error reading service account token indicates that the key used to sign ServiceAccount tokens differs between kube-apiserver (--service-account-key-file) and kube-controller-manager (--service-account-private-key-file). If your kube-apiserver command line doesn't specify --service-account-key-file, the value of --tls-private-key-file is used instead, and I suspect that this is the issue.
I'd suggest always explicitly setting kube-apiserver's --service-account-key-file so that it matches the key pair given to kube-controller-manager's --service-account-private-key-file.
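To illustrate, with a kubeadm-style layout like the one in the question, the two components end up pointing at the same key pair, along these lines (the sa.pub/sa.key file names are kubeadm defaults and an assumption on my part):
# kube-apiserver
--service-account-key-file=/etc/kubernetes/pki/sa.pub
# kube-controller-manager
--service-account-private-key-file=/etc/kubernetes/pki/sa.key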

Related

NodePort Service for http/2 with TLS backend does not work

I have a backend app which implements RESTful APIs over http/2. My requirement is to expose the backend service to the host network and I do it using NodePort.
apiVersion: v1
kind: Service
metadata:
  name: gold-service
spec:
  selector:
    app: gold-app
  ports:
    - name: gold-port
      port: 12349
      nodePort: 32349
  type: NodePort
When the app runs without TLS, the service is accessible as expected from outside the cluster. However, when the app runs with TLS, the service is no longer accessible. I observe from packet capture that the TLS handshake begins but does not conclude successfully.
$ curl https://10.225.68.106:32349/api/v1/config -kv --cert <cert file> --key <key file>
* About to connect() to 10.225.68.106 port 32349 (#0)
* Trying 10.225.68.106...
* Connected to 10.225.68.106 (10.225.68.106) port 32349 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate from file
* subject: ...
* start date: Mar 29 07:10:42 2018 GMT
* expire date: Mar 26 07:10:42 2028 GMT
* common name: ...
* issuer: ...
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
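For what it's worth, a direct way to inspect the handshake and the ALPN/h2 negotiation against the NodePort would be something like the following (same host, port, and client cert as above):
$ openssl s_client -connect 10.225.68.106:32349 -alpn h2 -cert <cert file> -key <key file>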
I have gone through the discussion here. Is Ingress the only solution? Furthermore, I am curious to know how and why NodePort is not able to handle http/2 TLS traffic.

Hashicorp Consul - How to do verified TLS from Pods in Kubernetes cluster

I'm having some difficulty understanding Consul end-to-end TLS. For reference, I'm using Consul in Kubernetes (via the hashicorp/consul Helm chart). Only one datacenter and Kubernetes cluster - no external parties or concerns.
I have configured my override values.yaml file like so:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
All other values are as default from the shipped values.yaml file.
This works, and the Consul client logs suggest that all agents are connecting nicely using TLS, with the relevant certs and keys being created by (as I understand it) Consul's auto-encryption feature.
What I don't understand is how to initiate an HTTPS connection from an application running in a Pod on Kubernetes to a Consul server. Since the Pod's container does not (presumably) have the Consul root CA cert in its trust store, all HTTPS calls fail, as per the wget example below:
# Connect to Pod:
laptop$> kubectl exec -it my-pod sh
# Attempt valid HTTPS connection:
my-pod$> wget -q -O - https://consul.service.consul:8501
Connecting to consul.service.consul:8501 (10.110.1.131:8501)
ssl_client: consul.service.consul: certificate verification failed: unable to get local issuer certificate
wget: error getting response: Connection reset by peer
# Retry, but ignore certificate validity issues:
my-pod$> wget --no-check-certificate -q -O - https://consul.service.consul:8501/v1/status/leader
"10.110.1.131:8300"
How am I supposed to enforce end-to-end (verified) HTTPS connections from my apps on Kubernetes to Consul if the container does not recognize the certificate as valid?
Am I misunderstanding something about certificate propagation?
Many thanks - Aaron
Solved with thanks to Hashicorp on their Consul discussion forum.
Create a Kubernetes secret named consul with a key named CONSUL_GOSSIP_ENCRYPTION_KEY and an appropriate encryption key value.
Generate the value using consul keygen.
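A minimal sketch of this step, assuming consul and kubectl are available locally and that the secret is created in the namespace the chart will be installed into:
$ kubectl create secret generic consul \
    --from-literal=CONSUL_GOSSIP_ENCRYPTION_KEY=$(consul keygen)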
Install the hashicorp/consul Helm chart with a values-override.yaml such as the one below:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
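The install itself is then an ordinary Helm install pointing at that file, roughly as below (Helm 3 syntax; the release name and repo setup are my own choices, not mandated by the chart):
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install consul hashicorp/consul -f values-override.yaml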
Create an example Pod spec to represent our application.
Ensure it mounts the Consul server CA cert secret.
Ensure the Pod’s container has HOST_IP exposed as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: test-pod
spec:
  volumes:
    - name: consul-consul-ca-cert
      secret:
        secretName: consul-consul-ca-cert
  hostNetwork: false
  containers:
    - name: consul-test-pod
      image: alpine
      imagePullPolicy: IfNotPresent
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 24h; done"]
      volumeMounts:
        - name: consul-consul-ca-cert
          mountPath: /consul/tls/ca
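Assuming the spec above is saved as test-pod.yaml (the file name is arbitrary), creating the Pod is simply:
laptop$> kubectl apply -f test-pod.yaml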
Upon creation of the Pod, kubectl exec into it, and ensure the ca-certificates and curl packages are installed (I’m using Alpine Linux in this example).
(curl is purely for testing purposes)
#> apk update
#> apk add ca-certificates curl
Copy the mounted Consul server CA certificate into /usr/local/share/ca-certificates/ and execute update-ca-certificates to add it to the system root CA store.
#> cp /consul/tls/ca/tls.crt /usr/local/share/ca-certificates/consul-server-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul server is now accessible (and trusted) over HTTPS as below:
#> curl https://consul.service.consul:8501/v1/status/leader
## No TLS errors ##
We also want to talk to the Consul client (instead of the server) over HTTPS, for performance reasons.
Since the Consul client has its own CA cert, we need to retrieve that from the server.
This requires the consul-k8s binary, so we need to get that.
#> cd /usr/local/bin
#> wget https://releases.hashicorp.com/consul-k8s/0.15.0/consul-k8s_0.15.0_linux_amd64.zip # (or whatever latest version is)
#> unzip consul-k8s_0.15.0_linux_amd64.zip
#> rm consul-k8s_0.15.0_linux_amd64.zip
Get the Consul client CA cert and install it via update-ca-certificates:
#> consul-k8s get-consul-client-ca -server-addr consul.service.consul -server-port 8501 -output-file /usr/local/share/ca-certificates/consul-client-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul client is now accessible (and trusted) over HTTPS as below:
#> curl https://$HOST_IP:8501/v1/status/leader
## No TLS errors ##
We can also access the Consul KV service from the client without issue:
#> curl https://$HOST_IP:8501/v1/kv/foo/bar/baz
## No TLS errors ##
Naturally, all of the above should be automated by the implementer. These manual steps are purely for demonstration purposes.

Hyperledger Fabric CA: x509: certificate is valid for rca-ord, not localhost

We have started an instance of fabric-ca-server with the following settings in docker-compose.yml:
version: '2'
networks:
  test:
services:
  myservice:
    container_name: my-container
    image: hyperledger/fabric-ca
    command: /bin/bash -c "fabric-ca-server start -b admin:adminpw"
    environment:
      - FABRIC_CA_SERVER_HOME=/etc/hyperledger/fabric-ca
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_CSR_CN=rca-ord
      - FABRIC_CA_SERVER_CSR_HOSTS=rca-ord
      - FABRIC_CA_SERVER_DEBUG=true
    volumes:
      - ./scripts:/scripts
      - ./data:/data
    networks:
      - test
    ports:
      - 7054:7054
but when we try to enroll a user against this server using the command below:
root@fd85cc416f52:/# fabric-ca-client enroll -u https://user:userpw@localhost:7054 --tls.certfiles $FABRIC_CA_SERVER_HOME/tls-cert.pem
we get the error below:
2018/12/08 22:18:03 [INFO] TLS Enabled
2018/12/08 22:18:03 [INFO] generating key: &{A:ecdsa S:256}
2018/12/08 22:18:03 [INFO] encoded CSR
Error: POST failure of request: POST https://localhost:7054/enroll
{"hosts":["fd85cc416f52"],"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBQDCB6AIBADBcMQswCQYDVQQGEwJVUzEXMBUGA1UECBMOTm9ydGggQ2Fyb2xp\nbmExFDASBgNVBAoTC0h5cGVybGVkZ2VyMQ8wDQYDVQQLEwZGYWJyaWMxDTALBgNV\nBAMTBHVzZXIwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATREdPvOeaWG9TzaEyk\nhFXRnJFJouDXShr0D1745bCt/0n3qjpqviZiApd1t62VrpMX0j8DBa6tkF7C+rEr\nRvwnoCowKAYJKoZIhvcNAQkOMRswGTAXBgNVHREEEDAOggxmZDg1Y2M0MTZmNTIw\nCgYIKoZIzj0EAwIDRwAwRAIgASXupobxJia/FFlLiwYzYpacvSA6RiIc/LR/kvdB\nT8ICIA1nJ2RfHrwMhOWocxMAIuLUsBvKS3S5DIwCHp0/gBpn\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","CAName":""}: Post https://localhost:7054/enroll: x509: certificate is valid for rca-ord, not localhost
On the server side we can see the following message printed when the request is sent:
my-container | 2018/12/08 22:18:03 http: TLS handshake error from 127.0.0.1:56518: remote error: tls: bad certificate
we have also tried:
root@fd85cc416f52:/# ls $FABRIC_CA_SERVER_HOME
IssuerPublicKey IssuerRevocationPublicKey ca-cert.pem fabric-ca-server-config.yaml fabric-ca-server.db msp tls-cert.pem
root@fd85cc416f52:/# fabric-ca-client enroll -u https://user:userpw@localhost:7054 --tls.certfiles $FABRIC_CA_SERVER_HOME/ca-cert.pem
with the same result.
We are wondering if someone can tell us what is wrong here and how we can fix it. Thanks.
You have generated a TLS certificate on the server using FABRIC_CA_SERVER_CSR_HOSTS=rca-ord, but then you are sending your request to localhost in the URL you specify in the enroll command.
To get this to work, you should change your environment variable to also include 'localhost'. For example: FABRIC_CA_SERVER_CSR_HOSTS=rca-ord,localhost.
Delete the old TLS certificate and generate a new one, and it should work.
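Concretely, with the docker-compose.yml from the question that would mean changing the CSR hosts line to:
      - FABRIC_CA_SERVER_CSR_HOSTS=rca-ord,localhost
and then deleting the old tls-cert.pem (it sits under $FABRIC_CA_SERVER_HOME, as the ls output above shows) before recreating the container, for example:
$ docker-compose up -d --force-recreate myservice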

Can an insecure docker registry be given a CA signed certificate so that clients automatically trust it?

Currently, I have set up a registry in the following manner:
docker run -d \
-p 10.0.1.4:443:5000 \
--name registry \
-v `pwd`/certs/:/certs \
-v `pwd`/registry:/var/lib/registry \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/certificate.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/private.key \
registry:latest
Using Docker version 17.06.2-ce, build cec0b72
I have obtained my certificate.crt, private.key, and ca_bundle.crt from Let's Encrypt, and I have been able to establish HTTPS connections using these certs on an nginx server without having to explicitly trust the certificates on the client machine/browser.
Is it possible to set up a user experience with a Docker registry similar to that of a CA-certified website accessed via HTTPS, where the browser/machine trusts the root CA and those along the chain, including my certificates?
Note:
I can of course specify the certificate in each client's Docker configuration as described in this tutorial: https://docs.docker.com/registry/insecure/#use-self-signed-certificates. However, this is not an adequate solution for my needs.
Output of curl -v https://docks.behar.cloud/v2/:
* Trying 10.0.1.4...
* TCP_NODELAY set
* Connected to docks.behar.cloud (10.0.1.4) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: docks.behar.cloud
* Server certificate: Let's Encrypt Authority X3
* Server certificate: DST Root CA X3
> GET /v2/ HTTP/1.1
> Host: docks.behar.cloud
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 2
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Sun, 10 Sep 2017 23:05:01 GMT
<
* Connection #0 to host docks.behar.cloud left intact
Short answer: Yes.
My issue was caused by my OS not having built-in trust of the root certificates by which my SSL certificate was signed. This is likely due to the age of my OS. See the answer from Matt for more information.
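For anyone hitting the same thing: the usual fix is to refresh the operating system's CA bundle so that it contains the roots Let's Encrypt chains to. On a Debian/Ubuntu host that would look roughly like the line below (other distributions have equivalent packages and commands; this is illustrative rather than the exact steps I ran):
$ sudo apt-get update && sudo apt-get install --only-upgrade ca-certificates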
Docker will normally use the OS-provided CA bundle, so certificates signed by trusted roots should work without extra config.
Let's Encrypt certificates are cross-signed by an IdenTrust root certificate (DST Root CA X3), so most CA bundles should already trust their certificates. The Let's Encrypt root cert (ISRG Root X1) is also distributed but will not be as widespread due to it being more recent.
Docker 1.13+ will use the host system's CA bundle to verify certificates. Prior to 1.13 this may not happen if you have installed a custom root cert. So if you can use curl without any TLS warning, then docker commands should also work the same.
To have DTR recognize the certificates, you need to edit the configuration file so that your certs are specified correctly. DTR accepts, and has special parameters for, Let's Encrypt certs, which come with specific requirements of their own. You will need to create a configuration file and mount the appropriate directories, and then there should be no further issues with insecure-registry errors or unrecognized certs.
...
http:
  addr: localhost:5000
  prefix: /my/nested/registry/
  host: https://myregistryaddress.org:5000
  secret: asecretforlocaldevelopment
  relativeurls: false
  tls:
    certificate: /path/to/x509/public
    key: /path/to/x509/private
    clientcas:
      - /path/to/ca.pem
      - /path/to/another/ca.pem
    letsencrypt:
      cachefile: /path/to/cache-file
      email: emailused@letsencrypt.com
...

Why does kubelet communication with the apiserver over TLS ask for a password? (v1.3)

I deployed the apiserver using TLS on the master node and it worked fine. My question appeared when I was deploying the kubelet and trying to communicate with the apiserver.
The kubelet conf is as follows:
/opt/bin/kubelet \
--logtostderr=true \
--v=0 \
--api_servers=https://kube-master:6443 \
--address=0.0.0.0 \
--port=10250 \
--allow-privileged=false \
--tls-cert-file="/var/run/kubernetes/kubelet_client.crt" \
--tls-private-key-file="/var/run/kubernetes/kubelet_client.key" \
--kubeconfig="/var/lib/kubelet/kubeconfig"
/var/lib/kubelet/kubeconfig is the following:
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /var/run/kubernetes/kubelet_client.crt
    client-key: /var/run/kubernetes/kubelet_client.key
clusters:
- name: kube-cluster
  cluster:
    certificate-authority: /var/run/kubernetes/ca.crt
contexts:
- context:
    cluster: kube-cluster
    user: kubelet
  name: ctx-kube-system
current-context: ctx-kube-system
I want the communication to use two-way (both client and server) CA authentication, but the apiserver asks me to provide a username and password which I have never used before. Some command lines follow:
> kubectl version
> Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.2", GitCommit:"9bafa3400a77c14ee50782bb05f9efc5c91b3185", GitTreeState:"clean", BuildDate:"2016-07-17T18:30:39Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
> Please enter Username: kubelet
> Please enter Password: kubelet
> error: You must be logged in to the server (the server has asked for the client to provide credentials)
I tried all of this on the master and the minion. Could anyone please resolve this conundrum? Thanks in advance.
You have to enable client certificate authentication via the --client-ca-file flag on the apiserver.
From http://kubernetes.io/docs/admin/authentication/:
Client certificate authentication is enabled by passing the --client-ca-file=SOMEFILE option to apiserver. The referenced file must contain one or more certificate authorities to use to validate client certificates presented to the apiserver. If a client certificate is presented and verified, the common name of the subject is used as the user name for the request.
From http://kubernetes.io/docs/admin/kube-apiserver/:
--client-ca-file="": If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config="": The path to the cloud provider configuration file. Empty string for no configuration file.
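Applied to the setup in the question, that means starting the apiserver with something like the following (other flags elided; the binary path mirrors the kubelet example, and this assumes /var/run/kubernetes/ca.crt is the CA that issued kubelet_client.crt):
/opt/bin/kube-apiserver \
  ... \
  --client-ca-file=/var/run/kubernetes/ca.crt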