Why does kubelet communication with the apiserver over TLS ask for a password? (Kubernetes v1.3, SSL)

I deployed the apiserver with TLS on the master node and it worked fine. My question arose when deploying the kubelet and trying to get it to communicate with the apiserver.
The kubelet configuration is as follows:
/opt/bin/kubelet \
--logtostderr=true \
--v=0 \
--api_servers=https://kube-master:6443 \
--address=0.0.0.0 \
--port=10250 \
--allow-privileged=false \
--tls-cert-file="/var/run/kubernetes/kubelet_client.crt" \
--tls-private-key-file="/var/run/kubernetes/kubelet_client.key" \
--kubeconfig="/var/lib/kubelet/kubeconfig"
/var/lib/kubelet/kubeconfig contains the following:
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /var/run/kubernetes/kubelet_client.crt
    client-key: /var/run/kubernetes/kubelet_client.key
clusters:
- name: kube-cluster
  cluster:
    certificate-authority: /var/run/kubernetes/ca.crt
contexts:
- context:
    cluster: kube-cluster
    user: kubelet
  name: ctx-kube-system
current-context: ctx-kube-system
I want the communication to use two-way (both client and server) CA authentication and expected it to simply work, but the apiserver asks me to provide a username and password that I have never set up. Some command lines follow:
> kubectl version
> Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.2", GitCommit:"9bafa3400a77c14ee50782bb05f9efc5c91b3185", GitTreeState:"clean", BuildDate:"2016-07-17T18:30:39Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
> Please enter Username: kubelet
> Please enter Password: kubelet
> error: You must be logged in to the server (the server has asked for the client to provide credentials)
I tried all of this on both the master and the minion. Could anyone please resolve this conundrum? Thanks in advance.

You have to enable client certificate authentication via the --client-ca-file flag on the apiserver.
From http://kubernetes.io/docs/admin/authentication/:
Client certificate authentication is enabled by passing the --client-ca-file=SOMEFILE option to apiserver. The referenced file must contain one or more certificates authorities to use to validate client certificates presented to the apiserver. If a client certificate is presented and verified, the common name of the subject is used as the user name for the request.
From http://kubernetes.io/docs/admin/kube-apiserver/:
--client-ca-file="": If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--cloud-config="": The path to the cloud provider configuration file. Empty string for no configuration file.
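For illustration, a minimal sketch of the relevant apiserver flags; the paths are assumptions mirroring the kubeconfig above, and all other required flags are omitted:
/opt/bin/kube-apiserver \
  --client-ca-file=/var/run/kubernetes/ca.crt \
  --tls-cert-file=/var/run/kubernetes/server.crt \
  --tls-private-key-file=/var/run/kubernetes/server.key
# --client-ca-file must point at the CA that signed kubelet_client.crt;
# the client cert's subject CN ("kubelet") then becomes the authenticated username,
# and no username/password prompt should appear.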

Related

Prometheus Discovering Services with Consul: tls: bad certificate

I want to make use of Consul with Prometheus, but I receive a tls: bad certificate error.
See:
caller=consul.go:513 level=error component="discovery manager scrape" discovery=consul msg="Error refreshing service" service=NodeExporter tags= err="Get \"https://consul.service.dc1.consul:8500/v1/health/service/NodeExporter?dc=dc1&stale=&wait=120000ms\": remote error: tls: bad certificate"
At the same time, running the same request manually with curl gives the expected output:
curl -v -s -X GET "https://consul.service.dc1.consul:8500/v1/health/service/NodeExporter?dc=dc1&stale=&wait=120000ms" --key /secrets/consul.key --cert /secrets/consul.pem --cacert /secrets/cachain.pem
[{"Node":{"ID":"e53188ef-16ec-xxxx-xxxx-xxxx","Node":"dc1-runner-dev-1.test.io","Address":"30.10.xx.xx","Datacenter":"dc1","TaggedAddresses":{"lan":"30.10.xx.xx","lan_ipv4":"30.10.xx.xx","wan":"30.10.xx.xx","wan_ipv4":"30.10.xx.xx"},"Meta":{"consul-network-segment":""},"CreateIndex":71388,"ModifyIndex":71391},"Service":{"ID":"dc1-runner-dev-1.test.io-NodeExporter","Service":"NodeExporter","Tags":["service=node_exporter","environment=dev","datacenter=dc1"]...
More details from the curl debug output can be found here:
LINK
Prometheus (version 2.31.1) is running in Docker; I execute the curl command from inside the same container.
Here is the Prometheus config:
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: "node_exporter"
    consul_sd_configs:
      - server: "consul.service.dc1.consul:8500"
        scheme: "https"
        datacenter: "dc1"
        services: ["NodeExporter"]
    tls_config:
      ca_file: "/secrets/cachain.pem"
      cert_file: "/secrets/consul.pem"
      key_file: "/secrets/consul.key"
Prometheus is able to access the specified certificate files.
I have also tried adding the insecure_skip_verify property to the Prometheus config file; I receive the same error.
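One detail worth noting: the connection that consul_sd_configs makes to Consul is governed by a tls_config nested inside the consul_sd_configs entry, not by the job-level tls_config (which applies to scraping the discovered targets). A hedged sketch of where the certificates and insecure_skip_verify would go for the SD request:
    consul_sd_configs:
      - server: "consul.service.dc1.consul:8500"
        scheme: "https"
        tls_config:
          ca_file: "/secrets/cachain.pem"
          cert_file: "/secrets/consul.pem"
          key_file: "/secrets/consul.key"
          insecure_skip_verify: true  # debugging only: skips verification of the Consul server cert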
The steps by which the certificates are created:
1. Create an offline self-signed root CA using Ansible modules from the community.crypto collection.
2. Create a CSR and sign Intermediate CA1 with that root CA.
3. Upload Intermediate CA1 and the corresponding key into a PKI secrets engine in Hashicorp Vault.
4. Inside Vault PKI, create a new CSR and use Intermediate CA1 to sign Intermediate CA2.
5. Create a PKI role.
6. The certificates used by Prometheus are leaf certificates of Intermediate CA2, issued against the mentioned PKI role.
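One quick consistency check, assuming /secrets/cachain.pem contains the full chain (root plus both intermediates), is to verify the leaf certificate against it with openssl:
openssl verify -CAfile /secrets/cachain.pem /secrets/consul.pem
# prints "/secrets/consul.pem: OK" when the chain is complete and consistent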
The output of the openssl x509 -text command for the certificates in use can be seen here
Any ideas what I am missing here?

Consul Helm TLS error: unknown PEM block type for signing key: CERTIFICATE

I'm trying to understand this error. I am deploying Consul with TLS, with both Consul servers and clients. In the tls-init containers I get this error:
kubectl logs consul-consul-tls-init-b2rfv
==> WARNING: Server Certificates grants authority to become a
server and access all state in the cluster including root keys
and all ACL tokens. Do not distribute them to production hosts
that are not server nodes. Store them as securely as CA keys.
==> Using /consul/tls/ca/cert/tls.crt and /consul/tls/ca/key/tls.key
unknown PEM block type for signing key: CERTIFICATE
I have tried creating the CA certificate and key in a number of ways: first with openssl, then with cfssl, and finally with the consul CLI. All give the same error.
As best I can tell, the volumes are mounted from the secrets. Here is an example of the values.yaml I am deploying Consul with through Helm 3:
global:
  gossipEncryption:
    secretName: "gossip"
    secretKey: "key"
  tls:
    enabled: true
    verify: false # only for troubleshooting, hoping it would help; also tried with true
    caCert:
      secretName: "consul-tls-ca"
      secretKey: "tls.crt"
    caKey:
      secretName: "consul-server-tls"
      secretKey: "tls.crt"
Examples of how I create my gossip and TLS secrets:
export GOSSIP_ENCRYPTION_KEY=$(consul keygen)
kubectl create secret generic gossip --from-literal="key=${GOSSIP_ENCRYPTION_KEY}"
kubectl create secret generic consul-tls-ca --from-file="tls.crt=./ca.pem"
kubectl create secret generic consul-server-tls --from-file="tls.crt=./server.pem" --from-file="tls.key=./server-key.pem"
I have not found any similar errors reported by others via Google or searching SO. Hashicorp's documentation says nothing about it, or I have not found it.
They fixed it in Consul 1.10; previously it recognized only EC keys. More in https://github.com/hashicorp/consul/issues/7622
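As a quick diagnostic, it is also worth checking which PEM block types the files actually contain, since the error says the signing key file holds a CERTIFICATE block rather than a private key. A sketch, using the file names from the secret-creation commands above:
grep -h 'BEGIN' ca.pem server.pem server-key.pem
# a key file should show e.g. "-----BEGIN EC PRIVATE KEY-----",
# not "-----BEGIN CERTIFICATE-----"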

Hashicorp Consul - How to do verified TLS from Pods in Kubernetes cluster

I'm having some difficulty understanding Consul end-to-end TLS. For reference, I'm using Consul in Kubernetes (via the hashicorp/consul Helm chart). Only one datacenter and Kubernetes cluster - no external parties or concerns.
I have configured my override values.yaml file like so:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
All other values are as default from the shipped values.yaml file.
This works, and the Consul client logs suggest that all agents are connecting nicely over TLS, with the relevant certs and keys being created by (as I understand it) Consul's auto-encryption feature.
What I don't understand is how to initiate an HTTPS connection from an application on Kubernetes, running in a Pod, to a Consul server. Since the Pod's container does not (presumably) have the Consul root CA cert in its trust store, all HTTPS calls fail, as per the wget example below:
# Connect to Pod:
laptop$> kubectl exec -it my-pod sh
# Attempt valid HTTPS connection:
my-pod$> wget -q -O - https://consul.service.consul:8501
Connecting to consul.service.consul:8501 (10.110.1.131:8501)
ssl_client: consul.service.consul: certificate verification failed: unable to get local issuer certificate
wget: error getting response: Connection reset by peer
# Retry, but ignore certificate validity issues:
my-pod$> wget --no-check-certificate -q -O - https://consul.service.consul:8501/v1/status/leader
"10.110.1.131:8300"
How am I supposed to enforce end-to-end (verified) HTTPS connections from my apps on Kubernetes to Consul if the container does not recognize the certificate as valid?
Am I misunderstanding something about certificate propagation?
Many thanks - Aaron
Solved with thanks to Hashicorp on their Consul discussion forum.
Create a Kubernetes secret named consul with a key named CONSUL_GOSSIP_ENCRYPTION_KEY and an appropriate encryption key value.
Generate the value using consul keygen.
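A hedged sketch of that step (secret and key names as used in the values file below):
kubectl create secret generic consul \
  --from-literal="CONSUL_GOSSIP_ENCRYPTION_KEY=$(consul keygen)"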
Install the hashicorp/consul Helm chart with a values-override.yaml such as below:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
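A hedged example of the install itself; the release name is an assumption, and the repo URL is Hashicorp's public Helm repository:
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul -f values-override.yaml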
Create an example Pod spec to represent our application.
Ensure it mounts the Consul server CA cert secret.
Ensure the Pod’s container has HOST_IP exposed as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: test-pod
spec:
  volumes:
    - name: consul-consul-ca-cert
      secret:
        secretName: consul-consul-ca-cert
  hostNetwork: false
  containers:
    - name: consul-test-pod
      image: alpine
      imagePullPolicy: IfNotPresent
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 24h; done"]
      volumeMounts:
        - name: consul-consul-ca-cert
          mountPath: /consul/tls/ca
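Assuming the manifest above is saved as test-pod.yaml, creating and entering the Pod looks like:
kubectl apply -f test-pod.yaml
kubectl exec -it test-pod -- sh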
Upon creation of the Pod, kubectl exec into it, and ensure the ca-certificates and curl packages are installed (I’m using Alpine Linux in this example).
(curl is purely for testing purposes)
#> apk update
#> apk add ca-certificates curl
Copy the mounted Consul server CA certificate into /usr/local/share/ca-certificates/ and execute update-ca-certificates to add it to the system root CA store.
#> cp /consul/tls/ca/tls.crt /usr/local/share/ca-certificates/consul-server-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul server is now accessible (and trusted) over HTTPS as below:
#> curl https://consul.service.consul:8501/v1/status/leader
## No TLS errors ##
We also want to talk to the Consul client (instead of the server) over HTTPS, for performance reasons.
Since the Consul client has its own CA cert, we need to retrieve that from the server.
This requires the consul-k8s binary, so we need to get that.
#> cd /usr/local/bin
#> wget https://releases.hashicorp.com/consul-k8s/0.15.0/consul-k8s_0.15.0_linux_amd64.zip # (or whatever latest version is)
#> unzip consul-k8s_0.15.0_linux_amd64.zip
#> rm consul-k8s_0.15.0_linux_amd64.zip
Get the Consul client CA cert and install it via update-ca-certificates:
#> consul-k8s get-consul-client-ca -server-addr consul.service.consul -server-port 8501 -output-file /usr/local/share/ca-certificates/consul-client-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul client is now accessible (and trusted) over HTTPS as below:
#> curl https://$HOST_IP:8501/v1/status/leader
## No TLS errors ##
We can also access the Consul KV service from the client without issue:
#> curl https://$HOST_IP:8501/v1/kv/foo/bar/baz
## No TLS errors ##
Naturally, all of the above should be automated by the implementer. These manual steps are purely for demonstration purposes.

Server not found in Kerberos database - ERR_S_PRINCIPLE_UNKNOWN(7)

I am setting up Kerberos authentication for an application. Active Directory is installed on Windows Server 2012 R2, and I am generating the keytab file on a CentOS machine that is not joined to the domain.
I can successfully test the generated keytab file with the following command:
kinit -k -t /tmp/hirosrv.keytab hirosrv@HIRO.COM
The security event created on the Windows machine is:
A Kerberos authentication ticket (TGT) was requested.
Account Information:
    Account Name:        hirosrv
    Supplied Realm Name: HIRO.COM
    User ID:             HIRO\hirosrv
Service Information:
    Service Name: krbtgt
    Service ID:   HIRO\krbtgt
Network Information:
    Client Address: 10.XX.XX.2
    Client Port:    37142
Additional Information:
    Ticket Options:          0x40800000
    Result Code:             0x0
    Ticket Encryption Type:  0x17
    Pre-Authentication Type: 2
Certificate Information:
    Certificate Issuer Name:
    Certificate Serial Number:
    Certificate Thumbprint:
Certificate information is only provided if a certificate was used for pre-authentication.
Following is the output of klist -A:
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hirosrv@HIRO.COM
Valid starting       Expires              Service principal
12/03/18 16:26:48    12/04/18 02:26:48    krbtgt/HIRO.COM@HIRO.COM
renew until 12/10/18 16:26:48
The same service principal is associated with the user account:
PS C:\Users\administrator> setspn.exe -Q krbtgt/HIRO.COM
Checking domain DC=hiro,DC=com
CN=hirosrv,CN=Users,DC=hiro,DC=com
krbtgt/HIRO.COM
krbtgt/HIRO.COM@HIRO.COM
Existing SPN found!
The application is configured to use the keytab file for authentication, but I am getting the following error:
ERR_S_PRINCIPLE_UNKNOWN(7)
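For context, ERR_S_PRINCIPLE_UNKNOWN(7) corresponds to KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN: the KDC cannot find the service principal the application asks for. Once kinit succeeds, a quick way to test whether a given SPN is known to the KDC is kvno; the SPN below is a placeholder for whatever the application actually requests:
kvno HTTP/app-host.hiro.com@HIRO.COM
# "Server not found in Kerberos database" here reproduces the application's failure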

How to run remote commands as a user with a certificate from a worker node

I created a user on the master.
First I created a key and certificate for it: dan.key and dan.crt
Then I registered the user in Kubernetes:
kubectl config set-credentials dan \
--client-certificate=/tmp/dan.crt \
--client-key=/tmp/dan.key
This is the resulting entry in ~/.kube/config:
users:
- name: dan
  user:
    as-user-extra: {}
    client-certificate: /tmp/dan.crt
    client-key: /tmp/dan.key
I want to be able to run commands from a remote worker as the user I created.
I know how to do it with service account token:
kubectl --server=https://192.168.0.13:6443 --insecure-skip-tls-verify=true --token="<service_account_token>" get pods
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
I followed this question:
kubectl unable to connect to server: x509: certificate signed by unknown authority
I tried what he suggested:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
Unable to connect to the server: x509: certificate signed by unknown authority
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
You were missing the critical piece of data telling kubectl how to trust the https: part of that request, namely --certificate-authority=/path/to/kubernetes/ca.pem
You didn't encounter that error while using --token=... because of the --insecure-skip-tls-verify=true which you should definitely, definitely not do.
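Concretely, the working invocation would look something like this (the CA path is an assumption, as in the flag above):
kubectl --server=https://192.168.0.13:6443 \
  --certificate-authority=/path/to/kubernetes/ca.pem \
  --client-certificate=/tmp/dan.crt \
  --client-key=/tmp/dan.key \
  get pods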
I tried what he suggested:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
You have followed the wrong piece of advice from whatever article you were reading; that --accept-hosts flag only controls the remote hostnames from which kubectl proxy will accept connections, and has nothing to do with SSL at all.