Hashicorp Consul - How to do verified TLS from Pods in Kubernetes cluster - ssl

I'm having some difficulty understanding Consul end-to-end TLS. For reference, I'm using Consul in Kubernetes (via the hashicorp/consul Helm chart). Only one datacenter and Kubernetes cluster - no external parties or concerns.
I have configured my override values.yaml file like so:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
All other values are as default from the shipped values.yaml file.
This works, and the Consul client logs suggest that all agents are connecting nicely over TLS, with the relevant certs and keys being created by (as I understand it) Consul's auto-encryption feature.
What I don't understand is how to initiate an HTTPS connection from an application running in a Pod on Kubernetes to a Consul server. Since the Pod's container does not (presumably) have the Consul root CA cert in its trust store, all HTTPS calls fail, as per the wget example below:
# Connect to Pod:
laptop$> kubectl exec -it my-pod sh
# Attempt valid HTTPS connection:
my-pod$> wget -q -O - https://consul.service.consul:8501
Connecting to consul.service.consul:8501 (10.110.1.131:8501)
ssl_client: consul.service.consul: certificate verification failed: unable to get local issuer certificate
wget: error getting response: Connection reset by peer
# Retry, but ignore certificate validity issues:
my-pod$> wget --no-check-certificate -q -O - https://consul.service.consul:8501/v1/status/leader
"10.110.1.131:8300"
How am I supposed to enforce end-to-end (verified) HTTPS connections from my apps on Kubernetes to Consul if the container does not recognize the certificate as valid?
Am I misunderstanding something about certificate propagation?
Many thanks - Aaron

Solved with thanks to Hashicorp on their Consul discussion forum.
Create a Kubernetes secret named consul with a key named CONSUL_GOSSIP_ENCRYPTION_KEY and an appropriate encryption key as its value.
Generate the value using consul keygen.
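For example (a minimal sketch, assuming the secret is created in the same namespace the Consul chart will be installed into):
laptop$> kubectl create secret generic consul \
    --from-literal=CONSUL_GOSSIP_ENCRYPTION_KEY="$(consul keygen)"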
Install the hashicorp/consul Helm chart with a values-override.yaml file, such as the one below (an example install command follows the file):
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
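One way to run the install (assuming the HashiCorp chart repository has not been added yet and the release is simply named consul):
laptop$> helm repo add hashicorp https://helm.releases.hashicorp.com
laptop$> helm install consul hashicorp/consul -f values-override.yaml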
Create an example Pod spec to represent our application.
Ensure it mounts the Consul server CA cert secret.
Ensure the Pod’s container has HOST_IP exposed as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: test-pod
spec:
  volumes:
    - name: consul-consul-ca-cert
      secret:
        secretName: consul-consul-ca-cert
  hostNetwork: false
  containers:
    - name: consul-test-pod
      image: alpine
      imagePullPolicy: IfNotPresent
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 24h; done"]
      volumeMounts:
        - name: consul-consul-ca-cert
          mountPath: /consul/tls/ca
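For reference, creating the Pod and opening a shell in it might look like this (assuming the spec above is saved as test-pod.yaml):
laptop$> kubectl apply -f test-pod.yaml
laptop$> kubectl exec -it test-pod -n default -- sh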
Upon creation of the Pod, kubectl exec into it, and ensure the ca-certificates and curl packages are installed (I’m using Alpine Linux in this example).
(curl is purely for testing purposes)
#> apk update
#> apk add ca-certificates curl
Copy the mounted Consul server CA certificate into /usr/local/share/ca-certificates/ and run update-ca-certificates to add it to the system root CA store.
#> cp /consul/tls/ca/tls.crt /usr/local/share/ca-certificates/consul-server-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul server is now accessible (and trusted) over HTTPS as below:
#> curl https://consul.service.consul:8501/v1/status/leader
## No TLS errors ##
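As an aside, if you would rather not modify the system trust store, curl can also be pointed at the mounted CA file directly; this is just an alternative sanity check, not a required step:
#> curl --cacert /consul/tls/ca/tls.crt https://consul.service.consul:8501/v1/status/leader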
We also want to talk to the Consul client (instead of the server) over HTTPS, for performance reasons.
Since the Consul client has its own CA cert, we need to retrieve that from the server.
This requires the consul-k8s binary, so we need to get that.
#> cd /usr/local/bin
#> wget https://releases.hashicorp.com/consul-k8s/0.15.0/consul-k8s_0.15.0_linux_amd64.zip # (or whatever latest version is)
#> unzip consul-k8s_0.15.0_linux_amd64.zip
#> rm consul-k8s_0.15.0_linux_amd64.zip
Get the Consul client CA cert and install it via update-ca-certificates:
#> consul-k8s get-consul-client-ca -server-addr consul.service.consul -server-port 8501 -output-file /usr/local/share/ca-certificates/consul-client-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul client is now accessible (and trusted) over HTTPS as below:
#> curl https://$HOST_IP:8501/v1/status/leader
## No TLS errors ##
We can also access the Consul KV service from the client without issue:
#> curl https://$HOST_IP:8501/v1/kv/foo/bar/baz
## No TLS errors ##
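(If the KV read returns an empty response, the key may simply not have been written yet. Seeding it uses the same API; the path and value here are purely examples:)
#> curl -X PUT -d 'some-test-value' https://$HOST_IP:8501/v1/kv/foo/bar/baz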
Naturally, all of the above should be automated by the implementer. These manual steps are purely for demonstration purposes.

Related

Vault On GKE - x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs

I've created a certificate for the commonName "vault-lab.company.com" within Certificate Manager in the Istio namespace.
I've then used Reflector to copy that secret across to the Vault namespace, like so:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: vault-lab.company.com-cert
  namespace: istio-system
spec:
  secretName: vault-lab.company.com-cert
  commonName: vault-lab.company.com
  dnsNames:
    - vault-lab.company.com
  issuerRef:
    name: letsencrypt-prod-istio
    kind: ClusterIssuer
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "vault" # Control destination namespaces
      reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true" # Auto create reflection for matching namespaces
      reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "vault" # Control auto-reflection namespaces
The secret is successfully mounted via the volumes and volumeMounts section of the values.yaml for Vault:
volumes:
  - name: vault-lab-cert
    secret:
      secretName: vault-lab.company.com-cert
volumeMounts:
  mountPath: /etc/tls
  readOnly: true
And in reading https://github.com/hashicorp/vault/issues/212, I've set the following as well in the listener configuration:
config: |
  ui = false
  listener "tcp" {
    tls_disable = false
    address = "0.0.0.0:8200"
    tls_cert_file = "/etc/tls/tls.crt"
    tls_key_file = "/etc/tls/tls.key"
  }
  api_addr = "https://vault-lab.company.com:8200"
  cluster_addr = "https://vault-lab.company.com:8201"
However, I'm still seeing:
Get "https://127.0.0.1:8200/v1/sys/seal-status": x509: cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs
When running:
kubectl exec vault-0 -- vault status
Interestingly, if I describe the vault-0 pod via kubectl, I see:
VAULT_ADDR: https://127.0.0.1:8200
VAULT_API_ADDR: https://$(POD_IP):8200
What could I be missing?
Is there something more I need to do if the certificate is configured via Cert Manager?
There's not a great deal of documentation on how to set this up at all.
When you run vault status, the binary runs as a client and is unaware of the server configuration, even if it's running on the same machine or in the same container. That means vault status can't read the listener stanza of your configuration file; it defaults to https://127.0.0.1:8200, which is missing from your certificate. The solution is not to add this IP address to the certificate, but to tell the Vault CLI where to find the server.
You can confirm that this is the problem with this command (should work, assuming your certificate is OK):
kubectl exec vault-0 -- vault status --address https://vault-lab.company.com:8200
For the client to pick up the API address automatically, set the VAULT_ADDR environment variable in your container to:
VAULT_ADDR=https://vault-lab.company.com:8200
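If Vault was deployed with the hashicorp/vault Helm chart, one hedged way to set this (assuming a release called vault in the vault namespace, and relying on the chart's server.extraEnvironmentVars values key) is:
helm upgrade vault hashicorp/vault -n vault --reuse-values \
  --set server.extraEnvironmentVars.VAULT_ADDR=https://vault-lab.company.com:8200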

Kubernetes Dashboard TLS cert issue

I am deploying the standard Kubernetes Dashboard (Jetstack) to the K3s cluster running on my Raspberry Pi cluster. I am using Let's Encrypt to provision the TLS cert and setting the following options on the dashboard deployment:
spec:
  args:
    - --tls-cert-file=/tls.crt
    - --tls-key-file=/tls.key
  volumeMounts:
    - mountPath: /certs
      name: kubernetes-dashboard-certs
  volumes:
    - name: kubernetes-dashboard-certs
      secret:
        secretName: cluster.smigula.com-tls
The cert is valid when I visit the URL in my browser, however the pod raises this:
http: TLS handshake error from 10.42.0.8:43704: remote error: tls: bad certificate
It appears that the ingress is terminating the TLS connection when the pod expects to terminate it. What should I do? Thanks.
[edit] I changed the resource kind from Ingress to IngressRouteTCP and set passthrough: true in the tls: section. Still same result.

Consul Helm TLS error: unknown PEM block type for signing key: CERTIFICATE

I'm trying to understand this error. I am deploying Consul with TLS enabled, with both Consul servers and clients. On the tls-init containers I get this error:
kubectl logs consul-consul-tls-init-b2rfv
==> WARNING: Server Certificates grants authority to become a
server and access all state in the cluster including root keys
and all ACL tokens. Do not distribute them to production hosts
that are not server nodes. Store them as securely as CA keys.
==> Using /consul/tls/ca/cert/tls.crt and /consul/tls/ca/key/tls.key
unknown PEM block type for signing key: CERTIFICATE
I have tried creating the CA certificate and key in a number of ways. First I tried openssl, then cfssl, and finally the consul CLI itself. All give the same error.
From what I can tell, the volumes are mounting from the secrets. Here is an example of the values.yaml I am deploying Consul with through Helm 3:
global:
  gossipEncryption:
    secretName: "gossip"
    secretKey: "key"
  tls:
    enabled: true
    verify: false # only for troubleshooting hoping it would help, also tried with true
    caCert:
      secretName: "consul-tls-ca"
      secretKey: "tls.crt"
    caKey:
      secretName: "consul-server-tls"
      secretKey: "tls.crt"
Examples of how I create my gossip and TLS secrets:
export GOSSIP_ENCRYPTION_KEY=$(consul keygen)
kubectl create secret generic gossip --from-literal="key=${GOSSIP_ENCRYPTION_KEY}"
kubectl create secret generic consul-tls-ca --from-file="tls.crt=./ca.pem"
kubectl create secret generic consul-server-tls --from-file="tls.crt=./server.pem" --from-file="tls.key=./server-key.pem"
I have not found any similar errors reported by others through Googling or searching SO. HashiCorp's documentation says nothing about it, or at least I have not found anything.
They fixed it in Consul 1.10; previously it only accepted EC (elliptic curve) signing keys. More in https://github.com/hashicorp/consul/issues/7622
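If the above applies to your version, one hedged workaround for pre-1.10 releases is to let Consul generate the CA itself, since consul tls ca create produces an EC signing key that older releases accept (the secret names below are just examples; point global.tls.caCert and global.tls.caKey at whichever names you choose):
consul tls ca create   # writes consul-agent-ca.pem and consul-agent-ca-key.pem
kubectl create secret generic consul-tls-ca --from-file="tls.crt=./consul-agent-ca.pem"
kubectl create secret generic consul-tls-ca-key --from-file="tls.key=./consul-agent-ca-key.pem"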

Is it possible to use PodPresets in OpenShift 3.11 (3.7+)?

I've installed an OpenShift cluster for testing purposes, and since I'm behind a corporate network, I need to include some Root Certificates in any Pod that wants to make external requests. What can I do to inject those certificates automatically at Pod creation?
I'm running OpenShift Origin (OKD) 3.11 in a local CentOS 7 VM, with GlusterFS storage provisioning on top of it. I already had multiple issues with the VM itself, which gave me errors when trying to access the network: x509: certificate signed by unknown authority. I fixed that by adding my corporation's root certificates to /etc/pki/ca-trust/source/anchors and by running the update-ca-trust command.
When I ran, for example, the docker-registry deployment in the OpenShift cluster, the created Pods didn't have access to the host root certificates, so they again gave x509: certificate signed by unknown authority errors when trying to pull images from docker.io. I resolved that by creating a ConfigMap containing all the needed root certificates and mounting it as a volume in the registry deployment config.
I thought I only needed to mount that volume in every deployment config that needs to reach the external network. But then I provisioned a Jenkins instance and realised something new: when a pipeline runs, Jenkins creates a Pod with a suitable agent (for example, a Spring Boot app will need a Maven agent). Since I have no control over those created pods, they can't have the mounted volume with all the root certificates. So, for instance, I have a pipeline that runs helm init --client-only before releasing my app chart, and this command gives a x509: certificate signed by unknown authority error because that pod doesn't have the root certificates.
x509 Error screenshot
I found that a PodPreset could be the perfect way to resolve my problem, but when I enable this feature in the cluster and create the PodPreset, newly created pods are not modified. I read in the OpenShift documentation that PodPresets are no longer supported as of 3.7, so I think that could be the reason it is not working.
OpenShift docs screenshot
Here is my PodPreset definition file:
kind: PodPreset
apiVersion: settings.k8s.io/v1alpha1
metadata:
  name: inject-certs
spec:
  selector: {}
  volumeMounts:
    - mountPath: /etc/ssl/certs/cert1.pem
      name: ca
      subPath: cert1.pem
    - mountPath: /etc/ssl/certs/cert2.pem
      name: ca
      subPath: cert2.pem
    - mountPath: /etc/ssl/certs/cert3.pem
      name: ca
      subPath: cert3.pem
    - mountPath: /etc/ssl/certs/cert4.pem
      name: ca
      subPath: cert4.pem
    - mountPath: /etc/ssl/certs/cert5.pem
      name: ca
      subPath: cert5.pem
    - mountPath: /etc/ssl/certs/cert6.pem
      name: ca
      subPath: cert6.pem
  volumes:
    - configMap:
        defaultMode: 420
        name: ca-pemstore
      name: ca
I don't know if there is any way to make PodPresets work on OpenShift 3.11, or if there is another solution for injecting cert files like this into created pods. That would be really great.
The Red Hat CoP on GitHub contains a project with a PodPreset admission webhook controller you can use:
https://github.com/redhat-cop/podpreset-webhook
Basically, you deploy that project and change the apiVersion in your PodPreset to apiVersion: redhatcop.redhat.io/v1alpha1.
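A hedged sketch of what that might look like, with only one cert mount shown for brevity (the rest of the spec stays as in the original PodPreset definition; the webhook's CRD is assumed to mirror the upstream PodPreset fields):
cat <<EOF | oc apply -f -
apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-certs
spec:
  selector: {}
  volumeMounts:
    - mountPath: /etc/ssl/certs/cert1.pem
      name: ca
      subPath: cert1.pem
  volumes:
    - name: ca
      configMap:
        defaultMode: 420
        name: ca-pemstore
EOF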

Kubernetes add ca certificate to pods' trust root

In my 10-machine bare-metal Kubernetes cluster, one service needs to call another HTTPS-based service that uses a self-signed certificate.
However, since this self-signed certificate is not added to the pods' trusted root CAs, the call fails with an error saying the x.509 certificate can't be validated.
All pods are based on Ubuntu Docker images. However, the usual way of adding a CA cert to the trust list on Ubuntu (using dpkg-reconfigure ca-certificates) no longer works in these pods. And of course, even if I succeeded in adding the CA cert to the trust root on one pod, it would be gone when another pod is spun up.
I searched the Kubernetes documentation and was surprised to find nothing except how to configure certs for talking to the API server, which is not what I'm looking for. This should be quite a common scenario whenever a secure channel is needed between pods. Any ideas?
If you want to bake the cert in at build time, edit your Dockerfile, adding the commands to copy the cert from the build context and update the trust. You could even add this as a layer on top of an image from Docker Hub, etc.
COPY my-cert.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
If you're trying to update the trust at runtime, things get more complicated. I haven't done this myself, but you might be able to create a ConfigMap containing the certificate, mount it into your container at the above path, and then use an entrypoint script to run update-ca-certificates before your main process.
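A minimal sketch of that runtime approach, assuming the cert is mounted from a ConfigMap at /mnt/ca/my-cert.crt (the path and filename are illustrative, not prescribed by any tool):
#!/bin/sh
# entrypoint.sh (hypothetical): trust the mounted CA, then hand off to the original command
set -e
cp /mnt/ca/my-cert.crt /usr/local/share/ca-certificates/
update-ca-certificates
exec "$@"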
Updated edit: read Option 3.
I can think of 3 options to solve your issue if I were in your scenario:
Option 1.) (The only complete solution I can offer; my other solutions are unfortunately half solutions. Credit to Paras Patidar/the following site:)
https://medium.com/@paraspatidar/add-ssl-tls-certificate-or-pem-file-to-kubernetes-pod-s-trusted-root-ca-store-7bed5cd683d
1.) Add certificate to config map:
Let's say your PEM file is my-cert.pem:
kubectl -n <namespace-for-config-map-optional> create configmap ca-pemstore --from-file=my-cert.pem
2.) Mount the ConfigMap as a volume at the container's existing CA root location:
Mount the ConfigMap's file one-to-one (as a single file) into the container's /etc/ssl/certs/ directory, for example:
apiVersion: v1
kind: Pod
metadata:
  name: cacheconnectsample
spec:
  containers:
    - name: cacheconnectsample
      image: cacheconnectsample:v1
      volumeMounts:
        - name: ca-pemstore
          mountPath: /etc/ssl/certs/my-cert.pem
          subPath: my-cert.pem
          readOnly: false
      ports:
        - containerPort: 80
      command: [ "dotnet" ]
      args: [ "cacheconnectsample.dll" ]
  volumes:
    - name: ca-pemstore
      configMap:
        name: ca-pemstore
So I believe the idea here is that /etc/ssl/certs/ is the location of TLS certs that are trusted by pods, and the subPath method allows you to add a file without wiping out the contents of the folder, which would contain the k8s secrets.
If all pods share this mountPath, then you might be able to add a PodPreset and ConfigMap to every namespace, but that feature is in alpha and is only helpful for static namespaces. (If this worked, then all your pods would trust that cert.)
Option 2.) (Half solution/idea; it doesn't exactly answer your question but solves your problem. I'm fairly confident it will work in theory, though it will require research on your part, and I think you'll find it's the best option:)
In theory you should be able to leverage cert-manager + external-dns + Let's Encrypt Free + a public domain name to replace the self-signed cert with a public cert.
(cert-manager's end result is to auto-generate a k8s TLS secret signed by Let's Encrypt Free in your cluster; it has a dns01 challenge that can be used to prove you own the domain, which means you should be able to leverage that solution even without an ingress / even if the cluster is only meant for a private network.)
Edit: Option 3.) (After gaining more hands-on experience with Kubernetes)
I believe that switchboard.op's answer is probably the best / should be the accepted answer. This "can" be done at runtime, but I'd argue that it should never be done at runtime; doing it at runtime is super hacky, full of edge cases, and there's no universal way of doing it.
Also, it turns out my Option 1 is only half correct.
Mounting the ca.crt on the pod alone isn't enough. After that file is mounted on the pod, you'd need to run a command to trust it, which means you probably need to override the pod's startup command. For example, you can't just run the default startup command (say, connect to a database) and then update the trusted CA certs; you'd have to overwrite the default startup script with a hand-jammed one that updates the trusted CA certs and then connects to the database. And the problem is that Ubuntu, RHEL, Alpine, and others have different locations where you have to mount the CA certs, and sometimes different commands to trust them, so a universal at-runtime solution that you can apply to all pods in the cluster to update their ca.crt isn't impossible, but would require tons of if statements and mutating webhooks/complexity. (A hand-crafted per-pod solution is very possible, though, if you just need to be able to dynamically update it for a single pod.)
switchboard.op's answer is the way I'd do it if I had to do it. Build a new custom Docker image with your custom ca.crt baked in and trusted. This is a universal solution and greatly simplifies the YAML side, and it's relatively easy to do on the Docker image side.
For the curious, here is an example of a manifest using the init container approach.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  # in my case it is CloudFlare CA used to sign certificates for origin servers
  origin_ca_rsa_root.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    name: demo
spec:
  nodeSelector:
    kubernetes.io/os: linux
  initContainers:
    - name: init
      # image: ubuntu
      # command: ["/bin/sh", "-c"]
      # args: ["apt -qq update && apt -qq install -y ca-certificates && update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      # # alternative image with preinstalled ca-certificates utilities
      image: grafana/alpine:3.15.4
      command: ["/bin/sh", "-c"]
      args: ["update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      volumeMounts:
        - name: demo
          # note - we need change extension to crt here
          mountPath: /usr/local/share/ca-certificates/origin_ca_rsa_root.crt
          subPath: origin_ca_rsa_root.pem
          readOnly: false
        - name: tmp
          mountPath: /artifact
          readOnly: false
  containers:
    - name: demo
      # note - even so init container is alpine base, and this one is ubuntu based everything still works
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: tmp
          mountPath: /etc/ssl/certs
          readOnly: false
  volumes:
    - name: demo
      configMap:
        name: demo
    # will be used to pass files between init container and actual container
    - name: tmp
      emptyDir: {}
and its usage:
kubectl apply -f demo.yml
kubectl exec demo -c demo -- curl --resolve foo.bar.com:443:10.0.14.14 https://foo.bar.com/swagger/v1/swagger.json
kubectl delete -f demo.yml
notes:
replace foo.bar.com with your domain name
replace 10.0.14.14 with your ingress controller's cluster IP
you may want to add the -vv flag to see more details
Indeed, it is kind of ugly and monstrous, but at least it works and is a proof of concept. Workarounds with a simple ConfigMap mount do not work because curl reads ca-certificates.crt, which is not modified in that approach.
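A quick hedged check that the bundle really did change: from the demo Pod's main container, the bundle curl reads should now include the custom CA (so the certificate count should be one higher than in the image's stock bundle):
kubectl exec demo -c demo -- grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt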