Cert-manager working only with example.com, not with svc.cluster.local

I am trying to use cert-manager with a private CA issuer. I have written a custom implementation that works with the CA. I create a new service with kn like this:
kn service create helloworld17-go \
  --image gcr.io/knative-samples/helloworld-go \
  --env TARGET=17-With-http
If the DNS config is left at the default, this creates an endpoint:
http://helloworld17-go..example.com
and the certificate is created.
But if I change the DNS to svc.cluster.local, the certificates are not created.
Is there any config in Knative or cert-manager that I am missing?
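(If it helps, by "change the DNS" I mean switching Knative's default domain via the config-domain ConfigMap in the knative-serving namespace; a minimal sketch of that change, assuming a stock Knative Serving install:)
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # replaces example.com as the default suffix; services then get *.svc.cluster.local hostnames
  svc.cluster.local: ""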

Related

Prometheus Discovering Services with Consul: tls:Bad Certificate

I want to use Consul with Prometheus, but I receive a tls: bad certificate error.
See:
caller=consul.go:513 level=error component="discovery manager scrape" discovery=consul msg="Error refreshing service" service=NodeExporter tags= err="Get \"https://consul.service.dc1.consul:8500/v1/health/service/NodeExporter?dc=dc1&stale=&wait=120000ms\": remote error: tls: bad certificate"
At the same time, when running the same request manually with curl, I get the expected output:
curl -v -s -X GET "https://consul.service.dc1.consul:8500/v1/health/service/NodeExporter?dc=dc1&stale=&wait=120000ms" --key /secrets/consul.key --cert /secrets/consul.pem --cacert /secrets/cachain.pem
[{"Node":{"ID":"e53188ef-16ec-xxxx-xxxx-xxxx","Node":"dc1-runner-dev-1.test.io","Address":"30.10.xx.xx","Datacenter":"dc1","TaggedAddresses":{"lan":"30.10.xx.xx","lan_ipv4":"30.10.xx.xx","wan":"30.10.xx.xx","wan_ipv4":"30.10.xx.xx"},"Meta":{"consul-network-segment":""},"CreateIndex":71388,"ModifyIndex":71391},"Service":{"ID":"dc1-runner-dev-1.test.io-NodeExporter","Service":"NodeExporter","Tags":["service=node_exporter","environment=dev","datacenter=dc1"]...
To see more details from curl debug output, please see here:
LINK
Prometheus is running in Docker; the Prometheus version is 2.31.1.
I also execute the curl command from the same Docker container.
Here is the Prometheus config:
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: "node_exporter"
    consul_sd_configs:
      - server: "consul.service.dc1.consul:8500"
        scheme: "https"
        datacenter: "dc1"
        services: ["NodeExporter"]
        tls_config:
          ca_file: "/secrets/cachain.pem"
          cert_file: "/secrets/consul.pem"
          key_file: "/secrets/consul.key"
Prometheus is able to access the specified certificates.
I have also tried adding the insecure_skip_verify property to the Prometheus config file; I receive the same error.
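For reference, a sketch of where that property sits, inside the same tls_config block as above:
        tls_config:
          ca_file: "/secrets/cachain.pem"
          cert_file: "/secrets/consul.pem"
          key_file: "/secrets/consul.key"
          insecure_skip_verify: true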
The steps for how the certificates are created:
I create an offline self-signed root CA using Ansible modules from the community.crypto collection.
I create a CSR and sign Intermediate CA1 with that root CA.
I upload Intermediate CA1 and the corresponding key into a PKI secrets engine in HashiCorp Vault.
After that, inside the Vault PKI, I create a new CSR and use Intermediate CA1 to sign Intermediate CA2.
I create a PKI role.
The certificates in Prometheus are leaf certificates of Intermediate CA2, issued against the mentioned PKI role.
See the output of the openssl x509 -text command for the used certificates here.
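One sanity check worth noting with a chain like this (paths taken from the config above) is whether the leaf certificate actually verifies against the CA bundle given to Prometheus:
openssl verify -CAfile /secrets/cachain.pem /secrets/consul.pem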
Any ideas what I am missing here?

Move cert-manager certificate to another Kubernetes cluster

I'm in the process of moving web services from one Kubernetes cluster to another. The goal is to do that without service interruption.
This is difficult with cert-manager and HTTP challenges, because cert-manager on the new cluster can only retrieve a certificate once the DNS entry points to that cluster. However, if I switch the DNS entry to the new cluster, clients will potentially talk to the new cluster before a valid certificate has been generated. This is like a chicken-and-egg problem.
How do I move the cert-manager certificates to the new cluster, so that it already has the certs once I make the DNS switch?
Certificates are stored in Kubernetes secrets. Cert-manager will pick up existing secrets instead of creating new ones, if the secret matches the ingress object.
So assuming that the ingress object looks the same on both clusters, and that the same namespace is used, copying the secret is as simple as this:
kubectl --context OLD_CLUSTER -n NAMESPACE get secret SECRET_NAME --output yaml \
| kubectl --context NEW_CLUSTER -n NAMESPACE apply -f -
Replace OLD_CLUSTER and NEW_CLUSTER with the kubectl context names of the respective clusters (see kubectl config get-contexts).
Replace SECRET_NAME with the name of the secret where the certificate is stored. This name can be found in the ingress.
Replace NAMESPACE with the actual namespace that you're using.
The command simply exports the secret in YAML format, and then uses kubectl apply -f to create the same resource in the new cluster.
Once the ingress is in place on the new cluster, you can verify that the cert works by using openssl s_client:
openssl s_client -connect CLUSTER_IP:443 -servername SERVICE_DNS_NAME
Again, replace CLUSTER_IP and SERVICE_DNS_NAME accordingly.

How to do TLS between microservices in Kubernetes?

Sorry for my bad English but I don't know how to solve my problem.
So...
Introduction:
I have 2 microservices (I call them gRPCClient and gRPCServer, although it doesn't matter what exactly they are). They need to communicate via TLS. Without Kubernetes, everything is quite simple: I create my CA via cfssl in a Docker container, then I get the root certificate from the CA and put it in the trust store for my gRPC applications (I do this in the Dockerfile), so that any certificate signed by my CA passes verification.
Now Kubernetes enters the game. I'm playing locally with minikube, creating a local cluster with "minikube start" on a Mac (maybe this is important, I don't know...).
Problem:
How will this flow work with Kubernetes? As I understand it, there is already a CA inside Kubernetes (correct me if that is not so). I read many articles, but I really didn't understand anything. I tried the examples from this article: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
Step by step:
Create a signing request
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "192.0.2.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
The first thing I did not understand was the hosts. For example, is my-svc.my-namespace.svc.cluster.local the full name of my service (I mean the service in Kubernetes, as in kind: Service)? Mine is in the namespace "dev" and its name is user-app-service. Should I then specify user-app-service.dev.svc.cluster.local, or just user-app-service? Or is there some kind of command to get the full name of the service?
192.0.2.24 - as I understand it, this is the IP of the service. It is also unclear whether it is mandatory to specify it, or whether the name of the service is enough. What if I have clusterIP: None set? Then I don't have an IP for it.
my-pod.my-namespace.pod.cluster.local - should I specify this? If I have several pods, should I list them all? Then the problem is the dynamics, because pods are recreated, deleted and added, and I would need to send a new signing request each time. The same questions I asked about the service apply to the "my-pod" and "namespace" parts. Is it possible to see the full name of the pod with all this data?
10.0.34.2 - the pod's IP. The same question about the pod's IP.
I tried to specify the host and CN as my service name "user-app-service" (as if I were working without Kubernetes). I created a signing request and a key. Then I followed all the steps and created a signing request object in Kubernetes:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Then I approved it and received a certificate.
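(For reference, the approve-and-fetch step from that guide looks roughly like this, using the CSR name from the manifest above:)
kubectl certificate approve my-svc.my-namespace
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' | base64 --decode > server.crt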
Further, for security, I need to store the key and the certificate in Secrets and then fetch them in the container (for the purposes of the test I just put them into the container in the Dockerfile, hard-coded); this is on the gRPC server side. I ran the deployment and created a client in Go, specifying config := &tls.Config{} in the code so that it would pull the trusted certificates from the system itself. I thought that Kubernetes has a CA, but I did not find in the docs how to get its certificate; I assumed Kubernetes adds it to all containers itself. But I got the error Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority".
How should all this work? Where can I get a CA certificate from Kubernetes? And do I then need to add it to each container by hand in the Dockerfile, or is that the wrong tactic and there is some kind of automation from Kubernetes?
I found another way: deploy cfssl (https://hub.docker.com/r/cfssl/cfssl/) on Kubernetes and work with it as if there were no Kubernetes (I have not tried this method yet).
How do I put all this together into a working system, which options should I use and why? Maybe there are some complete articles. I wrote a lot, but I hope it's clear. I really need the help.
I am going to break down my answer into a couple of parts:
Kubernetes Services and DNS Discovery
In general, it is recommended to put a Service in front of a Deployment that manages pods in Kubernetes. The Service creates a stable DNS and IP endpoint for pods that may be deleted and assigned a different IP address when recreated. DNS service discovery is automatically enabled with a ClusterIP-type Service and is in the format <service name>.<kubernetes namespace>.svc.<cluster domain>, where the cluster domain is usually cluster.local. This means that we can use the auto-created DNS name and the assigned ClusterIP in the altnames for our certificate.
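For example, a quick way to confirm that such a name resolves in-cluster (using the grpcserver Service created in the manual section below) is:
kubectl run -it --rm dnscheck --image=busybox --restart=Never -- nslookup grpcserver.default.svc.cluster.local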
Kubernetes Internal CA
Kubernetes does have an internal CA along with API methods to post CSRs and have those CSRs signed by the CA; however, I would not use the internal CA for securing microservices. The internal CA is primarily used by the kubelet and other internal cluster processes to authenticate to the Kubernetes API server. There is no functionality for auto-renewal, and I think the cert will always be signed for 30 days.
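(For completeness: the CA bundle that pods use to talk to the API server is mounted into every pod by default, so you can inspect it from any running pod, e.g.:)
kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt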
Kubernetes-native Certificate Management
You can install and use cert-manager to have the cluster automatically create and manage certificates for you using custom resources. They have excellent examples on their website, so I would encourage you to check that out if it is of interest. You should be able to use the CA issuer type and create Certificate resources that will create a certificate as a Kubernetes Secret. For the altnames, refer to the certificate generation steps in the manual section of my response below.
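As a rough sketch of what that could look like (resource names, the namespace, and the ca-key-pair Secret holding your CA cert and key are placeholders; the dnsNames mirror the altnames used in the manual section):
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
  namespace: default
spec:
  ca:
    secretName: ca-key-pair        # Secret containing your CA certificate and key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpcserver-cert
  namespace: default
spec:
  secretName: grpcserver-tls       # resulting Secret with tls.crt / tls.key
  commonName: grpcserver.default.svc.cluster.local
  dnsNames:
  - grpcserver
  - grpcserver.default.svc
  - grpcserver.default.svc.cluster.local
  issuerRef:
    name: ca-issuer
    kind: Issuer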
Manually Create and Deploy Certificates
You should be able to achieve the same result as your "without Kubernetes" approach using cfssl:
generate CA using cfssl
add CA as trusted in image (using your Dockerfile approach)
create Kubernetes Service (for example purposes I will use kubectl create)
$ kubectl create service clusterip grpcserver --tcp=8000
describe the created Kubernetes Service, note IP will most likely be different in your case
$ kubectl describe service/grpcserver
Name: grpcserver
Namespace: default
Labels: app=grpcserver
Annotations: <none>
Selector: app=grpcserver
Type: ClusterIP
IP: 10.108.125.158
Port: 8000 8000/TCP
TargetPort: 8000/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
generate a certificate for gRPCServer with a CN of grpcserver.default.svc.cluster.local and the following altnames:
grpcserver
grpcserver.default.svc
grpcserver.default.svc.cluster.local
10.108.125.158
generate the client certificate with cfssl
put both certificates into Secret objects
kubectl create secret tls server --cert=server.pem --key=server.key
kubectl create secret tls client --cert=client.pem --key=client.key
mount the secrets into the pod spec (a minimal sketch follows below)
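A minimal sketch of that last step for the server side (the image name is a placeholder; the server Secret is the one created above, and its tls.crt / tls.key show up as files under the mount path):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpcserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpcserver
  template:
    metadata:
      labels:
        app: grpcserver
    spec:
      containers:
      - name: grpcserver
        image: example/grpcserver:latest   # placeholder image
        ports:
        - containerPort: 8000
        volumeMounts:
        - name: server-tls
          mountPath: /etc/tls              # server reads tls.crt and tls.key from here
          readOnly: true
      volumes:
      - name: server-tls
        secret:
          secretName: server               # created with `kubectl create secret tls server ...`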
There is a lot of boilerplate work to do with this bespoke approach. If you have the option, I would suggest exploring a service mesh such as Istio or Linkerd to secure communication between microservices with TLS in Kubernetes.

Allow kubernetes storageclass resturl HTTPS with self-signed certificate

I'm currently trying to set up GlusterFS integration for a Kubernetes cluster. Volume provisioning is done with Heketi.
The GlusterFS cluster has a pool of 3 VMs.
The 1st node has the Heketi server and client configured. The Heketi API is secured with a self-signed OpenSSL certificate and can be accessed.
e.g. curl https://heketinodeip:8080/hello -k
returns the expected response.
The StorageClass definition sets "resturl" to the Heketi API, https://heketinodeip:8080
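For reference, such a StorageClass looks roughly like this (restuser and the secret references are placeholders):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "https://heketinodeip:8080"
  restuser: "admin"                # placeholder
  secretNamespace: "default"       # placeholder
  secretName: "heketi-secret"      # placeholder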
The StorageClass was created successfully, but when I try to create a PVC, it fails with:
"x509: certificate signed by unknown authority"
This is expected, as usually one has to either allow the insecure HTTPS connection or explicitly import the issuer CA (e.g. a file simply containing the PEM string).
But: how is this done for Kubernetes? How do I allow this insecure connection to Heketi from Kubernetes, either by allowing the insecure self-signed-cert HTTPS, or by importing a CA, and where/how do I import it?
It is not a DNS/IP problem; that was resolved with correct subjectAltName settings.
(It seems everybody is using Heketi, and it still seems to be a standard use case for GlusterFS integration, but always without SSL when connected to Kubernetes.)
Thank you!
To skip verification of the server cert, the caller just needs to specify InsecureSkipVerify: true. Refer to this GitHub issue for more information (https://github.com/heketi/heketi/issues/1467).
This page describes a way to use a self-signed certificate. It is not explained thoroughly, but it can still be useful (https://github.com/gluster/gluster-kubernetes/blob/master/docs/design/tls-security.md#self-signed-keys).

Empty "ca.crt" file from cert-manager

I use cert-manager to generate TLS certificates for my application on Kubernetes with Let's Encrypt.
It is running, and I can see "ca.crt", "tls.crt" and "tls.key" inside the container of my application (in /etc/letsencrypt/).
But "ca.crt" is empty, and the application complains about it (Error: Unable to load CA certificates. Check cafile "/etc/letsencrypt/ca.crt"). The two other files look like normal certificates.
What does that mean?
With cert-manager you typically use the nginx-ingress controller as the exposure point.
The ingress-nginx controller will create a load balancer, and you can set up your application's TLS certificate there.
There is nothing certificate-related inside the cert-manager pod itself.
So set up nginx ingress with cert-manager; that will manage the TLS certificate, and the certificate will be stored in a Kubernetes secret.
Please follow this guide for more details:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
I noticed this:
$ kubectl describe certificate iot-mysmartliving -n mqtt
...
Status:
  Conditions:
    ...
    Message:  Certificate issuance in progress. Temporary certificate issued.
and a related line in the docs:
https://docs.cert-manager.io/en/latest/tasks/issuing-certificates/index.html?highlight=gce#temporary-certificates-whilst-issuing
They explain that the two existing certificates are generated for some compatibility, but they are not valid until the issuer has done its work.
So that suggests that the issuer is not properly set up.
Edit: yes, that was it. The DNS challenge was failing; the debug command that helped was
kubectl describe challenge --all-namespaces=true
More generally,
kubectl describe clusterissuer,certificate,order,challenge --all-namespaces=true
According to the documentation, cafile is for something else (trusted root certificates), and it would probably be more correct to use capath /etc/ssl/certs on most systems.
You can follow this guide if you are on the Windows operating system: tls.
The article is about how to enable Mosquitto and its clients to use the TLS protocol.
Establishing a secure TLS connection to the Mosquitto broker requires key and certificate files. Creating all these files with the correct settings is not the easiest thing, but it is rewarded with a secure way to communicate with the MQTT broker.
If you want to use TLS certificates you've generated using the Let's Encrypt service, you need to be aware that current versions of Mosquitto never update listener settings while running, so when you regenerate the server certificates you will need to completely restart the broker.
If you use DigitalOcean Kubernetes, try to follow this instruction: ca-ninx. You can use cert-manager and the ingress-nginx controller; they will work like certbot.
Another solution is to create the certificate locally on your machine, upload it into a Kubernetes secret, and use that secret on the ingress.
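A minimal sketch of that second approach (the secret name, file names and hostname are placeholders). First create the secret from the locally generated files:
kubectl create secret tls my-app-tls --cert=fullchain.pem --key=privkey.pem
Then reference it in the Ingress spec:
spec:
  tls:
  - hosts:
    - my-app.example.com
    secretName: my-app-tls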