How to do TLS between microservices in Kubernetes?

Sorry for my bad English, but I don't know how to solve my problem.
Introduction:
I have two microservices (I call them gRPCClient and gRPCServer, although it doesn't matter what exactly they are). They need to communicate via TLS. Without Kubernetes, everything is quite simple: I create my CA via cfssl in a Docker container, then I get the root certificate from the CA and add it to the trust store of my gRPC applications (I do this in the Dockerfile), so that any certificate signed by my CA passes verification.
Now Kubernetes enters the game. I'm experimenting locally with minikube: I create a local cluster with "minikube start" on a Mac (maybe this is important, I don't know...).
Problem:
How will this flow work with Kubernetes? As I understand it, there is already a CA inside Kubernetes (correct me if this is not so). I read many articles, but I really didn't understand them. I tried the examples from this article: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
Step by step:
Create a certificate signing request:
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "192.0.2.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
The first thing I did not understand was the hosts. For example, is my-svc.my-namespace.svc.cluster.local the full name of my Service (I mean the Service in Kubernetes, as in kind: Service)? I have it in the namespace "dev" and its name is user-app-service. Should I then specify user-app-service.dev.svc.cluster.local, or just user-app-service? Or is there some kind of command to get the full name of the Service?

192.0.2.24: as I understand it, this is the IP of the Service. Is it mandatory to specify it, or is the name of the Service enough? And what if I have clusterIP: None set? Then I don't have an IP for it at all.

my-pod.my-namespace.pod.cluster.local: should I specify this? If I have several pods, should I list them all? Then the problem is the dynamics: pods are recreated, deleted, and added, and I would need to send a new signing request each time. The same questions I asked about the Service apply to the "my-pod" and "namespace" parts. Is it possible to see the full name of a pod with all this data?

10.0.34.2: the pod's IP. The same questions as for the pod names apply to the pod IPs.
I tried to specify the host and CN as my service name, "user-app-service" (as if I were working without Kubernetes). I created a CSR and a key, then followed the remaining steps and created a signing request object in Kubernetes:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Then I approved it and received a certificate.
Further, for security, I should store the key and certificate in Secrets and mount them into the container (for the purposes of the test, I just hard-coded them into the container in the Dockerfile); this is on the gRPC server side. I ran the deployment and created a client in Go, specifying config := &tls.Config{} in the code so that it would pull the trusted certificates from the system itself. I thought that Kubernetes has a CA, but I did not find in the docs how to get its certificate; I assumed Kubernetes adds it to all containers itself. But I got the error: Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority".

How should all this work? Where can I get a CA certificate from Kubernetes? And do I then need to add it to each container by hand in the Dockerfile? Or is this the wrong tactic, and is there some kind of automation from Kubernetes?
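Simplified, the client side looked roughly like this (the service address is just an example):
package main

import (
	"crypto/tls"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// An empty tls.Config means Go trusts only the system root store
	// inside the container, which does not contain my CA; hence the
	// "certificate signed by unknown authority" error.
	creds := credentials.NewTLS(&tls.Config{})
	conn, err := grpc.Dial("user-app-service.dev.svc.cluster.local:8000",
		grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}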
I found another way: deploy cfssl (https://hub.docker.com/r/cfssl/cfssl/) on Kubernetes and work with it as if there were no Kubernetes (I have not tried this method yet).
How do I put all this together into a working system? Which options should I use, and why? Maybe there are some complete articles. I wrote a lot, but I hope it's clear. I really need the help.

I am going to break down my answer into a couple of parts:
Kubernetes Services and DNS Discovery
In general, it is recommended to put a Service in front of a Deployment that manages pods in Kubernetes. The Service creates a stable DNS name and IP endpoint for pods that may be deleted and assigned a different IP address when recreated. DNS service discovery is automatically enabled with a ClusterIP-type Service and uses the format <service name>.<kubernetes namespace>.svc.<cluster domain>, where the cluster domain is usually cluster.local. This means that we can use the auto-created DNS name and the assigned ClusterIP in the altnames for our certificate.
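For example, you can check name resolution from inside the cluster (the service and namespace names here match the example further below):
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
    -- nslookup grpcserver.default.svc.cluster.local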
Kubernetes Internal CA
Kubernetes does have an internal CA along with API methods to post CSRs and have those CSRs signed by the CA; however, I would not use the internal CA for securing microservices. The internal CA is primarily used by the kubelet and other internal cluster processes to authenticate to the Kubernetes API server. There is no functionality for autorenewal, and I think the cert will always be signed for 30 days.
Kubernetes-native Certificate Management
You can install and use cert-manager to have the cluster automatically create and manage certificates for you using custom resources. They have excellent examples on their website, so I would encourage you to check that out if it is of interest. You should be able to use the CA Issuer type and create Certificate resources that will create a certificate as a Kubernetes Secret (a sketch follows). For the altnames, refer to the certificate generation steps in the manual section of my response below.
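For example, assuming your CA certificate and key are stored in a Secret named ca-key-pair (all names here are illustrative, and the exact apiVersion depends on your cert-manager release), a pair of resources like this should produce a ready-to-mount Secret:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
  namespace: dev
spec:
  ca:
    secretName: ca-key-pair        # Secret holding your CA cert and key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpcserver
  namespace: dev
spec:
  secretName: grpcserver-tls       # cert-manager writes tls.crt/tls.key here
  issuerRef:
    name: ca-issuer
    kind: Issuer
  commonName: grpcserver.dev.svc.cluster.local
  dnsNames:
  - grpcserver
  - grpcserver.dev.svc
  - grpcserver.dev.svc.cluster.local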
Manually Create and Deploy Certificates
You should be able to achieve the same result with your "without Kubernetes" approach using cfssl:
generate CA using cfssl
add CA as trusted in image (using your Dockerfile approach)
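On a Debian-based image, that Dockerfile step might look like this (a sketch; the CA file name and base image are assumptions):
# Dockerfile fragment: register the CA as trusted inside the image
COPY ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates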
create Kubernetes Service (for example purposes I will use kubectl create)
$ kubectl create service clusterip grpcserver --tcp=8000
describe the created Kubernetes Service, note IP will most likely be different in your case
$ kubectl describe service/grpcserver
Name:              grpcserver
Namespace:         default
Labels:            app=grpcserver
Annotations:       <none>
Selector:          app=grpcserver
Type:              ClusterIP
IP:                10.108.125.158
Port:              8000  8000/TCP
TargetPort:        8000/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
generate a certificate for gRPCServer with a CN of grpcserver.default.svc.cluster.local and the following altnames (a cfssl sketch follows this list):
grpcserver
grpcserver.default.svc
grpcserver.default.svc.cluster.local
10.108.125.158
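That generation step with cfssl might look like this (a sketch, assuming the CA files from the first step are named ca.pem and ca-key.pem):
$ cat <<EOF | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem - | cfssljson -bare server
{
  "CN": "grpcserver.default.svc.cluster.local",
  "hosts": [
    "grpcserver",
    "grpcserver.default.svc",
    "grpcserver.default.svc.cluster.local",
    "10.108.125.158"
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF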
generate the client certificate with cfssl
put both certificates into Secret objects
kubectl create secret tls server --cert=server.pem --key=server-key.pem
kubectl create secret tls client --cert=client.pem --key=client-key.pem
mount the secret into the podspec
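A minimal sketch of that mount in the Deployment's pod template (the container name and image are assumptions):
spec:
  containers:
  - name: grpcserver
    image: my-grpcserver:latest
    volumeMounts:
    - name: tls
      mountPath: /etc/tls        # server reads tls.crt/tls.key from here
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: server         # the Secret created above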

There is a lot of boilerplate work that you need to do with this bespoke approach. If you have the option, I would suggest exploring a service mesh such as Istio or Linkerd to secure communication between microservices with TLS in Kubernetes.
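For example, with Istio installed, enforcing mutual TLS between all workloads in a namespace is a single resource, and the mesh takes care of issuing and rotating the certificates itself (a sketch; the apiVersion may vary by Istio release):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: dev
spec:
  mtls:
    mode: STRICT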

Related

How to add extra nodes to the certificate-authority-data from a self-signed k8s cluster?

I am trying to create an HA cluster with HAProxy in front of 3 master nodes.
On the proxy I am following the official documentation, High Availability Considerations/haproxy configuration. I am passing the SSL verification to the API server with the option ssl-hello-chk.
That said, I understand that in my ~/.kube/config file I am using the wrong certificate-authority-data, which I picked up from the prime master node, e.g.:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <something-something>
    server: https://ip:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <something-something>
    client-key-data: <something-something>
    token: <something-something>
I found a relevant ticket on GitHub, Unable to connect to the server: x509: certificate signed by unknown authority/onesolpark, which makes sense: I should extract the certificate-authority-data of the proxy.
In this case I assume that I should extract the certificate-authority-data from one of the certs in /etc/kubernetes/pki/, most likely apiserver.*?
Any idea on this?
Thanks in advance for your time and effort.
Okay, I managed to figure it out.
When a k8s admin decides to create an HA cluster, he should have at minimum one LB, but ideally two LBs that are both able to load-balance towards all master nodes (3, 5, etc.).
So when the user wants to send a request to the API server on one of the master nodes, the request ideally goes through a virtual IP to one of the LBs, and the LB then forwards the request to one of the master nodes.
The problem I wanted to solve is that the API server had no record of the IP of the LB(s).
As a result, the user gets the error Unable to connect to the server: x509: certificate signed by unknown authority.
The solution can be found in this relevant question: How can I add an additional IP / hostname to my Kubernetes certificate?.
The straight answer is to simply add the LB(s) to the kubeadm config file before launching the prime master node, e.g.:
apiServer:
  certSANs:
  - "ip-of-LB1"
  - "domain-of-LB1"
  - "ip-of-LB2"
  - "domain-of-LB2" # etc etc
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
But as is also mentioned there, the detailed documentation can be found here: Adding a Name to the Kubernetes API Server Certificate.
Also, if the user decides to create their own certificates instead of the default self-signed certificates (populated by k8s by default), they can add the nodes manually, as documented on the official site: Certificates.
As for the ca.crt: it is under the default dir /etc/kubernetes/pki/ca.crt (unless defined differently), or you can choose to simply copy the ~/.kube/config file for kubectl communication.
Hope this helps someone else spend less time in the future.

kubernetes authentication against the API server

I have set up a Kubernetes cluster from scratch. This just means I did not use services provided by others, but used the k8s installer itself. Previously we had other clusters, but with providers, and they give you the TLS cert and key for auth, etc. Now this cluster was set up by myself. I have access via kubectl:
$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21h
$
I also tried this, and I can add a custom key, but when I try to query via curl I get: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope.
I cannot figure out where to get the cert and key for a user to authenticate against the API with TLS auth. I have tried to understand the official docs, but I have got nowhere. Can someone help me find where those files are, or how to add or get certificates that I can use for the REST API?
Edit 1: my ~/.kube/config file looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t(...)=
    server: https://private_IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS(...)Qo=
    client-key-data: LS0(...)tCg==
It works from localhost just fine.
On the other hand, I noticed something: from localhost I can access the cluster by generating the token using this method.
Also note that for now I do not care about creating multiple roles for multiple users, etc. I just need access to the API from remote, and "default" authentication or roles are fine.
Now, when I try to do the same from remote, I get the following:
I tried using that config to run kubectl get all from remote; it runs for a while and then ends in Unable to connect to the server: dial tcp private_IP:6443: i/o timeout.
This happens because the config has private_IP, so I changed the IP to Public_IP:6443 and now get the following: Unable to connect to the server: x509: certificate is valid for some_private_IP, My_private_IP, not Public_IP.
Keep in mind that this is an AWS EC2 instance with an Elastic IP (you can think of an Elastic IP as just a public IP in a traditional setup, except that this public IP lives on your public router, which then routes requests to your actual server on the private network). For AWS fans: like I said, I cannot use the EKS service here.
So how do I get this to work with the public IP?
It seems your main problem is the TLS server certificate validation.
One option is to tell kubectl to skip the validation of the server certificate:
kubectl --insecure-skip-tls-verify ...
This obviously has the potential to be "insecure", but that depends on your use case.
Another option is to recreate the cluster with the public IP address added to the server certificate. It should also be possible to recreate only the certificate with kubeadm, without recreating the cluster; a rough sketch follows. Details about the latter two points can be found in this answer.
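The certificate-only route would look something like this (a sketch; verify the exact steps for your kubeadm version against the linked answer):
# move the old apiserver certificate out of the way
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /tmp/
# regenerate it with the public IP as an extra SAN
$ sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=Public_IP
# then restart the kube-apiserver container so it picks up the new certificate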
You need to set up RBAC for the user: define Roles and RoleBindings. Follow this link for reference: https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
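For example, to let a certificate-authenticated user list pods in the default namespace, a minimal sketch (the user name myuser is illustrative; it must match the CN of the client certificate):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: myuser                   # CN of the user's client certificate
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io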

Kubernetes custom CA and certificate between proxy service and deployment

My Kubernetes cluster has two applications:
A deployment connecting to an external API through https:// - let's call it Fetcher
A proxy service which terminates the HTTPS request to inspect the headers for rate limiting - called Proxy
The deployment uses the mentioned proxy; picture the following architecture:
Fetcher deployment <-- private network / Kubernetes --> Proxy <-- Internet --> external API
Before I moved to Kubernetes, this was solved by creating a self-signed certificate and a certificate authority (CA) to trust, and placing them on the Fetcher and the Proxy. The certificate simply contained the IP address of Docker as a SAN:
X509v3 Subject Alternative Name:
DNS:example.com, DNS:www.example.com, DNS:mail.example.com, DNS:ftp.example.com, IP Address:192.168.99.100, IP Address:192.168.56.1, IP Address:192.168.2.75
However, I can't do this in Kubernetes, can I? Since the IP addresses of both the deployment and the service are not guaranteed, the IPs could change. I am using a Kubernetes CoreDNS solution; could I add the DNS names in the certificate? I don't know enough about SSL/certificates to understand.
How can I create a certificate and CA in Kubernetes to establish trust between the certificate sent by the proxy and a custom certificate authority on the fetcher?
If you expose the proxy deployment via a service, then by default it will be assigned a ClusterIP, which will be stable even as the IPs of the pods running the proxy change over time. You will want to generate a cert with an IP SAN corresponding to the ClusterIP of the service, rather than any of the IPs of the pods. Check out the official docs regarding the "service" concept.
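To find the IP to put in that SAN (the service name proxy is an assumption):
$ kubectl get service proxy -o jsonpath='{.spec.clusterIP}'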

Emtpy "ca.crt" file from cert-manager

I use cert-manager to generate TLS certificates for my application on Kubernetes with Let's Encrypt.
It is running, and I can see "ca.crt", "tls.crt" and "tls.key" inside the container of my application (in /etc/letsencrypt/).
But "ca.crt" is empty, and the application complains about it (Error: Unable to load CA certificates. Check cafile "/etc/letsencrypt/ca.crt"). The two other files look like normal certificates.
What does that mean?
With cert-manager you typically use the nginx-ingress controller as the exposure point.
The ingress-nginx controller will create one load balancer, and you can set up your application's TLS certificate there.
There is no certificate inside the pod of cert-manager itself.
So set up nginx ingress with cert-manager; that will help manage the TLS certificate, and the certificate will be stored in a Kubernetes Secret.
Please follow this guide for more details:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
I noticed this:
$ kubectl describe certificate iot-mysmartliving -n mqtt
...
Status:
  Conditions:
    ...
    Message: Certificate issuance in progress. Temporary certificate issued.
and a related line in the docs:
https://docs.cert-manager.io/en/latest/tasks/issuing-certificates/index.html?highlight=gce#temporary-certificates-whilst-issuing
They explain that the two existing certificates are generated for compatibility reasons, but they are not valid until the issuer has done its work.
So that suggests that the issuer is not properly set up.
Edit: yes, it was. The DNS challenge was failing; the debug command that helped was
kubectl describe challenge --all-namespaces=true
More generally,
kubectl describe clusterissuer,certificate,order,challenge --all-namespaces=true
According to the documentation, cafile is for something else (trusted root certificates), and it would probably be more correct to use capath /etc/ssl/certs on most systems.
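If the application is Mosquitto, as the surrounding answers assume, the corresponding mosquitto.conf settings would be roughly (a sketch; the paths are the ones from the question):
# trust the system CA bundle instead of the empty ca.crt
capath /etc/ssl/certs
certfile /etc/letsencrypt/tls.crt
keyfile /etc/letsencrypt/tls.key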
You can follow this guide if you are on a Windows operating system: tls.
The article is about how to enable Mosquitto and its clients to use the TLS protocol.
Establishing a secure TLS connection to the Mosquitto broker requires key and certificate files. Creating all these files with the correct settings is not the easiest thing, but it is rewarded with a secure way to communicate with the MQTT broker.
It also covers the case where you want to use TLS certificates generated with the Let's Encrypt service.
Be aware that current versions of Mosquitto never update listener settings while running, so when you regenerate the server certificates you will need to completely restart the broker.
If you use DigitalOcean Kubernetes, try to follow this instruction: ca-ninx. You can use cert-manager and the ingress-nginx controller; together they will work like certbot.
Another solution is to create the certificate locally on your machine, upload it into a Kubernetes Secret, and use that Secret in the ingress.

Let's encrypt, Kubernetes and Traefik on GKE

I am trying to set up Traefik on Kubernetes with Let's Encrypt enabled. Yesterday I managed to retrieve the first SSL certificate from Let's Encrypt, but I am a little bit stuck on how to store the SSL certificates.
I am able to create a Volume to store the Traefik certificates, but that would limit me to a single replica (with multiple replicas I am unable to retrieve a certificate, since the validation usually goes wrong because the volume is not shared).
I read that Traefik is able to use something like Consul, but I am wondering if I have to set up and run a complete Consul cluster just to store the fetched certificates.
You can store the certificate in a Kubernetes Secret and reference that Secret in your Ingress:
spec:
  tls:
  - secretName: testsecret
The Secret has to be in the same namespace the Ingress is running in.
See also https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress
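Creating that Secret from the certificate files would look like this (the file names are assumptions):
$ kubectl create secret tls testsecret --cert=tls.crt --key=tls.key --namespace=<ingress namespace>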
You can set up an ingress with a controller and apply for the SSL certificate from Let's Encrypt.
You can use a ClusterIssuer to manage the SSL certificates and store the TLS certificate on the ingress. You can also use different ingress controllers, such as nginx, or use a service mesh such as Istio.
For more details you can check: https://docs.traefik.io/user-guide/kubernetes/