My Kubernetes cluster has 2 applications.
A deployment connecting to an external API over https:// - let's call it Fetcher
A proxy service which terminates the HTTPS request to inspect the headers for rate limiting - called Proxy
The deployment uses the mentioned proxy; picture the following architecture:
Fetcher deployment <-- private network / Kubernetes --> Proxy <-- Internet --> external API
Before I moved to Kubernetes, this was solved by creating a self-signed certificate and a certificate authority (CA) to trust, and placing them on the Fetcher and the Proxy. The certificate simply contained the Docker IP address as a SAN:
X509v3 Subject Alternative Name:
DNS:example.com, DNS:www.example.com, DNS:mail.example.com, DNS:ftp.example.com, IP Address:192.168.99.100, IP Address:192.168.56.1, IP Address:192.168.2.75
However, I can't do this in Kubernetes, can I? The IP addresses of both the deployment and the service are not guaranteed; the IPs could change. I am using Kubernetes' CoreDNS; could I add the DNS names to the certificate instead? I don't know enough about SSL/certificates to tell.
How can I create a certificate and CA in Kubernetes so that the certificate presented by the Proxy is trusted, via a custom certificate authority, by the Fetcher?
If you expose the proxy deployment via a Service, then by default it will be assigned a ClusterIP, which stays stable even as the IPs of the pods running the proxy change over time. You will want to generate a cert with an IP SAN corresponding to the ClusterIP of the Service (and, since you are running CoreDNS, you can also include the Service's DNS name as a DNS SAN), rather than any of the pod IPs. Check out the official docs on the Service concept.
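For example, mirroring the cfssl flow used elsewhere in this thread, the hosts for the Proxy's certificate could include both the Service's stable DNS name and its ClusterIP (the names and IP below are assumptions; substitute your own):
cat <<EOF | cfssl genkey - | cfssljson -bare proxy
{
  "hosts": [
    "proxy",
    "proxy.default.svc",
    "proxy.default.svc.cluster.local",
    "10.96.0.20"
  ],
  "CN": "proxy.default.svc.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
The Fetcher then only needs your CA certificate in its trust store, exactly as in your Docker setup.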
I am using an Ingress with Google-managed SSL certs, mostly similar to what is described here:
https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate
However, my backend service is a gRPC service using HTTP2. According to the same documentation, if I am using HTTP2 my backend needs to be "configured with SSL".
This sounds like I need a separate set of certificates for my backend service to configure it with SSL.
Is there a way to use the same Google managed certs here as well?
What are my other options here? I am using Google-managed certs for the Ingress precisely so that I don't have to manage any certs on my own; if I then use self-signed certificates for my service, that kind of defeats the purpose.
I don't think it's required to create SSL certs for the backend services if you are terminating HTTPS at the LB level. You can attach your certs at the LB level and the path to the backend will be HTTPS > HTTP (TLS terminated at the load balancer, plain HTTP to the backend).
You might need to create a new SSL/TLS cert if your ingress controller's ConfigMap pins different ssl-protocols (TLSv1.2, TLSv1.3) or cipher suites - e.g. when you are using the Nginx ingress controller, Kong, etc.
If you are looking for end-to-end HTTPS traffic, you definitely need to create a cert for the backend service.
You can also create/manage a managed certificate or a custom cert with cert-manager as a Kubernetes Secret and mount it into the deployment, which the service then uses; in that case there is no need to manage or create the certs yourself. The Ingress will pass the HTTPS request through to the service directly.
In this case, it will be an end-to-end HTTPS setup.
Update:
"Note: To ensure the load balancer can make a correct HTTP2 request to your backend, your backend must be configured with SSL. For more information on what types of certificates are accepted, see Encryption from the load balancer to the backends."
End-to-end TLS seems to be a requirement for HTTP2.
This is my site, https://findmeip.com; it's running on HTTP2 and terminating SSL/TLS at the Nginx level only.
It's definitely good to go with the suggested practice, so you can also use the ESP option from Google, setting up a GKE Ingress + ESP + gRPC stack.
https://cloud.google.com/endpoints/docs/openapi/specify-proxy-startup-options?hl=tr
If you don't want to use ESP, see the suggestion above:
"You can mount a certificate into the deployment, which the service then uses; in that case there is no need to manage or create the certs yourself." In other words, cert-manager will create/manage/renew the SSL/TLS cert on your behalf as a Kubernetes Secret, which the service then uses.
Google Managed Certificates can only be used for the frontend portion of the load balancer (i.e. client to LB). If you need encryption from the LB to the backends, you will have to use self-signed certificates or some other way to store certificates on GKE as Secrets, and configure the Ingress to connect to the backend using these Secrets.
Like this: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#setting_up_https_tls_between_client_and_load_balancer
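For reference, GKE picks the load-balancer-to-backend protocol from the Service's app-protocols annotation, so a TLS/HTTP2 backend can be declared roughly like this (a sketch; the service name and ports are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend
  annotations:
    # Tell the GKE load balancer to use HTTP/2 (over TLS) towards this backend
    cloud.google.com/app-protocols: '{"grpc-port":"HTTP2"}'
spec:
  type: NodePort   # GKE Ingress backends are typically NodePort (or NEG-backed)
  selector:
    app: grpc-backend
  ports:
  - name: grpc-port
    port: 443
    targetPort: 50051   # hypothetical gRPC server port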
When I checked the definition of WebhookClientConfig in the Kubernetes API, I found comments like this:
// `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate.
// If unspecified, system trust roots on the apiserver are used.
// +optional
CABundle []byte `json:"caBundle,omitempty" protobuf:"bytes,2,opt,name=caBundle"`
in WebhookClientConfig
I'd like to know: what exactly are the "system trust roots"?
And I'm afraid the internal signer for the Kubernetes CSR API is not one of them.
It is good practice to use secure network connections. A webhook endpoint in Kubernetes is typically an endpoint in a private network. A custom private CA can be used to issue the webhook's TLS certificate, with its bundle supplied as the CABundle, to achieve a secure connection within the cluster. See e.g. contacting the webhook.
Webhooks can either be called via a URL or a service reference, and can optionally include a custom CA bundle to use to verify the TLS connection.
This CABundle is optional. See also service reference for how to connect.
If the webhook is running within the cluster, then you should use service instead of url. The service namespace and name are required. The port is optional and defaults to 443. The path is optional and defaults to "/".
Here is an example of a mutating webhook configured to call a service on port 1234 at the subpath "/my-path", verifying the TLS connection against the ServerName my-service-name.my-service-namespace.svc using a custom CA bundle (a sketch along the lines of the Kubernetes docs; the caBundle value is elided):
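apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: my-webhook-config
webhooks:
- name: my-webhook.example.com
  clientConfig:
    caBundle: <base64-encoded PEM bundle of the custom CA>   # elided
    service:
      namespace: my-service-namespace
      name: my-service-name
      path: /my-path
      port: 1234
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
    scope: "Namespaced"
  admissionReviewVersions: ["v1"]
  sideEffects: None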
Sorry for my bad English but I don't know how to solve my problem.
So...
Introduction:
I have 2 microservices (I called them gRPCClient and gRPCServer, although it doesn't matter what exactly they are). They need to communicate via TLS. Without Kubernetes, everything is quite simple: I create my CA via cfssl in a docker container, then I get the root certificate from the CA and put it into the trust store of my gRPC applications (I do this in the Dockerfile), so that any certificate signed by my CA passes verification.
Now Kubernetes enters the game. I'm playing locally with minikube: I create a local cluster with "minikube start" on a Mac (maybe this is important, I don't know...).
Problem:
How will this flow work with Kubernetes? As I understand it, there is already a CA inside Kubernetes (correct me if that is not so). I read many articles, but I really didn't understand them. I tried the examples from this article: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
Step by step:
Create a signing request
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "192.0.2.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
The first thing I did not understand was the hosts. For example, is my-svc.my-namespace.svc.cluster.local the full name of my service (I mean the Service in Kubernetes, as in kind: Service)? I have it in the namespace "dev" and its name is user-app-service. Should I then specify user-app-service.dev.svc.cluster.local, or just user-app-service? Or is there some kind of command to get the full name of the service?
192.0.2.24 - as I understand it, this is the IP of the service. It is also unclear whether it is mandatory to specify it, or whether the name of the service alone is enough. And what if I have clusterIP: None set? Then I don't have an IP for it at all.
my-pod.my-namespace.pod.cluster.local - should I specify this? If I have several pods, should I list them all? Then the problem is the dynamics: pods are recreated, deleted, and added, and I would need to send a new signing request each time. The same questions I asked about the service apply to the "my-pod" and "namespace" parts. Is it possible to see the full name of a pod with all this data?
10.0.34.2 - the pod's IP. The same question as for the service IP.
I tried to specify the host and CN as my service name "user-app-service" (as if I were working without Kubernetes). I created a CSR and a key. Then I followed the remaining steps and created a CertificateSigningRequest object in Kubernetes:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Then I approved the request and received a certificate.
Further, for security, I need to store the key and certificate in Secrets and then read them in the container (for the purposes of the test, I just put them into the container in the Dockerfile, hard-coded); this is on the gRPC server side. I ran the deployment and created a client in golang, specifying config := &tls.Config{} in the code so that it would pull the trusted certificates from the system itself; I thought Kubernetes has a CA, but did not find in the docs how to get its certificate. I thought Kubernetes adds it to all containers itself. But I got the error: Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". How should all this work? Where can I get a CA certificate from Kubernetes? And do I then need to add it to each container by hand in the Dockerfile? Or is this the wrong tactic, and is there some kind of automation from Kubernetes?
I found another way: deploy cfssl (https://hub.docker.com/r/cfssl/cfssl/) on Kubernetes and work with it as if there were no Kubernetes (I have not tried this method yet).
How do I put all this together into a working system? Which options should I use, and why? Maybe there are some complete articles. I wrote a lot, but I hope it's clear. I really need the help.
I am going to break down my answer into a couple of parts:
Kubernetes Services and DNS Discovery
In general, it is recommended to put a Service in front of a Deployment that manages pods in Kubernetes. The Service creates a stable DNS and IP endpoint for pods that may be deleted and assigned a different IP address when recreated. DNS service discovery is automatically enabled with a ClusterIP-type Service and follows the format <service name>.<kubernetes namespace>.svc.<cluster domain>, where the cluster domain is usually cluster.local. This means that we can use the auto-created DNS name and the assigned ClusterIP in the altnames for our certificate.
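For example, for the Service user-app-service in namespace dev mentioned in the question, the FQDN is user-app-service.dev.svc.cluster.local; you can verify it from inside the cluster with a throwaway pod (a sketch):
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup user-app-service.dev.svc.cluster.local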
Kubernetes Internal CA
Kubernetes does have an internal CA along with API methods to post CSRs and have those CSRs signed by the CA; however, I would not use the internal CA for securing microservices. The internal CA is primarily used by the kubelet and other internal cluster processes to authenticate to the Kubernetes API server. There is no functionality for auto-renewal, and I think the cert will always be signed for 30 days.
Kubernetes-native Certificate Management
You can install and use cert-manager to have the cluster automatically create and manage certificates for you using custom resources. They have excellent examples on their website, so I would encourage you to check that out if it is of interest. You should be able to use the CA Issuer type and create Certificate resources that materialize a certificate as a Kubernetes Secret. For the altnames, refer to the certificate generation steps in the manual section of my response below.
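As a sketch of what that looks like (the issuer name is an assumption; it presupposes a CA Issuer already created from your cfssl root):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpcserver
  namespace: default
spec:
  secretName: grpcserver-tls      # cert-manager writes the key/cert into this Secret
  issuerRef:
    name: my-ca-issuer            # hypothetical CA Issuer
    kind: Issuer
  commonName: grpcserver.default.svc.cluster.local
  dnsNames:
  - grpcserver
  - grpcserver.default.svc
  - grpcserver.default.svc.cluster.local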
Manually Create and Deploy Certificates
You should be able to achieve the same result as your "without Kubernetes" approach using cfssl:
generate CA using cfssl
add CA as trusted in image (using your Dockerfile approach)
create Kubernetes Service (for example purposes I will use kubectl create)
$ kubectl create service clusterip grpcserver --tcp=8000
describe the created Kubernetes Service; note the IP will most likely be different in your case
$ kubectl describe service/grpcserver
Name: grpcserver
Namespace: default
Labels: app=grpcserver
Annotations: <none>
Selector: app=grpcserver
Type: ClusterIP
IP: 10.108.125.158
Port: 8000 8000/TCP
TargetPort: 8000/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
generate the certificate for gRPCServer with a CN of grpcserver.default.svc.cluster.local and the following altnames:
grpcserver
grpcserver.default.svc
grpcserver.default.svc.cluster.local
10.108.125.158
generate the client certificate with cfssl
put both certificates into Secret objects
kubectl create secret tls server --cert=server.pem --key=server.key
kubectl create secret tls client --cert=client.pem --key=client.key
mount the secrets into the pod spec, as sketched below
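A minimal sketch of that last step (the image name and mount path are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: grpcserver
  labels:
    app: grpcserver
spec:
  containers:
  - name: grpcserver
    image: example/grpcserver:latest   # hypothetical image
    ports:
    - containerPort: 8000
    volumeMounts:
    - name: tls
      mountPath: /etc/tls        # the server reads tls.crt / tls.key from here
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: server         # the Secret created above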
There is a lot of boilerplate work that you need to do with this bespoke approach. If you have the option, I would suggest exploring a service mesh such as Istio or Linkerd to secure communication between microservices with TLS in Kubernetes.
We have 10 different Kubernetes pods which run inside a private VPN; these pods are HTTP serving endpoints (not HTTPS). But these services interact with HTTPS serving endpoints. Logically, to call an HTTPS serving endpoint from an HTTP serving pod, trust in the SSL server certificate is required. Hence we decided to store the SSL certificates inside each HTTP service pod to make calls to the HTTPS serving pods.
I am wondering: are there any alternative approaches for managing SSL certificates across different pods in a Kubernetes cluster? How about kubeadm for K8s certificate management... any suggestions?
This is more of a general SSL certificate question rather than specific to Kubernetes.
If the containers/pods providing the HTTPS endpoint already have their SSL correctly configured, and the SSL certificate you are using was purchased/generated from a known, trusted CA (like Let's Encrypt or any of the known, trusted certificate companies out there), then your other container apps making connections to your HTTPS endpoint serving pods need nothing special stored in them.
The only exception is if you have your own private CA, have generated certificates from it internally, and are installing them in your HTTPS serving containers (or if you are generating self-signed certs). Your pods/containers connecting to the HTTPS endpoints would then need to know about the CA certificate. Here is a Stack Overflow question/answer that deals with this scenario:
How do I add a CA root certificate inside a docker image?
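On Debian/Ubuntu-based images, the pattern from that answer boils down to two Dockerfile lines (the file name is hypothetical):
# Copy your private CA's root certificate into the system trust store
COPY my-private-ca.crt /usr/local/share/ca-certificates/my-private-ca.crt
# Rebuild the trust store so TLS clients in the container trust the CA
RUN update-ca-certificates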
Lastly, there are better patterns to manage SSL in containers and container schedulers like Kubernetes. It all depends on your design/architecture.
Some general ideas:
Terminate SSL at a load balancer before traffic hits your pods. The load balancer then handles the traffic from itself to the pods as HTTP, and your clients terminate SSL at the Load Balancer. (This doesn't really tackle your specific use case though)
Use something like Hashicorp Vault as an internal CA, and use automation around this product and Kubernetes to manage certificates automatically.
Use something like cert-manager by jetstack to manage SSL in your kubernetes environment automatically. It can connect to a multitude of 'providers' such as letsencrypt for free SSL. https://github.com/jetstack/cert-manager
Hope that helps.
I am trying to set up Traefik on Kubernetes with Let's Encrypt enabled. I managed yesterday to retrieve the first SSL certificate from Let's Encrypt but am a little bit stuck on how to store the SSL certificates.
I am able to create a Volume to store the Traefik certificates, but that would mean I am limited to a single replica (with multiple replicas I am unable to retrieve a certificate, since validation fails most of the time because the volume is not shared).
I read that Traefik is able to use something like Consul, but I am wondering if I have to set up/run a complete Consul cluster just to store the fetched certificates.
You can store the certificate in a Kubernetes Secret and reference this Secret in your Ingress:
spec:
  tls:
  - secretName: testsecret
The Secret has to be in the same namespace the Ingress is running in.
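Creating that Secret from the fetched certificate and key looks like this (file names are hypothetical):
$ kubectl create secret tls testsecret --cert=tls.crt --key=tls.key --namespace=<ingress namespace>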
See also https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress
You can set up the Ingress with a controller and obtain the SSL certificate from Let's Encrypt.
You can use a cluster issuer to manage the SSL certificates and reference the TLS certificate in the Ingress. You can also use a different ingress controller such as nginx, or a service mesh such as Istio.
For more details you can check: https://docs.traefik.io/user-guide/kubernetes/
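To illustrate the cluster issuer suggestion, a cert-manager ClusterIssuer for Let's Encrypt might look like this (a sketch; the email and ingress class are assumptions):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com           # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-account-key    # Secret holding the ACME account key
    solvers:
    - http01:
        ingress:
          class: traefik               # solve challenges via the Traefik ingress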