TLS setup on K8S Ingress with Traefik

I have a setup that is not much different from the user guide's for use with k8s. For some reason I can only access http://app.minikube and not https://app.minikube.
Can someone look at my setup and see what I am obviously missing?
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - name: http
    port: 80
    targetPort: 7777
  selector:
    app: myapp
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: app.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: http
  tls:
  - secretName: mytls
FYI, according to the Traefik user guide, the hosts definition in tls is unneeded, which is why I left it out:
"The field hosts in the TLS configuration is ignored. Instead, the domains provided by the certificate are used for this purpose. It is recommended to not use wildcard certificates as they will match globally."

You're missing the hosts section:
tls:
- hosts:
  - my-host.example.com
  secretName: my-secret
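For the secretName reference to work, a TLS secret with that name must also exist in the same namespace as the Ingress. A minimal sketch of creating one, reusing the mytls name from the question and assuming certificate and key files named tls.crt and tls.key (placeholders):

# Create the TLS secret the Ingress refers to (file names are placeholders)
kubectl create secret tls mytls --cert=tls.crt --key=tls.key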

Related

How can I use Cert manager letsencrypt-prod in my kubernetes service?

I have 4 YAML files:
Deployment.yaml
Service.yaml
Ingress.yaml
issuer.yaml
I want to use letsencrypt-prod to issue a certificate for my service, but it doesn't work.
When I check whether the ingress and the issuer are working, both of them look fine:
kubectl get ing
kubectl get issuer
But when I run:
kubectl get cert
the cert has not become ready in 2 days. The certificate is never bound to mandrakee.xyz, and mandrakee.xyz still looks not secure. How can I make my website secure via cert-manager?
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: httpapi-host
        image: jmalloc/echo-server
        imagePullPolicy: Always
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: echo-service
spec:
  ports:
  - name: http-port
    port: 80
    targetPort: 8080
  selector:
    app: echo-server
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: ambassador
    cert-manager.io/issuer: letsencrypt-prod
  name: test-ingress
spec:
  tls:
  - hosts:
    - mandrakee.xyz
    secretName: letsencrypt-prod
  rules:
  - host: mandrakee.xyz
    http:
      paths:
      - backend:
          service:
            name: echo-service
            port:
              number: 80
        path: /
        pathType: Prefix
issuer.yaml:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ykaratoprak#sphereinc.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        digitalocean:
          tokenSecretRef:
            name: digitalocean-dns
            key: ce28952b5b4e33ea7d98de190f3148a7cc82d31f030bde966ad13b22c1abc524
If you have set up your issuer correctly, which you have assured us you have, you will see in your namespace a pod belonging to cert-manager. This pod validates that the server requesting the certificate resolves to the DNS record.
In your case, you would need to point your DNS towards your ingress.
If this is done successfully, the next stage of debugging is to validate that both 443 and 80 can be reached. The validation pod created by cert-manager uses port 80 to validate the communication. A common mistake is to assume that only port 443 will be used for SSL and to disable port 80 for security reasons, only to find out later that Let's Encrypt can't validate the hostname without port 80.
Otherwise, the common scenario is that cert-manager is installed in the cert-manager namespace, so you should check the logs of the manager there. They are limited and can sometimes be cryptic when you are looking for a remedy.
To find the direct error, the pod spawned by cert-manager in the namespace where you deployed the ingress is a good place to focus.
A test I would run is to set up the ingress with both 80 and 443. If you open your domain in a browser, you should get some invalid generic Kubernetes certificate on port 443 and just "Not Found" on port 80. If this is successful, it rules out the limitation mentioned before.
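As a concrete way to trace where issuance gets stuck, cert-manager leaves a chain of resources you can inspect. A sketch of the usual debugging commands, assuming the default cert-manager install namespace and the resource names from the manifests above:

# Follow the issuance chain: Certificate -> CertificateRequest -> Order -> Challenge
kubectl describe certificate letsencrypt-prod
kubectl get certificaterequest,order,challenge
kubectl describe challenge

# Logs of the controller itself
kubectl logs -n cert-manager deploy/cert-manager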

GCP Health Checks with SSL enabled

I'm kind of new to Kubernetes and I'm trying to improve one of the systems we have here.
The application is developed using Spring Boot and until now it was using HTTP (port 8080) without any encryption. The system requirement is to enable end-to-end encryption for all data in transit, so here is the problem.
Currently, we have a GCE Ingress with TLS enabled using Let's Encrypt to provide the certificates at the cluster entrance. This is working fine. Our Ingress has some path rules to redirect the traffic to the correct microservice, and those microservices are not using TLS for communication.
I managed to create a self-signed certificate and embedded it inside the WAR, and this works just fine on the local machine (with certificate validation disabled). When I deploy this on GKE, the GCP health check and the Kubernetes probes do not work at all (I can't see any communication attempt in the application logs).
When I try to switch both the backend and the health check on GCP to HTTPS, they don't show any error, but after some time they quietly switch back to HTTP.
Here are my YAML files:
admin-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: admin-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: admin
  ports:
  - port: 443
    targetPort: 8443
    name: https
    protocol: TCP
admin-deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "admin"
  namespace: "default"
  labels:
    app: "admin"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "admin"
  template:
    metadata:
      labels:
        app: "admin"
    spec:
      containers:
      - name: "backend-admin"
        image: "gcr.io/my-project/backend-admin:X.Y.Z-SNAPSHOT"
        livenessProbe:
          httpGet:
            path: /actuator/health/liveness
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 8
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /actuator/health/readiness
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 8
          periodSeconds: 30
        env:
        - name: "FIREBASE_PROJECT_ID"
          valueFrom:
            configMapKeyRef:
              key: "FIREBASE_PROJECT_ID"
              name: "service-config"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "admin-etu-vk1a"
  namespace: "default"
  labels:
    app: "admin"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "admin"
    apiVersion: "apps/v1"
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 80
ingress.yaml
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-ingress-addr
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - hosts:
    - my-domain.com
    secretName: mydomain-com-tls
  rules:
  - host: my-domain.com
    http:
      paths:
      - path: /admin/v1/*
        backend:
          serviceName: admin-service
          servicePort: 443
status:
  loadBalancer:
    ingress:
    - ip: XXX.YYY.WWW.ZZZ
Reading this document from GCP, I understood that the load balancer is compatible with self-signed certificates.
I would appreciate any insight or new directions you guys can provide.
Thanks in advance.
EDIT 1: I've added the ingress YAML file, which may help with a better understanding of the issue.
EDIT 2: I've updated the deployment YAML with the solution I found for liveness and readiness probes (scheme).
EDIT 3: I've found the solution for GCP health checks using an annotation on the Service declaration. I will put all the details in the answer to my own question.
Here is what I found on how to fix the issue.
After reading a lot of documentation related to Kubernetes and GCP, I found a document on GCP explaining how to use annotations on the Service declaration. Take a look at lines 7-8:
---
apiVersion: v1
kind: Service
metadata:
  name: admin-service
  namespace: default
  annotations:
    cloud.google.com/app-protocols: '{"https":"HTTPS"}'
spec:
  type: NodePort
  selector:
    app: iteam-admin
  ports:
  - port: 443
    targetPort: 8443
    name: https
    protocol: TCP
This hints GCP to create the backend service and health check using HTTPS, and everything will work as expected.
Reference: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#https_tls_between_load_balancer_and_your_application
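A quick way to confirm the annotation took effect is to check which protocol GCP recorded on the generated backend service. A sketch using the gcloud CLI; the backend service name below is a placeholder, since GKE generates these names:

# List the backend services GKE created for the ingress
gcloud compute backend-services list

# Check the protocol of the one belonging to your service (name is a placeholder)
gcloud compute backend-services describe k8s-be-30080--0123456789abcdef --global --format="value(protocol)"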

Why Traefik 2.2 & Let's Encrypt does not support new annotations?

I have set up Traefik 2.2 in my self-managed Kubernetes cluster with Let's Encrypt support.
So far everything works, but the IngressRoute configuration is, in my eyes, still clumsy. It only works if I define two IngressRoutes: one for HTTP with a redirect middleware to HTTPS, and one for HTTPS. So my objects look like this:
# Middleware for redirect http -> https
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
spec:
  redirectScheme:
    scheme: https
# IngressRoute http for a simple whoami service
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: whoami-notls
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`mydomain.foo.com`)
    kind: Rule
    services:
    - name: whoami
      port: 8080
    # redirect http to https
    middlewares:
    - name: https-redirect
# IngressRoute https
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: whoami-tls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`mydomain.foo.com`)
    kind: Rule
    services:
    - name: whoami
      port: 8080
  tls:
    certResolver: default
Is there not an easier way to simply tell Traefik that my service, which is listening on port 8080, should be redirected to HTTPS in any case? Why do I need two separate IngressRoutes in my setup?
In the announcements for Traefik 2.2 there was something like this:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: foo
  namespace: bar
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
    traefik.ingress.kubernetes.io/router.middlewares: redirect-http@kubernetescrd
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: ""
        backend:
          serviceName: service1
          servicePort: 80
It looks very simple, but this did not work for me: Traefik was not recognizing this Ingress configuration.
With the help of the Traefik.io team in this discussion, I have now solved the problem.
To use Traefik annotations on an Ingress, make sure that in your deployment object you have added the kubernetesingress provider:
...
spec:
  containers:
  - args:
    - --api
    ...
    - --providers.kubernetescrd=true
    - --providers.kubernetesingress=true
    ...
For a global redirect from HTTP to HTTPS you can also configure this in your Traefik deployment object:
# permanent redirecting of all requests on http (80) to https (443)
- --entrypoints.web.http.redirections.entryPoint.to=websecure
- --entrypoints.websecure.http.tls.certResolver=default
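If you also want the redirect to force the https scheme explicitly and to be issued as a permanent redirect, the same entry point accepts two more flags; this is a sketch based on Traefik v2's static configuration options, not part of the original setup:

# optional: force the https scheme and make the redirect permanent (308)
- --entrypoints.web.http.redirections.entryPoint.scheme=https
- --entrypoints.web.http.redirections.entryPoint.permanent=true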
Now you can configure your ingress in an easy way:
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: myingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
spec:
  rules:
  - host: example.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: whoami
          servicePort: 80
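A quick way to verify the global redirect from outside the cluster (a sketch; example.foo.com is the placeholder host from the example above):

# Expect a 3xx response whose Location header points at the https:// URL
curl -sI http://example.foo.com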
See also my latest blog post.

Which Kubernetes ingress "wins" (tls and multiple ingresses for same host)?

Assume I have two ingresses, ingress-a and ingress-b, for the same host but with different paths:
ingress-a:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: app-a
  namespace: namespace-a
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - backend:
          serviceName: app-a
          servicePort: 8080
        path: /path-a
  tls:
  - hosts:
    - myhost.com
    secretName: tls-a
ingress-b:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: app-b
  namespace: namespace-b
spec:
  rules:
  - host: myhost.com
    http:
      paths:
      - backend:
          serviceName: app-b
          servicePort: 8080
        path: /path-b
  tls:
  - hosts:
    - myhost.com
    secretName: tls-b
Now I need to update the certificate. Assume I create the new secret as tls-new but only update ingress-a to point to it. Which of the two ingresses would win?
I guess I should simply overwrite the existing secret, but I am trying to understand how the rules for ingresses work in the above scenario, where two different TLS secrets are referenced for the same host.
The NGINX and NGINX Plus Ingress Controller for Kubernetes has support for mergeable Ingress types.
A master is declared using nginx.org/mergeable-ingress-type: master. A master processes all configuration at the host level, which includes the TLS configuration and any annotations that apply to the complete host. There can only be one ingress resource on a unique host that contains the master value, and paths cannot be part of the master's ingress resource.
A minion is declared using nginx.org/mergeable-ingress-type: minion. A minion is used to append different locations to an ingress resource with the master value. TLS configurations are not allowed on minions. Multiple minions can be applied per master as long as they do not have conflicting paths; if a conflicting path is present, the path defined on the oldest minion is used.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress-master
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "master"
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress-coffee-minion
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/mergeable-ingress-type: "minion"
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
A minion cannot have TLS; only the master can have TLS, so you change the TLS configuration in the master.
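To see for yourself which certificate the controller actually serves for the shared host, you can ask it directly over TLS; a sketch, where myhost.com is the host from the question and must resolve to the ingress controller:

# Print the subject and validity of the certificate presented for this SNI name
openssl s_client -connect myhost.com:443 -servername myhost.com </dev/null 2>/dev/null \
| openssl x509 -noout -subject -dates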

GKE, NGINX ingress, HTTPS, and certificates

I have set up a Kubernetes cluster on GCP/GKE and it's all working well except for one thing: when I access the external IP for the service, the (default?) "Kubernetes Ingress Controller Fake Certificate" is served.
I am trying to use the NGINX ingress (https://kubernetes.github.io/ingress-nginx/) and have followed what I believe are the correct instructions for associating a TLS secret with the Ingress. For example:
https://estl.tech/configuring-https-to-a-web-service-on-google-kubernetes-engine-2d71849520d
https://kubernetes.github.io/ingress-nginx/user-guide/tls/
I have created a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: default
data:
  tls.crt: [removed]
  tls.key: [removed]
type: kubernetes.io/tls
And I associated that secret (which I can confirm is applied correctly and can see in the cluster config) with the Ingress like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
  backend:
    serviceName: example-service
    servicePort: 80
  tls:
  - secretName: example-tls
From the documentation I feel that this should work (but, barring a bug, I am obviously mistaken!).
I've also seen some documentation about requiring target proxies for HTTPS. Perhaps that is the way I should be doing this?
Many thanks for your help in advance.
Cheers,
Ben
PS: This is my load balancer configuration:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: [removed]
  sessionAffinity: ClientIP
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
Edit 1:
Looking at my Ingress I can see this:
➜ gke git:(develop) ✗ kubectl describe ing example-tls-ingress
Name:             example-tls-ingress
Namespace:        default
Address:          [removed]
Default backend:  example-webapp-service:80 ([removed])
TLS:
  example-tls terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     example-webapp-service:80 ([removed])
So it looks like the secret is picked up.
And this makes me think that there is a difference between Ingress-terminated TLS and Load Balancer-terminated TLS?
You can just refer to this Stack Overflow post.
You need to install the Jetstack cert-manager and create a ClusterIssuer/Issuer along with a Certificate in which you pass the domain name/hostname; cert-manager will then automatically create the secret for you, with the name you specified in the Certificate.
That secret then has to be referenced under tls in the Ingress rule.
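A minimal sketch of such a Certificate, assuming an Issuer named letsencrypt-prod already exists in the namespace and reusing the example-tls secret name from the question (hostname is a placeholder):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: default
spec:
  secretName: example-tls     # cert-manager writes the signed cert and key here
  dnsNames:
  - example.my-domain.com     # placeholder hostname
  issuerRef:
    name: letsencrypt-prod    # placeholder issuer name
    kind: Issuer

Once the Certificate reports Ready, the existing secretName reference under tls in the Ingress should pick up the secret and the fake certificate should disappear.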