I built my original infrastructure around this DigitalOcean tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
Now I am trying to migrate to managing my own cert and terminating SSL at the load balancer.
With my YAML updates, the load balancer in DigitalOcean shows all nodes as unhealthy, and the URL responds with "503 Service Unavailable: No server is available to handle this request." However, the endpoint does show a secure HTTPS connection. What am I doing wrong?
My new, non-functional YAML definitions are below.
LoadBalancer
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
annotations:
service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
service.beta.kubernetes.io/do-loadbalancer-certificate-id: "**************"
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: http
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
tls:
- hosts:
- ******.com
- api.*******.com
rules:
- host: **********.com
http:
paths:
- backend:
serviceName: frontend-angular
servicePort: 80
- host: api.********.com
http:
paths:
- backend:
serviceName: backend-server
servicePort: 80
I reached out to DigitalOcean support (which is incredible). My issue was that I hadn't created the ingress-nginx controller pods. These are the two steps, listed in the tutorial, that I missed:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/provider/cloud-generic.yaml
My actual YAML definitions were correct.
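For anyone hitting the same thing, a quick sanity check that the controller actually exists (assuming the standard ingress-nginx namespace from the tutorial):

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx ingress-nginx

If the pod list is empty, the load balancer has no healthy backends to route to, which is exactly what produces the 503 and the unhealthy nodes.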
The best way is simply to use the DigitalOcean Marketplace (https://marketplace.digitalocean.com/apps/nginx-ingress-controller).
Installing it manually can cause a lot of issues due to outdated YAML files.
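If you would rather stay on the command line, the official Helm chart (assuming Helm 3 is installed) also avoids pinning to stale static manifests:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace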
Related
I'm kind of new to Kubernetes and I'm trying to improve a system we have here.
The application is developed with Spring Boot and until now used plain HTTP (port 8080) without any encryption. The requirement is to enable end-to-end encryption for all data in transit. So here is the problem.
Currently, we have a GCE Ingress with TLS enabled, using Let's Encrypt to provide certificates at the cluster entrance. This works fine. Our Ingress has some path rules to route traffic to the correct microservice, and those microservices do not use TLS for that communication.
I managed to create a self-signed certificate and embed it inside the WAR, and this works fine on my local machine (with certificate validation disabled). When I deploy it to GKE, the GCP health check and the Kubernetes probes don't work at all (I can't see any connection attempts in the application logs).
When I try to switch both the backend service and the health check in GCP to HTTPS, they don't show any error, but after some time they quietly revert to HTTP.
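For context, HTTPS inside the WAR is enabled with Spring Boot configuration along these lines (a sketch; the keystore name, type, and password are placeholders, not my real values):

server:
  port: 8443
  ssl:
    key-store: classpath:keystore.p12   # placeholder keystore bundled in the WAR
    key-store-type: PKCS12
    key-store-password: changeit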
Here are my YAML files:
admin-service.yaml
---
apiVersion: v1
kind: Service
metadata:
name: admin-service
namespace: default
spec:
type: NodePort
selector:
app: admin
ports:
- port: 443
targetPort: 8443
name: https
protocol: TCP
admin-deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "admin"
namespace: "default"
labels:
app: "admin"
spec:
replicas: 1
selector:
matchLabels:
app: "admin"
template:
metadata:
labels:
app: "admin"
spec:
containers:
- name: "backend-admin"
image: "gcr.io/my-project/backend-admin:X.Y.Z-SNAPSHOT"
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8443
scheme: HTTPS
initialDelaySeconds: 8
periodSeconds: 30
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8443
scheme: HTTPS
initialDelaySeconds: 8
periodSeconds: 30
env:
- name: "FIREBASE_PROJECT_ID"
valueFrom:
configMapKeyRef:
key: "FIREBASE_PROJECT_ID"
name: "service-config"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
name: "admin-etu-vk1a"
namespace: "default"
labels:
app: "admin"
spec:
scaleTargetRef:
kind: "Deployment"
name: "admin"
apiVersion: "apps/v1"
minReplicas: 1
maxReplicas: 3
metrics:
- type: "Resource"
resource:
name: "cpu"
targetAverageUtilization: 80
ingress.yaml
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: my-ingress-addr
kubernetes.io/ingress.class: "gce"
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
acme.cert-manager.io/http01-edit-in-place: "true"
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- hosts:
- my-domain.com
secretName: mydomain-com-tls
rules:
- host: my-domain.com
http:
paths:
- path: /admin/v1/*
backend:
serviceName: admin-service
servicePort: 443
status:
loadBalancer:
ingress:
- ip: XXX.YYY.WWW.ZZZ
Reading this document from GCP, I understood that the load balancer is compatible with self-signed certificates.
I would appreciate any insight or new directions you guys can provide.
Thanks in advance.
EDIT 1: I've added the ingress YAML file, which may help to better understand the issue.
EDIT 2: I've updated the deployment YAML with the solution I found for the liveness and readiness probes (the scheme field).
EDIT 3: I've found the solution for the GCP health checks, using an annotation on the Service declaration. I've put all the details in an answer to my own question.
Here is what I found about how to fix the issue.
After reading a lot of documentation on Kubernetes and GCP, I found a GCP document explaining how to use annotations on the Service declaration. Take a look at lines 7-8:
---
apiVersion: v1
kind: Service
metadata:
name: admin-service
namespace: default
annotations:
cloud.google.com/app-protocols: '{"https":"HTTPS"}'
spec:
type: NodePort
selector:
app: iteam-admin
ports:
- port: 443
targetPort: 8443
name: https
protocol: TCP
This hints to GCP that it should create the backend service and health check using HTTPS, and everything works as expected.
Reference: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#https_tls_between_load_balancer_and_your_application
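One way to double-check that the annotation is actually set on the Service (using my service name; the backslashes escape the dots inside the annotation key for jsonpath):

kubectl get service admin-service -o jsonpath='{.metadata.annotations.cloud\.google\.com/app-protocols}'

Once the load balancer syncs, the backend service and health check show up as HTTPS and no longer revert.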
Assume I have two ingresses, ingress-a and ingress-b, for the same host but with different paths:
ingress-a:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: app-a
namespace: namespace-a
spec:
rules:
- host: myhost.com
http:
paths:
- backend:
serviceName: app-a
servicePort: 8080
path: /path-a
tls:
- hosts:
- myhost.com
secretName: tls-a
ingress-b:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: app-b
namespace: namespace-b
spec:
rules:
- host: myhost.com
http:
paths:
- backend:
serviceName: app-b
servicePort: 8080
path: /path-b
tls:
- hosts:
- myhost.com
secretName: tls-b
Now I need to update the certificate. Assume I create the new certificate in a secret tls-new but only update ingress-a to point to it. Which of the two ingresses would win?
I guess I should simply overwrite the existing secret, but I am trying to understand how the rules for ingresses work in the above scenario, where two different TLS secrets are referenced for the same host.
The NGINX and NGINX Plus Ingress controllers for Kubernetes support mergeable Ingress types.
A master is declared using nginx.org/mergeable-ingress-type: master. A master processes all configuration at the host level, which includes the TLS configuration and any annotations applied to the complete host. There can be only one ingress resource with the master value on a given host, and paths cannot be part of that ingress resource.
A minion is declared using nginx.org/mergeable-ingress-type: minion. A minion appends additional locations to an ingress resource with the master value; TLS configuration is not allowed on minions. Multiple minions can be applied per master as long as they do not have conflicting paths; if a conflicting path is present, the path defined on the oldest minion is used.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cafe-ingress-master
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.org/mergeable-ingress-type: "master"
spec:
tls:
- hosts:
- cafe.example.com
secretName: cafe-secret
rules:
- host: cafe.example.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cafe-ingress-coffee-minion
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.org/mergeable-ingress-type: "minion"
spec:
rules:
- host: cafe.example.com
http:
paths:
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
A minion cannot have TLS; only the master can have TLS, so you change the TLS configuration in the master.
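As for simply overwriting the existing secrets: yes, that is the simplest approach, because both ingresses keep referencing the same secret names and nothing else has to change. A sketch, one command per namespace, with placeholder file paths (recent kubectl; older versions use plain --dry-run):

kubectl -n namespace-a create secret tls tls-a --cert=new-cert.pem --key=new-key.pem --dry-run=client -o yaml | kubectl apply -f -
kubectl -n namespace-b create secret tls tls-b --cert=new-cert.pem --key=new-key.pem --dry-run=client -o yaml | kubectl apply -f -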
I have set up a Kubernetes cluster on GCP/GKE and it's all working well except for one thing: when I access the external IP for the service, the (default?) "Kubernetes Ingress Controller Fake Certificate" is served.
I am trying to use the NGINX Ingress (https://kubernetes.github.io/ingress-nginx/) and have followed what I believe are the correct instructions for associating a TLS secret with the Ingress. For example:
https://estl.tech/configuring-https-to-a-web-service-on-google-kubernetes-engine-2d71849520d
https://kubernetes.github.io/ingress-nginx/user-guide/tls/
I have created a secret like this:
apiVersion: v1
kind: Secret
metadata:
name: example-tls
namespace: default
data:
tls.crt: [removed]
tls.key: [removed]
type: kubernetes.io/tls
And associated that secret (which I can confirm is applied correctly and I can see in the cluster config) with the Ingress like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
backend:
serviceName: example-service
servicePort: 80
tls:
- secretName: example-tls
From the documentation I feel that this should work (but, barring a bug, I am obviously mistaken!).
I've also seen some documentation about requiring target proxies for HTTPS. Perhaps that is how I should be doing this?
Many thanks for your help in advance.
Cheers,
Ben
PS: This is my load balancer configuration:
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
externalTrafficPolicy: Cluster
loadBalancerIP: [removed]
sessionAffinity: ClientIP
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
Edit 1:
Looking at my Ingress I can see this:
➜ gke git:(develop) ✗ kubectl describe ing example-tls-ingress
Name: example-tls-ingress
Namespace: default
Address: [removed]
Default backend: example-webapp-service:80 ([removed])
TLS:
example-tls terminates
Rules:
Host Path Backends
---- ---- --------
* * example-webapp-service:80 ([removed])
So it looks like the secret is picked up.
And this makes me think there is a difference between Ingress-terminated TLS and load balancer-terminated TLS?
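For what it's worth, this is how I check which certificate is actually served for a given hostname (the IP and hostname here are placeholders):

openssl s_client -connect [removed]:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

In my case this still reports the "Kubernetes Ingress Controller Fake Certificate" issuer.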
You can refer to this Stack Overflow post.
You need to install Jetstack's cert-manager and create a ClusterIssuer/Issuer, along with a Certificate in which you pass the domain name/hostname; cert-manager will automatically create the secret for you, with the name you specified in the Certificate.
That secret then has to be referenced in the tls section of the Ingress rule.
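A minimal sketch of the resources involved (the issuer name, email, and domain are placeholders; it assumes cert-manager is installed and an NGINX ingress class):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-tls
  dnsNames:
  - example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer

cert-manager then creates and renews the example-tls secret, which the Ingress references under tls.secretName.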
Right after enabling cert-manager on my Ingress controller, TTFB (time to first byte) increased by 200+ ms in most of the regions. Without SSL, TTFB was under 200 ms for 80% of the regions; after enabling SSL, only 30% have TTFB under 200 ms.
(screenshots: TTFB by region, without SSL and with SSL)
My Ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
kubernetes.io/tls-acme: "true"
spec:
rules:
- host: gce.wpspeedmatters.com
http:
paths:
- path: /
backend:
serviceName: wordpress
servicePort: 80
tls:
- secretName: tls-prod-cert
hosts:
- gce.wpspeedmatters.com
I switched to TLS 1.3 and was able to shave off an extra 50-150 ms!
I wrote a detailed blog post too: https://wpspeedmatters.com/tls-1-3-in-wordpress/
(screenshot: TTFB with TLS 1.3)
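For ingress-nginx, enabling TLS 1.3 comes down to the controller's ConfigMap (a sketch; the ConfigMap name and namespace depend on how the controller was installed):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
  # optional TLS 1.3 0-RTT; weigh the replay-attack trade-off before enabling
  ssl-early-data: "true"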
I have a setup that is not much different from the Traefik user guide for use with Kubernetes. For some reason I can only access http://app.minikube and not https://app.minikube.
Can someone look at my setup and see what I am obviously missing?
apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
ports:
- name: http
port: 80
targetPort: 7777
selector:
app: myapp
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: app.minikube
http:
paths:
- path: /
backend:
serviceName: myapp
servicePort: http
tls:
- secretName: mytls
FYI, according to the Traefik user guide, the hosts definition in tls is not needed, which is why I left it out:
The field hosts in the TLS configuration is ignored. Instead, the domains provided by the certificate are used for this purpose. It is recommended not to use wildcard certificates, as they will match globally.
You're missing the hosts section:
tls:
- hosts:
- my-host.example.com
secretName: my-secret
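Also make sure the mytls secret actually exists in the same namespace as the Ingress; if it does not, Traefik will fall back to its default self-signed certificate. A sketch with placeholder file paths:

kubectl create secret tls mytls --cert=tls.crt --key=tls.key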