Kubernetes Ingress not working with Traefik and TLS

I am trying to get some stuff working on K8s (1.21.0 on Ubuntu 20.04 on bare metal) and am likely missing something simple. I have installed Traefik (2.4.8) using their helm chart (9.19.1) and the following values file:
deployment:
  kind: DaemonSet
dashboard:
  enabled: true
hostNetwork: true
ports:
  web:
    port: 80
  websecure:
    port: 443
securityContext:
  capabilities:
    drop: [ALL]
    add: [NET_BIND_SERVICE]
  readOnlyRootFilesystem: true
  runAsGroup: 0
  runAsNonRoot: false
  runAsUser: 0
additionalArguments:
  - "--log.level=DEBUG"
I can ssh tunnel in and see the Traefik dashboard. I installed httpbin to have something to test against:
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: default
spec:
  selector:
    app: httpbin
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: httpbin
  namespace: default
  labels:
    app: httpbin
spec:
  containers:
    - image: kennethreitz/httpbin:latest
      name: httpbin
      ports:
        - containerPort: 80
          protocol: TCP
I created a secret with my certificate (a real *.brandseye.com cert) and an Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  tls:
    - hosts:
        - aragorn.brandseye.com
      secretName: brandseye-com-cert
  rules:
    - host: aragorn.brandseye.com
      http:
        paths:
          - path: /get
            pathType: Exact
            backend:
              service:
                name: httpbin
                port:
                  number: 8080
Now I can go to: http://aragorn.brandseye.com/get and it works. However https://aragorn.brandseye.com/get gives a 404. The correct certificate is used.
The Traefik logs seem OK:
time="2021-05-18T13:35:38Z" level=debug msg="Configuration received from provider kubernetes: {\"http\":{\"routers\":{\"test-ingress-default-aragorn-brandseye-com-get\":{\"service\":\"default-httpbin-8080\",\"rule\":\"Host(`aragorn.brandseye.com`) \\u0026\\u0026 Path(`/get`)\"}},\"services\":{\"default-httpbin-8080\":{\"loadBalancer\":{\"servers\":[{\"url\":\"http://172.20.1.9:80\"}],\"passHostHeader\":true}}}},\"tcp\":{},\"tls\":{}}" providerName=kubernetes
time="2021-05-18T13:35:38Z" level=debug msg="No entryPoint defined for this router, using the default one(s) instead: [web websecure]" routerName=test-ingress-default-aragorn-brandseye-com-get
time="2021-05-18T13:35:38Z" level=debug msg="No store is defined to add the certificate MIIGkDCCBXigAwIBAgIQCYfAPbF1vuf5b72JgcBPEDANBgkqhk, it will be added to the default store."
time="2021-05-18T13:35:38Z" level=debug msg="Adding certificate for domain(s) *.brandseye.com,brandseye.com"
time="2021-05-18T13:35:38Z" level=debug msg="No default certificate, generating one"
time="2021-05-18T13:35:38Z" level=debug msg="Added outgoing tracing middleware ping#internal" middlewareType=TracingForwarder entryPointName=traefik routerName=ping#internal middlewareName=tracing
time="2021-05-18T13:35:38Z" level=debug msg="Added outgoing tracing middleware api#internal" routerName=kube-system-traefik-dashboard-d012b7f875133eeab4e5#kubernetescrd entryPointName=traefik middlewareName=tracing middlewareType=TracingForwarder
time="2021-05-18T13:35:38Z" level=debug msg="Added outgoing tracing middleware api#internal" middlewareType=TracingForwarder entryPointName=traefik routerName=traefik-traefik-dashboard-d012b7f875133eeab4e5#kubernetescrd middlewareName=tracing
time="2021-05-18T13:35:38Z" level=debug msg="Creating middleware" entryPointName=traefik middlewareName=traefik-internal-recovery middlewareType=Recovery
time="2021-05-18T13:35:38Z" level=debug msg="Creating middleware" serviceName=default-httpbin-8080 middlewareName=pipelining middlewareType=Pipelining entryPointName=web routerName=test-ingress-default-aragorn-brandseye-com-get#kubernetes
time="2021-05-18T13:35:38Z" level=debug msg="Creating load-balancer" routerName=test-ingress-default-aragorn-brandseye-com-get#kubernetes serviceName=default-httpbin-8080 entryPointName=web
time="2021-05-18T13:35:38Z" level=debug msg="Creating server 0 http://172.20.1.9:80" serviceName=default-httpbin-8080 serverName=0 entryPointName=web routerName=test-ingress-default-aragorn-brandseye-com-get#kubernetes
time="2021-05-18T13:35:38Z" level=debug msg="Added outgoing tracing middleware default-httpbin-8080" middlewareType=TracingForwarder entryPointName=web routerName=test-ingress-default-aragorn-brandseye-com-get#kubernetes middlewareName=tracing
time="2021-05-18T13:35:38Z" level=debug msg="Creating middleware" entryPointName=web middlewareName=traefik-internal-recovery middlewareType=Recovery
time="2021-05-18T13:35:38Z" level=debug msg="Creating middleware" entryPointName=websecure middlewareName=traefik-internal-recovery middlewareType=Recovery
time="2021-05-18T13:35:38Z" level=debug msg="No default certificate, generating one"
Any ideas? Thanks.
If I look at the router details on the Traefik dashboard, it has nothing in the TLS block, which doesn't seem right:

I don't know if this will help you or not, but my configuration works well like this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: laznp-www-ingress-route
  namespace: wordpress
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`laznp.id`)
      kind: Rule
      services:
        - name: laznp-www-svc
          port: 80
  tls: {}
I use the IngressRoute kind from the Traefik CRDs; hope it helps.
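If you would rather keep the standard Ingress resource instead of switching to an IngressRoute, it may also be worth trying Traefik's router annotations, which tell the Kubernetes ingress provider which entry points the router should use and that TLS should be enabled on it. This is a hedged sketch based on the Traefik 2.x annotation names; verify them against your Traefik version:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    # Attach the router to both entry points and enable TLS on it
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  tls:
    - hosts:
        - aragorn.brandseye.com
      secretName: brandseye-com-cert
  rules:
    - host: aragorn.brandseye.com
      http:
        paths:
          - path: /get
            pathType: Exact
            backend:
              service:
                name: httpbin
                port:
                  number: 8080
```

Note that `router.tls: "true"` restricts the router to TLS traffic, so with both entry points listed the plain-HTTP URL would stop matching; splitting HTTP and HTTPS into two Ingress resources avoids that.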

Related

Traefik always sends traffic to middleware via TCP, not over TLS

We are trying to encrypt communication between the Traefik ingress and the middleware (forwardAuth), and between the ingress and the backend server.
The forwardAuth middleware redirects traffic to an authentication server that runs over HTTPS with a self-signed certificate.
In Wireshark I can see that the ingress communicates with the authentication server over plain TCP instead of TLS, while the communication between the ingress and the backend server does use TLS.
Please help: how do we enable TLS communication between the Traefik ingress and the middleware?
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: traefik-tls-1
  namespace: sample-domain1-ns
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: PathPrefix(`/api`)
      middlewares:
        - name: test-auth-https
          namespace: sample-domain1-ns
        - name: test-auth
          namespace: sample-domain1-ns
      services:
        - kind: Service
          name: sample-svc
          port: 8002
          scheme: "https"
          serversTransport: mytransport
  tls:
    secretName: domain1-tls-cert
    options:
      name: mtlsoption-ecprt
      namespace: sample-domain1-ns
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-auth-https
  namespace: sample-domain1-ns
spec:
  redirectScheme:
    scheme: https
    permanent: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-auth
  namespace: sample-domain1-ns
spec:
  forwardAuth:
    address: https://s-lb.sample-ns.svc.cluster.local:8080/auth
    tls:
      insecureSkipVerify: true
Communication between the Traefik ingress and the middleware, as well as with the backend server, should be over TLS.
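The IngressRoute above references `serversTransport: mytransport`, which is not shown in the question. For reference, backend TLS behaviour (which CA to trust, whether to skip verification) lives in a ServersTransport resource in Traefik 2.4+; the following is a hedged sketch, and the secret name is an assumption:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: mytransport
  namespace: sample-domain1-ns
spec:
  # For a self-signed backend cert, either skip verification (insecure) ...
  insecureSkipVerify: true
  # ... or, better, trust the signing CA stored in a secret:
  # rootCAsSecrets:
  #   - my-backend-ca   # hypothetical secret name
```

Note that a ServersTransport only governs Traefik-to-backend connections for routed services; the forwardAuth middleware has its own `tls` block (shown above), which is configured separately.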

GKE Ingress TLS vs jupyter TLS but not both?

I'm setting up a jupyter-lab container in a kubernetes cluster and want to enable TLS. I have successfully done this in 2 ways:
First: include the certificate and key files inside the container and enable TLS when running the jupyter command, then add a LoadBalancer Service to expose the container.
#Dockerfile
...
CMD jupyter-lab --no-browser --allow-root --ip 0.0.0.0 --port=443 --certfile=<crt path> --keyfile=<key path>
#yaml
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  type: LoadBalancer
  selector:
    app: <app-name>
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
Second: run jupyter with no TLS, add the certificate and key in base64 to a Secret, and add NodePort Service, Ingress and BackendConfig yamls.
#Dockerfile
...
CMD jupyter-lab --no-browser --allow-root --ip 0.0.0.0 --port=443
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: http-hc-config
spec:
  healthCheck:
    checkIntervalSec: 300
    timeoutSec: 10
    healthyThreshold: 2
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /login
    port: 443
---
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
  annotations:
    cloud.google.com/backend-config: '{"ports": {"443":"http-hc-config"}}'
spec:
  type: NodePort
  selector:
    app: <app-name>
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <ingress-name>
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: <secret-name>
  defaultBackend:
    service:
      name: <service-name>
      port:
        number: 443
---
apiVersion: v1
data:
  tls.crt: <base64 crt>
  tls.key: <base64 key>
kind: Secret
metadata:
  name: <secret-name>
type: kubernetes.io/tls
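The `<base64 crt>` and `<base64 key>` values are simply the base64 encoding of the raw PEM files. A small illustrative helper to produce them (`kubectl create secret tls` achieves the same result without hand-editing yaml):

```python
import base64

def b64_of_pem(path):
    """Return the base64 string Kubernetes expects in a Secret's data field."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Example (file paths are placeholders):
# print(b64_of_pem("tls.crt"))
# print(b64_of_pem("tls.key"))
```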
However, when I try to combine both (follow steps in 2, but also enable tls in jupyter-lab), I get 502 errors. Why is this?
Also, which setup is better?
If you want TLS between the HTTP(S) LB created by the Ingress and your backend, you'll need to modify your BackendConfig to specify HTTPS for the health check:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: http-hc-config
spec:
  healthCheck:
    checkIntervalSec: 300
    timeoutSec: 10
    healthyThreshold: 2
    unhealthyThreshold: 5
    type: HTTPS
    requestPath: /login
    port: 443
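In addition to the HTTPS health check, the GCE load balancer must also be told to speak HTTPS to the backends themselves; on GKE that is usually done with the `cloud.google.com/app-protocols` annotation on the Service. A sketch, assuming the same port 443 as the setup above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
  annotations:
    # Tell the load balancer to use HTTPS when connecting to this port's backends
    cloud.google.com/app-protocols: '{"443":"HTTPS"}'
    cloud.google.com/backend-config: '{"ports": {"443":"http-hc-config"}}'
spec:
  type: NodePort
  selector:
    app: <app-name>
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
```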

Nginx ingress with oauth proxy and CORS

I have two services in Kubernetes which are exposed through the nginx controller. Service a wants to invoke content on domain b, but at the same time both services need to be authenticated through Google using the oauth2-proxy service.
I have managed to enable CORS, and a can invoke b without any issues. But the problem is that when I include the authentication as well, I constantly get:
Access to manifest at 'https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force&client_id=<obscured>.apps.googleusercontent.com&redirect_uri=https%3A%2F%2Fa.example.com%2Foauth2%2Fcallback&response_type=code&scope=profile+email&state=<obscured>manifest.json' (redirected from 'https://a.example.com/manifest.json') from origin 'https://a.example.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Here are the ingresses:
Service a
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  name: a
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - backend:
              service:
                name: a-svc
                port:
                  number: 8080
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - a.example.com
      secretName: a-tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  labels:
    k8s-app: oauth2-proxy
  name: a-oauth
spec:
  rules:
    - host: a.example.com
      http:
        paths:
          - backend:
              service:
                name: oauth2-proxy
                port:
                  number: 4180
            path: /oauth2
            pathType: Prefix
  tls:
    - hosts:
        - a.example.com
      secretName: a-oauth-tls
Service b
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  name: b
spec:
  rules:
    - host: b.example.com
      http:
        paths:
          - backend:
              service:
                name: b-svc
                port:
                  number: 8080
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - b.example.com
      secretName: b-tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  labels:
    k8s-app: oauth2-proxy
  name: b-oauth
spec:
  rules:
    - host: b.example.com
      http:
        paths:
          - backend:
              service:
                name: oauth2-proxy
                port:
                  number: 4180
            path: /oauth2
            pathType: Prefix
  tls:
    - hosts:
        - b.example.com
      secretName: b-oauth-tls
Obviously, there is only one difference between these two, and that is the CORS annotation nginx.ingress.kubernetes.io/enable-cors: "true" in the service b ingresses.
I am not sure what is causing the issue, but my guess is that the authentication done against Google for service a is not being passed along with the CORS request, so service b cannot be authenticated with the same token/credentials.
What am I doing wrong, and how can I resolve this?
Based on the documentation, it looks like you are missing some CORS annotations.
Try adding a fragment like this to the service configuration file:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Access-Control-Allow-Origin: $http_origin";
But it can also be a problem with CORS at the API server level. In that case you should add the following line:
--cors-allowed-origins=["http://*"]
to /etc/default/kube-apiserver or the /etc/kubernetes/manifests/kube-apiserver.yaml file (depending on the location of your kube-apiserver configuration file).
After that, restart kube-apiserver.
See also this similar question.
So, it turns out that the whole Kubernetes and nginx config was correct, and the solution was to use the saved cookie on the client side when making the CORS request to the second service.
Essentially, this was already answered here: Set cookies for cross origin requests
Excerpt from the answer:
Front-end (client): set the XMLHttpRequest.withCredentials flag to true. This can be achieved in different ways depending on the request-response library used:
jQuery 1.5.1: xhrFields: {withCredentials: true}
ES6 fetch(): credentials: 'include'
axios: withCredentials: true

Traefik v2 Middlewares not being detected

Middlewares are not being detected and therefore paths are not being stripped resulting in 404s in the backend api.
Middleware exists in k8s apps namespace
$ kubectl get -n apps middlewares
NAME                                        AGE
traefik-middlewares-backend-users-service   1d
configuration for middleware and ingress route
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: apps-services
  namespace: apps
spec:
  entryPoints:
    - web
  routes:
    - kind: Rule
      match: Host(`example.com`) && PathPrefix(`/users/`)
      middlewares:
        - name: traefik-middlewares-backend-users-service
      priority: 0
      services:
        - name: backend-users-service
          port: 8080
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: traefik-middlewares-backend-users-service
  namespace: apps
spec:
  stripPrefix:
    prefixes:
      - /users
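For intuition, the stripPrefix middleware rewrites the request path before it reaches the backend. A rough Python sketch of that behaviour (illustrative only, not Traefik's actual implementation):

```python
def strip_prefix(path, prefixes):
    """Remove the first matching prefix from a request path,
    mimicking what Traefik's stripPrefix middleware does."""
    for prefix in prefixes:
        if path == prefix or path.startswith(prefix + "/"):
            stripped = path[len(prefix):]
            return stripped or "/"  # an empty result becomes the root path
    return path  # no prefix matched: the path is forwarded unchanged

# With the Middleware above, a request to /users/42/profile would reach
# the backend as /42/profile.
```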
Static configuration
global:
  checkNewVersion: true
  sendAnonymousUsage: true
entryPoints:
  http:
    address: :80
  traefik:
    address: :8080
providers:
  providersThrottleDuration: 2s
  kubernetesIngress: {}
api:
  # TODO: make this secure later
  insecure: true
ping:
  entryPoint: http
log: {}
The Traefik dashboard shows no middlewares. The backend returns the Spring Boot 404 page; the route is at example.com/actuator/health.
The /users prefix is not being stripped. This worked perfectly fine for me in Traefik v1.
Note: the actual domain has been replaced with example.com and domain.com in the examples.
To get this working, I had to:
Add the Kubernetes CRD provider with the namespaces where the custom k8s CRDs for traefik v2 exist
Add TLSOption resource definition
Update cluster role for traefik to have permissions for listing and watching new v2 resources
Make sure all namespaces with new resources are configured
Traefik Static Configuration File
providers:
  providersThrottleDuration: 2s
  kubernetesCRD:
    namespaces:
      - apps
      - traefik
TLSOption CRD
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced
Updated Static Configuration for Traefik
global:
  checkNewVersion: true
  sendAnonymousUsage: true
entryPoints:
  http:
    address: :80
  traefik:
    address: :8080
providers:
  providersThrottleDuration: 2s
  kubernetesCRD:
    namespaces:
      - apps
      - traefik
api:
  # TODO: make this secure later
  insecure: true
ping:
  entryPoint: http
log: {}

Can't get kubernetes to pass my tls certificate to browsers

I've been struggling for a while trying to get HTTPS access to my Elasticsearch cluster in Kubernetes.
I think the problem is that Kubernetes doesn't like the TLS certificate I'm trying to use, which is why it's not passing it all the way through to the browser.
Everything else seems to work, since when I accept the Kubernetes Ingress Controller Fake Certificate, the requests go through as expected.
In my attempt to do this I've set up:
The cluster itself
An nginx-ingress controller
An ingress resource
Here's the related yaml:
Cluster:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-03T03:20:47Z
  labels:
    run: my-es
  name: my-es
  namespace: default
  resourceVersion: "3159488"
  selfLink: /api/v1/namespaces/default/services/my-es
  uid: 373047e0-96cc-11e8-932b-42010a800043
spec:
  clusterIP: 10.63.241.39
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 9200
  selector:
    run: my-es
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS
    nginx.ingress.kubernetes.io/cors-origins: http://localhost:3425 https://mydomain.ca https://myOtherDomain.ca
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: 2018-08-12T08:44:29Z
  generation: 16
  name: es-ingress
  namespace: default
  resourceVersion: "3159625"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/es-ingress
  uid: ece0071d-9e0b-11e8-8a45-42001a8000fc
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: my-es
              servicePort: 8080
            path: /
  tls:
    - hosts:
        - mydomain.ca
      secretName: my-tls-secret
status:
  loadBalancer:
    ingress:
      - ip: 130.211.179.225
The nginx-ingress controller:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-12T00:41:32Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.23.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
  resourceVersion: "2781955"
  selfLink: /api/v1/namespaces/default/services/nginx-ingress-controller
  uid: 755ee4b8-9dc8-11e8-85a4-4201a08000fc
spec:
  clusterIP: 10.63.250.256
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 32084
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 31182
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 35.212.6.131
I feel like I'm missing something basic, because it doesn't seem like it should be this hard to expose something this simple...
To get my certificate, I just requested one for mydomain.ca from godaddy.
Do I need to somehow get a certificate using my ingress resource's cluster IP as the common name?
It doesn't seem possible to verify ownership of an IP.
I've seen people mention ways for Kubernetes to automatically create certificates for ingress resources, but those seem to be self signed.
Here are some logs from the nginx-controller:
This one is talking about a PEM with the tls-secret, but it's only a warning.
{
  insertId: "1kvvhm7g1q7e0ej"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T02:58:42.135388365Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "WARNING"
  textPayload: "error obtaining PEM from secret default/my-tls-cert: error retrieving secret default/my-tls-cert: secret default/my-tls-cert was not found"
  timestamp: "2018-08-14T02:58:37Z"
}
I have a few occurrences of this handshake error, which may be a result of the last warning:
{
  insertId: "148t6rfg1xmz978"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T15:55:52.438035706Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "ERROR"
  textPayload: "2018/08/14 15:55:50 [crit] 1548#1548: *860 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442"
  timestamp: "2018-08-14T15:55:50Z"
}
The above logs make it seem like my tls secret isn't working, but when I run kubectl describe ingress, it says my secret terminates mydomain.ca.
aaronmw@project-7d320:~$ kubectl describe ing
Name:             es-ingress
Namespace:        default
Address:          130.221.179.212
Default backend:  default-http-backend:80 (10.61.3.7:8080)
TLS:
  my-tls-secret terminates mydomain.ca
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /     my-es:8080 (<none>)
Annotations:
Events:  <none>
I figured it out!
What I ended up doing was adding a default ssl certificate to my nginx-ingress controller on creation using the following command
helm install --name nginx-ingress --set controller.extraArgs.default-ssl-certificate=default/search-tls-secret stable/nginx-ingress
Once I had that, it was passing the cert as expected, but I still had the wrong cert as the CN didn't match my load balancer IP.
So what I did was:
Make my load balancer IP static
Add an A record to my domain, to map a subdomain to that IP
Re-key my cert to match that new subdomain
And I'm in business!
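A quick way to confirm which certificate the load balancer actually serves for the new subdomain is an openssl probe; this is an assumed diagnostic, not part of the original answer, and the hostname is a placeholder:

```shell
# -servername sends SNI, which matters when the controller holds multiple certs
openssl s_client -connect <subdomain>.mydomain.ca:443 -servername <subdomain>.mydomain.ca </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```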
Thanks to @Crou, whose comment reminded me to look at the logs and got me on the right track.