I installed the Kubernetes NGINX ingress controller in my Kubernetes cluster.
I deployed everything on AWS EC2 instances, with a Classic Load Balancer in front of the ingress controller. I can access the service over HTTP, but not over HTTPS.
I have a valid domain purchased from GoDaddy, and I obtained an SSL certificate from AWS Certificate Manager.
The load balancer listeners are configured as below.
I modified the ingress-nginx Service to add the certificate ARN:
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
# Enable PROXY protocol
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
# Ensure the ELB idle timeout is less than nginx keep-alive timeout. By default,
# NGINX keep-alive is set to 75s. If using WebSockets, the value will need to be
# increased to '3600' to avoid any potential issues.
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-2:297483230626:certificate/ffe5a2b3-ceff-4ef2-bf13-8da5b4212121"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
Ingress rules
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: practice-ingress
namespace: practice
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
rules:
- host: kdhut.com
http:
paths:
- backend:
serviceName: customer-service
servicePort: 9090
path: /customer
- backend:
serviceName: prac-service
servicePort: 8000
path: /prac
I can access the service over HTTP, but HTTPS is not working.
I tried curl:
curl -v https://kdhut.com -H 'Host: kdhut.com'
* Rebuilt URL to: https://kdhut.com/
* Trying 3.12.176.17...
* TCP_NODELAY set
* Connected to kdhut.com (3.12.176.17) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=kdhut.com
* start date: Mar 20 00:00:00 2020 GMT
* expire date: Apr 20 12:00:00 2021 GMT
* subjectAltName: host "kdhut.com" matched cert's "kdhut.com"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: kdhut.com
> User-Agent: curl/7.58.0
> Accept: */*
I think that's an issue with the AWS load balancer. I ran into something similar a while back with an AWS NLB and found this link for a workaround/hack:
Workaround
HTH
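For comparison (this is not from the linked workaround), if I remember the upstream ingress-nginx AWS L7 example correctly, TLS termination at a Classic ELB is usually configured roughly like the sketch below: the ssl-ports annotation is set, the Service's https port points back at the pods' plain-HTTP port, and the PROXY protocol annotation is left out because Classic ELB only supports PROXY protocol on TCP listeners. The ARN below is a placeholder.
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNT:certificate/ID"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: http   # TLS is already terminated at the ELB, so send plain HTTP to nginx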
I have created a GKE cluster (1.18.17-gke.1901) and installed Istio 1.9.5 on it. My ingress gateway Service is of type: LoadBalancer.
I am trying to implement MUTUAL TLS mode in my istio-ingressgateway. The Gateway configuration looks like this:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: mutual-domain
namespace: test
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- mutual.domain.com
port:
name: mutual-domain-https
number: 443
protocol: HTTPS
tls:
credentialName: mutual-secret
minProtocolVersion: TLSV1_2
mode: MUTUAL
I have also set up a corresponding VirtualService and DestinationRule.
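For context, such a VirtualService typically looks something like the sketch below; the backend service name and port are placeholders, not values from this question.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mutual-domain
  namespace: test
spec:
  hosts:
  - mutual.domain.com
  gateways:
  - mutual-domain
  http:
  - route:
    - destination:
        host: some-backend-service   # placeholder
        port:
          number: 8080               # placeholder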
Now, whenever I try to connect to https://mutual.domain.com I get the following error:
* Trying 100.50.76.97...
* TCP_NODELAY set
* Connected to mutual.domain.com (100.50.76.97) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mutual.domain.com:443
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mutual.domain.com:443
If I change the tls: mode: to SIMPLE, I can reach the service via the domain name, but when it's MUTUAL the error above shows up.
The mutual-secret is a kubernetes.io/tls secret and it contains the tls.crt and tls.key.
$ kubectl -n istio-system describe secret mutual-secret
Name: mutual-secret
Namespace: istio-system
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 4585 bytes
tls.key: 1674 bytes
Is there something missing? Why can't I access my service in MUTUAL mode but the same secret works for SIMPLE mode?
Assuming you are following this.
It seems you are missing ca.crt in your secret. Create a new secret with tls.crt, tls.key and ca.crt, and try again.
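A minimal sketch of creating such a secret, assuming your files are named server.crt, server.key and ca.crt; per the Istio secure gateway docs, MUTUAL mode reads a generic secret with exactly the keys tls.crt, tls.key and ca.crt:
$ kubectl -n istio-system delete secret mutual-secret
$ kubectl -n istio-system create secret generic mutual-secret \
    --from-file=tls.crt=server.crt \
    --from-file=tls.key=server.key \
    --from-file=ca.crt=ca.crt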
The ERR_BAD_SSL_CLIENT_AUTH_CERT error mentioned in the comments is Chrome/Chromium specific. It means the browser does not recognise the certificate authority.
Add your CA certificate (.pem file) to Chrome/Chromium; you can follow this. Hopefully it will solve your problem.
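To test MUTUAL mode from the command line, curl has to present a client certificate signed by a CA the gateway trusts (the ca.crt above); a sketch with placeholder file names:
$ curl -v --cacert ca.crt --cert client.crt --key client.key https://mutual.domain.com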
I have a Kubernetes cluster running in AWS, and I am working through upgrading various components. Internally, we are using NGINX, and it is currently at v1.1.1 of the nginx-ingress chart (as served from old stable), with the following configuration:
controller:
publishService:
enabled: "true"
replicaCount: 3
service:
annotations:
external-dns.alpha.kubernetes.io/hostname: '*.MY.TOP.LEVEL.DOMAIN'
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: [SNIP]
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
targetPorts:
http: http
https: http
My service's ingress resource looks like...
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
[SNIP]
spec:
rules:
- host: MY-SERVICE.MY.TOP.LEVEL.DOMAIN
http:
paths:
- backend:
serviceName: MY-SERVICE
servicePort: 80
path: /
status:
loadBalancer:
ingress:
- hostname: [SNIP]
This configuration works just fine; however, it breaks when I upgrade to v3.11.1 of the ingress-nginx chart (as served from the k8s museum).
With an unmodified config, curling the HTTPS endpoint redirects back to itself:
curl -v https://MY-SERVICE.MY.TOP.LEVEL.DOMAIN/INTERNAL/ROUTE
* Trying W.X.Y.Z...
* TCP_NODELAY set
* Connected to MY-SERVICE.MY.TOP.LEVEL.DOMAIN (W.X.Y.Z) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=*.MY.TOP.LEVEL.DOMAIN
* start date: Aug 21 00:00:00 2020 GMT
* expire date: Sep 20 12:00:00 2021 GMT
* subjectAltName: host "MY-SERVICE.MY.TOP.LEVEL.DOMAIN" matched cert's "*.MY.TOP.LEVEL.DOMAIN"
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET INTERNAL/ROUTE HTTP/1.1
> Host: MY-SERVICE.MY.TOP.LEVEL.DOMAIN
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< Content-Type: text/html
< Date: Wed, 28 Apr 2021 19:07:57 GMT
< Location: https://MY-SERVICE.MY.TOP.LEVEL.DOMAIN/INTERNAL/ROUTE
< Content-Length: 164
< Connection: keep-alive
<
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host MY-SERVICE.MY.TOP.LEVEL.DOMAIN left intact
* Closing connection 0
(I wish I had captured more verbose output...)
I tried modifying the NGINX config to append the following:
config:
use-forwarded-headers: "true"
and then...
config:
compute-full-forwarded-for: "true"
use-forwarded-headers: "true"
These did not seem to make a difference. It was in the middle of the day, so I wasn't able to dive too far in before rolling back.
What should I look at, and how should I debug this?
Update:
I wish that I had posted a complete copy of the updated config, because I would have noticed that I did not correctly apply the change to add config.compute-full-forwarded-for: "true". It needs to be within the controller block, and I had placed it elsewhere.
Once the compute-full-forwarded-for: "true" config was added, everything started to work immediately.
This is a community wiki answer posted for better visibility. Feel free to expand it.
As confirmed by @object88, the issue was the misplaced config.compute-full-forwarded-for: "true" setting, which was located in the wrong block. Adding it to the controller block solved the issue.
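For reference, a sketch of the values layout that ended up working, per the update above; the nesting under controller.config follows the chart's usual convention:
controller:
  config:
    use-forwarded-headers: "true"
    compute-full-forwarded-for: "true"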
I'm trying to implement an endpoint through NGINX ingress on Kubernetes. The same configuration seems to work on another controller deployment in the same cluster, but here I'm getting seemingly random 404 responses mixed in with the expected response.
Configuration for ingress-nginx-controller deployment, modified from the helm chart:
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/load-balancer-type: Internal
labels:
helm.sh/chart: ingress-nginx-3.23.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "0.44.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: kube-system
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-3.23.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: "0.44.0"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: kube-system
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
replicas: 1
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: "k8s.gcr.io/ingress-nginx/controller:v0.44.0#sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a"
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
- --default-ssl-certificate=kube-system/nginx-certificates ##custom by environment, must be created
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
- name: GODEBUG
value: x509ignoreCN=0
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
The ingress configuration (service names/endpoints are changed for the sake of this post):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/proxy-body-size: 50m
ingress.kubernetes.io/proxy-request-buffering: "off"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/default-backend: test-endpoint-svc
nginx.ingress.kubernetes.io/proxy-body-size: 50m
nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
nginx.ingress.kubernetes.io/ssl-passthrough: "False"
labels:
app: test-endpoint
name: test-endpoint
namespace: default
spec:
backend:
serviceName: test-endpoint-svc
servicePort: 443
rules:
- host: test.internal
http:
paths:
- backend:
serviceName: test-endpoint-svc
servicePort: 443
path: /
tls:
- hosts:
- test.internal
secretName: nginx-certificates
And here's an example working output of curl -k -vvv -u <user>:<password> https://test.internal
* Rebuilt URL to: https://test.internal/
* Trying <correct ip>...
* TCP_NODELAY set
* Connected to test.internal (<correct ip>) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, [no content] (0):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=test.internal; O=test.internal
* start date: Mar 4 00:53:27 2021 GMT
* expire date: Mar 4 00:53:27 2022 GMT
* issuer: CN=test.internal; O=test.internal
* SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* Server auth using Basic with user '<user>'
* Using Stream ID: 1 (easy handle 0x55a3643114c0)
* TLSv1.3 (OUT), TLS app data, [no content] (0):
> GET / HTTP/2
> Host: test.internal
> Authorization: Basic <password>
> User-Agent: curl/7.61.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS app data, [no content] (0):
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* TLSv1.3 (IN), TLS app data, [no content] (0):
* TLSv1.3 (IN), TLS app data, [no content] (0):
< HTTP/2 200
< date: Thu, 04 Mar 2021 01:05:43 GMT
< content-type: application/json; charset=UTF-8
< content-length: 533
< strict-transport-security: max-age=15724800; includeSubDomains
<expected response>
Trying the same curl call half a second later:
* Rebuilt URL to: https://test.internal/
* Trying <correct ip>...
* TCP_NODELAY set
* Connected to test.internal (<correct ip>) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, [no content] (0):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Feb 5 20:51:55 2021 GMT
* expire date: Feb 5 20:51:55 2022 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* Server auth using Basic with user <user>
* Using Stream ID: 1 (easy handle 0x560637cb34c0)
* TLSv1.3 (OUT), TLS app data, [no content] (0):
> GET / HTTP/2
> Host: test.internal
> Authorization: Basic <password>
> User-Agent: curl/7.61.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, [no content] (0):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS app data, [no content] (0):
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* TLSv1.3 (OUT), TLS app data, [no content] (0):
* TLSv1.3 (IN), TLS app data, [no content] (0):
* TLSv1.3 (IN), TLS app data, [no content] (0):
< HTTP/2 404
< date: Thu, 04 Mar 2021 01:05:44 GMT
< content-type: text/html
< content-length: 146
< strict-transport-security: max-age=15724800; includeSubDomains
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* TLSv1.3 (IN), TLS app data, [no content] (0):
* Connection #0 to host test.internal left intact
I've tried various changes to the ingress annotations, adding/removing the default host, and adding/removing the GODEBUG environment variable from the controller. There doesn't seem to be a pattern in when these calls succeed vs. 404, and I'm hesitant to dive into turning on 404 logs, as a custom template is needed (https://github.com/kubernetes/ingress-nginx/issues/4856).
The nginx-certificates secret is present in both kube-system & default namespaces, and was generated with openssl.
What is going on here?
I was working on a React app that was deployed using Kubernetes. In my experience, when a 404 page shows up - that is, some response is returned - it means the deployment itself is working.
In my case, whenever I got a 404 there was an issue with the front-end code. So you should check your front end - specifically, the routing config.
Hopefully this gives you some direction.
I have Kubernetes installed on Ubuntu 19.10.
I've set up ingress-nginx and can access my test service using HTTP.
However, I get a "Connection refused" when I try to access it via HTTPS.
[Edit] I'm trying to get HTTPS to terminate at the ingress and pass unencrypted traffic to my service the same way HTTP does. I've implemented the below based on many examples I've seen, but with little luck.
YAML:
kind: Service
apiVersion: v1
metadata:
name: messagemanager-service
namespace: default
labels:
name: messagemanager-service
spec:
type: NodePort
selector:
app: messagemanager
ports:
- port: 80
protocol: TCP
targetPort: 8080
nodePort: 31212
name: http
externalIPs:
- 192.168.0.210
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: messagemanager
labels:
app: messagemanager
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: messagemanager
template:
metadata:
labels:
app: messagemanager
version: v1
spec:
containers:
- name: messagemanager
image: test/messagemanager:1.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: messagemanager-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-passthrough: false
ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- secretName: tls-secret
rules:
- http:
paths:
- path: /message
backend:
serviceName: messagemanager-service
servicePort: 8080
https test
curl -kL https://192.168.0.210/message -verbose
* Trying 192.168.0.210:443...
* TCP_NODELAY set
* connect to 192.168.0.210 port 443 failed: Connection refused
* Failed to connect to 192.168.0.210 port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.0.210 port 443: Connection refused
http test
curl -kL http://192.168.0.210/message -verbose
* Trying 192.168.0.210:80...
* TCP_NODELAY set
* Connected to 192.168.0.210 (192.168.0.210) port 80 (#0)
> GET /message HTTP/1.1
> Host: 192.168.0.210
> User-Agent: curl/7.65.3
> Accept: */*
> Referer: rbose
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain;charset=UTF-8
< Date: Fri, 24 Apr 2020 18:44:07 GMT
< connection: keep-alive
< content-length: 50
<
* Connection #0 to host 192.168.0.210 left intact
$ kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.105.92.236 <pending> 80:31752/TCP,443:32035/TCP 2d
ingress-nginx-controller-admission ClusterIP 10.100.223.87 <none> 443/TCP 2d
$ kubectl get ingress -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
messagemanager-ingress <none> * 80, 443 37m
key creation
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
$ kubectl describe ingress
Name: messagemanager-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates
Rules:
Host Path Backends
---- ---- --------
*
/message messagemanager-service:8080 ()
Annotations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 107s nginx-ingress-controller Ingress default/messagemanager-ingress
I was under the assumption that TLS would terminate at the ingress and the request would be passed on to the service as HTTP.
I had to add the external IPs in the service to get HTTP to work.
Am I missing something similar for HTTPS?
Any help and guidance is appreciated.
Thanks
Mark
I've reproduced your scenario in my lab, and after a few changes to your Ingress it works as you described.
In my lab I used an nginx image that serves a default landing page on port 80, and with the Ingress rule below it's possible to serve it on both port 80 and 443.
kind: Deployment
apiVersion: apps/v1
metadata:
name: nginx
labels:
app: nginx
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
version: v1
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
name: nginx-service
namespace: default
labels:
name: nginx-service
spec:
type: NodePort
selector:
app: nginx
ports:
- port: 80
protocol: TCP
targetPort: 80
nodePort: 31000
name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx
labels:
app: nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- secretName: tls-secret
rules:
- http:
paths:
- path: /nginx
backend:
serviceName: nginx-service
servicePort: 80
The only difference between my ingress and yours is that I removed nginx.ingress.kubernetes.io/ssl-passthrough: false. In the documentation we can read:
note SSL Passthrough is disabled by default
So there is no need for you to specify that.
I used the same secret as you:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
In your question I have the impression that you are trying to reach your ingress through the IP 192.168.0.210. This is your service IP and not your Ingress IP.
If you are using Cloud managed Kubernetes you have to run the following command to find your Ingress IP:
$ kubectl get ingresses nginx
NAME HOSTS ADDRESS PORTS AGE
nginx * 34.89.108.48 80, 443 6m32s
If you are running on bare metal without a LoadBalancer solution such as MetalLB, you will see that your ingress-nginx service's EXTERNAL-IP stays Pending forever.
$ kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 <pending> 80:31024/TCP,443:30039/TCP 23s
You can do the same thing as you did with your service and add an externalIP manually:
kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 10.156.0.24 80:31024/TCP,443:30039/TCP 9m14s
After this change, your ingress will have the same IP as you defined in your Ingress Service:
$ kubectl get ingress nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx <none> * 10.156.0.24 80, 443 118s
$ curl -kL https://10.156.0.24/nginx --verbose
* Trying 10.156.0.24...
* TCP_NODELAY set
* Connected to 10.156.0.24 (10.156.0.24) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Apr 27 09:49:19 2020 GMT
* expire date: Apr 27 09:49:19 2021 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x560cee14fe90)
> GET /nginx HTTP/1.1
> Host: 10.156.0.24
> User-Agent: curl/7.52.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< server: nginx/1.17.10
< date: Mon, 27 Apr 2020 10:01:29 GMT
< content-type: text/html
< content-length: 612
< vary: Accept-Encoding
< last-modified: Tue, 14 Apr 2020 14:19:26 GMT
< etag: "5e95c66e-264"
< accept-ranges: bytes
< strict-transport-security: max-age=15724800; includeSubDomains
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host 10.156.0.24 left intact
EDIT:
There does not seem to be a way to manually set the "external IPs" for
the ingress as can be done in the service. If you know of one please
let me know :-). Looks like my best bet is to try MetalLB.
MetalLB would be the best option for production. If you are running a lab only, you can add your node's public IP (the same one you get by running kubectl get nodes -o wide) and attach it to your NGINX ingress controller.
Adding your node IP to your NGINX ingress controller:
spec:
externalIPs:
- 192.168.0.210
Create a file called ingress-nginx-svc-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch service ingress-nginx-controller -n kube-system --patch "$(cat ingress-nginx-svc-patch.yaml)"
And as result:
$ kubectl get service -n kube-system ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.97.0.243 192.168.0.210 80:31409/TCP,443:30341/TCP 39m
I'm no expert, but any time I've seen a service handle both HTTP and HTTPS traffic, it has specified two ports in the Service YAML, one for HTTP and one for HTTPS. Apparently there are ways to get around this; reading here might be a good start.
For two ports, look at the official k8s example here
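For illustration, a minimal two-port Service sketch; the name, selector and target ports below are placeholders rather than values from the question:
kind: Service
apiVersion: v1
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP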
I was struggling with the same situation for a while and was about to start using MetalLB, but after running kubectl -n ingress-nginx get svc I realised that ingress-nginx had created its own NodePorts and I could access the cluster through those.
I'm not an expert, and it's probably not good for production, but otherwise I don't see why not.
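For example, using the NodePorts from the kubectl output earlier in this question (80:31752 and 443:32035) and assuming 192.168.0.210 is a node IP:
$ curl -L  http://192.168.0.210:31752/message
$ curl -kL https://192.168.0.210:32035/message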
This is driving me crazy. I am no Kubernetes expert, but I am also not a novice.
I have tried unsuccessfully for three days to get past this issue, but I can't, and I am at the end of my rope.
I can query the cluster from my desktop after copying the certificates from kube-apiserver-1:/etc/kubernetes/pki/* to my desktop.
$ kubectl -n kube-system get nodes
NAME STATUS ROLES AGE VERSION
kube-apiserver-1 Ready master 71m v1.14.2
The Kubernetes cluster appears healthy when I query the kube-system pods:
$ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-6c85q 1/1 Running 3 65m
coredns-fb8b8dccf-qwxlp 1/1 Running 3 65m
kube-apiserver-kube-apiserver-1 1/1 Running 2 72m
kube-controller-manager-kube-apiserver-1 1/1 Running 2 72m
kube-flannel-ds-amd64-phntk 1/1 Running 2 62m
kube-proxy-swxrz 1/1 Running 2 65m
kube-scheduler-kube-apiserver-1 1/1 Running 1 54m
but when I look at the kube-apiserver logs:
$ kubectl -n kube-system logs kube-apiserver-kube-apiserver-1
...
I0526 04:33:51.523828 1 log.go:172] http: TLS handshake error from 192.168.5.32:43122: remote error: tls: bad certificate
I0526 04:33:51.537258 1 log.go:172] http: TLS handshake error from 192.168.5.32:43124: remote error: tls: bad certificate
I0526 04:33:51.540617 1 log.go:172] http: TLS handshake error from 192.168.5.32:43126: remote error: tls: bad certificate
I0526 04:33:52.333817 1 log.go:172] http: TLS handshake error from 192.168.5.32:43130: remote error: tls: bad certificate
I0526 04:33:52.334354 1 log.go:172] http: TLS handshake error from 192.168.5.32:43128: remote error: tls: bad certificate
I0526 04:33:52.335570 1 log.go:172] http: TLS handshake error from 192.168.5.32:43132: remote error: tls: bad certificate
I0526 04:33:52.336703 1 log.go:172] http: TLS handshake error from 192.168.5.32:43134: remote error: tls: bad certificate
I0526 04:33:52.338792 1 log.go:172] http: TLS handshake error from 192.168.5.32:43136: remote error: tls: bad certificate
I0526 04:33:52.391557 1 log.go:172] http: TLS handshake error from 192.168.5.32:43138: remote error: tls: bad certificate
I0526 04:33:52.396566 1 log.go:172] http: TLS handshake error from 192.168.5.32:43140: remote error: tls: bad certificate
I0526 04:33:52.519666 1 log.go:172] http: TLS handshake error from 192.168.5.32:43142: remote error: tls: bad certificate
I0526 04:33:52.524702 1 log.go:172] http: TLS handshake error from 192.168.5.32:43144: remote error: tls: bad certificate
I0526 04:33:52.537127 1 log.go:172] http: TLS handshake error from 192.168.5.32:43146: remote error: tls: bad certificate
I0526 04:33:52.550177 1 log.go:172] http: TLS handshake error from 192.168.5.32:43150: remote error: tls: bad certificate
I0526 04:33:52.550613 1 log.go:172] http: TLS handshake error from 192.168.5.32:43148: remote error: tls: bad certificate
On the NGINX load balancer (IP: 192.168.5.32) I have configured the TCP passthrough option as specified in the Kubernetes documentation:
upstream kubernetes-api-cluster {
server 192.168.5.19:6443;
server 192.168.5.29:6443;
}
server {
listen 6443;
ssl_certificate /etc/nginx/ssl/kube-apiserver.pem;
ssl_certificate_key /etc/nginx/ssl/private/kube-apiserver.key;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
proxy_pass kubernetes-api-cluster;
}
I can query the API server directly from the NGINX LB (IP: 192.168.5.32):
$ curl -v https://192.168.5.29:6443
* Rebuilt URL to: https://192.168.5.29:6443/
* Trying 192.168.5.29...
* TCP_NODELAY set
* Connected to 192.168.5.29 (192.168.5.29) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=kube-apiserver
* start date: May 26 03:39:36 2019 GMT
* expire date: May 25 03:39:36 2020 GMT
* subjectAltName: host "192.168.5.29" matched cert's IP address!
* issuer: CN=kubernetes
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55840f1d9900)
> GET / HTTP/2
> Host: 192.168.5.29:6443
> User-Agent: curl/7.58.0
> Accept: */*
I can also query the API using the DNS entry for it, as specified in the documentation:
curl -v https://kube-apiserver.mydomain.com:6443
* Rebuilt URL to: https://kube-apiserver.mydomain.com:6443/
* Trying 10.50.1.50...
* TCP_NODELAY set
* Connected to kube-apiserver.mydomain.com (10.50.1.50) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=kube-apiserver
* start date: May 26 03:39:36 2019 GMT
* expire date: May 25 03:39:36 2020 GMT
* subjectAltName: host "kube-apiserver.mydomain.com" matched cert's "kube-apiserver.mydomain.com"
* issuer: CN=kubernetes
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x564287cbd900)
> GET / HTTP/2
> Host: kube-apiserver.mydomain.com:6443
> User-Agent: curl/7.58.0
> Accept: */*
I can also query the API server with curl from the API server itself:
curl -v https://kube-apiserver.mydomain.com:6443
* Rebuilt URL to: https://kube-apiserver.mydomain.com:6443/
* Trying 10.50.1.50...
* TCP_NODELAY set
* Connected to kube-apiserver.epc-instore.com (10.50.1.50) port 6443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=kube-apiserver
* start date: May 26 03:39:36 2019 GMT
* expire date: May 25 03:39:36 2020 GMT
* subjectAltName: host "kube-apiserver.mydomain.com" matched cert's "kube-apiserver.mydomain.com"
* issuer: CN=kubernetes
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5628b9dbc900)
> GET / HTTP/2
> Host: kube-apiserver.mydomain.com:6443
> User-Agent: curl/7.58.0
> Accept: */*
The manifest on the api server contains:
cat /etc/kubernetes/manifests/kube-apiserver.yaml
...
- command:
- kube-apiserver
- --advertise-address=192.168.5.29
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-servers=http://etcd-cluster.mydomain.com:2379
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: k8s.gcr.io/kube-apiserver:v1.14.2
imagePullPolicy: IfNotPresent
...
If you have any ideas or hints on how to fix this, I am all ears. I am so frustrated with this issue; it has really gotten to me at this point. I will continue to work on it, but if anyone has a clue about this issue and can help, that would be great.
Thank you.
The actual root cause of the original issue was (citing the author of this post, @Daniel Maldonado):
This was my mistake, I had a firewall configuration error and all
tests indicated that it was the load balancer probing the
kube-apiserver when in fact it was not. The issue was completely local
to the api-server itself. If anyone gets to this point please verify
that ALL ports are available to the API server from itself i.e.
loopback.
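A quick way to verify that from the API server itself (a sketch; it assumes anonymous access to /healthz has not been disabled):
$ curl -k https://127.0.0.1:6443/healthz
$ curl -k https://192.168.5.29:6443/healthz
# both should print "ok" unless something local (e.g. a firewall rule) is blocking the port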
Your current nginx config isn't setting up a client cert. ssl_certificate is the server cert; if you want nginx to present a client cert to the kubernetes-api-cluster upstream, you'll have to configure it to forward the incoming client certificate. I've previously done this using proxy_set_header X-SSL-CERT $ssl_client_escaped_cert (documentation):
upstream kubernetes-api-cluster {
server 192.168.5.19:6443;
server 192.168.5.29:6443;
}
server {
listen 6443;
ssl_certificate /etc/nginx/ssl/kube-apiserver.pem;
ssl_certificate_key /etc/nginx/ssl/private/kube-apiserver.key;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
proxy_pass kubernetes-api-cluster;
#forward incoming client certificate
ssl_verify_client optional; #requests the client certificate and verifies it if the certificate is present
proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
}
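Alternatively, if the goal is the plain TCP passthrough mentioned in the question (so the kubectl client's certificate reaches the apiserver untouched), the proxying would normally live in a stream block rather than an http server block. A sketch, not tested against this setup; it requires nginx built with the stream module and sits at the top level of nginx.conf:
stream {
    upstream kubernetes-api-cluster {
        server 192.168.5.19:6443;
        server 192.168.5.29:6443;
    }
    server {
        listen 6443;
        # no TLS termination here; bytes are proxied straight through to the apiservers
        proxy_pass kubernetes-api-cluster;
    }
}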
This is more of a troubleshooting idea to really target the source of the problem.
If you can do:
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
from the API server and you get a response, then the problem is NOT the load balancer. To further prove this, you can copy the appropriate certificates and files to a remote workstation and do the same:
kubectl --kubeconfig [workstation location]/admin.conf get nodes
This second one obviously implies that you have direct access to the load balancer.
If this works too, you have confirmation that the certificates are being passed through the TCP load balancer.
However, the error will persist, because the load balancer performs an "availability" check against the backend servers. This check does NOT use a certificate, which produces the exception.
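One way to confirm which certificate is actually presented through the load balancer (a sketch, assuming openssl is available on the workstation):
$ openssl s_client -connect kube-apiserver.mydomain.com:6443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
# a passthrough setup should show the apiserver certificate (subject CN=kube-apiserver, issuer CN=kubernetes),
# not whatever certificate the load balancer itself is configured with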