I am deploying the standard Kubernetes Dashboard to the K3s cluster running on my Raspberry Pi cluster. I am using Jetstack cert-manager with Let's Encrypt to provision the TLS cert, and I am setting the following options on the dashboard deployment:
spec:
  args:
    - --tls-cert-file=/tls.crt
    - --tls-key-file=/tls.key
  volumeMounts:
    - mountPath: /certs
      name: kubernetes-dashboard-certs
  volumes:
    - name: kubernetes-dashboard-certs
      secret:
        secretName: cluster.smigula.com-tls
The cert is valid when I visit the URL in my browser; however, the pod keeps raising this:
http: TLS handshake error from 10.42.0.8:43704: remote error: tls: bad certificate
It appears that the ingress is terminating the TLS connection when the pod expects to terminate it. What should I do? Thanks.
[edit] I changed the resource kind from Ingress to IngressRouteTCP and set passthrough: true in the tls: section. Still same result.
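For context, a passthrough IngressRouteTCP for K3s's bundled Traefik would typically look something like the sketch below; the entry point name and namespace are assumptions based on a stock K3s install:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard   # assumed namespace
spec:
  entryPoints:
    - websecure                     # K3s Traefik's default TLS entry point
  routes:
    - match: HostSNI(`cluster.smigula.com`)
      services:
        - name: kubernetes-dashboard
          port: 443
  tls:
    passthrough: true               # hand the raw TLS stream to the pod

With passthrough, Traefik routes on the SNI header only, and the dashboard pod must present the Let's Encrypt certificate itself from the mounted secret.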
Related
I am currently trying to figure out how to enable Istio to use a client certificate to authenticate to an external HTTPS service that requires client authentication. The client is a pod deployed in a Kubernetes cluster that has Istio installed. It currently accesses the external service using HTTP, and this cannot be changed. I know, and have verified, that Istio can perform TLS origination so that the client can still use HTTP to refer to the service while Istio performs the TLS connection. But if the service also requires client certificate authentication, is there a way for me to configure Istio to use a given certificate for that?
I have tried creating a ServiceEntry as described in some tutorials, as well as DestinationRules for that ServiceEntry. Is there a configuration in the DestinationRule, or elsewhere, that will allow me to do this?
This is my current attempt. The hostname that requires client authentication is app.k8s.ssg-masamune.com. I have already verified that all the certificates I'm using appear to work through curl.
The certificates, though, are signed by a custom CA.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-https
spec:
  hosts:
    - api.dropboxapi.com
    - www.googleapis.com
    - developers.facebook.com
    - app.k8s.ssg-masamune.com
    - bookinfo.k8s.ssg-masamune.com
    - edition.cnn.com
    - artifactory.pds-centauri.com
  location: MESH_EXTERNAL
  ports:
    - number: 80
      name: http
      protocol: HTTP
      targetPort: 443
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-dr
spec:
  host: app.k8s.ssg-masamune.com
  trafficPolicy:
    portLevelSettings:
      - port:
          number: 80
        tls:
          mode: SIMPLE
          credentialName: app-secret
          insecureSkipVerify: true
          sni: app.k8s.ssg-masamune.com
          subjectAltNames:
            - app
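For what it's worth, client-certificate (mutual TLS) origination is typically configured with mode: MUTUAL rather than SIMPLE; a minimal sketch, assuming the client cert, key, and custom CA bundle are mounted into the sidecar at the paths shown (the paths and the resource name are assumptions, not taken from the question):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-dr-mutual   # hypothetical name
spec:
  host: app.k8s.ssg-masamune.com
  trafficPolicy:
    portLevelSettings:
      - port:
          number: 80
        tls:
          mode: MUTUAL                                   # originate mTLS, presenting a client cert
          clientCertificate: /etc/certs/client-cert.pem  # assumed mount path in the sidecar
          privateKey: /etc/certs/client-key.pem          # assumed mount path
          caCertificates: /etc/certs/ca-cert.pem         # custom CA bundle used to verify the server
          sni: app.k8s.ssg-masamune.com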
I have a couple of microservices, and our backend is OpenDJ/LDAP, which has been configured to use SSL. We are now trying to use Istio as our k8s service mesh. Every other service works fine, but the LDAP server (OpenDJ) does not. My guess is that it's because of the SSL configuration; it's meant to use a self-signed cert.
I have a script that creates a self-signed cert in the istio namespace, and I have tried to use it like this in gateway.yaml:
- port:
    number: 4444
    name: tcp-admin
    protocol: TCP
  hosts:
    - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
I have also tried to use the following for the LDAP server:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: opendj-istio-mtls
spec:
  host: opendj.{{ .Release.Namespace }}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
      credentialName: tls-certificate
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: opendj-receive-tls
spec:
  targets:
    - name: opendj
  peers:
    - mtls: {}
But it's not connecting. While trying to use the tls spec in gateway.yaml, I get this error:
Error: admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: server cannot have TLS settings for non HTTPS/TLS ports
And here are the logs from the OpenDJ server:
INFO - entrypoint - 2020-06-17 12:49:44,768 - Configuring OpenDJ.
WARNING - entrypoint - 2020-06-17 12:49:48,987 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
WARNING - entrypoint - 2020-06-17 12:49:53,293 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
Can someone please advise on how I should approach this?
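As an aside, the authentication.istio.io/v1alpha1 Policy API used above has since been superseded; a rough PeerAuthentication equivalent (the namespace and pod label are assumptions) would be:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: opendj-receive-tls
  namespace: default        # assumed namespace
spec:
  selector:
    matchLabels:
      app: opendj           # assumed pod label
  mtls:
    mode: STRICT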
To enable non-HTTPS traffic over TLS connections, you have to use protocol TLS. TLS implies the connection will be routed based on the SNI header to the destination without terminating the TLS connection. You can check this in the Istio documentation.
- port:
    number: 4444
    name: tls
    protocol: TLS
  hosts:
    - "*"
  tls:
    mode: SIMPLE # enable TLS on this port
    credentialName: tls-certificate # fetch cert from k8s secret
Please check the Istio documentation on this as well.
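Note that mode: SIMPLE terminates TLS at the gateway using the cert in tls-certificate. If OpenDJ should instead terminate TLS itself with its self-signed certificate, a passthrough variant is one option to consider (a sketch, not tested against this setup):

- port:
    number: 4444
    name: tls
    protocol: TLS
  hosts:
    - "*"
  tls:
    mode: PASSTHROUGH   # route by SNI; the OpenDJ pod terminates TLS itself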
I'm trying to use the certificates obtained through DigiCert to enable HTTPS on my nginx-ingress. We've obtained a wildcard certificate, and I have the following files:
domain_name_2019-2021.csr
domain_name_2019-2021.key
domain_name_2019-2021.pem
DigiCertCA2_2019-2021.pem
star_domain_name_2019_2021.pem
TrustedRoot.pem
I've created the TLS secret by running the following command:
kubectl create secret tls tls-secret --key ${KEY_FILE} --cert ${CERT_FILE}
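A common pitfall here is passing only the leaf certificate: nginx needs the leaf plus the DigiCert intermediate concatenated into one file, or clients will see chain errors. A sketch using the files listed above (which file plays which role is an assumption worth double-checking):

# leaf certificate first, then the intermediate CA certificate
cat star_domain_name_2019_2021.pem DigiCertCA2_2019-2021.pem > fullchain.pem
kubectl create secret tls tls-secret \
  --key domain_name_2019-2021.key \
  --cert fullchain.pem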
And used this secret in my ingress configuration like so:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - {{ .Values.host }}
      secretName: tls-secret
  rules:
    - host: {{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              serviceName: service_name
              servicePort: 443
However, when I browse to subdomain.domain_name.com, I get an invalid certificate with the error "This certificate has not been verified by a third party", and the certificate it's using says "Kubernetes Ingress Controller Fake Certificate".
You can follow this guide to install Jetstack cert-manager; once it is installed, follow this Stack Overflow post. It will solve your query.
The certificates you created are not needed for this approach: cert-manager will create the certificate automatically once it can verify the ACME challenge, and for that verification you need to map the DNS hostname to the load balancer IP of nginx.
This should accomplish the HTTP-to-HTTPS conversion.
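For reference, once cert-manager is installed, the usual pattern is to annotate the Ingress with an issuer and let cert-manager populate the TLS secret; a minimal sketch, where the letsencrypt-prod issuer name and the host are assumptions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod  # assumed issuer name
spec:
  tls:
    - hosts:
        - subdomain.domain_name.com   # assumed host
      secretName: tls-secret          # cert-manager creates and maintains this secret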
I have a K8s cluster (v1.12.8-gke.10) in GKE with an nginx ingress that uses host rules. I am trying to enable TLS using cert-manager for ingress routes, with a self-signed cluster issuer. But when I access the site over HTTPS, I still get the default K8s certificate. (The certificate is only valid for the following names: kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: test
  name: test
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    certmanager.k8s.io/cluster-issuer: selfsign
spec:
  tls:
    - secretName: test
      hosts:
        - test.example.com
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: test
              servicePort: 80
I have checked the following, and all of them are working fine:
- A cluster issuer named "selfsign"
- A valid self-signed certificate backed by a secret "test"
- A healthy and running nginx ingress deployment
- A healthy and running ingress service of type LoadBalancer
I think it's an issue with your ClusterIssuer. Have a look at mine and compare:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: it-support@something.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: prod
    # Enable the HTTP-01 challenge provider
    http01: {}
Check that the server URL is the production one, which issues production-grade certificates:
server: https://acme-v02.api.letsencrypt.org/directory
If your server URL is something like this:
server: https://acme-staging-v02.api.letsencrypt.org/directory
then you are requesting a staging certificate, which can cause this kind of error.
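A quick way to confirm which issuer actually signed the certificate being served (using the example host from the question):

echo | openssl s_client -connect test.example.com:443 -servername test.example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates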
I've followed the tutorial from DigitalOcean and was able to enable TLS using cert-manager for ingress routes, using Helm, Tiller, Let's Encrypt, and the Nginx Ingress controller in GKE.
Instead of host test-example.com, I used my own domain name and spun up dummy backend services (echo1 and echo2) for testing purposes.
After following the tutorial, verify that HTTPS is working correctly by trying to curl the host:
$ curl test.example.com
You should see a 308 HTTP response (Permanent Redirect). This indicates that HTTP requests are being redirected to HTTPS.
On the other hand, try running curl on:
$ curl https://test.example.com
This should show you the site response.
You can run the previous commands with the verbose -v flag to inspect the certificate handshake and verify the certificate information.
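For example:

# subject, issuer, and expiry dates appear in the TLS section of the verbose output
curl -v https://test.example.com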
We have set up Kubernetes with nginx-ingress combined with cert-manager to automatically obtain and use SSL certificates for ingress domains from Let's Encrypt, using this guide: https://medium.com/@maninder.bindra/auto-provisioning-of-letsencrypt-tls-certificates-for-kubernetes-services-deployed-to-an-aks-52fd437b06b0. The result is that each Ingress defines its own SSL certificate that is automatically provisioned by cert-manager.
This all works well except for one problem: the source IP address of the traffic is lost to applications in pods.
There is an annotation that we were advised to apply to the nginx-ingress controller service: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: '*'. This has the effect of preserving source IP addresses. However, applying it breaks SSL:
An error occurred during a connection to {my.domain.com}. SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG
My head is starting to spin. Does anyone know of any approaches to this (it seems to me that this would be a common requirement)?
Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-http-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: my.host.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-http-service
              servicePort: 80
  tls:
    - hosts:
        - "my.host.com"
      secretName: malcolmqa-tls
As @dom_watson mentioned in the comments, adding the parameter controller.service.externalTrafficPolicy=Local to the Helm install configuration solved the issue: the Local value preserves the client source IP, so the original source address reaches the target pod in the Kubernetes cluster. Find more information in the official Kubernetes documentation.
helm upgrade my-nginx stable/nginx-ingress --set rbac.create=true --set controller.service.externalTrafficPolicy=Local
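If the controller Service already exists, the same setting can also be patched directly; the service name my-nginx-nginx-ingress-controller is an assumption based on the Helm release name above:

kubectl patch svc my-nginx-nginx-ingress-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'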