How to configure SSL for LDAP/OpenDJ while using the Istio service mesh

I have a couple of microservices and our backend is OpenDJ/LDAP. It has been configured to use SSL. Now we are trying to use Istio as our k8s service mesh. Every other service works fine, but the LDAP server (OpenDJ) does not. My guess is it's because of the SSL configuration; it's meant to use a self-signed cert.
I have a script that creates a self-signed cert in the istio namespace, and I have tried to use it like this in the gateway.yaml:
- port:
    number: 4444
    name: tcp-admin
    protocol: TCP
  hosts:
  - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
I have also tried to use
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: opendj-istio-mtls
spec:
  host: opendj.{{.Release.Namespace }}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
      credentialName: tls-certificate
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: opendj-receive-tls
spec:
  targets:
  - name: opendj
  peers:
  - mtls: {}
for the LDAP server, but it's not connecting. While trying to use the tls spec in gateway.yaml I am getting this error:
Error: admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: server cannot have TLS settings for non HTTPS/TLS ports
And the logs from opendj server
INFO - entrypoint - 2020-06-17 12:49:44,768 - Configuring OpenDJ.
WARNING - entrypoint - 2020-06-17 12:49:48,987 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
WARNING - entrypoint - 2020-06-17 12:49:53,293 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
Can someone please help me with how I should approach this?

To enable non-HTTPS traffic over TLS connections you have to use protocol TLS. TLS implies the connection will be routed based on the SNI header to the destination without terminating the TLS connection. You can check this.
- port:
    number: 4444
    name: tls
    protocol: TLS
  hosts:
  - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
Please check the Istio documentation on this as well.
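For context, here is a minimal sketch of how that server block could sit inside a complete Gateway resource; the gateway name is illustrative, and with SIMPLE mode the tls-certificate secret has to live in the same namespace as the ingress gateway workload (typically istio-system):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: opendj-gateway          # illustrative name
spec:
  selector:
    istio: ingressgateway       # Istio's default ingress gateway workload
  servers:
  - port:
      number: 4444
      name: tls
      protocol: TLS             # TLS protocol, so the tls block below passes validation
    hosts:
    - "*"
    tls:
      mode: SIMPLE              # terminate TLS at the gateway
      credentialName: tls-certificate   # k8s secret in the gateway workload's namespace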

Related

Kubernetes Dashboard TLS cert issue

I am deploying the standard Kubernetes Dashboard (Jetstack) to the K3s cluster running on my RPi cluster. I am using Let's Encrypt to provision the TLS cert and setting the following options on the dashboard deployment:
spec:
  args:
  - --tls-cert-file=/tls.crt
  - --tls-key-file=/tls.key
  volumeMounts:
  - mountPath: /certs
    name: kubernetes-dashboard-certs
  volumes:
  - name: kubernetes-dashboard-certs
    secret:
      secretName: cluster.smigula.com-tls
The cert is valid when I visit the URL in my browser; however, the pod raises this:
http: TLS handshake error from 10.42.0.8:43704: remote error: tls: bad certificate
It appears that the ingress is terminating the TLS connection when the pod expects to terminate it. What should I do? Thanks.
[edit] I changed the resource kind from Ingress to IngressRouteTCP and set passthrough: true in the tls: section. Still the same result.
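For reference, a passthrough IngressRouteTCP along the lines the edit describes would look roughly like this (Traefik is the ingress bundled with K3s; the hostname, namespace, service name, and entrypoint are illustrative assumptions):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: dashboard-passthrough           # illustrative name
  namespace: kubernetes-dashboard       # assumed namespace
spec:
  entryPoints:
  - websecure                           # Traefik's default TLS entrypoint
  routes:
  - match: HostSNI(`dashboard.smigula.com`)   # hypothetical hostname
    services:
    - name: kubernetes-dashboard        # assumed service name
      port: 443
  tls:
    passthrough: true                   # hand the raw TLS stream to the pod, which terminates it itself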

TLS client certificate authentication in istio

I am currently trying to figure out how to get Istio to use a client certificate to authenticate to an external HTTPS service that requires client authentication. The client is a pod deployed in a Kubernetes cluster that has Istio installed. It currently accesses the external service using HTTP, and that cannot be changed. I know, and have verified, that Istio can perform TLS origination so that the client can keep using HTTP to refer to the service while Istio performs the TLS connection. But if the service also requires client certificate authentication, is there a way for me to configure Istio to use a given certificate for that?
I have tried creating a ServiceEntry as described in some tutorials, as well as DestinationRules for that ServiceEntry. Is there a configuration in the DestinationRule, or elsewhere, that will allow me to do that?
This is my current attempt. The hostname that requires client authentication is app.k8s.ssg-masamune.com. I have already verified that all the certificates I'm using appear to work through curl.
The certificates, though, are signed by a custom CA.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-https
spec:
  hosts:
  - api.dropboxapi.com
  - www.googleapis.com
  - developers.facebook.com
  - app.k8s.ssg-masamune.com
  - bookinfo.k8s.ssg-masamune.com
  - edition.cnn.com
  - artifactory.pds-centauri.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
    targetPort: 443
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-dr
spec:
  host: app.k8s.ssg-masamune.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 80
      tls:
        mode: SIMPLE
        credentialName: app-secret
        insecureSkipVerify: true
        sni: app.k8s.ssg-masamune.com
        subjectAltNames:
        - app
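For comparison, presenting a client certificate to the upstream is normally expressed with mode: MUTUAL rather than SIMPLE. A minimal sketch, assuming the client certificate, key, and custom CA bundle are mounted into the client pod's sidecar at the paths shown (the paths are assumptions, not taken from the question; credentialName is generally only honored on gateway workloads, which is why file paths are used here):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-dr-client-cert        # illustrative name
spec:
  host: app.k8s.ssg-masamune.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 80
      tls:
        mode: MUTUAL                               # present a client certificate to the upstream
        clientCertificate: /etc/certs/client.pem   # assumed mount path inside the sidecar
        privateKey: /etc/certs/client.key          # assumed mount path inside the sidecar
        caCertificates: /etc/certs/ca.pem          # the custom CA that signed the server cert
        sni: app.k8s.ssg-masamune.com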

error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number for https on Istio ingressgateway

I am trying to set up SSL on port 443 on an ingressgateway. I can consistently reproduce the problem with a very basic setup. I know it is probably something I am doing wrong, but I haven't been able to figure it out.
My k8s cluster is running on EKS, Kubernetes version 1.19.
I created a certificate with AWS Certificate Manager for domain api.foo.com and additional names *.api.foo.com
The certificate was created successfully and has ARN arn:aws:acm:us-west-2:<some-numbers>:certificate/<id>
Then I did a vanilla install of istio:
istioctl install --set meshConfig.accessLogFile=/dev/stdout
With version:
client version: 1.7.0
control plane version: 1.7.0
This is my gateway definition:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-gateway
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:<some-numbers>:certificate/<id>"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-type: "elb"
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 443
      name: https-443
      protocol: HTTP
    hosts:
    - "*"
Note that port 443 has protocol HTTP; I don't believe that is the problem (since I want to use SSL termination). Also, even if I change it to HTTPS, I get this:
Resource: "networking.istio.io/v1alpha3, Resource=gateways", GroupVersionKind: "networking.istio.io/v1alpha3, Kind=Gateway"
Name: "foo-gateway", Namespace: "default"
for: "foo-gateway.yaml": admission webhook "validation.istio.io" denied the request: configuration is invalid: server must have TLS settings for HTTPS/TLS protocols
But then what would the tls settings be? I need the certificate to be picked up through the annotation (from AWS Certificate Manager), not placed in /etc. As an aside, is there a way to do this without SSL termination?
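For reference, if TLS were terminated at the Istio gateway itself rather than at the ELB, the 443 server would carry its own tls block referencing a Kubernetes secret in the ingress gateway's namespace. A sketch with an illustrative secret name (an ACM certificate generally cannot be referenced this way, since its private key cannot be exported out of AWS):
- port:
    number: 443
    name: https-443
    protocol: HTTPS
  hosts:
  - "*"
  tls:
    mode: SIMPLE
    credentialName: foo-api-tls   # illustrative k8s secret containing tls.crt and tls.key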
My VirtualService definition is this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-api
spec:
  hosts:
  - "*"
  gateways:
  - foo-gateway
  http:
  - match:
    - uri:
        prefix: /users
    route:
    - destination:
        host: https-user-manager
        port:
          number: 7070
I then k apply -f a super simple REST service called https-user-manager on port 7070. I then find the hostname of the load balancer from k get svc -n istio-system, which yields:
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP                                PORT(S)                                                       AGE
istio-ingressgateway   LoadBalancer   <cluster-ip>   blahblahblah.us-west-2.elb.amazonaws.com   15021:30048/TCP,80:30210/TCP,443:31349/TCP,15443:32587/TCP   32m
I can successfully use HTTP like:
curl http://blahblahblah.us-west-2.elb.amazonaws.com/users
and get a valid response. But then if I do this:
curl -vi https://blahblahblah.us-west-2.elb.amazonaws.com/users
I get the following:
* Trying <ip>...
* TCP_NODELAY set
* Connected to api.foo.com (<ip>) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
What am I doing wrong? I have seen these https://medium.com/faun/managing-tls-keys-and-certs-in-istio-using-amazons-acm-8ff9a0b99033, Istio-ingressgateway with https - Connection refused, Setting up istio ingressgateway, SSL Error - wrong version number (HTTPS to HTTP), Updating Istio-IngressGateway TLS Cert, https://github.com/kubernetes/ingress-nginx/issues/3556, https://github.com/istio/istio/issues/14264, https://preliminary.istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/, https://preliminary.istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/, among many others that I don't even remember anymore. Would appreciate any help!
Low-level nginx:
  ssl on;
High-level nginx:
  listen 443 ssl;
This works for me.
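For context, a minimal sketch of an nginx server block using the newer syntax (the certificate paths and upstream are illustrative):
server {
    listen 443 ssl;                                      # newer form; replaces the deprecated "ssl on;" directive
    server_name api.foo.com;
    ssl_certificate     /etc/nginx/tls/fullchain.pem;    # illustrative paths
    ssl_certificate_key /etc/nginx/tls/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:7070;                # illustrative upstream
    }
}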

TLS handshake fails intermittently when using HAProxy Ingress Controller

I'm using HAProxy Ingress Controller (https://github.com/helm/charts/tree/master/incubator/haproxy-ingress) for TLS-termination for my app.
I have a simple Node.js server listening on 8080 for HTTP and on 1935 as a simple echo server (not HTTP).
And I use HAProxy Ingress controller to wrap the ports in TLS. (8080 -> 443 (HTTPS), 1935 -> 1936 (TCP + TLS))
I installed HAProxy Ingress Controller with
helm upgrade --install haproxy-ingress incubator/haproxy-ingress \
  --namespace test \
  -f ./haproxy-ingress-values.yaml \
  --version v0.0.27
where the content of haproxy-ingress-values.yaml is:
controller:
  ingressClass: haproxy
  replicaCount: 1
  service:
    type: LoadBalancer
  tcp:
    1936: "test/simple-server:1935:::test/ingress-cert"
  nodeSelector:
    "kubernetes.io/os": linux
defaultBackend:
  nodeSelector:
    "kubernetes.io/os": linux
And here's my ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    secretName: ingress-cert
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: "simple-server"
          servicePort: 8080
The cert is self-signed.
If I test the TLS handshake with
echo | openssl s_client -connect "<IP>":1936
it sometimes fails (about 1/3 of the time) with:
CONNECTED(00000005)
139828847829440:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:332:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 316 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
The same problem doesn't happen on port 443.
See here for the details of the settings to reproduce the problem.
[edit]
As pointed out by @JoaoMorais, it's because the default statistics port is 1936.
Although I didn't turn on statistics, it seems it still interferes with the behavior.
There are two solutions that work for me:
1. Change my service's 1936 port to another one.
2. Change the stats port by adding values like the ones below when installing the haproxy-ingress chart:
controller:
  stats:
    port: 5000
HAProxy by default allows the same port number to be reused across the same or other frontend/listen sections, and also across other HAProxy processes. This can be changed by adding noreuseport in the global section.
The default HAProxy Ingress configuration uses port number 1936 to expose stats. If that port number is reused by, e.g., a TCP proxy, incoming requests will be distributed between both frontends: sometimes your service will be called, sometimes the stats page. Changing the TCP proxy or the stats page (doc here) to another port should solve the issue.
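A sketch of the first option, moving the TCP proxy off port 1936 in the chart values (the replacement port number is arbitrary):
controller:
  tcp:
    1937: "test/simple-server:1935:::test/ingress-cert"   # any port other than 1936 avoids the stats frontend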

Ingress TLS routes with cert-manager not applied

I have a K8s cluster (v1.12.8-gke.10) in GKE with an nginx ingress with host rules. I am trying to enable TLS using cert-manager for ingress routes, using a selfsign cluster issuer. But when I access the site over HTTPS, I am still getting the default K8s certificate. (The certificate is only valid for the following names: kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local.)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: test
  name: test
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    certmanager.k8s.io/cluster-issuer: selfsign
spec:
  tls:
  - secretName: test
    hosts:
    - test.example.com
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: 80
I have checked the following, and everything is working fine:
A cluster issuer named "selfsign"
A valid self-signed certificate backed by a secret "test"
A healthy and running nginx ingress deployment
A healthy and running ingress service of type load-balancer
I think it's an issue with the ClusterIssuer.
Just have a look at my cluster issuer and check:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: it-support@something.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: prod
    # Enable the HTTP-01 challenge provider
    http01: {}
Check that you use the right URL to get production-grade certificates:
  server: https://acme-v02.api.letsencrypt.org/directory
If your server URL is something like this:
  server: https://acme-staging-v02.api.letsencrypt.org/directory
then you are applying for a staging certificate, which may cause the error.
I've followed the tutorial from DigitalOcean and was able to enable TLS using cert-manager for ingress routes, using Helm, Tiller, Let's Encrypt, and the Nginx Ingress controller in GKE.
Instead of the host test.example.com, I used my own domain name and spun up dummy backend services (echo1 and echo2) for testing purposes.
After following the tutorial, to verify that HTTPS is working correctly, try to curl the host:
$ curl test.example.com
You should see a 308 HTTP response (Permanent Redirect). This indicates that HTTP requests are being redirected to HTTPS.
On the other hand, try running curl on:
$ curl https://test.example.com
This should show you the site response.
You can run the previous commands with the verbose -v flag to inspect the certificate handshake and verify the certificate information.
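A sketch of that verbose check, filtering the handshake output down to the certificate details (the grep pattern is just there to trim the output):
# Show the certificate subject, issuer and expiry presented by the server during the TLS handshake
$ curl -v https://test.example.com 2>&1 | grep -E 'subject:|issuer:|expire date:'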