TLS handshake fails intermittently when using HAProxy Ingress Controller - ssl

I'm using the HAProxy Ingress Controller (https://github.com/helm/charts/tree/master/incubator/haproxy-ingress) for TLS termination for my app.
I have a simple Node.js server listening on 8080 for HTTP, and on 1935 as a simple echo server (not HTTP).
I use the HAProxy Ingress Controller to wrap both ports in TLS (8080 -> 443 for HTTPS, 1935 -> 1936 for TCP + TLS).
I installed the HAProxy Ingress Controller with:
helm upgrade --install haproxy-ingress incubator/haproxy-ingress \
--namespace test \
-f ./haproxy-ingress-values.yaml \
--version v0.0.27
where the content of haproxy-ingress-values.yaml is:
controller:
  ingressClass: haproxy
  replicaCount: 1
  service:
    type: LoadBalancer
  tcp:
    1936: "test/simple-server:1935:::test/ingress-cert"
  nodeSelector:
    "kubernetes.io/os": linux
defaultBackend:
  nodeSelector:
    "kubernetes.io/os": linux
And here's my ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    secretName: ingress-cert
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: "simple-server"
          servicePort: 8080
The cert is self-signed.
If I test the TLS handshake with
echo | openssl s_client -connect "<IP>":1936
it fails intermittently (about one time in three) with:
CONNECTED(00000005)
139828847829440:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:332:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 316 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
The same problem doesn't happen on port 443.
See here for the details of the settings to reproduce the problem.
[edit]
As pointed out by JoaoMorais, it's because the controller's default stats port is 1936.
Although I didn't turn on statistics, the stats frontend apparently still binds that port and interferes with my TCP proxy.
There are two solutions that work for me:
1. Change my service's exposed port from 1936 to something else (see the sketch after this list).
2. Change the stats port by adding values like the ones below when installing the haproxy-ingress chart.
controller:
  stats:
    port: 5000
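For option 1, a minimal sketch of what the changed chart values could look like, assuming port 1937 is free on the load balancer (the new number is arbitrary; the only point is to move off the default stats port 1936):
controller:
  service:
    type: LoadBalancer
  tcp:
    # forward external 1937 to simple-server:1935, wrapped in TLS with test/ingress-cert
    1937: "test/simple-server:1935:::test/ingress-cert"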

By default, HAProxy allows the same port number to be reused across the same or other frontend/listen sections, and also across other HAProxy processes. This can be changed by adding noreuseport in the global section.
The default HAProxy Ingress configuration uses port number 1936 to expose stats. If that port number is reused by, e.g., a TCP proxy, incoming requests will be distributed between both frontends: sometimes your service will be called, sometimes the stats page. Changing the TCP proxy or the stats page (doc here) to another port should solve the issue.

Related

TLS client certificate authentication in istio

I am currently trying to figure out how to make Istio use a client certificate to authenticate to an external HTTPS service that requires client authentication. The client is a pod deployed in a Kubernetes cluster that has Istio installed. It currently accesses the external service over plain HTTP, and that cannot be changed. I know, and have verified, that Istio can perform TLS origination so that the client can keep using HTTP to refer to the service while Istio makes the TLS connection. But if the service also requires client certificate authentication, is there a way to configure Istio to present a given certificate for that?
I have tried creating a ServiceEntry as described in some tutorials, as well as DestinationRules for that ServiceEntry. Is there a setting in the DestinationRule, or elsewhere, that will allow me to do that?
This is my current attempt. The hostname that requires client authentication is app.k8s.ssg-masamune.com. I have already verified that all the certificates I'm using appear to work through curl.
The certificates though are signed by a custom CA.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-https
spec:
  hosts:
  - api.dropboxapi.com
  - www.googleapis.com
  - developers.facebook.com
  - app.k8s.ssg-masamune.com
  - bookinfo.k8s.ssg-masamune.com
  - edition.cnn.com
  - artifactory.pds-centauri.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
    targetPort: 443
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-dr
spec:
  host: app.k8s.ssg-masamune.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 80
      tls:
        mode: SIMPLE
        credentialName: app-secret
        insecureSkipVerify: true
        sni: app.k8s.ssg-masamune.com
        subjectAltNames:
        - app
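For the client-certificate requirement specifically, here is a minimal sketch (not a verified solution) of a DestinationRule that originates mutual TLS instead of SIMPLE TLS. The /etc/certs/... paths are hypothetical and would need to be mounted into the client pod's istio-proxy sidecar; newer Istio releases can alternatively reference a client-certificate secret via credentialName:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-dr-mtls
spec:
  host: app.k8s.ssg-masamune.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 80
      tls:
        mode: MUTUAL                                   # present a client certificate during TLS origination
        clientCertificate: /etc/certs/client-cert.pem  # hypothetical path mounted into the sidecar
        privateKey: /etc/certs/client-key.pem          # hypothetical path
        caCertificates: /etc/certs/ca-cert.pem         # hypothetical path; the custom CA that signed the server cert
        sni: app.k8s.ssg-masamune.com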

error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number for https on Istio ingressgateway

I am trying to set up SSL on port 443 on an ingressgateway. I can consistently reproduce with a very basic setup. I know it is something I am probably doing wrong but haven't been able to figure it out.
My k8s cluster is running on EKS. k version 1.19
I created a certificate with AWS Certificate Manager for domain api.foo.com and additional names *.api.foo.com
The certificate was created successfully and has ARN arn:aws:acm:us-west-2:<some-numbers>:certificate/<id>
Then I did a vanilla install of istio:
istioctl install --set meshConfig.accessLogFile=/dev/stdout
With version:
client version: 1.7.0
control plane version: 1.7.0
This is my gateway definition:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-gateway
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:<some-numbers>:certificate/<id>"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-type: "elb"
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 443
      name: https-443
      protocol: HTTP
    hosts:
    - "*"
Note that port 443 has protocol HTTP; I don't believe that is the problem (since I want to use SSL termination). Also, even if I change it to HTTPS, I then get this:
Resource: "networking.istio.io/v1alpha3, Resource=gateways", GroupVersionKind: "networking.istio.io/v1alpha3, Kind=Gateway"
Name: "foo-gateway", Namespace: "default"
for: "foo-gateway.yaml": admission webhook "validation.istio.io" denied the request: configuration is invalid: server must have TLS settings for HTTPS/TLS protocols
But then what would the TLS settings be? I need the certificate and key to be picked up through the annotation (from AWS Certificate Manager), not placed in /etc. As an aside, is there a way to do this without SSL termination?
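For reference, a minimal sketch of the kind of tls block the validation webhook is asking for when a server uses protocol HTTPS; it assumes the cert and key live in a Kubernetes secret, here hypothetically named foo-credential in istio-system, which means TLS is terminated at the Istio gateway itself rather than at the ELB:
  - port:
      number: 443
      name: https-443
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE                    # terminate TLS at the ingress gateway
      credentialName: foo-credential  # hypothetical secret in istio-system holding tls.crt/tls.key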
My VirtualService definition is this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-api
spec:
  hosts:
  - "*"
  gateways:
  - foo-gateway
  http:
  - match:
    - uri:
        prefix: /users
    route:
    - destination:
        host: https-user-manager
        port:
          number: 7070
I then k apply -f a super simple REST service called https-user-manager on port 7070, and find the load balancer's host name with k get svc -n istio-system, which yields:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer <cluster-ip> blahblahblah.us-west-2.elb.amazonaws.com 15021:30048/TCP,80:30210/TCP,443:31349/TCP,15443:32587/TCP 32m
I can successfully use http like:
curl http://blahblahblah.us-west-2.elb.amazonaws.com/users and get a valid response
But then if I do this:
curl -vi https://blahblahblah.us-west-2.elb.amazonaws.com/users I get the following:
* Trying <ip>...
* TCP_NODELAY set
* Connected to api.foo.com (<ip>) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
What am I doing wrong? I have seen these https://medium.com/faun/managing-tls-keys-and-certs-in-istio-using-amazons-acm-8ff9a0b99033, Istio-ingressgateway with https - Connection refused, Setting up istio ingressgateway, SSL Error - wrong version number (HTTPS to HTTP), Updating Istio-IngressGateway TLS Cert, https://github.com/kubernetes/ingress-nginx/issues/3556, https://github.com/istio/istio/issues/14264, https://preliminary.istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/, https://preliminary.istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/, among many others that I don't even remember anymore. Would appreciate any help!
low level nginx:
ssl on;
high level nginx:
listen 443 ssl;
This works for me.

TCP exposed service in an Ingress Nginx works with ssl?

If I have a backend implementation for TLS, does Ingress NGINX expose it correctly?
I'm exposing an MQTT service through Ingress NGINX with the following configuration:
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
#Add the service we want to expose
data:
  "1883": "default/mosquitto-broker:1883"
DaemonSet:
---
apiVersion: apps/v1
kind: DaemonSet
...
spec:
  selector:
    matchLabels:
      name: nginx-ingress-microk8s
  template:
    metadata:
      ...
    spec:
      ...
        ports:
        - containerPort: 80
        - containerPort: 443
        #Add the service we want to expose
        - name: prx-tcp-1883
          containerPort: 1883
          hostPort: 1883
          protocol: TCP
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
        - --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
        - --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
        $DEFAULT_CERT
        $EXTRA_ARGS
I have configured the MQTT broker to use TLS in the backend. When I run the broker on my machine, outside the Kubernetes cluster, Wireshark detects the messages as TLS and shows nothing about MQTT.
However, if I run the broker inside the cluster, Wireshark shows that I'm using MQTT and nothing about TLS, although the messages aren't readable.
And finally, if I run the MQTT broker inside the cluster without TLS, Wireshark correctly detects the MQTT packets.
My question is: is the connection encrypted when I use TLS inside the cluster? It's true that Wireshark doesn't show the content of the packets, but it knows I'm using MQTT. Maybe the headers aren't encrypted but the payload is? Does anyone know exactly?
The problem was that I was running MQTT over TLS on port 8883, as recommended by the documentation (not on port 1883, the standard MQTT port), but Wireshark didn't recognise that port as an MQTT port, so the dissection Wireshark showed was somewhat broken.
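For what it's worth, a minimal sketch of the tcp-services ConfigMap when exposing the TLS listener on 8883 instead, assuming the broker's TLS listener also runs on port 8883 of the default/mosquitto-broker service (ingress-nginx only forwards raw TCP here; the broker itself still terminates TLS):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-tcp-microk8s-conf
  namespace: ingress
data:
  # external 8883 -> mosquitto-broker:8883 (MQTT over TLS)
  "8883": "default/mosquitto-broker:8883"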

How to configure ssl for ldap/opendj while using ISTIO service mesh

I have a couple of microservices and our backend is OpenDJ/LDAP. It has been configured to use SSL. Now we are trying to use Istio as our k8s service mesh. Every other service works fine, but the LDAP server (OpenDJ) does not. My guess is that it's because of the SSL configuration; it's meant to use a self-signed cert.
I have a script that creates a self-signed cert in the istio namespace, and I have tried to use it like this in the gateway.yaml:
- port:
    number: 4444
    name: tcp-admin
    protocol: TCP
  hosts:
  - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
I have also tried to use:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: opendj-istio-mtls
spec:
  host: opendj.{{.Release.Namespace }}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
      credentialName: tls-certificate
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: opendj-receive-tls
spec:
  targets:
  - name: opendj
  peers:
  - mtls: {}
for the LDAP server, but it's not connecting. While trying to use the tls spec in gateway.yaml I get this error:
Error: admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: server cannot have TLS settings for non HTTPS/TLS ports
And the logs from opendj server
INFO - entrypoint - 2020-06-17 12:49:44,768 - Configuring OpenDJ.
WARNING - entrypoint - 2020-06-17 12:49:48,987 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
WARNING - entrypoint - 2020-06-17 12:49:53,293 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
Can someone please help me figure out how I should approach this?
To enable non-HTTPS traffic over TLS connections you have to use protocol TLS. TLS implies the connection will be routed based on the SNI header to the destination without terminating the TLS connection. You can check this.
- port:
    number: 4444
    name: tls
    protocol: TLS
  hosts:
  - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
Please check this istio documentation also.

Don't prepend http:// to Endpoint Subset IP

I have a Kubernetes Ingress, pointing to a headless service, which in turn points to an Endpoints object that routes to an external IP address. The following is the configuration for the Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-chart
subsets:
- addresses:
  - ip: **.**.**.**
  ports:
  - port: 443
However, the upstream connection fails with 'connection reset by peer', and on looking at the logs I see the following error in the Kubernetes nginx-ingress-controller:
2020/01/15 14:39:50 [error] 24546#24546: *240425068 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: *****, server: dev.somehost.com, request: "GET / HTTP/1.1", upstream: "http://**.**.**.**:443/", host: "dev.somehost.com"
My theory is that the combination of http:// and port 443 is what triggers this (tested with cURL commands). How do I either 1) specify a different protocol for the Endpoints object, or 2) just prevent the prepending of http://?
Additional notes:
1) SSL is enabled on the target IP, and if I curl it I can set up a secure connection
2) SSL passthrough doesn't really work here. The incoming and outgoing requests will use two different SSL connections with two different certificates.
3) I want the Ingress host to be the SNI (and it looks like this may default to being the case)
Edit: Ingress controller version: 0.21.0-rancher3
We were able to solve this by adding the following to the metadata of our Ingress
annotations:
  nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  nginx.ingress.kubernetes.io/configuration-snippet: |-
    proxy_ssl_server_name on;
    proxy_ssl_name $host;
The first annotation turns on HTTPS as the backend protocol, and the second (the configuration snippet) enables SNI for the upstream TLS connection.
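For context, a sketch of how these annotations could sit on the Ingress described above; the resource name and host are taken from the question, while the path, API version, and backend service port are assumptions:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-chart
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      proxy_ssl_server_name on;
      proxy_ssl_name $host;
spec:
  rules:
  - host: dev.somehost.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-chart   # assumed: the headless Service in front of the Endpoints object
          servicePort: 443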