I'm baffled by this one. I followed the instructions for setting up the MQTTS port and server certs:
enable the port
create a k8s secret with my CA cert and intermediate cert file as well as the .key (I tried both .crt/.key and .pem files; neither worked) — a sketch of the secret and mount follows the env vars below
set up the mount
set up the Docker environment variables as below:
- name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
  value: "/etc/ssl/vernemq/vernemq_ca.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
  value: "/etc/ssl/vernemq/vernemq.crt"
- name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
  value: "/etc/ssl/vernemq/vernemq.key"
- name: DOCKER_VERNEMQ_LISTENER__SSL__REQUIRE_CERTIFICATE
  value: "on"
- name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
  value: "0.0.0.0:8883"
- name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
  value: "0.0.0.0:1883"
However, when I run a test with mosquitto_pub and pass in --cafile, I get "Protocol error". Has anyone gotten VerneMQ TLS to work?
mosquitto_pub -h <hostIP> -t test -m 'test' --cafile ca.crt -i testClient -d
Client testClient sending CONNECT
Error: Protocol error
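One thing I'm not sure about: with REQUIRE_CERTIFICATE set to "on", I'd expect the broker to reject any client that doesn't present a certificate, and depending on the mosquitto version the client may also default to port 1883 rather than 8883. A test along these lines (client.crt/client.key standing in for a client cert signed by the same CA) might behave differently:

mosquitto_pub -h <hostIP> -p 8883 -t test -m 'test' -i testClient -d \
  --cafile ca.crt --cert client.crt --key client.key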
I used openssl s_client -connect to test the connection, and I'm getting write:errno=54 along with some other output. How can I get around that? Thanks!
CONNECTED(00000003)
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 287 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.3
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Start Time: 1670975815
Timeout : 7200 (sec)
Verify return code: 0 (ok)
I tried using self-generated certs; that didn't work either. I also tried setting up chained CA files, using different formats, etc., to no avail.
I have created a GKE Cluster 1.18.17-gke.1901 and I have installed Istio 1.9.5 on it. My Ingress Gateway Service is of type: LoadBalancer.
I am trying to implement MUTUAL TLS mode in my istio-ingressgateway. The Gateway configuration looks like this:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: mutual-domain
  namespace: test
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - mutual.domain.com
    port:
      name: mutual-domain-https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: mutual-secret
      minProtocolVersion: TLSV1_2
      mode: MUTUAL
I have also set up a corresponding VirtualService and DestinationRule.
Now, whenever I try to connect to https://mutual.domain.com I get the following error:
* Trying 100.50.76.97...
* TCP_NODELAY set
* Connected to mutual.domain.com (100.50.76.97) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mutual.domain.com:443
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mutual.domain.com:443
If I change the tls: mode: to SIMPLE I am able to reach the service via the domain name but when it's MUTUAL the error above shows up.
The mutual-secret is a tls-type Kubernetes secret and it contains tls.crt and tls.key.
$ kubectl describe secret mutual-secret
Name: mutual-secret
Namespace: istio-system
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 4585 bytes
tls.key: 1674 bytes
Is there something missing? Why can't I access my service in MUTUAL mode but the same secret works for SIMPLE mode?
Assuming you are following this.
It seems you are missing ca.crt in your secret. Create a new secret with tls.crt, tls.key and ca.crt, and try again.
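For example, following the Istio docs' pattern for mutual TLS ingress, the secret can be created as a generic secret carrying all three files (the file names on the right are whatever you have locally):

kubectl create secret generic mutual-secret -n istio-system \
  --from-file=tls.key=server.key \
  --from-file=tls.crt=server.crt \
  --from-file=ca.crt=ca.crt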
The ERR_BAD_SSL_CLIENT_AUTH_CERT error mentioned in the comments is Chrome/Chromium specific. It means the browser does not recognise the Certificate Authority.
Add your CA certificate (.pem file) to Chrome/Chromium, you can follow this. Hopefully it will solve your problem.
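You can also test outside the browser by having curl present a client certificate signed by your CA (client.crt/client.key are placeholders for your own files):

curl -v https://mutual.domain.com \
  --cacert ca.crt --cert client.crt --key client.key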
I'm trying to set up Istio (v1.7.3) on AKS (v1.16.13) in a way that, for all HTTPS requests within my domain, the TLS handshake is performed transparently by the egress gateway.
I ended up with something like this (abc.mydomain.com is an external URL, which is why I created a ServiceEntry for it):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*.mydomain.com"
    tls:
      mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-mydomain
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: mydomain
    trafficPolicy:
      tls:
        mode: SIMPLE
        caCertificates: /etc/istio/mydomain-ca-certs/mydomain.crt
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-mydomain-through-egress-gateway
spec:
  hosts:
  - "*.mydomain.com"
  gateways:
  - mesh
  - istio-egressgateway
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - "*.mydomain.com"
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: mydomain
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
      sniHosts:
      - "*.mydomain.com"
    route:
    - destination:
        host: abc.mydomain.com
        port:
          number: 443
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: www-mydomain
spec:
  hosts:
  - abc.mydomain.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
I have mounted my certificate in the egress gateway and verified with: kubectl exec -n istio-system "$(kubectl -n istio-system get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}')" -- ls -al /etc/istio/mydomain-ca-certs
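For reference, the mount in the egress gateway deployment looks roughly like this (the secret name mydomain-ca-certs is my own):

# added to the istio-egressgateway deployment's pod spec
volumes:
- name: mydomain-ca-certs
  secret:
    secretName: mydomain-ca-certs
containers:
- name: istio-proxy
  volumeMounts:
  - name: mydomain-ca-certs
    mountPath: /etc/istio/mydomain-ca-certs
    readOnly: true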
I’m getting the following when invoking curl -vvI https://abc.mydomain.com from one of the pods running in another namespace:
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
I've also tried what's described here (Trust custom Root CA on Egress Gateway), but I'm getting the same error as above.
Any idea what I might be doing wrong?
UPDATE1
Here is the output of istioctl proxy-status (the egress gateway's RDS is STALE):
istio-egressgateway-695dc4fc7c-p5p42.istio-system SYNCED SYNCED SYNCED STALE istiod-5c6b7b5b8f-csggg 1.7.3 41d
istio-ingressgateway-5689f7c67-j54m7.istio-system SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3 118d
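To dig into the stale RDS I also dumped the gateway's route config (assuming I have the istioctl syntax right):

istioctl proxy-config routes istio-egressgateway-695dc4fc7c-p5p42.istio-system -o json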
Output of curl -vvI https://abc.mydomain.com:
* Expire in 0 ms for 1 (transfer 0x55ce54104f50)
* Trying 10.223.24.254...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55ce54104f50)
* Connected to abc.mydomain.com (10.223.24.254) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to abc.mydomain.com:443
Output of openssl s_client -connect abc.mydomain.com:443
CONNECTED(00000003)
write:errno=104
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 328 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
I am running a bare-metal Kubernetes cluster with nginx ingress and MetalLB, and some hostnames mapped to the external IP provided by MetalLB.
I have created an nginx deployment, exposed it via a service, and created an ingress with the hostname.
I have created a self-signed certificate with openssl:
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout tls.key -out tls.crt -subj "/CN=fake.example.com" -days 365
Then created a secret in the correct namespace:
kubectl -n demo create secret tls fake-self-secret --cert=tls.crt --key=tls.key
Then created the ingress:
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
    name: demo-ingress
    namespace: demo
  spec:
    rules:
    - host: fake.example.com
      http:
        paths:
        - backend:
            serviceName: nginx
            servicePort: 80
          path: /
    tls:
    - hosts:
      - fake.example.com
      secretName: fake-self-secret
HTTP works (because of the ssl-redirect "false" annotation), but HTTPS returns SSL_ERROR_RX_RECORD_TOO_LONG. In the nginx ingress controller log I see something like:
"\x16\x03\x01\x00\xA8\x01\x00\x00\xA4\x03\x03*\x22\xA8\x8F\x07q\xAD\x98\xC1!\
openssl s_client -connect fake.example.com:443 -servername fake.example.com -crlf
140027703674768:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:794:
The nginx ingress controller version is 0.30, with the default configuration; the ssl-protocols enabled in the configmap are TLSv1 TLSv1.1 TLSv1.2 TLSv1.3.
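As I understand it, SSL_ERROR_RX_RECORD_TOO_LONG usually means the server is answering in plain HTTP on the TLS port, so one check is to deliberately speak plain HTTP to port 443; if an HTTP response comes back, TLS is never being terminated there:

curl -v http://fake.example.com:443/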
Any help / new ideas are welcomed :)
I have switched from the Kubernetes nginx ingress controller to the NGINX Ingress Controller (nginx/nginx-ingress:1.7.0), and the config works.
I've got Kubernetes installed on Ubuntu 19.10.
I've setup ingress-nginx and can access my test service using http.
However, I get a "Connection refused" when I try to access via https.
[Edit] I'm trying to get https to terminate in the ingress and pass unencrypted traffic to my service the same way http does. I've implemented the below based on many examples I've seen but with little luck.
YAML:
kind: Service
apiVersion: v1
metadata:
  name: messagemanager-service
  namespace: default
  labels:
    name: messagemanager-service
spec:
  type: NodePort
  selector:
    app: messagemanager
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 31212
    name: http
  externalIPs:
  - 192.168.0.210
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: messagemanager
  labels:
    app: messagemanager
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: messagemanager
  template:
    metadata:
      labels:
        app: messagemanager
        version: v1
    spec:
      containers:
      - name: messagemanager
        image: test/messagemanager:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: messagemanager-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: false
    ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /message
        backend:
          serviceName: messagemanager-service
          servicePort: 8080
https test
curl -kL https://192.168.0.210/message -verbose
* Trying 192.168.0.210:443...
* TCP_NODELAY set
* connect to 192.168.0.210 port 443 failed: Connection refused
* Failed to connect to 192.168.0.210 port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.0.210 port 443: Connection refused
http test
curl -kL http://192.168.0.210/message -verbose
* Trying 192.168.0.210:80...
* TCP_NODELAY set
* Connected to 192.168.0.210 (192.168.0.210) port 80 (#0)
> GET /message HTTP/1.1
> Host: 192.168.0.210
> User-Agent: curl/7.65.3
> Accept: */*
> Referer: rbose
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain;charset=UTF-8
< Date: Fri, 24 Apr 2020 18:44:07 GMT
< connection: keep-alive
< content-length: 50
<
* Connection #0 to host 192.168.0.210 left intact
$ kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.105.92.236 <pending> 80:31752/TCP,443:32035/TCP 2d
ingress-nginx-controller-admission ClusterIP 10.100.223.87 <none> 443/TCP 2d
$ kubectl get ingress -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
messagemanager-ingress <none> * 80, 443 37m
key creation
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
$ kubectl describe ingress
Name: messagemanager-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates
Rules:
Host Path Backends
---- ---- --------
*
/message messagemanager-service:8080 ()
Annotations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 107s nginx-ingress-controller Ingress default/messagemanager-ingress
I was under the assumption that TLS would terminate in the ingress and the request would be passed on to the service as http.
I had to add the external IPs in the service to get HTTP to work.
Am I missing something similar for HTTPS?
Any help and guidance is appreciated.
Thanks
Mark
I've reproduced your scenario in my lab, and after a few changes to your ingress it's working as you described.
In my lab I used an nginx image that serves a default landing page on port 80, and with this Ingress rule it's possible to serve it on ports 80 and 443.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  labels:
    app: nginx
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
  namespace: default
  labels:
    name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31000
    name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  labels:
    app: nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx-service
          servicePort: 80
The only difference between my ingress and yours is that I removed nginx.ingress.kubernetes.io/ssl-passthrough: false. In the documentation we can read:
note SSL Passthrough is disabled by default
So there is no need for you to specify that.
I used the same secret as you:
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
From your question I have the impression that you are trying to reach your ingress through the IP 192.168.0.210. This is your service IP, not your Ingress IP.
If you are using cloud-managed Kubernetes, you can run the following command to find your Ingress IP:
$ kubectl get ingresses nginx
NAME HOSTS ADDRESS PORTS AGE
nginx * 34.89.108.48 80, 443 6m32s
If you are running on bare metal without any load-balancer solution such as MetalLB, you can see that your ingress-nginx service's EXTERNAL-IP will stay Pending forever.
$ kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 <pending> 80:31024/TCP,443:30039/TCP 23s
You can do the same thing as you did with your service and add an externalIP manually:
kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-1587980954-controller LoadBalancer 10.110.188.236 10.156.0.24 80:31024/TCP,443:30039/TCP 9m14s
After this change, your ingress will have the same IP as you defined in your Ingress Service:
$ kubectl get ingress nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx <none> * 10.156.0.24 80, 443 118s
$ curl -kL https://10.156.0.24/nginx --verbose
* Trying 10.156.0.24...
* TCP_NODELAY set
* Connected to 10.156.0.24 (10.156.0.24) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Apr 27 09:49:19 2020 GMT
* expire date: Apr 27 09:49:19 2021 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x560cee14fe90)
> GET /nginx HTTP/1.1
> Host: 10.156.0.24
> User-Agent: curl/7.52.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< server: nginx/1.17.10
< date: Mon, 27 Apr 2020 10:01:29 GMT
< content-type: text/html
< content-length: 612
< vary: Accept-Encoding
< last-modified: Tue, 14 Apr 2020 14:19:26 GMT
< etag: "5e95c66e-264"
< accept-ranges: bytes
< strict-transport-security: max-age=15724800; includeSubDomains
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host 10.156.0.24 left intact
EDIT:
There does not seem to be a way to manually set the "external IPs" for
the ingress as can be done in the service. If you know of one please
let me know :-). Looks like my best bet is to try MetalLB.
MetalLB would be the best option for production. If you are running it for lab only, you have the option to add your node public IP (the same IP you can get by running kubectl get nodes -o wide) and attach it to your NGINX ingress controller.
Adding your node IP to your NGINX ingress controller
spec:
  externalIPs:
  - 192.168.0.210
Create a file called ingress-nginx-svc-patch.yaml and paste the contents above.
Next apply the changes with the following command:
kubectl patch service ingress-nginx-controller -n kube-system --patch "$(cat ingress-nginx-svc-patch.yaml)"
And as result:
$ kubectl get service -n kube-system ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.97.0.243 192.168.0.210 80:31409/TCP,443:30341/TCP 39m
I'm no expert, but any time I've seen a service handle HTTP and HTTPS traffic, it has specified two ports in the service YAML, one for HTTP and one for HTTPS. Apparently there are ways to get around this; reading here might be a good start.
For two ports, look at the official k8s example here; a minimal sketch follows.
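A minimal sketch of such a two-port service (all names and port numbers here are illustrative, not from the question):

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443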
I was struggling with the same situation for a while and was about to start using MetalLB, but after running kubectl -n ingress-nginx get svc I realised the ingress-nginx controller had created its own NodePort and I could access the cluster through that.
I'm not an expert, so it's probably not ideal for production, but I don't see why it wouldn't work.