Brief description of the problem:
If I attach multiple TLS Gateways (using the same certificate) to one ingressgateway, only one of them works over TLS (the last one applied).
Attaching multiple non-TLS Gateways to the same ingressgateway works fine.
Error messages:
Domain 1 (ok):
✗ curl -I https://integration.domain.com
HTTP/2 200
server: envoy
[...]
Domain 2 (bad):
✗ curl -vI https://staging.domain.com
* Rebuilt URL to: https://staging.domain.com/
* Trying 35.x.x.x...
* TCP_NODELAY set
* Connected to staging.domain.com (35.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to staging.domain.com:443
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to staging.domain.com:443
Facts:
I have a wildcard TLS cert (let's say '*.domain.com') that I've put in a secret with:
kubectl create -n istio-system secret tls istio-ingressgateway-certs --key tls.key --cert tls.crt
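For reference, one way to confirm the secret is actually mounted into the gateway pod (assuming the default mount path used by the Gateways below) is something like:
kubectl exec -it -n istio-system $(kubectl get pod -n istio-system -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') -- ls /etc/istio/ingressgateway-certs/
which should list tls.crt and tls.key.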
I have the default istio-ingressgateway attached to a static IP:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
  labels:
    chart: gateways-1.0.0
    release: istio
    heritage: Tiller
    app: istio-ingressgateway
    istio: ingressgateway
spec:
  loadBalancerIP: "35.x.x.x"
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
[...]
Then I have two Gateways in different namespaces, for two domains covered by the TLS wildcard (staging.domain.com, integration.domain.com):
staging:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: domain-web-gateway
  namespace: staging
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "staging.domain.com"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "staging.domain.com"
integration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: domain-web-gateway
  namespace: integration
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "integration.domain.com"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "integration.domain.com"
The problem is that you are using the same name (https) for port 443 in two Gateways managed by the same workload (selector). The port names need to be unique across all Gateways attached to that workload; this restriction is documented in the Istio Gateway reference.
You can fix it by simply changing the port name in your second Gateway, for example:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: domain-web-gateway
  namespace: integration
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 443
      name: https-integration
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "integration.domain.com"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "integration.domain.com"
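Once applied, both domains should behave like Domain 1 above, completing the handshake instead of failing with the SSL protocol error:
curl -I https://staging.domain.com
curl -I https://integration.domain.com
Both should now return HTTP/2 200 with server: envoy.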
Related
I have the following problem. I have a deployment and service, a frontend config, and an ingress as follows (skipping the deployment as it is not really interesting):
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: backend-config
spec:
  healthCheck:
    type: HTTP
    requestPath: /readiness
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/app-protocols: '{"http":"HTTP"}'
    cloud.google.com/backend-config: '{"default": "backend-config"}'
  name: app
  labels:
    app: app
spec:
  type: ClusterIP
  selector:
    app: app
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: frontend-config
spec:
  redirectToHttps:
    enabled: true
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 'app-ip'
    networking.gke.io/v1beta1.FrontendConfig: 'frontend-config'
    ingress.gcp.kubernetes.io/pre-shared-cert: 'ssl-certificate'
    kubernetes.io/ingress.allow-http: 'false'
  labels:
    app: app
spec:
  # tls:
  # - secretName: tls-secret
  rules:
  - host: myhost.com
    http:
      paths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              name: http
As you can see, there is a static IP address and an ssl-certificate, which I registered with GCP.
With this configuration I mostly get
OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to ...
when I use curl, and "Remote host terminated the handshake" with Java clients. But sporadically packets do come through. It seems to have something to do with the change of replicas. As you can see above, I also tried using a Kubernetes secret before; that had the same effect.
Has anybody had the same problem, or does somebody have a clue what I am doing wrong?
Just to preempt the question about SSL: as you can see, the certificate is added as a self-managed certificate and is valid. It also worked in another environment. The cert contains the complete chain up to the root certificate.
Thank you in advance.
UPDATE:
I am using a wildcard certificate, i.e. I have sub.domain.com covered by a *.domain.com certificate. I tried to change the configuration in the ingress like this:
spec:
  tls:
  - hosts:
    - sub.domain.com
    secretName: tls-secret
Here is how the certificate looks in GCP:
[screenshot: Certificate in GCP]
with no success - same effect. I used this certificate as a self-managed certificate when I was using VMs and a normal external load balancer, with no problems.
UPDATE 2:
In the meantime I removed the backend config and frontend config completely, so the service and ingress now look as follows:
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  labels:
    app: app-service
spec:
  type: ClusterIP
  selector:
    app: app
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 'app-ip'
    kubernetes.io/ingress.allow-http: 'false'
  labels:
    app: app
spec:
  tls:
  - hosts:
    - sub.domain.com
    secretName: tls-secret
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              name: http
The names are changed, of course. The process was as follows:
As soon as the ingress was green, I ran curl -v https://mysubdomain-address several times. At first I got several answers as expected, with the expected responses, and in the verbose curl log I could see that the handshakes completed. But then, after the third time or so, I got the issue again.
curl -v https://mydomain/path
* Trying XX.XXX.XXX.XX:443...
* Connected to mydomain (XX.XXX.XXX.XX) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mydomain:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mydomain:443
That is what I mean about the result not being stable. I can also see that packets are coming through because this interface receives messages and pushes them to Pub/Sub, and on the other end I have a function, so I can see the function being invoked:
[screenshot: Function Invocations]
I really do not know what else to try. The only remaining option, IMHO, would be to switch to a managed, non-wildcard certificate from GCP.
It looks like the answer was: use NodePort, as described in the documentation. Since I changed the service type to NodePort, the ingress has been running stably. Unfortunately, Google's statement that there is no limitation and that Ingress can work with NodePort or with ClusterIP through proxies is not really borne out: in my particular case the ingress was created with a ClusterIP backend but was unstable, in particular during the TLS handshake.
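For illustration, a minimal sketch of the change, reusing the app-service manifest from UPDATE 2 above with only the service type swapped:
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  labels:
    app: app-service
spec:
  type: NodePort # changed from ClusterIP; the ingress became stable after this
  selector:
    app: app
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP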
Thanks to @boredabdel for the mental support :)
I have an AKS cluster with Istio installed, and I'm trying to deploy a containerised web API with TLS.
The API runs and is accessible, but it shows as Not secure.
I have followed the directions on Istio's website to set this up, so I'm not sure what I've missed.
I have created the secret with the command
kubectl create secret tls mycredential -n istio-system --key mycert.key --cert mycert.crt
and set up a gateway as follows:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: mycredential # must be the same as secret
    hosts:
    - 'dev.api2.mydomain.com'
along with the following VirtualService:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapi
  namespace: mynamespace
spec:
  hosts:
  - "dev.api2.mydomain.com"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: "/myendpoint"
    rewrite:
      uri: " "
    route:
    - destination:
        port:
          number: 8080
        host: myapi
and this Service:
apiVersion: v1
kind: Service
metadata:
  name: myapi
  namespace: mynamespace
  labels:
    app: myapi
    service: myapi
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 80
  selector:
    app: myapi
The container exposes port 80.
Can someone please point me in the right direction? I'm not sure what I've done wrong.
I managed to resolve the issue by setting up cert-manager and pointing it at Let's Encrypt to generate the certificate, rather than using the pre-purchased one I was trying to add manually.
Although it took some searching to find out how to configure this correctly, it is now working, and it actually saves having to purchase certificates, so win-win :)
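For anyone hitting the same thing, a rough sketch of the cert-manager resources involved, assuming cert-manager is already installed in the cluster; the letsencrypt-prod issuer name and the contact email are placeholders, while mycredential and dev.api2.mydomain.com come from the question above:
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@mydomain.com # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: istio # route the HTTP-01 challenge through the Istio ingress
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dev-api2-mydomain-com
  namespace: istio-system # the secret must live in the ingress gateway's namespace
spec:
  secretName: mycredential # matches credentialName in the Gateway above
  dnsNames:
  - dev.api2.mydomain.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer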
I want to get egress traces and metrics from a pod that I don't control much (in terms of code) to a third-party egress endpoint (that I don't control at all). You can think of it as, e.g., traffic from a WordPress installation to api.wordpress.org.
I plan to terminate the TLS on the egress gateway and then create a new TLS session from there. For that I generate a certificate for api.wordpress.org from a CA that I can inject into the pod.
I have the following configuration:
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: egress-api-wordpress-org
spec:
  hosts:
  - api.wordpress.org
  gateways:
  - mesh
  - egress-api-wordpress-org
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - api.wordpress.org
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        port:
          number: 443
  http:
  - match:
    - gateways:
      - egress-api-wordpress-org
      port: 443
    route:
    - destination:
        host: api.wordpress.org
        port:
          number: 443
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: egress-api-wordpress-org
spec:
  hosts:
  - api.wordpress.org
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egress-api-wordpress-org
spec:
  host: api.wordpress.org
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE
With this setup I see the traffic passing through the egress gateway (and I have the metrics and traces on the egress side). However, there are no details about the origin, which kind of makes sense, as the sidecar's Envoy can't see what is inside the traffic.
Is there any way to provide the origin details to the egress gateway without hacking on the origin pod's source code? I'm generally fine with weird things like TLS-in-TLS if it's possible to set up (I'm not sure I can terminate TLS on the egress twice, once for the ISTIO_MUTUAL layer and once for the SIMPLE layer).
I am trying to set up SSL on port 443 on an ingressgateway. I can consistently reproduce this with a very basic setup. I know it is probably something I am doing wrong, but I haven't been able to figure it out.
My k8s cluster is running on EKS, Kubernetes version 1.19.
I created a certificate with AWS Certificate Manager for the domain api.foo.com with the additional name *.api.foo.com.
The certificate was created successfully and has the ARN arn:aws:acm:us-west-2:<some-numbers>:certificate/<id>
Then I did a vanilla install of istio:
istioctl install --set meshConfig.accessLogFile=/dev/stdout
With version:
client version: 1.7.0
control plane version: 1.7.0
This is my gateway definition:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-gateway
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:<some-numbers>:certificate/<id>"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-type: "elb"
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 443
      name: https-443
      protocol: HTTP
    hosts:
    - "*"
Note that port 443 has protocol HTTP; I don't believe that is the problem (since I want to use SSL termination at the load balancer). Even if I change it to HTTPS, I get this:
Resource: "networking.istio.io/v1alpha3, Resource=gateways", GroupVersionKind: "networking.istio.io/v1alpha3, Kind=Gateway"
Name: "foo-gateway", Namespace: "default"
for: "foo-gateway.yaml": admission webhook "validation.istio.io" denied the request: configuration is invalid: server must have TLS settings for HTTPS/TLS protocols
But then what would the TLS settings be? I need the certificate key to be picked up through the annotation (from AWS Certificate Manager), not placed in /etc. As an aside, is there a way to do this without SSL termination?
My VirtualService definition is this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-api
spec:
  hosts:
  - "*"
  gateways:
  - foo-gateway
  http:
  - match:
    - uri:
        prefix: /users
    route:
    - destination:
        host: https-user-manager
        port:
          number: 7070
I then k apply -f a super simple REST service called https-user-manager on port 7070, and find the host name for the load balancer from k get svc -n istio-system, which yields:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer <cluster-ip> blahblahblah.us-west-2.elb.amazonaws.com 15021:30048/TCP,80:30210/TCP,443:31349/TCP,15443:32587/TCP 32m
I can successfully use HTTP like this:
curl http://blahblahblah.us-west-2.elb.amazonaws.com/users
and get a valid response. But if I do this:
curl -vi https://blahblahblah.us-west-2.elb.amazonaws.com/users
I get the following:
* Trying <ip>...
* TCP_NODELAY set
* Connected to api.foo.com (<ip>) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
What am I doing wrong? I have already looked at these, among many others that I don't even remember anymore:
https://medium.com/faun/managing-tls-keys-and-certs-in-istio-using-amazons-acm-8ff9a0b99033
Istio-ingressgateway with https - Connection refused
Setting up istio ingressgateway
SSL Error - wrong version number (HTTPS to HTTP)
Updating Istio-IngressGateway TLS Cert
https://github.com/kubernetes/ingress-nginx/issues/3556
https://github.com/istio/istio/issues/14264
https://preliminary.istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/
https://preliminary.istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/
Would appreciate any help!
Low-level nginx:
ssl on;
High-level nginx:
listen 443 ssl;
This works for me.
I'm trying to set up Istio (v1.7.3) on AKS (v1.16.13) in such a way that TLS origination is performed for some of the HTTP destinations. So when one of my pods invokes abc.mydomain.com over HTTP, the egress request is upgraded to HTTPS and the TLS verification is done through the egress gateway.
I have followed these two tutorials to achieve that:
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway-tls-origination-sds/
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/
I ended up with something like this (abc.mydomain.com is an external URL, which is why I created a ServiceEntry for it):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: abc.mydomain.com
spec:
  hosts:
  - abc.mydomain.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
  namespace: istio-system
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - abc.mydomain.com
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-abc
  namespace: istio-system
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: abc
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
      portLevelSettings:
      - port:
          number: 443
        tls:
          mode: ISTIO_MUTUAL
          sni: abc.mydomain.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-abc-through-egress-gateway
  namespace: istio-system
spec:
  hosts:
  - abc.mydomain.com
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: abc
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
    route:
    - destination:
        host: abc.mydomain.com
        port:
          number: 443
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-abc
  namespace: istio-system
spec:
  host: abc.mydomain.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE
        credentialName: abc # this must match the secret created earlier without the "-cacert" suffix
        sni: abc.mydomain.com
I'm creating a secret for my CA root with:
kubectl create secret generic abc-cacert --from-file=ca.crt=mydomainrootca.crt -n istio-system
I've used the same certificate for my Java applications, and I can successfully invoke the same URL over HTTPS using a JKS. The certificate seems to be loaded properly into the egress gateway (kubectl logs -f -l istio=egressgateway -n istio-system):
2020-10-06T20:00:36.611607Z info sds resource:abc-cacert new connection
2020-10-06T20:00:36.612907Z info sds Skipping waiting for gateway secret
2020-10-06T20:00:36.612994Z info cache GenerateSecret abc-cacert
2020-10-06T20:00:36.613063Z info sds resource:abc-cacert pushed root cert to proxy
When I invoke curl abc.mydomain.com from a pod running on my cluster, I get this error from the egress gateway:
[2020-10-06T19:33:40.902Z] "GET / HTTP/1.1" 503 UF,URX "-" "TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED" 0 91 172 - "192.244.0.191" "curl/7.64.0" "b618b1e6-e543-4053-bf2f-8ae56664545f" "abc.mydomain.com" "192.223.24.254:443" outbound|443||abc.mydomain.com - 192.244.0.188:8443 192.244.0.191:41306 abc.mydomain.com -
Any idea what I might be doing wrong? I'm quite new to Istio, and I don't fully understand the need for all of the DestinationRule/VirtualService resources, so please bear with me.
UPDATE1
After putting the DestinationRules in the namespace where my pod is running, I'm getting the following:
curl abc.mydomain.com
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.17.10</center>
</body>
</html>
Here is the output of istioctl proxy-status:
NAME CDS LDS EDS RDS ISTIOD VERSION
istio-egressgateway-695dc4fc7c-p5p42.istio-system SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3
istio-ingressgateway-5689f7c67-j54m7.istio-system SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3
test-5bbfdb8f4b-hg7vf.test SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3