I want to get traces and metrics for egress traffic from a pod that I don't control much (in terms of code) to a third-party endpoint (that I don't control at all). You can think of it as, e.g., traffic from a WordPress installation to api.wordpress.org.
I plan to terminate TLS at the egress gateway and then create a new TLS session from there. For that, I generate a certificate for api.wordpress.org from a CA that I can inject into the pod.
I have the following configuration:
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: egress-api-wordpress-org
spec:
hosts:
- api.wordpress.org
gateways:
- mesh
- egress-api-wordpress-org
tls:
- match:
- gateways:
- mesh
port: 443
sniHosts:
- api.wordpress.org
route:
- destination:
host: istio-egressgateway.istio-egress.svc.cluster.local
port:
number: 443
http:
- match:
- gateways:
- egress-api-wordpress-org
port: 443
route:
- destination:
host: api.wordpress.org
port:
number: 443
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: egress-api-wordpress-org
spec:
hosts:
- api.wordpress.org
location: MESH_EXTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egress-api-wordpress-org  # name assumed
spec:
host: api.wordpress.org
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: SIMPLE
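For completeness, the egress-api-wordpress-org Gateway referenced above is roughly the following (a sketch; the namespace and the credentialName are assumptions: the secret is whatever holds the api.wordpress.org certificate generated from my CA):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-api-wordpress-org
  namespace: istio-egress           # assumed: same namespace as the egress gateway deployment
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - api.wordpress.org
    tls:
      mode: SIMPLE
      credentialName: api-wordpress-org-cert  # assumed secret with the generated cert/key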
With this setup I see the traffic passing through the egress gateway (and I have the metrics and traces egress-side). However, there are no details on the origin, which kind of makes sense, as the sidecar's Envoy can't see what's inside the TLS traffic.
Is there any way to provide the origin details to the egress gateway without hacking on the origin pod's source code? I'm generally fine with odd setups like TLS-in-TLS if it's possible to set them up (I'm not sure the egress can terminate TLS twice, once for the ISTIO_MUTUAL layer and once for the SIMPLE one).
I have the following problem. I have a deployment and a service, a frontend config and an ingress, as follows (skipping the deployment as it is not really interesting):
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backend-config
spec:
healthCheck:
type: HTTP
requestPath: /readiness
port: 8080
---
apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/app-protocols: '{"http":"HTTP"}'
cloud.google.com/backend-config: '{"default": "backend-config"}'
name: app
labels:
app: app
spec:
type: ClusterIP
selector:
app: app
ports:
- name: http
port: 80
targetPort: http
protocol: TCP
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: frontend-config
spec:
redirectToHttps:
enabled: true
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: 'app-ip'
networking.gke.io/v1beta1.FrontendConfig: 'frontend-config'
ingress.gcp.kubernetes.io/pre-shared-cert: 'ssl-certificate'
kubernetes.io/ingress.allow-http: 'false'
labels:
app: app
spec:
# tls:
# - secretName: tls-secret
rules:
- host: myhost.com
http:
paths:
- path: /*
pathType: Prefix
backend:
service:
name: app
port:
name: http
As you can see, there is a static IP address and an SSL certificate, both of which I registered with GCP.
With this configuration I mostly get
OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to ...
when I use curl, and "Remote host terminated the handshake" with Java clients. But sporadically requests come through. It seems to have something to do with the change of replicas. As you can see above, I also tried using a Kubernetes secret before; it had the same effect.
Has anybody had the same problem, or does somebody have a clue about what I am doing wrong?
Just to pre-empt questions about SSL: as you can see, the certificate is added as a self-managed certificate and is valid. It also worked in another environment. The certificate entry contains the complete chain up to the root certificate.
Thank you in advance.
UPDATE:
I am using a wildcard certificate, i.e. sub.domain.com is covered by a *.domain.com certificate. I tried to change the ingress configuration like this:
spec:
tls:
- hosts:
- telematics.tranziit.com
secretName: tranziit-tls-secret
Here is how the certificate looks in GCP:
[screenshot: certificate in GCP]
That change brought no success: same effect. I used this certificate as a self-managed certificate before, when I was using VMs and a normal external load balancer, without problems.
UPDATE 2:
In the meantime I completely removed the backend config and the frontend config, so the service and ingress now look as follows:
---
apiVersion: v1
kind: Service
metadata:
name: app-service
labels:
app: app-service
spec:
type: ClusterIP
selector:
app: app
ports:
- name: http
port: 80
targetPort: http
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: 'app-ip'
kubernetes.io/ingress.allow-http: 'false'
labels:
app: app
spec:
tls:
- hosts:
- sub.domain.com
secretName: tls-secret
rules:
- host: sub.domain.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: app-service
port:
name: http
The names are changed, of course. The process was as follows:
as soon as the ingress was green, I ran curl -v https://mysubdomain-address several times. At first I got several responses as expected, and in curl's verbose log I could see the handshakes completing. But then, after the third time or so, I got the issue again.
curl -v https://mydomain/path
* Trying XX.XXX.XXX.XX:443...
* Connected to mydomain (34.149.251.22) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mydomain:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mydomain:443
That is what I mean by it not being a stable result. I can also see that some requests come through, because this interface receives messages and pushes them to Pub/Sub, and on the other end I have a function, which I can see being invoked:
[screenshot: function invocations]
I really do not know what else to try. The only remaining option, IMHO, would be to switch to a managed, non-wildcard certificate from GCP.
It looks like the answer was: use NodePort, as described in the documentation. Since I changed the service type to NodePort, the ingress has been running stably. Unfortunately, Google's statement that there is no limitation and that the ingress can work with either NodePort or ClusterIP (through proxies) is not really confirmed: in my particular case the ingress was created with a ClusterIP backend but was unstable, in particular during the TLS handshake.
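For completeness, a sketch of the Service after the change (everything except the type is the same as the manifest above):
apiVersion: v1
kind: Service
metadata:
  name: app-service
  labels:
    app: app-service
spec:
  type: NodePort        # the only change compared to the ClusterIP version above
  selector:
    app: app
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
As far as I understand, the ClusterIP variant is only supposed to work with container-native load balancing (NEG backends, i.e. the cloud.google.com/neg annotation), which apparently was not in play here.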
Thanks to @boredabdel for mental support :)
I am working on setting up an ingress-controller for my microk8s setup.
A minimal whoami service is up and running:
microk8s kubectl describe service whoami
Name: whoami
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=whoami
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.112
IPs: 10.152.183.112
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.1.76.35:80
Session Affinity: None
Events: <none>
The response via ClusterIP is working:
curl 10.152.183.112:80
Hostname: whoami-84f56668f5-g2j8j
IP: 127.0.0.1
IP: ::1
IP: 10.1.76.35
IP: fe80::90cb:25ff:fe3f:2fe7
RemoteAddr: 192.168.0.100:46568
GET / HTTP/1.1
Host: 10.152.183.112
User-Agent: curl/7.68.0
Accept: */*
I have now configured a minimal ingress.yaml as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: whoami-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
cert-manager.io/cluster-issuer: "cert-manager"
spec:
tls:
- hosts:
- www.example-domain.com
secretName: demo-key
rules:
- host: www.example-domain.com
http:
paths:
- path: /whoami
pathType: Prefix
backend:
service:
name: whoami
port:
number: 80
The ingress seems to be up and running:
Name: whoami-ingress
Namespace: default
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
demo-key terminates www.example-domain.com
Rules:
Host Path Backends
---- ---- --------
www.example-domain.com
/whoami whoami:80 (10.1.76.35:80)
Annotations: cert-manager.io/cluster-issuer: cert-manager
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 22m (x2 over 22m) nginx-ingress-controller Scheduled for sync
Normal Sync 15m nginx-ingress-controller Scheduled for sync
Pinging the domain works (so DNS resolution seems to work).
But when checking for certificates, there are none:
microk8s kubectl get certificates
No resources found in default namespace.
Where did I go wrong? Shouldn't cert-manager.io take care of the certificate?
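A quick check here (assuming the stock microk8s kubectl wrapper and the resource names from above) is whether the ClusterIssuer named in the annotation exists at all, and what the ingress events say:
microk8s kubectl get clusterissuer
microk8s kubectl describe ingress whoami-ingress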
UPDATE:
It was pointed out that I seem to lack a ClusterIssuer. I have now set one up according to the cert-manager docs, using ACME:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: cert-manager
spec:
acme:
# You must replace this email address with your own.
# Let's Encrypt will use this to contact you about expiring
# certificates, and issues related to your account.
email: mail@domain.com
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Secret resource that will be used to store the account's private key.
name: demo-key
# Add a single challenge solver, HTTP01 using nginx
solvers:
- http01:
ingress:
class: nginx
But again, no luck. I can reach my cluster from outside, but only without HTTPS. kubectl get certificates still shows the "No resources found" message, and the certificate being served is classified as not trusted, issued to "Kubernetes Ingress Controller Fake Certificate".
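As far as I understand, that "Fake Certificate" is just the ingress controller's default self-signed certificate, served because no real TLS secret exists yet. A way to narrow it down (assuming the usual cert-manager layout, i.e. a cert-manager Deployment in the cert-manager namespace) is to walk the chain of intermediate resources and the controller log:
microk8s kubectl get certificates,certificaterequests,orders,challenges -A
microk8s kubectl describe clusterissuer cert-manager
microk8s kubectl -n cert-manager logs deploy/cert-manager
If no Certificate object exists at all, cert-manager's ingress-shim never picked up the Ingress; if a Challenge is stuck, its description usually says why.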
I have 4 YAML files:
Deployment.yaml
Service.yaml
Ingress.yaml
issuer.yaml
I want to use letsencrypt-prod to get a certificate for my service, but it doesn't work.
When I check whether the ingress and the issuer are working, both show up fine:
kubectl get ing
kubectl get issuer
But when I run:
kubectl get cert
the cert has not become ready for 2 days.
The certificate is not being bound to mandrakee.xyz, and mandrakee.xyz still looks not secure. How can I make my website secure via cert-manager?
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: echo-deployment
spec:
replicas: 1
selector:
matchLabels:
app: echo-server
template:
metadata:
labels:
app: echo-server
spec:
containers:
- name: httpapi-host
image: jmalloc/echo-server
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
Service.yaml:
apiVersion: v1
kind: Service
metadata:
name: echo-service
spec:
ports:
- name: http-port
port: 80
targetPort: 8080
selector:
app: echo-server
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: ambassador
cert-manager.io/issuer: letsencrypt-prod
name: test-ingress
spec:
tls:
- hosts:
- mandrakee.xyz
secretName: letsencrypt-prod
rules:
- host: mandrakee.xyz
http:
paths:
- backend:
service:
name: echo-service
port:
number: 80
path: /
pathType: Prefix
issuer.yaml:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: ykaratoprak@sphereinc.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: ce28952b5b4e33ea7d98de190f3148a7cc82d31f030bde966ad13b22c1abc524
If you have set up your issuer correctly, which you have assured us of, you will see in your namespace a pod belonging to cert-manager. It spawns a pod that validates that the server requesting the certificate resolves to the DNS record.
In your case, you would need to point your DNS towards your ingress.
If this is done successfully, then the next stage of debugging is to validate that both 443 and 80 are reachable. The validation pod created by cert-manager uses port 80 to validate the communication. A common mistake is assuming that only port 443 will be used for SSL and disabling port 80 for security reasons, only to find out later that Let's Encrypt can't validate the hostname without port 80.
Otherwise, the common scenario is that cert-manager is installed in the cert-manager namespace, so you should check the manager's logs. These are fairly limited and can sometimes be cryptic when you are trying to find a remedy for your issue.
To find the direct error, the pod spawned by cert-manager in the namespace where you deployed the ingress is a good place to focus on.
A test I would run is to set up the ingress with both 80 and 443; if you open your domain in a browser you should get some invalid, generic Kubernetes certificate on port 443 and just "Not Found" on port 80. If this works, it rules out the limitation I mentioned before.
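For example (host taken from the manifests above; adjust to your setup):
# expect an untrusted/generic certificate on 443 and a plain "Not Found" (or similar) on 80;
# a refused or timed-out connection on 80 points at the port-80 limitation described above
curl -vk https://mandrakee.xyz/
curl -v http://mandrakee.xyz/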
I am getting the error "400 Bad Request Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please."
What I am trying to achieve:
1. Docker run
The Docker image runs apache2 and Shibboleth on HTTP (8090) and HTTPS (8443) respectively, with a self-signed certificate. Running the image locally with docker run works fine:
http://localhost:8090/ ----> working fine
https://localhost:8443/Shibboleth.sso/Status ----> gives a certificate error, but after accepting/ignoring it, works fine.
(The Shibboleth service is accessed via apache2's 000-default.conf: ProxyPass /Shibboleth.sso/ https://localhost:8443/Shibboleth.sso/Status.)
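For reference, the proxy wiring inside the container is roughly the following sketch (the SSLProxy* directives are assumptions, included because the backend is HTTPS with a self-signed certificate):
# 000-default.conf excerpt (sketch)
SSLProxyEngine on
SSLProxyVerify none
SSLProxyCheckPeerName off
ProxyPass /Shibboleth.sso/ https://localhost:8443/Shibboleth.sso/Status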
2. Kubernetes Platform
Below are the Deployment, Service and Ingress created to access the same image.
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: test
name: demo
labels:
app: demo
spec:
#replicas: 1
selector:
matchLabels:
app: demo-pod
template:
metadata:
labels:
app: demo-pod
spec:
containers:
- image: <repository>public/demo-v1
name: demo
ports:
- containerPort: 8154
name: demo-ui
- containerPort: 8090
name: http
- containerPort: 8443
name: https
securityContext:
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
cpu: 1000m
memory: 8024Mi
requests:
cpu: 500m
memory: 4096Mi
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: regcred
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
namespace: test
name: demo-svc
labels:
app: demo
spec:
selector:
app: demo-pod
ports:
- port: 8154
name: demo-ui
targetPort: 8154
protocol: TCP
- port: 8090
name: http
targetPort: 8090
protocol: TCP
- port: 8443
name: https
targetPort: 8443
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: test
name: demo-ing
labels:
app: demo
spec:
ingressClassName: internal
tls:
- hosts:
- demo.example.com
rules:
- host: demo.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: demo-svc
port:
number: 8090
- path: /demo-ui
pathType: Prefix
backend:
service:
name: demo-svc
port:
number: 8090
- path: /Shibboleth.sso
pathType: Prefix
backend:
service:
name: demo-svc
port:
number: 8443
The default domain uses HTTPS for *.example.com.
Hitting **https://demo.example.com/ --> http://<pod-IP>:8090** works fine,
but accessing **https://demo.example.com/Shibboleth.sso/Status --> http://<pod-IP>:8443**
returns "400 Bad Request: Your browser sent a request that this server could not understand. Reason: You're speaking plain HTTP to an SSL-enabled server port. Instead use the HTTPS scheme to access this URL, please."
I have tried multiple solutions via ingress annotations, as well as an apache2 redirect, but nothing seems to help.
When doing the redirect on apache2, it does not keep localhost as the host:
RewriteEngine on
ReWriteCond %{SERVER_PORT} !^8443$
RewriteRule ^/Shibboleth.sso(.*) https://localhost:8443/Shibboleth.sso/$1 [NC,R,L]
It ignores localhost and uses the external DNS name instead.
I also tried redirecting at the ingress level, which gives a 404 Not Found error.
Please help!
Can you please try adding these annotations to your ingress file?
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
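Note that (at least with ingress-nginx) the ssl-passthrough annotation only takes effect if the controller itself was started with passthrough enabled, roughly like this in the controller Deployment:
# ingress-nginx controller container args (sketch)
args:
  - /nginx-ingress-controller
  - --enable-ssl-passthrough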
I'm trying to set up Istio (v1.7.3) on AKS (v1.16.13) so that, for some HTTP destinations, TLS origination is performed. So when one of my pods invokes abc.mydomain.com over HTTP, the egress request is upgraded to HTTPS and the TLS verification is done through the egress gateway.
I have followed these 2 tutorials to achieve that:
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway-tls-origination-sds/
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/
I ended up with something like this (abc.mydomain.com is an external URL, which is why I created a ServiceEntry for it):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: abc.mydomain.com
spec:
hosts:
- abc.mydomain.com
ports:
- number: 80
name: http
protocol: HTTP
- number: 443
name: https
protocol: HTTPS
resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway
namespace: istio-system
spec:
selector:
istio: egressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- abc.mydomain.com
tls:
mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: egressgateway-for-abc
namespace: istio-system
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: abc
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: ISTIO_MUTUAL
sni: abc.mydomain.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: direct-abc-through-egress-gateway
namespace: istio-system
spec:
hosts:
- abc.mydomain.com
gateways:
- istio-egressgateway
- mesh
http:
- match:
- gateways:
- mesh
port: 80
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: abc
port:
number: 443
weight: 100
- match:
- gateways:
- istio-egressgateway
port: 443
route:
- destination:
host: abc.mydomain.com
port:
number: 443
weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: originate-tls-for-abc
namespace: istio-system
spec:
host: abc.mydomain.com
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 443
tls:
mode: SIMPLE
credentialName: abc # this must match the secret created earlier without the "-cacert" suffix
sni: abc.mydomain.com
I'm creating a secret for my CA root with: kubectl create secret generic abc-cacert --from-file=ca.crt=mydomainrootca.crt -n istio-system
I've used the same certificate for my java applications and I can successfully invoke HTTPS for the same url using JKS. It seems the certificate is loaded properly into egress (kubectl logs -f -l istio=egressgateway -n istio-system):
2020-10-06T20:00:36.611607Z info sds resource:abc-cacert new connection
2020-10-06T20:00:36.612907Z info sds Skipping waiting for gateway secret
2020-10-06T20:00:36.612994Z info cache GenerateSecret abc-cacert
2020-10-06T20:00:36.613063Z info sds resource:abc-cacert pushed root cert to proxy
When I invoke curl abc.mydomain.com from a pod running on my cluster, I'm getting this error from the egress gateway:
[2020-10-06T19:33:40.902Z] "GET / HTTP/1.1" 503 UF,URX "-" "TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED" 0 91 172 - "192.244.0.191" "curl/7.64.0" "b618b1e6-e543-4053-bf2f-8ae56664545f" "abc.mydomain.com" "192.223.24.254:443" outbound|443||abc.mydomain.com - 192.244.0.188:8443 192.244.0.191:41306 abc.mydomain.com -
Any idea what I might be doing wrong? I'm quite new to Istio and I don't fully understand the need for every DestinationRule/VirtualService, so please bear with me.
UPDATE 1:
After putting the DestinationRules in the namespace where my pod is running, I'm getting the following:
curl abc.mydomain.com
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.17.10</center>
</body>
</html>
Here is the output of istioctl proxy-status:
NAME CDS LDS EDS RDS ISTIOD VERSION
istio-egressgateway-695dc4fc7c-p5p42.istio-system SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3
istio-ingressgateway-5689f7c67-j54m7.istio-system SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3
test-5bbfdb8f4b-hg7vf.test SYNCED SYNCED SYNCED SYNCED istiod-5c6b7b5b8f-csggg 1.7.3
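For debugging, dumping the egress gateway's cluster config should show whether the originate-tls-for-abc DestinationRule is applied (with TLS origination in place there should be a TLS transport socket on the abc.mydomain.com cluster); the pod name is taken from the proxy-status output above:
istioctl proxy-config cluster istio-egressgateway-695dc4fc7c-p5p42.istio-system \
  --fqdn abc.mydomain.com -o json | grep -A 5 transportSocket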