Traefik, EKS, TLS Termination, X-Forwarded-For

I am deploying Traefik on an EKS cluster. I need to terminate the TLS session at the AWS load balancer and also pass the X-Forwarded-For header through to the backends. I am running Traefik v2.4.8 via the official Helm chart (traefik/traefik from https://helm.traefik.io/traefik).
There is already an AWS managed certificate present: arn:aws:acm:us-east-1:xxx:certificate/xxx. I can manually set the DNS CNAME to point at the load balancer; no need to automate that.
What overrides are needed for the Traefik Helm chart, and how should the Ingress and/or IngressRoute be configured, so that TLS is terminated on the AWS load balancer and the X-Forwarded-For header is populated?
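A rough sketch of what the chart overrides could look like, assuming the chart's service annotations are used to provision the AWS load balancer (the entrypoint name, trusted CIDR, and the ACM ARN placeholder are assumptions, not verified values):

```yaml
# values.yaml overrides for the traefik/traefik chart -- a sketch, not a
# verified config. TLS terminates at the AWS load balancer; Traefik then
# receives plain HTTP and trusts the X-Forwarded-For header added by the LB.
service:
  annotations:
    # Reuse the existing ACM certificate on the LB listener
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:xxx:certificate/xxx"
    # HTTP mode between the LB and Traefik, so the LB injects X-Forwarded-For
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Terminate TLS only on the LB's 443 listener
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
additionalArguments:
  # Only trust forwarded headers coming from the LB (CIDR is an assumption)
  - "--entrypoints.websecure.forwardedHeaders.trustedIPs=10.0.0.0/8"
```

Since TLS is already stripped at the load balancer, the IngressRoute would then match on the entrypoint without a tls section of its own.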

Related

Using HTTP2 with GKE and Google Managed Certificates

I am using an Ingress with Google-managed SSL certs, mostly similar to what is described here:
https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate
However, my backend service is a gRPC service that uses HTTP2. According to the same documentation, if I am using HTTP2 my backend needs to be "configured with SSL".
This sounds like I need a separate set of certificates for my backend service to configure it with SSL.
Is there a way to use the same Google-managed certs here as well?
What are my other options? I am using Google-managed certs for the Ingress precisely so I don't have to manage any certs on my own; if I then use self-signed certificates for my service, that kind of defeats the purpose.
I don't think it's required to create SSL certs for the backend services if you are terminating HTTPS at the LB level. You can attach your certs at the LB level, and the LB-to-backend hop will be HTTPS > HTTP.
You might need to create a new SSL/TLS cert if your ingress controller (Nginx ingress controller, Kong, etc.) configures different ssl-protocols (TLSv1.2, TLSv1.3) or cipher sets in its ConfigMap.
If you are looking for end-to-end HTTPS traffic, you definitely need to create a cert for the backend service.
You can also create/manage a certificate with cert-manager: it keeps the cert in a K8s secret, which you mount into the deployment for the service to use, so there is no need to create or manage the certs yourself. The Ingress then passes the HTTPS request through to the service directly.
In this case, it will be an end-to-end HTTPS setup.
Update:
"Note: To ensure the load balancer can make a correct HTTP2 request to your backend, your backend must be configured with SSL. For more information on what types of certificates are accepted, see Encryption from the load balancer to the backends."
So end-to-end TLS does seem to be a requirement for HTTP2.
My site https://findmeip.com runs on HTTP2 and terminates SSL/TLS at the Nginx level only.
That said, it's good to go with the suggested practice, so you can use the ESP option from Google: a GKE Ingress + ESP + gRPC stack.
https://cloud.google.com/endpoints/docs/openapi/specify-proxy-startup-options?hl=tr
If you don't want to use ESP, see the suggestion above: mount the certificate into the deployment so the service can use it; in that case there is no need to manage or create the certs yourself. In other words, cert-manager will create, manage, and renew the SSL/TLS cert on your behalf in a K8s secret, which the service will then use.
Google-managed certificates can only be used for the frontend portion of the load balancer (i.e. client to LB). If you need encryption from the LB to the backends, you will have to use self-signed certificates or some other way to store certificates on GKE as secrets, and configure the Ingress to connect to the backend using those secrets.
Like this https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#setting_up_https_tls_between_client_and_load_balancer
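As a sketch of the cert-manager approach mentioned above, a Certificate resource can issue a keypair into a secret that the gRPC deployment mounts (the issuer name, DNS name, and mount path below are hypothetical, not from the answers):

```yaml
# cert-manager issues a keypair into a secret; the gRPC deployment mounts it
# so the LB-to-backend hop can be HTTP2 over TLS. Names are assumptions.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpc-backend-cert
spec:
  secretName: grpc-backend-tls          # secret created and renewed by cert-manager
  issuerRef:
    name: selfsigned-issuer             # any Issuer/ClusterIssuer you already run
    kind: Issuer
  dnsNames:
    - grpc-backend.default.svc.cluster.local
```

In the deployment, the secret would be mounted as a volume (e.g. at /etc/tls) and the gRPC server pointed at tls.crt and tls.key; cert-manager rotates the secret contents on renewal.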

How to use port 8443 with ECS Fargate and ALB?

Is it possible to run Spring Boot containerized apps on port 8443 behind a 443 ALB listener, deployed on ECS Fargate in AWS? The 443 listener would have an issued cert, not a self-signed cert. I would use an NLB, but I need to set route paths, so that's a no-go. Would using nginx as a proxy help in a situation like this?
Is it possible to run spring boot containerized apps on port 8443
going through a 443 ALB listener and deployed on ECS Fargate in AWS?
Yes, it is absolutely possible; there should be no issue with this at all. What you are describing is just a very standard, basic ECS/Fargate setup.
Would using nginx as a proxy be used in a situation like this?
Only if you want to. You don't need Nginx just to make this work.
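To make the wiring concrete, a hedged CloudFormation sketch of the listener/target-group pairing (resource names, VPC reference, and the certificate parameter are assumptions):

```yaml
# Sketch: ALB 443 listener with an ACM cert, forwarding to a target group
# that speaks HTTPS to the container on 8443. A self-signed cert inside the
# container is acceptable here, since the ALB does not validate it.
Listener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref Alb
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref AcmCertArn   # the issued cert
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref TargetGroup

TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Port: 8443
    Protocol: HTTPS        # ALB re-encrypts to the container
    TargetType: ip         # required for Fargate tasks
    VpcId: !Ref Vpc
```

Path-based routing rules can then be added on the same listener, which is why an ALB (rather than an NLB) fits this case.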

Istio Ingress with cert-manager

I have a Kubernetes cluster with Kafka (via Strimzi), where Istio is also running. Certificates are managed by cert-manager. I want to use TLS passthrough in my ingress, but I am a little confused about it.
When SIMPLE is used, there is credentialName, which must be the same as secret.
tls:
mode: SIMPLE
credentialName: httpbin-credential
It is a nice and simple way. But what about mode: PASSTHROUGH when I have many hosts? I studied the demo on the Istio site (https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/#deploy-an-nginx-server), where the certificate details are stored in the server configuration file and a ConfigMap is created. The official Istio documentation notes that credentialName applies only to the MUTUAL and SIMPLE modes.
What is the correct and simple way to expose my hosts to external traffic through the Istio ingress, using cert-manager?
The difference between SIMPLE and PASSTHROUGH is:
SIMPLE instructs the gateway to terminate TLS and forward the decrypted ingress traffic.
PASSTHROUGH instructs the gateway to forward the ingress traffic as-is, without terminating TLS; routing is based on the SNI value in the TLS handshake, and each backend presents its own certificate.
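A sketch of a PASSTHROUGH setup: since the gateway never terminates TLS, no credentialName is needed, and the cert-manager-issued cert lives with the backend instead (the hostnames, gateway name, and destination service below are assumptions):

```yaml
# Gateway routes by SNI without decrypting; the backend holds its own cert.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: tls-passthrough-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: tls
        protocol: TLS
      tls:
        mode: PASSTHROUGH    # no credentialName in this mode
      hosts:
        - "kafka.example.com"
---
# A VirtualService per host maps the SNI to the backend service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kafka-passthrough
spec:
  hosts:
    - "kafka.example.com"
  gateways:
    - tls-passthrough-gateway
  tls:
    - match:
        - port: 443
          sniHosts:
            - "kafka.example.com"
      route:
        - destination:
            host: my-kafka-bootstrap   # hypothetical backend service
            port:
              number: 9093
```

For many hosts, you would list each host in the Gateway and add one tls match block (or one VirtualService) per host.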

Don't prepend http:// to Endpoint Subset IP

I have a Kubernetes Ingress, pointing to a headless service, pointing finally to an Endpoints object that routes to an external IP address. The following is the configuration for the endpoint
apiVersion: v1
kind: Endpoints
metadata:
  name: my-chart
subsets:
  - addresses:
      - ip: **.**.**.**
    ports:
      - port: 443
However, the upstream connection fails with 'connection reset by peer', and on looking at the logs I see the following error in the Kubernetes nginx-ingress-controller:
2020/01/15 14:39:50 [error] 24546#24546: *240425068 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: *****, server: dev.somehost.com, request: "GET / HTTP/1.1", upstream: "http://**.**.**.**:443/", host: "dev.somehost.com"
My theory is that the combination of http:// and port 443 is what triggers this (tested with cURL commands). How do I either 1) specify a different protocol for the Endpoints object, or 2) just prevent the prepending of http://?
Additional notes:
1) SSL is enabled on the target IP, and if I curl it I can set up a secure connection
2) SSL passthrough doesn't really work here. The incoming and outgoing requests will use two different SSL connections with two different certificates.
3) I want the Ingress host to be the SNI (and it looks like this may default to being the case)
Edit: Ingress controller version: 0.21.0-rancher3
We were able to solve this by adding the following to the metadata of our Ingress
annotations:
  nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  nginx.ingress.kubernetes.io/configuration-snippet: |-
    proxy_ssl_server_name on;
    proxy_ssl_name $host;
The first annotation switches the backend protocol to HTTPS, and the second enables SNI when proxying to the upstream.

What is the recommended way to update SSL certs in a Nginx cluster behind HAProxy?

So I want to have this:
          /- Nginx1 (SSL)
HAProxy ---- Nginx2 (SSL)
          \- Nginx3 (SSL)
But I have questions:
How do I update Let's Encrypt certs on all nodes?
If I can't do this with certbot (plus some config), how do you do it? Maybe with some distributed k/v storage?
The best approach is to run HTTP-only services (not HTTPS) on the Nginx nodes and terminate SSL on the balancer.
Options:
Traefik: can be configured to auto-renew Let's Encrypt certs.
Fabio: can also be configured to use SSL certs (I've used HashiCorp Vault to store them), but you need to configure renewal yourself.
Both integrate well with service-discovery tools like Consul.
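If you stay with HAProxy as the balancer, the terminate-at-the-balancer layout might look like this (backend IPs, cert path, and backend names are assumptions):

```
# haproxy.cfg fragment -- a sketch, not a verified config.
# TLS terminates at HAProxy; the Nginx nodes serve plain HTTP, so only the
# balancer's combined cert+key PEM needs renewing (one place, not three).
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    default_backend nginx_nodes

backend nginx_nodes
    balance roundrobin
    server nginx1 10.0.0.11:80 check
    server nginx2 10.0.0.12:80 check
    server nginx3 10.0.0.13:80 check
```

With this layout, a certbot renewal hook that rebuilds site.pem and reloads HAProxy is enough; no per-node cert distribution is needed.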