How can I get the real IP address of a client when using Traefik on k3s?

I have gone through many blog posts and SO questions, as well as the k3s documentation, and am still coming up short on getting the real IP address of clients rather than the internal cluster IP address.
I have a standard k3s install using Traefik 1.8. As indicated in several GitHub issues, I have set all my services to use ClusterIP, and I set externalTrafficPolicy: Local for my Traefik and Apache services per this: https://github.com/k3s-io/k3s/issues/1652
The strange thing is that Traefik does seem to pass along headers like X-Forwarded-For: if I manually add an X-Forwarded-For header with my IP address to my browser request, the entry in the Apache logs shows my IP as well as the internal cluster IP, separated by commas.
Is there something that gets hit before the Traefik instance when traffic comes into the cluster that should be injecting the IP address?

It appears there are many things that can cause this problem. In my case, it was one of the more common issues: I simply had to patch the k3s Traefik manifest to set hostNetwork: true.
kubectl patch deployment traefik --patch '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
It should be noted that manually modifying this manifest is not recommended, because it is managed by Helm. If the Helm process runs again, or k3s is reinstalled or upgraded, the change will be reverted and you will have to run the patch again. To make the change stick, you would have to modify the Traefik Helm chart that k3s ships, or deploy your own Traefik in place of the k3s one.
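One way to make the override survive (a sketch, assuming a k3s version that supports the HelmChartConfig custom resource and that the packaged Traefik chart exposes a hostNetwork value) is to drop a HelmChartConfig into the server's manifests directory instead of patching the deployment by hand:

# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
# Merged into the values of the Traefik chart that k3s manages, so it survives restarts and upgrades.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    hostNetwork: true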

There is an extensive discussion of this topic in the k3s Discussions here: https://github.com/k3s-io/k3s/discussions/2997
However, none of the provided answers worked for me, but YMMV.

Related

502 Bad Gateway Error with Nginx Proxy and Vercel Hosted React Frontend on AWS EC2 - Solved

I am here to describe an issue I faced recently. My friends and I have a pet project called Wibrant (earlier named Winbook). It is a social media website, hosted here. It has a Django-React stack (both repos can be found here) and is hosted on a free-tier EC2 instance on AWS, which is associated with an Elastic IP.
The backend is running in a Docker container on the server itself; however, we decided to host the frontend on Vercel, which was initially hosted here.
But I decided to proxy it using nginx. The nginx conf for both React and Django can be found here.
This configuration was working perfectly until one night I suddenly started getting a 502 error on https://winbook.d3m0n1k.engineer/. Upon inspecting the nginx logs, I found an error like
no live upstreams while connecting to upstream
which I was unable to understand. So I tried to curl the site from my localhost and from the server. I was able to curl it from my local system, but was not able to do the same from the EC2 server. I got the error:
curl: (35) error:0A000126:SSL routines::unexpected eof while reading
Upon researching, I found that this error can occur due to an OpenSSL version mismatch, so I tried to update it, but couldn't. So I decided to spin up a new EC2 instance. I was able to curl the site from there. Thinking that fixed the issue, I migrated the whole setup to that instance and re-associated my Elastic IP with it. I tried to test it, only to find that it had stopped working. Confused, I ran the curl command again, and it was failing too. Using a Python script with the requests module to fetch the site, I got this error from my latest setup:
Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed
However, now the previous setup started to work perfectly fine.
So I could curl the Vercel deployment when I didn't have the Elastic IP associated with my instance, but couldn't when I did.
So I figured it was some issue with the Elastic IP. I suspected Vercel had perhaps blacklisted the IP. So I reset the whole DNS config of my domain, created and associated a new Elastic IP with the instance, and it worked perfectly.
So, my questions are:
Has anyone faced such an issue before? If yes, what was the fix in your case?
Is it really possible that Vercel has the IP on a blacklist of sorts?
This issue is probably not reproducible, but if someone finds this thread while dealing with the same problem, I hope that the post and/or the comments/answers lead you to your solution. Cheers.

Traefik ingress route is not accessible

I have set up the default Traefik dashboard example, although it is not being exposed. Using kubectl port-forward works. The only thing that comes to mind is that I am using Flannel as the CNI, whereas on my previous k8s cluster I was using Calico. It is weird, though, as I have other services exposed from the cluster outside of Traefik that are working fine.
Not a proper answer, but it seems that the Flannel CNI was causing reachability issues. It was either a misconfiguration on my end or the tool needed some fine-tuning. I just replaced it with Calico and ingresses work fine!

Setting up multiple TLS Certificates & Domains with Kubernetes and Helm

This question is more to give me some direction on how to go about the problem in general, not a specific solution to the problem.
I have a working Kubernetes cluster that uses an nginx ingress as the gate to the outside world. Right now everything is on minikube, but the end goal is to eventually move it to GKE, EKS, or AKS (for on-premises clients that want our software).
For this I'm going to use Helm charts to parameterize the YAML files and ENV variables needed to set up the resources. I will keep using nginx as the ingress to avoid maintaining an ALB ingress or other cloud-specific ingress controllers.
My question is:
I'm not sure how to manage TLS certificates, and then how to point the ingress to a public domain for people to use it.
I wanted some guidance on how to go about this in general. Is the TLS certificate something that the user can provide to the Helm chart before configuring it? Where can I see a small example of this? And finally, is the domain the responsibility of the Helm chart, or is this something that has to be set up at the DNS provider (Route53, for example)? Is there an example you can suggest I take a look at?
Thanks a lot for the help.
Installing certificates using Helm is perfectly fine; just make sure you don't accidentally put the certificates into a public Git repo. Best practice is to keep those certificates only on your local laptop and add them to .gitignore. After that you can tell Helm to grab those certificates from their directory.
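For instance, a chart can template a standard TLS Secret from files you pass at install time, e.g. helm install myapp ./chart --set-file tls.crt=./tls.crt --set-file tls.key=./tls.key. This is only an illustrative sketch; the tls.crt/tls.key value names are assumptions of this example, not a convention of any particular chart:

# templates/tls-secret.yaml (illustrative)
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-tls
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.tls.crt | b64enc | quote }}
  tls.key: {{ .Values.tls.key | b64enc | quote }}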
Regarding DNS: you can use external-dns to make Kubernetes create DNS records for you. You first need to integrate external-dns with your DNS provider (Route53, for example); it will then watch Ingress resources for domain names and create the records automatically.
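Roughly, once external-dns is running, an ordinary Ingress is enough for it to act on (hostnames and service names below are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com   # external-dns sees this host and creates the matching DNS record
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80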

Traefik, Metallb portforwarding

I'm having problems port-forwarding Traefik. I have a deployment in Rancher, where I'm using MetalLB with Traefik to have SSL certs applied to my services. All of this is working locally, and I'm not seeing any error messages in the Traefik logs. It's funny because at times I am able to reach my service from outside my network, but other times not.
I have port-forwarded 80, 433, and 8080 to 192.168.87.135.
What am I doing wrong? Are there some ports I'm missing?
Picture of traefik logs
Picture of the exposed traefik loadbalancer
IPv4 specifies private ip address ranges that are not reachable from the internet because:
The Internet has grown beyond anyone's expectations. Sustained exponential growth continues to introduce new challenges. One challenge is a concern within the community that globally unique address space will be exhausted.
(source: RFC 1918, Address Allocation for Private Internets)
IP addresses from these private ranges are not accessible from the internet. Your IP address 192.168.87.235 is part of the private range 192.168.0.0/16 (in the old class C space), hence it is by nature not reachable from the internet.
Furthermore you state yourself that it is working correctly within your local network.
A follow up question to this is: How can I access my network if it's a private network?
To access your local network you need a gateway that has both an internal and a public IP, so that you can reach your network through the public IP. One solution could be to have a DNS name that's mapped to the public IP and internally routed, via a reverse proxy, to the internal load balancer IP 192.168.87.235.
Unfortunately I can't tell you why it works only occasionally, because that would require far more knowledge of your local network. My guess is that, for example, you are connected to your local network with a VPN, or that you already have a reverse proxy that is just not online all the time.
Edit after watching your video:
Your cluster is still reachable from the internet at the end of the video. You get the message "Service unavailable", which is in fact returned by Traefik whenever you try to reach an unhealthy application. Your problem is that the demo application is not starting up after you restart the VM. So what you need to do next is check why the demo app is not starting. This includes checking the logs and events of the failing pod.
Another topic I'd like to touch on is Traefik and what it actually does. Calling Traefik only a reverse proxy, while not false, is not the entire truth. Traefik in a Kubernetes environment is an ingress controller. That means it is a reverse proxy configured by Kubernetes resources, namely the "Ingress" object or the "IngressRoute" object. The latter is a custom resource introduced by Traefik itself (read here for further information) because it adds advanced options for configuring Traefik.
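For illustration, a minimal IngressRoute looks roughly like this (hostname and service name are placeholders, and the entry point name depends on your Traefik static configuration):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: demo-app
spec:
  entryPoints:
    - websecure            # assumes an entry point named "websecure" is defined
  routes:
    - match: Host(`demo.example.com`)
      kind: Rule
      services:
        - name: demo-app   # a plain Kubernetes Service in the same namespace
          port: 80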
The reason I tell you this is that you actually have two ingress controllers installed in your cluster, "Traefik" and "nginx-ingress-controller", and you only need one.

Kubernetes, GCE, Load balancing, SSL

To preface this, I'm working on GCE and Kubernetes. My goal is simply to expose all microservices on my cluster over SSL. Ideally it would work the same as when you expose a deployment via type='LoadBalancer' and get a single external IP. That is my goal, but SSL is not available with those basic load balancers.
From my research, the best current solution would be to set up an nginx ingress controller and use Ingress resources and Services to expose my microservices. Below is a diagram I drew up with my understanding of this process.
I've got this all to work successfully over HTTP. I deployed the default nginx controller from here: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx, as well as the default backend and the service for the default backend. The Ingress for my own microservice has rules set to my domain name and path: /.
This was successful but there were two things that were confusing me a bit.
When exposing the Service resource for my backend (microservice), one guide I followed used type='NodePort' and the other just put a port to reach the service. Both set the target port to the backend app's port. I tried it both ways and both seemed to work. Guide one is from the link above. Guide 2: http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html. What is the difference here?
Another point of confusion is that my Ingress always gets two IPs. My initial thought was that there should be only one external IP, which would hit my Ingress and then be routed by nginx. Or does the IP point directly to nginx? Anyway, the first IP address created seemed to give me the expected results, whereas visiting the second IP fails.
Despite my confusion, things seemed to work fine over HTTP. Over HTTPS, not so much. At first, when I made a web request over HTTPS, things would just hang. Opening 443 in my firewall rules seemed to help, but then I would hit my default backend rather than my microservice.
Reading led me to this from Kubernetes docs: Currently the Ingress resource only supports http rules.
This may explain why I am hitting the default backend because my rules are only for HTTP. But if so how am I supposed to use this approach for SSL?
Another thing I noticed is that if I write an Ingress resource with no rules and give it my desired backend, I still get directed to the original default backend. This is even more odd because kubectl describe ing shows the update and states that my default backend is my desired backend...
Any help or guidance would be much appreciated. Thanks!
So, for #2, you've probably ended up provisioning a Google HTTP(S) LoadBalancer, probably because you're missing the kubernetes.io/ingress.class: "nginx" annotation as described here: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers.
GKE has its own ingress controller, which you need to override by putting that annotation on your Ingress resources so that the nginx controller handles them instead. This article has a good explanation of that stuff.
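In other words, a sketch of where the annotation lives (names are placeholders; on newer clusters spec.ingressClassName plays the same role):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-microservice
  annotations:
    kubernetes.io/ingress.class: "nginx"   # keeps GKE's controller from claiming this Ingress
spec:
  defaultBackend:
    service:
      name: my-microservice
      port:
        number: 80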
The Kubernetes docs have a pretty good description of what NodePort means: basically, the Service will allocate a port from a high range on each node in your cluster, and the nodes will forward traffic from that port to your Service. It's one way of setting up load balancers in different environments, but for your approach it's not necessary. You can just omit the type field of your microservice's Service and it will get the default type, which is ClusterIP.
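For example, a minimal Service for the microservice (names and ports are placeholders) needs no type at all:

apiVersion: v1
kind: Service
metadata:
  name: my-microservice
spec:
  # no "type" field, so this defaults to ClusterIP
  selector:
    app: my-microservice
  ports:
    - port: 80          # port the ingress controller talks to
      targetPort: 8080  # port the backend app listens on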
As for SSL, it could be a few different things. I would make sure you've got the Secret set up just as they describe in the nginx controller docs, e.g. with tls.crt and tls.key data fields.
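A sketch of such a Secret (the name is a placeholder; the data values are your base64-encoded PEM certificate and key):

apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>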
I'd also check the logs of the nginx controller: find out which pod it's running as with kubectl get pods, and then tail its logs: kubectl logs nginx-pod-<some-random-hash> -f. This will help you find out if you've misconfigured anything, like a Service that does not have any endpoints configured. Most of the time I've messed up the ingress stuff, it's been due to some pretty basic misconfiguration of Services/Deployments.
You'll also need to set up a DNS record for your hostname pointing at the LoadBalancer's static IP, or else hit your service with cURL's -H flag as they do in the docs; otherwise you might end up getting routed to the default backend's 404.
To respond directly to your questions, since that's the whole point... Disclaimer: I'm a n00b, so take this all with a grain of salt.
With respect to #2, the blog post I link to below suggests the following architecture:
Create a deployment that deploys the nginx controller pods
Create a service with a type LoadBalancer and a static IP that routes traffic to the controller pods
Create an ingress resource that gets used by the nginx controller pods
Create a secret that gets used by the nginx controller pods to terminate SSL
And other stuff too
From what I understand, the HTTP vs HTTPS handling happens in the nginx controller pods. All of my Ingress rules are also HTTP, but the nginx ingress controller forces SSL and takes care of all that, terminating SSL at the controller so that everything below it, all the ingress stuff, can be HTTP. I have all HTTP rules, yet all of my traffic through the LoadBalancer service is forced to use SSL.
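Concretely, the pattern looks roughly like this (host, secret, and service names are placeholders, and the redirect annotation shown is the nginx-ingress one): the rules stay plain HTTP while the tls block and the controller handle HTTPS:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"   # redirect HTTP to HTTPS at the controller
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: my-tls-secret   # a kubernetes.io/tls Secret like the one above
  rules:
    - host: app.example.com       # the rule itself is plain HTTP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80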
Again, I'm a n00b. Take this all with a grain of salt. I'm speaking in layman's terms because I'm a layman trying to figure this all out.
I came across your question while looking for some answers to my own questions. I ran into a lot of the same issues that you ran into (I'm assuming past tense given the amount of time that has passed). I wanted to point you (and/or others with similar issues) to a blog post that I found helpful when learning about the nginx controller. So far (I'm still at an early stage and in the middle of using the post), everything in the post has worked.
You're probably already past this stuff now being that it's been a few months. But maybe this will help someone else even if it doesn't help you:
https://daemonza.github.io/2017/02/13/kubernetes-nginx-ingress-controller/
It helped me understand what resources needed to be created, how to deploy the controller pods, how to expose them (create a LoadBalancer service for the controller pods with a static IP), and also how to force SSL. It helped me jump over several hurdles and get past the "how do all the moving parts fit together" stage.
The Kubernetes technical documentation is helpful for how to use each piece, but doesn't necessarily lay it all out and slap pieces together like this blog post does. Disclaimer: the model in the blog post might not be the best way to do it though (I don't have enough experience to make that call), but it did help me at least get a working example of an nginx ingress controller that forced SSL.
Hope this helps someone eventually.
Andrew