Traefik & k3d: Dashboard is not reachable

This is my k3d cluster creation command:
$ k3d cluster create arxius \
--agents 3 \
--k3s-server-arg --disable=traefik \
-p "8888:80#loadbalancer" -p "9000:9000#loadbalancer" \
--volume ${HOME}/.k3d/registries.yaml:/etc/rancher/k3s/registries.yaml
Here are my nodes:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c83f2f746621 rancher/k3d-proxy:v3.0.1 "/bin/sh -c nginx-pr…" 2 weeks ago Up 21 minutes 0.0.0.0:9000->9000/tcp, 0.0.0.0:8888->80/tcp, 0.0.0.0:45195->6443/tcp k3d-arxius-serverlb
0ed525443da2 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-2
561a0a51e6d7 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-1
fc131df35105 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-0
4cfceabad5af rancher/k3s:v1.18.6-k3s1 "/bin/k3s server --d…" 2 weeks ago Up 21 minutes k3d-arxius-server-0
873a4f157251 registry:2 "/entrypoint.sh /etc…" 3 months ago Up About an hour 0.0.0.0:5000->5000/tcp registry.localhost
I've installed Traefik using the default Helm installation command:
$ helm install traefik traefik/traefik
After that, an IngressRoute is also installed in order to reach the dashboard:
Name: traefik-dashboard
Namespace: traefik
Labels: app.kubernetes.io/instance=traefik
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=traefik
helm.sh/chart=traefik-9.1.1
Annotations: helm.sh/hook: post-install,post-upgrade
API Version: traefik.containo.us/v1alpha1
Kind: IngressRoute
Metadata:
Creation Timestamp: 2020-12-09T19:07:41Z
Generation: 1
Managed Fields:
API Version: traefik.containo.us/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:helm.sh/hook:
f:labels:
.:
f:app.kubernetes.io/instance:
f:app.kubernetes.io/managed-by:
f:app.kubernetes.io/name:
f:helm.sh/chart:
f:spec:
.:
f:entryPoints:
f:routes:
Manager: Go-http-client
Operation: Update
Time: 2020-12-09T19:07:41Z
Resource Version: 141805
Self Link: /apis/traefik.containo.us/v1alpha1/namespaces/traefik/ingressroutes/traefik-dashboard
UID: 1cbcd5ec-d967-440c-ad21-e41a59ca1ba8
Spec:
Entry Points:
traefik
Routes:
Kind: Rule
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
Services:
Kind: TraefikService
Name: api@internal
Events: <none>
As you can see:
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
I'm trying to reach the dashboard. Nevertheless, details are not shown.
I've also tried running a curl command:
curl 'http://localhost:9000/api/overview'
curl: (52) Empty reply from server
Any ideas?

First, the default configuration of the Traefik Helm chart (version 9.1.1) sets up the traefik entryPoint on port 9000 but does not expose it automatically. So if you check the Service created for you, you will see that it only maps the web and websecure entrypoints.
Check this snippet from kubectl get svc traefik -o yaml
spec:
  clusterIP: xx.xx.xx.xx
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    nodePort: 30388
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    nodePort: 31115
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  sessionAffinity: None
  type: LoadBalancer
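If you do want the chart itself to expose that entrypoint on the Service, the 9.x chart has an expose flag for it under ports.traefik in its values. A minimal sketch, assuming that values layout (verify against your chart version's values.yaml):
# values-dashboard.yaml (sketch)
ports:
  traefik:
    expose: true       # also publish port 9000 on the traefik Service
    exposedPort: 9000
$ helm upgrade traefik traefik/traefik -f values-dashboard.yaml
Exposing the dashboard entrypoint this way is generally discouraged beyond local experiments, which is why the chart keeps it off by default.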
As explained in the docs, there are two ways to reach your dashboard: either you start a port-forward of port 9000 to your local machine, or you expose the dashboard via an IngressRoute on another entrypoint.
Please be aware that you still need to port-forward even though your k3d proxy already binds to port 9000. That binding is only a reservation in case some load-balanced service wants to be exposed on that external port; at the moment it is not used, and it is not necessary for either of the solutions. You still need to port-forward to the traefik pod. After establishing the port-forward, you can access the dashboard at http://localhost:9000/dashboard/ (be aware of the trailing slash that is needed for the PathPrefix rule).
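A minimal sketch of that port-forward, assuming the release is named traefik and runs in the traefik namespace:
# forward local port 9000 to the traefik entrypoint (9000) inside the pod
$ kubectl -n traefik port-forward deployment/traefik 9000:9000
# then, from another shell (note the trailing slash):
$ curl http://localhost:9000/dashboard/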
The other solution, exposing the dashboard on another entrypoint, requires no port-forward, but you need to take care of a proper domain name (DNS entry + Host rule) and make sure you don't expose it to the whole world, e.g. by adding an auth middleware.
See the changes highlighted below:
# dashboard.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web              # <-- using the web entrypoint, not the traefik (9000) one
  routes:              # v-- adding a host rule
    - match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
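If you go this route, a basicAuth middleware attached to that IngressRoute could look roughly like the sketch below. The middleware name dashboard-auth and the Secret traefik-dashboard-auth are assumptions; the Secret must contain an htpasswd-style users entry:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth               # hypothetical name
spec:
  basicAuth:
    secret: traefik-dashboard-auth   # hypothetical Secret with an htpasswd-style `users` key
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      kind: Rule
      middlewares:
        - name: dashboard-auth       # attach the auth middleware to the route
      services:
        - name: api@internal
          kind: TraefikService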

Related

Accessing service from custom port using k3d and traefik

I am trying to configure Traefik and the load balancer to accept traffic on host port 9200.
Everything works fine for port 8443 (websecure). I am using k3d, and the bundled Traefik is initially disabled.
I can curl my "2048" service from my macOS host. The ingress is configured for the "websecure" entrypoint and a match is found.
curl --cacert ca.crt -I https://2048.127.0.0.1.nip.io:8443
HTTP/2 200
I have installed the exact same service and named it "2049". I want this service to be available on port 9200 (I have disabled TLS to simplify things).
+ curl -vvv -k -I http://2049.127.0.0.1.nip.io:9200
* Trying 127.0.0.1:9200...
* Connected to 2049.127.0.0.1.nip.io (127.0.0.1) port 9200 (#0)
> HEAD / HTTP/1.1
> Host: 2049.127.0.0.1.nip.io:9200
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
Both services can be accessed from within the cluster.
I have installed traefik through helm and made sure ports are available.
#
k get -n traefik-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer 10.43.86.220 172.27.0.3,172.27.0.4,172.27.0.5 80:30039/TCP,443:30253/TCP,9092:30179/TCP,9200:31428/TCP 61m
# just to display, the lb is configured for port 9200 (iptables, /pause container)
k logs -n traefik-system pod/svclb-traefik-h5zs4
error: a container name must be specified for pod svclb-traefik-h5zs4, choose one of: [lb-tcp-80 lb-tcp-443 lb-tcp-9092 lb-tcp-9200]
# my ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: game-2049
spec:
  entryPoints:          # We listen to requests coming from port 9200
    - elasticsearch
  routes:
    - match: Host(`2049.127.0.0.1.nip.io`)
      kind: Rule
      services:
        - name: game-2049 # Requests will be forwarded to this service
          port: 80
# traefik is configured with these endpoint addresses:
- "--entrypoints.web.address=:8000/tcp"
- "--entrypoints.websecure.address=:8443/tcp"
- "--entrypoints.kafka.address=:9092/tcp"
- "--entrypoints.elasticsearch.address=:9200/tcp"
My goal is to access Elasticsearch on 9200 and Kafka on 9092 from my macOS host using k3d, but first I need to get this configuration for "2049" right.
What am I missing?
I have this working on K3s using Bitnami Kafka.
You need two things:
The first is the entry point in the Traefik config, which from your note you already have:
kubectl describe pods traefik-5bcf476bb9-qrqg7 --namespace traefik
Name: traefik-5bcf476bb9-qrqg7
Namespace: traefik
Priority: 0
Service Account: traefik
...
Status: Running
...
Image: traefik:2.9.1
Image ID: docker.io/library/traefik@sha256:4ebf68cdb33c162e8786ac83ece782ec0dbe583471c04dfd0af43f245b96c88f
Ports: 9094/TCP, 9100/TCP, 9000/TCP, 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
--global.checknewversion
--global.sendanonymoususage
--entrypoints.kafka.address=:9094/tcp
--entrypoints.metrics.address=:9100/tcp
--entrypoints.traefik.address=:9000/tcp
--entrypoints.web.address=:8000/tcp
--entrypoints.websecure.address=:8443/tcp
--api.dashboard=true
--ping=true
--metrics.prometheus=true
--metrics.prometheus.entrypoint=metrics
--providers.kubernetescrd
--providers.kubernetescrd.allowCrossNamespace=true
--providers.kubernetescrd.allowExternalNameServices=true
--providers.kubernetesingress
--providers.kubernetesingress.allowExternalNameServices=true
--providers.kubernetesingress.allowEmptyServices=true
--entrypoints.websecure.http.tls=true
State: Running
Started: Thu, 27 Oct 2022 16:27:22 -0400
Ready: True
I'm using TCP port 9094 for Kafka traffic.
The second is the ingress - I'm using the IngressRouteTCP CRD:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kafka-ingress
  namespace: bitnami-kafka
spec:
  entryPoints:
    - kafka
  routes:
    - match: HostSNI(`*`)
      services:
        - name: my-bkafka-0-external
          namespace: bitnami-kafka
          port: 9094
Note: Traefik is routing to a Kubernetes LoadBalancer service.
kubectl get services --namespace bitnami-kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-bkafka ClusterIP 10.43.153.8 <none> 9092/TCP 20h
my-bkafka-0-external LoadBalancer 10.43.45.233 10.55.10.243 9094:30737/TCP 20h
my-bkafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 20h
my-bkafka-zookeeper ClusterIP 10.43.170.229 <none> 2181/TCP,2888/TCP,3888/TCP 20h
my-bkafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 20h
which is option A from Bitnami's write-up on Kafka external access.
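For completeness, when Traefik is installed via its Helm chart, an extra TCP entrypoint like the kafka one in the Args above can also be declared through chart values instead of raw additionalArguments. A rough sketch, assuming the chart's ports values layout (verify against your chart version):
# values.yaml (sketch)
ports:
  kafka:
    port: 9094        # entrypoint address inside the pod
    expose: true      # publish it on the traefik Service
    exposedPort: 9094
    protocol: TCP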

How to configure Traefik UDP Ingress?

My UDP setup doesn't work.
In the traefik pod,
--entryPoints.udp.address=:4001/udp
is added. The port is listening, and the Traefik UI shows a udp entrypoint on port 4001, so the UDP entrypoint on 4001 is working.
I have applied this CRD:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp
spec:
  entryPoints:
    - udp
  routes:
    - services:
        - name: udp
          port: 4001
Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: udp
spec:
  selector:
    app: udp-server
  ports:
    - protocol: UDP
      port: 4001
      targetPort: 4001
I got this error in the Traefik UI:
NAME: default-udp-0@kubernetescrd
ENTRYPOINTS: udp
SERVICE:
ERRORS: the udp service "default-udp-0@kubernetescrd" does not exist
What did I do wrong? Or is it a bug?
Traefik version: 2.3.1
I ran into the same trouble using k3s/Rancher and Traefik 2.x. The problem was that configuring only the command line switch made everything look fine in the Traefik dashboard, but it just did not work.
In k3s the solution is to provide a traefik-config.yaml beside the traefik.yaml (traefik.yaml is always recreated on a restart of k3s).
Putting the file at /var/lib/rancher/k3s/server/manifests/traefik-config.yaml keeps the changes persistent.
What is missing is the entrypoint declaration. You may assume this is also done by the command line switch, but it is not:
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.udp.address=:55000/udp"
    entryPoints:
      udp:
        address: ':55000/udp'
Before going further, check the helm-install jobs in the kube-system namespace. If one of the two helm-install jobs errors out, Traefik won't work.
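A quick way to check, sketched here assuming the default k3s job names (helm-install-traefik and helm-install-traefik-crd):
$ kubectl -n kube-system get jobs
$ kubectl -n kube-system logs job/helm-install-traefik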
In case everything worked as above and you still have trouble, one option is simply to expose the UDP traffic as a normal Kubernetes LoadBalancer service, like this example that I tested successfully:
apiVersion: v1
kind: Service
metadata:
  name: nginx-udp-ingress-demo-svc-udp
spec:
  selector:
    app: nginx-udp-ingress-demo
  ports:
    - protocol: UDP
      port: 55000
      targetPort: 55000
  type: LoadBalancer
The type: LoadBalancer entry starts a pod on every Kubernetes node that forwards incoming UDP/55000 traffic to the load balancer service.
This worked for me on a k3s cluster, but it is not the native Traefik solution asked for in the question - more a workaround that makes things work in the first place.
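To verify the workaround from outside the cluster, a simple UDP test could look like this (a sketch; <node-ip> is a placeholder and the backing pod must actually answer on UDP/55000):
# send a test datagram to the exposed UDP port on a node
$ echo "ping" | nc -u -w1 <node-ip> 55000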
I found a source that seems to cover the native Traefik solution at https://github.com/traefik/traefik/blob/master/docs/content/routing/providers/kubernetes-crd.md. It appears to have a working setup, but the explanation is very slim and it shows just the manifests. I need to test this out and come back.
This worked on my system.

Unable to successfully setup TLS on a Multi-Tenant GKE+Istio with LetsEncrypt (via Cert Manager)

I'm trying to configure TLS (LetsEncrypt) on a multi-tenant GKE+Istio setup.
I mainly followed this guide -> Full Isolation in Multi-Tenant SaaS with Kubernetes & Istio for setting up the multi-tenancy in GKE+Istio, which I was able to pull off successfully. I'm able to deploy simple apps in their separate namespaces, accessible through their respective subdomains.
I then tried to move forward and set up TLS with LetsEncrypt. For this I mainly followed a different guide, which can be found here -> istio-gke. Unfortunately, following this guide didn't produce the result I wanted: when I was done with it, LetsEncrypt wasn't even issuing certificates for my deployment or domain.
Thus I tried to follow yet another guide -> istio-gateway-tls-setup. Here I managed to get LetsEncrypt to issue a certificate for my domain, but when I tried to test it with openssl or other online SSL checkers, it said that I still wasn't communicating securely.
Below are the results when I describe the configuration of my certificate, issuer & gateway:
Certificate: kubectl -n istio-system describe certificate istio-gateway
Issuer: kubectl -n istio-system describe issuer letsencrypt-prod
Gateway: kubectl -n istio-system describe gateway istio-gateway
And here are the dry-run results for my helm install <tenant>:
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/cjcabero/projects/aqt-ott-msging-dev/gke-setup/helmchart
NAME: tenanta
LAST DEPLOYED: Wed Feb 17 21:15:08 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
frontend:
  image:
    pullPolicy: IfNotPresent
    repository: paulbouwer/hello-kubernetes
    tag: "1.8"
  ports:
    containerPort: 8080
  service:
    name: http
    port: 80
    type: ClusterIP
HOOKS:
MANIFEST:
---
# Source: helmchart/templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenanta
  labels:
    istio-injection: enabled
---
# Source: helmchart/templates/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: tenanta
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: frontend
---
# Source: helmchart/templates/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: tenanta
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: Hello tenanta
---
# Source: helmchart/templates/virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tenanta-frontend-ingress
  namespace: istio-system
spec:
  hosts:
    - tenanta.cjcabero.dev
  gateways:
    - istio-gateway
  http:
    - route:
        - destination:
            host: frontend.tenanta.svc.cluster.local
            port:
              number: 80
I don't understand how, even though LetsEncrypt seems to be able to issue the certificate for my domain, it still isn't communicating securely.
Google Domains even shows that a certificate was issued for the domain in its Transparency Report.
Anyway, I'm not sure if this helps, but I also checked the domain with an online SSL checker and here are the results -> https://check-your-website.server-daten.de/?q=cjcabero.dev.
By the way, I used Istio on GKE, which results in Istio v1.4.10 & Kubernetes v1.18.15-gke.1100.

Unable to connect to Azure Kubernetes (AKS) external-ip

I'm trying to deploy my first asp.net app (sample VS 2019 project) to AKS.
I was able to create a docker container, run it locally and access it via http://localhost:8000/weatherforecast.
However, I'm not able to access the endpoint when it's deployed in AKS.
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetdemo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnetdemo
  template:
    metadata:
      labels:
        app: aspnetdemo
    spec:
      containers:
        - name: mycr
          image: mycr.azurecr.io/aspnetdemo:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetdemo-service
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: aspnetdemo
  type: LoadBalancer
I verified that the pod is running -
kubectl get pods
NAME READY STATUS RESTARTS AGE
aspnetdemo-deployment-* 2/2 Running 0 21m
and the service too -
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aspnetdemo-service LoadBalancer 10.0.X.X 13.89.X.X 80:30635/TCP 22m
I am getting an error when I try to access 13.89.X.X/weatherforecast:
"this site can't be reached - the connection was reset"
Any ideas?
When I run the following command, it returns an endpoint -
kubectl describe service aspnetdemo-service | select-string Endpoints
Endpoints: 10.244.X.X:8080
I also tried port forwarding and that didn't work either.
kubectl port-forward service/aspnetdemo-service 3000:80
http://localhost:3000/weatherforecast
E0512 15:16:24.429387 21356 portforward.go:400] an error occurred forwarding 3000 -> 8080: error forwarding port 8080 to pod a87ebc116d0e0b6e7066f32e945661c50d745d392c76844de084c7da96a874b8, uid : exit status 1: 2020/05/12 22:16:24 socat[18810] E write(5, 0x14da4c0, 535): Broken pipe
Thanks in advance!

AWS EKS - cannot access apache httpd behind a LoadBalancer

I've deployed an Apache httpd server in a container and am attempting to expose it externally via a LoadBalancer. Although I can log on to the local host and get the expected response (curl -X GET localhost), when I try to access the external URL exposed by the load balancer I get an empty reply from the server:
curl -X GET ad8d14ea0ba9611e8b2360afc35626a3-553331517.us-east-1.elb.amazonaws.com:5000
curl: (52) Empty reply from server
Any idea what I am missing - is there some kind of additional redirection going on that I'm unaware of?
The YAML is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: apache
  template:
    metadata:
      name: apachehost
      labels:
        pod: apache
    spec:
      containers:
        - name: apache
          image: myrepo/apache2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  labels:
    app: apache
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
    - name: port1
      port: 5000
      targetPort: 80
1. Check that your pod is running.
2. Check AWS IAM and the security group; port 5000 may not be open to the public. Run a curl command on the Kubernetes master and check the port.
3. Share the pod logs.
Check your AWS load balancer: the security group attached to the LB needs an inbound rule that opens port 5000.
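For example, a quick check with the AWS CLI might look like this (a sketch; <elb-name> and <sg-id> are placeholders you need to fill in from your account):
# find the security groups attached to the classic ELB created by the Service
$ aws elb describe-load-balancers --load-balancer-names <elb-name> --query 'LoadBalancerDescriptions[0].SecurityGroups'
# inspect the inbound rules of that security group
$ aws ec2 describe-security-groups --group-ids <sg-id> --query 'SecurityGroups[0].IpPermissions'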
If your pods are running on Fargate, the LoadBalancer service will not work: https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html
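In that case the linked doc points at using the AWS Load Balancer Controller with IP targets instead. A sketch of the Service annotations, assuming that controller is installed (check the annotation names against the current controller docs):
apiVersion: v1
kind: Service
metadata:
  name: apache
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external      # hand the Service to the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip # IP targets work with Fargate pods
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
    - name: port1
      port: 5000
      targetPort: 80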