Accessing service from custom port using k3d and traefik - traefik

I am trying to configure traefik and its load balancer to accept traffic on host port 9200.
Everything works fine for port 8443 (websecure). I am using k3d, with the bundled traefik initially disabled.
I can curl my "2048" service from my macOS host. The ingress is configured for the "websecure" entrypoint and a match is found.
curl --cacert ca.crt -I https://2048.127.0.0.1.nip.io:8443
HTTP/2 200
I have installed the exact same service a second time and named it "2049". I want this service to be available on port 9200 (I have disabled TLS to simplify things).
+ curl -vvv -k -I http://2049.127.0.0.1.nip.io:9200
* Trying 127.0.0.1:9200...
* Connected to 2049.127.0.0.1.nip.io (127.0.0.1) port 9200 (#0)
> HEAD / HTTP/1.1
> Host: 2049.127.0.0.1.nip.io:9200
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
Both services can be accessed from within the cluster.
I have installed traefik through helm and made sure ports are available.
k get -n traefik-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer 10.43.86.220 172.27.0.3,172.27.0.4,172.27.0.5 80:30039/TCP,443:30253/TCP,9092:30179/TCP,9200:31428/TCP 61m
# just to show that the lb is configured for port 9200 (iptables rules in the /pause container)
k logs -n traefik-system pod/svclb-traefik-h5zs4
error: a container name must be specified for pod svclb-traefik-h5zs4, choose one of: [lb-tcp-80 lb-tcp-443 lb-tcp-9092 lb-tcp-9200]
# my ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: game-2049
spec:
  entryPoints: # We listen to requests coming from port 9200
    - elasticsearch
  routes:
    - match: Host(`2049.127.0.0.1.nip.io`)
      kind: Rule
      services:
        - name: game-2049 # Requests will be forwarded to this service
          port: 80
# traefik is configured with these entrypoint addresses:
- "--entrypoints.web.address=:8000/tcp"
- "--entrypoints.websecure.address=:8443/tcp"
- "--entrypoints.kafka.address=:9092/tcp"
- "--entrypoints.elasticsearch.address=:9200/tcp"
My goal is to access Elasticsearch on 9200 and Kafka on 9092 from my macOS host using k3d. But first I need to get this configuration for "2049" right.
What am I missing?

I have this working on K3s using Bitnami Kafka.
You need two things:
The first is to define the entry point in the traefik config -- which from your note you already have.
kubectl describe pods traefik-5bcf476bb9-qrqg7 --namespace traefik
Name: traefik-5bcf476bb9-qrqg7
Namespace: traefik
Priority: 0
Service Account: traefik
...
Status: Running
...
Image: traefik:2.9.1
Image ID: docker.io/library/traefik@sha256:4ebf68cdb33c162e8786ac83ece782ec0dbe583471c04dfd0af43f245b96c88f
Ports: 9094/TCP, 9100/TCP, 9000/TCP, 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
--global.checknewversion
--global.sendanonymoususage
--entrypoints.kafka.address=:9094/tcp
--entrypoints.metrics.address=:9100/tcp
--entrypoints.traefik.address=:9000/tcp
--entrypoints.web.address=:8000/tcp
--entrypoints.websecure.address=:8443/tcp
--api.dashboard=true
--ping=true
--metrics.prometheus=true
--metrics.prometheus.entrypoint=metrics
--providers.kubernetescrd
--providers.kubernetescrd.allowCrossNamespace=true
--providers.kubernetescrd.allowExternalNameServices=true
--providers.kubernetesingress
--providers.kubernetesingress.allowExternalNameServices=true
--providers.kubernetesingress.allowEmptyServices=true
--entrypoints.websecure.http.tls=true
State: Running
Started: Thu, 27 Oct 2022 16:27:22 -0400
Ready: True
I'm using TCP port 9094 for kafka traffic.
The second is the ingress itself: I'm using the IngressRouteTCP CRD.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kafka-ingress
  namespace: bitnami-kafka
spec:
  entryPoints:
    - kafka
  routes:
    - match: HostSNI(`*`)
      services:
        - name: my-bkafka-0-external
          namespace: bitnami-kafka
          port: 9094
Note: traefik is routing to a Kubernetes LoadBalancer service
kubectl get services --namespace bitnami-kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-bkafka ClusterIP 10.43.153.8 <none> 9092/TCP 20h
my-bkafka-0-external LoadBalancer 10.43.45.233 10.55.10.243 9094:30737/TCP 20h
my-bkafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 20h
my-bkafka-zookeeper ClusterIP 10.43.170.229 <none> 2181/TCP,2888/TCP,3888/TCP 20h
my-bkafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 20h
which is option A from Bitnami's write-up on Kafka external access.
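Applied to the "2049" example from the question, the same pattern would look roughly like this (a sketch only; service name, port, and entrypoint are taken from the question, and without TLS only the wildcard HostSNI match is allowed):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: game-2049-tcp
spec:
  entryPoints:
    - elasticsearch        # the :9200 entrypoint
  routes:
    - match: HostSNI(`*`)  # plain TCP, no TLS/SNI to match on
      services:
        - name: game-2049
          port: 80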

Related

Traefik & k3d: Dashboard is not reachable

This is my k3d cluster creation command:
$ k3d cluster create arxius \
--agents 3 \
--k3s-server-arg --disable=traefik \
-p "8888:80#loadbalancer" -p "9000:9000#loadbalancer" \
--volume ${HOME}/.k3d/registries.yaml:/etc/rancher/k3s/registries.yaml
Here my nodes:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c83f2f746621 rancher/k3d-proxy:v3.0.1 "/bin/sh -c nginx-pr…" 2 weeks ago Up 21 minutes 0.0.0.0:9000->9000/tcp, 0.0.0.0:8888->80/tcp, 0.0.0.0:45195->6443/tcp k3d-arxius-serverlb
0ed525443da2 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-2
561a0a51e6d7 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-1
fc131df35105 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-0
4cfceabad5af rancher/k3s:v1.18.6-k3s1 "/bin/k3s server --d…" 2 weeks ago Up 21 minutes k3d-arxius-server-0
873a4f157251 registry:2 "/entrypoint.sh /etc…" 3 months ago Up About an hour 0.0.0.0:5000->5000/tcp registry.localhost
I've installed traefik using the default helm installation command:
$ helm install traefik traefik/traefik
After that, an IngressRoute is also installed in order to reach the dashboard:
Name: traefik-dashboard
Namespace: traefik
Labels: app.kubernetes.io/instance=traefik
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=traefik
helm.sh/chart=traefik-9.1.1
Annotations: helm.sh/hook: post-install,post-upgrade
API Version: traefik.containo.us/v1alpha1
Kind: IngressRoute
Metadata:
Creation Timestamp: 2020-12-09T19:07:41Z
Generation: 1
Managed Fields:
API Version: traefik.containo.us/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:helm.sh/hook:
f:labels:
.:
f:app.kubernetes.io/instance:
f:app.kubernetes.io/managed-by:
f:app.kubernetes.io/name:
f:helm.sh/chart:
f:spec:
.:
f:entryPoints:
f:routes:
Manager: Go-http-client
Operation: Update
Time: 2020-12-09T19:07:41Z
Resource Version: 141805
Self Link: /apis/traefik.containo.us/v1alpha1/namespaces/traefik/ingressroutes/traefik-dashboard
UID: 1cbcd5ec-d967-440c-ad21-e41a59ca1ba8
Spec:
Entry Points:
traefik
Routes:
Kind: Rule
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
Services:
Kind: TraefikService
Name: api@internal
Events: <none>
As you can see:
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
I'm trying to reach the dashboard. Nevertheless, details are not shown.
I've also tried with a curl command:
curl 'http://localhost:9000/api/overview'
curl: (52) Empty reply from server
Any ideas?
First, the default configuration of the traefik helm chart (in version 9.1.1) sets up the entryPoint traefik on port 9000 but does not expose it automatically. So, if you check the service created for you, you will see that it only maps the web and websecure entrypoints.
Check this snippet from kubectl get svc traefik -o yaml
spec:
  clusterIP: xx.xx.xx.xx
  externalTrafficPolicy: Cluster
  ports:
    - name: web
      nodePort: 30388
      port: 80
      protocol: TCP
      targetPort: web
    - name: websecure
      nodePort: 31115
      port: 443
      protocol: TCP
      targetPort: websecure
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  sessionAffinity: None
  type: LoadBalancer
As explained in the docs, there are two ways to reach your dashboard: either you start a port-forward to your local machine for port 9000, or you expose the dashboard via an IngressRoute on another entrypoint.
Please be aware that you still need to port-forward even though your k3d proxy already binds to 9000. That binding only reserves the external port in case some load-balanced service wants to be exposed on it; at the moment it is not used, and it is not necessary for either solution. You still need to port-forward to the traefik pod. After establishing the port-forward, you can access the dashboard at http://localhost:9000/dashboard/ (be aware of the trailing slash that is needed for the PathPrefix rule).
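For the port-forward variant, a minimal sketch (assuming the chart default deployment name traefik in the traefik namespace):
kubectl -n traefik port-forward deploy/traefik 9000:9000
# then, in another shell:
curl http://localhost:9000/dashboard/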
The other solution, exposing the dashboard on another entrypoint, requires no port-forward, but you need to take care of a proper domain name (DNS entry + Host rule) and of not exposing the dashboard to the whole world, e.g. by adding an auth middleware.
See the changes highlighted below:
# dashboard.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web # <-- using the web entrypoint, not the traefik (9000) one
  routes: # v-- adding a host rule
    - match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
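For the auth middleware mentioned above, a basicAuth Middleware can be attached to that route. A minimal sketch (the secret name is a placeholder; it would hold an htpasswd-style users entry):
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth
spec:
  basicAuth:
    secret: dashboard-users   # placeholder: a secret containing htpasswd-formatted users
The middleware is then referenced from the route in the IngressRoute:
    - match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      kind: Rule
      middlewares:
        - name: dashboard-auth
      services:
        - name: api@internal
          kind: TraefikService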

Unable to connect to Azure Kubernetes (AKS) external-ip

I'm trying to deploy my first asp.net app (sample VS 2019 project) to AKS.
I was able to create a docker container, run it locally and access it via http://localhost:8000/weatherforecast.
However, I'm not able to access the endpoint when it's deployed in AKS.
Yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetdemo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnetdemo
  template:
    metadata:
      labels:
        app: aspnetdemo
    spec:
      containers:
        - name: mycr
          image: mycr.azurecr.io/aspnetdemo:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetdemo-service
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: aspnetdemo
  type: LoadBalancer
I verified that the pod is running -
kubectl get pods
NAME READY STATUS RESTARTS AGE
aspnetdemo-deployment-* 2/2 Running 0 21m
and the service too -
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aspnetdemo-service LoadBalancer 10.0.X.X 13.89.X.X 80:30635/TCP 22m
I am getting an error when I try to access 13.89.X.X/weatherforecast:
"this site can't be reached - the connection was reset"
Any ideas?
When I run the following command, it returns an endpoint -
kubectl describe service aspnetdemo-service | select-string Endpoints
Endpoints: 10.244.X.X:8080
I also tried port forwarding and that didn't work either.
kubectl port-forward service/aspnetdemo-service 3000:80
http://localhost:3000/weatherforecast
E0512 15:16:24.429387 21356 portforward.go:400] an error occurred forwarding 3000 -> 8080: error forwarding port 8080 to pod a87ebc116d0e0b6e7066f32e945661c50d745d392c76844de084c7da96a874b8, uid : exit status 1: 2020/05/12 22:16:24 socat[18810] E write(5, 0x14da4c0, 535): Broken pipe
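As a side note, the socat "Broken pipe" while forwarding to 8080 usually means nothing inside the pod is listening on that port. One way to check which port the app actually binds to is the container log (a diagnostic sketch; the container name is taken from the manifest above):
kubectl logs deploy/aspnetdemo-deployment -c mycr
# for ASP.NET Core, look for a line such as "Now listening on: http://[::]:80"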
Thanks in advance!

curl: (7) Failed connect to xx.xx.xx.xx:80; Connection refused

I am trying to deploy nginx with an Ingress.
kubectl run nginx --image=nginx
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-65899c769f-wf7dl 1/1 Running 0 9m
kubectl expose deploy nginx --port 80
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.254.75.184 <none> 80/TCP 9m
vi ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
    - host: kub-mst.coral.io
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80
kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
nginx kub-mst 80 9m
vi /etc/hosts
xx.xx.xx.xx kub-mst.coral.io
curl kub-mst.coral.io
curl: (7) Failed connect to kub-mst; Connection refused
I have a Kubernetes cluster and am trying to
curl http://xx.xx.xx.xx
it returns
curl: (7) Failed connect to xx.xx.xx.xx:80; Connection refused
and i execute
kubectl cluster-info
it returns
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
xx.xx.xx.xx is public IP.
How can I troubleshoot this to find where the problem is?
You provided the ingress controller with a single rule that matches on the Host header, yet, for some odd reason, you're testing with a request that does not provide that Host header.
curl -H 'Host: kub-mst.coral.io' http://xx.xx.xx.xx
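An equivalent test that keeps the hostname in the URL (and therefore the Host header) is curl's --resolve option:
curl --resolve kub-mst.coral.io:80:xx.xx.xx.xx http://kub-mst.coral.io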

AWS EKS - cannot access apache httpd behind a LoadBalancer

I've deployed an Apache httpd server in a container and am attempting to expose it externally via a LoadBalancer. Although I can log on to the local host and get the expected response (curl -X GET localhost), when I try to access the external URL exposed by the load balancer I get an empty reply from the server:
curl -X GET ad8d14ea0ba9611e8b2360afc35626a3-553331517.us-east-1.elb.amazonaws.com:5000
curl: (52) Empty reply from server
Any idea what I am missing - is there some kind of additional redirection going on that I'm unaware of?
The yaml is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: apache
  template:
    metadata:
      name: apachehost
      labels:
        pod: apache
    spec:
      containers:
        - name: apache
          image: myrepo/apache2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  labels:
    app: apache
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
    - name: port1
      port: 5000
      targetPort: 80
1. Check that your pod is running.
2. Check AWS IAM and the security group; port 5000 may not be open to the public. Use curl from the Kubernetes master to check the port.
3. Share the pod logs.
Check your AWS load balancer: port 5000 must be open in the security group attached to the LB, i.e. there must be a matching inbound rule.
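A rough way to check that from the command line (a sketch; the load balancer name and security group id are placeholders):
# find the DNS name of the ELB created for the service
kubectl get svc apache -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# list the security groups attached to the classic ELB (its name is the first label of that DNS name)
aws elb describe-load-balancers --load-balancer-names <elb-name> --query 'LoadBalancerDescriptions[].SecurityGroups'
# confirm there is an inbound rule allowing TCP 5000
aws ec2 describe-security-groups --group-ids <sg-id> --query 'SecurityGroups[].IpPermissions'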
If your pods are running on Fargate, the load balancer service will not work: https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html

Port 443 times out on kubernetes in GCE

I have created a Kubernetes cluster where I'm currently running only a single Docker service that serves a static web page. It works when exposing the standard port 80.
Now I want to attach an SSL certificate to the domain, and I have managed to do so running locally. But when I publish my service to the Kubernetes cluster, https://my.domain.com times out. It appears that the service never receives the request, but is blocked by Kubernetes or GCE.
Do I need to open up a firewall, or set up my cluster deployment to open port 443? What might be the issue?
I have heard about Ingress and Kubernetes secrets, and that this is the way to go. But everything I find uses ingress-nginx, and as I'm only running a single Docker service I don't use Nginx. To me it seems like letting the 443 call reach the service would be the easiest solution. Or am I wrong?
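As a quick check for the firewall side, the rules GCE knows about can be listed directly; for a Service of type LoadBalancer, GKE normally creates a k8s-fw-... rule covering the service ports automatically (a diagnostic sketch):
gcloud compute firewall-rules list
# look for an ALLOW rule covering tcp:443, e.g. an automatically created k8s-fw-... rule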
Below is my setup:
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: client-pods
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        name: client-pods
    spec:
      containers:
        - image: <CONTAINER>
          name: client-container
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 10
            timeoutSeconds: 1
I have also enabled HTTPS traffic on the GKE VM running the cluster, and the Dockerfile exposes both 80 and 443. I'm at a loss. Anyone know what I'm doing wrong?