Unable to connect to Azure Kubernetes (AKS) external-ip - asp.net-core

I'm trying to deploy my first asp.net app (sample VS 2019 project) to AKS.
I was able to create a docker container, run it locally and access it via http://localhost:8000/weatherforecast.
However, I'm not able to access the endpoint when it's deployed in AKS.
Yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetdemo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnetdemo
  template:
    metadata:
      labels:
        app: aspnetdemo
    spec:
      containers:
      - name: mycr
        image: mycr.azurecr.io/aspnetdemo:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetdemo-service
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: aspnetdemo
  type: LoadBalancer
I verified that the pod is running -
kubectl get pods
NAME READY STATUS RESTARTS AGE
aspnetdemo-deployment-* 2/2 Running 0 21m
and the service too -
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aspnetdemo-service LoadBalancer 10.0.X.X 13.89.X.X 80:30635/TCP 22m
I get an error when I try to access 13.89.X.X/weatherforecast:
"this site can't be reached - the connection was reset"
Any ideas?
When I run the following command, it returns an endpoint -
kubectl describe service aspnetdemo-service | select-string Endpoints
Endpoints: 10.244.X.X:8080
I also tried port forwarding and that didn't work either.
kubectl port-forward service/aspnetdemo-service 3000:80
http://localhost:3000/weatherforecast
E0512 15:16:24.429387 21356 portforward.go:400] an error occurred forwarding 3000 -> 8080: error forwarding port 8080 to pod a87ebc116d0e0b6e7066f32e945661c50d745d392c76844de084c7da96a874b8, uid : exit status 1: 2020/05/12 22:16:24 socat[18810] E write(5, 0x14da4c0, 535): Broken pipe
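The broken pipe from socat during port-forward suggests nothing inside the pod accepted the connection on 8080. One way to narrow this down (a debugging sketch; the pod name is a placeholder, and the port-80 check is an assumption based on stock ASP.NET Core images binding Kestrel to port 80 unless ASPNETCORE_URLS overrides it):
# Check which URL/port Kestrel is bound to inside the container (placeholder pod name)
kubectl exec aspnetdemo-deployment-xxxxx -- printenv ASPNETCORE_URLS
# If the image includes curl, probe both candidate ports from inside the pod
kubectl exec aspnetdemo-deployment-xxxxx -- curl -sv http://localhost:8080/weatherforecast
kubectl exec aspnetdemo-deployment-xxxxx -- curl -sv http://localhost:80/weatherforecast
If only port 80 answers, the containerPort/targetPort of 8080 would explain both the connection reset on the external IP and the failed port-forward.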
Thanks in advance!

Related

Accessing service from custom port using k3d and traefik

I am trying to configure traefik and the loadbalancer to accept traffic from host port 9200.
Everything works fine for port 8443 (websecure). I am using k3d, with traefik initially disabled.
I can curl my "2048" service from my macOS host. The ingress is configured for the "websecure" entrypoint and a match is found.
curl --cacert ca.crt -I https://2048.127.0.0.1.nip.io:8443
HTTP/2 200
I have installed the exact same service and named it "2049". I want this service to be available on 9200 (I have removed the tls configuration to simplify things).
+ curl -vvv -k -I http://2049.127.0.0.1.nip.io:9200
* Trying 127.0.0.1:9200...
* Connected to 2049.127.0.0.1.nip.io (127.0.0.1) port 9200 (#0)
> HEAD / HTTP/1.1
> Host: 2049.127.0.0.1.nip.io:9200
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
Both services can be accessed from within the cluster.
I have installed traefik through helm and made sure ports are available.
k get -n traefik-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer 10.43.86.220 172.27.0.3,172.27.0.4,172.27.0.5 80:30039/TCP,443:30253/TCP,9092:30179/TCP,9200:31428/TCP 61m
# just to show that the lb is configured for port 9200 (iptables, /pause container)
k logs -n traefik-system pod/svclb-traefik-h5zs4
error: a container name must be specified for pod svclb-traefik-h5zs4, choose one of: [lb-tcp-80 lb-tcp-443 lb-tcp-9092 lb-tcp-9200]
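That error itself confirms the lb pod has a per-port forwarder for 9200; to actually read its logs, name the container (taken directly from the list in the error):
k logs -n traefik-system pod/svclb-traefik-h5zs4 -c lb-tcp-9200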
# my ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: game-2049
spec:
  entryPoints: # We listen to requests coming from port 9200
  - elasticsearch
  routes:
  - match: Host(`2049.127.0.0.1.nip.io`)
    kind: Rule
    services:
    - name: game-2049 # Requests will be forwarded to this service
      port: 80
# traefik is configured with these entrypoint addresses:
- "--entrypoints.web.address=:8000/tcp"
- "--entrypoints.websecure.address=:8443/tcp"
- "--entrypoints.kafka.address=:9092/tcp"
- "--entrypoints.elasticsearch.address=:9200/tcp"
My goal is to access elasticsearch on 9200 and kafka on 9092 from my macOS host using k3d. But first I need to get this configuration for "2049" right.
What am I missing?
I have this working on K3s using bitnami kafka.
You need two things:
First, define the entry point in the traefik config, which from your note you already have.
kubectl describe pods traefik-5bcf476bb9-qrqg7 --namespace traefik
Name: traefik-5bcf476bb9-qrqg7
Namespace: traefik
Priority: 0
Service Account: traefik
...
Status: Running
...
Image: traefik:2.9.1
Image ID: docker.io/library/traefik@sha256:4ebf68cdb33c162e8786ac83ece782ec0dbe583471c04dfd0af43f245b96c88f
Ports: 9094/TCP, 9100/TCP, 9000/TCP, 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
--global.checknewversion
--global.sendanonymoususage
--entrypoints.kafka.address=:9094/tcp
--entrypoints.metrics.address=:9100/tcp
--entrypoints.traefik.address=:9000/tcp
--entrypoints.web.address=:8000/tcp
--entrypoints.websecure.address=:8443/tcp
--api.dashboard=true
--ping=true
--metrics.prometheus=true
--metrics.prometheus.entrypoint=metrics
--providers.kubernetescrd
--providers.kubernetescrd.allowCrossNamespace=true
--providers.kubernetescrd.allowExternalNameServices=true
--providers.kubernetesingress
--providers.kubernetesingress.allowExternalNameServices=true
--providers.kubernetesingress.allowEmptyServices=true
--entrypoints.websecure.http.tls=true
State: Running
Started: Thu, 27 Oct 2022 16:27:22 -0400
Ready: True
I'm using TCP port 9094 for kafka traffic.
Second is the ingress: I'm using the IngressRouteTCP CRD.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kafka-ingress
  namespace: bitnami-kafka
spec:
  entryPoints:
  - kafka
  routes:
  - match: HostSNI(`*`)
    services:
    - name: my-bkafka-0-external
      namespace: bitnami-kafka
      port: 9094
Note: traefik is routing to a k8s LoadBalancer service.
kubectl get services --namespace bitnami-kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-bkafka ClusterIP 10.43.153.8 <none> 9092/TCP 20h
my-bkafka-0-external LoadBalancer 10.43.45.233 10.55.10.243 9094:30737/TCP 20h
my-bkafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 20h
my-bkafka-zookeeper ClusterIP 10.43.170.229 <none> 2181/TCP,2888/TCP,3888/TCP 20h
my-bkafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 20h
which is option A from bitnami's write-up on Kafka external access.

Cannot access standalone-chrome selenium server via Kubernetes service

I have a k0s Kubernetes cluster on a single node. I am trying to run selenium/standalone-chrome to create a remote Selenium node. The trouble is that it responds if I port-forward 4444 from the pod, but I cannot reach it via the Service port: I get connection refused. I don't know if it's because it ignores connections from non-localhost addresses.
The Pod definition for pod/standalone-chrome is:
apiVersion: v1
kind: Pod
metadata:
  name: standalone-chrome
spec:
  containers:
  - name: standalone-chrome
    image: selenium/standalone-chrome
    ports:
    - containerPort: 4444
    env:
    - name: JAVA_OPTS
      value: '-Dwebdriver.chrome.whitelistedIps=""'
The Service definition I have for service/standalone-chrome-service is:
apiVersion: v1
kind: Service
metadata:
  name: standalone-chrome-service
  labels:
    app: standalone-chrome
spec:
  ports:
  - port: 4444
    name: standalone-chrome
  type: ClusterIP
  selector:
    app: standalone-chrome
This creates the following, along with a busybox container I have just for testing connectivity.
NAME READY STATUS RESTARTS AGE
pod/busybox1 1/1 Running 70 2d22h
pod/standalone-chrome 1/1 Running 0 3m15s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18d
service/standalone-chrome-service ClusterIP 10.111.12.1 <none> 4444/TCP 3m5s
The issue I am having now is that I'm not able to access the remote Selenium service via standalone-chrome-service. I get connection refused. For example, here is trying to reach it via the busybox1 container:
$ wget http://standalone-chrome-service:4444
Connecting to standalone-chrome-service:4444 (10.111.12.1:4444)
wget: can't connect to remote host (10.111.12.1): Connection refused
I am able to port-forward from pod/standalone-chrome to my host machine using kubectl port-forward, and that works, which I think confirms the server itself is running but is not reachable via the Service:
$ kubectl port-forward pod/standalone-chrome 4444:4444 &
Forwarding from 127.0.0.1:4444 -> 4444
Forwarding from [::1]:4444 -> 4444
$ wget http://localhost:4444
--2021-11-22 13:37:20-- http://localhost:4444/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:4444... connected.
...
I'd greatly appreciate any help in figuring out how to get the Selenium remote server accessible via the Service.
EDIT: Here is the updated Service definition with name...
apiVersion: v1
kind: Service
metadata:
  name: standalone-chrome-service
  labels:
    app: standalone-chrome
spec:
  ports:
  - port: 4444
    name: standalone-chrome
  type: ClusterIP
  selector:
    name: standalone-chrome
Here is the output of describe:
Name: standalone-chrome-service
Namespace: default
Labels: app=standalone-chrome
Annotations: <none>
Selector: name=standalone-chrome
Type: ClusterIP
IP Families: <none>
IP: 10.100.179.116
IPs: 10.100.179.116
Port: standalone-chrome 4444/TCP
TargetPort: 4444/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
The Service syntax with:
selector:
  app: standalone-chrome
is correct; a selector is matched against pod labels.
Services match a set of Pods using labels and selectors, a grouping
primitive that allows logical operation on objects in Kubernetes.
Labels are key/value pairs attached to objects
See Using a Service to Expose Your App for more details.
Now you need to add this label (app: standalone-chrome) to your pod metadata:
apiVersion: v1
kind: Pod
metadata:
  name: standalone-chrome
  labels:
    app: standalone-chrome # this label must match the selector in the service
spec:
  containers:
  - name: standalone-chrome
    image: selenium/standalone-chrome
    ports:
    - containerPort: 4444
    env:
    - name: JAVA_OPTS
      value: '-Dwebdriver.chrome.whitelistedIps=""'
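Once the labels match, the Service should pick the pod up; a quick check (a sketch, with a placeholder pod IP) is that the endpoints list is no longer <none>:
kubectl get endpoints standalone-chrome-service
# NAME                        ENDPOINTS          AGE
# standalone-chrome-service   10.244.x.x:4444    1m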

AWS EKS: deployed pod exposed with a NodePort service is not accessible over the node IP and exposed port

I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html.
I have one worker node. Note: everything is in private subnets.
I'm just running a node.js hello-world container.
Code for pod definition
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000
Code for service definition
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running:
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created:
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows the correct endpoint and the correct selector.
So here is the problem:
When I hit NodeIP:31001 (the exposed port) I get "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000 and curl -v localhost:3000 is reachable.
I checked my security group; the inbound rule allows 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP entry enabling inbound traffic on port 31001.
If that does not work, make sure you are able to reach the node on that IP at all. I usually connect using a VPN client.
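For example (a sketch; the security group ID and source CIDR are placeholders):
# Allow inbound TCP 31001 on the worker-node security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 31001 \
  --cidr 203.0.113.0/24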
Fixed.
On AWS EKS, NodePorts do not work quite the way they do on pure Kubernetes.
When you expose:
- port: 31001
  targetPort: nodejs-port
  protocol: TCP
31001 is the ClusterIP port that gets exposed.
To get the nodePort, you must describe your service and look for the NodePort field in the description.
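For instance (a sketch of what to look for; <unset> appears because the port has no name):
kubectl describe service helloworld-service | grep NodePort
# Type:      NodePort
# NodePort:  <unset>  31001/TCP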

AWS EKS - cannot access apache httpd behind a LoadBalancer

I've deployed an Apache httpd server in a container and am attempting to expose it externally via a LoadBalancer. Although I can log on to the local host and get the expected response (curl -X GET localhost), when I try to access the external URL exposed by the load balancer I get an empty reply from server:
curl -X GET ad8d14ea0ba9611e8b2360afc35626a3-553331517.us-east-1.elb.amazonaws.com:5000
curl: (52) Empty reply from server
Any idea what I am missing - is there some kind of additional redirection going on that I'm unaware of?
The yaml is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: apache
  template:
    metadata:
      name: apachehost
      labels:
        pod: apache
    spec:
      containers:
      - name: apache
        image: myrepo/apache2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  labels:
    app: apache
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
  - name: port1
    port: 5000
    targetPort: 80
1. Check that your pod is running.
2. Check AWS IAM and the security groups; port 5000 may not be open to the public. Use curl from the Kubernetes master and check the port.
3. Share the pod logs.
Check the security group of your AWS load balancer for an inbound rule that opens port 5000.
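To rule out the service itself before digging into the load balancer, a quick in-cluster test (a sketch using a throwaway curl pod):
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never \
  -- curl -sv http://apache:5000
If this answers but the ELB URL does not, the problem sits in the load balancer or its security group rather than in the service.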
If your pods are running on Fargate the load balancer service will not work: https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html

Kubernetes can't connect redis on Cluster-IP of service

On Google Cloud I have this setup:
Pod and service with (php) web app
Pod and service with mysql server
Pod and service with redis server
The Kubernetes configuration files for the mysql server and redis server are almost identical; they differ only in name, port, and image.
I can connect to the mysql server from the web app, but I can't connect to the redis server.
I also can't reach the redis server from the web app via its service CLUSTER-IP, but I can reach it via its pod IP.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: launcher.gcr.io/google/redis4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
        env:
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  selector:
    app: redis
    role: master
    tier: backend
  ports:
  - port: 6379
    targetPort: 6379
The deployment's pod template is missing some labels, so the service is not selecting it.
Current deployment spec:
metadata:
  labels:
    app: redis
Include the other labels required by the service:
metadata:
  labels:
    app: redis
    role: master
    tier: backend
Or, depending on how you want to look at it, the service spec is trying to match labels that don't exist, so you can change the service from:
selector:
  app: redis
  role: master
  tier: backend
to:
selector:
  app: redis
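Either way, once the selector matches, the service should list the pod as an endpoint; a quick check (a sketch, with a placeholder pod IP):
kubectl get endpoints redis
# NAME    ENDPOINTS        AGE
# redis   10.x.x.x:6379    1m
If the endpoint shows up but redis still refuses connections on the CLUSTER-IP, the next thing to check would be the service's port/targetPort pair, which here is 6379 on both sides.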