AWS EKS - cannot access apache httpd behind a LoadBalancer - apache

I've deployed an Apache httpd server in a container and am attempting to expose it externally via a LoadBalancer. Although I can log on to the local host and get the expected response (curl -X GET localhost), when I try to access the external URL exposed by the load balancer I get an empty reply from the server:
curl -X GET ad8d14ea0ba9611e8b2360afc35626a3-553331517.us-east-1.elb.amazonaws.com:5000
curl: (52) Empty reply from server
Any idea what I am missing - is there some kind of additional redirection going on that I'm unaware of?
The yaml is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: apache
  template:
    metadata:
      name: apachehost
      labels:
        pod: apache
    spec:
      containers:
        - name: apache
          image: myrepo/apache2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  labels:
    app: apache
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
    - name: port1
      port: 5000
      targetPort: 80

1. Check that your pod is running.
2. Check AWS IAM and the security groups; port 5000 may not be open to the public. Use curl from the Kubernetes master (or from inside the cluster) to check the port; see the sketch below.
3. Share the pod logs.
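A minimal sketch of those checks, assuming kubectl access to the cluster (the throwaway test pod and its image are just examples):
kubectl get pods -l pod=apache
kubectl logs deployment/apache
# curl the Service port from inside the cluster, equivalent to checking from the master
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -v http://apache:5000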

Check the security group attached to your AWS load balancer and make sure port 5000 is open in its inbound rules.
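A sketch of how that could be checked with the AWS CLI (the security group ID below is a placeholder; the ELB's security group can be found via describe-load-balancers or in the console):
# list classic ELBs and the security groups attached to them
aws elb describe-load-balancers --query "LoadBalancerDescriptions[].{Name:LoadBalancerName,SecurityGroups:SecurityGroups}"
# inspect the inbound rules of that security group
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query "SecurityGroups[].IpPermissions"
# open port 5000 if it is missing from the inbound rules
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5000 --cidr 0.0.0.0/0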

If your pods are running on Fargate, the LoadBalancer service will not work: https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html

Related

How to configure Traefik UDP Ingress?

My UDP setup doesn't work.
In the traefik pod,
--entryPoints.udp.address=:4001/udp
is added. The port is listening, and the traefik UI shows a UDP entrypoint on port 4001. So the UDP entrypoint on 4001 is working.
I have applied this CRD:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp
spec:
  entryPoints:
    - udp
  routes:
    - services:
        - name: udp
          port: 4001
Kubernetes Service definition:
apiVersion: v1
kind: Service
metadata:
  name: udp
spec:
  selector:
    app: udp-server
  ports:
    - protocol: UDP
      port: 4001
      targetPort: 4001
I got this error on the traefik UI:
NAME: default-udp-0#kubernetescrd
ENTRYPOINTS: udp
SERVICE:
ERRORS: the udp service "default-udp-0#kubernetescrd" does not exist
What did I do wrong? Or is it a bug?
traefik version 2.3.1
I ran into the same trouble using k3s/Rancher and traefik 2.x. The problem was that configuring only the command-line switch made everything look fine in the traefik dashboard, but it just did not work.
In k3s the solution is to provide a traefik-config.yaml beside the traefik.yaml; traefik.yaml is always recreated on a restart of k3s.
Putting the file at /var/lib/rancher/k3s/server/manifests/traefik-config.yaml keeps the changes persistent.
What is missing is the entrypoint declaration. You might assume the command-line switch handles that as well, but it does not.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.udp.address=:55000/udp"
    entryPoints:
      udp:
        address: ':55000/udp'
Before going further, check the helm-install jobs in the kube-system namespace. If one of the two helm-install jobs errors out, traefik won't work.
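A quick way to check that (a sketch; in current k3s versions the jobs are named helm-install-traefik and helm-install-traefik-crd, but the names may differ):
# both helm-install jobs should show COMPLETIONS 1/1
kubectl -n kube-system get jobs
# inspect the logs if one of them failed
kubectl -n kube-system logs job/helm-install-traefik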
If everything above worked and you still have trouble, one option is simply to expose the UDP traffic as a normal Kubernetes LoadBalancer service, like the following example, which I tested successfully:
apiVersion: v1
kind: Service
metadata:
  name: nginx-udp-ingress-demo-svc-udp
spec:
  selector:
    app: nginx-udp-ingress-demo
  ports:
    - protocol: UDP
      port: 55000
      targetPort: 55000
  type: LoadBalancer
The entry type: LoadBalancer will start a pod on each Kubernetes node that forwards incoming UDP/55000 traffic to the service.
This worked for me on a k3s cluster, but it is not the native traefik solution asked for in the question; it is more of a workaround that makes things work in the first place.
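A rough way to test it once the service has an external address (assuming netcat is available on the client machine; the IP comes from whatever kubectl reports):
kubectl get svc nginx-udp-ingress-demo-svc-udp
# send a test datagram to the exposed UDP port
echo "ping" | nc -u -w1 <EXTERNAL-IP> 55000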
I found a source that seems to cover the native Traefik solution at https://github.com/traefik/traefik/blob/master/docs/content/routing/providers/kubernetes-crd.md.
It seems to have a working solution, but the explanation is very slim and it shows just the manifests. I need to test this out and come back.
This worked on my system.

How to configure https on deployment yaml file for asp.net core app locally in minikube?

I have an ASP.NET Core app that I want to configure with HTTPS in my local Kubernetes cluster using minikube.
The deployment yaml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-volume
  labels:
    app: kube-volume-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: ckubevolume
          image: kubevolume
          imagePullPolicy: Never
          ports:
            - containerPort: 80
            - containerPort: 443
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Development
            - name: ASPNETCORE_URLS
              value: https://+:443;http://+:80
            - name: ASPNETCORE_HTTPS_PORT
              value: '443'
            - name: ASPNETCORE_Kestrel__Certificates__Default__Password
              value: mypass123
            - name: ASPNETCORE_Kestrel__Certificates__Default__Path
              value: /app/https/aspnetapp.pfx
          volumeMounts:
            - name: ssl
              mountPath: "/app/https"
      volumes:
        - name: ssl
          configMap:
            name: game-config
You can see I have added environment variables for HTTPS in the YAML file.
I also created a service for this deployment. The yaml file of the service is:
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: NodePort
  selector:
    component: web
  ports:
    - name: http
      protocol: TCP
      port: 100
      targetPort: 80
    - name: https
      protocol: TCP
      port: 200
      targetPort: 443
But unfortunately the app does not open via the service when I run the minikube service service-1 command.
However, when I remove the environment variables for HTTPS, the app does open via the service. These are the lines which, when I remove them, let the app open:
- name: ASPNETCORE_URLS
  value: https://+:443;http://+:80
- name: ASPNETCORE_HTTPS_PORT
  value: '443'
- name: ASPNETCORE_Kestrel__Certificates__Default__Password
  value: mypass123
- name: ASPNETCORE_Kestrel__Certificates__Default__Path
  value: /app/https/aspnetapp.pfx
I also confirmed with the shell that the certificate is present in the /app/https folder.
What am I doing wrong?
I think your approach does not fit well with the architecture of Kubernetes. A TLS certificate (for https) is coupled to a hostname.
I would recommend one of two different approaches:
Expose your app with a Service of type: LoadBalancer
Expose your app with an Ingress resource
Expose your app with a Service of type LoadBalancer
This is typically called a Network LoadBalancer as it exposes your app for TCP or UDP directly.
See LoadBalancer access in the Minikube documentation. But beware that your app gets an external address from your LoadBalancer, and your TLS certificate probably has to match it.
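With Minikube this usually means running minikube tunnel so a Service of type: LoadBalancer actually receives an external IP; a brief sketch, assuming service-1 is switched to type: LoadBalancer:
# keep this running in a separate terminal
minikube tunnel
# the service should now show an EXTERNAL-IP instead of <pending>
kubectl get svc service-1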
Expose your app with an Ingress resource
This is the most common approach for microservices in Kubernetes. In addition to your Service of type: NodePort, you also need to create an Ingress resource for your app.
The cluster needs an Ingress controller, and that gateway will handle your TLS certificate instead of your app.
See How to use custom TLS certificate with ingress addon for how to configure both Ingress and TLS certificate in Minikube.
I would recommend going this route.
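As a rough sketch of that route (not the exact manifests from the linked guide; the host name, secret name, and certificate files below are placeholders), TLS termination moves into an Ingress that forwards plain HTTP to the existing service-1 on port 100:
# enable the NGINX ingress controller bundled with minikube
minikube addons enable ingress
# create a TLS secret from your certificate and key
kubectl create secret tls kube-volume-tls --cert=aspnetapp.crt --key=aspnetapp.key

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-volume-ingress
spec:
  tls:
    - hosts:
        - kubevolume.local
      secretName: kube-volume-tls
  rules:
    - host: kubevolume.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-1
                port:
                  number: 100
With this setup the app itself only needs to serve HTTP on port 80, and the ingress controller handles HTTPS.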

Unable to connect to Azure Kubernetes (AKS) external-ip

I'm trying to deploy my first asp.net app (sample VS 2019 project) to AKS.
I was able to create a docker container, run it locally and access it via http://localhost:8000/weatherforecast.
However, I'm not able to access the endpoint when it's deployed in AKS.
Yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetdemo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnetdemo
  template:
    metadata:
      labels:
        app: aspnetdemo
    spec:
      containers:
        - name: mycr
          image: mycr.azurecr.io/aspnetdemo:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetdemo-service
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: aspnetdemo
  type: LoadBalancer
I verified that the pod is running -
kubectl get pods
NAME READY STATUS RESTARTS AGE
aspnetdemo-deployment-* 2/2 Running 0 21m
and the service too -
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aspnetdemo-service LoadBalancer 10.0.X.X 13.89.X.X 80:30635/TCP 22m
I am getting an error when I try to access 13.89.X.X/weatherforecast:
"this site can't be reached - the connection was reset"
Any ideas?
When I run the following command, it returns an endpoint -
kubectl describe service aspnetdemo-service | select-string Endpoints
Endpoints: 10.244.X.X:8080
I also tried port forwarding and that didn't work either.
kubectl port-forward service/aspnetdemo-service 3000:80
http://localhost:3000/weatherforecast
E0512 15:16:24.429387 21356 portforward.go:400] an error occurred forwarding 3000 -> 8080: error forwarding port 8080 to pod a87ebc116d0e0b6e7066f32e945661c50d745d392c76844de084c7da96a874b8, uid : exit status 1: 2020/05/12 22:16:24 socat[18810] E write(5, 0x14da4c0, 535): Broken pipe
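One more check worth sketching here (the pod name is a placeholder, and curl has to exist in the image): the broken pipe on 8080 suggests nothing is listening on that port inside the container, and ASP.NET Core images of that generation listen on port 80 by default unless ASPNETCORE_URLS overrides it, so it may help to confirm from the pod itself:
# Kestrel logs the URL it binds to at startup
kubectl logs aspnetdemo-deployment-xxxxx -c mycr | grep -i "listening"
# if curl is available in the image, probe the port directly
kubectl exec aspnetdemo-deployment-xxxxx -c mycr -- curl -sv http://localhost:8080/weatherforecast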
Thanks in advance!

AWS-EKS deployed pod is exposed with type service Node Port is not accessible over nodePort IP and exposed port

I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html.
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Code for pod definition
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
    - name: k8s-demo
      image: wardviaene/k8s-demo
      ports:
        - name: nodejs-port
          containerPort: 3000
Code for service definition
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
    - port: 31001
      nodePort: 31001
      targetPort: nodejs-port
      protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows the correct endpoint and the correct selector.
So here is the problem:
When I hit NodeIP:exposed port (which is 31001) I'm getting "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000 and can reach the pod with curl -v localhost:3000.
I checked my security group; the inbound rule allows 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP entry enabling inbound traffic on port 31001.
If that does not work, make sure you are able to connect to the node through that IP. I usually connect using a VPN client.
Fixed.
On AWS EKS, NodePorts do not work the same way as on pure Kubernetes.
When you expose
- port: 31001
  targetPort: nodejs-port
  protocol: TCP
31001 is the ClusterIP port that gets exposed.
To find the actual NodePort, describe your service and look for the NodePort field in the description; that is the port that was exposed.
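A small sketch of that lookup (service name taken from the question):
kubectl describe svc helloworld-service | grep -i nodeport
# or pull just the assigned nodePort value
kubectl get svc helloworld-service -o jsonpath='{.spec.ports[0].nodePort}'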

Port 443 times out on kubernetes in GCE

I have created a Kubernetes cluster where I'm currently running only a Docker service that serves a static web page. It is working, exposing the standard port 80.
Now I want to attach an SSL certificate to the domain, and I have managed to do so running locally. But when I try to publish my service to the Kubernetes cluster, https://my.domain.com times out. It appears the service never receives the request but is blocked by Kubernetes or GCE.
Do I need to open up a firewall, or set up my cluster deployment to open port 443? What might be the issue?
I have heard about Ingress and Kubernetes secrets, and that this is the way to go. But everything I find uses ingress-nginx, and since I have only a single Docker service I do not use Nginx. To me it seems like letting the 443 call reach the service would be the easiest solution. Or am I wrong?
Below is my setup:
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: client-pods
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        name: client-pods
    spec:
      containers:
        - image: <CONTAINER>
          name: client-container
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 10
            timeoutSeconds: 1
I have also enabled HTTPS traffic on the GKE VM running the cluster, and the Dockerfile exposes both 80 and 443. I'm at a loss. Anyone know what I'm doing wrong?
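A minimal sketch of checks that can help narrow down whether the firewall or the service is the problem (assuming gcloud is pointed at the cluster's project; the rule name below is only an example):
# does the LoadBalancer service actually expose 443 with an external IP?
kubectl get svc client
# list GCE firewall rules; GKE normally creates one automatically for LoadBalancer service ports
gcloud compute firewall-rules list
# if a rule for 443 is genuinely missing, one could be added along these lines
gcloud compute firewall-rules create allow-https --allow tcp:443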