Port 443 times out on Kubernetes in GCE - SSL

I have created a Kubernetes cluster where I'm currently running only a single Docker service that serves a static web page. It works when exposed on the standard port 80.
Now I want to attach an SSL certificate to the domain, and I have managed to do so running locally. But when I publish my service to the Kubernetes cluster, https://my.domain.com times out. It looks like the service never receives the request; it is blocked by Kubernetes or GCE.
Do I need to open up a firewall, or set up my cluster deployment to open port 443? What might be the issue?
I have heard that Ingress and Kubernetes Secrets are the way to go, but all the examples I find use ingress-nginx, and since I only have a single Docker service I don't use Nginx. To me it seems like simply letting the 443 call reach the service would be the easiest solution. Or am I wrong?
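For reference, the Ingress-plus-Secret setup those examples describe looks roughly like this (the secret name, host, and certificate data are placeholders, not something I have deployed):
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret            # placeholder name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: client-ingress           # placeholder name
spec:
  tls:
  - hosts:
    - my.domain.com
    secretName: my-tls-secret
  rules:
  - host: my.domain.com
    http:
      paths:
      - backend:
          serviceName: client    # the Service below
          servicePort: 80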
Below is my setup:
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: client-pods
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        name: client-pods
    spec:
      containers:
      - image: <CONTAINER>
        name: client-container
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 1
I have also enabled HTTPS traffic on the GKE VM running the cluster, and the Dockerfile exposes both 80 and 443. I'm at a loss. Anyone know what I'm doing wrong?

Related

How to configure Traefik UDP Ingress?

My UDP setup doesn't work.
In the Traefik pod,
--entryPoints.udp.address=:4001/udp
is added. The port is listening, and the Traefik UI shows a UDP entrypoint on port 4001, so the UDP entrypoint on 4001 is working.
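For reference, the file-based static configuration equivalent of that CLI flag would be roughly:
# Traefik static configuration (file equivalent of the CLI flag above)
entryPoints:
  udp:
    address: ":4001/udp"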
I have applied this CRD:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp
spec:
  entryPoints:
    - udp
  routes:
    - services:
        - name: udp
          port: 4001
Kubernetes Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: udp
spec:
  selector:
    app: udp-server
  ports:
    - protocol: UDP
      port: 4001
      targetPort: 4001
I got this error on the Traefik UI:
NAME: default-udp-0#kubernetescrd
ENTRYPOINTS: udp
SERVICE:
ERRORS: the udp service "default-udp-0#kubernetescrd" does not exist
What did I do wrong? Or is it a bug?
Traefik version: 2.3.1
I ran into the same trouble using k3s/Rancher and Traefik 2.x. The problem was that configuring only the command line switch made everything look fine in the Traefik dashboard, but it just did not work.
In k3s the solution is to provide a traefik-config.yaml beside the traefik.yaml, since traefik.yaml is always recreated on a restart of k3s.
Putting the file at /var/lib/rancher/k3s/server/manifests/traefik-config.yaml keeps the changes persistent.
What is missing is the entrypoint declaration. You might assume the command line switch takes care of this as well, but it does not.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.udp.address=:55000/udp"
    entryPoints:
      udp:
        address: ':55000/udp'
Before going further, check the helm-install jobs in the kube-system namespace. If one of the two helm-install jobs errors out, Traefik won't work.
In case everything above worked and you still have trouble, one option is to simply expose the UDP traffic as a normal Kubernetes LoadBalancer Service, like the following example, which I tested successfully:
apiVersion: v1
kind: Service
metadata:
  name: nginx-udp-ingress-demo-svc-udp
spec:
  selector:
    app: nginx-udp-ingress-demo
  ports:
    - protocol: UDP
      port: 55000
      targetPort: 55000
  type: LoadBalancer
The entry type: LoadBalancer will start a pod on each Kubernetes node, which forwards incoming UDP/55000 traffic to the Service.
This worked for me on a k3s cluster, but it is not the native Traefik solution asked for in the question; it's more of a workaround that makes things work in the first place.
I found a source that seems to cover the native Traefik solution at https://github.com/traefik/traefik/blob/master/docs/content/routing/providers/kubernetes-crd.md.
It seems to have a working solution, but the explanation is very slim and it just shows the manifests. I need to test this out and come back.
This worked on my system.
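For reference, that approach boils down to something like the following sketch, wired to the :55000/udp entrypoint and the demo Service from earlier (the route name is just illustrative):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: udp-demo-route                         # illustrative name
spec:
  entryPoints:
    - udp
  routes:
    - services:
        - name: nginx-udp-ingress-demo-svc-udp  # the Service from the example above
          port: 55000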

How to configure HTTPS in the deployment YAML file for an ASP.NET Core app locally in Minikube?

I have an ASP.NET Core app that I want to configure with HTTPS in my local Kubernetes cluster using Minikube.
The deployment YAML file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-volume
  labels:
    app: kube-volume-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: ckubevolume
          image: kubevolume
          imagePullPolicy: Never
          ports:
            - containerPort: 80
            - containerPort: 443
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Development
            - name: ASPNETCORE_URLS
              value: https://+:443;http://+:80
            - name: ASPNETCORE_HTTPS_PORT
              value: '443'
            - name: ASPNETCORE_Kestrel__Certificates__Default__Password
              value: mypass123
            - name: ASPNETCORE_Kestrel__Certificates__Default__Path
              value: /app/https/aspnetapp.pfx
          volumeMounts:
            - name: ssl
              mountPath: "/app/https"
      volumes:
        - name: ssl
          configMap:
            name: game-config
You can see I have added environment variables for HTTPS in the YAML file.
I also created a Service for this deployment. The YAML file of the Service is:
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: NodePort
  selector:
    component: web
  ports:
    - name: http
      protocol: TCP
      port: 100
      targetPort: 80
    - name: https
      protocol: TCP
      port: 200
      targetPort: 443
But unfortunately the app does not open via the Service when I run the minikube service service-1 command.
However, when I remove the environment variables for HTTPS, the app does open via the Service. These are the lines which, when removed, make the app open:
- name: ASPNETCORE_URLS
  value: https://+:443;http://+:80
- name: ASPNETCORE_HTTPS_PORT
  value: '443'
- name: ASPNETCORE_Kestrel__Certificates__Default__Password
  value: mypass123
- name: ASPNETCORE_Kestrel__Certificates__Default__Path
  value: /app/https/aspnetapp.pfx
I also confirmed with the shell that the certificate is present in the /app/https folder.
What am I doing wrong?
I think your approach does not fit well with the architecture of Kubernetes. A TLS certificate (for https) is coupled to a hostname.
I would recommend one of two different approaches:
Expose your app with a Service of type: LoadBalancer
Expose your app with an Ingress resource
Expose your app with a Service of type LoadBalancer
This is typically called a Network LoadBalancer as it exposes your app for TCP or UDP directly.
See LoadBalancer access in the Minikube documentation. But beware that your app gets an external address from your LoadBalancer, and your TLS certificate probably has to match that address.
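A minimal sketch of such a Service, reusing the component: web selector from your Deployment (the name is just illustrative):
apiVersion: v1
kind: Service
metadata:
  name: service-1-lb             # illustrative name
spec:
  type: LoadBalancer
  selector:
    component: web
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
With Minikube you typically also need to run minikube tunnel so the Service gets an external address.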
Expose your app with an Ingress resource
This is the most common approach for Microservices in Kubernetes. In addition to your Service of type: NodePort you also need to create an Ingress resource for your app.
The cluster needs an Ingress controller, and that gateway handles your TLS certificate instead of your app.
See How to use custom TLS certificate with ingress addon for how to configure both Ingress and TLS certificate in Minikube.
I would recommend going this route.
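As a rough illustration of this route, an Ingress could terminate TLS and forward plain HTTP to your existing service-1 (the host and secret names are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # illustrative name
spec:
  tls:
    - hosts:
        - myapp.local            # placeholder host
      secretName: my-tls-secret  # placeholder secret created from your cert/key
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-1
                port:
                  number: 100    # the http port of the Service above
In Minikube the controller comes from minikube addons enable ingress, and with this approach you can drop the Kestrel HTTPS environment variables entirely.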

How can I deploy a .NET container with HTTPS (port 443) in AWS Kubernetes?

I have googled a lot and tried many configurations, but it still doesn't work.
I have a Kubernetes cluster in Amazon EKS and am trying to deploy a .NET container, which is a website.
I created a classic load balancer to expose it to the internet.
What I want is to expose both HTTP and HTTPS (80 and 443) to the internet.
I see many tutorials that solve this by pointing both 80 and 443 to a single port 80 on the container.
That means the container itself only runs on port 80, which I don't want.
Based on my understanding, to expose and run the app in the container on 443, I have to put an SSL certificate in it, and then the pod that runs the container somehow needs to trust the certificate automatically, otherwise it cannot receive any requests coming from the load balancer. Am I right?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-demo
spec:
  selector:
    matchLabels:
      app: dev-demo
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: dev-demo
        tier: backend
        track: stable
    spec:
      containers:
        - name: dev-demo
          image: xxxxxxxxxxx
          ports:
            - containerPort: 80
            - containerPort: 443
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 500m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 512Mi
          env:
            - name: ASPNETCORE_URLS
              value: "https://*:443;http://*:80"
            - name: ASPNETCORE_HTTPS_PORT
              value: "443"
            - name: ASPNETCORE_Kestrel__Certificates__Default__Path
              value: "xxxxxx.pfx"
            - name: ASPNETCORE_Kestrel__Certificates__Default__Password
              value: "xxxxxx"
      nodeSelector:
        kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: dev-demo
  labels:
    run: dev-demo
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxx
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    app: dev-demo
    tier: backend
    track: stable
  sessionAffinity: None
  type: LoadBalancer

ActiveMQ consumer in AKS

I have an ActiveMQ consumer in AKS that I am trying to connect to an external service.
I have set up an AKS load balancer with a dedicated IP with the following rules, but it will not connect.
apiVersion: v1
kind: Service
metadata:
  name: mx-load-balancer
spec:
  loadBalancerIP: 1.1.1.1
  type: LoadBalancer
  ports:
    - name: activemq-port-61616
      port: 61616
      targetPort: 61616
      protocol: TCP
  selector:
    k8s-app: handlers-mx
Any ideas?
First of all, your loadBalancerIP is not a real one; you need to use the real IP of your load balancer. Second, you need to add an annotation for a Service of type LoadBalancer to work:
annotations:
  service.beta.kubernetes.io/azure-load-balancer-resource-group: <LB_RESOURCE_GROUP>
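Put together with your Service, that would look roughly like this (the IP and resource group are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: mx-load-balancer
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: <LB_RESOURCE_GROUP>  # placeholder
spec:
  loadBalancerIP: <REAL_LB_IP>   # the actual public IP you reserved in that resource group
  type: LoadBalancer
  ports:
    - name: activemq-port-61616
      port: 61616
      targetPort: 61616
      protocol: TCP
  selector:
    k8s-app: handlers-mx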

AWS EKS - cannot access apache httpd behind a LoadBalancer

I've deployed an Apache httpd server in a container and am attempting to expose it externally via a LoadBalancer. Although I can log on to the host and get the expected response (curl -X GET localhost), when I try to access the external URL exposed by the load balancer I get an empty reply from the server:
curl -X GET ad8d14ea0ba9611e8b2360afc35626a3-553331517.us-east-1.elb.amazonaws.com:5000
curl: (52) Empty reply from server
Any idea what I am missing - is there some kind of additional redirection going on that I'm unaware of?
The yaml is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: apache
  template:
    metadata:
      name: apachehost
      labels:
        pod: apache
    spec:
      containers:
      - name: apache
        image: myrepo/apache2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  labels:
    app: apache
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
  - name: port1
    port: 5000
    targetPort: 80
1. Check that your pod is running.
2. Check AWS IAM and the security groups; port 5000 may not be open to the public. Use a curl command from the Kubernetes master and check the port.
3. Share the pod logs.
Check your AWS load balancer's security group for an inbound rule that opens port 5000.
If your pods are running on Fargate, the LoadBalancer Service will not work: https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html