What is the quickest way to expose a LoadBalancer service over HTTPS?

I have a simple web server running in a single pod on GKE. I have also exposed it using a LoadBalancer service. What is the easiest way to make this pod accessible over HTTPS?
gcloud container clusters list
NAME           LOCATION       MASTER_VERSION    MASTER_IP   MACHINE_TYPE  NODE_VERSION      NUM_NODES  STATUS
personal.....  us-central1-a  1.19.14-gke.1900  34.69.....  e2-medium     1.19.14-gke.1900  1          RUNNING

kubectl get service
NAME         TYPE          CLUSTER-IP  EXTERNAL-IP  PORT(S)       AGE
kubernetes   ClusterIP     10.....     <none>       443/TCP       437d
my-service   LoadBalancer  10.....     34.71......  80:30066/TCP  12d

kubectl get pods
NAME                  READY  STATUS   RESTARTS  AGE
nodeweb-server-9pmxc  1/1    Running  0         2d15h
EDIT: I also have a domain name registered if it's easier to use that instead of https://34.71....

First, your cluster should have Config Connector installed and functioning properly.
Start by deleting your existing load balancer service: kubectl delete service my-service
Create a static IP.
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: <name your IP>
spec:
  location: global
Retrieve the created IP: kubectl get computeaddress <the named IP> -o jsonpath='{.spec.address}'
Create a DNS "A" record that maps your registered domain to the created IP address. Check with nslookup <your registered domain name> to ensure the correct IP is returned.
Update your load balancer service spec by inserting the following line after type: LoadBalancer: loadBalancerIP: "<the created IP address>"
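For reference, a minimal sketch of what the updated Service manifest could look like (the selector label here is hypothetical; keep whatever selector and ports your existing service already uses):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: "<the created IP address>"  # the static IP reserved above
  selector:
    app: nodeweb-server  # hypothetical label; match your pod's labels
  ports:
  - port: 80
    targetPort: 80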
Re-create the service and check that kubectl get service my-service shows the EXTERNAL-IP set correctly.
Create a ManagedCertificate.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: <name your cert>
spec:
  domains:
    - <your registered domain name>
Then create the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name your ingress>
  annotations:
    networking.gke.io/managed-certificates: <the named certificate>
spec:
  rules:
    - host: <your registered domain name>
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: my-service
                port:
                  number: 80
Check with kubectl describe ingress <the named ingress> and look at the rules and annotations sections.
NOTE: It can take up to 15 minutes for the load balancer to be fully ready. Test with curl https://<your registered domain name>.

Related

Traefik & k3d: Dashboard is not reachable

This is my k3d cluster creation command:
$ k3d cluster create arxius \
--agents 3 \
--k3s-server-arg --disable=traefik \
-p "8888:80#loadbalancer" -p "9000:9000#loadbalancer" \
--volume ${HOME}/.k3d/registries.yaml:/etc/rancher/k3s/registries.yaml
Here my nodes:
CONTAINER ID   IMAGE                      COMMAND                  CREATED        STATUS             PORTS                                                                   NAMES
c83f2f746621   rancher/k3d-proxy:v3.0.1   "/bin/sh -c nginx-pr…"   2 weeks ago    Up 21 minutes      0.0.0.0:9000->9000/tcp, 0.0.0.0:8888->80/tcp, 0.0.0.0:45195->6443/tcp   k3d-arxius-serverlb
0ed525443da2   rancher/k3s:v1.18.6-k3s1   "/bin/k3s agent"         2 weeks ago    Up 21 minutes                                                                              k3d-arxius-agent-2
561a0a51e6d7   rancher/k3s:v1.18.6-k3s1   "/bin/k3s agent"         2 weeks ago    Up 21 minutes                                                                              k3d-arxius-agent-1
fc131df35105   rancher/k3s:v1.18.6-k3s1   "/bin/k3s agent"         2 weeks ago    Up 21 minutes                                                                              k3d-arxius-agent-0
4cfceabad5af   rancher/k3s:v1.18.6-k3s1   "/bin/k3s server --d…"   2 weeks ago    Up 21 minutes                                                                              k3d-arxius-server-0
873a4f157251   registry:2                 "/entrypoint.sh /etc…"   3 months ago   Up About an hour   0.0.0.0:5000->5000/tcp                                                  registry.localhost
I've installed traefik using default helm installation command:
$ helm install traefik traefik/traefik
After that, an ingressroute is also installed in order to reach dashboard:
Name:         traefik-dashboard
Namespace:    traefik
Labels:       app.kubernetes.io/instance=traefik
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=traefik
              helm.sh/chart=traefik-9.1.1
Annotations:  helm.sh/hook: post-install,post-upgrade
API Version:  traefik.containo.us/v1alpha1
Kind:         IngressRoute
Metadata:
  Creation Timestamp:  2020-12-09T19:07:41Z
  Generation:          1
  Managed Fields:
    API Version:  traefik.containo.us/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:helm.sh/hook:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
          f:helm.sh/chart:
      f:spec:
        .:
        f:entryPoints:
        f:routes:
    Manager:         Go-http-client
    Operation:       Update
    Time:            2020-12-09T19:07:41Z
  Resource Version:  141805
  Self Link:         /apis/traefik.containo.us/v1alpha1/namespaces/traefik/ingressroutes/traefik-dashboard
  UID:               1cbcd5ec-d967-440c-ad21-e41a59ca1ba8
Spec:
  Entry Points:
    traefik
  Routes:
    Kind:   Rule
    Match:  PathPrefix(`/dashboard`) || PathPrefix(`/api`)
    Services:
      Kind:  TraefikService
      Name:  api@internal
Events:  <none>
As you can see:
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
I'm trying to reach the dashboard. Nevertheless, no details are shown.
I've also tried to launch a curl command:
curl 'http://localhost:9000/api/overview'
curl: (52) Empty reply from server
Any ideas?
First, the default configuration of the traefik helm chart (version 9.1.1) sets up the traefik entryPoint on port 9000 but does not expose it automatically. So, if you check the service created for you, you will see that it only maps the web and websecure entrypoints.
Check this snippet from kubectl get svc traefik -o yaml
spec:
  clusterIP: xx.xx.xx.xx
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    nodePort: 30388
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    nodePort: 31115
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  sessionAffinity: None
  type: LoadBalancer
As explained in the docs, there are two ways to reach your dashboard. Either, you start a port-forward to your local machine for port 9000 or you expose the dashboard via ingressroute on another entrypoint.
Please be aware that you still need to port-forward even though your k3d proxy already binds port 9000. That binding only reserves the external port in case some load-balanced service wants to be exposed on it; at the moment it is not used and it is not necessary for either solution. You still need to port-forward to the traefik pod. After establishing the port-forward, you can access the dashboard at http://localhost:9000/dashboard/ (note the trailing slash, which the PathPrefix rule requires).
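For example (a minimal sketch, assuming the chart's default release name traefik in the current namespace):
kubectl port-forward deployment/traefik 9000:9000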
The other solution, exposing the dashboard on another entrypoint, requires no port-forward, but you need to provide a proper domain name (DNS entry + host rule) and take care not to expose it to the whole world, e.g. by adding an auth middleware.
See the changes highlighted below:
# dashboard.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
    - web  # <-- using the web entrypoint, not the traefik (9000) one
  routes:  # v-- adding a host rule
    - match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
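If you go that way, here is a sketch of such an auth middleware (the names dashboard-auth and dashboard-users are hypothetical; the secret must hold htpasswd-formatted credentials under a users key):
# middleware.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth
spec:
  basicAuth:
    secret: dashboard-users  # hypothetical secret containing htpasswd users
The route in the IngressRoute above would then reference it under routes[].middlewares with - name: dashboard-auth.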

Kubernetes: Why are my acme challenges getting EOF/no response?

I'm setting up a Kubernetes cluster in AWS using Kops. I've got an nginx ingress controller, and I'm trying to use letsencrypt to setup tls. Right now I can't get my ingress up and running because my certificate challenges get this error:
Waiting for http-01 challenge propagation: failed to perform self check GET request 'http://critsit.io/.well-known/acme-challenge/[challengeId]': Get http://critsit.io/.well-known/acme-challenge/[challengeId]: EOF
I've got a LoadBalancer service that's taking public traffic, and the certificate issuer automatically creates 2 other services which don't have public IPs.
What am I doing wrong here? Is there some networking issue preventing the pods from finishing the acme flow? Or maybe something else?
Note: I have setup an A record in Route53 to direct traffic to the LoadBalancer.
> kubectl get services
NAME                        TYPE          CLUSTER-IP      EXTERNAL-IP                           PORT(S)                      AGE
cm-acme-http-solver-m2q2g   NodePort      100.69.86.241   <none>                                8089:31574/TCP               3m34s
cm-acme-http-solver-zs2sd   NodePort      100.67.15.149   <none>                                8089:30710/TCP               3m34s
default-http-backend        NodePort      100.65.125.130  <none>                                80:32485/TCP                 19h
kubernetes                  ClusterIP     100.64.0.1      <none>                                443/TCP                      19h
landing                     ClusterIP     100.68.115.188  <none>                                3001/TCP                     93m
nginx-ingress               LoadBalancer  100.67.204.166  [myELB].us-east-1.elb.amazonaws.com   443:30596/TCP,80:30877/TCP   19h
Here's my ingress setup:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: critsit-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/acme-challenge-type: "http01"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - critsit.io
    - app.critsit.io
    secretName: letsencrypt-prod
  rules:
  - host: critsit.io
    http:
      paths:
      - path: /
        backend:
          serviceName: landing
          servicePort: 3001
And my certificate issuer:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: michael.vegeto@gmail.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
      selector: {}
Update: I've noticed that my load balancer has all of the instances marked as OutOfService because they're failing health checks. I wonder if that's related to the issue.
Second update: I abandoned this route altogether, and rebuilt my networking/ingress system using Istio
The error message you are getting can mean a wide variety of issues. However, there are a few things you can check/do in order to make it work:
Delete the Ingress, the certificates and cert-manager entirely. After that, add them all back to make sure everything installs cleanly. Sometimes stale certs or bad/multiple Ingress paths are the issue. For example, you can use Helm:
helm install my-nginx-ingress stable/nginx-ingress
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.15.0 --set installCRDs=true
Make sure your traffic allows HTTP, or uses HTTPS with a trusted cert.
Check the hairpin mode of your load balancer and make sure it is working.
Add the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation to the Ingress rule (see the snippet after this list). Wait a moment and see if a valid cert gets created.
You can manually issue certificates in your Kubernetes cluster. To do so, please follow this guide.
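For the ssl-redirect suggestion, a sketch of the annotation in place on the question's Ingress (metadata excerpt only):
metadata:
  name: critsit-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"  # disable the HTTPS redirect while the HTTP-01 challenge runs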
The problem can also solve itself in time. Currently, if the self check fails, cert-manager updates the status information with the reason (like: self check failed) and then tries again later (to allow for propagation). This is expected behavior.
This is an ongoing issue that is being tracked here and here.
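While waiting, you can watch the retries; a sketch assuming a cert-manager version (v0.11+) that exposes Challenge resources, with the certificate name taken from the secretName in the question's Ingress:
kubectl get challenges --all-namespaces
kubectl describe certificate letsencrypt-prod -n default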

Unable to connect to Azure Kubernetes (AKS) external-ip

I'm trying to deploy my first asp.net app (sample VS 2019 project) to AKS.
I was able to create a docker container, run it locally and access it via http://localhost:8000/weatherforecast.
However, I'm not able to access the endpoint when it's deployed in AKS.
Yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetdemo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnetdemo
  template:
    metadata:
      labels:
        app: aspnetdemo
    spec:
      containers:
      - name: mycr
        image: mycr.azurecr.io/aspnetdemo:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetdemo-service
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: aspnetdemo
  type: LoadBalancer
I verified that the pod is running -
kubectl get pods
NAME                      READY  STATUS   RESTARTS  AGE
aspnetdemo-deployment-*   2/2    Running  0         21m
and the service too -
kubectl get service
NAME                 TYPE          CLUSTER-IP  EXTERNAL-IP  PORT(S)       AGE
aspnetdemo-service   LoadBalancer  10.0.X.X    13.89.X.X    80:30635/TCP  22m
I am getting an error when I try to access 13.89.X.X/weatherforecast:
"this site can't be reached - the connection was reset"
Any ideas?
When I run the following command, it returns an endpoint -
kubectl describe service aspnetdemo-service | select-string Endpoints
Endpoints: 10.244.X.X:8080
I also tried port forwarding and that didn't work either.
kubectl port-forward service/aspnetdemo-service 3000:80
http://localhost:3000/weatherforecast
E0512 15:16:24.429387 21356 portforward.go:400] an error occurred forwarding 3000 -> 8080: error forwarding port 8080 to pod a87ebc116d0e0b6e7066f32e945661c50d745d392c76844de084c7da96a874b8, uid : exit status 1: 2020/05/12 22:16:24 socat[18810] E write(5, 0x14da4c0, 535): Broken pipe
Thanks in advance!

AWS EKS: pod exposed with a NodePort service is not accessible via the node IP and exposed port

I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Code for the pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000
Code for the service definition:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows the correct endpoint and the correct selector.
So here is the problem:
When I hit NodeIP:31001 (the exposed port), I get "This site can’t be reached".
Then I used kubectl port-forward podname 3000:3000, and curl -v localhost:3000 is reachable.
I checked my security group: the inbound rule allows ports 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP entry enabling inbound traffic on port 31001.
If that does not work, make sure you are able to reach the node on that IP at all. I usually connect using a VPN client.
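For example, a sketch with the AWS CLI (the security group ID and CIDR are placeholders; substitute your worker nodes' security group and your client range):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 31001 \
    --cidr 10.0.0.0/16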
Fixed.
On AWS EKS, NodePorts do not work the same way as on pure Kubernetes.
When you expose:
- port: 31001
  targetPort: nodejs-port
  protocol: TCP
31001 is the ClusterIP port that gets exposed.
In order to find the actual NodePort, you must describe your service and look for the NodePort field in the description.
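For example, either of these reads the exposed NodePort, using the service name from the question:
kubectl describe svc helloworld-service | grep NodePort
kubectl get svc helloworld-service -o jsonpath='{.spec.ports[0].nodePort}'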

AWS EKS - cannot access apache httpd behind a LoadBalancer

I've deployed an Apache httpd server in a container and am attempting to expose it externally via a LoadBalancer. Although I can log on to the local host and get the expected response (curl -X GET localhost), when I try to access the external URL exposed by the load balancer I get an empty reply from the server:
curl -X GET ad8d14ea0ba9611e8b2360afc35626a3-553331517.us-east-1.elb.amazonaws.com:5000
curl: (52) Empty reply from server
Any idea what I am missing - is there some kind of additional redirection going on that I'm unaware of?
The yaml is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: apache
  template:
    metadata:
      name: apachehost
      labels:
        pod: apache
    spec:
      containers:
      - name: apache
        image: myrepo/apache2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  labels:
    app: apache
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
  - name: port1
    port: 5000
    targetPort: 80
1. Check that your pod is running.
2. Check AWS IAM and the security group; port 5000 may not be open to the public. Use a curl command on the Kubernetes master to check the port.
3. Share the pod logs.
Check your AWS load balancer for an open port 5000 in the LB's security group, i.e. as an inbound rule.
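A sketch of that check with the AWS CLI (the load balancer name is inferred from the DNS name in the question and may differ; the security group ID is a placeholder):
aws elb describe-load-balancers \
    --load-balancer-names ad8d14ea0ba9611e8b2360afc35626a3 \
    --query 'LoadBalancerDescriptions[0].SecurityGroups'
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0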
If your pods are running on Fargate, the load balancer service will not work: https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html