curl: (7) Failed connect to xx.xx.xx.xx:80; Connection refused - ssl

I am trying to deploy nginx behind an Ingress:
kubectl run nginx --image=nginx
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-65899c769f-wf7dl 1/1 Running 0 9m
kubectl expose deploy nginx --port 80
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.254.75.184 <none> 80/TCP 9m
vi ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: kub-mst.coral.io
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
nginx kub-mst 80 9m
vi /etc/hosts
xx.xx.xx.xx kub-mst.coral.io
curl kub-mst.coral.io
curl: (7) Failed connect to kub-mst; Connection refused
I have a Kubernetes cluster, and when I run
curl http://xx.xx.xx.xx
it returns
curl: (7) Failed connect to xx.xx.xx.xx:80; Connection refused
and when I execute
kubectl cluster-info
it returns
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
xx.xx.xx.xx is the public IP.
How can I troubleshoot to find where the problem is?

You provided the ingress controller with a single rule that matches on the Host header of the request, yet you are testing with a request that does not send that header. Try:
curl -H 'Host: kub-mst.coral.io' http://xx.xx.xx.xx
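If you prefer to exercise name resolution as well, curl's --resolve flag pins the hostname to an IP for a single request, so the Host header is sent automatically (sketch using the same placeholder IP as above):

```shell
# Pin kub-mst.coral.io to the node's public IP for this request only;
# curl then sends "Host: kub-mst.coral.io" on its own.
curl --resolve kub-mst.coral.io:80:xx.xx.xx.xx http://kub-mst.coral.io/
```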

Related

Accessing service from custom port using k3d and traefik

I am trying to configure traefik and loadbalancer to accept traffic from host port 9200.
Everything works fine for port 8443 (websecure). I am using k3d and traefik is initially disabled.
I can curl my "2048" service from my macos host. The ingress is configured for "websecure" endpoint and a match is found.
curl --cacert ca.crt -I https://2048.127.0.0.1.nip.io:8443
HTTP/2 200
I have installed the exact same service and named it "2049". I want this service to be available on port 9200 (I have disabled TLS to simplify things).
+ curl -vvv -k -I http://2049.127.0.0.1.nip.io:9200
* Trying 127.0.0.1:9200...
* Connected to 2049.127.0.0.1.nip.io (127.0.0.1) port 9200 (#0)
> HEAD / HTTP/1.1
> Host: 2049.127.0.0.1.nip.io:9200
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
Both services can be accessed from within the cluster.
I have installed traefik through helm and made sure ports are available.
#
k get -n traefik-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer 10.43.86.220 172.27.0.3,172.27.0.4,172.27.0.5 80:30039/TCP,443:30253/TCP,9092:30179/TCP,9200:31428/TCP 61m
# just to display, the lb is configured for port 9200 (iptables, /pause container)
k logs -n traefik-system pod/svclb-traefik-h5zs4
error: a container name must be specified for pod svclb-traefik-h5zs4, choose one of: [lb-tcp-80 lb-tcp-443 lb-tcp-9092 lb-tcp-9200]
# my ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: game-2049
spec:
  entryPoints: # we listen to requests coming from port 9200
  - elasticsearch
  routes:
  - match: Host(`2049.127.0.0.1.nip.io`)
    kind: Rule
    services:
    - name: game-2049 # requests will be forwarded to this service
      port: 80
# traefik is configured with these endpoint addresses:
- "--entrypoints.web.address=:8000/tcp"
- "--entrypoints.websecure.address=:8443/tcp"
- "--entrypoints.kafka.address=:9092/tcp"
- "--entrypoints.elasticsearch.address=:9200/tcp"
My goal is to access elasticsearch 9200 and kafka 9092 from my MacOS host using k3d. But first I need to get this configuration for "2049" right.
What am I missing?
I have this working on K3s using Bitnami Kafka.
You need two things:
First, define the entry point in the traefik config, which, from your note, you already have.
kubectl describe pods traefik-5bcf476bb9-qrqg7 --namespace traefik
Name: traefik-5bcf476bb9-qrqg7
Namespace: traefik
Priority: 0
Service Account: traefik
...
Status: Running
...
Image: traefik:2.9.1
Image ID: docker.io/library/traefik@sha256:4ebf68cdb33c162e8786ac83ece782ec0dbe583471c04dfd0af43f245b96c88f
Ports: 9094/TCP, 9100/TCP, 9000/TCP, 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
--global.checknewversion
--global.sendanonymoususage
--entrypoints.kafka.address=:9094/tcp
--entrypoints.metrics.address=:9100/tcp
--entrypoints.traefik.address=:9000/tcp
--entrypoints.web.address=:8000/tcp
--entrypoints.websecure.address=:8443/tcp
--api.dashboard=true
--ping=true
--metrics.prometheus=true
--metrics.prometheus.entrypoint=metrics
--providers.kubernetescrd
--providers.kubernetescrd.allowCrossNamespace=true
--providers.kubernetescrd.allowExternalNameServices=true
--providers.kubernetesingress
--providers.kubernetesingress.allowExternalNameServices=true
--providers.kubernetesingress.allowEmptyServices=true
--entrypoints.websecure.http.tls=true
State: Running
Started: Thu, 27 Oct 2022 16:27:22 -0400
Ready: True
I'm using TCP port 9094 for kafka traffic.
Second, the ingress: I'm using the IngressRouteTCP CRD.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kafka-ingress
  namespace: bitnami-kafka
spec:
  entryPoints:
  - kafka
  routes:
  - match: HostSNI(`*`)
    services:
    - name: my-bkafka-0-external
      namespace: bitnami-kafka
      port: 9094
Note: traefik is routing to a k8s LoadBalancer service.
kubectl get services --namespace bitnami-kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-bkafka ClusterIP 10.43.153.8 <none> 9092/TCP 20h
my-bkafka-0-external LoadBalancer 10.43.45.233 10.55.10.243 9094:30737/TCP 20h
my-bkafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 20h
my-bkafka-zookeeper ClusterIP 10.43.170.229 <none> 2181/TCP,2888/TCP,3888/TCP 20h
my-bkafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 20h
which is option A from Bitnami's write-up on Kafka external access.

What is the quickest way to expose a LoadBalancer service over HTTPS?

I have a simple web server running in a single pod on GKE. I have also exposed it using a LoadBalancer service. What is the easiest way to make this pod accessible over HTTPS?
gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
personal..... us-central1-a 1.19.14-gke.1900 34.69..... e2-medium 1.19.14-gke.1900 1 RUNNING
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10..... <none> 443/TCP 437d
my-service LoadBalancer 10..... 34.71...... 80:30066/TCP 12d
kubectl get pods
NAME READY STATUS RESTARTS AGE
nodeweb-server-9pmxc 1/1 Running 0 2d15h
EDIT: I also have a domain name registered if it's easier to use that instead of https://34.71....
First, your cluster should have Config Connector installed and functioning properly.
Start by deleting your existing load balancer service: kubectl delete service my-service
Create a static IP.
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: <name your IP>
spec:
  location: global
Retrieve the created IP: kubectl get computeaddress <the named IP> -o jsonpath='{.spec.address}'
Create a DNS "A" record that maps your registered domain to the created IP address. Check with nslookup <your registered domain name> to ensure the correct IP is returned.
Update your load balancer service spec by inserting the following line after type: LoadBalancer: loadBalancerIP: "<the created IP address>"
Re-create the service and check that kubectl get service my-service shows the EXTERNAL-IP set correctly.
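A sketch of what the re-created Service could look like, assuming it targets port 80 of the pod (the selector is an assumption; use whatever your original service had):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: "<the created IP address>"
  selector:
    app: nodeweb-server   # assumption: reuse your original service's selector
  ports:
  - port: 80
    targetPort: 80
```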
Create ManagedCertificate.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: <name your cert>
spec:
  domains:
  - <your registered domain name>
Then create the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name your ingress>
  annotations:
    networking.gke.io/managed-certificates: <the named certificate>
spec:
  rules:
  - host: <your registered domain name>
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 80
Check with kubectl describe ingress <named ingress>; see the rules and annotations sections.
NOTE: It can take up to 15 minutes for the load balancer to be fully ready. Test with curl https://<your registered domain name>.
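Certificate provisioning is usually the slowest step; assuming the names above, its progress can be watched with:

```shell
# The ManagedCertificate must reach status "Active" before HTTPS works.
kubectl get managedcertificate <the named cert>
kubectl describe managedcertificate <the named cert>   # shows Certificate Status / Domain Status
```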

Traefik & k3d: Dashboard is not reachable

This is my k3d cluster creation command:
$ k3d cluster create arxius \
--agents 3 \
--k3s-server-arg --disable=traefik \
-p "8888:80#loadbalancer" -p "9000:9000#loadbalancer" \
--volume ${HOME}/.k3d/registries.yaml:/etc/rancher/k3s/registries.yaml
Here my nodes:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c83f2f746621 rancher/k3d-proxy:v3.0.1 "/bin/sh -c nginx-pr…" 2 weeks ago Up 21 minutes 0.0.0.0:9000->9000/tcp, 0.0.0.0:8888->80/tcp, 0.0.0.0:45195->6443/tcp k3d-arxius-serverlb
0ed525443da2 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-2
561a0a51e6d7 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-1
fc131df35105 rancher/k3s:v1.18.6-k3s1 "/bin/k3s agent" 2 weeks ago Up 21 minutes k3d-arxius-agent-0
4cfceabad5af rancher/k3s:v1.18.6-k3s1 "/bin/k3s server --d…" 2 weeks ago Up 21 minutes k3d-arxius-server-0
873a4f157251 registry:2 "/entrypoint.sh /etc…" 3 months ago Up About an hour 0.0.0.0:5000->5000/tcp registry.localhost
I've installed traefik using default helm installation command:
$ helm install traefik traefik/traefik
After that, an IngressRoute is also installed in order to reach the dashboard:
Name: traefik-dashboard
Namespace: traefik
Labels: app.kubernetes.io/instance=traefik
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=traefik
helm.sh/chart=traefik-9.1.1
Annotations: helm.sh/hook: post-install,post-upgrade
API Version: traefik.containo.us/v1alpha1
Kind: IngressRoute
Metadata:
Creation Timestamp: 2020-12-09T19:07:41Z
Generation: 1
Managed Fields:
API Version: traefik.containo.us/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:helm.sh/hook:
f:labels:
.:
f:app.kubernetes.io/instance:
f:app.kubernetes.io/managed-by:
f:app.kubernetes.io/name:
f:helm.sh/chart:
f:spec:
.:
f:entryPoints:
f:routes:
Manager: Go-http-client
Operation: Update
Time: 2020-12-09T19:07:41Z
Resource Version: 141805
Self Link: /apis/traefik.containo.us/v1alpha1/namespaces/traefik/ingressroutes/traefik-dashboard
UID: 1cbcd5ec-d967-440c-ad21-e41a59ca1ba8
Spec:
Entry Points:
traefik
Routes:
Kind: Rule
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
Services:
Kind: TraefikService
Name: api#internal
Events: <none>
As you can see:
Match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
I'm trying to reach the dashboard. Nevertheless, it does not load.
I've also tried to launch a curl command:
curl 'http://localhost:9000/api/overview'
curl: (52) Empty reply from server
Any ideas?
First, the default configuration of the traefik helm chart (in version 9.1.1) sets up the traefik entryPoint on port 9000 but does not expose it automatically. So, if you check the service created for you, you will see that it only maps the web and websecure entrypoints.
Check this snippet from kubectl get svc traefik -o yaml:
spec:
  clusterIP: xx.xx.xx.xx
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    nodePort: 30388
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    nodePort: 31115
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  sessionAffinity: None
  type: LoadBalancer
As explained in the docs, there are two ways to reach your dashboard. Either, you start a port-forward to your local machine for port 9000 or you expose the dashboard via ingressroute on another entrypoint.
Please be aware that you still need to port-forward even though your k3d proxy already binds port 9000. That binding is only a reservation in case some load-balanced service wants to be exposed on that external port; at the moment it is not used, and it is not necessary for either solution. You still need to port-forward to the traefik pod. After establishing the port-forward, you can access the dashboard at http://localhost:9000/dashboard/ (be aware of the trailing slash, which is needed for the PathPrefix rule).
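As a sketch, assuming traefik was installed into the default namespace under the release name traefik:

```shell
# Forward local port 9000 to the traefik pod's entrypoint of the same name,
# then open the dashboard (the trailing slash matters).
kubectl port-forward deployment/traefik 9000:9000
curl http://localhost:9000/dashboard/
```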
The other solution, exposing the dashboard on another entrypoint, requires no port-forward, but you need to set up a proper domain name (DNS entry + host rule) and take care not to expose it to the whole world by, e.g., adding an auth middleware.
See the changes highlighted below:
# dashboard.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
  - web # <-- using the web entrypoint, not the traefik (9000) one
  routes: # v-- adding a host rule
  - match: Host(`traefik.localhost`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
    kind: Rule
    services:
    - name: api#internal
      kind: TraefikService
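For the second solution, such an auth middleware could look like the following sketch (the names are illustrative; the referenced Secret must contain htpasswd-formatted users, and the route would list the middleware under spec.routes[].middlewares):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth
spec:
  basicAuth:
    secret: dashboard-auth-users  # assumption: a Secret with an htpasswd-style "users" key
```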

How to configure ssl for ldap/opendj while using ISTIO service mesh

I have a couple of microservices, and our backend is OpenDJ/LDAP. It has been configured to use SSL. Now we are trying to use Istio as our k8s service mesh. Every other service works fine, but the LDAP server (OpenDJ) does not. My guess is that it's because of the SSL configuration; it's meant to use a self-signed cert.
I have a script that creates a self-signed cert in the istio namespace, and I have tried to use it like this in the gateway.yaml:
- port:
    number: 4444
    name: tcp-admin
    protocol: TCP
  hosts:
  - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
I also have tried to use
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: opendj-istio-mtls
spec:
  host: opendj.{{.Release.Namespace }}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
      credentialName: tls-certificate
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: opendj-receive-tls
spec:
  targets:
  - name: opendj
  peers:
  - mtls: {}
for the LDAP server, but it's not connecting. While trying to use the tls spec in gateway.yaml, I get this error:
Error: admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: server cannot have TLS settings for non HTTPS/TLS ports
And the logs from opendj server
INFO - entrypoint - 2020-06-17 12:49:44,768 - Configuring OpenDJ.
WARNING - entrypoint - 2020-06-17 12:49:48,987 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
WARNING - entrypoint - 2020-06-17 12:49:53,293 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
Can someone please help me out how I should approach this.
To enable non-HTTPS traffic over TLS connections, you have to use protocol TLS. TLS implies the connection will be routed based on the SNI header to the destination without terminating the TLS connection. You can check this.
- port:
    number: 4444
    name: tls
    protocol: TLS
  hosts:
  - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
Please check the Istio documentation on this as well.
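If you route on SNI without terminating TLS at the gateway, the gateway server is typically paired with a VirtualService that matches the SNI host; a sketch under assumed names (the hostname, gateway name, and namespace are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: opendj-tls
spec:
  hosts:
  - "opendj.example.com"      # assumption: the SNI name clients present
  gateways:
  - opendj-gateway            # assumption: the Gateway holding the TLS server above
  tls:
  - match:
    - port: 4444
      sniHosts:
      - "opendj.example.com"
    route:
    - destination:
        host: opendj.default.svc.cluster.local
        port:
          number: 4444
```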

AWS EKS: pod exposed with a NodePort service is not accessible over the node IP and exposed port

I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Code for pod definition
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000
Code for service definition
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows the correct endpoint and selector.
So here is the problem
When I hit NodeIP:31001 (the exposed port), I get "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000,
and curl -v localhost:3000 is reachable.
My security group's inbound rule allows ports 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster then in the security group for worker nodes you will have to add a custom TCP entry for enabling inbound traffic on port 31001.
If that does not work then make sure you are able to connect to the Node through that IP. I usually connect using a VPN client.
Fixed.
On AWS EKS, NodePorts do not behave exactly as on plain Kubernetes. When you expose

- port: 31001
  targetPort: nodejs-port
  protocol: TCP

31001 is the ClusterIP (service) port that gets exposed. To find the NodePort, describe your service and look for the NodePort field in the output.
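For example, with the service from the question (the assigned port will vary per cluster):

```shell
# Show the NodePort the cluster actually assigned for this service.
kubectl describe svc helloworld-service | grep NodePort
# Or pull just the number:
kubectl get svc helloworld-service -o jsonpath='{.spec.ports[0].nodePort}'
```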