Cannot Access Kubernetes Dashboard through SSH Tunnel

I'm running minikube on a Linux cloud instance (Ubuntu 20.04).
I enabled the ingress and kubernetes-dashboard minikube addons, then applied the following configuration file to stand up an Ingress pointing to the kubernetes-dashboard service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: dashboard.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 80
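(Aside: the kubernetes.io/ingress.class annotation used above is deprecated on newer clusters in favor of the ingressClassName field, which is also why the CLASS column in the output below shows <none>; a minimal equivalent sketch:)
spec:
  ingressClassName: nginx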
>> kubectl get svc -A
NAMESPACE              NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default                kubernetes                           ClusterIP   10.96.0.1       <none>        443/TCP                      50m
ingress-nginx          ingress-nginx-controller             NodePort    10.103.29.93    <none>        80:31819/TCP,443:31799/TCP   43m
ingress-nginx          ingress-nginx-controller-admission   ClusterIP   10.108.79.5     <none>        443/TCP                      43m
kube-system            kube-dns                             ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP       50m
kubernetes-dashboard   dashboard-metrics-scraper            ClusterIP   10.105.118.63   <none>        8000/TCP                     49m
kubernetes-dashboard   kubernetes-dashboard                 ClusterIP   10.108.83.77    <none>        80/TCP                       49m
>> kubectl get ingress -A
NAMESPACE              NAME                CLASS    HOSTS           ADDRESS        PORTS   AGE
kubernetes-dashboard   dashboard-ingress   <none>   dashboard.com   192.168.49.2   80      51m
Then I added the following entry to the /etc/hosts file on the Linux server, so that dashboard.com resolves statically to the minikube node IP (no real DNS lookup involved):
127.0.0.1 localhost
192.168.49.2 dashboard.com # This is the line I added
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
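To confirm the static mapping is picked up, a quick sanity check on the Linux server (getent consults /etc/hosts first under the default nsswitch order):
>> getent hosts dashboard.com
192.168.49.2    dashboard.com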
After all of this setup, I'm able to hit the dashboard.com endpoint from inside the Linux server:
>> curl dashboard.com
<!--
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--><!DOCTYPE html><html lang="en" dir="ltr"><head>
<meta charset="utf-8">
<title>Kubernetes Dashboard</title>
<link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png">
<meta name="viewport" content="width=device-width">
<style>html,body{height:100%;margin:0}*::-webkit-scrollbar{background:transparent;height:8px;width:8px}</style><link rel="stylesheet" href="styles.243e6d874431c8e8.css" media="print" onload="this.media='all'"><noscript><link rel="stylesheet" href="styles.243e6d874431c8e8.css"></noscript></head>
<body>
<kd-root></kd-root>
<script src="runtime.134ad7745384bed8.js" type="module"></script><script src="polyfills.5c84b93f78682d4f.js" type="module"></script><script src="scripts.2c4f58d7c579cacb.js" defer></script><script src="en.main.3550e3edca7d0ed8.js" type="module"></script>
</body></html>
Now, to access the dashboard through the browser on my local machine, I set up this SSH local-forwarding tunnel:
ssh -L 30001:dashboard.com:80 azureuser@<linux-instance-ip-address> -i <path-to-pem-file>
and hit the following address in my browser:
http://localhost:30001
but this gives me a 404 response from nginx.
How can I access the dashboard from my local machine? The SSH connection is working and the tunnel gets set up with no errors.
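(A hedged diagnosis: the nginx ingress routes by the HTTP Host header, and through the tunnel the browser sends Host: localhost:30001, which matches no ingress rule, hence nginx's 404. Forcing the expected header from the local machine should confirm this:)
# run on the local machine while the tunnel is up
curl -H "Host: dashboard.com" http://localhost:30001
# if this returns the dashboard HTML, one workaround is adding
# "127.0.0.1 dashboard.com" to the *local* /etc/hosts and browsing to
# http://dashboard.com:30001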

Related

Accessing service from custom port using k3d and traefik

I am trying to configure traefik and the load balancer to accept traffic on host port 9200.
Everything works fine for port 8443 (websecure). I am using k3d, and traefik is initially disabled.
I can curl my "2048" service from my macOS host. The ingress is configured for the "websecure" endpoint and a match is found.
curl --cacert ca.crt -I https://2048.127.0.0.1.nip.io:8443
HTTP/2 200
I have installed the exact same service and named it "2049". I want this service to be available on port 9200 (I have removed TLS to simplify things).
+ curl -vvv -k -I http://2049.127.0.0.1.nip.io:9200
* Trying 127.0.0.1:9200...
* Connected to 2049.127.0.0.1.nip.io (127.0.0.1) port 9200 (#0)
> HEAD / HTTP/1.1
> Host: 2049.127.0.0.1.nip.io:9200
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
Both services can be accessed from within the cluster.
I have installed traefik through Helm and made sure the ports are available.
#
k get -n traefik-system svc
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP                        PORT(S)                                                    AGE
traefik   LoadBalancer   10.43.86.220   172.27.0.3,172.27.0.4,172.27.0.5   80:30039/TCP,443:30253/TCP,9092:30179/TCP,9200:31428/TCP   61m
# just to display, the lb is configured for port 9200 (iptables, /pause container)
k logs -n traefik-system pod/svclb-traefik-h5zs4
error: a container name must be specified for pod svclb-traefik-h5zs4, choose one of: [lb-tcp-80 lb-tcp-443 lb-tcp-9092 lb-tcp-9200]
# my ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: game-2049
spec:
  entryPoints: # We listen to requests coming from port 9200
    - elasticsearch
  routes:
    - match: Host(`2049.127.0.0.1.nip.io`)
      kind: Rule
      services:
        - name: game-2049 # Requests will be forwarded to this service
          port: 80
# traefik is configured with these entrypoint addresses:
- "--entrypoints.web.address=:8000/tcp"
- "--entrypoints.websecure.address=:8443/tcp"
- "--entrypoints.kafka.address=:9092/tcp"
- "--entrypoints.elasticsearch.address=:9200/tcp"
My goal is to access elasticsearch on 9200 and kafka on 9092 from my macOS host using k3d. But first I need to get this configuration for "2049" right.
What am I missing?
I have this working on K3s using Bitnami Kafka.
You need two things:
First, define the entry point in the traefik config, which from your note you already have:
kubectl describe pods traefik-5bcf476bb9-qrqg7 --namespace traefik
Name: traefik-5bcf476bb9-qrqg7
Namespace: traefik
Priority: 0
Service Account: traefik
...
Status: Running
...
Image: traefik:2.9.1
Image ID: docker.io/library/traefik@sha256:4ebf68cdb33c162e8786ac83ece782ec0dbe583471c04dfd0af43f245b96c88f
Ports: 9094/TCP, 9100/TCP, 9000/TCP, 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
--global.checknewversion
--global.sendanonymoususage
--entrypoints.kafka.address=:9094/tcp
--entrypoints.metrics.address=:9100/tcp
--entrypoints.traefik.address=:9000/tcp
--entrypoints.web.address=:8000/tcp
--entrypoints.websecure.address=:8443/tcp
--api.dashboard=true
--ping=true
--metrics.prometheus=true
--metrics.prometheus.entrypoint=metrics
--providers.kubernetescrd
--providers.kubernetescrd.allowCrossNamespace=true
--providers.kubernetescrd.allowExternalNameServices=true
--providers.kubernetesingress
--providers.kubernetesingress.allowExternalNameServices=true
--providers.kubernetesingress.allowEmptyServices=true
--entrypoints.websecure.http.tls=true
State: Running
Started: Thu, 27 Oct 2022 16:27:22 -0400
Ready: True
I'm using TCP port 9094 for kafka traffic.
Second, the Ingress: I'm using the IngressRouteTCP CRD.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kafka-ingress
  namespace: bitnami-kafka
spec:
  entryPoints:
    - kafka
  routes:
    - match: HostSNI(`*`)
      services:
        - name: my-bkafka-0-external
          namespace: bitnami-kafka
          port: 9094
Note: traefik is routing to a k8s LoadBalancer.
kubectl get services --namespace bitnami-kafka
NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
my-bkafka                      ClusterIP      10.43.153.8     <none>         9092/TCP                     20h
my-bkafka-0-external           LoadBalancer   10.43.45.233    10.55.10.243   9094:30737/TCP               20h
my-bkafka-headless             ClusterIP      None            <none>         9092/TCP,9093/TCP            20h
my-bkafka-zookeeper            ClusterIP      10.43.170.229   <none>         2181/TCP,2888/TCP,3888/TCP   20h
my-bkafka-zookeeper-headless   ClusterIP      None            <none>         2181/TCP,2888/TCP,3888/TCP   20h
which is option A from Bitnami's write-up on Kafka external access.
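(One k3d-specific piece the K3s answer above does not cover: besides the traefik entry point, k3d must publish the host port through its load balancer, or connections from the macOS host will never reach the cluster. A sketch, assuming a hypothetical cluster name:)
k3d cluster create mycluster -p "9200:9200@loadbalancer" -p "9092:9092@loadbalancer"
# newer k3d releases can also add a mapping to an existing cluster:
# k3d cluster edit mycluster --port-add "9200:9200@loadbalancer"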

cert-manager.io: no certificate issued

I am working on setting up an ingress-controller for my microk8s setup.
A minimal whoami service is up and running:
microk8s kubectl describe service whoami
Name: whoami
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=whoami
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.112
IPs: 10.152.183.112
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.1.76.35:80
Session Affinity: None
Events: <none>
Response via the ClusterIP is working:
curl 10.152.183.112:80
Hostname: whoami-84f56668f5-g2j8j
IP: 127.0.0.1
IP: ::1
IP: 10.1.76.35
IP: fe80::90cb:25ff:fe3f:2fe7
RemoteAddr: 192.168.0.100:46568
GET / HTTP/1.1
Host: 10.152.183.112
User-Agent: curl/7.68.0
Accept: */*
I have now configured a minimal ingress.yaml as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "cert-manager"
spec:
  tls:
    - hosts:
        - www.example-domain.com
      secretName: demo-key
  rules:
    - host: www.example-domain.com
      http:
        paths:
          - path: /whoami
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
The ingress seems to be up and running:
Name: whoami-ingress
Namespace: default
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
demo-key terminates www.example-domain.com
Rules:
Host Path Backends
---- ---- --------
www.example-domain.com
/whoami whoami:80 (10.1.76.35:80)
Annotations: cert-manager.io/cluster-issuer: cert-manager
nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 22m (x2 over 22m) nginx-ingress-controller Scheduled for sync
Normal Sync 15m nginx-ingress-controller Scheduled for sync
Pinging the domain works (so DNS resolution seems to work).
But when checking for certificates, there aren't any:
microk8s kubectl get certificates
No resources found in default namespace.
Where did I go wrong? Shouldn't cert-manager.io take care of the certificate?
UPDATE:
It was pointed out that I seem to lack a ClusterIssuer. I have now set one up according to the cert-manager docs using ACME:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cert-manager
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: mail@domain.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: demo-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
But again, no luck. I can reach my cluster from outside, but only without HTTPS. kubectl get certificates still shows the "No resources found" message, and the served certificate is classified as untrusted, issued to "Kubernetes Ingress Controller Fake Certificate".
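(Since kubectl get certificates returns nothing, cert-manager's ingress-shim apparently never created a Certificate from the annotated Ingress, and the fake certificate is just ingress-nginx's built-in default. A few hedged checks, assuming a standard cert-manager install in the cert-manager namespace:)
microk8s kubectl describe clusterissuer cert-manager
microk8s kubectl logs -n cert-manager deploy/cert-manager
microk8s kubectl get certificaterequests,orders,challenges -A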

What is the quickest way to expose a LoadBalancer service over HTTPS?

I have a simple web server running in a single pod on GKE. I have also exposed it using a LoadBalancer service. What is the easiest way to make this pod accessible over HTTPS?
gcloud container clusters list
NAME           LOCATION       MASTER_VERSION     MASTER_IP    MACHINE_TYPE   NODE_VERSION       NUM_NODES   STATUS
personal.....  us-central1-a  1.19.14-gke.1900   34.69.....   e2-medium      1.19.14-gke.1900   1           RUNNING
kubectl get service
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.....      <none>        443/TCP        437d
my-service   LoadBalancer   10.....      34.71......   80:30066/TCP   12d
kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
nodeweb-server-9pmxc   1/1     Running   0          2d15h
EDIT: I also have a domain name registered if it's easier to use that instead of https://34.71....
First, your cluster should have Config Connector installed and functioning properly.
Start by deleting your existing load balancer service: kubectl delete service my-service
Create a static IP:
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: <name your IP>
spec:
  location: global
Retrieve the created IP: kubectl get computeaddress <the named IP> -o jsonpath='{.spec.address}'
Create a DNS "A" record that maps your registered domain to the created IP address. Check with nslookup <your registered domain name> to ensure the correct IP is returned.
Update your load balancer service spec by inserting the following line after type: LoadBalancer: loadBalancerIP: "<the created IP address>" (a full sketch follows below).
Re-create the service and check that kubectl get service my-service has the EXTERNAL-IP set correctly.
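(A minimal sketch of the updated Service manifest; the selector and port values below are assumptions, only the loadBalancerIP line is the actual change:)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: "<the created IP address>"
  selector:
    app: nodeweb-server   # assumed pod label
  ports:
    - port: 80
      targetPort: 80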
Create a ManagedCertificate:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: <name your cert>
spec:
  domains:
    - <your registered domain name>
Then create the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name your ingress>
  annotations:
    networking.gke.io/managed-certificates: <the named certificate>
spec:
  rules:
    - host: <your registered domain name>
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: my-service
                port:
                  number: 80
Check with kubectl describe ingress <named ingress> and review the rules and annotations sections.
NOTE: It can take up to 15 minutes for the load balancer to be fully ready. Test with curl https://<your registered domain name>.

Istio Egress Gateways with TLS Origination CERTIFICATE_VERIFY_FAILED

I'm trying to set up Istio (v1.7.3) on AKS (v1.16.13) so that TLS origination is performed for some HTTP destinations. So when one of my pods invokes abc.mydomain.com over HTTP, the egress request is upgraded to HTTPS and the TLS verification is done through the egress gateway.
I have followed these 2 tutorials to achieve that:
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway-tls-origination-sds/
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/
I ended up with something like this (abc.mydomain.com is an external URL, which is why I created a ServiceEntry for it):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: abc.mydomain.com
spec:
  hosts:
    - abc.mydomain.com
  ports:
    - number: 80
      name: http
      protocol: HTTP
    - number: 443
      name: https
      protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
  namespace: istio-system
spec:
  selector:
    istio: egressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - abc.mydomain.com
      tls:
        mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-abc
  namespace: istio-system
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
    - name: abc
      trafficPolicy:
        loadBalancer:
          simple: ROUND_ROBIN
        portLevelSettings:
          - port:
              number: 443
            tls:
              mode: ISTIO_MUTUAL
              sni: abc.mydomain.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-abc-through-egress-gateway
  namespace: istio-system
spec:
  hosts:
    - abc.mydomain.com
  gateways:
    - istio-egressgateway
    - mesh
  http:
    - match:
        - gateways:
            - mesh
          port: 80
      route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            subset: abc
            port:
              number: 443
          weight: 100
    - match:
        - gateways:
            - istio-egressgateway
          port: 443
      route:
        - destination:
            host: abc.mydomain.com
            port:
              number: 443
          weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-abc
  namespace: istio-system
spec:
  host: abc.mydomain.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
      - port:
          number: 443
        tls:
          mode: SIMPLE
          credentialName: abc # this must match the secret created earlier without the "-cacert" suffix
          sni: abc.mydomain.com
I'm creating a secret for my CA root with: kubectl create secret generic abc-cacert --from-file=ca.crt=mydomainrootca.crt -n istio-system
I've used the same certificate for my Java applications, and I can successfully invoke HTTPS for the same URL using a JKS. It seems the certificate is loaded properly into the egress gateway (kubectl logs -f -l istio=egressgateway -n istio-system):
2020-10-06T20:00:36.611607Z info sds resource:abc-cacert new connection
2020-10-06T20:00:36.612907Z info sds Skipping waiting for gateway secret
2020-10-06T20:00:36.612994Z info cache GenerateSecret abc-cacert
2020-10-06T20:00:36.613063Z info sds resource:abc-cacert pushed root cert to proxy
When I invoke curl abc.mydomain.com from a pod running on my cluster I'm getting this error from egress gateway:
[2020-10-06T19:33:40.902Z] "GET / HTTP/1.1" 503 UF,URX "-" "TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED" 0 91 172 - "192.244.0.191" "curl/7.64.0" "b618b1e6-e543-4053-bf2f-8ae56664545f" "abc.mydomain.com" "192.223.24.254:443" outbound|443||abc.mydomain.com - 192.244.0.188:8443 192.244.0.191:41306 abc.mydomain.com -
Any idea what I might be doing wrong? I'm quite new to Istio and I don't fully understand the need for all of the DestinationRule/VirtualService resources, so please bear with me.
UPDATE 1
After putting the DestinationRules in the namespace where my pod is running, I'm getting the following:
curl abc.mydomain.com
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.17.10</center>
</body>
</html>
Here is the output of istioctl proxy-status:
NAME                                                CDS      LDS      EDS      RDS      ISTIOD                    VERSION
istio-egressgateway-695dc4fc7c-p5p42.istio-system   SYNCED   SYNCED   SYNCED   SYNCED   istiod-5c6b7b5b8f-csggg   1.7.3
istio-ingressgateway-5689f7c67-j54m7.istio-system   SYNCED   SYNCED   SYNCED   SYNCED   istiod-5c6b7b5b8f-csggg   1.7.3
test-5bbfdb8f4b-hg7vf.test                          SYNCED   SYNCED   SYNCED   SYNCED   istiod-5c6b7b5b8f-csggg   1.7.3
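(Two hedged checks, given the CERTIFICATE_VERIFY_FAILED error and the later 400: first, compare what the upstream actually presents against mydomainrootca.crt; second, confirm the egress gateway really holds the pushed root cert:)
# inspect the upstream's served certificate chain
openssl s_client -connect abc.mydomain.com:443 -servername abc.mydomain.com </dev/null | openssl x509 -noout -subject -issuer
# list the secrets loaded into the egress gateway proxy
istioctl proxy-config secret istio-egressgateway-695dc4fc7c-p5p42.istio-system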

Kubernetes cert manager ssl error verify ACME account

I can't create a wildcard SSL certificate with cert-manager. I added my domain to Cloudflare, but cert-manager can't verify the ACME account. How do I resolve this problem?
I want a wildcard certificate for my domain, to use with any deployment. How could I do that?
I found the error, but not how to resolve it: my cluster's DNS can't resolve acme-v02.api.letsencrypt.org.
My k8s version is:
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3-k3s.1", GitCommit:"8343999292c55c807be4406fcaa9f047e8751ffd", GitTreeState:"clean", BuildDate:"2019-06-12T04:56+00:00Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Error log:
I0716 13:06:11.712878 1 controller.go:153] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="default/issuer-letsencrypt"
I0716 13:06:11.713218 1 setup.go:162] cert-manager/controller/issuers "level"=0 "msg"="ACME server URL host and ACME private key registration host differ. Re-checking ACME account registration" "related_resource_kind"="Secret" "related_resource_name"="issuer-letsencrypt" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
I0716 13:06:11.713245 1 logger.go:88] Calling GetAccount
E0716 13:06:16.714911 1 setup.go:172] cert-manager/controller/issuers "msg"="failed to verify ACME account" "error"="Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "related_resource_kind"="Secret" "related_resource_name"="issuer-letsencrypt" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
I0716 13:06:16.715527 1 sync.go:76] cert-manager/controller/issuers "level"=0 "msg"="Error initializing issuer: Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default"
E0716 13:06:16.715609 1 controller.go:155] cert-manager/controller/issuers "msg"="re-queuing item due to error processing" "error"="Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "key"="default/issuer-letsencrypt"
My Issuer:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: issuer-letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: yusufkaan142@gmail.com
    privateKeySecretRef:
      name: issuer-letsencrypt
    dns01:
      providers:
        - name: cf-dns
          cloudflare:
            email: mail@gmail.com
            apiKeySecretRef:
              name: cloudflare-api-key
              key: api-key.txt
Secret:
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-key
  namespace: cert-manager
type: Opaque
data:
  api-key.txt: base64encoded
My Certificate:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wilcard-theykk-net
  namespace: cert-manager
spec:
  secretName: wilcard-theykk-net
  issuerRef:
    name: issuer-letsencrypt
    kind: Issuer
  commonName: '*.example.net'
  dnsNames:
    - '*.example.net'
  acme:
    config:
      - dns01:
          provider: cf-dns
        domains:
          - '*.example.net'
          - 'example.net'
DNS config for k8s:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["1.1.1.1","8.8.8.8"]
I would start by debugging the DNS resolution function within your K8s cluster:
Spin up a container with basic network tools on board:
kubectl run -i -t busybox --image=radial/busyboxplus:curl --restart=Never
From within the busybox container, check the /etc/resolv.conf file and ensure that you can resolve the Kubernetes DNS service kube-dns:
$ cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local c.org-int.internal google.internal
options ndots:5
Make a lookup request for kubernetes.default, which should return an answer from the DNS nameserver without any issues:
$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
Since you've defined upstreamNameservers in the corresponding kube-dns ConfigMap, check whether the upstream nameservers 1.1.1.1 and 8.8.8.8 are accessible from within a Pod, as in the sketch below.
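(A minimal sketch from inside the busybox pod started earlier; querying an upstream nameserver directly separates cluster-DNS failures from upstream reachability:)
nslookup acme-v02.api.letsencrypt.org 1.1.1.1
ping -c 2 8.8.8.8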
Verify the DNS pod logs for any suspicious events, for each container (kubedns, dnsmasq, sidecar):
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c sidecar
If all the preceding steps check out, then DNS discovery is working properly, and you can also inspect the Cloudflare DNS firewall configuration to rule out potential restrictions. You can find more information about troubleshooting DNS issues in the official K8s documentation.