OpenFaaS gateway and queue worker can't connect to NATS - windows-subsystem-for-linux

I have been having issues for more than a week trying to get OpenFaaS to start on my WSL instance, using a K8s cluster deployed by Docker Desktop.
I keep getting this from the gateway and queue-worker pods:
OpenFaaS Gateway - Community Edition (CE)
Version: 0.25.2 Commit: bc2eeff4678407583faec982c1c7d1da915dd60c
Timeouts: read=5m2s write=5m2s upstream=5m0s
Function provider: http://127.0.0.1:8081/
2022/12/05 16:08:53 Async enabled: Using NATS Streaming.
2022/12/05 16:08:53 Opening connection to nats://nats.openfaas.svc.cluster.local:4222
2022/12/05 16:08:53 Connect: nats://nats.openfaas.svc.cluster.local:4222
2022/12/05 16:08:55 dial tcp 10.101.241.144:4222: i/o timeout
I have searched the OpenFaaS website and the GitHub issues, but nothing fixed my problem.
One of the things I tried was setting the NO_PROXY environment variable with export no_proxy=".svc,.svc.cluster.local", but that did not fix it.
Not sure what to do. Here are the pods:
NAME READY STATUS RESTARTS AGE
alertmanager-7c456f4cdd-n6r6q 1/1 Running 0 68m
basic-auth-plugin-589ff48889-b8wkp 1/1 Running 0 68m
gateway-5989d8f775-2k795 1/2 CrashLoopBackOff 10 (39s ago) 27m
nats-64d9444b95-rpfhr 1/1 Running 0 68m
prometheus-68584f9786-mp7t4 1/1 Running 0 68m
queue-worker-97bcf9fb-zgk6z 0/1 CrashLoopBackOff 18 (26s ago) 68m
And the services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager ClusterIP 10.100.115.189 <none> 9093/TCP 69m
basic-auth-plugin ClusterIP 10.105.160.116 <none> 8080/TCP 69m
gateway ClusterIP 10.110.144.212 <none> 8080/TCP 69m
gateway-external NodePort 10.103.2.255 <none> 8080:31112/TCP 69m
gateway-provider ClusterIP 10.101.1.205 <none> 8081/TCP 69m
nats ClusterIP 10.101.241.144 <none> 4222/TCP 69m
prometheus ClusterIP 10.97.171.69 <none> 9090/TCP 69m
The nats service exposes port 4222 over TCP and has an endpoint.
I have opened an issue on the faas-netes GitHub repo but am not getting any help. I just opened an issue on the main faas GitHub repo asking for help as well.
What could be causing this issue?
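One way to narrow this down is to test DNS resolution and raw TCP reachability of the nats service from inside the openfaas namespace; a minimal sketch using a throwaway busybox pod (the pod name nats-test is arbitrary):
# Start a disposable shell in the openfaas namespace
kubectl run -n openfaas nats-test --rm -it --image=busybox --restart=Never -- sh
# Inside the pod: does the service name resolve?
nslookup nats.openfaas.svc.cluster.local
# A successful TCP connection prints NATS's INFO {...} banner; a hang reproduces the gateway's i/o timeout
nc -w 2 nats.openfaas.svc.cluster.local 4222
If the name resolves but the connection times out, the problem is TCP connectivity between pods (pointing at the CNI or Docker Desktop's WSL networking) rather than DNS or OpenFaaS itself.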

Related

Accessing service from custom port using k3d and traefik

I am trying to configure traefik and the load balancer to accept traffic on host port 9200.
Everything works fine for port 8443 (websecure). I am using k3d, and traefik is initially disabled.
I can curl my "2048" service from my macOS host. The ingress is configured for the "websecure" entry point and a match is found.
curl --cacert ca.crt -I https://2048.127.0.0.1.nip.io:8443
HTTP/2 200
I have installed the exact same service and named it "2049". I want this service to be available on port 9200 (I have disabled TLS to simplify things).
+ curl -vvv -k -I http://2049.127.0.0.1.nip.io:9200
* Trying 127.0.0.1:9200...
* Connected to 2049.127.0.0.1.nip.io (127.0.0.1) port 9200 (#0)
> HEAD / HTTP/1.1
> Host: 2049.127.0.0.1.nip.io:9200
> User-Agent: curl/7.79.1
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
Both services can be accessed from within the cluster.
I have installed traefik through helm and made sure ports are available.
k get -n traefik-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik LoadBalancer 10.43.86.220 172.27.0.3,172.27.0.4,172.27.0.5 80:30039/TCP,443:30253/TCP,9092:30179/TCP,9200:31428/TCP 61m
# just to display, the lb is configured for port 9200 (iptables, /pause container)
k logs -n traefik-system pod/svclb-traefik-h5zs4
error: a container name must be specified for pod svclb-traefik-h5zs4, choose one of: [lb-tcp-80 lb-tcp-443 lb-tcp-9092 lb-tcp-9200]
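Selecting one of the listed containers with -c shows the actual listener's logs; a sketch using the 9200 listener (container names taken from the error output above):
k logs -n traefik-system pod/svclb-traefik-h5zs4 -c lb-tcp-9200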
# my ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: game-2049
spec:
  entryPoints: # We listen to requests coming from port 9200
    - elasticsearch
  routes:
    - match: Host(`2049.127.0.0.1.nip.io`)
      kind: Rule
      services:
        - name: game-2049 # Requests will be forwarded to this service
          port: 80
# traefik is configured with these endpoint addresses:
- "--entrypoints.web.address=:8000/tcp"
- "--entrypoints.websecure.address=:8443/tcp"
- "--entrypoints.kafka.address=:9092/tcp"
- "--entrypoints.elasticsearch.address=:9200/tcp"
My goal is to access elasticsearch on 9200 and kafka on 9092 from my macOS host using k3d. But first I need to get this configuration for "2049" right.
What am I missing?
I have this working on K3s using Bitnami Kafka.
You need two things:
1. Define the entry point in the traefik config, which from your note you already have.
kubectl describe pods traefik-5bcf476bb9-qrqg7 --namespace traefik
Name: traefik-5bcf476bb9-qrqg7
Namespace: traefik
Priority: 0
Service Account: traefik
...
Status: Running
...
Image: traefik:2.9.1
Image ID: docker.io/library/traefik@sha256:4ebf68cdb33c162e8786ac83ece782ec0dbe583471c04dfd0af43f245b96c88f
Ports: 9094/TCP, 9100/TCP, 9000/TCP, 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
--global.checknewversion
--global.sendanonymoususage
--entrypoints.kafka.address=:9094/tcp
--entrypoints.metrics.address=:9100/tcp
--entrypoints.traefik.address=:9000/tcp
--entrypoints.web.address=:8000/tcp
--entrypoints.websecure.address=:8443/tcp
--api.dashboard=true
--ping=true
--metrics.prometheus=true
--metrics.prometheus.entrypoint=metrics
--providers.kubernetescrd
--providers.kubernetescrd.allowCrossNamespace=true
--providers.kubernetescrd.allowExternalNameServices=true
--providers.kubernetesingress
--providers.kubernetesingress.allowExternalNameServices=true
--providers.kubernetesingress.allowEmptyServices=true
--entrypoints.websecure.http.tls=true
State: Running
Started: Thu, 27 Oct 2022 16:27:22 -0400
Ready: True
I'm using TCP port 9094 for kafka traffic.
2. The ingress: I'm using the IngressRouteTCP CRD.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: kafka-ingress
  namespace: bitnami-kafka
spec:
  entryPoints:
    - kafka
  routes:
    - match: HostSNI(`*`)
      services:
        - name: my-bkafka-0-external
          namespace: bitnami-kafka
          port: 9094
Note: traefik is routing to a k8s LoadBalancer service.
kubectl get services --namespace bitnami-kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-bkafka ClusterIP 10.43.153.8 <none> 9092/TCP 20h
my-bkafka-0-external LoadBalancer 10.43.45.233 10.55.10.243 9094:30737/TCP 20h
my-bkafka-headless ClusterIP None <none> 9092/TCP,9093/TCP 20h
my-bkafka-zookeeper ClusterIP 10.43.170.229 <none> 2181/TCP,2888/TCP,3888/TCP 20h
my-bkafka-zookeeper-headless ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 20h
which is option A from Bitnami's write-up on Kafka external access.
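Applied back to the question, the analogous route for the "2049" service would look something like this (a sketch; with TLS disabled on that entry point, HostSNI(`*`) is the only match that works for plain TCP):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: game-2049-tcp
spec:
  entryPoints:
    - elasticsearch        # the :9200 entry point from the question
  routes:
    - match: HostSNI(`*`)  # plain TCP carries no Host header, so no per-host routing without TLS
      services:
        - name: game-2049
          port: 80
The trade-off: without TLS/SNI, everything arriving on :9200 goes to this one service, so the hostname in the URL no longer selects a backend.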

EKS, how to create 2 LoadBalancer services sharing a static IP

I have a 1.18 EKS cluster with 2 services on different protocols and ports, e.g.
proc-tcp ClusterIP 10.100.200.247 <none> 4060/TCP 26h
proc-udp ClusterIP 10.100.200.20 <none> 4800/UDP 26h
How do I convert or recreate them to be type LoadBalancer and share a static IP?
To create a load balancer, you need to set
type: LoadBalancer instead of type: ClusterIP
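A minimal sketch of that change for one of the services from the question (the selector is an assumption; the NLB annotations for pinning a pre-allocated Elastic IP are also an assumption, not covered by the answer):
apiVersion: v1
kind: Service
metadata:
  name: proc-tcp
  annotations:
    # Assumption: an NLB with a pre-allocated Elastic IP is one way to get a static address on EKS
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-0123456789abcdef0  # hypothetical allocation ID
spec:
  type: LoadBalancer   # changed from ClusterIP
  ports:
    - port: 4060
      protocol: TCP
  selector:
    app: proc          # assumption: reuse the original service's selector
How the two Services would share one address is left open by the answer; typically that means exposing both ports from a single Service, subject to the cluster supporting mixed TCP/UDP on one load balancer.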

Exposing kubernetes service outside the cluster for development purposes

Is it somehow possible to expose a Kubernetes service to the outside world?
I am currently developing an application which needs to communicate with a service. To do so I need to know the pod's IP address and port, which I can get from within the Kubernetes cluster via the Service linked to it, but outside the cluster I seem to be unable to find or expose it.
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
spec:
  ports:
    - name: broker
      port: 9092
      protocol: TCP
      targetPort: kafka
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
I could containerize the application, put it in a pod, and run it within Kubernetes, but for fast development it seems tedious to go through all of that just to test something as small as connectivity.
Is there some way I can expose the service, and thereby reach the application behind its selector?
In order to expose your Kubernetes service to the internet you must change the ServiceType.
Your service is using the default, which is ClusterIP: it exposes the Service on a cluster-internal IP, making it reachable only from within the cluster.
1 - If you use a cloud provider like AWS or GCP, the best option is the LoadBalancer Service type, which automatically exposes the service to the internet using the provider's load balancer.
Run:
kubectl expose deployment deployment-name --type=LoadBalancer --name=service-name
where deployment-name must be replaced by your actual deployment name, and likewise for the desired service-name.
Wait a few minutes, and the kubectl get svc command will give you the external IP and port:
owilliam@minikube:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d21h
nginx-service-lb LoadBalancer 10.96.125.208 0.0.0.0 80:30081/TCP 36m
2 - If you are running Kubernetes locally (like Minikube), the best option is the NodePort Service type:
it exposes the service on the cluster node (the hosting machine),
which is safer for testing purposes than exposing the service to the whole internet.
Run: kubectl expose deployment deployment-name --type=NodePort --name=service-name
where deployment-name must be replaced by your actual deployment name, and likewise for the desired service-name.
Below is my output after exposing an Nginx webserver via NodePort, for your reference:
user@minikube:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d21h
service-name NodePort 10.96.33.84 <none> 80:31198/TCP 4s
user@minikube:~$ minikube service list
|----------------------|---------------------------|-----------------------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|----------------------|---------------------------|-----------------------------|-----|
| default | kubernetes | No node port |
| default | service-name | http://192.168.39.181:31198 |
| kube-system | kube-dns | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard | No node port |
|----------------------|---------------------------|-----------------------------|-----|
user@minikube:~$ curl http://192.168.39.181:31198
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...//// suppressed output
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
user@minikube:~$
You can use NodePort or LoadBalancer type services as mentioned in the other answers, or even an Ingress.
But since you are asking for development purposes only, I suggest starting a testing pod in the given namespace and checking connectivity from that pod. You can get shell access to a running pod with kubectl exec -it {PODNAME} -- /bin/sh. For quick local access there is also kubectl port-forward; see the sketch after the tool list below.
you can also try tools like
- kubefwd
- squash
- stern
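For quick connectivity tests from the development machine, kubectl port-forward also works directly against the Service from the question (a sketch; kafka-broker is the name from the question's manifest):
# Forward local port 9092 to port 9092 of the kafka-broker service
kubectl port-forward svc/kafka-broker 9092:9092
# In another terminal, the application under development can now use localhost:9092
This keeps the Service as ClusterIP, so nothing is exposed beyond your own machine.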
Use the Service type NodePort or LoadBalancer. The latter is recommended if you are running in a cloud like Azure, AWS, or GCP.
Refer to the example below:
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
spec:
  ports:
    - name: broker
      port: 9092
      protocol: TCP
      targetPort: kafka
  selector:
    app: kafka
  sessionAffinity: None
  type: NodePort

Spring Boot microservice is not communicating with Redis

I have deployed a few microservices on a Kubernetes cluster. The problem is that these microservices are not communicating with the Redis server, regardless of whether it is deployed in the same Kubernetes cluster or hosted outside. At the same time, if I start a microservice outside of the cluster, it is able to access both kinds of Redis, hosted remotely or deployed in the k8s cluster. I don't see what I am doing wrong; here are my deployment and service YAML files.
Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
        - name: master
          image: redis
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 6379
Service yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  ports:
    - nodePort: 30011
      port: 80
      targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
  type: NodePort
I have followed multiple blogs and tried to get help from other sources, but nothing worked. I also forgot to mention: I am able to access the Redis deployed in the k8s cluster from remote machines using Redis Desktop Manager.
The k8s cluster is running on-prem on Ubuntu 16.04.
The microservices, when run outside k8s on a Windows machine, can access both the Redis on the k8s cluster and the one outside of k8s, with the same code.
When the microservices try to communicate with Redis from inside the cluster, I get the following log:
2019-03-11 10:28:50,650 [main] INFO org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer - Tomcat started on port(s): 8082 (http)
2019-03-11 10:29:02,382 [http-nio-8082-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - FrameworkServlet 'dispatcherServlet': initialization started
2019-03-11 10:29:02,396 [http-nio-8082-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - FrameworkServlet 'dispatcherServlet': initialization completed in 14 ms
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
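One detail worth checking in the manifests above (an observation, not a confirmed fix): inside the cluster, clients connect to a Service on its port, not its targetPort, so with this Service the in-cluster address is redis-service:80, while nodePort 30011 is what remote machines hit. If the microservices are configured with the conventional 6379, aligning the Service port removes the mismatch; a minimal sketch:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
    - port: 6379        # in-cluster clients can now use redis-service:6379
      targetPort: 6379
      nodePort: 30011   # external access via the node port stays the same
  selector:
    app: redis
    role: master
    tier: backend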

AWS EKS pod exposed with Service type NodePort is not accessible over node IP and exposed port

I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Code for the pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
    - name: k8s-demo
      image: wardviaene/k8s-demo
      ports:
        - name: nodejs-port
          containerPort: 3000
Code for the service definition:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
    - port: 31001
      nodePort: 31001
      targetPort: nodejs-port
      protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows the correct endpoint and the correct selector.
So here is the problem:
When I hit NodeIP:exposedPort (which is 31001) I get "This site can’t be reached".
Then I used kubectl port-forward podname 3000:3000
and curl -v localhost:3000 is reachable.
I checked my security group: the inbound rule allows 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP entry enabling inbound traffic on port 31001.
If that does not work, make sure you are able to connect to the node on that IP at all. I usually connect using a VPN client.
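A sketch of that inbound rule via the AWS CLI (the security group ID and client CIDR are hypothetical placeholders):
# Allow inbound TCP 31001 on the worker-node security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31001 \
  --cidr 10.0.0.0/16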
Fixed.
On AWS EKS, NodePorts do not work quite the same as on plain Kubernetes.
When you expose
- port: 31001
  targetPort: nodejs-port
  protocol: TCP
31001 is the ClusterIP port that gets exposed.
In order to get the NodePort, you must describe your service and look for the NodePort field in the description.
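A sketch of that check, using the service name from the question:
kubectl describe svc helloworld-service | grep -i nodeport
# The line to look for resembles (the assigned value can differ from the 'port' field):
# NodePort:  <unset>  31001/TCP
Whatever value appears there is the port to pair with the node's IP.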