React works from Docker image on local machine but is unreachable from Kubernetes. Service looks to be configured correctly - react-native

I want to put my Docker image running React into Kubernetes and be able to hit the main page. I am able to get the main page just by running docker run --rm -p 3000:3000 reactdemo locally. When I try to deploy to my Kubernetes cluster (running locally via Docker Desktop) I get no response until eventually a timeout.
I tried this same process below with a Spring Boot Docker image and I am able to get a simple JSON response in my browser.
Below is my Dockerfile, deployment YAML (with the Service inside it), and the commands I'm running to try and get my results. Morale is low, any help would be appreciated!
Dockerfile:
# pull official base image
FROM node
# set working directory
RUN mkdir /app
WORKDIR /app
# install app dependencies
COPY package.json /app
RUN npm install
# add app
COPY . /app
# Command to build the ReactJS application for deploy; might not need this...
RUN npm run build
# start app
CMD ["npm", "start"]
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: reactdemo
        image: reactdemo:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
    nodePort: 31000
I then open a port on my local machine to the NodePort for the service:
PS C:\WINDOWS\system32> kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
Forwarding from 127.0.0.1:31000 -> 3000
My assumption is that everything is in place at this point and I should be able to open a browser to hit localhost:31000. I expected to see that spinning React logo on their landing page, just like I do when I only run a local Docker container.
Here is it all running:
$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/demo-854f4d78f6-7dn7c   1/1     Running   0          4s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/demo         NodePort    10.111.203.209   <none>        3000:31000/TCP   4s
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          9d

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo   1/1     1            1           4s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-854f4d78f6   1         1         1       4s
Some extra things to note:
Although I don't have it set up currently, I did have my Spring Boot service in the deployment file. I logged into its pod and ensured the React container was reachable. It was.
I haven't done anything with my firewall settings (but I sort of assume I don't have to, since the run with the Spring Boot service worked?).
I see this in Chrome developer tools and so far don't think it's related to my problem: crbug/1173575, non-JS module files deprecated. I see this response in the main browser page after some time:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE

If you are running Kubernetes locally using minikube, then localhost:3000 will not work, because the cluster runs inside the minikube VM, which has its own private IP address. So instead of trying localhost:3000, run minikube service <servicename> in your terminal and it will show the URL of your service.
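For example, with the Service from the question this might look like the following (the printed address is illustrative):

minikube service demo --url
http://192.168.49.2:31000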

Thanks for all the feedback peeps! In trying out the solutions presented I found my error and it was pretty silly. I tried removing the service and trying the different port configs mentioned above. What solved it was using 127.0.0.1:31000 instead of localhost. Not sure why that fixed it but it did!
That being said, a few comments on the suggestions above:
I found that I couldn't hit the cluster without doing the port forwarding, regardless of whether I had a service defined or not.
containerPort, to my understanding, is for Kubernetes pod-to-pod communication and doesn't impact application function from a user perspective (could be wrong!).
Good to know about minikube. I'm thinking about trying it out, and if I do I'll know why port 3000 stops working.
Thanks

You don't need a Service to port-forward to a pod; you can just port-forward straight to it:
kubectl port-forward <pod-name> <local-port>:<pod-port>
This creates a "tunnel" between your local machine on a chosen port (<local-port>) and your pod (<pod-name>), on a chosen pod port (<pod-port>).
Then you can curl your pod with
curl localhost:<local-port>
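Applied to the pod from the question (name taken from the kubectl get all output; 3000 is assumed as the pod port, since that's where the React dev server listens by default):

kubectl port-forward pod/demo-854f4d78f6-7dn7c 3000:3000
curl localhost:3000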
If you really want to use a Service, then port-forward to the Service (service/demo in your case) plus the service port, but it will be translated to the Pod's IP eventually.
Change .spec.ports[0].targetPort in your Service to be the same as .spec.template.spec.containers[0].ports[0].containerPort in your deployment, so in your case:
...
  - port: 3000
    targetPort: 80
...
Port forward to a Service, with Service port
kubectl port-forward service/demo 8080:3000
Then curl your Service with
curl localhost:8080
This has a side effect if there are more pods under the same Service: port-forward will almost always connect to the same pod.
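For reference, the full Service from the question would then look roughly like this (a sketch following this answer's targetPort suggestion; all names and numbers come from the question):

apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 3000       # port the Service exposes inside the cluster
    targetPort: 80   # must match the containerPort in the Deployment
    protocol: TCP
    nodePort: 31000  # port exposed on each node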

Related

Which is the correct IP to run API tests on a Kubernetes cluster

I have a Kubernetes cluster with services of type ClusterIP. Which is the correct IP to hit if I want to run integration tests: the IP (10.102.222.181) or the Endpoints (10.244.0.157:80, 10.244.5.243:80)?
for example:
Type:              ClusterIP
IP Families:       <none>
IP:                10.102.222.181
IPs:               <none>
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.0.157:80,10.244.5.243:80
Session Affinity:  None
Events:            <none>
If your test runner is running inside the cluster, use the name: of the Service as a host name. Don't use any of these IP addresses directly. Kubernetes provides a DNS service that will translate the Service's name to its address (the IP: from the kubectl describe service output), and the Service itself just forwards network traffic to the Endpoints: (individual pod addresses).
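For example, from another pod in the same namespace (my-service is a hypothetical name here, since the describe output above doesn't show one; 80 is the Port from that output):

curl http://my-service:80/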
If the test runner is outside the cluster, none of these DNS names or IP addresses are reachable at all. For basic integration tests, it should be enough to kubectl port-forward service/its-name 12345:80, and then you can use http://localhost:12345 to reach the service (actually a fixed single pod from it). This isn't a good match for performance or load tests, and you'll either need to launch these from inside the cluster, or to use a NodePort or LoadBalancer service to make the service accessible from outside.
IPs in the Endpoints are individual Pod IPs, which are subject to change when new pods are created and replace the old ones. The ClusterIP is a stable IP that does not change unless you delete the Service and recreate it. So the recommendation is to use the ClusterIP.
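If a test setup needs the ClusterIP itself, it can be looked up instead of hard-coded (again using the hypothetical name my-service):

kubectl get service my-service -o jsonpath='{.spec.clusterIP}'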

Running an apache container on a port > 1024

I've built a docker image based on httpd:2.4. In my k8s deployment I've defined the following securityContext:
securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 431
  allowPrivilegeEscalation: false
In order to get this container to run properly as non-root, Apache needs to be configured to bind to a port > 1024, as opposed to the default 80. As far as I can tell this means editing Listen 80 in httpd.conf to Listen {some port > 1024}.
When I want to run the Docker image I've built normally (i.e. on the default port 80) I have the following port settings:
deployment
  spec.template.spec.containers[0].ports[0].containerPort: 80
service
  spec.ports[0].targetPort: 80
  spec.ports[0].port: 8080
ingress
  spec.rules[0].http.paths[0].backend.servicePort: 8080
Given these settings the service becomes accessible at the host URL provided in the ingress manifest. Again, this is without the changes to httpd.conf. When I make those changes (using Listen 8000), and add in the securityContext section to the deployment, I change the various manifests accordingly:
deployment
  spec.template.spec.containers[0].ports[0].containerPort: 8000
service
  spec.ports[0].targetPort: 8000
  spec.ports[0].port: 8080
ingress
  spec.rules[0].http.paths[0].backend.servicePort: 8080
Yet for some reason, when I try to access a URL that should be working I get a 502 Bad Gateway error. Have I set the ports correctly? Is there something else I need to do?
Check if pod is Running
kubectl get pods
kubectl logs <pod_name>
Check if the URL is accessible within the pod
kubectl exec -it <pod_name> -- bash
$ curl http://localhost:8000
If the above didn't work, check your httpd.conf.
Check with the service name
kubectl exec -it <ingress pod_name> -- bash
$ curl http://svc:8080
You can check ingress logs too.
In order to get this container to run properly as non-root apache needs to be configured to bind to a port > 1024, as opposed to the default 80
You got it: that's the hard requirement for running the Apache container as non-root, so this change needs to be made at the container level, not in Kubernetes abstractions like the Deployment's Pod spec or the Service/Ingress resource definitions. The only thing left in your case is to build a custom httpd image that listens on a port > 1024. The same approach applies to NGINX Docker containers.
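A minimal sketch of such an image (assuming the stock httpd:2.4 base and the port 8000 from the question; the sed expression rewrites the default Listen 80 directive in the image's httpd.conf):

FROM httpd:2.4
# switch Apache from the privileged default port 80 to 8000
RUN sed -ri 's/^Listen 80$/Listen 8000/' /usr/local/apache2/conf/httpd.conf
EXPOSE 8000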
One key piece of information about the 'containerPort' field in the Pod spec, which you are trying to adjust manually, and which is not so apparent: it's there primarily for informational purposes and does not actually open a port at the container level. According to the Kubernetes API reference:
Not specifying a port here DOES NOT prevent that port from being
exposed. Any port which is listening on the default "0.0.0.0" address
inside a container will be accessible from the network. Cannot be updated.
I hope this will help you to move on

AWS EKS: deployed pod exposed with a NodePort Service is not accessible over the node IP and exposed port

I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Code for pod definition
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000
Code for service definition
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows it has the correct endpoint and the correct selector.
So here is the problem
When I hit NodeIP:31001 (the exposed port) I get "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000,
and I can hit it with curl -v localhost:3000; it is reachable.
I checked my security group inbound rule is 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster then in the security group for worker nodes you will have to add a custom TCP entry for enabling inbound traffic on port 31001.
If that does not work then make sure you are able to connect to the Node through that IP. I usually connect using a VPN client.
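For example, such a security-group rule could be added with the AWS CLI (sg-xxxxxxxx and the CIDR are placeholders for your worker-node security group and allowed source range):

aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp \
  --port 31001 \
  --cidr 10.0.0.0/16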
Fixed.
On AWS EKS, NodePorts do not work the same way as on plain Kubernetes.
When you expose
- port: 31001
  targetPort: nodejs-port
  protocol: TCP
31001 is the port that gets exposed on the ClusterIP.
In order to get the nodePort, you must describe your service and look for the NodePort field in the description.
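For instance, either of these should show the node port that was actually allocated (service name from the question):

kubectl describe svc helloworld-service | grep NodePort
kubectl get svc helloworld-service -o jsonpath='{.spec.ports[0].nodePort}'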

redis cluster K8S connection

I'm running a redis-cluster in K8S:
kubectl get services -o wide
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)              AGE   SELECTOR
redis-cluster   ClusterIP   10.97.31.167   <none>        6379/TCP,16379/TCP   22h   app=redis-cluster
When connecting to the cluster IP from the node itself, the connection works fine:
redis-cli -h 10.97.31.167 -c
10.97.31.167:6379> set some_val 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
OK
Is there some way I can access the redis server from my local development VM without exposing every single pod as a service?
When deploying my application to run inside the cluster itself (later, in production), should I use the cluster IP too, or should I use the internal IPs of the pods as the master IPs of the redis-master servers?
Simple forwarding to the remote machine won't work:
devvm: ssh -L 6380:10.97.31.167:6379 -i user.pem admin@k8snode.com
On dev VM:
root@devvm:~# redis-cli -h 127.0.0.1 -p 6380 -c
127.0.0.1:6380> set jaheller 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
The Redis connection times out at this point.
I believe in all scenarios you just need to expose the service using a Kubernetes Service object of one of these types:
ClusterIP (in case you are consuming it inside the cluster)
NodePort (for external access)
LoadBalancer (in case of public access, if you are on a cloud provider)
NodePort with an external load balancer (for public external access if you are on local infrastructure)
You don't need to worry about individual pods; the Service will take care of them. A sketch of the NodePort variant follows below.
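A minimal sketch of the NodePort option for development access (assuming the redis-cluster pods carry the label app=redis-cluster, as the SELECTOR column above suggests; the nodePort value 31079 is an arbitrary choice from the valid range):

apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-nodeport
spec:
  type: NodePort
  selector:
    app: redis-cluster
  ports:
  - name: client
    port: 6379        # Service port inside the cluster
    targetPort: 6379  # Redis client port on the pods
    nodePort: 31079   # port exposed on every node

Note that redis-cli -c may still be redirected to internal pod IPs (as in the question's output), which are not reachable from outside the cluster.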
Docs:
https://kubernetes.io/docs/concepts/services-networking/service/
I don't think you need any port redirection. You do have to set up an ingress controller on your cluster though, e.g. the NGINX ingress controller.
Then you just set up a single Ingress with exposed access, which will serve the cluster traffic.
Here is an example of Ingress Controller to Access cluster service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-cluster-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: redis-cluster
          servicePort: 6379
You may check a step-by-step instruction

How to make REST calls between Frontend and Backend using Docker containers

I have 3 docker containers:
Backend (Spring boot rest api)
Frontend (Js and html in the apache image)
Mongodb
I'm orchestrating them through docker-compose and it works nicely.
However, I don't know how to let my frontend JavaScript client know the backend container's host/IP in order to reach it.
This is my docker-compose.yml:
version: '3.1'
services:
  project-server:
    build: .
    restart: always
    container_name: project-server
    ports:
      - 8200:8200
    working_dir: /opt/app
    depends_on:
      - mongo
  httpd:
    image: project-ui
    container_name: project-ui
    ports:
      - 8201:80
  mongo:
    image: project-mongo
    container_name: project-mongo
    ports:
      - 27018:27017
    volumes:
      - $HOME/data/mongo-data:/data/db
      - $HOME/data/mongo-bkp:/data/bkp
    restart: always
So I've tried this in my JS client app:
export default {
REMOTE_HOST: 'http://project-server:8200'
}
But it doesn't work. (Failed to load resource: net::ERR_NAME_NOT_RESOLVED)
And I'm pretty sure it's because the JS runs locally in the browser, so it has no way to resolve that name.
What's the right way to do this? Is there any way for the frontend service (apache) to pass/render the real host to the JavaScript?
Thanks a lot
project-server can be resolved only within the network created by docker-compose. As you mentioned, to connect from the outside world you need to expose the IP of your host instead of project-server. The problem is that the container doesn't know the IP of its host. Here is a detailed discussion about that: How to get the IP address of the docker host from inside a docker container
What you probably need in your situation is to run the container passing the IP of the host as an environment variable:
docker run --env <IP>=<value>
Then in Node you can just read that variable.
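A minimal sketch with docker-compose (HOST_IP is a hypothetical variable exported on the host before docker-compose up; the service name comes from the compose file above):

# on the host, before docker-compose up (Linux example):
#   export HOST_IP=$(hostname -I | awk '{print $1}')
services:
  httpd:
    image: project-ui
    environment:
      # readable by server-side code (process.env.HOST_IP in Node),
      # which can render it into the page config for the browser
      - HOST_IP=${HOST_IP}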
Hope it helps