I'm running a redis-cluster in K8S:
kubectl get services -o wide
redis-cluster ClusterIP 10.97.31.167 <none> 6379/TCP,16379/TCP 22h app=redis-cluster
When connecting to the cluster IP from the node itself, the connection works fine:
redis-cli -h 10.97.31.167 -c
10.97.31.167:6379> set some_val 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
OK
Is there some way I can access the redis server from my local development VM without exposing every single pod as a service?
When deploying my application to run inside the cluster itself (later, in production), should I use the cluster IP too, or should I use the internal IPs of the pods as the master IPs of the redis-master servers?
Simple forwarding to the remote machine won't work:
devvm:~$ ssh -L 6380:10.97.31.167:6379 -i user.pem admin@k8snode.com
On dev VM:
root@devvm:~# redis-cli -h 127.0.0.1 -p 6380 -c
127.0.0.1:6380> set jaheller 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
The Redis connection times out at this point.
I believe that in all scenarios you just need to expose the service using a Kubernetes Service object of the appropriate type:
ClusterIP (if you are consuming it inside the cluster)
NodePort (for external access)
LoadBalancer (for public access, if you are on a cloud provider)
NodePort with an external load balancer (for public external access on local infrastructure)
You don't need to worry about individual pods; the Service will take care of them.
Docs:
https://kubernetes.io/docs/concepts/services-networking/service/
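As a sketch of the NodePort option for the redis-cluster in the question (the service name and nodePort value here are assumptions, the selector label is taken from the question's service):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: redis-cluster           # label from the question's service
  ports:
  - name: client
    port: 6379
    targetPort: 6379
    nodePort: 30079              # assumed; must fall in the NodePort range (default 30000-32767)
```

Bear in mind that redis-cli -c will still follow cluster redirects to pod IPs (e.g. 10.244.1.9), which are not reachable from outside the cluster, so cluster-mode clients may need extra configuration regardless.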
I don't think you need any port redirection, but you will have to deploy an ingress controller in your cluster, e.g. the NGINX ingress controller.
Then you just set up a single Ingress resource with exposed access, which will serve the cluster traffic.
Here is an example Ingress resource for accessing the cluster service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-cluster-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: redis-cluster
          servicePort: 6379
You may also check a step-by-step guide in the ingress controller documentation.
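Note that the extensions/v1beta1 Ingress API is deprecated and has been removed in recent Kubernetes versions; an equivalent sketch against the current networking.k8s.io/v1 API would be:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redis-cluster-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: redis-cluster
            port:
              number: 6379
```

Also note that Ingress rules are HTTP(S)-oriented; for a raw TCP protocol such as Redis, the NGINX ingress controller instead exposes ports through its TCP services ConfigMap (the --tcp-services-configmap flag).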
Related
I have a simple web server running in a single pod on GKE. I have also exposed it using a LoadBalancer service. What is the easiest way to make this pod accessible over HTTPS?
gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
personal..... us-central1-a 1.19.14-gke.1900 34.69..... e2-medium 1.19.14-gke.1900 1 RUNNING
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10..... <none> 443/TCP 437d
my-service LoadBalancer 10..... 34.71...... 80:30066/TCP 12d
kubectl get pods
NAME READY STATUS RESTARTS AGE
nodeweb-server-9pmxc 1/1 Running 0 2d15h
EDIT: I also have a domain name registered if it's easier to use that instead of https://34.71....
First, your cluster should have Config Connector installed and functioning properly.
Start by deleting your existing load balancer service: kubectl delete service my-service
Create a static IP:
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: <name your IP>
spec:
  location: global
Retrieve the created IP: kubectl get computeaddress <the named IP> -o jsonpath='{.spec.address}'
Create a DNS "A" record that maps your registered domain to the created IP address. Check with nslookup <your registered domain name> to ensure the correct IP is returned.
Update your load balancer service spec by inserting the following line after type: LoadBalancer: loadBalancerIP: "<the created IP address>"
Re-create the service and check that kubectl get service my-service shows the EXTERNAL-IP set correctly.
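Put together, the updated Service might look like this (a sketch; the selector label and ports are assumptions based on the question's my-service):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: "<the created IP address>"
  selector:
    app: nodeweb-server   # assumed label
  ports:
  - port: 80
    targetPort: 80        # assumed container port
```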
Create a ManagedCertificate:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: <name your cert>
spec:
  domains:
  - <your registered domain name>
Then create the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name your ingress>
  annotations:
    networking.gke.io/managed-certificates: <the named certificate>
spec:
  rules:
  - host: <your registered domain name>
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              number: 80
Check with kubectl describe ingress <named ingress>; see the rules and annotations sections.
NOTE: It can take up to 15 minutes for the load balancer to be fully ready. Test with curl https://<your registered domain name>.
I want to put my Docker image running React into Kubernetes and be able to hit the main page. I am able to get the main page just by running docker run --rm -p 3000:3000 reactdemo locally. When I try to deploy to my Kubernetes cluster (running locally via Docker Desktop) I get no response until eventually a timeout.
I tried this same process below with a Spring Boot Docker image and I am able to get a simple JSON response in my browser.
Below are my Dockerfile, deployment YAML (with the Service inside it), and the commands I'm running to try and get my results. Morale is low, any help would be appreciated!
Dockerfile:
# pull official base image
FROM node
# set working directory
RUN mkdir /app
WORKDIR /app
# install app dependencies
COPY package.json /app
RUN npm install
# add app
COPY . /app
# Command to build the ReactJS application for deploy (might not need this...)
RUN npm run build
# start app
CMD ["npm", "start"]
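Side note: npm start typically launches the development server, so the npm run build output above goes unused; a multi-stage sketch that serves the static build instead (an assumption of a standard Create React App layout, not something from the question) might look like:

```dockerfile
# build stage: compile the static bundle
FROM node AS build
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# serve stage: host the build output with nginx on port 80
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```

With this variant the container listens on port 80, which matches the containerPort: 80 in the Deployment below.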
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: reactdemo
        image: reactdemo:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
    nodePort: 31000
  selector:
    app: demo
I then open a port on my local machine to the nodeport for the service:
PS C:\WINDOWS\system32> kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
Forwarding from 127.0.0.1:31000 -> 3000
My assumption is that everything is in place at this point and I should be able to open a browser to hit localhost:31000. I expected to see that spinning react symbol for their landing page just like I do when I only run a local docker container.
Here is it all running:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/demo-854f4d78f6-7dn7c 1/1 Running 0 4s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/demo NodePort 10.111.203.209 <none> 3000:31000/TCP 4s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/demo 1/1 1 1 4s
NAME DESIRED CURRENT READY AGE
replicaset.apps/demo-854f4d78f6 1 1 1 4s
Some extra things to note:
Although I don't have it set up currently, I did have my Spring Boot service in the deployment file. I logged into its pod and ensured the react container was reachable. It was.
I haven't done anything with my firewall settings (but sort of assume I don't have to, since the run with the Spring Boot service worked?)
I see this in chrome developer tools and so far don't think it's related to my problem: crbug/1173575, non-JS module files deprecated. I see this response in the main browser page after some time:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
If you are using Kubernetes via minikube on your local system, then it will not work with localhost:3000, because it runs in the minikube cluster, which has its own private IP address. So instead of trying localhost:3000, run minikube service <servicename> in your terminal; it shows the URL of your service.
Thanks for all the feedback peeps! In trying out the solutions presented I found my error, and it was pretty silly. I tried removing the service and trying the different port configs mentioned above. What solved it was using 127.0.0.1:31000 instead of localhost. Not sure why that fixed it, but it did!
That being said, a few comments on the answers above:
I found that I couldn't hit the cluster without doing the port forwarding, regardless of whether I had a service defined or not.
containerPort, to my understanding, is for Kubernetes pod-to-pod communication and doesn't impact application function from a user perspective (could be wrong!)
Good to know on minikube; I'm thinking about trying it out, and if I do I'll know why port 3000 stops working.
Thanks
You don't need a Service to port-forward to a pod. You can just port-forward straight to it:
kubectl port-forward <pod-name> <local-port>:<pod-port>
This creates a "tunnel" between your local machine on a chosen port (<local-port>) and your pod (pod-name), on a chosen pod's port (<pod-port>).
Then you can curl your pod with
curl localhost:<local-port>
If you really want to use a Service, then port-forward to the Service (service/demo in your case) plus the service port, but it will be translated to a pod's IP eventually.
Change .spec.ports[0].targetPort in your Service to match .spec.template.spec.containers[0].ports[0].containerPort in your Deployment, so in your case:
...
- port: 3000
  targetPort: 80
...
Port forward to a Service, with Service port
kubectl port-forward service/demo 8080:3000
Then curl your Service with
curl localhost:8080
This has a side effect if there are more pods under the same Service: the port-forward will almost always connect to the same pod.
Is it somehow possible to expose a Kubernetes service to the outside world?
I am currently developing an application which needs to communicate with a service. To do so I need to know the pod IP and port. Within the Kubernetes cluster I can get these via the Kubernetes Service linked to it, but outside the cluster I seem to be unable to find it, or to expose it.
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
spec:
  ports:
  - name: broker
    port: 9092
    protocol: TCP
    targetPort: kafka
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
I could containerize the application, put it in a pod, and run it within Kubernetes, but for fast development it seems tedious to go through all of this just to test something as small as connectivity.
Is there some way I can expose the service, and thereby reach the application behind its selector?
In order to expose your Kubernetes service to the internet you must change the ServiceType.
Your service is using the default, ClusterIP, which exposes the Service on a cluster-internal IP, making it reachable only from within the cluster.
1 - If you use a cloud provider like AWS or GCP, the best option is the LoadBalancer Service type, which automatically exposes the service to the internet using the provider's load balancer.
Run:
kubectl expose deployment deployment-name --type=LoadBalancer --name=service-name
Where deployment-name must be replaced by your actual Deployment name, and the same goes for the desired service-name.
Wait a few minutes and the kubectl get svc command will give you the external IP and port:
owilliam@minikube:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d21h
nginx-service-lb LoadBalancer 10.96.125.208 0.0.0.0 80:30081/TCP 36m
2 - If you are running Kubernetes locally (like minikube), the best option is the NodePort Service type:
It exposes the service on the cluster node (the hosting computer), which is safer for testing purposes than exposing the service to the whole internet.
Run: kubectl expose deployment deployment-name --type=NodePort --name=service-name
Where deployment-name must be replaced by your actual Deployment name, and the same goes for the desired service-name.
Below are my outputs after exposing an NGINX web server as a NodePort service, for your reference:
user@minikube:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d21h
service-name NodePort 10.96.33.84 <none> 80:31198/TCP 4s
user@minikube:~$ minikube service list
|----------------------|---------------------------|-----------------------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|----------------------|---------------------------|-----------------------------|-----|
| default | kubernetes | No node port |
| default | service-name | http://192.168.39.181:31198 |
| kube-system | kube-dns | No node port |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |
| kubernetes-dashboard | kubernetes-dashboard | No node port |
|----------------------|---------------------------|-----------------------------|-----|
user@minikube:~$ curl http://192.168.39.181:31198
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...//// suppressed output
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
user@minikube:~$
You can use NodePort or LoadBalancer type Services as mentioned in the other answers, or even an Ingress.
But as you are asking for development purposes only, I suggest starting a testing pod in the given namespace and checking connectivity from that pod. You can get shell access to a running pod with kubectl exec -it {PODNAME} -- /bin/sh
You can also try tools like
- kubefwd
- squash
- stern
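For example, connectivity to the kafka-broker Service can be checked from a throwaway pod like this (a sketch; the busybox image and its nc utility are assumptions):

```shell
# start an interactive test pod that is deleted on exit
kubectl run -it --rm testpod --image=busybox --restart=Never -- sh

# inside the pod: probe the Service by its cluster-DNS name and port
nc -zv kafka-broker 9092
```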
Use a Service of type NodePort or LoadBalancer. The latter is recommended if you are running in a cloud like Azure, AWS, or GCP.
See the example below:
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
spec:
  ports:
  - name: broker
    port: 9092
    protocol: TCP
    targetPort: kafka
  selector:
    app: kafka
  sessionAffinity: None
  type: NodePort
I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000
Service definition:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running
nodehelloworld.example.com 1/1 Running 0 17h
kubectl get svc shows that my service is also created
helloworld-service NodePort 172.20.146.235 <none> 31001:31001/TCP 16h
kubectl describe svc helloworld-service shows it has the correct endpoint and the correct selector.
So here is the problem
When I hit NodeIP:exposed port (which is 31001) I'm getting "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000
and curl -v localhost:3000 is reachable.
I checked my security group; the inbound rule allows ports 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP rule allowing inbound traffic on port 31001.
If that does not work, then make sure you are able to connect to the node through that IP. I usually connect using a VPN client.
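As a sketch, the same inbound rule can be added with the AWS CLI (the security group ID and source CIDR below are placeholders):

```shell
# allow inbound TCP on the NodePort (31001) from a given IP range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31001 \
  --cidr 203.0.113.0/24
```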
Fixed.
On AWS EKS, NodePorts do not work quite the same as on plain Kubernetes.
When you expose
- port: 31001
  targetPort: nodejs-port
  protocol: TCP
31001 is the ClusterIP (service) port that gets exposed.
In order to find the actual NodePort, you must describe your service and look for the NodePort field in the description.
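For example, the assigned NodePort can be read directly from the Service (a sketch):

```shell
# show the full Service description, including the NodePort line
kubectl describe svc helloworld-service | grep NodePort

# or extract just the value with jsonpath
kubectl get svc helloworld-service -o jsonpath='{.spec.ports[0].nodePort}'
```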
I've deployed an Apache httpd server in a container and am attempting to expose it externally via a LoadBalancer. Although I can log on to the local host and get the expected response (curl -X GET localhost), when I try to access the external URL exposed by the load balancer I get an empty reply from the server:
curl -X GET ad8d14ea0ba9611e8b2360afc35626a3-553331517.us-east-1.elb.amazonaws.com:5000
curl: (52) Empty reply from server
Any idea what I am missing - is there some kind of additional redirection going on that I'm unaware of?
The yaml is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: apache
  template:
    metadata:
      name: apachehost
      labels:
        pod: apache
    spec:
      containers:
      - name: apache
        image: myrepo/apache2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  labels:
    app: apache
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
  - name: port1
    port: 5000
    targetPort: 80
1. Check that your pod is running.
2. Check AWS IAM and the security group; port 5000 may not be open to the public. Use a curl command from the Kubernetes master and check the port.
3. Share the pod logs.
Check your AWS load balancer: port 5000 must be open in the security group for the LB, as an inbound rule.
Check the inbound rules of the load balancer.
If your pods are running on Fargate the load balancer service will not work: https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html
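For Fargate pods, one common approach (an assumption based on the AWS Load Balancer Controller documentation, not something stated in the question) is a Network Load Balancer with IP targets, since Fargate pods have no node ports to target:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: apache
  annotations:
    # requires the AWS Load Balancer Controller to be installed
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    pod: apache
  ports:
  - port: 5000
    targetPort: 80
```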