EKS, how to create 2 LoadBalancer services sharing a static IP

I have a 1.18 EKS cluster with two services on different protocols and ports, e.g.
proc-tcp   ClusterIP   10.100.200.247   <none>   4060/TCP   26h
proc-udp   ClusterIP   10.100.200.20    <none>   4800/UDP   26h
How do I convert or recreate them as type LoadBalancer so they share a static IP?

To create a load balancer, you need to set type: LoadBalancer instead of type: ClusterIP in each Service spec.
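A minimal sketch of the TCP service after conversion, assuming an internet-facing NLB with a pre-allocated Elastic IP (the selector and the EIP allocation ID here are placeholders; the eip-allocations annotation expects one allocation per subnet the NLB spans):

apiVersion: v1
kind: Service
metadata:
  name: proc-tcp
  annotations:
    # Assumption: use an NLB so a static Elastic IP can be attached
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Hypothetical EIP allocation ID; replace with your own
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0123456789abcdef0"
spec:
  type: LoadBalancer
  ports:
    - port: 4060
      protocol: TCP
  selector:
    app: proc   # placeholder: must match your pods' labels

Note that each LoadBalancer Service normally provisions its own load balancer, so truly sharing one static IP between the TCP and the UDP service may require putting both ports behind a single Service, and mixed TCP/UDP on one LoadBalancer Service is not supported on 1.18.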

Related

What is the quickest way to expose a LoadBalancer service over HTTPS?

I have a simple web server running in a single pod on GKE. I have also exposed it using a LoadBalancer service. What is the easiest way to make this pod accessible over HTTPS?
gcloud container clusters list
NAME           LOCATION       MASTER_VERSION    MASTER_IP    MACHINE_TYPE   NODE_VERSION      NUM_NODES   STATUS
personal.....  us-central1-a  1.19.14-gke.1900  34.69.....   e2-medium      1.19.14-gke.1900  1           RUNNING
kubectl get service
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.....      <none>        443/TCP        437d
my-service   LoadBalancer   10.....      34.71......   80:30066/TCP   12d
kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
nodeweb-server-9pmxc   1/1     Running   0          2d15h
EDIT: I also have a domain name registered if it's easier to use that instead of https://34.71....
First, your cluster should have Config Connector installed and functioning properly.
Start by deleting your existing load balancer service: kubectl delete service my-service
Create a static IP:
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: <name your IP>
spec:
  location: global
Retrieve the created IP: kubectl get computeaddress <the named IP> -o jsonpath='{.spec.address}'
Create a DNS "A" record that maps your registered domain to the created IP address. Check with nslookup <your registered domain name> to ensure the correct IP is returned.
Update your load balancer service spec by inserting the following line after type: LoadBalancer: loadBalancerIP: "<the created IP address>"
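For reference, the updated Service might look like this (a sketch; the port and selector are assumptions based on the outputs above):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: "<the created IP address>"
  ports:
    - port: 80
      targetPort: 80        # assumption: the web server pod listens on 80
  selector:
    app: nodeweb-server     # placeholder: must match the pod's labels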
Re-create the service and check that kubectl get service my-service shows the EXTERNAL-IP set correctly.
Create a ManagedCertificate:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: <name your cert>
spec:
  domains:
    - <your registered domain name>
Then create the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name your ingress>
  annotations:
    networking.gke.io/managed-certificates: <the named certificate>
spec:
  rules:
    - host: <your registered domain name>
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: my-service
                port:
                  number: 80
Check with kubectl describe ingress <named ingress> and look at the rules and annotations sections.
NOTE: It can take up to 15 minutes for the load balancer to be fully ready. Test with curl https://<your registered domain name>.

Which is the correct IP to run API tests on a Kubernetes cluster

I have a Kubernetes cluster with a Service of type ClusterIP in front of my pods. Which is the correct IP to hit if I want to run integration tests: the IP: 10.102.222.181 or the Endpoints: 10.244.0.157:80,10.244.5.243:80?
for example:
Type:               ClusterIP
IP Families:        <none>
IP:                 10.102.222.181
IPs:                <none>
Port:               http 80/TCP
TargetPort:         80/TCP
Endpoints:          10.244.0.157:80,10.244.5.243:80
Session Affinity:   None
Events:             <none>
If your test runner is running inside the cluster, use the name: of the Service as a host name. Don't use any of these IP addresses directly. Kubernetes provides a DNS service that will translate the Service's name to its address (the IP: from the kubectl describe service output), and the Service itself just forwards network traffic to the Endpoints: (individual pod addresses).
If the test runner is outside the cluster, none of these DNS names or IP addresses are reachable at all. For basic integration tests, it should be enough to kubectl port-forward service/its-name 12345:80, and then you can use http://localhost:12345 to reach the service (actually a fixed single pod from it). This isn't a good match for performance or load tests, though; for those you'll either need to launch the tests from inside the cluster, or use a NodePort or LoadBalancer service to make the service accessible from outside.
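As a concrete sketch, assuming the Service is named its-name, lives in the default namespace, and serves plain HTTP:

# From a pod inside the cluster: use the Service's DNS name
curl http://its-name.default.svc.cluster.local/

# From outside the cluster: forward a local port to the Service
kubectl port-forward service/its-name 12345:80 &
curl http://localhost:12345/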
IPs in the Endpoints are individual pod IPs, which are subject to change when new pods are created to replace old ones. The ClusterIP is a stable IP that does not change unless you delete and recreate the Service, so the recommendation is to use the ClusterIP.
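You can see the difference by listing both while pods are replaced (a sketch; its-name is a placeholder Service name):

kubectl get endpoints its-name   # individual pod IPs; these change as pods come and go
kubectl get service its-name     # the ClusterIP; stays fixed for the Service's lifetime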

ActiveMQ consumer in AKS

I have an ActiveMQ consumer in AKS that I am trying to connect to an external service.
I have set up an AKS load balancer with a dedicated IP with the following rules, but it will not connect.
apiVersion: v1
kind: Service
metadata:
  name: mx-load-balancer
spec:
  loadBalancerIP: 1.1.1.1
  type: LoadBalancer
  ports:
    - name: activemq-port-61616
      port: 61616
      targetPort: 61616
      protocol: TCP
  selector:
    k8s-app: handlers-mx
Any ideas?
First of all, your loadBalancerIP is not a real one; you need to use the real IP of a static address you created. Second, for a Service of type LoadBalancer to pick up a static IP from a different resource group, you need to add an annotation:
annotations:
  service.beta.kubernetes.io/azure-load-balancer-resource-group: LB_RESOURCE_GROUP
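Put together, the Service might look like this (a sketch; the IP and resource group values are placeholders for your real static IP and the resource group it was created in):

apiVersion: v1
kind: Service
metadata:
  name: mx-load-balancer
  annotations:
    # Assumption: the static IP was pre-created in this resource group
    service.beta.kubernetes.io/azure-load-balancer-resource-group: LB_RESOURCE_GROUP
spec:
  type: LoadBalancer
  loadBalancerIP: <your real static IP>
  ports:
    - name: activemq-port-61616
      port: 61616
      targetPort: 61616
      protocol: TCP
  selector:
    k8s-app: handlers-mx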

Exposing kubernetes service outside the cluster for development purposes

Is it somehow possible to expose a Kubernetes service to the outside world?
I am currently developing an application which needs to communicate with a service. To do so I need to know the pod's IP and port, which I can get within the Kubernetes cluster through the service linked to it, but outside the cluster I seem unable to find it, or expose it.
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
spec:
  ports:
    - name: broker
      port: 9092
      protocol: TCP
      targetPort: kafka
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
I could containerize the application, put it in a pod, and run it within Kubernetes, but for fast development it seems tedious to go through all this just to test something as small as connectivity.
Is there some way I can expose the service, and thereby reach the application behind its selector?
In order to expose your Kubernetes service to the internet, you must change the ServiceType.
Your service is using the default, ClusterIP, which exposes the Service on a cluster-internal IP, making it reachable only within the cluster.
1 - If you use a cloud provider like AWS or GCP, the best option is the LoadBalancer Service type, which automatically exposes the service to the internet using the provider's load balancer.
Run:
kubectl expose deployment deployment-name --type=LoadBalancer --name=service-name
where deployment-name must be replaced by your actual deployment name, and the same goes for the desired service-name.
Wait a few minutes and the kubectl get svc command will give you the external IP and port:
owilliam@minikube:~$ kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        4d21h
nginx-service-lb   LoadBalancer   10.96.125.208   0.0.0.0       80:30081/TCP   36m
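If you prefer a declarative setup, a rough YAML equivalent of the kubectl expose command above might look like this (a sketch; the port and selector label are assumptions and must match your deployment):

apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  type: LoadBalancer
  ports:
    - port: 80           # assumption: the container serves on 80
      targetPort: 80
  selector:
    app: deployment-name   # placeholder: must match the pods' labels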
2 - If you are running Kubernetes locally (like Minikube), the best option is the NodePort Service type:
it exposes the service on a port of the cluster node (the hosting computer),
which is safer for testing purposes than exposing the service to the whole internet.
Run: kubectl expose deployment deployment-name --type=NodePort --name=service-name
where deployment-name must be replaced by your actual deployment name, and the same goes for the desired service-name.
Below are my outputs after exposing an Nginx webserver via NodePort, for your reference:
user@minikube:~$ kubectl get svc
NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP   10.96.0.1     <none>        443/TCP        4d21h
service-name   NodePort    10.96.33.84   <none>        80:31198/TCP   4s
user@minikube:~$ minikube service list
|----------------------|---------------------------|--------------|-----------------------------|
|      NAMESPACE       |           NAME            | TARGET PORT  |             URL             |
|----------------------|---------------------------|--------------|-----------------------------|
| default              | kubernetes                | No node port |                             |
| default              | service-name              |              | http://192.168.39.181:31198 |
| kube-system          | kube-dns                  | No node port |                             |
| kubernetes-dashboard | dashboard-metrics-scraper | No node port |                             |
| kubernetes-dashboard | kubernetes-dashboard      | No node port |                             |
|----------------------|---------------------------|--------------|-----------------------------|
user@minikube:~$ curl http://192.168.39.181:31198
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...//// suppressed output
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
user@minikube:~$
You can use NodePort or LoadBalancer type services, as mentioned in the other answers, or even an Ingress.
But since you are asking for development purposes only, I suggest starting a testing pod in the given namespace and checking connectivity from that pod; see the sketch after this list. You can get an interactive shell into a running pod with kubectl exec -it {PODNAME} -- /bin/sh
You can also try tools like:
- kubefwd
- squash
- stern
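A minimal sketch of such a throwaway testing pod, assuming it runs in the same namespace as the kafka-broker Service and that the busybox image's nc applet is available:

# Start a temporary interactive pod; it is deleted again on exit
kubectl run test-shell --rm -it --image=busybox --restart=Never -- /bin/sh
# From inside the pod, open a raw TCP connection to the service by DNS name
# (Ctrl-C to exit; a successful connect means the service is reachable)
nc kafka-broker 9092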
Use Service type NodePort or LoadBalancer; the latter is recommended if you are running in a cloud like Azure, AWS, or GCP.
See the example below:
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
spec:
  ports:
    - name: broker
      port: 9092
      protocol: TCP
      targetPort: kafka
  selector:
    app: kafka
  sessionAffinity: None
  type: NodePort
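Once applied, Kubernetes assigns (or honors) a node port in the 30000-32767 range, and the service is reachable at any node's IP on that port. A quick way to find both (a sketch):

kubectl get service kafka-broker -o jsonpath='{.spec.ports[0].nodePort}'   # the assigned node port
kubectl get nodes -o wide                                                  # node IPs to connect to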

AWS EKS: pod exposed with a NodePort service is not accessible over the node IP and exposed port

I've created a k8s cluster on AWS using EKS with Terraform, following this documentation: https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
I have one worker node. Note: everything is in private subnets.
I'm just running a Node.js hello-world container.
Code for pod definition
kind: Pod
metadata:
name: nodehelloworld.example.com
labels:
app: helloworld
spec:
containers:
- name: k8s-demo
image: wardviaene/k8s-demo
ports:
- name: nodejs-port
containerPort: 3000
Code for the service definition:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
    - port: 31001
      nodePort: 31001
      targetPort: nodejs-port
      protocol: TCP
  selector:
    app: helloworld
  type: NodePort
kubectl get pods shows that my pod is up and running:
nodehelloworld.example.com   1/1   Running   0   17h
kubectl get svc shows that my service is also created:
helloworld-service   NodePort   172.20.146.235   <none>   31001:31001/TCP   16h
kubectl describe svc helloworld-service shows the correct endpoint and correct selector.
So here is the problem:
When I hit NodeIP:exposed-port (which is 31001), I get "This site can't be reached".
Then I used kubectl port-forward podname 3000:3000
and I can reach the pod with curl -v localhost:3000.
I checked my security group; the inbound rule is 0-65535 from my CIDR block.
Is there anything else I'm missing?
If you are trying to connect from outside the cluster, then in the security group for the worker nodes you will have to add a custom TCP rule enabling inbound traffic on port 31001.
If that does not work, make sure you are able to reach the node at that IP at all; I usually connect using a VPN client.
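If the rule is missing, it could be added with the AWS CLI, for example (a sketch; the security group ID and CIDR are placeholders for your worker nodes' security group and your client network):

# Allow inbound TCP on the NodePort from your client network
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31001 \
  --cidr 10.0.0.0/16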
Fixed.
In my experience, NodePorts on AWS EKS did not behave the way they do on pure Kubernetes. When you expose
  - port: 31001
    targetPort: nodejs-port
    protocol: TCP
31001 is the ClusterIP port that gets exposed, not the node port.
To find the actual node port, describe your service and look for the NodePort field in the description; that is the port exposed on the node.
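Rather than reading the describe output by eye, the assigned port can also be pulled directly (a sketch using the service from the question):

# Prints the actual NodePort to use as <node-ip>:<port>
kubectl get service helloworld-service -o jsonpath='{.spec.ports[0].nodePort}'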