I have an EKS cluster running in one VPC and some EC2 instances in a legacy VPC; the two VPCs have peering between them.
I have an app on the EKS cluster that needs to be reachable from inside the cluster and also from the EC2 instances in the legacy VPC.
Do I need to create 2 Services for the app - one kind: ClusterIP for in-cluster communication and one kind: LoadBalancer for cross-VPC communication:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-type: external
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
OR can I create only one Service (kind: NodePort / ClusterIP / internal LB) for both in-cluster and cross-VPC communication?
What is the preferred way?
Thanks,
Do I need to create 2 Services for the app... can I create only one Service (kind: NodePort / ClusterIP / internal LB) for both in-cluster and cross-VPC communication?
You need only one Service in this case. A Service of type NodePort gives you a cluster IP (for connections within the k8s cluster network) plus a port reachable on every EC2 worker node. A Service of type LoadBalancer gives you a cluster IP too, plus the LB endpoint. As worker nodes come and go, a LoadBalancer gives you more flexibility, because you only ever deal with one known endpoint.
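For the cross-VPC case, a single Service of type LoadBalancer backed by an internal NLB covers both needs: in-cluster clients keep using the Service name (its cluster IP), while the EC2 instances in the peered VPC use the NLB's DNS name. A minimal sketch, assuming the AWS Load Balancer Controller is installed; my-app and the ports are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # "internal" keeps the NLB private, reachable over VPC peering only
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # placeholder
  ports:
  - port: 80                      # placeholder
    targetPort: 8080              # placeholder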
I have a Kubernetes cluster with Services of type ClusterIP in front of my pods. Which is the correct IP to hit if I want to run integration tests: the IP (10.102.222.181) or the Endpoints (10.244.0.157:80,10.244.5.243:80)?
for example:
Type: ClusterIP
IP Families: <none>
IP: 10.102.222.181
IPs: <none>
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.0.157:80,10.244.5.243:80
Session Affinity: None
Events: <none>
If your test runner is running inside the cluster, use the name: of the Service as a host name. Don't use any of these IP addresses directly. Kubernetes provides a DNS service that will translate the Service's name to its address (the IP: from the kubectl describe service output), and the Service itself just forwards network traffic to the Endpoints: (individual pod addresses).
If the test runner is outside the cluster, none of these DNS names or IP addresses are reachable at all. For basic integration tests, it should be enough to kubectl port-forward service/its-name 12345:80, and then you can use http://localhost:12345 to reach the service (actually a fixed single pod from it). This isn't a good match for performance or load tests, and you'll either need to launch these from inside the cluster, or to use a NodePort or LoadBalancer service to make the service accessible from outside.
The IPs in Endpoints are individual Pod IPs, which are subject to change as new pods are created and replace old ones. The ClusterIP is a stable IP that does not change unless you delete and recreate the Service. So the recommendation is to use the ClusterIP.
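To make the in-cluster recommendation concrete, here is a hedged sketch of a one-off test pod that reaches the Service by name; my-service and the default namespace are placeholders for your own Service:
apiVersion: v1
kind: Pod
metadata:
  name: integration-test          # hypothetical test-runner pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: curlimages/curl        # any image with curl works
    # Cluster DNS resolves the Service name to its ClusterIP
    # (10.102.222.181 in the output above).
    command: ["curl", "-s", "http://my-service.default.svc.cluster.local/"]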
I am trying to understand how load balancing works in Istio.
An Istio DestinationRule defines rules for traffic balancing between pods.
A k8s Service similarly manages traffic load balancing between pods.
A DestinationRule defines a host, and a k8s Service also defines a host.
But without the k8s Service, requests fail with HTTP code 503.
How does a k8s Service relate to a DestinationRule?
Kubernetes service
A Kubernetes Service of type ClusterIP uses kube-proxy's iptables rules to distribute requests.
The documentation says:
By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
More about it here.
Destination rule
As mentioned here
You can think of virtual services as how you route your traffic to a given destination, and then you use destination rules to configure what happens to traffic for that destination. Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s “real” destination.
Every HTTP route must have a target: a route, or a redirect. A route is a forwarding target, and it can point to one of several versions of a service described in DestinationRules. Weights associated with the service version determine the proportion of traffic it receives.
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.
And here
While a virtual service matches on a rule and evaluates a destination to route the traffic to, destination rules define available subsets of the service to send the traffic.
For example, if you have a service that has multiple versions running at a time, you can create destination rules to define routes to those versions. Then use virtual services to map to a specific subset defined by the destination rules or split a percentage of the traffic to particular versions.
503 without kubernetes service
But without the k8s Service, requests fail with HTTP code 503.
It's failing because the host specified in both the virtual service and the destination rule (a Kubernetes Service) does not exist.
For example, take a look at this virtual service and destination rule.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - name: "reviews-v2-routes"
    match:
    - uri:
        prefix: "/wpcatalog"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local   <---
        subset: v2
  - name: "reviews-v1-route"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local   <---
        subset: v1
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews.prod.svc.cluster.local   <---
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
If you check the host fields, you will see that what is specified there is your Kubernetes Service, and the setup won't work without it.
Additionally, when setting route rules to direct traffic to specific versions (subsets) of a service, care must be taken to ensure that the subsets are available before they are used in the routes. Otherwise, calls to the service may return 503 errors during a reconfiguration period.
More about it here.
A DestinationRule defines a host, and a k8s Service also defines a host.
The destination rule's host is your Kubernetes Service, and the Kubernetes Service's endpoints are your pods.
You may be wondering: why do I need a Service at all?
As mentioned here.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
How the Kubernetes Service relates to the DestinationRule
I couldn't find exact documentation on how this works, so I will explain how I understand it.
You need the Kubernetes Service so that the virtual service and destination rule can actually work.
Since the Kubernetes Service uses kube-proxy's iptables rules to distribute requests, I assume that an Istio destination rule can override them with its own rules and apply them through the Envoy sidecar, because all traffic that your mesh services send and receive (data-plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
More about it here.
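For example, the load-balancing behaviour itself can be overridden per host through the trafficPolicy block of a DestinationRule. A hedged sketch reusing the reviews host from above; LEAST_CONN is only an illustration of a non-default algorithm:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN   # Envoy applies this instead of kube-proxy's default round-robin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2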
Additional resources:
https://istio.io/latest/docs/reference/config/networking/destination-rule/#Subset
https://istio.io/latest/docs/examples/bookinfo/#apply-default-destination-rules
https://istio.io/latest/docs/concepts/traffic-management/#load-balancing-options
Let me know if you have any more questions.
I have an Apache Camel application deployed on Kubernetes. My application is exposed in the Kubernetes cluster and is accessible at http://192.168.99.100:31750. How do I make it accessible from outside?
I suggest you do 2 things:
run an NGINX Ingress Controller in your minikube and expose it with a NodePort Service, meaning it will be available on a high port, much like your service is right now
run HAProxy on the host that runs minikube to forward ports 80/443 to the high ports on minikube (i.e. 80->32080, 443->32443)
That way you can expose your Ingress controller on standard ports and expose your services on those ports with regular Kubernetes Ingress definitions, as in the sketch below.
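A hedged sketch of such an Ingress definition; the name, serviceName, and servicePort are placeholders for your Camel app's existing Service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: camel-app                 # placeholder
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: camel-app  # placeholder: your existing Service
          servicePort: 8080       # placeholder: your Service port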
I am running a web site with Kubernetes on Google Cloud. At the moment, everything is working well - through HTTP. But I need HTTPS. I have several services, and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP and using that LB in web.yaml, which I create. Creating the service gets stuck with:
Error creating load balancer (will retry): Failed to create load
balancer for service default/web: requested ip <IP> is
neither static nor assigned to LB
aff3a4e1f487f11e787cc42010a84016(default/web): <nil>
According to GCP, however, my IP is static. And I cannot find the hashed LB name anywhere; it should be assigned to ssl-LB anyway. How do I assign this properly?
More details:
Here are the contents of web.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    ...
spec:
  type: LoadBalancer
  loadBalancerIP: <RESERVED STATIC IP>
  ports:
  - port: 443
    targetPort: 7770
  selector:
    ...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        ...
    spec:
      containers:
      - name: web
        image: gcr.io/<PROJECT>/<IMAGE NAME>
        ports:
        - containerPort: 7770
Since you have not mentioned this already, I'm just assuming you're using Google Container Engine (GKE) for your Kubernetes setup.
In the service resource manifest, if you set the Type to LoadBalancer, Kubernetes on GKE automatically sets up Network load balancing (L4 Load balancer) using GCE. You will have to terminate connections in your pod using your own custom server or something like nginx/apache.
If your goal is to set up a (HTTP/HTTPS) L7 load balancer (which looks to be the case), it will be simpler and easier to use the Ingress resource in Kubernetes (starting with v1.1). GKE automatically sets up a GCE HTTP/HTTPS L7 load balancing with this setup.
You will be able to add your TLS certificates which will get provisioned on the GCE load balancer automatically by GKE.
This setup has the following advantages:
Specify services per URL path and port (it uses URL Maps from GCE to configure this).
Set up and terminate SSL/TLS on the GCE load balancer (it uses Target proxies from GCE to configure this).
GKE will automatically also configure the GCE health checks for your services.
Your responsibility will be to handle the backend service logic to handle requests in your pods.
More info available on the GKE page about setting up HTTP load balancing.
Remember that when using GKE, it automatically uses the available GCE load balancer support for both the use cases described above and you will not need to manually set up GCE load balancing.
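As a hedged sketch of that Ingress approach (names, secret, and port are placeholders; web-tls must be a kubernetes.io/tls Secret holding your certificate and key, and the GCE Ingress controller expects the backing Service to be of type NodePort):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress               # placeholder
spec:
  tls:
  - secretName: web-tls           # placeholder TLS secret
  backend:
    serviceName: web
    servicePort: 7770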
I set up a cluster via gcloud container engine, where I have deployed my pods with a Node.js server running on them. I am using a LoadBalancer service and a static IP for routing the traffic across these instances. Everything works perfectly, but I forgot to specify write/read permission for the Google Storage API, and my server cannot save files to bucket storage.
According to this answer there is no way to change permissions (scopes) for a cluster after it was created. So I created a new cluster with the correct permissions and re-deployed my containers. I would like to re-use the static IP I received from Google, tell the LoadBalancer to use the existing IP, and remove the old cluster. How do I do that? I really don't want to change DNS.
If you are using a type: LoadBalancer style service then you can use the loadBalancerIP field on the service.
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
  ...
If you are using an Ingress you can use an annotation on Google Cloud to set the IP address. Here you use the IP address name in Google Cloud rather than the IP address itself.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    "kubernetes.io/ingress.global-static-ip-name": my-ip-name
spec:
  ...