Which is the correct IP to run API tests on a Kubernetes cluster?

I have a Kubernetes cluster with Services of type ClusterIP in front of my pods. Which is the correct IP to target if I want to run integration tests: the IP (10.102.222.181) or the Endpoints (10.244.0.157:80, 10.244.5.243:80)?
For example:
Type: ClusterIP
IP Families: <none>
IP: 10.102.222.181
IPs: <none>
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.0.157:80,10.244.5.243:80
Session Affinity: None
Events: <none>

If your test runner is running inside the cluster, use the name: of the Service as a host name. Don't use any of these IP addresses directly. Kubernetes provides a DNS service that will translate the Service's name to its address (the IP: from the kubectl describe service output), and the Service itself just forwards network traffic to the Endpoints: (individual pod addresses).
If the test runner is outside the cluster, none of these DNS names or IP addresses are reachable at all. For basic integration tests, it should be enough to kubectl port-forward service/its-name 12345:80, and then you can use http://localhost:12345 to reach the service (actually a fixed single pod from it). This isn't a good match for performance or load tests, and you'll either need to launch these from inside the cluster, or to use a NodePort or LoadBalancer service to make the service accessible from outside.
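For a quick local smoke test, that might look like this (a sketch; the Service name its-name and the local port 12345 are just the placeholders from the command above):
# forward local port 12345 to port 80 of the Service
kubectl port-forward service/its-name 12345:80
# in another terminal, point curl (or your test suite's base URL) at the forwarded port
curl http://localhost:12345/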

The IPs in Endpoints are individual Pod IPs, which are subject to change as new Pods are created to replace old ones. The ClusterIP is a stable IP that does not change unless you delete and recreate the Service. So the recommendation is to use the ClusterIP.
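If you want to sanity-check this from inside the cluster, a throwaway client pod works (a sketch; the curlimages/curl image and the pod name api-test are assumptions, and the ClusterIP is the one from the describe output above):
# run a one-off pod in the cluster and hit the Service's ClusterIP (or its DNS name)
kubectl run api-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://10.102.222.181:80/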

Related

EKS - internal LB or NodePort

I have an EKS cluster running in one VPC and some EC2 instances in a legacy VPC; the two VPCs are peered.
An app on the EKS cluster needs to be reachable from inside the cluster and also from the EC2 instances in the legacy VPC.
Do I need to create two Services for the app - one kind: ClusterIP for in-cluster communication and one kind: LoadBalancer for cross-VPC communication:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-type: external
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
OR can I create only one Service of kind NodePort / ClusterIP / internal LB for both in-cluster and cross-VPC communication?
What is the preferred way?
Thanks.
Do I need to create two Services for the app... or can I create only one Service of kind NodePort / ClusterIP / internal LB for both in-cluster and cross-VPC communication?
You need only one Service in this case. With type NodePort you get a cluster IP (for connections within the Kubernetes cluster network) plus a port accessible on every EC2 worker node. With type LoadBalancer you get a cluster IP too, plus the LB endpoint. As worker nodes come and go, a LoadBalancer gives you more flexibility because you only deal with a known endpoint.
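As a sketch of the single-Service option (assuming the AWS Load Balancer Controller, since that is where the question's annotations come from; the app name, selector and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-app                # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # internal scheme so the NLB is only reachable over the VPC peering
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
spec:
  type: LoadBalancer
  selector:
    app: my-app               # placeholder
  ports:
  - port: 80                  # placeholder
    targetPort: 8080          # placeholder
    protocol: TCP
In-cluster clients keep using the Service's ClusterIP/DNS name as usual, while the EC2 instances in the legacy VPC use the internal NLB's DNS name.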

How Istio DestinationRule related to Kubernetes Service?

I am trying to understand how load balancing works in Istio.
An Istio DestinationRule defines rules for balancing traffic between pods.
A Kubernetes Service similarly manages load balancing of traffic between pods.
The DestinationRule defines a host, and the Kubernetes Service defines a host.
But without the Kubernetes Service, requests fail with HTTP code 503.
How does the Kubernetes Service relate to the DestinationRule?
Kubernetes service
A Kubernetes Service of type ClusterIP uses kube-proxy's iptables rules to distribute requests.
The documentation says:
By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.
More about it here.
Destination rule
As mentioned here
You can think of virtual services as how you route your traffic to a given destination, and then you use destination rules to configure what happens to traffic for that destination. Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s “real” destination.
Every HTTP route must have a target: a route, or a redirect. A route is a forwarding target, and it can point to one of several versions of a service described in DestinationRules. Weights associated with the service version determine the proportion of traffic it receives.
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.
And here
While a virtual service matches on a rule and evaluates a destination to route the traffic to, destination rules define available subsets of the service to send the traffic.
For example, if you have a service that has multiple versions running at a time, you can create destination rules to define routes to those versions. Then use virtual services to map to a specific subset defined by the destination rules or split a percentage of the traffic to particular versions.
503 without a Kubernetes Service
But without the Kubernetes Service, requests fail with HTTP code 503.
It's failing because the host specified in the virtual service and the destination rule does not exist.
For example, take a look at this virtual service and destination rule.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - name: "reviews-v2-routes"
    match:
    - uri:
        prefix: "/wpcatalog"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local   <---
        subset: v2
  - name: "reviews-v1-route"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local   <---
        subset: v1

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews.prod.svc.cluster.local   <---
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
If you check the host field, you will see that your Kubernetes Service is specified there, and it won't work without it.
Additionally, when setting route rules to direct traffic to specific versions (subsets) of a service, care must be taken to ensure that the subsets are available before they are used in the routes. Otherwise, calls to the service may return 503 errors during a reconfiguration period.
More about it here.
The DestinationRule defines a host, and the Kubernetes Service defines a host.
The destination rule's host is your Kubernetes Service, and the Kubernetes Service's endpoints are your Pods.
You may be wondering: why do I need a Service at all?
As mentioned here.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
How the Kubernetes Service relates to the DestinationRule
I couldn't find exact documentation on how this works, so I will explain how I understand it.
You need the Kubernetes Service so that the virtual service and destination rule can actually work.
Since the Kubernetes Service uses kube-proxy's iptables rules to distribute requests, I assume that the Istio destination rule can override them with its own rules and apply them through the Envoy sidecar, because all traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
More about it here.
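For illustration, the kind of override it can apply looks like this (a sketch reusing the reviews host from the example above; the LEAST_CONN choice is only an assumption for demonstration):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews.prod.svc.cluster.local   # the Kubernetes Service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN                 # Envoy-level load balancing instead of kube-proxy round robin
  subsets:
  - name: v1
    labels:
      version: v1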
Additional resources:
https://istio.io/latest/docs/reference/config/networking/destination-rule/#Subset
https://istio.io/latest/docs/examples/bookinfo/#apply-default-destination-rules
https://istio.io/latest/docs/concepts/traffic-management/#load-balancing-options
Let me know if you have any more questions.

redis cluster K8S connection

I'm running a redis-cluster in K8S:
kubectl get services -o wide
redis-cluster ClusterIP 10.97.31.167 <none> 6379/TCP,16379/TCP 22h app=redis-cluster
When connecting to the cluster IP from the node itself, the connection works fine:
redis-cli -h 10.97.31.167 -c
10.97.31.167:6379> set some_val 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
OK
Is there some way I can access the redis server from my local development VM without exposing every single pod as a service?
When deploying my application to run inside the cluster itself (later, in production), should I use the cluster IP too, or should I use the internal IPs of the pods as the master IPs of the redis-master servers?
Simple forwarding to the remote machine won't work:
devvm: ssh -L 6380:10.97.31.167:6379 -i user.pem admin@k8snode.com
On dev VM:
root@devvm:~# redis-cli -h 127.0.0.1 -p 6380 -c
127.0.0.1:6380> set jaheller 1
-> Redirected to slot [11662] located at 10.244.1.9:6379
The Redis connection times out at this point.
I believe in all scenarios you just need to expose the service using a Kubernetes Service object of type:
ClusterIP (in case you are consuming it inside the cluster)
NodePort (for external access)
LoadBalancer (for public access, if you are on a cloud provider)
NodePort with an external load balancer (for public external access if you are on local infrastructure)
You don't need to worry about individual pods. The Service will take care of them; see the sketch after the docs link below.
Docs:
https://kubernetes.io/docs/concepts/services-networking/service/
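As a sketch of the NodePort option (assuming the redis-cluster pods carry the label app=redis-cluster, as the kubectl get services output above suggests; the Service name and the nodePort value are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-nodeport   # placeholder name
spec:
  type: NodePort
  selector:
    app: redis-cluster
  ports:
  - name: client
    port: 6379
    targetPort: 6379
    nodePort: 30679              # placeholder, must be in the 30000-32767 range
You would then connect from the dev VM with redis-cli -h <any-node-ip> -p 30679 -c.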
I don't think you need any port redirection. You have to deploy an ingress controller on your cluster though, e.g. the NGINX ingress controller.
Then you just set up a single Ingress resource with exposed access, which will serve the cluster traffic.
Here is an example of Ingress Controller to Access cluster service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-cluster-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: redis-cluster
          servicePort: 6379
You may want to follow a step-by-step guide.

IP/hostname whitelisting for a call to an API from OpenShift

This is more of a how-to question, as I am still exploring OpenShift.
We have an orchestrator running on OpenShift which calls a REST API written in Flask, hosted on Apache/RHEL.
While our endpoint is token-authenticated, we wanted to add a second level of restriction by allowing access only from a whitelisted set of source hosts.
But by design OpenShift can place a container on any (number of) servers across its cluster.
What is the best way to go about whitelisting traffic from such a cluster of machines?
I tried to take a look at an external LoadBalancer for the orchestrator service:
  clusterIP: 172.30.65.163
  externalIPs:
  - 10.198.40.123
  - 172.29.29.133
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.198.40.123
  ports:
  - nodePort: 30768
    port: 5023
    protocol: TCP
    targetPort: 5023
  selector:
    app: dbrun-x2
    deploymentconfig: dbrun-x2
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 172.29.29.133
So what I am unsure about is: what IP should I expect to see on the other side [my API's Apache access logs] with this setup?
Or does this LoadBalancer act as a gateway only for incoming calls to OpenShift?
Sorry about the long post - would appreciate some input.

How to re-assign a static IP address from one cluster to another in Google Container Engine

I set up a cluster via Google Container Engine (gcloud), where I have deployed my pods with a Node.js server running on them. I am using a LoadBalancer Service and a static IP for routing traffic across these instances. Everything works perfectly, but I forgot to specify write/read permissions for the Google Storage API, so my server cannot save files to bucket storage.
According to this answer there is no way to change permissions (scopes) for a cluster after it was created. So I created a new cluster with the correct permissions and re-deployed my containers. I would like to re-use the static IP I received from Google, tell the LoadBalancer to use the existing IP, and remove the old cluster. How do I do that? I really don't want to change DNS.
If you are using a type: LoadBalancer style service then you can use the loadBalancerIP field on the service.
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
  ...
If you are using an Ingress you can use an annotation on Google Cloud to set the IP address. Here you use the IP address name in Google Cloud rather than the IP address itself.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    "kubernetes.io/ingress.global-static-ip-name": my-ip-name
spec:
  ...
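To find the values to plug in, a quick check with gcloud might look like this (my-ip-name comes from the annotation above; the region is a placeholder, and note that a type: LoadBalancer Service expects a regional address while the Ingress annotation expects a global one referenced by name):
# list reserved static addresses (name, address, region/global)
gcloud compute addresses list
# print the literal IP to put into loadBalancerIP (regional address assumed)
gcloud compute addresses describe my-ip-name --region us-central1 --format='value(address)'
# for the Ingress annotation, reference the global address by name instead
gcloud compute addresses describe my-ip-name --global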