How to share an AWS NLB between two EKS Services? - amazon-eks

We have a cross-AZ deployment in an EKS cluster inside an AWS region where every AZ is independent, meaning that components do not talk to components that are not in the same AZ.
We are using Contour as our ingress and have different DaemonSets, one per AZ. As a result, we also have a separate Service defined for each DaemonSet.
When deploying the Services to EKS, two different NLBs are created.
We would like to have only one NLB that will be shared between the Services.
The question is: can this be achieved, and if so, how?

Yes, you should be able to do this by using an appropriate selector in your Service.
In each DaemonSet, you set labels on the Pods via the Pod template.
E.g.
template:
  metadata:
    labels:
      app: contour
      az: az-1
and
template:
  metadata:
    labels:
      app: contour
      az: az-2
Now, in your LoadBalancer Service, you need to use a selector that matches the Pods in both of your DaemonSets, e.g. app: contour
Example Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  selector:
    app: contour # this needs to match the Pods in all your DaemonSets
  ports:
    - protocol: TCP
      port: 80
  type: LoadBalancer
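One caveat worth adding for the AZ-independence requirement: with a single shared NLB, a request can otherwise be forwarded to a Contour pod in a different AZ. A minimal sketch of one way to keep traffic node-local, assuming instance targets and the NLB default of cross-zone load balancing being disabled, is to set externalTrafficPolicy: Local on the same Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  # Only route to pods on the node that received the traffic, so each AZ's
  # NLB endpoint keeps serving that AZ's own DaemonSet pods.
  externalTrafficPolicy: Local
  selector:
    app: contour
  ports:
    - protocol: TCP
      port: 80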

Related

How IngressRoute is hooked up to Traefik's ingress controller

I am learning Traefik and IngressRoute. The thing that confuses me most is how the two parts are connected together.
After deploying Traefik and my own service, I can simply create the following IngressRoute to make it work:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-service-ir
  namespace: my-service-ns
spec:
  entryPoints:
    - web
  routes:
    - match: Path(`/`)
      kind: Rule
      services:
        - name: my-service
          port: 8000
But the IngressRoute shares nothing with Traefik: it is not in the same namespace, there is no selector, etc. It seems to me that the IngressRoute can magically find Traefik and get applied to it. I am curious what happens behind the scenes.
Thanks
When you deploy Traefik in the Kubernetes cluster, you apply the rbac-k8s manifests like the ones here. If you use Helm, these are all present under the hood.
Those manifests register the new resource types, i.e. IngressRoute, as CustomResourceDefinitions.
The RBAC rules are applied at cluster level, as you can see in the linked ClusterRole. This gives Traefik cluster-level privileges to watch those resources in every namespace, which is the reason you don't see anything special in the namespace itself.
You can check out the sample task here, which will shed some more light on the matter.
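For illustration, here is a trimmed-down sketch of the kind of ClusterRole those manifests contain (the name and the exact resource list below are assumptions based on Traefik v2; the real rbac-k8s file also grants access to Services, Endpoints, Secrets and the other Traefik CRDs):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
  # Cluster-wide read access to the Traefik CRDs is what lets Traefik pick up
  # an IngressRoute from any namespace without a selector or other wiring.
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutes
      - middlewares
      - traefikservices
    verbs:
      - get
      - list
      - watch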

HTTPS Ingress with Istio and SDS not working (returns 404) when I configure multiple Gateways

When I configure multiple (gateway - virtual service) pairs in a namespace, each pointing to basic HTTP services, only one service becomes accessible. Calls to the other (typically, the second one configured) return 404. If the first gateway is deleted, the second service then becomes accessible.
I raised a GitHub issue a few weeks ago (https://github.com/istio/istio/issues/20661) that contains all my configuration, but no response to date. Does anyone know what I'm doing wrong (if anything)?
Based on that GitHub issue:
The gateway port names have to be unique, if they are sharing the same port. That's the only way we differentiate different RDS blocks. We went through this motion earlier as well. I wouldn't rock this boat unless absolutely necessary.
More about the issue here
I checked the Istio documentation, and in fact, if you configure multiple gateways, the port name of the first one is https, but the second is https-bookinfo.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "httpbin.example.com"
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https-bookinfo
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-bookinfo-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-bookinfo-certs/tls.key
    hosts:
    - "bookinfo.com"
EDIT
That's weird, but I have another idea.
There is a GitHub pull request which has the following line in Pilot:
routeName := gatewayRDSRouteName(s, config.Namespace)
This change adds namespace scoping to Gateway port names by appending namespace suffix to the HTTPS RDS routes. Port names still have to be unique within the namespace boundaries, but this change makes adding more specific scoping rather trivial.
Could you try making 2 namespaces like in the example below?
EXAMPLE
apiVersion: v1
kind: Namespace
metadata:
  name: httpbin
  labels:
    name: httpbin
    istio-injection: enabled
---
apiVersion: v1
kind: Namespace
metadata:
  name: nodejs
  labels:
    name: nodejs
    istio-injection: enabled
And deploy everything (Deployment, Service, VirtualService, Gateway) in the proper namespace, and let me know if that works?
Could you try changing the hosts from "*" to some specific names? It's the only thing that comes to my mind besides trying serverCertificate and privateKey, but from the comments I assume you have already tried that.
Let me know if that helps.

Deploy the same deployment several times in Kubernetes

I intend to deploy my online service that depends on Redis servers in Kubernetes. So far I have:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "redis"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          protocol: TCP
I can also expose redis as a service with:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    app: redis
With this, I can run a single pod with a Redis server and expose it.
However, my application needs to contact several Redis servers (this can be configured, but ideally should not change live). It does care which Redis server it talks to, so I can't just use replicas and expose them as a single Service, as the Service abstraction would not let me know which instance I am talking to. I understand that the need for my application to know this hinders scalability, and I am happy to lose some flexibility as a result.
I thought of deploying the same Deployment several times and declaring a Service for each, but I could not find anything to do that in the Kubernetes documentation. I could of course copy-paste my Deployment and Service YAML files and add a suffix to each, but this seems silly and way too manual (and frankly too complex).
Is there anything in Kubernetes to help me achieve my goal?
You should look into pet sets. A pet set gets a unique, but determinable, name per instance, like redis-0, redis-1, redis-2.
They are also treated as pets, as opposed to Deployment pods, which are treated as cattle.
Be advised that pet sets are harder to upgrade, delete and handle in general, but that's the price of the reliability and determinability you gain.
Your Deployment as a PetSet:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          protocol: TCP
Also consider using volumes to make it easier to access data in the pods.
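For reference, PetSet was later renamed StatefulSet; a minimal sketch of the same workload on a current cluster (the headless Service name redis matches the serviceName above, and the exact fields may vary with your Kubernetes version) would look roughly like this:
# Headless Service giving each pod a stable DNS name
# (redis-0.redis, redis-1.redis, ...).
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None
  selector:
    app: redis
  ports:
  - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379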

Return http busy code from google cluster managed by kubernetes

I have the following simple cluster configuration: 3 nodes on 3 machines. On each node I have 1 pod. Each pod has a readinessProbe. All these pods are exposed through one Service object of type NodePort. On top of that object I have the following Ingress object:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dess-ingress
spec:
  backend:
    serviceName: dess-index
    servicePort: 2280
This configuration works fine, but I want to return an HTTP busy status when all the readinessProbes fail and the cluster cannot handle more requests. How can I do that? And a related question: Google Compute Engine allows you to specify backend service capacity based on CPU utilisation or requests per second for load-balancing purposes. Can I do the same with the Kubernetes Service object?

get pods belonging to a kubernetes replication controller

I'm wondering whether there is a way, using the Kubernetes API, to get the details of the pods that belong to a given replication controller. I've looked at the reference, and the only way as I see it is getting the pods list and going through each one, checking whether it belongs to a certain RC by analysing the 'annotations' section. That is again a hard job, since the JSON specifies the whole 'kubernetes.io/created-by' part as a single string.
Every Replication Controller has a selector which defines the set of pods managed by it:
selector:
  label_name_1: some_value
  label_name_2: another_value
You can use the selector to get all the pods with a corresponding set of labels:
https://k8s.example.com/api/v1/pods?labelSelector=label_name_1%3Dsome_value,label_name_2%3Danother_value
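If you prefer kubectl over calling the API directly, the equivalent query with the example labels above would be:
kubectl get pods --selector=label_name_1=some_value,label_name_2=another_value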
To get the details of pods belonging to a particular replication controller, we need to include a selector field in the YAML file that defines the replication controller, and matching label fields in the template of the pod to be created. An example replication controller YAML file is given below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
To list out the pod names use the command:
pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
In the above command, the --output=jsonpath option specifies the expression that extracts just the name of each pod.
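As a variant, you can also read the selector off the replication controller itself instead of hard-coding it; the sketch below assumes the single-label selector (app: nginx) from the example above:
# Pull the RC's selector value, then list the matching pod names.
selector=$(kubectl get rc nginx --output=jsonpath={.spec.selector.app})
kubectl get pods --selector=app=$selector --output=jsonpath={.items..metadata.name}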