get pods belonging to a kubernetes replication controller - api

I'm wondering whether there is a way, using the Kubernetes API, to get the details of the pods that belong to a given replication controller. I've looked at the reference, and the only way I can see is to get the full pods list and check each pod for whether it belongs to a certain RC by analysing the 'annotations' section. That is a tedious job, since the JSON specifies the whole 'kubernetes.io/created-by' part as a single string.

Every Replication Controller has a selector which defines the set of pods managed by it:
selector:
  label_name_1: some_value
  label_name_2: another_value
You can use the selector to get all the pods with a corresponding set of labels:
https://k8s.example.com/api/v1/pods?labelSelector=label_name_1%3Dsome_value,label_name_2%3Danother_value
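If you don't want to handle API authentication by hand, the same query can be issued through kubectl proxy; a small sketch using the example labels above:

kubectl proxy &    # serves the cluster API unauthenticated on 127.0.0.1:8001
curl 'http://localhost:8001/api/v1/pods?labelSelector=label_name_1%3Dsome_value,label_name_2%3Danother_value'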

To get the details of pods belonging to a particular replication controller, include a selector field in the YAML file that defines the replication controller, and matching labels in the template of the pods it creates. An example replication controller YAML file is given below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
To list the pod names, use the command:
pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
In the above command, the --output=jsonpath option supplies an expression that extracts just the name of each pod.
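If you don't know the labels up front, here is a minimal sketch that derives them from the controller itself (assuming the RC is named nginx and selects on an app label, as in the example above):

app=$(kubectl get rc nginx --output=jsonpath='{.spec.selector.app}')
pods=$(kubectl get pods --selector=app=$app --output=jsonpath='{.items..metadata.name}')
echo $pods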

Related

how ingressroute is hooked up to traefik's ingress controller

I am learning Traefik and IngressRoute. The thing that confuses me the most is how the two parts are connected.
After deploying Traefik and my own service, I can simply create the following IngressRoute to make it work:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-service-ir
  namespace: my-service-ns
spec:
  entryPoints:
    - web
  routes:
    - match: Path(`/`)
      kind: Rule
      services:
        - name: my-service
          port: 8000
But the IngressRoute shares nothing with Traefik: it is not in the same namespace, there is no selector, etc. It seems the IngressRoute can magically find Traefik and be applied to it. I am curious what happens behind the scenes.
Thanks
When you deploy Traefik in the Kubernetes cluster, you apply the rbac-k8s manifests like here. If you use Helm, these are all present under the hood.
Those manifests register the new resource types, i.e. IngressRoute over here, as CustomResourceDefinitions, and the accompanying RBAC rules are applied at cluster level, as you can see in the linked ClusterRole. This gives Traefik cluster-level privileges to watch IngressRoute objects in every namespace, which is why you don't see anything special in the namespace itself.
You can check out the sample task here, which will shed some more light on the matter.
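For illustration, an excerpt of the kind of ClusterRole those manifests contain; this is a trimmed sketch based on the shape of Traefik's published RBAC, and the exact rules vary by version:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
  # core resources Traefik needs to resolve backends
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  # the CRD-backed types, watched across all namespaces
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes", "middlewares", "traefikservices"]
    verbs: ["get", "list", "watch"]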

Should Tekton Dashboard be deployed on the root path?

I'm trying Tekton on a Kind cluster and have successfully configured the Tekton Dashboard to work with Ingress rules. But I don't have a dedicated domain name, and am unlikely to have one later. This Tekton instance will be exposed on a subpath of another domain through another NGINX.
But the Tekton Dashboard doesn't seem to work on subpath locations. Exposed with Ingress path: / it works well, but if I change it to path: /tekton, it doesn't work.
So, is it designed to work only at the root path? Is there no support for serving it on a subpath?
P.S.
I'm going to use the Kind cluster for production too, as I do not have access to a Kubernetes cluster. This is a small service and we don't need scale, just CI/CD-as-code. And nowadays it seems all new CI/CD implementations are designed only for Kubernetes.
You can also use the following Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/[a-z1-9\-]*)$ $1/ redirect;
spec:
  rules:
    - http:
        paths:
          - path: /tekton-dashboard(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: tekton-dashboard
                port:
                  number: 9097
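A quick way to sanity-check that Ingress (the hostname below is a placeholder for wherever your ingress controller is exposed): the configuration-snippet should answer the bare subpath with a redirect to the trailing-slash form, and the rewrite-target should strip the /tekton-dashboard prefix before proxying to the Dashboard:

curl -sI http://your-host/tekton-dashboard | grep -i '^location:'   # expect: Location: .../tekton-dashboard/
curl -s http://your-host/tekton-dashboard/ | head                   # should return the Dashboard index page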
Tekton Dashboard does support being exposed on a subpath, it attempts to detect the base URL to use and adapts accordingly. For example, if you run kubectl proxy locally against the target cluster you can then access the Dashboard at http://localhost:8001/api/v1/namespaces/tekton-pipelines/services/tekton-dashboard:http/proxy/
More details about the issue you're encountering would be useful to help debug, e.g. Dashboard version? Is anything loading at all? Ingress controller and config? Any errors in the browser console / network tab, etc.

How to share an AWS NLB between two EKS Services?

We have a cross-AZ deployment in an EKS cluster inside an AWS region where every AZ is independent, meaning that components do not talk to components that are not in the same AZ.
We are using Contour as our ingress and have a different DaemonSet for each AZ. As a result, we also have a different Service defined for every DaemonSet.
When deploying the Services to EKS, two different NLBs are created.
We would like to have only one NLB shared between the Services.
The question is: can this be achieved, and if so, how?
Yes, you should be able to do this by using an appropriate selector in your Service.
In each DaemonSet that you use, you set labels in the Pod template, e.g.
template:
  metadata:
    labels:
      app: contour
      az: az-1
and
template:
  metadata:
    labels:
      app: contour
      az: az-2
Now, in your LoadBalancer Service, you need to use a selector that matches the Pods in both your DaemonSets, e.g. app: contour
Example Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  selector:
    app: contour # this needs to match the Pods in all your DaemonSets
  ports:
    - protocol: TCP
      port: 80
  type: LoadBalancer
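To confirm the single Service (and hence the single NLB) now fronts pods from both AZs, check which pod IPs it selected; the names below match the example above:

kubectl get endpoints my-service                # the listed IPs should include pods from both DaemonSets
kubectl get pods -l app=contour -o wide         # cross-reference the pod IPs and their nodes/AZs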

kubernetes + ingress controller + lets encrypt + block mixed content

Thanks for taking the time to read this.
I am testing a Kubernetes cluster on DigitalOcean.
I have installed an ingress controller with cert-manager and Let's Encrypt (I followed this guide https://cert-manager.io/docs/tutorials/acme/ingress/), and when I launch some deployments I have problems with files that are not at the root (Blocked loading mixed active content).
To give a more concrete example, I'm trying to deploy the application BookStack. If I do not activate TLS, I see everything correctly. On the other hand, if I activate TLS I see everything without CSS, and in the console I see that there are files that have been blocked by the browser.
On the other hand, if I do a port-forward I see it correctly (http://localhost:8080/) -> note http and not https.
I have done the test also with a WordPress install, with the same problem: the main page is shown without styles. In that case there is a plugin for WordPress which, if you get into the backend (browsing the page without CSS is torture) and install it, solves the problem (this is the plugin https://es.wordpress.org/plugins/ssl-insecure-content-fixer/). In the plugin I had to check "HTTP_X_FORWARDED_PROTO" to make it work.
But I'm realizing that it's a recurring problem, and I think there are concepts that are not clear to me, and I do not know very well what I have to do.
Here is an example of the Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookstack
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # cert-manager.io/issuer: "letsencrypt-staging"
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - k1.athosnetwork.es
      secretName: tls-bookstack
  rules:
    - host: k1.athosnetwork.es
      http:
        paths:
          - path: /
            backend:
              serviceName: bookstack
              servicePort: 80
Thanks very much for your time
I have found the solution; I'm writing it down for anyone else in my situation.
The problem was an environment variable that I had not set in my deployment:
APP_URL.
The BookStack Docker Hub repository mentions it:
-e APP_URL=http://your.site.here.xyz for specifying the url your application will be accessed on (required for correct operation of reverse proxy)
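In Kubernetes terms that means setting the variable in the container spec of the Deployment; a minimal sketch, assuming the hostname from the Ingress above:

# Deployment container spec excerpt: give BookStack its public HTTPS URL
# so it generates https:// asset links behind the TLS-terminating ingress
env:
  - name: APP_URL
    value: "https://k1.athosnetwork.es"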

Unable to create a PodPreset on EKS cluster

Environment:
AWS managed Kubernetes cluster (EKS)
Action:
Create a PodPreset object by applying the following:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: sample
spec:
  selector:
    matchLabels:
      app: microservice
  env:
    - name: test_env
      value: "6379"
  volumeMounts:
    - name: shared
      mountPath: /usr/shared
  volumes:
    - name: shared
      emptyDir: {}
Observation:
unable to recognize "podpreset.yaml": no matches for kind "PodPreset" in version "settings.k8s.io/v1alpha1"
It looks like the API version settings.k8s.io/v1alpha1 is not supported by default on EKS.
I'm using EKS as well; I ran these commands to check:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
Then I ran:
curl localhost:8001/apis
and clearly, in my case, settings.k8s.io/v1alpha1 was not supported. I recommend running the same checks.
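As a single check (assuming kubectl proxy is still serving on the default port):

curl -s localhost:8001/apis | grep settings.k8s.io || echo "settings.k8s.io is not served by this cluster"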
Also, checking here, it's mentioned that
You should have enabled the API type settings.k8s.io/v1alpha1/podpreset
but I don't know how settings.k8s.io/v1alpha1 can be enabled in EKS.
EKS does not enable any Kubernetes alpha features, and as of today PodPreset is an alpha feature. So if you want to achieve something like the above, you will have to create a mutating admission webhook, which is supported by EKS now. That is arguably overkill, though: PodPreset can handle most of the simple use cases, so hopefully it will enter the beta phase soon.
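For reference, a minimal sketch of the registration object such a webhook needs; every name, namespace, and path here is a placeholder, and you would still have to run a service that answers the /mutate requests:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-env-injector              # placeholder name
webhooks:
  - name: pod-env-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: pod-env-injector        # placeholder: your webhook Service
        namespace: default
        path: /mutate
      # caBundle: <CA cert for the webhook's serving certificate>
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]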
As of 03.11.2020 there is still an open GitHub request for this.