Unable to create a PodPreset on EKS cluster - amazon-eks

Environment:
AWS managed Kubernetes cluster (EKS)
Action:
Create a PodPreset object by applying the following:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: sample
spec:
  selector:
    matchLabels:
      app: microservice
  env:
    - name: test_env
      value: "6379"
  volumeMounts:
    - name: shared
      mountPath: /usr/shared
  volumes:
    - name: shared
      emptyDir: {}
Observation:
unable to recognize "podpreset.yaml": no matches for kind "PodPreset" in version "settings.k8s.io/v1alpha1"

It looks like the API version settings.k8s.io/v1alpha1 is not supported by default on EKS.
I'm using EKS as well; I ran these commands to check:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
Then I ran:
curl localhost:8001/apis
and, clearly, in my case settings.k8s.io/v1alpha1 was not supported. I recommend running the same checks.
Also, checking here, it's mentioned that
You should have enabled the API type settings.k8s.io/v1alpha1/podpreset
I don't know how settings.k8s.io/v1alpha1 can be enabled in EKS.
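For a quicker check without kubectl proxy, you can also list the enabled API groups and resources directly (if the grep prints nothing, the group is not enabled on your cluster):
$ kubectl api-versions | grep settings.k8s.io
$ kubectl api-resources | grep -i podpreset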

EKS does not enable any Kubernetes alpha features, and as of today PodPreset is an alpha feature. So if you want to achieve something like the above, you will have to create a Mutating Admission Webhook, which is now supported by EKS. That is overkill for simple use cases, though; PodPreset can handle most of the simple use cases, so hopefully it will enter the beta phase soon.
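For reference, here is a minimal sketch of what registering such a webhook could look like; the service name pod-injector, the default namespace, and the /mutate path are hypothetical placeholders, and you still need to run the webhook server itself and provide its CA bundle:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-env-injector
webhooks:
  - name: pod-env-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore
    clientConfig:
      service:
        name: pod-injector        # hypothetical Service in front of the webhook server
        namespace: default
        path: /mutate
      # caBundle: <base64-encoded CA certificate of the webhook server>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
The webhook server then patches each incoming Pod with the extra env vars and volume mounts, which is essentially what the PodPreset above would have done.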

As of 03.11.2020 there is still an open GitHub request for this.

Related

How ingressroute is hooked up to traefik's ingress controller

I am learning Traefik and IngressRoute. The thing that confuses me the most is how the two parts are connected together.
After deploying traefik and my own service, I can simply create the following ingressroute to make it work:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-service-ir
  namespace: my-service-ns
spec:
  entryPoints:
    - web
  routes:
    - match: Path(`/`)
      kind: Rule
      services:
        - name: my-service
          port: 8000
But the IngressRoute shares nothing with Traefik: it is not in the same namespace, there is no selector, etc. It seems the IngressRoute can magically find Traefik and be applied to it. I am curious what happens behind the scenes.
Thanks
When you deploy Traefik in the Kubernetes cluster, you use the rbac-k8s manifests like here. If you use Helm, these are all present under the hood.
These manifests actually create the new resource types, i.e. IngressRoute, over here.
They are applied at the cluster level, as you can see in the ClusterRole link. This gives them cluster-level privileges, which is why you don't see anything special in the namespace.
You can check out the sample task here, which will shed some more light on the matter.
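If you want to see the wiring yourself, you can list the CustomResourceDefinitions Traefik registered and the IngressRoute objects it picks up (assuming the traefik.containo.us CRDs from the question):
kubectl get crd | grep traefik.containo.us
kubectl get ingressroutes.traefik.containo.us --all-namespaces
Traefik's Kubernetes CRD provider watches these objects across the cluster (or in the namespaces it is configured to watch), which is why no selector or shared namespace is needed.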

Should Tekton Dashboard be deployed on the root path?

I'm trying Tekton on a Kind cluster and successfully configured the Tekton Dashboard to work with Ingress rules. But I don't have a dedicated domain name, and I am unlikely to have one later. This Tekton instance will be exposed on a subpath of another domain through another NGINX.
But the Tekton Dashboard doesn't seem to work on subpath locations. The Tekton Dashboard exposed with Ingress path: / works well, but if I change it to path: /tekton, it doesn't work.
So, is it designed to work only at the root path? Is there no support for working on a subpath?
P.S.
I'm going to use the Kind cluster for production too, as I do not have access to a Kubernetes cluster. This is a small service and we don't need scale, just CI/CD-as-code. And nowadays it seems all new CI/CD implementations are designed only for Kubernetes.
You can also use the following Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/[a-z1-9\-]*)$ $1/ redirect;
spec:
  rules:
    - http:
        paths:
          - path: /tekton-dashboard(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: tekton-dashboard
                port:
                  number: 9097
The Tekton Dashboard does support being exposed on a subpath; it attempts to detect the base URL to use and adapts accordingly. For example, if you run kubectl proxy locally against the target cluster, you can then access the Dashboard at http://localhost:8001/api/v1/namespaces/tekton-pipelines/services/tekton-dashboard:http/proxy/
More details about the issue you're encountering would be useful to help debug, e.g. Dashboard version? Is anything loading at all? Ingress controller and config? Any errors in the browser console / network tab, etc.
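As a quick sanity check while debugging, you can also bypass the Ingress entirely and port-forward straight to the Dashboard service (namespace, service name and port taken from the manifest above):
kubectl port-forward -n tekton-pipelines service/tekton-dashboard 9097:9097
# then browse to http://localhost:9097/
If the Dashboard loads fine this way but not via /tekton-dashboard, the problem is most likely in the Ingress rewrite rules rather than in the Dashboard itself.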

Kubernetes + ingress controller + Let's Encrypt + blocked mixed content

Thanks for taking the time to read this.
I am testing a cluster of kubernetes in digitalocean.
I have installed an ingress controller with cert-manager and Let's Encrypt (I followed this guide: https://cert-manager.io/docs/tutorials/acme/ingress/), and when I launch a deployment I have problems with the files that are not at the root (Blocked loading mixed active content).
To give a more concrete example, I'm trying to deploy the application BookStack. If I don't activate TLS, I see everything correctly. On the other hand, if I activate TLS, everything renders without CSS, and in the console I see that some files have been blocked by the browser.
If I do a port-forward instead, I see it correctly (http://localhost:8080/) -> note http and not https.
I have also done the test with WordPress, with the same problem: the main page is shown without styles. For WordPress there is a plugin that solves the problem if you get into the backend (browsing the page without CSS is torture) and install it (this is the plugin: https://es.wordpress.org/plugins/ssl-insecure-content-fixer/). In the plugin I have to select "HTTP_X_FORWARDED_PROTO" to make it work.
But I'm realizing that this is a recurring problem, and I think there are concepts that are not clear to me; I don't really know what I have to do.
Here is an example of my Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bookstack
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # cert-manager.io/issuer: "letsencrypt-staging"
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - k1.athosnetwork.es
      secretName: tls-bookstack
  rules:
    - host: k1.athosnetwork.es
      http:
        paths:
          - path: /
            backend:
              serviceName: bookstack
              servicePort: 80
Thanks very much for your time
I have found the solution; I'm writing it here for anyone else in my situation.
The problem was an environment variable that I hadn't set in my deployment:
APP_URL
The BookStack Docker Hub repository talks about it:
-e APP_URL=http://your.site.here.xyz for specifying the url your application will be accessed on (required for correct operation of reverse proxy)
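As a sketch, the variable goes into the container spec of the BookStack deployment; the https URL below just reuses the host from the Ingress above:
env:
  - name: APP_URL
    value: "https://k1.athosnetwork.es"
With APP_URL pointing at the https address, BookStack generates absolute asset URLs with the right scheme, so the browser stops blocking them as mixed content.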

Docker Containers as Jenkins Slave

I want to know whether the following scenarios are possible; please help me out:
Scenario 1:
I have my local system as the Jenkins master, and every time I need a slave to run my automation test script, a Docker container spins up as a Jenkins slave, my script is executed on that slave, and after the execution is completed the container is destroyed.
Is this possible? I want to keep my local system as the Jenkins master.
Scenario 2:
Can I spin up multiple containers as Jenkins slaves while keeping my local system as the Jenkins master?
Thanks
Scenario 1 is at least covered by the JENKINS/Kubernetes Plugin: see its README
Based on the Scaling Docker with Kubernetes article, automates the scaling of Jenkins agents running in Kubernetes.
But that requires a Kubernetes setup, which means, in your case (if you have only one machine), a minikube.
I have written an answer about Scenario 1 (with Kubernetes) at this link:
Jenkins kubernetes plugin not working
Here is the post.
Instead of using certificates, I suggest you use credentials in Kubernetes by creating a ServiceAccount:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
and deploying jenkins using that serviceAccount:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      ....
Here are my screenshots for the Kubernetes plugin (note the Jenkins tunnel for the JNLP port; 'jenkins' is the name of my Kubernetes service):
For credentials:
Then fill in the fields (the ID will be autogenerated, and the description will be shown in the credentials listbox), but be sure to have created the ServiceAccount in Kubernetes as I said before:
My instructions are for a Jenkins master inside Kubernetes. If you want it outside the cluster (but slaves inside), I think you have to use simple login/password credentials.
I hope it helps you.
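In case it is useful, here is a minimal sketch of an agent pod definition for the Kubernetes plugin; the maven container name and image are just examples, and the plugin adds the jnlp agent container itself:
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins-agent: "true"
spec:
  serviceAccountName: jenkins
  containers:
    - name: maven
      image: maven:3-jdk-11
      command: ["sleep"]
      args: ["infinity"]
A pod template like this can be pasted into the plugin's raw YAML field for the pod (or referenced from a pipeline), and each build gets a fresh pod that is torn down afterwards, which matches your Scenario 1.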

get pods belonging to a kubernetes replication controller

I'm wondering whether there is a way, using the Kubernetes API, to get the details of the pods that belong to a given replication controller. I've looked at the reference, and the only way I can see is to get the pod list and go through each one, checking whether it belongs to a certain RC by analysing the 'annotations' section. That is a hard job, since the JSON specifies the whole 'kubernetes.io/created-by' part as a single string.
Every Replication Controller has a selector which defines the set of pods managed by it:
selector:
  label_name_1: some_value
  label_name_2: another_value
You can use the selector to get all the pods with a corresponding set of labels:
https://k8s.example.com/api/v1/pods?labelSelector=label_name_1%3Dsome_value,label_name_2%3Danother_value
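The equivalent with kubectl, using the same placeholder labels:
kubectl get pods --selector=label_name_1=some_value,label_name_2=another_value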
To get the details of pods belonging to a particular replication controller, we need to include a selector field in the YAML file that defines the replication controller and matching label fields in the template of the pod to be created. An example replication controller YAML file is given below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
To list out the pod names use the command:
pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
echo $pods
In the above command the --output=jsonpath option specifies the expression that just gets the name of each pod.
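If you don't know the selector up front, you can read it off the replication controller first and then reuse it (nginx being the RC from the example above):
kubectl get rc nginx --output=jsonpath={.spec.selector}
This prints the RC's selector labels, and those key/value pairs are what you pass to the --selector flag of the previous command.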