Variables in Kubernetes ConfigMaps

I'm currently working with some ConfigMaps and I've noticed that some entries in a ConfigMap contain redundant values or reference the same value, e.g.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
  labels:
    app: my-app
data:
  some_file: |-
    ...
    foo1=bar
    ...
  some_other_file: |-
    ...
    foo2=bar
    ...
Is it somehow possible to use a variable instead of writing bar twice?
That way I wouldn't have to search every config file if bar changes at some point.

No, it's not possible.
If the problem gets worse, you can always start using kustomize or Helm, which let you create templates for your Kubernetes manifests and use variables in those templates.
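For illustration, here is a minimal Helm-style sketch of what that could look like (the value name sharedValue and the chart layout are hypothetical, not something from the question):
# values.yaml (hypothetical value name)
sharedValue: bar

# templates/my-configmap.yaml -- both files render the same value
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
  labels:
    app: my-app
data:
  some_file: |-
    foo1={{ .Values.sharedValue }}
  some_other_file: |-
    foo2={{ .Values.sharedValue }}
If bar ever changes, you edit a single entry in values.yaml and re-render the chart.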

Related

Login page inside ingress in kubernetes

How can I have a login page inside my ingress (nginx)? I know I can use basic authentication or OAuth, but I want a login page with just one user, and I don't want it to look like basic authentication. I want it to be a dedicated page.
As per the official NGINX Ingress Controller documentation, you can create a custom nginx page for OAuth or basic authentication with the NGINX ingress controller. For this you have to use a volume, and at the same time, if you are using a new template, the ConfigMap also needs to be updated.
By using a volume you can add your custom template to the nginx deployment like this:
volumeMounts:
  - mountPath: /etc/nginx/template
    name: nginx-template-volume
    readOnly: true
volumes:
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: custom-nginx.tmpl
          path: custom-nginx.tmpl
For more detailed information on how to use custom templates, refer to these documents: DOC1, DOC2.
Try this tutorial for more details (refer to the custom templates section).

How can I configure the AdmissionConfiguration > PodSecurity > PodSecurityConfiguration in an EKS cluster?

If I understand right from Apply Pod Security Standards at the Cluster Level, in order to have a PSS (Pod Security Standard) as the default for the whole cluster I need to create an AdmissionConfiguration in a file that the API server consumes during cluster creation.
I don't see any way to configure or provide the AdmissionConfiguration at CreateCluster, and I'm also not sure how to provide this AdmissionConfiguration in a managed EKS cluster.
From the tutorials that use KinD or minikube it seems that the AdmissionConfiguration must be in a file referenced in the cluster-config.yaml, but if I'm not mistaken the EKS API server is managed and does not allow changing or even seeing this file.
The GitHub issue aws/container-roadmap "Allow Access to AdmissionConfiguration" seems to suggest that there is currently no way to provide an AdmissionConfiguration at creation, but on the other hand aws-eks-best-practices says "These exemptions are applied statically in the PSA admission controller configuration as part of the API server configuration".
So, is there a way to provide a PodSecurityConfiguration for the whole cluster in EKS, or am I forced to just use per-namespace labels?
See also Enforce Pod Security Standards by Configuring the Built-in Admission Controller and the EKS Best Practices for PSS and PSA.
I don't think there is any way currently in EKS to provide configuration for the built-in PSA controller (Pod Security Admission controller).
But if you want to implement a cluster-wide default for PSS (Pod Security Standards), you can do that by installing the official pod-security-webhook as a Dynamic Admission Controller in EKS:
git clone https://github.com/kubernetes/pod-security-admission
cd pod-security-admission/webhook
make certs
kubectl apply -k .
The default podsecurityconfiguration.yaml in pod-security-admission/webhook/manifests/020-configmap.yaml allows EVERYTHING, so you should edit it and write something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-security-webhook
  namespace: pod-security-webhook
data:
  podsecurityconfiguration.yaml: |
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: ["policy-test2"]
then
kubectl apply -k .
kubectl -n pod-security-webhook rollout restart deployment/pod-security-webhook # otherwise the pods won't reread the configuration changes
After those changes you can verify that the default forbids privileged pods with:
kubectl --context aihub-eks-terraform create ns policy-test1
kubectl --context aihub-eks-terraform -n policy-test1 run --image=ecerulm/ubuntu-tools:latest --rm -ti rubelagu-$RANDOM --privileged
Error from server (Forbidden): admission webhook "pod-security-webhook.kubernetes.io" denied the request: pods "rubelagu-32081" is forbidden: violates PodSecurity "restricted:latest": privileged (container "rubelagu-32081" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "rubelagu-32081" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "rubelagu-32081" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "rubelagu-32081" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "rubelagu-32081" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Note that you get the error forbidding privileged pods even though the namespace policy-test1 has no pod-security.kubernetes.io/enforce label, so you know this rule comes from the pod-security-webhook we just installed and configured.
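For comparison, the per-namespace alternative mentioned in the question would rely on the standard Pod Security Admission labels applied to each namespace, e.g.:
apiVersion: v1
kind: Namespace
metadata:
  name: policy-test1
  labels:
    # standard PSA labels evaluated by the built-in admission controller
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest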
Now if you want to create a pod, you will be forced to create it in a way that complies with the restricted PSS, by specifying runAsNonRoot, seccompProfile.type, and capabilities. For example:
apiVersion: v1
kind: Pod
metadata:
  name: test-1
spec:
  restartPolicy: Never
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: test
      image: ecerulm/ubuntu-tools:latest
      imagePullPolicy: Always
      command: ["/bin/bash", "-c", "--", "sleep 900"]
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL

How to define the S3 bucket name for different environments in Kafka Sink

I am currently setting up my AWS S3 buckets for different environments so I can have data in dev, tqa, stg, and prd. My bucket in dev is named s3.dev.kafka.sink, while in tqa it is named s3.tqa.kafka.sink, each associated with its own environment. The documentation on the Kafka Connect website doesn't specify how to set this per environment, so I did it the following way; however, I keep getting errors that the bucket is not named properly.
I put it in the secret YAML file:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: kafka-sink-s3-secret
  namespace: namespace
spec:
  backendType: secretManager
  data:
    - key: s3.tqa.kafka.sink
      name: bucket_name
      property: bucket_name
While in the deployment file:
env:
  - name: bucket_name
    valueFrom:
      secretKeyRef:
        name: kafka-sink-s3-secret
        key: bucket_name
And I will specify the bucket name in the config:
"s3.bucket.name":"'"$bucket_name"'"
But it fails to deploy. Any idea how I can specify it as s3.{{ENV}}.kafka.sink so it uses the correct bucket name for each environment in AWS?
Out of the box, Kafka Connect doesn't have any way to access environment variables other than those defined by the AWS SDK (the keys and profile, at least).
It sounds like you will need to use a ConfigProvider from the Kafka Connect API.
Here's one example on GitHub, which you'd need to compile and load into your Docker images: https://github.com/giogt/kafka-env-config-provider
Inside the connector properties, use it like this:
"bucket.name": "${env:ENVIRONMENT_VARIABLE_NAME}"
You should be able to use Helm to better separate/template out the full bucket name within the secret/deployment resource definition

Assign roles to EKS cluster in manifest file?

I'm new to Kubernetes, and am playing with eksctl to create an EKS cluster in AWS. Here's my simple manifest file
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"
managedNodeGroups:
  - name: ng-sandbox
    instanceType: r5a.xlarge
    privateNetworking: true
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyName: my-ssh-key
fargateProfiles:
  - name: fp-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: fp-sandbox
    selectors:
      # All workloads in the "sandbox" Kubernetes namespace matching the
      # following label selectors will be scheduled onto Fargate:
      - namespace: sandbox
        labels:
          env: sandbox
          checks: passed
I created 2 roles: EKSClusterRole for cluster management and EKSWorkerRole for the worker nodes. Where do I use them in the file? I'm looking at the eksctl config file schema page and it's not clear to me where in the manifest file to use them.
As you mentioned, the worker-node role goes under managedNodeGroups in the config schema:
managedNodeGroups:
  - ...
    iam:
      instanceRoleARN: my-role-arn
      # or
      # instanceRoleName: my-role-name
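The cluster-management role (your EKSClusterRole) would go under the top-level iam section instead. A hedged sketch, assuming the serviceRoleARN field of the eksctl ClusterConfig schema (the ARN below is a placeholder):
# top-level, next to metadata (placeholder ARN)
iam:
  serviceRoleARN: arn:aws:iam::111122223333:role/EKSClusterRole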
You should also read about
Creating a cluster with Fargate support using a config file
AWS Fargate

Drone self-hosted, pipeline routing between Drone servers

I have dev & prod kubernetes clusters with drone server in each. Both servers watching the same set of github repos.
I want to do smth like:
---
kind: pipeline
name: artifacts
drone_instance: dev # <--- magic routing
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/dev-*
---
kind: pipeline
name: deploy_dev
drone_instance: dev # <--- magic routing
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/dev-*
---
kind: pipeline
name: deploy_prod
drone_instance: prod # <--- magic routing
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/prod-*
I.e., run different pipelines on different Drone instances. I was looking at the platform filter, but it does not seem to be available in Kubernetes mode. Has anyone hacked something similar together?
NOTE: corresponding gh thread https://github.com/drone/drone-runtime/issues/63
Got an answer from the drone.io team on Gitter:
I recommend using .drone.yml for prod, and then creating a
.drone.dev.yml for dev. In your dev Drone instance, in the repository
settings, point Drone at the .drone.dev.yml
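For illustration, a sketch of that split, reusing the pipelines from the question: .drone.yml keeps the prod pipeline(s), and the dev instance is pointed at a .drone.dev.yml like this via the repository settings:
# .drone.dev.yml (read only by the dev Drone instance)
---
kind: pipeline
name: deploy_dev
steps:
  - ...
trigger:
  event: tag
  ref: refs/tags/dev-*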