Helm ConfigMap from external YAML file - config

I want to update my Helm dependencies with configurations that are declared in a central folder and shared among microservices.
I have the following tree of folders:
- config-repo
  - application.yml
  - specific.yml
- kubernetes
  - helm
    - common
    - components
      - microservice#1 (templates relating to it)
        - config-repo
          - application.yml
          - specific.yml
        - templates
          - configmap_from_file.yaml
        - values.yaml
        - Chart.yaml
      - microservice#2
        - ...
Here is microservice#1's configmap_from_file.yaml template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "common.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "common.name" . }}
    helm.sh/chart: {{ include "common.chart" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
{{ (.Files.Glob "config-repo/*").AsConfig | indent 2 }}
{{- end -}}
Inside microservice#1's config-repo, the files are:
specific.yml
  ../../../../../config-repo/specific.yml
application.yml
  ../../../../../config-repo/application.yml
Both of them are just references to the central files.
When I run helm dep update . and then helm template . -s templates/configmap_from_file.yaml,
I expect the following output:
apiVersion: v1
kind: ConfigMap
metadata:
  name: review
  labels:
    app.kubernetes.io/name: review
    helm.sh/chart: review-1.0.0
    app.kubernetes.io/managed-by: Helm
data:
  application.yml: CONTENTS OF THE FILE IN ADDRESS
  specific.yml: CONTENTS OF THE FILE IN ADDRESS
but the following appears instead:
apiVersion: v1
kind: ConfigMap
metadata:
  name: review
  labels:
    app.kubernetes.io/name: review
    helm.sh/chart: review-1.0.0
    app.kubernetes.io/managed-by: Helm
data:
  application.yml: ../../../../../config-repo/application.yml
  specific.yml: ../../../../../config-repo/specific.yml
Why is only the path injected instead of the file contents?
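For reference, a minimal sketch that makes the per-file rendering explicit (it assumes the files physically live under the chart's own config-repo/ directory, since .Files can only read paths inside the chart):

data:
{{- range $path, $_ := .Files.Glob "config-repo/*" }}
  {{ base $path }}: |-
{{ $.Files.Get $path | indent 4 }}
{{- end }}

If the entries under the chart's config-repo are plain files whose only content is that relative path (rather than real files or symlinks resolved before templating), .Files.Get naturally returns the path string itself, which matches the output shown above.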

Related

sslCertificateCouldNotParseCert - Error syncing to GCP:

I have Terraform code that deploys the frontend application and includes an ingress.yaml template in the Helm chart.
ingress.yaml
{{- if .Values.ingress.enabled -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ .Values.global.namespace }}-ingress
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "test-frontend.labels" . | nindent 4 }}
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.allow-http: "false"
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ .servicename }}
              servicePort: {{ .serviceport }}
          {{- end }}
    {{- end }}
{{- end }}
values.yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.regional-static-ip-name: "ingress-internal-static-ip"
    kubernetes.io/ingress.allow-http: "false"
  hosts:
    - host: test-dev.test.com
      paths:
        - path: "/*"
          servicename: test-frontend-service
          serviceport: 80
        - path: "/api/*"
          servicename: test-backend-service
          serviceport: 80
  tls:
    - hosts:
        - test-dev.test.com
      secretName: ingress-tls-credential-file
      type: kubernetes.io/tls
      crt: <<test.pem value>>
      key: <<test.key value>>
The terraform apply command ran successfully. The certificate is also accepted in GCP, and the ingress is up and running under Kubernetes Engine -> Services & Ingress. But if I pass the .crt and .key as files in values.yaml in the Terraform code:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.regional-static-ip-name: "ingress-internal-static-ip"
    kubernetes.io/ingress.allow-http: "false"
  hosts:
    - host: test-dev.test.com
      paths:
        - path: "/*"
          servicename: test-frontend-service
          serviceport: 80
        - path: "/api/*"
          servicename: test-backend-service
          serviceport: 80
  tls:
    - hosts:
        - test-dev.test.com
      secretName: ingress-tls-credential-file
      type: kubernetes.io/tls
      crt: file(../../.secret/test.crt)
      key: file(../../.secret/test.key)
The values.yaml passes the certificate on to helm -> templates -> secret.yaml, which creates the secret (ingress-tls-credential-file).
secret.yaml
{{- if .Values.ingress.tls }}
{{- $namespace := .Values.global.namespace }}
{{- range .Values.ingress.tls }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .secretName }}
  namespace: {{ $namespace }}
  labels:
    {{- include "test-frontend.labels" $ | nindent 4 }}
type: {{ .type }}
data:
  tls.crt: {{ toJson .crt | b64enc | quote }}
  tls.key: {{ toJson .key | b64enc | quote }}
{{- end }}
{{- end }}
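For comparison, TLS secrets in Helm charts are commonly templated by base64-encoding the raw PEM string directly; piping through toJson first wraps the value in JSON quotes before encoding, which changes what ends up in the secret. A minimal sketch of the usual form:

data:
  tls.crt: {{ .crt | b64enc | quote }}
  tls.key: {{ .key | b64enc | quote }}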
We are getting the error below in GCP -> Kubernetes Engine -> Services & Ingress. How do we pass the files into the values.yaml file?
Error syncing to GCP: error running load balancer syncing routine:
loadbalancer 6370cwdc-isp-isp-ingress-ixjheqwi does not exist: Cert
creation failures - k8s2-cr-6370cwdc-q0ndkz9m629eictm-ca5d0f56ba7fe415
Error:googleapi: Error 400: The SSL certificate could not be parsed.,
sslCertificateCouldNotParseCert
So that Google can accept your cert and key files, you need to make sure they have the proper format, as per the next steps.
First, format them by creating a self-managed SSL certificate resource from your existing files, using your GCP Cloud Shell:
gcloud compute ssl-certificates create CERTIFICATE_NAME \
  --certificate=CERTIFICATE_FILE \
  --private-key=PRIVATE_KEY_FILE \
  --region=REGION \
  --project=PROJECT_ID
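You can confirm the certificate resource was created with a quick describe (an optional sanity check):

gcloud compute ssl-certificates describe CERTIFICATE_NAME \
  --region=REGION \
  --project=PROJECT_ID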
Then you need to complete a few more steps to make sure you have all the parameters required in your .yaml file and that you have the proper services enabled to accept the information coming from it (you may already have completed them):
Enable Kubernetes Engine API by running the following command:
gcloud services enable container.googleapis.com \
  --project=PROJECT_ID
Create a GKE cluster:
gcloud container clusters create CLUSTER_NAME \
  --release-channel=rapid \
  --enable-ip-alias \
  --network=NETWORK_NAME \
  --subnetwork=BACKEND_SUBNET_NAME \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --region=REGION --machine-type=MACHINE_TYPE \
  --project=PROJECT_ID
The cluster is created in the BACKEND_SUBNET_NAME.
The cluster uses GKE version 1.18.9-gke.801 or later.
The cluster is created with the Cloud Platform scope.
The cluster is created with the desired service account you would like to use to run the application.
The cluster uses the n1-standard-4 machine type or better.
Enable IAP by doing the following steps:
Configure the OAuth consent screen.
Create OAuth credentials.
Convert the ID and Secret to base64 by running the following commands:
echo -n 'CLIENT_ID' | base64
echo -n 'CLIENT_SECRET' | base64
Create an internal static IP address, reserving it for your load balancer:
gcloud compute addresses create STATIC_ADDRESS_NAME \
  --region=REGION --subnet=BACKEND_SUBNET_NAME \
  --project=PROJECT_ID
Get the static IP address by running the following command:
gcloud compute addresses describe STATIC_ADDRESS_NAME \
  --region=REGION \
  --project=PROJECT_ID
7. Create the values YAML file by copying gke_internal_ip_config_example.yaml and renaming it to PROJECT_ID_gke_config.yaml, then set the following values (a rough sketch of the resulting file follows this list):
clientIDEncoded: Base64 encoded CLIENT_ID from earlier step.
clientSecretEncoded: Base64 encoded CLIENT_SECRET from earlier step.
certificate.name: CERTIFICATE_NAME that you have created earlier.
initialEmail: The INITIAL_USER_EMAIL email of the initial user who will set up Custom Governance.
staticIpName: STATIC_ADDRESS_NAME that you created earlier.
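A rough sketch of what PROJECT_ID_gke_config.yaml could end up containing (purely illustrative; the copied example file defines the actual structure):

clientIDEncoded: <base64 CLIENT_ID>
clientSecretEncoded: <base64 CLIENT_SECRET>
certificate:
  name: CERTIFICATE_NAME    # shown nested here; follow the layout of the example file
initialEmail: INITIAL_USER_EMAIL
staticIpName: STATIC_ADDRESS_NAME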
Try your deployment again after completing the above steps.
You seem to be mixing a secret and a direct definition.
You first need to create the ingress-tls-credential-file secret and then reference it in your ingress definition, as in this example: https://kubernetes.io/fr/docs/concepts/services-networking/ingress/#tls
apiVersion: v1
data:
  tls.crt: file(../../.secret/test.crt)
  tls.key: file(../../.secret/test.key)
kind: Secret
metadata:
  name: ingress-tls-credential-file
  namespace: default
type: kubernetes.io/tls
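Equivalently, you can create the same secret directly from the files with kubectl (a sketch using the paths from the question):

kubectl create secret tls ingress-tls-credential-file \
  --cert=../../.secret/test.crt \
  --key=../../.secret/test.key \
  --namespace=default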
Then clean up your ingress values:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.regional-static-ip-name: "ingress-internal-static-ip"
    kubernetes.io/ingress.allow-http: "false"
  hosts:
    - host: test-dev.test.com
      paths:
        - path: "/*"
          servicename: test-frontend-service
          serviceport: 80
        - path: "/api/*"
          servicename: test-backend-service
          serviceport: 80
  tls:
    - hosts:
        - test-dev.test.com
      secretName: ingress-tls-credential-file
      type: kubernetes.io/tls

Conftest Fails For a Valid Kubernetes YAML File

I have the following simple Kubernetes YAML Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.plantSimulatorService.image.repository }}:{{ .Values.plantSimulatorService.image.tag }}
          ports:
            - containerPort: {{ .Values.plantSimulatorService.ports.containerPort }} # Get this value from ConfigMap
I have the following in my test.rego:
package main

import data.kubernetes

name = input.metadata.name

deny[msg] {
  kubernetes.is_deployment
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg = sprintf("Containers must not run as root in Deployment %s", [name])
}
When I ran this using the following command:
joesan#joesan-InfinityBook-S-14-v5:~/Projects/Private/infrastructure-projects/plant-simulator-deployment$ helm conftest helm-k8s -p test
WARN - Found service plant-simulator-service but services are not allowed
WARN - Found service plant-simulator-grafana but services are not allowed
WARN - Found service plant-simulator-prometheus but services are not allowed
FAIL - Containers must not run as root in Deployment plant-simulator
FAIL - Deployment plant-simulator must provide app/release labels for pod selectors
As you can see, I'm indeed not running the container as root, but despite that I get this error message: Containers must not run as root in Deployment plant-simulator.
Any ideas what the reason could be?
You need to add runAsNonRoot to your securityContext:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  runAsNonRoot: true
The Rego rule is only able to validate the YAML structure; it is not clever enough to work out that your configuration effectively runs as a non-root user.
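Alternatively, you could relax the existing rule so that an explicit runAsUser also satisfies it, by adding one more condition (a sketch of the modified rule, reusing the helpers from test.rego above):

# Deny only when neither runAsNonRoot nor an explicit runAsUser is set.
# (A stricter rule would additionally reject runAsUser values of 0.)
deny[msg] {
  kubernetes.is_deployment
  not input.spec.template.spec.securityContext.runAsNonRoot
  not input.spec.template.spec.securityContext.runAsUser
  msg = sprintf("Containers must not run as root in Deployment %s", [name])
}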

Populating AWS Alb Ingress Annotations from ConfigMap

I am creating an 'alb.ingress' resource as part of my Helm chart.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $cert_arn
    alb.ingress.kubernetes.io/security-group: $sg
    ...
The values required in the 'alb.ingress' resource's annotations section are available in my ConfigMap.
env:
  - name: cert_arn
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: certification_arn
  - name: sg
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: security-groups
  ...
Is there a way to populate the annotations using the config-map?
The way I solved this challenge was to create the ingress resource using Helm and the variables I had prior to creating the resource, such as the name of the application, the namespace, etc.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "{{ .Values.application.name }}-ingress"
  namespace: "{{ .Values.env.name }}"
  labels:
    app: "{{ .Values.application.name }}"
spec:
  rules:
    - host: "{{ .Values.environment.name }}.{{ .Values.application.name }}.{{ .Values.domain.name }}"
      http:
        ....
I used a pod (a job is also an option) to annotate the newly created ingress resource using the environmental values from the configmap.
apiVersion: v1
kind: Pod
metadata:
  name: annotate-ingress-alb
spec:
  serviceAccountName: internal-kubectl
  containers:
    - name: modify-alb-ingress-controller
      image: "{{ .Values.images.varion }}"
      command: ["sh", "-c"]
      args:
        - '...
          kubectl annotate ingress -n {{ .Values.env.name }} {{ .Values.application.name }}-ingress alb.ingress.kubernetes.io/certificate-arn=$CERT_ARN;
      env:
        - name: cert_arn
          valueFrom:
            configMapKeyRef:
              name: environmental-variables
              key: certification_arn
Note that the pod should have the right service account attached, with the right permission roles. For instance, in this case, for the pod to be able to annotate the ingress, its role had to include the extensions apiGroup and the ingresses resource in its list of permissions (I have not restricted the verbs yet).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-account-role
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - ingresses
    verbs: ["*"]
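For completeness, the ServiceAccount and RoleBinding that tie this Role to the pod's internal-kubectl account might look roughly like this (a sketch based on the names used above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-account-role-binding   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-account-role
subjects:
  - kind: ServiceAccount
    name: internal-kubectl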
Hope this helps someone in the future.

Problems with Traefik/Keycloak (and Gatekeeper) in front of Kibana

I want to use Keycloak as a standard way of authenticating users to applications running in our Kubernetes clusters. One of the clusters is running the Elastic ECK component (v1.1.1) and we use the operator to deploy Elastic clusters and Kibana as a frontend. In order to keep things as simple as possible I’ve done the following.
Deployed Kibana
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: {{ .Values.kibana.name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
spec:
  version: {{ .Values.kibana.version }}
  count: {{ .Values.kibana.instances }}
  elasticsearchRef:
    name: {{ .Values.kibana.elasticCluster }}
    namespace: {{ .Release.Namespace }}
  podTemplate:
    spec:
      containers:
        - name: kibana
          env:
            - name: SERVER_BASEPATH
              value: {{ .Values.kibana.serverBasePath }}
          resources:
            requests:
              {{- if not .Values.kibana.cpu.enableBurstableQoS }}
              cpu: {{ .Values.kibana.cpu.requests }}
              {{- end }}
              memory: {{ .Values.kibana.memory.requests }}Gi
            limits:
              {{- if not .Values.kibana.cpu.enableBurstableQoS }}
              cpu: {{ .Values.kibana.cpu.limits }}
              {{- end }}
              memory: {{ .Values.kibana.memory.limits }}Gi
  http:
    tls:
      selfSignedCertificate:
        disabled: true
Created Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: kibana-{{ .Values.kibana.name }}-stripprefix
  namespace: {{ .Release.Namespace }}
spec:
  stripPrefix:
    prefixes:
      - {{ .Values.kibana.serverBasePath }}
    forceSlash: true
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.kibana.name }}-ingress
  namespace: {{ .Release.Namespace }}
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: http
    traefik.ingress.kubernetes.io/router.middlewares: {{ .Release.Namespace }}-kibana-{{ .Values.kibana.name }}-stripprefix#kubernetescrd
spec:
  rules:
    - http:
        paths:
          - path: {{ .Values.kibana.serverBasePath }}
            backend:
              servicePort: {{ .Values.kibana.port }}
              serviceName: {{ .Values.kibana.name }}-kb-http
Result
Deploying the above works perfectly fine. I’m able to reach the Kibana UI using the external IP exposed by our MetalLB component. I simply enter http://external IP/service/logging/kibana and I’m presented with the Kibana login screen, and I can log in using the “built in” authentication process.
Adding the Keycloak Gatekeeper
Now, if I add the following to the Kibana manifest, effectively adding the Keycloak Gatekeeper sidecar to the Kibana Pod:
- name: {{ .Values.kibana.name }}-gatekeeper
  image: "{{ .Values.kibana.keycloak.gatekeeper.repository }}/docker-r/keycloak/keycloak-gatekeeper:{{ .Values.kibana.keycloak.gatekeeper.version }}"
  args:
    - --config=/etc/keycloak-gatekeeper.conf
  ports:
    - containerPort: 3000
      name: proxyport
  volumeMounts:
    - name: gatekeeper-config
      mountPath: /etc/keycloak-gatekeeper.conf
      subPath: keycloak-gatekeeper.conf
volumes:
  - name: gatekeeper-config
    configMap:
      name: {{ .Release.Name }}-gatekeeper-config
with the following ConfigMap which is "mounted":
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-gatekeeper-config
  namespace: {{ .Release.Namespace }}
data:
  keycloak-gatekeeper.conf: |+
    redirection-url: {{ .Values.kibana.keycloak.gatekeeper.redirectionUrl }}
    discovery-url: https://.../auth/realms/{{ .Values.kibana.keycloak.gatekeeper.realm }}
    skip-openid-provider-tls-verify: true
    client-id: kibana
    client-secret: {{ .Values.kibana.keycloak.gatekeeper.clientSecret }}
    enable-refresh-tokens: true
    encryption-key: ...
    listen: :3000
    tls-cert:
    tls-private-key:
    secure-cookie: false
    upstream-url: {{ .Values.kibana.keycloak.gatekeeper.upstreamUrl }}
    resources:
      - uri: /*
        groups:
          - kibana
The upstream-url points to http://127.0.0.1:5601
and add an intermediary service:
In order to explicitly address the Gatekeeper proxy I added another service, “keycloak-proxy” as such:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.kibana.name }}-keycloak-proxy
  namespace: {{ .Release.Namespace }}
spec:
  type: ClusterIP
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: cap-logging
  ports:
    - name: http
      protocol: TCP
      port: 8888
      targetPort: proxyport
and change the backend definition in the Kibana ingress definition to:
  servicePort: 8888
  serviceName: {{ .Values.kibana.name }}-keycloak-proxy
When I then issue the same URL as above, http://external IP/service/logging/kibana, I’m redirected to http://external IP/oauth/authorize?state=0db97b79-b980-4cdc-adbe-707a5e37df1b and get a “404 Page not found” error.
If I reconfigure the “keycloak-proxy” service into a NodePort, expose it on, say, port 32767, and request http://host IP:32767, I’m presented with the Keycloak login screen from the Keycloak server!
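(A quick way to run the same check without changing the Service type is a port-forward straight to the sidecar; the pod name below is just illustrative:)

kubectl port-forward -n <namespace> pod/<kibana-pod-name> 3000:3000
# then browse to http://localhost:3000 to hit the Gatekeeper proxy directly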
If I look into the Gatekeeper startup log I find the following:
1.6018108005048046e+09 info starting the service {"prog": "keycloak-gatekeeper", "author": "Keycloak", "version": "7.0.0 (git+sha: f66e137, built: 03-09-2019)"}
1.6018108005051787e+09 info attempting to retrieve configuration discovery url {"url": "https://.../auth/realms/...", "timeout": "30s"}
1.601810800537417e+09 info successfully retrieved openid configuration from the discovery
1.6018108005392597e+09 info enabled reverse proxy mode, upstream url {"url": "http://127.0.0.1:5601"}
1.6018108005393562e+09 info using session cookies only for access and refresh tokens
1.6018108005393682e+09 info protecting resource {"resource": "uri: /*, methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT,TRACE, required: authentication only"}
1.6018108005398147e+09 info keycloak proxy service starting {"interface": ":3000"}
This is what I get when I try to access Kibana through the Gatekeeper proxy:
http://host/service/logging/kibana (gets redirected to) http://host/oauth/authorize?state=4dbde9e7-674c-4593-83f2-a8e5ba7cf6b5
and the Gatekeeper log:
1.601810963344485e+09 error no session found in request, redirecting for authorization {"error": "authentication session not found"}
I've been struggling with this for some time now and seem to be stuck! If anybody here "knows what's going on" I'd be very grateful.

Failed to discover supported resources

I am trying to create a user with limited namespace access. I created a namespace named test and also created Group: programmers and User: frontend. I generated credentials for user frontend with the help of the following guide: http://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
I created a role. Here is my role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: test
  name: frontend-developer
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list", "watch", "create", "patch"]
I created a RoleBinding. Here is my role-binding.yml:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: frontend-deploy
  namespace: test
subjects:
  - kind: User
    name: frontend
    namespace: test
roleRef:
  kind: Role
  name: frontend-developer
  apiGroup: rbac.authorization.k8s.io
Here is my deployment file, node-deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: bookstore
    spec:
      containers:
        - name: nodeweb
          image: balipalligayathri/devops-develop
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
I used the following commands to create the role and rolebinding:
$ kubectl create -f role.yml
$ kubectl create -f role-binding.yml
The frontend-developer Role and the frontend-deploy RoleBinding were created.
Next, I used the command kubectl create -f node-deployment.yml to create the deployment. The deployment was created and deleted successfully. Here I didn't mention any user while creating the deployment, so I tried to create the deployment as a specific user by using the command below:
kubectl create -f node-deployment.yml --as=frontend --context=frontend-context
I am facing an error like this:
Error from server (Forbidden):
<html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
which is implied by: hudson.security.Permission.GenericRead
which is implied by: hudson.model.Hudson.Administer </body></html>
My doubt is: is it necessary to mention the user in the deployment.yml file?
You need to create a ServiceAccount; take a look at this snippet:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myAccount
bind it to your role:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: myBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: frontend-developer
subjects:
  - kind: ServiceAccount
    name: myAccount
and use it in your Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  template:
    metadata:
      labels:
        ...
    spec:
      serviceAccountName: myAccount
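To verify that the binding actually grants what you expect, you can impersonate the subject with kubectl auth can-i (a quick check; the service account name follows the snippet above):

kubectl auth can-i create deployments --namespace=test --as=frontend
kubectl auth can-i create deployments --namespace=test \
  --as=system:serviceaccount:test:myAccount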
Ref:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/