After installing FluxCD v2 on my EKS cluster, I defined a GitRepository resource pointing to a repo on GitHub.
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: springbootflux-infra
  namespace: flux-system
spec:
  interval: 30s
  ref:
    branch: master
  url: https://github.com/***/privaterepo
As the name says, the privaterepo on GitHub is private. The problem is that FluxCD cannot read the repo. What can I do to allow FluxCD on EKS to read it?
For private repositories you need to define a Secret that contains the credentials.
Create a secret:
apiVersion: v1
kind: Secret
metadata:
  name: repository-creds
  namespace: flux-system  # must be in the same namespace as the GitRepository
type: Opaque
data:
  username: <BASE64>
  password: <BASE64>
Refer to the Secret from the spec of your GitRepository object:
  secretRef:
    name: repository-creds
Official documentation:
https://fluxcd.io/docs/components/source/gitrepositories/#secret-reference
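As an aside, if you'd rather not base64-encode the values yourself, Kubernetes also accepts plaintext values under stringData (a sketch; the token value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: repository-creds
  namespace: flux-system   # must match the GitRepository's namespace
type: Opaque
stringData:                # plaintext here; the API server stores it base64-encoded
  username: git            # with a GitHub personal access token, the username is effectively ignored
  password: <personal-access-token>
```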
I am learning Tekton (for work), coming from GitHub Actions (private repos).
The Tekton docs (and every other tutorial I could find) have instructions on how to automatically start a pipeline from a GitHub push. They all roughly follow the flow below (I am aware of PipelineRun/TaskRun etc.):
EventListener - Trigger - TriggerTemplate - Pipeline
All the steps above are configuration steps you need to take (and files to create and maintain), some easier than others, but as far as I can see they also need to be taken for every single repo you're maintaining. Compared to GitHub Actions, where I just need one file in my repo describing everything I need, this seems very elaborate (if not cumbersome).
Am I missing something? Or is this just the way to go?
Thanks!
they also need to be taken for every single repo you're maintaining
You're mistaken here.
The EventListener receives the payload of your webhook.
Based on your TriggerBinding, you can map fields from that GitHub payload to variables, such as the repository name/URL, a branch or ref to work with, and so on.
For GitHub push events, one way to do it would be with a TriggerBinding such as the following:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push
spec:
  params:
    - name: gitbranch
      value: $(extensions.branch_name)  # uses the CEL interceptor, see EventListener below
    - name: gitrevision
      value: $(body.after)              # uses the body from the webhook payload
    - name: gitrepositoryname
      value: $(body.repository.name)
    - name: gitrepositoryurl
      value: $(body.repository.clone_url)
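For reference, the fields the binding reads come from GitHub's push webhook payload; an abridged sketch (values are placeholders):

```json
{
  "ref": "refs/heads/main",
  "after": "1234567890abcdef1234567890abcdef12345678",
  "repository": {
    "name": "my-repo",
    "clone_url": "https://github.com/me/my-repo.git"
  }
}
```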
We may re-use those params within our TriggerTemplate, passing them to our Pipelines / Tasks:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: github-pipelinerun
spec:
  params:
    - name: gitbranch
    - name: gitrevision
    - name: gitrepositoryname
    - name: gitrepositoryurl
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: github-job-
      spec:
        params:
          - name: identifier
            value: "demo-$(tt.params.gitrevision)"
        pipelineRef:
          name: ci-docker-build
        resources:
          - name: app-git
            resourceSpec:
              type: git
              params:
                - name: revision
                  value: $(tt.params.gitrevision)
                - name: url
                  value: $(tt.params.gitrepositoryurl)
          - name: ci-image
            resourceSpec:
              type: image
              params:
                - name: url
                  value: registry.registry.svc.cluster.local:5000/ci/$(tt.params.gitrepositoryname):$(tt.params.gitrevision)
          - name: target-image
            resourceSpec:
              type: image
              params:
                - name: url
                  value: registry.registry.svc.cluster.local:5000/ci/$(tt.params.gitrepositoryname):$(tt.params.gitbranch)
        timeout: 2h0m0s
Using the following EventListener:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener
spec:
  triggers:
    - name: github-push-listener
      interceptors:
        - name: GitHub push payload check
          github:
            secretRef:
              secretName: github-secret  # a Secret you would create (optional)
              secretKey: secretToken     # the secretToken in my Secret matches the secret configured in GitHub, for my webhook
            eventTypes:
              - push
        - name: CEL extracts branch name
          ref:
            name: cel
          params:
            - name: overlays
              value:
                - key: truncated_sha
                  expression: "body.after.truncate(7)"
                - key: branch_name
                  expression: "body.ref.split('/')[2]"
      bindings:
        - ref: github-push
      template:
        ref: github-pipelinerun
And now, you can expose that EventListener with an Ingress, to receive notifications from any of your GitHub repositories.
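For example, a minimal Ingress sketch (the hostname is hypothetical; el-github-listener is the Service that Tekton Triggers auto-generates for an EventListener named github-listener, listening on port 8080 by default):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: github-listener
spec:
  rules:
    - host: tekton.example.com            # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: el-github-listener  # the "el-<EventListener name>" Service
                port:
                  number: 8080
```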
Here is my pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
  namespace: pai-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3
And here is my dashboard (screenshot not included):
Is my Authority wrong?
You need to call the group API to link this PVC to an OpenPAI group. Please refer to: https://openpai.readthedocs.io/en/latest/manual/cluster-admin/how-to-set-up-storage.html#assign-storage-to-pai-groups
I am creating an alb.ingress resource as part of my Helm chart.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $cert_arn
    alb.ingress.kubernetes.io/security-groups: $sg
...
The values required in the Ingress annotations section are available in my ConfigMap:
env:
  - name: cert_arn
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: certification_arn
  - name: sg
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: security-groups
...
Is there a way to populate the annotations using the config-map?
The way I solved this challenge was to create the Ingress resource using Helm with the values I already had before creating the resource, such as the application name, namespace, etc.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "{{ .Values.application.name }}-ingress"
  namespace: "{{ .Values.env.name }}"
  labels:
    app: "{{ .Values.application.name }}"
spec:
  rules:
    - host: "{{ .Values.environment.name }}.{{ .Values.application.name }}.{{ .Values.domain.name }}"
      http:
        ....
I then used a pod (a job is also an option) to annotate the newly created Ingress resource with the environment values from the ConfigMap.
apiVersion: v1
kind: Pod
metadata:
  name: annotate-ingress-alb
spec:
  serviceAccountName: internal-kubectl
  containers:
    - name: modify-alb-ingress-controller
      image: "{{ .Values.images.varion }}"
      command: ["sh", "-c"]
      args:
        - '...
          kubectl annotate ingress -n {{ .Values.env.name }} {{ .Values.application.name }}-ingress alb.ingress.kubernetes.io/certificate-arn=$CERT_ARN;'
      env:
        - name: CERT_ARN  # must match the variable name the command references
          valueFrom:
            configMapKeyRef:
              name: environmental-variables
              key: certification_arn
Note that the pod should have the right ServiceAccount, with the right permission Roles attached to it. For instance, in this case, for the pod to be able to annotate the Ingress, its Role had to include the extensions apiGroup and the ingresses resource in the list of permissions (I have not restricted the verbs yet).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-account-role
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - ingresses
    verbs: ["*"]
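For completeness, the Role above still needs to be bound to the internal-kubectl ServiceAccount used by the pod; a RoleBinding sketch (the binding name is made up):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-account-role-binding  # hypothetical name
subjects:
  - kind: ServiceAccount
    name: internal-kubectl            # the ServiceAccount referenced by the pod above
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-account-role
```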
Hope this helps someone in the future.
My init.sql script is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I run it locally or on my server. It happens only when using Azure DevOps build and release pipelines.
There seems to be a mistake in the hostPath (which should contain the sql script) in the persistent volume YAML file when the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/devops-sample"  # main project folder in Azure Repos, which contains all files including the sql script
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_PASSWORD
              value: kovaion
            - name: MYSQL_USER
              value: vignesh
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
Currently the docker-entrypoint-initdb.d folder is empty (nothing is getting copied).
How do I set the full host path in the MySQL persistent volume when the sql script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong. You should mount the persistent storage to /var/lib/mysql/data.
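As a side note, a common way to get an init script into /docker-entrypoint-initdb.d without relying on a node-local hostPath is a ConfigMap; a sketch (the ConfigMap name and SQL content are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-initdb-config          # hypothetical name
data:
  init.sql: |
    -- placeholder; put the real init script here
    CREATE DATABASE IF NOT EXISTS `data-core`;
---
# In the Deployment, reference the ConfigMap instead of the PVC:
# volumes:
#   - name: mysql-persistent-storage
#     configMap:
#       name: mysql-initdb-config
```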
I am trying to create a user with limited namespace access. I created a namespace named test and also created Group: programmers and User: frontend. I generated credentials for user frontend with the help of the following: http://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
I created a role. Here is my role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: test
  name: frontend-developer
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["deployments", "replicasets", "pods"]
    verbs: ["get", "list", "watch", "create", "patch"]
I created a RoleBinding. Here is role-binding.yml:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: frontend-deploy
  namespace: test
subjects:
  - kind: User
    name: frontend
    namespace: test
roleRef:
  kind: Role
  name: frontend-developer
  apiGroup: rbac.authorization.k8s.io
Here is my deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: bookstore
    spec:
      containers:
        - name: nodeweb
          image: balipalligayathri/devops-develop
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
I am using following commands while creating the role and rolebinding
$ kubectl create -f role.yml
$ kubectl create -f role-binding.yml
The frontend-developer Role and frontend-deploy RoleBinding were created.
Again, I used the command kubectl create -f node-deployment.yml to create the deployment. The deployment was created and deleted successfully; here, I didn't mention any user while creating the deployment. So I tried to create the deployment as the user, using the command below:
kubectl create -f node-deployment.yml --as=frontend --context=frontend-context
I am facing an error like this:
Error from server (Forbidden):
<html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
    which is implied by: hudson.security.Permission.GenericRead
    which is implied by: hudson.model.Hudson.Administer </body></html>
My doubt is: is it necessary to mention the user in the deployment.yml file?
You need to create a ServiceAccount. Take a look at this snippet:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-account  # ServiceAccount names must be lowercase (RFC 1123)
  namespace: test
Bind it to your role:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: my-binding
  namespace: test   # the RoleBinding must live in the Role's namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: frontend-developer
subjects:
  - kind: ServiceAccount
    name: my-account
    namespace: test
and use it in your Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodefrontend
  namespace: test
spec:
  template:
    metadata:
      labels:
        ...
    spec:
      serviceAccountName: my-account
Ref:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
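To check whether the permissions took effect, one option (a sketch; it assumes the Role and RoleBinding from the question are applied, and requires access to the cluster) is kubectl auth can-i:

```shell
# Ask the API server whether user "frontend" may create deployments in namespace "test".
# Should print "yes" once the Role and RoleBinding are in place.
kubectl auth can-i create deployments --as=frontend --namespace=test
```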