I need to copy the content of a text file into a YAML file at a particular line using shell or Ansible

My text file looks similar to the below:
ICAgICAidHlwZSI6ICJhY2NlcHQiCiAgICB9CiAgXSwKICAidHJhbnNwb3J0cyI6IHsKICAgICJkb2NrZXIiOiB7CiAgICAgICJpbWFnZS1yZWdpc3RyeS5vcGVuc2hpZnQtaW1hZ2UtcmVnaXN0cnkuc3ZjOjUwMDAvaW1hZ2Utc2lnbmluZyI6IFsKICAgICAgICB7CiAgICAgICAgICAidHlwZSI6ICJzaWduZWRCeSIsCiAgICAgICAgICAia2V5VHlwZSI6ICJHUEdLZXlzIiwKICAgICAgICAgICJrZXlQYXRoIjogIi9ldGMvcGtpL3NpZ24ta2V5L2tleSIKICAgICAgICB9CiAgICAgIF0KIC
and the YAML file is as below:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: image-policy
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 3.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,<<copy_text_here>>
Now I need to copy the content of the text file into the YAML file, into the source parameter, in place of <<copy_text_here>>.
Any suggestions on this?
Thanks in advance
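For reference, a single-line base64 payload like the one in the text file is typically produced from the original JSON with base64. A minimal sketch (the file name policy.json and its content are assumptions for illustration):

```shell
# Hypothetical source file; in practice this would be your signing-policy JSON
printf '{"default": []}' > policy.json
# -w0 (GNU coreutils) disables line wrapping so the payload stays on one line,
# which is what the data: URL in the MachineConfig expects
base64 -w0 policy.json
```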

Something like the following should do the trick: you load the content of the file with the file lookup, then find the placeholder string and substitute it with the replace module.
---
- hosts: localhost
  tasks:
    - name: replace
      replace:
        path: subst.yml
        regexp: '<<copy_text_here>>'
        replace: "{{ lookup('file', 'text.yml') }}"
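If Ansible is not available, a plain sed one-liner can do the same substitution. A sketch under assumed file names (text.txt holds the single-line base64 payload, subst.yml is the MachineConfig); note that | is a safe sed delimiter here because the base64 alphabet (A-Z, a-z, 0-9, +, /, =) never contains it:

```shell
# Set up sample files standing in for the real ones
printf 'SGVsbG8=' > text.txt
printf 'source: data:text/plain;charset=utf-8;base64,<<copy_text_here>>\n' > subst.yml
# '|' as the delimiter avoids clashing with '/' characters in the base64 payload
sed -i "s|<<copy_text_here>>|$(cat text.txt)|" subst.yml
cat subst.yml
```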

Related

Tekton build Docker image with Kaniko - please provide a valid path to a Dockerfile within the build context with --dockerfile

I am new to Tekton (https://tekton.dev/) and I am trying to:
1. Clone the repository
2. Build a Docker image with the Dockerfile
I have a Tekton pipeline and when I try to execute it, I get the following error:
Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
Please find the Tekton manifests below:
1. Pipeline.yml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: clone-read
spec:
  description: |
    This pipeline clones a git repo, then echoes the README file to stdout.
  params:
    - name: repo-url
      type: string
      description: The git repo URL to clone from.
    - name: image-name
      type: string
      description: Image name for Kaniko.
    - name: image-path
      type: string
      description: Path of the Dockerfile for Kaniko.
  workspaces:
    - name: shared-data
      description: |
        This workspace contains the cloned repo files, so they can be read by the
        next task.
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: url
          value: $(params.repo-url)
    - name: show-readme
      runAfter: ["fetch-source"]
      taskRef:
        name: show-readme
      workspaces:
        - name: source
          workspace: shared-data
    - name: build-push
      runAfter: ["show-readme"]
      taskRef:
        name: kaniko
      workspaces:
        - name: source
          workspace: shared-data
      params:
        - name: IMAGE
          value: $(params.image-name)
        - name: CONTEXT
          value: $(params.image-path)
2. PipelineRun.yml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: clone-read-run
spec:
  pipelineRef:
    name: clone-read
  podTemplate:
    securityContext:
      fsGroup: 65532
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
    # - name: git-credentials
    #   secret:
    #     secretName: git-credentials
  params:
    - name: repo-url
      value: https://github.com/iamdempa/tekton-demos.git
    - name: image-name
      value: "python-test"
    - name: image-path
      value: $(workspaces.shared-data.path)/BuildDockerImage2
And here's my repository structure:
. . .
.
├── BuildDockerImage2
│   ├── 1.show-readme.yml
│   ├── 2. Pipeline.yml
│   ├── 3. PipelineRun.yml
│   └── Dockerfile
├── README.md
. . .
7 directories, 25 files
Could someone help me figure out what is wrong here?
Thank you
I was able to find the issue. The issue was with the way I had provided the path.
In the kaniko task, the CONTEXT parameter determines the path of the Dockerfile. Its default value is ./, and it is prefixed with the workspace path as below:
$(workspaces.source.path)/$(params.CONTEXT)
That means the workspace path is already prepended, so I don't need to include it myself, as I had done in the image-path value below:
$(workspaces.shared-data.path)/BuildDockerImage2
Instead, I had to put just the folder name as below:
- name: image-path
  value: BuildDockerImage2
This fixed the problem I had.
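In other words, the kaniko task expands the Dockerfile location roughly as workspace path + CONTEXT. A quick shell illustration (the concrete values are hypothetical stand-ins for Tekton's variable expansion):

```shell
# Hypothetical values standing in for Tekton's variable substitution
WORKSPACE_SOURCE_PATH="/workspace/source"   # $(workspaces.source.path)
CONTEXT="BuildDockerImage2"                 # $(params.CONTEXT), the fixed value
# The task effectively looks for the Dockerfile at the concatenated path
echo "${WORKSPACE_SOURCE_PATH}/${CONTEXT}/Dockerfile"
```

Passing $(workspaces.shared-data.path)/BuildDockerImage2 as CONTEXT would therefore have doubled the workspace prefix.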

task hello-world has failed: declared workspace "output" is required but has not been bound

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-world
spec:
  workspaces:
    - name: output
      description: folder where output goes
  steps:
    - name: hello-world1
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "echo Hello World 1! > $(workspaces.output.path)/message1.txt"]
    - name: hello-world2
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        set -xe
        echo Hello World 2! > $(workspaces.output.path)/message2.txt
From your error message, we can guess that the TaskRun (and PipelineRun) trying to run this task does not define a workspace to be used with your Task.
Say I would like to call your Task: I would write a Pipeline, which should include something like:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-world
spec:
  tasks:
    - name: hello-world-task
      taskRef:
        name: hello-world
      workspaces:
        - name: output
          workspace: my-workspace
  workspaces:
    - name: my-workspace
      optional: true
And then, start this pipeline with the following PipelineRun:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-world-0
spec:
  pipelineRef:
    name: hello-world
  workspaces:
    - name: my-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
See Tekton Pipelines Workspaces docs.
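Alternatively, if you only want to run the Task on its own, a TaskRun can bind the workspace directly. A sketch (the TaskRun name is arbitrary; emptyDir is fine for a throwaway run, but its contents are discarded afterwards):

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-world-run
spec:
  taskRef:
    name: hello-world
  workspaces:
    - name: output      # binds the Task's declared "output" workspace
      emptyDir: {}
```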

Tekton trigger flow from github

I am learning Tekton (for business), coming from GitHub Actions (private).
The Tekton docs (or any other tutorial I could find) have instructions on how to automatically start a pipeline from a github push. Basically they all somewhat follow the below flow: (I am aware of PipelineRun/TaskRun etc)
Eventlistener - Trigger - TriggerTemplate - Pipeline
All the above steps are basically configuration steps you need to take (and files to create and maintain), some easier than others, but as far as I can see they need to be repeated for every single repo you maintain. Compared to GitHub Actions, where I just need one file in my repo describing everything, this seems very elaborate (if not cumbersome).
Am I missing something ? Or is this just the way to go ?
Thanks !
they also need to be taken for every single repo you're maintaining
You're mistaken here.
The EventListener receives the payload of your webhook.
Based on your TriggerBinding, you may map fields from that GitHub payload to variables, such as your input repository name/URL, a branch or ref to work with, ...
For GitHub push events, one way to do it would be with a TriggerBinding such as the following:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push
spec:
  params:
    - name: gitbranch
      value: $(extensions.branch_name) # uses CEL interceptor, see EL below
    - name: gitrevision
      value: $(body.after) # uses body from webhook payload
    - name: gitrepositoryname
      value: $(body.repository.name)
    - name: gitrepositoryurl
      value: $(body.repository.clone_url)
We may re-use those params within our TriggerTemplate, passing them to our Pipelines / Tasks:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: github-pipelinerun
spec:
  params:
    - name: gitbranch
    - name: gitrevision
    - name: gitrepositoryname
    - name: gitrepositoryurl
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: github-job-
      spec:
        params:
          - name: identifier
            value: "demo-$(tt.params.gitrevision)"
        pipelineRef:
          name: ci-docker-build
        resources:
          - name: app-git
            resourceSpec:
              type: git
              params:
                - name: revision
                  value: $(tt.params.gitrevision)
                - name: url
                  value: $(tt.params.gitrepositoryurl)
          - name: ci-image
            resourceSpec:
              type: image
              params:
                - name: url
                  value: registry.registry.svc.cluster.local:5000/ci/$(tt.params.gitrepositoryname):$(tt.params.gitrevision)
          - name: target-image
            resourceSpec:
              type: image
              params:
                - name: url
                  value: registry.registry.svc.cluster.local:5000/ci/$(tt.params.gitrepositoryname):$(tt.params.gitbranch)
        timeout: 2h0m0s
Using the following EventListener:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener
spec:
  triggers:
    - name: github-push-listener
      interceptors:
        - name: GitHub push payload check
          github:
            secretRef:
              secretName: github-secret # a Secret you would create (optional)
              secretKey: secretToken # the secretToken in my Secret matches the secret configured in GitHub for my webhook
            eventTypes:
              - push
        - name: CEL extracts branch name
          ref:
            name: cel
          params:
            - name: overlays
              value:
                - key: truncated_sha
                  expression: "body.after.truncate(7)"
                - key: branch_name
                  expression: "body.ref.split('/')[2]"
      bindings:
        - ref: github-push
      template:
        ref: github-pipelinerun
And now, you can expose that EventListener, with an Ingress, to receive notifications from any of your GitHub repository.
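The two CEL overlays in that interceptor just derive extra fields from the push payload. The same computations, mimicked with shell string operations (the sample values are hypothetical stand-ins for a real GitHub payload):

```shell
# Sample values resembling a GitHub push payload
ref="refs/heads/main"          # body.ref
after="9f8e7d6c5b4a3f2e1d0c"   # body.after
# body.ref.split('/')[2] -> third '/'-separated field
branch_name=$(echo "$ref" | cut -d/ -f3)
# body.after.truncate(7) -> first 7 characters of the commit SHA
truncated_sha=$(echo "$after" | cut -c1-7)
echo "$branch_name $truncated_sha"
```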

SQL script file is not getting copied to the docker-entrypoint-initdb.d folder of the mysql container?

My init.sql (script) is not getting copied to docker-entrypoint-initdb.d.
Note that the problem doesn't occur when I run it locally or on my server. It happens only when using Azure DevOps build and release pipelines.
There seems to be a mistake in the hostPath (which should contain the SQL script) in the persistent volume YAML file when the file is placed in Azure Repos.
mysqlpersistantvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-initdb-pv-volume
  labels:
    type: local
    app: mysql
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/devops-sample" # main project folder in Azure Repos which contains all files, including the SQL script
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-initdb-pv-claim
  labels:
    app: mysql
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          imagePullPolicy: "IfNotPresent"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_PASSWORD
              value: kovaion
            - name: MYSQL_USER
              value: vignesh
            - name: MYSQL_DATABASE
              value: data-core
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-initdb-pv-claim
Currently the docker-entrypoint-initdb.d folder seems to be empty (nothing is getting copied).
How do I set the full host path in the MySQL persistent volume if the SQL script is placed in Azure Repos inside the devops-sample folder?
The MySQL data directory storage location is wrong: you should mount the persistent storage to /var/lib/mysql/data, and keep a separate mount for the init scripts.
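A sketch of what separate mounts could look like in the Deployment's container spec (the volume names here are hypothetical, and note the stock mysql image keeps its data under /var/lib/mysql; the initdb.d mount is only read once, at first startup of an empty database):

```yaml
          volumeMounts:
            - name: mysql-initdb        # SQL scripts, executed once on first startup
              mountPath: /docker-entrypoint-initdb.d
            - name: mysql-data          # MySQL's own persistent data directory
              mountPath: /var/lib/mysql
```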

filebeat configuration using elasticsearch

I am facing an issue with Filebeat. I pulled the Filebeat image with docker pull docker.elastic.co/beats/filebeat:6.3.1.
my filebeat.yml file is
filebeat.config:
  prospectors:
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
  - add_cloud_metadata:

output.elasticsearch:
  hosts: ['192.0.0.0:9200']
  username: elastic
  password: changeme

setup.kibana:
  host: '192.0.0.0:5601'

filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
When I run Filebeat, I am only getting yum.logs and "harvester started" for yum.log. Please help me.
Thanks in advance.