ArgoCD with kustomize to replace images during runtime - gitlab-ci

We are trying to deploy a few deployment files with the argocd app create command. Our YAML file contains a placeholder for specifying the image name dynamically, and we are looking at passing the image value with the argocd CLI to replace this variable at runtime. ArgoCD's parameter overrides don't seem to work for non-Helm deployments.
Below is the deployment.yaml file which holds the placeholder for the image:
spec:
  containers:
  - name: helloworld-gitlab
    image: '###IMAGE###'
kustomization.yaml is as below:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- gitlab_msd_deployment.yaml
- gitlab_msd_service.yaml
We are passing the below values in the argocd command.
argocd app create helloworld-gitlab --repo https://example.git --path ./k8s-deployments/ --dest-server https://kubernetes.default.svc --dest-namespace ${NAMESPACE} --kustomize-image $IMAGE
But the pod ends up in the InvalidImageName state with the error below:
Failed to apply default image tag "###IMAGE###": couldn't parse image reference "###IMAGE###": invalid reference format: repository name must be lowercase
Any idea how we can get the placeholder replaced with the value passed in the argocd app create command? Any other approaches would also be welcome.

I could get this working as below (with help from the Slack channel); the file definitions are in the example repo referenced in the command.
argocd app create mykustomize-guestbook --repo https://github.com/mayzhang2000/argocd-example-apps.git --path kustomize-guestbook --dest-namespace default --dest-server https://kubernetes.default.svc --kustomize-image myImage=gcr.io/heptio-images/ks-guestbook-demo:0.1
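What makes this work: --kustomize-image takes the form <placeholder-name>=<new-image>, and kustomize's images transformer rewrites any container whose image name matches the placeholder. Passing just $IMAGE matched nothing in the original setup, so the literal ###IMAGE### reached the cluster. A minimal sketch of the two files under that scheme, with file names assumed for illustration:

# deployment.yaml (sketch)
spec:
  containers:
  - name: helloworld-gitlab
    image: myImage # valid placeholder image name, replaced via --kustomize-image

# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml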

Related

Unexpected symbol: ‘2e4ce68d4e3feec97e992821e6391166943f4d49’

I tried to build a GitHub workflow .yml file but I'm getting an error like:
GitHub Actions / Main Workflow
Invalid workflow file
The workflow is not valid. .github/workflows/build.yml (Line: 22, Col: 22): Unexpected symbol: '<hash_value>'. Located at position 9 within expression: secrets.<hash_value>
CODE
on:
  # Trigger analysis when pushing in master or pull requests, and when creating
  # a pull request.
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, reopened]
name: Main Workflow
jobs:
  sonarcloud:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # Disabling shallow clone is recommended for improving relevancy of reporting
          fetch-depth: 0
      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@v1.3
        env:
          GITHUB_TOKEN: ${{ secrets.<hash_value> }}
          SONAR_TOKEN: ${{ secrets.<hash_value> }}
AND my sonar-project.properties file:
# Configure here general information about the environment, such as SonarQube server connection details for example
# No information about specific project should appear here

#----- Default SonarQube server
sonar.host.url=https://sonarcloud.io/

#----- Default source code encoding
#sonar.sourceEncoding=UTF-8

sonar.organization=blah blah
sonar.projectKey=blah blah

#----- Optional properties
# defaults to project key
sonar.projectName=Toolsdemo
# defaults to 'not provided'
sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Defaults to .
sonar.sources=https://github.com/abcd/xyz
# Encoding of the source code. Default is default system encoding
sonar.sourceEncoding=UTF-8
Please help. I need to test my code fast.
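For what it's worth, the error message points at the root cause: workflow expressions reference secrets by the name they were given under the repository's Settings > Secrets, not by their value, so an identifier like a raw hash cannot parse. A minimal sketch of the env block, assuming the token was saved under the name SONAR_TOKEN:

env:
  # provided automatically by GitHub Actions, no setup needed
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  # must exist under Settings > Secrets with exactly this name
  SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}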

Error in Yaml file while trying to create multiple s3 buckets in Serverless Framework for AWS Lambda Function

So I'm pretty new to CloudFormation and also to the Serverless Framework. I've been working through some exercises (such as an automatic thumbnail generator) and then creating some simple projects that I can hopefully generalize for my own purposes.
Right now I'm attempting to create a stack/function that creates two S3 buckets and has the Lambda function take a CSV file from one, perform some simple transformations, and place it in the other, receiving bucket.
In trying to build off the exercise I've done, I created a YAML file with the following code:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  profile: serverless-admin
  timeout: 10
  memorySize: 128
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "*"

custom:
  assets:
    targets:
      - bucket1: csvbucket1-08-16-2020
        pythonRequirements:
          dockerizePip: true
      - bucket2: csvbucket2-08-16-2020
        pythonRequirements:
          dockerizePip: true

functions:
  protomodel-readcsv:
    handler: handler.readindata
    events:
      s3:
        - bucket: ${self:custom.bucket1}
          event: s3:ObjectCreated:*
          suffix: .csv
        - bucket: ${self:custom.bucket2}

plugins:
  - serverless-python-requirements
  - serverless-s3-deploy
However, when I do a serverless deploy from my command prompt, I get:
Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration 'self:custom.bucket1' could not be found.
Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration 'self:custom.bucket2' could not be found.
Serverless Error ---------------------------------------
Events for "protomodel-readcsv" must be an array, not an object
I've tried to make the events entry in protomodel-readcsv an array by adding a -, but I then get a bad indentation error that for some reason I cannot reconcile. More fundamentally, I'm not exactly sure why that item would need to be an array anyway, and I wasn't clear about the warnings for the buckets either.
Sorry for a pretty newbie question, but running through tutorials/examples online leaves a lot to figure out when trying to generalize/customize them.
custom:
  assets:
    targets:
      - bucket1
I guess you need self:custom.assets.targets.bucket1, though I'm not sure this nested assets lookup will work.
The example below is supposed to work, please check:
service: MyService

custom:
  deploymentBucket: s3_my_bucket

provider:
  name: aws
  deploymentBucket: ${self:custom.deploymentBucket}
  stage: dev
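On the "must be an array" error from the question: in the Serverless Framework, events is a list, so each s3 trigger is its own list item (note the leading -), with filename filters nested under rules. A sketch of the relevant sections, assuming flat custom keys instead of the nested assets targets:

custom:
  bucket1: csvbucket1-08-16-2020
  bucket2: csvbucket2-08-16-2020

functions:
  protomodel-readcsv:
    handler: handler.readindata
    events:
      - s3:
          bucket: ${self:custom.bucket1}
          event: s3:ObjectCreated:*
          rules:
            - suffix: .csv
      - s3:
          bucket: ${self:custom.bucket2}
          event: s3:ObjectCreated:*

This shape would also resolve the two warnings, since ${self:custom.bucket1} then points at an attribute that actually exists.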

Dynamically setting docker image tag from semver when using docker-image-resource in concourse

I have the following semver setup:
- name: version
type: semver
source:
driver: gcs
bucket: my-ci
json_key: ((my.serviceaccount))
key: version/version.txt
initial_version: 0.0.0
In my publish job, I have the following:
- name: publish
  serial_groups: [version]
  plan:
  - get: version
    passed: [build]
    trigger: true
So, basically, the publish job is triggered after the build job has passed (version updated).
Now, in the publish job I am creating a Docker image and pushing it to GCR.
- put: my-gcr
  params:
    additional_tags: my/ci/tags
    build: mycode
  get_params: {skip_download: true}
Here, the image is correctly tagged based on the values in the tags file. However, I want to set these values dynamically, based on the current version, which can be retrieved as described here:
https://concoursetutorial.com/miscellaneous/versions-and-buildnumbers/#display-version
How can I use this version number to tag my docker image?
I solved it using the following code:
- put: artifacts
  params:
    additional_tags: version/number
    build: mycode
  get_params: {skip_download: true}
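For reference, this works because the get: version step checks the semver resource out into a directory named version, where the resource writes the current version to a file called number, and the docker-image-resource's additional_tags expects exactly such a file path. A sketch of the full publish job under those assumptions (with a source resource named mycode):

- name: publish
  serial_groups: [version]
  plan:
  - get: version
    passed: [build]
    trigger: true
  - get: mycode
  - put: artifacts
    params:
      additional_tags: version/number # file written by the semver resource
      build: mycode
    get_params: {skip_download: true}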

Drone.io auto_tag with branch name

Using the Drone docker plugin to create my cloud images, I would like to simplify the workflow by having Drone automatically tag my images with the git branch name I'm working on.
I saw an auto_tag option, but unfortunately it always tags my images as "latest".
###
# Tag deployment
# Docker image
###
push-tag-news:
  image: plugins/docker
  registry: docker.domain.com:5000
  secrets: [docker_username, docker_password]
  repo: docker.domain.com:5000/devs/news
  auto_tag: true # Or how to specify the current branch for the tags: option?
  when:
    exclude: [master, dev]
Has anyone tried to do something similar? I'm using Drone 0.8.
auto_tag uses the repository's git tags; it seems to me you are looking to set custom Docker image tags instead.
You can use any of the variables listed at http://docs.drone.io/environment-reference/.
Try using DRONE_COMMIT_BRANCH:
build-docker-image:
  image: plugins/docker
  repo: myname/myrepo
  secrets: [docker_username, docker_password]
  tags:
    - ${DRONE_COMMIT_BRANCH}
    - latest
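Applied to the pipeline step from the question, that would look roughly like this (a sketch keeping the original registry and repo; note that in Drone 0.8 the branch filter is usually nested under when.branch):

push-tag-news:
  image: plugins/docker
  registry: docker.domain.com:5000
  repo: docker.domain.com:5000/devs/news
  secrets: [docker_username, docker_password]
  tags:
    - ${DRONE_COMMIT_BRANCH}
  when:
    branch:
      exclude: [master, dev]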

How do I use a node selector with a build config in openshift?

I am running a large, world-spanning OpenShift cluster. When I run a build from a BuildConfig, it is randomly assigned to any node in the entire cluster. This is problematic, as many regions have higher latency, which dramatically slows down build times and image uploads. I can't find any information in the documentation on using node selector tags at this level. I have tried adding openshift.io/node-selector: dc=mex01 to the annotations, as is done with project-level node selectors, to no avail. Any help would be great. Thanks!
Project node selectors are the only way to control where builds happen at the present time.
Since this is the question that shows up first on Google:
This is now possible (since 1.3 apparently): https://docs.openshift.org/latest/dev_guide/builds/advanced_build_operations.html#dev-guide-assigning-builds-to-nodes
To elaborate a bit on mhutter's answer, here are example YAML fragments using node selectors:
a BuildConfig:
apiVersion: "v1"
kind: "BuildConfig"
metadata:
name: "sample-build"
spec:
nodeSelector:
canbuild: yes
and a Node:
apiVersion: v1
kind: Node
metadata:
  creationTimestamp: null
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: mybestnodeever
    canbuild: "yes"
Since OCP v3.6 there are also taints and tolerations, which can be applied to nodes and pods, but I haven't yet found any docs on applying tolerations to build configs (or on whether they propagate to the builder pods).
https://docs.openshift.com/container-