GitLab matrix and artifacts with kaniko - gitlab-ci

I used this SO post, "Build multiple Docker images with gitlab-ci", to resolve this kaniko limitation for multi-image builds in a single job (--cleanup is too heavy-handed on the runner). How do I incorporate artifacts so that they are available for the next stage in a matrix using this method?
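What I am after is roughly the following sketch (the image names, paths, and two-entry matrix are illustrative, and needs:parallel:matrix requires a reasonably recent GitLab): each matrix variant of the build job saves a small artifact, and the next stage declares needs on the same matrix so every variant's artifact is available there.

stages: [build, deploy]

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  parallel:
    matrix:
      - IMAGE: [api, worker]
  script:
    # one kaniko run per matrix variant, so no --cleanup is needed
    - /kaniko/executor
      --context "$CI_PROJECT_DIR/$IMAGE"
      --destination "$CI_REGISTRY_IMAGE/$IMAGE:$CI_COMMIT_SHA"
    - echo "$CI_COMMIT_SHA" > "build-info-$IMAGE.txt"
  artifacts:
    paths:
      - build-info-$IMAGE.txt

deploy:
  stage: deploy
  needs:
    - job: build
      parallel:
        matrix:
          - IMAGE: [api, worker]
      artifacts: true
  script:
    # artifacts from both matrix variants are downloaded here
    - cat build-info-*.txt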


Manual Testing in DevOps Pipeline

We are currently using a traditional waterfall model, where we have manual and automated tests in the SIT and UAT environments. We are moving to Agile/DevOps, and I am working on a POC on DevOps. Based on my research, DevOps is suited to CI and CD, meaning the testing is automated and the pipeline is automated from Dev to Production. However, when we implement it, we want automatic code deployments to the different environments, but we want to stop the pipeline to conduct manual QA testing and manual UAT before the code is signed off for PROD deployment. If I use Jenkins for DevOps, is it recommended to stop the pipeline for a few days until manual QA is completed and manual approval is done? How is manual testing accounted for in DevOps implementations? Any insights would be helpful.
CI and CD are engineering practices that enable teams to improve productivity.
These should be implemented step by step - first implement CI and then CD. So, build pipelines as you mature in your DevOps processes.
For example, leverage a Jenkins pipeline to first orchestrate the CI pipeline, wherein the following are automated:
application build,
unit testing,
code coverage.
The output of this stage is a set of binaries that are deployed to a binary repository such as Nexus.
The next step after successful implementation of CI is CD - the process of auto-deploying artifacts from one environment to another. Say we need to deploy artifacts (binaries) to QA for testing. You can extend your CI pipeline to perform CD by moving artifacts from the DEV to the QA systems. And then stop there, since movement to the next environment should happen only when the manual testing records are approved. This means progressing to the next environment is manually triggered. Hence, while planning to build a CD pipeline, chalk out the essential steps that should be automated, and then progress step by step.
Once you are ready with automated tests and tools, you can complete your CD pipeline and automate the movement of artifacts from DEV to QA to NONPROD, and so on.
Having a pipeline blocked for days is certainly an anti-pattern. Here is one way to mitigate it -
Separate Continuous Integration (CI) and Continuous Deployment (CD) pipelines.
Have a separate process routing the correct artifacts to environments (disclaimer: I'm biased towards the one we provide - https://relizahub.com - since I'm working on it; here is a video showing how approvals are implemented - https://www.youtube.com/watch?v=PzdZjMby6Is)
So essentially, what happens is: you run a CI pipeline, which creates a deployment artifact. Then you have some approvals (manual and/or automated) which are recorded specifically against this artifact. Then you have a separate deployment pipeline which picks the right artifact and does the deployment. This way all pipelines run quickly, and you don't have to deal with pipeline runs that are stuck for a long time.
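As a compressed sketch of the idea (in practice you would keep these as two separate pipelines, as described above; shown as GitLab-style CI YAML purely for illustration, since in Jenkins the gate would typically be an input step; ./build.sh and ./deploy.sh are placeholders):

stages: [build, deploy]

build:
  stage: build
  script:
    - ./build.sh            # produces a versioned deployment artifact
  artifacts:
    paths:
      - dist/

deploy-qa:
  stage: deploy
  when: manual              # the recorded approval gates this step
  script:
    - ./deploy.sh qa dist/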

Bamboo scripted build plans

We are currently using Bamboo to build and test our software. Our build plans are just a bunch of tasks: execute this .bat, execute that .bat, etc., created with the Bamboo UI.
It happens that over months/years the build plan needs adjustments:
Parallelize jobs
Add extra jobs
Change some tasks
But such changes break the builds of older versions of the software: some scripts (called from a Bamboo task) do not exist in the older versions.
At my previous employer we used Jenkins pipelines, where the definition of the build and the tests was just a file in the source repo.
Now, with Bamboo, it appears you can use Bamboo Specs. From what I read, you create a specs file, and running it creates the build plan. But I don't see how this caters for build plans that change over time (changing steps).
For example, the Bamboo Specs of develop are used to build all plan branches (e.g. pull requests). So if you want to change the build in a pull request, you first need to merge it into develop so that the Bamboo Specs of develop update the build plan. It is not possible to test this before merging.
Question: How can you make scripted build plans in Bamboo, where every branch off develop can have a possibly different way of building?
We have it now set up as:
Buildplan 'Product A': plan branches: develop, release_x, release_y
Buildplan 'Product A PullRequest': plan branches: feature/*
Edit: supported in 7.0: https://confluence.atlassian.com/bamboo/enhanced-plan-branch-configuration-996709304.html
Old answer:
I found the Atlassian issue https://jira.atlassian.com/browse/BAM-19620. They call it 'divergent plan branches'. There is no support yet, only a feature request.
As of 15-4-2019:
Atlassian Update – [11 April 2019]
Hi everyone,
Thank you for your votes and thoughts on this issue. We fully understand that many of you are dependent on this functionality.
After careful consideration, we've decided to prioritise [this feature] on Bamboo roadmap. We hope to start development after our current projects are completed. Expect to hear an update on our progress within the next 6 months.
To learn more on how your suggestions are reviewed, see our updated workflow for server feature suggestions.
Kind regards,
Bamboo Team
Question: How can you make scripted build plans in Bamboo?
To make scripted build plans in Bamboo, you have to use Bamboo Specs. Since you are already familiar with Jenkins: Bamboo Specs work much like a Jenkinsfile, describing your pipeline as code. The benefit is that the specs live in your source code, and the changes you make to the file automatically change your plan (pipeline) when a Bamboo build is triggered.
This is how I script build plans in Bamboo:
I add my bamboo.yml file under the root of my repo. (Currently I actually use a git subtree and my Bamboo Specs live in there, but you don't have to do this; the link below describes the simple approach.)
Link my repo to Bamboo
Tell Bamboo to scan for Bamboo Specs in the repo
Make a commit and push
https://confluence.atlassian.com/bamboo/tutorial-bamboo-specs-yaml-stored-in-bitbucket-server-941616819.html
If I have to make changes to the plan in the future, I edit the Bamboo Specs file, then commit and push.
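For illustration, a minimal bamboo.yml along the lines of that tutorial looks roughly like this (the project/plan keys and the script are placeholders; verify the exact syntax against the tutorial above):

---
version: 2
plan:
  project-key: PROD
  key: PRODA
  name: Product A
stages:
  - Build:
      jobs:
        - Build
Build:
  tasks:
    - checkout:
        force-clean-build: true
    - script:
        - build.bat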
I had the same problem and unfortunately had to make an unpleasant choice.
Backporting the build script
This is not necessarily feasible everywhere, but I managed to make it work somehow for my project.
The idea is: treat the build script as a C#/Java interface, or better as a contract.
As long as your branches do not introduce significant changes in how the software is built (e.g. your desktop app becomes a web app, or you switch from Ant to Gradle), you can handle this.
Assuming my application is always a web application to be released as a jar on JFrog Artifactory, I have identified the following steps that are common to all maintained versions:
Use javac to build the jar of all modules
Use gulp to build the Javascript resources
Run JUnit from the repository
Baptize 💒 the artifacts with a version number obtained with a tricky algorithm
Push the artifacts to JFrog Artifactory
So the idea is that I took my Ant build script and mostly rewrote it in order to perform the same tasks on different versions of the application. I started making the changes from an older version, not maintained anymore, as an exercise. In fact, my official Git branches look like release/x.y.z, where the semver is x.y.z.k and newer bugfix builds are built from the head of any x.y.z release.
So I took the release/3.10.0 branch and rewrote the Ant script. I am currently testing with a manually created Bamboo plan:
Stage: Compile
  ant clean ivy-retrieve compile jar   # builds the jar in a job
  ant gulp-install gulp-prod zip       # creates the JavaScript resources
Stage: Test
  ant run-junit
Manual Stage: Release
  ant baptize ivy-release              # tags the artifact using ${bamboo.jira.version} and pushes it to JFrog Artifactory
What I am going to do with YAML
Since the build script is the same, while specific details (e.g. the Java compiler version) may change across versions, I can create a single YAML spec that rules them all.
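Roughly like this (a sketch only; the plan keys are placeholders, and the manual-stage syntax should be verified against your Bamboo version):

---
version: 2
plan:
  project-key: APP
  key: REL
  name: Application release
stages:
  - Compile:
      jobs:
        - Compile
  - Test:
      jobs:
        - Test
  - Release:
      manual: true
      jobs:
        - Release
Compile:
  tasks:
    - script:
        - ant clean ivy-retrieve compile jar
        - ant gulp-install gulp-prod zip
Test:
  tasks:
    - script:
        - ant run-junit
Release:
  tasks:
    - script:
        - ant baptize ivy-release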
I will then merge release/3.10.0 => release/3.10.1 => release/3.10.2 ... release/3.11.2, resolving the conflicts along the way.
Personal experience
Tonight I am struggling to make the JUnit tests work, as I also chose to backport my testing framework to the older version of the project. I accept that a few tests will fail, because older and non-maintained versions contain bugs. For me this is a way to prove that the system works.
Indeed, divergent plan branches are a great idea, but I am forced to use Bamboo 6 at my office.

Creating Bamboo Release automatically after successful build

I am using Bamboo version 5.6.2.
I have a requirement to create a release every time a build is successful, as part of a continuous integration pipeline. The output of the build pipeline is a link to a Docker image in an external Docker registry.
The reason: the administrator has configured build expiry, so old build results, including artifacts, may be deleted.
Intent: creating a release will ensure that the build result/artifact stays, thereby allowing us to deploy it at a later stage by referring to the artifact.
I found similar question here: https://answers.atlassian.com/questions/33136376/how-can-i-automatically-create-a-deployment-release-but-dont-execute-deployment-yet but yet to be answered.
Create a deployment environment with a simple echo script task, and add a trigger to that environment so it deploys after each successful build. I am not sure whether such a trigger exists in 5.6; I am working with 5.14.4.

Whether drone.io supports creating Docker containers during the build process

I am using maven-docker-plugin in my project. This plugin creates Docker containers during the integration tests. Since drone.io puts the build process inside a Docker container, can I still use maven-docker-plugin during the Maven build? And how can I control the Docker containers at build time?
If you want to interact directly with the Docker daemon to create and start containers, you need to mount the host machine's Docker socket into your build container.
Since you mentioned using the docker-maven-plugin you may want a configuration similar to the following:
pipeline:
  build:
    image: maven
    environment:
      - DOCKER_API_VERSION=1.20
      # point the Docker client used by the plugin at the mounted host socket
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      # gives the build container access to the host Docker daemon
      - /var/run/docker.sock:/var/run/docker.sock
    commands:
      - mvn clean package docker:build
Please note that exposing the Docker daemon to your build environment is essentially giving your build root access to your server. This approach is therefore not recommended for public repositories.
Please also note that volumes are restricted for security reasons. To use volumes, you need a Drone administrator to mark your repository as trusted in your repository settings screen.
So it is possible to launch containers from inside the build environment for the purpose of running your tests. The recommended approach, however, is to run your tests directly inside your build environment. This is the use case for which Drone is optimized, and it eliminates the security issues mentioned above.
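For that recommended approach, the containers your integration tests need can be declared as services in the same YAML; Drone starts them next to the build container, and the tests reach them by hostname. A sketch (the postgres service and the Maven goal are illustrative):

pipeline:
  build:
    image: maven
    commands:
      # tests reach the service by its name, e.g. jdbc:postgresql://database:5432/postgres
      - mvn clean verify

services:
  database:
    image: postgres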

Bamboo build with local and remote agents

I have a .NET WebApi project, and continuous integration is set up using Bamboo. I am using SpecFlow tests, and some of the tests are tagged to run on a Bamboo remote agent because they are slow in nature. The other tests are supposed to run on multiple local agents. I have set up multiple stages in the Bamboo build plan, as stages get run in parallel, with each stage set to run specifically tagged test suites.
My question is: what is the general practice for setting up a Bamboo plan to run on multiple agents (local and remote), and how can I share one MSBuild output (DLLs and config) across multiple agents?
If you need to split the build and test phases, you usually have a Build stage with a single job that produces an artifact containing the build output.
Then you create another stage and put several jobs in it. The jobs can be configured to download the artifact produced by the Build stage and execute the tests against your build.
If you want to run some of your jobs on a remote agent, you can add a job requirement that only a remote agent can satisfy.
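In Bamboo Specs that shape looks roughly like the YAML sketch below (the plan keys, artifact pattern, and the isRemoteAgent capability are illustrative; the capability would be a custom one you define on your remote agents):

---
version: 2
plan:
  project-key: WEB
  key: API
  name: WebApi
stages:
  - Build:
      jobs:
        - Build
  - Test:
      jobs:
        - Fast tests
        - Slow tests
Build:
  tasks:
    - script:
        - msbuild WebApi.sln /p:Configuration=Release
  artifacts:
    # shared: true makes the build output available to jobs in later stages
    - name: binaries
      pattern: '**/bin/Release/**'
      shared: true
Fast tests:
  tasks:
    - script:
        - run-fast-tests.bat
  artifact-subscriptions:
    - artifact: binaries
Slow tests:
  tasks:
    - script:
        - run-slow-tests.bat
  artifact-subscriptions:
    - artifact: binaries
  requirements:
    # custom agent capability defined only on the remote agents
    - isRemoteAgent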