We use GitLab CI/CD on a private GitLab server and maintain a fairly complex gitlab-ci.yaml with includes and rules:
Some pipeline jobs only run on specific branches
Some pipeline jobs only run when specific variables are set
Some pipeline jobs are taken from the include without changes
Some pipeline jobs are taken from the include, but overridden in the current file
etc.
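To illustrate, a trimmed-down sketch of the kind of configuration we mean (include paths, job names and variables are made up for the example):

include:
  - project: 'ci/pipeline-templates'   # hypothetical shared include
    file: '/templates/build.yml'

deploy-job:
  # only runs on main and when DEPLOY_ENABLED is set
  rules:
    - if: '$CI_COMMIT_BRANCH == "main" && $DEPLOY_ENABLED == "true"'
  script:
    - ./deploy.sh

build-job:
  # taken from the include, but overridden here
  extends: .build-template
  script:
    - make build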
Currently, our quality assurance steps are:
Using the GitLab API to lint the file (see the sketch after this list)
Building, testing and deploying a test project to test the pipeline roughly (smoke tests)
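For reference, the lint step can be scripted against the project-level CI lint endpoint; a rough sketch as a pipeline job (LINT_TOKEN is a placeholder for an access token with api scope, and the image choice is arbitrary):

lint-ci-config:
  image: alpine:latest
  script:
    - apk add --no-cache curl jq
    # POST the file content to the project-level lint endpoint
    - >
      curl --silent --header "PRIVATE-TOKEN: ${LINT_TOKEN}"
      --header "Content-Type: application/json"
      --data "$(jq --null-input --rawfile yml .gitlab-ci.yml '{content: $yml}')"
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/ci/lint" | tee lint.json
    # fail the job if the config is not valid
    - test "$(jq --raw-output .valid lint.json)" = "true"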
What we would like to add are unit tests that check the behaviour of the pipeline under specific conditions (specific branches, variables, and other settings), so we can ensure that a given set of inputs leads to a specific pipeline configuration. Unfortunately, I was not able to find any information on how to properly test GitLab CI pipelines. How can we do that?
I am trying to learn how YAML Specs work in Bamboo. So far I have managed to deploy a plan by following the official documentation.
The documentation explains that you need to create a Bitbucket repository, create a bamboo.yml, set up a new project in Bamboo, and enable a Bamboo Specs repository; at the end you get your plan created from the YAML specs.
My question is: can I create a plan.yml and deploy it from another Bamboo plan?
For example, with Java Specs it is enough to check out a repo with several *.java spec files and use Maven and a pom file to deploy all the plans.
Can I do something similar with YAML Specs, i.e. have a folder in some SCM with several *.yml files and deploy them all at once, ending up with many plans in Bamboo based on those YAML files?
Yes and no. YAML can't be sent to the server the way Java Specs can; it needs to be committed to the repository first.
You also need to have the different projects created before committing the YAML specs, and either grant the specs repository access to each individual project or enable the flag on the linked repository (in the Specs tab) that allows access to all projects.
If this is not an issue, then yes: there is no problem defining multiple plans in your Bamboo Specs YAML file, even across multiple projects, as long as they are split into separate YAML documents (separated with "---"), as in the sketch below.
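A minimal sketch of what that can look like (project/plan keys and tasks are illustrative, and the exact schema depends on your Bamboo version, so check it against the YAML Specs reference):

---
version: 2
plan:
  project-key: PROJA
  key: PLANA
  name: First plan
stages:
  - Default stage:
      jobs:
        - Build
Build:
  tasks:
    - script:
        - echo "first plan"
---
version: 2
plan:
  project-key: PROJB
  key: PLANB
  name: Second plan
stages:
  - Default stage:
      jobs:
        - Build
Build:
  tasks:
    - script:
        - echo "second plan"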
I have my automation suite plan in a repository. I want to run the automation suite once my APK file is published. The build that publishes the APK is in another repository. How can I run my suite immediately after the first job completes?
For example: I have one repo with my automation suite, say repoAuto.
I have another repo with the client build for generating the APK, say repoBuild.
They are two different repositories.
How can I run repoAuto immediately after repoBuild?
Thanks in advance.
-Mashkur
Add 2 repositories in Plan Configuration -> Repositories -> Add Repository.
By default, you already have one stage, so add a second one. Connect the APK build from repoBuild to stage 1 and the automation suite from repoAuto to stage 2, so the suite runs only after the APK job has finished.
Now when you build the plan, the stages will run one after another. Don't check the manual stage checkbox.
"Each stage within a plan represents a step within your build process. A stage may contain one or more jobs which Bamboo can execute in parallel. For example, you might have a stage for compilation jobs, followed by one or more stages for various testing jobs, followed by a stage for deployment jobs." (from the Bamboo documentation)
I am planning to create 4 stages:
Source code checkout stage
Build for dev env stage
Build for UAT env stage
Build for prod env stage
Is it possible to use the same source code check out for all the stages? How?
This is actually straightforward:
Define your repository in the Repository tab of the plan configuration
Add a Source Code Checkout task to each build job in the plan.
By virtue of the repository definition for the plan, a consistent snapshot taken at the time the plan was started will be used for the checkout tasks, i.e. they will each fetch the same code (see the sketch below).
This is not clearly documented in the Bamboo docs, but it is discussed here: https://answers.atlassian.com/questions/33651/stages-and-artifact-passing
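A rough YAML Specs sketch of that layout (stage, job and script names are made up; the schema is approximate):

stages:
  - Build dev:
      jobs:
        - Dev build
  - Build uat:
      jobs:
        - UAT build
Dev build:
  tasks:
    - checkout:          # every job checks out the same plan-level snapshot
        force-clean-build: false
    - script:
        - ./build.sh dev
UAT build:
  tasks:
    - checkout:
        force-clean-build: false
    - script:
        - ./build.sh uat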
The above answer works, but I think you should not build the same branch for all environments. It might be better to use a proper branching workflow, so that you can easily deploy the correct change to the required environment.
I have a simple GitLab pipeline set up with two stages: build & test. Both stages are supposed to share cached files, but they don't appear to, resulting in the test stage failing. As best I can tell, the problem is that each stage uses a different runner and the cached files use the runner ID as part of the path.
.gitlab-ci.yml:
...
cache:
  key: "build"
  untracked: true
...
The build stage outputs the following:
Creating cache build...
untracked: found 787 files
Uploading cache.zip to https://runners-cache-1.gitlab.com:443/runner/runner/30dcea4b/project/1704442/build
The test stage outputs the following:
Checking cache for build...
$ mvn test
I believe this means the cache was NOT found, because there is no download information, but the output doesn't say so explicitly.
I can also see that each stage uses a different runner and since the runner ID is part of the cache path, I suspect that is the problem.
I need to either use the same runner for each stage or share the cache across runners. I don't understand how to do either.
Any help would be appreciated.
It appears the cache feature is appropriately named: it is only for improving build performance and is not guaranteed to have the data, much like a real cache.
The correct approach is to use artifacts with dependencies, as in the sketch below.
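A minimal sketch of that approach (job names and paths are illustrative; here the build output lands in target/):

build:
  stage: build
  script:
    - mvn package
  artifacts:
    paths:
      - target/    # handed to later stages as an artifact, not a cache

test:
  stage: test
  script:
    - mvn test
  dependencies:
    - build        # download the artifacts produced by the build job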
When we build a Maven project without doing mvn clean, we sometimes get "voodoo errors" such as NoSuchMethodError. I believe these are caused by moving/renaming files.
I don't want to use the clean option in the CI, because it makes the build process take much longer. Is there another alternative?
You should always use clean in a CI build. CI builds must be reproducible and that requires starting from scratch!
And about the process taking longer: the whole point of using CI (one of many) is that you can keep working while it's running, so that should not be a problem.
But what I like to do is use multiple layers of CI per project:
A first job compiles and executes some basic tests*; this build should take less than 5 minutes
If that succeeds, a second job executes all tests*, code metrics, Javadocs, etc.
If that succeeds, a third job deploys the build to a test server
(Or you can let the first job trigger both the second and the third job at once)
* You can implement the some tests / all tests functionality by configuring the Maven Surefire plugin differently per profile. A sketch follows.
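For example, in GitLab CI terms (a sketch; the profile names are made up and assume Surefire includes/excludes are configured per Maven profile):

stages:
  - test
  - verify

quick-feedback:
  stage: test
  script:
    - mvn -B clean verify -Pbasic-tests   # fast: compiles and runs the basic tests only

full-verification:
  stage: verify
  script:
    - mvn -B clean verify -Pall-tests     # all tests; metrics/javadoc goals could be added here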
We have three build targets:
Continuous Integration: Builds without doing a clean, and only runs the tests identified by Clover. This runs after each commit. On success it deploys to the test server.
Nightly: Does a clean build and runs every single test. This runs every night. On success it deploys to the test server.
Release: Same as Nightly plus creates a source control label. Run manually.
The nightly build is more trustworthy in that a clean build is conducted. However, the CI build is quicker, meaning feedback is faster.
There is an underlying problem here with the build time, but this is at least a workaround while you look at more permanent ways to address it.