Collect artifacts from different jobs within the same configuration plan in Bamboo

I have an Atlassian bamboo configuration plan that has multiple stages in it. Each stage has a job and every job generates an artifact of the respective test run. The final stage is supposed to collect the artifacts from the different jobs and publish the combined test results. Is this possible?
How should the results directory path be specified?
Should the artifacts, once generated, be copied into another folder to be available in the final stage?
Thanks!

This can be handled by marking each job's artifact as shared and then creating a dependency on each artifact in the job of the final stage. Bamboo will download every subscribed artifact into that job's working directory before its tasks run.
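A minimal sketch of that wiring in Bamboo YAML Specs (plan keys, job names, and scripts are placeholders; the same setup can be created in the UI by ticking "Shared" on each artifact definition and adding artifact dependencies to the final job):

version: 2
plan:
  project-key: PROJ
  key: TESTS
  name: Combined test results
stages:
  - Stage A:
      jobs:
        - Test A
  - Stage B:
      jobs:
        - Test B
  - Publish:
      jobs:
        - Combine results
Test A:
  tasks:
    - script:
        - ./run-tests-a.sh              # placeholder; writes XML reports into results/
  artifacts:
    - name: Results A
      location: results
      pattern: '*.xml'
      shared: true                      # must be shared so other jobs can depend on it
Test B:
  tasks:
    - script:
        - ./run-tests-b.sh
  artifacts:
    - name: Results B
      location: results
      pattern: '*.xml'
      shared: true
Combine results:
  artifact-subscriptions:               # the artifact dependencies: Bamboo downloads these
    - artifact: Results A               # into this job's working directory before any task runs
      destination: combined/a
    - artifact: Results B
      destination: combined/b
  tasks:
    - script:
        - ./publish-combined.sh combined   # placeholder publishing step

The destination paths answer the directory question: each subscribed artifact is copied under the final job's working directory automatically, so no extra copy step between jobs is needed.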

Related

How to run Bamboo plans for different repositories one after another?

I have my automation suite plan in a repository. I want to run the automation suite once my APK file is published. The build that publishes the APK file is in another repository. How can I run my suite immediately after the first job completes?
For example, I have repo 1 with my automation suite, say repoAuto.
I have another repo with the client build for generating the APK, say repoBuild.
These are two different repositories.
How can I run repoAuto immediately after repoBuild?
Thanks in advance.
-Mashkur
Add 2 repositories in Plan Configuration -> Repositories -> Add Repository.
By default, you already have one stage, so add another stage. Connect the APK build from repoBuild to the first stage and the automation suite from repoAuto to the second, so that the suite only starts after the APK has been published (as sketched below).
Now if you build the plan, the stages will run one by one. Don't check the manual stage check box.
"Each stage within a plan represents a step within your build process. A stage may contain one or more jobs which Bamboo can execute in parallel. For example, you might have a stage for compilation jobs, followed by one or more stages for various testing jobs, followed by a stage for deployment jobs."-from bamboo

Is the .gitlab-ci.yml available for jobs with GIT_STRATEGY=none in Gitlab CI?

The Gitlab documentation says the following about GIT_STRATEGY: none:
none also re-uses the project workspace, but skips all Git operations (including GitLab Runner's pre-clone script, if present). It is mostly useful for jobs that operate exclusively on artifacts (e.g., deploy). Git repository data may be present, but it is certain to be out of date, so you should only rely on files brought into the project workspace from cache or artifacts.
I'm still a bit confused about how this is supposed to work. If the source code is not guaranteed to exist, then there might be no source in the project workspace and thus the .gitlab-ci.yml file would also be missing. Without a build script the job must fail. If the source is missing only part of the time depending on external factors, the job will fail randomly, which is even worse than failing every time. However, if it fails every single time then what's the point of the feature?
Another possibility I see is that .gitlab-ci.yml might be injected at runtime, so that even without a fresh copy of the repository there would be a build script. If so, could I define further files from my repository to inject into the build process? What are the restrictions on these particular jobs?
Yes; just like all the other repository files, the .gitlab-ci.yml file is not copied onto the build machine. But that doesn't matter, because the job is not run from that file: GitLab parses .gitlab-ci.yml on the server and hands the runner a generated script (and it uses the file even earlier, to decide which runner the job will run on). It is not possible to have the runner copy only selected files from the repository without a Git clone, although your script can fetch files from some other server.
A good example of when you want GIT_STRATEGY: none is something like a Slack notification as the last stage of a build, when you really don't want to clone gigabytes of repository data just to push a notification.
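For example, a sketch of such a final-stage notification job in .gitlab-ci.yml (stage names and the webhook variable are placeholders):

stages:
  - build
  - notify

build:
  stage: build
  script:
    - ./build.sh                        # placeholder build step
  artifacts:
    paths:
      - dist/

notify:
  stage: notify
  variables:
    GIT_STRATEGY: none                  # skip clone/fetch; the runner only receives the generated job script
  dependencies:
    - build                             # artifacts from build are still downloaded as usual
  script:
    - curl -X POST --data '{"text":"build finished"}' "$SLACK_WEBHOOK_URL"   # placeholder webhook call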

Gitlab pipeline cache not being shared due to different runners

I have a simple GitLab pipeline setup with two stages, build & test. Both stages are supposed to share cached files, but they don't appear to, resulting in the test stage failing. As best I can tell, the problem is that each stage uses a different runner and the cached files use the runner ID as part of the path.
.gitlab-ci.yml
...
cache:
  key: "build"
  untracked: true
...
The build stage outputs the following
Creating cache build...
untracked: found 787 files
Uploading cache.zip to https://runners-cache-1.gitlab.com:443/runner/runner/30dcea4b/project/1704442/build
The test stage outputs the following
Checking cache for build...
$ mvn test
I believe this means the cache was NOT found because there is no download information; but it's not clear.
I can also see that each stage uses a different runner and since the runner ID is part of the cache path, I suspect that is the problem.
I need to either use the same runner for each stage or share the cache across runners. I don't understand how to do either.
Any help would be appreciated.
It appears the cache feature is appropriately named: it is only for improving build performance and is not guaranteed to have the data, much like a real cache.
The correct approach is to use artifacts with dependencies.
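A minimal sketch of that approach for this pipeline (the artifact path is a placeholder for whatever the test stage actually needs):

stages:
  - build
  - test

build:
  stage: build
  script:
    - mvn package
  artifacts:
    paths:
      - target/                         # hand the build output to later stages explicitly
    expire_in: 1 hour

test:
  stage: test
  dependencies:
    - build                             # fetch build's artifacts, whichever runner picks up this job
  script:
    - mvn test

Unlike the cache, artifacts are uploaded to the GitLab server and handed to later stages regardless of which runner executes them.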

optional artifacts download task in bamboo?

Is it possible to configure a deployment project with optional 'Artifact Download' task?
The artifact comes from another plan which has 2 stages producing 2 artifacts. If only 1 stage is executed, there will be only 1 shared artifact. I want my deployment project to run even if there is only 1 artifact.
But Bamboo fails the whole execution with the error "Unable to download artifact Shared artifact: ..." while trying to locate the 2nd artifact.
How can I tell Bamboo to ignore the missing artifact and continue the execution?
The only way I've figured this out is, instead of naming each artifact individually, to put all of the artifacts into a "directory" as part of the build process, say "artifacts/", and define the artifact as "artifacts/**" (see the sketch below). Then, on the deployment side, be clever about manipulating the artifacts for deployments.
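A rough sketch of the build-side definition in Bamboo YAML Specs terms (names are placeholders; in the UI this is a single artifact definition whose location is artifacts and whose copy pattern is **):

Build Job:
  tasks:
    - script:
        - mkdir -p artifacts
        - cp target/*.war artifacts/ 2>/dev/null || true   # copy only what this build actually produced
  artifacts:
    - name: All artifacts
      location: artifacts
      pattern: '**'
      shared: true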
Note, in my case, I have an issue with multiple branches for the same build (think, "future release", "current release", "legacy release") that may have different artifacts on them (either new features in "future release", or aged off artifacts from "legacy release"). I had to wrap the actual deployments into a script that was "smart enough" to just iterate through artifacts that actually existed for a given deployment environment.
I'm not at all happy with Bamboo's treatment of special cases for artifact management. In fact, I've found that judicious use of the "script" task in Bamboo (and managing those scripts in some external git repo) seems to be the only real way to manage larger Bamboo installations in general.

Where does Bamboo look for artifacts?

I have created a Bamboo build plan that is supposed to generate artifacts. And it does - I see the generated files on the server. Unfortunately, Bamboo does not copy the files to the desired location, i.e. it does not treat them as artifacts that I can download from the Bamboo server.
I am working with Bamboo 4.3.3. The documentation tells me to describe the artifacts location relative to the "working directory", so I am trying to copy everything to ${bamboo.build.working.directory}.
I have tried different location / copy pattern settings, but to no avail.
Where should I put them? I have a scripting environment, and there is no Maven or Ant to help me.
I finally understood what was going on with my artifacts and test results that Bamboo did not see:
Test results: there is a known bug affecting all versions up to 4.4.5 that manifests itself in scripting environments. Fortunately, it has a workaround, described in the knowledge-base article "JUnit Parser: Test results are not found":
Bamboo uses the system property bamboo.fs.timestamp.precision to define the filesystem timestamp resolution. By default it is set to 100 (ms); set it to a higher value (e.g. by passing -Dbamboo.fs.timestamp.precision=2000 as a JVM argument when starting Bamboo) to make the file-date check less strict. Bamboo does the check in the following way:
// A result file counts as "recent enough" if its modification time is no older
// than the task start time minus the configured timestamp-resolution slack.
private boolean isFileRecentEnough(final File file)
{
    return file.lastModified() >= (taskStartDate.getTime() - SystemProperty.FS_TIMESTAMP_RESOLUTION_MS.getTypedValue());
}
Other items to check
Double-check the task configuration and confirm that it is configured to look for the test results file in the current working directory of the job (e.g. C:\Users\ssetayeshfar\bamboo-home-445\xml-data\build-dir\PROJECT-PLAN-JOB) and NOT a sub-directory (e.g. C:\Users\ssetayeshfar\bamboo-home-445\xml-data\build-dir\PROJECT-PLAN-JOB\test-results).
In case the test report is not produced by the build itself (i.e. it was produced earlier), use a 'touch' command right before the JUnit task, as in the sketch below.
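A sketch of that workaround in Bamboo YAML Specs terms (assuming the reports sit in results/; the existing JUnit Parser task follows unchanged):

tasks:
  - script:
      - touch results/*.xml             # refresh modification times so the recency check above passes
  # ... the JUnit Parser task comes next, exactly as already configured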
Artifacts: at the beginning of my work with Bamboo I did not understand that the working directory is defined PER JOB and tried to copy something produced in a previous job as an artifact of the current one.