Trigger multi-project build on branch, but only if branch exists - gitlab-ci

I have projects in many git repositories, starting with the base API and then branching outwards.
When the API materially changes, I want to verify that downstream projects (that use the particular API) work correctly.
I hoped this workflow would be supported out of the box:
1. Make changes on a branch in the base API.
2. Create the same-named branch on a subset of projects that may be affected by the change.
3. Execute the build, triggering the children with:
```yaml
run-A:
  stage: .post
  trigger:
    project: a
    branch: $CI_COMMIT_BRANCH
    strategy: depend
run-B:
  ...
```
Sometimes I want to verify A, sometimes B, sometimes both or neither.
If I don't have the same branch in both A and B, I start getting:

    Downstream pipeline cannot be created; reference cannot be found
How can I get GitLab to depend on the results of other projects' builds, but ignore a downstream project if the branch doesn't exist?
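One workaround is to generate the trigger jobs dynamically: a first job asks the GitLab API whether the branch exists in each downstream project and writes a child-pipeline YAML containing trigger jobs only for the projects where it does. This is a sketch, not a tested drop-in: `mygroup/a`, `mygroup/b`, and the `$API_TOKEN` CI variable are placeholders, and branch names containing `/` would also need URL-encoding.

```yaml
generate-triggers:
  stage: build
  image: alpine:latest
  script:
    - apk add --no-cache curl
    - echo 'stages: [trigger]' > triggers.yml
    - |
      for proj in mygroup/a mygroup/b; do   # placeholder project paths
        enc=$(printf '%s' "$proj" | sed 's|/|%2F|g')
        name=$(basename "$proj")
        # The branches API returns 404 (curl --fail => non-zero) if the
        # branch does not exist in that project.
        if curl --silent --fail --header "PRIVATE-TOKEN: $API_TOKEN" \
             "$CI_API_V4_URL/projects/$enc/repository/branches/$CI_COMMIT_BRANCH" >/dev/null; then
          printf 'run-%s:\n  stage: trigger\n  trigger:\n    project: %s\n    branch: %s\n    strategy: depend\n' \
            "$name" "$proj" "$CI_COMMIT_BRANCH" >> triggers.yml
        fi
      done
  artifacts:
    paths: [triggers.yml]

trigger-children:
  stage: .post
  trigger:
    include:
      - artifact: triggers.yml
        job: generate-triggers
    strategy: depend
```

If no branch matches, the generated file still contains a valid (empty-stage) pipeline definition, so the parent pipeline succeeds without triggering anything.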

Related

What is the best way to use a pushed commit in one stage in subsequent stages?

I currently have a gitlab ci pipeline that pushes commits to the branch the pipeline is running on to update code versions (using python-semantic-release). As far as I can tell, the later stages in my pipeline do not use this newly pushed code and instead a new pipeline is triggered for this commit. I am currently skipping the triggered pipeline using [skip ci]. I would like to be able to use the original CI pipeline to finish packaging the code and publishing documentation using the new commit. Is there anything I can do to update the commit that the current CI pipeline is running on or something?
I am not aware of changing the ref mid-pipeline.
You might experiment with downstream pipelines, especially multi-project ones (even though everything would remain in the same project).
Downstream "multi-project" pipelines are the ones that do not have to run under the same project, ref, and commit SHA as the upstream pipeline (as opposed to parent-child pipelines).
I would also push the code (after the python-semantic-release step) to a different branch, in order for your second pipeline to operate on that second branch, directly with the right code.
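That suggestion could be sketched like this; the job names, the branch naming scheme, and the exact python-semantic-release command are assumptions, not details from your setup:

```yaml
release:
  stage: release
  script:
    # Bump the version and commit it (the command name depends on your
    # python-semantic-release version).
    - semantic-release version
    # Push the bump to a dedicated branch; the ci.skip push option
    # suppresses the branch-push pipeline so only the triggered one runs.
    - git push origin "HEAD:release/$CI_COMMIT_SHORT_SHA" -o ci.skip

package-and-docs:
  stage: .post
  trigger:
    project: $CI_PROJECT_PATH          # "multi-project" trigger back into the same project
    branch: release/$CI_COMMIT_SHORT_SHA
    strategy: depend
```

The downstream pipeline checks out the release branch directly, so its packaging and documentation jobs see the newly pushed commit.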

Is there a way to trigger a child plan in Bamboo and pass it information like a version number?

We're using Go.Cd and transitioning to Bamboo.
One of the features we use in Go.Cd is value stream maps. This enables triggering another pipeline and passing information (and build artifacts) to the downstream pipeline.
This is valuable when an upstream build has a particular version number, and you want to pass that version number to the downstream build.
I want to replicate this setup in Bamboo (without a plugin).
My question is: Is there a way to trigger a child plan in Bamboo and pass it information like a version number?
This has three steps.
1. Use a parent plan/child plan to set up the relationship.
2. Using the Artifacts tab, set up shared artifacts to transfer files from one plan to another.
3a. At the end of the parent build, dump the environment variables to a file:
env > env.txt
3b. Set up (using the Artifacts tab) an artifact selector that picks this up.
3c. Set up a fetch for this artifact from the shared artifacts in the child plan.
3d. Using the Inject Variables task, read the env.txt file you have transferred over. The build number from the original pipeline is now available in this downstream pipeline (just like Go.Cd).
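Step 3a can be made more targeted: instead of dumping the whole environment, write only the values the child plan needs, in the key=value format the Inject Variables task reads. The variable names here are illustrative; `bamboo_buildNumber` is set by Bamboo on the agent, so a default is provided only so the sketch also runs outside a build.

```shell
# Write only the variables the child plan needs, one key=value pair per
# line: the format Bamboo's Inject Variables task reads.
printf 'buildNumber=%s\nversion=%s\n' \
  "${bamboo_buildNumber:-0}" "${bamboo_version:-0.0.0}" > env.txt
cat env.txt
```

A smaller, explicit file also avoids accidentally injecting unrelated (or sensitive) environment variables into the child plan.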

How to make Gitlab CI automatically create pipelines on every commit

A while back I saw a demo (can't recall where) in which, after every push, GitLab CI automatically created a pipeline for the commit on the appropriate branch, but without running it automatically.
This allowed the user to run a pipeline on any branch without having to manually select the branch from the "new pipeline" button.
I can't find such information in the documentation. Perhaps it was a hack, but it worked nonetheless.
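I don't know what mechanism that demo used, but one approximation is to make the jobs manual on every ref: each push still creates the pipeline, yet nothing executes until a job is started from the UI. A sketch, with a placeholder job body:

```yaml
test:
  script: ./run-tests.sh   # placeholder
  rules:
    - when: manual
      allow_failure: true  # keep the pipeline from showing as "blocked"
```

With `rules`-based manual jobs, `allow_failure` defaults to false, so setting it to true here keeps the pipeline status green instead of "blocked" while the job waits.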

Is there a way to make Gitlab CI run only when I commit an actual file?

New to Gitlab CI/CD.
What is the proper construct to use in my .gitlab-ci.yml file to ensure that my validation job runs only when a "real" checkin happens?
What I mean is, I observe that the moment I create a merge request, say—which of course creates a new branch—the CI/CD process runs. That is, the branch creation itself, despite the fact that no files have changed, causes the .gitlab-ci.yml file to be processed and pipelines to be kicked off.
Ideally I'd only want this sort of thing to happen when there is actually a change to a file, or a file addition, etc.—in common-sense terms, I don't want CI/CD running on silly operations that don't actually really change the state of the software under development.
I'm passably familiar with except and only, but these don't seem to be able to limit things the way I want. Am I missing a fundamental category or recipe?
I'm afraid what you ask is not possible within GitLab CI.
There could be a way to use the CI_COMMIT_SHA predefined variable, since it will be the same in your new branch as in the source branch.
Still, the pipeline will run before it can determine or compare SHAs in a custom script or condition.
GitLab runs pipelines for branches or tags, not commits. Pushing to a repo triggers a pipeline, and creating a branch is in fact pushing a change to the repo.
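That said, if the validation can live in merge request pipelines, a close approximation exists: in an MR pipeline, `rules:changes` compares against the target branch, so a branch created with no modifications produces no matching jobs. A sketch, with a placeholder script name:

```yaml
validate:
  script: ./validate.sh   # placeholder
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - "**/*"
```

This does not stop the branch-push pipeline itself, but it keeps the validation job from running when the diff against the target branch is empty.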

How to run the same job against multiple repositories with multiple triggers?

So I'm actively trying to circumvent the job limit Bamboo has in place, because I have many inactive repositories that get fixed occasionally when new platform updates come out or a one-off new feature is added.
What I would like to happen is for my repository polling to pick up that there's been a change on one of my repository branches, run the job, and presto-change-o we're back to square 1 where I'm listening again for another repository polling update from another change.
Example:
Repo 1 has a commit pushed
Bamboo "hears" the change and starts the job
Repo 2 has a commit pushed
Bamboo hears this change as well, but doesn't continue due to 1 agent being available, this change is queued for later
Repo 1's triggered update finishes and publishes an artifact that can be shared
Bamboo resolves and starts Repo 2's job
Is doing something like this even possible? The best solution (meh) that I've found thus far is to create one job with a sequential build, basically checkout/build/checkout/build/checkout/build, but that would mean running through many unnecessary steps when I poll an update from only one repository. It's not like these things change frequently.
You can add multiple repositories to your build plan, and in your Repository Polling trigger tick the checkboxes for all repositories added to the plan.
To add multiple repositories:
1. Open the plan configuration for editing.
2. Select the third tab, "Repositories".
3. Press the "Add repository" button.
4. Configure your repository and save.
5. Select the fourth tab, "Triggers".
6. Open your Repository Polling trigger and select all the repositories you added in steps 3-4.
7. Save the trigger.
Then repository polling has to check all configured repos, according to documentation:
https://confluence.atlassian.com/display/BAMBOO058/Triggering+builds
You can also add additional repositories to the Source Code Checkout task and check out each repository into a different subdirectory.
E.g. for repos R1, R2, R3 you will have working copy directories ./W1, ./W2, ./W3.
Then it's up to you. Either clone your assembly task T into T1, T2, T3 to build from each working copy; that runs for all jobs on every commit, and they will all produce artifacts with the same build number. Or add a shell script task that discovers the latest commit among all working copies (let's assume it is ./W2), creates a symbolic link to that working copy's subdirectory as ./MySymbolicLink, and have the job that assembles the build do so from the ./MySymbolicLink folder.
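The "symlink the freshest working copy" step can be sketched as below. In a real Bamboo task the loop would run over ./W1 ./W2 ./W3 and read `git -C "$wc" log -1 --format=%ct`; here, file mtimes on temporary directories stand in for commit timestamps so the sketch is self-contained and runs anywhere.

```shell
#!/bin/sh
set -e
# Stand-in working copies with different "last change" times.
demo=$(mktemp -d)
mkdir "$demo/W1" "$demo/W2" "$demo/W3"
touch -t 202301010000 "$demo/W1/src.txt"
touch -t 202306010000 "$demo/W2/src.txt"   # W2 is the newest checkout
touch -t 202303010000 "$demo/W3/src.txt"

# Pick the working copy with the most recent change.
latest=""; latest_ts=0
for wc in "$demo"/W1 "$demo"/W2 "$demo"/W3; do
  # GNU stat uses -c %Y, BSD stat uses -f %m; try both.
  ts=$(stat -c %Y "$wc/src.txt" 2>/dev/null || stat -f %m "$wc/src.txt")
  if [ "$ts" -gt "$latest_ts" ]; then latest_ts=$ts; latest=$wc; fi
done

# Point the stable path at the freshest copy; the assemble task then
# always builds from ./MySymbolicLink.
ln -sfn "$latest" "$demo/MySymbolicLink"
echo "building from: $(readlink "$demo/MySymbolicLink")"
```

`ln -sfn` replaces any previous link atomically enough for this purpose, so the assemble task's path never has to change between runs.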