What is the best way to use a pushed commit in one stage in subsequent stages? - gitlab-ci

I currently have a GitLab CI pipeline that pushes commits to the branch the pipeline is running on in order to update code versions (using python-semantic-release). As far as I can tell, the later stages in my pipeline do not use this newly pushed code; instead, a new pipeline is triggered for the new commit. I am currently skipping that triggered pipeline using [skip ci]. I would like the original CI pipeline to finish packaging the code and publishing documentation using the new commit. Is there anything I can do to update the commit that the current CI pipeline is running on?

I am not aware of any way to change the ref mid-pipeline.
You might experiment with downstream pipelines, especially the multi-project ones (even though yours would remain in the same project).
Those downstream "multi-project" pipelines are the ones that do not have to run under the same project, ref, and commit SHA as the upstream pipeline (as opposed to parent-child pipelines).
I would also push the code (after the python-semantic-release step) to a different branch, so that your second pipeline operates on that second branch, directly with the right code.
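A minimal sketch of that combination, assuming GitLab CI; the release branch name and the project path my-group/my-project are assumptions, and the exact python-semantic-release command depends on your version:

    # .gitlab-ci.yml (sketch)
    release:
      stage: release
      script:
        - semantic-release publish          # version bump; exact command varies by version
        - git push origin HEAD:release      # put the new commit on a second branch (name assumed)

    trigger_release:
      stage: deploy
      trigger:
        project: my-group/my-project        # multi-project trigger; can point at this same project
        branch: release                     # downstream pipeline runs on the new commit

The downstream pipeline then checks out the head of release, i.e. the commit created by the release step.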

Related

Get the name of the source branch after an MR is merged in GitlabCI

I have a Pipeline job that needs to run only after an MR has been merged to a certain branch (let’s assume it’s master).
This job is supposed to make an API call to send the name of the merged source branch.
The problem I’m encountering is that CI_MERGE_REQUEST_SOURCE_BRANCH_NAME will not be available on the Pipeline that runs right after the merge (since it’s not a merge request pipeline).
Is there a way (env var) to tell what was the branch that was just merged into master?
Many thanks in advance y’all!
You'd be better off working with hashes than with names.
Anyway, I see two options:
In the pipeline for the merge request, save the hash/name in an artifact. The subsequent pipeline can access this artifact and read the hash/name (see the sketch below).
Run an independent pipeline and read the hash/name from the previous pipeline. To make this more secure, you can add tags and only read the previous pipeline if it has the correct tag.
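A minimal sketch of the first option, assuming GitLab CI; the job names, the master branch, and the $API_TOKEN variable are assumptions. The merge request pipeline saves the branch name as an artifact; the post-merge pipeline can fetch it through the API, or simply ask the API which merge request introduced the commit:

    # merge request pipeline: save the source branch name as an artifact
    save_source_branch:
      rules:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      script:
        - echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" > source_branch.txt
      artifacts:
        paths:
          - source_branch.txt

    # post-merge pipeline on master: one way to recover the branch is to ask
    # the API which merge request introduced this commit
    report_source_branch:
      rules:
        - if: $CI_COMMIT_BRANCH == "master"
      script:
        - 'curl --header "PRIVATE-TOKEN: $API_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/repository/commits/$CI_COMMIT_SHA/merge_requests"'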

Is there anyway to run a merge result pipeline in bitbucket using bash script?

I want to check and validate whether a merge to master was a good one or a bad one, so I want to trigger a pipeline that checks the merged code after it lands on the master branch.
The pipeline should trigger automatically once a PR is merged.
The pipeline should succeed if the merge is a good one.
The pipeline should fail if the merge is bad.
[New to DevOps] Any other solutions are appreciated as well!
Read through the Bitbucket Pipelines documentation: https://support.atlassian.com/bitbucket-cloud/docs/get-started-with-bitbucket-pipelines/
The pipeline should trigger automatically once a PR is merged.
It is possible to run a designated pipeline when you do a merge: a push to master, including a merge, runs the pipeline defined for that branch.
The pipeline should fail if the merge is bad.
Any pipeline can be made to fail if the steps' scripts return an error code when they run into issues. You just need to write the steps so they check for whatever could have gone wrong. The Bitbucket PR page will also tell you, before you merge, whether there are conflicts.
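As a minimal sketch (the script name check_merge.sh is an assumption), a branch-specific section in bitbucket-pipelines.yml runs on every push to master, including merges, and the step fails if its script exits non-zero:

    # bitbucket-pipelines.yml (sketch)
    pipelines:
      branches:
        master:
          - step:
              name: Validate merged code
              script:
                - ./check_merge.sh   # assumed validation script; non-zero exit fails the pipeline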

How to make Gitlab CI automatically create pipelines on every commit

Back in the day I saw a demo (I can't recall where now) in which, after every push, GitLab CI automatically created a pipeline for the commit on the appropriate branch, but without running it automatically.
This allowed the user to run a pipeline on any branch without having to manually select the branch from the "new pipeline" button.
I can't find any such information in the documentation. Perhaps it was a hack, but it worked nonetheless.
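One way to approximate the behavior described, assuming the demo relied on manual jobs (this is a guess, not documented behavior): jobs marked when: manual cause a pipeline to be created for each commit but left waiting until someone starts it from the pipeline page:

    # .gitlab-ci.yml (sketch): a pipeline appears for every push,
    # but nothing runs until a user starts the job by hand
    build:
      stage: build
      when: manual
      script:
        - echo "started manually from the pipeline page"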

Is there a way to make Gitlab CI run only when I commit an actual file?

New to Gitlab CI/CD.
What is the proper construct to use in my .gitlab-ci.yml file to ensure that my validation job runs only when a "real" checkin happens?
What I mean is, I observe that the moment I create a merge request (which of course creates a new branch), the CI/CD process runs. That is, the branch creation itself, despite the fact that no files have changed, causes the .gitlab-ci.yml file to be processed and pipelines to be kicked off.
Ideally I'd only want this sort of thing to happen when there is actually a change to a file, a file addition, and so on. In common-sense terms, I don't want CI/CD running on trivial operations that don't actually change the state of the software under development.
I'm passably familiar with except and only, but these don't seem to be able to limit things the way I want. Am I missing a fundamental category or recipe?
I'm afraid what you ask is not possible within GitLab CI.
There could be a way to use the CI_COMMIT_SHA predefined variable, since it will be the same on your new branch as on your source branch.
Still, the pipeline will already be running before any custom script or condition could determine or compare SHAs.
GitLab runs pipelines for branches or tags, not commits. Pushing to a repo triggers a pipeline, and branching is in fact pushing a change to the repo.
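A partial workaround sketch, assuming the job only needs to run in merge request pipelines; the glob and the script name are assumptions. rules:changes compares against the MR diff, so a branch whose MR touches no files produces no matching job:

    # .gitlab-ci.yml (sketch)
    validate:
      script:
        - ./validate.sh   # assumed validation script
      rules:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
          changes:
            - "**/*"       # any file changed in the MR diff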

How to run the same job against multiple repositories with multiple triggers?

So I'm actively trying to circumvent the job limit Bamboo has in place, because I have many inactive repositories that get fixed occasionally when new platform updates come out or a one-off new feature is added.
What I would like to happen is for my repository polling to pick up that there's been a change on one of my repository branches, run the job, and presto-change-o, we're back to square one, listening again for the next repository polling update from another change.
Example:
Repo 1 has a commit pushed
Bamboo "hears" the change and starts the job
Repo 2 has a commit pushed
Bamboo hears this change as well, but doesn't start the job because only one agent is available; this change is queued for later
Repo 1's triggered update finishes and publishes an artifact that can be shared
Bamboo resolves and starts Repo 2's job
Is doing something like this even possible? The best solution (meh) I've found thus far is to create one job with a sequential build that is basically checkout/build/checkout/build/checkout/build, but that would mean running through many unnecessary steps when only one repository has an update. It's not as if these things change frequently.
You can add multiple repositories to your build plan, and in your Repository Polling trigger tick the checkboxes for all the repositories added to the plan.
To add multiple repositories:
1. Open the plan configuration for editing.
2. Select the third tab, "Repositories".
3. Press the "Add repository" button.
4. Configure your repository and save.
5. Select the fourth tab, "Triggers".
6. Open your Repository Polling trigger and select all the repositories you added in steps 3-4.
7. Save the trigger.
Repository polling then has to check all of the configured repos, according to the documentation:
https://confluence.atlassian.com/display/BAMBOO058/Triggering+builds
You can also add additional repositories to the Source Code Checkout task, and check out each repository into a different subdirectory.
E.g. for repos R1, R2, R3 you will have working copy directories ./W1, ./W2, ./W3.
Then it's up to you. Either you clone your assembler task T into T1, T2, T3 to build from each working copy; in that case the work is done in every job on every commit, and they all produce artifacts with the same build number. Or you add a Shell Script task with a script that discovers the latest commit among all the working copies (let's assume it is ./W2) and creates a symbolic link to that working copy subdirectory as ./MySymbolicLink, so that the job assembling the build works from ./MySymbolicLink.
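A minimal sketch of such a discovery script for the Shell Script task, assuming the working copies are git checkouts in ./W1, ./W2, ./W3:

    #!/bin/sh
    # pick the working copy whose HEAD commit is newest,
    # then expose it through a stable symlink for the assembly task
    latest=""
    latest_ts=0
    for wc in ./W1 ./W2 ./W3; do
      ts=$(git -C "$wc" log -1 --format=%ct)   # HEAD commit time (Unix epoch)
      if [ "$ts" -gt "$latest_ts" ]; then
        latest_ts=$ts
        latest=$wc
      fi
    done
    rm -f ./MySymbolicLink
    ln -s "$latest" ./MySymbolicLink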