Get the name of the source branch after an MR is merged in GitLab CI

I have a Pipeline job that needs to run only after an MR has been merged to a certain branch (let’s assume it’s master).
This job is supposed to make an API call to send the name of the merged source branch.
The problem I’m encountering is that CI_MERGE_REQUEST_SOURCE_BRANCH_NAME will not be available on the Pipeline that runs right after the merge (since it’s not a merge request pipeline).
Is there a way (an env var, perhaps) to tell which branch was just merged into master?
Many thanks in advance y’all!

Better to work with hashes than with names.
Anyway, I see two options:
In the pipeline for the merge request, save the hash/name in an artifact. The subsequent pipeline can access this artifact and read the hash/name; a sketch of this follows below.
Run an independent pipeline and read the hash/name from the previous pipeline. To make this more secure, you can add tags and read the previous pipeline only if it has the correct tag.
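A minimal sketch of the first option, assuming master is the target branch. The job names, the API_TOKEN variable, and the availability of curl and jq in the job image are all assumptions, and looking up "the latest successful MR pipeline" is race-prone if several MRs merge close together:

```yaml
# Hypothetical .gitlab-ci.yml sketch; job names and variables are assumptions.

record_source_branch:        # runs in the merge request pipeline
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" > source_branch.txt
  artifacts:
    paths:
      - source_branch.txt

notify_api:                  # runs in the branch pipeline right after the merge
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
  script:
    # Find the most recent successful merge request pipeline, locate its
    # record_source_branch job, and download the artifact it saved.
    # API_TOKEN is assumed to be a CI/CD variable holding a token with read_api scope.
    - >
      PIPELINE_ID=$(curl -s --header "PRIVATE-TOKEN: $API_TOKEN"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines?source=merge_request_event&status=success&per_page=1"
      | jq -r '.[0].id')
    - >
      JOB_ID=$(curl -s --header "PRIVATE-TOKEN: $API_TOKEN"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$PIPELINE_ID/jobs"
      | jq -r '.[] | select(.name == "record_source_branch") | .id')
    - >
      SOURCE_BRANCH=$(curl -s --header "PRIVATE-TOKEN: $API_TOKEN"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/$JOB_ID/artifacts/source_branch.txt")
    - echo "Merged source branch was $SOURCE_BRANCH"
```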

Related

ADF: Using ForEach and Execute Pipeline with Pipeline Folder

I have a folder of pipelines, and I want to execute the pipelines inside the folder using a single pipeline. There will be times when there will be another pipeline added to the folder, so creating a pipeline filled with Execute Pipelines is not an option (well, it is the current method, but it's not very "automate-y" and adding another Execute Pipeline whenever a new pipeline is added is, as you can imagine, a pain). I thought of the ForEach Activity, but I don't know what the approach is.
I have not tried this approach, but I think we can use the ADF REST API to get the details of all the pipelines that need to be executed. Since the response is JSON, you can write it back to a temp blob and add a filter to focus on what you need.
https://learn.microsoft.com/en-us/rest/api/datafactory/pipelines/list-by-factory?tabs=HTTP
You can then use the Create Run API to trigger each pipeline.
https://learn.microsoft.com/en-us/rest/api/datafactory/pipelines/create-run?tabs=HTTP
As Joel called out, if different pipelines take different sets of parameters, it will be a little messy to maintain. A rough sketch of the two calls is below.
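An untested sketch of that idea as an ADF pipeline fragment. The subscription/resource-group/factory placeholders and the activity names are assumptions, and in practice you would first filter the list response (e.g. on properties.folder.name) down to the folder you care about:

```json
{
  "activities": [
    {
      "name": "ListPipelines",
      "type": "WebActivity",
      "typeProperties": {
        "url": "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory>/pipelines?api-version=2018-06-01",
        "method": "GET",
        "authentication": { "type": "MSI", "resource": "https://management.azure.com/" }
      }
    },
    {
      "name": "RunEachPipeline",
      "type": "ForEach",
      "dependsOn": [ { "activity": "ListPipelines", "dependencyConditions": ["Succeeded"] } ],
      "typeProperties": {
        "items": { "value": "@activity('ListPipelines').output.value", "type": "Expression" },
        "activities": [
          {
            "name": "CreateRun",
            "type": "WebActivity",
            "typeProperties": {
              "url": {
                "value": "@concat('https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory>/pipelines/', item().name, '/createRun?api-version=2018-06-01')",
                "type": "Expression"
              },
              "method": "POST",
              "body": "{}",
              "authentication": { "type": "MSI", "resource": "https://management.azure.com/" }
            }
          }
        ]
      }
    }
  ]
}
```

Because the Web Activity URL is an expression, this sidesteps the static Execute Pipeline limitation described in the next answer; the trade-off is that createRun is fire-and-forget, so you would have to poll the run status yourself if you care about completion.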
Folders are really just organizational structures for the code assets that describe pipelines (same for Datasets and Data Flows), they have no real substance or purpose inside the executing environment. This is why pipeline names have to be globally unique rather than unique to their containing folder.
Another problem you are going to face is that the "Execute Pipeline" activity is not very dynamic. The pipeline name has to be known at design time, and while parameter values are dynamic, the parameter names are not. For these reasons, you can't have a ForEach loop that dynamically executes child pipelines.
If I were tackling this problem, it would be through an external pipeline management system that you would have to build yourself. This is not trivial, and in your case would have additional challenges because of the folder level focus.

What is the best way to use a pushed commit in one stage in subsequent stages?

I currently have a gitlab ci pipeline that pushes commits to the branch the pipeline is running on to update code versions (using python-semantic-release). As far as I can tell, the later stages in my pipeline do not use this newly pushed code and instead a new pipeline is triggered for this commit. I am currently skipping the triggered pipeline using [skip ci]. I would like to be able to use the original CI pipeline to finish packaging the code and publishing documentation using the new commit. Is there anything I can do to update the commit that the current CI pipeline is running on or something?
I am not aware of a way to change the ref mid-pipeline.
You might experiment with downstream pipelines, especially the multi-project ones (even though everything would remain in the same project).
Those (downstream "multi-project" pipelines) are the ones that do not have to run under the same project, ref, and commit SHA as the upstream pipeline (as opposed to parent-child pipelines).
I would also push the code (after the python-semantic-release step) to a different branch, so that your second pipeline operates on that branch, directly with the right code; a sketch of this is below.
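A minimal sketch of that combination. The release-build branch name, the job names, and the exact semantic-release invocation are assumptions, and push credentials (e.g. a project access token) are presumed to be configured:

```yaml
# Hypothetical .gitlab-ci.yml sketch; names are assumptions.

release:
  stage: release
  script:
    - semantic-release version                  # bumps the version and commits it
    # Push the new commit to a side branch; ci.skip suppresses the push-triggered
    # pipeline so only the explicit downstream trigger below runs on it.
    - git push -o ci.skip origin "HEAD:release-build"

package_and_docs:
  stage: deploy
  trigger:
    project: my-group/my-project   # multi-project trigger pointing back at this project
    branch: release-build          # downstream pipeline checks out the freshly pushed ref
    strategy: depend               # this pipeline waits for the downstream result
```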

Is there any way to run a merge result pipeline in Bitbucket using a bash script?

I want to check and validate whether a merge to master was a good one or a bad one, so I want to trigger a pipeline that checks the merge request after it has been merged into the master branch.
Pipeline should trigger atomically once a PR is merged
pipeline should be successful if the merge is a good one
pipeline should fail if the merge is bad
[New to DevOps] Any other solutions are likewise appreciated!
Read through the Bitbucket Pipelines documentation: https://support.atlassian.com/bitbucket-cloud/docs/get-started-with-bitbucket-pipelines/
Pipeline should trigger atomically once a PR is merged
It is possible to run a designated pipeline when you merge to a given branch.
pipeline should fail if the merge is bad
Any pipeline can be made to fail: write each step's script so it returns a non-zero exit code when it runs into issues. You just need to add steps that check for whatever could have gone wrong. The Bitbucket PR page will also tell you, before you merge, if there are conflicts.
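A minimal bitbucket-pipelines.yml sketch of that setup. The validation scripts are hypothetical; a branch pipeline for master runs on every push to master, which includes PR merges:

```yaml
# Hypothetical bitbucket-pipelines.yml sketch; validation scripts are assumptions.
pipelines:
  branches:
    master:                      # runs whenever master receives commits, e.g. a PR merge
      - step:
          name: Validate merge
          script:
            - ./run-tests.sh     # any non-zero exit code fails the step...
            - ./check-build.sh   # ...and marks the pipeline (and the merge) as bad
```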

Pass output from one pipeline run and use as parameter in another pipeline

The way my ADF setup currently works is that I have multiple pipelines, each containing at least one activity. Then I have one big pipeline that chains these pipelines together.
However, in the big "master" pipeline, I would now like to take the output of an activity from one pipeline and pass it to another pipeline, all of this orchestrated from the "master" pipeline.
My "master" pipeline would look something like this:
What I have tried is adding a parameter to "Execute Pipeline2" and passing:
@activity('Execute Pipeline1').output.pipeline.runId.output.runOutput
@activity('Execute Pipeline1').output.pipelineRunId.output.runOutput
@activity('Execute Pipeline1').output.runOutput
How would one go about doing this?
Unfortunately, we don't have a way to pass the output of an activity across pipelines. Right now pipelines don't have outputs (only activities do).
We have a work item that will allow a user to choose what the output of a pipeline should be (imagine a pipeline with 40 activities, where the user could pick the output of activity 3 as the pipeline output). However, this work item is in very early stages, so don't expect to see it soon.
For now, the only way is to save the output you want to storage (blob, for example) and then read it and pass it to the other pipeline. Another method could be a Web Activity that gets the pipeline run (passing the run ID), after which you extract the output using the ADF SDK or REST API and pass that to the next Execute Pipeline activity. A sketch of the plumbing is below.
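A hedged sketch of that plumbing between the two Execute Pipeline activities: the only thing an Execute Pipeline activity reliably exposes is the child run ID, so the fragment below (the activity and parameter names are hypothetical) hands that ID to Pipeline2, which could then query the Activity Runs REST API, or simply read the blob that Pipeline1 wrote:

```json
{
  "name": "Execute Pipeline2",
  "type": "ExecutePipeline",
  "dependsOn": [
    { "activity": "Execute Pipeline1", "dependencyConditions": ["Succeeded"] }
  ],
  "typeProperties": {
    "pipeline": { "referenceName": "Pipeline2", "type": "PipelineReference" },
    "parameters": {
      "upstreamRunId": {
        "value": "@activity('Execute Pipeline1').output.pipelineRunId",
        "type": "Expression"
      }
    }
  }
}
```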

Is there a way to make Gitlab CI run only when I commit an actual file?

New to Gitlab CI/CD.
What is the proper construct to use in my .gitlab-ci.yml file to ensure that my validation job runs only when a "real" checkin happens?
What I mean is, I observe that the moment I create a merge request, say—which of course creates a new branch—the CI/CD process runs. That is, the branch creation itself, despite the fact that no files have changed, causes the .gitlab-ci.yml file to be processed and pipelines to be kicked off.
Ideally I'd only want this sort of thing to happen when there is actually a change to a file, or a file addition, etc.—in common-sense terms, I don't want CI/CD running on silly operations that don't actually really change the state of the software under development.
I'm passably familiar with except and only, but these don't seem to be able to limit things the way I want. Am I missing a fundamental category or recipe?
I'm afraid what you ask is not possible within GitLab CI.
There could be a way to use the CI_COMMIT_SHA predefined variable, since on a freshly created branch it will be the same as on your source branch; see the sketch below.
Still, the pipeline will run before any custom script or condition can determine or compare SHAs.
GitLab runs pipelines for branches or tags, not commits. Pushing to a repo triggers a pipeline, and creating a branch is in fact pushing a change to the repo.
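A minimal sketch of that CI_COMMIT_SHA idea, with the caveat just mentioned: the pipeline itself still starts, but the job can bail out as a no-op when the branch tip is identical to the default branch tip. The job name and validation script are hypothetical:

```yaml
# Hypothetical .gitlab-ci.yml sketch; the pipeline still runs, but does no real
# work when the new branch points at the same commit as the default branch.
validate:
  script:
    - git fetch origin "$CI_DEFAULT_BRANCH"
    - |
      if [ "$CI_COMMIT_SHA" = "$(git rev-parse FETCH_HEAD)" ]; then
        echo "Branch tip matches $CI_DEFAULT_BRANCH; nothing new to validate."
        exit 0
      fi
    - ./run-validation.sh        # hypothetical validation entry point
```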