trigger pipeline for multi-project pipelines without maintainer permissions - gitlab-ci

I have two projects A and B each with their own CI/CD pipelines. Project A has a trigger implemented that invokes the pipeline of project B when something is committed/merged to the master branch.
The following chart roughly describes the workflow:
A (master) -> Pipeline A                /-> B (master) -> Pipeline B
    ^         |- Job 1..N-1            /                  |- Job 1 - Prepare
  commit      \- Job N --> trigger ---/                   |- Job 2 - Load dependency A, C, ...
                                                          |- Job 3 - Build, etc.
The goal is that a maintainer of project A can merge to the master branch of their component, and the pipeline is then triggered in project B, which depends on project A but also on other components such as dependency C (not shown here). Upstream pipelines put their artifacts into a package repository (e.g. npm, pip, NuGet, Docker, etc.), and the downstream project fetches the most recent dependency from there to create a new build.
Whenever an upstream project changes, the downstream project should be rebuilt using the most recent version of the corresponding upstream dependency. This ensures that we automatically catch issues caused by changes in upstream projects and that we always have a downstream version with the latest upstream changes integrated.
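To illustrate that workflow, a downstream job in project B corresponding to "Job 2 - Load dependency A, C, ..." might look roughly like the sketch below; the registry variable, package names, and the choice of npm are placeholders I am assuming, not taken from the actual projects:
load-dependencies:
  stage: prepare
  image: node:lts
  script:
    # always pull the most recently published builds of the upstream packages
    # (registry URL and package names are hypothetical)
    - npm install --registry "$PACKAGE_REGISTRY_URL" @group-1/project-a@latest @group-1/dependency-c@latest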
The maintainer of project A usually does not have maintainer permissions for project B. The issue is that this causes the permission error "no permission to trigger downstream pipeline" on the trigger job when the upstream pipeline is executed by that maintainer (who accepts a merge request for their own project).
I am wondering if this is a limitation of GitLab's CI/CD implementation or if this is a design problem in the pipeline that could be solved by a different implementation/workflow.
This is the trigger that is implemented in project A to invoke the pipeline of project B:
trigger:upstream:
  stage: deploy
  trigger: group-1/group-2/project-b
  rules:
    - if: '$CI_COMMIT_REF_PROTECTED == "true" && ($CI_COMMIT_TAG =~ /^$/ || $CI_COMMIT_TAG =~ /^v.*$/)'
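Since the question asks whether a different implementation could avoid the permission error, one possible alternative, sketched below purely as an assumption and not part of the original setup, is to call project B's pipeline trigger API with a trigger token created by a maintainer of project B, so the downstream pipeline runs under the permissions of the token's creator rather than the upstream committer. PROJECT_B_TRIGGER_TOKEN and PROJECT_B_ID would be CI/CD variables defined in project A, and the instance URL is a placeholder:
trigger:downstream:
  stage: deploy
  image: curlimages/curl:latest
  rules:
    - if: '$CI_COMMIT_REF_PROTECTED == "true" && ($CI_COMMIT_TAG =~ /^$/ || $CI_COMMIT_TAG =~ /^v.*$/)'
  script:
    # PROJECT_B_TRIGGER_TOKEN: pipeline trigger token created in project B
    # PROJECT_B_ID: numeric ID of project B
    - >
      curl --fail --request POST
      --form "token=${PROJECT_B_TRIGGER_TOKEN}"
      --form "ref=master"
      "https://gitlab.example.com/api/v4/projects/${PROJECT_B_ID}/trigger/pipeline"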

Related

Azure DevOps Build pipeline failed for DownloadPipelineArtifact@2

The build pipeline failed for the DownloadPipelineArtifact@2 task in my VB.NET app deployment:
##[error]No builds currently exist in the pipeline definition supplied.
The DownloadPipelineArtifact@2 task is configured in the azure-pipelines.yml file.
Requesting assistance on the same.
That task fails because you haven't published an artifact with that pipeline.
You have two different ways to publish artifacts:
- Publish Build Artifacts (deprecated): used mostly in classic pipelines; can only be downloaded with the Download Build Artifacts step.
- Publish Pipeline Artifacts: publishes an artifact that is available to the same run or to another pipeline; can only be downloaded with the Download Pipeline Artifact task.
You need to publish your build using Publish Pipeline Artifact first (check whether that is already the case).
So first check the source pipeline with definition 370 from project StarsDemo, and see whether the latest build published an artifact or had an issue.
Also, per the documentation, the runId is required (aka pipelineId / buildId).
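For reference, a minimal publish/download pair could look like the sketch below; the artifact name and paths are placeholders, and input aliases (e.g. runVersion vs. buildVersionToDownload) vary between task versions:
# in the source pipeline (definition 370): publish a pipeline artifact
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'

# in the consuming pipeline: download it from that specific pipeline definition
- task: DownloadPipelineArtifact@2
  inputs:
    buildType: 'specific'
    project: 'StarsDemo'
    definition: 370
    runVersion: 'latest'        # or 'specific' together with a runId
    artifactName: 'drop'
    targetPath: '$(Pipeline.Workspace)/drop'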
I was able to resolve the issue by changing the runVersion value to latest instead of latestFromBranch, after which the pipeline built successfully.

Is there any way to check that an npm package has been published from the correct branch?

So, we have Project A that has a develop and a release branch. We reference a package from another one of our projects, Project B, which also has develop and release branches. What I'd like to be able to do is check that, if I'm building/deploying Project A from its release branch through Azure DevOps, it is referencing the package that was created from the release branch of Project B.
My initial thought was to look into npm dist-tags: https://docs.npmjs.com/cli/dist-tag but, if I understand this correctly, it doesn't guarantee that the package has ACTUALLY been published from that branch, i.e. I could be on my own branch and simply publish the package with a tag of 'release'. Is there a way to automatically add a dist-tag if you publish from a specified branch?
My next issue, which may well need to be another question, is whether there is a way in Azure DevOps to check against this package to ensure it has been published from the correct branch and, if not, fail the build. So, if dist-tags were used, is there a way to check a package's dist-tag as part of the build to ensure it has the right tag, e.g. 'release'?
For the first question: you can automatically publish and add a dist-tag to your package with an Azure DevOps build pipeline, which means you need to create a build pipeline that publishes your package. You can add a PowerShell task to guarantee that the package is published from a certain branch.
Add a PowerShell task that runs an inline script before the npm publish task.
$sourceBranch = "$(Build.SourceBranchName)"
if ($sourceBranch -ne "release")
{
    exit 1
}
The script above checks whether the source branch is the release branch; if not, it fails the task.
To publish your package with tag information, add an npm task that runs a custom command (a sketch follows below). You can define the tag name dynamically using pipeline variables (e.g. publish --tag $(Build.SourceBranchName)).
Make sure CI triggering is enabled for this build pipeline and that it triggers on the release branch.
With the above steps, your package will be published automatically with your self-defined tag.
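Put together in YAML, the two steps described above might look roughly like this sketch (task versions and registry configuration are assumptions and omitted where not essential):
# fail the build if it is not running from the release branch
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      $sourceBranch = "$(Build.SourceBranchName)"
      if ($sourceBranch -ne "release") { exit 1 }

# publish with a dist-tag derived from the branch name
- task: Npm@1
  inputs:
    command: 'custom'
    workingDir: '$(Build.SourcesDirectory)'
    customCommand: 'publish --tag $(Build.SourceBranchName)'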
For your second question: to check a package's dist-tag in your build pipeline, you can also add a PowerShell task that checks the tag information of the package and fails if the check does not pass.
Set-Location -Path "$(Build.SourcesDirectory)"
$tags = npm view packageName dist-tags
if (($tags -eq $null) -or -not ($tags -match "release"))
{
    exit 1
}
The script above gets the tag information using the command npm view packageName dist-tags and checks whether the dist-tags include "release"; the task fails if they do not.
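Wired into the consuming build pipeline, that check could be a single inline step; packageName remains a placeholder:
- task: PowerShell@2
  displayName: 'Verify the dependency has the release dist-tag'
  inputs:
    targetType: 'inline'
    script: |
      Set-Location -Path "$(Build.SourcesDirectory)"
      $tags = npm view packageName dist-tags
      if (($tags -eq $null) -or -not ($tags -match "release")) { exit 1 }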

Deploying Vue.js App using azure devops release pipeline

I have a Vue.js application that is created and built using Vue CLI 3. I have some environment variables in .env.test and .env.prod files.
To build the app I'm using an Azure DevOps build pipeline where I run the command:
npm run build:test or npm run build:prod
That generates different artifacts that are the input for a stage in the Azure DevOps release pipeline.
The problem I'm facing is that I don't want separate builds for every environment. I want to build once and deploy to different environments. Is that possible?
How do I handle those variables to build one package for all environments? Is that a good practice? Or should I have different pipelines for different environments, as I have right now?
From the perspective of CI
There should be only a single build pipeline that builds the artifact regardless of the environment where it will run.
.env.prod might be used to deploy artifacts to any environment (Development, Production, etc.).
You have to provide the configuration with tokens, which will be replaced at the Deployment/Release stage:
env_key1=#{token_key1}#
env_key2=#{token_key2}#
env_key3=#{token_key3}#
Therefore, just build the project and publish the artifact using a single configuration file for all environments.
From the perspective of CD
I would recommend using a single release pipeline with multiple stages (Development, Production, etc.).
Provide separate variable groups per stage. This keeps the variables separate and logically grouped, and allows you to use Azure Key Vault as the source of secrets. Variable names must match the token names (without prefix and suffix).
Add any task you wish to a stage to find and replace the tokens.
Currently, I use the Replace Tokens extension from the marketplace. Depending on the stage, a different group of variables will be substituted. The Replace Tokens task does all of the work automatically, i.e. it scans the JS files and replaces the tokens. The default token prefix and suffix are #{ and }#, but the task allows you to provide custom ones.
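As a rough sketch of such a stage (the variable group name is hypothetical, and the inputs shown are those of version 3 of the marketplace task; other versions may differ):
variables:
- group: 'vue-app-production'   # hypothetical variable group, one per stage

steps:
- task: replacetokens@3         # Replace Tokens marketplace extension
  inputs:
    rootDirectory: '$(System.DefaultWorkingDirectory)'
    targetFiles: '**/*.js'      # scan the built JS bundles for #{...}# tokens
    tokenPrefix: '#{'
    tokenSuffix: '}#'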
So, we had a similar problem. We are about to update our solution to work with a variable group, but if you want a way to do it without one, you can always do something like this:
- script: |
    npm install
    npm run test:unit
    if [ $? -ne 0 ]; then
      exit 1
    fi
    npm run build-prod
  condition: and(succeeded(), not(in(variables['Build.Reason'], 'PullRequest', 'Manual')))
  displayName: 'npm install, test and build for prod'
- script: |
    npm install
    npm run test:unit
    if [ $? -ne 0 ]; then
      exit 1
    fi
    npm run build
  condition: and(succeeded(), in(variables['Build.Reason'], 'PullRequest', 'Manual'))
  displayName: 'npm install, test and build for test'
A quick breakdown of the scripts: if the build was part of a pull request or was manual, we wanted a staging build, which used the default build script. Otherwise, we assumed the build was meant for production (you will want some branch policies to enforce this). Then the release pipeline checked for a build tag, which we set with the following:
- task: PowerShell@2
  condition: and(succeeded(), not(in(variables['Build.Reason'], 'PullRequest', 'Manual')))
  inputs:
    targetType: 'inline'
    script: 'Write-Host "##vso[build.addbuildtag]release"'
- task: PowerShell@2
  condition: and(succeeded(), in(variables['Build.Reason'], 'PullRequest', 'Manual'))
  inputs:
    targetType: 'inline'
    script: 'Write-Host "##vso[build.addbuildtag]test"'
Now, like I said, we are moving away from this, but it did work pretty well, and it allowed us to have one build that would deploy with the correct settings without needing to do anything too fancy.
If you use something like this, the last step is to filter the builds when they get to the release pipeline based on the build tag and branch.

Gradle Module Execution Order

I have a gradle project with two modules.
The first module (A) produces an archive that the second module (B) makes use of. I've defined the settings.gradle file so that A is seen before B.
At the end of A, there is an install task that is called which will make the archive available for B; however, the install task won't execute until all modules have built.
When I use A's build file, or set the -p build option, it will still try to find dependencies for project B. I don't want to do this!
How can I set this up so that module B will wait completely for module A to finish?
I've defined the settings.gradle file so that A is seen before B.
Order doesn't matter here.
At the end of A, there is an install task that is called which will make the archive available for B
The correct way to handle this is to make the outputs of A available to B via a project dependency. In the simplest case, B's build.gradle will contain the following:
dependencies {
    compile project(":A") // could be something other than 'compile'
    // note: newer Gradle versions replace the deprecated 'compile' with 'implementation'
}
When I use A's build file, or set the -p build option, it will still try to find dependencies for project B.
Most likely there is a problem with one of your build scripts, namely that it performs work in the configuration phase that should be done in the execution phase.
How can I set this up so that module B will wait completely for module A to finish?
There is no good way. Gradle executes a graph of tasks, not a list of projects. As long as task dependencies are correct, the former will have no drawbacks, only advantages. Often, Gradle can figure out task dependencies automatically (especially between projects).

How can I run Maven tests against a previously deployed version of the same artifact?

I have an artifact abc which has some tests. I have different versions of abc within my repository. I now want to be able to run the latest tests against the 'old build' of the project.
I tried to add the artifact itself to the test dependencies, but this (of course) results in a cyclic reference error in the Maven reactor when building the tests via:
mvn compiler:testCompile
mvn surefire:test
Is there any smart way to run tests against a previous build/artifact?
Must I create a new pom.xml in which I define the standalone test execution?
Or should I add a suffix to my current artifact when executing the tests? (That would avoid the cyclic reference error.)
Separate the tests out into a separate module/project that depends on the classes it tests. Then create separate profiles where you change the dependency to be on older releases.
The problem I foresee with what you're trying to do is that the package phase comes after the test phase of the Maven lifecycle, which implies that Maven runs unit tests against the compiled classes and not the physical JAR file (generated in the package phase). You'll therefore have to replace the contents of the project's /target/classes folder with the classes from the "older" JAR.