How can I write a commit hash to a file using .gitlab-ci before build and deploy

I want to add an endpoint in my server to retrieve the current commit hash in production. I am using .gitlab-ci. I want to do this in the pipeline so that the commit hash is written to a file before "build and deploy". I can read this file on request to return the latest deployed version. Can anyone help me with the steps and examples? Thanks in advance!
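For the direct approach, GitLab exposes the commit being built in the predefined CI_COMMIT_SHA variable, so the hash can be written to a file as the first step of the job. A minimal sketch (the job name, file name, and deploy command are placeholders):

build-and-deploy:
  stage: deploy
  script:
    # CI_COMMIT_SHA is a predefined GitLab CI variable holding the current commit hash
    - echo "$CI_COMMIT_SHA" > commit_hash.txt
    - ./build_and_deploy.sh   # placeholder for the actual build and deploy steps

Provided commit_hash.txt is shipped along with the deployment, the server can simply return its contents from the endpoint.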

I would offer an alternative to this: use GitLab's environments and deployments features, which cover, in part, this exact use case.
In your CI/CD configuration (.gitlab-ci.yml), you can specify an environment: key that will record deployments to your environment(s).
For example:
deploy:
  script:
    - echo "your deployment script here"
  environment:
    name: "production"
Now, when this job runs, GitLab will record it as a deployment that can be queried later.
Then you can use the deployments API or the environments API to get the latest deployment information which will include, among other information, the commit hash of the deployment.
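As a sketch of the API side (host, project ID, and token are placeholders), the latest successful deployment to production can be fetched like this; the sha field of the first result is the deployed commit hash:

# returns a JSON array with the newest matching deployment first
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/deployments?environment=production&status=success&order_by=updated_at&sort=desc&per_page=1"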

Related

GitLab CI - Trigger daily pipeline only if new changes have been committed

The company I work for has a self-hosted GitLab CE server v13.2.1.
For a specific project I've set up the CI jobs to build according to the following workflow:
If a commit has been pushed to the main branch
If a merge request has been created
If a tag has been pushed
Every day at midnight to build the main branch (using scheduled pipelines)
Everything works fine. The only thing I would like to improve is that the nightly builds are performed even if the main branch has not been modified (no new commit).
I had a look at the GitLab documentation on workflow: rules in the .gitlab-ci.yml file, but I didn't find anything relevant.
The GitLab runner is installed in a VM and is set up as a shell executor. I was thinking of creating a file in the home directory to store the last commit ID. I'm not a big fan of that solution, because:
it's an ugly fix;
the pipeline will be triggered by GitLab even if it does nothing, which will pollute the pipeline list.
Is there any way to set up the workflow: section to achieve this, so the pipeline list won't contain unnecessary pipelines?
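One possible sketch that keeps the pipeline list clean (workflow: rules cannot by themselves compare HEAD against the last pipeline): replace the built-in schedule with a cron job on the VM that only triggers a pipeline when main has actually moved. The host, project ID, and tokens below are placeholders:

#!/bin/sh
# run from cron on the runner VM instead of a GitLab scheduled pipeline
# hypothetical host and project ID; both API endpoints exist in GitLab 13.x
LAST_SHA=$(curl -s --header "PRIVATE-TOKEN: <api_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/pipelines?ref=main&per_page=1" \
  | grep -o '"sha":"[0-9a-f]*"' | head -n 1 | cut -d'"' -f4)
HEAD_SHA=$(git ls-remote "https://gitlab.example.com/<group>/<project>.git" main | cut -f1)
if [ "$LAST_SHA" != "$HEAD_SHA" ]; then
  # trigger a pipeline only when there is a new commit on main
  curl -s -X POST -F "token=<trigger_token>" -F "ref=main" \
    "https://gitlab.example.com/api/v4/projects/<project_id>/trigger/pipeline"
fi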

How can I disable a trigger from a linked repository in Bamboo YAML specs?

We've been using Bamboo YAML specs to run our build plans. We use the default repository plus a linked repository in that plan. The build plan currently triggers when a commit or new branch is created in the default repository (= desired behavior), but also when the linked repository has an update (= undesired behavior). How can I disable this via YAML specs?
The Bamboo documentation does not help me, and looking at a 'normal' (non-YAML specs) build plan does not work either, since this option is not converted to YAML specs when selecting 'view as YAML specs'. The YAML specs do not show whether the trigger of the linked repo is on or off (see attached picture).
Help would be much appreciated!
Insert a script task that compares bamboo.planRepository.<position>.revision to bamboo.planRepository.<position>.previousRevision (find the correct <position> for your repository). Skip the plan build (exit 0) if the two are the same.
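A sketch of that script task, assuming the linked repository sits at position 2 in the plan (adjust to yours):

# Bamboo substitutes ${bamboo.*} variables into inline scripts before they run;
# position 2 is an assumption - check your plan's repository order
if [ "${bamboo.planRepository.2.revision}" = "${bamboo.planRepository.2.previousRevision}" ]; then
    echo "Linked repository unchanged; nothing to build for it"
    exit 0
fi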
Move away from YAML specs. They are still very limited compared to Java specs.
This will disable triggers entirely:
---
version: 2
triggers: []
https://docs.atlassian.com/bamboo-specs-docs/8.2.0/specs.html?yaml#plan-branches
This will enable the trigger for a specific repository only:
---
version: 2
...
triggers:
  - bitbucket-server-trigger:
      repositories:
        - your_repository_name_here
https://docs.atlassian.com/bamboo-specs-docs/8.2.0/specs.html?yaml#triggering-selected-repositories

GitLab Runner fails to upload artifacts with "invalid argument" error

I'm completely new to implementing GitLab's CI/CD pipelines, but it's been going quite well. In fact, for my ASP.NET project, if I specify a Publish Profile in the msbuild command that uses Web Deploy, it actually deploys the code successfully to the web server.
However, I'm now wanting to have the "build" job create artifacts which are uploaded to GitLab that I can then subsequently deploy. We're using a self-hosted instance of GitLab, for which I'm not an admin, but I can speak to the admin if I know what I'm asking for!
So I've configured my .gitlab-ci.yml file like this:
variables:
  NUGET_PATH: 'C:\Program Files\Nuget\Nuget.exe'
  NUGET_SOURCES: 'https://api.nuget.org/v3/index.json'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Current\Bin\msbuild.exe'

stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - '& "$env:NUGET_PATH" restore ApplicationTemplate.sln -Source "$env:NUGET_SOURCES"'
    - '& "$env:MSBUILD_PATH" ApplicationTemplate\ApplicationTemplate.csproj /p:DeployOnBuild=true /p:Configuration=Release /p:PublishProfile=FolderPublish.pubxml'
  artifacts:
    paths:
      - '.\ApplicationTemplate\bin\Release\Publish\'
The output shows that this builds the code just fine, and it also seems to successfully find the artifacts for upload. However, when it uploads the artifacts, even though the request gets a 200 OK response, the process fails. Here is the log output:
So, it finds the artifacts, it attempts to upload them and even gets a 200 OK response (in contrast to the handful of similar reports of this error I've been able to find online), but it still fails due to an invalid argument.
I've already enabled verbose debugging, as you can see from the output, but I'm none the wiser. Looking at the GitLab Runner entries in the Windows Event Log on the box where the runner is hosted doesn't shed any light on things either. The total size of the artifacts is 61.1MB, so I don't think my issue is related to that.
Can anyone see from this output what's invalid? Can I identify which argument is invalid and/or why it's invalid?
Edit: Things I've tried
Specifying a value for artifacts:expire_in.
Setting artifacts:public to FALSE, since I'm using a self-hosted GitLab environment and the default value for this setting (TRUE) is not valid in such an environment.
Trying every format I can think of for the value of the artifacts:paths setting (this seems to be incredibly robust - regardless of the format I use, the Runner seems to have no problem parsing it and finding the files to upload).
Taking a cue from this question, I created a new project with a very simple build job to upload a single file:
stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - echo "Test" > test.txt
  artifacts:
    paths:
      - test.txt
About 50% of the time this job hangs on the uploading of the artifacts and I have to cancel it. The other half of the time it fails in exactly the same way as my previous project:
After countless hours working on this, it seems that ultimately the issue was that our internal Web Application Firewall was blocking some part of the transfer of artefacts to the server, or the response back from it. With the WAF reconfigured not to block traffic from the machine running the GitLab Runner, the artefacts are successfully uploaded and the job succeeds.
This would have been significantly easier to diagnose if the logging from GitLab was better. As per my comment on this issue, it should be possible to see the content of the response from the GitLab server after uploading artefacts, even when the response code is 200.
What's strange - and made diagnosing the issue even harder - is that when I worked through the issue with the admin of our GitLab instance, digging through logs and running it in debug mode, the artefact upload process was uploading something successfully. We could see, for example, the GitLab Runner's log had been uploaded to the server. Clearly the WAF's blocking was selective and didn't block everything in both directions.

GitLab: run different deployment scripts on merge depending on labels

How can I run different CI deployment scripts on merge to master depending on the labels attached to the merge request?
I have a repository from which I build different versions of my software. I keep it in one repository as the systems share 90% of the code, but there are differences that definitely need code modifications. On merge requests all versions are built and a suite of tests is run. Usually I want to deploy on accepting the merge request.
As the changes are not always relevant for all systems, I would like to attach labels to the merge request that decide which deployment scripts are run on accepting the merge request. I already tried to decide automatically based on the changed code parts, but this is not possible because I often extend a shared library in a way that is only relevant for one of the systems.
I am aware of variables, but I don't know how to apply them on merge accept in YAML like this:
deploy:
  stage: deploy
  script:
    ...
  only:
    - master
Update on strategy:
As CI_MERGE_REQUEST_LABELS is not available with only:master, I will try to do a beta deployment depending on merge request labels in only:merge_requests. In only:master I will deploy the betas that have changed. This most likely will fit my needs. I will add it as a solution once it works.
I finally solved it this way:
My YAML script has three stages:
stages:
  - buildtest
  - createbeta
  - deploy

buildtest:
  stage: buildtest
  script:
    - ... run unit tests
    - ... build all systems
    - ... run scripted tests on all systems
  only:
    refs:
      - merge_requests

createbeta:
  stage: createbeta
  script:
    - ... run setup and update package creation with parameter $CI_MERGE_REQUEST_LABELS
    - ... run update package tests with parameter $CI_MERGE_REQUEST_LABELS
    - ... run beta deployment scripts with parameter $CI_MERGE_REQUEST_LABELS (see text)
  only:
    refs:
      - merge_requests

deploy:
  stage: deploy
  script:
    - ... run production deployment scripts (see text)
  only:
    refs:
      - master
The first stages are run on merge request creation.
As changes to shared libraries might affect all systems, all builds and tests are run in stage "buildtest".
The scripts in stage "createbeta" check for the existence of the merge request label for the corresponding system and are skipped if the labels do not involve that system.
The script for beta deployment creates a signal file "deploy_me" in the beta folder (important) if it runs.
When the request is merged, the deployment script runs in stage "deploy". It checks for the existence of the "deploy_me" file and only deploys and informs via mail if the file exists, as sketched below.
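A sketch of that check as it could look in the production deployment script; the folder layout and the called script are placeholders:

rem deploy stage: only deploy systems whose beta run left the marker file
if exist beta\MyCoolSystem\deploy_me (
    call deploy_MyCoolSystem.cmd
    del beta\MyCoolSystem\deploy_me
)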
This way I can easily decide which system I want to deploy by applying a label to the merge request. I can thoroughly test the new feature with the beta version and make sure that changes do not break the other systems, as unit tests and system tests are run for all systems.
As the GitLab runner runs in a Windows environment (yes, this makes sense as I work with Delphi), here is how I find the system label in a Windows cmd file, for those who are interested. I use %* as the labels are separated by spaces and treated as individual command line parameters.
echo %* | findstr /i /c:"MyCoolSystem" > nul
if %ERRORLEVEL% EQU 0 goto runit
rem If the label is not supplied with the merge request, do nothing
goto ok
:runit
... content
:ok
Perhaps this helps someone with a similar environment and similar workflow.
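On GitLab versions that support rules:, the label check could alternatively be expressed in the YAML itself rather than in the cmd script; a sketch reusing the "MyCoolSystem" label from above:

createbeta_mycoolsystem:
  stage: createbeta
  script:
    - ... run beta deployment for MyCoolSystem
  rules:
    # run this job only when the merge request carries the matching label
    - if: '$CI_MERGE_REQUEST_LABELS =~ /MyCoolSystem/'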

In GitLab CI - How can we trigger a build/pipeline if a specific build/pipeline is completed successfully?

We are using GitLab Enterprise Edition 10.8.7-ee 075705a and are trying to use GitLab CI.
Here is my scenario:
I've two repositories, repo1 and repo2, and I'm setting up two pipelines, pipeline1 and pipeline2.
Now I'm looking for an option where I can configure pipeline2 to trigger a build if the pipeline1 build is successful. One more thing: I need to get the version number of pipeline1 in pipeline2.
Note: I know we can trigger pipeline2 from pipeline1, but I need it the other way around.
Please suggest.
A couple of options:
Use the GitLab API to do this (pipeline triggers).
Use webhooks to do this.
GitLab webhooks docs
GitLab triggers docs
With these, you can get any data / metadata for your stack and automatically act on it or set it on any condition.
This can also be done if your stack is using AWS (CLI) and/or Jenkins.
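For the trigger route, a sketch of what the final job of pipeline1 could run (trigger token, project ID, and variable name are placeholders); pipeline2's jobs can then read the version from $UPSTREAM_VERSION:

# last job in pipeline1: kick off pipeline2 and hand over the version number
curl -X POST \
  -F "token=<pipeline2_trigger_token>" \
  -F "ref=master" \
  -F "variables[UPSTREAM_VERSION]=$VERSION" \
  "https://gitlab.example.com/api/v4/projects/<repo2_project_id>/trigger/pipeline"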
Some sections that may interest you in the GitLab triggers docs:
When used with multi-project pipelines
When a pipeline depends on the artifacts of another pipeline
Triggering a pipeline from a webhook
Using cron to trigger nightly (or pretty much *ly) pipelines