GitLab CI/CD: Could I get the artifacts' real path in the runner, then send the files with scp? - gitlab-ci

I'm learning GitLab CI/CD. When the build is finished, I want to send the files in the artifacts to a server. Is the idea possible?
image: maven:3.8.1-jdk-11

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn clean install
  artifacts:
    paths:
      - "*/target/*.jar"

deploy:
  stage: deploy
  script:
    - scp -r <artifacts_path> root@test.com:~/Deploy

Could I get the artifacts' real path in the runner, then send the files with scp?
Generally speaking, no. You must rely on the artifact restoration process. Keep in mind that (1) artifacts are generally not stored on the runner and (2) docker runners execute jobs inside of a docker container and typically would not have access to files on the runner host, even if artifacts were stored there.
When jobs start, artifacts from previous stages are restored into the workspace.
So, as an alternative solution, you can simply start with an empty workspace (don't check out the repo), then upload all files in the workspace, which should be only the restored artifacts, assuming there are no file-based variables.
deploy:
  variables: # prevent checkout of repository
    GIT_STRATEGY: none
  stage: deploy
  script:
    - ls -laht # list files, which should be just the restored artifacts
    - scp -r ./ root@test.com:~/Deploy
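As an aside, scp also needs SSH credentials available in the job. A minimal sketch, assuming a CI/CD variable named SSH_PRIVATE_KEY (not part of the original answer) holds the key and an SSH client exists in the job image:

deploy:
  variables:
    GIT_STRATEGY: none
  stage: deploy
  before_script:
    # Assumption: SSH_PRIVATE_KEY is a CI/CD variable containing the private key
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # Trust the target host so scp does not prompt interactively
    - mkdir -p ~/.ssh && ssh-keyscan test.com >> ~/.ssh/known_hosts
  script:
    - scp -r ./ root@test.com:~/Deploy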
Another way might be to just use the same glob pattern used in the artifacts:paths: to find the files and upload them.
variables:
  ARTIFACTS_PATTERN: "*/target/*.jar"

build:
  # ...
  artifacts:
    paths:
      - $ARTIFACTS_PATTERN

deploy:
  script: # something like this. Not sure if scp supports glob patterns
    # rsync needs a source argument; the include/exclude pair limits the
    # transfer to files matching the pattern (-m prunes empty directories)
    - rsync -a -m --include="$ARTIFACTS_PATTERN" --include="*/" --exclude="*" ./ user@remote:~/Deploy
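For what it's worth, scp itself doesn't need glob support: the shell expands an unquoted pattern before scp runs. A sketch of that variant (same placeholder host as above):

deploy:
  script:
    # left unquoted, $ARTIFACTS_PATTERN is glob-expanded by the shell into a file list
    - scp $ARTIFACTS_PATTERN user@remote:~/Deploy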

Related

gitlab CI dependency availability between stages

I have 7 stages in my pipeline. I need ruby for 3 of the stages.
I have tried two different options:
Install ruby in each of the stages that require it,
Install ruby as part of the before_script section.
Using before_script takes up too much time, trying to install ruby in the 4 other stages that do not require it.
Is there a way to install dependencies as part of one stage and carry them forward for the rest of the stages?
Example yml:
image: ubuntu:21.10

before_script:
  - apt update
  - apt install ruby-full
  - apt install python3.8

stages:
  - s1
  - s2
  - s3
  - s4

s1:
  stage: s1
  script: ruby s1.rb
s2:
  stage: s2
  script: ruby s2.rb
s3:
  stage: s3
  script: python3 s3.py
s4:
  stage: s4
  script: python3 s4.py
There are a few elements here to understand. Generally, every job starts with the same fresh environment. The only differences would be files passed through artifacts: or files restored from cache: configurations. Actions performed in one job otherwise generally have no effect on any other jobs.
Using before_script takes up too much time, trying to install ruby in the 4 other stages that do not require it.
It's also important to know that before_script can be set for each job independently. If one job doesn't need it, just override the before_script: key in that job.
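For example, a minimal sketch against the example yml above:

before_script:
  - apt update
  - apt install -y ruby-full

s3:
  stage: s3
  before_script: [] # override: this job skips the global ruby install
  script: python3 s3.py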
Anyhow. There are a few ways you might optimize your build speed with respect to dependencies:
Docker image containing your dependencies
Typically, you would just use a ruby image as your image: for jobs requiring ruby. Usually an official image from dockerhub will work, like ruby:3.1-alpine.
some_ruby_job:
  image: "ruby:3.1-alpine"
  script: # ruby is already available by default
    - echo "hello ruby"
    - ruby -v

some_other_job:
  image: alpine:latest
  script:
    - echo "this job does not need ruby"
Making a custom docker image
If your dependencies are very complex, you may even choose to create your own docker images and push them to the project's container registry so you can use the custom image with all your dependencies as your image:.
You could even build an image in one stage and use it as the image: in subsequent stages. This example uses docker caching with --cache-from to further speed up that process.
build:
  image: docker:19.03.12
  stage: .pre
  services:
    - docker:19.03.12-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME || true
    - docker build --cache-from $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME -t $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME .
    - docker push $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME

some_ruby_job:
  stage: test
  # This is the image that was built in the previous stage!
  image: $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME
  script:
    - echo "all my dependencies are here!"
    - ruby -v
Caching
To further speed things along, you may also choose to cache your ruby dependencies (say, if you install gems as part of your job). Something like:
some_ruby_job:
  stage: one
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
  # ...
That way the vendor/ruby directory is cached which will avoid the need to download the gems again in every stage.
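Note that for the cache to actually contain the gems, bundler has to install into vendor/ruby; a sketch of that wiring (the bundle config step is my assumption, not spelled out above):

some_ruby_job:
  before_script:
    # assumption: point bundler at the cached directory before installing
    - bundle config set --local path 'vendor/ruby'
    - bundle install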
Cache policy
You can also speed up caching behavior in subsequent stages by setting the cache policy to pull (to avoid time spent uploading the cache after the job). In other words, only one job is responsible for generating the cache; the other jobs reuse the same cache.
ruby_jobs_in_future_stages:
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
    policy: pull # only download the cache, don't upload it
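The counterpart job that actually populates the cache keeps the default pull-push policy; spelled out, a sketch:

first_ruby_job:
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
    policy: pull-push # the default: restore the cache, then upload it after the job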

cmake does not (always) order Fortran modules correctly

I have a code using Fortran modules. I can build it with no problems under normal circumstances. CMake takes care of the ordering of the module files.
However, using a gitlab runner, it SOMETIMES happens that cmake does NOT order the Fortran modules by dependencies, but alphabetically instead, which then leads to a build failure.
The problem seems to occur at random. I have a branch that built in the CI. After adding a commit that modified a utility script not involved in the build in any way, I ran into this problem. There is no difference in the output of the cmake configure step.
I use the matrix configuration for the CI to test different configurations. I found that I could trigger this by adding another MPI version (e.g. openmpi/4.1.6). Without that version, it built. With it added to the matrix, ALL configurations showed the problem.
stages:
  - configure
  - build
  - test

.basic_config:
  tags:
    - hpc_runner
  variables:
    # load submodules
    GIT_SUBMODULE_STRATEGY: recursive

.config_matrix:
  extends: .basic_config
  # define job matrix
  parallel:
    matrix:
      - COMPILER: [gcc/9.4.0]
        PARALLELIZATION: [serial, openmpi/3.1.6]
        TYPE: [option1, option2]
        BUILD_TYPE: [debug, release]
      - COMPILER: [gcc/10.3.0, intel/19.0.5]
        PARALLELIZATION: [serial]
        TYPE: [option2]
        BUILD_TYPE: [debug]

###############################################################################
# setup script
# These commands will run before each job.
before_script:
  - set -e
  - uname -a
  - |
    if [[ "$(uname)" = "Linux" ]]; then
      export THREADS=$(nproc --all)
    elif [[ "$(uname)" = "Darwin" ]]; then
      export THREADS=$(sysctl -n hw.ncpu)
    else
      echo "Unknown platform. Setting THREADS to 1."
      export THREADS=1
    fi
  # load environment
  - source scripts/build/load_environment $COMPILER $BUILD_TYPE $TYPE $PARALLELIZATION
  # set path for build folder
  - build_path=build/$COMPILER/$PARALLELIZATION/$TYPE/$BUILD_TYPE

configure:
  stage: configure
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
  artifacts:
    paths:
      - build
    expire_in: 1 days

###############################################################################
# build script
build:
  stage: build
  extends: .config_matrix
  script:
    - cd $build_path
    - make
  artifacts:
    paths:
      - build
    expire_in: 1 days
  needs:
    - configure

###############################################################################
# test
test:
  stage: test
  extends: .config_matrix
  script:
    - cd $build_path
    - ctest --output-on-failure
  needs:
    - build
The runner runs on an HPC machine with a complex setup, and I am not too familiar with the exact configuration. I contacted the admin about this problem, but wanted to see if anybody else had run into this before and has solutions or hints on what is going on.
With help from our admin, I figured it out.
The problem comes from cmake using absolute paths. The machine actually hosts several runners for parallel jobs, each using a different prefix path, e.g. /runner/001/ or /runner/012/. So when I run configure on a specific runner, cmake saves that prefix path into the configuration.
Now in the build stage, there is no guarantee that the same configuration runs on the same runner. However, since there are absolute paths in the makefiles, make tries to access the folders under the configure runner's prefix. That can be anything from non-existent, to old files from previous pipelines, to the correct files downloaded by another case.
The only fix I currently see is to run everything on the same runner in one stage, to avoid the roulette of prefix paths. If anybody has a different idea, or if there is a way to pin a specific matrix case to a specific runner prefix, please comment.
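A sketch of that single-job workaround, reusing the matrix and before_script from the configuration above so that configure, build, and test all share one runner prefix:

build_and_test:
  stage: build
  extends: .config_matrix
  script:
    - mkdir -p $build_path
    - cd $build_path
    - $CMAKE_COMMAND
    # $THREADS is exported by the before_script above
    - make -j $THREADS
    - ctest --output-on-failure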

How to config lfs.fetchinclude in GitLab CI

I want git lfs to fetch only some directories in GitLab CI, but it failed.
The gitlab-runner version was 11.8.0~beta.1077.
I configured it like this:
variables:
  # Please edit to your GitLab project
  GIT_STRATEGY: clone
  GIT_CHECKOUT: "false"
script:
  - git config lfs.fetchinclude "xxx/xxx/, test/"
but CI errors with:
root config contains unknown keys: script
How do I fix it?
The first part is your syntax error - the script key must be part of a job, e.g. like this:
build:
  stage: build
  script:
    - git config ...
However, I think the CI runner will fetch LFS files automatically, and the script is only run after cloning.
So I think you have to disable automatic fetching first, and then this might work:
build:
  stage: build
  before_script:
    - git config lfs.fetchinclude "xxx/xxx/, test/"
    - git lfs pull
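Putting the pieces together, a full sketch (using GIT_LFS_SKIP_SMUDGE is my assumption for "disable automatic fetching"; the ls line is only illustrative):

variables:
  GIT_STRATEGY: clone
  GIT_LFS_SKIP_SMUDGE: "1" # assumption: keeps the clone from fetching all LFS files up front

build:
  stage: build
  before_script:
    - git config lfs.fetchinclude "xxx/xxx/, test/"
    - git lfs pull
  script:
    - ls xxx/xxx/ test/ # illustrative: only the included paths should hold real content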

Executing reckonTagCreate from gitlab ci with authentication failure

I have setup my CI so that I can manually create a release-tag when all tests succeeds for a new commit on master branch. For this I have created a manual step in the CI config like so:
.release-template:
  stage: releasing
  dependencies:
    - assemble
  script:
    - ./gradlew reckonTagPush -Preckon.scope=$scope -Preckon.stage=$stage -Dorg.ajoberstar.grgit.auth.username=$GIT_USER -Dorg.ajoberstar.grgit.auth.password=$GIT_PASSWORD
  only:
    - master
  when: manual # ONLY MANUAL RELEASES, ONLY FROM MASTER
release-major:
  extends: .release-template
  variables:
    scope: major
    stage: final

release-minor:
  extends: .release-template
  variables:
    scope: minor
    stage: final

release-patch:
  extends: .release-template
  variables:
    scope: patch
    stage: final
This setup fails with an authentication error.
Execution failed for task ':reckonTagPush'.
> org.eclipse.jgit.api.errors.TransportException: https://gitlab-ci-token@gitlab.com/<group>/<project>.git: not authorized
I am running this on gitlab.com on a shared runner.
The username and password are configured in gitlab ci variables for the project. When running this locally inside the same docker image that is used in the gitlab runner, it works fine. So there must be something special about the way the gitlab runner is executing the gradle tasks, or communicating with the gitlab git repo.
Solved the issue with access for pushing to the git repo by adding the following script:
script:
  - url_host=`git remote get-url origin | sed -e "s/https:\/\/gitlab-ci-token:.*@//g"`
  - git remote set-url origin "https://gitlab-ci-token:$GIT_TOKEN@$url_host"
  - ./gradlew reckonTagPush -Preckon.scope=$scope -Preckon.stage=$stage -Dorg.ajoberstar.grgit.auth.username="$GIT_USER" -Dorg.ajoberstar.grgit.auth.password="$GIT_TOKEN"
The notable changes here are setting the git remote URL, as well as surrounding the GitLab CI variables with quotes when passing them to the reckon plugin.

Use GitLab CI to run tests locally?

If a GitLab project is configured on GitLab CI, is there a way to run the build locally?
I don't want to turn my laptop into a build "runner", I just want to take advantage of Docker and .gitlab-ci.yml to run tests locally (i.e. it's all pre-configured). Another advantage of that is that I'm sure that I'm using the same environment locally and on CI.
Here is an example of how to run Travis builds locally using Docker, I'm looking for something similar with GitLab.
As of a few months ago, this is possible using gitlab-runner:
gitlab-runner exec docker my-job-name
Note that you need both docker and gitlab-runner installed on your computer to get this working.
You also need the image key defined in your .gitlab-ci.yml file; otherwise it won't work.
Here's the line I currently use for testing locally using gitlab-runner:
gitlab-runner exec docker test --docker-volumes "/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"
Note: You can avoid adding a --docker-volumes with your key setting it by default in /etc/gitlab-runner/config.toml. See the official documentation for more details. Also, use gitlab-runner exec docker --help to see all docker-based runner options (like variables, volumes, networks, etc.).
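For reference, a sketch of that config.toml default (the surrounding runner settings will differ per setup; name and url are placeholders):

# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  name = "local"
  executor = "docker"
  [runners.docker]
    # mounted into every job container by default, replacing --docker-volumes
    volumes = ["/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"]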
Due to the confusion in the comments, I paste here the gitlab-runner --help result, so you can see that gitlab-runner can make builds locally:
gitlab-runner --help
NAME:
   gitlab-runner - a GitLab Runner

USAGE:
   gitlab-runner [global options] command [command options] [arguments...]

VERSION:
   1.1.0~beta.135.g24365ee (24365ee)

AUTHOR(S):
   Kamil Trzciński <ayufan@ayufan.eu>

COMMANDS:
   exec     execute a build locally
   [...]

GLOBAL OPTIONS:
   --debug      debug mode [$DEBUG]
   [...]
As you can see, the exec command is to execute a build locally.
Even though there was an issue to deprecate the current gitlab-runner exec behavior, it ended up being reconsidered and a new version with greater features will replace the current exec functionality.
Note that this process is to use your own machine to run the tests using docker containers. This is not how you define custom runners. To do that, just go to your repo's CI/CD settings and read the documentation there. If you want to ensure your runner is executed instead of one from gitlab.com, add a custom and unique tag to your runner, ensure it only runs tagged jobs, and tag all the jobs you want your runner to be responsible for.
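A sketch of that tagging (my-local-runner is an arbitrary placeholder for your runner's tag):

test:
  tags:
    - my-local-runner # only a runner registered with this tag picks the job up
  script:
    - echo "runs on my runner only"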
I use this docker-based approach:
Edit: 2022-10
docker run --entrypoint bash --rm -w $PWD -v $PWD:$PWD -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest -c 'git config --global --add safe.directory "*";gitlab-runner exec docker test'
For all git versions > 2.35.2, you must add safe.directory within the container to avoid fatal: detected dubious ownership in repository at.... This is also true for patched git versions < 2.35.2. The old command will not work anymore.
Details
0. Create a git repo to test this answer
mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."
1. Go to your git directory
cd my-git-project
2. Create a .gitlab-ci.yml
Example .gitlab-ci.yml
image: alpine

test:
  script:
    - echo "Hello Gitlab-Runner"
3. Create a docker container with your project dir mounted
docker run -d \
  --name gitlab-runner \
  --restart always \
  -v $PWD:$PWD \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
(-d) run container in background and print container ID
(--restart always) or not?
(-v $PWD:$PWD) Mount current directory into the current directory of the container - Note: On Windows you could bind your dir to a fixed location, e.g. -v ${PWD}:/opt/myapp. Also, $PWD will only work in PowerShell, not in cmd.
(-v /var/run/docker.sock:/var/run/docker.sock) This gives the container access to the docker socket of the host so it can start "sibling containers" (e.g. Alpine).
(gitlab/gitlab-runner:latest) Just the latest available image from dockerhub.
4. Execute with
Avoid fatal: detected dubious ownership in repository at...:
docker exec -it -w $PWD gitlab-runner git config --global --add safe.directory "*"
Actual execution
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test
#               ^^^^^^^ ^^^^^^^^^^^^^ ^^^^^^^^^^^^^ ^^^^ ^^^^^^ ^^^^
#                 (a)        (b)           (c)       (d)   (e)   (f)
(a) Working dir within the container. Note: On Windows you could use a fixed location, e.g. /opt/myapp.
(b) Name of the docker container
(c) Execute the command "gitlab-runner" within the docker container
(d)(e)(f) run gitlab-runner with the "docker executor" and run a job named "test"
5. Prints
...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...
Note: The runner will only work on the committed state of your code base. Uncommitted changes will be ignored. Exception: The .gitlab-ci.yml itself does not have to be committed to be taken into account.
Note: There are some limitations running locally. Have a look at limitations of gitlab runner locally.
I'm currently working on making a gitlab runner that works locally.
It's still in the early phases, but eventually it will become very relevant.
It doesn't seem like GitLab wants to (or has time to) make this, so here you go.
https://github.com/firecow/gitlab-runner-local
If you are running GitLab using the docker image there: https://hub.docker.com/r/gitlab/gitlab-ce, it's possible to run pipelines by exposing the local docker.sock with a volume option: -v /var/run/docker.sock:/var/run/docker.sock. Adding this option to the GitLab container will allow your workers to access the docker instance on the host.
The GitLab runner appears to not work on Windows yet and there is an open issue to resolve this.
So, in the meantime I am moving my script code out to a bash script, which I can easily map to a docker container running locally and execute.
In this case I want to build a docker container in my job, so I create a script 'build':
#!/bin/bash
docker build --pull -t myimage:myversion .
In my .gitlab-ci.yaml I execute the script:

image: docker:latest

services:
  - docker:dind

before_script:
  - apk add bash

build:
  stage: build
  script:
    - chmod 755 build
    - ./build
To run the script locally using PowerShell, I can start the required image and map the volume with the source files:
$containerId = docker run --privileged -d -v ${PWD}:/src docker:dind
Install bash if not present:
docker exec $containerId apk add bash
Set permissions on the bash script:
docker exec -it $containerId chmod 755 /src/build
Execute the script:
docker exec -it --workdir /src $containerId bash -c 'build'
Then stop the container:
docker stop $containerId
And finally clean up the container:
docker container rm $containerId
Another approach is to have a local build tool that is installed on your PC and your server at the same time.
So basically, your .gitlab-ci.yml will just call your preferred build tool.
Here is an example .gitlab-ci.yml that I use with nuke.build:
stages:
  - build
  - test
  - pack

variables:
  TERM: "xterm" # Use Unix ASCII color codes on Nuke

before_script:
  - CHCP 65001 # Set correct code page to avoid charset issues

.job_template: &job_definition
  except:
    - tags

build:
  <<: *job_definition
  stage: build
  script:
    - "./build.ps1"

test:
  <<: *job_definition
  stage: test
  script:
    - "./build.ps1 test"
  variables:
    GIT_CHECKOUT: "false"

pack:
  <<: *job_definition
  stage: pack
  script:
    - "./build.ps1 pack"
  variables:
    GIT_CHECKOUT: "false"
  only:
    - master
  artifacts:
    paths:
      - output/
And in nuke.build I've defined 3 targets named like the 3 stages (build, test, pack).
In this way you have a reproducible setup (all other things are configured with your build tool) and you can test directly the different targets of your build tool.
(I can call .\build.ps1, .\build.ps1 test and .\build.ps1 pack whenever I want.)
I am on Windows, using VSCode with WSL.
I didn't want to register my work PC as a runner, so instead I'm running my yaml stages locally to test them out before I upload them:
$ sudo apt-get install gitlab-runner
$ gitlab-runner exec shell build
The yaml:
image: node:10.19.0 # https://hub.docker.com/_/node/
# image: node:latest

cache:
  # untracked: true
  key: project-name
  # key: ${CI_COMMIT_REF_SLUG} # per branch
  # key:
  #   files:
  #     - package-lock.json # only update cache when this file changes (not working) #jkr
  paths:
    - .npm/
    - node_modules
    - build

stages:
  - prepare # prepares builds, makes build needed for testing
  - test # uses test:build specifically #jkr
  - build
  - deploy

# before_install:
before_script:
  - npm ci --cache .npm --prefer-offline

prepare:
  stage: prepare
  needs: []
  script:
    - npm install

test:
  stage: test
  needs: [prepare]
  except:
    - schedules
  tags:
    - linux
  script:
    - npm run build:dev
    - npm run test:cicd-deps
    - npm run test:cicd # runs puppeteer tests #jkr
  artifacts:
    reports:
      junit: junit.xml
    paths:
      - coverage/

build-staging:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build:stage
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-dev:
  stage: deploy
  needs: [build-staging]
  tags: [linux]
  only:
    - schedules
    # # - branches@gitlab-org/gitlab
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    # temporarily using 'verify-certificate no'
    # for more on verify-certificate #jkr: https://www.versatilewebsolutions.com/blog/2014/04/lftp-ftps-and-certificate-verification.html
    # variables do not work with 'single quotes' unless they are "'surrounded by doubles'"
    - lftp -e "set ssl:verify-certificate no; open mediajackagency.com; user $LFTP_USERNAME $LFTP_PASSWORD; mirror --reverse --verbose build/ /var/www/domains/dev/clients/client/project/build/; bye"
  # environment:
  #   name: staging
  #   url: http://dev.mediajackagency.com/clients/client/build
  #   # url: https://stg2.client.co
  when: manual
  allow_failure: true

build-production:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-client:
  stage: deploy
  needs: [build-production]
  tags: [linux]
  only:
    - schedules
    # - master
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    - sh deploy-prod
  environment:
    name: production
    url: http://www.client.co
  when: manual
  allow_failure: true
The idea is to keep check commands outside of .gitlab-ci.yml. I use a Makefile to run something like make check, and my .gitlab-ci.yml runs the same make commands that I use locally to check various things before committing.
This way you'll have one place with all/most of your commands (Makefile) and .gitlab-ci.yml will have only CI-related stuff.
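A minimal sketch of that layout (the target name and script paths are placeholders, not from the original answer):

# Makefile — recipe lines must be indented with tabs
.PHONY: check
check:
	./scripts/lint.sh # placeholder for whatever you run before committing

# .gitlab-ci.yml — the CI job just reuses the same entry point
check:
  script:
    - make check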
I have written a tool to run all GitLab CI jobs locally without having to commit or push, simply with the command ci-toolbox my_job_name.
The URL of the project : https://gitlab.com/mbedsys/citbx4gitlab
Years ago I built this simple solution with Makefile and docker-compose to run the gitlab runner in docker. You can use it to execute jobs locally as well, and it should work on all systems where docker works:
https://gitlab.com/1oglop1/gitlab-runner-docker
There are a few things to change in the docker-compose.override.yaml:
version: "3"
services:
runner:
working_dir: <your project dir>
environment:
- REGISTRATION_TOKEN=<token if you want to register>
volumes:
- "<your project dir>:<your project dir>"
Then inside your project you can execute it the same way as mentioned in other answers:
docker exec -it -w $PWD runner gitlab-runner exec <commands>..
I recommend using gitlab-ci-local
https://github.com/firecow/gitlab-ci-local
It's able to run specific jobs as well.
It's a very cool project and I have used it to run simple pipelines on my laptop.