Use GitLab CI to run tests locally? - gitlab-ci

If a GitLab project is configured on GitLab CI, is there a way to run the build locally?
I don't want to turn my laptop into a build "runner"; I just want to take advantage of Docker and .gitlab-ci.yml to run tests locally (i.e. everything is pre-configured). Another advantage of that is being sure I'm using the same environment locally and on CI.
Here is an example of how to run Travis builds locally using Docker; I'm looking for something similar for GitLab.

As of a few months ago, this is possible using gitlab-runner:
gitlab-runner exec docker my-job-name
Note that you need both docker and gitlab-runner installed on your computer to get this working.
You also need the image key defined in your .gitlab-ci.yml file; otherwise it won't work.
Here's the line I currently use for testing locally using gitlab-runner:
gitlab-runner exec docker test --docker-volumes "/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"
Note: You can avoid adding --docker-volumes with your key by setting it as a default in /etc/gitlab-runner/config.toml; see the official documentation for more details, and the sketch below. Also, use gitlab-runner exec docker --help to see all docker-based runner options (like variables, volumes, networks, etc.).
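For reference, a minimal sketch of what such a default could look like in config.toml (the runner name is hypothetical; the authoritative schema is in the official docs):
concurrent = 1

[[runners]]
  name = "local-docker"  # hypothetical runner name
  executor = "docker"
  [runners.docker]
    image = "alpine"
    # mount the SSH key by default instead of passing --docker-volumes every time
    volumes = ["/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro", "/cache"]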
Due to the confusion in the comments, I paste here the gitlab-runner --help result, so you can see that gitlab-runner can run builds locally:
gitlab-runner --help
NAME:
   gitlab-runner - a GitLab Runner

USAGE:
   gitlab-runner [global options] command [command options] [arguments...]

VERSION:
   1.1.0~beta.135.g24365ee (24365ee)

AUTHOR(S):
   Kamil Trzciński <ayufan@ayufan.eu>

COMMANDS:
   exec         execute a build locally
   [...]

GLOBAL OPTIONS:
   --debug      debug mode [$DEBUG]
   [...]
As you can see, the exec command is to execute a build locally.
Even though there was an issue to deprecate the current gitlab-runner exec behavior, it ended up being reconsidered, and a new version with greater features will replace the current exec functionality.
Note that this process uses your own machine to run the tests in Docker containers; it is not for defining custom runners. To define a runner, just go to your repo's CI/CD settings and read the documentation there. If you want to ensure your runner is executed instead of one from gitlab.com, add a custom and unique tag to your runner, ensure it only runs tagged jobs, and tag all the jobs you want your runner to be responsible for, as in the sketch below.
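For illustration, a minimal sketch of such a tagged job (the tag name and script are hypothetical):
test:
  tags:
    - my-laptop-runner   # hypothetical unique tag registered on your own runner
  script:
    - ./run-tests.sh     # hypothetical test entry point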

I use this docker-based approach:
Edit: 2022-10
docker run --entrypoint bash --rm -w $PWD -v $PWD:$PWD -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest -c 'git config --global --add safe.directory "*";gitlab-runner exec docker test'
For all Git versions >= 2.35.2 you must add safe.directory within the container to avoid fatal: detected dubious ownership in repository at.... The same is true for patched Git versions < 2.35.2, since the fix was backported. The old command will not work anymore.
Details
0. Create a git repo to test this answer
mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."
1. Go to your git directory
cd my-git-project
2. Create a .gitlab-ci.yml
Example .gitlab-ci.yml
image: alpine

test:
  script:
    - echo "Hello Gitlab-Runner"
3. Create a docker container with your project dir mounted
docker run -d \
  --name gitlab-runner \
  --restart always \
  -v $PWD:$PWD \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
(-d) run container in background and print container ID
(--restart always) Restart the container automatically; optional for a one-off local test run.
(-v $PWD:$PWD) Mount the current directory into the same path inside the container. Note: On Windows you could bind your dir to a fixed location, e.g. -v ${PWD}:/opt/myapp. Also, $PWD only works in PowerShell, not in cmd.
(-v /var/run/docker.sock:/var/run/docker.sock) This gives the container access to the docker socket of the host so it can start "sibling containers" (e.g. Alpine).
(gitlab/gitlab-runner:latest) Just the latest available image from dockerhub.
4. Execute with
Avoid fatal: detected dubious ownership in repository at... More info
docker exec -it -w $PWD gitlab-runner git config --global --add safe.directory "*"
Actual execution
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test
#                ^       ^             ^             ^    ^      ^
#                |       |             |             |    |      |
#               (a)     (b)           (c)           (d)  (e)    (f)
(a) Working dir within the container. Note: On Windows you could use a fixed location, e.g. /opt/myapp.
(b) Name of the docker container
(c) Execute the command "gitlab-runner" within the docker container
(d)(e)(f) run gitlab-runner with the "docker" executor and run a job named "test"
5. Prints
...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...
Note: The runner will only work on the committed state of your code base. Uncommitted changes will be ignored. Exception: The .gitlab-ci.yml itself does not have to be committed to be taken into account.
Note: There are some limitations running locally. Have a look at limitations of gitlab runner locally.

I'm currently working on making a gitlab runner that works locally.
Still in the early phases, but eventually it will become very relevant.
It doesn't seem like GitLab wants (or has time) to make this, so here you go.
https://github.com/firecow/gitlab-runner-local

If you are running GitLab using the Docker image from https://hub.docker.com/r/gitlab/gitlab-ce, it's possible to run pipelines by exposing the local docker.sock with a volume option: -v /var/run/docker.sock:/var/run/docker.sock. Adding this option to the GitLab container will allow your workers to access the Docker instance on the host.
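For example, a minimal sketch of starting the container that way (the container name and tag are illustrative; a real setup would also publish ports and mount the config/data volumes):
docker run -d --name gitlab \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-ce:latest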

The GitLab runner appears not to work on Windows yet, and there is an open issue to resolve this.
So, in the meantime, I am moving my script code out to a bash script, which I can easily map into a Docker container running locally and execute.
In this case I want to build a docker container in my job, so I create a script 'build':
#!/bin/bash
docker build --pull -t myimage:myversion .
in my .gitlab-ci.yaml I execute the script:
image: docker:latest

services:
  - docker:dind

before_script:
  - apk add bash

build:
  stage: build
  script:
    - chmod 755 build
    - ./build
To run the script locally using PowerShell, I can start the required image and map the volume with the source files:
$containerId = docker run --privileged -d -v ${PWD}:/src docker:dind
Install bash if not present:
docker exec $containerId apk add bash
Set permissions on the bash script:
docker exec -it $containerId chmod 755 /src/build
Execute the script:
docker exec -it --workdir /src $containerId bash -c './build'
Then stop the container:
docker stop $containerId
And finally clean up the container:
docker container rm $containerId

Another approach is to have a local build tool installed both on your PC and on your server.
Your .gitlab-ci.yml then simply calls your preferred build tool.
Here is an example .gitlab-ci.yml that I use with nuke.build:
stages:
  - build
  - test
  - pack

variables:
  TERM: "xterm" # Use Unix ASCII color codes on Nuke

before_script:
  - CHCP 65001 # Set correct code page to avoid charset issues

.job_template: &job_definition
  except:
    - tags

build:
  <<: *job_definition
  stage: build
  script:
    - "./build.ps1"

test:
  <<: *job_definition
  stage: test
  script:
    - "./build.ps1 test"
  variables:
    GIT_CHECKOUT: "false"

pack:
  <<: *job_definition
  stage: pack
  script:
    - "./build.ps1 pack"
  variables:
    GIT_CHECKOUT: "false"
  only:
    - master
  artifacts:
    paths:
      - output/
And in nuke.build I've defined three targets named like the three stages (build, test, pack).
This way you have a reproducible setup (everything else is configured with your build tool) and you can directly test the different targets of your build tool.
(I can call .\build.ps1, .\build.ps1 test and .\build.ps1 pack whenever I want.)

I am on Windows, using VSCode with WSL.
I didn't want to register my work PC as a runner, so instead I run my YAML stages locally to test them out before I push them:
$ sudo apt-get install gitlab-runner
$ gitlab-runner exec shell build
My .gitlab-ci.yml:
image: node:10.19.0 # https://hub.docker.com/_/node/
# image: node:latest

cache:
  # untracked: true
  key: project-name
  # key: ${CI_COMMIT_REF_SLUG} # per branch
  # key:
  #   files:
  #     - package-lock.json # only update cache when this file changes (not working) #jkr
  paths:
    - .npm/
    - node_modules
    - build

stages:
  - prepare # prepares builds, makes build needed for testing
  - test # uses test:build specifically #jkr
  - build
  - deploy

# before_install:

before_script:
  - npm ci --cache .npm --prefer-offline

prepare:
  stage: prepare
  needs: []
  script:
    - npm install

test:
  stage: test
  needs: [prepare]
  except:
    - schedules
  tags:
    - linux
  script:
    - npm run build:dev
    - npm run test:cicd-deps
    - npm run test:cicd # runs puppeteer tests #jkr
  artifacts:
    reports:
      junit: junit.xml
    paths:
      - coverage/

build-staging:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build:stage
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-dev:
  stage: deploy
  needs: [build-staging]
  tags: [linux]
  only:
    - schedules
    # # - branches#gitlab-org/gitlab
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    # temporarily using 'verify-certificate no'
    # for more on verify-certificate #jkr: https://www.versatilewebsolutions.com/blog/2014/04/lftp-ftps-and-certificate-verification.html
    # variables do not work with 'single quotes' unless they are "'surrounded by doubles'"
    - lftp -e "set ssl:verify-certificate no; open mediajackagency.com; user $LFTP_USERNAME $LFTP_PASSWORD; mirror --reverse --verbose build/ /var/www/domains/dev/clients/client/project/build/; bye"
  # environment:
  #   name: staging
  #   url: http://dev.mediajackagency.com/clients/client/build
  #   # url: https://stg2.client.co
  when: manual
  allow_failure: true

build-production:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-client:
  stage: deploy
  needs: [build-production]
  tags: [linux]
  only:
    - schedules
    # - master
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    - sh deploy-prod
  environment:
    name: production
    url: http://www.client.co
  when: manual
  allow_failure: true

The idea is to keep check commands outside of .gitlab-ci.yml. I use a Makefile to run something like make check, and my .gitlab-ci.yml runs the same make commands that I use locally to check various things before committing.
This way you'll have one place with all (or most) of your commands (the Makefile), and .gitlab-ci.yml will have only CI-related stuff; see the sketch below.
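For illustration, a minimal sketch of this split (the lint and test commands are hypothetical placeholders; Makefile recipes are tab-indented):
Makefile:
check: lint test

lint:
	./scripts/lint.sh   # hypothetical lint entry point

test:
	./scripts/test.sh   # hypothetical test entry point
.gitlab-ci.yml:
check:
  script:
    - make check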

I have written a tool to run all GitLab CI jobs locally without having to commit or push, simply with the command ci-toolbox my_job_name.
The URL of the project : https://gitlab.com/mbedsys/citbx4gitlab

Years ago I built this simple solution with Makefile and docker-compose to run the gitlab runner in Docker. You can use it to execute jobs locally as well, and it should work on all systems where Docker works:
https://gitlab.com/1oglop1/gitlab-runner-docker
There are a few things to change in docker-compose.override.yaml:
version: "3"
services:
runner:
working_dir: <your project dir>
environment:
- REGISTRATION_TOKEN=<token if you want to register>
volumes:
- "<your project dir>:<your project dir>"
Then inside your project you can execute it the same way as mentioned in other answers:
docker exec -it -w $PWD runner gitlab-runner exec <commands>..

I recommend using gitlab-ci-local
https://github.com/firecow/gitlab-ci-local
It's able to run specific jobs as well.
It's a very cool project and I have used it to run simple pipelines on my laptop.
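For example, a quick sketch of how it can be used (assuming the npm package name gitlab-ci-local; see the project README for authoritative install instructions):
npm install -g gitlab-ci-local   # install the tool
gitlab-ci-local --list           # list the jobs found in .gitlab-ci.yml
gitlab-ci-local my-job-name      # run a single job locally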

Related

Why cant i use apk package manager in my Gitlab pipeline?

I'm trying to build a pipeline which SSHes into my server and executes a bash script. But when I execute the pipeline, it states: bash: line 129: apk: command not found
Why though? I'm already specifying to use alpine over my default node image.
My runner is configured as a shell executor.
image: node:16

build:
  only:
    - main
  script:
    - docker build --file Dockerfile --tag $IMAGE_NAME .

push_to_dockerhub:
  only:
    - main
  script:
    - docker login --username $DOCKERHUB_USERNAME --password $DOCKERHUB_PASSWORD
    - docker push $IMAGE_NAME

deploy:
  image: alpine:3.14
  only:
    - main
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh myusername@$IP_ADDRESS -p 3000 "/home/myusername/deploy.sh && exit"
I need to be able to use apk in this pipeline to add the openssh client and therefore establish an SSH connection.
The default NodeJS image from Docker Hub is Debian-based, so apt or apt-get is its package management tool.
You must use an Alpine-based Node container if you want to use apk (Alpine Package Keeper).
Example: image: node:lts-alpine
The NodeJS Docker image is built for a variety of versions and underlying Linux distributions. Follow the link to view them, then select what you need based on your requirements.
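For instance, a minimal sketch of the deploy job on an Alpine-based Node image, where apk is available (the rest of the job stays as in the question):
deploy:
  image: node:lts-alpine
  before_script:
    - apk add --no-cache openssh-client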

Execute GitLab job only if files have changed in a subdirectory, otherwise use cached artefact in following job

I have a simple pipeline, comparable to this one:
image: docker:20

variables:
  GIT_STRATEGY: clone

stages:
  - Building - Frontend
  - Building - Backend

include:
  - local: /.ci/extensions/ci-variables.yml
  - local: /.ci/extensions/docker-login.yml

Build Management:
  stage: Building - Frontend
  image: node:14-buster
  script:
    # Install needed dependencies for building
    - apt-get update
    - apt-get -y upgrade
    - apt-get install -y build-essential
    - yarn global add @quasar/cli
    - yarn global add @vue/cli
    # Install required modules
    - cd ${CI_PROJECT_DIR}/resources/js/management
    - npm ci --cache .npm --prefer-offline
    # Build project
    - npm run build
    # Create archive
    - tar czf ${CI_PROJECT_DIR}/dist-resources-js-management.tar.gz *
  cache:
    policy: pull-push
    key:
      files:
        - ./resources/js/management/package-lock.json
    paths:
      - ./resources/js/management/.npm/
  artifacts:
    paths:
      - dist-resources-js-management.tar.gz

Build Docker:
  stage: Building - Backend
  needs: [Build Management, Build Administration]
  dependencies:
    - Build Management
    - Build Administration
  variables:
    CI_REGISTRY_IMAGE_COMMIT_SHA: !reference [.ci-variables, variables, CI_REGISTRY_IMAGE_COMMIT_SHA]
    CI_REGISTRY_IMAGE_REF_NAME: !reference [.ci-variables, variables, CI_REGISTRY_IMAGE_REF_NAME]
  before_script:
    - !reference [.docker-login, before_script]
  script:
    - mkdir -p ${CI_PROJECT_DIR}/public/static/management
    - tar xzf ${CI_PROJECT_DIR}/dist-resources-js-management.tar.gz --directory ${CI_PROJECT_DIR}/public/static/management
    - docker build
      --pull
      --label "org.opencontainers.image.title=$CI_PROJECT_TITLE"
      --label "org.opencontainers.image.url=$CI_PROJECT_URL"
      --label "org.opencontainers.image.created=$CI_JOB_STARTED_AT"
      --label "org.opencontainers.image.revision=$CI_COMMIT_SHA"
      --label "org.opencontainers.image.version=$CI_COMMIT_REF_NAME"
      --tag "$CI_REGISTRY_IMAGE_COMMIT_SHA"
      -f .build/Dockerfile
      .
I now want the first job to be executed only under the following conditions:
Something has changed in the directory ${CI_PROJECT_DIR}/resources/js/management, or
this job has not yet created an artifact.
The last job should therefore always be able to access an artifact. If nothing has changed in the directory, it does not have to be created anew each time. If it did not exist before, it must of course be created.
Is there a way to map this in GitLab CI?
If I specify the dependencies and then work with only:changes: for the first job, GitLab complains when the job is not executed. Likewise with needs:.
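(For reference, a minimal sketch of the only/rules:changes variant referred to above; the path glob is illustrative, and this alone does not cover the case where the artifact does not exist yet:)
Build Management:
  rules:
    - changes:
        - resources/js/management/**/*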

Docker Tag Error 25 on gitlab-ci.yml trying to start GitLab Pipeline

I'm going through the "Scalable FastAPI Application on AWS" course. My gitlab-ci.yml file is below.
stages:
  - docker

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

cache:
  key: ${CI_JOB_NAME}
  paths:
    - ${CI_PROJECT_DIR}/services/talk_booking/.venv/

build-python-ci-image:
  image: docker:19.03.0
  services:
    - docker:19.03.0-dind
  stage: docker
  before_script:
    - cd ci_cd/python/
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/chris_/talk-booking:cicd-python3.9-slim .
    - docker push registry.gitlab.com/chris_/talk-booking:cicd-python3.9-slim
My Pipeline fails with this error:
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker build -t registry.gitlab.com/chris_/talk-booking:cicd-python3.9-slim .
invalid argument "registry.gitlab.com/chris_/talk-booking:cicd-python3.9-slim" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 125
It may or may not be relevant but the Container Registry for the GitLab project says there's a Docker connection error.
Thanks
I created a new GitLab account with a new username and things are working now. The underscore does appear to have been the issue.
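That is consistent with Docker's reference format: underscores in a repository path component may only appear between alphanumeric characters, so a component ending in one (chris_) is rejected. For illustration (the tag is hypothetical):
docker build -t registry.gitlab.com/chris_/talk-booking:some-tag .   # invalid: 'chris_' ends with '_'
docker build -t registry.gitlab.com/chris/talk-booking:some-tag .    # valid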

How to run a script from file in another project using include in GitLab CI?

I'm trying to run a shell script from my template file located in another project via my include.
How should this be configured to work? The scripts below are simplified versions of my code.
Project A
template.yml
deploy:
  before_script:
    - chmod +x ./.run.sh
    - source ./.run.sh
Project B
gitlab-ci.yml
include:
  - project: 'project-a'
    ref: master
    file: '/template.yml'

stages:
  - deploy
Clearly, the commands are actually being run from Project B and not Project A, where the template resides. This can further be confirmed by adding ls -a in the template file.
So how should we be calling run.sh? Both projects are on the same GitLab instance under different groups.
If you have access to projects A and B, you can use multi-project pipelines: you trigger a pipeline in project A from project B.
In project A, you clone project B and run your script.
Project B
job 1:
  variables:
    PROJECT_PATH: "$CI_PROJECT_PATH"
    RELEASE_BRANCH: "$CI_COMMIT_BRANCH"
  trigger:
    project: project-a
    strategy: depend
Project A
job 2:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline" && $PROJECT_PATH && $RELEASE_BRANCH'
  script:
    - git clone -b "${RELEASE_BRANCH}" --depth 50 https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/${PROJECT_PATH}.git $(basename ${PROJECT_PATH})
    - cd $(basename ${PROJECT_PATH})
    - chmod +x ../.run.sh
    - source ../.run.sh
We've also run into this problem, and kind of wish GitLab allowed includes to "import" non-YAML files. Nevertheless, the simplest workaround we've found is to build a small Docker image in repo A which contains the script you want to run, and then have repo B's job use that Docker image as its image, so the file run.sh is available :)
Minimal Dockerfile:
FROM bash:latest
COPY run.sh /usr/local/bin/
CMD run.sh
(Note: make sure you chmod +x run.sh before building your image, or add a RUN chmod +x /usr/local/bin/run.sh step)
Then, you'd just add this to your Project B's .gitlab-ci.yml:
stages:
- deploy
deploy:
image: registry.gitlab.com/... # Wherever you pushed your docker image to
script: run.sh
It's also possible to request a script by curl instead of copying a whole repository:
- curl -H "PRIVATE-TOKEN:$PRIVATE_TOKEN" --create-dirs "$CI_API_V4_URL/projects/$CI_DEPLOY_PROJECT_ID/repository/archive?path=pathToFolderWithScripts" -o $TEMP_DIR/archive.tar.gz
- tar zxvf $TEMP_DIR/archive.tar.gz -C $TEMP_DIR --strip-components 3
- bash $TEMP_DIR/run.sh
The three lines: make a curl request to download an archive of the folder with the scripts, unpack the scripts into a temporary folder, and execute the script.
Reference: https://docs.gitlab.com/ee/api/repository_files.html#get-file-from-repository
GET /projects/:id/repository/files/:file_path/raw
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb?ref=master"
This will display the file. To download it instead, just append >> as below:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb?ref=master" >> file.extension
As hinted by the answer above, multi-project pipelines are the right approach for this.
Here's how it worked for me:
GroupX/ProjectA - contains reusable code
# .gitlab-ci.yml
stages:
  - deploy

reusable_deploy_job:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline"' # run only if triggered by a pipeline
  script:
    - bash ./src/run.sh $UPSTREAM_CUSTOM_VARIABLE
GroupY/ProjectB - job that will reuse a code
# .gitlab-ci.yml
stages:
  - deploy

deploy_job:
  stage: deploy
  variables:
    UPSTREAM_CUSTOM_VARIABLE: CUSTOM_VARIABLE # pass this variable to downstream job
  trigger: groupx/projecta

Issues with gitlab-ci stages

I've been working on setting up an automated RPM build and I'd like to perform a simple test on the SPEC file before proceeding with any build steps. The problem I am having is that the job always seems to jump to the deploy stage. Here is the relevant snippet from my .gitlab-ci.yml:
stages:
  - test
  - build
  - deploy

job1:
  stage: test
  script:
    # Test the SPEC file
    - su - newbuild -c "rpmbuild --nobuild -vv ~/rpmbuild/SPECS/package.SPEC"
  stage: build
  script:
    # Install our required packages
    - yum -y install openssl-devel freetype-devel fontconfig-devel libicu-devel sqlite-devel libpng-devel libjpeg-devel ruby
    # Initialize the submodules to build
    - git submodule update --init
    # build the RPM
    - su - newbuild -c "rpmbuild -ba --target=`uname -m` -vv ~/rpmbuild/SPECS/package.SPEC"
  stage: deploy
  script:
    # move the RPM/SRPM
    - mkdir -pv $BUILD_DIR/$RELEASEVER/{SRPMS,x86_64}
    - 'for f in $WORK_DIR/rpmbuild/RPMS/x86_64/*; do cp -v "$f" $BUILD_DIR/$RELEASEVER/x86_64; done'
    - 'for f in $WORK_DIR/rpmbuild/SRPMS/*; do cp -v "$f" $BUILD_DIR/$RELEASEVER/SRPMS; done'
    # create the repo
    - createrepo -dvp $BUILD_DIR/$RELEASEVER
    # update latest
    - 'if [ $CI_BUILD_REF_NAME == "master" ]; then rm $PROJECT_DIR/latest; ln -sv $(basename $BUILD_DIR) $PROJECT_DIR/latest; fi'
    - 'if [ $CI_BUILD_REF_NAME == "devel" ]; then rm $PROJECT_DIR/latest-dev; ln -sv $(basename $BUILD_DIR) $PROJECT_DIR/latest-dev; fi'
  tags:
    - repos
I've not found any questions or online documentation to properly explain this to me so any help is appreciated!
You have all three stages in one job, which does not work. You need to split it up into individual jobs for the three different stages.
Quote from the documentation:
First all jobs of build are executed in parallel.
If all jobs of build succeeds, the test jobs are executed in parallel.
If all jobs of test succeeds, the deploy jobs are executed in parallel.
If all jobs of deploy succeeds, the commit is marked as success.
If any of the previous jobs fails, the commit is marked as failed and no jobs of further stage are executed.
Something like this should work:
stages:
  - test
  - build
  - deploy

do_things_on_stage_test:
  script:
    - do things
  stage: test

do_things_on_stage_build:
  script:
    - do things
  stage: build

do_things_on_stage_deploy:
  script:
    - do things
  stage: deploy
I think you assume that the stages are built on top of each other, which is not the case. If one of your stages needs something like pre-installed packages, you have to add a before_script directive, as in the sketch below. Think of the stages as in: test-if-build-succeeds, test-if-deploy-succeeds, etc.
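For example, a minimal sketch of re-installing a tool in the job that needs it (the package is illustrative, echoing the createrepo step from the question):
do_things_on_stage_deploy:
  before_script:
    - yum -y install createrepo # each job starts from a clean environment, so install per job
  script:
    - do things
  stage: deploy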