Using multiple runners in one gitlab-ci pipeline

I want to run a CI pipeline with 2 jobs: the first job will boot up a docker image with the docker runner and run tests inside docker; the second will run under the ssh runner and pull code on a remote server.
Is it possible?

Yes, it's possible. You need to:
Register two GitLab Runners with the needed executors (docker and shell), each with a different tag (or, at least, one of them with a tag).
Declare the matching tag for each job in your .gitlab-ci.yml.
Shell runner registration:
[root@jsc00mca ~]# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://example.com/
Please enter the gitlab-ci token for this runner:
1a2b3c
Please enter the gitlab-ci description for this runner:
[jsc00mca.example.com]: my-shell-runner
Please enter the gitlab-ci tags for this runner (comma separated):
shell
Whether to run untagged builds [true/false]:
[false]:
Whether to lock the Runner to current project [true/false]:
[true]:
Registering runner... succeeded runner=ajgHxcNz
Please enter the executor: virtualbox, docker+machine, kubernetes, docker, shell, ssh, docker-ssh+machine, docker-ssh, parallels:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Docker runner registration:
[root@jsc00mca ~]# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://example.com/
Please enter the gitlab-ci token for this runner:
1a2b3c
Please enter the gitlab-ci description for this runner:
[jsc00mca.example.com]: my-docker-runner
Please enter the gitlab-ci tags for this runner (comma separated):
docker
Whether to run untagged builds [true/false]:
[false]:
Whether to lock the Runner to current project [true/false]:
[true]:
Registering runner... succeeded runner=ajgHxcNz
Please enter the executor: virtualbox, docker+machine, kubernetes, docker, shell, ssh, docker-ssh+machine, docker-ssh, parallels:
docker
Please enter the default Docker image (e.g. ruby:2.1):
alpine:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
.gitlab-ci.yml:
buildWithShell:
  stage: build
  tags:
    - shell
  script:
    - echo 'Building with the shell executor...'

buildWithDocker:
  image: alpine:latest
  stage: build
  tags:
    - docker
  script:
    - echo 'Building with the docker executor...'

Yes, you can trigger different/mixed runners from a single gitlab-ci pipeline.
First you should register a shell runner on the target host and give it a tag (truncated):
$ gitlab-runner register
...
Please enter the gitlab-ci tags for this runner (comma separated):
my_shell_runner
...
Please enter the executor: virtualbox, docker+machine, docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh:
shell
Within your .gitlab-ci.yml, something like this should work.
The 'test' job runs your test command in a docker container based on the image NAME_OF_IMAGE.
If that succeeds, the 'deploy' job chooses your shell runner based on the tag 'my_shell_runner' and will execute all commands within the script tag on the runner's host (truncated):
test:
  stage: test
  services:
    - docker:dind
  tags:
    - docker-executor
  script:
    - docker run --rm NAME_OF_IMAGE sh -c "TEST_COMMAND_TO_RUN"

deploy:
  stage: deploy
  tags:
    - my_shell_runner
  script:
    - COMMAND_TO_RUN
    - COMMAND_TO_RUN
    - COMMAND_TO_RUN

Related

docker executor vs docker dind image

I am a newbie in GitLab CI. I want to understand why we need the docker dind image in order to build a docker image in GitLab CI jobs. Why can't we use the docker executor and run docker commands under scripts?
When we register a docker executor GitLab runner, we choose one image.
Again, inside .gitlab-ci.yml we choose an image under the image: or services: fields. So does that mean this GitLab CI job container runs inside the docker executor container?
why we need the docker dind image in order to build a docker image in GitLab CI jobs. Why can't we use the docker executor and run docker commands under scripts?
This partly depends on how you have configured your GitLab runner.
Why docker doesn't work inside containers
When you invoke docker commands, they are really talking to a docker daemon which is needed to perform builds and carry out other docker commands. Typically, jobs running under the docker executor do not have access to any docker daemon by default. It's the same kind of problem you would face if you tried to run docker inside of a docker container you started locally.
Even if I can run docker successfully on my host:
$ docker run --rm docker /bin/sh -c 'echo hello from container $HOSTNAME'
hello from container 2b51479b11b1
I cannot run docker inside the container
$ docker run --rm docker /bin/sh -c 'docker info'
errors pretty printing info
Client:
Context: default
Debug Mode: false
Server:
ERROR: error during connect: Get "http://docker:2375/v1.24/info": dial tcp: lookup docker on 192.168.65.5:53: no such host
The same error would happen trying to run any other significant docker command like build, run, etc.
An exception to this would be if you configured your GitLab runner to run containers in privileged mode and mount /var/run/docker.sock to all your jobs (this would not be advisable) in which case all your jobs could talk directly to the docker daemon on the host. Another exception might be if you use the shell executor instead and you have docker installed on the host where the runner is running.
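For context, a runner configured that way would have something like the following in its config.toml. This is only a sketch using the standard [runners.docker] options; the runner name and image value are illustrative, and, again, mounting the host socket is not advisable:
[[runners]]
  name = "privileged-docker-runner"   # illustrative name
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # privileged mode and/or mounting the host docker socket give jobs
    # direct access to a docker daemon -- convenient but risky
    privileged = true
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]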
How the dind service fixes this
The docker:dind service is a daemon that is created just for your job. This is incredibly important because it can prevent concurrent jobs from stepping on one another or being able to escalate access where they might not otherwise have it.
When the build starts, the GitLab runner will create two containers: your job container and the docker:dind container; they are linked together. When your job invokes docker commands, your job connects to the docker:dind container, which then carries out the requested commands.
Any containers created by your job (say, by invoking docker run or docker build as part of your job) are managed by the daemon running on the docker:dind container, not the host daemon. If you run docker ps inside the job, you'll notice that none of the containers run on the host daemon are listed, despite the fact that if you ran docker ps on the host, you would see the job container, the dind container, and any other running containers.
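As a sketch of what a dind-based job typically looks like in .gitlab-ci.yml (this follows the commonly documented pattern rather than your exact setup, and it assumes the runner allows privileged containers; the job and image names are placeholders):
build-image:
  image: docker:latest
  services:
    - docker:dind
  variables:
    # point the docker CLI at the dind service container instead of the host daemon
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""   # TLS disabled to keep the sketch simple
  script:
    - docker info                       # answered by the dind daemon
    - docker build -t myimage:latest .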
To clarify your other questions:
When we register docker executor gitlab runner, we choose one image
The image specified in your runner configuration is simply the default docker image to be used if a job doesn't declare any image: key. It does not affect how the runner runs in any way.
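In the runner's config.toml, that default is just the image value in the docker section, e.g.:
[runners.docker]
  image = "ruby:2.1"   # used only when a job defines no image: key of its own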
inside gitlabci, we choose an image under image: or services: fields
When the docker executor runs your job, it uses docker run to do so. The image: key determines which image is used to run your job. Similarly, services: define the image used for service containers -- service containers are siblings to the job container and are connected with links.
So does that mean this GitLab CI job container runs inside the docker executor container?
No. I'd also like to clear up: the runner/executor doesn't run in a container, necessarily. Runners might be installed as a Windows service, or simply even a process running directly on a system. You can use runners that happen to be inside containers, but it doesn't materially affect how jobs are run.
In any case, the containers where your jobs run are generally always going to be run directly by the host docker daemon.

Gitlab CI - How to start Shared Runner

I'm new to Gitlab CI.
I have configured .gitlab-ci.yml file, and using CI Lint it has passed the validation process.
Based on this documentation, I can see a specific runner should be configured on a virtual machine, a VPS, a bare-metal machine, a docker container or even a cluster of containers.
But I can see GitLab has its own shared runners, enabled by default.
The question is how to use this shared runner?
When I visit the Pipeline page I can only see the blue Get Started with Pipeline button and when clicked I was redirected to this page.
Here's my .gitlab-ci.yml content:
before_script:
  - eval $(ssh-agent -s)

stage_deploy:
  only:
    - testing
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - ssh root@1.2.3.4 "sh update_app.sh"
It will only run the job for your testing branch. Have you added the .gitlab-ci.yml file to that branch too?

How to bind Jenkins build output with tests result?

I'm setting up automated protractor tests to run in a docker container with the help of Jenkins, but I haven't been able to make the Jenkins build result reflect the testing outcome (if some test fails, the build should also fail).
It is important to say that all tests should run, even if the first one fails.
The tests are initiated with docker-compose up --abort-on-container-exit and my docker-compose file looks like:
version: '2'
services:
  selenium:
    image: selenium/standalone-chrome
    ports:
      - 4444:4444
    volumes:
      - /dev/shm:/dev/shm
  protractor:
    volumes:
      - ./reporting:/assets/reporting
    image: protractor-test
    command: "dockerize -wait http://selenium:4444 -timeout 60m protractor /assets/conf.js"
Looks like your docker-compose command is returning exit code 0 no matter what.
How about using a Jasmine xunit reporter to generate a test report, copying the generated XML test report out of the container (using docker cp), and then publishing it with Jenkins' post-build action?
The job will be marked as failed if the XML is not present, which means there was an error during the test run, or it will be marked as unstable if any of the test asserts failed.
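A rough sketch of that flow as a Jenkins shell build step follows; the service name and mount path come from the compose file above, while the reporter output location is an assumption:
# run the suite; the exit code is unreliable here, the published report decides the result
docker-compose up --abort-on-container-exit

# the compose file already mounts ./reporting:/assets/reporting, so the XML report
# produced by the Jasmine xunit reporter should end up in ./reporting on the host;
# if it does not, docker cp can pull it out of the stopped protractor container:
docker cp "$(docker-compose ps -q protractor)":/assets/reporting ./reporting
Then point the "Publish JUnit test result report" post-build action at reporting/*.xml; a missing report fails the build, and failed asserts mark it unstable.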

Gitlab CI Different executor per stage

Is it possible to have 2 stages in .gitlab-ci.yml, one run with the docker runner and the other run with the shell runner?
Imagine I want to run tests in a docker container but I want to run deploy stage in shell locally in the container.
Not exactly stages, but you can have different jobs run by different runners using the tags configuration option, which should give you exactly what you want.
Add (either during runner creation or later in Project settings -> Runners) the tag docker to the Docker runner and the tag shell to the shell runner. Then you can set the tags in your .gitlab-ci.yml file:
stages:
  - test
  - deploy

tests:
  stage: test
  tags:
    - docker
  script:
    - [test routine]

deployment:
  stage: deploy
  tags:
    - shell
  script:
    - [deployment routine]

Use GitLab CI to run tests locally?

If a GitLab project is configured on GitLab CI, is there a way to run the build locally?
I don't want to turn my laptop into a build "runner", I just want to take advantage of Docker and .gitlab-ci.yml to run tests locally (i.e. it's all pre-configured). Another advantage of that is that I'm sure that I'm using the same environment locally and on CI.
Here is an example of how to run Travis builds locally using Docker, I'm looking for something similar with GitLab.
For a few months now this has been possible using gitlab-runner:
gitlab-runner exec docker my-job-name
Note that you need both docker and gitlab-runner installed on your computer to get this working.
You also need the image key defined in your .gitlab-ci.yml file. Otherwise it won't work.
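For example, a minimal, illustrative .gitlab-ci.yml that works with the command above could be (the job and image names are placeholders, not from the original question):
image: alpine:latest

my-job-name:
  script:
    - echo "running my-job-name locally"
With that file in place, gitlab-runner exec docker my-job-name runs the job in a local container.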
Here's the line I currently use for testing locally using gitlab-runner:
gitlab-runner exec docker test --docker-volumes "/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"
Note: You can avoid adding --docker-volumes with your key by setting it as a default in /etc/gitlab-runner/config.toml. See the official documentation for more details. Also, use gitlab-runner exec docker --help to see all docker-based runner options (like variables, volumes, networks, etc.).
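For a registered runner, that default would look roughly like this in /etc/gitlab-runner/config.toml (a sketch only; the ssh key path is just the one from the command above):
[[runners]]
  executor = "docker"
  [runners.docker]
    # volumes listed here are mounted into every job this runner executes
    volumes = ["/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro", "/cache"]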
Due to the confusion in the comments, I paste here the gitlab-runner --help result, so you can see that gitlab-runner can make builds locally:
gitlab-runner --help
NAME:
gitlab-runner - a GitLab Runner
USAGE:
gitlab-runner [global options] command [command options] [arguments...]
VERSION:
1.1.0~beta.135.g24365ee (24365ee)
AUTHOR(S):
Kamil Trzciński <ayufan@ayufan.eu>
COMMANDS:
exec execute a build locally
[...]
GLOBAL OPTIONS:
--debug debug mode [$DEBUG]
[...]
As you can see, the exec command is to execute a build locally.
Even though there was an issue to deprecate the current gitlab-runner exec behavior, it ended up being reconsidered and a new version with greater features will replace the current exec functionality.
Note that this process is for using your own machine to run the tests in docker containers, not for defining custom runners. To do that, just go to your repo's CI/CD settings and read the documentation there. If you want to ensure your runner is executed instead of one from gitlab.com, add a custom and unique tag to your runner, make sure it only runs tagged jobs, and tag all the jobs you want your runner to be responsible for.
I use this docker-based approach:
Edit: 2022-10
docker run --entrypoint bash --rm -w $PWD -v $PWD:$PWD -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest -c 'git config --global --add safe.directory "*";gitlab-runner exec docker test'
For all git versions > 2.35.2 you must add safe.directory within the container to avoid fatal: detected dubious ownership in repository at.... This is also true for patched git versions < 2.35.2. The old command will not work anymore.
Details
0. Create a git repo to test this answer
mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."
1. Go to your git directory
cd my-git-project
2. Create a .gitlab-ci.yml
Example .gitlab-ci.yml
image: alpine

test:
  script:
    - echo "Hello Gitlab-Runner"
3. Create a docker container with your project dir mounted
docker run -d \
--name gitlab-runner \
--restart always \
-v $PWD:$PWD \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
(-d) run container in background and print container ID
(--restart always) or not?
(-v $PWD:$PWD) Mount the current directory into the same path inside the container. Note: On Windows you could bind your dir to a fixed location, e.g. -v ${PWD}:/opt/myapp. Also, $PWD will only work in PowerShell, not in cmd.
(-v /var/run/docker.sock:/var/run/docker.sock) This gives the container access to the docker socket of the host so it can start "sibling containers" (e.g. Alpine).
(gitlab/gitlab-runner:latest) Just the latest available image from dockerhub.
4. Execute with
Avoid fatal: detected dubious ownership in repository at... More info
docker exec -it -w $PWD gitlab-runner git config --global --add safe.directory "*"
Actual execution
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test
(a) -w $PWD: working dir within the container. Note: On Windows you could use a fixed location, e.g. /opt/myapp.
(b) gitlab-runner (first occurrence): name of the docker container
(c) gitlab-runner (second occurrence): execute the command "gitlab-runner" within the docker container
(d)(e)(f) exec docker test: run gitlab-runner with the docker executor and run the job named "test"
5. Prints
...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...
Note: The runner will only work on the committed state of your code base. Uncommitted changes will be ignored. Exception: the .gitlab-ci.yml itself does not have to be committed to be taken into account.
Note: There are some limitations running locally. Have a look at limitations of gitlab runner locally.
I'm currently working on making a gitlab runner that works locally.
Still in the early phases, but eventually it will become very relevant.
It doesn't seem like GitLab wants to (or has time to) make this, so here you go.
https://github.com/firecow/gitlab-runner-local
If you are running GitLab using the docker image from https://hub.docker.com/r/gitlab/gitlab-ce, it's possible to run pipelines by exposing the local docker.sock with a volume option: -v /var/run/docker.sock:/var/run/docker.sock. Adding this option to the GitLab container will allow your workers to access the docker instance on the host.
The GitLab runner appears to not work on Windows yet and there is an open issue to resolve this.
So, in the meantime I am moving my script code out to a bash script, which I can easily map to a docker container running locally and execute.
In this case I want to build a docker image in my job, so I create a script 'build':
#!/bin/bash
docker build --pull -t myimage:myversion .
In my .gitlab-ci.yml I execute the script:
image: docker:latest

services:
  - docker:dind

before_script:
  - apk add bash

build:
  stage: build
  script:
    - chmod 755 build
    - ./build
To run the script locally using PowerShell, I can start the required image and map the volume with the source files:
$containerId = docker run --privileged -d -v ${PWD}:/src docker:dind
install bash if not present:
docker exec $containerId apk add bash
Set permissions on the bash script:
docker exec -it $containerId chmod 755 /src/build
Execute the script:
docker exec -it --workdir /src $containerId bash -c './build'
Then stop the container:
docker stop $containerId
And finally clean up the container:
docker container rm $containerId
Another approach is to have a local build tool that is installed on your PC and on your server at the same time.
So basically, your .gitlab-ci.yml will just call your preferred build tool.
Here is an example .gitlab-ci.yml that I use with nuke.build:
stages:
  - build
  - test
  - pack

variables:
  TERM: "xterm" # Use Unix ASCII color codes on Nuke

before_script:
  - CHCP 65001 # Set correct code page to avoid charset issues

.job_template: &job_definition
  except:
    - tags

build:
  <<: *job_definition
  stage: build
  script:
    - "./build.ps1"

test:
  <<: *job_definition
  stage: test
  script:
    - "./build.ps1 test"
  variables:
    GIT_CHECKOUT: "false"

pack:
  <<: *job_definition
  stage: pack
  script:
    - "./build.ps1 pack"
  variables:
    GIT_CHECKOUT: "false"
  only:
    - master
  artifacts:
    paths:
      - output/
And in nuke.build I've defined 3 targets named like the 3 stages (build, test, pack).
In this way you have a reproducible setup (all other things are configured with your build tool) and you can test the different targets of your build tool directly.
(I can call .\build.ps1, .\build.ps1 test and .\build.ps1 pack whenever I want.)
I am on Windows, using VSCode with WSL.
I didn't want to register my work PC as a runner, so instead I'm running my yaml stages locally to test them out before I upload them:
$ sudo apt-get install gitlab-runner
$ gitlab-runner exec shell build
.gitlab-ci.yml:
image: node:10.19.0 # https://hub.docker.com/_/node/
# image: node:latest

cache:
  # untracked: true
  key: project-name
  # key: ${CI_COMMIT_REF_SLUG} # per branch
  # key:
  #   files:
  #     - package-lock.json # only update cache when this file changes (not working) #jkr
  paths:
    - .npm/
    - node_modules
    - build

stages:
  - prepare # prepares builds, makes build needed for testing
  - test # uses test:build specifically #jkr
  - build
  - deploy

# before_install:
before_script:
  - npm ci --cache .npm --prefer-offline

prepare:
  stage: prepare
  needs: []
  script:
    - npm install

test:
  stage: test
  needs: [prepare]
  except:
    - schedules
  tags:
    - linux
  script:
    - npm run build:dev
    - npm run test:cicd-deps
    - npm run test:cicd # runs puppeteer tests #jkr
  artifacts:
    reports:
      junit: junit.xml
    paths:
      - coverage/

build-staging:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build:stage
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-dev:
  stage: deploy
  needs: [build-staging]
  tags: [linux]
  only:
    - schedules
    # - branches@gitlab-org/gitlab
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    # temporarily using 'verify-certificate no'
    # for more on verify-certificate #jkr: https://www.versatilewebsolutions.com/blog/2014/04/lftp-ftps-and-certificate-verification.html
    # variables do not work with 'single quotes' unless they are "'surrounded by doubles'"
    - lftp -e "set ssl:verify-certificate no; open mediajackagency.com; user $LFTP_USERNAME $LFTP_PASSWORD; mirror --reverse --verbose build/ /var/www/domains/dev/clients/client/project/build/; bye"
  # environment:
  #   name: staging
  #   url: http://dev.mediajackagency.com/clients/client/build
  #   # url: https://stg2.client.co
  when: manual
  allow_failure: true

build-production:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-client:
  stage: deploy
  needs: [build-production]
  tags: [linux]
  only:
    - schedules
    # - master
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    - sh deploy-prod
  environment:
    name: production
    url: http://www.client.co
  when: manual
  allow_failure: true
The idea is to keep check commands outside of .gitlab-ci.yml. I use a Makefile to run something like make check, and my .gitlab-ci.yml runs the same make commands that I use locally to check various things before committing.
This way you'll have one place with all/most of your commands (the Makefile), and .gitlab-ci.yml will have only CI-related stuff.
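As a minimal sketch of that split (assuming a Makefile with a check target; the job name and image are placeholders), the CI side reduces to:
check:
  image: node:18        # whatever toolchain your Makefile expects
  script:
    - make check        # the same command you run locally before committing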
I have written a tool to run all GitLab CI jobs locally without having to commit or push, simply with the command ci-toolbox my_job_name.
The URL of the project: https://gitlab.com/mbedsys/citbx4gitlab
Years ago I built this simple solution with a Makefile and docker-compose to run the gitlab runner in docker. You can use it to execute jobs locally as well, and it should work on all systems where docker works:
https://gitlab.com/1oglop1/gitlab-runner-docker
There are a few things to change in the docker-compose.override.yaml:
version: "3"
services:
runner:
working_dir: <your project dir>
environment:
- REGISTRATION_TOKEN=<token if you want to register>
volumes:
- "<your project dir>:<your project dir>"
Then inside your project you can execute it the same way as mentioned in other answers:
docker exec -it -w $PWD runner gitlab-runner exec <commands>..
I recommend using gitlab-ci-local
https://github.com/firecow/gitlab-ci-local
It's able to run specific jobs as well.
It's a very cool project and I have used it to run simple pipelines on my laptop.
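Typical usage, per the project's README, looks roughly like this (the job name is an example):
npm install -g gitlab-ci-local   # the tool is distributed via npm
gitlab-ci-local --list           # list the jobs found in .gitlab-ci.yml
gitlab-ci-local my-job-name      # run a single job locally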