GitLab CI: Is it possible to run parallel jobs on different runners?

I'm looking for a way to run parallel jobs on different runners. I have several powerful runners set up for GitLab CI. In general, it's fine to run jobs on the same runner because they're executed in Docker containers.
However, I now have a pipeline whose jobs run in parallel, and each job consumes a lot of CPU and memory (by design, not an issue). If GitLab CI happens to schedule those jobs to the same runner, the jobs fail.
Also, I want this limitation to apply to this project ONLY, as my runners have 30+ CPUs and 120+ GB of memory.
Thanks in advance.

It is possible if you have set up, say, two runners (specific, shared, or group runners) with tags.
Say runner1 has the tags runner1-ci and my-runner1.
Similarly, runner2 has the tags runner2-ci and my-runner2.
Now, in your .gitlab-ci.yml file, you can use tags as below, so that each job is picked up and executed by that particular runner.
image: maven:latest

stages:
  - build
  - test
  - deploy

install_dependencies:
  stage: build
  tags:
    - runner1-ci
  script:
    - pwd
    - echo "Build"

test:
  stage: test
  tags:
    - runner2-ci
  script:
    - echo "Testing"

deploy:
  stage: deploy
  tags:
    - runner1-ci
  script:
    - echo "Deploy to nexus"
Note: This is just an example .gitlab-ci.yml to demonstrate the use of tags in a pipeline.
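For completeness, tags are assigned when a runner is registered (or later in the runner's settings in the GitLab UI). A sketch of registering the two tagged runners with the gitlab-runner CLI, assuming a placeholder URL and registration token:

```shell
# Placeholder URL and token; adjust to your GitLab instance and setup.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "alpine:latest" \
  --description "runner1" \
  --tag-list "runner1-ci,my-runner1"

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "alpine:latest" \
  --description "runner2" \
  --tag-list "runner2-ci,my-runner2"
```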

Related

GitLab CI job stuck because the runner tag value hasn't been assigned

I have a CI/CD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - ${RUNNER_TAG}
  stage: lint
  script: |
    echo "Add linting here!"
....
In short, which runner to choose depends on which file was changed, so the runner tag has to be decided conditionally. However, these jobs never execute and the value of RUNNER_TAG never gets assigned, as I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: ${RUNNER_TAG}
Any idea why it behaves this way and what I can do to resolve it?
gitlab-runner --version
Version: 14.7.0
Git revision: 98daeee0
Git branch: 14-7-stable
GO version: go1.17.5
Built: 2022-01-19T17:11:48+0000
OS/Arch: linux/amd64
Tags map jobs to runners. I tag my runners with the type of executor they use, e.g. shell, docker.
Based on the error message, you do not have any runners with the tag ${RUNNER_TAG}, which means that it is not resolving the variable the way you want it to.
Instead of combining rules like this, make separate jobs for each, and a rule for each to say when to trigger it.
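For instance (a sketch reusing the tags and file paths from the question), the variable tag can be replaced by one job per runner, each guarded by its own rule:

```yaml
deploy scripts dev:
  stage: lint
  tags:
    - somerunner1
  rules:
    - changes:
        - file/dev/script1.txt
  script: |
    echo "Add linting here!"

deploy scripts test:
  stage: lint
  tags:
    - somerunner2
  rules:
    - changes:
        - file/test/script1.txt
  script: |
    echo "Add linting here!"
```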
I have faced this issue, and similar issues many times while trying to do some dynamic pipelines for a multi-client environment.
The config you have above should work for your purposes to the best of my knowledge, but since it does not, there is another way to accomplish this with trigger jobs.
Create a trigger job for each possible runner tag. You can use extends to reduce the total code required for this.
.gitlab-ci.yml
stages:
  - trigger
  - lint

.trigger:
  stage: trigger
  trigger:
    include:
      - local: ./lint-job.yml
    strategy: depend

trigger-lint-script1:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner1
  rules:
    - changes:
        - file/dev/script1.txt

trigger-lint-script2:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner2
  rules:
    - changes:
        - file/dev/script2.txt
Create a trigger job with associated rules for each possible tag. This way you can change more than one of the specified files in a single commit with no issues. Define the triggered job in lint-job.yml:
lint-job.yml
deploy scripts 1/6:
  tags: [$RUNNER_TAG]
  stage: lint
  script: |
    echo "Add linting here!"
There are other ways to accomplish this, but this method is by far the simplest and cleanest for this particular use.

Invoke GitLab CI jobs from inside other jobs

I have many different GitLab CI jobs in my repository, and depending on variables that a user sets in a config file, I want to execute different sequences of jobs. My approach is to create a scheduler job that analyzes the config file and executes the jobs accordingly. However, I cannot figure out how to execute another job from within a job.
Any help is appreciated!
This would be a good use case for dynamic child pipelines. This is pretty much the only way to customize a pipeline based on the outcome of another job.
From the docs:
generate-config:
  stage: build
  script: generate-ci-config > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

child-pipeline:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-config
In your case, the script generate-ci-config would analyze your config files and conditionally create a job configuration based on their contents.
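As a minimal sketch (the config-file format and job names here are hypothetical), such a generator could be a small shell script that emits different jobs depending on what the user's config contains:

```shell
#!/bin/sh
# Hypothetical generate-ci-config: emit a child pipeline definition
# based on a key in a user-provided config file.
generate_ci_config() {
  config_file="$1"
  # Always emit a build job.
  cat <<'EOF'
build-job:
  stage: build
  script:
    - echo "Building"
EOF
  # Emit a test job only when the user asked for tests.
  if grep -q '^run_tests=true$' "$config_file" 2>/dev/null; then
    cat <<'EOF'
test-job:
  stage: test
  script:
    - echo "Running tests"
EOF
  fi
}

# Demo: write a sample config and generate the child pipeline from it.
printf 'run_tests=true\n' > user-config.txt
generate_ci_config user-config.txt > generated-config.yml
cat generated-config.yml
```

The generated file is then picked up by the child-pipeline trigger job via the artifact, as shown above.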

GitLab CI - forced start of job during manual start of another job

I have a dependency problem. My pipeline first fetches the dependencies required by the jobs, and finally runs a cleanup stage that removes them all. The problem is that I have one manually started stage that also needs these dependencies, but by then they have been cleared.
Can I somehow trigger the stage that provides the dependencies by starting the manual stage? Is there any other way I can solve this problem?
The normal behaviour of GitLab CI is to clone the git repository for each job, because jobs can run on different runners and thus need to be independent.
The automatic clone can be disabled by adding:
job-with-no-git-clone:
  variables:
    GIT_STRATEGY: none
If a job needs files or directories created in a previous stage, you must declare them as GitLab artifacts:
stages:
  - one
  - two

job-with-git-clone:
  stage: one
  script:
    # this script creates something in the folder data
    # (which means $CI_PROJECT_DIR/data)
    - do_something
  artifacts:
    paths:
      - data/

job2-with-git-clone:
  stage: two
  script:
    # here you can use the files created in data
    - ls data/

job2-with-no-git-clone:
  stage: two
  variables:
    GIT_STRATEGY: none
  script:
    # here you can use the files created in data
    - ls data/

Pipeline is stuck on "pending"

I'm using GitLab's shared CI runners for training. Since this morning I can't build
my project because the pipeline's status is "pending".
Is it because the number of shared runners is maxed out ? Too busy running other people's
code ?
Is there a setting I need to check ? I know I can pay for a dedicated runner but I'm not
in commercial setting at this point therefore I'm sticking with shared ones.
Thank you for assistance.
In such situations I would highly recommend making the best use of tags. You can use tags to select a specific runner from the list of all runners available to the project.
In this example, the job is run by a runner that
has both ruby and postgres tags defined.
job:
  tags:
    - ruby
    - postgres
If you have your own runners setup then you can use tags to run different jobs on different platforms. For
example, if you have an OS X runner with tag osx and a Windows runner with tag
windows, you can run a job on each platform:
windows job:
  stage: build
  tags:
    - windows
  script:
    - echo Hello, %USERNAME%!

osx job:
  stage: build
  tags:
    - osx
  script:
    - echo "Hello, $USER!"
If this is still a problem then consider using your own private runner.

CI jobs spanning multiple stages

How can I create a CI job that spans more than one stage, to improve parallelism?
As in the following diagram:
The idea is that slow_build should start as early as build, but test doesn't depend on it, so test should be able to start as soon as build is done.
(Note that this is a simplification: each stage has multiple processes running in parallel, otherwise I could just bundle build and test together.)
This is now possible as of GitLab 12.2. By adding the needs keyword to jobs that depend on other jobs, stages can run concurrently. The full documentation for the needs keyword is at https://docs.gitlab.com/ee/ci/yaml/#needs, but an example from the docs follows:
linux:build:
  stage: build

mac:build:
  stage: build

lint:
  stage: test
  needs: []

linux:rspec:
  stage: test
  needs: ["linux:build"]

linux:rubocop:
  stage: test
  needs: ["linux:build"]

mac:rspec:
  stage: test
  needs: ["mac:build"]

mac:rubocop:
  stage: test
  needs: ["mac:build"]

production:
  stage: deploy
Since the lint job doesn't need anything, it runs instantly, as do linux:build and mac:build. However, if linux:build finishes before mac:build, then both linux:rspec and linux:rubocop can start, even before mac:build and the build stage complete.
As usual, without the needs keyword, the production job requires all previous jobs to complete before it starts.
When using needs in your pipelines, you can also view a Directed Acyclic Graph of your jobs in the pipeline view. More on that can be found here: https://docs.gitlab.com/ee/ci/directed_acyclic_graph/index.html