How to interrupt triggered GitLab pipelines - gitlab-ci

I'm using a webhook to trigger my GitLab pipeline. Sometimes this webhook fires a bunch of times in a row, but only the last pipeline needs to run (it's a static site generation). Right now, GitLab runs as many pipelines as I have triggers. My pipeline takes 20 minutes, so sometimes it keeps running for the rest of the day, which is completely unnecessary.
Both https://docs.gitlab.com/ee/ci/yaml/#interruptible and https://docs.gitlab.com/ee/user/project/pipelines/settings.html#auto-cancel-pending-pipelines only work on pushed commits, not on triggered pipelines.

A similar problem is discussed in gitlab-org/gitlab-foss issue 41560
Example of a use-case:
I want to always push the same Docker "image:tag", for example "myapp:dev-CI". The idea is that "myapp:dev-CI" should always be the latest Docker image of the application matching the HEAD of the develop branch.
However, if 2 commits are pushed, then 2 pipelines are triggered and executed in parallel, and the newer pipeline often finishes before the older one.
As a consequence, the pushed Docker image is not the latest one.
Proposition:
As a workaround on *nix you can fetch the running pipelines from the API and either wait until they finish or cancel them through the same API.
In the example below, the script checks for running pipelines with lower IDs on the same branch and sleeps until they are done.
The jq package is required for this code to work.
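A minimal sketch of such a script, assuming $API_TOKEN is a CI/CD variable holding an access token with read_api scope (the other variables are predefined by GitLab CI):

# Sleep while older pipelines are still running on the same branch.
while true; do
  OLDER=$(curl --silent --header "PRIVATE-TOKEN: ${API_TOKEN}" \
    "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines?ref=${CI_COMMIT_REF_NAME}&status=running" \
    | jq --arg id "$CI_PIPELINE_ID" '[.[] | select(.id < ($id | tonumber))] | length')
  [ "$OLDER" -eq 0 ] && break
  echo "Waiting for $OLDER older pipeline(s) to finish..."
  sleep 30
done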
Or:
Create a new runner instance
Configure it to run jobs marked as deploy with concurrency 1
Add the deploy tag to your CD job.
It's now impossible for two deploy jobs to run concurrently.
To guard against a situation where an older pipeline runs after a newer one, add a check in your deploy job that exits if the current pipeline ID is lower than the ID of the pipeline that performed the latest deployment (see the sketch below).
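A sketch of what that deploy job could look like, assuming the dedicated runner was registered with the deploy tag and the deployment target records the last deployed pipeline ID in a file (the path and the deploy script are illustrative):

deploy:
  tags:
    - deploy
  script:
    # Refuse to deploy if a newer pipeline has already deployed.
    - LAST=$(cat /var/deploy/last_pipeline_id 2>/dev/null || echo 0)
    - if [ "$CI_PIPELINE_ID" -le "$LAST" ]; then echo "A newer pipeline already deployed, skipping."; exit 0; fi
    - ./deploy.sh
    # Record this pipeline as the latest deployment.
    - echo "$CI_PIPELINE_ID" > /var/deploy/last_pipeline_id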
Slight modification:
For me, one slight change: I kept the global concurrency setting the same (8 runners on my machine, so concurrent = 8).
But I tagged one of the runners with deploy and added limit = 1 to its config.
I then updated my .gitlab-ci.yml to use the deploy tag in my deploy job.
Works perfectly: my code_tests job can run simultaneously on 7 runners, but deploy is "single-threaded" and any other deploy jobs go into the pending state until that runner is freed up.
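The relevant parts of such a config.toml, as a sketch (the runner name is illustrative, and the deploy tag itself is assigned when registering the runner, not in this file):

concurrent = 8            # global cap: up to 8 jobs across all runners

[[runners]]
  name = "deploy-runner"  # registered with the "deploy" tag
  limit = 1               # this runner takes at most one job at a time
  executor = "shell"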

Related

Configure allowed_pull_policies on shared GitLab runner

I'm using GitLab.com's managed CI runners, and I'd like to run my CI jobs using the if-not-present pull policy to avoid the extra minutes it takes to pull the image for each job. Trying to set that value in the .gitlab-ci.yml file gives me this error:
pull_policy ([if-not-present]) defined in GitLab pipeline config is not one of the allowed_pull_policies ([always])
This led me to the config.toml settings for restricting Docker pull policies, so I created a config.toml file at the root of my repository and tried that. However, I still get the same error.
Is config.toml only available for manual/self-hosted runners? Is there any other way to get past this?
Context
Image selection in .gitlab-ci.yml:
default:
  image:
    name: registry.gitlab.com/myorg/myrepo/ci/builder:latest
    pull_policy: if-not-present
Contents of config.toml:
[[runners]]
  executor = "docker"
  [runners.docker]
    pull_policy = ["if-not-present"]
    allowed_pull_policies = ["always", "if-not-present"]
First of all, the config.toml file is not meant to be in your repo but on the runner machine (or container).
But anyway, the always pull policy should not make image pulls last minutes if the layers are already cached locally: it just ensures you have the latest version by checking the image metadata. If the pulls take minutes, it means that either the layers are not available locally, or the image was actually updated (or the connection to your container registry is so incredibly slow that just checking the metadata takes minutes, but that is unlikely).
It is quite possible that GitLab's managed runners have no way to cache layers locally, in which case there is no practical difference between the always and if-not-present policies. For instance, for GitLab SaaS:
"A dedicated temporary runner VM hosts and runs each CI job."
(see https://docs.gitlab.com/ee/ci/runners/index.html)
Thus the downloaded layers are discarded as soon as the job finishes.

Gitlab CI - Trigger daily pipeline only if new changes have been committed

The company I work for has a self-hosted GitLab CE server, v13.2.1.
For a specific project I've set up the CI jobs to build according to the following workflow:
If a commit has been pushed to the main branch
If a merge request has been created
If a tag has been pushed
Every day at midnight to build the main branch (using scheduled pipelines)
Everything works fine. The only thing I would like to improve is that the nightly builds run even if the main branch has not been modified (no new commits).
I had a look at the GitLab documentation on workflow: rules in the .gitlab-ci.yml file, but I didn't find anything relevant.
The GitLab runner is installed in a VM and is set up as a shell executor. I was thinking of storing the last commit ID in a file in the home directory. I'm not a big fan of that solution, because:
it's an ugly fix.
The pipeline will be triggered by GitLab even if it does nothing. This will pollute the pipeline list.
Is there any way to set up the workflow: section to handle this, so the pipeline list won't contain unnecessary pipelines?
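For reference, the ugly fix I'd like to avoid would look roughly like this in the scheduled job's script (the file location is illustrative):

# Skip the nightly build if HEAD hasn't moved since the last run.
LAST_FILE="$HOME/.last_nightly_commit"
if [ "$CI_COMMIT_SHA" = "$(cat "$LAST_FILE" 2>/dev/null)" ]; then
  echo "No new commits since the last nightly build, nothing to do."
  exit 0
fi
# ... actual build steps ...
echo "$CI_COMMIT_SHA" > "$LAST_FILE"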

I want to run a specific drone pipeline in series

I have a drone.yml running 3 pipelines on my GitHub repo: one for pull requests that runs as soon as someone submits a pull request, one for releases that builds Docker containers and outputs a docker-compose file, and now I'm adding a pipeline that runs integration tests after a merge into master.
One of the steps updates a test server, which makes this task challenging. Is there a way to force this specific Drone pipeline to only run if no other instance of it is already running?
You can use depends_on to force the order of pipeline execution.
See Pipeline: Graph Execution in the Drone documentation.
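A sketch of a .drone.yml with two pipelines (names and images are illustrative), where the integration-tests pipeline only starts after the build pipeline has finished:

kind: pipeline
type: docker
name: build
steps:
  - name: compile
    image: golang:1.21
    commands:
      - go build ./...

---
kind: pipeline
type: docker
name: integration-tests
depends_on:
  - build
steps:
  - name: tests
    image: golang:1.21
    commands:
      - go test ./...

Note that depends_on orders pipelines within a single build; it does not by itself serialize separate builds of the same pipeline.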

How to disable simultaneous build on drone io?

I use Drone as CI and want to know how I can disable simultaneous builds. What's happening is that when I push two commits to my git repo, Drone triggers a build for each commit. How can I make the second build wait until the first one finishes?
Regarding the open source version of Drone: set the DOCKER_MAX_PROCS environment variable of your drone agent to 1, i.e. docker run -e DOCKER_MAX_PROCS=1 [...] drone/drone:0.5 agent. The agent will then run only one build at a time; other builds will queue up.
See the Installation Reference section in the readme for more info.

JBoss Cluster setup with Hudson?

I want to have a Hudson setup that has two cluster nodes with JBoss. There is already a test machine with Hudson, and it runs the nightly build and tests. At the moment the application is deployed on the Hudson box.
There are a couple of options in my mind. One is to use the SCP plugin for Hudson to copy the EAR file from the master to the cluster nodes. The other is to set up Hudson slaves on the cluster nodes.
Any opinions, experiences or other approaches?
edit: I set up a slave, but it seems that I can't make a job run on more than one slave without copying the job. Am I missing something?
You are right. You can't run different build steps of one job on different nodes. However, a job can be configured to run on different slaves; Hudson then determines at execution time which node the job will run on.
You need to configure labels for your nodes. A node can have more than one label, and every job can also require more than one label.
Example:
Node 1 has the labels maven and db2
Node 2 has the labels maven and ant
Job 1 requires the label maven: it can run on Node 1 and Node 2
Job 2 requires the label ant: it can run on Node 2
Job 3 requires the labels maven and db2: it can run on Node 1
If you need different build steps of one job to run on different nodes, you have to create more than one job and chain them. You only trigger the first job, which then triggers the subsequent jobs; each following job can access the artifacts of the previous one. You can even run two jobs in parallel and automatically trigger the next job when both are done. You will need the Join Plugin for the parallel jobs.
If you want load balancing and central administration from Hudson (i.e. configuring projects, seeing which builds are currently running, etc.), you must run slaves.