How can I make "to be continuous" jobs interruptible?
And by the way, why isn't it the default behavior?
How can I make "to be continuous" jobs interruptible?
The jobs can be made interruptible simply by overriding them in your .gitlab-ci.yml file.
For example, here is how to make one single job (mvn-build) interruptible:
mvn-build:
  interruptible: true
And here is how to make all Maven template jobs interruptible (using the base job .mvn-base):
.mvn-base:
  interruptible: true
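And if you want every job in the pipeline to be interruptible, newer GitLab versions also accept interruptible under the top-level default keyword (check the keyword reference for your GitLab version before relying on this):
default:
  interruptible: true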
Why isn't it the default "to be continuous" behavior?
If two branches are merged at (roughly) the same time, the first resulting pipeline will be cancelled. If the second one has failing tests, you won't know which merge was the root cause, and it will take some digging to find where the error lies.
Our opinion is that a developer's time is more valuable than GitLab CI's, so by default we prefer to let GitLab CI do the extra work.
Related
We have multiple runners that share a tag, and these tags can't be changed because of workplace policies. So we currently have something set up like this:
#12345 (Foo)
tag: foobar
#23456 (fOo)
tag: foobar
#34567 (foO)
tag: foobar
However when we run a job using the "foobar" tag, it sometimes fails solely depending on the runner that gets chosen. I ended up running the pipeline a dozen or so times to check, and runners #12345 and #23456 always end up failing, even when the build is fine. The #34567 runner succeeds when the build is fine and fails when the build isn't. The runner documentation says I can specify the runner by name, but looking over the keyword reference documentation I'm not seeing how to specify it.
It's not possible. Runners can only be selected by tags, and runners sharing an identical tag should be homogeneous in terms of software versions and hardware. The first one that is ready to take your CI job will run it.
So one should never need to select a specific runner within a group that shares a single tag.
Each job can tell which runner is executing it by looking at the CI_RUNNER_ID environment variable, but that is not usable for your purpose, unless you force a job failure whenever the runner is not the "good" one and retry until the job is randomly picked up by the runner you want. But of course that would be a weird solution.
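For illustration only, here is a minimal sketch of that (admittedly weird) workaround, assuming #34567 is the runner you want and using GitLab's built-in retry keyword; the job name and script are made up:
test-job:
  tags:
    - foobar
  retry: 2    # re-dispatch the job up to 2 times, hoping the right runner picks it up
  script:
    # fail fast if the wrong runner took the job, so retry can re-dispatch it
    - '[ "$CI_RUNNER_ID" = "34567" ] || { echo "Wrong runner $CI_RUNNER_ID, failing on purpose"; exit 1; }'
    - ./run-the-real-tests.sh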
No. The documentation is misleading. You can only use tags to limit what runner(s) your jobs run on.
The only other way around this would be to register your own runner(s) for your project/group and give them the tags you need. Though I doubt that's an acceptable solution, for obvious reasons.
Ultimately, your GitLab administrator will need to configure your runner(s) to have an additional tag by which you can uniquely identify the runner(s) if you want to be able to have your jobs use a specific runner out of your shared runner pool.
We have a fairly complex .gitlab-ci.yml configuration. I know about the new gitlab pipeline editor, but I can't find a way to 'simulate' what jobs get picked by my rules depending on the branch name, tag, etc.
On our jobs, we have a $PIPELINE custom variable that lets us have different pipeline 'types'; schedules set this variable to different values, like this:
rules:
  - if: '$PIPELINE == "regular" && ($CI_COMMIT_BRANCH == "master" || $CI_COMMIT_TAG != null)'
or like this:
rules:
  - if: '$CI_COMMIT_TAG != null'
Is there a way to 'simulate' a pipeline with different branch names, tags and variables so I can see which jobs get picked in each case, without actually running the pipelines (e.g. with a test tag, etc.)? Or is there a better way to do this?
Thanks in advance.
Not quite, but this is close, with GitLab 15.3 (August 2022):
Simulate default branch pipeline in the Pipeline Editor
The pipeline editor helps prevent syntax errors in your pipeline before you commit. But pipeline logic issues are harder to spot. For example, incorrect rules and needs job dependencies might not be noticed until after you commit and try running a pipeline.
In this release, we’ve brought the ability to simulate a pipeline to the pipeline editor.
This was previously available in limited form in the CI Lint tool, but now you can use it directly in the pipeline editor. Use it to simulate a new pipeline creation on the default branch with your changes, and detect logic problems before you actually commit!
See Documentation and Issue.
New to Gitlab CI/CD.
What is the proper construct to use in my .gitlab-ci.yml file to ensure that my validation job runs only when a "real" checkin happens?
What I mean is, I observe that the moment I create a merge request, say—which of course creates a new branch—the CI/CD process runs. That is, the branch creation itself, despite the fact that no files have changed, causes the .gitlab-ci.yml file to be processed and pipelines to be kicked off.
Ideally I'd only want this sort of thing to happen when there is actually a change to a file, or a file addition, etc.—in common-sense terms, I don't want CI/CD running on silly operations that don't actually really change the state of the software under development.
I'm passably familiar with except and only, but these don't seem to be able to limit things the way I want. Am I missing a fundamental category or recipe?
I'm afraid what you ask is not possible within Gitlab CI.
There could be a way to use the CI_COMMIT_SHA predefined variable since that will be the same in your new branch compared to your source branch.
Still, the pipeline will run before it can determine or compare SHAs in a custom script or condition.
GitLab runs pipelines for branches or tags, not commits. Pushing to a repo triggers a pipeline, and branching is in fact pushing a change to the repo.
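For what it's worth, here is a rough sketch of that CI_COMMIT_SHA idea, assuming the source branch is the default branch and using a made-up script name; as noted above, the pipeline still starts, so this can only short-circuit the job, not prevent the run:
validate:
  script:
    # fetch the default branch and compare its tip with the commit being built
    - git fetch origin "$CI_DEFAULT_BRANCH"
    - |
      if [ "$CI_COMMIT_SHA" = "$(git rev-parse FETCH_HEAD)" ]; then
        echo "Branch tip is identical to $CI_DEFAULT_BRANCH, nothing new to validate"
        exit 0
      fi
    - ./run-validation.sh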
I have a build job which takes a parameter (say, which branch to build) and, when it completes, triggers a testing job (actually several jobs) which does things like download a bunch of test data and check that the new version works with that test data.
My problem is that I can't seem to figure out a way to show the test results in a sensible way. If I just use one testing job, the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want; and if I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion (say, 6 branches and 6 different types of testing mean I need 36 testing jobs, and then when I want to make a change, say to save more builds, I need to update all 36 by hand).
I've been looking at the Job Generator Plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs created/updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is it just that separating the building and testing jobs like this is not recommended, or is there some other method I haven't found yet that allows filtering of test results for a job based on build parameters?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
In particular, I wouldn't dig through all the Jenkins job results for an overview, but would create a web page or something like that instead.
I wouldn't expect the full matrix of builds and tests; the combinations that are actually needed will become clear from the use cases.
I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest possible way enabled by TeamCity.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC; the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can set "Limit the number of simultaneously running builds" to force only one DeploySite to run at a time, I need it to run one at a time and not while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an "in use" flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right; I want it to be flagged as "not yet started" rather than looking like it's taking a long time to do something). However, this sort of "stuff a flag over here and have this bit check it" is exactly the mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set that flag when a deploy starts and clear it after the tests have run.
It seems you should go look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word "clobber" in your search.
Then you install groovy-plug and use the StartBuildPrecondition facility it provides:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use it to express the fact that the dependent configs "read" and the DeploySite config "writes" the shared item.
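To make that concrete, one plausible arrangement (purely illustrative, with a made-up lock name; the EDIT below notes that the exact property placement and name format are still open questions):
DeploySite build configuration (the writer):
  system.locks.writeLock.DeployedSite = true
AcceptanceSuiteA / AcceptanceSuiteB build configurations (the readers):
  system.locks.readLock.DeployedSite = true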
(This is not a full productised solution hence the tracker item remains open)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, nor what the exact name format should be: is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers: how does one give builds that are triggered by completion of a writeLock task read access? Does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependencies at the same time?