TL;DR: How can I arrange it so that a snapshot dependency does not trigger new builds?
For my test processes to run, they need a "Test" environment. Creating such an environment is simple but lengthy; it can take 45 minutes to an hour to finish building one. Further, variables such as the name of the environment are not fixed until the environment has finished building.
In my TeamCity build definition, I could put "build environment if missing" as a build step. However, that means that the first test of the day will take 45 minutes to run.
Instead, we created a separate build, scheduled to run every morning, that builds the test environment for the day. Our test build then has a snapshot dependency on that build so it can use that build's parameters to determine the environment information, and everything works as expected, except for one issue:
When a new test is run, it frequently seems to trigger a rebuild of the test-environment build.
We don't want this to ever happen; the test environment creation is 'done' for the day and should not need to run again until tomorrow. How can we achieve this?
1. You already have a time-based trigger, so the environment will be prepared every morning.
2. Create a snapshot dependency in your product's TeamCity configuration (not in the one that prepares the test environment) and tick "Do not run new build if there is a suitable one".
3. The configuration that sets up the test environment should not have any VCS root (or it should point to a quiet part of source control where commits will not happen). To physically set up your environment you should not need any source-code mapping; you can consume everything you need through your own NuGet packages, for example.
Note: in this workflow, every build of your real project will still enqueue a build of the environment-setup configuration (so it physically appears in the build queue). But when its turn comes up, TeamCity compares changes since the last build, finds no new commits (the VCS root points to a quiet place in source control, per step 3), and skips the build in under a second.
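If you maintain your build configurations with TeamCity's Kotlin DSL rather than the UI, step 2 looks roughly like the sketch below. The configuration IDs are illustrative, and the exact package name depends on your TeamCity version:

    // sketch: snapshot dependency that reuses a suitable earlier build instead of queuing a new one
    import jetbrains.buildServer.configs.kotlin.*

    object RunTests : BuildType({
        name = "Run Tests"
        dependencies {
            // PrepareTestEnvironment is the scheduled morning job (illustrative ID)
            snapshot(PrepareTestEnvironment) {
                // the DSL equivalent of ticking "Do not run new build if there is a suitable one"
                reuseBuilds = ReuseBuilds.SUCCESSFUL
                onDependencyFailure = FailureAction.FAIL_TO_START
            }
        }
    })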
The GitLab documentation says the following about GIT_STRATEGY: none:
none also re-uses the project workspace, but skips all Git operations (including GitLab Runner's pre-clone script, if present). It is mostly useful for jobs that operate exclusively on artifacts (e.g., deploy). Git repository data may be present, but it is certain to be out of date, so you should only rely on files brought into the project workspace from cache or artifacts.
I'm still a bit confused about how this is supposed to work. If the source code is not guaranteed to exist, then there might be no source in the project workspace and thus the .gitlab-ci.yml file would also be missing. Without a build script the job must fail. If the source is missing only part of the time depending on external factors, the job will fail randomly, which is even worse than failing every time. However, if it fails every single time then what's the point of the feature?
Another possibility I see is that .gitlab-ci.yml might be injected at runtime, so that even without a fresh copy of the repository there would be a build script. If so, could I define further files from my repository to inject into the build process? What are the restrictions on these particular jobs?
Yes, the .gitlab-ci.yml file is not copied onto the system, just like all the other files. But that doesn't matter, because the job is not run from the file: the runner receives the job as a script (and the file is evaluated even before that, since it defines the target the job will run on). It is not possible to copy only selected files without a git clone, although you may want to copy files from some other server.
A good example of when you want GIT_STRATEGY: none is something like a Slack notification as the last stage of a build, when you really don't want to clone gigabytes of repository data just to push a notification.
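For example, a minimal sketch of such a notification job (the job name, stage, and SLACK_WEBHOOK_URL variable are illustrative, not from the original question):

    notify:
      stage: deploy
      variables:
        GIT_STRATEGY: none   # reuse the workspace, skip all Git operations
      script:
        # push a simple message to Slack; assumes SLACK_WEBHOOK_URL is defined as a CI variable
        - 'curl -X POST --data "{\"text\": \"Pipeline $CI_PIPELINE_ID finished\"}" "$SLACK_WEBHOOK_URL"'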
I'm running a Maven build in Jenkins, and my builds are failing because Jenkins isn't smart enough to clean up after my build: it just tries to delete the entire workspace, and the build fails immediately. Is there a way to keep Jenkins from failing because of this (or from even trying), and instead let my "mvn clean" script run first? If I can do this, then I can do the specialized cleanup that is required.
I suspect that you are using the Workspace Cleanup Plugin. The first thing to do is check what happens if you run it as a pre-build step rather than post-build.
Alternatively, there may be a custom step in the build that tries to wipe out the workspace. See what happens if you invoke it as the first build step rather than the last, or try using the aforementioned plugin.
If that does not help, here are other general cleanup strategies you may employ:
Start another job as a post-build step that will clean up after the upstream job.
Have a regularly scheduled (say, nightly) job that does the cleanup. In this case you'll probably need to 'partition' your workspace by builds - i.e. create a separate subdirectory in the workspace for every build instance (keyed by BUILD_ID) so that builds do not accidentally use leftover files from previous builds.
Have a cleanup build step in your job. In this case you do not need to create extra jobs. However, as it runs before the build is over, it may not be able to clean up completely (say, it won't be able to delete artifacts that need to be archived by a post-build step).
To summarize: yes, you can run your script first; just make sure it does not delete things prematurely. Also examine your job configuration to find where that cleanup of yours is invoked, and either set it up as a pre-build step or remove it.
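As a concrete sketch, the first "Execute shell" build step could perform the specialized cleanup itself (the extra paths here are hypothetical):

    # first build step: targeted cleanup instead of wiping the whole workspace
    mvn clean
    # remove leftovers that "mvn clean" does not know about (hypothetical paths)
    rm -rf "$WORKSPACE/test-output" "$WORKSPACE/tmp"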
We have a build that, among other things, runs MSTest. We have separated some tests into a "Nightly" category, distinct from everything else, to keep our build running quickly.
We want:
On SCM change trigger, run all uncategorized tests
On nightly schedule, run all tests including "Nightly" category
I was setting this up with parameterized builds, but there isn't an option to select parameters on SCM change or on a schedule.
Is there a workaround to make this work? Maybe using a second build project?
Got it to work with 2 projects:
My original project is set up with the SCM trigger, running the "Fast" category only.
I set up a second project with a scheduled nightly trigger; its only build step is to trigger the first project, running both the "Fast" and "Slow" categories.
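For reference, the category filtering itself is done with MSTest's /category switch; a sketch, with an illustrative test container and the question's "Nightly" category:

    REM SCM-triggered job: run everything except the Nightly category
    MSTest.exe /testcontainer:MyTests.dll /category:"!Nightly"

    REM nightly job: run all tests, including the Nightly category
    MSTest.exe /testcontainer:MyTests.dll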
When we build a Maven project without doing mvn clean, we sometimes get "voodoo errors" such as NoSuchMethodError. I believe these are caused by moving/renaming files.
I don't want to use the clean option in the CI, because it makes the build process take much longer. Is there another alternative?
You should always use clean in a CI build. CI builds must be reproducible and that requires starting from scratch!
And about the process taking longer: one of the many points of using CI is that you can keep working while it's running, so that should not be a problem.
But what I like to do is use multiple layers of CI per project:
A first job compiles and executes some basic tests*; this build should take less than 5 minutes.
If that succeeds, a second job executes all tests*, code metrics, Javadocs, etc.
If that succeeds, a third job deploys the build to a test server.
(Or you can let the first job trigger both the second and the third job at once.)
* You can implement the "some tests / all tests" split by configuring the Maven Surefire plugin differently per profile, as sketched below.
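A minimal sketch of such a per-profile Surefire configuration (the profile ID and the *SlowTest naming convention are illustrative):

    <!-- pom.xml fragment: a "fast" profile that runs everything except the slow tests -->
    <profiles>
      <profile>
        <id>fast</id>
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
              <configuration>
                <excludes>
                  <!-- assumes slow tests follow a *SlowTest naming convention -->
                  <exclude>**/*SlowTest.java</exclude>
                </excludes>
              </configuration>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>

The first job would then run "mvn test -Pfast", while the later jobs run a plain "mvn test".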
We have three build targets:
Continuous Integration: builds without doing a clean, and runs only the tests identified by Clover. This runs after each commit. On success it deploys to the test server.
Nightly: does a clean build and runs every single test. This runs every night. On success it deploys to the test server.
Release: same as Nightly, plus it creates a source-control label. Run manually.
The nightly build is more trustworthy in that a clean build is conducted. However, the CI build is quicker, meaning feedback is faster most of the time.
There is an underlying problem here with the build time, but this is at least a workaround while you look at more permanent ways to address it.
I'm in a bit of a pickle. I'm trying to run some environment-setup scripts before the build runs in an M2 project, but it seems that no matter how hard I try, the 'pre-build' scripts are never run early enough.
Before the 'pre-build' scripts are run, the project checks whether the correct files are in the workspace: files that won't be there until the scripts I've written have executed.
To make them 'pre-build', I'm using the M2 Extra Steps plugin, but it's not 'pre' enough.
Has anyone got any suggestions as to how I can carry out what I want to do?
Cheers.
Have you considered breaking it up into two projects, and setting the pre-build project to be upstream of the build project?
e.g.,
Foo Pre-build
Foo Build
After Foo Pre-build runs, cause "Foo Build" to run.
I have used this, admittedly in different scenarios than yours, quite successfully. This has the added benefit (if you need it) of allowing you to manually run a build without going through the pre-build steps, if you know they aren't necessary.
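Under the hood this is just the "Build other projects" post-build action; in the "Foo Pre-build" job's config.xml the trigger looks roughly like this (a sketch, using the project names above):

    <!-- fragment of Foo Pre-build's config.xml -->
    <publishers>
      <hudson.tasks.BuildTrigger>
        <childProjects>Foo Build</childProjects>
        <threshold>
          <name>SUCCESS</name>
          <ordinal>0</ordinal>
          <color>BLUE</color>
        </threshold>
      </hudson.tasks.BuildTrigger>
    </publishers>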
You should use the free-form (freestyle) project type, not the Maven project type.
If that is a problem (i.e., there are projects that expect to be triggered by your job, or that it is triggered from), consider using a custom workspace location and having a free-form project execute in that workspace before the Maven project runs. The free-form project can then be used as the trigger for the Maven project.
Does adding another build step as a shell script work?
My problem stemmed from the fact that I wanted to set up my workspace before anything ran, due to an issue with dynamic views (ClearCase) not being accessible from the workspace; I wanted to add a symlink to fix this.
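For reference, the symlink itself was a one-liner in a shell build step (the view path here is hypothetical):

    # link the ClearCase dynamic view into the Jenkins workspace (illustrative paths)
    ln -s /view/my_dynamic_view "$WORKSPACE/view"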
However, Andrew Bayer has made a change to the plugin, which I'm currently testing, that should fix this... so the question is probably invalid in its current form.
I will edit it once we come to a conclusion.