I'm not really sure I've got Bamboo's workspace handling right...
We have the following situation:
stage1:
  job1: scm checkout
stage2:
  job2: build1
  job3: build2
stage3:
  job4: build3
  job5: build4
The repository is about 1.5 GB, so after every build I want to delete everything except the build artifacts on my agent. But if I delete something on my agent in stage 2, my jobs in stage 3 only get the "cleaned" workspace. Is this the default behaviour? And if so, how can I prevent my agent from getting "dumped" with x jobs = x * 1.5 GB of checkouts?
If you want to delete the working directory, go to the job configuration's Miscellaneous tab and check "Clean working directory after each build".
I haven't found this option in the documentation, but it is there (Bamboo 6.0.3).
Also note that if you have more than one agent, jobs might run on any of them (concurrently), so job1 (the checkout) could theoretically run on a different machine than the rest. You can solve that with tasks, which always run inside a single job and therefore on a single agent.
So I'm actively trying to work around the job limit Bamboo has in place, because I have many inactive repositories that only get fixed occasionally when new platform updates come out or a one-off feature is added.
What I would like is for my repository polling to pick up that there's been a change on one of my repository branches, run the job, and then, presto-change-o, we're back to square one, listening for another polling update from another change.
Example:
Repo 1 has a commit pushed
Bamboo "hears" the change and starts the job
Repo 2 has a commit pushed
Bamboo hears this change as well, but doesn't start it because only one agent is available; this change is queued for later
Repo 1's triggered update finishes and publishes an artifact that can be shared
Bamboo resolves and starts Repo 2's job
Is doing something like this even possible? The best solution (meh) I've found so far is to create one job with a sequential build that is basically checkout/build/checkout/build/checkout/build, but that would mean running through many unnecessary steps when only one repository has actually changed. It's not like these things change frequently.
You can add multiple repositories to your build plan, and in your repository polling trigger tick the checkboxes for all of the repositories added to the plan.
To add multiple repositories:
1. Open the plan configuration for editing.
2. Select the third tab, "Repositories".
3. Press the "Add repository" button.
4. Configure your repository and save.
5. Select the fourth tab, "Triggers".
6. Open your Repository Polling trigger and select all the repositories you added in steps 3-4.
7. Save the trigger.
Repository polling will then check all configured repositories, according to the documentation:
https://confluence.atlassian.com/display/BAMBOO058/Triggering+builds
You can also add additional repositories to the Source Code Checkout task, and check out each repository into a different subdirectory.
E.g. for repos R1, R2, R3 you will have working copy directories ./W1, ./W2, ./W3.
From there it's up to you. Either you clone your assembler task T into T1, T2, T3 so that each builds from the corresponding working copy; then every commit builds all of them and they all produce artifacts with the same build number. Or you add a shell-script task that discovers which working copy holds the latest commit (let's assume it is ./W2), creates a symbolic link to that working-copy subdirectory as ./MySymbolicLink, and have the job that assembles the build work from the ./MySymbolicLink folder.
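For illustration, a minimal sketch of such a script, assuming a Unix-style agent and Git working copies in the ./W1, ./W2, ./W3 directories mentioned above (adjust the directory list and link name to match your setup):

```sh
#!/bin/sh
# Pick the working copy whose HEAD commit is newest and point ./MySymbolicLink at it.
newest_dir=""
newest_ts=0
for dir in W1 W2 W3; do
    ts=$(git -C "$dir" log -1 --format=%ct)   # HEAD commit time as a Unix timestamp
    if [ "$ts" -gt "$newest_ts" ]; then
        newest_ts=$ts
        newest_dir=$dir
    fi
done
rm -f MySymbolicLink
ln -s "$newest_dir" MySymbolicLink
echo "Latest commit is in ./$newest_dir, linked as ./MySymbolicLink"
```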
Does anyone know of a way to split a single Jenkins job into parts and run them concurrently/parallel?
For example, if I have a job that runs tests which take 30 minutes, is there a way to break it into three 10-minute runs that execute at the same time, in three different instances?
Thanks in advance.
Create new jobs, named e.g. Test. You should select the job type based on the type of the root job.
If you have a Maven job type, you can set the workspace directory under Build -> Advanced. The Freestyle job type has this option directly under Project -> Advanced.
Set the same working directory for all jobs. The root job will compile, and all other jobs use the same working directory so they can reuse the compiled output.
For the test jobs, add the test execution as a build step and vary which tests are executed in each job.
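As an illustration only: if the root job is a Maven build, each test job's build step could run a different subset of the suite via Surefire's test filter. The class-name patterns below are hypothetical placeholders; split them however your suite is organised.

```sh
mvn surefire:test -Dtest='*FastTest'         # build step of job Test-1
mvn surefire:test -Dtest='*SlowTest'         # build step of job Test-2
mvn surefire:test -Dtest='*IntegrationTest'  # build step of job Test-3
```

Since the jobs share the root job's workspace, invoking surefire:test directly reuses the classes the root job already compiled.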
Edit your root job and remove the execution of the long-running tests there. You can then trigger the three test jobs from it, but for that you need the Parameterized Trigger Plugin.
The downside of this approach is that you need enough Jenkins executors to handle all the test jobs.
If you're using Jenkins 1.x, I would suggest trying the multijob plugin - I've successfully used it to split a single job into a parent job plus multiple child jobs:
https://wiki.jenkins-ci.org/display/JENKINS/Multijob+Plugin
If you're using Jenkins 2.x, then try out the pipeline feature :) It makes running parallel tasks very easy:
https://github.com/jenkinsci/pipeline-plugin/blob/master/TUTORIAL.md#creating-multiple-threads
If you want, I believe you can also use pipelines in Jenkins 1.x by means of a plugin. I haven't looked into that, though.
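For illustration, here is a minimal scripted-Pipeline sketch of the parallel approach; the suite names and the ./run-tests.sh script are placeholders for however you choose to split your tests.

```groovy
// Split one test run into three parallel branches, each on its own executor.
def branches = [:]
['suite-a', 'suite-b', 'suite-c'].each { suite ->
    branches[suite] = {
        node {                              // grab a separate executor per branch
            checkout scm                    // each branch gets its own workspace
            sh "./run-tests.sh ${suite}"    // hypothetical script running one third of the tests
        }
    }
}
parallel branches
```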
I'll divide this into two issues, but solving either one of them will solve the other.
Issue 1:
When I use the section at the bottom of the following picture and add "configurations to release", the build does not trigger Release Management.
I dug through the build logs for hours and saw that it stops the release when it finds that the configuration "does not match current":
If ConfigurationsToRelease Matches Current
Initial Property Values
Condition = False
Final Property Values
Condition = False
Final Property Values
Condition = True
Before more words, here is a picture that sums it up:
the build definition and MSBuild logs
In the default case, where "configurations to release" is blank, Release Management would continue from this point, write "Release the build" in the logs (as a command that happened), and the build would trigger Release Management.
If you look at my TFS build configurations, you can see they are exactly the same as the regular MSBuild configurations above, yet I still get this mismatch error.
In any case, I only have one dialog in the build definition in which to fill in my configurations, as shown in the 'configuration dialog' in the picture above.
I did manage to release this way once or twice in this same project the other day by adding release configurations; it somehow worked, but then stopped working (I think it worked once or twice as a glitch, as if something was cached). 99% of the other attempts failed, and it has always blocked my trigger from TFS to Release Management since I first tried it a few months ago.
Has anyone here experienced this? I have looked in a lot of places and found only one person complaining about it, and his solution was to remove the setting (not exactly a solution).
Is there a build argument that can fix this? (/p:something=something)
Issue 2: if anyone can solve this in a way other than through the release configurations setting, then I don't need issue 1 to be solved.
For anyone wondering why I even messed with the release configuration section of the build definition: I want RM to wait for all transformations to happen before it intervenes, and this seemed like a way to tell RM, "OK dude, see there? You've got two configurations to wait for."
The thing is, by default, when that configuration section is blank, RM gets in the way of the TFS build and the TFS build gets in the way of RM, something like a circular wait. RM expects both transformed folders to exist in the build outputs while the TFS build is waiting for RM to finish its run after the first build. TFS wants to continue building the second configuration (and transform), but RM is already involved, looking for its second configuration and breaking the build when it fails to find it. Hence the second configuration is never created, while RM is still waiting for it, the TFS build waits for RM, and the build breaks. Confusing? Read it again and see the picture below, because it's interesting enough.
More info for clarity:
The next stage of RM tries to get something from the build outputs folder before it is actually there.
For example, if I set Release build to true, only the first configuration is built (a folder is created, picture below). RM succeeds in its first step (QA.Release) and continues straight on to grab Release for its next stage, but that configuration hasn't been built yet by TFS Build, which is waiting for RM to finish its odd intervention in the middle of TFS's build work. And, like I said above, I'm sure I've seen it work once or twice in one of my attempted builds.
The TFS output folder with the Release build flag on (only one transform happens) / off (both transforms work), plus the RM error when it is on (circular wait)
If I understand your situation correctly, I think your problem is that you are building multiple configurations. This breaks one of the core tenets of continuous delivery, which is that for a given release you should build only once and deploy that same build to each stage in your pipeline, in sequence.
In order to do this your build needs to be stage agnostic, which in practice means all your configuration (e.g. the database connection string) needs to be tokenised so that the correct value can be swapped in for the particular stage (QA, Release, etc.). I have a blog post series here that explains the full process (for a sample web application) in great detail.
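Purely as an illustration of the tokenisation idea (not taken from the linked posts), a hypothetical Web.config fragment might carry a placeholder that the release pipeline replaces with the per-stage value at deploy time:

```xml
<!-- __AppDbConnectionString__ is a made-up token name; the release
     tooling substitutes the QA or Release value when it deploys. -->
<connectionStrings>
  <add name="AppDb" connectionString="__AppDbConnectionString__" />
</connectionStrings>
```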
I managed to solve issue 2, and by doing so, issue 1 is no longer relevant.
I switched to ReleaseTfvcTemplate.12.xaml as the build template (found in C:\Program Files (x86)\Microsoft Visual Studio 12.0\Release Management\Client\bin),
and then the build finished all configurations before RM intervened.
It truly seems like a bug in ReleaseDefaultTemplate.11.1.xaml, or it might be because I was using some additional MSBuild arguments (which were needed), or the fact that I'm using SlowCheetah to create transforms for a Windows service (transformations are only built in for web applications).
Either way, I'm now able to perform advanced tasks, like using the transforms to add or remove tags that should differ in production. For example, I can leave the diagnostics settings in the configuration file for QA use and remove them for production to lower the verbosity there. I'm still using the RM placeholders technique in conjunction with transformations, to enjoy the best of both worlds when it comes to environment-related changes, while keeping to the principle of passing the same build (DLLs) through all environments.
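To sketch what such a transform might look like (a hypothetical SlowCheetah/XDT Release transform; the element name is a placeholder for whatever diagnostics settings your config actually carries):

```xml
<!-- App.Release.config: removes the diagnostics section that the QA configuration keeps. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.diagnostics xdt:Transform="Remove" />
</configuration>
```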
Suppose I have a smart folder X having 5 jobs with multiple dependencies. For example, let us assume the job hierarchy is like this:
So, from the Planning tab, I order this smart folder for execution. Since I don't want to wait for Job 202 to execute, as it is a tape backup job that is not needed in the environment I am working in, I mark Job 202 as "OK" in the Monitoring tab. For Job 302, it is a prerequisite that Job 202 ends "OK".
In a similar setup, I have hundreds of jobs with similar dependencies. I have to order the folder from time to time and manually mark all the jobs that are not required to run as "OK". I cannot simply remove the jobs that I need to mark OK, as they have dependencies on the other jobs I want to execute.
My question is: how can I do this once, that is, mark all the unnecessary jobs as OK, and save it for all future instances when I run this workload?
If the job you mention as Job 202 is not actually required for Job 302 to start, then it should be independent. Remove it from the flow and make it independent. Make these changes in Control-M Desktop and write them to the database; you will not have to make the changes daily.
For all jobs not required in the "environment" you are testing, you can tick the "run as dummy" checkbox to convert those jobs to "dummy" jobs while maintaining the structure, relationships, and dependencies in your folder. A dummy job will not execute the command, script, etc.; rather, it only gives Control-M the instructions for the job's post-processing steps, or, in your case, the adding of conditions that let the job flow continue after the dummy job.
(I realize this is an old question; I provided a response should it be helpful to anyone that finds this thread after me)
I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest manner possible with TeamCity.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC; the problem there is that I would need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can set "Limit the number of simultaneously running builds" to force only one DeploySite to run at a time, what I really need is for it not to run at all while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution would be for DeploySite to set an 'in use' flag/marker and have the AcceptanceTests config clear that flag after AcceptanceSuiteA and AcceptanceSuiteB have completed. The problem then becomes making the next DeploySite down the pipeline wait until that gate has been opened again (doing a blocking wait within the build doesn't feel right; I want it flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this "set a flag over here and have this bit check it" approach is exactly the sort of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems the thing to do is to look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word "clobber" in your search.
Then you install groovy-plug and use its StartBuildPrecondition facility:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use those locks to express the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
(This is not a fully productised solution, hence the tracker item remains open.)
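As an illustration only (assuming the property names from the quote above are entered verbatim as system properties; as the edit below notes, the exact placement and name format are unconfirmed), the assignment might look like this, with "StagingSite" as a made-up lock name and arbitrary values:

```
DeploySite        ->  system.locks.writeLock.StagingSite = 1
AcceptanceSuiteA  ->  system.locks.readLock.StagingSite  = 1
AcceptanceSuiteB  ->  system.locks.readLock.StagingSite  = 1
```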
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, nor what the exact name format should be: is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers: how does one give read access to builds that are triggered by the completion of a writeLock build? Does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to queue up the parent and child dependencies at the same time?