Is there a way to trigger a child plan in Bamboo and pass it information like a version number?

We're using Go.Cd and transitioning to Bamboo.
One of the features we use in Go.Cd is value stream maps. This enables triggering another pipeline and passing information (and build artifacts) to the downstream pipeline.
This is valuable when an upstream build has a particular version number, and you want to pass that version number to the downstream build.
I want to replicate this setup in Bamboo (without a plugin).
My question is: Is there a way to trigger a child plan in Bamboo and pass it information like a version number?

This has three steps.
1. Use a parent plan/child plan to set up the relationship.
2. Using the Artifacts tab, set up shared artifacts to transfer files from one plan to another.
3a. At the end of the parent build, dump the environment variables to a file:
env > env.txt
3b. Set up (using the Artifacts tab) an artifact definition that picks this file up.
3c. Set up a download of this shared artifact in the child plan.
3d. Using the Inject Variables task, read the env.txt file you have transferred over. The build number from the original pipeline is now available in the downstream pipeline (just like Go.Cd).
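For step 3a, rather than dumping the whole environment, you can write just the values you want to hand over in the key=value form the Inject Variables task reads. A minimal sketch - the file name, the "upstream" namespace and the bamboo_buildNumber environment variable are assumptions to adapt to how your Bamboo version exposes variables to Script tasks:

    # Parent plan, final Script task: write the values to pass downstream
    echo "buildNumber=${bamboo_buildNumber}" > version.properties
    echo "versionLabel=1.0.${bamboo_buildNumber}" >> version.properties

Share version.properties as an artifact (3b), fetch it in the child plan (3c), then point the Inject Variables task at it with a namespace such as upstream (3d); the child plan can then refer to ${bamboo.upstream.buildNumber}.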

Related

How to run the same job against multiple repositories with multiple triggers?

So I'm actively trying to circumvent the job limit Bamboo has in place, because I have many inactive repositories that get fixed occasionally when new platform updates come out or a one-off new feature is added.
What I would like to happen is for my repository polling to pick up that there's been a change on one of my repository branches, run the job, and presto-change-o we're back to square 1 where I'm listening again for another repository polling update from another change.
Example:
Repo 1 has a commit pushed
Bamboo "hears" the change and starts the job
Repo 2 has a commit pushed
Bamboo hears this change as well, but doesn't continue because only one agent is available; this change is queued for later
Repo 1's triggered update finishes and publishes an artifact that can be shared
Bamboo resolves and starts Repo 2's job
Is doing something like this even possible? The best solution (meh) that I've found thus far is to just create one job with a sequential build where it's basically checkout/build/checkout/build/checkout/build, but that would result in running through many unnecessary steps when only one repository has an update to poll. It's not like these things are changing frequently.
You can add multiple repositories to your build plan, and in your Repository Polling trigger tick the checkboxes for all repositories added to the plan.
To add multiple repositories:
1. Open the plan configuration for editing.
2. Select the third tab, "Repositories".
3. Press the "Add repository" button.
4. Configure your repository and save.
5. Select the fourth tab, "Triggers".
6. Open your Repository Polling trigger and select all the repositories you added in steps 3-4.
7. Save the trigger.
Then repository polling will check all configured repositories, according to the documentation:
https://confluence.atlassian.com/display/BAMBOO058/Triggering+builds
You can also add the additional repositories to the Source Code Checkout task and check out each repository into a different subdirectory.
E.g. for repos R1, R2, R3 you will have working copy directories ./W1, ./W2, ./W3.
Then it's up to you. Either you clone your assembler task T into T1, T2, T3 so that a build is made from each working copy - it will then be done for all working copies on every commit, and they will all produce artifacts with the same build number - or you add a shell script task that discovers the latest commit among all working copies (let's assume it is ./W2), creates a symbolic link to that working copy subdirectory as ./MySymbolicLink, and have the job that assembles the build work from the ./MySymbolicLink folder. A sketch of such a script follows.
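A sketch of such a script, assuming the working copies are Git clones (adapt the "latest commit" lookup to your VCS; the directory names follow the example above):

    #!/bin/sh
    # Pick the working copy with the newest commit and expose it via ./MySymbolicLink
    latest_dir=""
    latest_ts=0
    for dir in ./W1 ./W2 ./W3; do
        ts=$(git -C "$dir" log -1 --format=%ct 2>/dev/null || echo 0)
        if [ "$ts" -gt "$latest_ts" ]; then
            latest_ts=$ts
            latest_dir=$dir
        fi
    done
    rm -f ./MySymbolicLink
    ln -s "$latest_dir" ./MySymbolicLink

The assembler job then always builds from ./MySymbolicLink, whichever repository changed last.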

Bamboo: Using a newly created Tag/Branch in checkout tasks at later stages in the plan

I am trying to create a build plan which has a VCS Tagging (or VCS Branching) task in its first stage, and then at later stages uses the newly created tag (or branch) to check out code from it (the repository is SVN).
I use a plan variable for the tag/branch name - ${bamboo.repoBranch} - and this variable is also used in the repository URL. I understand that this URL would not be valid until the tagging/branching task is executed, but the tasks that try to check out from that URL are at later stages.
From what I understand, there is something like a code change detection phase, during which Bamboo checks all defined repositories for changes (no matter the order they are referenced in the plan, or even if they are not used in the plan at all). I think this is the reason my approach doesn't work - is that correct?
Here is the exception I get:
com.atlassian.bamboo.repository.InvalidRepositoryException: svn:
    at com.atlassian.bamboo.repository.svn.SvnRepository.detectCommitsForUrl(SvnRepository.java:527)
    at com.atlassian.bamboo.repository.svn.SvnRepository.collectChangesSinceLastBuild(SvnRepository.java:278)
Another alternative to what I am trying to achieve is to have a plan that creates the tag/branch and a child plan of that plan which uses the newly created tag/branch. The problem with this is that plan variables cannot be passed to child plans - I want to use Run Customized to override the value for ${bamboo.repoBranch} and have the overridden value passed to the child plan. From what I've read, the workaround for this is to use a script task which queues the next plan for execution via the Bamboo REST API, but that does not seem like a very elegant solution.
Any other approaches for what I am trying to achieve will be helpful.
Thanks
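For reference, the REST workaround mentioned above boils down to a single call that queues the child plan with the overridden variable. A sketch, assuming a child plan key of PROJ-CHILD and illustrative credentials and branch path - the bamboo.variable.* and executeAllStages parameters should be checked against the REST documentation for your Bamboo version:

    curl -X POST -u build_user:secret \
        "https://bamboo.example.com/rest/api/latest/queue/PROJ-CHILD?bamboo.variable.repoBranch=branches/my-new-branch&executeAllStages=true"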

TFS 2010 Build: Pick a configuration at build time

How can I configure a build definition to allow me to pick a solution configuration at build time?
I have 3 configurations in my solution: (Local, UAT and Live).
I want people to pick the configuration they need, and the build will do the config transforms, deployment, etc. as required. I have the build script I need; I just need to know how I can switch on the configuration.
If I cannot use the actual configurations, a custom property would do, but obviously I need to be able to access it in my build script.
My opinion is that your build definition should contain all three configurations, so that the build executes all three of them by default. Then you can insert a custom argument into your build process template as a "Configuration Override" with default = empty. Following this Hofman post, you can make your argument part of the 'Queue new Build' dialog. So, when your users queue a new build, they either leave it empty and the build executes all configs, or they enter one of the three and only the selected one is executed. There are various ways to implement this in your build process template; in general, you might want to intervene in the section For Each Configuration in BuildSettings.PlatformConfigurations and check whether your custom argument is empty (so all nodes should execute) or filled with a specific entry (so it should proceed only once). Further handling of user input that does not match any of the available configs should be added, so that the build can fail gracefully.
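Whichever way the override is wired into the template, the value that ultimately matters is the configuration name handed to MSBuild. Conceptually, the selected entry ends up as something like the following (the solution name and platform are illustrative):

    msbuild MySolution.sln /p:Configuration=UAT /p:Platform="Any CPU"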

TeamCity: Managing deployment dependencies for acceptance tests?

I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest manner that TeamCity enables.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC - the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can set Limit the number of simultaneously running builds to force only one DeploySite to happen at a time, I need not just one at a time, but also no new deployment while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:-
A crap solution is that DeploySite could set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right - I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this "set a flag over here and have this bit check it" style is exactly the mutable state / flakiness smell I'm trying to get away from.
EDIT 2: if I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false and then set the flag when a deploy starts and clear it after the tests have run
It seems you should go look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word "clobber" in your search.
Then you install the groovy-plug plugin and use the StartBuildPrecondition facility
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
therein to manage the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
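Taking the naming in that note literally, the sketch would be to add one such property to each configuration's parameters - the lock name (SharedTestSite here) is purely illustrative, and as the edits below say, the exact placement and format are unverified:

    # DeploySite (the writer)
    system.locks.writeLock.SharedTestSite
    # AcceptanceSuiteA and AcceptanceSuiteB (the readers)
    system.locks.readLock.SharedTestSite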
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, and what the exact name format should be - is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one give builds triggered by the completion of a writeLock build read access - does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?

checking for maven snapshot dependency changes on continuous integration server

There is a cruisecontrol plugin that checks for changes to snapshot dependencies, triggering a build if required. This involves using the Maven embedder to download the dependencies, then checking the timestamps of the snapshot files in the local repository. This works ok, but involves downloading all the parents and dependencies to check some timestamps.
I'm working on a distributed CI system (e.g. Bamboo/Buildforge) and would like to avoid downloading the entire dependency hierarchy to check if a build is required. It is possible to determine the build date of a snapshot dependency by checking the maven-metadata.xml on the remote repository.
Are there any plugins or tools to streamline this process?
Assuming you're using maven as your build process, you want a plugin to do the checking and conditional build.
I don't know of any maven plugin that will do exactly what you want. However,
you should be able to cobble together a couple of plugins for the same effect.
Use the exec plugin with "wget" to fetch the maven-metadata.xml.
Then use the xslt plugin to transform the resulting XML into a boolean value that indicates whether or not an update has occurred. You'll want to use XPath to select the //metadata/versioning/lastUpdated node and compare it to the current date and time. Finally, you'll need to examine the resulting transformed XML to determine whether you should proceed with the build.
Find those plugins at http://mojo.codehaus.org/plugins.html
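Outside of a Maven plugin, the same check can be sketched with a couple of shell commands - the repository URL, artifact coordinates and state file here are illustrative, and xmllint is assumed to be available:

    # Fetch the snapshot's metadata from the remote repository
    REPO=https://repo.example.com/snapshots
    ARTIFACT_PATH=com/example/myapp/1.0-SNAPSHOT
    wget -q -O maven-metadata.xml "$REPO/$ARTIFACT_PATH/maven-metadata.xml"

    # Compare the lastUpdated timestamp (yyyyMMddHHmmss) with the one recorded
    # at the previous build; proceed only if it has moved on
    remote=$(xmllint --xpath 'string(//metadata/versioning/lastUpdated)' maven-metadata.xml)
    previous=$(cat .last-snapshot-build 2>/dev/null || echo 0)
    if [ "${remote:-0}" -gt "$previous" ]; then
        echo "$remote" > .last-snapshot-build
        # trigger the build here, e.g. mvn clean install
    fi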
It looks like Mercury provides the higher level API I was looking for.
Mercury provides an implementation-neutral way to access GAV-based repositories, including AV repositories like OSGi (OSGi access is not implemented yet). By access I mean reading artifacts and metadata from repositories and writing artifacts to repositories; metadata is updated by writes.
All the calls accept a collection of requests as input and return a response object that hides the results, which is normally a map of queryElement to Collection. The response object has the convenience methods hasExceptions(), hasResults(), getExceptions() and getResults().
One of the key building blocks is a hierarchy of Artifact data:
ArtifactCoordinates - just the three GAV components
ArtifactBasicMetadata - coordinates plus type/classifier, plus convenience methods like hash calculation and such
ArtifactMetadata - adds a list of dependency objects, captured as ArtifactBasicMetadata
DefaultArtifact - implements the Artifact interface and adds pomBlob (byte[]) and a file that points to the actual binary