There is a CruiseControl plugin that checks for changes to snapshot dependencies, triggering a build if required. It uses the Maven embedder to download the dependencies, then checks the timestamps of the snapshot files in the local repository. This works well enough, but it involves downloading all the parents and dependencies just to check some timestamps.
I'm working on a distributed CI system (e.g. Bamboo/Buildforge) and would like to avoid downloading the entire dependency hierarchy to check if a build is required. It is possible to determine the build date of a snapshot dependency by checking the maven-metadata.xml on the remote repository.
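For reference, the snapshot metadata being checked looks roughly like this (the coordinates and timestamps here are made up):

<metadata>
  <groupId>com.example</groupId>
  <artifactId>widget</artifactId>
  <version>1.0-SNAPSHOT</version>
  <versioning>
    <snapshot>
      <timestamp>20090506.121215</timestamp>
      <buildNumber>7</buildNumber>
    </snapshot>
    <lastUpdated>20090506121215</lastUpdated>
  </versioning>
</metadata>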
Are there any plugins or tools to streamline this process?
Assuming you're using Maven as your build process, you want a plugin to do the checking and the conditional build.
I don't know of any Maven plugin that will do exactly what you want. However, you should be able to cobble together a couple of plugins for the same effect.
Use the exec plugin with "wget" to fetch the maven-metadata.xml.
Then use the XSLT plugin to transform the resulting XML into a boolean value that indicates whether or not an update has occurred. You'll want to use XPath to select the //metadata/versioning/lastUpdated node and compare it to the current date and time. Finally, you'll need to examine the resulting transformed XML to determine whether you should proceed with the build.
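As a rough sketch, assuming the Codehaus exec-maven-plugin and xml-maven-plugin (the repository URL, output paths, and stylesheet are placeholders):

<!-- fetch the remote metadata with wget via exec-maven-plugin -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>fetch-metadata</id>
      <phase>validate</phase>
      <goals><goal>exec</goal></goals>
      <configuration>
        <executable>wget</executable>
        <arguments>
          <argument>-O</argument>
          <argument>${project.build.directory}/maven-metadata.xml</argument>
          <argument>http://repo.example.com/snapshots/com/example/widget/1.0-SNAPSHOT/maven-metadata.xml</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
<!-- then transform it with xml-maven-plugin -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>xml-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>validate</phase>
      <goals><goal>transform</goal></goals>
      <configuration>
        <transformationSets>
          <transformationSet>
            <dir>${project.build.directory}</dir>
            <includes><include>maven-metadata.xml</include></includes>
            <!-- up-to-date.xsl would read /metadata/versioning/lastUpdated -->
            <stylesheet>src/main/xslt/up-to-date.xsl</stylesheet>
            <outputDir>${project.build.directory}/metadata-check</outputDir>
          </transformationSet>
        </transformationSets>
      </configuration>
    </execution>
  </executions>
</plugin>

The stylesheet would compare the lastUpdated value to a given timestamp and emit something like <upToDate>false</upToDate> for a later step to test.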
Find those plugins at http://mojo.codehaus.org/plugins.html
It looks like Mercury provides the higher level API I was looking for.
Mercury provides an implementation-neutral way to access GAV-based repositories, including AV repositories like OSGi (OSGi access is not implemented yet). By access I mean reading artifacts and metadata from repositories and writing artifacts to repositories; metadata is updated by writes.
All the calls accept a collection of requests as input and return a response object that hides the results, normally a map<queryElement, Collection>. The response object has the convenience methods hasExceptions(), hasResults(), getExceptions(), and getResults().
One of the key building blocks is a hierarchy of Artifact data:
ArtifactCoordinates - truly just the 3 GAV components
ArtifactBasicMetadata - coordinates plus type/classifier, plus convenience methods like hash calculation and such
ArtifactMetadata - adds a list of dependency objects, captured as ArtifactBasicMetadata
DefaultArtifact - implements the Artifact interface and adds pomBlob (byte[]) and a file that points to the actual binary
Related
We're using Go.Cd and transitioning to Bamboo.
One of the features we use in Go.Cd is value stream maps. This enables triggering another pipeline and passing information (and build artifacts) to the downstream pipeline.
This is valuable when an upstream build has a particular version number, and you want to pass that version number to the downstream build.
I want to replicate this setup in Bamboo (without a plugin).
My question is: Is there a way to trigger a child plan in Bamboo and pass it information like a version number?
This has three steps.
1. Use a parent plan/child plan to set up the relationship.
2. Using the Artifacts tab, set up shared artifacts to transfer files from one plan to another.
3a. At the end of the parent build, dump the environment variables to a file:
env > env.txt
3b. Set up (using the Artifacts tab) an artifact selector that picks this up.
3c. Set up a fetch for this artifact from the shared artifacts in the child plan.
3d. Using the Inject Variables task, read the env.txt file you have transferred over. The build number from the original pipeline is now available in this downstream plan (just like Go.Cd).
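For example, if env.txt contains a line like the one below, an Inject Bamboo variables task configured with a namespace of, say, parent (the namespace is whatever you choose) makes it available to later tasks:

bamboo_buildNumber=123

${bamboo.parent.bamboo_buildNumber}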
Artifactory version: 4.15.0 or Latest
Summary (optional info, but can help anyone understand my case better):
Build+Test pipeline generates artifacts in the form of jar/war/zip/tar/rpms etc.
Once these artifacts are generated and stored in Artifactory with their usual build/test related properties (i.e. build time, build URL, build tool used, test pass status, test coverage percentage, etc. associated with a given artifact), I want to cherry-pick these artifacts to create multiple sub-system level releases, as each sub-system has different artifacts from different pipelines (services/apps/projects).
A sub-system level release just says: go pick a given jar/war/zip/rpm etc. for a given version of a project (making sure some testing is done and some artifact properties pass/match a defined selection criteria); the end result of a sub-system release is a deployment-manifest file at that sub-system level.
Some sub-system releases contain common artifacts (shareable, coming from various projects) and some contain artifacts created specifically for a target deployment environment (of a sub-system or higher level system release).
Deployment and testing are done at each sub-system level, and once they pass some set of tests, performance benchmarks, etc., all deployment+testing related properties at a given deploy+test environment level (for that sub-system release) are applied to all the artifacts which made up that sub-system release.
Now, a System level release contains many sub-system level releases, i.e. it refers to many sub-system level releases or sub-system level manifest files (JSON format, for any common or end-system specific sub-system). I know, fun times.
Finally, deployment and testing are performed at the System level, and all artifacts from any sub-system release (either common or environment specific) and any other "global artifacts" (which together make up a complete System level release) are tagged with these properties.
The idea behind applying properties at every level (service/app build+test, sub-system release, and system release) to the project level artifacts (jar/war/rpm/zip/tar/etc.) in any pipeline/automation/deploy/test step is that a user can easily query Artifactory at any time by passing a set of properties, to get the artifacts (rpms/zip/tar/etc.) for any service/app/sub-system/system level that were used for deploying/testing it.
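For example, artifacts tagged this way can be pulled back with Artifactory's property search REST endpoint, or with an AQL query (the property names and repository here are illustrative):

GET /api/search/prop?subsystem.release=1.4.2&test.status=passed&repos=libs-release-local

items.find({"@subsystem.release" : "1.4.2", "@test.status" : "passed"})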
--
I'm working on a solution for releasing various pipelines dependent on/based on artifact properties (Artifactory), and I'm wondering if there are any recommendations or limitations on applying "N number of properties" to an artifact, or on the types of values used.
Are there any performance impacts if the number of properties attached to an artifact in Artifactory crosses a certain number?
OR
The types of property values (key=value pairs) that I'm planning to use are of the form:
prop1="value1" or numberValue
prop2=[value1, value2]
prop3={..JSON blob ..}
Just trying to see if anyone has experienced issues with such limitations, if any exist. I checked the Artifactory website and other blogs, but couldn't find anything about limitations on the number of properties, or on the types of values associated with a property, and how they can impact Artifactory performance when querying or using Artifactory properties.
There is no conclusive answer to your question. Properties are saved as entries in the database, and yes, if you have a very big property table, database performance will be affected and property search will become slower. That being said, it all depends on the number of properties in the database and the specification of your database machine. There are Artifactory users who use tens of properties on each artifact they have in Artifactory, and it works perfectly for them. So, coming back to my first sentence, there is no conclusive answer to that question. :)
How can I configure a build definition to let me pick a solution configuration at build time?
I have 3 configurations in my solution: (Local, UAT and Live).
I want people to pick the configuration they need, and the build will do the config transforms, deployment, etc. as required. I have the build script I need; I just need to know how I can switch on the configuration.
If I cannot use the actual configurations, a custom property would do, but obviously I need to be able to access it in my build script.
My opinion is that your build definition should contain all three configurations, so that the build executes all three of them by default. Then you can insert a custom argument in your build process template as a "Configuration Override", with default = empty. Checking this Hofman post, you can make your argument part of the 'Queue new Build' dialog. So, when your users queue a new build, they either leave this empty and the build executes all configs, or they enter one of the three and only the selected one is executed. There are various ways to implement this in your build process template; in general, you might want to intervene in the section For Each Configuration in BuildSettings.PlatformConfigurations
and check whether your custom argument is empty (so all nodes should execute) or filled with a specific entry (so it should proceed only once). Further handling of user input that does not match any of the available configs should be added, so that the build can fail gracefully.
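For example, an If activity inside that loop could use a condition along these lines (assuming the default template's loop variable platformConfiguration and a custom string argument named ConfigurationOverride):

String.IsNullOrEmpty(ConfigurationOverride) OrElse String.Equals(platformConfiguration.Configuration, ConfigurationOverride, StringComparison.OrdinalIgnoreCase)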
I would like to skip publishing an artifact if it already exists in the repository, but as far as I can see from the documentation there isn't a way to do this. There is an overwrite attribute, but if set to false that causes the publish to fail if the artifact exists. I definitely don't want to overwrite the artifact, either.
I've looked into using <ivy:info> and <ivy:findrevision> to see if the artifact exists and set a property I can use on my publish target (as an unless attribute, for example), but neither of these tasks allows me to specify the repository to check.
I'd rather not resort to using an external taskdef, like antcontrib's try/catch tasks.
Does anyone have any other suggestions?
Both info and findrevision accept a settingsRef attribute. So you could use an extra settings file that references only the resolver you need (loaded via ivy:settings or ivy:configure) and pass that settingsRef to the task.
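A sketch of how that could look in the build file (the settings file name, resolver name, coordinates, and patterns are assumptions):

<ivy:settings id="publish.settings" file="ivysettings-publish.xml"/>

<target name="check-published">
  <!-- sets already.published only if this revision already exists in that repository -->
  <ivy:findrevision organisation="com.example" module="widget"
                    revision="${version}" property="already.published"
                    settingsRef="publish.settings"/>
</target>

<target name="publish" depends="check-published" unless="already.published">
  <ivy:publish resolver="shared" pubrevision="${version}">
    <artifacts pattern="dist/[artifact].[ext]"/>
  </ivy:publish>
</target>

Since findrevision only sets the property when the revision is found, the unless attribute skips the publish without failing it.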
Why would you run the "publish" task if you don't intend to save what you built?
I use the buildnumber task to ensure that my version number is incremented automatically based on what was previously published.
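A minimal sketch, with the organisation/module/resolver names made up:

<!-- asks the repository for the latest published revision starting with 1. and
     exposes ivy.new.revision (e.g. 1.5 if 1.4 was the latest) -->
<ivy:buildnumber organisation="com.example" module="widget" revision="1."/>
<ivy:publish resolver="shared" pubrevision="${ivy.new.revision}" status="release">
  <artifacts pattern="dist/[artifact].[ext]"/>
</ivy:publish>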
I would like to be able, as part of a Maven build, to set the build number (doesn't matter exactly what) in a properties file/class (so I can show it in a UI). Any ideas?
We used the Build Number Plugin, now available from Codehaus. It can generate a sequential build number or let you use a timestamp.
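A typical configuration looks something like this (the timestamp format shown is one of the documented options; with the default SCM-based numbering you also need an <scm> section in the POM):

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>buildnumber-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>validate</phase>
      <goals>
        <goal>create</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- use a timestamp instead of an SCM revision -->
    <format>{0,date,yyyyMMddHHmmss}</format>
    <items>
      <item>timestamp</item>
    </items>
  </configuration>
</plugin>

The plugin exposes ${buildNumber}, which you can filter into a properties file (e.g. a build.properties resource containing build.number=${buildNumber}, with resource filtering enabled) so the UI can read it.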
The Maven build number plugin should do what you want.
I use the maven-property-plugin to store the CruiseControl build label in a properties file (the build label is available as a system property named 'label').
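For reference, a sketch of that setup, assuming the Codehaus properties-maven-plugin and its write-project-properties goal (the property and file names are illustrative):

<properties>
  <!-- CruiseControl passes the build label as the 'label' system property -->
  <build.label>${label}</build.label>
</properties>

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>properties-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <goals>
        <goal>write-project-properties</goal>
      </goals>
      <configuration>
        <outputFile>${project.build.outputDirectory}/build.properties</outputFile>
      </configuration>
    </execution>
  </executions>
</plugin>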
A post on how to do this with Hudson.