How to provision a bundle committed through the HTTP API in Apache ACE

I am committing a bundle (say Test.jar) to the Apache ACE repository using the HTTP API: http://<Ace Host>/repository/commit?customer=apache&name=shop&version=2. After this, how can I provision this bundle to a target?

I am a bit puzzled by your question, because the repository you mention contains metadata (in XML format) that describes the relationships between artifacts (bundles or other files), features and distributions. You should not commit bundles to it. Bundles go in the OBR, which has its own REST API if you want to upload them programmatically.
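A minimal sketch of such an upload (the /obr endpoint path, the HTTP verb and the target file name are assumptions about a default ACE setup; check the documentation for your ACE version):

# Sketch: upload Test.jar to the ACE OBR over HTTP
# (the /obr path and POST verb are assumptions for a default setup)
curl -X POST --data-binary @Test.jar "http://<Ace Host>/obr/Test.jar"

After the upload, the bundle should show up in the OBR's repository.xml index.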
To provision a bundle to a target, using the Web UI, you:
Upload the bundle as a new artifact. It will be sent to the OBR and show up in the artifacts column.
Associate the artifact with a feature and the feature with a distribution.
Launch a new target and watch it show up in the UI (you might need to "retrieve" the current configuration for that to happen).
Associate the distribution with the target.
Commit everything.

Related

Can I deploy bamboo.yml YAML specs manually?

I am trying to learn how YAML specs work in Bamboo. So far I have managed to deploy a plan following the official documentation.
The documentation explains that you need to create a bitbucket repository, create bamboo.yml, set a new project in bamboo, enable a bamboo specs repository and finally you get your plan created and based in YAML specs.
My question is: can I create a plan.yml and deploy it from another Bamboo plan?
For example, for Java specs it is enough to check out a repo with several *.java spec files and use Maven and a pom file to deploy all the plans.
Can I do something similar with YAML specs? That is, have a folder in some SCM with several *.yml files and deploy them simultaneously, so that I end up with a lot of plans in Bamboo based on those yml files.
Yes and no: YAML can't be sent to the server the way Java specs can. It needs to be committed to the repo first.
You also need to have the different projects created before committing the YAML specs, and either grant that repo access to each individual project or enable the flag on the linked repo (in its Specs tab) that allows access to all projects.
If that is not an issue, then yes, there is no problem defining multiple plans in your Bamboo specs yml file, even across multiple projects, as long as they are split into separate YAML documents (with "---"), as in the sketch below.
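For illustration, a single bamboo.yml along these lines defines two plans in one file (project/plan keys, names and the Maven task are made up, and the exact schema depends on your Bamboo version):

---
version: 2
plan:
  project-key: PROJA
  key: FIRST
  name: First plan
stages:
  - Build stage:
      jobs:
        - Build job
Build job:
  tasks:
    - script:
        - mvn -B verify
---
version: 2
plan:
  project-key: PROJB
  key: SECOND
  name: Second plan
stages:
  - Build stage:
      jobs:
        - Build job
Build job:
  tasks:
    - script:
        - mvn -B verify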

Trouble deploying snapshot from Bamboo to Artifactory

I would like to deploy snapshot builds from Bamboo to Artifactory. My repository's Handle Snapshots option is checked and its Maven Snapshot Version Behavior is set to Unique. The repository's layout is gradle-default.
My goal is for a build plan to deploy an artifact at a location similar to the following:
repo-local:com.company/project/1.0-SNAPSHOT/project-1.0-20120612.101600.txt
In Bamboo I have an Artifactory Generic Deploy task, configured with the following in the Edit Published Artifacts field:
project-1.0-SNAPSHOT.txt=>com.company/project/1.0-SNAPSHOT
However, Artifactory rejects my build artifacts, saying: The repository 'repo-local' rejected the artifact 'repo-local:com.company/project/1.0-SNAPSHOT/project-1.0-SNAPSHOT.txt' due to its snapshot/release handling policy.
How do I get Artifactory to accept the artifact and automatically replace SNAPSHOT with a timestamp in the filename?
Your problem is most likely the fact that the path you deploy to is not considered a valid integration revision by the layout you've selected (gradle-default).
The gradle-default layout expects integration revisions like:
org/module/1.0-12345678912345/module-1.0-12345678912345.jar
That is, it expects a 14-digit timestamp appended after the base revision, while your path contains SNAPSHOT instead of such a timestamp.
If you want to have a pattern like:
com.company/project/1.0-SNAPSHOT/project-1.0-20120612.101600.txt
You will have to customize the layout to accept -SNAPSHOT as the folder integration revision and modify your artifact to contain a timestamp as the file integration revision.
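For example, a custom layout along these lines should match the path you want (the field names are as they appear in Artifactory's layout configuration; the regular expressions are assumptions based on your 20120612.101600 timestamp format):

Artifact Path Pattern:
[org]/[module]/[baseRev]-[folderItegRev]/[module]-[baseRev]-[fileItegRev](-[classifier]).[ext]
Folder Integration Revision RegExp: SNAPSHOT
File Integration Revision RegExp: \d{8}\.\d{6}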
I'm guessing your assumption was that Artifactory would convert the non-unique integration revision to a unique one; Artifactory performs this conversion only when the repository is set to the default Maven layout and the artifacts adhere to Maven's layout.
This is due to the fact that while Maven has defined standards for integration revisions, Gradle has no such standard; a Gradle revision could be practically anything.
On top of that, the concept of unique and non-unique integration revisions doesn't really exist in the Gradle world; Gradle has no built-in functionality to support these features, so when you see a Mavenized path in Gradle, it's basically just mimicking the pattern.

How to make a maven project buildable for the customer

We have a project which should be buildable by the customer using Maven. It has some open-source dependencies that are mavenized (no problem), some that aren't, proprietary stuff (the Oracle JDBC driver) and some internal stuff.
Until now we had everything but the first category packaged with the project itself in a local repository (a repository with a file://path-in-project-folder URL specified in the project's pom.xml).
We would love to move these out of the project, as we are about to use them in other projects as well. Currently we plan to use nexus as an internal maven repository.
What's the best practice to make such dependencies/Maven repositories available to the customer so they can continue to build the project?
Ideas so far:
Customer sets up a Nexus repository as well; we somehow deploy all these non-public dependencies to their repository (like a mirror)
We provide a 'dumb' dump/snapshot of the non-public dependencies; the customer adds this snapshot to their settings.xml as a repository (but how is this possible?)
Make our internal Nexus repo available to the customer's build server (not an option in our case)
I'm wondering how others solve these problems.
Thank you!
Of course, hosting a repository of some kind is a straightforward option, as long as you can cover the uptime / bandwidth / authentication requirements.
If you're looking to ship physical artifacts, you'll find this pattern helpful: https://brettporter.wordpress.com/2009/06/10/a-maven-friendly-pattern-for-storing-dependencies-in-version-control/
That relies on the repository being created in source control - if you want a project to build a repository, consider something like: http://svn.apache.org/viewvc/incubator/npanday/trunk/dist/npanday-repository-builder/pom.xml?revision=1139488&view=markup (using the assembly plugin's capability to build a repository).
Basically, by building a repository you can ship that with the source code and use file:// to reference it from within the build.
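A minimal sketch of that last step, assuming the shipped repository lives in a lib-repo directory inside the project (the directory name and coordinates below are illustrative):

<!-- pom.xml: resolve dependencies from a repository shipped with the sources -->
<repositories>
  <repository>
    <id>project-local</id>
    <url>file://${project.basedir}/lib-repo</url>
  </repository>
</repositories>

Artifacts can be placed into such a directory in the proper layout with the install plugin, e.g. mvn install:install-file -Dfile=ojdbc.jar -DgroupId=com.oracle -DartifactId=ojdbc -Dversion=11.2.0 -Dpackaging=jar -DlocalRepositoryPath=lib-repo.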
There are two options:
Document exactly what artifacts you need to compile that are not available via Maven Central.
Set up Nexus, make an export with Nexus, give the export to the customer, and have them import it. I'm not sure whether you would run into licensing issues.
I assumed that you already have a repository manager, but it reads like you don't.

maven clearcase integration analysis

I am planning to integrate ClearCase UCM (under a dynamic view) with Maven.
1) I found that the Maven SCM provider is only partially implemented for ClearCase. Are there still issues with this? What does "partially implemented" mean here?
2) How compatible is ClearCase with Maven?
3) Are there any issues or limitations with integrating these two tools?
4) The Maven docs say that it is not possible to use SCM plugin features like creating tags (applying labels), creating change logs, and so on.
5) Where can I find good documentation on integrating Maven with ClearCase? The Apache site provides some, but it is not very clear for beginners.
There is very little documentation on Maven with UCM ClearCase, and there are limitations like the ones described in SCM Implementation: ClearCase:
The ClearCase SCM provider uses snapshot views.
(so no dynamic views, for instance; but you mention tags, which should be implemented as UCM baselines)
As no SCM metadata can be accessed, it is not possible to use SCM plugin features like creating tags (applying labels), creating changelogs, and so on.
Another limitation, in this thread:
Hi. I have been able to integrate Hudson and ClearCase without too much trouble using a Windows machine. Downloading source code from a given baseline or stream is fine.
The problem comes if you try to use some Ant tasks to check out a pom file, make some changes (like updating some version numbers) and then check in the modified pom file before starting to build.
No matter if I use an Ant script with ClearCase tasks, or internal Java classes, or even a maven-release-plugin for Hudson that tries to do this kind of job, I always end up with the following error:
cleartool: Error: Type manager "_xml2" failed create_version operation
when trying to check in an XML file.
Which kind of integration are you looking for?
If it's about identifying and documenting the changes between UCM baselines, streams, activities and components, you can use CompBL - a complementary add-on for ClearCase.
It's an easy-to-install yet very powerful add-on.
Cheers
This is an error thrown by ClearCase while checking in XML files, when the XML file exceeds roughly 1000 characters.
Try changing the XML file's element type; this will resolve the issue: cleartool chtype file file.xml
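A minimal command-line sketch ("file" is the element type named in the answer above; verify the available type names in your VOB first):

cleartool describe file.xml          # confirm the element's current type
cleartool chtype -nc file file.xml   # change the element type; -nc skips the comment prompt

With the type changed, the _xml2 type manager is no longer involved in creating new versions of the file.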

Maven best practice for generating artifacts for multiple environments [prod, test, dev] with CI/Hudson support?

I have a project that needs to be deployed into multiple environments (prod, test, dev). The differences mainly consist of configuration properties/files.
My idea was to use profiles and overlays to copy/configure the specialized output. But I'm stuck on whether I should generate multiple artifacts with specialized classifiers (e.g. "my-app-1.0-prod.zip/jar", "my-app-1.0-dev.zip/jar") or create multiple projects, one for every environment.
Should I use the maven-assembly-plugin to generate multiple artifacts for every environment?
Anyway, I'll need to generate all of them at once, so it seems that profiles do not fit... still puzzled :(
Any hints/examples/links would be more than welcome.
As a side issue, I'm also wondering how to achieve this in CI (Hudson/Bamboo): generating and deploying these artifacts for all the environments to their proper servers (e.g. using the SCP Hudson plugin)?
I prefer to package configuration files separately from the application. This allows you to run the EXACT same application and supply the configuration at run time. It also allows you to generate configuration files after the fact for an environment you didn't know you would need at build time (e.g. CERT).
I use the "assembly" tool to zip up each domain's config files into named files, along the lines of the descriptor below.
I would use the version element (like 1.0-SNAPSHOT, 1.0-UAT, 1.0-PROD), and thus tags/branches at the VCS level, in combination with profiles (for environment-specific things like machine names, user names, passwords, etc.) to build the various artifacts.
We implemented an m2 plugin to build the final .properties file using the following approach:
The common, environment-unaware settings are read from common.properties.
The specific, environment-aware settings are read from dev.properties, test.properties or production.properties, thus overriding default values if necessary.
The final .properties file is written to disk with the Properties instance after reading the files in the given order.
Such a .properties file is what gets bundled, depending on the target environment.
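The merge logic itself is just layered Properties loading; a self-contained sketch (the input file names are the ones mentioned above, the output name is an assumption):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

public class MergeProperties {
    public static void main(String[] args) throws IOException {
        // pick the target environment: dev, test or production
        String env = args.length > 0 ? args[0] : "dev";
        Properties props = new Properties();
        // 1. environment-unaware defaults
        try (InputStream in = new FileInputStream("common.properties")) {
            props.load(in);
        }
        // 2. environment-specific values override the defaults
        try (InputStream in = new FileInputStream(env + ".properties")) {
            props.load(in);
        }
        // 3. write the merged result; this is the file that gets bundled
        try (OutputStream out = new FileOutputStream("application.properties")) {
            props.store(out, "merged configuration for " + env);
        }
    }
}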
We use profiles to achieve that, but we only have two: the default profile - which we call the "development" profile - includes the configuration files, while the "release" profile does not (so they can be properly configured when the application is installed).
I would use profiles to do it, and I would append the profile name to the artifact name if you need to deploy it. I think it is somewhat similar to what Pascal suggested, only that you would be using profiles and not versions.
PS: Another reason why we have only dev/release profiles is that whenever we send something to UAT or PROD, it has been released, so if there is a bug we can track down what the state of the code was when the application was released - it is easier to tag it in SVN than to try to reconstruct its state from the commit history.
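A sketch of appending the profile to the artifact name (profile ids and the property name are illustrative; note that finalName only affects the file name in target/, not the installed artifact's coordinates):

<!-- pom.xml: one profile per environment sets a suffix for the artifact name -->
<profiles>
  <profile>
    <id>prod</id>
    <properties>
      <env.suffix>prod</env.suffix>
    </properties>
  </profile>
  <profile>
    <id>dev</id>
    <properties>
      <env.suffix>dev</env.suffix>
    </properties>
  </profile>
</profiles>
<build>
  <finalName>${project.artifactId}-${project.version}-${env.suffix}</finalName>
</build>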
I had this exact scenario last summer.
I ended up using profiles for each higher environment, with classifiers. The default profile was a "do no harm" development build. I had DEV, INT, UAT, QA, and PROD profiles.
I ended up defining multiple jobs within Hudson to generate the region-specific artifacts.
The one thing I would have done differently was to architect the projects a bit differently, so that the region-specific build sat outside of the modularized main project. That way it would simply pull in the latest artifacts for each specific build rather than rebuilding the entire project for each region.
In fact, when I set up the jobs, the QA and PROD jobs were always set up to build off of a tag. Clearly this is something that you would tailor to your specific workplace rules on deployment.
Try using https://github.com/khmarbaise/multienv-maven-plugin to create one main WAR and one configuration JAR for each environment.