How to use Ivy/Ant to build using intermediate artifacts - ivy

I am trying to revise my build process to use Ant with Apache Ivy for my personal projects. These consist of a few shared modules, and a few application modules that depend on the shared modules. For the sake of this post, let's simplify and say I have a shared module (common) and an application module (application) which depends on common. Each module effectively has its own svn repository:
svn_repo_1/common/trunk
                 /branches
                 /tags
svn_repo_2/application/trunk
                      /branches
                      /tags
I check out the relevant revision into a common workspace, in a flat structure:
workspace/common
workspace/application
In general, application will depend on a published version of common, so there will be no need to build common when building application.
However, when I need to add new functionality to common that is required by application, I would then like application to depend on the latest common build from my workspace (without needing to publish common to my repository).
I assumed this is what latest.integration meant (i.e. changing application's ivy.xml to specify latest.integration for the common revision). My intention was to use the ivy buildlist task to find the local modules that needed to be built before application could be built. This does not work however, because the buildlist task seems to include the common/build.xml entry regardless of whether application's ivy.xml file specifies latest.integration or some other published revision.
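For reference, a minimal sketch of the kind of setup described above (the organisation name, the relative workspace path, and the publish-local target are placeholders, not part of the original question):

    <!-- application/ivy.xml: depend on the newest common build -->
    <ivy-module version="2.0">
        <info organisation="org.example" module="application"/>
        <dependencies>
            <dependency org="org.example" name="common" rev="latest.integration"/>
        </dependencies>
    </ivy-module>

    <!-- build.xml fragment: let Ivy compute the local build order -->
    <project name="application" xmlns:ivy="antlib:org.apache.ivy.ant">
        <target name="buildlist">
            <ivy:buildlist reference="build.order">
                <fileset dir=".." includes="*/build.xml"/>
            </ivy:buildlist>
            <!-- "publish-local" is a hypothetical target each module would provide -->
            <subant target="publish-local" buildpathref="build.order"/>
        </target>
    </project>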
I would appreciate any suggestions. I am struggling with ivy's documentation and samples, so any real-world examples would also be helpful. Note: I am not interested in a Maven solution here.

Wow, this is truly déjà vu! Go back to some of my first questions on this site from 3-4 months ago and they're almost all Ivy-related! I empathize with you 100% that Ivy is a difficult beast to learn and tame, but after using it professionally for a few months now, I'll never develop without it again. So my first piece of advice: keep going. Sooner or later, what little (practical) documentation you find on Apache Ivy will all start to make sense and fall into place.
I can understand there may be extenuating reasons why you don't want to publish your common to your repo. However, if you are a newcomer to transitive dependency management, the first piece of practical advice I can give you is that you should always publish your JARs/WARs/whatever to your repo, not to an intermediary "integration" area local to your workspace.
The reason for this is simple: Ivy only has the ability to crawl the repositories you define in your settings file (basically). If you deliberately keep a JAR like common outside of one of these defined repositories, then: (a) Ivy has no way to resolve transitive dependencies (its primary job), and (b) "downstream" (dependent) JARs fail to be dynamically updated every time you tweak common. Thus, using Ivy while refusing to publish your JARs is a bit counter-productive; I'm surprised Ivy even includes it as a feature.
I guess I would need to understand your motivation for not publishing common. If you're simply having problems getting the ivy:publish task to work, no worries: I can provide plenty of examples to help get you started. But if there are some other reasons, then I ask you to consider this solution: set up multiple repositories.
Perhaps you have one "primary" repository where mostly everything gets published, and then you have a "secondary" or "intermediary" repository where you publish common whenever it makes sense (for you) to do so. You can then configure your Ant build with two different publish tasks, such as publish-main and publish-integration.
That way you get the best of both worlds: you get your intermediary staging area, and you get to keep everything inside of Ivy's powerful control.
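As a rough sketch of that two-repository setup (resolver names, repository paths, and the ${version} property are assumptions, not prescriptions): the settings file chains an integration repository in front of the main one, and the build gets one publish target per repository.

    <!-- ivysettings.xml: two filesystem repositories, checked in order -->
    <ivysettings>
        <settings defaultResolver="default"/>
        <resolvers>
            <chain name="default">
                <filesystem name="integration">
                    <ivy pattern="${ivy.settings.dir}/repo-int/[organisation]/[module]/[revision]/ivy.xml"/>
                    <artifact pattern="${ivy.settings.dir}/repo-int/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
                </filesystem>
                <filesystem name="main">
                    <ivy pattern="${ivy.settings.dir}/repo-main/[organisation]/[module]/[revision]/ivy.xml"/>
                    <artifact pattern="${ivy.settings.dir}/repo-main/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
                </filesystem>
            </chain>
        </resolvers>
    </ivysettings>

    <!-- build.xml fragment: one publish target per repository -->
    <target name="publish-main" depends="jar">
        <ivy:publish resolver="main" pubrevision="${version}" status="release" overwrite="true">
            <artifacts pattern="dist/[artifact]-[revision].[ext]"/>
        </ivy:publish>
    </target>
    <target name="publish-integration" depends="jar">
        <ivy:publish resolver="integration" pubrevision="${version}" status="integration" overwrite="true">
            <artifacts pattern="dist/[artifact]-[revision].[ext]"/>
        </ivy:publish>
    </target>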

Related

JBoss 7: fluff or a really good application server?

I am asking this especially because JBoss AS 7+ has changed completely, forcing the application developer to think entirely in terms of JBoss Modules. That prevents the earlier classpath-hell issues and encourages clean modular thinking, and it also claims a quick startup time.
All that is fine, BUT my major concerns are as follows; please confirm if you feel the same:
JBoss insists on putting the jboss-deployment-structure.xml file inside WEB-INF. This would make the WAR file not portable at all, since it now contains app-server-specific configuration files inside it. I am worried about interoperability.
I am still nervous about the enormous amount of XML configuration needed: create a module directory structure for each dependency you would like to add, create a module.xml for that dependency, create jboss-deployment-structure.xml entries for non-modules or manifest entries for libs inside WEB-INF/lib, and so on.
That would require considerable developer time and effort spent on becoming a configuration expert, hiring an expert, or buying support: a significant cost in the long run for any team and company.
There is nothing about jboss-deployment-structure.xml that makes it non-portable. Other application servers will simply ignore the file if they don't use it.
You do not need to create a module if you want to use a dependency in your application. You would only do that if you want to use a common dependency among several deployments. For example a JDBC driver library.
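For example, a shared JDBC driver module is just a directory under the server's modules/ tree plus a small descriptor; a minimal sketch (the module name, jar file, and dependency list below are illustrative):

    <!-- modules/com/oracle/ojdbc/main/module.xml -->
    <module xmlns="urn:jboss:module:1.1" name="com.oracle.ojdbc">
        <resources>
            <resource-root path="ojdbc6.jar"/>
        </resources>
        <dependencies>
            <module name="javax.api"/>
            <module name="javax.transaction.api"/>
        </dependencies>
    </module>

Deployments that need it can then reference the module by name, for example via a Dependencies entry in their manifest.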
There is no need to create a jboss-deployment-structure.xml or add manifest entries for libraries in WEB-INF/lib. The only time you would need a jboss-deployment-structure.xml is if you want to exclude server dependencies, like log4j, or add dependencies outside the scope of your deployment that are not automatically added. There are probably some other use cases, but those are the most common.
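A minimal sketch of those two cases in one descriptor (the module names are examples only):

    <!-- WEB-INF/jboss-deployment-structure.xml -->
    <jboss-deployment-structure>
        <deployment>
            <exclusions>
                <!-- use the log4j packaged in WEB-INF/lib instead of the server's -->
                <module name="org.apache.log4j"/>
            </exclusions>
            <dependencies>
                <!-- pull in a shared server module, e.g. the JDBC driver module above -->
                <module name="com.oracle.ojdbc"/>
            </dependencies>
        </deployment>
    </jboss-deployment-structure>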

How to make a maven project buildable for the customer

We have a project which should be buildable by the customer using maven. It has some open source dependencies that are mavenized (no problem), some that aren't mavenized, proprietary stuff (oracle jdbc driver) and some internal stuff.
Until now we had everything but the first category packaged with the project itself in a local repository (a repository with file://path-in-project-folder specified in the project's pom.xml).
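For context, that kind of project-local repository declaration typically looks roughly like this in the pom.xml (the id and path here are placeholders):

    <repositories>
        <repository>
            <id>project-local</id>
            <url>file://${project.basedir}/lib/repository</url>
            <releases><enabled>true</enabled></releases>
            <snapshots><enabled>false</enabled></snapshots>
        </repository>
    </repositories>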
We would love to move these out of the project, as we are about to use them in other projects as well. Currently we plan to use nexus as an internal maven repository.
What's the best practice for making such dependencies/maven repositories available to the customer so he can continue to build the project?
Ideas so far:
The customer sets up a Nexus repository as well, and we somehow deploy all these non-public dependencies to his repository (like a mirror).
We provide a 'dumb' dump/snapshot of the non-public dependencies, and the customer adds this snapshot to his settings.xml as a repository (but how is this possible?).
Make our internal Nexus repo available to the customer's build server (not an option in our case).
I'm wondering how others solve these problems.
Thank you!
Of course, hosting a repository of some kind is a straightforward option, as long as you can cover the uptime / bandwidth / authentication requirements.
If you're looking to ship physical artifacts, you'll find this pattern helpful: https://brettporter.wordpress.com/2009/06/10/a-maven-friendly-pattern-for-storing-dependencies-in-version-control/
That relies on the repository being created in source control - if you want a project to build a repository, consider something like: http://svn.apache.org/viewvc/incubator/npanday/trunk/dist/npanday-repository-builder/pom.xml?revision=1139488&view=markup (using the assembly plugin's capability to build a repository).
Basically, by building a repository you can ship that with the source code and use file:// to reference it from within the build.
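A rough sketch of that approach, assuming the assembly descriptor's repositories section as used in the npanday example linked above (the descriptor id, format, and output directory are illustrative):

    <!-- src/main/assembly/repository.xml -->
    <assembly>
        <id>repository</id>
        <formats>
            <format>zip</format>
        </formats>
        <includeBaseDirectory>false</includeBaseDirectory>
        <repositories>
            <repository>
                <!-- lay the project's dependencies out in repository format -->
                <outputDirectory>repository</outputDirectory>
                <includeMetadata>true</includeMetadata>
                <scope>runtime</scope>
            </repository>
        </repositories>
    </assembly>

The customer then unpacks the resulting archive next to the sources and points a file:// repository entry at the unpacked repository directory.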
There are two options:
Document exactly which artifacts you need to compile that are not available via Maven Central.
Implement Nexus, make an export with it, give the export to the customer, and have them import it. I'm not sure whether you might run into licensing issues.
I assumed that you already have a Repository Manager, but it reads like you don't.

How to generate different deployables from the same Maven project?

I have a situation that I'm sure must be fairly common. I have some Maven-built applications that deploy to different types of application server - like Tomcat, JBoss, etc.
The build process 'tunes' the deployable artifact to the specific target type of application server (for example, different included dependencies, context roots, other config). This tuning is controlled with build profiles (-Ptomcat, -Pjboss, etc.).
So, for a given version of my application, I need to run builds that produce different deployables. I run mvn -Ptomcat clean package for example and I get an artifact in my /target directory that is the tomcat-tuned version.
The best approach I've been able to come up with so far is to specify finalName values for the artifacts that include the profile information, but with that approach I'm not sure how to configure Maven to copy the final artifact off to some specific location so that the next build for a different server type doesn't overwrite it.
Is this a good approach? If so, how can I achieve that final copy?
Or is there a better way?
You'll need to use the Maven Assembly Plugin.
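If the finalName-plus-copy route from the question is enough, a minimal sketch could look like this (the profile ids, the deploy.target property, and the dist directory are invented; the copy step uses the maven-antrun-plugin here rather than the assembly plugin suggested above):

    <build>
        <finalName>${project.artifactId}-${project.version}-${deploy.target}</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-antrun-plugin</artifactId>
                <version>1.8</version>
                <executions>
                    <execution>
                        <id>stash-tuned-artifact</id>
                        <phase>package</phase>
                        <goals><goal>run</goal></goals>
                        <configuration>
                            <target>
                                <!-- keep each tuned WAR so the next profile's build cannot overwrite it -->
                                <copy file="${project.build.directory}/${project.build.finalName}.war"
                                      todir="${project.basedir}/dist"/>
                            </target>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <profiles>
        <profile>
            <id>tomcat</id>
            <properties><deploy.target>tomcat</deploy.target></properties>
        </profile>
        <profile>
            <id>jboss</id>
            <properties><deploy.target>jboss</deploy.target></properties>
        </profile>
    </profiles>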

Maven multi-module project with many reports: looking for an example

Is there an open source project that can serve as a good example on how to use the maven site plugin to generate reports? I would prefer it to
consist of many modules, possibly hierarchically structured
use as many plugins as possible (surefire, jxr, pmd, findbugs, javadoc, checkstyle, you name it)
the reports should be aggregated: if some tests fail you want to have a single page that shows all modules with failing tests, not only a gazillion individual pages to check
include enterprisey stuff (WAR, EAR etc), but this is not so important.
The idea is to have something where you can gather ideas on how it is done and what is possible.
I gave up trying to aggregate reports of a complex multi-module project with the maven-site-plugin. For this I use Sonar instead; it's much more powerful (with features like the evolution of metrics over time, aggregation, neat drill-down, etc.) and it just works. Have a look at Nemo, the online demo instance, and cry.
For an example see http://www.bartswennenhuis.nl/2013/12/maven-aggregate-reports-for-multi-module-projects/. Findbugs does not support aggregate reports.
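For the plugins that do support aggregation, a parent-pom reporting section along these lines is a common starting point (plugin versions are omitted, and FindBugs is left out because, as noted, it has no aggregate mode):

    <reporting>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-report-plugin</artifactId>
                <configuration>
                    <aggregate>true</aggregate>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-javadoc-plugin</artifactId>
                <reportSets>
                    <reportSet>
                        <id>aggregate</id>
                        <reports>
                            <report>aggregate</report>
                        </reports>
                    </reportSet>
                </reportSets>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jxr-plugin</artifactId>
                <reportSets>
                    <reportSet>
                        <id>aggregate</id>
                        <reports>
                            <report>aggregate</report>
                        </reports>
                    </reportSet>
                </reportSets>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-pmd-plugin</artifactId>
                <configuration>
                    <aggregate>true</aggregate>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-checkstyle-plugin</artifactId>
                <configuration>
                    <aggregate>true</aggregate>
                </configuration>
            </plugin>
        </plugins>
    </reporting>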
I don't think there is such a project; if there is, I want to know about it as well. In order to find things in Maven you have to know what you're looking for (which is not exactly the same as what you want to accomplish).
If it's any help, I'm building a 13-module project with Maven, using the Cobertura Maven plugin, Surefire, Javadoc, etc., and it works like a charm. Why are you asking this question: do you want to determine the capabilities of Maven, or something else?
This is actually a response to your question: please take a look at the Apache Directory project. It contains two big blocks: the directory server and the tooling support (Eclipse based).
You can find the SVN repository of the Apache Directory Studio (a complete directory tooling platform intended to be used with any LDAP server) here: http://svn.apache.org/repos/asf/directory/studio/trunk/
Take a look at the POM file ( http://svn.apache.org/repos/asf/directory/studio/trunk/pom.xml ) of this multi-module project. It consists of lots of modules, uses most of the plug-ins you mention, and it also aggregates some of the reports.
You can use Violations Maven Plugin to aggregate Findbugs (and many other static code analysis) reports.
It needs to run after the analysis. It will parse their report files and present them in one unified report. It can, optionally, fail the build depending on the number of violations found.

Maven repository configurations

I've asked a similar question in which part of this was addressed, but I'd like to expand in more detail.
When configuring maven to look at internal repositories, is it best to put that information in the project pom or in a user's settings.xml? An explanation on why would be really helpful here.
thanks,
Jeff
You should always try to set up the Maven project so that it compiles from a clean checkout from source control in your local environment, without a settings.xml. In my opinion this means that you place any overrides to sensible default values in the user's settings.xml file. But the pom should contain sensible values that will work for everyone.
I encourage you to put the repository definition in the POM; this way any developer can just grab a copy of the code and run Maven to get it compiled, without having to change things in his settings file.
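A minimal pom.xml fragment for that (the id and URL are placeholders for your internal repository manager):

    <repositories>
        <repository>
            <id>company-internal</id>
            <url>http://nexus.example.local/content/groups/public</url>
        </repository>
    </repositories>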
I find the settings.xml file useful just for hacking Maven's behaviour in special situations, for example when one repository is not accessible due to a firewall and you need to use a mirror. But that's my personal opinion. Maven documentation gives you more freedom:
The settings element in the settings.xml file contains elements used to define values which configure Maven execution in various ways, like the pom.xml, but should not be bundled to any specific project, or distributed to an audience. These include values such as the local repository location, alternate remote repository servers, and authentication information.
If you have a local repository which is used in every single project, you may add it to the settings.xml; just be sure that the configuration is well documented. In my current project it's not, and new developers struggle at the beginning when they try to compile something.
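On the settings.xml side, the repository has to live inside a profile that is then activated; a minimal sketch (the profile id, repository id, and URL are placeholders):

    <settings>
        <profiles>
            <profile>
                <id>internal-repos</id>
                <repositories>
                    <repository>
                        <id>company-internal</id>
                        <url>http://nexus.example.local/content/groups/public</url>
                    </repository>
                </repositories>
            </profile>
        </profiles>
        <activeProfiles>
            <activeProfile>internal-repos</activeProfile>
        </activeProfiles>
    </settings>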
We use the user's settings.xml and include info in the README about what possible other repos may be needed.
In theory a given group-artifact-version is the same no matter which repo it comes from. It works pretty well for us. If you find yourself with two different assets that have the same group-artifact-version identifier, then that indicates you're doing something really bad.