We are currently attempting to port a very (very) large project built with Ant to Maven (while also moving to SVN). We are exploring all possibilities for remodeling the project structure to best fit the Maven paradigm.
Now to be more specific, I have come across classifiers and would like to know how I could use them to my advantage, while refraining from "classifier anti-patterns".
Thanks
from: http://maven.apache.org/pom.html
classifier: You may occasionally find a fifth element on the
coordinate, and that is the classifier. We will visit the classifier
later, but for now it suffices to know that those kinds of projects
are displayed as groupId:artifactId:packaging:classifier:version.
and
The classifier allows to distinguish artifacts that were built from
the same POM but differ in their content. It is some optional and
arbitrary string that - if present - is appended to the artifact name
just after the version number. As a motivation for this element,
consider for example a project that offers an artifact targeting JRE
1.5 but at the same time also an artifact that still supports JRE 1.4. The first artifact could be equipped with the classifier jdk15 and the
second one with jdk14 such that clients can choose which one to use.
Another common use case for classifiers is the need to attach
secondary artifacts to the project's main artifact. If you browse the
Maven central repository, you will notice that the classifiers sources
and javadoc are used to deploy the project source code and API docs
along with the packaged class files.
I think the correct question would be: how to use or abuse attached artifacts in Maven? Because basically that is why classifiers were introduced - to allow you to publish attached artifacts.
Well, Maven projects often implicitly use attached artifacts, e.g. via the maven-javadoc-plugin or maven-source-plugin. The maven-javadoc-plugin publishes an attached artifact containing the generated documentation under the javadoc classifier, and the maven-source-plugin publishes the project sources under the sources classifier.
Now what about explicit usage of attached artifacts? I use attached artifacts to publish harness shell scripts (start.sh and co.). It's also a good idea to publish SQL scripts as an attached artifact with a classifier like sql.
How can you attach an arbitrary artifact with your own classifier? This can be done with the build-helper-maven-plugin.
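For example, here is a minimal sketch of attaching a scripts archive with the build-helper-maven-plugin's attach-artifact goal (the plugin version, file path, and classifier below are illustrative):

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>3.4.0</version>
  <executions>
    <execution>
      <id>attach-scripts</id>
      <phase>package</phase>
      <goals>
        <goal>attach-artifact</goal>
      </goals>
      <configuration>
        <artifacts>
          <!-- installs/deploys my-app-1.0-scripts.zip alongside the main artifact -->
          <artifact>
            <file>${project.build.directory}/my-app-scripts.zip</file>
            <type>zip</type>
            <classifier>scripts</classifier>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>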
... I would like to know how I could use them to my advantage ...
Don't use them. They are optional and arbitrary.
If you are in the middle of porting a project over to maven, keep things simple and only do what is necessary (at first) to get everything working as you'd like. Then, after things are working like you want, you can explore more advanced features of maven to do cool stuff.
This answer is based on your question sounding like a "This feature sounds neat, how can I use it even though I don't have a need for it?" kind of question. If you have a need for this feature, please update your question with more information on how you were thinking of utilizing the classifier feature and we will all be better informed to help you.
In contrast to Jesse Web's answer, it is good to learn about classifiers so that you can leverage them and avoid having to refactor code in addition to porting to Maven. We went through the same process a year or two ago. Previously we had everything in one code base, built together with Ant. In migrating to Maven, we also found the need to break out the various components into their own Maven projects. Some of these projects were really libraries, but had some web resources (jsp, js, images, etc.). The end result was that we created an attached artifact (as mentioned by #Male) with the web resources, using the classifier "web-resources" and the type "war" (to use as an overlay). This was then, and still is after understanding Maven better, the best solution for porting an old, coupled project. We eventually want to separate out these web resources since they don't belong in this library, but at least that can be done as a separate task.
In general, you want to avoid having attached artifacts. This is typically a sign that a separate project should be created to build that artifact. I suggest looking at doing this anytime you are tempted to attach an artifact with a separate classifier.
I use classifiers to define supporting artefacts to the main artefact.
For example I have com.bar|foo-1.0.war and have some associated config called com.bar|foo-1.0-properties.zip
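To consume such a secondary artifact, the client declares the classifier (and type) in the dependency; a sketch, assuming the coordinates above:

<dependency>
   <groupId>com.bar</groupId>
   <artifactId>foo</artifactId>
   <version>1.0</version>
   <classifier>properties</classifier>
   <type>zip</type>
</dependency>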
You can use classifiers when you have different versions of the same artifact that you want to deploy to your repository.
Here's a use case:
I use them in conjunction with properties in a POM. The POM has default values which can be overridden via the command line. Running without options uses the default property values. If I build a version of the artifact with different property values, I can deploy that to the repo with a classifier.
For example, the command:
mvn -DmyProperty=specialValue package install:install-file -Dfile=target/my-ear.ear -DpomFile=my-ear/pom.xml -Dclassifier=specialVersion
builds a version of the EAR artifact with special property values and installs it into my repo with the classifier "specialVersion".
So, my repo can have my-ear-1.0.0.ear and my-ear-1.0.0-specialVersion.ear.
We are struggling hard with how to use features the correct way.
Let’s say we have the plug-in org.acme.module which depends on org.thirdparty.specific and org.acme.core.
And we have the plug-in org.acme.other which depends on org.acme.core.
We want to create an application from these, which includes a target file and a product file. We have the following options:
One feature per module:
org.acme.core.feature
  org.acme.core
org.acme.module.feature
  org.acme.module
org.acme.other.feature
  org.acme.other
org.thirdparty.specific.feature
  org.thirdparty.specific
This makes the target and product files gigantic, and the dependencies are very hard to manage manually.
One feature per dependency group:
org.acme.module.feature
  org.acme.core
  org.acme.module
  org.thirdparty.specific
org.acme.other.feature
  org.acme.core
  org.acme.other
This approach makes the dependencies very easy to manage, and the target and product files are easy to read and maintain. However, it does not work at all: the moment org.acme.core changes, you need to change ALL the features. Furthermore, the application has no say in what to package, so it can't even decide to update org.acme.core (because of a bugfix or something).
Platform Feature:
org.acme.platform.feature
  org.acme.core
  org.acme.other
  org.thirdparty.specific (but could be its own feature)
org.acme.module.feature
  org.acme.module
This is the approach used for Hello World applications and Eclipse add-ons - and it only works for those. Since all modules' target platforms would point to org.acme.platform.feature, every time anything changes for any platform plug-in, you'd have to update org.acme.platform.feature accordingly.
We actually tried that approach with only about 50 platform plug-ins. It's not feasible to have a developer change the feature for every bugfix. (And while Tycho supports version "0.0.0", Eclipse does not, so it's another bag of problems to use that. Also, we need reproducibility, so having PDE choose versions willy-nilly is out of the question.)
Again it all comes down to "I can't use org.acme.platform.feature and override org.acme.core's version for two weeks until the new feature gets released."
The entire problem is made even more difficult since sometimes more than one configuration of plug-ins is possible (say, for different database providers), and there are high-level modules that use other child modules to work correctly, all of which has to be managed somehow.
Is there something we are missing? How do other companies manage these problems?
The Eclipse guys seem to use the “one feature per module” approach. Not surprisingly, since it’s the only one that works. But they use neither target platforms nor product files.
The key to a successful grouping is when to use "includes" in features and when to just use dependencies. The difference is that "includes" are really included, i.e. p2 will install included bundles and/or included features all the time. That's the reason why you need to update a bundle in every feature if it's included. If you don't update it, you will end up with multiple versions in the install.
Also, in the old days one had to specify dependencies in features. These days, p2 will mostly figure out dependencies from the bundles. Thus, I would actually stop specifying dependencies in features and use just includes. Think of features as a way to specify what gets aggregated.
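To make the difference concrete, a sketch in feature.xml terms (the feature and import IDs are made up; the syntax is standard PDE):

<feature id="my.product.sdk" version="1.0.0.qualifier">
   <!-- included: p2 installs exactly these, every time -->
   <includes id="my.product.base" version="0.0.0"/>
   <includes id="my.addon.xyz" version="0.0.0"/>
   <!-- dependency: p2 resolves any matching version from the available repositories -->
   <requires>
      <import feature="org.thirdparty.specific.feature" version="1.0.0" match="compatible"/>
   </requires>
</feature>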
Another key point to grouping is: less is more. If you have as many features as bundles, chances are pretty high that you have a granularity issue. Instead, think about what a user would install separately. There is no need to have four features for things that a user would never install alone. Features should not be understood as a way of grouping development/project structures - that's where folders in SCM or different SCM repos are OK. Think of features as deployment structures.
With that approach, I would recommend a structure similar to the following example.
my.product.base
  base feature containing the bare minimum of the product
  could be org.acme.core plus a few minimal additions
my.product.base.dependencies
  feature with 3rd party libraries for my.product.base
my.addon.xyz
  feature bundling an add-on
  separate features for things that can be installed separately
my.addon.xyz.dependencies
  feature with 3rd party libraries for the add-on
Now in the product definition I would list just my.product.base. There is no need to also list the dependencies features. p2 will fetch and install the dependencies automatically. However, if you want to bind your product to specific versions of the dependencies and don't want p2 to select any matching one, then you must include the my.product.base.dependencies feature.
In the target definition I would include a "my.product.sdk" feature. That feature is an aggregation feature of all other features. It makes target platform management easier. I typically create an sdk feature with everything.
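As a sketch, such a target definition could look like the following (the repository URL is a placeholder; note that feature IUs get the .feature.group suffix in p2):

<?pde version="3.8"?>
<target name="my.product.target">
   <locations>
      <location includeMode="planner" includeSource="true" type="InstallableUnit">
         <unit id="my.product.sdk.feature.group" version="0.0.0"/>
         <repository location="https://repo.example.com/p2"/>
      </location>
   </locations>
</target>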
Another feature that is also very often seen is a "master" feature. This is an "everything" feature that may be used for creating a p2 repository during the build. The resulting p2 repository is then used for assembling products.
For a more real world example see here:
http://git.eclipse.org/c/gyrex/gyrex-server.git/tree/releng/features
Features and Continuous Delivery
There was a comment regarding frequent updates to feature.xml. A feature.xml only needs to be modified when there is a change in structure. No updates need to happen when a bundle version changes. You should reference bundles in features with version 0.0.0. That makes Tycho fill in the proper version at build time. Thus, all you need to do is commit a change to any bundle and then kick off a rebuild. Tycho also takes care of updating the feature qualifier based on the qualifiers of the contained bundles. Thus, the new feature qualifier will be different from that of the previous build.
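For example (the bundle ID is made up):

<feature id="my.product.base" version="1.0.0.qualifier">
   <!-- 0.0.0 is replaced by Tycho with the actual bundle version at build time -->
   <plugin id="org.acme.core" version="0.0.0" unpack="false"/>
</feature>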
Is there a guide that outlines how to perform each of the following ant tasks using Maven?
http://ant.apache.org/manual/tasklist.html
Is it considered best practice to use Maven for these tasks, or just to run them in Ant via the Ant tasks feature?
There is no mapping between ant tasks and "Maven tasks", because Maven doesn't have tasks. Its philosophy is completely different.
Ant is imperative: you tell Ant what to do with a sequence of parameterized tasks.
Maven is declarative: you describe what kind of project you have and respect a set of conventions (or describe how you broke these conventions), and Maven decides which tasks it must run.
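A minimal POM illustrates this: nothing in it says compile, test, or package, yet Maven does all of that because the jar packaging binds the standard plugin goals to the default lifecycle phases (the coordinates are made up):

<project xmlns="http://maven.apache.org/POM/4.0.0">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.example</groupId>
   <artifactId>my-app</artifactId>
   <version>1.0.0</version>
   <packaging>jar</packaging>
</project>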
Somewhere there's a table that shows Maven plugins that are analogous to given Ant tasks, but I can't seem to find it. The "Available Plugins" list might help you out.
There is a list of Ant-task-to-Maven-plugin mappings at the end of this page -> http://maven.apache.org/plugins/maven-antrun-plugin/usage.html
Unfortunately, it only covers a small subset of ant tasks.
I've recently mavenized an Ant-based project (I'd really had it with the dependency management) and from that experience I had very little need to retain Ant code. The only place I've used the antrun plugin was around custom code generation.
I think that it's perfectly fine to delegate some tasks to ant (and sometimes even to scripts) if that's well documented and saves time and maintenance cost.
One other place where I still use antrun is for echoing some environment properties and generic text to the build output, but maybe that's just my ignorance and there is a Maven way to do that.
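That echo usage could look like the following sketch (the echoed properties are arbitrary; newer antrun versions use <target> where older ones used <tasks>):

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-antrun-plugin</artifactId>
   <executions>
      <execution>
         <id>echo-env</id>
         <phase>validate</phase>
         <goals>
            <goal>run</goal>
         </goals>
         <configuration>
            <target>
               <echo message="java.home = ${java.home}"/>
               <echo message="building ${project.artifactId} ${project.version}"/>
            </target>
         </configuration>
      </execution>
   </executions>
</plugin>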
We're trying to migrate from the current Ant build to Maven. In the current project, we have different properties files for each of the environments, say
qa.properties, prod.properties & dev.properties.
The property values present in these files are used to replace the properties referenced in the config files (present in src\main\resources\config). The current Ant build process replaces all the properties referenced in the config files with their corresponding values for the current build environment.
I'm somewhat aware of the profiles concept in Maven. However, I'm not able to figure out how to achieve this using Maven.
Any help would be appreciated.
Thanks,
Prabhjot
There are several ways to implement this, but they are all variations on the same features: combine profiles with filtering. A Maven2 multi-environment filter setup shows one way to implement such a setup (a little variation would be to move the filter declaration inside each profile).
See also
9.3. Resource Filtering
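A minimal sketch of the combination (the env property name and the filter path are illustrative): each profile sets a property that selects the filter file, and the resources are filtered with it.

<build>
   <filters>
      <filter>src/main/filters/${env}.properties</filter>
   </filters>
   <resources>
      <resource>
         <directory>src/main/resources</directory>
         <filtering>true</filtering>
      </resource>
   </resources>
</build>
<profiles>
   <profile>
      <id>dev</id>
      <activation>
         <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
         <env>dev</env>
      </properties>
   </profile>
   <profile>
      <id>qa</id>
      <properties>
         <env>qa</env>
      </properties>
   </profile>
   <profile>
      <id>prod</id>
      <properties>
         <env>prod</env>
      </properties>
   </profile>
</profiles>

Running mvn -Pqa package would then substitute the values from qa.properties into the files under src/main/resources.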
I wonder what is the Maven way in my situation.
My application has a bunch of configuration files, let's call them profiles. Each profile configuration file is a *.properties file, that contains keys/values and some comments on these keys/values semantics. The idea is to generate these *.properties to have unified comments in all of them.
My plan is to create a template.properties file that contains something like
#Comments for key1/value1
key1=${key1.value}
#Comments for key2/value2
key2=${key2.value}
and a bunch of files like
#profile_data_1.properties
key1.value=profile_1_key_1_value
key2.value=profile_1_key_2_value
#profile_data_2.properties
key1.value=profile_2_key_1_value
key2.value=profile_2_key_2_value
Then bind to the generate-resources phase to create a copy of template.properties per profile_data_* file, and filter that copy with the corresponding profile_data_*.properties as a filter.
The easiest way is probably to create an Ant build file and use the antrun plugin. But that is not the Maven way, is it?
The other option is to create a Maven plugin for that tiny task. Somehow, I don't like that idea (plugin deployment is not something I want very much).
Maven does offer filtering of resources that you can combine with Maven profiles (see for example this post), but I'm not sure this will help here. If I understand your needs correctly, you need to loop over a set of input files and change the name of the output file. And while the first part might be possible using several <execution> sections, I don't think the second part is doable with the resources plugin.
So if you want to do this in one build, the easiest way would indeed be to use the Maven AntRun plugin and to implement the loop and the processing logic with Ant tasks.
And unless you need to reuse this in several places, I wouldn't encapsulate this logic in a Maven plugin; that wouldn't give you much benefit if this is done in a single project, in a unique location.
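A sketch of what that AntRun configuration could look like for a single profile (the paths and token delimiters are assumptions; looping over all profile_data_* files could be done with ant-contrib's <for> task):

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-antrun-plugin</artifactId>
   <executions>
      <execution>
         <id>generate-profiles</id>
         <phase>generate-resources</phase>
         <goals>
            <goal>run</goal>
         </goals>
         <configuration>
            <target>
               <!-- one filtered copy of the template per profile data file -->
               <copy file="src/main/templates/template.properties"
                     tofile="${project.build.directory}/generated-resources/profile_1.properties">
                  <filterset begintoken="${" endtoken="}"
                             filtersfile="src/main/templates/profile_data_1.properties"/>
               </copy>
            </target>
         </configuration>
      </execution>
   </executions>
</plugin>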
You can extend the way Maven does its filtering, since Maven retrieves its filtering strategy from the Plexus container via dependency injection. So you would have to register a new default strategy. This is heavy stuff and badly documented, but I think it can be done.
Use these URLs as starting point:
http://maven.apache.org/shared/maven-filtering/usage.html
and
http://maven.apache.org/plugins/maven-resources-plugin/
Sean
I developed a Java utility library (similar to Apache Commons) that I use in various projects.
In addition to fat clients, I also use it for mobile clients (PDA with J9 Foundation profile).
Over time, the library that started as a single project spread over multiple packages. As a result, I ended up with a lot of functionality which is not really needed in all the projects.
Since this library is also used inside some mobile/PDA projects I need a way to collect just the used classes and generate the actual specialized jars.
Currently, in the projects that use this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using the jar task's include/exclude features. This is mostly needed due to the size constraints on the generated jar file for the mobile projects.
Migrating now to Maven, I just wonder if there are any best practices to arrive at something similar. I am considering the following scenarios:
[1] - In addition to the main jar artifact (my-lib-1.0.jar), also generate inside the my-lib project the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar) via Maven Jar Plugin or Maven Assembly Plugin filtering/includes. I'm not very comfortable with this approach since it pollutes the library with its consumers' demands (filters).
[2] - Create additional Maven projects for all the specialized clients/projects that will "wrap" my-lib and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0, etc.). These wrapper projects would contain the filtering (to generate the filtered artifact) and depend just on the my-lib project, and the client projects would depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it leaves the my-lib project intact (no additional classifiers and artifacts), it basically doubles the number of projects: for every client project I'd have one extra lib project just to collect the needed classes from the my-util library (a my-pda-app project would need a my-lib-wrapper-for-my-pda-app project/dependency).
[3] - In every client project that uses the library (ex: my-pda-app) add some specialized Maven plugins to trim out (when generating the final artifact/package) the classes that are not required (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin).
What is the best practice for solving this kind of problem the "Maven way"?
The Maven general rule is "one primary artifact per POM" for the sake of modularity and the reasons one shouldn't break this convention (in general) are very well explained in the How to Create Two JARs from One Project (...and why you shouldn’t) blog post. There are however justified exceptions (for example an EJB project producing an EJB JAR and a client EJB JAR with only interfaces). Having said that:
The mentioned blog post (also check Using Maven When You Can't Use the Conventions) explains how you could implement Option 1 using separate profiles or the JAR plugin. If you decide to implement this solution, keep in mind that this should be an exception and that it might make dependency management trickier (and, as you mentioned, pollute the project with "client filtering logic"). Just in case, I would use several JAR plugin executions here.
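A sketch of what those extra JAR plugin executions could look like (the classifier and package filters are illustrative):

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-jar-plugin</artifactId>
   <executions>
      <execution>
         <id>pda-jar</id>
         <phase>package</phase>
         <goals>
            <goal>jar</goal>
         </goals>
         <configuration>
            <!-- produces my-lib-1.0-pda.jar in addition to the main jar -->
            <classifier>pda</classifier>
            <includes>
               <include>com/acme/util/core/**</include>
               <include>com/acme/util/mobile/**</include>
            </includes>
         </configuration>
      </execution>
   </executions>
</plugin>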
Option 2 isn't very different from Option 1 IMO (except that it separates things): basically, having N other wrapping/filtering projects is very similar to having N filtering rules in one project. And if filtering makes sense, I prefer Option 1.
I don't like Option 3 at all because I think it shouldn't be the responsibility of a client of a library to "trim out" unwanted things. First, a client project doesn't necessarily have the required knowledge (what to trim) and, second, this might create a big mess with other plugins.
BUT if the fat clients are not using the whole my-lib (the way server-side code would require the whole EJB JAR), then filtering isn't the right "Maven way" to handle your situation. The right way would be Option 4: put everything common in one project (producing my-lib-core-1.0.jar) and the specific parts in specific projects (which will produce my-lib-pda-1.0.jar, etc.). Clients would then depend on the core artifact plus the specialized ones.
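A sketch of that layout as a multi-module build (the module names are illustrative):

<!-- parent/aggregator pom.xml -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.acme</groupId>
   <artifactId>my-lib-parent</artifactId>
   <version>1.0</version>
   <packaging>pom</packaging>
   <modules>
      <module>my-lib-core</module>
      <module>my-lib-pda</module>
      <module>my-lib-rcp</module>
   </modules>
</project>

A PDA client would then declare dependencies on my-lib-core and my-lib-pda only, and no filtering is needed anywhere.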