Maven: local development deploy vs bundling for distribution

Bear with me, I'm migrating from Ant to Maven2: I think I've hit one of those little things that was easy in Ant, but not so in Maven...
How do I handle the difference between a local deployment vs. creating an archive/bundle for distribution to another machine?
Let's assume my project's output is an EAR plus some additional config files. A developer who is actively working on the project will need to deploy and re-deploy frequently to his local app server (say JBoss), while an Integration Engineer who is building for QA/production only needs to create the final archive assembly (tar/gz).
In Ant we had two targets for this: "dev-deploy" and "bundle". Both do a complete build, but differ in the final step: "dev-deploy" copies the EAR and config files to the respective local folders, while "bundle" just puts the EAR & config files in a tar.gz assembly.
How do you do this in Maven?
I've seen that the assembly plugin can create either archives (tar, gz, etc.) or exploded directories (from the same assembly descriptor). I can invoke either assembly:assembly or assembly:directory, but for the latter, how do I copy the final output to the local JBoss deployment folders? From a related post it seems that ad-hoc copying of files is not really what Maven is about, so an antrun copy is probably the most appropriate?
Finally, since the type of assembly may differ depending on who invokes it, it doesn't seem wise to bind assembly to the build lifecycle, not so? But this means that a developer will always need to invoke 'mvn package' followed by 'mvn assembly:directory' to rebuild and test a change. Conversely, an Integration Engineer will always need to run 'mvn package' followed by 'mvn assembly:assembly' to create the distributable archive. I was hoping for a one-command solution for each, or should I just script it?

In Ant we had two targets for this: "dev-deploy" and "bundle". Both do a complete build, but differ in the final step: "dev-deploy" copies the EAR and config files to the respective local folders, while "bundle" just puts the EAR & config files in a tar.gz assembly.
I'm not sure what you mean by "respective local folders" for "dev-deploy", but this sounds like what mvn package does, and "bundle" indeed sounds like a Maven assembly.
I've seen that the assembly plugin can create either archives (tar, gz, etc.) or exploded directories (from the same assembly descriptor). I can invoke either assembly:assembly or assembly:directory, but for the latter, how do I copy the final output to the local JBoss deployment folders? From a related post it seems that ad-hoc copying of files is not really what Maven is about, so an antrun copy is probably the most appropriate?
I guess we are talking about the Integration Engineer's tasks here. As you didn't explain exactly what the "bundle" contains, what the target application server is (my understanding is that you are using JBoss for QA/production too but, again, this is a guess), or whether this bundle has to be deployed automatically, it's hard to enumerate all the solutions and/or alternatives to antrun. But indeed, to copy/move/unzip/whatever the assembly, the maven antrun plugin is a candidate.
Finally, since the type of assembly may differ depending on who invokes it, it doesn't seem wise to bind assembly to the build lifecycle, not so? But this means that a developer will always need to invoke 'mvn package' followed by 'mvn assembly:directory' to rebuild and test a change. Conversely, an Integration Engineer will always need to run 'mvn package' followed by 'mvn assembly:assembly' to create the distributable archive. I was hoping for a one-command solution for each, or should I just script it?
My understanding was that the Integration Engineer was building the bundle, so why would a developer need the bundle? This is confusing... Anyway, I don't really need the details to think of an answer. You could declare the maven assembly plugin in specific build profiles, one for development and one for integration, and bind either the single or the directory-single mojo to the project's build lifecycle in each profile. This would let each role use a single command and avoid any scripting (really, don't go that way).
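For example, here is a sketch of such profiles (the profile ids and the descriptor path are assumptions, not from the original project):

<profiles>
  <!-- Developer profile: build the exploded directory on every package -->
  <profile>
    <id>dev</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-assembly-plugin</artifactId>
          <executions>
            <execution>
              <id>dev-deploy</id>
              <phase>package</phase>
              <goals>
                <goal>directory-single</goal>
              </goals>
              <configuration>
                <descriptors>
                  <descriptor>src/main/assembly/bundle.xml</descriptor>
                </descriptors>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
  <!-- Integration profile: build the tar.gz archive instead -->
  <profile>
    <id>int</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-assembly-plugin</artifactId>
          <executions>
            <execution>
              <id>bundle</id>
              <phase>package</phase>
              <goals>
                <goal>single</goal>
              </goals>
              <configuration>
                <descriptors>
                  <descriptor>src/main/assembly/bundle.xml</descriptor>
                </descriptors>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

With this in place, 'mvn package -Pdev' and 'mvn package -Pint' are each a one-command build: the former produces the exploded directory (an antrun copy step bound in the same dev profile could then push it to the local JBoss deploy folder), the latter produces the distributable archive.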

Related

Several artifacts, how to force build order?

I have a (standard non-maven, non-gradle, non-whatsoever) project in IntelliJ IDEA that consists of several modules.
One of those modules results in a jar that is used by one of the other modules.
I have two artifacts. The first one creates a war file. This one depends on the jar file built from the second artifact.
How can I order the build process of the two artifacts so that the second one creates the jar file and copies it to the lib folder of the first, before the first one builds, without the need to recreate both artifacts?
As soon as I select "Build/Build Artifacts/All Artifacts" it always tries to build the first one first.
EDIT: Maybe a better question: What is the recommended way to manually build several artifacts in order of their dependencies?
How can I [configure IDEA] ... so that [it] ... creates the jar file and copies it to the lib folder of the first...
You can't really configure IDEA to do this directly. While you can configure Artifacts in the Project Structure dialog, there are no provisions for copying artifacts. IntelliJ IDEA is an IDE, not a build tool. While it can do a lot regarding compiling and building, it has its limits.
One possible hackish way would be to go to the Artifact definition in the Project Structure dialog. There you'll find "Pre-processing" and "Post-processing" tabs, which offer the option to run an Ant target. So you could create a simple Ant target to do the copying (a minimal sketch follows below). But in the end, I think the best answer to your question:
Maybe a better question: What is the recommended way to manually build several artifacts in order of their dependencies?
is to use a build tool such as Ant, Maven, or Gradle for building the project.
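For the "Post-processing" route, the Ant build can be as small as a single copy task. A sketch, with hypothetical paths that you would adjust to your artifact output and module layout:

<project name="artifact-helper" default="copy-jar">
  <target name="copy-jar">
    <!-- left: where IDEA writes the jar artifact; right: the consuming module's lib folder -->
    <copy file="out/artifacts/second_jar/second.jar" todir="first-module/lib"/>
  </target>
</project>

Hook this target into the jar artifact's "Post-processing" tab so it runs after each build of that artifact.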

How can I get Eclipse to use my IVY_HOME variable when downloading ivy dependencies?

My company makes extensive use of Ivy to download dependencies. Some of these dependencies are huge (~500MB) and take a while to download from the remote repositories.
To build our application we have an Ant script that will first resolve all the dependencies and then deploy to the server.
I have set an "IVY_HOME" environment variable so that all the dependencies are downloaded to D:\ivy_home instead of C:\Users\.ivy2\ - this is because D: is my SSD which is significantly faster, and it is where my local server directories are located - so copying files from ivy_home to the server is super fast.
But for some reason, when I am using the IvyDE plugin inside Eclipse, it always wants to download a separate copy of all the dependencies and put them on my C: drive, which is causing several issues:
Local publishes from the ant script will not be picked up in eclipse since they are placed into a different location
Dependencies already downloaded in D: will not get picked up which makes the ivy Resolve inside eclipse much slower than it needs to be
The dependencies are on a slower drive in Eclipse, so performing searches and executing these jars is also slower
How about creating a symlink that redirects the .ivy2 folder in Users to D:? I've tried it myself and it seems to work fine.
Open cmd as administrator, then execute this line:
mklink /d C:\Users\{username}\.ivy2 D:\.ivy2
I'd create an ivysettings.xml file and specify the location of my cache using the caches directive. See the following answer for example:
can I turn off the .ivy cache all together?
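A minimal sketch of such a settings file, assuming D:\ivy_home is where you want the cache:

<ivysettings>
  <!-- send the dependency cache to the fast drive instead of ~/.ivy2/cache -->
  <caches defaultCacheDir="D:/ivy_home/cache"/>
</ivysettings>

Then point IvyDE at this file in the Eclipse Ivy preferences (it accepts an Ivy settings path) so that the Ant build and the IDE resolve into the same cache.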
Why don't you set up Ivy globally with an ivysettings.xml along with a property file?
This property file could contain:
ivy.default.ivy.user.dir=D:\ivy_home
For individual projects you could uncheck "Enable project specific settings" for each IvyDE library, so they would use the global Ivy settings, at the cost of one extra Eclipse environment configuration.

Maven2: Possible to deploy depending on artifact classifier?

In fact I have 2 different problems, but I think they are kind of related:
I have an artifact with an assembly descriptor set which will build an extra JAR (with an extra classifier). By default, Maven2/3 will deploy the generated assembly together with the main artifact to the remote Maven repository. Is there any way I can deploy only the main artifact but not the assembly?
I have an artifact in which I have the jar plugin generate another artifact with a different classifier (more specifically, an EJB artifact from which I generate a client JAR). I want to deploy only the client JAR to the Maven repo because I think the main EJB artifact is not really going to be shared by other projects. Is it possible to do so?
Thanks a lot
Edited to provide more info:
The reason for avoiding deploying the EJB is that the main EJB artifact is not going to be depended on by any project except the containing project. The containing project builds an EAR (which contains the EJB), and normally we only need that built locally (by mvn package). However, the EJB client is something we deploy to our repo so that other projects can use it when they need to communicate with our application.
Honestly, it doesn't hurt to deploy the EJB too, but I just want to see if I can avoid unnecessary waste of disk space in our repository.
Similarly for the assembly: the project itself is something we deploy so that other projects can depend on it. However, when building that project we also create a separate assembly at the same time (for example, an all-in-one executable jar) which we only need built locally; it is not something that other projects will depend on.
Turn off the 'attach' option of the assembly plugin. Then it won't officially be an artifact and it won't deploy; it will just lurk in the target directory, sulking that you don't love it as much as its elder sibling, and plot revenge.
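In POM terms that's the attach parameter of the assembly plugin; a sketch (the descriptor path is made up):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <!-- build the assembly, but don't register it as a project artifact,
         so install/deploy will leave it alone -->
    <attach>false</attach>
    <descriptors>
      <descriptor>src/main/assembly/all-in-one.xml</descriptor>
    </descriptors>
  </configuration>
</plugin>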
Regarding your first question, I would like to know why you create the supplemental assembly in the first place, given that it is usually deployed along with the main artifact. If you want to prevent that, you can put the creation of the assembly into a profile, but this means the supplemental artifact will only be generated when that profile is activated, not in your usual build.

How to generate different deployables from the same Maven project?

I have a situation that I'm sure must be fairly common. I have some Maven-built applications that deploy to different types of application server - like Tomcat, JBoss, etc.
The build process 'tunes' the deployable artifact to the specific target type of application server (for example, different included dependencies, context roots, or other config). This tuning is controlled with build profiles (-Ptomcat, -Pjboss, etc.).
So, for a given version of my application, I need to run builds that produce different deployables. I run mvn -Ptomcat clean package for example and I get an artifact in my /target directory that is the tomcat-tuned version.
The best approach I've been able to come up with so far is to give the artifacts finalNames that include the profile information, but with that approach I'm not sure how to configure Maven to copy the final artifact off to some specific location so that the next build for a different type doesn't overwrite it.
Is this a good approach? If so, how can I achieve that final copy?
Or is there a better way?
You'll need to use the Maven Assembly Plugin; a sketch follows below.
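One possible shape for this, assuming the tomcat/jboss profiles each select their own assembly descriptor. The assembly's <id> is appended to the final name as a classifier, so the differently tuned artifacts no longer overwrite each other:

<profiles>
  <profile>
    <id>tomcat</id>
    <properties>
      <assembly.descriptor>src/main/assembly/tomcat.xml</assembly.descriptor>
    </properties>
  </profile>
  <profile>
    <id>jboss</id>
    <properties>
      <assembly.descriptor>src/main/assembly/jboss.xml</assembly.descriptor>
    </properties>
  </profile>
</profiles>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
          <configuration>
            <descriptors>
              <!-- each descriptor's <id> (e.g. "tomcat") becomes the classifier -->
              <descriptor>${assembly.descriptor}</descriptor>
            </descriptors>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Because each assembly gets a distinct classifier, both outputs can coexist in target/ and in the repository.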

Maven best practice for generating artifacts for multiple environments [prod, test, dev] with CI/Hudson support?

I have a project that needs to be deployed into multiple environments (prod, test, dev). The differences mainly consist of configuration properties/files.
My idea was to use profiles and overlays to copy/configure the specialized output. But I'm stuck on whether I should generate multiple artifacts with specialized classifiers (e.g. "my-app-1.0-prod.zip/jar", "my-app-1.0-dev.zip/jar") or create multiple projects, one for every environment.
Should I use maven-assembly-plugin to generate multiple artifacts for every environment ?
Anyway, I'll need to generate all of them at once, so it seems that profiles do not fit... still puzzled :(
Any hints/examples/links will be more than welcomed.
As a side issue, I'm also wondering how to achieve this in CI (Hudson/Bamboo): generating these artifacts for all the environments and deploying them to their proper servers (e.g. using the Hudson SCP plugin)?
I prefer to package configuration files separately from the application. This allows you to run the EXACT same application and supply the configuration at run time. It also allows you to generate configuration files after the fact for an environment you didn't know you would need at build time (e.g. CERT).
I use the "assembly" tool to zip up each domain's config files into named files.
I would use the version element (like 1.0-SNAPSHOT, 1.0-UAT, 1.0-PROD), and thus tags/branches at the VCS level, in combination with profiles (for environment-specific things like machine names, user names, passwords, etc.) to build the various artifacts.
We implemented an m2 plugin to build the final .properties using the following approach:
The common, environment-unaware settings are read from common.properties.
The specific, environment-aware settings are read from dev.properties, test.properties or production.properties, thus overriding default values where necessary.
The final .properties file is written to disk from the resulting Properties instance after reading the files in the given order.
Such a .properties file is what gets bundled, depending on the target environment.
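The answer describes a custom plugin, but a similar read-in-order/override scheme can be sketched with the stock properties-maven-plugin (the ${env} property is an assumption; it would be set per profile or on the command line):

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>properties-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>initialize</phase>
      <goals>
        <goal>read-project-properties</goal>
      </goals>
      <configuration>
        <files>
          <!-- read in order: later files override earlier ones -->
          <file>common.properties</file>
          <file>${env}.properties</file>
        </files>
      </configuration>
    </execution>
  </executions>
</plugin>

Its companion write-project-properties goal can then dump the merged set back out to a single file for bundling.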
We use profiles to achieve that, but we have only two: the default profile, which we call the "development" profile and which includes the configuration files, and a "release" profile, where we don't include the configuration files (so they can be configured properly when the application is installed).
I would use profiles to do it, and I would append the profile in the artifact name if you need to deploy it. I think it is somewhat similar to what Pascal had suggested, only that you will be using profiles and not versions.
PS: Another reason why we have only dev/release profiles is that whenever we send something to UAT or PROD, it has been released, so if there is a bug we can track down the state of the code at the moment the application was released. It is easier to tag it in SVN than to try to reconstruct its state from the commit history.
I had this exact scenario last summer.
I ended up using profiles for each higher environment, with classifiers. The default profile was the "do no harm" development build. I had DEV, INT, UAT, QA, and PROD profiles.
I ended up defining multiple jobs within Hudson to generate the region specific artifacts.
The one thing I would have done differently was to architect the projects a bit differently, so that the region-specific build sat outside the modularized main project. That way it would simply pull in the latest artifacts for each specific build rather than rebuild the entire project for each region.
In fact, when I set up the jobs, the QA and PROD jobs were always set up to build off of a tag. Clearly this is something you would tailor to your specific workplace rules on deployment.
Try using https://github.com/khmarbaise/multienv-maven-plugin to create one main WAR and one configuration JAR for each environment.