Recently, I have used Eclipse and the Oracle GlassFish server plugin to deploy my web applications to the server.
However, I would now like to do it in batch mode using Maven 2, and also do some testing before deploying.
I would like to perform the following tasks:
Get required dependencies (if any) from any repository (which shall I use?).
Run unit tests
If the tests are successful, deploy the application
I am running Maven 2 and Glassfish 3.2.1
Can you point me to an example project, including the pom.xml file? Is there any knowledge resource for this kind of thing?
Best regards
The process you describe is very common. Indeed, Maven itself will handle most of it through its build lifecycle. From the Maven documentation:
A Build Lifecycle is Made Up of Phases
Each of these build lifecycles is defined by a different list of build phases, wherein a build phase represents a stage in the lifecycle.
For example, the default lifecycle has the following build phases (for a complete list of the build phases, refer to the Lifecycle Reference):
validate - validate the project is correct and all necessary information is available
compile - compile the source code of the project
test - test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed
package - take the compiled code and package it in its distributable format, such as a JAR.
integration-test - process and deploy the package if necessary into an environment where integration tests can be run
verify - run any checks to verify the package is valid and meets quality criteria
install - install the package into the local repository, for use as a dependency in other projects locally
deploy - done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.
These build phases (plus the other build phases not shown here) are executed sequentially to complete the default lifecycle. Given the build phases above, this means that when the default lifecycle is used, Maven will first validate the project, then will try to compile the sources, run those against the tests, package the binaries (e.g. jar), run integration tests against that package, verify the package, install the verified package to the local repository, then deploy the installed package in a specified environment.
To do all those, you only need to call the last build phase to be executed, in this case, deploy:
mvn deploy
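As for the first two items on your list: Maven resolves dependencies from the central repository (Maven Central) by default, so you usually don't need to choose a repository at all, and the Surefire plugin runs your unit tests during the test phase, failing the build (and therefore skipping packaging and deployment) if a test fails. A minimal sketch, assuming JUnit as the test framework (the version shown is just an example):
<dependencies>
  <!-- fetched automatically from Maven Central; no extra repository configuration needed -->
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.8.2</version> <!-- example version -->
    <scope>test</scope>
  </dependency>
</dependencies>
Any test classes under src/test/java will then be picked up automatically when the test phase runs.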
You should use the Maven GlassFish plugin, and then execute
mvn glassfish:deploy
Here is the full example from the official documentation:
...
<build>
...
<plugins>
...
<plugin>
<groupId>org.glassfish.maven.plugin</groupId>
<artifactId>maven-glassfish-plugin</artifactId>
<version>2.1</version>
<configuration>
<glassfishDirectory>${glassfish.home}</glassfishDirectory>
<user>${domain.username}</user>
<adminPassword>${domain.password}</adminPassword>
<!-- <passFile>path/to/asadmin/passfile</passFile> -->
<autoCreate>true</autoCreate>
<debug>true</debug>
<echo>false</echo>
<terse>true</terse>
<skip>${test.int.skip}</skip>
<domain>
<name>${project.artifactId}</name>
<adminPort>4848</adminPort>
<httpPort>8080</httpPort>
<httpsPort>8443</httpsPort>
<iiopPort>3700</iiopPort>
<jmsPort>7676</jmsPort>
<reuse>false</reuse>
<jvmOptions>
<option>-Djava.security.auth.login.config=${project.build.testOutputDirectory}/login.conf</option>
</jvmOptions>
<properties>
<property>
<name>server.log-service.file</name>
<value>${domain.log.dir}/server.log</value>
</property>
</properties>
<auth>
<realm>
<name>testRealm</name>
<className>com.sun.enterprise.security.auth.realm.file.FileRealm</className>
<properties>
<property>
<name>jaas-context</name>
<value>test</value>
</property>
<property>
<name>file</name>
<value>${project.build.outputDirectory}/keyfile</value>
</property>
</properties>
</realm>
</auth>
<!-- <resourceDescriptor>path/to/resources.xml</resourceDescriptor> -->
<resources>
<connectionFactory>
<jndiName>jms/testQueueConnectionFactory</jndiName>
<type>queueConnectionFactory</type>
<properties>
<property>
<name>UserName</name>
<value>guest</value>
</property>
<property>
<name>Password</name>
<value>guest</value>
</property>
<property>
<name>AddressList</name>
<value>localhost:7676</value>
</property>
</properties>
</connectionFactory>
<jmsTopic>
<jndiName>jms/testTopic</jndiName>
<destinationName>TestTopic</destinationName>
<connectionFactory>
<jndiName>jms/testTopicConnectionFactory</jndiName>
<properties>
<property>
<name>UserName</name>
<value>guest</value>
</property>
<property>
<name>Password</name>
<value>guest</value>
</property>
<property>
<name>AddressList</name>
<value>localhost:7676</value>
</property>
</properties>
</connectionFactory>
</jmsTopic>
<jdbcDataSource>
<name>SomeDS</name>
<type>connectionPoolDataSource</type>
<poolName>SomePool</poolName>
<className>com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource</className>
<description>Some JDBC Connection Pool</description>
<allowNonComponentCallers>false</allowNonComponentCallers>
<validateConnections>true</validateConnections>
<validationMethod>metaData</validationMethod>
<properties>
<property>
<name>portNumber</name>
<value>3306</value>
</property>
<property>
<name>password</name>
<value>somePassword</value>
</property>
<property>
<name>user</name>
<value>someUser</value>
</property>
<property>
<name>serverName</name>
<value>some.host.somewhere</value>
</property>
<property>
<name>databaseName</name>
<value>SomeDB</value>
</property>
</properties>
</jdbcDataSource>
</resources>
</domain>
<components>
<component>
<name>${project.artifactId}</name>
<artifact>${project.build.directory}/artifacts/${project.build.finalName}.war</artifact>
</component>
</components>
</configuration>
</plugin>
...
</plugins>
...
</build>
...
</project>
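If you want the deployment to happen automatically, but only when the unit tests pass, you can also bind the plugin's deploy goal to a lifecycle phase instead of invoking mvn glassfish:deploy by hand. A minimal sketch (same plugin as above; the install phase is just one reasonable choice):
<plugin>
  <groupId>org.glassfish.maven.plugin</groupId>
  <artifactId>maven-glassfish-plugin</artifactId>
  <version>2.1</version>
  <!-- configuration as shown above -->
  <executions>
    <execution>
      <!-- install runs after test and package, so a failing unit test aborts before deployment -->
      <phase>install</phase>
      <goals>
        <goal>deploy</goal>
      </goals>
    </execution>
  </executions>
</plugin>
With that in place, a plain mvn install compiles the code, runs the unit tests, packages the war and, only if all of that succeeded, deploys it to GlassFish.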
I'm assuming you mean GlassFish Server 3.1.2 :-)
Using Maven with GlassFish is covered in the Embedded Server Guide documentation:
http://docs.oracle.com/cd/E26576_01/doc.312/e24932/embedded-server-guide.htm#gijhs
Hope this helps.
Related
I'm familiar with Tomcat/TomEE and testing applications using Arquillian. Now we are switching to Open Liberty. I see there is a module for Arquillian using embedded Open Liberty, but it seems to require an existing Open Liberty installation whose path is provided in the configuration. This makes it non-portable and therefore unsuitable for automated testing, since the installation has to be present at the exact same path. Arquillian and TomEE are self-contained, no installation required. Therefore my question is: why isn't this also possible with Open Liberty? And is this planned for the future?
For reference this is how you use Arquillian with TomEE/Tomcat:
<arquillian xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://jboss.org/schema/arquillian"
xsi:schemaLocation="http://jboss.org/schema/arquillian http://www.jboss.org/schema/arquillian/arquillian_1_0.xsd">
<container qualifier="tomee" default="true">
<configuration>
<property name="httpPort">-1</property>
<property name="stopPort">-1</property>
<property name="users">user=pass</property>
</configuration>
</container>
</arquillian>
As you can see, there is no path to a local installation required to run the tests. The only thing you need to do is to add a couple of Maven dependencies in test scope that pull in TomEE (embedded). If the same would work for Open Liberty that would be great.
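For reference, those test-scoped dependencies are roughly the following (a sketch; the versions shown are only examples):
<!-- embedded TomEE adapter pulled in purely via test scope, no installation path needed -->
<dependency>
  <groupId>org.apache.tomee</groupId>
  <artifactId>arquillian-tomee-embedded</artifactId>
  <version>7.0.5</version> <!-- example version -->
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.jboss.arquillian.junit</groupId>
  <artifactId>arquillian-junit-container</artifactId>
  <version>1.4.1.Final</version> <!-- example version -->
  <scope>test</scope>
</dependency>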
Going further: so the above is how we do automated testing, but it still uses the location.
I see. Regarding not needing any location specified at all, you say:
"The only thing you need to do is to add a couple of Maven dependencies in test scope that pull in TomEE (embedded). If the same would work for Open Liberty that would be great."
So, thinking about it, Maven will put a bunch of classes on the classpath due to the TomEE dependencies, and then the test run will find the appropriate container to run the tests on.
I will raise an issue over on
https://github.com/OpenLiberty/liberty-arquillian/issues/39
to cover the requirement; please feel free to add remarks etc.
If you have a look at https://github.com/OpenLiberty/open-liberty/blob/integration/dev/com.ibm.ws.microprofile.config.1.2_fat_tck/publish/tckRunner/tck/src/test/resources/arquillian.xml
you will see an example arquillian.xml that sets $wlpHome
<property name="wlpHome">${wlp}</property>
from an environment variable $wlp.
('wlp' is short for WebSphere Liberty Profile.)
The wlpHome variable is used in the managed/local container here:
https://github.com/OpenLiberty/liberty-arquillian/blob/42cb523b8ae6596a00f2e1793e460a910d863625/liberty-managed/src/main/java/io/openliberty/arquillian/managed/WLPManagedContainer.java#L224
An example that does this dynamically is the setting of the
system property ${wlp} here:
https://github.com/OpenLiberty/open-liberty/blob/95c266d4d6aa57cf32b589e7c9d8b39888176e91/dev/fattest.simplicity/src/componenttest/topology/utils/MvnUtils.java#L161
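So, as a usage sketch (the installation path here is purely hypothetical), pointing the managed container at an existing installation currently looks like this:
# wlpHome in arquillian.xml resolves ${wlp}, passed here as a system property; the path is hypothetical
mvn verify -Dwlp=/opt/openliberty/wlp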
If you have any more queries please post them...
and hope you love OpenLiberty - it rocks!
Gordon
The result you seem to be trying to achieve is an embedded runtime for Liberty using Arquillian. However, as far as I can see, the Open Liberty team only provides a remote container adapter and a managed container adapter at the moment.
We had a similar need, wanting to run automated integration tests without necessarily having an Open Liberty server in place. We managed to work around this using the liberty-maven-plugin.
The build/testing process would then be:
When running mvn verify, the liberty-maven-plugin generates the specified Open Liberty server that we want to run our tests against.
<plugin>
<groupId>net.wasdev.wlp.maven.plugins</groupId>
<artifactId>liberty-maven-plugin</artifactId>
<version>${version.liberty-maven-plugin}</version> <!-- plugin version -->
<configuration>
<assemblyArtifact> <!-- Liberty server to run test against -->
<groupId>io.openliberty</groupId>
<artifactId>openliberty-runtime</artifactId>
<version>18.0.0.4</version>
<type>zip</type>
</assemblyArtifact>
<configDirectory>src/liberty/${env}/</configDirectory>
<configFile>src/liberty/server.xml</configFile>
<serverName>defaultServer</serverName>
</configuration>
<executions>
<execution>
<phase>prepare-package</phase>
<goals>
<goal>create-server</goal>
</goals>
</execution>
</executions>
</plugin>
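To actually execute the Arquillian tests during mvn verify, a sketch (assuming JUnit-based tests following the usual *IT naming convention) is to add the failsafe plugin alongside it:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.22.1</version> <!-- example version -->
  <executions>
    <execution>
      <goals>
        <!-- *IT tests run in integration-test, after the server was created in prepare-package -->
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>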
Since the liberty-maven-plugin by default installs the Liberty server under the target/ folder, the arquillian.xml for the liberty-managed container can simply point wlpHome there:
<arquillian xmlns="http://jboss.org/schema/arquillian">
<container qualifier="liberty-managed" default="true">
<configuration>
<property name="wlpHome">target/liberty/wlp</property>
<property name="serverName">defaultServer</property>
</configuration>
</container>
</arquillian>
This way we can ensure that a runnable Liberty server configured to our liking always exists in our local environment or, e.g., on our Jenkins CI/CD server, essentially getting the same effect as having an embedded container.
I have a project on WildFly Swarm. I need to use the query cache, so I put the Infinispan dependency in my POM.
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>infinispan</artifactId>
</dependency>
I set up Infinispan in my persistence.xml:
<persistence-unit name="Condominio" transaction-type="JTA">
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<properties>
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.cache.infinispan.entity.expiration.max_idle" value="300000"/>
...
</properties>
</persistence-unit>
It works well; the cache works. But I read the documentation of the Infinispan WildFly Swarm fraction (https://reference.wildfly-swarm.io/fractions/infinispan.html) and I am in doubt whether I should set it up in project-defaults.yml using those configurations.
I don't know which configurations of the Infinispan WildFly Swarm fraction are equivalent to the persistence.xml configurations.
How can I change a .properties file in Maven depending on my profile? Depending on whether the application is built to run on a workstation or in the datacenter, parts of the file my_config.properties change (but not all of it).
Currently I manually change the .properties file within the .war file after Hudson builds each version.
As is often the case, there are several ways to implement this kind of thing, but most of them are variations on the same features: profiles and resource filtering. I'll show the simplest approach.
First, enable filtering of resources:
<project>
...
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
</resource>
</resources>
...
</build>
</project>
Then, declare a placeholder in your src/main/resources/my_config.properties, for example:
myprop1 = somevalue
myprop2 = ${foo.bar}
Finally, declare properties and their values in a profile:
<project>
...
<profiles>
<profile>
<id>env-dev</id>
<activation>
<property>
<name>env</name>
<value>dev</value>
</property>
</activation>
<properties>
<foo.bar>othervalue</foo.bar>
</properties>
</profile>
...
</profiles>
</project>
And run Maven with a given profile:
$ mvn process-resources -Denv=dev
[INFO] Scanning for projects...
...
$ cat target/classes/my_config.properties
myprop1 = somevalue
myprop2 = othervalue
As I said, there are variations on this approach (e.g. you can place the values to filter in dedicated filter files), but this will get you started.
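For example, a sketch of the filter-file variation (the src/main/filters location and file names are just a convention):
<build>
  <filters>
    <!-- e.g. src/main/filters/dev.properties containing: foo.bar=othervalue -->
    <filter>src/main/filters/${env}.properties</filter>
  </filters>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>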
References
Introduction to Build Profiles
Maven Resources Filtering
More resources
A Maven2 multi-environment filter setup
Maven project filtering
Using Maven profiles and resource filtering
Building For Different Environments with Maven 2
I have integrated the Cargo plugin in my Maven 2 project's pom.xml.
During hot deployment I am unable to connect to my Tomcat container, which is available across a proxy. My Maven settings.xml already contains the proxy settings, but Cargo is not picking them up.
I tried defining the proxy settings for the Cargo plugin explicitly, but that didn't work either.
My plugin XML for Cargo is as follows:
<plugin>
<groupId>org.codehaus.cargo</groupId>
<artifactId>cargo-maven2-plugin</artifactId>
<!--<version>1.0.1-alpha-1</version>-->
<version>1.0-beta-1</version>
<configuration>
<container>
<containerId>tomcat6x</containerId>
<type>remote</type>
</container>
<configuration>
<type>runtime</type>
<properties>
<cargo.proxy.host>xxx.xxx.xxx.xxx</cargo.proxy.host>
<cargo.proxy.port>xxxx</cargo.proxy.port>
<cargo.hostname>xxx.xxx.xxx.xxx</cargo.hostname>
<cargo.protocol>http</cargo.protocol>
<cargo.servlet.port>80</cargo.servlet.port>
<cargo.tomcat.manager.url>http://xxx.xxx.xxx.xxx/manager</cargo.tomcat.manager.url>
<cargo.remote.username>xxxxxxx</cargo.remote.username>
<cargo.remote.password>xxxxxxx</cargo.remote.password>
</properties>
</configuration>
<deployer>
<type>remote</type>
<deployables>
<deployable>
<groupId>Test</groupId>
<artifactId>Test</artifactId>
<type>war</type>
<!--
<properties>
<context>optional root context</context>
</properties>
<pingURL>optional url to ping to know if deployable is done or not</pingURL>
<pingTimeout>optional timeout to ping (default 20000 milliseconds)</pingTimeout>
-->
</deployable>
</deployables>
</deployer>
</configuration>
</plugin>
Please help.
Thanks in advance.
Ashish
I may be wrong, but I don't think Cargo supports this. However, since the remote deployer for Tomcat uses the manager application and thus HTTP, try setting the proxy at the JVM level by passing properties on the command line when invoking Maven:
mvn cargo:deploy -Dhttp.proxyHost=<hostname> -Dhttp.proxyPort=<port>
Or use the environment variable MAVEN_OPTS:
export MAVEN_OPTS="-Dhttp.proxyHost=<hostname> -Dhttp.proxyPort=<port>"
Hopefully this proxy issue will be fixed in Cargo 1.1.0.
I have a set of web apps that I manage and that I am trying to move to Maven.
/pom.xml // parent pom
webapp1/pom.xml // configured to point to parent
webapp2/pom.xml // peer of webapp1 and points to parent.
Each of the webapps refers to the parent pom, and they both currently have a Jetty Maven plugin configuration that works.
My question is how do I mount each of the webapps from the parent pom such that mvn jetty:run works in the parent directory?
Edit, in answer to Pascal T:
The issue is not so much that I'm getting an error when I try to run the command from the root pom, but that I'm not sure how to configure it.
For example, the webapp1/pom.xml looks like:
<project>
...
<plugins>
<plugin>
<groupId>org.mortbay.jetty</groupId>
<artifactId>maven-jetty-plugin</artifactId>
</plugin>
</plugins>
...
</project>
Changing to this directory and typing mvn jetty:run works just fine and lets me hit http://localhost:8080/webapp1.
However, what I would like is to be in the parent of webapp1 and run all 'n' webapps from the parent directory, thus having http://localhost:8080/webapp1 and http://localhost:8080/webapp2 available with one command.
By the way, if the answer involved a Tomcat plugin, that would be fine.
EDIT: I've totally edited my first answer now that I have a better understanding of the OP's expectations.
Check out Cargo, a thin wrapper that allows you to manipulate Java EE containers in a standard way.
Actually, there is a tutorial on Cargo's website that demonstrates how to use the Cargo Maven2 plugin to automatically start/stop a container (possibly deploying some deployables to it as it starts), which is what you're looking for from what I've understood.
I'm just not sure whether doing this from the parent directory is feasible, whether it's a requirement, or whether it would be OK to do it from another directory. I'll come back to this later. Let's first take a look at the Cargo Maven2 plugin setup.
In your case, you can start with the minimal configuration (that uses Jetty 5.x which is Cargo's default container):
[...]
<build>
<plugins>
<plugin>
<groupId>org.codehaus.cargo</groupId>
<artifactId>cargo-maven2-plugin</artifactId>
</plugin>
</plugins>
</build>
[...]
If you want to use Jetty 6.x, you'll have to specify <containerId> and <type> in the <container> element:
[...]
<plugin>
<groupId>org.codehaus.cargo</groupId>
<artifactId>cargo-maven2-plugin</artifactId>
<configuration>
<container>
<containerId>jetty6x</containerId>
<type>embedded</type>
</container>
</configuration>
</plugin>
[...]
Then, add the modules you want to deploy by defining deployables explicitly inside the plugin configuration (refer to the Maven2 Plugin Reference Guide for the details of the configuration):
<deployables>
<deployable>
<groupId>com.mycompany.myproject</groupId>
<artifactId>myproject-alpha</artifactId>
<type>war</type>
<properties>
<context>optional alpha root context</context>
</properties>
</deployable>
<deployable>
<groupId>com.mycompany.myproject</groupId>
<artifactId>myproject-beta</artifactId>
<type>war</type>
<properties>
<context>optional beta root context</context>
</properties>
</deployable>
[...]
</deployables>
With this, you should be able to start Jetty and have your webapps deployed on it with a single command (run from the project containing the Cargo plugin configuration):
$ mvn cargo:start
I'm just not sure that this can work with the parent pom (I wonder if this could lead to cyclic dependency issues) and I didn't test it. But personally, I'd put all this stuff in the pom of a dedicated project, e.g. in a sibling project of your webapps, and not in the parent pom. I don't think it's really a big deal, and this is IMHO a better setup, especially if you plan to use Cargo for integration testing.
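Such a dedicated runner project would, roughly, just declare the webapps as war-type dependencies next to the Cargo plugin configuration shown above, since Cargo resolves the deployables listed in its configuration from the project's dependencies. A sketch, reusing the hypothetical coordinates from the example:
<dependencies>
  <!-- the webapps to deploy; Cargo looks these up to resolve the deployables above -->
  <dependency>
    <groupId>com.mycompany.myproject</groupId>
    <artifactId>myproject-alpha</artifactId>
    <version>1.0-SNAPSHOT</version> <!-- example version -->
    <type>war</type>
  </dependency>
  <dependency>
    <groupId>com.mycompany.myproject</groupId>
    <artifactId>myproject-beta</artifactId>
    <version>1.0-SNAPSHOT</version> <!-- example version -->
    <type>war</type>
  </dependency>
</dependencies>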