I recently discovered that Hudson was not the problem. It was actually Maven itself: the multi-module build was causing the build failure, not Hudson. I just hadn't noticed where the issue actually existed.
Leaving the original question here.
I'm using the failsafe-maven-plugin to run some integration tests. The difference between failsafe and surefire is that failsafe allows failures and does not fail the build.
On my nightly builds there are occasions when a service the integration tests use might be down. In normal builds, the failsafe plugin would let the build continue since the integration tests are allowed to fail. However, Hudson does not seem to respect this: it stops the build and marks it as failed.
I tried to turn the failsafe tests off on nightly builds using -DskipITs, but this appears to fail because I'm in a multi-module build.
Any ideas on how to get Maven to respect that these tests can fail even though they're part of a specific module?
The project structure is as follows:
-parent
\-jar
\-jar (where integration tests run)
\-war
\-ear
You can use profiles to make builds a bit different for different environments (nightly builds, releases, normal developer builds and so on).
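For example (a minimal sketch; the profile id is arbitrary), a profile in the parent POM can set failsafe's skipITs property, so every module skips its integration tests when the profile is active:

<profiles>
  <profile>
    <id>nightly</id>
    <properties>
      <!-- maven-failsafe-plugin reads this property and skips integration tests -->
      <skipITs>true</skipITs>
    </properties>
  </profile>
</profiles>

The nightly job would then build with mvn verify -Pnightly instead of passing -DskipITs on the command line.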
I'd also try updating the Maven version; there have been a few recent fixes related to multi-module builds.
I don't believe your original assumption that failsafe-maven doesn't fail the build is correct. A failed test does not stop the integration-test phase from completing, which is different from the surefire plugin that runs unit tests. This allows the post-integration-test phase to run, so the test environment can be torn down (app server shut down, etc.).
After this, the verify phase is run, which looks at the results of the integration tests. If any of these tests failed, Maven will return a build failure, which Hudson will rightly pick up so your build can be flagged as broken.
Use a Maven profile to turn the verify goal of the maven-failsafe-plugin on or off.
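One way to do that (a sketch, assuming the plugin is already configured in the parent POM and a nightly profile like the one above) is to set failsafe's testFailureIgnore flag in the profile, so the verify goal still reports failures but no longer breaks the build:

<profile>
  <id>nightly</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <configuration>
          <!-- failures still show up in the reports,
               but verify will not fail the build -->
          <testFailureIgnore>true</testFailureIgnore>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>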
I started a project and have about 7 tests in it now, and it already takes more than a minute to execute the whole test suite using gradle test.
From the additional output (--info flag) I can see that the whole Quarkus application, and also dependencies like the MongoDB instance, are restarted for every test class and method.
This is the exact opposite of what the Quarkus documentation says on the testing guide page:
So far in all our examples we only start Quarkus once for all tests. Before the first test is run Quarkus will boot, then all tests will run, then Quarkus will shutdown at the end. This makes for a very fast testing experience however it is a bit limited as you can’t test different configurations.
All the tests are annotated with @QuarkusTest and every test just tests a single endpoint.
I use "pure" kotlin (1.5.21), Quarkus version 2.2.2.Final and gradle 6.9.
Installed features: cdi, config-yaml, jacoco, kotlin, mongodb-client, mongodb-panache-kotlin, narayana-jta, rest-client, rest-client-jackson, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui
Is that normal behaviour? If so, an application with several hundred tests could easily take ~20 minutes or more to run the entire test suite.
I haven't tried Maven yet, so I can't verify that it's not a Gradle-related issue.
While trying to reproduce it with a fresh project, I think I found the issue with my code:
I had also used @QuarkusTestResource with restrictToAnnotatedClass=true on my tests.
This means the configuration and test profiles must be reloaded for each annotated class, and therefore the Quarkus application has to be restarted as well.
Apparently all the DevServices get restarted too (in my case a MongoDB instance, since I'm using the Panache extension), which explains the long runtimes of the tests.
I reorganized my tests a little so they work with the "global" test resources (a WireMockServer in my case).
Now Quarkus only gets started once before the tests, and the total runtime of the gradle test task is acceptable.
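For illustration, the working setup looks roughly like this (a sketch; WireMockTestResource is a placeholder for your own QuarkusTestResourceLifecycleManager):

import io.quarkus.test.common.QuarkusTestResource
import io.quarkus.test.junit.QuarkusTest
import org.junit.jupiter.api.Test

// Without restrictToAnnotatedClass the resource is global:
// Quarkus boots once and every test class reuses the same instance.
@QuarkusTestResource(WireMockTestResource::class)
@QuarkusTest
class OrderEndpointTest {

    @Test
    fun testSingleEndpoint() {
        // ... assertions against the endpoint ...
    }
}

// By contrast, this variant forces a dedicated restart of Quarkus
// (and its DevServices) for the annotated class:
// @QuarkusTestResource(WireMockTestResource::class, restrictToAnnotatedClass = true)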
I would like to create two custom run configuration runners for TestNG. The first would be the default TestNG runner and the second would start Jetty for integration tests before running them. I use CMD+SHIFT+R and CMD+R a lot to run individual tests or a whole class, but it is hard to use this feature when I cannot start my server before an integration test runs.
Is there a way to set up two configurations, so when I run a test in a package that matches something it uses one configuration, otherwise it will use another?
Maven profiles sound like a good tool for the job, yes.
A simple and very common approach is to split your tests into unit tests (plain vanilla Java code) and integration tests (which require other fancy stuff to run).
I see the maven-surefire-plugin supports TestNG, so you are fine there.
Now, to set up Jetty, the second POM at this link describes how to start and stop Jetty in the Maven pre-integration-test and post-integration-test phases.
Then, after you bind the relevant tests to the Maven integration-test phase, you can execute everything (start Jetty -> integration tests -> stop Jetty) via this command:
mvn verify
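The binding itself looks roughly like this (a minimal sketch; groupId, version and configuration depend on the Jetty release you use):

<plugin>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-maven-plugin</artifactId>
  <configuration>
    <stopKey>stop-jetty</stopKey>
    <stopPort>8005</stopPort>
  </configuration>
  <executions>
    <execution>
      <id>start-jetty</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>stop-jetty</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>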
There are other ways to do it but this is a good starting point.
Good luck.
Disclaimer: I'm not at all a skilled Jenkins user.
I was looking for a plugin to run only the tests affected by the latest commits in Maven-built projects, but I could find neither a Maven plugin smart enough to avoid rerunning all tests, nor a Jenkins plugin that does this. Is there a way to achieve this?
Say, for example, I have a project consisting of A.java, testA.java, B.java and testB.java. I have Jenkins configured to run a build and test for each commit. I have just committed a change to A.java, but not to B.java, and there's no dependency from testB.java or B.java to A.java. I don't want Jenkins to run testB.java when it checks out fresh sources. If there was a dependency from B.java or testB.java to A.java, I'd like Jenkins to figure this out and run testB.java too.
Background: running all tests in a project is too time-consuming. Running only a fixed set of tests leaves holes for bugs. Running by hand all tests which might be broken by a change is error-prone.
I have a project 'ABC' with the main code and JUnit tests. I have a requirement to be able to execute the set of unit tests against older versions of the product artifacts.
To meet this requirement I would create a Maven project which only contains the JUnit tests.
Another Maven project builds my product code and places the artifact into the repository.
Now I could launch my tests against any product build by changing the dependency within the JUnit test project.
Is this a good solution? Are there perhaps better solutions to solve this requirement?
I think that's a pretty good approach. You could create a profile for each old version, activate them via the profile name, and test different old versions without having to change the POM file for each run. You could then also run the different profiles on separate schedules on a continuous integration server...
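A sketch of how the test project's POM could look (all coordinates and version numbers here are placeholders):

<properties>
  <!-- default: test against the current product build -->
  <product.version>2.0.0</product.version>
</properties>

<profiles>
  <profile>
    <id>product-1.5</id>
    <properties>
      <product.version>1.5.0</product.version>
    </properties>
  </profile>
</profiles>

<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>product-abc</artifactId>
    <version>${product.version}</version>
  </dependency>
</dependencies>

mvn test -Pproduct-1.5 then runs the unchanged test suite against the older artifact.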
I've tried every way I can think of to test a Grails app using Hudson. I've tried testing with Maven, with the Grails plugin, and with a shell builder; it seems that building via shell is the only thing that works.
Every time I get the same error:
org.hibernate.HibernateException: contains is not valid without active transaction
But if I go to a shell and type
grails test-app
everything runs fine.
Does anyone have any idea on what's going on?
I'm using CentOS with Java 1.6, no slaves, just a simple hudson deploy over Tomcat6.
I've tried both the Maven and the Grails builders; both fail.
Edit: it seems that if I run both unit and integration tests in the same command (either with Grails or with mvn), the integration tests always fail.
Hudson/Jenkins usually just uses the command line for executing Grails plugins (you should be able to confirm that from the build output). You could probably add a pre-build step to dump the environment, so you can see if anything there (or in your own shell) causes it to be fundamentally different.
Otherwise, try to log in as the Hudson user, find the Hudson workspace, and repeat the process manually. That has been the easiest way to debug hard problems like this.
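For the environment dump, a pre-build shell step as simple as

env | sort

gives you output you can diff against the same command run in your own shell.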
Regards