I would like to create two custom run configuration runners for TestNG. The first would be the default TestNG runner, and the second would start Jetty for integration tests before running them. I use CMD+SHIFT+R and CMD+R a lot to run individual tests or a whole class, but it is hard to use this feature when I cannot start my server before an integration test runs.
Is there a way to set up two configurations, so that when I run a test in a package that matches a pattern it uses one configuration, and otherwise it uses the other?
Maven profiles sound like a good tool for the job, yes.
A simple and very common approach is to split your tests into unit tests (which are plain vanilla Java code) and integration tests (which require other fancy stuff to run).
I see the maven-surefire-plugin supports TestNG, so you are fine there.
Now, to set up Jetty, the second POM at this link describes how to start and stop Jetty in the Maven pre-integration-test and post-integration-test phases.
Then, after you bind the relevant tests to the Maven integration-test phase, you can execute everything (start Jetty -> integration tests -> stop Jetty) via this command:
mvn verify
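For reference, the relevant plugin configuration usually looks something like the sketch below. This is a rough outline only: the exact plugin coordinates and goal names depend on the Jetty version described at that link, and the failsafe plugin shown here is just one way of binding TestNG integration tests to the integration-test phase.

<build>
  <plugins>
    <!-- start Jetty before the integration tests and stop it afterwards -->
    <plugin>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-maven-plugin</artifactId>
      <configuration>
        <stopKey>stop-jetty</stopKey>
        <stopPort>9999</stopPort>
      </configuration>
      <executions>
        <execution>
          <id>start-jetty</id>
          <phase>pre-integration-test</phase>
          <goals><goal>start</goal></goals>
        </execution>
        <execution>
          <id>stop-jetty</id>
          <phase>post-integration-test</phase>
          <goals><goal>stop</goal></goals>
        </execution>
      </executions>
    </plugin>
    <!-- run the TestNG integration tests and check the results during verify -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>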
There are other ways to do it but this is a good starting point.
Good luck.
I started a project and have about 7 tests in it now, and it already takes more than a minute to execute the whole test suite using gradle test.
From the additional output (--info flag) I can see that the whole Quarkus application, and also dependencies like the MongoDB instance, are restarted for every test class and method.
This is the exact opposite of what the Quarkus documentation says on the testing guide page:
So far in all our examples we only start Quarkus once for all tests. Before the first test is run Quarkus will boot, then all tests will run, then Quarkus will shutdown at the end. This makes for a very fast testing experience however it is a bit limited as you can’t test different configurations.
All the tests are annotated with @QuarkusTest and every test just tests a single endpoint.
I use "pure" Kotlin (1.5.21), Quarkus version 2.2.2.Final and Gradle 6.9.
Installed features: cdi, config-yaml, jacoco, kotlin, mongodb-client, mongodb-panache-kotlin, narayana-jta, rest-client, rest-client-jackson, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui
Is that normal behaviour? If so, an application with several hundred tests could easily take ~20 minutes or more to run the entire test suite.
I haven't tried Maven yet, so I can't verify that it's not a Gradle-related issue.
While trying to reproduce it with a fresh project, I think I found the issue with my code:
I also used @QuarkusTestResource with restrictToAnnotatedClass = true on my tests.
This means the configuration and test profiles must be reloaded, and therefore the Quarkus application as well.
Apparently all the DevServices get restarted too (in my case a MongoDB instance, since I'm using the Panache extension), which explains the long runtimes of the tests.
I reorganized my tests a little bit so they work with "global" test resources (a WireMockServer in my case).
Now Quarkus only gets started once before the tests, and the total runtime of the gradle test task is acceptable.
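For anyone hitting the same thing, this is roughly the shape it ended up with. This is a minimal Kotlin sketch only; the resource class name, the WireMock wiring and the config key are illustrative, not my actual code.

import io.quarkus.test.common.QuarkusTestResource
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager
import io.quarkus.test.junit.QuarkusTest

// lifecycle manager started once for the whole test run (WireMock wiring omitted)
class WireMockTestResource : QuarkusTestResourceLifecycleManager {

    override fun start(): Map<String, String> {
        // start the mock server here and hand its URL to the application config
        return mapOf("my-client.url" to "http://localhost:8089")
    }

    override fun stop() {
        // shut the mock server down after all tests have finished
    }
}

// without restrictToAnnotatedClass the resource is global, so Quarkus (and the
// DevServices it pulls in) boots only once for all @QuarkusTest classes
@QuarkusTestResource(WireMockTestResource::class)
@QuarkusTest
class SomeEndpointTest {
    // ...
}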
We have built an automation framework which uses Selenium WebDriver + SpecFlow + NUnit, and we are using Bamboo as our CI to run the job against every build.
We have written a build.xml to handle our targets (clean, init, install latest build, run Selenium scripts, uninstall build, etc.).
The ant command reads the tag name from the build.xml and runs the respective features/scenarios based on tags (like @smoke, @Regression) with NUnit on the CI machine.
Now our requirement is to use Selenium Grid to divide the scripts across different machines and execute them with the above set-up. Grid has to divide the scripts based on feature file or based on tags. How do we achieve this?
Is there anything that needs to be done under [BeforeFeature] and [BeforeScenario]?
If you could provide detailed steps, or a link which explains them, that would be a great help.
Can anyone please help in this regard?
Thanks,
Ashok
You have misunderstood the role Grid plays in distributed parallel testing. It does not "divide the scripts", but simply provides a single hub resource through which multiple tests can open concurrent sessions.
It is the role of the test runner (in your case SpecFlow) to divide tests and start multiple threads.
I believe that you require SpecFlow+ (http://www.specflow.org/plus/), but this does have a license cost.
It should be possible to create your own multithreaded test runner for SpecFlow, but it will require programming and technical knowledge.
If you want a free, open-source approach to parallel test execution in .NET, there is MbUnit (http://code.google.com/p/mb-unit), but this would require you to rewrite your tests.
I have set up Bamboo to run JBehave tests on a remote agent (with the JBehave-web plugin launching tests using WebDriver), and everything runs fine. The only problem is that after the execution is finished, Bamboo shows no tests executed. I can see the option in Bamboo to select the output of the test results, but it has to be JUnit XML, and JBehave reports are only generated in plain text or HTML.
Any idea how to solve this?
Thanks
I ran into the same situation about a year ago. JBehave doesn't integrate with Bamboo out of the box, although they do have a plugin for Hudson CI.
In my case, like yours, I resorted to running the tests through the Surefire plugin; the outputs are treated as JUnit test results and Bamboo can recognize them.
Hope it helps.
There is a really simple way to do this, and I'm currently doing it for our build system.
Write a simple parse script that transforms your plain-text or HTML report into JUnit-compatible results, add that script as a task in your Bamboo job, then use the JUnit parser to pick up the results. Boom! You're done! Plus, you've got the ability to quarantine tests!
This is way faster than writing a plugin for Bamboo, which takes considerably more time to learn and write.
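If it helps, the JUnit format Bamboo's parser expects is very small; something along these lines (the names are only an example) is enough for it to count passed and failed scenarios:

<testsuite name="login.story" tests="2" failures="1">
  <testcase classname="login.story" name="successful login"/>
  <testcase classname="login.story" name="login with wrong password">
    <failure message="expected welcome page but got error page"/>
  </testcase>
</testsuite>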
Set up JBehave with Maven. In the Bamboo build plan, use a Maven task to run it. For getting the results into Bamboo, use the JBehave Task for Bamboo. It will convert JBehave scenarios into tests in Bamboo. If scenario names contain JIRA issue IDs, it will link them to the JIRA issues.
https://marketplace.atlassian.com/plugins/com.mdb.plugins.jebehaveforbamboo/server/overview
A sample JBehave Maven project:
https://bitbucket.org/vikasborse/jbehavesampleproject/overview
Download or clone this repository to your local machine.
To run it, navigate to the project on the command line and use the command:
"mvn integration-test"
I've tried every way I can think of to test a Grails app using Hudson. I've tried testing with Maven, I've tried testing with the Grails plugin, and I've tried testing with a shell builder; it seems that building via the shell is the only thing that works.
Every time I get the same error:
org.hibernate.HibernateException: contains is not valid without active transaction
But if I go to a shell and type
grails test-app
everything runs fine.
Does anyone have any idea on what's going on?
I'm using CentOS with Java 1.6, no slaves, just a simple Hudson deployment on Tomcat 6.
I've tried both the Maven and Grails builders; both fail.
Edit: it seems that if I run both unit and integration tests in the same command (either with grails or with mvn), the integration tests always fail.
Hudson/Jenkins usually just uses the command line for executing Grails plugins (you should be able to confirm that from the build output). You could probably add a pre-build step to dump the environment, so you can see if anything there (or in your own shell) causes it to be fundamentally different.
Otherwise, try logging in as the Hudson user, finding the Hudson workspace, and repeating the process manually. That has been the easiest way to debug hard problems like this.
regards
I recently discovered that Hudson was not the problem. In actuality it was Maven itself, as the multi-module build was causing the build failure, not Hudson. I just hadn't noticed where the issue actually existed.
Leaving the original question here.
I'm using the failsafe-maven-plugin to run some integration tests. The difference between failsafe and surefire is that failsafe allows failures and does not fail the build.
On my nightly builds there are occasions that a service the integration tests use might be down. In normal builds, the failsafe plugin would let the build continue since the integration tests are allowed to fail. However, Hudson does not seem to respect this and stops the build and produces rain.
I tried to turn the failsafe tests off on nightly builds using -DskipITs. This appears to fail since I'm in a multi-module build.
Any ideas on how to get Maven to respect that these tests can fail even though they're part of a specific module?
The project structure is as follows:
-parent
 \-jar
 \-jar (where integration tests run)
 \-war
 \-ear
You can use profiles to make builds a bit different for different environments (nightly builds, releases, normal developer builds and so on).
I'd also try updating the Maven version; there were recently a few fixes related to multi-module builds.
I don't believe your original assumption that failsafe-maven doesn't fail the build is correct. A failed test does not stop the integration-test phase from completing, which is different from the surefire plugin that runs unit tests. This allows the post-integration-test phase to run, so the test environment can be torn down (app server shut down, etc.).
After this, the verify phase is run, which looks at the results of the integration tests. If one of these tests has failed, then Maven will return a build failure, which Hudson will rightly pick up so your build can be flagged as broken.
Use a Maven profile to turn the verify goal of the Maven failsafe plugin on or off.
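A minimal sketch of what that profile could look like in the module that runs the integration tests (the profile id is arbitrary; skipITs is the standard failsafe switch):

<profiles>
  <profile>
    <id>nightly</id>
    <properties>
      <!-- skips both the integration-test and verify goals of the failsafe plugin -->
      <skipITs>true</skipITs>
    </properties>
  </profile>
</profiles>

Activate it with mvn -Pnightly verify on the nightly job. If you would rather keep running the tests and only stop a failed service from breaking the build, setting testFailureIgnore to true on the failsafe plugin inside such a profile is another common variant.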