So I have a small integration test class that contains 5 tests in total. Running that test class on its own results in all tests passing. However, running my entire test suite results in 4 of the 5 tests failing.
I've just recently upgraded from Grails 1.3.7 to 2.0 and switched from HSQLDB to H2.
Does anyone have any pointers on where I should be looking in order to fix this test-pollution problem?
Domain model
Integration test:
class SeriesIntegrationTests extends GrailsUnitTestCase {

    Series series
    Episode episode

    protected void setUp() {
        super.setUp()
        series = new Series(ttdbId: 2348)
        episode = new Episode(ttdbId: 2983, season: 0, episodeNumber: 0, series: series)
    }

    protected void tearDown() {
        super.tearDown()
    }

    void testCreateSeries() {
        series.save()
        assertFalse("should not have validation errors : $series.errors", series.hasErrors())
        assertEquals("should be one series stored in db", 1, Series.count())
    }

    void testCreateEpisode() {
        series.save()
        episode.save()
        assertFalse("should not have validation errors : $episode.errors", episode.hasErrors())
        assertEquals("should be one episode stored in db", 1, Episode.count())
    }

    void testCreateSeriesAndAddEpisode() {
        series.addToEpisodes(episode)
        series.save(flush: true)
        series.refresh()
        assertEquals("series should contain one episode", 1, series.episodes.size())
    }

    void testDeleteSeriesAndCascadeToEpisode() {
        series.addToEpisodes(episode)
        series.save(flush: true)
        series.delete(flush: true)
        assertEquals(0, Episode.count())
        assertEquals(0, Series.count())
    }

    void testDeleteSeriesAndCascadeToBackdropImage() {
        series.backdrop = new Image()
        series.backdrop.binaryData = new byte[0]
        series.save(flush: true)
        assertFalse(series.hasErrors())
        assertEquals(1, Image.count())
        series.delete(flush: true)
        assertEquals(0, Image.count())
    }
}
I had a similar problem when moving from 1.3.7 to 2.0. The integration tests were ok when launched with
grails test-app --integration
but were failing when launched with
grails test-app
I fixed everything by converting the unit tests to Grails 2.0-style tests (using annotations). Once all the unit tests had been upgraded to the Grails 2.0 way of writing tests, every test passed. So it seems that the old-style unit tests were somehow polluting the integration tests, but only on certain hardware configurations.
Related
In the testng.xml file, I have 10+ test classes (within a test-suite tag) for regression testing. I have ordered the automated tests of several test classes in a particular sequence by using priority=xxx in the @Test annotation. The priority values within a particular class are sequential, but each test class has a different range. For example:
testClass1 : values are from 1-10
testClass2 : values are from 11-23
testClass3 : values are from 31-38
.
.
.
lastTestClass : values are from 10201-10215
The purpose of this is to have a particular sequence in which the 10+ test-classes are executed. There is one test-class that I need to be executed towards the end of the test execution - so, the priorities in that class range from 10201-10215. However, this particular test-class gets tested right after the 1st class with priorities from 1-10.
Instead of using priority, I would recommend using dependencies. They will run your tests in a strict order, never running a dependent test before the test it depends on, even if you are running in parallel.
I understand you have the different ranges in different classes, so in dependsOnMethods you would have to specify the fully qualified name of the test method you are referencing:
@Test( description = "Values are from 1-10")
public void values_1_10() {
    someTest();
}

@Test( description = "Values are from 21-23",
       dependsOnMethods = { "com.project.test.RangeToTen.values_1_10" })
public void values_21_23() {
    someTest();
}
If you have more than one test in each range then you can use dependsOnGroups:
@Test( enabled = true,
       groups = { "group_1_10" },   // the depended-on tests must declare the group
       description = "Values are from 1-10")
public void values_1_10_A() {
    someTest();
}

@Test( enabled = true,
       groups = { "group_1_10" },
       description = "Values are from 1-10")
public void values_1_10_B() {
    someTest();
}

@Test( enabled = true,
       description = "Values are from 21-23",
       dependsOnGroups = { "group_1_10" })
public void values_21_23_A() {
    someTest();
}

@Test( enabled = true,
       description = "Values are from 21-23",
       dependsOnGroups = { "group_1_10" })
public void values_21_23_B() {
    someTest();
}
You can also do the same with more options from the testng.xml:
https://testng.org/doc/documentation-main.html#dependencies-in-xml
Another option you have is to use the "preserve order":
https://www.seleniumeasy.com/testng-tutorials/preserve-order-in-testng
But as Anton mentions, that could bring you trouble if you ever want to run in parallel, so I recommend using dependencies.
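For reference, a minimal testng.xml sketch of that XML-based approach (the suite, test, group, and class names below are placeholders, not taken from the original setup):

<suite name="RegressionSuite">
  <test name="OrderedRanges" preserve-order="true">
    <groups>
      <dependencies>
        <!-- tests in group_21_23 only run after every test in group_1_10 has finished -->
        <group name="group_21_23" depends-on="group_1_10"/>
      </dependencies>
    </groups>
    <classes>
      <class name="com.project.test.RangeToTen"/>
      <class name="com.project.test.RangeTwentyOneToTwentyThree"/>
    </classes>
  </test>
</suite>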
Designing your tests to be run in specific order is a bad practice. You might want to run tests in parallel in future - and having dependencies on order will stop you from doing that.
Consider using TestNG listeners instead:
It looks like you are trying to implement some kind of tearDown process after tests.
If this is the case, you can implement ITestListener and use its onFinish method to run some code after all of your tests have executed.
Also, this TestNG annotation might work for your case:
org.testng.annotations.AfterSuite
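A minimal sketch of both suggestions, assuming TestNG is on the classpath (the class names here are placeholders, not from the question):

import org.testng.ITestContext;
import org.testng.TestListenerAdapter;
import org.testng.annotations.AfterSuite;

// Listener variant: onFinish fires after all test methods in a <test> tag have run.
// Register it via @Listeners on a test class or via <listeners> in testng.xml.
public class CleanupListener extends TestListenerAdapter {
    @Override
    public void onFinish(ITestContext context) {
        // e.g. release shared resources or write a summary here
        System.out.println("Finished test context: " + context.getName());
    }
}

// Annotation variant: runs once after every test in the whole suite has executed.
class SuiteTearDown {
    @AfterSuite
    public void afterWholeSuite() {
        // final cleanup here
    }
}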
Gradle allows me to start multiple JVMs for testing like so:
test {
    maxParallelForks = 10
}
Some of the tests for an application I have require a fake FTP server, which needs a port. This is quite easy to do with one JVM:
test {
    systemProperty 'ftpPort', 10000
}
However, when running in parallel I would need to start 10 fake FTP servers. How do I add a custom system property for each JVM spawned by Gradle?
Something like:
test {
    maxParallelForks 10
    customizeForks { index ->
        systemProperty 'ftpPort', 10000 + index
    }
}
There's no built-in hook to run setup before and after each fork. However, you can override the test task in your project to achieve this behavior (here in Java), as the Test task is simply a class:
import java.util.concurrent.atomic.AtomicInteger;
import org.gradle.api.tasks.testing.Test;
import org.gradle.process.JavaForkOptions;

public class ForkTest extends Test {
    private final AtomicInteger nextPort = new AtomicInteger(10000);

    @Override
    public Test copyTo(JavaForkOptions target) {
        super.copyTo(target);
        // give each forked test JVM its own port
        target.systemProperty("ftpPort", nextPort.getAndIncrement());
        return this;
    }
}
Then in your build.gradle:
task testFork(type: ForkTest) {
    forkEvery = 1
    maxParallelForks = 10
    ...
}
I'm using JMockit 1.14 with JUnit 4.
private void method()
{
    new NonStrictExpectations()
    {
        {
            firstObject.getLock();
            returns(new Lock());

            secondObject.getDetails();
            result = secondObjectDetails;

            secondObject.isAvailable();
            result = true;
        }
    };
}
Is there anything obviously wrong with my code?
I resolved a similar issue (using Android Studio, JUnit 4.12 and JMockit 1.20) by adding
@RunWith(JMockit.class)
on the test case class, with a couple of import changes.
See JMockit documentation: http://jmockit.org/tutorial/Introduction.html#runningTests
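A minimal sketch of what that looks like, reusing the field and types from the question (FirstObject and Lock are assumed to be the questioner's own classes):

import mockit.Mocked;
import mockit.NonStrictExpectations;
import mockit.integration.junit4.JMockit;
import org.junit.Test;
import org.junit.runner.RunWith;

// The JMockit runner makes sure JMockit is initialized before any mocking is attempted.
@RunWith(JMockit.class)
public class FirstObjectTest {
    @Mocked
    FirstObject firstObject;   // type taken from the question

    @Test
    public void returnsStubbedLock() {
        new NonStrictExpectations() {{
            firstObject.getLock();
            result = new Lock();   // Lock is also the questioner's type
        }};
        // ...exercise the code under test that calls firstObject.getLock()...
    }
}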
Does JUnit have an out-of-the-box tool to plot the test results of a suite? Specifically, I am using the Selenium 2 WebDriver, and I want to plot passed vs. failed tests. Secondly, I want to have my test suite continue even with a failed test; how would I go about doing this? I tried researching the topic, but none of the answers fully addresses my question.
Thanks in advance!
Should probably put my code in here as well:
@Test
public void test_Suite() throws Exception {
    driver.get("www.my-target-URL.com");
    test_1();
    test_2();
}

@Test
public void test_1() throws Exception {
    //perform test
    assertTrue(myquery);
}

@Test
public void test_2() throws Exception {
    //perform test
    assertTrue(myquery);
}
If you are using Jenkins as your CI server, there is the JUnit Plugin that lets you publish the results at the end of a test run, and the JUnit Graph to display them.
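For the second part of the question (keeping the suite running past a failed check), one option in plain JUnit 4, not covered by the answer above, is the ErrorCollector rule; a minimal sketch:

import static org.hamcrest.CoreMatchers.is;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class ContinueOnFailureTest {
    // Collects failed checks and reports them all at the end of the test method
    // instead of aborting at the first failure.
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void severalChecksInOneTest() {
        collector.checkThat("first check", 1 + 1, is(2));
        collector.checkThat("second check", 2 + 2, is(4)); // runs even if the first check failed
    }
}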
I am using the sausage framework to run parallelized PHPUnit-based Selenium WebDriver tests through Sauce Labs. Everything is working well until I want to mark a test as skipped via markTestSkipped(). I have tried this via two methods:
Calling markTestSkipped() in the test method itself:
class MyTest
{
    public function setUp()
    {
        //Some set up
        parent::setUp();
    }

    public function testMyTest()
    {
        $this->markTestSkipped('Skipping test');
    }
}
In this case, the test gets skipped, but only after performing setUp, which does a lot of unnecessary work for a skipped test. To top it off, PHPUnit does not track the test as skipped -- in fact it doesn't track the test at all. I get the following output:
Running phpunit in 4 processes with <PATH_TO>/vendor/bin/phpunit
Time: <num> seconds, Memory: <mem used>
OK (0 tests, 0 assertions)
The other method is calling markTestSkipped() in the setUp method:
class MyTest
{
    public function setUp()
    {
        if (!$this->shouldRunTest()) {
            $this->markTestSkipped('Skipping test');
        } else {
            parent::setUp();
        }
    }

    protected function shouldRunTest()
    {
        $shouldrun = //some checks to see if test should be run
        return $shouldrun;
    }

    public function testMyTest()
    {
        //run the test
    }
}
In this case, setUp is skipped, but PHPUnit still fails to track the test as skipped and still returns the output above. Any ideas why PHPUnit is not tracking my skipped tests when they are executed in this fashion?
It looks like, at the moment, there is no support for logging markTestSkipped() and markTestIncomplete() results in PHPUnit when using paratest. More precisely, PHPUnit won't log tests which call markTestSkipped() or markTestIncomplete() when it is invoked with arguments['junitLogfile'] set -- and paratest calls PHPUnit with a junitLogfile.
For more info, see: https://github.com/brianium/paratest/issues/60
I suppose I can hack away at either phpunit or paratest...