Spock 2.0 is reporting an extra test for data-driven tests - intellij-idea

I'm upgrading a project from Spock 1.3 to 2.0, and I've noticed that the IDE seems to report an extra test for data-driven tests. For example, the "maximum of two numbers" data-driven example from the documentation shows 4 tests passing when there are only 3 data rows:
class MathSpec extends Specification {
    def "maximum of two numbers"() {
        expect:
        Math.max(a, b) == c

        where:
        a | b | c
        1 | 3 | 3
        7 | 4 | 7
        0 | 0 | 0
    }
}
What is going on here?

Firstly, your question is an IntelliJ IDEA question as much as it is a Spock question, because you want to know why parametrised Spock 2 tests look like that in IDEA.
Secondly, the code you posted is different from the code you ran in IntelliJ IDEA. Your feature method probably starts more like this, in order to achieve the per-iteration naming you are seeing:
def "maximum of #a and #b is #c"() {
    // ...
}
Having established that, next let me remind you of the very first sentence of the Spock 2.0 release notes:
Spock is now a test engine based on the JUnit Platform
This means that in contrast to Spock 1.x which was based on a JUnit 4 runner, Spock 2.0 sports its own JUnit test engine, i.e. the Spock engine is on the same level as the Jupiter engine, both running on the JUnit platform.
The way parametrised tests are reported in IDEA is the same for JUnit 5 tests as for Spock 2 tests:
Test class A
- Test method x
  - parametrised method name 0
  - ...
  - parametrised method name n
- Test method y
  - parametrised method name 0
  - ...
  - parametrised method name n
Test class B
- Test method z
  - parametrised method name 0
  - ...
  - parametrised method name n
...
IDEA is not "reporting an extra test", it simply adds a level of grouping by method name to the test report.
If for example you run this parametrised JUnit 5 test
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

import static org.junit.jupiter.api.Assertions.assertTrue;

public class NumbersTest {
    @ParameterizedTest(name = "{0} is an odd number")
    @ValueSource(ints = {1, 3, 5, -3, 15, Integer.MAX_VALUE}) // six numbers
    void isOdd_ShouldReturnTrueForOddNumbers(int number) {
        assertTrue(Numbers.isOdd(number));
    }

    public static class Numbers {
        public static boolean isOdd(int number) {
            return number % 2 != 0;
        }
    }
}
it is reported with the same per-method grouping in IDEA. In other words, what you see is to be expected for JUnit Platform tests.

Related

In Robot Framework, how can I execute multiple test cases in a data-driven method

I have a set of test scenarios (say 10) which I would like to execute against different countries (say 3).
A for loop is not preferred, as execution time per scenario will be longer and each scenario's pass/fail will have to be managed.
Creating a keyword for each test scenario and calling them per country:
this leads to 3 different robot files, one per country, with 10 test cases each
any scenario added or removed means updating 3 files
The robot data-driver, template-based approach appears to support one test scenario per robot file. It uses a data file and dynamically executes one data entry as one test case:
this leads to 10 robot files, one per test scenario
any new test scenario is a new robot file
Is there any way to include more test scenarios in the robot data-driven approach?
Is there any other approach you would suggest for iterative execution of scenarios against a data set, where each iteration's results are captured separately?
My first recommendation would be templates with FOR loops. This way you do not have to manage failures; each iteration is independent of the others. Every data set is executed with the template. Note that if one iteration fails, the whole test case will be marked as failed, but you will be able to check which iteration has failed.
Here is the code for the above example:
*** Variables ***
@{COUNTRIES}    USA    UK

*** Test Cases ***
Test Scenario 1
    [Template]    Test Scenario 1 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

Test Scenario 2
    [Template]    Test Scenario 2 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

Test Scenario 3
    [Template]    Test Scenario 3 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

*** Keywords ***
Test Scenario 1 Template
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'UK'    Fail    Simulate failure.

Test Scenario 2 Template
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'USA'    Fail    Simulate failure.

Test Scenario 3 Template
    [Arguments]    ${country}
    Log    ${country}
The other option is based on this answer, and it generates test cases dynamically at run time. Only a small library that also acts as a listener is needed. This library can have a start_suite method that will be invoked with the suite(s) as Python objects (robot.running.model.TestSuite). You can then use this object along with Robot Framework's API to create new test cases programmatically.
DynamicTestLibrary.py:
from robot.running.model import TestSuite


class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.top_suite = None

    def _start_suite(self, suite, result):
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, name, keyword, *args):
        tc = self.top_suite.tests.create(name=name)
        tc.keywords.create(name=keyword, args=args)

    def add_test_matrix(self, data_set, test_scenarios):
        for data in data_set:
            for test_scenario in test_scenarios:
                self.add_test_case(f'{test_scenario} - {data}', test_scenario, data)


globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward-incompatible changes made in the 4.0 release (the running and result models have been changed), the add_test_case function should be changed as below if you are using version 4.0 or above.
def add_test_case(self, name, keyword, *args):
    tc = self.top_suite.tests.create(name=name)
    tc.body.create_keyword(name=keyword, args=args)
In the robot file add a Suite Setup in which you can call the Add Test Matrix keyword with the list of countries and test scenarios to generate a test case for each combination. This way there will be an individual test case for each country - test scenario pair while having everything in one single file.
test.robot:
*** Settings ***
Library        DynamicTestLibrary
Suite Setup    Generate Test Matrix

*** Variables ***
@{COUNTRIES}    USA    UK

*** Test Cases ***
Placeholder test
    [Documentation]    Placeholder test to prevent an empty-suite error. It will be removed from execution during the run.
    No Operation

*** Keywords ***
Generate Test Matrix
    ${test scenarios}=    Create List    Test Scenario 1    Test Scenario 2    Test Scenario 3
    DynamicTestLibrary.Add Test Matrix    ${COUNTRIES}    ${test scenarios}

Test Scenario 1
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'UK'    Fail    Simulate failure.

Test Scenario 2
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'USA'    Fail    Simulate failure.

Test Scenario 3
    [Arguments]    ${country}
    Log    ${country}
This will be the output on the console:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Test Scenario 1 - USA | PASS |
------------------------------------------------------------------------------
Test Scenario 2 - USA | FAIL |
Simulate failure.
------------------------------------------------------------------------------
Test Scenario 3 - USA | PASS |
------------------------------------------------------------------------------
Test Scenario 1 - UK | FAIL |
Simulate failure.
------------------------------------------------------------------------------
Test Scenario 2 - UK | PASS |
------------------------------------------------------------------------------
Test Scenario 3 - UK | PASS |
------------------------------------------------------------------------------
Test | FAIL |
6 critical tests, 4 passed, 2 failed
6 tests total, 4 passed, 2 failed
==============================================================================

Change of JUnit Report Format

My JUnit report is generated in a way that all the tests in the class form the total test cases. I want the report to contain the total test cases per the different methods. E.g. my class contains 3 tests and currently, when the report is generated, it shows a total of 16. However, I want an individual report per test method.
Test1 has 10 cases, so it should be 10 passed and 0 failed
Test2 has 4 cases, so it should be 2 passed and 2 failed
Test3 has 2 cases, so it should be 2 passed and 0 failed

Geb, Spock, Gradle and maxParallelForks

I am having some trouble understanding an issue we are having with our Geb/Spock tests. We are using gradle and we are trying to run our tests in parallel. As I understand it, the maxParallelForks property in gradle will run test classes in separate JVMs.
The issue I am running into is when I have 6 test classes and I set maxParallelForks to 4: when the test starts, I will get 4 test classes running in parallel. Awesome! But the final 2 classes are where the problem is. Let's say out of the first 4 classes running, 2 of the classes are done in 1 minute and 2 are done in 5 minutes. What I'm seeing is that instead of the first 2 finishing and starting the next 2 classes, Gradle seems to wait until the last 2 long-running classes finish before spinning up the other forks. This is way less than ideal.
Am I misunderstanding something or am I missing a property somewhere? This is what I have in my build.gradle:
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    forkEvery = 1
}
Classes are assigned to forks for execution up front, not on a polling basis. So the first two forks will get two classes assigned up front and the other two one each, regardless of how long each of these classes takes to finish. In the worst-case scenario, two of the longest-running classes will be assigned to a single fork. This is how it works: classes are split into groups, and then separate test JVMs (forks) are spun up, each with its own list of classes to execute.
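To see why up-front assignment matters, here is a minimal Python sketch (not Gradle's actual scheduler; the class names and durations are made up) comparing a fixed up-front split with a hypothetical "next class goes to the first free fork" dispatch:

```python
# Hypothetical class durations, in minutes: two slow classes, four quick ones.
durations = {"Long1": 5, "Quick1": 1, "Quick2": 1, "Quick3": 1,
             "Long2": 5, "Quick4": 1}

def static_partition(classes, forks):
    """Up-front split: every fork gets its bucket before anything runs,
    so total wall time is the duration of the slowest bucket."""
    buckets = [[] for _ in range(forks)]
    for i, name in enumerate(classes):
        buckets[i % forks].append(name)  # one possible fixed assignment
    return max(sum(durations[c] for c in bucket) for bucket in buckets)

def dynamic_dispatch(classes, forks):
    """Hypothetical alternative: hand the next class to whichever fork
    becomes free first. This is NOT what Gradle does."""
    free_at = [0] * forks
    for name in classes:
        i = free_at.index(min(free_at))  # earliest-free fork
        free_at[i] += durations[name]
    return max(free_at)

classes = list(durations)
print(static_partition(classes, 4))  # 10 -- both long classes land in one fork
print(dynamic_dispatch(classes, 4))  # 6  -- the long classes run in parallel
```

With these example durations, the fixed split can put both long-running classes into the same bucket, which matches the behaviour described in the question.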
On a side note - you don't want to do forkEvery = 1 - this will restart your test jvms after each test class slowing your test execution down for no benefit.
Using JUnit suites, you can decide which set of classes needs to be picked up by a particular fork.
import org.junit.runner.RunWith
import org.junit.runners.Suite

@RunWith(Suite.class)
@Suite.SuiteClasses([
    TimeTaking.class,          // class that takes a lot of time
    NotSoMuchTimeTaking.class, // class that is quick
    // Add more test classes which need to be executed in the same fork.
])
public class FirstTestSuite { // keep this empty
}
Similarly, create a SecondTestSuite, and so on.
In addition to the above steps, include the *TestSuite classes in your build.gradle:
tasks.withType(Test) {
    systemProperties System.properties
    maxParallelForks = 4
    forkEvery = 1
    include '**/*TestSuite*.class'
}
This way, you will be able to control your execution and decide which test classes need to be executed in what order.

Execute UITests In Different Order

I have many test methods added to my project. I want to test by changing the order of the tests each time.
For example, if I have 3 test methods, I want to run 6 tests like:
i) 1 2 3
ii) 1 3 2
iii) 2 1 3
iv) 3 1 2
v) 2 3 1
vi) 3 2 1
How can I achieve this in Xcode?
Note: what I mean is that test 1 creates setup state for test 2 (input, the UI, things like that). That's why I need to keep a sequence.
Tests should not have any effect on state at all. And they should not depend on previous state either.
Running 1, 2, 3. Should have the exact same results as running 3, 2, 1.
Before running, each test should set up the required conditions that it is testing.
After running, each test should tear down the system so that there is nothing hanging around for the next test.
So, in answer to your question, I don't know if it's possible to specify an order... but you really shouldn't care what order they run in. If you do then that's a sign that your tests are not independent of each other.
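The same principle, sketched with Python's unittest rather than XCTest (the cart example is hypothetical): each test builds its own state in setUp and discards it in tearDown, so the suite passes regardless of execution order:

```python
import unittest

class CartTest(unittest.TestCase):
    def setUp(self):
        # Every test starts from a fresh, empty cart; no test relies on
        # state left behind by a previous test.
        self.cart = []

    def tearDown(self):
        # Drop the state so nothing leaks into the next test.
        self.cart = None

    def test_add_item(self):
        self.cart.append("book")
        self.assertEqual(len(self.cart), 1)

    def test_empty_cart(self):
        # Would fail if test_add_item's item leaked through.
        self.assertEqual(self.cart, [])

# Run the two tests in both orders: the result is the same either way.
for order in (["test_add_item", "test_empty_cart"],
              ["test_empty_cart", "test_add_item"]):
    suite = unittest.TestSuite(CartTest(name) for name in order)
    assert unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

XCTest's setUp/tearDown overrides play exactly the same role.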

How can you "parameterize" Clojure Contrib's test-is?

Both JUnit and TestNG provide mechanisms for iterating over a collection of input parameters and running your tests against them. In JUnit this is supported via the Parameterized runner, while TestNG uses @DataProvider.
How can you write data-driven tests using the test-is library? I tried using a for list comprehension to iterate over an input parameter collection, but because deftest is a macro, it's expecting is clauses.
From reading the article on parameterized tests in JUnit, it seems that once you get past the boilerplate, the cool part of parameterization is that it lets you type this:
return Arrays.asList(new Object[][] {
    { 2, true },
    { 6, false },
    { 19, true },
    { 22, false }
});
and easily define four tests.
In test-is the equivalent macro (no boilerplate code required) is are:
(are [n prime?] (= prime? (is-prime n))
  3 true
  8 false)
If you want to give your inputs as a map then you could run something like:
(dorun (map (fn [[n prime?]] (is (= prime? (is-prime n))))
            {3 true, 8 false}))
though the are macro will produce more easily read output.
Not sure I understand the point of parameterized tests, but I would use dynamic binding for this.
user> (def *test-data* [0 1 2 3 4 5])
#'user/*test-data*
user> (deftest foo
        (doseq [x *test-data*]
          (is (< x 4))))
#'user/foo
user> (run-tests)
Testing user
FAIL in (foo) (NO_SOURCE_FILE:1)
expected: (< x 4)
actual: (not (< 4 4))
FAIL in (foo) (NO_SOURCE_FILE:1)
expected: (< x 4)
actual: (not (< 5 4))
Ran 1 tests containing 6 assertions.
2 failures, 0 errors.
nil
user> (defn run-tests-with-data [data]
        (binding [*test-data* data] (run-tests)))
#'user/run-tests-with-data
user> (run-tests-with-data [0 1 2 3])
Testing user
Ran 1 tests containing 4 assertions.
0 failures, 0 errors.
nil
You could rewrite deftest and run-tests yourself. It'd be maybe a dozen lines of Clojure to let tests accept parameters in some other way.
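For comparison, the same "generate one test per data row" idea can be sketched in Python with unittest (is_prime and the data rows are made up for illustration); this is roughly what a hand-rolled parameterised deftest would expand into:

```python
import unittest

def is_prime(n):
    """Naive primality check, for illustration only."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

ROWS = [(2, True), (6, False), (19, True), (22, False)]

class PrimeTest(unittest.TestCase):
    pass

def make_test(n, expected):
    # Close over one data row and return a test method for it.
    def test(self):
        self.assertEqual(is_prime(n), expected)
    return test

# Attach one named test method per row; each row shows up as its own test.
for n, expected in ROWS:
    setattr(PrimeTest, f"test_is_prime_{n}", make_test(n, expected))
```

Running the class now reports four independent tests, one per data row, just as the are macro reports one assertion per row.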