My JUnit report is generated so that all the tests in the class form the Total Test Cases. I want the report to contain the total test cases per method. E.g., my class contains 3 tests, and currently the generated report shows a total of 16. However, I want an individual report per test method:
Test1 has 10 cases, so it should be 10 passed and 0 failed
Test2 has 4 cases, so it should be 2 passed and 2 failed
Test3 has 2 cases, so it should be 2 passed and 0 failed
Do we have an API in qTest that can provide a summary of test cycle execution?
E.g. Passed: 23, Failed: 7, Unexecuted: 10, Running: 2
We need this data for generating a report in our consolidated reporting tool, along with data from some other sources.
Nothing that gives exactly what you ask for, but you could use the API calls below to create it yourself.
You can get the status of all test runs in a project using
GET /api/v3/projects/{projectId}/test-runs/execution-statuses
Or, to get results from a specific test cycle, first find all the test runs in that cycle using
/api/v3/projects/{projectId}/test-runs?parentId={testCycleID}&parentType=test-cycle
(append &expand=descendants to find test runs in containers under the test cycle)
and then get the results of each run individually using
/api/v3/projects/{projectId}/test-runs/{testRunId}/test-logs/last-run
See https://qtest.dev.tricentis.com/
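As a rough sketch of the aggregation step, here is a Python snippet. The base URL, the token handling, and the exact JSON shape of the last-run response are assumptions to be checked against the v3 docs; only the counting logic is the point:

```python
import json
from collections import Counter
from urllib.request import Request, urlopen

BASE = "https://yourinstance.qtestnet.com/api/v3"  # hypothetical qTest instance URL


def fetch_json(url, token):
    """GET a qTest endpoint with a bearer token and decode the JSON body."""
    req = Request(url, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return json.load(resp)


def summarize_statuses(last_runs):
    """Tally the status name of each test run's last log into a summary dict."""
    return dict(Counter(run["status"]["name"] for run in last_runs))


# Aggregating results already fetched, one per test run, from
# /test-runs/{testRunId}/test-logs/last-run (response shape assumed):
sample = [
    {"status": {"name": "Passed"}},
    {"status": {"name": "Passed"}},
    {"status": {"name": "Failed"}},
    {"status": {"name": "Unexecuted"}},
]
print(summarize_statuses(sample))  # {'Passed': 2, 'Failed': 1, 'Unexecuted': 1}
```

Looping `fetch_json` over the test-run IDs returned by the parentId query above and feeding the results to `summarize_statuses` would give the Passed/Failed/Unexecuted/Running breakdown in one dictionary.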
I am generating a Cypress report using mochawesome, which has a default UI.
I need to customize it:
Heading - Test Summary
Number of total passed cases
Number of total failed cases
On clicking the total number of passed test cases, an accordion should open and display each test name and the time it took to run.
On clicking the total number of failed test cases, an accordion should open and display each test name, the time it took to run, and the reason it failed.
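One way to feed a custom summary page is to parse the JSON that mochawesome writes alongside the HTML report. The sketch below assumes a typical report shape (results containing suites, each test carrying title, duration, state, and err); field names should be checked against your mochawesome version:

```python
import json


def summarize_mochawesome(report):
    """Split tests from a mochawesome JSON report into pass/fail buckets
    with the name, duration, and (for failures) the error message."""
    passed, failed = [], []

    def walk(suite):
        for test in suite.get("tests", []):
            entry = {"name": test["title"], "duration_ms": test.get("duration")}
            if test.get("state") == "passed":
                passed.append(entry)
            else:
                entry["reason"] = (test.get("err") or {}).get("message")
                failed.append(entry)
        for child in suite.get("suites", []):
            walk(child)

    for result in report.get("results", []):
        walk(result)
    return {"Total Pass": len(passed), "Total Fail": len(failed),
            "passed": passed, "failed": failed}


# Minimal example report (structure assumed, not real mochawesome output):
report = {"results": [{"suites": [], "tests": [
    {"title": "loads home page", "duration": 120, "state": "passed"},
    {"title": "submits form", "duration": 340, "state": "failed",
     "err": {"message": "expected 200, got 500"}},
]}]}
summary = summarize_mochawesome(report)
print(summary["Total Pass"], summary["Total Fail"])  # 1 1
```

The returned dictionary has exactly the data the accordion needs: counts for the summary heading, and per-test name/duration/reason lists for the expanded views.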
I'm upgrading a project from Spock 1.3 to 2.0, and I've noticed that data-driven tests seem to have an extra test that the IDE is reporting somewhere. For example, the "maximum of two numbers" data-driven example from the documentation shows 4 tests pass when there are only 3 rows:
class MathSpec extends Specification {
    def "maximum of two numbers"() {
        expect:
        Math.max(a, b) == c

        where:
        a | b | c
        1 | 3 | 3
        7 | 4 | 7
        0 | 0 | 0
    }
}
What is going on here?
Firstly, your question is an IntelliJ IDEA question as much as it is a Spock question, because you want to know why parametrised Spock 2 tests look like that in IDEA.
Secondly, the code you posted is different from the code you ran in IntelliJ IDEA. Probably your feature method starts more like this in order to achieve the test iteration naming we see in your screenshot:
def "maximum of #a and #b is #c"() {
    // ...
}
Having established that, next let me remind you of the very first sentence of the Spock 2.0 release notes:
Spock is now a test engine based on the JUnit Platform
This means that in contrast to Spock 1.x which was based on a JUnit 4 runner, Spock 2.0 sports its own JUnit test engine, i.e. the Spock engine is on the same level as the Jupiter engine, both running on the JUnit platform.
The way parametrised tests are reported in IDEA is the same for JUnit 5 tests as for Spock 2 tests:
Test class A
- Test method x
- parametrised method name 0
- ...
- parametrised method name n
- Test method y
- parametrised method name 0
- ...
- parametrised method name n
Test class B
- Test method z
- parametrised method name 0
- ...
- parametrised method name n
...
IDEA is not "reporting an extra test", it simply adds a level of grouping by method name to the test report.
If for example you run this parametrised JUnit 5 test
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

import static org.junit.jupiter.api.Assertions.assertTrue;

public class NumbersTest {

    @ParameterizedTest(name = "{0} is an odd number")
    @ValueSource(ints = {1, 3, 5, -3, 15, Integer.MAX_VALUE}) // six numbers
    void isOdd_ShouldReturnTrueForOddNumbers(int number) {
        assertTrue(Numbers.isOdd(number));
    }

    public static class Numbers {
        public static boolean isOdd(int number) {
            return number % 2 != 0;
        }
    }
}
it looks like this in IDEA:
I.e., what you see is to be expected for JUnit Platform tests.
I have a set of test scenarios (say 10) which I would like to execute against different countries (say 3).
A for loop is not preferred, as the execution time per scenario will be longer and each scenario's pass/fail will have to be managed separately.
Creating a keyword for each test scenario and calling them per country leads to 3 different robot files, one per country, with 10 test cases each (one per scenario); any scenario added or removed means updating 3 files.
The Robot Framework data-driven, template-based approach appears to support one test scenario per robot file: it uses a data file and dynamically executes one data entry as one test case. This leads to 10 robot files, one per test scenario, and any new test scenario means a new robot file.
Is there any way to include more test scenarios in the Robot data-driven approach?
Is there any other approach you would suggest for iterative execution of scenarios against a data set, where each iteration's results are captured separately?
My first recommendation would be Templates with for loops. This way you do not have to manage failures: each iteration is independent from the others, and every data set entry is executed with the template. Note that if one iteration fails the whole test case will be marked as failed, but you will be able to check which iteration failed.
Here is the code for the above example:
*** Variables ***
@{COUNTRIES}    USA    UK

*** Test Cases ***
Test Scenario 1
    [Template]    Test Scenario 1 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

Test Scenario 2
    [Template]    Test Scenario 2 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

Test Scenario 3
    [Template]    Test Scenario 3 Template
    FOR    ${country}    IN    @{COUNTRIES}
        ${country}
    END

*** Keywords ***
Test Scenario 1 Template
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'UK'    Fail    Simulate failure.

Test Scenario 2 Template
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'USA'    Fail    Simulate failure.

Test Scenario 3 Template
    [Arguments]    ${country}
    Log    ${country}
The other option is based on this answer and it generates test cases dynamically during run time. Only a small library that also acts as a listener is needed. This library can have a start_suite method that will be invoked and it will get the suite(s) as Python object(s), robot.running.model.TestSuite. Then you could use this object along with Robot Framework's API to create new test cases programmatically.
DynamicTestLibrary.py:
from robot.running.model import TestSuite


class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.top_suite = None

    def _start_suite(self, suite, result):
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, name, keyword, *args):
        tc = self.top_suite.tests.create(name=name)
        tc.keywords.create(name=keyword, args=args)

    def add_test_matrix(self, data_set, test_scenarios):
        for data in data_set:
            for test_scenario in test_scenarios:
                self.add_test_case(f'{test_scenario} - {data}', test_scenario, data)


globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward-incompatible changes made in the 4.0 release (the running and result models have changed), the add_test_case function should be changed as below if you are using version 4.0 or newer.
def add_test_case(self, name, keyword, *args):
    tc = self.top_suite.tests.create(name=name)
    tc.body.create_keyword(name=keyword, args=args)
In the robot file add a Suite Setup in which you can call the Add Test Matrix keyword with the list of countries and test scenarios to generate a test case for each combination. This way there will be an individual test case for each country - test scenario pair while having everything in one single file.
test.robot:
*** Settings ***
Library        DynamicTestLibrary
Suite Setup    Generate Test Matrix

*** Variables ***
@{COUNTRIES}    USA    UK

*** Test Cases ***
Placeholder test
    [Documentation]    Placeholder test to prevent empty suite error. It will be removed from execution during the run.
    No Operation

*** Keywords ***
Generate Test Matrix
    ${test scenarios}=    Create List    Test Scenario 1    Test Scenario 2    Test Scenario 3
    DynamicTestLibrary.Add Test Matrix    ${COUNTRIES}    ${test scenarios}

Test Scenario 1
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'UK'    Fail    Simulate failure.

Test Scenario 2
    [Arguments]    ${country}
    Log    ${country}
    Run Keyword If    $country == 'USA'    Fail    Simulate failure.

Test Scenario 3
    [Arguments]    ${country}
    Log    ${country}
This will be the output on the console:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Test Scenario 1 - USA | PASS |
------------------------------------------------------------------------------
Test Scenario 2 - USA | FAIL |
Simulate failure.
------------------------------------------------------------------------------
Test Scenario 3 - USA | PASS |
------------------------------------------------------------------------------
Test Scenario 1 - UK | FAIL |
Simulate failure.
------------------------------------------------------------------------------
Test Scenario 2 - UK | PASS |
------------------------------------------------------------------------------
Test Scenario 3 - UK | PASS |
------------------------------------------------------------------------------
Test | FAIL |
6 critical tests, 4 passed, 2 failed
6 tests total, 4 passed, 2 failed
==============================================================================
I have many test methods added to my project. I want to test by changing the order of the tests each time.
For example, if I have 3 test methods, I want to run the 6 orderings:
i) 1 2 3
ii) 1 3 2
iii) 2 1 3
iv) 3 1 2
v) 2 3 1
vi) 3 2 1
How can I achieve this in Xcode?
Note: test 1 creates setup state (input, UI, and the like) for test 2. That's why I need to keep a sequence.
Tests should not have any effect on state at all, and they should not depend on previous state either.
Running 1, 2, 3 should have the exact same results as running 3, 2, 1.
Before running, each test should set up the required conditions that it is testing.
After running, each test should tear down the system so that there is nothing hanging around for the next test.
So, in answer to your question, I don't know if it's possible to specify an order... but you really shouldn't care what order they run in. If you do then that's a sign that your tests are not independent of each other.
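To illustrate the set-up/tear-down discipline in a runnable form (a sketch in Python's unittest rather than XCTest, since the principle is identical), each test below builds the exact state it needs and discards it afterwards, so the suite passes in any order:

```python
import io
import unittest


class AccountTests(unittest.TestCase):
    """Each test arranges its own fixture, so execution order is irrelevant."""

    def setUp(self):
        # Arrange exactly the state this test needs; nothing is inherited
        # from a previously run test.
        self.balance = 100

    def tearDown(self):
        # Drop the state so nothing leaks into the next test.
        self.balance = None

    def test_deposit(self):
        self.balance += 50
        self.assertEqual(self.balance, 150)

    def test_withdraw(self):
        self.balance -= 30
        self.assertEqual(self.balance, 70)


# Run the two tests in both orders; both runs succeed.
for order in (["test_deposit", "test_withdraw"],
              ["test_withdraw", "test_deposit"]):
    suite = unittest.TestSuite(AccountTests(name) for name in order)
    result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
    print(order, result.wasSuccessful())  # True for both orders
```

If a test only passes when its predecessor has run, the missing piece belongs in setUp (or, in XCTest, in setUp()/tearDown() overrides), not in the ordering.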