I am trying out googletest.
Previously I was using Boost.Test, where I used the BOOST_AUTO_TEST_SUITE macro to group my tests into a test suite.
This makes the JUnit reports much more readable.
I have not found any hint of how to do this, or something similar, in googletest. Is it possible?
I use the first parameter of the call to TEST() or TEST_F() as sort of a "test suite" identifier, like this:
#include <gtest/gtest.h>

TEST(TestSuiteName, shouldExpectTrue) {
    EXPECT_TRUE(true);
}

TEST(TestSuiteName, shouldExpectFalse) {
    EXPECT_FALSE(false);
}
Of course, when using a fixture class with TEST_F(), your TestSuiteName will need to match the name of your fixture class, so it will be necessary to create a separate fixture class for each test suite.
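For example, a minimal sketch of a fixture-based suite (the fixture name CalculatorTest and its member are illustrative):
#include <gtest/gtest.h>

// Hypothetical fixture; the first argument of TEST_F() must match this class name.
class CalculatorTest : public ::testing::Test {
protected:
    void SetUp() override { value = 41; }
    int value = 0;
};

TEST_F(CalculatorTest, shouldStartFromSetUpValue) {
    EXPECT_EQ(41, value);
}

TEST_F(CalculatorTest, shouldIncrement) {
    EXPECT_EQ(42, value + 1);
}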
There is no way that I know of to break the test suites into sub-suites or anything like that, but of course you could always run your tests multiple times using the --gtest_filter="someFilter" option if you wanted to clean up your output.
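For instance, assuming your test binary is called my_tests (a hypothetical name), you could run a single suite like this:
./my_tests --gtest_filter=TestSuiteName.*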
I have been working with pytest for some time (and really like it, I must say). I have been able to generate a self-contained HTML report with additional columns etc. What I need is either:
Have the results displayed in the order in which they are run (not failed-first, as they normally appear in the self-contained HTML output)
OR
Print the order in which the tests are run. I am using @pytest.mark.run(order=123456) in my tests.
The order is important, as there are dependent tests that need to be executed in a certain sequence.
I'm not sure about pytest, but I worked with pyhtml and had a similar problem.
I would say you should use a "yield" operator in the function you are calling from your pyhtml part. If you are calling the pytest function directly from your pyhtml part, you should be able to write a new function that calls pytest for you.
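If the goal is just to print the order in which the tests run, a minimal sketch using pytest's standard hook API in a conftest.py might look like this (the counter is illustrative):
# conftest.py
import itertools

_counter = itertools.count(1)

def pytest_runtest_logstart(nodeid, location):
    # Called by pytest just before each test starts; print its position.
    print("running test #%d: %s" % (next(_counter), nodeid))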
@Peter - As per your suggestion on my previous queries, I have used ExecutionHooks to implement ReportPortal. I am finding it difficult to pass all the required values from my runner to the base runner. Below is my configuration:
BaseRunner.java
Results results = Runner.parallel(tags, path, ScenarioName,
        Collections.singletonList(new ScenarioReporter()), threads, karateOutputPath);
Runner.java
@KarateOptions(tags = { "@Shakedown" },
        features = "classpath:tests/Shakedown")
I want to understand how I can pass attributes like the scenario name, path, and tags. ScenarioReporter() is my class where I have implemented ExecutionHook. I have a base runner that has all the details and a normal runner that has minimal information. I have just given snippets, please don't mind if there are some syntactical errors.
You don't need the annotations any more, and you can set all parameters, including tags, using the new "builder" (fluent interface) on the Runner. Refer to the docs: https://github.com/intuit/karate#parallel-execution
Results results = Runner.path("classpath:some/package").tags("~#ignore").parallel(5);
So it should be easier to inherit from base classes etc.; just figure out a way to pass a List<String> of tags and use it, as in the sketch below.
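For example, a hedged sketch of a base runner built on the fluent API (the class and method names here are hypothetical, and it assumes the 0.9.6 builder accepts a List<String> of tags and an ExecutionHook via hook()):
import java.util.List;
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public abstract class BaseRunner {

    // Subclasses supply only the minimal details; the base runner wires in the hook.
    protected Results runSuite(String path, List<String> tags, int threads) {
        return Runner.path(path)              // e.g. "classpath:tests/Shakedown"
                .tags(tags)                   // e.g. a list containing "@Shakedown"
                .hook(new ScenarioReporter()) // your ExecutionHook implementation
                .parallel(threads);
    }
}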
Just watch out for this bug, fixed in 0.9.6.RC1: https://github.com/intuit/karate/issues/1061
I tried ReportNG, but it is not updating the report now, and I found from this answer that ReportNG is no longer in use.
I want to create a test report / customize the TestNG report to give to the development team. I used a hybrid framework for creating the project and followed this tutorial.
Yes, you can customize the TestNG reports using Listeners and Reporters. Here is the link to the documentation. It is not clear from the question what type of customization you want to do.
But I want to suggest better alternatives for reporting here. There are two widely used libraries that are generally paired with Selenium:
Allure Test Report
Extent Reports
I have not used Allure test reports, but it seems to be good and widely used in the community. I have used Extent Reports in two projects and am really happy with it. Anshoo Arora has done a remarkable job. The documentation is very good, with lots of examples and code snippets. I would highly recommend it.
To customize the Selenium TestNG report, you can use TestNG listeners:
ITestListener: log the result/screenshot on test pass/fail/skip (see the sketch after this list).
IReporter: generate an HTML report from the XML suite results and logs.
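For instance, a minimal sketch of such a listener; extending TestListenerAdapter means you only override the callbacks you need (the class name and log messages are illustrative):
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Hypothetical listener; register it via <listeners> in testng.xml
// or the @Listeners annotation.
public class ResultLogger extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult result) {
        System.out.println("PASSED: " + result.getName());
    }

    @Override
    public void onTestFailure(ITestResult result) {
        // A real implementation might capture a screenshot here.
        System.out.println("FAILED: " + result.getName());
    }

    @Override
    public void onTestSkipped(ITestResult result) {
        System.out.println("SKIPPED: " + result.getName());
    }
}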
But as an alternative you can use qaf-reporting.
It provides detailed live reporting (you don't need to wait for the complete execution).
I know this is an old thread, but these reports can be edited and custom reports can be made as below. I have also explained here how TestHTMLReporter can be edited, and if you would like to know how the index.html report is customized, have a look here, where I have explained it in detail.
For your custom report you'd have to implement IReporter, extend TestListenerAdapter, and override the generateReport method if you want to implement a custom TestHTMLReporter. For other reporters you may have to do things a bit differently, but the concept will remain the same. You'd achieve a custom TestHTMLReporter like below.
Create a CustomReport.java file in your project, copy-paste the whole content of TestHTMLReporter.java, and change the file name in the getOutputFile method; it would look like below:
public class CustomReport extends TestListenerAdapter implements IReporter {

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites,
            String outputDirectory) {
    }

    ...
    // paste the content of TestHTMLReporter.java here
    ...
}
Make sure all the imports from TestHTMLReporter.java are in place.
Now, change this file as per your requirements. For example, if you'd like to add the end time of each test, then at the correct place in the generateTable method add the snippet below:
// Test class
String testClass = tr.getTestClass().getName();
long testMillis = tr.getEndMillis();
String testMillisString = Long.toString(testMillis);
if (testClass != null) {
    pw.append("<br>").append("Test class Name: ").append(testClass);
    // this line adds the end time in ms
    pw.append("<br>").append("End Time(ms): ").append(testMillisString);
    // Test name
    String testName = tr.getTestName();
    if (testName != null) {
        pw.append(" (").append(testName).append(")");
    }
}
Then the end time will show up in the generated report. You'll get two reports: one with the default name and the other with your file name.
The only thing that now remains is switching off the default reporting listeners, so that you get only one report. For that you can follow this, or you may search for solutions. Hope this helps.
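One way to switch the default listeners off, if you launch TestNG programmatically, is a hedged sketch like this (the RunTests class and the suite path are illustrative):
import java.util.Collections;
import org.testng.TestNG;

public class RunTests {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        // Disable the built-in reporters so only the custom one runs.
        testng.setUseDefaultListeners(false);
        testng.addListener(new CustomReport()); // the class from above
        testng.setTestSuites(Collections.singletonList("testng.xml"));
        testng.run();
    }
}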
In PHPUnit, with Yii, is it possible to create multiple fixtures for the same table?
I would like to have different fixture folders to be used with different unit tests, to avoid problems between the various test files.
You can set the fixture folder for each test by adding the following to your test classes:
protected function setUp()
{
    $this->getFixtureManager()->basePath = 'path/to/fixtures';
    parent::setUp();
}
With this, you can have your tests use whichever set of fixtures you want.
Make sure to call parent::setUp(), and to call it after setting the basePath property, as that is what actually loads the fixtures.
See also CDbFixtureManager.
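For example, a hypothetical sketch with two test classes, each loading its own fixture folder for the same table (the class names and paths are illustrative):
class UserCreateTest extends CDbTestCase
{
    public $fixtures = array('users' => 'User');

    protected function setUp()
    {
        // Fixture rows for the "users" table tailored to creation tests.
        $this->getFixtureManager()->basePath = 'protected/tests/fixtures/create';
        parent::setUp();
    }
}

class UserDeleteTest extends CDbTestCase
{
    public $fixtures = array('users' => 'User');

    protected function setUp()
    {
        // A different folder with different rows for the same table.
        $this->getFixtureManager()->basePath = 'protected/tests/fixtures/delete';
        parent::setUp();
    }
}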
Jasmine has iit() and ddescribe(), and Mocha has it.only() and describe.only(), but I can't see any way to get QUnit to focus on running a single test.
The QUnit UI allows you to run a single test, but I can't see how to get this to work inside Karma, because Karma doesn't display the QUnit UI.
Here is my solution. It works fine for me, but YMMV.
Download qunit-karma-setup.js and qunit-karma-launch.js from here.
Then, in your Gruntfile:
config.set({
  frameworks: ['qunit'],
  files: [
    'your-app-files-here/**/*.js',
    'qunit-karma-setup.js',
    'your-tests-here/**/*.js',
    'qunit-karma-launch.js'
  ]
});
Now, you can use the omodule(), otest(), and oasyncTest() functions to run only the selected modules or tests, respectively.
QUnit is not well designed for this, but you can improve it!
The filtering mechanism of QUnit is the validTest function, which reads QUnit.config.filter (a string, the name of a test). See https://github.com/jquery/qunit/blob/master/src/core.js#L820
There are two problems:
it only allows selecting one test,
you need to know the selected test name in advance (because the tests are filtered when being created).
I suggest changing QUnit to:
filter using a custom "filter" function (the default implementation can do what the current validTest does),
filter the tests when executing (meaning, collect all the tests first).
Then implementing inclusive tests will be simple ;-)
I believe you can use only. Taken from the docs:
test('my test 1', function (assert) { ... });
only('my test that will be run exclusively now', function (assert) { ... });
So instead of using the word test, you use only.
Add this at the top of your test file:
var ttest = test;
test = function () {};
Then rename the test you want to run to ttest. It looks clumsy, but it is simple and works well.
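For example, a minimal sketch of the trick in action (the test names are illustrative):
var ttest = test;
test = function () {}; // every remaining test() call becomes a no-op

ttest('the single test that still runs', function (assert) {
  assert.ok(true, 'this assertion executes');
});

test('temporarily disabled test', function (assert) {
  assert.ok(false, 'this never runs');
});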