@Peter - As per your suggestion on my previous queries, I have used ExecutionHooks to implement ReportPortal. I am finding it difficult to pass all the required values from my Runner to the Base Runner. Below is my configuration:
BaseRunner.java
Results results = Runner.parallel(tags, path, ScenarioName,
        Collections.singletonList(new ScenarioReporter()), threads, karateOutputPath);
Runner.java
@KarateOptions(tags = { "@Shakedown" },
        features = "classpath:tests/Shakedown")
I want to understand how I can pass the attributes like scenario name, path and tags. ScenarioReporter is my class where I have implemented the ExecutionHook. I have a base runner that will have all the details and a normal runner that will have minimal information. I have just given snippets, so please don't mind if there are some syntactical errors.
You don't need the annotations any more, and you can set all parameters, including tags, using the new "builder" (fluent interface) on the Runner. Refer to the docs: https://github.com/intuit/karate#parallel-execution
Results results = Runner.path("classpath:some/package").tags("~@ignore").parallel(5);
So it should be easier to inherit from base classes etc. Just figure out a way to pass a List<String> of tags and use it.
Just watch out for this bug, fixed in 0.9.6.RC1: https://github.com/intuit/karate/issues/1061
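For example, here is a minimal sketch of how the pieces could fit together (class and method names are illustrative, and the .hook() builder method for registering your ExecutionHook is assumed to be available in your Karate version):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import java.util.List;

public abstract class BaseRunner {
    // all the common details live here; child runners pass only what varies
    protected Results run(List<String> tags, String path, int threads) {
        return Runner.path(path)
                .tags(tags)                   // the List<String> of tags suggested above
                .hook(new ScenarioReporter()) // your ExecutionHook implementation
                .parallel(threads);
    }
}

A child runner then only needs the minimal information:

import static org.junit.jupiter.api.Assertions.assertEquals;
import com.intuit.karate.Results;
import java.util.Arrays;
import org.junit.jupiter.api.Test;

public class ShakedownRunner extends BaseRunner {
    @Test
    void runShakedown() {
        Results results = run(Arrays.asList("@Shakedown"), "classpath:tests/Shakedown", 5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}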
So I currently have an automation pack that I have created using Selenium/SpecFlow.
I wanted to know whether it is possible to have multiple [BeforeTestRun] hooks.
I've already tried [BeforeTestRun("example1")], but I receive an error stating that BeforeTestRunAttribute does not contain a constructor that takes 1 arguments.
I tried the following but that also failed:
[BeforeTestRun]
[Scope(Tag = "example1")]
And referenced the above in the .feature file like this:
@example1
Scenario: This is an example
Given...
When...
Then...
Is there a way to implement this correctly such that in one .feature file I can have two scenarios that can use different [BeforeTestRun]?
If you cannot use [BeforeScenario] as suggested, you can try to manually check for tags using if statements. To get the current tags and compare them to the ones you need, try this:
var tags = ScenarioContext.Current.ScenarioInfo.Tags; // or an injected ScenarioContext instance
if (tags.Any(x => x.Equals("MyTag"))) // .Any() requires: using System.Linq;
{
    DoWork();
}
More info here: https://stackoverflow.com/a/42417623/9742876
I'd like to enable a test if a certain tag is "included", i.e. passed with the --include-tag option of the ConsoleLauncher or the useJUnitPlatform.includeTags property in Gradle. Is there any API to retrieve the value of this option in the context of a test class or method?
I tried the script-based condition @EnabledIf like this:
@EnabledIf("'true' == systemProperty.get('itest.backendSystemPresent') || junitTags.contains('BackendSystemIT') == true")
But junitTags contains the @Tag annotations of the element in question, not the tags included at runtime.
Reading your question again, my answer is "No". You can't use junitTags to achieve your goals. And no, there's no such API at the moment. You would need something like:
#EnabledIf("'true' == evaluateTagExpression('BackendSystemIT') || ...)
Because you would need to take care of tag expressions here as well: https://junit.org/junit5/docs/current/user-guide/#running-tests-tag-expressions
But tags are evaluated earlier in the process: your condition will not get a chance to be executed if the test was already excluded by tag evaluation. So, I guess, you'll have to stick with the single system property switch to control the enabled state of the test method.
Btw, we are improving the tag expression language with any() and none() tokens soon: https://github.com/junit-team/junit5/issues/1679
Possible solution:
Annotate your test with @Tag("BackendSystemIT")
Before running your tests, check for the itest.backendSystemPresent system property and, if it is set, pass --include-tag "BackendSystemIT" to the test run.
Let Jupiter do the job of evaluating tag expressions (see the sketch below).
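For example, a minimal sketch of the test side, using the tag and property names from the question (the class and method names are illustrative):

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class BackendSystemTests {

    @Tag("BackendSystemIT") // selected at launch time, e.g. --include-tag BackendSystemIT
    @Test
    void talksToRealBackend() {
        // runs only when the launcher includes the tag
    }
}

The conditional part then lives in your build or CI script: add --include-tag "BackendSystemIT" to the ConsoleLauncher invocation (or to includeTags in Gradle) only when itest.backendSystemPresent is set.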
Is there any API to retrieve the value (of this option) of all tags that are attached directly or inherited in the context of a test class or method?
Yes. Declare and use a org.junit.jupiter.api.TestInfo parameter in your test method.
@Test
@DisplayName("TEST 1")
@Tag("my-tag")
void test1(TestInfo testInfo) {
    assertEquals("TEST 1", testInfo.getDisplayName());
    assertTrue(testInfo.getTags().contains("my-tag"));
}
For details see https://junit.org/junit5/docs/current/user-guide/#writing-tests-dependency-injection
But junitTags contains the @Tag annotations of the element in question, not the tags included at runtime.
This is the expected behaviour: the platform (here: the console launcher) has already applied the filter passed via --include-tag and other configuration parameters. In short, there's no need to manually check for tags in standard Jupiter tests. If there's a problem with the built-in filtering, please create an issue here: https://github.com/junit-team/junit5/issues/new/choose
I have three or four Cucumber tags (say @smoke, @testing, @test) which are randomly put on scenarios and feature files. Now I need to list all the scenarios which fall only under smoke. Can someone please suggest a way to do this?
You can use the dryRun=true option in @CucumberOptions with tag filters in your runner to get the list of scenarios in the report. This option will not execute your features but will list them out, plus check whether the steps have the appropriate glue code.
@CucumberOptions(plugin = {"pretty", "html:report"},
        tags = {"@smoke"},
        snippets = SnippetType.CAMELCASE,
        glue = "....",
        features = "src/test/resources/features",
        dryRun = true)
Make sure you add the correct path of your glue code. Also, the features should point to the top directory containing the feature files.
The above should list out any scenario containing the @smoke tag in the HTML report.
But if you are looking for the scenario list with only the @smoke tag and not the others, use this filter: tags = {"@smoke", "~@testing", "~@test"}.
A point of caution: if you have Scenario Outlines, they will be repeated by the number of rows in the examples table.
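Putting it together, a minimal sketch of a complete dry-run runner, assuming the older cucumber.api package layout (newer versions use io.cucumber.junit instead); the class name and glue package are placeholders:

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// executes nothing; only lists matching scenarios and verifies the glue code
@RunWith(Cucumber.class)
@CucumberOptions(plugin = {"pretty", "html:report"},
        tags = {"@smoke", "~@testing", "~@test"},
        glue = "your.steps.package",
        features = "src/test/resources/features",
        dryRun = true)
public class SmokeScenarioListRunner {
}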
You can write your own filter to print what would run, while actually running nothing.
require 'cucumber/core/filter'

# put inside features/support/
class PrintOnlyCukeFilter < Cucumber::Core::Filter.new
  def test_case(test_case)
    puts test_case.location.to_s
  end
end

AfterConfiguration do |config|
  config.filters << PrintOnlyCukeFilter.new
end
Put the file in features/support.
To print only file names you can see this gist.
I tried ReportNG, but it is not updating the report now, and I found from this answer that ReportNG is no longer in use.
I want to create a test report / customize the TestNG report to give to the development team. I used a hybrid framework for creating the project and followed this tutorial.
Yes, you can customize the TestNG reports using Listeners and Reporters. Here is the link to the documentation. It is not clear from the question what type of customization you want to do.
But I want to suggest better alternatives for reporting here. There are two widely used libraries which are generally used with Selenium:
Allure Test Report
Extent Report.
I have not used Allure test reports, but it seems to be good and widely used in the community. I have used Extent Reports in two projects and am really happy with it. Anshoo Arora has done a remarkable job. The documentation is very good, with lots of examples and code snippets. I would highly recommend it.
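For a sense of the API, a minimal sketch with Extent Reports (assuming the 4.x API; the report path and test names are placeholders):

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentHtmlReporter;

public class ExtentDemo {
    public static void main(String[] args) {
        ExtentHtmlReporter htmlReporter = new ExtentHtmlReporter("target/extent-report.html");
        ExtentReports extent = new ExtentReports();
        extent.attachReporter(htmlReporter);

        ExtentTest test = extent.createTest("Login test");
        test.pass("user could log in"); // log steps against the test
        extent.flush();                 // write the report to disk
    }
}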
To customize the Selenium TestNG report, you can use TestNG listeners (a sketch follows the list):
ITestListener: log results/screenshots on test pass/fail/skip.
IReporter: generate an HTML report from the XML suite results and logs.
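For example, a minimal sketch of an ITestListener (assuming TestNG 7+, where the remaining interface methods have default implementations; the class name is illustrative):

import org.testng.ITestListener;
import org.testng.ITestResult;

public class ResultLogger implements ITestListener {

    @Override
    public void onTestSuccess(ITestResult result) {
        System.out.println("PASSED: " + result.getName());
    }

    @Override
    public void onTestFailure(ITestResult result) {
        // a natural place to capture a screenshot from your WebDriver instance
        System.out.println("FAILED: " + result.getName());
    }

    @Override
    public void onTestSkipped(ITestResult result) {
        System.out.println("SKIPPED: " + result.getName());
    }
}

Register it via the <listeners> element in testng.xml or with @Listeners(ResultLogger.class) on a test class.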
But as an alternative you can use qaf-reporting.
It provides detailed live reporting (you don't need to wait for the complete execution).
I know this is an old thread, but these reports can be edited and custom reports can be made as shown below. I have also explained here how TestHTMLReporter can be edited. And if you would like to know how the index.html report is customized, have a look here, where I have explained it in detail.
For your custom report you'd have to implement IReporter, extend TestListenerAdapter and override the generateReport method if you want to implement a custom TestHTMLReporter. For other reporters you may have to do things a bit differently, but the concept will remain the same. You'd achieve a custom TestHTMLReporter like below.
Create a CustomReport.java file in your project, copy-paste the whole content of TestHTMLReporter.java, and change the file name in the getOutputFile method; it would look like below:
public class CustomReport extends TestListenerAdapter implements IReporter {

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites,
            String outputDirectory) {
    }

    ...
    // paste the content of TestHTMLReporter.java here
    ...
Make sure all your imports are in place from TestHTMLReporter.java.
Now, change this file as per your requirements. For example, if you'd like to add the end time of each test, then at the correct place in the generateTable method add the snippet below:
// Test class
String testClass = tr.getTestClass().getName();
long testMillis = tr.getEndMillis();
String testMillisString = Long.toString(testMillis);
if (testClass != null) {
    pw.append("<br>").append("Test class Name: ").append(testClass);
    // this line adds the end time in ms
    pw.append("<br>").append("End Time(ms): ").append(testMillisString);
    // Test name
    String testName = tr.getTestName();
    if (testName != null) {
        pw.append(" (").append(testName).append(")");
    }
} // close the testClass null-check (omitted in the original snippet)
Then you'll get the additional end-time entry in the generated report.
Now you'll get two reports: one with the default name and the other with your file name.
The only thing that now remains is switching off the default reporting listeners, so you get only one report. For that you can follow this, or you may search for solutions. Hope this helps.
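If you drive TestNG programmatically, one way to do this is a sketch like the following, reusing the CustomReport class from above (whether this fits depends on how you launch your suites):

import java.util.Collections;
import org.testng.TestNG;

public class RunWithCustomReportOnly {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        testng.setUseDefaultListeners(false);   // switch off the built-in reporters
        testng.addListener(new CustomReport()); // keep only the custom report
        testng.setTestSuites(Collections.singletonList("testng.xml"));
        testng.run();
    }
}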
I am trying googletest.
Previously I have been using Boost.Test, where I used the macro BOOST_AUTO_TEST_SUITE to group my tests into a test suite.
This makes the JUnit reports much more readable.
I have not found a hint how to do this, or something similar, in googletest. Is it possible?
I use the first parameter of the call to TEST() or TEST_F() as sort of a "test suite" identifier, like this:
TEST(TestSuiteName, shouldExpectTrue) {
    EXPECT_TRUE(true);
}

TEST(TestSuiteName, shouldExpectFalse) {
    EXPECT_FALSE(false);
}
Of course, when using a fixture class with TEST_F(), your TestSuiteName will need to match the name of your fixture class, so it will be necessary to create a separate fixture class for each test suite.
There is no way that I know of to break the test suites into sub-suites or anything like that, but of course you could always run your tests multiple times using the --gtest_filter="someFilter" option if you wanted to clean up your output.