Gradle: configuring TestNG and JUnit report dirs

I am relatively new to Gradle, and we use both JUnit and TestNG unit tests in our project. With Google's help I figured out how to get both TestNG and JUnit tests running. Below is how I ended up achieving it.
build.gradle
....
task testNG(type: Test) {
    useTestNG {}
}

test {
    dependsOn testNG
}
However, only the JUnit reports are produced. Google again helped me with this link https://discuss.gradle.org/t/using-junit-and-testng-together-steps-on-testng-html-file/5484, which shows how to solve a problem that looks exactly like mine by configuring two separate test report folders like below:
testng.testReportDir = file("$buildDir/reports/testng")
test.testReportDir = file("$buildDir/reports/testjunit")
However, it does not say exactly where to put those two entries, and I feel like I am going to go crazy looking through Gradle books, examples and the API without figuring it out. According to the API, the test task has a reports property where you can configure a TestTaskReports instance, but whatever I tried failed.
Can you please help me with this? It must be something so obvious that I am missing it.
Thank you in advance.

Take a look at the DSL reference for Test. There you can find a reports property at https://docs.gradle.org/current/dsl/org.gradle.api.tasks.testing.Test.html#org.gradle.api.tasks.testing.Test:reports
This gives you a TestTaskReports. Following this route in the DSL leads you to:
task testNG(type: Test) {
    useTestNG {}
    reports.html.destination = file("$buildDir/reports/testng")
}

test {
    reports.html.destination = file("$buildDir/reports/test")
    dependsOn testNG
}
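If you also want the XML result files in separate folders (for example, so a CI server can pick up both sets), the junitXml report should be configurable the same way. A small sketch of that assumption for the testNG task:
task testNG(type: Test) {
    useTestNG {}
    reports.html.destination = file("$buildDir/reports/testng")
    // assumption: the XML results can be relocated just like the HTML report
    reports.junitXml.destination = file("$buildDir/test-results/testng")
}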

Related

Karate - How to add a JUnit RunListener to the Karate parallel runner

I am trying to add a RunListener to the Karate parallel runner class. I have done this for the Karate runner using @Karate.class and adding a custom runner. I am writing the data to InfluxDB and generating reports in Grafana, and I was able to achieve this successfully with the Karate runner. Below is the code snippet for the custom runner where I have added this listener; I run my Karate runner with it. I want to achieve the same for the parallel runner.
@Override
public void run(RunNotifier notifier) {
    notifier.addListener(new ExecutionListener());
    super.run(notifier);
}
This is not directly possible; the parallel runner is a very specific implementation and by design has nothing to do with JUnit.
Since you seem to be experienced in adding JUnit listeners and the like, you can refer to this code in case it gives you any ideas: CliExecutionHook.java.
For more details about the "ExecutionHook", refer to this: https://github.com/intuit/karate/issues/970#issuecomment-557443551
But let me say I think you are unnecessarily putting effort into reports that will not really get you any benefit in the long run except for "looking good" :) And if you feel something needs to change in Karate, please contribute; it is open source.
Thanks for your suggestion, Peter. I was able to send scenario and test-run details to InfluxDB in order to generate reports in Grafana. I just made use of the Karate results, extracted all the values required, and called this in a JUnit @After:
public void writeScenarioResults(List<ScenarioResult> results) {
    String status;
    for (ScenarioResult a : results) {
        status = a.isFailed() ? "FAIL" : "PASS";
        gb.sendTestMethodStatus(a.getScenario().getName(), status, build);
    }
}

I want to run each .feature file as a single TestNG test using Karate

I want to run each .feature file as a single TestNG test using Karate; how can I do it?
We are using Karate to automate REST APIs. We have a custom listener which records the status of each TestNG test and other information to a Postgres DB.
One more way is running by tags:
@CucumberOptions(tags = { "@getVersion" })
public class GetVersionTest extends KarateRunner {
}
Feature File:
@getVersion
Feature: Testing Rest API with Karate and Java

Scenario:
    Given url 'https://......'
    When method get
    Then status 200
    And match response contains '{version= x.xx.x}'
Did you try using the Karate TestNG support? The documentation has details on how to use it: https://github.com/intuit/karate#running-with-testng
As the developer of Karate, I actually strongly recommend that teams don't use TestNG. My suggestion is that you use the new 'hooks' feature to call your custom Java code from Karate directly, and then you won't depend on TestNG or even JUnit.
If you still need a custom way of running, the TestNG support is actually a single class: KarateRunner.java. I really don't know whether it runs each feature as a TestNG test, since I am not a TestNG expert. But you should be able to create your own version of this Java code that does what you want, and maybe contribute it back to the project.

How to integrate Galen reports in Jenkins

I've started to use the Galen framework to test the layout of my website pages, and I also have my other tests, written in Selenium, integrated into Jenkins.
I'm using Java + JUnit + Maven, and I would like to know if anyone has managed to integrate Galen reporting into Jenkins, and how.
For the moment I am using something like:
assertThat(layoutReport.errors(), is(0));
which tells me if there were errors in the tests, but not where.
Thanks!
P.S. If someone with enough reputation could create the tag galen-framework so that we can group these types of questions, it would be great :D
In your case you could use Galen to generate HTML reports, as it normally does when you run tests with it, though you will have to manage the creation of the GalenTestInfo objects yourself.
Here is how the HTML report generation works. Imagine you have an obtainAllTests method defined somewhere which returns a list of all executed tests:
List<GalenTestInfo> tests = obtainAllTests();
new HtmlReportBuilder().build(tests, "target/galen-html-reports");
Somewhere in your code you could create a GalenTestInfo and add it to some collection:
GalenTestInfo testInfo = GalenTestInfo.fromString("Here goes the name of your test");
Once you have done the layout checking and obtained a LayoutReport object, you can add it to the report of the test. Here is how you can do it:
LayoutReport layoutReport = Galen.checkLayout(driver,
        specPath, includedTags, null,
        new Properties(), null);
testInfo.getReport().layout(layoutReport, "A title for your layout check");
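Putting those pieces together, here is a minimal sketch of a JUnit test that records a layout check and builds the HTML report at the end, so the report folder can then be published from Jenkins. The class name, spec path, tags and the static driver field are placeholders, and the package names assume a Galen 2.x release:
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

import com.galenframework.api.Galen;
import com.galenframework.reports.GalenTestInfo;
import com.galenframework.reports.HtmlReportBuilder;
import com.galenframework.reports.model.LayoutReport;
import org.junit.AfterClass;
import org.junit.Test;
import org.openqa.selenium.WebDriver;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class HomePageLayoutIT {

    // driver creation is omitted; assume it is set up elsewhere (e.g. in a @BeforeClass)
    private static WebDriver driver;

    // collects every executed GalenTestInfo so the report builder can see them all
    private static final List<GalenTestInfo> tests = new ArrayList<>();

    @Test
    public void homePageLayout() throws Exception {
        GalenTestInfo testInfo = GalenTestInfo.fromString("Home page layout");

        LayoutReport layoutReport = Galen.checkLayout(driver,
                "specs/homePage.gspec", Arrays.asList("desktop"), null,
                new Properties(), null);

        // attach the layout check to this test's report
        testInfo.getReport().layout(layoutReport, "Home page on desktop");
        tests.add(testInfo);

        // keep the original assertion so a layout error still fails the build in Jenkins
        assertThat(layoutReport.errors(), is(0));
    }

    @AfterClass
    public static void generateGalenReports() throws Exception {
        // write the Galen HTML report; this folder can be published with a Jenkins HTML report plugin
        new HtmlReportBuilder().build(tests, "target/galen-html-reports");
    }
}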
You can find more insights in this project https://github.com/galenframework/galen-sample-java-tests. It has a basic setup for Galen tests in Java + TestNG + Maven. The reports in it are collected in a singleton GalenReportsContainer. There is also a reporter implemented in GalenReportingListener which takes all those tests from GalenReportsContainer and generates HTML reports.
You can see a full example for Java (TestNG and JUnit) and JavaScript here:
https://github.com/hypery2k/galen_samples.
I use the HTML Plugin together with Jenkins; see the example here.

SpecFlow - How to use data-driven tests like NUnit's TestCaseSource property?

I'm a QA who decided to use SpecFlow for my test automation after some consideration. I think it's brilliant, but it is missing one feature which I often used with other test runners such as NUnit: something similar to NUnit's TestCaseSource property, for specifying a potentially dynamic set of data for tests to be run against at run time.
I often have different data in each environment the tests should run in, so I cannot specify hardcoded values for test parameters. A trivial example is checking that each type of user account is able to log in; the user account credentials can be retrieved with a DB query to populate each test case dynamically in NUnit:
public List<User> GetTestData()
{
    List<User> testData = new List<User>();
    testData = MyDatabase.GetAllUsersInfo().ToList();
    return testData;
}

[Test, TestCaseSource("GetTestData")]
public void CallLoginService(User user)
{
    var response = LoginController.TryLogin(user.UserName, user.Password);
    if (response.Error != null)
    {
        Assert.Fail("Failed to Login: {0}", response.Error);
    }
    Assert.AreEqual("Logged in ok", response.Message, "Login message not as expected");
}
Obviously this is a simple example of that feature, but I think it describes it well enough. I know we have the ability in SpecFlow to use a Scenario Outline and a table of test input data, but that is still static, so it doesn't fit the bill.
I've been looking for a while and have not found anything like this in SpecFlow yet. Does anybody know of anything similar to the above which can be used (or is planned, if anyone who works on the project reads this)?
Thanks :)
I have no idea if anything like this is planned, but for now the problem is that there is a background code-generation step when you edit your feature file in Visual Studio.
When the file is saved in Visual Studio, it is parsed and converted into the feature.cs file, and that is what is compiled and used for testing.
So your process would become:
edit your data source
export to feature file
get SpecFlow's VS plugin to convert it to feature.cs
run msbuild
run tests via NUnit or similar
I wouldn't do this. Instead I'd focus on making my tests better examples. It sounds like you are trying to exhaustively cover every possibility. Don't come up with examples to cover every possible case; instead, cover as much logic as possible with fewer tests.

NAnt, MbUnit, CruiseControl, Selenium - passing settings to the test assembly

I am putting together some ideas for our automated testing platform and have been looking at Selenium for the test runner.
I am wrapping the recorded Selenium C# scripts in an MbUnit test, which is triggered via the MbUnit NAnt task. The Selenium test client is created as follows:
selenium = new DefaultSelenium("host", 4444, "*iexplore", "http://[url]/");
How can I pass the host, port and URL settings into the test so their values can be controlled via the NAnt task?
For example, I may have multiple Selenium RC servers listening, and I want to use the same test code, passing in each server address instead of embedding the settings within the tests themselves.
I have an approach mocked up using a custom NAnt task I have written, but it is not the most elegant solution at present, and I wondered if there was an easier way to accomplish what I want to do.
Many thanks if anyone can help.
Thanks for the responses so far.
Environment variables could work; however, we could be running parallel tests via a single test assembly, so I wouldn't want settings to be overwritten during execution, which could break another test. Interesting line of thought though, thanks; I reckon I could use that in other areas.
My current solution involves a custom NAnt task built on top of the MbUnit task, which allows me to specify the additional host, port and URL settings as attributes. These are then saved to a config file within the build directory and read in by the test assemblies. This feels a bit "clunky" to me, as my tests need to inherit from a specific class. Not too bad, but I'd like to have fewer dependencies and concentrate on the testing.
Maybe I am worrying too much!
I have a base class for all test fixtures which has the following setup code:
[FixtureSetUp]
public virtual void TestFixtureSetup()
{
    BrowserType = (BrowserType) Enum.Parse(typeof(BrowserType),
        System.Configuration.ConfigurationManager.AppSettings["BrowserType"],
        true);
    testMachine = System.Configuration.ConfigurationManager.AppSettings["TestMachine"];
    seleniumPort = int.Parse(System.Configuration.ConfigurationManager.AppSettings["SeleniumPort"],
        System.Globalization.CultureInfo.InvariantCulture);
    seleniumSpeed = System.Configuration.ConfigurationManager.AppSettings["SeleniumSpeed"];
    browserUrl = System.Configuration.ConfigurationManager.AppSettings["BrowserUrl"];
    targetUrl = new Uri(System.Configuration.ConfigurationManager.AppSettings["TargetUrl"]);

    string browserExe;
    switch (BrowserType)
    {
        case BrowserType.InternetExplorer:
            browserExe = "*iexplore";
            break;
        case BrowserType.Firefox:
            browserExe = "*firefox";
            break;
        default:
            throw new NotSupportedException();
    }

    selenium = new DefaultSelenium(testMachine, seleniumPort, browserExe, browserUrl);
    selenium.Start();
    System.Console.WriteLine("Started Selenium session (browser type={0})",
        BrowserType);

    // sets the speed of execution of GUI commands
    if (!String.IsNullOrEmpty(seleniumSpeed))
        selenium.SetSpeed(seleniumSpeed);
}
I then simply supply the test runner with a config file:
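The config file itself is not shown here; based on the keys read in TestFixtureSetup above, a minimal App.config for the test assembly could look roughly like this (the values are placeholders only):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- keys match the AppSettings lookups above; values are examples only -->
    <add key="BrowserType" value="InternetExplorer" />
    <add key="TestMachine" value="host" />
    <add key="SeleniumPort" value="4444" />
    <add key="SeleniumSpeed" value="" />
    <add key="BrowserUrl" value="http://[url]/" />
    <add key="TargetUrl" value="http://[url]/" />
  </appSettings>
</configuration>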
For MSBuild I use environment variables; I create them in my CC.NET config and then they are available in the script. I think this would work for you too.
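For illustration, a minimal sketch of how such environment variables could be consumed inside the fixture; the variable names SELENIUM_HOST, SELENIUM_PORT and BROWSER_URL are made up for this example and not part of the original answer:
// Hypothetical example: read connection settings set by the CI server (e.g. CC.NET)
// from environment variables instead of hard-coding them in the test.
string host = Environment.GetEnvironmentVariable("SELENIUM_HOST") ?? "localhost";
int port = int.Parse(Environment.GetEnvironmentVariable("SELENIUM_PORT") ?? "4444");
string browserUrl = Environment.GetEnvironmentVariable("BROWSER_URL") ?? "http://[url]/";

selenium = new DefaultSelenium(host, port, "*iexplore", browserUrl);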
Any time I need to integrate with an external entity using NAnt, I either end up using the exec task or writing a custom task. Given the information you posted, it would seem that writing your own would indeed be a good solution. However, you state you're not happy with it. Can you elaborate a bit on why you don't think your current solution is an elegant one?
Update
Not knowing the internal details, it seems like you've solved it pretty well with a custom task. From what I've heard, that's how I would have done it.
Maybe a new solution will show itself in time, but for now go easy on yourself!