I'm trying to run all the tests in the project using testng.xml. I have 7 @Test methods under one class; 2 of them use a data provider and the other 5 run without one.
When I execute the class, only the first @Test method (test1) executes.
I also tried dependsOnMethods, but that is not working out for me either.
**@Test class:**
public class regressionTest {

    @DataProvider
    public Object[][] getCardTestData() {
        return TestUtil.getTestData(Constants.test1data);
    }

    @DataProvider
    public Object[][] getCardTestDataNeg() {
        return TestUtil.getTestData(Constants.test2data);
    }

    // Data-provider-driven tests must declare parameters matching the columns
    // returned by the provider (omitted here, as in the original snippet).
    @Test(priority = 1, dataProvider = "getCardTestData", enabled = true)
    public void test1() {
    }

    @Test(priority = 2, dataProvider = "getCardTestDataNeg", enabled = true)
    public void test2() {
    }

    @Test(priority = 3, enabled = true)
    public void test3() {
    }

    @Test(priority = 4, enabled = true)
    public void test4() {
    }

    @Test(priority = 5, enabled = true)
    public void test5() {
    }

    @Test(priority = 6, enabled = true)
    public void test6() {
    }

    @Test(priority = 7, enabled = true)
    public void test7() {
    }
}
**Testng.xml:**
<suite name="Automation Test Suite : Regression">
parameter name="environment" value="qa" />
<test name="Automation Test Cases : Regression" preserve-order="true">
<classes>
<class name="com.regressionTest"></class>
</test>
</suite>
I want @AfterClass to tear down, and the next class should then be initialized/configured again. For example: Class 1 closes, then Class 2 should run. But I am not sure what is wrong with my code.
XML Code
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite" preserve-order="true">
<test thread-count="5" name="Test" preserve-order="true" enabled="true">
<classes>
<class name="com.example.chat_pom.ProfileEditTest"/>
<class name="com.example.chat_pom.ProfileImageTest"/>
<class name="com.example.chat_pom.FeedTest"/>
</classes>
</test> <!-- Test -->
</suite> <!-- Suite -->
Class 1
public class ProfileImageTest extends TestBase {
ProfilePage profilePage;
public ProfileImageTest() {
super();
}
@BeforeClass
public void setup() throws MalformedURLException {
initialization();
profilePage = new ProfilePage();
}
@Test(priority = 1)
public void UserProfileImageTest() {
profilePage.setUploadProfilePhoto();
Assert.assertTrue(profilePage.ValidateThumbnail());
}
@AfterClass(enabled = true)
public void teardown() {
if (driver != null) {
driver.quit();
}
}
}
I want to move on to the next class after tearing down the first class.
Class 2
public class FeedTest extends TestBase {
ExploreFeed exploreFeed;
public FeedTest() {
super();
}
@BeforeClass
public void setup() throws MalformedURLException {
initialization();
exploreFeed = new ExploreFeed();
}
@Test(priority = 1)
public void ExploreBtn() {
exploreFeed.ValidateExploreBtn();
}
@Test(priority = 2)
public void FeedClickTest() {
exploreFeed.FeedClickBtn();
}
@Test(priority = 3)
public void GalleryImageTest() throws InterruptedException {
exploreFeed.GalleryBtnClick();
exploreFeed.GalleryImageEditor();
}
@AfterClass(enabled = false)
public void teardown() {
if (driver != null) {
driver.quit();
}
}
}
But when I run this code, Class 1 tears down and Class 2 doesn't start.
@BeforeClass and @AfterClass just define methods that should be run before and after any @Test cases in that class. Whether the next class then runs comes down to how you run your code in the IDE or environment that you are using, i.e. running a single class or a suite of classes.
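For illustration only (the class names and print statements below are hypothetical, not from the question): when both classes are listed in the same <test> of a testng.xml, TestNG finishes the first class, including its @AfterClass, before it starts the second class's @BeforeClass.
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class FirstTest {
    @BeforeClass
    public void setup() { System.out.println("FirstTest setup"); }

    @Test
    public void someTest() { System.out.println("FirstTest test"); }

    @AfterClass
    public void teardown() { System.out.println("FirstTest teardown"); }
}

// In a separate file:
public class SecondTest {
    @BeforeClass
    public void setup() { System.out.println("SecondTest setup"); }

    @Test
    public void someTest() { System.out.println("SecondTest test"); }

    @AfterClass
    public void teardown() { System.out.println("SecondTest teardown"); }
}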
Try these:
@BeforeClass and @AfterClass: https://www.guru99.com/junit-test-framework.html
Test Suite: https://www.guru99.com/create-junit-test-suite.html
My issue is mainly to know how to populate TestRail results after Cucumber scenarios are run. I'm trying to have the results from my JUnit test run set on an existing TestRail run. I have the APIClient and APIException classes as per this project. I then created this JUnit class, also copying that same project. I'm not sure how to proceed now, as this is my first time using Cucumber and JUnit. Our project also has a Hooks class and a MainRunner, if that helps.
public class Hooks {
public static WebDriver driver;
@Before
public void initializeTest() {
System.out.println("Testing whether it starts before every scenario");
driver = DriverFactory.startDriver();
}
}
import java.io.File;
@RunWith(Cucumber.class)
@CucumberOptions(
        features = {"src/test/java/clinical_noting/feature_files/"},
        glue = {"clinical_noting.steps", "clinical_noting.runner"},
        monochrome = true,
        tags = {"@current"},
        plugin = {"pretty", "html:target/cucumber",
                "json:target/cucumber.json",
                "com.cucumber.listener.ExtentCucumberFormatter:target/cucumber-reports/report.html"}
)
public class MainRunner {

    @AfterClass
    public static void writeExtentReport() {
        Reporter.loadXMLConfig(new File(FileReaderManager.getInstance().getConfigReader().getReportConfigPath()));
    }
}
Thanks for the help.
Update
I got TestRail to update when running the JUnit tests separately. I'm still not sure how to do it after the Cucumber scenario is run, though. This is how it's working now:
public class JUnitProject {
private static APIClient client = null;
private static Long runId = 3491L;
private static String caseId = "";
private static int FAIL_STATE = 5;
private static int SUCCESS_STATE = 1;
private static String comment = "";
@Rule
public TestName testName = new TestName();
@BeforeClass
public static void setUp() {
//Login to API
client = testRailApiClient();
}
@Before
public void beforeTest() throws NoSuchMethodException {
Method m = JUnitProject.class.getMethod(testName.getMethodName());
if (m.isAnnotationPresent(TestRails.class)) {
TestRails ta = m.getAnnotation(TestRails.class);
caseId = ta.id();
}
}
@TestRails(id = "430605")
@Test
public void validLogin() {
comment = "another comment";
Assert.assertTrue(true);
}
@Rule
public final TestRule watchman = new TestWatcher() {
Map<String, Object> data = new HashMap<>();
@Override
public Statement apply(Statement base, Description description) {
return super.apply(base, description);
}
@Override
protected void succeeded(Description description) {
data.put("status_id", SUCCESS_STATE);
}
// This method gets invoked if the test fails for any reason:
@Override
protected void failed(Throwable e, Description description) {
data.put("status_id", FAIL_STATE);
}
// This method gets called when the test finishes, regardless of status
// If the test fails, this will be called after the method above
@Override
protected void finished(Description description) {
try {
data.put("comment", comment);
client.sendPost("add_result_for_case/" + runId + "/" + caseId, data);
} catch (IOException e) {
e.printStackTrace();
} catch (APIException e) {
e.printStackTrace();
}
};
};
}
And the annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD) // on method level
public @interface TestRails {
String id() default "none";
}
Working now. Had to add the scenario param inside the before method and do the TestRail connection from there.
@regressionM1 @TestRails(430605)
Scenario: Verify the user can launch the application
Given I am on the "QA-M1" Clinical Noting application
Then I should be taken to the clinical noting page
And
public class Hooks {
private static APIClient client = null;
private static Long runId = 3491l;
private static String caseId = "";
private static int FAIL_STATE = 5;
private static int SUCCESS_STATE = 1;
private static String SUCCESS_COMMENT = "This test passed with Selenium";
private static String FAILED_COMMENT = "This test failed with Selenium";
@Rule
public TestName testName = new TestName();
public static WebDriver driver;
@Before
public void initializeTest() {
client = testRailApiClient();
System.out.println("Testing whether it starts before every scenario");
driver = DriverFactory.startDriver();
}
@After
public void tearDown(Scenario scenario) {
String caseIdSplit = "";
for (String s : scenario.getSourceTagNames()) {
if (s.contains("TestRail")) {
caseIdSplit = s.substring(11, 17); // Hardcoded for now as all the ids have 6 characters
System.out.println("Testing whether the browser closes after every scenario" + caseIdSplit);
}
}
caseId = caseIdSplit;
Map<String, Object> data = new HashMap<>();
if (!scenario.isFailed()) {
data.put("status_id", SUCCESS_STATE);
data.put("comment", SUCCESS_COMMENT);
} else {
data.put("status_id", FAIL_STATE);
data.put("comment", FAILED_COMMENT);
}
try {
client.sendPost("add_result_for_case/" + runId + "/" + caseId, data);
} catch (IOException e) {
e.printStackTrace();
} catch (APIException e) {
e.printStackTrace();
}
}
}
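As a side note, the substring(11, 17) call above is hardcoded to six-character ids. A sketch of a less hardcoded way to pull the case id out of a tag like @TestRails(430605) (a hypothetical helper, not part of the original code) could be:
// Hypothetical helper: extracts "430605" from a tag like "@TestRails(430605)"
// without relying on a fixed-length id.
private static String extractCaseId(String tag) {
    int open = tag.indexOf('(');
    int close = tag.indexOf(')', open);
    return (open >= 0 && close > open) ? tag.substring(open + 1, close) : "";
}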
Update
Wrote a post on this here
I am currently rewriting the automated testing framework for my company's mobile testing. We are attempting to use an interface that is implemented by multiple Page Object Models, depending on the operating system of the mobile device the application is running on. I can get this framework to run sequentially, and it even creates multiple threads, but it will not run in parallel no matter what I do. Of note, we use Appium and something called DeviceCart/DeviceConnect, which allows me to physically remote into multiple devices, so this isn't running on a grid. With that said, I will link my pertinent code (this is my second version of this same code; I wrote one with and one without ThreadLocal).
This should instantiate a new driver with a new thread for each Test
public class TLDriverFactory {
private ThreadLocal<AppiumDriver<MobileElement>> tlDriver = new ThreadLocal<>();
public synchronized void setTLDriver(OS platform, String server, String udid, String bundleID) {
switch (platform) {
case IOS:
tlDriver = ThreadLocal.withInitial(() -> {
try {
return new IOSDriver<MobileElement>(new URL(server), DesiredCapsManager.getDesiredCapabilities(OS.IOS, udid, bundleID));
} catch(MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
});
break;
case ANDROID:
tlDriver = ThreadLocal.withInitial(() -> {
try {
return new AndroidDriver<MobileElement>(new URL(server), DesiredCapsManager.getDesiredCapabilities(OS.ANDROID, udid, bundleID));
} catch(MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
});
break;
default:
break;
}
}
public synchronized ThreadLocal<AppiumDriver<MobileElement>> getTLDriver() {
return tlDriver;
}
}
This handles browser capabilities:
public class DesiredCapsManager {
public static DesiredCapabilities getDesiredCapabilities(OS platform, String udid, String bundleID) {
//Set DesiredCapabilities
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("deviceConnectUserName", "User#Name.com");
capabilities.setCapability("deviceConnectApiKey", "API-Token-Here");
capabilities.setCapability("udid", udid);
capabilities.setCapability("platformName", platform);
capabilities.setCapability("bundleID", bundleID);
//IOS only Settings
if (platform.equals(OS.IOS)) {
capabilities.setCapability("automationName", "XCUITest");
}
else {
//Android only Settings
capabilities.setCapability("automationName", "appium");
}
return capabilities;
}
}
This is the Base Test class from which every test inherits
public class BaseTest {
protected AppiumDriver<MobileElement> driver;
protected AppiumSupport.TLDriverFactory TLDriverFactory = new AppiumSupport.TLDriverFactory();
public enum OS {
ANDROID,
IOS
}
@AfterMethod
public synchronized void tearDown() throws Exception {
driver.quit();
TLDriverFactory.getTLDriver().remove();
}
}
Here is the test case itself
public class Test_SignIn extends BaseTest {
protected SignInPage signInPage;
@Parameters(value = {
"udid",
"bundleID",
"platform",
"server"
})
@BeforeMethod
public void setup(String udid, String bundleID, OS platform, String server) throws MalformedURLException,
InterruptedException {
//Set & Get ThreadLocal Driver
TLDriverFactory.setTLDriver(platform, server, udid, bundleID);
driver = TLDriverFactory.getTLDriver().get();
Thread.sleep(5000);
switch (platform) {
case IOS:
signInPage = new SignInPageIOS(driver);
break;
case ANDROID:
signInPage = new SignInPageAndroid(driver);
break;
default:
break;
}
System.out.println("Current Thread ID BeforeTest: " + Thread.currentThread().getName());
}
@Test
public synchronized void Authenticate() throws Exception {
System.out.println("Current Thread ID Test 1: " + Thread.currentThread().getName());
signInPage.Login("Username", "Password");
}
}
Here is the testng.xml file
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Test" parallel="tests" thread-count="4">
<test name="SignIn" parallel ="instances" thread-count="2">
<parameter name="udid" value="DeviceIdGoesHere" />
<parameter name="bundleID" value="Environment.address.here" />
<parameter name="platform" value="ANDROID" />
<parameter name="server" value="http://deviceconnect/appium" />
<classes>
<class name="Test.Test_SignIn">
</class>
</classes>
</test>
<test name="SignIn2" parallel="instances" thread-count="2">
<parameter name="udid" value="DeviceIdGoesHere" />
<parameter name="bundleID" value="Environment.address.here" />
<parameter name="platform" value="IOS" />
<parameter name="server" value="http://deviceconnect/appium" />
<classes>
<class name="Test.Test_SignIn">
</class>
</classes>
</test>
</suite>
What I'm looking for is whether anyone can determine what mistake I've made, or what bottleneck is preventing the tests from running in parallel.
Based on what you have shared so far, here's the cleaned-up and fixed code that should support your concurrency requirements.
The driver factory class, which is responsible for the creation and clean-up of Appium driver instances for each and every thread, looks like this:
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.MobileElement;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;
import java.net.MalformedURLException;
import java.net.URL;
public class TLDriverFactory {
private static final ThreadLocal<AppiumDriver<MobileElement>> tlDriver = new ThreadLocal<>();
public static void setTLDriver(BaseTest.OS platform, String server, String udid, String bundleID) throws MalformedURLException {
System.out.println("Current Thread ID Driver Instantiation: " + Thread.currentThread().getName());
AppiumDriver<MobileElement> driver;
switch (platform) {
case IOS:
driver = new IOSDriver<>(new URL(server), DesiredCapsManager.getDesiredCapabilities(BaseTest.OS.IOS, udid, bundleID));
break;
default:
driver = new AndroidDriver<>(new URL(server), DesiredCapsManager.getDesiredCapabilities(BaseTest.OS.ANDROID, udid, bundleID));
break;
}
tlDriver.set(driver);
}
public static AppiumDriver<MobileElement> getTLDriver() {
return tlDriver.get();
}
public static void cleanupTLDriver() {
tlDriver.get().quit();
tlDriver.remove();
}
}
Here's how BaseTest, which I am guessing is supposed to be the base class for all tests, would look:
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
public class BaseTest {
private static final ThreadLocal<SignInPage> signInPage = new ThreadLocal<>();
public enum OS {
ANDROID,
IOS
}
@Parameters(value = {"udid", "bundleID", "platform", "server"})
@BeforeMethod
public void setup(String udid, String bundleID, OS platform, String server) throws Exception {
//Set & Get ThreadLocal Driver
TLDriverFactory.setTLDriver(platform, server, udid, bundleID);
Thread.sleep(5000);
SignInPage instance;
switch (platform) {
case IOS:
instance = new SignInPageIOS(TLDriverFactory.getTLDriver());
break;
default:
instance = new SignInPageAndroid(TLDriverFactory.getTLDriver());
break;
}
System.out.println("Current Thread ID BeforeTest: " + Thread.currentThread().getName());
signInPage.set(instance);
}
@AfterMethod
public void tearDown() {
System.out.println("Current Thread ID AfterTest: " + Thread.currentThread().getName());
TLDriverFactory.cleanupTLDriver();
}
protected static SignInPage getPageForTest() {
return signInPage.get();
}
}
Here's how the constructor of your page classes would look:
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.MobileElement;
public class SignInPageIOS extends SignInPage {
public SignInPageIOS(AppiumDriver<MobileElement> tlDriver) {
super(tlDriver);
}
}
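The SignInPage base class itself isn't shown here; a minimal sketch of what it might look like, assuming only what the subclass constructor and the test below use (it stores the driver and declares Login, which each platform subclass such as SignInPageIOS or SignInPageAndroid implements), could be:
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.MobileElement;

// Hypothetical base page class: holds the driver passed in by the
// platform-specific subclasses and declares the Login action used by the test.
public abstract class SignInPage {
    protected final AppiumDriver<MobileElement> driver;

    protected SignInPage(AppiumDriver<MobileElement> driver) {
        this.driver = driver;
    }

    public abstract void Login(String username, String password);
}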
Here's how a typical test case could look:
import org.testng.annotations.Test;
public class Test_SignIn extends BaseTest {
@Test
public void authenticate() {
//Get the instance of "SignInPage" for the current thread and then work with it.
getPageForTest().Login("Username", "Password");
}
}
I am trying to implement a very simple PDX autoserialization in Geode. I've created a domain class of my own with a zero-arg constructor:
public class TestPdx
{
public string Test1 { get; set; }
public string Test2 { get; set; }
public string Test3 { get; set; }
public TestPdx() { }
}
Now I want this class to auto-serialize. I start a server cache with the following cache.xml, where I attempt to register this type for PDX autoserialization:
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://geode.apache.org/schema/cache"
xsi:schemaLocation="http://geode.apache.org/schema/cache
http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<cache-server/>
<pdx>
<pdx-serializer>
<class-name>org.apache.geode.pdx.ReflectionBasedAutoSerializer</class-name>
<parameter name="classes"><string>TestPdx</string></parameter>
</pdx-serializer>
</pdx>
<region name="webclient" refid="REPLICATE_PERSISTENT"/>
</cache>
and then run the following code:
static void Main(string[] args)
{
// 1. cache
CacheFactory cacheFactory = CacheFactory.CreateCacheFactory();
Cache cache = cacheFactory
.SetSubscriptionEnabled(true)
.SetPdxReadSerialized(true)
.Create();
Serializable.RegisterPdxSerializer(new ReflectionBasedAutoSerializer());
RegionFactory regionFactory = cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
IRegion<string, TestPdx> region = regionFactory.Create<string, TestPdx>("webclient");
// 3. TestPdx object
TestPdx t = new TestPdx();
t.Test1 = "test1";
t.Test2 = "test2";
t.Test3 = "test3";
region["1"] = t;
// 4. Get the entries
TestPdx result1 = region["1"];
// 5. Print result
Console.WriteLine(result1.Test1);
Console.WriteLine(result1.Test2);
Console.WriteLine(result1.Test3);
}
This code is crashing at the line region["1"] = t; with the error:
GFCLI_EXCEPTION:System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
at apache.geode.client.SerializationRegistry.GetPDXIdForType(SByte* , SharedPtr<apache::geode::client::Serializable>* )
So I haven't registered the PDX type properly. How do you do that with the native client?
THANKS
One answer here is to implement IPdxSerializable in TestPdx, as follows:
public class TestPdx : IPdxSerializable
{
public string Test1 { get; set; }
public string Test2 { get; set; }
public string Test3 { get; set; }
public int Pid { get; set; }
public void ToData(IPdxWriter writer)
{
writer.WriteString("Test1", Test1);
writer.WriteString("Test2", Test2);
writer.WriteString("Test3", Test3);
writer.WriteInt("Pid", Pid);
writer.MarkIdentityField("Pid");
}
public void FromData(IPdxReader reader)
{
Test1 = reader.ReadString("Test1");
Test2 = reader.ReadString("Test2");
Test3 = reader.ReadString("Test3");
Pid = reader.ReadInt("Pid");
}
public static IPdxSerializable CreateDeserializable()
{
return new TestPdx();
}
public TestPdx() { }
}
and then register the PDX type in Geode and use a region of type Object or of type TestPdx, as follows:
Serializable.RegisterPdxType(TestPdx.CreateDeserializable);
IRegion<string, Object> t = regionFactory.Create<string, Object>("test");
and to write the TestPdx to the region simply:
TestPdx value = new TestPdx();
value.Test1 = "hello";
value.Test2 = "world";
value.Test3 = "again";
t[key] = value;
and there will be a PdxInstance in the Geode region so you can run OQL queries on it, etc.
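For example, using the region name "test" and the field values from the snippet above, an OQL query over those PdxInstance entries (run from gfsh or a client query service) might look like:
SELECT t.Test1, t.Test2, t.Test3 FROM /test t WHERE t.Test1 = 'hello'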
I am using TestNG 6.9.10, installed in Eclipse.
I was trying to use a retry analyzer to make sure failed tests are re-run up to the defined max count.
See the code below.
public class TestRetry implements IRetryAnalyzer {
private int retryCount = 0;
private int maxRetryCount = 1;
public boolean retry(ITestResult result) {
if (retryCount < maxRetryCount) {
retryCount++;
return true;
}
return false;
}
@Test(retryAnalyzer = TestRetry.class)
public void testGenX() {
Assert.assertEquals("google", "google");
}
@Test(retryAnalyzer = TestRetry.class)
public void testGenY() {
Assert.assertEquals("hello", "hallo");
}
}
I got the result below:
===============================================
Default test
Tests run: 3, Failures: 1, Skips: 1
===============================================
===============================================
Default suite
Total tests run: 3, Failures: 1, Skips: 1
===============================================
But it seems like the result count has some problems. I want the following:
===============================================
Default test
Tests run: 2, Failures: 1, Skips: 0
===============================================
===============================================
Default suite
Total tests run: 2, Failures: 1, Skips: 0
===============================================
I tried to define a listener to implement it, something like overriding the onFinish function. You can find it at http://www.seleniumeasy.com/testng-tutorials/retry-listener-failed-tests-count-update
But in the end it did not work.
Can someone who has run into this help?
It's working fine; I suspect there is some problem with the listener usage. I created TestRetry the same as you, but without the @Test methods.
public class TestRetry implements IRetryAnalyzer{
private int retryCount = 0;
private int maxRetryCount = 1;
@Override
public boolean retry(ITestResult arg0) {
// TODO Auto-generated method stub
if (retryCount < maxRetryCount) {
retryCount++;
return true;
}
return false;
}
}
Created Listener class
public class TestListener implements ITestListener{
@Override
public void onFinish(ITestContext context) {
// TODO Auto-generated method stub
Set<ITestResult> failedTests = context.getFailedTests().getAllResults();
for (ITestResult temp : failedTests) {
ITestNGMethod method = temp.getMethod();
if (context.getFailedTests().getResults(method).size() > 1) {
failedTests.remove(temp);
} else {
if (context.getPassedTests().getResults(method).size() > 0) {
failedTests.remove(temp);
}
}
}
}
@Override
public void onStart(ITestContext arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestFailedButWithinSuccessPercentage(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestFailure(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestSkipped(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestStart(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestSuccess(ITestResult arg0) {
// TODO Auto-generated method stub
}
}
Finally my test class with those methods
public class RunTest {
@Test(retryAnalyzer = TestRetry.class)
public void testGenX() {
Assert.assertEquals("google", "google");
}
@Test(retryAnalyzer = TestRetry.class)
public void testGenY() {
Assert.assertEquals("hello", "hallo");
}
}
I executed this RunTest from the testng.xml file, specifying my custom listener:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Suite1" parallel="false" preserve-order="true">
<listeners>
<listener class-name="com.test.TestListener"/>
</listeners>
<test name="TestA">
<classes>
<class name="com.test.RunTest"/>
</classes>
</test> <!-- Test -->
</suite> <!-- Suite -->
Please have a try..
Thank You,
Murali
@murali, could you please look at my code below? I really cannot see any difference.
The CustomLinstener.java
package cases;
import java.util.Set;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestNGMethod;
import org.testng.ITestResult;
public class CustomLinstener implements ITestListener{
@Override
public void onFinish(ITestContext context) {
Set<ITestResult> failedTests = context.getFailedTests().getAllResults();
for (ITestResult temp : failedTests) {
ITestNGMethod method = temp.getMethod();
if (context.getFailedTests().getResults(method).size() > 1) {
failedTests.remove(temp);
} else {
if (context.getPassedTests().getResults(method).size() > 0) {
failedTests.remove(temp);
}
}
}
}
@Override
public void onStart(ITestContext arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestFailedButWithinSuccessPercentage(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestFailure(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestSkipped(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestStart(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestSuccess(ITestResult arg0) {
// TODO Auto-generated method stub
}
}
The RunTest.java
package cases;
import org.testng.Assert;
import org.testng.annotations.Test;
public class RunTest {
@Test(retryAnalyzer = TestRetry.class)
public void testGenX() {
Assert.assertEquals("google", "google");
}
@Test(retryAnalyzer = TestRetry.class)
public void testGenY() {
Assert.assertEquals("hello", "hallo");
}
}
The TestRetry.java
package cases;
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
public class TestRetry implements IRetryAnalyzer{
private int retryCount = 0;
private int maxRetryCount = 1;
@Override
public boolean retry(ITestResult arg0) {
// TODO Auto-generated method stub
if (retryCount < maxRetryCount) {
retryCount++;
return true;
}
return false;
}
}
Finally, the XML. I right-click it and run it as a TestNG suite.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Suite1" parallel="false" preserve-order="true">
<test name="TestA">
<classes>
<class name="cases.RunTest" />
</classes>
</test> <!-- Test -->
<listeners>
<listener class-name="cases.CustomLinstener" />
</listeners>
</suite> <!-- Suite -->
The documentation for TestNG's IRetryAnalyzer does not specify test reporting behavior:
Interface to implement to be able to have a chance to retry a failed test.
There is no mention of "retries" on http://testng.org/doc/documentation-main.html, and searching across the entire testng.org site only returns links to the documentation of, and references to, IRetryAnalyzer (see site:testng.org retry - Google Search).
As there is no documentation for how a retried test is reported, we cannot form many sound expectations. Should each attempt appear in the test results? If so, is each attempt except for the last marked as a skip, and the last as either a success or a failure? It isn't documented. The behavior is undefined, and it could change with any TestNG release in subtle or abrupt ways.
As such, I recommend using a tool other than TestNG for retry logic.
e.g. you can use Spring Retry (which can be used independently of other Spring projects):
TestRetry.java
public class TestRetry {
private static RetryOperations retryOperations = createRetryOperations();
private static RetryOperations createRetryOperations() {
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(createRetryPolicy());
return retryTemplate;
}
private static RetryPolicy createRetryPolicy() {
int maxAttempts = 2;
Map<Class<? extends Throwable>, Boolean> retryableExceptions =
Collections.singletonMap(AssertionError.class, true);
return new SimpleRetryPolicy(maxAttempts, retryableExceptions);
}
@Test
public void testGenX() {
runWithRetries(context -> {
Assert.assertEquals("google", "google");
});
}
@Test
public void testGenY() {
runWithRetries(context -> {
Assert.assertEquals("hello", "hallo");
});
}
private void runWithRetries(RetryRunner<RuntimeException> runner) {
retryOperations.execute(runner);
}
}
RetryRunner.java
/**
* Runner interface for an operation that can be retried using a
* {@link RetryOperations}.
* <p>
* This is simply a convenience interface that extends
* {@link RetryCallback} but assumes a {@code void} return type.
*/
interface RetryRunner<E extends Throwable> extends RetryCallback<Void, E> {
@Override
default Void doWithRetry(RetryContext context) throws E {
runWithRetry(context);
return null;
}
void runWithRetry(RetryContext context) throws E;
}
Console Output
===============================================
Default Suite
Total tests run: 2, Failures: 1, Skips: 0
===============================================
Spring Retry may look slightly more complicated at first, but it provides very flexible features and an API that enables separation of concerns between the test retry logic and the test reporting.