I have used @AfterClass teardown and the test closes, but the next class doesn't initialize and run - Selenium

I want @AfterClass to tear down the first class, and the next class should then be initialized and configured again. For example: after Class 1 closes, Class 2 should run. But I am not sure what is wrong with my code.
XML Code
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite" preserve-order="true">
<test thread-count="5" name="Test" preserve-order="true" enabled="true">
<classes>
<class name="com.example.chat_pom.ProfileEditTest"/>
<class name="com.example.chat_pom.ProfileImageTest"/>
<class name="com.example.chat_pom.FeedTest"/>
</classes>
</test> <!-- Test -->
</suite> <!-- Suite -->
Class 1
public class ProfileImageTest extends TestBase {
ProfilePage profilePage;
public ProfileImageTest() {
super();
}
@BeforeClass
public void setup() throws MalformedURLException {
initialization();
profilePage = new ProfilePage();
}
@Test(priority = 1)
public void UserProfileImageTest() {
profilePage.setUploadProfilePhoto();
Assert.assertTrue(profilePage.ValidateThumbnail());
}
@AfterClass(enabled = true)
public void teardown() {
if (driver != null) {
driver.quit();
}
}
}
I want to move on to the next class after the first class's teardown.
Class 2
public class FeedTest extends TestBase {
ExploreFeed exploreFeed;
public FeedTest() {
super();
}
@BeforeClass
public void setup() throws MalformedURLException {
initialization();
exploreFeed = new ExploreFeed();
}
@Test(priority = 1)
public void ExploreBtn() {
exploreFeed.ValidateExploreBtn();
}
@Test(priority = 2)
public void FeedClickTest() {
exploreFeed.FeedClickBtn();
}
@Test(priority = 3)
public void GalleryImageTest() throws InterruptedException {
exploreFeed.GalleryBtnClick();
exploreFeed.GalleryImageEditor();
}
@AfterClass(enabled = false)
public void teardown() {
if (driver != null) {
driver.quit();
}
}
}
But when I run this code, Class 1 tears down and Class 2 doesn't start.

@BeforeClass and @AfterClass just define methods that run before and after any @Test cases in that class. Whether the next class starts comes down to how you run your code in the IDE or environment you are using, i.e. whether you run a single class or a suite of classes.
Try these:
@BeforeClass and @AfterClass: https://www.guru99.com/junit-test-framework.html
Test Suite: https://www.guru99.com/create-junit-test-suite.html
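One common cause (assuming TestBase keeps the WebDriver in a static field, which isn't shown in the question) is that class 1's @AfterClass quits the shared driver and class 2 then reuses the stale reference instead of re-creating it. The lifecycle can be sketched without Selenium; `DriverLifecycleSketch` and its String field are hypothetical stand-ins:

```java
// Hypothetical stand-in for a TestBase that keeps a static WebDriver:
// each class's @BeforeClass must call initialization() again after the
// previous class's @AfterClass has quit the driver.
public class DriverLifecycleSketch {
    static int creations = 0;
    static String driver; // stands in for `static WebDriver driver`

    static void initialization() {          // like TestBase.initialization()
        driver = "driver-" + (++creations); // a fresh "driver" per class
    }

    static void teardown() {                // like the @AfterClass teardown()
        driver = null;                      // after driver.quit(), the field is stale
    }

    public static boolean secondClassGetsFreshDriver() {
        initialization();                   // class 1: @BeforeClass
        String first = driver;
        teardown();                         // class 1: @AfterClass
        initialization();                   // class 2: @BeforeClass runs again
        return driver != null && !driver.equals(first);
    }
}
```

If class 2 never even reaches its @BeforeClass, also check the console for a configuration failure in class 1's @AfterClass: an exception thrown there can cause later configuration methods to be skipped.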

Related

Extent Reports: test steps are getting merged into the last test when executing tests in parallel

The test steps and test logs are getting merged into the single last test.
Extent Report 3.2
Actual Reports
Function 1 logs
Function 2 logs [Having all steps]
My Project Structure is
HomePage.java
package pom;
import test.BaseTest;
public class HomePage extends BaseTest
{
public void setClick()
{
test.pass("This test is pass which is in click of home page");
}
public void setName()
{
test.fail("This test is fail which is in set of home page");
}
public void select()
{
test.pass("This test is info which is in select of home page");
}
}
Test1.java
package test;
import org.testng.annotations.Test;
import pom.HomePage;
public class Test1 extends BaseTest
{
@Test
public void funtion1()
{
HomePage hp = new HomePage();
hp.setName();
hp.setClick();
hp.select();
test.pass("Test is Passed! ins funtion 2");
}
}
Test2.java
package test;
import org.testng.annotations.Test;
import pom.HomePage;
public class Test2 extends BaseTest
{
@Test
public void funtion2()
{
HomePage hp = new HomePage();
hp.setClick();
hp.select();
test.pass("Test is Passed!");
}
}
BaseTest.Java
package test;
import java.lang.reflect.Method;
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.BeforeSuite;
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.Status;
import com.aventstack.extentreports.markuputils.ExtentColor;
import com.aventstack.extentreports.markuputils.MarkupHelper;
import com.aventstack.extentreports.reporter.ExtentHtmlReporter;
import com.aventstack.extentreports.reporter.configuration.ChartLocation;
import com.aventstack.extentreports.reporter.configuration.Theme;
public class BaseTest
{
public static ExtentHtmlReporter htmlReporter;
public static ExtentReports extent;
public static ExtentTest test;
@BeforeSuite
public void setUp()
{
htmlReporter = new ExtentHtmlReporter("./Reports/MyOwnReport.html");
extent = new ExtentReports();
extent.attachReporter(htmlReporter);
extent.setSystemInfo("OS", "Mac Sierra");
extent.setSystemInfo("Host Name", "Jayshreekant");
extent.setSystemInfo("Environment", "QA");
extent.setSystemInfo("User Name", "Jayshreekant S");
htmlReporter.config().setChartVisibilityOnOpen(false);
htmlReporter.config().setDocumentTitle("AutomationTesting.in Demo Report");
htmlReporter.config().setReportName("My Own Report");
htmlReporter.config().setTestViewChartLocation(ChartLocation.TOP);
//htmlReporter.config().setTheme(Theme.DARK);
htmlReporter.config().setTheme(Theme.STANDARD);
}
@BeforeMethod
public void startTest(Method m)
{
test = extent.createTest(m.getName(),"This is the description of Test" + m.getName());
}
@AfterMethod
public void getResult(ITestResult result)
{
if(result.getStatus() == ITestResult.FAILURE)
{
test.log(Status.FAIL, MarkupHelper.createLabel(result.getName()+" Test case FAILED due to below issues:", ExtentColor.RED));
test.fail(result.getThrowable());
}
else if(result.getStatus() == ITestResult.SUCCESS)
{
test.log(Status.PASS, MarkupHelper.createLabel(result.getName()+" Test Case PASSED", ExtentColor.GREEN));
}
else
{
test.log(Status.SKIP, MarkupHelper.createLabel(result.getName()+" Test Case SKIPPED", ExtentColor.ORANGE));
test.skip(result.getThrowable());
}
}
@AfterSuite
public void tearDown()
{
extent.flush();
}
}
testngall.xml
<suite name="Suite" parallel="tests">
<test name="Test 1 ">
<classes>
<class name="test.Test1" />
</classes>
</test>
<test name="Test 2">
<classes>
<class name="test.Test2" />
</classes>
</test>
</suite> <!-- Suite -->
So this is the entire project code structure; I am getting the logs appended to the last test.
This is your problem:
public static ExtentTest test;
Since this is static, there is only ever one instance of it. When you run your tests in parallel, this @BeforeMethod is called twice.
@BeforeMethod
public void startTest(Method m)
{
test = extent.createTest(m.getName(),"This is the description of Test" + m.getName());
}
The second time it is called, the first test probably hasn't finished, but it is still referencing the test object, so you will get the output of the second test plus some parts of the first test that had not completed running at the point the @BeforeMethod was called.
You are going to need to rewrite your code to not use a static test object.
To keep your parallel execution thread-safe, your ExtentTest has to be a ThreadLocal instance variable. Try:
private static ThreadLocal<ExtentTest> test = new InheritableThreadLocal<>();
You can make the class where you create tests a child of the class where you define the Extent report classes and variables. In the child class (the one containing the tests) you can then create multiple ExtentTest instances.
So create a new instance for every new test.
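The difference a ThreadLocal makes can be shown with plain Java; the String values below are hypothetical stand-ins for ExtentTest instances:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ThreadLocalSketch {
    // One slot per thread, unlike a single static field shared by all threads.
    private static final ThreadLocal<String> test = new InheritableThreadLocal<>();

    // Each thread stores its own "test" and reads back its own value.
    public static ConcurrentMap<String, String> runTwoThreads() {
        ConcurrentMap<String, String> seen = new ConcurrentHashMap<>();
        Runnable worker = () -> {
            String name = Thread.currentThread().getName();
            test.set("test-for-" + name); // like extent.createTest(...)
            seen.put(name, test.get());   // always this thread's own instance
        };
        Thread t1 = new Thread(worker, "t1");
        Thread t2 = new Thread(worker, "t2");
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen;
    }
}
```

With a plain static field, the second set would overwrite the first, which is exactly how parallel logs end up merged into the last test.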

Parallel methods do not run as expected

I need all the tests to be part of a single class and to run these tests in parallel. I'm using parallel="methods" in testng.xml. I have a class like:
public class DemoParallel {
@Test
public void googleTest() { /* some code to launch Google */ }
@Test
public void facebookTest() { /* some code to launch Facebook */ }
}
Actual: two instances of Chrome launch. The Google test runs completely. The Facebook test is launched but does not run; it hangs. Only one test passes. I have also tried implementing listeners, but no luck.
Any suggestions would be helpful.
Local Driver Factory :
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
class LocalDriverFactory {
static WebDriver createInstance(String browserName) {
WebDriver driver = null;
if (browserName.toLowerCase().contains("firefox")) {
System.setProperty("webdriver.firefox.marionette","path to driver exe");
driver = new FirefoxDriver();
return driver;
}
if (browserName.toLowerCase().contains("internet")) {
driver = new InternetExplorerDriver();
return driver;
}
if (browserName.toLowerCase().contains("chrome")) {
System.setProperty("webdriver.chrome.driver","path to driver exe");
driver = new ChromeDriver();
return driver;
}
return driver;
}
}
Use the ThreadLocal class as follows:
public class LocalDriverManager {
private static ThreadLocal<WebDriver> webDriver = new ThreadLocal<WebDriver>();
public static WebDriver getDriver() {
return webDriver.get();
}
static void setWebDriver(WebDriver driver) {
webDriver.set(driver);
}
}
Create a WebDriver listener class:
import org.openqa.selenium.WebDriver;
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;
public class WebDriverListener implements IInvokedMethodListener {
@Override
public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
if (method.isTestMethod()) {
String browserName = method.getTestMethod().getXmlTest().getLocalParameters().get("browserName");
WebDriver driver = LocalDriverFactory.createInstance(browserName);
LocalDriverManager.setWebDriver(driver);
}
}
@Override
public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
if (method.isTestMethod()) {
WebDriver driver = LocalDriverManager.getDriver();
if (driver != null) {
driver.quit();
}
}
}
}
Test Class
public class ThreadLocalDemo {
@Test
public void testMethod1() {
invokeBrowser("https://www.google.com/");
}
@Test
public void testMethod2() {
invokeBrowser("http://www.facebook.com");
}
private void invokeBrowser(String url) {
System.out.println("Thread id = " + Thread.currentThread().getId());
System.out.println("Hashcode of webDriver instance = " + LocalDriverManager.getDriver().hashCode());
LocalDriverManager.getDriver().get(url);
}
}
Suite XML file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Suite" parallel="methods">
<listeners>
<listener class-name="path-to-class-WebDriverListener"></listener>
</listeners>
<test name="Test">
<parameter name="browserName" value="firefox"></parameter>
<classes>
<class name="path-to-class-ThreadLocalDemo" />
</classes>
</test> <!-- Test -->
</suite> <!-- Suite -->

@AfterClass is not called immediately after the class is executed in TestNG

I have 2 test classes, each containing around 3 tests. In each class, the 3rd test case depends on the 2nd test case, and the 2nd depends on the 1st.
Class1:
public class MyTest1 extends BaseCase{
@Test
public void Test1(){
System.out.println("Test1");
}
@Test(dependsOnMethods = "Test1")
public void Test2(){
System.out.println("Test2");
}
@Test(dependsOnMethods = "Test2")
public void Test3(){
System.out.println("Test3");
}
}
Class2:
public class MyTest2 extends BaseCase{
@Test
public void Test1(){
System.out.println("Test1");
}
@Test(dependsOnMethods = "Test1")
public void Test2(){
System.out.println("Test2");
}
@Test(dependsOnMethods = "Test2")
public void Test3(){
System.out.println("Test3");
}
}
I have written @BeforeClass and @AfterClass in the BaseCase class. When I run the test cases from the testng.xml file, @AfterClass is invoked only after both classes have executed, while @BeforeClass works fine.
I need @AfterClass to execute after each class completes.
Please let me know if you have any ideas on it.
Testng.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="TestCases">
<parameter name="browserName" value="firefox"></parameter>
<parameter name="environment" value="qa"></parameter>
<test name="BulkUserImport">
<classes>
<class name="com.MyTest1"></class>
<class name="com.MyTest2"></class>
</classes>
</test>
</suite>

Why won't my cross platform test automation framework run in parallel?

I am currently rewriting the automated testing framework for my company's mobile testing. We are attempting to use an interface implemented by multiple page object models, depending on the operating system of the mobile device the application runs on. I can get this framework to run sequentially, and it even creates multiple threads, but it will not run in parallel no matter what I do. Of note, we use Appium and something called DeviceCart/DeviceConnect, which allows me to physically remote into multiple devices, so this isn't running on a grid. With that said, I will link my pertinent code (this is my second version of this same code; I wrote one with and one without ThreadLocal).
This should instantiate a new driver on a new thread for each test:
public class TLDriverFactory {
private ThreadLocal<AppiumDriver<MobileElement>> tlDriver = new ThreadLocal<>();
public synchronized void setTLDriver(OS platform, String server, String udid, String bundleID) {
switch (platform) {
case IOS:
tlDriver = ThreadLocal.withInitial(() -> {
try {
return new IOSDriver<MobileElement>(new URL(server), DesiredCapsManager.getDesiredCapabilities(OS.IOS, udid, bundleID));
} catch(MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
});
break;
case ANDROID:
tlDriver = ThreadLocal.withInitial(() -> {
try {
return new AndroidDriver<MobileElement>(new URL(server), DesiredCapsManager.getDesiredCapabilities(OS.ANDROID, udid, bundleID));
} catch(MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
});
break;
default:
break;
}
}
public synchronized ThreadLocal<AppiumDriver<MobileElement>> getTLDriver() {
return tlDriver;
}
}
This handles browser capabilities:
public class DesiredCapsManager {
public static DesiredCapabilities getDesiredCapabilities(OS platform, String udid, String bundleID) {
//Set DesiredCapabilities
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("deviceConnectUserName", "User@Name.com");
capabilities.setCapability("deviceConnectApiKey", "API-Token-Here");
capabilities.setCapability("udid", udid);
capabilities.setCapability("platformName", platform);
capabilities.setCapability("bundleID", bundleID);
//IOS only Settings
if (platform.equals(OS.IOS)) {
capabilities.setCapability("automationName", "XCUITest");
}
else {
//Android only Settings
capabilities.setCapability("automationName", "appium");
}
return capabilities;
}
}
This is the Base Test class from which every test inherits
public class BaseTest {
protected AppiumDriver<MobileElement> driver;
protected AppiumSupport.TLDriverFactory TLDriverFactory = new AppiumSupport.TLDriverFactory();
public enum OS {
ANDROID,
IOS
}
@AfterMethod
public synchronized void tearDown() throws Exception {
driver.quit();
TLDriverFactory.getTLDriver().remove();
}
}
Here is the test case itself
public class Test_SignIn extends BaseTest {
protected SignInPage signInPage;
@Parameters(value = {
"udid",
"bundleID",
"platform",
"server"
})
@BeforeMethod
public void setup(String udid, String bundleID, OS platform, String server) throws MalformedURLException,
InterruptedException {
//Set & Get ThreadLocal Driver
TLDriverFactory.setTLDriver(platform, server, udid, bundleID);
driver = TLDriverFactory.getTLDriver().get();
Thread.sleep(5000);
switch (platform) {
case IOS:
signInPage = new SignInPageIOS(driver);
break;
case ANDROID:
signInPage = new SignInPageAndroid(driver);
break;
default:
break;
}
System.out.println("Current Thread ID BeforeTest: " + Thread.currentThread().getName());
}
@Test
public synchronized void Authenticate() throws Exception {
System.out.println("Current Thread ID Test 1: " + Thread.currentThread().getName());
signInPage.Login("Username", "Password");
}
}
Here is the testng.xml file
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Test" parallel="tests" thread-count="4">
<test name="SignIn" parallel="instances" thread-count="2">
<parameter name="udid" value="DeviceIdGoesHere" />
<parameter name="bundleID" value="Environment.address.here" />
<parameter name="platform" value="ANDROID" />
<parameter name="server" value="http://deviceconnect/appium" />
<classes>
<class name="Test.Test_SignIn">
</class>
</classes>
</test>
<test name="SignIn2" parallel="instances" thread-count="2">
<parameter name="udid" value="DeviceIdGoesHere" />
<parameter name="bundleID" value="Environment.address.here" />
<parameter name="platform" value="IOS" />
<parameter name="server" value="http://deviceconnect/appium" />
<classes>
<class name="Test.Test_SignIn">
</class>
</classes>
</test>
</suite>
What I'm looking for is whether anyone can determine what mistake I've made, or what bottleneck is preventing the tests from running in parallel.
Based on what you have shared so far, here's the cleaned-up and fixed code that should support your concurrency requirements.
The driver factory class, which is responsible for creating and cleaning up Appium driver instances for each thread, looks like this:
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.MobileElement;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;
import java.net.MalformedURLException;
import java.net.URL;
public class TLDriverFactory {
private static final ThreadLocal<AppiumDriver<MobileElement>> tlDriver = new ThreadLocal<>();
public static void setTLDriver(BaseTest.OS platform, String server, String udid, String bundleID) throws MalformedURLException {
System.out.println("Current Thread ID Driver Instantiation: " + Thread.currentThread().getName());
AppiumDriver<MobileElement> driver;
switch (platform) {
case IOS:
driver = new IOSDriver<>(new URL(server), DesiredCapsManager.getDesiredCapabilities(BaseTest.OS.IOS, udid, bundleID));
break;
default:
driver = new AndroidDriver<>(new URL(server), DesiredCapsManager.getDesiredCapabilities(BaseTest.OS.ANDROID, udid, bundleID));
break;
}
tlDriver.set(driver);
}
public static AppiumDriver<MobileElement> getTLDriver() {
return tlDriver.get();
}
public static void cleanupTLDriver() {
tlDriver.get().quit();
tlDriver.remove();
}
}
Here's how BaseTest, which I am guessing is supposed to be the base class for all tests, would look:
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
public class BaseTest {
private static final ThreadLocal<SignInPage> signInPage = new ThreadLocal<>();
public enum OS {
ANDROID,
IOS
}
@Parameters(value = {"udid", "bundleID", "platform", "server"})
@BeforeMethod
public void setup(String udid, String bundleID, OS platform, String server) throws Exception {
//Set & Get ThreadLocal Driver
TLDriverFactory.setTLDriver(platform, server, udid, bundleID);
Thread.sleep(5000);
SignInPage instance;
switch (platform) {
case IOS:
instance = new SignInPageIOS(TLDriverFactory.getTLDriver());
break;
default:
instance = new SignInPageAndroid(TLDriverFactory.getTLDriver());
break;
}
System.out.println("Current Thread ID BeforeTest: " + Thread.currentThread().getName());
signInPage.set(instance);
}
@AfterMethod
public void tearDown() {
System.out.println("Current Thread ID AfterTest: " + Thread.currentThread().getName());
TLDriverFactory.cleanupTLDriver();
}
protected static SignInPage getPageForTest() {
return signInPage.get();
}
}
Here's how the constructors of your page classes would look:
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.MobileElement;
public class SignInPageIOS extends SignInPage {
public SignInPageIOS(AppiumDriver<MobileElement> tlDriver) {
super(tlDriver);
}
}
Here's how a typical test case could look:
import org.testng.annotations.Test;
public class Test_SignIn extends BaseTest {
@Test
public void authenticate() {
//Get the instance of "SignInPage" for the current thread and then work with it.
getPageForTest().Login("Username", "Password");
}
}

Selenium TestNG retry reports an incorrect result count

I am using TestNG 6.9.10 installed in Eclipse.
I was trying to use retry so that failed tests run up to the defined max count.
See the code below.
public class TestRetry implements IRetryAnalyzer {
private int retryCount = 0;
private int maxRetryCount = 1;
public boolean retry(ITestResult result) {
if (retryCount < maxRetryCount) {
retryCount++;
return true;
}
return false;
}
@Test(retryAnalyzer = TestRetry.class)
public void testGenX() {
Assert.assertEquals("google", "google");
}
@Test(retryAnalyzer = TestRetry.class)
public void testGenY() {
Assert.assertEquals("hello", "hallo");
}
}
I got below result:
===============================================
Default test
Tests run: 3, Failures: 1, Skips: 1
===============================================
===============================================
Default suite
Total tests run: 3, Failures: 1, Skips: 1
===============================================
But it seems the result count has some problems. I want this:
===============================================
Default test
Tests run: 2, Failures: 1, Skips: 0
===============================================
===============================================
Default suite
Total tests run: 2, Failures: 1, Skips: 0
===============================================
I tried defining listeners to implement this, e.g. overriding the onFinish function. You may find it at http://www.seleniumeasy.com/testng-tutorials/retry-listener-failed-tests-count-update
But it did not work.
Can someone who has encountered this help?
It's working fine for me; I suspect there is a problem with the listener usage. I created TestRetry the same as yours, but without the @Test methods.
public class TestRetry implements IRetryAnalyzer {
private int retryCount = 0;
private int maxRetryCount = 1;
@Override
public boolean retry(ITestResult arg0) {
// TODO Auto-generated method stub
if (retryCount < maxRetryCount) {
retryCount++;
return true;
}
return false;
}
}
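Why the totals come out as "Tests run: 3" can be seen by exercising the analyzer's bookkeeping in isolation. The loop below is a hypothetical stand-in for TestNG's machinery, which calls retry() after each failure; the retried attempt produces an additional result, which is consistent with the summary in the question (2 tests, one retried failure, 3 results):

```java
public class RetryCountSketch {
    private int retryCount = 0;
    private final int maxRetryCount = 1;

    // Same logic as TestRetry.retry(): allow one extra attempt per test.
    public boolean retry() {
        if (retryCount < maxRetryCount) {
            retryCount++;
            return true;
        }
        return false;
    }

    // Simulates TestNG invoking a test that always fails: run once, then
    // re-run while the analyzer says to retry. Returns total invocations.
    public static int invocationsForAlwaysFailingTest() {
        RetryCountSketch analyzer = new RetryCountSketch();
        int invocations = 1;       // first (failing) attempt
        while (analyzer.retry()) { // TestNG asks the analyzer after each failure
            invocations++;         // re-invocation, which also fails
        }
        return invocations;
    }
}
```

With maxRetryCount = 1, the failing testGenY is invoked twice in total; that extra invocation is what inflates "Tests run" from 2 to 3 in the summary above.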
Created Listener class
public class TestListener implements ITestListener {
@Override
public void onFinish(ITestContext context) {
// TODO Auto-generated method stub
Set<ITestResult> failedTests = context.getFailedTests().getAllResults();
for (ITestResult temp : failedTests) {
ITestNGMethod method = temp.getMethod();
if (context.getFailedTests().getResults(method).size() > 1) {
failedTests.remove(temp);
} else {
if (context.getPassedTests().getResults(method).size() > 0) {
failedTests.remove(temp);
}
}
}
}
@Override
public void onStart(ITestContext arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestFailedButWithinSuccessPercentage(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestFailure(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestSkipped(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestStart(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestSuccess(ITestResult arg0) {
// TODO Auto-generated method stub
}
}
Finally my test class with those methods
public class RunTest {
@Test(retryAnalyzer = TestRetry.class)
public void testGenX() {
Assert.assertEquals("google", "google");
}
@Test(retryAnalyzer = TestRetry.class)
public void testGenY() {
Assert.assertEquals("hello", "hallo");
}
}
I executed this RunTest from the testng.xml file, specifying my custom listener:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Suite1" parallel="false" preserve-order="true">
<listeners>
<listener class-name="com.test.TestListener"/>
</listeners>
<test name="TestA">
<classes>
<class name="com.test.RunTest"/>
</classes>
</test> <!-- Test -->
</suite> <!-- Suite -->
Please have a try..
Thank You,
Murali
@murali, could you please look at my code below? I really cannot see any difference.
The CustomLinstener.java
package cases;
import java.util.Set;
import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestNGMethod;
import org.testng.ITestResult;
public class CustomLinstener implements ITestListener {
@Override
public void onFinish(ITestContext context) {
Set<ITestResult> failedTests = context.getFailedTests().getAllResults();
for (ITestResult temp : failedTests) {
ITestNGMethod method = temp.getMethod();
if (context.getFailedTests().getResults(method).size() > 1) {
failedTests.remove(temp);
} else {
if (context.getPassedTests().getResults(method).size() > 0) {
failedTests.remove(temp);
}
}
}
}
@Override
public void onStart(ITestContext arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestFailedButWithinSuccessPercentage(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestFailure(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestSkipped(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestStart(ITestResult arg0) {
// TODO Auto-generated method stub
}
@Override
public void onTestSuccess(ITestResult arg0) {
// TODO Auto-generated method stub
}
}
The RunTest.java
package cases;
import org.testng.Assert;
import org.testng.annotations.Test;
public class RunTest {
@Test(retryAnalyzer = TestRetry.class)
public void testGenX() {
Assert.assertEquals("google", "google");
}
@Test(retryAnalyzer = TestRetry.class)
public void testGenY() {
Assert.assertEquals("hello", "hallo");
}
}
The TestRetry.java
package cases;
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
public class TestRetry implements IRetryAnalyzer {
private int retryCount = 0;
private int maxRetryCount = 1;
@Override
public boolean retry(ITestResult arg0) {
// TODO Auto-generated method stub
if (retryCount < maxRetryCount) {
retryCount++;
return true;
}
return false;
}
}
Finally the XML. I right-click it and run it as a TestNG suite.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Suite1" parallel="false" preserve-order="true">
<test name="TestA">
<classes>
<class name="cases.RunTest" />
</classes>
</test> <!-- Test -->
<listeners>
<listener class-name="cases.CustomLinstener" />
</listeners>
</suite> <!-- Suite -->
The documentation for TestNG's IRetryAnalyzer does not specify test reporting behavior:
Interface to implement to be able to have a chance to retry a failed test.
There is no mention of "retries" on http://testng.org/doc/documentation-main.html, and searching across the entire testng.org site only returns links to the documentation of, and references to, IRetryAnalyzer (see site:testng.org retry - Google Search).
As there is no documentation for how a retried test is reported, we cannot make many sound expectations. Should each attempt appear in the test results? If so, is each attempt except the last marked as a skip, and the last as either a success or a failure? It isn't documented. The behavior is undefined, and it could change with any TestNG release in subtle or abrupt ways.
As such, I recommend using a tool other than TestNG for retry logic.
e.g. you can use Spring Retry (which can be used independently of other Spring projects):
TestRetry.java
import java.util.Collections;
import java.util.Map;
import org.springframework.retry.RetryOperations;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.testng.Assert;
import org.testng.annotations.Test;
public class TestRetry {
private static RetryOperations retryOperations = createRetryOperations();
private static RetryOperations createRetryOperations() {
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(createRetryPolicy());
return retryTemplate;
}
private static RetryPolicy createRetryPolicy() {
int maxAttempts = 2;
Map<Class<? extends Throwable>, Boolean> retryableExceptions =
Collections.singletonMap(AssertionError.class, true);
return new SimpleRetryPolicy(maxAttempts, retryableExceptions);
}
@Test
public void testGenX() {
runWithRetries(context -> {
Assert.assertEquals("google", "google");
});
}
@Test
public void testGenY() {
runWithRetries(context -> {
Assert.assertEquals("hello", "hallo");
});
}
private void runWithRetries(RetryRunner<RuntimeException> runner) {
retryOperations.execute(runner);
}
}
RetryRunner.java
import org.springframework.retry.RetryCallback;
import org.springframework.retry.RetryContext;
/**
* Runner interface for an operation that can be retried using a
* {@link RetryOperations}.
* <p>
* This is simply a convenience interface that extends
* {@link RetryCallback} but assumes a {@code void} return type.
*/
interface RetryRunner<E extends Throwable> extends RetryCallback<Void, E> {
@Override
default Void doWithRetry(RetryContext context) throws E {
runWithRetry(context);
return null;
}
void runWithRetry(RetryContext context) throws E;
}
Console Output
===============================================
Default Suite
Total tests run: 2, Failures: 1, Skips: 0
===============================================
Spring Retry may look slightly more complicated at first, but it provides very flexible features and APIs, and it enables separation of concerns between the test retry logic and the test reporting.