TestNG RetryAnalyzer not retrying failed tests - Selenium

I was following several different websites explaining how to use RetryAnalyzer (they all say basically the same thing, but I checked several to see whether there was any difference). I implemented it as they did in the samples and deliberately caused a failure on the first run (which ended up being the only run). Even though the test failed, it was not repeated. I even put a breakpoint inside the analyzer at the first line (res = false), which never got hit. I tell it to retry 3 times, but it does not retry at all. Am I missing something?
My sample is below. Is it something to do with setting counter = 0? Even so, the res = false line should at least get hit, shouldn't it?
public class RetryAnalyzer implements IRetryAnalyzer {

    int counter = 0;

    @Override
    public boolean retry(ITestResult result) {
        boolean res = false;
        if (!result.isSuccess() && counter < 3) {
            counter++;
            res = true;
        }
        return res;
    }
}
and
@Test(dataProvider = "dp", retryAnalyzer = RetryAnalyzer.class)
public void testA(TestContext tContext) throws IOException {
    genericTest("A", "83701");
}
The test usually passes. I deliberately caused it to fail, but it did not retry. Am I missing something?
===============================================
Default suite
Total tests run: 1, Failures: 1, Skips: 0

Try adding alwaysRun = true to your test method annotation.
@Test(dataProvider = "dp", retryAnalyzer = RetryAnalyzer.class, alwaysRun = true)
public void testA(TestContext tContext) throws IOException {
    genericTest("A", "83701");
}
Also, before retrying, you may want to restart your driver instance so that the retried run starts clean. Otherwise, your second run will execute in the same browser instance.
Just do a driver.quit() followed by a re-instantiation of the browser driver.
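A rough sketch of that restart is below; the driver field and the ChromeDriver choice are illustrative assumptions, not part of the original answer:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class BrowserRestart {

    // Illustrative shared driver reference; in a real framework this would be
    // wherever your tests obtain their WebDriver from.
    private static WebDriver driver;

    // Quit the stale browser and create a fresh one before the retried run,
    // e.g. called from a retry analyzer or an onTestFailure listener.
    public static void restartBrowser() {
        if (driver != null) {
            driver.quit();
        }
        driver = new ChromeDriver();
    }
}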

The RetryAnalyzer class has to be public. Also, if it is a nested class, it must be static. TestNG silently ignores the retryAnalyzer otherwise.
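For instance, a minimal sketch of a correctly declared nested analyzer; the enclosing class name is just a placeholder:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class SomeTestClass {

    // Must be public and static, otherwise TestNG cannot instantiate it
    // and silently skips the retry logic.
    public static class RetryAnalyzer implements IRetryAnalyzer {

        private int counter = 0;

        @Override
        public boolean retry(ITestResult result) {
            if (!result.isSuccess() && counter < 3) {
                counter++;
                return true;
            }
            return false;
        }
    }
}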

Related

How to Take Screenshot when TestNG Assert fails?

String Actualvalue = d.findElement(By.xpath("//*[@id=\"wrapper\"]/main/div[2]/div/div[1]/div/div[1]/div[2]/div/table/tbody/tr[1]/td[1]/a")).getText();
Assert.assertEquals(Actualvalue, "jumlga");
captureScreen(d, "Fail");
The assert should not come before your capture-screen call, because a failed assertion immediately aborts the test method, so your
captureScreen(d, "Fail");
line will never be reached.
This is how I usually do it:
boolean result = false;
try {
    // do stuff here
    result = true;
} catch (Exception_class_Name ex) {
    // code to handle the error and capture a screenshot
    captureScreen(d, "Fail");
}
// then use the assert
Assert.assertEquals(result, true);
1.
A good solution is to use a reporting framework like allure-reports.
Read here: allure-reports
2.
We don't want to make our tests ugly by adding a try/catch in every test, so we will use Listeners, which use an annotation system to "listen" to our tests and act accordingly.
Example:
public class listeners extends commonOps implements ITestListener {

    public void onTestFailure(ITestResult iTestResult) {
        System.out.println("------------------ Starting Test: " + iTestResult.getName() + " Failed ------------------");
        if (platform.equalsIgnoreCase("web"))
            saveScreenshot();
    }
}
Please note I only included the method relevant to your question, and I suggest you read here:
TestNG Listeners
Now we want to take a screenshot via Allure Reports' built-in attachment mechanism every time a test fails, so we will add this method inside our listeners class.
Example:
@Attachment(value = "Page Screen-Shot", type = "image/png")
public byte[] saveScreenshot() {
    return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
}
Test example
@Listeners(listeners.class)
public class myTest extends commonOps {

    @Test(description = "Test01: Add numbers and verify")
    @Description("Test Description: Using Allure reports annotations")
    public void test01_myFirstTest() {
        Assert.assertEquals(result, true);
    }
}
Note that at the beginning of the class we use the @Listeners(listeners.class) annotation, which allows our listener to listen to our test; the argument (listeners.class) can be whichever class you named your listeners.
The @Description annotation is related to allure-reports, and as the code snippet suggests, you can use it to add additional info about the test.
Finally, our Assert.assertEquals(result, true) will take a screenshot if the assertion fails, because we attached our listeners class to the test.

Abort/ignore parameterized test in JUnit 5

I have some parameterized tests
@ParameterizedTest
@CsvFileSource(resources = "testData.csv", numLinesToSkip = 1)
public void testExample(String parameter, String anotherParameter) {
    // testing here
}
In case one execution fails, I want to ignore all following executions.
AFAIK there is no built-in mechanism to do this. The following does work, but is a bit hackish:
@TestInstance(Lifecycle.PER_CLASS)
class Test {

    boolean skipRemaining = false;

    @ParameterizedTest
    @CsvFileSource(resources = "testData.csv", numLinesToSkip = 1)
    void test(String parameter, String anotherParameter) {
        Assumptions.assumeFalse(skipRemaining);
        try {
            // testing here
        } catch (AssertionError e) {
            skipRemaining = true;
            throw e;
        }
    }
}
In contrast to a failed assertion, which marks a test as failed, a failed assumption results in the test being aborted. In addition, the lifecycle is switched from per-method to per-class:
When using this mode, a new test instance will be created once per test class. Thus, if your test methods rely on state stored in instance variables, you may need to reset that state in @BeforeEach or @AfterEach methods.
Depending on how often you need that feature, I would rather go with a custom extension.
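A rough sketch of such an extension is below; the class and store-key names are made up for illustration. It remembers the first failure in the root store and disables all later executions:

import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.ExecutionCondition;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ExtensionContext.Namespace;
import org.junit.jupiter.api.extension.TestWatcher;

public class AbortRemainingOnFailureExtension implements ExecutionCondition, TestWatcher {

    private static final Namespace NAMESPACE = Namespace.create(AbortRemainingOnFailureExtension.class);
    private static final String FAILED_KEY = "failed";

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) {
        // Disable this execution if an earlier one has already recorded a failure.
        Boolean failed = context.getRoot().getStore(NAMESPACE).get(FAILED_KEY, Boolean.class);
        return Boolean.TRUE.equals(failed)
                ? ConditionEvaluationResult.disabled("A previous execution failed")
                : ConditionEvaluationResult.enabled("No failure so far");
    }

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        // Remember the failure in the root store so later executions can see it.
        context.getRoot().getStore(NAMESPACE).put(FAILED_KEY, Boolean.TRUE);
    }
}

Register it with @ExtendWith(AbortRemainingOnFailureExtension.class) on the test class; the remaining executions are then reported as disabled rather than aborted.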

Exceptions thrown while soft asserting fail the subsequent tests

As per the title, I'm trying to run a test case in a loop. To count the number of failed assertions, I expect that when AssertJ asserts the value returned from a method call, it should softly fail that single iteration and carry on. Otherwise, it defeats the purpose of soft assertions. Here's a snippet illustrating this:
public static void main(String[] args) {
    SoftAssertions softAssertions = new SoftAssertions();
    softAssertions.assertThat(throwException(10)).isTrue();
    softAssertions.assertThat(throwException(10)).isTrue();
    softAssertions.assertThat(throwException(1)).isTrue();
    softAssertions.assertAll();
}

private static boolean throwException(int stuff) {
    if (stuff == 1) {
        throw new RuntimeException();
    }
    return true;
}
The output:
Exception in thread "main" java.lang.RuntimeException
at eLCMUpdate.throwException(MyClass.java:101)
at eLCMUpdate.main(MyClass.java:95)
I'm missing something here. Am I doing something wrong?
The problem in the code softAssertions.assertThat(throwException(1)).isTrue(); is that the argument is evaluated before assertThat is called, so when the exception is thrown, assertThat is never executed at all.
What you need is to lazily evaluate the code you pass to the assertion; you can do this with AssertJ's assertThatCode, as below:
final SoftAssertions softAssertions = new SoftAssertions();
softAssertions.assertThatCode(() -> throwException(10)).doesNotThrowAnyException();
softAssertions.assertThatCode(() -> throwException(1)).isInstanceOf(RuntimeException.class);
softAssertions.assertAll();
According to my understanding, soft assertions work on boolean values and not on exceptions.
Also: if you throw an exception before calling softAssertions.assertAll(), obviously this method will also never be executed. This is actually the cause of the behaviour you reported.
Just try to debug through your code and you will see that the softAssertions.assertAll() is never called.
Soft assertions will work properly if you change your code to:
@Test
void soft_assertions() {
    SoftAssertions softAssertions = new SoftAssertions();
    softAssertions.assertThat(checkCondition(10)).isTrue();
    softAssertions.assertThat(checkCondition(10)).isTrue();
    softAssertions.assertThat(checkCondition(1)).isTrue();
    softAssertions.assertThat(checkCondition(2)).isTrue();
    softAssertions.assertThat(checkCondition(20)).isTrue();
    softAssertions.assertAll();
}

private static boolean checkCondition(int stuff) {
    if (stuff == 1 || stuff == 2) {
        return false;
    }
    return true;
}
This will output the result of multiple assertions and not stop on the evaluation of the first failed assertion.
Output:
org.assertj.core.api.SoftAssertionError:
The following 2 assertions failed:
1)
Expecting:
<false>
to be equal to:
<true>
but was not.
at JsonStewardshipCustomerConversionTest.soft_assertions(JsonStewardshipCustomerConversionTest.java:301)
2)
Expecting:
<false>
to be equal to:
<true>
but was not.
at JsonStewardshipCustomerConversionTest.soft_assertions(JsonStewardshipCustomerConversionTest.java:302)
Update
SoftAssertion does not seem to fit your purpose.
I suggest you use JUnit 5's assertAll instead. According to my tests, it evaluates all conditions in an assertAll block and survives exceptions too. The catch is that you need JUnit 5, which is probably not widely adopted yet.
Here is an example with a failure on a boolean condition and also an exception. Both are reported in the console.
@Test
void soft_assertions() {
    assertAll("Check condition",
        () -> assertThat(checkCondition(9)).isTrue(),
        () -> assertThat(checkCondition(10)).isTrue(),
        () -> assertThat(checkCondition(11)).isTrue(),
        () -> assertThat(checkCondition(2)).isTrue(),  // throws exception
        () -> assertThat(checkCondition(3)).isFalse(), // fails
        () -> assertThrows(IllegalArgumentException.class, () -> {
            checkCondition(1);
        })
    );
}

private static boolean checkCondition(int stuff) {
    if (stuff == 1 || stuff == 2) {
        throw new IllegalArgumentException();
    }
    return true;
}
You will see this in the output:
org.opentest4j.MultipleFailuresError: Check condition (2 failures)
<no message> in java.lang.IllegalArgumentException
Expecting:
<true>
to be equal to:
<false>
but was not.

Restart failed test case automatically in TestNG/Selenium

I am using Selenium webdriver, in Java with TestNG to run an X amount of test cases.
What I would like, is for any test case to automatically restart (either from starting or from point of failure), as soon as it fails.
I know the TestNG framework has the following method
@Override
public void onTestFailure(ITestResult tr) {
    log("F");
}
but I do not know how to find out which test case failed, or how I would then restart it.
I wanted to see an example with actual code in it and found it here:
Restarting Test immediately with TestNg
Observe how the below tests will each be re-run once as soon as the failure happens.
import org.testng.Assert;
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.Test;

public class Retry implements IRetryAnalyzer {

    private int retryCount = 0;
    private int maxRetryCount = 1;

    public boolean retry(ITestResult result) {
        if (retryCount < maxRetryCount) {
            retryCount++;
            return true;
        }
        return false;
    }

    @Test(retryAnalyzer = Retry.class)
    public void testGenX() {
        Assert.assertEquals("james", "JamesFail"); // ListenerTest fails
    }

    @Test(retryAnalyzer = Retry.class)
    public void testGenY() {
        Assert.assertEquals("hello", "World"); // ListenerTest fails
    }
}
From testng.org
Every time tests fail in a suite, TestNG creates a file called testng-failed.xml in the output directory. This XML file contains the necessary information to rerun only these methods that failed, allowing you to quickly reproduce the failures without having to run the entirety of your tests.
If you want to rerun the test immediately after the failure, you need to call the method that failed. You can get that method's name from the ITestResult object.
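As a rough sketch (using TestListenerAdapter rather than the author's exact setup), the failed method can be identified like this:

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class FailureListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult tr) {
        // Name of the test method that just failed; feed this into your own
        // rerun logic, or simply log it and rely on testng-failed.xml.
        String failedMethod = tr.getMethod().getMethodName();
        System.out.println("Failed test method: " + failedMethod);
    }
}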
If you want to rerun all the failed test cases together, you can pass testng-failed.xml as the input XML for a second execution.

Problem attaching WatiN to IE

I am experimenting with WatiN for our UI testing. I can get tests to work, but I can't get IE to close afterwards.
I'm trying to close IE in my class cleanup code, using WatiN's example IEStaticInstanceHelper technique.
The problem seems to be attaching to the IE thread, which times out:
_instance = IE.AttachTo<IE>(Find.By("hwnd", _ieHwnd));
(_ieHwnd is the handle to IE stored when IE is first launched.)
This gives the error:
Class Cleanup method Class1.MyClassCleanup failed.
Error Message:
WatiN.Core.Exceptions.BrowserNotFoundException: Could not find an IE window matching constraint: Attribute 'hwnd' equals '1576084'. Search expired after '30' seconds.
Stack Trace:
at WatiN.Core.Native.InternetExplorer.AttachToIeHelper.Find(Constraint findBy, Int32 timeout, Boolean waitForComplete)
I'm sure I must be missing something obvious; has anyone got any ideas about this one?
Thanks
For completeness, the static helper looks like this:
public class StaticBrowser
{
    private IE _instance;
    private int _ieThread;
    private string _ieHwnd;

    public IE Instance
    {
        get
        {
            var currentThreadId = GetCurrentThreadId();
            if (currentThreadId != _ieThread)
            {
                _instance = IE.AttachTo<IE>(Find.By("hwnd", _ieHwnd));
                _ieThread = currentThreadId;
            }
            return _instance;
        }
        set
        {
            _instance = value;
            _ieHwnd = _instance.hWnd.ToString();
            _ieThread = GetCurrentThreadId();
        }
    }

    private int GetCurrentThreadId()
    {
        return Thread.CurrentThread.GetHashCode();
    }
}
And the clean up code looks like this:
private static StaticBrowser _staticBrowser;

[ClassCleanup]
public static void MyClassCleanup()
{
    _staticBrowser.Instance.Close();
    _staticBrowser = null;
}
The problem is that when MSTest executes the method with the [ClassCleanup] attribute, it runs it on a thread that isn't part of the STA.
If you run the following code it should work:
[ClassCleanup]
public static void MyClassCleanup()
{
    var thread = new Thread(() =>
    {
        _staticBrowser.Instance.Close();
        _staticBrowser = null;
    });
    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
    thread.Join();
}
The WatiN website briefly mentions here that WatiN won't work on threads that aren't in the STA, but it isn't obvious that [TestMethod] methods run in the STA while methods like [ClassCleanup] and [AssemblyCleanup] do not.
By default, when IE objects are destroyed, they auto-close the browser.
Your cleanup code may be trying to find a browser that is already closed, which is why you get an error.
I fixed this myself by dumping MSTest and using MbUnit instead. I also found that I didn't need any of the IEStaticInstanceHelper stuff either; it just worked.