TestNG: how to ignore a method in dependsOnMethods if it was skipped and continue execution (Selenium)

I have a TestNG class with the following methods.
@Test(priority = 4)
public void Test1()
{
}

@Test(priority = 5, dependsOnMethods = { "Test1" })
public void Test2()
{
    throw new SkipException("SKIP");
}

@Test(priority = 6, dependsOnMethods = { "Test1" })
public void Test3()
{
}

@Test(priority = 7, dependsOnMethods = { "Test1" })
public void Test4()
{
}

@Test(priority = 8, dependsOnMethods = { "Test2", "Test3", "Test4" })
public void Test5()
{
}
What I would like to achieve is:
Test2, Test3 and Test4 depend on Test1, so they should run only if Test1 passes.
Test5 depends on Test2, Test3 and Test4.
However, any of Test2, Test3 or Test4 may be skipped, and I still want Test5 to execute as long as none of the others failed.
How can I achieve this?

You can use alwaysRun=true:
@Test(alwaysRun = true, priority = 8, dependsOnMethods = { "Test2", "Test3", "Test4" })
public void Test5()
{
}
Test5 will then be executed even if any of the dependent tests is skipped, and also if any of them fails.
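As a rough illustration of these semantics, the decision can be modeled in plain Java (this is a sketch, not TestNG internals; `shouldRun` and the `Status` enum are made up for the example):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model: a dependent test runs when alwaysRun is true,
// or when every dependency finished with status PASS.
public class AlwaysRunDemo {
    enum Status { PASS, FAIL, SKIP }

    static boolean shouldRun(boolean alwaysRun, Map<String, Status> deps) {
        if (alwaysRun) {
            return true;  // run regardless of dependency outcomes
        }
        // default behavior: any SKIP or FAIL among dependencies skips this test
        return deps.values().stream().allMatch(s -> s == Status.PASS);
    }

    public static void main(String[] args) {
        Map<String, Status> deps = new LinkedHashMap<>();
        deps.put("Test2", Status.SKIP);   // Test2 threw SkipException
        deps.put("Test3", Status.PASS);
        deps.put("Test4", Status.PASS);

        System.out.println(shouldRun(false, deps)); // false: Test5 skipped by default
        System.out.println(shouldRun(true, deps));  // true: alwaysRun lets Test5 execute
    }
}
```

Note the trade-off visible in the model: alwaysRun removes the skip propagation entirely, so Test5 also runs when a dependency genuinely fails.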

Related

JUnit 5: what happens to test methods with the same @Order(#)?

I am using JUnit 5 to test my RESTful application. I have a test class whose test methods are annotated with @Order(#). Any idea what happens when the same order number is used for several test methods? Would they be run in parallel?
They won't be executed in parallel. @Order only determines the execution order: methods are sorted by their order value, so methods with the same value simply end up adjacent to each other.
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class OrderingTest {
    @Test
    void test0() {
        assertEquals(2, 1 + 1);
        System.out.println("Test0");
    }

    @Test
    @Order(1)
    void test1() {
        assertEquals(2, 1 + 1);
        System.out.println("Test1");
    }

    @Test
    @Order(1)
    void test2() {
        assertEquals(2, 1 + 1);
        System.out.println("Test2");
    }
}
In this example, test1 and test2 have the same order number, and the execution order will be: test1 -> test2 -> test0 (methods without an explicit @Order get a large default order value, so test0 runs last).
Internally JUnit 5 does the following for ordering:
@Override
public void orderMethods(MethodOrdererContext context) {
    context.getMethodDescriptors().sort(comparingInt(OrderAnnotation::getOrder));
}
where .sort() comes from java.util.List.
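Because List.sort is a stable sort, methods that share the same order value keep their relative declaration order. A plain-Java sketch of this (not JUnit internals; `TestMethod` and `executionOrder` are made up, and the "no annotation" case is modeled with a large default value):

```java
import java.util.List;
import static java.util.Comparator.comparingInt;

public class StableOrderDemo {
    // A stand-in for a test method descriptor with its @Order value.
    record TestMethod(String name, int order) {}

    // Sort by order value; the stable sort preserves declaration order for ties.
    static List<String> executionOrder(List<TestMethod> methods) {
        return methods.stream()
                .sorted(comparingInt(TestMethod::order))
                .map(TestMethod::name)
                .toList();
    }

    public static void main(String[] args) {
        List<TestMethod> declared = List.of(
                new TestMethod("test0", Integer.MAX_VALUE), // no @Order -> large default
                new TestMethod("test1", 1),
                new TestMethod("test2", 1));
        System.out.println(executionOrder(declared)); // [test1, test2, test0]
    }
}
```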
If you want parallel test execution, you can configure this with JUnit 5 in a different way.

How to print the number of failures for every test case using SoftAssert in TestNG

I have a test case that performs the following:
@Test
public void testNumber() {
    SoftAssert softAssert = new SoftAssert();
    List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
    for (Integer num : nums) {
        softAssert.assertTrue(num % 2 == 0, String.format("\n Old num : %d", num));
    }
    softAssert.assertAll();
}
The above test fails for the numbers 1, 3, 5, 7 and 9, so five failure messages are printed in the test report.
If I run the test against a larger data set, it is hard to get the count of data items that failed.
Is there an easier way to get the number of failed data items for a test case using SoftAssert itself?
There is no easy way to get that count from SoftAssert itself.
Alternatively, you can extend the Assertion class. Below is a sample that reports the count of failed assertions and prints each failure on a new line.
package yourpackage;

import java.util.Map;

import org.testng.asserts.Assertion;
import org.testng.asserts.IAssert;
import org.testng.collections.Maps;

public class AssertionExtn extends Assertion {
    private final Map<AssertionError, IAssert> m_errors;

    public AssertionExtn() {
        this.m_errors = Maps.newLinkedHashMap();
    }

    protected void doAssert(IAssert a) {
        onBeforeAssert(a);
        try {
            a.doAssert();
            onAssertSuccess(a);
        } catch (AssertionError ex) {
            onAssertFailure(a, ex);
            this.m_errors.put(ex, a);
        } finally {
            onAfterAssert(a);
        }
    }

    public void assertAll() {
        if (!this.m_errors.isEmpty()) {
            StringBuilder sb = new StringBuilder("Total assertion failed: " + m_errors.size());
            sb.append("\n\t");
            sb.append("The following asserts failed:");
            boolean first = true;
            for (Map.Entry<AssertionError, IAssert> ae : this.m_errors.entrySet()) {
                if (first) {
                    first = false;
                } else {
                    sb.append(",");
                }
                sb.append("\n\t");
                sb.append(ae.getKey().getMessage());
            }
            throw new AssertionError(sb.toString());
        }
    }
}
You can use this as:
@Test
public void test()
{
    AssertionExtn asert = new AssertionExtn();
    asert.assertEquals(false, true, "failed");
    asert.assertEquals(false, false, "passed");
    asert.assertEquals(0, 1, "brokedown");
    asert.assertAll();
}
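The core idea above, collect failures instead of throwing immediately, then report the count at the end, can be sketched in plain Java without any TestNG dependency (all names here are illustrative, not TestNG API):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal soft-assertion sketch: failures are recorded, not thrown,
// until assertAll() summarizes them with a total count.
public class FailureCollector {
    private final List<String> failures = new ArrayList<>();

    void assertTrue(boolean condition, String message) {
        if (!condition) {
            failures.add(message);  // record, do not throw yet
        }
    }

    void assertAll() {
        if (!failures.isEmpty()) {
            StringBuilder sb = new StringBuilder("Total assertion failed: " + failures.size());
            for (String f : failures) {
                sb.append("\n\t").append(f);  // each failure on its own line
            }
            throw new AssertionError(sb.toString());
        }
    }

    public static void main(String[] args) {
        FailureCollector soft = new FailureCollector();
        for (int num = 1; num <= 10; num++) {
            soft.assertTrue(num % 2 == 0, "Odd num : " + num);
        }
        try {
            soft.assertAll();
        } catch (AssertionError e) {
            System.out.println(e.getMessage().split("\n")[0]); // Total assertion failed: 5
        }
    }
}
```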

Mockito.doNothing() is still running

I'm trying to test small pieces of code. I don't want one of the methods to run, so I used Mockito.doNothing(), but the method still runs. How can I fix that?
protected EncoderClientCommandEventHandler clientCommandEventHandlerProcessStop =
        new EncoderClientCommand.EncoderClientCommandEventHandler() {
    @Override
    public void onCommandPerformed(EncoderClientCommand clientCommand) {
        setWatcherActivated(false);
        buttonsBackToNormal();
    }
};

protected void processStop() {
    EncoderServerCommand serverCommand = new EncoderServerCommand();
    serverCommand.setAction(EncoderAction.STOP);
    checkAndSetExtension();
    serverCommand.setKey(getArchiveJobKey());
    getCommandFacade().performCommand(
            serverCommand,
            EncoderClientCommand.getType(),
            clientCommandEventHandlerProcessStop);
}
@Test
public void testClientCommandEventHandlerProcessStop() {
    EncoderClientCommand encoderClientCommand = mock(EncoderClientCommand.class);
    Mockito.doNothing().when(encoderCompositeSpy).buttonsBackToNormal();
    when(encoderCompositeSpy.isWatcherActivated()).thenReturn(false);
    encoderCompositeSpy.clientCommandEventHandlerProcessStop.onCommandPerformed(encoderClientCommand);
}
I've found the problem: one of the variables was already mocked inside buttonsBackToNormal().

How can I have #After run even if a cucumber step failed?

We have several cucumber step definitions that modify the database, which would mess up later tests if the changes aren't cleaned up after each run. We do this with a function annotated with @After that cleans things up.
The problem is that if one of the tests fails, the @After function doesn't run, which leaves the database in a bad state.
So the question is: how can I make sure the @After function always runs, regardless of whether a test failed?
I saw this question, but it's not exactly what I'm trying to do, and the answers don't help.
If it helps, here is part of one of the tests. It's been greatly stripped down, but it contains what I think are the important parts.
import static org.hamcrest.MatcherAssert.assertThat;

import cucumber.api.java.After;

public class RunMacroGMUStepDefinition
{
    @Autowired
    protected ClientSOAPRecordkeeperInterface keeper;

    @Given( "^the following Macro exists:$" )
    @Transactional
    public void establishDefaultPatron( final DataTable dataTable )
    {
        for ( final DataTableRow dataTableRow : dataTable.getGherkinRows() )
        {
            // Stuff happens here
            keeper.insert( macroScriptRecord );
        }
    }

    @After( value = "@RunMacroGMU" )
    @Transactional
    public void teardown()
    {
        for ( int i = 0; i < macroScripts.size(); i++ )
        {
            keeper.delete( macroScripts.get( i ) );
        }
    }

    // Part of @Then
    private void compareRecords( final String has /* other parameters elided */ )
    {
        // Stuff happens here
        if ( has.equals( "include" ) )
        {
            assertThat( "No matching data found", foundMatch, equalTo( true ) );
        }
        else
        {
            assertThat( "Found matching data", foundMatch, equalTo( false ) );
        }
    }
}
I personally use Behat (the PHP distribution of Cucumber), and we use something like this to take a screenshot after a failed test. I did a bit of searching and found this Java snippet that may help in this situation:
@After
public void tearDown(Scenario scenario) {
    if (scenario.isFailed()) {
        // insert anything you would like to run after a failing test here
    }
    driver.close();
}
I hope this helps.
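The guarantee being asked for, cleanup that runs whether or not the scenario failed, is the same guarantee try/finally gives in plain Java. A minimal sketch (illustrative only, not Cucumber internals; `runScenario` and `cleanedUp` are made-up names):

```java
public class CleanupDemo {
    static boolean cleanedUp = false;

    // Run the scenario body; the teardown in finally executes
    // whether the body completes or throws.
    static void runScenario(Runnable steps) {
        try {
            steps.run();          // may throw, like a failing step
        } finally {
            cleanedUp = true;     // teardown always executes
        }
    }

    public static void main(String[] args) {
        try {
            runScenario(() -> { throw new AssertionError("step failed"); });
        } catch (AssertionError expected) {
            // scenario failed, but cleanup still happened
        }
        System.out.println(cleanedUp); // true
    }
}
```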

TestNG Test Case failing with JMockit "Invalid context for the recording of expectations"

The following TestNG (6.3) test case produces the error "Invalid context for the recording of expectations":
@Listeners({ Initializer.class })
public final class ClassUnderTestTest {
    private ClassUnderTest cut;

    @SuppressWarnings("unused")
    @BeforeMethod
    private void initialise() {
        cut = new ClassUnderTest();
    }

    @Test
    public void doSomething() {
        new Expectations() {
            MockedClass tmc;
            {
                tmc.doMethod("Hello"); result = "Hello";
            }
        };
        String result = cut.doSomething();
        assertEquals(result, "Hello");
    }
}
The class under test is below.
public class ClassUnderTest {
    MockedClass service = new MockedClass();
    MockedInterface ifce = new MockedInterfaceImpl();

    public String doSomething() {
        return (String) service.doMethod("Hello");
    }

    public String doSomethingElse() {
        return (String) ifce.testMethod("Hello again");
    }
}
I am assuming that because I am using the @Listeners annotation, I do not need the -javaagent command-line argument. This assumption may be wrong...
Can anyone point out what I have missed?
The JMockit-TestNG Initializer must run once for the whole test run, so using @Listeners on individual test classes won't work.
Instead, simply upgrade to JMockit 0.999.11, which works transparently with TestNG 6.2+, without any need to specify a listener or the -javaagent parameter (unless running on JDK 1.5).